DiffusionPDE (2024)

DiffusionPDE: Generative PDE-Solving Under Partial Observation
Jiahe Huang; Guandao Yang; Zichen Wang; Jeong Joon Park

Paper: arXiv:2406.17763
Quick facts

Type: conditional diffusion for fields
Setting: partial observation
Covers elliptic + fluid PDEs


TL;DR

Train a diffusion model on joint fields (coefficients and solutions), then reconstruct the full fields from sparse observations via guided sampling that combines an observation-consistency loss with a PDE-residual loss.

Problem

Given partial observations of a PDE solution (e.g., sparse sensors / masked pixels) and optionally unknown PDE coefficients, reconstruct the full field (and the hidden coefficient field) in a way that is consistent with both the observations and the governing PDE.
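
As a concrete toy instance of this setup (names and grid size are ours, not the paper's), suppose only ~1% of grid points carry sensors; the binary mask \(M\), the sparse measurements \(y = M \odot u\), and the observation-consistency term look like:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64                                   # grid resolution (illustrative)
u = rng.standard_normal((n, n))          # stand-in for a full PDE solution field
obs_ratio = 0.01                         # 1% observation ratio, as in the experiments
M = (rng.random((n, n)) < obs_ratio).astype(float)
y = M * u                                # sparse sensor readings

def obs_loss(u_hat):
    """Observation-consistency loss  L_obs = || M ⊙ (u_hat - y) ||^2."""
    return np.sum((M * (u_hat - y)) ** 2)

print(obs_loss(u))                       # the true field is exactly consistent -> 0.0
print(int(M.sum()), "observed points out of", n * n)
```

Any reconstruction must drive this loss toward zero while also satisfying the PDE on the ~99% of unobserved points.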

Core method (math)

Standard DDPM forward and reverse processes, followed by the observation- and PDE-guidance losses used for conditional sampling.

\[
q(x_t \mid x_{t-1}) = \mathcal{N}\big(\sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\big),
\qquad
x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon
\]

\[
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\big(\mu_\theta(x_t, t),\ \sigma_t^2 I\big)
\]

\[
\mu_\theta(x_t, t) = \frac{1}{\sqrt{1-\beta_t}} \left( x_t - \frac{\beta_t}{\sqrt{1-\bar\alpha_t}}\,\epsilon_\theta(x_t, t) \right)
\]

\[
\mathcal{L}_{\text{obs}} = \lVert M \odot (u - y) \rVert^2,
\qquad
\mathcal{L}_{\text{PDE}} = \lVert \mathcal{N}(u, a) \rVert^2
\]

\[
x_{t-1} \leftarrow x_{t-1} - \eta\,\nabla_{x_t}\big( \lambda_{\text{obs}} \mathcal{L}_{\text{obs}} + \lambda_{\text{PDE}} \mathcal{L}_{\text{PDE}} \big)
\quad \text{(guided sampling)}
\]
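
The guided reverse-diffusion update can be sketched end to end. This is a toy numpy sketch, not the paper's implementation: the denoiser `eps_theta` is a placeholder for a trained network, the PDE residual is hard-coded as a Poisson operator \(\Delta u - a\) on a zero-padded grid, and the schedule and weights (`betas`, `lam_obs`, `lam_pde`, `eta`) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 32, 50

# Linear noise schedule (illustrative; the paper's schedule may differ).
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def laplacian(u):
    """5-point discrete Laplacian with zero (Dirichlet) boundary."""
    p = np.pad(u, 1)
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * u

# Toy problem data: source term a, sparse observations y on mask M.
a = rng.standard_normal((n, n))
M = (rng.random((n, n)) < 0.01).astype(float)
y = rng.standard_normal((n, n)) * M

def eps_theta(x_t, t):
    """Placeholder denoiser; a real model is a trained noise-prediction network."""
    return 0.1 * x_t

def guided_step(x_t, t, lam_obs=1.0, lam_pde=1e-3, eta=0.1):
    b, ab = betas[t], alpha_bars[t]
    # Unguided DDPM posterior mean mu_theta(x_t, t).
    mu = (x_t - b / np.sqrt(1.0 - ab) * eps_theta(x_t, t)) / np.sqrt(1.0 - b)
    x_prev = (mu + np.sqrt(b) * rng.standard_normal(x_t.shape)) if t > 0 else mu
    # Analytic gradients of the two guidance losses w.r.t. x_t:
    g_obs = 2.0 * M * (x_t - y)          # grad of ||M ⊙ (u - y)||^2
    r = laplacian(x_t) - a               # Poisson residual  N(u, a)
    g_pde = 2.0 * laplacian(r)           # Dirichlet Laplacian is self-adjoint
    return x_prev - eta * (lam_obs * g_obs + lam_pde * g_pde)

x = rng.standard_normal((n, n))          # x_T ~ N(0, I)
for t in reversed(range(T)):
    x = guided_step(x, t)
print(float(np.abs(M * (x - y)).mean()))
```

Note that, as in the update rule above, both guidance gradients are taken with respect to the current iterate \(x_t\) and applied to \(x_{t-1}\); with a trained denoiser the PDE gradient would be computed by automatic differentiation rather than by hand.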

Experiments

PDE problems

  • Darcy flow
  • Poisson equation
  • Helmholtz equation
  • Burgers equation
  • Navier–Stokes

Tasks

  • Partial observation reconstruction
  • Inverse problems
  • Generative PDE inference

Experiment setting (high level)

  • Mask-conditioned diffusion; partial measurements of inputs/outputs.
  • Evaluates forward and inverse settings with relative error (%).
  • Uses iterative sampling (multiple diffusion steps).
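
The reported metric is presumably the standard relative L2 error, \(\lVert \hat u - u \rVert_2 / \lVert u \rVert_2\); a minimal sketch, assuming that definition:

```python
import numpy as np

def relative_error(pred, ref):
    """Relative L2 error ||pred - ref||_2 / ||ref||_2 (assumed definition
    behind the tables below; lower is better)."""
    return np.linalg.norm(pred - ref) / np.linalg.norm(ref)

ref = np.ones((8, 8))
pred = 1.05 * ref                        # field uniformly 5% off
print(relative_error(pred, ref))         # ~0.05, up to float rounding
```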

Comparable baselines

The neural-operator baselines used throughout the tables below:

  • FNO
  • PINO
  • U-NO
  • U-WNO

Main results

Partial observation (random masks): 1% and 3% observation ratio

Metric: relative error (lower is better). Values are copied from the paper’s Table 1 (forward and inverse settings).

| Setting | Method | Darcy (Fwd) | Poisson (Fwd) | Helmholtz (Fwd) | Burgers (Fwd) | Navier–Stokes bounded (Fwd) | Navier–Stokes unbounded (Fwd) | Darcy (Inv) | Poisson (Inv) | Helmholtz (Inv) | Navier–Stokes bounded (Inv) | Navier–Stokes unbounded (Inv) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1% obs | DiffusionPDE | 0.043 | 0.136 | 0.015 | 0.012 | 0.071 | 0.081 | 0.048 | 0.196 | 0.016 | 0.071 | 0.086 |
| 1% obs | FNO | 0.395 | 0.637 | 0.055 | 0.264 | 0.187 | 0.244 | 0.400 | 0.640 | 0.099 | 0.214 | 0.283 |
| 1% obs | PINO | 0.552 | 0.572 | 0.069 | 0.436 | 0.219 | 0.261 | 0.553 | 0.574 | 0.107 | 0.253 | 0.300 |
| 1% obs | U-NO | 0.204 | 0.816 | 0.022 | 0.072 | 0.290 | 0.183 | 0.206 | 0.818 | 0.026 | 0.329 | 0.207 |
| 1% obs | U-WNO | 0.287 | 0.367 | 0.044 | 0.059 | 0.111 | 0.154 | 0.288 | 0.370 | 0.076 | 0.128 | 0.182 |
| 3% obs | DiffusionPDE | 0.031 | 0.072 | 0.014 | 0.005 | 0.018 | 0.025 | 0.032 | 0.105 | 0.014 | 0.018 | 0.026 |
| 3% obs | FNO | 0.227 | 0.511 | 0.044 | 0.183 | 0.130 | 0.184 | 0.228 | 0.514 | 0.081 | 0.150 | 0.215 |
| 3% obs | PINO | 0.320 | 0.510 | 0.066 | 0.337 | 0.201 | 0.226 | 0.322 | 0.513 | 0.103 | 0.232 | 0.260 |
| 3% obs | U-NO | 0.072 | 0.411 | 0.020 | 0.026 | 0.028 | 0.030 | 0.076 | 0.413 | 0.023 | 0.031 | 0.033 |
| 3% obs | U-WNO | 0.060 | 0.238 | 0.033 | 0.033 | 0.027 | 0.030 | 0.063 | 0.241 | 0.057 | 0.032 | 0.036 |

Full observation (no masks)

Metric: relative error (lower is better). Values are copied from the paper’s Table 4 (forward and inverse settings).

| Setting | Method | Darcy (Fwd) | Poisson (Fwd) | Helmholtz (Fwd) | Burgers (Fwd) | Navier–Stokes bounded (Fwd) | Navier–Stokes unbounded (Fwd) | Darcy (Inv) | Poisson (Inv) | Helmholtz (Inv) | Navier–Stokes bounded (Inv) | Navier–Stokes unbounded (Inv) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Full obs | DiffusionPDE | 0.009 | 0.013 | 0.014 | 0.002 | 0.002 | 0.003 | 0.010 | 0.015 | 0.014 | 0.002 | 0.004 |
| Full obs | FNO | 0.014 | 0.042 | 0.044 | 0.023 | 0.028 | 0.064 | 0.015 | 0.049 | 0.081 | 0.030 | 0.081 |
| Full obs | PINO | 0.011 | 0.016 | 0.052 | 0.010 | 0.006 | 0.010 | 0.012 | 0.019 | 0.086 | 0.009 | 0.014 |
| Full obs | U-NO | 0.009 | 0.016 | 0.014 | 0.002 | 0.004 | 0.009 | 0.010 | 0.020 | 0.014 | 0.005 | 0.011 |
| Full obs | U-WNO | 0.010 | 0.015 | 0.030 | 0.002 | 0.002 | 0.004 | 0.011 | 0.020 | 0.064 | 0.002 | 0.007 |

Guidance ablation (1% observation, random mask, Darcy)

Metric: relative error (lower is better). Values are copied from the paper’s Table 3.

| Guidance | Darcy (Fwd) | Darcy (Inv) |
|---|---|---|
| Observation loss only | 0.065 | 0.073 |
| PDE loss only | 0.124 | 0.141 |
| Observation + PDE loss | 0.043 | 0.048 |

To reproduce, align observation masks, diffusion step schedule, and normalization.

Citation (BibTeX)

@article{huang2024diffusionpde,
  title={DiffusionPDE: Generative PDE-Solving Under Partial Observation},
  author={Huang, Jiahe and Yang, Guandao and Wang, Zichen and Park, Jeong Joon},
  journal={arXiv preprint arXiv:2406.17763},
  year={2024}
}