TL;DR
FunDPS adapts diffusion posterior sampling to PDE solution *fields*: sample from a learned diffusion prior while enforcing observations and (optionally) physics constraints through guidance terms. The key idea is to treat the unknown as a function (field) and make the guidance compatible with PDE operators and masks.
Problem
Inverse problems for PDEs (inpainting, sparse sensors, data assimilation) require drawing samples of solution fields consistent with partial observations. Standard diffusion priors sample unconditionally; naive conditioning can be weak or expensive. FunDPS targets practical posterior sampling for PDE fields by combining diffusion priors with measurement/physics guidance in a function-space framing.
Benefits vs others
- Works naturally with **arbitrary masks / sparse sensors** via a measurement operator M.
- Guidance can incorporate **physics residuals** and/or observation likelihoods without training a new model for every mask.
- Function-space viewpoint encourages discretization-robust formulations (same method across grid resolutions).
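To make the measurement operator M concrete, here is a minimal sketch of a masked/sparse-sensor observation operator (hypothetical helper names, not the paper's code):

```python
import numpy as np

def make_mask_operator(mask: np.ndarray):
    """Return a linear measurement operator M for a boolean sensor mask.

    `mask` is True at observed grid points; M extracts the field values there.
    Illustrative helper, not from the paper's codebase.
    """
    idx = np.flatnonzero(mask)

    def M(field: np.ndarray) -> np.ndarray:
        # Observe the field only at the masked (sensor) locations.
        return field.ravel()[idx]

    return M

# Example: ~5% random sensors on a 64x64 solution field
rng = np.random.default_rng(0)
mask = rng.random((64, 64)) < 0.05
M = make_mask_operator(mask)
field = rng.standard_normal((64, 64))
y = M(field)  # sparse observations, one value per sensor
```

Because M is just index extraction, the same construction works for any mask pattern or grid resolution, which is what makes mask-agnostic guidance cheap.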
Interesting detail
- Guidance terms are modular: you can swap in different observation models (noise, mask, forward operators) without retraining the prior.
- This makes FunDPS a good *infrastructure* baseline for future partial-observation PDE benchmarks.
Core method (math)
The method follows diffusion posterior sampling: each reverse-diffusion step draws from the learned function-space prior, then adds guidance gradients that pull the iterate toward the observations and, optionally, toward small PDE residuals.
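In standard DPS notation (a general sketch, not necessarily the paper's exact formulation), one guided reverse step combines the learned score $s_\theta$ with a measurement-likelihood gradient evaluated at the denoised estimate $\hat{u}_0$:

```latex
\begin{aligned}
\hat{u}_0(u_t) &= \tfrac{1}{\sqrt{\bar\alpha_t}}\bigl(u_t + (1-\bar\alpha_t)\, s_\theta(u_t, t)\bigr),\\
u_{t-1} &= \mathrm{DDPMstep}(u_t, s_\theta) \;-\; \zeta_t\,\nabla_{u_t}\,\bigl\lVert y - M\,\hat{u}_0(u_t)\bigr\rVert_2^2,
\end{aligned}
```

where $y$ are the partial observations, $M$ the measurement operator, and $\zeta_t$ a guidance weight; a physics term $\nabla_{u_t}\lVert \mathcal{R}(\hat{u}_0)\rVert^2$ for a PDE residual $\mathcal{R}$ can be added analogously.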
Main theoretical contribution
- Posterior sampling decomposes into a learned *prior* step (diffusion) plus *guidance* terms encoding measurements/constraints.
- Guidance with linear measurement operators M enables efficient masked inpainting and sparse-sensor conditioning.
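For a linear M, the likelihood gradient is closed-form, so the guided update needs no autodiff through the operator. A toy sketch of the prior-plus-guidance decomposition (the `score`, `step`, and `zeta` names are assumptions for illustration, not the paper's API):

```python
import numpy as np

def guided_step(x, score, y, M_matrix, step, zeta):
    """One guided update: learned prior drift + measurement guidance.

    For a linear operator M (matrix form), the gradient of
    ||y - M x||^2 w.r.t. x is -2 M^T (y - M x), so guidance is closed-form.
    Illustrative sketch only.
    """
    prior = step * score(x)                          # learned diffusion prior drift
    residual = y - M_matrix @ x                      # data misfit at observed points
    guidance = 2.0 * zeta * (M_matrix.T @ residual)  # pull toward observations
    return x + prior + guidance

# Toy check: with the prior switched off, guidance alone fits the sensors.
rng = np.random.default_rng(1)
n, m = 100, 20
M_mat = np.eye(n)[rng.choice(n, m, replace=False)]  # sparse-sensor rows
x_true = rng.standard_normal(n)
y = M_mat @ x_true
x = np.zeros(n)
for _ in range(200):
    x = guided_step(x, score=lambda z: np.zeros_like(z),
                    y=y, M_matrix=M_mat, step=0.0, zeta=0.1)
misfit = np.linalg.norm(y - M_mat @ x)
```

In a real sampler the `score` term is the trained diffusion model and `step`/`zeta` follow the noise schedule; the point here is only the additive prior + guidance structure.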
Main contribution
- Extends DPS to **function/field-valued unknowns** with PDE-relevant operators (masking, sensors, residuals).
- Demonstrates PDE inverse tasks where guidance significantly improves fidelity under partial observations.
- Provides empirical comparisons across PDE datasets and masking patterns (see tables).
Experiments
PDE problems
- Darcy flow
- Poisson
- Helmholtz
- Navier–Stokes
Tasks
- Forward operator learning
- Inverse / partial-observation reconstruction
Experiment setting (high level)
- Guided diffusion sampling; compares different step counts.
- Evaluated on standard PDE operator benchmarks; reports relative errors.
- Includes both forward and inverse settings depending on PDE.
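The metric behind the percentages below is the relative L2 error, which can be computed as (a minimal sketch; the helper name is ours):

```python
import numpy as np

def relative_l2_error(pred: np.ndarray, truth: np.ndarray) -> float:
    """Relative L2 error, the metric typically reported in PDE benchmarks.

    Returned as a fraction; multiply by 100 for the percentages in the table.
    """
    return float(np.linalg.norm(pred - truth) / np.linalg.norm(truth))

# Example: a uniform 2% perturbation yields a 2% relative error
truth = np.ones((64, 64))
pred = truth * 1.02
err = relative_l2_error(pred, truth)
```

Normalizing by the norm of the ground truth makes errors comparable across PDEs with very different solution magnitudes.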
Comparable baselines
- DiffusionPDE
- PINO
- FNO
- DeepONet
Main results
Reported relative errors (selected examples; lower is better)
Transcribed from the earlier site draft; see the paper for full tables and exact splits.
| PDE | Task | FunDPS (1) | FunDPS (2) | DiffusionPDE (10) | DiffusionPDE (50) | PINO | FNO |
|---|---|---|---|---|---|---|---|
| Darcy | Forward | 4.5% | 4.2% | 4.1% | 4.1% | 4.8% | 4.9% |
| Darcy | Inverse | 18.8% | 18.4% | 22.2% | 21.7% | — | — |
| Poisson | Inverse | 14.9% | 14.1% | 19.5% | 19.3% | — | — |
| Helmholtz | Inverse | 8.4% | 7.8% | 8.5% | 8.5% | — | — |
| Navier–Stokes | Inverse | 12.5% | 11.8% | 11.8% | 11.7% | — | — |
Citation (BibTeX)
@article{fundps2025,
  title={Guided Diffusion Sampling on Function Spaces with Applications to PDEs},
  author={Tong, Alex and others},
  journal={arXiv preprint arXiv:2505.17004},
  year={2025}
}