TL;DR
This work studies how to condition diffusion models to generate PDE simulations, e.g., given initial/boundary conditions or coarse states. It compares conditioning strategies (input concatenation, cross-attention, guidance variants) and reports which choices work best for PDE data.
Problem
Diffusion models are flexible generative priors, but PDE simulation requires strong conditioning: the output must be consistent with initial/boundary conditions and often with partial observations. The paper targets the practical question: which conditioning mechanisms are effective and stable for PDE simulations?
Benefits vs others
- Provides a **design map** for conditioning diffusion models in PDE settings (what to try first, what tends to fail).
- Clarifies tradeoffs between hard constraints (inpainting) and soft constraints (likelihood guidance).
- Useful as a reference baseline when building new diffusion PDE inference methods.
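The hard-constraint (inpainting) route in the tradeoff above can be sketched as a masked replacement inside the reverse loop. This is a minimal RePaint-style NumPy sketch, not the paper's implementation: `x_prev_model` stands in for the denoiser's proposal at step t−1, and all names are illustrative.

```python
import numpy as np

def inpaint_step(x_prev_model, obs, mask, abar_prev, rng):
    """RePaint-style hard constraint: on observed entries (mask == 1), replace
    the model's proposal for x_{t-1} with a correctly noised copy of the data,
    x_obs ~ N(sqrt(abar_{t-1}) * obs, (1 - abar_{t-1}) I)."""
    eps = rng.standard_normal(obs.shape)
    x_obs = np.sqrt(abar_prev) * obs + np.sqrt(1.0 - abar_prev) * eps
    return mask * x_obs + (1.0 - mask) * x_prev_model

# Toy usage: first 4 entries observed, rest left to the (stand-in) model.
rng = np.random.default_rng(1)
obs = np.ones(8)
mask = np.concatenate([np.ones(4), np.zeros(4)])
x_prev_model = np.full(8, 0.5)  # stand-in for the denoiser's proposal
x_prev = inpaint_step(x_prev_model, obs, mask, abar_prev=0.9, rng=rng)
```

At abar_prev = 1 (i.e., t = 0) the noise coefficient vanishes and observed entries match the data exactly, which is the "hard equality" behavior the summary contrasts with soft guidance.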
Interesting detail
- The paper's ablations help decide whether to implement conditioning as an architectural choice (attention) or as sampling-time guidance.
- It also motivates hybrid approaches (conditioning + lightweight guidance).
Core method (math)
Standard denoising-diffusion template (paper-specific equations not yet manually curated):
- Forward noising: q(x_t | x_0) = N(x_t; √ᾱ_t x_0, (1 − ᾱ_t) I), with ᾱ_t = ∏_{s≤t} α_s.
- Training: minimize E‖ε − ε_θ(x_t, t, c)‖², i.e., the network predicts the injected noise given conditioning c.
- Sampling: iterate the learned reverse kernel p_θ(x_{t−1} | x_t, c), optionally adding guidance gradients ∇ log p(c|x_t).
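As a concrete illustration of the standard forward process q(x_t | x_0) = N(√ᾱ_t x_0, (1 − ᾱ_t) I), here is a minimal, model-free NumPy sketch using a cosine noise schedule; the toy 1-D "PDE state" and all names are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def cosine_alpha_bar(t, T):
    # Cosine noise schedule (Nichol & Dhariwal style): abar in (0, 1],
    # abar(0) = 1 (no noise), abar(T) ~ 0 (pure noise).
    s = 0.008
    f = lambda u: np.cos((u / T + s) / (1 + s) * np.pi / 2) ** 2
    return f(t) / f(0)

def forward_noise(x0, t, T, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x0, (1 - abar_t) I)."""
    abar = cosine_alpha_bar(t, T)
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps

rng = np.random.default_rng(0)
x0 = np.sin(np.linspace(0, 2 * np.pi, 64))  # toy 1-D "PDE state"
xT = forward_noise(x0, t=1000, T=1000, rng=rng)  # ~ pure Gaussian noise
```

Conditioning enters when the reverse of this process is learned: c is fed to the denoiser (architecture) and/or used at sampling time (guidance).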
Main theoretical contribution
- Conditioning can be viewed as learning p(x_0 | c); inpainting enforces hard equality constraints on observed entries.
- Guidance adds ∇ log p(c|x_t) terms during sampling, trading compute for tighter constraint satisfaction.
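The guidance term above can be sketched for the simplest analytically tractable case, a pointwise Gaussian observation model. This is an assumption for illustration, not the paper's likelihood: it differentiates log p(c|x_t) at x_t directly, a crude simplification (DPS-style methods differentiate through the denoised estimate x̂_0 instead), and all names are hypothetical.

```python
import numpy as np

def likelihood_guidance(x_t, obs, mask, sigma=0.1):
    """Approximate grad_{x_t} log p(c | x_t) under the assumed Gaussian
    observation model c = mask * x + N(0, sigma^2 I)."""
    return mask * (obs - x_t) / sigma**2

def guided_mean(model_mean, x_t, obs, mask, step_size=1e-3, sigma=0.1):
    # Shift the reverse step's proposed mean along the conditioning gradient:
    # this is the "soft constraint" counterpart to hard inpainting.
    return model_mean + step_size * likelihood_guidance(x_t, obs, mask, sigma)
```

The step_size knob is where the compute/constraint tradeoff shows up: stronger guidance (or more guided steps) tightens constraint satisfaction at the cost of extra gradient evaluations and potential sampling instability.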
Main contribution
- Systematic comparison of conditioning strategies for diffusion-based PDE simulators.
- Reports empirical stability/quality tradeoffs across PDE datasets and conditioning types.
Experiments
PDE problems
- Burgers
- Kuramoto–Sivashinsky
- Kolmogorov flow
Tasks
- Conditional generation
- Partial-observation reconstruction
Experiment setting (high level)
- Studies multiple conditioning setups (known IC/BC, sparse sensors, masked fields).
- Compares diffusion sampling strategies and architecture choices.
Comparable baselines
- PDE-Refiner
- U-Net
- FNO
Main results
Key takeaways
| Experiment | Metric | Reported takeaway |
|---|---|---|
| Sparse observations | Error / likelihood | Conditional diffusion improves robustness vs deterministic models in low-observation regimes. |
| Sampling choices | Runtime vs error | Quality depends strongly on the number and schedule of denoising steps and on how conditioning is applied, trading runtime against error. |
Citation (BibTeX)
@article{conditionaldiffpde2024,
  title={Conditional diffusion for PDE simulations},
  author={Liu, Xueqi and others},
  journal={arXiv preprint arXiv:2410.16415},
  year={2024}
}