TL;DR
PRISMA reframes "physics guidance" for diffusion neural operators: instead of adding a PDE-residual gradient at sampling time, it feeds residual information into the denoiser as a *frequency-aware attention signal*. This aims to improve high-frequency fidelity and stability for inverse problems under sparse/partial observations.
Problem
Diffusion-based PDE priors are powerful for inverse problems, but *loss guidance* (injecting ∇||F(u)|| during sampling) can be unstable and expensive because it requires repeated PDE residual evaluations/gradients. PRISMA targets more stable and efficient conditioning by integrating residual information into the denoiser through spectral attention.
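To make the cost contrast concrete, here is a minimal numpy sketch of the sampling-time loss-guidance step described above, using a toy 1-D periodic Poisson residual as a stand-in for the PDE operator (the function names and the lr value are ours, not the paper's):

```python
import numpy as np

def laplacian(u, h=1.0):
    # Periodic 1-D Laplacian; toy stand-in for the differential operator.
    return (np.roll(u, -1) + np.roll(u, 1) - 2.0 * u) / h**2

def residual(u, f):
    # Poisson residual F(u) = Δu − f.
    return laplacian(u) - f

def loss_guided_step(u, f, denoise, lr=0.05):
    """One guided sampling step: denoise, then descend ∇||F(u)||².
    For this residual the gradient is 2Δ(Δu − f), since the periodic
    Laplacian is self-adjoint. Note the residual (and its gradient)
    must be re-evaluated at every sampling step — the cost PRISMA
    avoids by moving residual information into the denoiser."""
    u = denoise(u)
    grad = 2.0 * laplacian(residual(u, f))
    return u - lr * grad
```

With an identity "denoiser", repeated steps drive the residual norm down, which is the behavior guidance relies on.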
Benefits vs others
- Physics enters the **model** (conditioning/attention), reducing reliance on heavy sampling-time guidance loops.
- Residual-driven spectral attention emphasizes frequencies where the PDE constraint is violated, improving sharpness/high-frequency detail.
- Compatible with operator-learning backbones and inverse tasks (masking, sparse sensors).
Interesting detail
- PRISMA highlights a design pattern: *conditioning via residual features* (train-time) vs *guidance via residual gradients* (sample-time).
- The spectral view connects naturally to known frequency/pathology issues in PDE learning (spectral bias, aliasing).
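The conditioning-vs-guidance pattern can be sketched in a few lines. This is our illustrative guess at a residual-to-spectral-attention map (softmax over Fourier magnitudes of the residual), not the paper's exact parameterization:

```python
import numpy as np

def spectral_attention_weights(r, tau=0.1):
    """Hypothetical residual-to-attention map: softmax over normalized
    Fourier magnitudes of the PDE residual, so frequencies where the
    constraint is most violated receive the most weight."""
    mag = np.abs(np.fft.rfft(r))
    mag = mag / (mag.max() + 1e-12)   # normalize for a stable softmax
    w = np.exp(mag / tau)
    return w / w.sum()

def modulate(features, weights, alpha=1.0):
    """Gate denoiser features per frequency: identity pass-through plus
    a boost proportional to the residual attention on that frequency."""
    fhat = np.fft.rfft(features)
    return np.fft.irfft(fhat * (1.0 + alpha * weights), n=len(features))
```

A residual concentrated at one frequency yields attention concentrated there, which is exactly the "focus where the constraint is violated" behavior the bullets describe.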
Core method (math)
PRISMA follows the standard diffusion setup (forward noising, learned denoiser/score), but conditions the denoiser at train time on spectral features of the PDE residual instead of injecting residual gradients at sampling time. See the paper for the exact equations.
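As a hedged sketch of the general mechanism (standard diffusion notation; the symbols are assumed, not taken from the paper):

```latex
% Forward noising and the trained score
u_t = \alpha_t\,u_0 + \sigma_t\,\varepsilon,\qquad
s_\theta(u_t, t) \approx \nabla_{u_t}\log p_t(u_t)

% Sampling-time loss guidance (the baseline PRISMA moves away from)
\tilde{s}(u_t, t) = s_\theta(u_t, t)
  - \gamma\,\nabla_{u_t}\bigl\|F(\hat{u}_0(u_t))\bigr\|^2

% Residual conditioning (train-time): spectral features of the residual
% enter the denoiser as an attention signal rather than a gradient
s_\theta\bigl(u_t,\, t,\, \mathrm{Attn}(\widehat{F(\hat{u}_0)})\bigr),
\qquad \widehat{\,\cdot\,}\ =\ \text{Fourier transform}
```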
Main theoretical contribution
- Residual features act like a learned, frequency-dependent preconditioner for denoising steps (focus where constraints are violated).
- Compared to classical guidance, residual conditioning pushes constraint awareness into training, potentially lowering sampling-time compute.
Main contribution
- Introduces **residual-as-spectral-attention**: use Fourier features of PDE residuals to modulate denoising (instead of only guidance).
- Shows how to plug the mechanism into diffusion neural operators for **inverse problems under partial observations**.
- Provides empirical comparisons against guidance-based diffusion baselines and operator-learning baselines (e.g., FNO/PINO).
Main results (headline)
Per the table below, PRISMA matches guidance-based baselines on forward Darcy while substantially lowering inverse-reconstruction error versus FunDPS and DiffusionPDE at 8 steps, and its quality holds (often improves) down to a single sampling step.
Experiments
PDE problems
- Darcy flow
- Poisson
- Helmholtz
- Navier–Stokes
Tasks
- Forward operator learning
- Inverse / partial-observation reconstruction
Experiment setting (high level)
- Partial observation (masked measurements) with diffusion sampling.
- Reports results across different step counts (1–200).
- Emphasis on fast sampling with minimal quality degradation.
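The "partial observation" setting above can be mimicked with a toy masking operator. This is our sketch of a sparse-sensor measurement model, not the paper's exact setup:

```python
import numpy as np

def sparse_observe(u, frac=0.03, noise_std=0.0, seed=0):
    """Toy partial-observation operator: keep a random fraction of grid
    points as 'sensors', optionally add Gaussian noise, and zero every
    unobserved location. Returns the mask and the masked measurements."""
    rng = np.random.default_rng(seed)
    mask = rng.random(u.shape) < frac
    y = np.where(mask, u + noise_std * rng.standard_normal(u.shape), 0.0)
    return mask, y
```

The inverse task is then: given `(mask, y)`, reconstruct the full field `u` with the diffusion prior.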
Comparable baselines
- FunDPS
- DiffusionPDE
- FNO
- PINO
- DeepONet
Main results
Reported relative error (examples)
Relative error in %, lower is better; parenthesized numbers are sampling-step counts. Numbers transcribed from the paper; consult it for the complete experimental protocol.
| PDE | Direction | PRISMA (200) | PRISMA (8) | PRISMA (1) | FunDPS (8) | DiffusionPDE (8) | DiffusionPDE (1) |
|---|---|---|---|---|---|---|---|
| Darcy | Fwd | 4.2% | 4.1% | 4.0% | 4.0% | 4.1% | 7.8% |
| Darcy | Inv | 17.5% | 13.5% | 13.5% | 21.6% | 29.0% | 79.6% |
| Poisson | Inv | 14.5% | 11.9% | 10.8% | 18.8% | 28.4% | 31.9% |
| Helmholtz | Inv | 10.4% | 5.6% | 5.0% | 8.4% | 8.5% | 8.9% |
| Navier–Stokes | Inv | 13.1% | 6.3% | 5.6% | 11.2% | 11.7% | 11.7% |
Citation (BibTeX)
@article{prisma2025,
  title={Beyond Loss Guidance: Using PDE Residuals as Spectral Attention in Diffusion Neural Operators},
  author={Batatia, Ilyes and others},
  journal={arXiv preprint arXiv:2512.01370},
  year={2025}
}