TL;DR
PDE-Refiner wraps an existing neural PDE solver with an iterative correction mechanism. Given an initial prediction, a refiner network predicts corrections based on the current state and (optionally) PDE residual/constraints, improving accuracy and long-horizon stability.
Problem
Neural PDE solvers often accumulate error over long autoregressive rollouts or when asked to reconstruct missing data. A single forward-pass model can be insufficient for high accuracy across many steps; PDE-Refiner targets this by explicitly learning a correction step that can be applied repeatedly.
Benefits vs others
- Turns many solvers into **iterative** solvers: accuracy can improve with more refinement steps.
- Can incorporate PDE residual signals, making corrections physically meaningful.
- Often improves long-horizon stability without dramatically increasing base model size.
Interesting detail
- Refinement is a useful *bolt-on* strategy: you can apply it to existing operator learners (FNO/PINO) or CNN baselines.
- Connects to classical numerical ideas (fixed-point iteration, multigrid) but learned from data.
Core method (math)
At a high level, refinement starts from a base prediction $\hat{u}^0$ for the next state and repeatedly applies a learned correction conditioned on the current state $u_t$:

$$\hat{u}^{k+1} = \hat{u}^k + f_\theta(\hat{u}^k, u_t, k), \qquad k = 0, \dots, K-1.$$

(Generic form of the iteration described above; the paper's exact parameterization of $f_\theta$ differs in detail.)
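The generic correction loop can be sketched in a few lines. This is a toy stand-in, not the paper's trained refiner; `correction_net` and `toy_correction` are hypothetical names used only for illustration:

```python
import numpy as np

def refine(u0, u_prev, correction_net, n_steps=4):
    """Iterative refinement: repeatedly add a learned correction to an
    initial prediction. `correction_net(u, u_prev, k)` stands in for a
    trained refiner network conditioned on the current state and step index."""
    u = u0
    for k in range(n_steps):
        u = u + correction_net(u, u_prev, k)  # u^{k+1} = u^k + f_theta(u^k, u_t, k)
    return u

# Toy "refiner": pull the state halfway toward a fixed target each step.
target = np.ones(8)
def toy_correction(u, u_prev, k):
    return 0.5 * (target - u)

u_pred = refine(np.zeros(8), np.zeros(8), toy_correction, n_steps=8)
```

With a contractive correction like the toy one above, more refinement steps monotonically shrink the residual, which is the "accuracy can improve with more steps" property listed under benefits.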
Main theoretical contribution
- Refinement can be interpreted as learning a data-driven preconditioned fixed-point iteration for PDE constraints.
- Unrolled refinement aligns training-time objectives with test-time iterative use.
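A minimal sketch of the unrolled-training idea under stated assumptions: the loss is accumulated over every refinement iterate, so the objective seen during training matches the iterative procedure used at test time. `correction_net` is again a hypothetical callable, and mean-squared error is chosen only for concreteness:

```python
import numpy as np

def unrolled_loss(u_init, u_true, correction_net, n_steps=4):
    """Unrolled objective: run the refinement loop during training and
    penalize the error of every intermediate iterate, not just the last."""
    u, loss = u_init, 0.0
    for k in range(n_steps):
        u = u + correction_net(u, k)
        loss += np.mean((u - u_true) ** 2)  # penalize each refinement iterate
    return loss / n_steps
```

In practice one would backpropagate through this loop; here NumPy is used only to show the structure of the objective.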
Main contribution
- Introduces an iterative refinement loop for PDE trajectory prediction.
- Targets error accumulation in long rollouts (forecasting) and reconstruction.
- Demonstrates gains on chaotic PDE benchmarks.
Experiments
PDE problems
- Kuramoto–Sivashinsky
- Kolmogorov flow
Tasks
- Long-horizon rollout forecasting
- Partial reconstruction
Experiment setting (high level)
- Iterative refine steps at inference; can be paired with neural operators.
- Evaluates stability/accuracy for long rollouts on chaotic PDEs.
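The rollout-with-refinement pattern evaluated here can be sketched as follows; `step_net` and `refiner` are hypothetical callables standing in for a trained base solver (e.g. a neural operator) and the refiner network:

```python
import numpy as np

def rollout(u0, step_net, refiner, horizon=100, refine_steps=4):
    """Autoregressive rollout that refines each one-shot prediction
    before feeding it back as the next input state."""
    u, traj = u0, [u0]
    for _ in range(horizon):
        u_next = step_net(u)              # one-shot base prediction
        for k in range(refine_steps):     # learned correction steps
            u_next = u_next + refiner(u_next, u, k)
        u = u_next
        traj.append(u)
    return np.stack(traj)                 # shape: (horizon + 1, *u0.shape)
```

Because the refiner only wraps the base step, this is the "bolt-on" usage noted earlier: the base solver needs no retraining to benefit.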
Comparable baselines
- FNO
- U-Net
- ResNet
Main results
| Benchmark | Metric | Reported takeaway |
|---|---|---|
| KS long rollouts | Trajectory error | Refinement reduces drift vs one-shot operator baselines. |
| Kolmogorov flow | Rollout stability | Improves long-horizon stability without retraining the base solver. |
Citation (BibTeX)
```bibtex
@article{pderefiner2023,
  title={PDE-Refiner: Achieving Accurate Long Rollouts with Neural PDE Solvers},
  author={...},
  journal={arXiv preprint arXiv:2308.05732},
  year={2023}
}
```