TL;DR
A resolution-invariant neural operator that learns solution operators of PDEs by parameterizing integral kernels in Fourier space, enabling efficient training and strong cross-resolution generalization.
Problem
Learn the mapping (operator) from input functions (e.g., initial conditions, coefficients, forcing) to PDE solution fields, with an architecture that generalizes across discretizations/resolutions.
Benefits vs others
- Resolution-invariant operator parameterization: the learned operator can be applied to meshes/grids different from those used during training.
- FFT-based evaluation makes the global kernel mixing computationally efficient compared to dense integral kernels.
- Strong accuracy and data-efficiency vs CNN baselines on Navier–Stokes (especially at low viscosity).
Interesting detail
- The Fourier-layer design provides global receptive field mixing in O(n log n) via FFT, avoiding quadratic cost of dense kernels.
- The reported cross-resolution tests highlight one key advantage of operator learning: deploying the same learned operator on finer grids without retraining.
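The O(n log n) point above follows from the convolution theorem: applying a translation-invariant kernel as a dense n×n circulant matrix is the same operation as a pointwise multiplication in frequency space. A quick numpy sanity check with a random grid kernel (not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
v = rng.standard_normal(n)   # input field on a uniform grid
k = rng.standard_normal(n)   # translation-invariant kernel on the same grid

# Dense evaluation: materialize the n x n circulant matrix, O(n^2) per apply.
i = np.arange(n)
C = k[(i[:, None] - i[None, :]) % n]   # C[a, b] = k[(a - b) mod n]
dense = C @ v

# Fourier evaluation: multiply spectra pointwise, O(n log n) total.
fast = np.fft.irfft(np.fft.rfft(k) * np.fft.rfft(v), n=n)

assert np.allclose(dense, fast)   # same operator, cheaper evaluation
```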
Core method (math)
- Each Fourier layer updates the hidden field v_t via v_{t+1}(x) = σ(W v_t(x) + (K v_t)(x)), where the integral kernel K is applied in Fourier space: (K v_t)(x) = F⁻¹(R_φ · (F v_t))(x).
- F/F⁻¹ denote the (fast) Fourier transform and its inverse; R_φ is a learned complex weight tensor applied to the lowest k_max modes (higher modes are truncated); W is a pointwise linear map; σ is a pointwise nonlinearity (GELU).
- The full model lifts the input function a(x) to d_v channels with a pointwise map P, applies a stack of Fourier layers, and projects back to the output with a pointwise map Q.
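A single 1-D Fourier layer can be sketched in numpy. This is a minimal single-channel stand-in, not the paper's implementation: the paper uses multi-channel complex weight tensors for R and a learned linear map for W; here both are toy scalars/vectors.

```python
import numpy as np

def fourier_layer(v, R, W, modes):
    """One 1-D Fourier layer: v_out = gelu(W*v + F^-1(R . F(v))).

    v     : (n,) real field sampled on a uniform grid
    R     : (modes,) complex spectral weights (random stand-ins here)
    W     : scalar pointwise weight (stand-in for the linear map W)
    modes : number of low Fourier modes kept (k_max); the rest are truncated
    """
    v_hat = np.fft.rfft(v)                       # F(v), O(n log n)
    out_hat = np.zeros_like(v_hat)
    out_hat[:modes] = R * v_hat[:modes]          # weight kept modes, drop the rest
    spectral = np.fft.irfft(out_hat, n=len(v))   # F^-1(R . F(v)): global mixing
    pre = W * v + spectral                       # local linear + global spectral term
    # tanh approximation of GELU
    return 0.5 * pre * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (pre + 0.044715 * pre**3)))

rng = np.random.default_rng(0)
v = rng.standard_normal(64)
R = rng.standard_normal(8) + 1j * rng.standard_normal(8)
out = fourier_layer(v, R, 0.1, modes=8)
assert out.shape == v.shape   # the layer preserves the grid resolution
```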
Main theoretical contribution
- Restricting the neural-operator kernel to be translation-invariant makes the integral operator a convolution, so by the convolution theorem it can be parameterized and applied directly in Fourier space.
- Because the truncated Fourier weights R_φ parameterize an operator on functions rather than on a fixed grid, the learned parameters transfer across discretizations; this is what justifies the zero-shot super-resolution evaluation.
Main contribution
- Introduce the Fourier Neural Operator (FNO): a neural operator built from stacked Fourier integral operator layers where the integral kernel is represented in the Fourier domain and evaluated efficiently with FFTs.
- Demonstrate strong cross-resolution generalization (“super-resolution” evaluation) on Burgers (1D), Darcy flow (2D), and Navier–Stokes (2D) benchmarks; FNO maintains low error when testing at resolutions different from training.
- Provide practical training recipes and comparisons against common surrogates (CNN/UNet, ResNet) and other operator-learning baselines (GNO/LNO/MGNO, PCA-NN, etc.).
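The zero-shot cross-resolution claim can be illustrated with just the linear spectral part of a Fourier layer. With resolution-independent Fourier coefficients (numpy's `norm="forward"` scaling), the same weights define one operator on functions, so evaluating on a finer grid and subsampling reproduces the coarse-grid output. A minimal sketch; the weights `R` are random stand-ins for trained parameters:

```python
import numpy as np

def spectral_filter(v, R, modes):
    # norm="forward" stores true Fourier-series coefficients, so the same
    # weights R act identically on grids of any size (the key to
    # zero-shot super-resolution).
    v_hat = np.fft.rfft(v, norm="forward")
    out_hat = np.zeros_like(v_hat)
    out_hat[:modes] = R * v_hat[:modes]
    return np.fft.irfft(out_hat, n=len(v), norm="forward")

modes = 8
rng = np.random.default_rng(0)
R = rng.standard_normal(modes) + 1j * rng.standard_normal(modes)

# One band-limited "input function", sampled at two resolutions.
def u(x):
    return np.sin(2 * np.pi * x) + 0.5 * np.cos(2 * np.pi * 3 * x)

coarse = spectral_filter(u(np.arange(64) / 64), R, modes)
fine   = spectral_filter(u(np.arange(256) / 256), R, modes)

# The fine-grid output, subsampled to the coarse grid points, matches the
# coarse-grid output: same operator at both resolutions.
assert np.allclose(coarse, fine[::4])
```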
Main results (headline)
- Navier–Stokes (64×64): FNO attains the lowest relative error in every setting of the table below (e.g. 8.7e-4 vs 5.6e-3 for the best CNN baseline at ν=1e-3) with ~5–8× fewer parameters and ~3× faster epochs.
- Burgers and Darcy: FNO has the lowest error at every test resolution, and its error does not degrade when evaluated away from the training resolution.
Experiments
PDE problems
- Burgers equation
- Darcy flow
- Navier–Stokes
- Fluid dynamics
Tasks
- Operator learning / surrogate modeling
- Super-resolution / cross-resolution generalization
Experiment setting (high level)
- Supervised learning on simulated PDE datasets.
- Evaluated on cross-resolution generalization (train at one discretization, test at others) and, for Navier–Stokes, on autoregressive rollout in time.
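All error numbers in the tables below use the relative L2 norm. A minimal helper (name hypothetical) for a single prediction/target pair; the papers average this over the test set:

```python
import numpy as np

def relative_l2(pred, true):
    """Relative L2 error: ||pred - true||_2 / ||true||_2."""
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    return np.linalg.norm(pred - true) / np.linalg.norm(true)

assert relative_l2([1.0, 2.0], [1.0, 2.0]) == 0.0  # perfect prediction
```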
Comparable baselines
- U-Net
- TF-Net
- ResNet
- GCN
- FCN
- PCA-NN
- GNO
- LNO
- MGNO
- RBM (reduced basis method; classical Darcy baseline in the paper)
Main results
Navier–Stokes (64×64): relative error at t=1 (avg over test set) + time/epoch
Lower is better. Time per epoch is in seconds (as reported in the paper).
| Method | #Params | Time/epoch (s) | ν=1e−3 (N=1000) | ν=1e−4 (N=1000) | ν=1e−4 (N=10000) | ν=1e−5 (N=1000) |
|---|---|---|---|---|---|---|
| U-Net | 4.3M | 119 | 1.0e-2 | 3.3e-2 | 1.0e-2 | 8.4e-2 |
| TF-Net | 2.3M | 100 | 7.0e-3 | 4.8e-2 | 7.2e-3 | 1.1e-1 |
| ResNet | 2.6M | 150 | 5.6e-3 | 4.5e-2 | 4.0e-3 | 1.1e-1 |
| FNO | 0.5M | 39 | 8.7e-4 | 6.9e-3 | 1.8e-3 | 1.5e-2 |
Burgers (1D): cross-resolution generalization (train s=2048, test varying s)
Metric: relative L2 error. Lower is better.
| Method | s=256 | s=512 | s=1024 | s=2048 | s=4096 | s=8192 |
|---|---|---|---|---|---|---|
| NN | 1.6e-2 | 1.4e-2 | 1.2e-2 | 1.1e-2 | 1.0e-2 | 9.8e-3 |
| GCN | 2.0e-2 | 1.8e-2 | 1.6e-2 | 1.6e-2 | 1.7e-2 | 1.8e-2 |
| FCN | 1.5e-2 | 1.4e-2 | 1.3e-2 | 1.3e-2 | 1.3e-2 | 1.2e-2 |
| PCA-NN | 2.6e-2 | 1.9e-2 | 1.7e-2 | 1.7e-2 | 1.6e-2 | 1.6e-2 |
| GNO | 1.1e-1 | 5.8e-2 | 3.7e-2 | 2.5e-2 | 2.0e-2 | 1.8e-2 |
| LNO | 1.2e-1 | 9.4e-2 | 7.8e-2 | 5.4e-2 | 4.2e-2 | 3.5e-2 |
| MGNO | 8.8e-3 | 7.2e-3 | 5.9e-3 | 5.7e-3 | 5.3e-3 | 5.0e-3 |
| FNO | 1.1e-3 | 7.3e-4 | 4.6e-4 | 3.4e-4 | 3.0e-4 | 2.7e-4 |
Darcy (2D): cross-resolution generalization (train s=421, test varying s)
Metric: relative L2 error. Lower is better.
| Method | s=85 | s=141 | s=211 | s=421 |
|---|---|---|---|---|
| NN | 5.1e-2 | 4.9e-2 | 4.8e-2 | 4.7e-2 |
| FCN | 8.9e-2 | 8.8e-2 | 8.5e-2 | 8.4e-2 |
| PCA-NN | 3.2e-2 | 2.5e-2 | 2.4e-2 | 2.4e-2 |
| RBM | 3.0e-2 | 2.0e-2 | 1.9e-2 | 1.5e-2 |
| GNO | 1.1e-1 | 8.6e-2 | 7.8e-2 | 6.0e-2 |
| LNO | 2.0e-1 | 1.8e-1 | 1.7e-1 | — |
| MGNO | 3.2e-2 | 2.6e-2 | 2.4e-2 | 2.2e-2 |
| FNO | 1.9e-2 | 1.3e-2 | 1.0e-2 | 8.8e-3 |
Citation (BibTeX)
@inproceedings{li2021fno,
  title={Fourier Neural Operator for Parametric Partial Differential Equations},
  author={Li, Zongyi and Kovachki, Nikola and Azizzadenesheli, Kamyar and Liu, Burigede and Bhattacharya, Kaushik and Stuart, Andrew and Anandkumar, Anima},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2021}
}