Contribute

Current DB: 301 papers (12 curated).

How the data is stored

Tip: keep most papers as index-only; curate only a small set, but do those with high quality and full detail.

Add a new curated paper (step-by-step)

  1. Find the paper in Research. Open its page (index view) and copy the slug from the URL (?slug=...).
  2. Create data/curations/<slug>.json using the template below.
  3. Fill in the fields. For results_tables, copy the numbers verbatim from the paper’s tables (metrics, settings, baselines). For core_math, state the core idea and give the key equations in LaTeX.
  4. Rebuild the website: python scripts/generate_research_site.py
  5. Commit and push. GitHub Pages will serve docs/.
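Before rebuilding in step 4, it can help to sanity-check the new file. Below is a minimal sketch, assuming the field names from the template below; `check_curation` and `REQUIRED_FIELDS` are hypothetical helpers, not part of the repo's scripts:

```python
import json

# Fields assumed required by the site generator, taken from the curation
# template in this document (hypothetical list; adjust to the real generator).
REQUIRED_FIELDS = [
    "slug", "status", "full_title", "short_title", "authors",
    "year", "venue", "tldr", "problem", "contrib",
]

def check_curation(path):
    """Return the list of required fields missing from one curation file."""
    with open(path, encoding="utf-8") as f:
        # json.load raises an error on invalid JSON, e.g. LaTeX
        # backslashes that were not doubled.
        data = json.load(f)
    return [field for field in REQUIRED_FIELDS if field not in data]
```

Running this on data/curations/&lt;slug&gt;.json before python scripts/generate_research_site.py catches malformed JSON and missing fields early.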

Template (copy/paste)

{
  "slug": "my-paper-2025",
  "status": "curated",
  "full_title": "Full paper title goes here",
  "short_title": "MyPaper",
  "authors": "First Author; Second Author; ...",
  "year": 2025,
  "venue": "ICLR",
  "method_class": "Operator learning",
  "links": {
    "paper": "https://arxiv.org/abs/xxxx.xxxxx",
    "code": "https://github.com/user/repo"
  },

  "tldr": "2–4 sentences: what the method does + why it matters.",
  "problem": "What problem does the paper solve? (be concrete about partial observation / inverse / operator learning / etc.)",

  "contrib": [
    "Main contribution #1 (method idea).",
    "Main contribution #2 (training / inference trick).",
    "Main contribution #3 (benchmark / dataset / analysis)."
  ],
  "benefits": [
    "Why this is better than prior work (accuracy / speed / generalization / stability / data efficiency)."
  ],

  "core_math": [
    "Put the key equations here in LaTeX (no $...$ wrappers); double every backslash so the JSON stays valid, e.g. \\theta, \\sum.",
    "Example: G_\\theta(u)(y) = \\sum_{k=1}^p b_k(u)\\,t_k(y)"
  ],

  "data_setting": [
    "Dataset: size, how generated, train/val/test split.",
    "PDE + domain + discretization / resolution.",
    "Observation pattern (mask/sensors) + noise model."
  ],
  "model_setting": [
    "Architecture (layers, width, latent dims, Fourier modes, etc.).",
    "Inputs/outputs parameterization (what is u, what is a, what is y)."
  ],
  "training_setting": [
    "Optimizer, learning rate schedule, epochs/steps, batch size, hardware."
  ],

  "baselines": [
    "Baseline A",
    "Baseline B"
  ],

  "results_tables": [
    {
      "title": "Main quantitative results (copy numbers from the paper tables)",
      "note": "Write the metric + what lower/higher means.",
      "header": ["Setting", "Method", "Metric"],
      "rows": [
        ["...", "MyPaper", "0.012"],
        ["...", "Baseline A", "0.034"]
      ]
    }
  ],

  "interesting": [
    "Any extra insights that are useful for readers (failure modes, ablations, theory notes, etc.)."
  ],

  "bibtex": "@inproceedings{...}"
}

You can add extra fields if useful; unknown fields are ignored by the site generator.
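As a sketch of what "unknown fields are ignored" means in practice, a tolerant loader can keep only the keys it knows and drop the rest. The `load_curation` helper below is illustrative, not the generator's actual code:

```python
import json

# Field names taken from the curation template above.
KNOWN_FIELDS = {
    "slug", "status", "full_title", "short_title", "authors", "year",
    "venue", "method_class", "links", "tldr", "problem", "contrib",
    "benefits", "core_math", "data_setting", "model_setting",
    "training_setting", "baselines", "results_tables", "interesting",
    "bibtex",
}

def load_curation(text):
    """Parse a curation JSON string, silently dropping unknown fields."""
    data = json.loads(text)
    return {k: v for k, v in data.items() if k in KNOWN_FIELDS}
```

This is why adding extra fields is safe: they simply never reach the rendered page.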

Batch BibTeX export

On the Research page, use the Pick checkboxes to select multiple papers, then export them as a single BibTeX file (copy or download).

Bulk import (optional)

If you have a BibTeX file and want to convert it into index entries (NDJSON), use:

python scripts/import_bibtex_to_json.py path/to/papers.bib

The script is best-effort and produces index metadata only; curations still require human-written JSON files.
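The conversion idea can be sketched as follows. This is an illustration, not the actual import_bibtex_to_json.py; the naive regex assumes simple entries with one level of braces per field:

```python
import json
import re

def bib_to_ndjson(bib_text):
    """Convert BibTeX entries to NDJSON index lines (rough sketch).

    Assumes one level of braces per field value; the real script
    may handle more cases.
    """
    lines = []
    # Match @type{key, ...} blocks; "\n}" is assumed to close an entry.
    for m in re.finditer(r"@(\w+)\s*\{\s*([^,]+),(.*?)\n\}", bib_text, re.S):
        # Collect field = {value} or field = "value" pairs.
        fields = dict(re.findall(r'(\w+)\s*=\s*[{"]([^}"]*)[}"]', m.group(3)))
        entry = {
            "slug": m.group(2).strip(),
            "status": "index",
            "full_title": fields.get("title", ""),
            "year": int(fields["year"]) if fields.get("year", "").isdigit() else None,
            "venue": fields.get("booktitle") or fields.get("journal") or "",
        }
        lines.append(json.dumps(entry))
    return "\n".join(lines)
```

Each output line is one JSON object, matching the index-only (non-curated) entry shape.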