Experiment: Memory bloomnet

Applying the Karpathy LLM Knowledge Base compilation pattern to a subset of vault notes (one article cluster) will produce higher-quality articles with better cross-linking than manual curation, while maintaining factual accuracy via provenance tracking.

Tags: architecture, ai-agents, knowledge-management
Result: pending

Changelog

| Date | Summary |
| --- | --- |
| 2026-04-06 | Audited: fixed chain (standalone, chain_prev null), iteration set to 4, last_audited stamped |
| 2026-04-04 | Initial creation |

Hypothesis

The Karpathy LLM KB pattern describes a compounding loop: the LLM compiles raw sources into wiki articles, and the compiled output is fed back as input for future compilations. The vault-as-LLM-KB idea proposes applying this loop to the Obsidian vault. The hypothesis is that LLM-compiled articles will have higher cross-link density (more wikilinks per article) and better structural consistency than manually written notes, and that the circular knowledge corruption pitfall can be mitigated via provenance tracking (a source: llm-compiled frontmatter tag).

Method

  1. Cluster selection: use the RATCHET article cluster as the pilot. It has the densest coverage across dimensions: 9 breakthroughs, ~10 research notes, ~8 experiments, 2 ideas, 7 journal entries.
  2. Raw source preparation: collect all notes in the cluster into a raw/ staging area. Strip frontmatter, keep content + wikilinks. This is the LLM’s input corpus.
  3. Compilation prompt: instruct the LLM to synthesize a comprehensive article from the raw sources:
    • Write a 2,000-3,000 word article covering the ratchet methodology across all projects
    • Add wikilinks to every referenced note, experiment, pitfall, and skill
    • Identify connections between notes that are not currently linked
    • Categorize into subsections by project
  4. Provenance tracking: all LLM-compiled output gets source: llm-compiled in frontmatter and links back to its raw inputs. This prevents the circular corruption pitfall.
  5. Quality comparison: measure cross-link density (wikilinks per 1000 words), factual accuracy (spot-check 20 claims against source notes), and structural consistency (section headings, changelog, frontmatter completeness) against existing manually-curated articles.
  6. Lint + Heal cycle: after initial compilation, run a second LLM pass to check for inconsistencies, missing cross-links, and factual errors. This is the “heal” step from the Karpathy pattern.
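Steps 2, 4, and 6 are mechanical enough to sketch in code. The snippet below is a minimal illustration, not the actual pipeline: it assumes notes carry a leading YAML frontmatter block delimited by `---`, and that wikilinks take the `[[Target]]` or `[[Target|alias]]` form. All function names are hypothetical.

```python
import re

# Assumptions: frontmatter is a leading YAML block fenced by --- lines;
# wikilinks look like [[Target]] or [[Target|alias]].
FRONTMATTER_RE = re.compile(r"\A---\n.*?\n---\n", re.DOTALL)
WIKILINK_RE = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]+)?\]\]")

def strip_frontmatter(note_text: str) -> str:
    """Step 2: drop the YAML frontmatter, keep body text and wikilinks."""
    return FRONTMATTER_RE.sub("", note_text, count=1)

def tag_compiled(article_body: str, raw_inputs: list[str]) -> str:
    """Step 4: prefix compiled output with provenance frontmatter
    (source: llm-compiled plus links back to its raw inputs)."""
    sources = "\n".join(f'  - "[[{name}]]"' for name in raw_inputs)
    return (f"---\nsource: llm-compiled\ncompiled_from:\n{sources}\n---\n"
            f"{article_body}")

def find_unresolved_links(article: str, note_titles: set[str]) -> set[str]:
    """Step 6 (lint pass): wikilink targets that match no existing note."""
    return set(WIKILINK_RE.findall(article)) - note_titles
```

Keeping provenance in frontmatter rather than prose means the lint pass (and any future compilation run) can filter llm-compiled notes out of its raw input set with a single metadata query, which is what blocks the circular corruption loop.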

Results

Pending. Will measure:

  • Cross-link density: wikilinks/1000 words in compiled vs manual articles
  • Factual accuracy: percentage of claims traceable to source notes
  • New connections discovered: links the LLM found that didn’t exist in manual curation
  • Time investment: LLM compilation time vs estimated manual curation time

Findings

Pending.

Next Steps

If quality targets are met, extend the compilation pattern to all 6 article clusters. Build an automated pipeline that runs the compilation step weekly, producing a “compiled wiki” layer on top of the raw vault notes. This feeds directly into the blog-post-generator skill for The Public Lab.