Vol. I • No. 1
April 2026
Special Report

memex

Karpathy's Pattern — Signal & Noise
Source: github.com/karpathy
17M+ Views • 5K+ Stars
Community Analysis
PERSISTENT MEMORY RAG vs WIKI COMPILE ONCE · QUERY FOREVER ~100 ARTICLES SWEET SPOT KNOWLEDGE COMPOUNDS PERSONAL SCALE ONLY HALLUCINATIONS PERSIST NO ENTERPRISE RBAC MARKDOWN IS FUTURE-PROOF
17M+
Tweet Views
~100
Articles · Sweet Spot
400K
Words · Karpathy's Wiki
50K
Token Ceiling
The Core Idea

Instead of making the LLM rediscover knowledge from raw documents on every query — the RAG way — Karpathy proposes having the LLM compile a structured, interlinked wiki once at ingest time. Knowledge accumulates. The LLM maintains the wiki, not the human.

Architecture

Layer 1
raw/
PDFs, articles, web clips. Immutable. Human adds, LLM never modifies.
Process
🤖 LLM
Reads sources. Synthesizes, links, and compiles structured pages. Runs lint checks.
Layer 2
wiki/
Compiled markdown pages. Encyclopedia-style articles with cross-references.
+
Layer 3
schema
CLAUDE.md / AGENTS.md. Rules that discipline the LLM's behavior as maintainer.


Strengths
Knowledge Compounds Over Time
Unlike RAG — where every query starts from scratch re-deriving connections — the LLM wiki is stateful. Each new source you add integrates into existing pages, strengthening existing connections and building new ones. The system gets more valuable with every addition, not just bigger.
Zero Maintenance Burden on Humans
The grunt work of knowledge management — cross-referencing, updating related pages, creating summaries, flagging contradictions — is what kills every personal wiki humans try to maintain. LLMs do this tirelessly. The human's job shrinks to: decide what to read, and what questions to ask.
Token-Efficient at Personal Scale
At ~100 articles, the wiki's index.md fits in context. The LLM reads the index, identifies relevant articles, and loads only those — no embedding, no vector search, no retrieval noise. This is faster and cheaper per query than a full RAG pipeline for this scale.
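The index-then-load loop described here can be sketched in a few lines of Python. This is a minimal sketch under assumed conventions: index entries shaped like `- [Title](page.md)`, and a keyword-overlap filter standing in for the LLM's actual relevance judgment.

```python
from pathlib import Path

def load_relevant(wiki_dir: str, query_terms: set[str]) -> str:
    """Read index.md, pick articles whose index line mentions a query term,
    and load only those pages into the prompt context."""
    wiki = Path(wiki_dir)
    index = (wiki / "index.md").read_text()
    picked = []
    for line in index.splitlines():
        # Stand-in for the LLM's relevance judgment: keyword overlap.
        if any(t.lower() in line.lower() for t in query_terms) and "](" in line:
            # Assumed index entry shape: "- [Title](page.md)".
            rel = line.split("](", 1)[1].split(")", 1)[0]
            picked.append((wiki / rel).read_text())
    return "\n\n".join(picked)  # the context handed to the model
```

The point of the sketch is the shape of the loop: one small file read in full, then a handful of targeted page reads, with no embedding or vector store anywhere.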
Human-Readable & Auditable
The wiki is just markdown. You can open it in any editor, read it yourself, version it in git, and inspect every claim. There's no black-box vector math. Every connection the LLM made is visible as a [[wikilink]]. This transparency is a genuine advantage over opaque embeddings.
Future-Proof & Portable
Plain markdown files work with any tool, any model, any era. No vendor lock-in. No proprietary database. When GPT-7 or Claude 5 releases, you point it at the same folder. The data outlives the tooling.
Self-Healing via Lint Passes
Karpathy describes periodic "health check" passes where the LLM scans the entire wiki for contradictions, orphaned pages (no links pointing to them), and concepts referenced but not yet given their own page. The wiki actively repairs itself rather than rotting silently.
Path to Fine-Tuning
As the wiki matures and gets "purified" through continuous lint passes, it becomes high-quality synthetic training data. Karpathy points to the possibility of fine-tuning a smaller, efficient model directly on the wiki — so the LLM "knows" your knowledge base in its own weights, not just its context.
Weaknesses
Errors Persist & Compound
This is the most serious structural flaw. With RAG, hallucinations are ephemeral — wrong answer this query, clean slate next time. With an LLM wiki, if the LLM incorrectly links two concepts at ingest time, that mistake becomes a prior that future ingest passes build upon. Persistent errors are more dangerous than ephemeral ones.
Hard Scale Ceiling (~50K tokens)
The wiki approach stops working reliably when the index can no longer fit in the model's context window — roughly 50,000–100,000 tokens. Karpathy's own wiki is ~100 articles / ~400K words on a single topic. A mid-size company has thousands of documents; a large one has millions. The architecture simply doesn't extend to that scale.
No Access Control or Multi-User Support
It's a folder of markdown files. There is no Role-Based Access Control, no audit logging, no concurrency handling for simultaneous writes, no permissions model. Multiple users or agents creating write conflicts is unmanaged. This is not a limitation that can be patched — it's a structural consequence of the architecture.
Manual Cross-Checking Burden Returns
In precision-critical domains (API specs, version constraints, legal records), LLM-generated content requires human cross-checking against raw sources to catch subtle factual errors. At that point, the maintenance burden you thought you'd eliminated returns in a different form: verification overhead.
Cognitive Outsourcing Risk
Critics on Hacker News argued that the bookkeeping Karpathy outsources — filing, cross-referencing, summarizing — is precisely where genuine understanding forms. By handing this to an LLM, you may end up with a comprehensive wiki you haven't internalized. You have a great reference; you may lack deep ownership of the knowledge.
Knowledge Staleness Without Active Upkeep
Community reports show that most people who try this pattern get the folder structure right but end up with a wiki that slowly becomes unreliable or gets abandoned. The system requires consistent source ingestion and regular lint passes. If you stop feeding it, the wiki rots — its age relative to your domain's pace of change becomes a liability.
Weaker Semantic Retrieval than RAG
Markdown wikilinks are explicit and individually created. Vector embeddings discover semantic connections across differently worded text that explicit linking cannot capture: finding that an article titled "caching strategies" is related to "performance bottlenecks" without any link between them. On large corpora, RAG's fuzzy matching is the superior retrieval mechanism.
RAG retrieves and forgets. A wiki accumulates and compounds. — LLM Wiki v2, community extension of Karpathy's pattern
Scale matters most here. The comparison is not absolute — it is highly scale-dependent. Below ~50K tokens, the wiki pattern wins. Above that threshold, RAG's architecture becomes necessary regardless of the storage format.
| Dimension | memex / LLM Wiki | RAG |
| --- | --- | --- |
| Knowledge Accumulation | ✦ Compounds with each ingest | Stateless — restarts every query |
| Maintenance Cost | ✦ LLM does the filing | Chunking pipelines need upkeep |
| Scale Ceiling | ~50–100K tokens hard limit | ✦ Millions of documents, no ceiling |
| Human Readability | ✦ Plain markdown, fully auditable | Black-box vector space |
| Semantic Retrieval | Explicit links only | ✦ Fuzzy semantic matching |
| Error Persistence | Errors compound into future pages | Errors are ephemeral per query |
| Multi-user / RBAC | None — flat file system | ✦ Supported by most platforms |
| Query Latency | ✦ Fast at personal scale | Embedding search overhead |
| Setup Complexity | ✦ Just folders & markdown | Vector DB, chunking, embeddings |
| Vendor Lock-in | ✦ Zero — any model, any editor | Often tied to embedding provider |
| Cross-reference Quality | ✦ Rich, named wikilinks | Implicit via similarity score |
| Fine-tuning Pathway | ✦ Wiki becomes training data | Raw chunks are poor training data |
Excellent Fit

Solo Deep Research

Reading papers, articles, and reports over weeks or months on a single topic. Karpathy's primary use case — his ML research wiki has ~100 articles and 400K words, all compiled without writing a line manually.

Excellent Fit

Personal Knowledge Base

Goals, health tracking, journal entries, podcast notes — building a structured picture of yourself over time. The LLM creates concept pages for recurring themes and connects them across months or years.

Good Fit

Small Team Wiki (<500 articles)

Engineering team internal docs, competitive analysis, trip planning. Works well if one person owns ingestion and the team reads via Obsidian. Breaks at concurrent writes or RBAC requirements.

Good Fit

Agentic Pipeline Memory

AI agent systems that need persistent memory between sessions. The wiki prevents agents from "waking up blank." Session context is compiled rather than re-derived, dramatically cutting token overhead.

Poor Fit

Mission-Critical Precision

API parameter specs, version constraints, legal records, medical protocols. LLM-generated pages can silently misstate critical details. Manual cross-checking eliminates the maintenance savings that make this pattern attractive.

Avoid

Enterprise Knowledge Management

Millions of documents, hundreds of users, RBAC, audit trails, regulatory compliance. The flat file architecture cannot address concurrency, access control, or governance. This is a personal productivity hack, not enterprise infrastructure.

A breakdown of where the pattern generates real signal vs. where the noise grows louder.

Signal

The Compile-Time Insight

Moving synthesis from query-time (RAG) to ingest-time (wiki) is a genuinely novel architectural choice with real benefits for accumulation. This is the core innovation and it holds up to scrutiny.

Strong
Signal

LLM as Librarian

Offloading the maintenance bottleneck — the work that kills all human-maintained wikis — to an LLM is elegant and correct. The pattern solves a real problem people actually have.

Strong
Noise

"RAG is Dead"

Community hyperbole. RAG and the wiki pattern solve different problems at different scales. The wiki pattern is a personal productivity tool, not a replacement for enterprise-grade retrieval infrastructure.

High Noise
Noise

Error Amplification Risk

Real and underweighted by enthusiasts. The persistent-error problem is structural — not a bug to fix with better prompting. It's a genuine trade-off the pattern makes, and it's most dangerous in precision-critical domains.

Real Risk
Signal

The Idea File Paradigm

Karpathy's framing of sharing an "idea file" vs. a code repo — letting each person's agent instantiate a custom version — is genuinely forward-thinking about how patterns propagate in the agent era.

Solid
Noise

"It'll Replace Enterprise RAG"

Karpathy explicitly scoped this to individual researchers. The limitations (no RBAC, no concurrency, ~50K token ceiling) are not bugs — they are consequences of the design assumptions. Enterprise use requires entirely different infrastructure.

Pure Noise
The schema file is a wish, not a discipline. The lack of an actual security model structurally makes this a skill with a dedicated output directory and no guardrails. — Threads community critique, April 2026
The bottleneck for personal knowledge bases was never the reading. It was the boring maintenance work nobody wanted to do. LLMs eliminate that bottleneck. — LLM Wiki v2 community extension
These are the real engineering answers. For each known limitation, the community has converged on concrete mitigations — some from Karpathy's own gist, others from production implementations. The Active Upkeep section at the bottom is the one that matters most.
📈

Scaling Past the Token Ceiling

High Priority
01 Add qmd as your search layer at 50–100+ articles qmd · CLI + MCP

The index.md breaks around 100–150 articles when it stops fitting cleanly in context. The community-endorsed fix is qmd — built by Tobi Lütke (Shopify CEO) and explicitly recommended by Karpathy himself. It's a local, on-device search engine for markdown files using hybrid BM25 + vector search with LLM re-ranking. No API calls, no data leaves your machine.

Install and integrate:

npm install -g @tobilu/qmd
qmd collection add ./wiki --name my-research
qmd mcp

The qmd mcp command exposes it as an MCP server so Claude Code uses it as a native tool — no shell-out friction. Three search modes: keyword BM25 (qmd search), semantic vector (qmd vsearch), and hybrid re-ranked (qmd query). Use the JSON output flag to pipe results into agent workflows.

Sweet spot: Use plain index.md navigation up to ~50 articles. Introduce qmd around 50–100. At 200+, qmd becomes essential — not optional.
Setup Effort
30 min one-time setup
02 Shard the index — one sub-index per topic domain Schema · CLAUDE.md

Before reaching for qmd, a simpler scaling step is to split index.md into domain-specific sub-indexes: wiki/ml-theory/index.md, wiki/infrastructure/index.md, etc. A root index.md points to sub-indexes, keeping any single file within comfortable context window bounds.

Define this in your schema file (CLAUDE.md) so the LLM knows which sub-index to update on ingest and which to consult on query. The LLM reads only the relevant sub-index, not the full corpus.

Sharding adds maintenance complexity to the schema. Document the domain boundaries clearly or the LLM will make inconsistent decisions about where new content lands.
Setup Effort
15 min schema update
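A sharded root index can be as small as a file of pointers. A sketch, assuming the two example domains above (entry wording is illustrative):

```markdown
# wiki/index.md (root)
- [ML Theory](ml-theory/index.md): models, training, papers
- [Infrastructure](infrastructure/index.md): deployment, tooling
```

The schema rule is the important half: ingest must pick exactly one sub-index to update, and query must consult the root only to decide which sub-index to load.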
03 Consolidation tiers — promote stable knowledge up the stack LLM Wiki v2 pattern

From the LLM Wiki v2 community extension: structure knowledge in tiers by confidence and stability. Raw observations live in low-confidence pages. After multi-source confirmation, the LLM promotes them to "established" pages. Core principles graduate to a high-confidence tier that rarely changes.

Each tier is more compressed, more confident, and longer-lived than the one below it. The LLM only loads lower tiers when deeper detail is needed. This naturally keeps context window usage lean as the wiki grows — you're querying the compressed tier first, the full tier only on demand.

Payoff: This also solves the staleness problem. Lower-tier pages decay naturally; upper-tier facts are reinforced repeatedly and earn their permanence.
Setup Effort
Schema design work, ongoing co-evolution
🔐

Access Control & Multi-User

Medium Priority
01 Host behind a lightweight wrapper — llmwiki.app or self-hosted MCP MCP · llmwiki · FastAPI

The flat-file architecture has no access control by default. The cleanest mitigation is to expose the wiki through an MCP server rather than as raw files. The open-source llmwiki project (lucasastorian/llmwiki) does exactly this: it wraps the Karpathy pattern with a FastAPI backend, Supabase auth, and MCP endpoints. Claude connects via MCP and has read/write tools — but only through the authenticated layer.

For self-hosted setups: build a minimal FastAPI wrapper that authenticates via JWT before allowing MCP tool calls. The markdown files stay on disk; the API layer enforces who can read and write. This pattern is already used in production implementations like Hjarni.

Eric's wheelhouse: Given your OPNsense VLAN setup and existing FastAPI work on TaskForge, a simple auth wrapper is well within reach. Expose via Tailscale to keep it off the public internet entirely — no RBAC needed if the network boundary does the work.
Setup Effort
Weekend project for self-hosted
02 Scoped directories for shared vs. private content Git · Directory structure

For small teams, a simpler pattern than full RBAC: separate wiki/shared/ from wiki/private/ directories, with git branch-level access control. The MCP server only exposes the shared/ tree to team members; personal pages stay in private/ on a branch only you merge from.

The LLM Wiki v2 pattern calls this "mesh sync with shared/private scoping." The schema file defines what can be promoted from private to shared and the conditions for that promotion.

This is soft access control — it relies on disciplined git usage, not cryptographic enforcement. Fine for trusted small teams; not for anything requiring audit trails or compliance.
Setup Effort
Git config + schema update
⚠️

Cross-Check & Error Persistence

High Priority
01 Confidence scoring — every claim carries a decay score Frontmatter · Schema

The LLM Wiki v2 pattern solves persistent errors by making uncertainty explicit. Every factual claim in a wiki page carries metadata: how many sources support it, when it was last confirmed, and a confidence score (e.g., 0.85). Confidence decays with time and strengthens with reinforcement from new sources.

Implement this in YAML frontmatter on each page:

confidence: 0.85
sources: 2
last_confirmed: 2026-04-01

The lint pass checks for pages with decayed confidence scores and flags them for re-verification. The LLM can say "I'm fairly sure about X but less sure about Y" — it's no longer a flat collection of equally-weighted claims.

Key benefit: This turns errors from permanent silent landmines into visible, decaying warnings. A wrong claim doesn't compound forever — it eventually gets flagged by its own decaying score.
Setup Effort
Schema + frontmatter template update
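A lint check for decayed confidence falls straight out of those frontmatter fields. A sketch; the linear decay rate and the 0.6 flag threshold are assumptions for illustration, not part of the published pattern:

```python
from datetime import date

DECAY_PER_DAY = 0.002   # assumed linear decay rate
FLAG_BELOW = 0.6        # assumed re-verification threshold

def effective_confidence(confidence: float, last_confirmed: str,
                         today: str) -> float:
    """Decay a page's stored confidence by days since last confirmation."""
    days = (date.fromisoformat(today) - date.fromisoformat(last_confirmed)).days
    return max(0.0, confidence - DECAY_PER_DAY * days)

def needs_reverification(page: dict, today: str) -> bool:
    """Flag pages whose decayed confidence dropped below the threshold."""
    score = effective_confidence(page["confidence"],
                                 page["last_confirmed"], today)
    return score < FLAG_BELOW
```

New-source reinforcement would reset `last_confirmed` and bump `confidence`, so actively confirmed claims never trip the flag.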
02 Typed supersession — new info explicitly replaces old claims Schema · log.md

When new information contradicts an existing wiki claim, the wrong pattern is leaving the old claim with an appended note. The right pattern: the new claim explicitly supersedes the old one. The old version is preserved but marked stale with a timestamp and link to what replaced it — version control for knowledge, not just for files.

Define supersession in your schema: the LLM's ingest instructions should check for contradictions against existing pages before writing, and when found, issue a formal supersession record rather than a quiet edit.

log.md discipline: Karpathy's second navigation file — the append-only audit log — is the mechanism for this. Every supersession event gets a log entry with timestamp, old claim, new claim, and source. The log is immutable context you can audit.
Setup Effort
Schema + ingest prompt engineering
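A supersession entry in log.md might look like the following; the layout and every value shown are illustrative, not a format the gist prescribes:

```markdown
## 2026-04-12 · supersedes wiki/infra/caching.md
- old claim: "Default cache TTL is 300s"
- new claim: "Default cache TTL is 600s as of v2.3"
- source: raw/release-notes-v2.3.md
- action: old claim marked stale in place, linked here
```

Because the log is append-only, the full history of what replaced what stays auditable even after the wiki page itself has been rewritten several times.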
03 Typed entity system — prevent duplicate and conflicting concepts Schema · ELF / LLMWiki v2

Community implementation ELF (Eli's Lab Framework) uses a strict typed-entity system where every page is declared as a type (library, project, person, concept, decision) and every link between pages has a typed relationship (uses, depends-on, contradicts, caused, fixed, supersedes). This prevents the LLM from creating duplicate concept pages under different names.

A 5-step incremental ingest pass: diff → summarize → extract → write → image. The extract step enforces entity typing before the write step creates any new page — if a typed entity already exists, it merges rather than duplicates.

Typed entity systems add upfront schema design work. Start loose; only formalize types after you see which duplicates are actually causing problems.
Setup Effort
Significant schema design investment
★ Biggest Mitigation Challenge

Active Upkeep — The Real Failure Mode

Community analysis of 120+ comments on Karpathy's gist converged on a clear finding: most people who try this pattern get the folder structure right and still end up with a wiki that slowly becomes unreliable, redundant, or abandoned. The difference between a wiki that compounds and one that quietly rots comes down to operational discipline — not technical setup.

Daily
Feed the Machine
  • Drop new sources into raw/ via Obsidian Web Clipper
  • Ingest anything queued in _raw/ staging dir
  • Log questions answered by the wiki (reinforces confidence)
Weekly
Lint Pass
  • Run health check — orphan pages, broken wikilinks
  • Flag contradictions for review
  • Identify concepts referenced but not yet given own page
  • Review low-confidence / decayed pages
Monthly
Schema Evolution
  • Review CLAUDE.md / AGENTS.md for outdated rules
  • Promote stable lower-tier pages up to established tier
  • Run qmd re-index if collection has grown significantly
  • Purge truly stale pages per retention curve
As Needed
Circuit Breakers
  • Separate vault and agent working directories
  • Never let agent write directly to vault/verified/
  • Manual audit any page cited in high-stakes decisions
  • Keep raw/ as ground truth — always traceable back
🔄

Upkeep Automation — Making It Stick

Critical
01 Separate vault from agent working directory — hard partition Directory structure

The instinct is to have the agent write directly into the wiki. This creates the rot. The principle: your curated/verified vault and the agent's working vault (speculative writes, messy drafts, exploratory connections still being tested) must be physically separate directories. Only the human promotes content from agent-working to vault.

Structure: wiki/verified/ (human-promoted, high trust) vs wiki/staging/ (agent writes here first). The lint pass reviews staging and proposes promotions. You approve them. The signal-to-noise ratio in your verified wiki stays high permanently.

Why this works: You're not adding friction to the agent — you're protecting the valuable layer. The agent still does all the work. You just gate what graduates to trusted status.
Setup Effort
Directory rename + schema update
02 Automate the ingest trigger — don't rely on memory to feed it Cron · Webhooks · Claude Code

The number one reason wikis rot: the human stops ingesting because life gets busy. The fix is removing the human from the trigger loop. Set up a cron job or a filesystem watcher on raw/ that automatically triggers the ingest command whenever a new file lands. The human's job shrinks to: drop file, walk away.

Implementations: inotifywait on Linux, fswatch on macOS, or a Node.js chokidar watcher. On drop, the watcher calls your ingest script which runs the LLM compilation pass. You get a notification when it completes.

For your stack: This maps cleanly to your existing automation patterns — a simple Node-RED flow watching a directory, triggering a webhook to Claude Code, and notifying via Slack/Telegram through OpenClaw when ingest completes.
Setup Effort
2–4 hours watcher + webhook
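The watcher can also be a plain polling scan with a ledger of already-ingested files, which is easy to run from cron and easy to test. A sketch; the ledger location and the ingest hand-off are placeholders for whatever your pipeline calls:

```python
from pathlib import Path

def scan_for_new(raw_dir: str, ledger_path: str) -> list[str]:
    """Return files in raw/ not yet recorded in the ledger, then record them.
    The caller hands each returned path to the ingest script."""
    ledger = Path(ledger_path)
    seen = set(ledger.read_text().splitlines()) if ledger.exists() else set()
    new = [str(p) for p in sorted(Path(raw_dir).iterdir())
           if p.is_file() and str(p) not in seen]
    if new:
        with ledger.open("a") as f:          # append keeps ledger immutable-ish
            f.writelines(path + "\n" for path in new)
    return new
```

Keep the ledger outside raw/ so the scan never ingests its own bookkeeping file.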
03 Schedule the weekly lint as a non-negotiable calendar block Cron · Scheduler

Lint passes don't happen if you have to remember to run them. The solution is automating them on a schedule — a weekly cron job that runs the lint command, writes a report to a lint-reports/ directory, and sends you a summary notification. The report tells you: N orphan pages found, N contradictions flagged, N pages with decayed confidence.

You review the report (5 minutes), decide which flagged items to address, and optionally run the LLM to resolve them. The system is telling you what needs attention rather than you having to inspect everything.

What community data shows: People who automate the lint schedule have wikis that stay healthy at 6 months. People who rely on manual "I'll remember to lint" have wikis that are abandoned or unreliable at 6 weeks.
Setup Effort
Cron setup + notification routing
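The schedule itself is one crontab line; the paths and script name below are placeholders for your own lint wrapper:

```shell
# Run the weekly lint every Monday at 08:00 and append the report log.
0 8 * * 1  $HOME/wiki/lint.sh >> $HOME/wiki/lint-reports/weekly.log 2>&1
```

The notification step lives inside lint.sh, so the cron entry never needs to change as your routing does.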
04 Identity-aware filter — the schema knows who the wiki is for Schema · CLAUDE.md

A community-evolved enhancement to Karpathy's original: add an identity-aware filter to your schema. A prompt section that tells the LLM exactly who the wiki is for, what their goals are, and what "high-signal" means in that context. The LLM then scores sources before ingesting and rewrites that filter over time based on what has proven useful.

This prevents the wiki from becoming a neutral encyclopedia of everything you've read. It stays opinionated, relevant, and tuned to your actual work. Over months, the schema itself becomes a reflection of what you find worth knowing — a second-order artifact of the system.

Upkeep benefit: A well-tuned identity filter means the LLM rejects low-signal sources at ingest time rather than filling the wiki with noise you'll have to purge later. Garbage-in prevention beats garbage-out cleanup.
Setup Effort
10 min schema addition, self-evolving after
05 Retention curve — build in structured forgetting Frontmatter · Lint pass

Not everything should live forever. A wiki that never forgets becomes noisy — important signals buried under outdated context. Implement a retention curve: facts that were important once but haven't been accessed or reinforced in months gradually fade to "archived" status. The lint pass executes this curve automatically.

Frontmatter fields to add: last_accessed, access_count, status: active|fading|archived. The lint pass updates status based on time-since-access and reinforcement count. Archived pages aren't deleted — they move to wiki/archive/ where they're out of the active index but still traceable.

The payoff: Active upkeep gets easier over time as the wiki self-trims. After 6 months of running with a retention curve, your active wiki is denser and higher-signal than at month 1 — not bloated and harder to navigate.
Setup Effort
Frontmatter + lint script update
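The retention curve reduces to a small status function over those frontmatter fields. A sketch; the thresholds are assumptions to tune against your domain's pace of change:

```python
from datetime import date

FADING_AFTER_DAYS = 90      # assumed threshold
ARCHIVED_AFTER_DAYS = 180   # assumed threshold
REINFORCED_MIN = 5          # heavily accessed pages never fade

def retention_status(last_accessed: str, access_count: int,
                     today: str) -> str:
    """Map last_accessed / access_count to active | fading | archived."""
    idle = (date.fromisoformat(today) - date.fromisoformat(last_accessed)).days
    if access_count >= REINFORCED_MIN:
        return "active"
    if idle >= ARCHIVED_AFTER_DAYS:
        return "archived"
    if idle >= FADING_AFTER_DAYS:
        return "fading"
    return "active"
```

The lint pass writes the result back into the `status` field and moves `archived` pages to wiki/archive/, so the active index trims itself without deleting anything.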
⬡ Your Stack Extension — MemPalace + qmd + Conversation Pipeline

The wiki gains a living feed and a structural memory layer.

Standard Karpathy wiki is fed by sources you manually drop into raw/. Your setup replaces that bottleneck with an automated conversation pipeline: every AI session gets mined into MemPalace, summarized, and fed into raw/ on a continuous basis. The wiki stops being a project you maintain and becomes an organism that grows from your daily work. Combined with qmd replacing ChromaDB for indexing, you have a genuinely novel hybrid that addresses the core limitations differently than any single pattern alone.

Note: You are skipping MemPalace's ChromaDB storage layer and using qmd for indexing instead. The implications of that choice are documented throughout this tab.

96.6% MemPalace R@5 Raw Mode
+34% Retrieval via wing+room filtering
~170 Tokens on wake-up (L0+L1)
19 MCP Tools available
qmd Replaces ChromaDB indexing
Your Architecture — Data Flow
Layer 0 — Conversation Capture: Claude / AI sessions → MemPalace (mine --mode convos) → Wings / Rooms → Halls / Tunnels → Closets (summaries) → Drawers (verbatim)
Layer 1 — Wiki Compilation: conversation summaries → raw/ (staged) → LLM compiler → wiki/ (compiled pages) → qmd index
Layer 2 — Query: natural-language query → MemPalace wing+room filter + qmd BM25+vector → LLM reads wiki pages → grounded answer

MemPalace Concepts

🏛️
Wing
Person or Project
Top-level namespace — one per person you work with or project you run. Conversations and facts are scoped to their wing automatically via keyword detection on mining.
→ Maps to wiki domain sub-index (e.g. wiki/taskforge/)
🚪
Room
Topic / Concept
Specific subject within a wing — auth-migration, ci-pipeline, database-decisions. When the same room exists across wings, a tunnel auto-connects them. Provides the +34% retrieval boost via wing+room filtering.
→ Maps to wiki concept page (e.g. wiki/taskforge/auth.md)
🗂️
Closet
Summary Layer
Plain-text summaries that point the LLM to the right drawer. This is the layer you are feeding into raw/ — closet output becomes a high-quality, pre-structured input to the wiki compiler rather than raw transcript noise.
→ These summaries become your raw/ inputs
📦
Drawer
Verbatim Archive
The exact original words — never summarized, never lost. This is your ground truth for cross-checking. When confidence scoring flags a wiki claim as decayed, you trace it back to the drawer for verification. Eliminates the "no original source" problem.
→ Ground truth for cross-check / error persistence mitigation
🏃
Hall
Memory Type Corridor
Fixed corridors within every wing: hall_facts (decisions), hall_events (sessions/milestones), hall_discoveries (breakthroughs), hall_preferences (habits), hall_advice (recommendations). Memory typed at ingest time — no post-hoc categorization needed.
→ Maps to wiki page type in CLAUDE.md schema
🚇
Tunnel
Cross-Wing Connection
Automatic links when the same room topic appears across different wings. "Auth-migration" in wing_kai and wing_taskforge creates a tunnel — the palace navigation finds cross-project connections that explicit wikilinks alone would miss.
→ Enriches wiki cross-references beyond manual [[wikilinks]]

Impact on Known Limitations

Largely Solved

Active Upkeep — The #1 Failure Mode

Conversation mining + auto-save hooks make the feed automatic. You no longer have to remember to drop files into raw/. Every Claude Code session is mined. The PreCompact hook fires before context compression. The Stop hook fires every 15 messages.

Before: Humans forget to ingest → wiki rots at 6 weeks
After: Hooks auto-mine every session → continuous feed
Largely Solved

Error Persistence / Cross-Check

Drawers preserve verbatim originals permanently. When a wiki claim is flagged as low-confidence, you have an exact traceable source to verify against — not just "raw/source-2026-04.md" but a wing-scoped, room-tagged original with a drawer ID.

Before: Errors persist silently, no clear original to check
After: Drawers = verbatim ground truth, always traceable
Significantly Reduced

Scale Ceiling

MemPalace's wing+room metadata filtering means qmd doesn't have to search the entire corpus — it searches a pre-narrowed wing/room scope first. This extends the effective scale ceiling because retrieval is structurally guided before the BM25+vector pass fires.

Before: qmd searches entire wiki — token ceiling still binding
After: Wing+room filter → qmd works on relevant subset
Character Shifted

Knowledge Staleness

Conversations are the primary source — they're inherently current. Every session you have becomes a potential ingest. Staleness now depends on how actively you use AI tools (which you do constantly), not on whether you remember to read and clip articles.

Before: Staleness from manual source curation gaps
After: Staleness from conversation coverage gaps (much smaller)
Reduced

Semantic Retrieval Gap vs RAG

The combination of MemPalace structural navigation (wing → room → closet → drawer) plus qmd's BM25+vector search covers both explicit structural navigation and fuzzy semantic matching. You have the best of both retrieval patterns without a full vector database.

Before: Explicit wikilinks only — misses differently-worded concepts
After: Structural nav + qmd semantic fills the gap
New Consideration

Conversation Noise in raw/

Not every conversation deserves to enter the wiki. Debugging rabbit holes, exploratory dead-ends, and casual exchanges are valuable in MemPalace's verbatim drawers but would pollute the wiki if compiled directly. The summarization/filtering step before raw/ is now load-bearing.

Old Risk: No raw/ source, hard to feed continuously
New Risk: Too much raw/ — summarization quality is critical

qmd vs ChromaDB — Your Trade-off

⚠ Honest Assessment of the Trade-off
MemPalace's benchmark-leading 96.6% R@5 score comes specifically from raw verbatim storage in ChromaDB. By replacing ChromaDB with qmd, you are choosing a different design point: simpler local infrastructure and tighter wiki integration over maximum semantic recall on conversation search. This is a defensible choice for your use case — but it's worth knowing what you're trading.
| Dimension | qmd (your choice) | ChromaDB (MemPalace default) |
| --- | --- | --- |
| Storage format | ✦ Markdown files (same as wiki) | Proprietary vector DB |
| Semantic recall (LongMemEval) | Not benchmarked on this task | ✦ 96.6% R@5 raw mode |
| Wiki integration | ✦ Native — indexes wiki/ directly | Separate store, no wiki awareness |
| Single index to maintain | ✦ Yes — one qmd collection | No — wiki + ChromaDB separate |
| MCP exposure | ✦ qmd mcp — native tool for Claude | Via MemPalace MCP server |
| Hybrid search (BM25 + vector) | ✦ Built in — qmd query | ChromaDB semantic only |
| Dependencies | ✦ npm only, local GGUF model | Python, chromadb, potential version pin issues |
| Verbatim drawer retrieval | Not designed for this | ✦ Core feature — drawers are ChromaDB entries |
| Architectural simplicity | ✦ One search layer for everything | Two parallel search systems |
The key practical point: MemPalace's structural navigation (wing+room filtering) still provides the +34% retrieval boost regardless of what sits behind it. You retain the palace architecture's biggest advantage. The ChromaDB vs qmd choice only affects the semantic search layer, not the structural navigation layer. — Analysis based on MemPalace architecture documentation, April 2026

Updated Mitigation Status

| Limitation | Before MemPalace | With MemPalace + qmd | Residual Work |
| --- | --- | --- | --- |
| Active Upkeep | Manual — wikis rot | ✦ Auto-hooks feed continuously | Summarization quality tuning |
| Error Persistence | No traceable ground truth | ✦ Drawers = verbatim source | Confidence scoring in schema |
| Scale Ceiling | ~50–100K token hard limit | Extended by wing+room filtering | qmd still needed at 200+ articles |
| Semantic Retrieval Gap | Explicit links only | ✦ Structure + qmd BM25+vector | Some ChromaDB recall lost (see above) |
| Knowledge Staleness | Depends on manual curation | ✦ Continuous from session mining | Retention curve still needed |
| Cross-check | Raw docs only, imprecise | ✦ Drawer-level verbatim traceability | fact_checker.py not yet wired (v3) |
| Access Control | Flat file, none | Still needs MCP wrapper layer | Tailscale boundary is your fastest path |
| Cognitive Outsourcing | Valid concern | Unchanged — wiki is still reference only | Design intent: reference, not replacement |

New Risks Introduced

! Summarization quality is now load-bearing · Critical Path

In the original pattern, you curated sources manually — only deliberate, quality inputs entered raw/. With conversation mining, the filter is your summarization scripts. If those scripts surface debugging dead-ends, exploratory rabbit holes, or noise, it enters the wiki compilation pipeline. Garbage-in still applies — it's just at a different point in the flow.

Mitigation: Tune your conversation scripts to filter by memory type (hall_facts and hall_discoveries are high-signal; hall_events is medium; raw session transcripts are low). Only promote closet summaries tagged as decisions, discoveries, or recommendations. Use MemPalace's --extract general mode to auto-classify before staging.

Practical rule: Only closets from hall_facts and hall_discoveries should auto-promote to raw/. Other halls should require a manual review step before staging.
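The promotion rule above can be sketched as a small filter. This is a minimal sketch: the `hall` and `tags` field names on the closet record are assumptions for illustration, not the actual MemPalace summary schema.

```python
# High-signal halls allowed to auto-promote to raw/ (per the rule above).
HIGH_SIGNAL_HALLS = {"hall_facts", "hall_discoveries"}

# Closet tags worth compiling: decisions, discoveries, recommendations.
PROMOTABLE_TAGS = {"decision", "discovery", "recommendation"}

def should_auto_promote(closet: dict) -> bool:
    """Auto-promote only high-signal closets; everything else goes
    through a manual review step before staging."""
    return (closet.get("hall") in HIGH_SIGNAL_HALLS
            and bool(PROMOTABLE_TAGS & set(closet.get("tags", []))))
```

Closets from hall_events or hall_preferences never pass this gate, matching the manual-review requirement above.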
! MemPalace fact_checker.py is not yet wired into KG ops (v3.0.0) · Known Gap · Issue #27

MemPalace's contradiction detection (fact_checker.py) exists as a standalone utility but is not currently called automatically during knowledge graph operations — the authors acknowledged this in their April 7 correction note. This means cross-wing contradictions won't be auto-flagged at ingest time yet.

Mitigation: Call fact_checker.py manually as part of your lint pass script until Issue #27 is resolved. Wire it as a pre-commit hook on wiki/ changes: any new page goes through fact_checker before being promoted from staging to verified.

Track Issue #27 on the MemPalace repo. This is being actively fixed. Once wired, contradiction detection becomes a native part of your ingest pipeline — a major upgrade to the cross-check mitigation.
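Until the wiring lands, the gate can be approximated with a small wrapper. The `fact_checker.py` invocation below is an assumption about its CLI (adjust to the real interface); the injectable `check` parameter exists so the gate logic itself is testable without the script.

```python
import subprocess
from pathlib import Path

def lint_gate(staged_pages, check=None):
    """Return only the staged wiki pages that pass contradiction
    checks; pages that fail stay in staging instead of promoting."""
    if check is None:
        def check(page: Path) -> bool:
            # Hypothetical invocation of MemPalace's fact_checker.py;
            # the real flags and exit codes may differ.
            result = subprocess.run(
                ["python", "fact_checker.py", str(page)],
                capture_output=True,
            )
            return result.returncode == 0
    return [p for p in staged_pages if check(p)]
```

Run this as the last step of the lint pass, or wire it as the pre-commit hook on wiki/ changes described above.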
~ Two memory systems need schema alignment · Operational Risk

MemPalace's taxonomy (wings, rooms, halls) and the wiki's taxonomy (domains, concept pages, page types in CLAUDE.md) are separate schemas. If they drift — MemPalace calls something "wing_taskforge/hall_facts/auth" while the wiki calls it "infrastructure/auth-decisions" — the structural navigation loses coherence. Tunnels and wikilinks stop reinforcing each other.

Mitigation: Define a canonical mapping document (a simple markdown table) that maps MemPalace wing/room names to wiki domain/page paths. Reference it in both CLAUDE.md and your MemPalace wing_config.json. Review quarterly — schemas co-evolve, but they need to co-evolve together.

Your advantage: You already have a discipline around CLAUDE.md management. Add a "Palace Map" section to your global CLAUDE.md that specifies the canonical wing→wiki-domain mapping. The LLM consults it on every ingest.
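The mapping document can also live as code next to the markdown table. A minimal sketch: the single entry comes from the example in the text, and the fail-loudly lookup behavior is a design suggestion, not a MemPalace API.

```python
# Canonical Palace Map: MemPalace wing/hall paths -> wiki domain paths.
# Only the first entry appears in the text; extend as schemas co-evolve.
PALACE_MAP = {
    "wing_taskforge/hall_facts/auth": "infrastructure/auth-decisions",
}

def wiki_domain(palace_path: str) -> str:
    """Resolve a wing/hall path to its wiki domain. Unmapped paths
    fail loudly so schema drift is caught at ingest time."""
    try:
        return PALACE_MAP[palace_path]
    except KeyError:
        raise KeyError(
            f"No Palace Map entry for {palace_path!r}; "
            "update the Palace Map section before ingesting."
        )
```

Raising on unmapped paths is the point: silent fallbacks are how tunnels and wikilinks quietly stop reinforcing each other.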
⬣ The 8th Extension — Closing the MemPalace Loop

Closet summaries become the source for the wiki itself.

The first seven extensions came out of the Signal & Noise review. The eighth surfaced only after the other layers were built — and it's the one that makes the MemPalace integration a real pipeline into the wiki instead of just a searchable archive beside it. The mining layer was extracting sessions, classifying bullets into halls, tagging topics, and making everything searchable via qmd. But the knowledge inside the conversations was never being compiled into wiki pages. A decision made in a session, a root cause found during debugging, a pattern spotted in review — these stayed in the conversation summaries forever, findable but not synthesized.

This is what the wiki-distill.py script solves. It's Phase 1a of wiki-maintain.sh and runs before URL harvesting because conversation content should drive the page, not the URLs the conversation cites.

- Phase 1a: Runs before harvest
- today: Narrow filter — today's topics
- ∀ history: Rollup all past conversations on each topic
- 3 halls: fact + discovery + advice
- haiku/sonnet: Auto-routed by topic size
Distill Flow — Conversation Content → Wiki Pages

1. Narrow (what topics to process today): today's conversations → extract topics[] = the topics-of-today set.
2. Wide (pull full history for each today-topic): each today-topic → rollup ALL historical convs → extract fact / discovery / advice → claude -p with the distill prompt.
3. Compile (model decides new / update / skip): JSON actions[] → new_page + update_page (modifies existing) → staging/<type>/, pending review.

Why This Completes MemPalace

- 📦 Drawer — before · Verbatim Archive. Full transcripts stored, searchable via qmd. No compilation — if you wanted canonical knowledge from them, you had to write it up manually. Status: already working.
- 🗂️ Closet — before · Summary Layer. Summaries with hall classification (fact / discovery / preference / advice / event / tooling) and topics. Searchable. Terminal: never fed forward into the wiki compiler. Status: terminal data, not flowing.
- Distill — NEW · Compiler Bridge. Reads closet content by topic, rolls up all matching conversations across history, filters to high-signal halls only, sends to claude -p with the current wiki index, emits new or updated wiki pages to staging. Status: wiki-distill.py.
- 📄 Wiki Pages — NEW · Distilled Knowledge. Pages in staging/<type>/ with full distill provenance: distill_topic, distill_source_conversations, compilation_notes. Promote via staging review. Session knowledge becomes canonical knowledge. Status: origin=automated, staged_by=wiki-distill.

Which Halls Get Distilled

| Hall | Distilled? | Why |
| --- | --- | --- |
| hall_facts | ✦ YES | Decisions locked in, choices made, specs agreed. Canonical knowledge. |
| hall_discoveries | ✦ YES | Root causes, breakthroughs, non-obvious findings. The highest-signal content in any session. |
| hall_advice | ✦ YES | Recommendations, lessons learned, "next time do X." Worth capturing as patterns. |
| hall_events | no | Deployments, incidents, milestones. Temporal data — belongs in logs, not the wiki. |
| hall_preferences | no | User working-style notes. Belong in personal configs, not the shared wiki. |
| hall_tooling | no | Script/command usage, failures, improvements. Usually low-signal or duplicates what's already in the wiki. |

The Narrow-Today / Wide-History Filter

Processing scope stays narrow; LLM context stays wide. This is the key property that makes distill cheap enough to run daily and smart enough to produce good pages.
01 Daily filter: only process topics appearing in TODAY's conversations · Scope

Each daily run only looks at conversations dated today. It extracts the topics: frontmatter from each — that union becomes the "topics of today" set. If you didn't discuss a topic today, it's not in the processing scope. This keeps the cron job cheap and predictable: if today was a light session day, distill runs fast. If today was a heavy architecture discussion, distill does real work.

First run only: The very first run uses a 7-day lookback instead of today-only so the state file gets seeded. After that first bootstrap, daily runs stay narrow.
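The narrowing step might look like this sketch. It assumes the dated-filename scheme visible in the staging example (YYYY-MM-DD-<id>.md) and a bracketed `topics:` frontmatter line; the real wiki-distill.py may parse its frontmatter differently.

```python
import datetime
import re
from pathlib import Path

def topics_of_today(conv_dir: Path, today: datetime.date) -> set[str]:
    """Union of topics across conversations dated today: the
    'topics of today' processing scope for one distill run."""
    topics: set[str] = set()
    # Filenames are assumed to start with the conversation date.
    for path in conv_dir.rglob(f"{today.isoformat()}-*.md"):
        match = re.search(r"^topics:\s*\[(.*?)\]",
                          path.read_text(), re.MULTILINE)
        if match:
            topics |= {t.strip() for t in match.group(1).split(",")
                       if t.strip()}
    return topics
```

On a light session day this set is small and the run is cheap; on a heavy day it grows, exactly as described above.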
02 Historical rollup: for each today-topic, pull ALL matching conversations · Context

Once the today-topic set is known, for each topic the script walks the entire conversation archive and pulls every summarized conversation that shares that topic. A discussion about blue-green-deploy today might roll up 16 conversations across the last 6 months. The claude -p call sees the full history, not just today's fragment.

This is what makes the distilled pages good. The LLM isn't guessing what a pattern looks like from one session — it's synthesizing across everything you've ever discussed on the topic.
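The widening step is the mirror image: a sketch under the same assumed bracketed `topics:` frontmatter format, walking the entire archive for one topic (the real script's parsing may differ).

```python
import re
from pathlib import Path

def conversations_for_topic(conv_dir: Path, topic: str) -> list[Path]:
    """Wide-history rollup: every summarized conversation in the
    archive that shares `topic`, regardless of date."""
    hits = []
    for path in sorted(conv_dir.rglob("*.md")):
        match = re.search(r"^topics:\s*\[(.*?)\]",
                          path.read_text(), re.MULTILINE)
        if match and topic in {t.strip() for t in match.group(1).split(",")}:
            hits.append(path)
    return hits
```

The returned list is what gets packed into the claude -p context, so the model synthesizes across the full history rather than today's fragment.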

03 Self-triggering: dormant topics wake up when they resurface · Emergent

The narrow-today/wide-history combination produces a useful emergent property: dormant topics wake up automatically. If you discussed database-migrations three months ago and it never came up again, it's not in the daily scope. But the day you mention it again in any new conversation, that topic enters today's set — and the rollup pulls in all three months of historical discussion. The wiki page gets updated with fresh synthesis across the full history without you having to manually trigger reprocessing.

What this means in practice: Old knowledge gets distilled when it becomes relevant again. You don't need to remember to ask "hey, is there a wiki page for X?" — the next time X comes up in a session, distill will check the wiki state and either create or update the page for you.
04 State tracking by content hash + topic set · .distill-state.json

A conversation is considered "already distilled" only if its body hash AND its topic set match what was seen at the last distill. If the body changes (summarizer re-ran and updated the bullets) OR a new topic is added, the conversation gets re-processed on the next run. Topics get tracked so rejected ones don't get reprocessed forever — if the LLM says "this topic doesn't deserve a wiki page" once, it stays rejected until something meaningful changes.
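As a sketch, the re-processing decision reduces to a hash-plus-topic-set comparison against the state file. Field names here are illustrative, not the actual .distill-state.json schema.

```python
import hashlib

def needs_distill(conv_id: str, body: str, topics: set[str],
                  state: dict) -> bool:
    """Re-process a conversation iff its body hash OR its topic set
    changed since the last distill run."""
    entry = state.get(conv_id)
    digest = hashlib.sha256(body.encode()).hexdigest()
    return (entry is None
            or entry["hash"] != digest
            or set(entry["topics"]) != topics)

def mark_distilled(conv_id: str, body: str, topics: set[str],
                   state: dict) -> None:
    """Record what was seen so unchanged conversations are skipped."""
    state[conv_id] = {
        "hash": hashlib.sha256(body.encode()).hexdigest(),
        "topics": sorted(topics),
    }
```

A re-run of the summarizer (new body hash) or a newly added topic both trigger reprocessing; everything else is skipped.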

05 Distill runs BEFORE harvest — conversation content has priority · Phase 1a

The orchestrator runs distill as Phase 1a and harvest as Phase 1b. This ordering is deliberate: if a topic is being actively discussed in your sessions, you want the wiki page to reflect your synthesis of what you've learned, not just the external URLs cited in passing. URL harvesting then fills in gaps — it picks up the docs pages, blog posts, and references that your sessions didn't already cover.

Both phases can produce staging pages. If distill creates patterns/docker-hardening.md and harvest creates patterns/docker-hardening.md, the staging-unique-path helper appends a short hash suffix so they don't collide. The reviewer sees both in staging and picks the better one (usually distill, since it has historical context).
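A sketch of that collision helper: the short-hash-suffix scheme follows the description above, but the exact suffix derivation in the real staging-unique-path helper is an assumption.

```python
import hashlib
from pathlib import Path

def staging_unique_path(staging: Path, rel: str, source: str) -> Path:
    """Return rel under staging/. If the path already exists (e.g.
    distill and harvest both produced it), append a short hash
    derived from the producing phase so neither overwrites the other."""
    candidate = staging / rel
    if not candidate.exists():
        return candidate
    suffix = hashlib.sha1(source.encode()).hexdigest()[:8]
    return candidate.with_name(
        f"{candidate.stem}-{suffix}{candidate.suffix}")
```

Both versions then sit side by side in staging for the reviewer to compare.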

Distill Staging Provenance

Every distilled page lands in staging with full provenance in its frontmatter. When you review a page in staging, you can see exactly which conversations it came from and jump directly to those transcripts.

Example: staging/patterns/zoho-crm-integration.md frontmatter
---
origin: automated
status: pending
staged_date: 2026-04-12
staged_by: wiki-distill
target_path: patterns/zoho-crm-integration.md
distill_topic: zoho-api
distill_source_conversations: conversations/general/2026-04-06-73d15650.md,conversations/mc/2026-03-30-64089d1d.md
compilation_notes: Two separate incidents discovered the same Zoho CRM v2 API limitations, documenting them as a pattern page prevents re-investigation and provides a canonical reference for future Zoho integrations.
title: Zoho CRM Integration
type: pattern
confidence: high
sources: [conversations/general/2026-04-06-73d15650.md, conversations/mc/2026-03-30-64089d1d.md]
related: [database-migrations.md, activity-event-auditing.md]
last_compiled: 2026-04-12
last_verified: 2026-04-12
---
Without distillation, MemPalace was a searchable archive sitting beside the wiki. With distillation, it's a real ingest pipeline — closet content becomes the source material for the wiki proper, completing the eight-extension story. — memex design rationale, April 2026