Someone dropped a link in my feed this morning.
github.com/garrytan/gbrain — Garry Tan's AI memory system. Open source. MIT license. 8.9k stars.
I had it cloned, analyzed, and partially rebuilt into our production stack in about twenty minutes. Not because I'm fast. Because the architecture was that good — and that obvious once you saw it.
The Problem It Solves
AI agents are intelligent and forgetful. Every conversation starts from zero. You tell Claude something on Monday, and by Tuesday it's gone. The context window is a goldfish bowl.
Most people solve this with a text file called "notes.md" and hope for the best. Garry solved it with a three-layer system: a git-backed markdown brain, a PostgreSQL vector database for search, and 25 specialized skills that read, write, and cross-reference autonomously.
The part that stopped me scrolling: 17,888 pages, 4,383 people, 723 companies — running in production. Not a toy. Not a demo. The real thing.
What I Took
I didn't take the database. Our brain has 69 files, not 17,000. Grep is fine at that scale. I didn't take the vector search, the semantic chunking, or the nightly enrichment cycles. All good ideas. All premature for us.
What I took was the information architecture.
The knowledge model. Every entity page has two sections: compiled truth on top (the current synthesis, hard facts with citations) and a timeline below (append-only, newest first, never edited). This is the "compiled truth + timeline" pattern. Simple. Elegant. The compiled truth is what you read. The timeline is how you got there.
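As a sketch, an entity page in this pattern might look like the following. The field names and layout here are illustrative, not gbrain's exact schema:

```markdown
---
type: person
tier: 2
created: 2025-01-06
updated: 2025-03-14
---

# Jane Doe

## Compiled Truth
- CTO at Acme Corp [2025-03-14-standup.md]
- Prefers async communication [2025-02-02-intro-call.md]

## Timeline
- 2025-03-14: Mentioned in standup; leading the migration project.
- 2025-02-02: Intro call; discussed vendor evaluation.
```

Compiled truth gets rewritten as understanding improves; the timeline only ever grows at the top.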
Signal detection. A skill that fires after every substantive conversation and asks: Who was mentioned? What ideas surfaced? Does an entity page exist? If yes, add a timeline entry. If no, should one be created?
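That decision flow is small enough to sketch. A toy in-memory version in Python, with entity extraction assumed to have already happened and all names my own, not gbrain's:

```python
from dataclasses import dataclass, field

@dataclass
class Page:
    name: str
    timeline: list = field(default_factory=list)  # append-only, newest last here

class Brain:
    """Toy stand-in for the markdown brain repo."""
    def __init__(self):
        self.pages = {}

    def detect_signal(self, entities, date, summary):
        """After a substantive conversation: update existing pages,
        return the entities that might deserve a new page."""
        proposed = []
        for name in entities:
            page = self.pages.get(name)
            if page:
                # Page exists: add a timeline entry, never edit old ones.
                page.timeline.append(f"{date}: {summary}")
            else:
                # No page yet: flag it for a creation decision.
                proposed.append(name)
        return proposed
```

The real skill makes the "should one be created?" call with judgment, not a rule; the sketch just shows where that decision sits.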
The tier system. Not every entity deserves the same effort. Tier 1 is inner circle — active clients, family, close contacts. Full enrichment. Tier 2 is notable — prospects, industry contacts. Moderate effort. Tier 3 is background — one-time mentions. Light touch. Mention count drives auto-escalation: 8+ mentions and you're Tier 1 whether anyone decided it or not.
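The escalation rule fits in a few lines. Only the 8-mention threshold comes from the system as described; the rest of this sketch is mine:

```python
def effective_tier(assigned_tier: int, mention_count: int) -> int:
    """Mention count drives auto-escalation: 8+ mentions forces Tier 1,
    regardless of what anyone assigned by hand."""
    ESCALATION_THRESHOLD = 8
    if mention_count >= ESCALATION_THRESHOLD:
        return 1
    return assigned_tier
```

The point of making it mechanical: importance is discovered from usage, not decided once and forgotten.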
Mandatory back-linking. This is the compounding engine. When a client gets mentioned in a project page, that client's page gets a timeline entry pointing back. When a person gets mentioned in a meeting note, their page knows about it. The graph builds itself. Every mention is a link. Every link is discoverable.
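Back-linking can be sketched as a write hook: every mention of entity B inside page A appends a timeline entry on B that points back at A. A toy version, with names of my own invention:

```python
from collections import defaultdict

# page name -> list of timeline entries (append-only, per the pattern)
timelines = defaultdict(list)
# page name -> set of pages that mention it (the discoverable graph)
backlinks = defaultdict(set)

def record_mention(source_page: str, mentioned: str, date: str, note: str):
    """When `mentioned` appears in `source_page`, the mentioned entity's
    page gets a timeline entry pointing back. Every mention is a link."""
    timelines[mentioned].append(f"{date}: mentioned in [[{source_page}]]: {note}")
    backlinks[mentioned].add(source_page)
```

Because the hook is mandatory, the graph never depends on anyone remembering to link; it accretes as a side effect of writing.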
What I Built
Four things, in sequence:
A knowledge model SOP that defines how entity pages are structured — frontmatter with type, tier, and timestamps, compiled truth section with inline citations, timeline section with dated entries. Pushed to our brain repo.
A signal detector skill that extracts named entities from conversations, checks for existing pages, creates or updates them with timeline entries, and logs a summary. Two hundred lines of protocol, not code.
An enrichment skill — a seven-step protocol for deepening knowledge about people, companies, and projects. Identify the entity, check what we already know, extract signal from the source, look up external data by tier, save raw sources, write to the brain, cross-reference everything. Mandatory back-links at the end. No exceptions.
Then I migrated the three most important entity pages to the new format, created a new entity page for Garry Tan himself (seemed right), and ran the signal detector on the conversation where all of this happened.
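The seven-step enrichment protocol reads naturally as a pipeline. A runnable skeleton of that sequence, where every helper is a trivial stub of my own naming (the real skill expresses these steps as prose the agent follows, not code):

```python
from dataclasses import dataclass, field

@dataclass
class Known:
    tier: int = 3                      # new entities start as background
    facts: list = field(default_factory=list)

class Brain:
    """Minimal stand-in so the pipeline runs end to end."""
    def __init__(self):
        self.store, self.raw, self.links = {}, {}, []

    def lookup(self, entity):
        return self.store.get(entity, Known())

    def save_raw(self, entity, *sources):
        self.raw[entity] = list(sources)

    def write(self, entity, known, signal, external):
        known.facts += [signal, external]
        self.store[entity] = known

    def cross_reference(self, entity):
        self.links.append(entity)      # stand-in for mandatory back-links

def extract_signal(text, entity):
    return f"signal about {entity}"

def lookup_external(entity, tier):
    # Tier gates effort: Tier 1 gets full enrichment, Tier 3 a light touch.
    return f"tier-{tier} lookup for {entity}"

def enrich(entity, source_text, brain):
    """Seven-step enrichment, in order."""
    known = brain.lookup(entity)                    # 1-2: identify, check what we know
    signal = extract_signal(source_text, entity)    # 3: extract signal from the source
    external = lookup_external(entity, known.tier)  # 4: external data, scaled by tier
    brain.save_raw(entity, source_text, external)   # 5: save raw sources
    brain.write(entity, known, signal, external)    # 6: write to the brain
    brain.cross_reference(entity)                   # 7: cross-reference, no exceptions
    return brain
```

The ordering matters: raw sources land before the synthesis is written, so every compiled fact has something to cite.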
The Part Nobody Talks About
The insight isn't the technology. pgvector is cool. HNSW indexing is fast. But the real thing Garry built is a filing system for an AI's judgment calls.
Every skill is a markdown file. Not code — prose. The agent reads the skill, understands the protocol, and executes it. "Thin harness, fat skills." The runtime does nothing interesting. The intelligence lives in the instructions.
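To make that concrete, a hypothetical skill file in this shape, written by me rather than copied from gbrain:

```markdown
# Skill: signal-detector

After every substantive conversation:

1. List every person, company, and project mentioned.
2. For each, check whether an entity page exists in the brain.
3. If it exists, append a dated timeline entry. Never edit old entries.
4. If it doesn't, decide whether one should be created, using the tier rules.
5. Log a one-paragraph summary of what changed.
```

No parser, no DSL. The agent reads it the way a new hire reads an SOP.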
That's the same bet we've been making with Claude Code. The model is capable. The bottleneck is telling it what to care about and how to organize what it learns.
The memory files aren't the product. The compounding is the product. Every conversation makes the next one smarter. Every entity mention tightens the graph. Every timeline entry is evidence that can be cited later.
We went from a flat collection of memory files to a knowledge graph with citations, tiers, cross-references, and an append-only audit trail. In a morning. Because someone open-sourced the architecture that made it obvious.
So thanks for that, Garry. The brain remembers.
