Garry Tan System Map

Competitive Map

Positioning vs coding agents, memory systems, and eval repos.

gstack = method | gbrain = continuity | gbrain-evals = proof | YC = network


Category View

Garry's system spans three categories:

  1. Coding-agent workflow layer: gstack.
  2. Persistent agent memory layer: gbrain.
  3. Evaluation/proof layer: gbrain-evals.

The system competes less with one product than with the default way developers currently use AI: ad hoc prompts, editor autocomplete, scattered notes, and manual QA.

gstack Competitive Position

| Alternative | What They Provide | gstack Difference |
| --- | --- | --- |
| Claude Code | Strong agentic coding environment | gstack adds reusable roles, gates, browser workflows, and sprint process |
| Cursor/Copilot | In-editor coding assistance | gstack is workflow/process-first, not autocomplete-first |
| OpenCode/Codex CLI | Agent execution in terminal | gstack provides an opinionated operating system for when and how to use the agent |
| Karpathy-style CLAUDE.md rules | General coding rules and preferences | gstack turns rules into slash-command workflows with outputs and gates |
| Conductor-style parallel sessions | Multi-session execution | gstack gives each parallel sprint a lifecycle, reviews, and stop conditions |

gstack Wedge

"AI coding is not safe because the model is smart. It is safe when the work process is encoded."

Strengths:

Risks:

gbrain Competitive Position

| Alternative | What They Provide | gbrain Difference |
| --- | --- | --- |
| Obsidian/Notion | Human-readable knowledge bases | gbrain is agent-operational with MCP, embeddings, source scoping, and skills |
| Mem0/Supermemory | API-first memory | gbrain is local, source-aware, code-aware, and open-source |
| MemPalace/Hindsight/Mastra/Stella | Memory/retrieval systems and benchmarks | gbrain ties retrieval to an operational personal/work brain |
| Basic RAG over docs | Semantic search over files | gbrain adds pages, sources, graph/code edges, citations, maintenance, and workflows |
| Grep/read in codebases | Fast local search | gbrain adds symbol-aware retrieval and cross-session memory |

gbrain Wedge

"Your agent should remember the way an operator remembers: source-aware, cited, queryable, writable, and maintained."

Strengths:

Risks:

gbrain-evals Competitive Position

| Alternative | What They Provide | gbrain-evals Difference |
| --- | --- | --- |
| LongMemEval | Public long-memory QA benchmark | gbrain-evals publishes runnable adapters and reports |
| LoCoMo/ConvoMem | Conversational memory tasks | gbrain-evals frames them as roadmap benchmarks |
| Product claims | Marketing assertions | gbrain-evals gives reproducible corpora, reports, and harnesses |

gbrain-evals Wedge

"Memory quality should be measured, not asserted."

Strengths:

Risks:

Strategic Takeaway

The system's real moat is not any individual skill, CLI, or table. It is the loop:

Workflow creates artifacts -> artifacts enter memory -> memory improves future workflow -> evals prove the memory works -> public proof attracts builders -> builders contribute back.

That loop is harder to copy than a prompt pack.

Deep Category Positioning

| Layer | Garry System | Competes Against | Wedge |
| --- | --- | --- | --- |
| Workflow/process | gstack | Claude Code alone, Cursor, Copilot, Codex CLI, OpenCode, Karpathy rules, Conductor | Encoded sprint process with roles, gates, QA, browser, security, and ship discipline. |
| Memory/runtime | gbrain | Obsidian, Notion, grep, basic RAG, Mem0, Supermemory, MemPalace, Hindsight, Mastra, Stella | Local, source-aware, agent-operational brain with MCP, skills, graph, citations, and code retrieval. |
| Proof/evals | gbrain-evals | LongMemEval, LoCoMo, ConvoMem, product marketing claims | Runnable public eval harness, benchmark reports, corpora, and caveats. |

gstack Versus Coding Agents

| Competitor | What They Own | gstack Counterposition |
| --- | --- | --- |
| Claude Code | Strong coding agent runtime | gstack turns Claude into a role-based product/engineering org. |
| Cursor/Copilot | In-editor coding acceleration | gstack owns the workflow before and after code: product framing, review, QA, ship, retro. |
| OpenAI Codex CLI | Independent terminal coding/review agent | gstack includes /codex as a second opinion, making Codex a component inside the workflow. |
| OpenCode/Hermes/Factory/Kiro/Slate | Alternative agent hosts | gstack's host adapter layer says the host is replaceable; the method persists. |
| Karpathy-style rules | Lightweight agent discipline | gstack operationalizes rules as slash-command gates across a sprint. |
| Conductor | Parallel Claude sessions | gstack gives parallel sessions lifecycle, review routing, and stop conditions. |

Strategic wedge: the model is not the product. The encoded operating process is the product.

gbrain Versus Memory / RAG Systems

| Competitor | What They Own | gbrain Counterposition |
| --- | --- | --- |
| Obsidian/Notion | Human-facing notes and knowledge bases | gbrain is agent-facing, queryable, writable, source-scoped, and MCP-native. |
| grep/ripgrep | Fast exact local search | gbrain adds hybrid retrieval, graph links, code edges, citations, and synthesis. |
| Basic vector RAG | Semantic retrieval over documents | gbrain adds source policy, pages, chunks, backlinks, typed links, timelines, skills, and maintenance. |
| Mem0/Supermemory | API-first memory products | gbrain is local, open, source-aware, and built around operator-owned files. |
| MemPalace | High-performing memory benchmark posture | gbrain competes with reproducible LongMemEval numbers and a cheap, deterministic retrieval headline. |
| Hindsight/Mastra/Stella | Memory/retrieval systems and benchmarks | gbrain frames itself as a full personal knowledge runtime, not only a benchmark adapter. |

Strategic wedge: agent memory should be durable, cited, source-aware, locally ownable, and maintained.
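To ground the "hybrid retrieval ... citations" claim above, here is a minimal sketch of blending a grep-style lexical score with a dense similarity score and attaching a source citation to each hit. Every name here (`hybrid_search`, `lexical_score`, the toy corpus and embeddings) is an illustrative assumption, not gbrain's actual implementation.

```python
import math
from collections import Counter

def lexical_score(query: str, doc: str) -> float:
    # Exact term overlap, normalized by query length (grep-style signal).
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values()) / max(1, len(query.split()))

def dense_score(q_vec, d_vec) -> float:
    # Cosine similarity between query and document embeddings.
    dot = sum(a * b for a, b in zip(q_vec, d_vec))
    norm = math.sqrt(sum(a * a for a in q_vec)) * math.sqrt(sum(b * b for b in d_vec))
    return dot / norm if norm else 0.0

def hybrid_search(query, q_vec, corpus, alpha=0.5):
    # corpus: (source_path, text, embedding) triples; every hit keeps its citation.
    scored = [
        (alpha * lexical_score(query, text) + (1 - alpha) * dense_score(q_vec, vec),
         {"text": text, "citation": path})
        for path, text, vec in corpus
    ]
    return [hit for _, hit in sorted(scored, key=lambda pair: -pair[0])]

corpus = [
    ("notes/sprint.md", "sprint review gates and stop conditions", [0.9, 0.1]),
    ("notes/memory.md", "agent memory with citations and sources", [0.2, 0.8]),
]
hits = hybrid_search("memory citations", [0.1, 0.9], corpus)
print(hits[0]["citation"])  # → notes/memory.md
```

The design point: because every stored chunk carries its source path, any answer the agent synthesizes can cite where it came from, which pure vector search drops by default.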

gbrain-evals Competitive Read

The LongMemEval report is unusually important because it turns memory from assertion into proof. The current story:

The honest competitive story: gbrain is close to the best public memory benchmark numbers while keeping the headline retrieval path mostly deterministic and cheap. The exposed weakness is temporal reasoning, where the report says gbrain trails MemPal raw by 1.5 points and likely needs temporal extraction wired into ranking.
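One plausible shape of "temporal extraction wired into ranking" is sketched below: extract dates from memory chunks, then boost chunks whose dates fall inside the window a temporal question asks about. This is a hypothetical illustration under assumed names (`temporal_rank`, `extract_dates`), not gbrain's or the report's actual design.

```python
import re
from datetime import date

DATE_RE = re.compile(r"(\d{4})-(\d{2})-(\d{2})")

def extract_dates(text: str):
    # Pull ISO-style dates out of a memory chunk.
    return [date(int(y), int(m), int(d)) for y, m, d in DATE_RE.findall(text)]

def temporal_rank(chunks, base_scores, window, boost=0.5):
    # window: (start, end) dates the question refers to.
    start, end = window
    ranked = []
    for chunk, score in zip(chunks, base_scores):
        if any(start <= d <= end for d in extract_dates(chunk)):
            score += boost  # in-window evidence outranks undated matches
        ranked.append((score, chunk))
    return [chunk for _, chunk in sorted(ranked, key=lambda pair: -pair[0])]

chunks = [
    "Met the founder on 2024-03-10 to discuss the pivot.",
    "General notes about founders and pivots, no date attached.",
]
top = temporal_rank(chunks, [0.6, 0.7], (date(2024, 3, 1), date(2024, 3, 31)))
print(top[0])  # the dated chunk wins despite a lower base retrieval score
```

The deterministic character of the headline path is preserved: date extraction and window checks are rule-based, so the temporal boost adds no model calls to ranking.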

System-Level Moat

The real moat is the loop:

workflow artifacts -> memory ingestion -> better future workflow -> eval proof -> public credibility -> contributors/candidates/founders -> stronger workflow

That loop is harder to copy than any single skill file.

Strategic Risks

Takeaway For Ren/OpenClaw

Ren/OpenClaw should not copy gstack as a prompt pack. The strategic lesson is the integrated architecture:

The sharper Ren wedge: OpenClaw routes the work; Ren chooses the right work; memory preserves the learning; evals prove the system is improving.