LakyFx/CogniLayer
26 stars · Last commit 2026-04-04
Stop re-explaining your codebase to AI. Infinite speed memory + code graph for Claude Code & Codex CLI. 17 MCP tools, subagent protocol, hybrid search, TUI dashboard, crash recovery. Save 80-200K+ tokens/session.
README preview
# 🧠 CogniLayer v4

### Stop re-explaining your codebase to AI.

**Infinite speed memory · Code graph · 200K+ tokens saved**

Without CogniLayer, your AI agent starts every session blind. It re-reads files, re-discovers architecture, and re-learns decisions you explained last week. On a 50-file project, that's 80-100K tokens burned before real work begins. With CogniLayer, it already knows.

Three things your agent doesn't have today:

- 🔗 **Persistent knowledge across agents** - facts, decisions, error fixes, and gotchas survive sessions, crashes, and agent switches. Start in Claude Code, continue in Codex CLI with zero context loss.
- 🔍 **Code intelligence** - who calls what, what depends on what, what breaks if you rename a function. Tree-sitter AST parsing across 10+ languages, not grep.
- 🤖 **Subagent context compression** - research subagents write findings to the DB instead of dumping 40K+ tokens into the parent context. The parent gets a 500-token summary plus on-demand `memory_search` retrieval.

⚡ **80-200K+ tokens saved per session** - semantic search replaces file reads, and subagent findings go to the DB instead of into context. The longer the session and the more subagents you run, the more you save.
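The subagent compression flow described above can be sketched in plain Python. Everything here is illustrative: the `MemoryStore` class, its `memory_write` method, and the sample findings are hypothetical stand-ins, not CogniLayer's actual MCP tool API; only the `memory_search` name comes from the README.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """In-memory stand-in for CogniLayer's persistent DB (hypothetical)."""
    findings: list[dict] = field(default_factory=list)

    def memory_write(self, topic: str, body: str) -> None:
        # The subagent stores its full findings here instead of
        # returning them to the parent.
        self.findings.append({"topic": topic, "body": body})

    def memory_search(self, query: str) -> list[str]:
        # Naive keyword match for the sketch; the real system
        # advertises hybrid (semantic + keyword) search.
        return [f["body"] for f in self.findings
                if query.lower() in f["body"].lower()]

def run_research_subagent(store: MemoryStore) -> str:
    """Subagent writes its full (potentially 40K-token) findings to the
    store, then hands the parent only a short summary."""
    full_findings = ("auth module uses JWT; tokens expire after 15 minutes; "
                     "refresh is handled in the session middleware")
    store.memory_write("auth-research", full_findings)
    return "Auth research done: findings stored under 'auth-research'."

store = MemoryStore()
summary = run_research_subagent(store)  # parent context receives only this
details = store.memory_search("JWT")    # parent pulls details on demand
```

The point of the pattern is that the parent agent's context grows by one summary line per subagent, while the full findings remain retrievable by query.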