🔥 Research Highlights
CORE memory achieves 88.24% average accuracy on the Locomo dataset across all reasoning tasks, significantly outperforming other memory providers. Check out this blog for more information.
The Locomo benchmark spans four question types:
- Single-hop questions require answers based on a single session.
- Multi-hop questions require synthesizing information from multiple different sessions.
- Open-domain knowledge questions can be answered by integrating a speaker's provided information with external knowledge such as commonsense or world facts.
- Temporal reasoning questions require capturing and reasoning over time-related cues within the conversation.
Overview
Problem
Developers waste time re-explaining context to AI tools. Hit token limits in Claude? Start fresh and lose everything. Switch from ChatGPT/Claude to Cursor? Explain your context again. Your conversations, decisions, and insights vanish between sessions. With every new AI tool, the cost of context switching grows.
Solution - CORE (Contextual Observation & Recall Engine)
CORE is an open-source unified, persistent memory layer for all your AI tools. Your context follows you from Cursor to Claude to ChatGPT to Claude Code. One knowledge graph remembers who said what, when, and why. Connect once, remember everywhere. Stop managing context and start building.
🚀 CORE Self-Hosting
Want to run CORE on your own infrastructure? Self-hosting gives you complete control over your data and deployment.
Prerequisites:
- Docker (20.10.0+) and Docker Compose (2.20.0+) installed
- OpenAI API key
Setup
- Clone the repository:
```bash
git clone https://github.com/RedPlanetHQ/core.git
cd core
```
- Configure environment variables in core/.env:
```bash
OPENAI_API_KEY=your_openai_api_key
```
- Start the service:
```bash
docker-compose up -d
```
Once deployed, you can configure your AI providers (OpenAI, Anthropic) and start building your memory graph.
👉 View complete self-hosting guide
Note: We tried open-source models (e.g., run via Ollama, and GPT-OSS), but fact generation quality was not good enough. We are still figuring out how to improve this, and will support OSS models once we do.
🚀 CORE Cloud
Don't want to manage infrastructure? CORE Cloud lets you build your personal memory system instantly - no setup, no servers, just memory that works. Build your unified memory graph in 5 minutes:
- Sign Up at core.heysol.ai and create your account
- Visualize your memory graph and see how CORE automatically forms connections between facts
- Test it out - ask "What do you know about me?" in the conversation section
- Connect to your tools:
- Claude & Cursor - coding with context
- Claude Code CLI & Codex CLI - terminal-based coding with memory
- Add Browser Extension - bring your memory to any website
- Linear, Github - add project context automatically
🧩 Key Features
🧠 Unified, Portable Memory:
Add and recall your memory across Cursor, Windsurf, Claude Desktop, Claude Code, Gemini CLI, AWS's Kiro, VS Code, and Roo Code via MCP
🕸️ Temporal + Reified Knowledge Graph:
Remember the story behind every fact—track who said what, when, and why with rich relationships and full provenance, not just flat storage
🌐 Browser Extension:
Save conversations and content from ChatGPT, Grok, Gemini, Twitter, YouTube, blog posts, and any webpage directly into your CORE memory.
How to Use Extension
- Download the Extension from the Chrome Web Store.
- Log in to the CORE dashboard
- Navigate to Settings (bottom left)
- Go to API Key → Generate new key → Name it “extension.”
- Open the extension, paste your API key, and save.
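The API key generated above can also be used for direct programmatic access. Below is a minimal TypeScript sketch of pushing an episode into memory over HTTP; the endpoint path (`/api/v1/add`) and payload fields are assumptions for illustration, not CORE's documented contract, so check the API Reference for the actual routes.

```typescript
// Minimal sketch: the endpoint path and body fields below are assumptions
// for illustration, not CORE's documented contract (see the API Reference).
const API_KEY = process.env.CORE_API_KEY!; // the key generated in the steps above

async function addEpisode(text: string): Promise<void> {
  const res = await fetch("https://core.heysol.ai/api/v1/add", { // hypothetical route
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ episodeBody: text, source: "readme-example" }),
  });
  if (!res.ok) throw new Error(`Ingest failed: ${res.status}`);
}

await addEpisode("Prefers dark mode and tabs over spaces.");
```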
💬 Chat with Memory:
Ask questions like "What are my writing preferences?" with instant insights from your connected knowledge
⚡ Auto-Sync from Apps:
Automatically capture relevant context from Linear, Slack, Notion, GitHub, and other connected apps into your CORE memory
📖 View All Integrations - Complete list of supported services and their features
🔗 MCP Integration Hub:
Connect Linear, Slack, GitHub, Notion once to CORE—then use all their tools in Claude, Cursor, or any MCP client with a single URL
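Most users will simply paste that URL into their MCP client's settings (Claude, Cursor), but the same endpoint can be consumed programmatically with the official MCP TypeScript SDK. A minimal sketch, assuming the endpoint speaks SSE; the URL below is a placeholder, and your CORE dashboard shows the real one:

```typescript
// Sketch using the official MCP TypeScript SDK (@modelcontextprotocol/sdk).
// The URL is a placeholder; copy the actual MCP URL from your CORE dashboard.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

const transport = new SSEClientTransport(
  new URL("https://core.heysol.ai/api/v1/mcp") // placeholder URL
);
const client = new Client({ name: "core-mcp-example", version: "1.0.0" });

await client.connect(transport);
const { tools } = await client.listTools(); // tools from Linear, Slack, GitHub, Notion...
console.log(tools.map((t) => t.name));
```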
How CORE creates memory
CORE’s ingestion pipeline has four phases designed to capture evolving context:
- Normalization: Links new information to recent context, splits long documents into coherent segments while maintaining cross-references, and standardizes terminology so that when CORE extracts knowledge, it works with clean, contextualized input instead of messy text.
- Extraction: Derives meaning from normalized text by identifying entities (people, tools, projects, concepts), converting them into statements with context, source, and time, and mapping relationships. For instance, “We wrote CORE in Next.js” becomes: Entities (CORE, Next.js), Statement (CORE was developed using Next.js), and Relationship (was developed using). A data-model sketch follows this list.
- Resolution: Detects contradictions, tracks preference changes over time, and preserves multiple perspectives with provenance rather than overwriting, so memory reflects your entire journey, not just the latest snapshot.
- Graph Integration: Connects entities, statements, and episodes into a temporal knowledge graph that links facts to their context and history, transforming isolated data into a living web of knowledge agents can actually use.
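To make the reified, temporal part concrete, here is a minimal TypeScript sketch (not CORE's actual schema) of how the extraction example above could be represented, with provenance and validity time attached to each statement:

```typescript
// Illustrative data model only; CORE's real schema may differ.
interface Entity {
  id: string;
  name: string;
  type: "person" | "tool" | "project" | "concept";
}

interface Statement {
  id: string;
  subject: string;   // Entity id
  predicate: string; // the relationship, e.g. "was developed using"
  object: string;    // Entity id
  episodeId: string; // provenance: which conversation the fact came from
  validAt: Date;     // temporal: when the fact was asserted
  invalidAt?: Date;  // set by Resolution if a later fact supersedes this one
}

const core: Entity = { id: "e1", name: "CORE", type: "project" };
const nextjs: Entity = { id: "e2", name: "Next.js", type: "tool" };

// "We wrote CORE in Next.js" reified as a first-class statement:
const stmt: Statement = {
  id: "s1",
  subject: core.id,
  predicate: "was developed using",
  object: nextjs.id,
  episodeId: "ep42",
  validAt: new Date("2025-01-10"),
};
```

Because the statement is itself a first-class record carrying time and provenance, Resolution can later mark it invalid without erasing the history.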
How CORE recalls from memory
When you ask CORE a question, it doesn’t just look up text: it explores your entire knowledge graph to find the most useful answers. A simplified sketch of this flow follows the list below.
- Search: CORE searches memory from multiple perspectives simultaneously—keyword search for exact matches, semantic search for related ideas even if worded differently, and graph traversal to follow links between connected concepts.
- Re-Rank: The retrieved results are reordered to emphasize the most relevant and diverse ones, ensuring you see not only obvious matches but also deeper connections.
- Filtering: CORE applies smart filters based on time, reliability, and relationship strength, so only the most meaningful knowledge surfaces.
- Output: You receive both facts (clear statements) and episodes (the original context they came from), so recall is always grounded in context, time, and story.
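Here is a simplified TypeScript sketch of that recall flow; the function names, signatures, and threshold are invented for illustration and stubbed out rather than implemented:

```typescript
// Simplified sketch of the recall flow; names and scoring are illustrative.
interface Hit { statementId: string; score: number; }

// Stubs standing in for CORE's internal search primitives.
declare function keywordSearch(q: string): Promise<Hit[]>;   // exact matches
declare function semanticSearch(q: string): Promise<Hit[]>;  // embedding similarity
declare function graphTraversal(q: string): Promise<Hit[]>;  // follow entity links
declare function rerank(hits: Hit[]): Hit[];                 // relevance + diversity

async function recall(query: string): Promise<Hit[]> {
  // 1. Search from multiple perspectives simultaneously
  const [keyword, semantic, graph] = await Promise.all([
    keywordSearch(query),
    semanticSearch(query),
    graphTraversal(query),
  ]);

  // 2. Re-rank the merged results for relevance and diversity
  const merged = rerank([...keyword, ...semantic, ...graph]);

  // 3. Filter on time, reliability, and relationship strength (threshold invented)
  return merged.filter((h) => h.score > 0.5);
}
```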
Documentation
Explore our documentation to get the most out of CORE:
- Basic Concepts
- Self Hosting
- Connect Core MCP with Claude
- Connect Core MCP with Cursor
- Connect Core MCP with Claude Code
- Connect Core MCP with Codex
- API Reference
🔒 Security
CORE takes security seriously. We implement industry-standard security practices to protect your data:
- Data Encryption: All data in transit (TLS 1.3) and at rest (AES-256)
- Authentication: OAuth 2.0 and magic link authentication
- Access Control: Workspace-based isolation and role-based permissions
- Vulnerability Reporting: Please report security issues to harshith@poozle.dev
🧑‍💻 Support
Have questions or feedback? We're here to help:
- Discord: Join core-support channel
- Documentation: docs.heysol.ai
- Email: manik@poozle.dev
Usage Guidelines
Store:
- Conversation history
- User preferences
- Task context
- Reference materials

Don't store:
- Sensitive data (PII)
- Credentials
- System logs
- Temporary data
👥 Contributors