
CORE: Your Personal Memory Layer for AI Apps


Documentation · Discord

🔥 Research Highlights

CORE memory achieves 88.24% average accuracy on the LoCoMo benchmark across all reasoning tasks, significantly outperforming other memory providers. Check out this blog for more information.

Benchmark question types: (1) Single-hop questions require answers based on a single session. (2) Multi-hop questions require synthesizing information from multiple different sessions. (3) Open-domain knowledge questions can be answered by integrating a speaker's provided information with external knowledge such as commonsense or world facts. (4) Temporal reasoning questions can be answered through temporal reasoning and capturing time-related data cues within the conversation.

Overview

Problem

Developers waste time re-explaining context to AI tools. Hit token limits in Claude? Start fresh and lose everything. Switch from ChatGPT/Claude to Cursor? Explain your context again. Your conversations, decisions, and insights vanish between sessions. With every new AI tool, the cost of context switching grows.

Solution - CORE (Contextual Observation & Recall Engine)

CORE is an open-source, unified, persistent memory layer for all your AI tools. Your context follows you from Cursor to Claude to ChatGPT to Claude Code. One knowledge graph remembers who said what, when, and why. Connect once, remember everywhere. Stop managing context and start building.

🚀 CORE Self-Hosting

Want to run CORE on your own infrastructure? Self-hosting gives you complete control over your data and deployment.

Prerequisites: Docker with Docker Compose, and an OpenAI API key.

> Note on Open-Source Models: We tested OSS options such as Ollama-served models and GPT OSS, but their fact extraction and graph quality fell short. We're actively exploring alternatives and plan to support OSS models once quality improves.

Setup

```shell
git clone https://github.com/RedPlanetHQ/core.git
cd core
# export the key so docker-compose can pass it through (or put it in a .env file)
export OPENAI_API_KEY=your_openai_api_key
docker-compose up -d
```

Once deployed, you can configure your AI providers (OpenAI, Anthropic) and start building your memory graph.

👉 View complete self-hosting guide


🚀 CORE Cloud

Don't want to manage infrastructure? CORE Cloud lets you build your personal memory system instantly, with no setup and no servers: just memory that works. Build your unified memory graph in 5 minutes.

🧩 Key Features

🧠 Unified, Portable Memory:

Add and recall your memory across Cursor, Windsurf, Claude Desktop, Claude Code, Gemini CLI, AWS Kiro, VS Code, and Roo Code via MCP

core-claude
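Registering CORE with an MCP client is a matter of adding its MCP endpoint to the client's config. Below is a sketch for Claude Desktop's `claude_desktop_config.json`; the server name and URL are placeholders, and `mcp-remote` is assumed here as one common way to bridge a remote MCP server into a stdio client. Check CORE's documentation for the actual endpoint.

```json
{
  "mcpServers": {
    "core-memory": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://your-core-instance.example.com/mcp"]
    }
  }
}
```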

🕸️ Temporal + Reified Knowledge Graph:

Remember the story behind every fact—track who said what, when, and why with rich relationships and full provenance, not just flat storage

core-memory-graph

🌐 Browser Extension:

Save conversations and content from ChatGPT, Grok, Gemini, Twitter, YouTube, blog posts, and any webpage directly into your CORE memory.

How to Use Extension

https://github.com/user-attachments/assets/6e629834-1b9d-4fe6-ae58-a9068986036a

💬 Chat with Memory:

Ask questions like "What are my writing preferences?" with instant insights from your connected knowledge

chat-with-memory

🔄 Auto-Sync from Apps:

Automatically capture relevant context from Linear, Slack, Notion, GitHub, and other connected apps into your CORE memory

📖 View All Integrations - Complete list of supported services and their features

core-slack

🔗 MCP Integration Hub:

Connect Linear, Slack, GitHub, Notion once to CORE—then use all their tools in Claude, Cursor, or any MCP client with a single URL

core-linear-claude

How CORE creates memory

memory-ingest-diagram

CORE’s ingestion pipeline has four phases designed to capture evolving context.

The Result: Instead of a flat database, CORE gives you a memory that grows and evolves with you—preserving context, evolution, and ownership so agents can actually use it.

memory-ingest-eg
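The general shape of such a pipeline can be sketched as a toy extract/resolve/store loop. The phase names and logic below are illustrative only and are not CORE's actual implementation; a real pipeline would use an LLM for fact extraction.

```python
# Toy ingestion: extract triples, then append them to an entity's history
# instead of overwriting it. Illustrative only, not CORE's pipeline.

def extract_facts(text: str) -> list[tuple[str, str, str]]:
    """Toy extraction phase: pull (subject, predicate, object) triples
    from whitespace-separated lines. A real system would use an LLM."""
    facts = []
    for line in text.splitlines():
        parts = line.split(" ", 2)
        if len(parts) == 3:
            facts.append(tuple(parts))
    return facts

def ingest(text: str, graph: dict, source: str) -> None:
    for subj, pred, obj in extract_facts(text):
        # Resolution/storage phase (toy): keep every fact with its source,
        # so earlier context is preserved rather than replaced.
        history = graph.setdefault((subj, pred), [])
        history.append({"object": obj, "source": source})

graph: dict = {}
ingest("user prefers_editor vim", graph, source="session-1")
ingest("user prefers_editor cursor", graph, source="session-2")
# Both facts survive, newest last: context evolves, nothing is lost.
```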

How CORE recalls from memory

memory-search-diagram

When you ask CORE a question, it doesn’t just look up text—it explores your entire knowledge graph to find the most useful answers.

The result: CORE doesn’t just recall facts—it recalls them within the right context, time, and story, so agents can respond as you would remember.
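Graph-based recall of this kind can be sketched as a bounded walk outward from entities matched in the query, rather than a flat text lookup. The code below is an illustrative toy, not CORE's actual search algorithm.

```python
from collections import deque

# Toy recall: seed from entities named in the query, then walk the graph
# up to max_hops, collecting every fact encountered along the way.
# Illustrative only; not CORE's search implementation.

def recall(query: str, edges: dict[str, list[tuple[str, str]]],
           max_hops: int = 2) -> list[tuple[str, str, str]]:
    seeds = [entity for entity in edges if entity.lower() in query.lower()]
    seen = set(seeds)
    results: list[tuple[str, str, str]] = []
    frontier = deque((seed, 0) for seed in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for predicate, neighbor in edges.get(node, []):
            results.append((node, predicate, neighbor))
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return results

edges = {
    "user": [("prefers_editor", "cursor"), ("works_on", "core")],
    "core": [("written_in", "typescript")],
}
facts = recall("what editor does the user prefer?", edges)
```

Multi-hop traversal is what surfaces the second-order fact (`core` is written in TypeScript) even though only `user` appears in the query; a production system would additionally rank results by relevance and recency.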

Documentation

Explore our documentation to get the most out of CORE

🔒 Security

CORE takes security seriously. We implement industry-standard security practices to protect your data.

For detailed security information, see our Security Policy.

🧑‍💻 Support

Have questions or feedback? We're here to help: join our Discord or open a GitHub issue.

Usage Guidelines

Store:

Don't Store:

👥 Contributors
