TimeCapsule LLM

A language model trained from scratch exclusively on data from certain locations and historical periods to reduce modern bias and emulate the voice, vocabulary, and worldview of that era.

Imagine if an AI model didn’t just pretend to be historical, but truly was.

v0 and v0.5 are built on nanoGPT by Andrej Karpathy. Core training scripts and model architecture are his work.

v1 is built on Phi-1.5 by Microsoft

Hugging Face Link

Model Behavior & Limitations

v0

Early prompts show the model responding with 1800s-style language and behavior. Example: prompted with "Who art Henry?", it replied, "I know that man, I have did not a black, the storm."

(Image: TimeLockLLM sample output)

v0.5

A significant improvement over v0.

(Image: TimeLockLLM sample output)

v1

First model to recall and connect a real historical event with an actual figure from the dataset.

Example: Prompt: "It was the year of our Lord 1834"

The output: "It was the year of our Lord 1834 and the streets of London were filled with protest and petition. The cause, as many re counted, was not bound in the way of private, but having taken up the same day in the day of Lord Palmerston, the public will receive a short statement of the difficulties under which the day of law has reached us. It is a matter of deep regret, that the present events in the history of the world are clear, and consequently will be'known. It is not true that the very men who first settled in the Gospel at Jerusalem should have so extensive and so interesting a record of the prosperity and prosperity"

At first I assumed that a protest might have coincidentally taken place the same year, but take a look at this: 1834protest

Why this matters:

This is the first example of one of my models connecting a year to both a real historical event and a real person tied to that event (Lord Palmerston). Earlier models (v0 and v0.5) could mimic 19th-century writing styles but would always hallucinate events, people, and facts. This shows the model is beginning to remember things from the dataset.

Upcoming Plans

How to Use

This project focuses mostly on curating historical data, preparing it for training, and building a tokenizer. I am not going to cover the full LLM training process; for that, refer to nanoGPT by Andrej Karpathy.

Step 1: Gather and Prepare Historical Texts
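
A minimal sketch of what this step can look like: collect the plain-text files, strip obvious scanning/library boilerplate, and concatenate everything into one corpus file. The paths and the header-stripping rule below are assumptions for illustration, not the project's exact pipeline.

```python
# Sketch of corpus preparation; directory names and the boilerplate marker
# are illustrative assumptions, not the repo's actual pipeline.
import re
from pathlib import Path

RAW_DIR = Path("data/raw_1800_1875")   # hypothetical folder of downloaded .txt files
OUT_FILE = Path("data/input.txt")      # single concatenated training corpus

def clean(text: str) -> str:
    # Drop Project Gutenberg style boilerplate if present (assumed marker format).
    start = text.find("*** START OF")
    end = text.find("*** END OF")
    if start != -1 and end != -1:
        text = text[text.find("\n", start) + 1 : end]
    # Collapse runs of whitespace so OCR line breaks don't fragment sentences.
    return re.sub(r"\s+", " ", text).strip()

with OUT_FILE.open("w", encoding="utf-8") as out:
    for path in sorted(RAW_DIR.glob("*.txt")):
        out.write(clean(path.read_text(encoding="utf-8", errors="ignore")))
        out.write("\n\n")
```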

Step 2: Build a Custom Tokenizer
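
As a rough sketch, a byte-level BPE tokenizer can be trained on the corpus with the Hugging Face tokenizers library; the vocabulary size and special tokens below are assumptions and may differ from what the repo actually uses.

```python
# Sketch using the Hugging Face `tokenizers` library; vocab size and special
# tokens are assumed values, not the project's settings.
from pathlib import Path
from tokenizers import ByteLevelBPETokenizer

Path("tokenizer").mkdir(exist_ok=True)

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["data/input.txt"],        # the corpus built in Step 1
    vocab_size=16000,                # assumed size; tune to the corpus
    min_frequency=2,
    special_tokens=["<|endoftext|>"],
)
tokenizer.save_model("tokenizer")    # writes vocab.json and merges.txt
```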

Step 3: Train Your Model
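
For reference, nanoGPT's data loader expects the corpus pre-encoded into train.bin/val.bin files of uint16 token ids. Below is a hedged sketch of that encoding step; the file names, split ratio, and launch command are assumptions based on nanoGPT's defaults, not this repo's exact setup.

```python
# Encode the corpus into the uint16 .bin format that nanoGPT reads.
import numpy as np
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer("tokenizer/vocab.json", "tokenizer/merges.txt")
text = open("data/input.txt", encoding="utf-8").read()
ids = tokenizer.encode(text).ids

split = int(len(ids) * 0.9)          # assumed 90/10 train/val split
np.array(ids[:split], dtype=np.uint16).tofile("data/train.bin")
np.array(ids[split:], dtype=np.uint16).tofile("data/val.bin")

# Training itself is then launched with nanoGPT, for example:
#   python train.py config/train_gpt2.py --dataset=<your dataset dir>
```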

FAQ

What is Selective Temporal Training?

Selective Temporal Training (STT) is a machine learning methodology where all training data is curated to fall within a specific historical period. The goal is to model the language and knowledge of that era without influence from modern concepts. For example, the current model (v0.5) is trained exclusively on data from 1800–1875; it is not fine-tuned but trained from scratch, resulting in output that reflects the linguistic style and historical context of that period.
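
As an illustration of the selection rule (not code from the repo), STT boils down to keeping only documents whose publication date falls inside the target window; the metadata file and its columns here are hypothetical.

```python
# Hypothetical sketch of the STT selection rule: filter a document list by
# publication year. The metadata.csv file and its columns are assumptions.
import csv

WINDOW = range(1800, 1876)  # 1800-1875 inclusive

with open("metadata.csv", newline="", encoding="utf-8") as f:
    keep = [row["filename"] for row in csv.DictReader(f)
            if row["year"].isdigit() and int(row["year"]) in WINDOW]

print(f"{len(keep)} documents fall inside the training window")
```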

Why not just use fine-tuning or LoRA?

For this project I'm trying to create a language model that is unclouded by modern bias. If I fine-tune something like GPT-2, it's already pre-trained, and that information won't go away. If I train from scratch, the language model won't pretend to be old, it just will be. The goal for this project right now is to create something that can reason exclusively using knowledge from London books published between 1800 and 1875.

What kind of data did you use for training?

I'm using books, legal documents, newspapers, and other writings from 1800–1875 London. The list I linked (for v0) contains around 200 documents, but for the first training I used just 50 files, about 187 MB in total. You can view the list of documents here: https://github.com/haykgrigo3/TimeCapsuleLLM/blob/main/Copy%20of%20London%20Documents%20for%20Time%20Capsule%20LLM.txt

Dataset sizes: v0: ~187 MB, v0.5: ~435 MB, v1: ~6.25 GB

How large are the models?

v0: 16M parameters

v0.5: 123M parameters

v1: 700M parameters

Training Specs?

v0/v0.5

GPU: GeForce RTX 4060, CPU: i5-13400F, RAM: 16 GB DDR5

v1

GPU: A100 (rented)
