An AI with two brains - mitigating catastrophic forgetting
Brain 1 – General knowledge
A pretrained foundation model (e.g., Llama-2, 7-13 B parameters) that already knows English, basic math, common-sense facts, etc. It stays static, so it never loses the broad knowledge it was trained on.

Brain 2 – Domain-specific, continuously refreshed
A lightweight component that learns from your daily 1 GB corpus. It can be implemented as:
- a retrieval index (FAISS/Chroma) that stores embeddings of the fresh documents, or
- a fine-tuned adapter (LoRA/QLoRA) that is updated each night with the new data.
Because only this part changes, you avoid catastrophic forgetting in the general brain.

How they interact
A small router (a few-shot classifier or a rule-based switch) decides, for each query, whether to:
- answer directly from Brain 1, or
- pull the most relevant chunks from Brain 2 (retrieval) and feed them, together with the question, to Brain 1 for a grounded answer.

Practical stack you could use
General LLM – meta-llama/Llama-2-7b-chat (run locally via Ollama o...
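To make the "Brain 2 as retrieval index" idea concrete, here is a minimal sketch in pure Python. It stands in for a FAISS/Chroma index: the toy bag-of-words `embed` function and the `FreshIndex` class are illustrative assumptions, not part of any real library; a production stack would use a sentence-embedding model and a proper vector store.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real stack would use a sentence-embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class FreshIndex:
    """Stand-in for the FAISS/Chroma index that holds each day's fresh documents."""
    def __init__(self):
        self.docs = []  # list of (text, vector) pairs

    def add(self, text: str) -> None:
        self.docs.append((text, embed(text)))

    def search(self, query: str, k: int = 2) -> list:
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

idx = FreshIndex()
idx.add("Q3 revenue grew 12 percent year over year")
idx.add("The cafeteria menu changes on Fridays")
print(idx.search("How did revenue change in Q3?", k=1))
```

Because only this index is rebuilt each night, the general model's weights never move, which is exactly what sidesteps catastrophic forgetting.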
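When the router chooses retrieval, the chunks from Brain 2 are stitched into a grounded prompt for Brain 1. A minimal sketch of that glue step, with an assumed prompt template (the exact wording is illustrative, not prescribed by any model):

```python
def build_grounded_prompt(question: str, chunks: list) -> str:
    """Combine retrieved chunks with the user's question so the general
    model (Brain 1) answers from the supplied context."""
    context = "\n".join(f"- {c}" for c in chunks)
    return (
        "Answer using only the context below. If the answer is not there, say so.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = build_grounded_prompt(
    "How did revenue change in Q3?",
    ["Q3 revenue grew 12 percent year over year"],
)
print(prompt)
```

The resulting string is what gets sent to the chat model; grounding the answer in retrieved text is what lets a static Brain 1 speak about yesterday's data.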