Fine-tuning from a checkpoint: avoiding catastrophic forgetting when retraining on new data alone

Incremental training (fine‑tuning)

A few common ways to keep catastrophic forgetting in check when fine-tuning from a checkpoint are:

  • Regularization – penalize large changes to important weights (e.g., Elastic Weight Consolidation).
  • Replay / experience replay – mix in a small sample of the original data while fine‑tuning.
  • Adapters or LoRA – freeze most of the original weights and train only a tiny set of extra parameters, so the core knowledge stays intact.
  • Checkpoint averaging – keep a copy of the original checkpoint and average (interpolate) its weights with the fine‑tuned version, trading off old and new knowledge.
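As a rough sketch of the regularization idea from the first bullet: Elastic Weight Consolidation adds a penalty that discourages moving weights the old task relied on. The function below is illustrative NumPy, not any particular library's API; `fisher` stands in for a per-weight Fisher information estimate, and `lam` is the penalty strength.

```python
import numpy as np

def ewc_penalty(weights, old_weights, fisher, lam=1000.0):
    """Elastic Weight Consolidation penalty (illustrative sketch).

    L_ewc = (lam / 2) * sum_i F_i * (w_i - w*_i)^2

    weights:     current parameters during fine-tuning
    old_weights: parameters from the original checkpoint (w*)
    fisher:      per-parameter importance estimate (Fisher information)
    lam:         how strongly to protect the old task's weights
    """
    return 0.5 * lam * float(np.sum(fisher * (weights - old_weights) ** 2))
```

During fine-tuning this term is simply added to the new-task loss, so gradients pull important weights back toward their checkpoint values while unimportant weights stay free to move.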
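The replay bullet can be sketched as a batch builder that mixes a small random sample of the original data into every fine-tuning batch. This is a minimal, framework-free illustration; the function name and the `replay_fraction` parameter are made up for the example.

```python
import random

def make_replay_batches(new_data, old_data, batch_size=8,
                        replay_fraction=0.25, seed=0):
    """Yield fine-tuning batches that mix in a sample of the original data.

    Each batch holds mostly new examples plus a small slice of old ones,
    so gradient updates keep rehearsing the original task.
    """
    rng = random.Random(seed)
    n_replay = max(1, int(batch_size * replay_fraction))
    n_new = batch_size - n_replay
    batches = []
    for start in range(0, len(new_data), n_new):
        batch = new_data[start:start + n_new] + rng.sample(old_data, n_replay)
        rng.shuffle(batch)
        batches.append(batch)
    return batches
```

In practice the replay sample is often drawn from a stored subset of the original training set (or generated by the old model itself), since keeping the full dataset around is usually impractical.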
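Checkpoint averaging, the last bullet, is just a linear interpolation between the two sets of weights. A minimal sketch, assuming state dicts are plain name-to-values mappings (real frameworks store tensors, but the arithmetic is the same):

```python
def average_checkpoints(old_sd, new_sd, alpha=0.5):
    """Interpolate between original and fine-tuned weights.

    alpha=1.0 recovers the original checkpoint, alpha=0.0 the fine-tuned
    one; values in between blend old and new knowledge.
    """
    return {
        name: [alpha * o + (1.0 - alpha) * n
               for o, n in zip(old_sd[name], new_sd[name])]
        for name in old_sd
    }
```

Sweeping `alpha` on a validation set that covers both the old and new data is a cheap way to pick a merge point that keeps acceptable performance on both.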

