Fine-tuning from a checkpoint: avoiding catastrophic forgetting when retraining on new data alone

Incremental training (fine‑tuning)

A few common ways to keep catastrophic forgetting in check are:

  • Regularization – penalize large changes to important weights (e.g., Elastic Weight Consolidation).
  • Replay / experience replay – mix in a small sample of the original data while fine‑tuning.
  • Adapters or LoRA – freeze most of the original weights and train only a tiny set of extra parameters, so the core knowledge stays intact.
  • Checkpoint averaging – keep a copy of the original checkpoint and average its weights with the fine‑tuned version, trading off old and new behaviour.
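
The replay idea above can be sketched in a few lines of plain Python. This is a minimal illustration, not a full training loop: `build_replay_mix` and its parameters are hypothetical names, and in practice the "data" items would be tokenized examples or dataset records.

```python
import random

def build_replay_mix(new_data, old_data, replay_fraction=0.2, seed=0):
    """Mix a small random sample of the original data into the new
    training set, so each fine-tuning epoch still sees old examples.

    replay_fraction is the number of replayed old examples expressed
    as a fraction of the new set's size (an illustrative choice).
    """
    rng = random.Random(seed)
    n_replay = int(len(new_data) * replay_fraction)
    # Sample without replacement from the original data, capped at its size.
    replayed = rng.sample(old_data, min(n_replay, len(old_data)))
    mixed = list(new_data) + replayed
    rng.shuffle(mixed)  # interleave old and new examples
    return mixed
```

For example, with 100 new examples and `replay_fraction=0.2`, the mixed set contains 120 examples, 20 of them drawn from the original data.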
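
Checkpoint averaging can likewise be sketched as a linear interpolation between two sets of weights. Here checkpoints are represented as plain dicts mapping parameter names to lists of floats (an assumption for illustration; real frameworks store tensors), and `alpha` controls how far the merged model leans toward the fine-tuned weights.

```python
def average_checkpoints(original, finetuned, alpha=0.5):
    """Interpolate two checkpoints parameter-by-parameter.

    alpha=0.0 returns the original weights unchanged,
    alpha=1.0 returns the fully fine-tuned weights.
    """
    merged = {}
    for name, w_orig in original.items():
        w_new = finetuned[name]
        # Element-wise weighted average of the two parameter vectors.
        merged[name] = [(1 - alpha) * a + alpha * b
                        for a, b in zip(w_orig, w_new)]
    return merged
```

With `alpha=0.5`, averaging `{"w": [0.0, 2.0]}` and `{"w": [2.0, 4.0]}` yields `{"w": [1.0, 3.0]}`, halfway between the two checkpoints.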

