Continual Learning: A paradigm where models incrementally learn from new data streams without forgetting previously learned information
Model Editing: Techniques for precisely modifying specific facts or behaviors in a model's weights without retraining the entire network (e.g., locating and changing a specific neuron)
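One well-known family of model-editing methods applies a rank-one weight update so that a chosen "key" direction maps to a new "value" (in the spirit of ROME). A minimal sketch on a toy numpy layer; all vectors here are illustrative, not taken from a real model:

```python
import numpy as np

# Toy "layer": W maps a key (a subject representation) to a value (a fact readout).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))

k = rng.normal(size=4)      # key vector for the fact we want to edit
v_new = rng.normal(size=4)  # desired new output for that key

# Rank-one update: choose delta so (W + delta) @ k == v_new,
# while directions orthogonal to k are left unchanged.
delta = np.outer(v_new - W @ k, k) / (k @ k)
W_edited = W + delta

assert np.allclose(W_edited @ k, v_new)
```

The edit is surgical: only the component of the input along `k` is remapped, which is why such methods avoid retraining the full network.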
Retrieval-Augmented Generation (RAG): Providing external knowledge to a model at inference time by searching a database, rather than baking it into the model weights
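A minimal sketch of the retrieval step, using a toy bag-of-words cosine similarity over an in-memory document list (a real system would use dense embeddings and a vector database; the documents and query here are illustrative):

```python
from collections import Counter
import math

docs = [
    "The Eiffel Tower is in Paris.",
    "Mount Everest is the tallest mountain on Earth.",
    "Python is a popular programming language.",
]

def bow(text):
    # Bag-of-words term counts as a crude stand-in for an embedding.
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, k=1):
    # Rank documents by similarity to the query and keep the top k.
    q = bow(query)
    return sorted(docs, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

question = "where is the eiffel tower"
context = retrieve(question)
prompt = f"Context: {' '.join(context)}\nQuestion: {question}\nAnswer:"
print(prompt)
```

The retrieved text is prepended to the prompt at inference time, so the model can answer from the context instead of relying on knowledge stored in its weights.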
Catastrophic Forgetting: The tendency of neural networks to lose previously learned information when trained on new data
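The effect can be shown with a deliberately tiny example: a single weight trained by gradient descent on task A, then on a conflicting task B, loses its fit to task A (the tasks and targets below are contrived for illustration):

```python
def loss(w, x, y):
    return (w * x - y) ** 2

def train(w, x, y, steps=100, lr=0.1):
    # Plain gradient descent on the squared error.
    for _ in range(steps):
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

w = 0.0
w = train(w, x=1.0, y=2.0)         # task A: map 1 -> 2
loss_A_before = loss(w, 1.0, 2.0)  # near zero after training on A

w = train(w, x=1.0, y=-2.0)        # task B: map 1 -> -2, pulls w the other way
loss_A_after = loss(w, 1.0, 2.0)   # large: task A has been forgotten

print(loss_A_before, loss_A_after)
```

Because both tasks share the same weight, optimizing for B overwrites the solution for A; mitigation strategies (replay, regularization, parameter isolation) all aim to break this interference.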
Instruction Tuning: Fine-tuning a model on datasets of instructions and responses to improve its ability to follow user commands
RLHF (Reinforcement Learning from Human Feedback): Aligning models with human values using reward models trained on preference data
Parameter Isolation: A continual learning strategy that allocates distinct parameter subsets to different tasks or domains, keeping the remaining parameters frozen to prevent forgetting
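A minimal sketch of parameter isolation with numpy: each task owns a disjoint boolean mask over a flat parameter vector, and gradient updates are zeroed outside that mask (the masks, gradients, and learning rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
params = rng.normal(size=8)

# Allocate a disjoint parameter subset to each task; everything else stays frozen.
task_masks = {
    "task_A": np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool),
    "task_B": np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=bool),
}

def sgd_step(params, grads, task, lr=0.01):
    # Zero out gradients for parameters not owned by this task.
    masked = np.where(task_masks[task], grads, 0.0)
    return params - lr * masked

grads = np.ones_like(params)
updated = sgd_step(params, grads, "task_B")

# Task A's parameters are untouched, so its behavior cannot be forgotten.
assert np.array_equal(updated[:4], params[:4])
```

The frozen subset guarantees zero interference with earlier tasks, at the cost of capacity: each new task consumes a fresh slice of parameters.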
Self-Information Updating: A method where the model generates its own training data based on new information to update its internal knowledge