Continual-learning for Modelling Low-Resource Languages from Large Language Models
arXiv:2601.05874v1
Authors
Santosh Srinath K, Mudit Somani, Varun Reddy Padala, Prajna Devi Upadhyay, Abhijit Das
Abstract
Building a language model for a multilingual scenario involves several challenges, among which catastrophic forgetting is the most significant. For example, small language models (SLMs) built for low-resource languages by adapting large language models (LLMs) are prone to catastrophic forgetting. This work proposes a continual learning strategy that combines parts-of-speech (POS)-based code-switching with a replay adapter to mitigate catastrophic forgetting while training an SLM from an LLM. Experiments on vision-language tasks such as visual question answering, as well as on language modelling, demonstrate the effectiveness of the proposed architecture.
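The abstract does not spell out what POS-based code-switching looks like in practice. A minimal sketch is given below: tokens of selected POS tags in a high-resource sentence are swapped with low-resource translations drawn from a bilingual lexicon. The tagger output, lexicon, tag set, and language pair here are hypothetical stand-ins for illustration, not the authors' actual pipeline.

```python
# Minimal sketch of POS-based code-switching as a data-augmentation step.
# The pre-tagged input and the lexicon are illustrative stand-ins; the
# paper's actual tagger, tag set, and language pair may differ.
from typing import Dict, List, Tuple


def code_switch(
    tagged_tokens: List[Tuple[str, str]],   # (token, POS tag) pairs
    lexicon: Dict[str, str],                # high-resource -> low-resource
    switch_tags: frozenset = frozenset({"NOUN", "VERB"}),
) -> List[str]:
    """Replace each token whose POS tag is in `switch_tags` and that has a
    lexicon entry, producing a code-switched token sequence."""
    out = []
    for token, tag in tagged_tokens:
        if tag in switch_tags and token.lower() in lexicon:
            out.append(lexicon[token.lower()])
        else:
            out.append(token)
    return out


# Toy example: English content words switched into romanized Hindi.
sentence = [("The", "DET"), ("boy", "NOUN"), ("reads", "VERB"),
            ("a", "DET"), ("book", "NOUN")]
lexicon = {"boy": "ladka", "reads": "padhta", "book": "kitaab"}
print(" ".join(code_switch(sentence, lexicon)))
# -> "The ladka padhta a kitaab"
```

Restricting substitution to content-word tags (nouns, verbs) is one common design choice for such augmentation, since it keeps the high-resource grammatical frame intact while exposing the model to low-resource vocabulary.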
Paper Information
- arXiv ID: 2601.05874v1
- Published:
- Categories: cs.CL, cs.AI