Hi @danielhanchen,
I am experimenting with training causal models for classification tasks. HuggingFace's implementation projects the last hidden state through a final classification layer (instead of the usual language-modelling head of the causal-LM architecture). I am trying to see whether I can stay in unsloth and still finetune for classification.
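
For reference, here is a minimal sketch of the plain-HuggingFace route I mean (the model name and number of labels are just placeholders, not what I actually use):

```python
# Minimal sketch of the plain-HuggingFace route (example model / num_labels).
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "meta-llama/Llama-3.2-1B"  # example causal base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=3,  # example 3-class task
)

# Causal LMs usually ship without a pad token; the sequence-classification
# head takes the logits at the last non-padding token, so one must be set.
if tokenizer.pad_token_id is None:
    tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id
```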
So far I've tried this notebook: https://github.com/timothelaborie/text_classification_scripts/blob/main/unsloth_classification.ipynb
and it works great. The approach shown there effectively matches HuggingFace's AutoModelForSequenceClassification implementation for causal models. However, when I save the model and try to run inference, unsloth throws a size-mismatch error, because it tries to copy the weights of the original lm_head, which the notebook modifies. There is a monkey-patch that avoids the error and gets things working, but it seems very ugly. Is there a cleaner way to save the model (LoRA adapters) and load it for inference with the modified lm_head? Native support for finetuning classification models would also be great, if it's in the plans.
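
For concreteness, this is roughly the shape of what I am doing and where it breaks (paraphrased; the model name, paths and num_labels are placeholders, and the exact head-replacement details may differ from the notebook):

```python
# Rough paraphrase of the notebook-style hack (placeholder model/paths).
import torch
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit",  # example base model
    max_seq_length=2048,
    load_in_4bit=True,
)

num_labels = 3  # example
# Shrink lm_head from (vocab_size, hidden_size) to (num_labels, hidden_size)
# so the model emits class logits instead of token logits.
model.lm_head = torch.nn.Linear(
    model.config.hidden_size, num_labels, bias=False
).to(model.device)

# ... LoRA setup and training as in the notebook ...

# Saving the adapters works, but re-loading them for inference is where the
# size-mismatch error appears: the original (vocab_size, hidden_size) lm_head
# weights get copied into the shrunken head.
model.save_pretrained("classification_lora")
model, tokenizer = FastLanguageModel.from_pretrained("classification_lora")
FastLanguageModel.for_inference(model)
```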
Thanks again for all the work you do!