Add LoRA Method Support for Parameter-Efficient Fine-Tuning#27

Open
harrison-huan-liu wants to merge 1 commit into codefuse-ai:main from harrison-huan-liu:lora_dev
Conversation

@harrison-huan-liu

Description

This PR introduces support for the Low-Rank Adaptation (LoRA) method, a parameter-efficient fine-tuning technique that significantly reduces computational cost and memory usage during training while maintaining model performance.

Related Issue

#7

Key Changes

LoRA Implementation:

  • Added LoRA configuration parameters to the Args class in arguments.py
  • Implemented an _apply_lora() method in model.py that applies LoRA adaptation via the PEFT library (see the sketch after this list)
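
A minimal sketch of what these two pieces might look like, assuming hypothetical field names (use_lora, lora_rank, lora_alpha, lora_dropout, lora_target_modules) and the standard PEFT LoraConfig/get_peft_model calls; the PR's actual argument names and implementation may differ:

```python
from dataclasses import dataclass, field
from typing import List

from peft import LoraConfig, TaskType, get_peft_model


@dataclass
class Args:
    # Hypothetical LoRA-related fields; the actual names in arguments.py may differ.
    use_lora: bool = False
    lora_rank: int = 8
    lora_alpha: int = 16
    lora_dropout: float = 0.05
    lora_target_modules: List[str] = field(
        default_factory=lambda: ["q_proj", "v_proj"]
    )


def _apply_lora(model, args):
    """Wrap the base model with LoRA adapters using the PEFT library."""
    lora_config = LoraConfig(
        task_type=TaskType.CAUSAL_LM,
        r=args.lora_rank,
        lora_alpha=args.lora_alpha,
        lora_dropout=args.lora_dropout,
        target_modules=args.lora_target_modules,
    )
    model = get_peft_model(model, lora_config)
    # Only the injected low-rank adapter weights remain trainable;
    # the base model parameters are frozen.
    model.print_trainable_parameters()
    return model
```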

Configuration Updates:

  • Added a new configuration file configs/config_lora.json with example LoRA settings (a plausible shape is shown after this list)
  • Updated requirements.txt to include the peft library dependency
  • Added documentation for LoRA training in README.md
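
The actual contents of configs/config_lora.json are not shown here; assuming the hypothetical field names from the sketch above, the file might look something like:

```json
{
  "use_lora": true,
  "lora_rank": 8,
  "lora_alpha": 16,
  "lora_dropout": 0.05,
  "lora_target_modules": ["q_proj", "v_proj"]
}
```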
