Implement LoRA for MoE with support for LoRA injection for nn.parameters #9337
base: main
Changes from 12 commits
```diff
@@ -198,8 +198,14 @@ def _setup_lora_tuning(
         logger.info_rank0("Loaded adapter(s): {}".format(",".join(model_args.adapter_name_or_path)))

     if is_trainable and adapter_to_resume is None:  # create new lora weights while training
+        target_modules = []
+        target_parameters = []
         if len(finetuning_args.lora_target) == 1 and finetuning_args.lora_target[0] == "all":
-            target_modules = find_all_linear_modules(model, finetuning_args.freeze_vision_tower)
+            if finetuning_args.lora_parameters:  # if specified the parameters to be adapted, use them
```
Owner
When we specify the target parameters, the target modules should not be affected.

Owner
This if-else is strange.

Author
Sorry, I didn't understand the idea clearly at first. I noticed that lora_target has a default value of "all", so my thinking was: if the user keeps that default and relies on lora_parameters for injection without specifying a target, this if-else is needed to decide which of the two applies.

The current implementation will prevent target_parameters (lora_parameters) from being passed to the PEFT config when target_modules (lora_target) is specified. There is no need to modify this piece of code to pass lora_parameters; instead, I would pass it directly to peft_kwargs, since target_parameters is an optional argument with None as its default.
```diff
+                logger.info_rank0("Using specified LoRA parameters: {}", finetuning_args.lora_parameters)
+                target_parameters = finetuning_args.lora_parameters
+            else:
+                target_modules = find_all_linear_modules(model, finetuning_args.freeze_vision_tower)
         else:
             target_modules = finetuning_args.lora_target
```
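To make the review discussion concrete, here is a small, self-contained sketch of the branching above (the helper and module names are stand-ins, not the repository's code). It shows the behavior the reviewers object to: as soon as lora_target is set to anything other than "all", the lora_parameters list is silently dropped.

```python
# Illustrative sketch only -- mirrors the if-else under review with stand-in values.
from typing import Optional


def resolve_targets(lora_target: list[str], lora_parameters: Optional[list[str]]):
    target_modules: list[str] = []
    target_parameters: list[str] = []
    if len(lora_target) == 1 and lora_target[0] == "all":
        if lora_parameters:  # parameters explicitly requested for nn.Parameter injection
            target_parameters = lora_parameters
        else:  # fall back to targeting all linear modules
            target_modules = ["q_proj", "k_proj", "v_proj", "o_proj"]  # stand-in for find_all_linear_modules(...)
    else:
        target_modules = lora_target
    return target_modules, target_parameters


# lora_parameters is ignored as soon as lora_target is not "all":
print(resolve_targets(["q_proj"], ["mlp.experts.gate_up_proj"]))
# -> (['q_proj'], [])
print(resolve_targets(["all"], ["mlp.experts.gate_up_proj"]))
# -> ([], ['mlp.experts.gate_up_proj'])
```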
```diff
@@ -235,6 +241,7 @@ def _setup_lora_tuning(
             "use_rslora": finetuning_args.use_rslora,
             "use_dora": finetuning_args.use_dora,
             "modules_to_save": finetuning_args.additional_target,
+            "target_parameters": target_parameters,
```
Owner
Will target_parameters always be defined here?
```diff
         }
     elif finetuning_args.finetuning_type == "oft":
         peft_kwargs = {
```
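For context, the forwarded list is ultimately consumed by PEFT's LoraConfig. Below is a minimal, hedged example of what that looks like, assuming a PEFT release recent enough to expose target_parameters on LoraConfig; the parameter path is illustrative only and not taken from this PR.

```python
# Hedged sketch: requires a PEFT version that supports LoraConfig(target_parameters=...).
# target_parameters injects LoRA into raw nn.Parameter tensors (e.g. stacked MoE expert
# weights) instead of nn.Linear modules, and defaults to None when unused.
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],             # ordinary module targeting
    target_parameters=["mlp.experts.gate_up_proj"],  # illustrative nn.Parameter path
)
print(lora_config.target_parameters)
```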