
Conversation

@AAnoosheh (Contributor)

What does this PR do?

Type of change: Refactor and stabilization

Overview:

  • Enforce use of FSDP2 for the KD and QAD trainers in the HF plugins/examples so that multiple restrictions can be removed

Usage

# Add a code snippet demonstrating how to use this

Testing

Before your PR is "Ready for review"

  • Make sure you read and follow Contributor guidelines and your commits are signed.
  • Is this change backward compatible?: Yes/No
  • Did you write any new necessary tests?: Yes/No
  • Did you add or update any necessary documentation?: Yes/No
  • Did you update Changelog?: Yes/No

Additional Information

@AAnoosheh AAnoosheh self-assigned this Dec 18, 2025
copy-pr-bot bot commented Dec 18, 2025

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.


codecov bot commented Dec 18, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 74.62%. Comparing base (03dc386) to head (6f82998).
⚠️ Report is 16 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #708      +/-   ##
==========================================
- Coverage   74.69%   74.62%   -0.07%     
==========================================
  Files         192      192              
  Lines       18946    18989      +43     
==========================================
+ Hits        14152    14171      +19     
- Misses       4794     4818      +24     


@AAnoosheh AAnoosheh marked this pull request as ready for review December 19, 2025 14:07
@AAnoosheh AAnoosheh requested review from a team as code owners December 19, 2025 14:07
> **_NOTE:_** `launch.sh` defaults to use `LlamaDecoderLayer` as the transformer layer class. If your model uses a different class, you need to pass `--fsdp_transformer_layer_cls_to_wrap <your_layer_class>` to the `launch.sh` script. For example, for `Qwen/Qwen3-8B`, specify `--fsdp_transformer_layer_cls_to_wrap Qwen3DecoderLayer` as an additional argument.
> **_NOTE:_** The script defaults to using FSDP1. To use FSDP2, pass "--use_fsdp2 True" to the `launch.sh` script. Note that FSDP2 is less stable than FSDP1 currently. Use it with caution.
> **_NOTE:_** The script defaults to using FSDP1. To use FSDP2, pass "--backend=fsdp2" to the `launch.sh` script. Note that FSDP2 is less stable than FSDP1 currently. Use it with caution.
Collaborator:

Is this statement still valid? "Note that FSDP2 is less stable than FSDP1 currently. Use it with caution."

Contributor Author:

I doubt it, but I don't have proof. I don't have proof that it is less stable either.

@AAnoosheh AAnoosheh force-pushed the aanoosheh/kd-trainer-streamline branch from bde5788 to 190e4d2 on December 22, 2025 at 14:42
@AAnoosheh AAnoosheh force-pushed the aanoosheh/kd-trainer-streamline branch from c4c0d19 to 9f3b0f8 on January 5, 2026 at 13:50
@AAnoosheh AAnoosheh force-pushed the aanoosheh/kd-trainer-streamline branch from 9f3b0f8 to 50cb0f4 on January 5, 2026 at 13:55
@AAnoosheh AAnoosheh force-pushed the aanoosheh/kd-trainer-streamline branch from f6e1196 to 796b023 on January 6, 2026 at 14:56
Comment on lines 423 to 427
model = self.accelerator.unwrap_model(self.model)
with model.hide_teacher_model(), model.hide_loss_modules(enable=not _internal_call):
return QATTrainer.save_model(self, output_dir, _internal_call, *args, **kwargs)
else:
return KDTrainer.save_model(self, output_dir, _internal_call, *args, **kwargs)
Contributor:

Do we need `def save_model(` for QADTrainer?
The MRO for `QADTrainer.save_model` is `QATTrainer.save_model -> KDTrainer.save_model -> HF Trainer.save_model`, right? What in particular is being achieved here now?
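
For illustration, a minimal sketch with stand-in classes (not the actual ModelOpt or HF implementations, and assuming each override calls `super().save_model()` cooperatively) of how that MRO dispatches when `QADTrainer` has no override of its own:

```python
class Trainer:  # stand-in for transformers.Trainer
    def save_model(self):
        print("Trainer.save_model")


class KDTrainer(Trainer):
    def save_model(self):
        print("KDTrainer.save_model")  # e.g. teacher/loss-module handling here
        super().save_model()


class QATTrainer(Trainer):
    def save_model(self):
        print("QATTrainer.save_model")  # e.g. quantizer-state handling here
        super().save_model()


class QADTrainer(QATTrainer, KDTrainer):
    pass  # no save_model override of its own


print([c.__name__ for c in QADTrainer.__mro__])
# ['QADTrainer', 'QATTrainer', 'KDTrainer', 'Trainer', 'object']

QADTrainer().save_model()
# prints: QATTrainer.save_model, KDTrainer.save_model, Trainer.save_model
```

With cooperative `super()` calls, dropping the `QADTrainer.save_model` override still runs both parent implementations in that order.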

Contributor:

If we don't respect this MRO, the fix in https://github.com/NVIDIA/Model-Optimizer/pull/546/files won't be applied for QAD checkpoints.

Contributor:

If we remove the `save_model` override from here, does this still work? If so, can we remove it?

Contributor Author:

Yep, it works. Done.
