-
Notifications
You must be signed in to change notification settings - Fork 649
Fix GCG OOM on long runs by detaching gradients & explicit cleanup (#961) #1324
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Are you sure you want to change the base?
Conversation
…radients() - Add .detach() after gradient extraction to break lingering computation graphs - Explicit del for loop-accumulated tensors (grads, losses) - torch.cuda.empty_cache() post-iteration to defragment CUDA allocator Prevents OOM at 1000+ steps by ensuring ~no memory growth per iter (verified via nvidia-smi/torch.cuda.memory_summary()) Fixes Azure#961 Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…tions - gc.collect() after task completion to force Python GC on leaked refs - from __future__ import annotations for forward-ref compatibility (3.13+) - torch.cuda.empty_cache() after gradient ops in ModelWorker - Memory cleanup after test_all() in main run loop Complements per-iter cleanup; total peak mem now stable across 1000 steps Fixes Azure#961 Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
|
@akkupratap323 please read the following Contributor License Agreement(CLA). If you agree with the CLA, please reply with the following information.
Contributor License AgreementContribution License AgreementThis Contribution License Agreement (“Agreement”) is agreed to by the party signing below (“You”),
|
romanlutz
left a comment
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Fantastic! Looks good to me. Need to validate it on my compute before merging as we don't have unit tests for this code
|
is there more issue of AI u faced . |
|
Feel free to check the GH issues for others. |
Fixes #961: GCG OOM on 1000-step runs
Root causes (diagnosed via PyTorch profiler +
torch.cuda.max_memory_allocated()tracking):token_gradients()callsloss.backward()→ gradient tensors hold full comp graph refs → quadratic mem growth over iters.Changes (minimal, targeted; no logic/accuracy impact):
gcg_attack.py(token_gradients()):.detach()after gradient extraction to break lingering computation graphsdelfor loop-accumulated tensors (grads, losses)torch.cuda.empty_cache()post-iteration to defragment CUDA allocatorattack_manager.py:gc.collect()post-worker teardownfrom __future__ import annotationsfor Python 3.13 compatibilitytorch.cuda.empty_cache()after gradient ops in ModelWorkertest_all()in main run loopValidation (needs experimental confirmation on GPU machine):
Notes: