When using EvoProtGrad with a large number of parallel chains and/or a large pLM, emptying the GPU cache and running Python garbage collection appear to help with out-of-memory (OOM) issues.
For example, adding

```python
import gc

import torch

if torch.cuda.is_available():
    torch.cuda.empty_cache()
gc.collect()
```

after each sampler step.
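A minimal sketch of where such cleanup could be wired in, assuming a placeholder `sampler_step` function (EvoProtGrad's actual sampler loop lives inside the library, so the names below are hypothetical, not its real API):

```python
import gc

import torch


def free_gpu_memory() -> None:
    """Release cached GPU blocks between sampler steps.

    Running gc.collect() first drops unreachable Python objects so
    that empty_cache() can return the CUDA blocks their tensors held.
    """
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()


# Placeholder standing in for one EvoProtGrad sampler step; the real
# step would score chains with the pLM experts and propose mutations.
def sampler_step(chains: torch.Tensor) -> torch.Tensor:
    return chains


chains = torch.zeros(32, 128)  # e.g. 32 parallel chains
for _ in range(10):
    chains = sampler_step(chains)
    free_gpu_memory()  # proposed cleanup after each step
```

One caveat: `empty_cache()` returns cached blocks to the driver, so subsequent allocations go back through `cudaMalloc` and sampling may slow down; calling the cleanup only every N steps could be a reasonable compromise.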
Needs verification and tests.