
Working inference on a RTX 4070 #40

Draft
frutiemax92 wants to merge 1 commit into FoundationVision:main from frutiemax92:low_vram

Conversation

@frutiemax92

It is possible to run inference under 12 GB of VRAM with these modifications. It is not fast, since you have to swap the transformer to the CPU, then the text_encoder to the GPU, and so on for each stage. I also had to remove the torch.compile directives because the precision is bfloat16 everywhere.
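For anyone who wants to reproduce this, here is a minimal sketch of the swap pattern in plain PyTorch. The function and module names (`generate`, `text_encoder`, `transformer`, `vae`, `prompt_ids`) are placeholders for illustration, not the actual code in this PR:

```python
import torch

device = torch.device("cuda")

@torch.no_grad()
def generate(text_encoder, transformer, vae, prompt_ids):
    """Run one inference pass, keeping only one large module on the GPU at a time."""
    # 1. Encode the prompt, then evict the text encoder to free VRAM.
    text_encoder.to(device)
    cond = text_encoder(prompt_ids.to(device))
    text_encoder.to("cpu")
    torch.cuda.empty_cache()

    # 2. Run the transformer in bfloat16 (no torch.compile, per the note above).
    transformer.to(device, dtype=torch.bfloat16)
    latents = transformer(cond.to(torch.bfloat16))
    transformer.to("cpu")
    torch.cuda.empty_cache()

    # 3. Decode last; cast latents back to float32 for the decoder weights.
    vae.to(device)
    image = vae.decode(latents.float())
    vae.to("cpu")
    torch.cuda.empty_cache()
    return image
```

With this pattern, peak VRAM is bounded by the largest single module plus its activations rather than the sum of all modules, at the cost of CPU-GPU transfer time on every call.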


