Feature Summary
Add an argument equivalent to llama.cpp's --main-gpu
Detailed Description
There is presently no way to tell stable-diffusion.cpp which GPU to use when multiple are available, which makes queuing batch jobs across multiple cards unnecessarily difficult. It would be useful to have an equivalent of llama.cpp's --main-gpu, exposed through the public headers so that wrappers like cyllama can interface with it.
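For illustration only — nothing below exists in stable-diffusion.cpp or its wrappers today — a per-context device index analogous to llama.cpp's --main-gpu might surface in a Python wrapper roughly like this; the class and parameter names are invented for the sketch:

```python
# Hypothetical sketch: neither this class nor a main_gpu parameter exists in
# stable-diffusion.cpp or cyllama today; it only illustrates the requested
# per-job device selection.
from dataclasses import dataclass


@dataclass
class SDContext:
    model_path: str
    main_gpu: int = 0  # imagined equivalent of llama.cpp's --main-gpu

    def txt2img(self, prompt: str) -> bytes:
        # A real wrapper would call into the stable-diffusion.cpp C API here,
        # initialising the ggml backend on self.main_gpu.
        raise NotImplementedError


# Queuing jobs across cards would then be a plain per-job argument,
# with no environment-variable juggling:
jobs = [("a red fox", 0), ("a snowy mountain", 1)]
contexts = [SDContext("model.safetensors", main_gpu=gpu) for _, gpu in jobs]
```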
Alternatives you considered
Hardware acceleration frameworks do honour environment variables such as HIP_VISIBLE_DEVICES and CUDA_VISIBLE_DEVICES to select the device to run on, but this is cumbersome and requires elaborate "shell-out" scripting for batch jobs. Having something internal to the application to control device selection would make larger-scale usage of stable-diffusion.cpp much more viable.
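A minimal sketch of the shell-out pattern this currently requires, assuming a built `sd` binary on PATH and a CUDA backend (with ROCm, HIP_VISIBLE_DEVICES plays the same role); the CLI flags shown are illustrative:

```python
# Env-var workaround: restrict each child process to one GPU so the backend
# sees only that card as device 0. Binary name, flags, and paths are
# illustrative assumptions.
import os
import subprocess

prompts = ["a red fox", "a snowy mountain", "a lighthouse at dusk"]
num_gpus = 2

procs = []
for i, prompt in enumerate(prompts):
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = str(i % num_gpus)  # HIP_VISIBLE_DEVICES for ROCm
    procs.append(subprocess.Popen(
        ["sd", "-m", "model.safetensors", "-p", prompt, "-o", f"out_{i}.png"],
        env=env,
    ))

for p in procs:
    p.wait()
```

An in-process --main-gpu equivalent would replace this per-process environment juggling with a single argument per job.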
Additional context
The absence of this feature blocks third-party wrappers from using stable-diffusion.cpp at scale: shakfu/cyllama#11