Having built-in ROCm support would be great!
What is unclear exactly? You run … Regarding Nunchaku/SVDQ: …
For local testing of both the docker image and the UI installer it can be helpful to run the local file server instead of downloading from huggingface. From repo root do … To use the local file server with docker, pass …
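The repo's own file-server script isn't reproduced above, but as a generic stand-in for testing, a few lines of Python's stdlib can serve a local models directory over HTTP (the function name and directory are assumptions for illustration, not KAD code):

```python
import threading
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

def serve_directory(path: str, port: int = 0) -> HTTPServer:
    """Serve `path` over HTTP on localhost; port 0 picks a free port."""
    handler = partial(SimpleHTTPRequestHandler, directory=path)
    server = HTTPServer(("127.0.0.1", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# Example: serve a local model cache instead of hitting huggingface:
# server = serve_directory("models")
# print(f"serving on http://127.0.0.1:{server.server_port}/")
```

The installer or container would then be pointed at that local URL instead of the huggingface one.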
That's helpful, thank you. For the … For SVDQ, I have Nunchaku working in my custom nvidia …
Likely you're missing a …
@Acly Yup, that was it. I've got everything building now, but do have an infrastructure question. It seems the …
Each of the above would also get a version tag as they were updated on releases. Any objections to this?
You probably saw there's a base version already; I use it to e.g. build the cloud images. But what's the use case for making them public? Just customization? I think it's more likely people will just find one of the many ComfyUI images and add the few things Krita needs. The existing image is mainly used by people who don't know much about docker and need something that just works. I believe nginx is required to expose ComfyUI on typical hosters, for the websocket-over-http at least. The other stuff also doesn't hurt much, all things considered. Building on GH actions would be nice; so far the images are pushed to https://hub.docker.com/r/aclysia/sd-comfyui-krita/tags though.
I updated models.json to include a non-Nunchaku version of Kontext (and fp4 versions for NVIDIA Blackwell). Also, download_models.py should automatically fetch the matching set of models for detected hardware now; see 21e7183. Haven't tested this with the docker images yet though.
In #2228 I have it working / installing ROCm dependencies for the diffusers pipeline.
Is this PR still relevant and applicable in March 2026? I thought ROCm support was added already. |
There is no ROCm install option. Problem is not the code, it's finding someone to test/maintain it. See #2315 |
I would just like to point out that the only thing you need to alter (at least for Linux) is the PyTorch URL you are pulling from. The only thing required to manually set this up is to run … if you want to pull from AMD's site, or … if you want to pull from PyTorch's website. Nothing else needs to change.
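For reference, the official PyTorch wheel indices make this a one-line difference per backend. A sketch of how an installer might pick the index URL (the specific `cu124`/`rocm6.2` versions below are examples that change over releases, so check pytorch.org for current ones):

```python
def torch_index_url(backend: str) -> str:
    """Map a backend name to its official PyTorch wheel index.

    Versions shown are examples; newer indices supersede them.
    """
    urls = {
        "cuda": "https://download.pytorch.org/whl/cu124",
        "rocm": "https://download.pytorch.org/whl/rocm6.2",
        "cpu": "https://download.pytorch.org/whl/cpu",
    }
    return urls[backend]

def pip_install_command(backend: str) -> list[str]:
    """Build the pip invocation; only the index URL differs per backend."""
    return ["pip", "install", "torch", "torchvision",
            "--index-url", torch_index_url(backend)]
```

This supports the commenter's point: the delta between a CUDA and a ROCm install is essentially which index `pip` is pointed at.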
Yeah, this is kinda what I meant with "problem is not the code"; it's a simple change. But I dislike publishing things I can't test, and I have seen various comments about pulling from this ROCm repo for certain GPUs, from another repo for certain other GPUs, sometimes needing nightly builds, etc. So I can put it in, no problem, but then people open issues that it's not working for them and there's not much I can do. I was hoping someone would at least show some interest in testing and commenting on such issues.
I'd gladly try it on my machine (Ryzen 7700 with iGPU + Radeon RX 7900 GRE) in return for having it available, but I'd probably need some guidance if things get too technical, if you're eager to provide it. I'm not a coder but I managed to make a couple local models work (ROCm + LMStudio for local LLM models, or PyTorch + Whisper for local speech recognition). I'm using Manjaro (Arch btw). Let me know if I can help and how. |
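For anyone volunteering to test, a quick sanity check works on both CUDA and ROCm builds of PyTorch, since ROCm wheels expose the AMD GPU through the regular `torch.cuda` API and set `torch.version.hip`. This snippet degrades gracefully when torch isn't installed:

```python
def backend_report() -> str:
    """Report which torch backend (if any) is usable on this machine."""
    try:
        import torch
    except ImportError:
        return "torch not installed"
    hip = getattr(torch.version, "hip", None)
    if hip:
        # ROCm builds of torch reuse the torch.cuda API for AMD GPUs.
        return f"ROCm/HIP {hip}, device available: {torch.cuda.is_available()}"
    if torch.cuda.is_available():
        return f"CUDA {torch.version.cuda}"
    return "CPU only"

print(backend_report())
```

A tester with the RX 7900 GRE above would expect a "ROCm/HIP …, device available: True" line after a correct install.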
This DRAFT PR is under development to facilitate conversation about a formalized ROCm implementation in Krita AI Diffusion.

Outside of KAD, I have a functioning `Dockerfile` that supports KAD with all features of SD1.5, SDXL, and FLUX (including Kontext, but not SVDQ quants, since Nunchaku does not support ROCm). This PR will port my work directly into KAD so that it can formally support ROCm going forward, but I will need some guidance to complete it as I'm not familiar with the code base.

Working:
…

In progress:
- `Dockerfile` based on the successful external install
- `models` to throw errors on lack of SVDQ (if needed, not yet in PR)

To do:
…

I'm going to assume there are a number of things in the to-do list that I'm missing. My biggest question atm is that there is no documentation on how to use the `docker.py` script in a development environment to build the container, as the ComfyUI setup is done outside of the `Dockerfile` (my custom setup does the full build in the `Dockerfile`).