
Add Anima architecture support#2450

Open
FeepingCreature wants to merge 1 commit into Acly:main from FeepingCreature:feature/anima

Conversation

Contributor

@FeepingCreature FeepingCreature commented Apr 14, 2026

Anima is a promising up-and-comer! It's not super slow like Flux, but it has much better prompt understanding than anything OpenCLIP-based. It's still in testing, but it looks to be the first model I'd cautiously call a straight improvement over SDXL.

ComfyUI supports it out of the box, so there's almost nothing to do. There are only two finicky details: there's no model class for it yet, so we have to recognize it by filename, and the preview releases don't bake in the VAE and TE (because they're frozen), so they have to be loaded separately even though it's a standard safetensors load node.

Also, of course, there are no ControlNets yet, though a first derived model has already popped up, so it's probably just a matter of time.

(thanks claude for code!)

- resources.py: Add Arch.anima enum member with text_encoders (qwen_3_06b),
  supports_attention_guidance, search paths for TE/VAE, required resource IDs
- workflow.py: Load separate CLIP (omnigen2 type) and VAE for anima in
  load_checkpoint_with_lora()
- comfy_client.py: Add qwen_3_06b to text encoder model discovery
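The detection change in resources.py can be sketched as follows. This is a minimal, self-contained illustration of recognizing Anima by filename when architecture detection reports "unknown"; the `Arch` members shown and the `detect_arch` helper are assumptions modeled on the snippet quoted in the review thread, not the repository's exact code:

```python
from enum import Enum

class Arch(Enum):
    sdxl = "sdxl"
    qwen = "qwen-image"
    anima = "anima"
    unknown = "unknown"

def detect_arch(string: str, filename: str) -> Arch:
    """Map a detected architecture string to an Arch member.

    Anima has no dedicated model class in ComfyUI yet, so when
    detection reports "unknown" we fall back to checking the
    checkpoint filename.
    """
    if string == "qwen-image":
        return Arch.qwen
    if string == "anima" or (string == "unknown" and "anima" in filename.lower()):
        return Arch.anima
    return Arch.unknown
```

The filename fallback is deliberately last, so an explicit architecture string always wins over a coincidental "anima" in the filename.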

Co-authored-by: Forge AI (github.com/FeepingCreature/forge) <noreply@forge-ai.invalid>
Comment thread ai_diffusion/resources.py
```python
        return Arch.qwen_l
    if string == "qwen-image":
        return Arch.qwen
    if string == "anima" or (string == "unknown" and "anima" in filename):
```
Owner

I think this should already be detected, but probably as lumina2, can you check?
Since base Lumina 2 isn't supported, we might as well map it to Anima.

Comment thread ai_diffusion/workflow.py
```python
match (model_info.format, model_info.quantization):
    case (FileFormat.checkpoint, Quantization.none):
        model, clip, vae = w.load_checkpoint(model_info.filename)
        if arch is Arch.anima:
```
Owner

This shouldn't be needed if the model is correctly located in diffusion_models / unet folder.

Technically people could make an Anima "checkpoint" with integrated TE/VAE and this would prevent it from being loaded... not sure how important that is, but since other models don't do this at the moment I'd remove it for now.
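For context, the separate-load path under discussion looks roughly like the following sketch. The `Workflow` stub and `load_anima` helper are illustrative stand-ins, not the repository's code; the node names `UNETLoader`, `CLIPLoader`, and `VAELoader` are real ComfyUI node classes, and `omnigen2` is the CLIP type the PR description mentions:

```python
class Workflow:
    """Minimal stand-in for a ComfyUI workflow builder (illustrative only)."""
    def __init__(self):
        self.nodes = []

    def add(self, node_type: str, **inputs):
        # Record the node and return its id, mimicking graph construction.
        self.nodes.append((node_type, inputs))
        return len(self.nodes) - 1

def load_anima(w: Workflow, unet_file: str, te_file: str, vae_file: str):
    # Anima preview releases don't bake in TE/VAE, so the diffusion
    # model, text encoder, and VAE are loaded as three separate files
    # instead of one CheckpointLoader call.
    model = w.add("UNETLoader", unet_name=unet_file)
    clip = w.add("CLIPLoader", clip_name=te_file, type="omnigen2")
    vae = w.add("VAELoader", vae_name=vae_file)
    return model, clip, vae
```

This also illustrates the reviewer's point: if the diffusion model sits in the diffusion_models / unet folder, the generic separate-load path handles it without an Anima special case inside the checkpoint branch.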
