@benliang99 (Contributor)
Release 2.2.9 – Challenge Expansion

Release Date: April 14, 2025

Version 2.2.9 introduces CogView4 for text-to-image (t2i) generation and adds a new real-face image dataset. This release also adds model coverage for image-to-video (i2v) generation, which will be integrated into challenges in future releases.

Updates

Model Changes

  • Removed Models:
    • Removed stable-diffusion-v1-5/stable-diffusion-inpainting due to deprecation and inaccessibility

Support for Future Image-to-Video Models

  • Added model configuration in config.py
  • Enhanced prompt generation with motion-specific descriptions

Relevant code:
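The exact diff is linked from the PR rather than reproduced here. As a rough illustration only, an i2v entry added to `config.py` might look like the sketch below; the field names and values are assumptions, not the subnet's actual schema (the model name comes from the commit messages in this PR):

```python
# Hypothetical sketch of the I2V_MODELS configuration added to config.py.
# The model identifier appears in this PR's commits; all field names
# and values below are illustrative assumptions.
I2V_MODELS = {
    "THUDM/CogVideoX1.5-5B-I2V": {
        "pipeline_cls": "CogVideoXImageToVideoPipeline",
        "num_frames": 49,   # frame-count constraints vary by model
        "fps": 16,          # consumed later when encoding the output video
        "enable_model_cpu_offload": True,  # memory optimization noted in the PR
    },
}
```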

New Models and Datasets

Enhanced Prompt Generation for Video Tasks

The prompt generator has been updated to support video-specific descriptions:

  • Added structured motion descriptions for different scene types
  • Implemented camera motion and ambient motion combinations
  • Enhanced prompt formatting for better video generation results
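The camera-motion and ambient-motion combination described above could be sketched roughly as follows. This is a minimal illustration, not the subnet's implementation; the function name, motion phrases, and scene-type keys are all hypothetical:

```python
import random

# Illustrative motion vocabularies; the real generator's phrasing may differ.
CAMERA_MOTIONS = ["slow pan left", "gentle dolly in", "static camera"]
AMBIENT_MOTIONS = {
    "outdoor": ["leaves rustling in the wind", "clouds drifting overhead"],
    "indoor": ["dust motes floating in sunlight", "curtains swaying slightly"],
}

def add_motion(prompt: str, scene_type: str, rng: random.Random) -> str:
    """Append one ambient motion (chosen by scene type) and one camera motion."""
    camera = rng.choice(CAMERA_MOTIONS)
    ambient = rng.choice(AMBIENT_MOTIONS.get(scene_type, AMBIENT_MOTIONS["outdoor"]))
    return f"{prompt}, {ambient}, {camera}"
```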

Technical Details

Key changes include:

  1. Added I2V_MODELS configuration
  2. Enhanced prompt generation with task-aware descriptions
  3. Updated task handling throughout the generation pipeline
  4. Added fps configuration from model settings
  5. Integrated CogView4-6B model configuration

Impact

The enhanced prompt generation system is designed to produce higher-quality, more natural motion in generated videos. The new dataset and t2i model broaden the subnet's challenge coverage.

Breaking Changes

  • Updated diffusers requirement
  • Modified prompt generation to include task parameter
  • Removed deprecated inpainting model configuration
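The second breaking change (the new task parameter) implies existing callers of the prompt generator must be updated. A hedged sketch of the shape of that change, with an entirely hypothetical function name and motion phrasing:

```python
# Sketch of the breaking signature change: prompt generation now requires a
# task argument. The function name and branching are illustrative, not the
# subnet's exact API.
def generate_prompt(caption: str, task: str) -> str:
    if task == "i2v":
        # Video tasks get motion-specific additions per the release notes.
        return f"{caption}, smooth natural motion, cinematic camera movement"
    return caption  # t2i prompts pass through unchanged

# Callers that previously wrote generate_prompt(caption) must now pass a task:
prompt = generate_prompt("a lighthouse at dusk", task="i2v")
```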

dylanuys and others added 30 commits November 19, 2024 17:17
* adding rich arg, adding coldkeys and hotkeys

* moving rich to payload from headers

* bump version

---------

Co-authored-by: benliang99 <caliangben@gmail.com>
Adding two finetuned image models to expand validator challenges
Updated transformers version to fix tokenizer initialization error
* Made gpu id specification consistent across synthetic image generation models

* Changed gpu_id to device

* Docstring grammar

* add neuron.device to SyntheticImageGenerator init

* Fixed variable names

* adding device to start_validator.sh

* deprecating old/biased random prompt generation

* properly clear gpu of moderation pipeline

* simplifying usage of self.device

* fixing moderation pipeline device

* explicitly defining model/tokenizer for moderation pipeline to avoid accelerate auto device management

* deprecating random prompt generation

---------

Co-authored-by: benliang99 <caliangben@gmail.com>
bump version
* simple video challenge implementation wip

* dummy multimodal miner

* constants reorg

* updating verify_models script with t2v

* fixing MODEL_PIPELINE init

* cleanup

* __init__.py

* hasattr fix

* num_frames must be divisible by 8

* fixing dict iteration

* dummy response for videos

* fixing small bugs

* fixing video logging and compression

* apply image transforms uniformly to frames of video

* transform list of tensor to pil for synapse prep

* cleaning up vali forward

* miner function signatures to use Synapse base class instead of ImageSynapse

* vali requirements imageio and moviepy

* attaching separate video and image forward functions

* separating blacklist and priority fns for image/video synapses

* pred -> prediction

* initial synth video challenge flow

* initial video cache implementation

* video cache cleanup

* video zip downloads

* wip fairly large refactor of data generation, functionality and form

* generalized hf zip download fn

* had claude improve video_cache formatting

* vali forward cleanup

* cleanup + turning back on randomness for real/fake

* fix relative import

* wip moving video datasets to vali config

* Adding optimization flags to vali config

* check if captioning model already loaded

* async SyntheticDataGenerator wip

* async zip download

* ImageCache wip

* proper gpu clearing for moderation pipeline

* sdg cleanup

* new cache system WIP

* image/video cache updates

* cleaning up unused metadata arg, improving logging

* fixed frame sampling, parquet image extraction, image sampling

* synth data cache wip

* Moving sgd to its own pm2 process

* synthetic data gen memory management update

* mochi-1-preview

* util cleanup, new requirements

* ensure SyntheticDataGenerator process waits for ImageCache to populate

* adding new t2i models from main

* Fixing t2v model output saving

* miner cleanup

* Moving tall model weights to bitmind hf org

* removing test video pkl

* fixing circular import

* updating usage of hf_hub_download according to some breaking huggingface_hub changes

* adding ffmpeg to vali reqs

* adding back in video models in async generation after testing

* renaming UCF directory to DFB, since it now contains TALL

* remaining renames for UCF -> DFB

* pyffmpegg

* video compatible data augmentations

* Default values for level, data_aug_params for failure case

* switching image challenges back on

* using sample variable to store data for all challenge types

* disabling sequential_cpu_offload for CogVideoX5b

* logging metadata fields to w&b

* log challenge metadata

* bump version

* adding context manager for generation w different dtypes

* variable name fix in ComposeWithTransforms

* fixing broken DFB stuff in tall_detector.py

* removing unnecessary logging

* fixing outdated variable names

* cache refactor; moving shared functionality to BaseCache

* finally automating w&b project setting

* improving logs

* improving validator forward structure

* detector ABC cleanup + function headers

* adding try except for miner performance history loading

* fixing import

* cleaning up vali logging

* pep8 formatting video_utils

* cleaning up start_validator.sh, starting validator process before data gen

* shortening vali challenge timer

* moving data generation management to its own script & added w&B logging

* run_data_generator.py

* fixing full_path variable name

* changing w&b name for data generator

* yaml > json gang

* simplifying ImageCache.sample to always return one sample

* adding option to skip a challenge if no data are available in cache

* adding config vars for image/video detector

* cleaning up miner class, moving blacklist/priority to base

* updating call to image_cache.sample()

* fixing mochi gen to 84 frames

* fixing video data padding for miners

* updating setup script to create new .env file

* fixing weight loading after detector refactor

* model/detector separation for TALL & modifying base DFB code to allow device configuration

* standardizing video detector input to a frames tensor

* separation of concerns; moving all video preprocessing to detector class

* pep8 cleanup

* reformatting if statements

* temporarily removing initial dataset class

* standardizing config loading across video and image models

* finished VideoDataloader and supporting components

* moved save config file out of train script

* backwards compatibility for ucf training

* moving data augmentation from RealFakeDataset to Dataset subclasses for video aug support

* cleaning up data augmentation and target_image_size

* import cleanup

* gitignore update

* fixing typos picked up by flake8

* fixing function name ty flake8

* fixing test fixtures

* disabling pytests for now, some are broken after refactor and its 4am
dylanuys and others added 28 commits March 31, 2025 18:27
… prompts

- Add CogVideoX1.5-5B-I2V model config
- Enhance prompt generation with task-specific motion descriptions
- Add memory optimizations for video generation
- Update task handling across generator pipeline
- Reduce complexity of motion instructions
- Add logic to detect and preserve existing motion
- Improve handling of static vs dynamic subjects
Release 2.2.9 – Challenge Expansion
Removed generate_from_prompt function used for external dataset generation
@benliang99 benliang99 merged commit 3bfb745 into main Apr 14, 2025
2 checks passed