Update dependency accelerate to v1.12.0 #97

Open

renovate[bot] wants to merge 1 commit into main from renovate/accelerate-1.x

Conversation


@renovate renovate bot commented Jul 16, 2025

This PR contains the following updates:

Package: accelerate
Change: ==1.8.1 -> ==1.12.0

Release Notes

huggingface/accelerate (accelerate)

v1.12.0: DeepSpeed Ulysses/ALST

Compare Source

DeepSpeed Ulysses/ALST integration

DeepSpeed Ulysses/ALST is an efficient way of training on long sequences: it employs sequence parallelism and attention-head parallelism. You can learn more about this technology in the paper https://arxiv.org/abs/2506.13996 or the DeepSpeed tutorial https://www.deepspeed.ai/tutorials/ulysses-alst-sequence-parallelism/.


To enable DeepSpeed Ulysses, you first need to create a ParallelismConfig and set the sequence-parallelism (sp) related args:

parallelism_config = ParallelismConfig(
    sp_backend="deepspeed",  # use DeepSpeed's Ulysses/ALST sequence parallelism
    sp_size=2,               # sequence-parallel world size
    sp_handler=DeepSpeedSequenceParallelConfig(...),  # backend-specific settings (elided)
)
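
The config is then handed to the Accelerator in the same way as the parallelism_config usage shown in the v1.10.0 notes below (a minimal sketch under that assumption; remaining Accelerator arguments are omitted):

from accelerate import Accelerator

# Sketch: pass the sequence-parallel config to the Accelerator
# (same pattern as the N-D parallelism example in v1.10.0 further down).
accelerator = Accelerator(parallelism_config=parallelism_config)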

Then, you need to make sure to compute the loss correctly across sequence-parallel ranks, as described in our docs:

        ...
        # Gather each rank's loss and its count of non-ignored label tokens
        # over the sequence-parallel group.
        losses_per_rank = torch.distributed.nn.functional.all_gather(loss, group=sp_group)
        good_tokens = (shift_labels != -100).view(-1).sum()
        good_tokens_per_rank = torch.distributed.nn.functional.all_gather(good_tokens, group=sp_group)
        # Weight each rank's loss by its token count, then normalize by the
        # total number of non-ignored tokens across the group.
        total_loss = sum(
            losses_per_rank[rank] * good_tokens_per_rank[rank]
            for rank in range(sp_world_size)
            if good_tokens_per_rank[rank] > 0
        )
        total_good_tokens = sum(good_tokens_per_rank)
        loss = total_loss / max(total_good_tokens, 1)

Thanks to @S1ro1 for starting this work and to @stas00 for finishing it. Also thanks to @kashif for adding docs and reviewing/testing this PR!

This feature will also be available in the HF Trainer thanks to this PR from @stas00: huggingface/transformers#41832

Minor changes

New Contributors

Full Changelog: huggingface/accelerate@v1.11.0...v1.12.0

v1.11.0: TE MXFP8, FP16/BF16 with MPS, Python 3.10

Compare Source

TE MXFP8 support

We've added support for MXFP8 in our TransformerEngine integration. To use it, you need to set use_mxfp8_block_scaling in fp8_config. See the NVIDIA docs here: https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/examples/fp8_primer.html#MXFP8-and-block-scaling
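
A rough sketch of what this can look like, assuming the TransformerEngine recipe is passed through accelerate's kwargs handlers (the TERecipeKwargs class name and wiring here are assumptions; check the FP8 docs for your installed version):

from accelerate import Accelerator
from accelerate.utils import TERecipeKwargs

# Assumption: use_mxfp8_block_scaling is exposed on the TransformerEngine
# recipe kwargs and enables MXFP8 block scaling for FP8 training.
fp8_recipe = TERecipeKwargs(use_mxfp8_block_scaling=True)
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[fp8_recipe])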

FP16/BF16 Training for MPS devices

BF16 and FP16 support for MPS devices is finally here. You can now pass mixed_precision="fp16" or "bf16" when training on a Mac (fp16 requires torch 2.8 and bf16 requires torch 2.6).
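
For example, a minimal sketch on an Apple Silicon machine (the toy model and optimizer are placeholders for illustration):

import torch
from accelerate import Accelerator

# bf16 on MPS needs torch >= 2.6; fp16 needs torch >= 2.8.
accelerator = Accelerator(mixed_precision="bf16")

model = torch.nn.Linear(16, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
model, optimizer = accelerator.prepare(model, optimizer)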

FSDP updates

The following PRs respectively add support for ignored_params and no_sync() for FSDPv2:

Mixed precision can now be passed as a dtype string via the accelerate CLI flag or via fsdp_config in the accelerate config file:

Nd-parallel updates

Some minor updates concerning N-D parallelism.

Bump to Python 3.10

We've dropped support for Python 3.9 as it reached EOL in October.

Lots of minor fixes:

New Contributors

Full Changelog: huggingface/accelerate@v1.10.1...v1.11.0

v1.10.1: Patchfix

Compare Source

Full Changelog: huggingface/accelerate@v1.10.0...v1.10.1

v1.10.0: N-D Parallelism

Compare Source

N-D Parallelism

Training large models across multiple GPUs can be complex, especially when combining different parallelism strategies (e.g. TP, CP, DP). To simplify this process, we've collaborated with Axolotl to introduce an easy-to-use integration that allows you to apply any combination of parallelism strategies directly in your training script. Just pass a ParallelismConfig specifying the size of each parallelism type; it's that simple.
Learn more about how it works in our latest blog post.

parallelism_config = ParallelismConfig(
    dp_shard_size=2,      # sharded (FSDP-style) data parallelism
    dp_replicate_size=2,  # replicated (DDP-style) data parallelism
    cp_size=2,            # context parallelism
    tp_size=2,            # tensor parallelism
)
accelerator = Accelerator(
    parallelism_config=parallelism_config,
    ...
)
model = AutoModelForCausalLM.from_pretrained("your-model-name", device_mesh=accelerator.torch_device_mesh)
model = accelerator.prepare(model)

FSDP improvements

We've fixed the ignored modules attribute. With this, it is now possible to train a PEFT model with MoE layers that contain q_proj and v_proj parameters. This is especially important for fine-tuning the gpt-oss model.

Minor improvements

New Contributors

Full Changelog: huggingface/accelerate@v1.9.0...v1.10.0

v1.9.0: Trackio support, Model loading speedup, Minor distributed improvements

Compare Source

Trackio tracker support

We've added support for trackio, a lightweight, 💯 free experiment tracking Python library built on top of 🤗 Datasets and Spaces.

Main features are:

  • Local-first design: dashboard runs locally by default. You can also host it on Spaces by specifying a space_id.
  • Persists logs locally (or in a private Hugging Face Dataset)
  • Visualize experiments with a Gradio dashboard locally (or on Hugging Face Spaces)
  • Everything here, including hosting on Hugging Face, is free!

To use it with accelerate, you need to set log_with and initialize the trackers:

accelerator = Accelerator(log_with="trackio")
config = {"learning_rate": 0.001, "batch_size": 32}

# init_kwargs in order to host the dashboard on Spaces
init_kwargs = {"trackio": {"space_id": "hf_username/space_name"}}
accelerator.init_trackers("example_project", config=config, init_kwargs=init_kwargs)
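
After initialization, metrics are logged through the usual tracking API (a minimal sketch; the logged values are placeholders):

for step in range(10):
    accelerator.log({"train_loss": 1.0 / (step + 1)}, step=step)

accelerator.end_training()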

Thanks @pcuenca for the integration!

Model loading speedup when relying on set_module_tensor_to_device

Setting a tensor while clearing the cache is very slow, so we added a clear_device option to disable it.
Another small optimization is using non_blocking everywhere and syncing just before returning control to the user. This makes loading slightly faster.
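
For reference, a minimal sketch of the helper these optimizations apply to (the toy module and value are placeholders; the cache-clearing and non_blocking behavior happens inside the call):

import torch
from accelerate.utils import set_module_tensor_to_device

# Place a single named parameter of a module on a target device.
model = torch.nn.Linear(8, 8)
set_module_tensor_to_device(model, "weight", "cpu", value=torch.randn(8, 8))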

FSDP, DeepSpeed, FP8 minor improvements

🚨🚨🚨 Breaking changes 🚨🚨🚨

find_executable_batch_size() will no longer halve the batch size after every OOM. Instead, we will multiply the batch size by 0.9. This should help users not waste GPU capacity.
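
As a reminder of how the helper is used (a minimal sketch; the print stands in for a real training loop, and the retry only triggers on an out-of-memory error):

from accelerate.utils import find_executable_batch_size

@find_executable_batch_size(starting_batch_size=128)
def inner_training_loop(batch_size):
    # On OOM, the decorator retries with a smaller batch size
    # (now batch_size * 0.9 per retry instead of halving it).
    print(f"Training with batch size {batch_size}")

inner_training_loop()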

What's Changed

New Contributors

Full Changelog: huggingface/accelerate@v1.8.1...v1.9.0


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate renovate bot changed the title Update dependency accelerate to v1.9.0 Update dependency accelerate to v1.10.0 Aug 7, 2025
@renovate renovate bot force-pushed the renovate/accelerate-1.x branch from 43f17a7 to ce6ced7 Compare August 7, 2025 12:43
@renovate renovate bot force-pushed the renovate/accelerate-1.x branch from ce6ced7 to 167048b Compare August 25, 2025 18:52
@renovate renovate bot changed the title Update dependency accelerate to v1.10.0 Update dependency accelerate to v1.10.1 Aug 25, 2025
@renovate renovate bot changed the title Update dependency accelerate to v1.10.1 Update dependency accelerate to v1.11.0 Oct 20, 2025
@renovate renovate bot force-pushed the renovate/accelerate-1.x branch from 167048b to 5e2530c Compare October 20, 2025 17:57
@renovate renovate bot force-pushed the renovate/accelerate-1.x branch from 5e2530c to a3e6c13 Compare November 21, 2025 13:40
@renovate renovate bot changed the title Update dependency accelerate to v1.11.0 Update dependency accelerate to v1.12.0 Nov 21, 2025
@renovate renovate bot force-pushed the renovate/accelerate-1.x branch from a3e6c13 to 1a5afb0 Compare December 15, 2025 14:33