
Support merging gating gmm kernels #3199

Open

BirdsOfAFthr wants to merge 1 commit into main from amandaliang

Conversation

@BirdsOfAFthr (Collaborator) commented Feb 20, 2026

Description

This PR optimizes the MoE compute block by merging the two gating GMM kernels ($W_0$ and $W_1$) into a single, unified matrix multiplication pass.

Motivation

In the previous SwiGLU/GLU implementation, the gate-projection and up-projection were processed by two sequential gmm_fn calls. By concatenating these weights and processing them in a single call, we effectively double the contiguous hidden dimension of the kernel. This is especially critical for FP8 runs using Expert Parallelism (EP) that shard along the contracting dimension: that sharding strategy shrinks the local MLP hidden dimension on each device, so the matrix multiplications can become small and bottlenecked by memory bandwidth. Merging $W_0$ and $W_1$ effectively gives us a 2X increase in that local dimension, restoring arithmetic intensity and hardware utilization.
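
For illustration, here is a minimal JAX sketch of the transformation, using plain `jnp.dot` as a stand-in for the grouped `gmm_fn` kernel. The function and variable names (`gating_unmerged`, `gating_merged`, `w0`, `w1`) are assumptions for this sketch, not the actual code in `deepseek_batchsplit.py`.

```python
import jax
import jax.numpy as jnp

# Illustrative sketch only; shapes and names are assumptions, not the MaxText code.
#   x:  [tokens, hidden]   activations routed to one expert
#   w0: [hidden, mlp]      gate projection (W_0)
#   w1: [hidden, mlp]      up projection (W_1)

def gating_unmerged(x, w0, w1):
  # Previous layout: two separate matmuls (two gmm_fn dispatches in the real kernel).
  gate = jnp.dot(x, w0)
  up = jnp.dot(x, w1)
  return jax.nn.silu(gate) * up

def gating_merged(x, w0, w1):
  # Merged layout: concatenate along the output (mlp) dimension so one matmul
  # covers both projections, then split the result back into gate and up.
  w01 = jnp.concatenate([w0, w1], axis=-1)   # [hidden, 2 * mlp]
  fused = jnp.dot(x, w01)                    # single GMM with a 2X larger N dimension
  gate, up = jnp.split(fused, 2, axis=-1)
  return jax.nn.silu(gate) * up

x = jnp.ones((8, 16))
w0 = jnp.full((16, 32), 0.01)
w1 = jnp.full((16, 32), 0.02)
assert jnp.allclose(gating_unmerged(x, w0, w1), gating_merged(x, w0, w1), atol=1e-6)
```

In the grouped-matmul setting the same trick applies per expert: concatenating along the non-contracting mlp dimension doubles the local output size of each GMM, which is the 2X increase in local dimension described above.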
Expected Impact

Performance: increased forward and backward pass throughput for the MoE layers, particularly in EP setups sharded along the contracting dimension, where the local GMM sizes are 2X larger.

Tests

The operation is mathematically equivalent to the previous implementation. Quality has been verified through a convergence test.

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code. For an optional AI review, add the gemini-review label.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have run end-to-end tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed, including adding new documentation pages to the relevant Table of Contents (toctree directive) as explained in our documentation.


codecov bot commented Feb 20, 2026

Codecov Report

❌ Patch coverage is 0% with 6 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| src/maxtext/models/deepseek_batchsplit.py | 0.00% | 6 Missing ⚠️ |


@suexu1025 (Collaborator) left a comment


LGTM, thanks!
