
Deepseek sharding for vLLM and MLA kernel plumbing #3072

Draft

khatwanimohit wants to merge 1 commit into main from mohit/deepseek_vllm

Conversation


@khatwanimohit khatwanimohit commented Feb 3, 2026

Description

  • Adds vLLM Ragged Paged Attention (RPA) support for Multi-Latent Attention (MLA) in attention_mla.py, using tpu_inference.kernels.mla.v1.kernel.mla_ragged_paged_attention (see the kernel-call sketch after this list)
  • Implements an mla_rpa_vllm() method for vLLM serving, with query absorption and tuned block sizes
  • Adds a new vllm_deepseek.yml config with expert-parallelism sharding rules for DeepSeek models (see the config sketch below)
  • Updates the deepseek.py layers to pass kv_cache and attention_metadata through the attention pipeline
  • Modifies KV cache initialization to skip it in the vllm_rpa attention mode (see the initialization sketch below)
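
A minimal sketch of the call path described in the first two bullets. Only the import path and the method name `mla_rpa_vllm` come from this PR; the kernel's signature, the keyword-argument names, the tensor shapes, and the `w_uk` parameter are all assumptions for illustration.

```python
import jax.numpy as jnp
from tpu_inference.kernels.mla.v1.kernel import mla_ragged_paged_attention


def mla_rpa_vllm(query, w_uk, kv_cache, attention_metadata,
                 num_kv_pages_per_block=8, num_queries_per_block=32):
  """Hypothetical wrapper around the RPA kernel named in this PR.

  Query absorption: fold the KV up-projection W_UK into the query so
  attention can run directly against the compressed latent KV cache.
  """
  # query: [tokens, num_heads, head_dim]; w_uk: [num_heads, head_dim, latent_dim]
  absorbed_q = jnp.einsum("tnh,nhl->tnl", query, w_uk)
  # These block-size kwargs stand in for the "tuned block sizes" in the
  # description; the real parameter names in the kernel may differ.
  return mla_ragged_paged_attention(
      absorbed_q,
      kv_cache,
      attention_metadata,
      num_kv_pages_per_block=num_kv_pages_per_block,
      num_queries_per_block=num_queries_per_block,
  )
```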
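For the third bullet, a hedged sketch of what an expert-parallelism rule in vllm_deepseek.yml might look like. Every key, value, and axis name below is an assumption modeled on MaxText's logical_axis_rules convention, not copied from the PR diff.

```yaml
# Hypothetical excerpt from vllm_deepseek.yml (names are illustrative only)
ici_expert_parallelism: 8
logical_axis_rules:
  - ["exp", ["expert"]]    # shard MoE expert weights across the expert mesh axis
  - ["mlp", ["tensor"]]    # shard the FFN hidden dim across the tensor axis
```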
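The last two bullets describe plumbing rather than new math. A sketch of the initialization guard, assuming `config.attention` takes the value "vllm_rpa" as stated in the description; the helper name and signature are invented.

```python
import jax.numpy as jnp


def maybe_init_kv_cache(config, cache_shape):
  """Illustrative guard only: not the PR's actual function."""
  if config.attention == "vllm_rpa":
    # In RPA serving, vLLM owns the paged KV cache, so MaxText skips
    # allocating its own cache tensors and instead threads the vLLM-provided
    # kv_cache and attention_metadata through the deepseek.py attention layers.
    return None
  return jnp.zeros(cache_shape, dtype=jnp.bfloat16)
```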

If the change fixes a bug or a GitHub issue, please include a link, e.g.:
FIXES: b/123456
FIXES: #123456

Notice 1: Once all tests pass, the "pull ready" label will automatically be assigned.
This label is used for administrative purposes. Please do not add it manually.

Notice 2: For external contributions, our settings currently require an approval from a MaxText maintainer to trigger CI tests.

Tests

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • [ ] I have performed a self-review of my code. For an optional AI review, add the gemini-review label.
  • [ ] I have necessary comments in my code, particularly in hard-to-understand areas.
  • [ ] I have run end-to-end tests and provided workload links above, if applicable.
  • [ ] I have made or will make corresponding changes to the doc if needed, including adding new documentation pages to the relevant Table of Contents (toctree directive) as explained in our documentation.


codecov bot commented Feb 3, 2026

Codecov Report

❌ Patch coverage is 31.88406% with 47 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| src/MaxText/layers/attention_mla.py | 14.28% | 40 Missing and 2 partials ⚠️ |
| src/MaxText/layers/moe.py | 66.66% | 4 Missing ⚠️ |
| src/MaxText/rl/train_rl.py | 0.00% | 1 Missing ⚠️ |


khatwanimohit force-pushed the mohit/deepseek_vllm branch 3 times, most recently from 4b5526e to 68efcf1 on February 4, 2026 at 23:45
