feat: add MiniMax LLM provider support#197

Closed
octo-patch wants to merge 2 commits into vxcontrol:master from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch
Contributor

Summary

Add MiniMax as a new LLM provider, following the existing provider architecture pattern (similar to DeepSeek, Kimi, GLM, Qwen).

MiniMax offers an OpenAI-compatible API with high-performance models:

  • MiniMax-M2.5 — 204K context window, suitable for general dialogue, code generation, and complex reasoning
  • MiniMax-M2.5-highspeed — Optimized version with faster inference, same 204K context

Changes

New files

  • backend/pkg/providers/minimax/minimax.go — Provider implementation using langchaingo OpenAI client
  • backend/pkg/providers/minimax/config.yml — Model configurations for all option types (simple, primary_agent, assistant, generator, coder, etc.)
  • backend/pkg/providers/minimax/models.yml — Model definitions with descriptions and pricing

Modified files

  • backend/pkg/providers/provider/provider.go — Add ProviderMiniMax type constant and DefaultProviderNameMiniMax
  • backend/pkg/config/config.go — Add MINIMAX_API_KEY, MINIMAX_SERVER_URL, MINIMAX_PROVIDER environment variable config fields
  • backend/pkg/providers/providers.go — Register MiniMax in provider factory (import, default config, instantiation, NewProvider switch case)

Configuration

Set these environment variables to enable MiniMax:

MINIMAX_API_KEY=your_api_key_here
# Optional: override default server URL
MINIMAX_SERVER_URL=https://api.minimax.io/v1
# Optional: model prefix for Langfuse logging
MINIMAX_PROVIDER=

Test Plan

  • Verify Go build compiles without errors
  • Test provider instantiation with valid MiniMax API key
  • Verify model selection for different option types (simple, primary_agent, coder, etc.)
  • Test streaming and tool calling capabilities
  • Verify provider appears in the UI provider list when API key is configured

octo-patch and others added 2 commits March 12, 2026 21:19
Add MiniMax as a new LLM provider following the existing provider pattern.
MiniMax offers an OpenAI-compatible API at https://api.minimax.io/v1
with models MiniMax-M2.5 (204K context) and MiniMax-M2.5-highspeed.

Changes:
- Add backend/pkg/providers/minimax/ package with provider implementation,
  config.yml (model configs per option type), and models.yml (model definitions)
- Add ProviderMiniMax type constant and DefaultProviderNameMiniMax
- Add MINIMAX_API_KEY, MINIMAX_SERVER_URL, MINIMAX_PROVIDER config fields
- Register MiniMax provider in factory (default config, instantiation, NewProvider switch)

- Add MiniMax-M2.7 and MiniMax-M2.7-highspeed to model list
- Set MiniMax-M2.7 as default model for all agent configs
- Keep all previous models (M2.5, M2.5-highspeed) as alternatives
@octo-patch
Contributor Author

Updated to include MiniMax-M2.7 as the new default model:

  • Added MiniMax-M2.7 and MiniMax-M2.7-highspeed to model list
  • Set MiniMax-M2.7 as default model for all agent configurations
  • Retained MiniMax-M2.5 and MiniMax-M2.5-highspeed as available alternatives

MiniMax-M2.7 is the latest flagship model with enhanced reasoning and coding capabilities.

@asdek
Contributor

asdek commented Mar 25, 2026

Thank you for your contribution!

We appreciate your effort, but we need to temporarily close this PR due to an ongoing license compliance audit. We are ensuring full compliance with open-source licensing requirements before accepting new contributions.

We expect to complete this process within one week. Please join our community (Discord | Telegram) to stay updated, and feel free to resubmit your changes once we reopen for contributions.

Thank you for your understanding!

@asdek asdek closed this Mar 25, 2026