32 changes: 30 additions & 2 deletions README.md
@@ -202,7 +202,7 @@ OpenEvolve implements a sophisticated **evolutionary coding pipeline** that goes
<details>
<summary><b>Advanced LLM Integration</b></summary>

- **Universal API**: Works with OpenAI, Google, local models, and proxies
- **Universal API**: Works with OpenAI, Google, MiniMax, local models, and proxies
- **Intelligent Ensembles**: Weighted combinations with sophisticated fallback
- **Test-Time Compute**: Enhanced reasoning through proxy systems (see [OptiLLM setup](#llm-provider-setup))
- **Plugin Ecosystem**: Support for advanced reasoning plugins
@@ -281,6 +281,7 @@ docker run --rm -v $(pwd):/app ghcr.io/algorithmicsuperintelligence/openevolve:l
- **o3-mini**: ~$0.03-0.12 per iteration (more cost-effective)
- **Gemini-2.5-Pro**: ~$0.08-0.30 per iteration
- **Gemini-2.5-Flash**: ~$0.01-0.05 per iteration (fastest and cheapest)
- **MiniMax-M2.5**: ~$0.02-0.08 per iteration (204K context, OpenAI-compatible)
- **Local models**: Nearly free after setup
- **OptiLLM**: Use cheaper models with test-time compute for better results

@@ -320,6 +321,33 @@ export OPENAI_API_KEY="your-gemini-api-key"

</details>

<details>
<summary><b>🧠 MiniMax</b></summary>

[MiniMax](https://www.minimaxi.com/) offers powerful models with a 204K context window via an OpenAI-compatible API:

```yaml
# config.yaml
llm:
api_base: "https://api.minimax.io/v1"
api_key: "${MINIMAX_API_KEY}"
models:
- name: "MiniMax-M2.5"
weight: 0.6
- name: "MiniMax-M2.5-highspeed"
weight: 0.4
```
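The `weight` fields above control how often each model is sampled during evolution. A minimal sketch of proportional selection (illustrative only; `pick_model` is a hypothetical helper, not OpenEvolve's actual scheduler):

```python
import random

# Model names and weights mirror the YAML config above.
MODELS = [("MiniMax-M2.5", 0.6), ("MiniMax-M2.5-highspeed", 0.4)]

def pick_model(rng: random.Random) -> str:
    # Draw one model name with probability proportional to its weight.
    names, weights = zip(*MODELS)
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)
draws = [pick_model(rng) for _ in range(10_000)]
print(draws.count("MiniMax-M2.5") / len(draws))  # ≈ 0.6
```

With weights 0.6 and 0.4, roughly 60% of generation requests would go to the primary model and the rest to the high-speed variant.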

```bash
export MINIMAX_API_KEY="your-minimax-api-key"
```

> **Note:** MiniMax requires temperature to be in (0.0, 1.0] — zero is not accepted. The default 0.7 works well.

See [`configs/minimax_config.yaml`](configs/minimax_config.yaml) for a complete configuration example.

</details>

<details>
<summary><b>🏠 Local Models (Ollama/vLLM)</b></summary>

@@ -792,7 +820,7 @@ See the [Cost Estimation](#cost-estimation) section in Installation & Setup for

**Yes!** OpenEvolve supports any OpenAI-compatible API:

- **Commercial**: OpenAI, Google, Cohere
- **Commercial**: OpenAI, Google, Cohere, MiniMax
- **Local**: Ollama, vLLM, LM Studio, text-generation-webui
- **Advanced**: OptiLLM for routing and test-time compute

3 changes: 3 additions & 0 deletions configs/README.md
@@ -12,6 +12,9 @@ The main configuration file containing all available options with sensible defaults

Use this file as a template for your own configurations.

### `minimax_config.yaml`
A complete configuration for using [MiniMax](https://www.minimaxi.com/) models (MiniMax-M2.5, MiniMax-M2.5-highspeed) with OpenEvolve. MiniMax provides an OpenAI-compatible API with 204K context window support.

### `island_config_example.yaml`
A practical example configuration demonstrating proper island-based evolution setup. Shows:
- Recommended island settings for most use cases
74 changes: 74 additions & 0 deletions configs/minimax_config.yaml
@@ -0,0 +1,74 @@
# OpenEvolve Configuration for MiniMax
# MiniMax provides OpenAI-compatible API with powerful models like MiniMax-M2.5
# Get your API key from: https://platform.minimaxi.com/
#
# Set your API key:
# export MINIMAX_API_KEY="your-minimax-api-key"

# General settings
max_iterations: 100
checkpoint_interval: 10
log_level: "INFO"
random_seed: 42

# LLM configuration for MiniMax
llm:
api_base: "https://api.minimax.io/v1"
api_key: "${MINIMAX_API_KEY}"

# MiniMax models for evolution
models:
- name: "MiniMax-M2.5"
weight: 0.6
- name: "MiniMax-M2.5-highspeed"
weight: 0.4

# MiniMax models for LLM feedback
evaluator_models:
- name: "MiniMax-M2.5-highspeed"
weight: 1.0

# Generation parameters
# Note: MiniMax requires temperature to be in (0.0, 1.0] — zero is not accepted
temperature: 0.7
top_p: 0.95
max_tokens: 4096

# Request parameters
timeout: 120
retries: 3
retry_delay: 5

# Evolution settings
diff_based_evolution: true
max_code_length: 10000

# Prompt configuration
prompt:
system_message: "You are an expert coder helping to improve programs through evolution."
evaluator_system_message: "You are an expert code reviewer."
num_top_programs: 3
num_diverse_programs: 2
use_template_stochasticity: true
include_artifacts: true

# Database configuration
database:
population_size: 1000
num_islands: 5
migration_interval: 50
migration_rate: 0.1
feature_dimensions:
- "complexity"
- "diversity"
feature_bins: 10

# Evaluator configuration
evaluator:
timeout: 300
cascade_evaluation: true
cascade_thresholds:
- 0.5
- 0.75
- 0.9
parallel_evaluations: 4
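The `api_key: "${MINIMAX_API_KEY}"` value in this config relies on environment-variable expansion at load time. A minimal sketch of how such substitution can work (an assumption about the loader, not OpenEvolve's actual code):

```python
import os
import re

def expand_env(value: str) -> str:
    # Substitute ${VAR} with its environment value; leave unset vars untouched.
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), m.group(0)), value)

os.environ["MINIMAX_API_KEY"] = "sk-demo"
print(expand_env("${MINIMAX_API_KEY}"))         # sk-demo
print(expand_env("${OPENEVOLVE_UNSET_VAR}"))    # ${OPENEVOLVE_UNSET_VAR}
```

Leaving unset placeholders intact (rather than substituting an empty string) makes a missing `MINIMAX_API_KEY` easier to spot in error messages.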
2 changes: 1 addition & 1 deletion tests/test_valid_configs.py
@@ -24,7 +24,7 @@ def collect_files(self):
config_files.append(os.path.join(root, file))
return config_files

@patch.dict(os.environ, {"ANTHROPIC_API_KEY": "test-key-for-validation"})
@patch.dict(os.environ, {"ANTHROPIC_API_KEY": "test-key-for-validation", "MINIMAX_API_KEY": "test-key-for-validation"})
def test_import_config_files(self):
"""Attempt to import all config files"""
config_files = self.collect_files()