diff --git a/README.md b/README.md
index 740909f5e5..f6d11d637d 100644
--- a/README.md
+++ b/README.md
@@ -202,7 +202,7 @@ OpenEvolve implements a sophisticated **evolutionary coding pipeline** that goes
 Advanced LLM Integration
 
-- **Universal API**: Works with OpenAI, Google, local models, and proxies
+- **Universal API**: Works with OpenAI, Google, MiniMax, local models, and proxies
 - **Intelligent Ensembles**: Weighted combinations with sophisticated fallback
 - **Test-Time Compute**: Enhanced reasoning through proxy systems (see [OptiLLM setup](#llm-provider-setup))
 - **Plugin Ecosystem**: Support for advanced reasoning plugins
@@ -281,6 +281,7 @@ docker run --rm -v $(pwd):/app ghcr.io/algorithmicsuperintelligence/openevolve:l
 - **o3-mini**: ~$0.03-0.12 per iteration (more cost-effective)
 - **Gemini-2.5-Pro**: ~$0.08-0.30 per iteration
 - **Gemini-2.5-Flash**: ~$0.01-0.05 per iteration (fastest and cheapest)
+- **MiniMax-M2.5**: ~$0.02-0.08 per iteration (204K context, OpenAI-compatible)
 - **Local models**: Nearly free after setup
 - **OptiLLM**: Use cheaper models with test-time compute for better results
@@ -320,6 +321,33 @@ export OPENAI_API_KEY="your-gemini-api-key"
+
+<details>
+<summary>🧠 MiniMax</summary>
+
+[MiniMax](https://www.minimaxi.com/) offers powerful models with a 204K context window via an OpenAI-compatible API:
+
+```yaml
+# config.yaml
+llm:
+  api_base: "https://api.minimax.io/v1"
+  api_key: "${MINIMAX_API_KEY}"
+  models:
+    - name: "MiniMax-M2.5"
+      weight: 0.6
+    - name: "MiniMax-M2.5-highspeed"
+      weight: 0.4
+```
+
+```bash
+export MINIMAX_API_KEY="your-minimax-api-key"
+```
+
+> **Note:** MiniMax requires temperature in the range (0.0, 1.0]; zero is not accepted. The default 0.7 works well.
+
+See [`configs/minimax_config.yaml`](configs/minimax_config.yaml) for a complete configuration example.
+
+</details>
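Reviewer aside, not part of the patch: the temperature constraint in the note above is a half-open interval and is easy to mis-encode as a closed one when porting defaults from providers that accept `temperature=0`. A minimal sketch of the check; the function name is hypothetical and assumes nothing about OpenEvolve's internals:

```python
def is_valid_minimax_temperature(t: float) -> bool:
    """Check a sampling temperature against the range stated in the patch.

    The interval (0.0, 1.0] is half-open: 0.0 is rejected, 1.0 is accepted.
    """
    return 0.0 < t <= 1.0


# The config's default of 0.7 passes; a greedy-decoding default of 0.0 would not.
```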
 <details>
 <summary>🏠 Local Models (Ollama/vLLM)</summary>
 
@@ -792,7 +820,7 @@ See the [Cost Estimation](#cost-estimation) section in Installation & Setup for
 **Yes!** OpenEvolve supports any OpenAI-compatible API:
 
-- **Commercial**: OpenAI, Google, Cohere
+- **Commercial**: OpenAI, Google, Cohere, MiniMax
 - **Local**: Ollama, vLLM, LM Studio, text-generation-webui
 - **Advanced**: OptiLLM for routing and test-time compute
diff --git a/configs/README.md b/configs/README.md
index 6ce24383c1..40506def76 100644
--- a/configs/README.md
+++ b/configs/README.md
@@ -12,6 +12,9 @@ The main configuration file containing all available options with sensible defau
 Use this file as a template for your own configurations.
 
+### `minimax_config.yaml`
+A complete configuration for using [MiniMax](https://www.minimaxi.com/) models (MiniMax-M2.5, MiniMax-M2.5-highspeed) with OpenEvolve. MiniMax provides an OpenAI-compatible API with a 204K context window.
+
 ### `island_config_example.yaml`
 A practical example configuration demonstrating proper island-based evolution setup.
 Shows:
 - Recommended island settings for most use cases
diff --git a/configs/minimax_config.yaml b/configs/minimax_config.yaml
new file mode 100644
index 0000000000..e11c090022
--- /dev/null
+++ b/configs/minimax_config.yaml
@@ -0,0 +1,74 @@
+# OpenEvolve Configuration for MiniMax
+# MiniMax provides an OpenAI-compatible API with powerful models like MiniMax-M2.5
+# Get your API key from: https://platform.minimaxi.com/
+#
+# Set your API key:
+# export MINIMAX_API_KEY="your-minimax-api-key"
+
+# General settings
+max_iterations: 100
+checkpoint_interval: 10
+log_level: "INFO"
+random_seed: 42
+
+# LLM configuration for MiniMax
+llm:
+  api_base: "https://api.minimax.io/v1"
+  api_key: "${MINIMAX_API_KEY}"
+
+  # MiniMax models for evolution
+  models:
+    - name: "MiniMax-M2.5"
+      weight: 0.6
+    - name: "MiniMax-M2.5-highspeed"
+      weight: 0.4
+
+  # MiniMax models for LLM feedback
+  evaluator_models:
+    - name: "MiniMax-M2.5-highspeed"
+      weight: 1.0
+
+  # Generation parameters
+  # Note: MiniMax requires temperature in the range (0.0, 1.0]; zero is not accepted
+  temperature: 0.7
+  top_p: 0.95
+  max_tokens: 4096
+
+  # Request parameters
+  timeout: 120
+  retries: 3
+  retry_delay: 5
+
+# Evolution settings
+diff_based_evolution: true
+max_code_length: 10000
+
+# Prompt configuration
+prompt:
+  system_message: "You are an expert coder helping to improve programs through evolution."
+  evaluator_system_message: "You are an expert code reviewer."
+  num_top_programs: 3
+  num_diverse_programs: 2
+  use_template_stochasticity: true
+  include_artifacts: true
+
+# Database configuration
+database:
+  population_size: 1000
+  num_islands: 5
+  migration_interval: 50
+  migration_rate: 0.1
+  feature_dimensions:
+    - "complexity"
+    - "diversity"
+  feature_bins: 10
+
+# Evaluator configuration
+evaluator:
+  timeout: 300
+  cascade_evaluation: true
+  cascade_thresholds:
+    - 0.5
+    - 0.75
+    - 0.9
+  parallel_evaluations: 4
diff --git a/tests/test_valid_configs.py b/tests/test_valid_configs.py
index ec46bd6a44..92653d703a 100644
--- a/tests/test_valid_configs.py
+++ b/tests/test_valid_configs.py
@@ -24,7 +24,7 @@ def collect_files(self):
                 config_files.append(os.path.join(root, file))
         return config_files
 
-    @patch.dict(os.environ, {"ANTHROPIC_API_KEY": "test-key-for-validation"})
+    @patch.dict(os.environ, {"ANTHROPIC_API_KEY": "test-key-for-validation", "MINIMAX_API_KEY": "test-key-for-validation"})
     def test_import_config_files(self):
        """Attempt to import all config files"""
        config_files = self.collect_files()
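Reviewer aside on the test change above: the new config references its key as `"${MINIMAX_API_KEY}"`, so config validation only resolves a real value when that variable is set, which is why the test now patches it into the environment alongside `ANTHROPIC_API_KEY`. A self-contained sketch of the idea using `os.path.expandvars`; OpenEvolve's actual substitution mechanism may differ, this is an illustration rather than its implementation:

```python
import os
from unittest.mock import patch


def resolve_api_key(raw: str) -> str:
    # Expand "${VAR}" references from the environment. os.path.expandvars
    # leaves unset variables untouched, which is how a missing
    # MINIMAX_API_KEY would surface as a literal "${MINIMAX_API_KEY}"
    # during config validation.
    return os.path.expandvars(raw)


# Mirror what the patched test does: inject a dummy key for the duration
# of the check, without touching the real environment.
with patch.dict(os.environ, {"MINIMAX_API_KEY": "test-key-for-validation"}):
    resolved = resolve_api_key("${MINIMAX_API_KEY}")

print(resolved)  # -> test-key-for-validation
```

`patch.dict` restores the original environment on exit, so the dummy key never leaks into other tests.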