Aider is a popular AI pair programmer that runs in the terminal. It works well with tinyMem via Proxy Mode.
You will need:

- Aider installed (`pip install aider-chat`).
- tinyMem installed and configured.
- A backend LLM running (Ollama, LM Studio) OR an API key for a cloud provider.
Ensure `.tinyMem/config.toml` points to your actual model provider. Example (Ollama):

```toml
[proxy]
port = 8080
base_url = "http://localhost:11434/v1"
```

Start the proxy with `tinymem proxy`. Aider then needs to be told to talk to `localhost:8080` instead of the real API.
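The example above targets Ollama. If you are using a cloud provider instead, point `base_url` at its OpenAI-compatible endpoint. A sketch using only the config keys shown above; how tinyMem passes your provider API key upstream (environment variable or a dedicated config field) is an assumption to verify in Configuration.md:

```toml
[proxy]
port = 8080
# Assumption: any OpenAI-compatible endpoint works here, not just local ones.
base_url = "https://api.openai.com/v1"
```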
```bash
aider \
  --openai-api-base http://localhost:8080/v1 \
  --openai-api-key dummy \
  --model openai/rnj-1  # the 'openai/' prefix tells Aider to use the generic client
```

Critical: you MUST use the `openai/` prefix for the model name (e.g., `openai/qwen2.5-coder` or `openai/gpt-4`). This forces Aider to use its generic OpenAI client, which respects the custom API base. If you just pass `--model gpt-4`, Aider may try to hit the official OpenAI API directly.
Alternatively, set the equivalent environment variables:

```bash
export OPENAI_API_BASE=http://localhost:8080/v1
export OPENAI_API_KEY=dummy
aider --model openai/your-model-name
```

For full configuration options, see Configuration.md.
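Before launching Aider, you can sanity-check that the proxy is actually serving an OpenAI-compatible API. A minimal sketch using only the standard library; it assumes tinyMem forwards the standard `/v1/models` endpoint, which most OpenAI-compatible proxies do, but verify for your setup:

```python
import json
import urllib.error
import urllib.request


def list_models(base_url="http://localhost:8080/v1"):
    """Query the proxy's OpenAI-compatible /models endpoint.

    Returns a list of model IDs, or None if the proxy is unreachable.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/models", timeout=2) as resp:
            payload = json.load(resp)
    except (urllib.error.URLError, OSError):
        return None
    # OpenAI-style responses wrap the model list in a "data" array.
    return [m["id"] for m in payload.get("data", [])]


if __name__ == "__main__":
    models = list_models()
    if models is None:
        print("Proxy not reachable -- is `tinymem proxy` running?")
    else:
        print("Models behind the proxy:", models)
```

Whatever IDs this prints are the names you would pass to Aider, with the `openai/` prefix added.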
Aider might not know the context limit of a local model proxied through tinyMem. Create a `.aider.model.metadata.json` file in your project root:

```json
{
  "openai/qwen2.5-coder": {
    "max_tokens": 32768,
    "input_cost_per_token": 0.0,
    "output_cost_per_token": 0.0,
    "litellm_provider": "openai",
    "mode": "chat"
  }
}
```

Then run Aider with:

```bash
aider --model-metadata-file .aider.model.metadata.json --model openai/qwen2.5-coder ...
```

If the connection fails:

- Ensure `tinymem proxy` is running.
- Check `tinymem doctor`.
- Try using `127.0.0.1` instead of `localhost` if on Windows/WSL.
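The first and last checks above can be scripted: probe the proxy port on both hostnames, since `localhost` and `127.0.0.1` can resolve differently on Windows/WSL. A minimal stdlib sketch; the port number assumes the default `[proxy]` config shown earlier:

```python
import socket


def port_open(host, port=8080, timeout=1.0):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # On Windows/WSL, localhost may resolve to an IPv6 address that the
    # proxy is not bound to, so check both names.
    for host in ("localhost", "127.0.0.1"):
        status = "open" if port_open(host) else "closed"
        print(f"{host}:8080 is {status}")
```

If `127.0.0.1` is open but `localhost` is closed, use `127.0.0.1` in `--openai-api-base` (or `OPENAI_API_BASE`).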