Build AI agents that use tools, connect to MCP servers, and work with any AI provider (OpenAI, Anthropic, Gemini, Groq, local models). Each agent is a portable JSON manifest that anyone can share, install, and run.
- **Full MCP Support** – stdio, SSE, and WebSocket transports
- **Built-in Tools** – HTTP fetch, JSON parse, text transform, date/time
- **Custom Tools** – write JS handlers, sandboxed via VM
- **Any AI Provider** – bring your own: OpenAI, Anthropic, Gemini, Ollama, anything
- **Portable Agents** – JSON manifests that anyone can share and install
- **Event Streaming** – real-time execution tracking via EventEmitter
- **CLI Included** – create, run, and manage agents from the terminal
```bash
# Install globally
npm install -g openforge

# Create your first agent
openforge create my-agent "A helpful assistant"

# Run it (needs an API key)
OPENAI_API_KEY=sk-xxx openforge run my-agent "What's the weather in NYC?"
```

```js
const { AgentBuilder } = require('openforge');

const builder = new AgentBuilder({
  aiProvider: async (messages, model, tools) => {
    // Use any AI provider – OpenAI, Anthropic, Gemini, etc.
    const { OpenAI } = require('openai');
    const client = new OpenAI();
    return await client.chat.completions.create({
      model: model || 'gpt-4o-mini',
      messages,
      tools: tools?.length > 0 ? tools : undefined,
    });
  },
});

await builder.initialize();

// Run an agent
const result = await builder.runAgent('my-agent', 'Hello!');
console.log(result.output);

// Stream execution events
const { runtime, resultPromise } = await builder.runAgentStreaming('my-agent', 'Hello!');
runtime.on('tool_call', (data) => console.log(`tool: ${data.tool}`));
runtime.on('agent_response', (data) => console.log(`response: ${data.content}`));
const finalResult = await resultPromise;

await builder.shutdown();
```

```
┌────────────────────────────────────────────┐
│                 OpenForge                  │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐  │
│  │ Runtime  │  │ Registry │  │  Loader  │  │
│  │(executor)│  │ (tools)  │  │ (agents) │  │
│  └────┬─────┘  └────┬─────┘  └────┬─────┘  │
│       │             │             │        │
│  ┌────┴─────────────┴─────────────┴────┐   │
│  │            Tool Registry            │   │
│  ├─────────┬──────────┬────────────────┤   │
│  │   MCP   │ Built-in │     Custom     │   │
│  │ Client  │  Tools   │     JS/VM      │   │
│  └────┬────┴──────────┴────────────────┘   │
└───────┼────────────────────────────────────┘
        │
   MCP Servers (any!)
   - stdio      (spawn local process)
   - SSE        (HTTP streaming)
   - WebSocket  (persistent)
```
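Each transport maps to a small config object. A minimal sketch of a validator for those shapes; the field names (`transport`, `command`, `url`) follow the `addMcpServer` examples later in this README, but the helper itself is illustrative and not part of the openforge API:

```javascript
// Illustrative helper – not part of the openforge API.
// Checks that an MCP server config has the fields its transport needs.
function validateMcpConfig(config) {
  if (config.transport === 'stdio') {
    // stdio spawns a local process, so a command is required
    return typeof config.command === 'string' && config.command.length > 0;
  }
  if (config.transport === 'sse' || config.transport === 'websocket') {
    // SSE and WebSocket both connect to a URL
    return typeof config.url === 'string' && config.url.length > 0;
  }
  return false; // unknown transport
}
```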
Every agent is defined by a `.agent.json` file:

```json
{
  "name": "my-agent",
  "version": "1.0.0",
  "description": "What this agent does",
  "systemPrompt": "You are a helpful assistant...",
  "runtime": {
    "model": "gpt-4o-mini",
    "maxLoops": 10,
    "timeoutMs": 60000
  },
  "tools": {
    "require": ["builtin:http_fetch", "builtin:json_parse"],
    "optional": ["mcp:calendar"]
  },
  "ui": {
    "icon": "🤖"
  }
}
```

Full Model Context Protocol support – connect to any MCP server:
```js
// stdio – spawn a local MCP server
await builder.addMcpServer('calculator', {
  transport: 'stdio',
  command: 'npx',
  args: ['-y', '@example/mcp-calculator'],
});

// SSE – connect to an HTTP MCP server
await builder.addMcpServer('api', {
  transport: 'sse',
  url: 'http://localhost:3001/sse',
});

// WebSocket – persistent connection
await builder.addMcpServer('realtime', {
  transport: 'websocket',
  url: 'ws://localhost:8080',
});
```

| Provider | Prefix | Description |
|---|---|---|
| Built-in | `builtin:` | `http_fetch`, `json_parse`, `text_transform`, `date_time`, `wait` |
| MCP | `mcp:` | Any MCP-compliant server (stdio, SSE, WebSocket) |
| Custom | `custom:` | Agent-local JS handlers (sandboxed via VM) |
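Since every tool name carries its provider prefix, the output of `listTools()` can be bucketed by provider. A sketch, assuming tool names follow the `prefix:tool` convention shown above; the helper is illustrative, not part of the API:

```javascript
// Illustrative helper: group tool names by their provider prefix
// ('builtin:', 'mcp:', 'custom:'), as listed in the table above.
function groupToolsByProvider(toolNames) {
  const groups = {};
  for (const name of toolNames) {
    const sep = name.indexOf(':');
    const provider = sep === -1 ? 'unknown' : name.slice(0, sep);
    if (!groups[provider]) groups[provider] = [];
    groups[provider].push(name);
  }
  return groups;
}
```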
Define custom tools in your agent manifest:
```json
{
  "tools": {
    "custom": [
      {
        "name": "analyze_data",
        "description": "Analyze data from a CSV",
        "parameters": {
          "type": "object",
          "properties": {
            "data": { "type": "string" }
          }
        },
        "handler": "./tools/analyzeData.js"
      }
    ]
  }
}
```

Handler file (`tools/analyzeData.js`):
```js
module.exports = async function (args, context) {
  const { data } = args;
  // Your analysis logic here
  return { success: true, result: 'Analysis complete' };
};
```

```bash
openforge list                         # List installed agents
openforge info <name>                  # Show agent details
openforge create <name> [description]  # Create from template
openforge run <name> <input>           # Run (needs OPENAI_API_KEY)
openforge tools                        # List available tools
openforge validate <path>              # Validate a manifest
```

The runtime emits events during execution for real-time tracking:
```js
const { runtime, resultPromise } = await builder.runAgentStreaming('my-agent', input);

runtime.on('execution_started', (data) => { /* { id, agent } */ });
runtime.on('loop_iteration', (data) => { /* { id, loop, maxLoops } */ });
runtime.on('tool_call', (data) => { /* { id, tool, args } */ });
runtime.on('tool_result', (data) => { /* { id, tool, result } */ });
runtime.on('agent_response', (data) => { /* { id, content } */ });
runtime.on('execution_completed', (data) => { /* { id, agent, duration, loops } */ });

const result = await resultPromise;
```

| Option | Type | Required | Description |
|---|---|---|---|
| `aiProvider` | `Function` | Yes | `async (messages, model, tools, meta)` returning an OpenAI-format response |
| `agentDir` | `string` | No | Custom agent directory (default: `~/.openforge/agents/`) |
| `mcpServers` | `Object` | No | Initial MCP server configs |
| `additionalAgentDirs` | `string[]` | No | Extra directories to scan for agents |
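The only hard requirement on `aiProvider` is that it resolves to an OpenAI-format chat-completion response. A sketch of a small adapter that wraps any text-generation backend into that shape; `callMyBackend` and any fields beyond `choices[0].message` are assumptions for illustration, not openforge requirements:

```javascript
// Illustrative adapter: wrap arbitrary generated text in the
// OpenAI chat-completion response shape that aiProvider must return.
function toOpenAiResponse(text, model) {
  return {
    model,
    choices: [
      {
        index: 0,
        message: { role: 'assistant', content: text },
        finish_reason: 'stop',
      },
    ],
  };
}

// Hypothetical usage inside an AgentBuilder config:
// aiProvider: async (messages, model) => {
//   const text = await callMyBackend(messages); // callMyBackend is hypothetical
//   return toOpenAiResponse(text, model || 'my-local-model');
// },
```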
| Method | Returns | Description |
|---|---|---|
| `initialize()` | `Promise` | Must be called before any other operation |
| `runAgent(name, input, ctx)` | `Promise<Result>` | Run an agent to completion |
| `runAgentStreaming(name, input, ctx)` | `{ runtime, resultPromise }` | Run with event streaming |
| `createAgent(config)` | `AgentManifest` | Create a new agent |
| `deleteAgent(name)` | `boolean` | Delete an agent |
| `listAgents(filter)` | `Agent[]` | List all agents |
| `addMcpServer(name, config)` | `Promise<Server>` | Connect an MCP server |
| `removeMcpServer(name)` | `Promise` | Disconnect an MCP server |
| `listTools()` | `Tool[]` | List all available tools |
| `shutdown()` | `Promise` | Graceful shutdown |
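A typical lifecycle is initialize, run, shutdown, with `shutdown()` in a `finally` block so MCP connections are released even when a run fails. A sketch of that pattern, written against any object exposing the methods above (a stub works in place of a real `AgentBuilder`):

```javascript
// Illustrative lifecycle pattern around the AgentBuilder methods above.
// `builder` is any object with initialize/runAgent/shutdown.
async function runOnce(builder, agentName, input) {
  await builder.initialize();
  try {
    const result = await builder.runAgent(agentName, input);
    return result.output;
  } finally {
    // Always disconnect MCP servers and release resources
    await builder.shutdown();
  }
}
```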
Contributions welcome! See CONTRIBUTING.md for guidelines.
MIT – build whatever you want.