This guide helps you migrate from the HelpingAI Python SDK to the JavaScript/TypeScript SDK. While both SDKs maintain API compatibility where possible, there are important differences in syntax, patterns, and best practices.
- Quick Reference
- Installation & Setup
- Client Initialization
- Basic Chat Completions
- Streaming Responses
- Tool System
- MCP Integration
- Error Handling
- Async/Await Patterns
- Type Safety
- Environment Variables
- Best Practices
- Common Pitfalls
## Quick Reference

| Feature | Python | JavaScript/TypeScript |
|---|---|---|
| Import | `from helpingai import HelpingAI` | `import { HelpingAI } from 'helpingai'` |
| Client Init | `HelpingAI(api_key="key")` | `new HelpingAI({ apiKey: 'key' })` |
| Tool Decorator | `@tools` | `tools(function ...)` |
| Async Iteration | `async for chunk in stream:` | `for await (const chunk of stream)` |
| Error Types | `except RateLimitError:` | `if (error instanceof RateLimitError)` |
| Type Hints | `def func(x: str) -> str:` | `function func(x: string): string` |
## Installation & Setup

**Python:**

```bash
pip install helpingai
```

```python
from helpingai import HelpingAI
```

**JavaScript/TypeScript:**

```bash
npm install helpingai
# or
yarn add helpingai
```

```typescript
import { HelpingAI } from 'helpingai';
```

## Client Initialization

**Python:**

```python
# Basic initialization
client = HelpingAI(api_key="your-api-key")

# With options
client = HelpingAI(
    api_key="your-api-key",
    base_url="https://api.helpingai.com/v1",
    timeout=30.0,
    max_retries=3
)
```

**JavaScript/TypeScript:**

```typescript
// Basic initialization
const client = new HelpingAI({ apiKey: 'your-api-key' });

// With options
const client = new HelpingAI({
  apiKey: 'your-api-key',
  baseURL: 'https://api.helpingai.com/v1',
  timeout: 30000,
  maxRetries: 3,
});
```

**Key Differences:**
- JavaScript uses the `new` keyword for instantiation
- Options are passed as an object with camelCase properties
- Timeout is in milliseconds (not seconds)
## Basic Chat Completions

**Python:**

```python
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ],
    max_tokens=100,
    temperature=0.7
)
print(response.choices[0].message.content)
```

**JavaScript/TypeScript:**

```typescript
const response = await client.chat.completions.create({
  model: 'Dhanishtha-2.0-preview',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' },
  ],
  max_tokens: 100,
  temperature: 0.7,
});
console.log(response.choices[0].message.content);
```

**Key Differences:**
- JavaScript requires the `await` keyword for async operations
- Object properties use camelCase in JavaScript
- String literals use single quotes by convention in JavaScript
## Streaming Responses

**Python:**

```python
stream = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```

**JavaScript/TypeScript:**

```typescript
const stream = await client.chat.completions.create({
  model: 'Dhanishtha-2.0-preview',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true,
});

for await (const chunk of stream) {
  if (chunk.choices[0].delta.content) {
    process.stdout.write(chunk.choices[0].delta.content);
  }
}
```

**Key Differences:**
- JavaScript uses `for await...of` instead of Python's `for ... in`
- Use `process.stdout.write()` instead of `print(..., end="")`
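Note that `process.stdout.write()` is Node-specific. In a browser (or anywhere without `process`), you can accumulate the streamed text instead. A minimal sketch, using a plain async generator as a stand-in for the SDK's stream object:

```typescript
// Stand-in for the SDK stream: any async iterable of text chunks behaves the same.
async function* fakeStream(): AsyncGenerator<string> {
  yield 'Once upon ';
  yield 'a time...';
}

// Collect streamed chunks into a single string (works in Node and browsers).
async function collectStream(stream: AsyncIterable<string>): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    text += chunk; // in a browser UI you might append to the DOM here instead
  }
  return text;
}

// Usage:
// const story = await collectStream(stream);
```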
## Tool System

**Python:**

```python
from helpingai import HelpingAI, tools

@tools
def get_weather(city: str, units: str = "celsius") -> str:
    """Get weather information for a city.

    Args:
        city: The city name
        units: Temperature units (celsius or fahrenheit)
    """
    return f"Weather in {city}: 22°C, sunny"

@tools
def calculate(expression: str) -> float:
    """Perform mathematical calculations.

    Args:
        expression: Mathematical expression to evaluate
    """
    return eval(expression)  # unsafe on untrusted input; use a proper math parser in production

# Usage
client = HelpingAI(api_key="your-key")
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[get_weather, calculate]
)
```

**JavaScript/TypeScript:**

```typescript
import { HelpingAI, tools } from 'helpingai';

const getWeather = tools(function getWeather(
  city: string,
  units: 'celsius' | 'fahrenheit' = 'celsius'
): string {
  /**
   * Get weather information for a city.
   * @param city - The city name
   * @param units - Temperature units (celsius or fahrenheit)
   */
  return `Weather in ${city}: 22°C, sunny`;
});

const calculate = tools(function calculate(expression: string): number {
  /**
   * Perform mathematical calculations.
   * @param expression - Mathematical expression to evaluate
   */
  return eval(expression); // unsafe on untrusted input; use a proper math parser in production
});

// Usage
const client = new HelpingAI({ apiKey: 'your-key' });
const response = await client.chat.completions.create({
  model: 'Dhanishtha-2.0-preview',
  messages: [{ role: 'user', content: "What's the weather in Paris?" }],
  tools: [getWeather, calculate],
});
```

**Key Differences:**
- JavaScript uses a `tools(function ...)` wrapper instead of the `@tools` decorator
- JSDoc comments (`/** */`) instead of Python docstrings
- TypeScript provides better type safety with union types (`'celsius' | 'fahrenheit'`)
- Function parameters and return types are explicitly typed
## MCP Integration

**Python:**

```python
from helpingai import HelpingAI, MCPClient

# Connect to MCP server
mcp_client = MCPClient(
    transport={
        "type": "stdio",
        "command": "node",
        "args": ["path/to/mcp-server.js"]
    }
)
await mcp_client.connect()

# Use with HelpingAI
client = HelpingAI(api_key="your-key")
response = client.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "Get my calendar"}],
    mcp=mcp_client
)

await mcp_client.disconnect()
```

**JavaScript/TypeScript:**

```typescript
import { HelpingAI, MCPClient } from 'helpingai';

// Connect to MCP server
const mcpClient = new MCPClient({
  transport: {
    type: 'stdio',
    command: 'node',
    args: ['path/to/mcp-server.js'],
  },
});
await mcpClient.connect();

// Use with HelpingAI
const client = new HelpingAI({ apiKey: 'your-key' });
const response = await client.chat.completions.create({
  model: 'Dhanishtha-2.0-preview',
  messages: [{ role: 'user', content: 'Get my calendar' }],
  mcp: mcpClient,
});

await mcpClient.disconnect();
```

**Key Differences:**
- JavaScript uses the `new MCPClient()` constructor
- Configuration object uses camelCase properties
- The same async/await patterns apply
## Error Handling

**Python:**

```python
from helpingai import (
    HelpingAI,
    APIError,
    AuthenticationError,
    RateLimitError,
    TimeoutError
)

try:
    response = client.chat.completions.create(
        model="Dhanishtha-2.0-preview",
        messages=[{"role": "user", "content": "Hello"}]
    )
except AuthenticationError:
    print("Invalid API key")
except RateLimitError as e:
    print(f"Rate limited. Retry after {e.retry_after} seconds")
except TimeoutError:
    print("Request timed out")
except APIError as e:
    print(f"API error: {e.message}")
```

**JavaScript/TypeScript:**

```typescript
import {
  HelpingAI,
  APIError,
  AuthenticationError,
  RateLimitError,
  TimeoutError,
} from 'helpingai';

try {
  const response = await client.chat.completions.create({
    model: 'Dhanishtha-2.0-preview',
    messages: [{ role: 'user', content: 'Hello' }],
  });
} catch (error) {
  if (error instanceof AuthenticationError) {
    console.log('Invalid API key');
  } else if (error instanceof RateLimitError) {
    console.log(`Rate limited. Retry after ${error.retryAfter} seconds`);
  } else if (error instanceof TimeoutError) {
    console.log('Request timed out');
  } else if (error instanceof APIError) {
    console.log(`API error: ${error.message}`);
  }
}
```

**Key Differences:**
- JavaScript uses `instanceof` checks instead of exception type matching
- A single `catch` block with conditional logic replaces multiple `except` clauses
- Property access uses camelCase (`retryAfter` vs `retry_after`)
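The `retryAfter` property lends itself to automatic retries of rate-limited calls. The following is an illustrative sketch, not SDK API: `RateLimitError` is redefined locally here as a stand-in for the SDK class of the same name.

```typescript
// Local stand-in for the SDK's RateLimitError (illustration only).
class RateLimitError extends Error {
  constructor(public retryAfter: number) {
    super(`Rate limited; retry after ${retryAfter}s`);
  }
}

// Retry a call when it fails with a rate-limit error, waiting the
// server-suggested delay between attempts; rethrow anything else.
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (error instanceof RateLimitError && attempt < maxAttempts) {
        await new Promise((resolve) => setTimeout(resolve, error.retryAfter * 1000));
        continue;
      }
      throw error; // out of attempts, or not a rate-limit error
    }
  }
}

// Usage (client.chat.completions.create would be the real call):
// const response = await withRetry(() => client.chat.completions.create({ ... }));
```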
## Async/Await Patterns

**Python:**

```python
import asyncio

async def main():
    client = HelpingAI(api_key="your-key")

    # Sequential requests
    response1 = await client.chat.completions.create(...)
    response2 = await client.chat.completions.create(...)

    # Concurrent requests
    responses = await asyncio.gather(
        client.chat.completions.create(...),
        client.chat.completions.create(...),
        client.chat.completions.create(...)
    )

if __name__ == "__main__":
    asyncio.run(main())
```

**JavaScript/TypeScript:**

```typescript
async function main() {
  const client = new HelpingAI({ apiKey: 'your-key' });

  // Sequential requests
  const response1 = await client.chat.completions.create({...});
  const response2 = await client.chat.completions.create({...});

  // Concurrent requests
  const responses = await Promise.all([
    client.chat.completions.create({...}),
    client.chat.completions.create({...}),
    client.chat.completions.create({...})
  ]);
}

main().catch(console.error);
```

**Key Differences:**
- JavaScript uses `Promise.all()` instead of `asyncio.gather()`
- No `asyncio.run()` is needed; just call the async function
- Error handling with `.catch()` is common in JavaScript
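One caveat: `Promise.all()` rejects as soon as any one request fails. When the concurrent requests are independent, `Promise.allSettled()` lets each succeed or fail on its own. A generic sketch (the local async functions stand in for `client.chat.completions.create` calls):

```typescript
async function runIndependently() {
  const ok = async () => 'response';
  const fail = async () => { throw new Error('rate limited'); };

  // Unlike Promise.all, allSettled never rejects; each result carries its status.
  const results = await Promise.allSettled([ok(), fail(), ok()]);

  const fulfilled = results
    .filter((r): r is PromiseFulfilledResult<string> => r.status === 'fulfilled')
    .map((r) => r.value);
  const rejected = results.filter((r) => r.status === 'rejected').length;

  return { fulfilled, rejected }; // here: 2 fulfilled, 1 rejected
}
```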
## Type Safety

**Python:**

```python
from typing import List, Optional
from helpingai import HelpingAI, ChatMessage

def create_messages(content: str, system_prompt: Optional[str] = None) -> List[ChatMessage]:
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": content})
    return messages

client: HelpingAI = HelpingAI(api_key="your-key")
```

**TypeScript:**

```typescript
import { HelpingAI, ChatMessage } from 'helpingai';

function createMessages(content: string, systemPrompt?: string): ChatMessage[] {
  const messages: ChatMessage[] = [];
  if (systemPrompt) {
    messages.push({ role: 'system', content: systemPrompt });
  }
  messages.push({ role: 'user', content: content });
  return messages;
}

const client: HelpingAI = new HelpingAI({ apiKey: 'your-key' });
```

**Key Differences:**
- TypeScript has built-in type checking (no separate `typing` import needed)
- Optional parameters use `?` syntax instead of `Optional[...]`
- Array types use `Type[]` syntax instead of `List[Type]`
## Environment Variables

**Python:**

```python
import os
from helpingai import HelpingAI

# Reading environment variables
api_key = os.getenv("HELPINGAI_API_KEY")
base_url = os.getenv("HELPINGAI_BASE_URL", "https://api.helpingai.com/v1")

client = HelpingAI(
    api_key=api_key,
    base_url=base_url
)
```

**JavaScript/TypeScript:**

```typescript
import { HelpingAI } from 'helpingai';

// Reading environment variables (Node.js)
const apiKey = process.env.HELPINGAI_API_KEY;
const baseURL = process.env.HELPINGAI_BASE_URL || 'https://api.helpingai.com/v1';

const client = new HelpingAI({
  apiKey,
  baseURL,
});

// For browser environments, use build-time environment variables
// or fetch from your backend
```

**Key Differences:**
- Node.js uses `process.env` instead of `os.getenv()`
- Browser environments can't access environment variables directly
- Use logical OR (`||`) for default values instead of a second parameter
## Best Practices

**Python Best Practices:**

```python
import asyncio
from contextlib import asynccontextmanager
from helpingai import HelpingAI

@asynccontextmanager
async def helpingai_client():
    client = HelpingAI(api_key="your-key")
    try:
        yield client
    finally:
        await client.cleanup()

async def main():
    async with helpingai_client() as client:
        response = await client.chat.completions.create(...)
```

**TypeScript Best Practices:**

```typescript
import { HelpingAI } from 'helpingai';

// Resource management with try/finally
async function withClient<T>(fn: (client: HelpingAI) => Promise<T>): Promise<T> {
  const client = new HelpingAI({ apiKey: 'your-key' });
  try {
    return await fn(client);
  } finally {
    await client.cleanup();
  }
}

async function main() {
  const result = await withClient(async (client) => {
    return await client.chat.completions.create({...});
  });
}
```

**Key Differences:**
- JavaScript doesn't have context managers; use try/finally blocks
- Create utility functions for resource management
- TypeScript generics provide type safety for utility functions
## Common Pitfalls

### Forgetting `await`

**Python:**

```python
# This works in Python (synchronous client)
response = client.chat.completions.create(...)
```

**JavaScript (Wrong):**

```typescript
// This returns a Promise, not the actual response!
const response = client.chat.completions.create({...});
```

**JavaScript (Correct):**

```typescript
// Always use await with async operations
const response = await client.chat.completions.create({...});
```

### Parameter Naming

**Python:**

```python
response = client.chat.completions.create(
    max_tokens=100,
    tool_choice="auto"
)
```

**JavaScript (Wrong):**

```typescript
const response = await client.chat.completions.create({
  maxTokens: 100,      // Wrong: the API expects snake_case request parameters
  toolChoice: 'auto',  // Wrong: the API expects snake_case request parameters
});
```

**JavaScript (Correct):**

```typescript
const response = await client.chat.completions.create({
  max_tokens: 100,     // API maintains snake_case for compatibility
  tool_choice: 'auto', // API maintains snake_case for compatibility
});
```

Note: The HelpingAI API maintains snake_case for request parameters to ensure compatibility between SDKs.
### Error Handling Syntax

**Python:**

```python
try:
    response = client.chat.completions.create(...)
except RateLimitError:
    # Handle rate limit
    pass
except APIError:
    # Handle other API errors
    pass
```

**JavaScript (Wrong):**

```typescript
try {
  const response = await client.chat.completions.create({...});
} catch (RateLimitError) { // Wrong: this just names the caught value; it matches every error
  // Handle rate limit
} catch (APIError) { // Wrong: a second catch block is a syntax error
  // Handle other API errors
}
```

**JavaScript (Correct):**

```typescript
try {
  const response = await client.chat.completions.create({...});
} catch (error) {
  if (error instanceof RateLimitError) {
    // Handle rate limit
  } else if (error instanceof APIError) {
    // Handle other API errors
  }
}
```

### Tool Definition Syntax

**Python:**

```python
@tools
def my_tool(param: str) -> str:
    """Tool description"""
    return f"Result: {param}"
```

**JavaScript (Wrong):**

```typescript
// Wrong: decorator syntax doesn't apply to plain functions
@tools
function myTool(param: string): string {
  return `Result: ${param}`;
}
```

**JavaScript (Correct):**

```typescript
const myTool = tools(function myTool(param: string): string {
  /**
   * Tool description
   * @param param - Parameter description
   */
  return `Result: ${param}`;
});
```

### Async Iteration over Streams

**Python:**

```python
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```

**JavaScript (Wrong):**

```typescript
// Wrong: a regular for...of doesn't work with async iterables
for (const chunk of stream) {
  if (chunk.choices[0].delta.content) {
    console.log(chunk.choices[0].delta.content);
  }
}
```

**JavaScript (Correct):**

```typescript
// Correct: use for await...of for async iterables
for await (const chunk of stream) {
  if (chunk.choices[0].delta.content) {
    process.stdout.write(chunk.choices[0].delta.content);
  }
}
```

## Migration Checklist

- Review your Python code and identify all HelpingAI SDK usage
- List all custom tools and their signatures
- Document any custom error handling logic
- Note any MCP integrations or external dependencies
- Install the JavaScript/TypeScript SDK
- Set up TypeScript configuration (if using TypeScript)
- Convert client initialization to use object configuration
- Migrate all tool definitions from decorators to function wrappers
- Update error handling to use instanceof checks
- Convert async iteration patterns
- Update environment variable access patterns
- Test all functionality with the new SDK
- Verify error handling works as expected
- Check that streaming responses work correctly
- Validate tool calling functionality
- Test MCP integrations (if applicable)
- Update documentation and examples
- Set up proper TypeScript types (if using TypeScript)
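For the "Set up TypeScript configuration" step, here is a minimal `tsconfig.json` sketch for a Node project. The exact options depend on your toolchain (these values are illustrative, not mandated by the SDK), but `target` must be ES2018 or later for native `for await...of` support:

```json
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "outDir": "dist"
  },
  "include": ["src"]
}
```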
## Complete Examples

### Tool Calling Example

**Python Version:**

```python
import asyncio
from helpingai import HelpingAI, tools

@tools
def get_weather(city: str) -> str:
    """Get weather for a city"""
    return f"Weather in {city}: 22°C, sunny"

@tools
def calculate(expression: str) -> float:
    """Calculate mathematical expression"""
    return eval(expression)

async def main():
    client = HelpingAI(api_key="your-key")
    try:
        response = await client.chat.completions.create(
            model="Dhanishtha-2.0-preview",
            messages=[
                {"role": "user", "content": "What's the weather in Paris and what's 15 * 23?"}
            ],
            tools=[get_weather, calculate],
            tool_choice="auto"
        )
        print(response.choices[0].message.content)
    except Exception as e:
        print(f"Error: {e}")
    finally:
        await client.cleanup()

if __name__ == "__main__":
    asyncio.run(main())
```

**JavaScript/TypeScript Version:**

```typescript
import { HelpingAI, tools } from 'helpingai';

const getWeather = tools(function getWeather(city: string): string {
  /**
   * Get weather for a city
   * @param city - The city name
   */
  return `Weather in ${city}: 22°C, sunny`;
});

const calculate = tools(function calculate(expression: string): number {
  /**
   * Calculate mathematical expression
   * @param expression - Mathematical expression to evaluate
   */
  return eval(expression); // Use a proper math parser in production
});

async function main() {
  const client = new HelpingAI({ apiKey: 'your-key' });
  try {
    const response = await client.chat.completions.create({
      model: 'Dhanishtha-2.0-preview',
      messages: [{ role: 'user', content: "What's the weather in Paris and what's 15 * 23?" }],
      tools: [getWeather, calculate],
      tool_choice: 'auto',
    });
    console.log(response.choices[0].message.content);
  } catch (error) {
    console.error('Error:', error);
  } finally {
    await client.cleanup();
  }
}

main().catch(console.error);
```

### Streaming Example

**Python Version:**
```python
import asyncio
import time
from helpingai import HelpingAI

async def streaming_example():
    client = HelpingAI(api_key="your-key")

    stream = await client.chat.completions.create(
        model="Dhanishtha-2.0-preview",
        messages=[{"role": "user", "content": "Tell me a long story"}],
        stream=True
    )

    content = ""
    token_count = 0
    start_time = time.time()

    async for chunk in stream:
        if chunk.choices[0].delta.content:
            delta = chunk.choices[0].delta.content
            content += delta
            token_count += 1
            print(delta, end="", flush=True)
        if chunk.choices[0].finish_reason:
            duration = time.time() - start_time
            print(f"\n\nCompleted: {token_count} tokens in {duration:.2f}s")
            break

asyncio.run(streaming_example())
```

**JavaScript/TypeScript Version:**

```typescript
import { HelpingAI } from 'helpingai';

async function streamingExample() {
  const client = new HelpingAI({ apiKey: 'your-key' });

  const stream = await client.chat.completions.create({
    model: 'Dhanishtha-2.0-preview',
    messages: [{ role: 'user', content: 'Tell me a long story' }],
    stream: true,
  });

  let content = '';
  let tokenCount = 0;
  const startTime = Date.now();

  for await (const chunk of stream) {
    if (chunk.choices[0].delta.content) {
      const delta = chunk.choices[0].delta.content;
      content += delta;
      tokenCount++;
      process.stdout.write(delta);
    }
    if (chunk.choices[0].finish_reason) {
      const duration = Date.now() - startTime;
      console.log(`\n\nCompleted: ${tokenCount} tokens in ${duration}ms`);
      break;
    }
  }
}

streamingExample().catch(console.error);
```

## Performance: Concurrent Requests

**Python:**

```python
# Connection pooling is handled automatically
client = HelpingAI(api_key="your-key")

# Concurrent requests
import asyncio

responses = await asyncio.gather(
    client.chat.completions.create(...),
    client.chat.completions.create(...),
    client.chat.completions.create(...)
)
```

**JavaScript/TypeScript:**

```typescript
// Connection pooling is handled automatically
const client = new HelpingAI({ apiKey: 'your-key' });

// Concurrent requests
const responses = await Promise.all([
  client.chat.completions.create({...}),
  client.chat.completions.create({...}),
  client.chat.completions.create({...})
]);
```

## Testing

**Python:**

```python
import pytest
from unittest.mock import AsyncMock, patch
from helpingai import HelpingAI

@pytest.mark.asyncio
async def test_chat_completion():
    with patch('helpingai.HelpingAI') as mock_client:
        mock_client.return_value.chat.completions.create = AsyncMock(
            return_value={"choices": [{"message": {"content": "Test response"}}]}
        )
        client = HelpingAI(api_key="test")
        response = await client.chat.completions.create(...)
        assert response["choices"][0]["message"]["content"] == "Test response"
```

**JavaScript/TypeScript:**

```typescript
import { HelpingAI } from 'helpingai';

// Using Jest
jest.mock('helpingai');
const mockClient = HelpingAI as jest.MockedClass<typeof HelpingAI>;

test('chat completion', async () => {
  mockClient.prototype.chat.completions.create.mockResolvedValue({
    choices: [{ message: { content: 'Test response' } }]
  });

  const client = new HelpingAI({ apiKey: 'test' });
  const response = await client.chat.completions.create({...});
  expect(response.choices[0].message.content).toBe('Test response');
});
```

## Summary

Migrating from the Python SDK to the JavaScript/TypeScript SDK involves several key changes:

- **Syntax Changes**: Object-based configuration, camelCase properties, the `new` keyword
- **Async Patterns**: the `await` keyword, `Promise.all()`, `for await...of`
- **Tool System**: Function wrappers instead of decorators
- **Error Handling**: `instanceof` checks instead of exception type matching
- **Type Safety**: TypeScript provides excellent type safety with proper configuration
The core API remains consistent between both SDKs, making migration straightforward once you understand these key differences. The JavaScript/TypeScript SDK provides excellent performance and type safety, making it a great choice for modern web and Node.js applications.
For additional help with migration, refer to the API documentation and explore the examples directory for practical usage patterns.