This document provides comprehensive API documentation for the HelpingAI JavaScript SDK. It covers:
- Client Classes
- Type Definitions
- Tools System
- MCP Integration
- Error Handling
- Configuration Options
- Utilities
The main client class for interacting with the HelpingAI API.
```typescript
new HelpingAI(options?: HelpingAIOptions)
```

Parameters:

- `options` (optional): Configuration options for the client

Example:

```typescript
const client = new HelpingAI({
  apiKey: 'your-api-key',
  baseURL: 'https://api.helpingai.com/v1',
  timeout: 30000,
  maxRetries: 3,
});
```

Access to chat completion functionality.
Type: ChatCompletions
The API key used for authentication.
Type: string
The base URL for API requests.
Type: string
Create a chat completion.
```typescript
async create(request: ChatCompletionRequest): Promise<ChatCompletionResponse | AsyncIterable<ChatCompletionChunk>>
```

Parameters:

- `request`: Chat completion request configuration

Returns:

- `Promise<ChatCompletionResponse>` for non-streaming requests
- `Promise<AsyncIterable<ChatCompletionChunk>>` for streaming requests

Example:

```typescript
const response = await client.chat.completions.create({
  model: 'Dhanishtha-2.0-preview',
  messages: [{ role: 'user', content: 'Hello!' }],
  max_tokens: 100,
  temperature: 0.7,
});
```

Execute a tool directly.
```typescript
async call<T = any>(toolName: string, parameters: Record<string, any>): Promise<T>
```

Parameters:

- `toolName`: Name of the tool to execute
- `parameters`: Parameters to pass to the tool

Returns:

- `Promise<T>`: Result of the tool execution

Example:

```typescript
const result = await client.call('getWeather', {
  city: 'Paris',
  units: 'celsius',
});
```

Clean up client resources.

```typescript
async cleanup(): Promise<void>
```

Example:

```typescript
await client.cleanup();
```

Configuration options for the HelpingAI client.
```typescript
interface HelpingAIOptions {
  apiKey?: string;
  baseURL?: string;
  timeout?: number;
  maxRetries?: number;
  defaultHeaders?: Record<string, string>;
}
```

Properties:

- `apiKey` (optional): API key for authentication. If not provided, the client attempts to read it from the `HELPINGAI_API_KEY` environment variable
- `baseURL` (optional): Base URL for API requests. Default: `'https://api.helpingai.com/v1'`
- `timeout` (optional): Request timeout in milliseconds. Default: `30000`
- `maxRetries` (optional): Maximum number of retry attempts. Default: `3`
- `defaultHeaders` (optional): Default headers to include with all requests
Request configuration for chat completions.
```typescript
interface ChatCompletionRequest {
  model: string;
  messages: ChatMessage[];
  max_tokens?: number;
  temperature?: number;
  top_p?: number;
  stream?: boolean;
  tools?: Tool[];
  tool_choice?: 'auto' | 'none' | string;
  stop?: string | string[];
  presence_penalty?: number;
  frequency_penalty?: number;
  logit_bias?: Record<string, number>;
  user?: string;
}
```

Properties:

- `model`: Model identifier (e.g., `'Dhanishtha-2.0-preview'`)
- `messages`: Array of conversation messages
- `max_tokens` (optional): Maximum tokens to generate
- `temperature` (optional): Sampling temperature (0-2). Default: `1`
- `top_p` (optional): Nucleus sampling parameter (0-1). Default: `1`
- `stream` (optional): Enable streaming responses. Default: `false`
- `tools` (optional): Available tools for the model to use
- `tool_choice` (optional): Tool selection strategy
- `stop` (optional): Stop sequences
- `presence_penalty` (optional): Presence penalty (-2 to 2). Default: `0`
- `frequency_penalty` (optional): Frequency penalty (-2 to 2). Default: `0`
- `logit_bias` (optional): Token logit bias
- `user` (optional): User identifier for tracking
Individual message in a conversation.
```typescript
interface ChatMessage {
  role: 'system' | 'user' | 'assistant' | 'tool';
  content: string;
  name?: string;
  tool_calls?: ToolCall[];
  tool_call_id?: string;
}
```

Properties:

- `role`: Message role
- `content`: Message content
- `name` (optional): Name of the message sender (for tool messages)
- `tool_calls` (optional): Tool calls made by the assistant
- `tool_call_id` (optional): ID of the tool call this message responds to
Response from a chat completion request.
```typescript
interface ChatCompletionResponse {
  id: string;
  object: 'chat.completion';
  created: number;
  model: string;
  choices: ChatCompletionChoice[];
  usage: Usage;
}
```

Properties:

- `id`: Unique identifier for the completion
- `object`: Object type identifier
- `created`: Unix timestamp of creation
- `model`: Model used for the completion
- `choices`: Array of completion choices
- `usage`: Token usage information
Individual choice in a chat completion response.
```typescript
interface ChatCompletionChoice {
  index: number;
  message: ChatMessage;
  finish_reason: 'stop' | 'length' | 'tool_calls' | 'content_filter' | null;
}
```

Properties:

- `index`: Choice index
- `message`: The generated message
- `finish_reason`: Reason the generation stopped
Streaming chunk from a chat completion.
```typescript
interface ChatCompletionChunk {
  id: string;
  object: 'chat.completion.chunk';
  created: number;
  model: string;
  choices: ChatCompletionChunkChoice[];
}
```

Properties:

- `id`: Unique identifier for the completion
- `object`: Object type identifier
- `created`: Unix timestamp of creation
- `model`: Model used for the completion
- `choices`: Array of streaming choices
Individual choice in a streaming chat completion chunk.
```typescript
interface ChatCompletionChunkChoice {
  index: number;
  delta: ChatMessageDelta;
  finish_reason: 'stop' | 'length' | 'tool_calls' | 'content_filter' | null;
}
```

Properties:

- `index`: Choice index
- `delta`: Incremental message content
- `finish_reason`: Reason the generation stopped (`null` if continuing)
Incremental message content in streaming responses.
```typescript
interface ChatMessageDelta {
  role?: 'system' | 'user' | 'assistant' | 'tool';
  content?: string;
  tool_calls?: ToolCallDelta[];
}
```

Properties:

- `role` (optional): Message role (only in the first chunk)
- `content` (optional): Incremental content
- `tool_calls` (optional): Incremental tool calls
Token usage information.
```typescript
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}
```

Properties:

- `prompt_tokens`: Tokens in the prompt
- `completion_tokens`: Tokens in the completion
- `total_tokens`: Total tokens used
Tool definition interface.
```typescript
interface Tool {
  type: 'function';
  function: ToolFunction;
}
```

Properties:

- `type`: Tool type (always `'function'`)
- `function`: Function definition
Function definition for a tool.
```typescript
interface ToolFunction {
  name: string;
  description: string;
  parameters: JSONSchema;
}
```

Properties:

- `name`: Function name
- `description`: Function description
- `parameters`: JSON Schema for parameters
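As an illustration, a hand-written tool definition matching the `Tool` and `ToolFunction` shapes above might look like the following. The `getWeather` name and its schema are invented for this example, not part of the SDK:

```typescript
// Hypothetical tool definition following the Tool / ToolFunction interfaces
const weatherTool = {
  type: 'function' as const,
  function: {
    name: 'getWeather',
    description: 'Get current weather for a city',
    parameters: {
      type: 'object',
      properties: {
        city: { type: 'string', description: 'City name' },
        units: { type: 'string', enum: ['celsius', 'fahrenheit'] },
      },
      required: ['city'],
    },
  },
};
```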
Tool call made by the assistant.
```typescript
interface ToolCall {
  id: string;
  type: 'function';
  function: ToolCallFunction;
}
```

Properties:

- `id`: Unique identifier for the tool call
- `type`: Tool call type (always `'function'`)
- `function`: Function call details
Function call details.
```typescript
interface ToolCallFunction {
  name: string;
  arguments: string;
}
```

Properties:

- `name`: Function name
- `arguments`: JSON string of function arguments
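Because `arguments` arrives as a JSON string rather than an object, it must be parsed before dispatching to a tool implementation. A minimal sketch (the `call_abc123` payload is invented for illustration):

```typescript
// Hypothetical tool call as it might appear on an assistant message
const toolCall = {
  id: 'call_abc123',
  type: 'function' as const,
  function: {
    name: 'getWeather',
    arguments: '{"city":"Paris","units":"celsius"}',
  },
};

// Parse the JSON-encoded arguments before invoking the tool
const args = JSON.parse(toolCall.function.arguments);
console.log(args.city); // 'Paris'
```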
Create a tool from a function.
```typescript
function tools<T extends (...args: any[]) => any>(fn: T): Tool;
```

Parameters:

- `fn`: Function to convert to a tool

Returns:

- `Tool`: Tool definition

Example:

```typescript
const weatherTool = tools(function getWeather(
  city: string,
  units: 'celsius' | 'fahrenheit' = 'celsius'
): string {
  /**
   * Get weather information for a city
   * @param city - The city name
   * @param units - Temperature units
   */
  return `Weather in ${city}: 22°C, sunny`;
});
```

Registry for managing tools.
Register a tool.
```typescript
register(name: string, tool: Tool, implementation: Function): void
```

Parameters:

- `name`: Tool name
- `tool`: Tool definition
- `implementation`: Tool implementation function
Get a registered tool.
```typescript
get(name: string): RegisteredTool | undefined
```

Parameters:

- `name`: Tool name

Returns:

- `RegisteredTool | undefined`: The registered tool, or `undefined` if not found
List all registered tools.
```typescript
list(): RegisteredTool[]
```

Returns:

- `RegisteredTool[]`: Array of all registered tools
List names of all registered tools.
```typescript
listToolNames(): string[]
```

Returns:

- `string[]`: Array of tool names
Get the number of registered tools.
```typescript
size(): number
```

Returns:

- `number`: Number of registered tools
Clear all registered tools.
```typescript
clear(): void
```

Get the global tool registry.

```typescript
function getRegistry(): ToolRegistry;
```

Returns:

- `ToolRegistry`: The global tool registry
Get tools by name or get all tools.
```typescript
function getTools(names?: string[]): Tool[];
```

Parameters:

- `names` (optional): Array of tool names to retrieve

Returns:

- `Tool[]`: Array of tools
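To show how the registry methods fit together, here is a minimal local sketch of the register/get/size contract described above. This is an illustration of the documented behavior, not the SDK's source:

```typescript
// Minimal illustration of the ToolRegistry contract
class SketchRegistry {
  private entries = new Map<string, { tool: unknown; implementation: Function }>();

  register(name: string, tool: unknown, implementation: Function): void {
    this.entries.set(name, { tool, implementation });
  }

  get(name: string) {
    return this.entries.get(name); // undefined if not found
  }

  listToolNames(): string[] {
    return Array.from(this.entries.keys());
  }

  size(): number {
    return this.entries.size;
  }

  clear(): void {
    this.entries.clear();
  }
}

const registry = new SketchRegistry();
registry.register('getWeather', { type: 'function' }, (city: string) => `sunny in ${city}`);
console.log(registry.size()); // 1
```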
Clear the global tool registry.
```typescript
function clearRegistry(): void;
```

Client for Model Context Protocol integration.

```typescript
new MCPClient(options: MCPClientOptions)
```

Parameters:

- `options`: MCP client configuration

Connect to the MCP server.

```typescript
async connect(): Promise<void>
```

Disconnect from the MCP server.

```typescript
async disconnect(): Promise<void>
```

List available tools from the MCP server.

```typescript
async listTools(): Promise<Tool[]>
```

Returns:

- `Promise<Tool[]>`: Array of available tools

Call a tool on the MCP server.

```typescript
async callTool(name: string, arguments: Record<string, any>): Promise<any>
```

Parameters:

- `name`: Tool name
- `arguments`: Tool arguments

Returns:

- `Promise<any>`: Tool result
Configuration options for MCP client.
```typescript
interface MCPClientOptions {
  transport: MCPTransport;
  timeout?: number;
}
```

Properties:

- `transport`: Transport configuration
- `timeout` (optional): Connection timeout in milliseconds
Transport configuration for MCP.
```typescript
type MCPTransport = MCPStdioTransport | MCPSSETransport | MCPWebSocketTransport;
```

Standard I/O transport for MCP.

```typescript
interface MCPStdioTransport {
  type: 'stdio';
  command: string;
  args?: string[];
  env?: Record<string, string>;
}
```

Properties:

- `type`: Transport type (`'stdio'`)
- `command`: Command to execute
- `args` (optional): Command arguments
- `env` (optional): Environment variables
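As an illustration, MCP client options using a stdio transport might look like the following. The `npx my-mcp-server` command is a placeholder; substitute the launch command for your actual MCP server:

```typescript
// Hypothetical MCPClientOptions for a stdio-based server
const mcpOptions = {
  transport: {
    type: 'stdio' as const,
    command: 'npx',
    args: ['my-mcp-server'], // placeholder server package
    env: { LOG_LEVEL: 'info' },
  },
  timeout: 10000,
};
```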
Server-Sent Events transport for MCP.
```typescript
interface MCPSSETransport {
  type: 'sse';
  url: string;
  headers?: Record<string, string>;
}
```

Properties:

- `type`: Transport type (`'sse'`)
- `url`: SSE endpoint URL
- `headers` (optional): HTTP headers
WebSocket transport for MCP.
```typescript
interface MCPWebSocketTransport {
  type: 'websocket';
  url: string;
  protocols?: string[];
}
```

Properties:

- `type`: Transport type (`'websocket'`)
- `url`: WebSocket URL
- `protocols` (optional): WebSocket protocols
Base error class for all SDK errors.
```typescript
class HelpingAIError extends Error {
  constructor(message: string, cause?: Error);
}
```

Properties:

- `message`: Error message
- `cause` (optional): Underlying error cause
Error from the HelpingAI API.
```typescript
class APIError extends HelpingAIError {
  constructor(
    message: string,
    public status: number,
    public code?: string,
    public response?: any
  )
}
```

Properties:

- `status`: HTTP status code
- `code` (optional): API error code
- `response` (optional): Full API response
Authentication-related error.
```typescript
class AuthenticationError extends APIError {
  constructor(message: string);
}
```

Rate limiting error.

```typescript
class RateLimitError extends APIError {
  constructor(
    message: string,
    public retryAfter?: number
  )
}
```

Properties:

- `retryAfter` (optional): Seconds to wait before retrying
Request timeout error.
```typescript
class TimeoutError extends HelpingAIError {
  constructor(message: string);
}
```

Input validation error.

```typescript
class ValidationError extends HelpingAIError {
  constructor(
    message: string,
    public field?: string
  )
}
```

Properties:

- `field` (optional): Field that failed validation
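This class hierarchy enables `instanceof`-based handling: checking the most specific class first, then falling back to broader ones. The sketch below defines stand-in classes locally to keep the example self-contained; in real code you would import them from `'helpingai'`:

```typescript
// Stand-in classes mirroring the documented hierarchy (import from 'helpingai' in real code)
class HelpingAIError extends Error {}
class APIError extends HelpingAIError {
  constructor(message: string, public status: number) {
    super(message);
  }
}
class RateLimitError extends APIError {
  constructor(message: string, public retryAfter?: number) {
    super(message, 429);
  }
}

// Check the most specific class first, then fall back
function describe(error: unknown): string {
  if (error instanceof RateLimitError) {
    return `rate limited; retry after ${error.retryAfter ?? 'unknown'}s`;
  }
  if (error instanceof APIError) {
    return `API error (HTTP ${error.status})`;
  }
  if (error instanceof HelpingAIError) {
    return `SDK error: ${error.message}`;
  }
  return 'unexpected error';
}

console.log(describe(new RateLimitError('slow down', 30))); // 'rate limited; retry after 30s'
```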
The SDK uses the following default configuration:
```typescript
const defaultConfig = {
  baseURL: 'https://api.helpingai.com/v1',
  timeout: 30000,
  maxRetries: 3,
  defaultHeaders: {
    'User-Agent': 'helpingai/1.0.0',
    'Content-Type': 'application/json',
  },
};
```

The SDK recognizes the following environment variables:

- `HELPINGAI_API_KEY`: Default API key
- `HELPINGAI_BASE_URL`: Default base URL
- `HELPINGAI_TIMEOUT`: Default timeout in milliseconds
- `HELPINGAI_MAX_RETRIES`: Default maximum retry attempts
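In a shell environment, these defaults might be set like this (the values shown are placeholders):

```shell
export HELPINGAI_API_KEY="your-api-key"
export HELPINGAI_BASE_URL="https://api.helpingai.com/v1"
export HELPINGAI_TIMEOUT=30000
export HELPINGAI_MAX_RETRIES=3
```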
You can override defaults when creating a client:
```typescript
const client = new HelpingAI({
  apiKey: 'your-api-key',
  baseURL: 'https://custom-endpoint.com/v1',
  timeout: 60000,
  maxRetries: 5,
  defaultHeaders: {
    'Custom-Header': 'value',
  },
});
```

Check if a response is a streaming response.
```typescript
function isStreamingResponse(response: any): response is AsyncIterable<ChatCompletionChunk>;
```

Parameters:

- `response`: Response to check

Returns:

- `boolean`: True if the response is streaming

Example:

```typescript
const response = await client.chat.completions.create({
  model: 'Dhanishtha-2.0-preview',
  messages: [{ role: 'user', content: 'Hello' }],
  stream: true,
});

if (isStreamingResponse(response)) {
  for await (const chunk of response) {
    console.log(chunk.choices[0].delta.content);
  }
}
```

Check if a message contains tool calls.
```typescript
function isToolCall(message: ChatMessage): message is ChatMessage & { tool_calls: ToolCall[] };
```

Parameters:

- `message`: Message to check

Returns:

- `boolean`: True if the message contains tool calls
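Its behavior can be sketched as a simple narrowing check. This is a local illustration of what such a guard likely tests, not the SDK's implementation:

```typescript
// Sketch: a message "contains tool calls" when tool_calls is a non-empty array
function isToolCallSketch(message: { tool_calls?: unknown[] }): boolean {
  return Array.isArray(message.tool_calls) && message.tool_calls.length > 0;
}

console.log(isToolCallSketch({ tool_calls: [{ id: 'call_1' }] })); // true
console.log(isToolCallSketch({})); // false
```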
Extract content from a chat completion response.
```typescript
function extractContent(response: ChatCompletionResponse): string;
```

Parameters:

- `response`: Chat completion response

Returns:

- `string`: Extracted content

Example:

```typescript
const response = await client.chat.completions.create({
  model: 'Dhanishtha-2.0-preview',
  messages: [{ role: 'user', content: 'Hello' }],
});

const content = extractContent(response);
console.log(content);
```

Convert a streaming response to a complete string.
```typescript
async function streamToString(stream: AsyncIterable<ChatCompletionChunk>): Promise<string>;
```

Parameters:

- `stream`: Streaming response

Returns:

- `Promise<string>`: Complete response content

Example:

```typescript
const stream = await client.chat.completions.create({
  model: 'Dhanishtha-2.0-preview',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true,
});

const fullContent = await streamToString(stream);
console.log(fullContent);
```

Validate an API key format.
```typescript
function validateApiKey(apiKey: string): boolean;
```

Parameters:

- `apiKey`: API key to validate

Returns:

- `boolean`: True if the API key format is valid
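The exact format rules are not specified here, but a sketch of such a validator, assuming only that a key must be a non-empty string without whitespace, might look like:

```typescript
// Illustrative validator only: the SDK's real format rules may differ
function validateApiKeySketch(apiKey: string): boolean {
  return typeof apiKey === 'string' && apiKey.length > 0 && !/\s/.test(apiKey);
}

console.log(validateApiKeySketch('sk-abc123')); // true
console.log(validateApiKeySketch('')); // false
```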
Format an error for display.
```typescript
function formatError(error: Error): string;
```

Parameters:

- `error`: Error to format

Returns:

- `string`: Formatted error message
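A sketch of one plausible formatter, prefixing the message with the error's class name (a local illustration; the SDK's actual output format may differ):

```typescript
// Illustrative formatter: "<ErrorName>: <message>"
function formatErrorSketch(error: Error): string {
  return `${error.name}: ${error.message}`;
}

console.log(formatErrorSketch(new RangeError('value out of bounds'))); // 'RangeError: value out of bounds'
```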
You can provide a custom HTTP client implementation:
```typescript
interface HTTPClient {
  request(config: RequestConfig): Promise<Response>;
}

const customClient = new HelpingAI({
  apiKey: 'your-api-key',
  httpClient: myCustomHttpClient,
});
```

Add request interceptors for logging, authentication, etc.:
```typescript
const client = new HelpingAI({
  apiKey: 'your-api-key',
  requestInterceptor: config => {
    console.log('Making request:', config);
    return config;
  },
});
```

Add response interceptors for processing responses:
```typescript
const client = new HelpingAI({
  apiKey: 'your-api-key',
  responseInterceptor: response => {
    console.log('Received response:', response);
    return response;
  },
});
```

Implement custom retry logic:
```typescript
const client = new HelpingAI({
  apiKey: 'your-api-key',
  retryConfig: {
    maxRetries: 5,
    retryDelay: attempt => Math.pow(2, attempt) * 1000,
    retryCondition: error => {
      return error instanceof RateLimitError || error instanceof TimeoutError;
    },
  },
});
```

The SDK automatically manages HTTP connections for optimal performance:
```typescript
// Reuse the same client instance for multiple requests
const client = new HelpingAI({ apiKey: 'your-api-key' });

// Multiple requests will reuse connections
const response1 = await client.chat.completions.create({...});
const response2 = await client.chat.completions.create({...});
```

For long-running applications, properly clean up resources:
```typescript
const client = new HelpingAI({ apiKey: 'your-api-key' });

try {
  // Use the client
  const response = await client.chat.completions.create({...});
} finally {
  // Clean up resources
  await client.cleanup();
}
```

When using streaming, handle backpressure appropriately:
```typescript
const stream = await client.chat.completions.create({
  model: 'Dhanishtha-2.0-preview',
  messages: [{ role: 'user', content: 'Long response' }],
  stream: true,
});

const chunks: string[] = [];
for await (const chunk of stream) {
  if (chunk.choices[0].delta.content) {
    chunks.push(chunk.choices[0].delta.content);
    // Process chunks in batches to avoid memory issues
    if (chunks.length >= 100) {
      await processChunks(chunks);
      chunks.length = 0;
    }
  }
}

// Process remaining chunks
if (chunks.length > 0) {
  await processChunks(chunks);
}
```

For older browsers, you may need polyfills:
```html
<!-- For fetch API -->
<script src="https://polyfill.io/v3/polyfill.min.js?features=fetch"></script>

<!-- For async/await -->
<script src="https://polyfill.io/v3/polyfill.min.js?features=es2017"></script>
```

When using the SDK in browsers, ensure your server sends proper CORS headers:
```javascript
// Server-side CORS configuration
app.use(
  cors({
    origin: 'https://your-domain.com',
    credentials: true,
  })
);
```

Never expose API keys in client-side code:
```typescript
// ❌ Bad - API key exposed in browser
const client = new HelpingAI({
  apiKey: 'sk-your-secret-key',
});

// ✅ Good - Use a proxy endpoint
const response = await fetch('/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ message: 'Hello' }),
});
```

For testing, you can mock the HelpingAI client:
```typescript
import { HelpingAI } from 'helpingai';

// Mock the client
jest.mock('helpingai');
const mockClient = HelpingAI as jest.MockedClass<typeof HelpingAI>;

// Setup mock responses
mockClient.prototype.chat.completions.create.mockResolvedValue({
  choices: [{ message: { content: 'Mocked response' } }],
});
```

The SDK provides test utilities:
```typescript
import { createMockClient, createMockResponse } from 'helpingai/testing';

const mockClient = createMockClient();
const mockResponse = createMockResponse({
  content: 'Test response',
});

mockClient.chat.completions.create.mockResolvedValue(mockResponse);
```

Key changes in v1.x:
- Constructor changes:

  ```typescript
  // v0.x
  const client = new HelpingAI('api-key');

  // v1.x
  const client = new HelpingAI({ apiKey: 'api-key' });
  ```

- Tool system changes:

  ```typescript
  // v0.x
  @tool
  function myTool() { ... }

  // v1.x
  const myTool = tools(function myTool() { ... });
  ```

- Error handling changes:

  ```typescript
  // v0.x
  catch (error) {
    if (error.code === 'rate_limit') { ... }
  }

  // v1.x
  catch (error) {
    if (error instanceof RateLimitError) { ... }
  }
  ```
A complete example combining tool definitions, client setup, and chat:

```typescript
import { HelpingAI, tools } from 'helpingai';

// Define tools
const weatherTool = tools(function getWeather(city: string): string {
  return `Weather in ${city}: 22°C, sunny`;
});

const calculatorTool = tools(function calculate(expression: string): number {
  return eval(expression); // Use a proper math parser in production
});

// Create client
const client = new HelpingAI({
  apiKey: process.env.HELPINGAI_API_KEY,
  timeout: 30000,
});

// Chat function
async function chat(message: string): Promise<string> {
  try {
    const response = await client.chat.completions.create({
      model: 'Dhanishtha-2.0-preview',
      messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: message },
      ],
      tools: [weatherTool, calculatorTool],
      tool_choice: 'auto',
      max_tokens: 1000,
      temperature: 0.7,
    });
    return response.choices[0].message.content;
  } catch (error) {
    console.error('Chat error:', error);
    return 'Sorry, I encountered an error. Please try again.';
  }
}

// Usage
chat("What's the weather in Paris and what's 15 * 23?").then(console.log).catch(console.error);
```

A streaming variant with simple token statistics:

```typescript
async function streamingChat(message: string): Promise<void> {
  const stream = await client.chat.completions.create({
    model: 'Dhanishtha-2.0-preview',
    messages: [{ role: 'user', content: message }],
    stream: true,
    max_tokens: 500,
  });

  let content = '';
  let tokenCount = 0;
  const startTime = Date.now();

  console.log('🤖 Assistant: ');
  for await (const chunk of stream) {
    if (chunk.choices[0].delta.content) {
      const deltaContent = chunk.choices[0].delta.content;
      content += deltaContent;
      tokenCount++;
      // Stream to console
      process.stdout.write(deltaContent);
    }
    if (chunk.choices[0].finish_reason) {
      const duration = Date.now() - startTime;
      console.log(`\n\n📊 Stats: ${tokenCount} tokens in ${duration}ms`);
      break;
    }
  }
}
```

This completes the comprehensive API documentation for the HelpingAI JavaScript SDK. The documentation covers all major components, types, and usage patterns with practical examples.