support responses api, support native message API, fix inconsistent credit consumption in chat #170
base: master
Conversation
…se output message type
…nd state management
…ure in translation tests
…arsing and align with vscode-copilot-chat extractThinkingData, otherwise it will occasionally cause a cache miss
…ing signature check and update prompt
…ing small model if no tools are used 2. add bun idleTimeout = 0 3. feat: compatible with Claude Code JSONL file usage error scenarios, delay closeBlockIfOpen and map Responses API to Anthropic, support tool_use and fix spelling errors 4. feat: add configuration management with extra prompt handling and ensure config file creation
…is incompatible with gpt-5-mini
…ssage translation
…just runServer to set verbose level correctly
…ponses-api # Conflicts: # src/start.ts
…adjusting input token calculations and handling tool prompts
Some clients, like RooCode, may send `service_tier` to the `/responses` endpoint, but Copilot does not support this field and returns an error
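A minimal sketch of this kind of fix, with illustrative names rather than the PR's actual code: strip any fields the upstream endpoint rejects before forwarding the request.

```typescript
// Fields Copilot's /responses endpoint rejects. Only service_tier is
// named in the PR discussion; treating this as a list is an assumption.
const UNSUPPORTED_FIELDS = ["service_tier"] as const

// Return a shallow copy of the payload with unsupported fields removed,
// leaving the caller's original object untouched.
export function sanitizeResponsesPayload(
  payload: Record<string, unknown>,
): Record<string, unknown> {
  const clean = { ...payload }
  for (const field of UNSUPPORTED_FIELDS) {
    delete clean[field]
  }
  return clean
}
```

Sanitizing a copy (rather than mutating in place) keeps the proxy's request logging consistent with what the client actually sent.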
… expanded reasoning options and add doc
…ndling in responses
… code skill tool_result
@ericc-ch this also fixes inconsistent credit consumption in chat and adapts the Claude Code skill tool_result. opencode had already fixed it.
Also supports the vscode extension, in case you need it: https://github.com/caozhiyuan/copilot-api/tree/feature/vscode-extension. It does not depend on Bun.
…uming premium requests (caused by skill invocations, edit hooks or to do reminders)
…ent array handling
@ericc-ch this looks like a great improvement, can you please merge?
GitHub Copilot's Responses API returns different IDs for the same item in 'added' vs 'done' events, which causes @ai-sdk/openai to throw errors:
- 'activeReasoningPart.summaryParts' undefined
- 'text part not found'

This fix:
- Tracks IDs from 'added' events and reuses them in 'done' events
- Removes empty summary arrays from reasoning items that cause AI SDK parsing issues
- Handles output_item, content_part, output_text, and response.completed events
- Synchronizes item_id for message-type outputs across all related events
… API , simpler version * fix: sync stream IDs for @ai-sdk/openai compatibility with Responses API * simpler version of #72
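The ID synchronization described above could be sketched roughly like this; the event shape and function name are assumptions for illustration, not the PR's actual implementation:

```typescript
// Simplified shape of a Responses API streaming event.
interface StreamEvent {
  type: string
  output_index: number
  item: { id: string; [key: string]: unknown }
}

// Remember the id each output item was first announced with.
const addedIds = new Map<number, string>()

// Record the id from the "added" event, then rewrite any mismatched id
// on the matching "done" event so downstream SDKs see a consistent stream.
export function syncItemId(event: StreamEvent): StreamEvent {
  if (event.type === "response.output_item.added") {
    addedIds.set(event.output_index, event.item.id)
    return event
  }
  if (event.type === "response.output_item.done") {
    const original = addedIds.get(event.output_index)
    if (original !== undefined && original !== event.item.id) {
      return { ...event, item: { ...event.item, id: original } }
    }
  }
  return event
}
```

Keying on `output_index` assumes 'added' and 'done' events for the same item share an index, which is how the pairing is described in the fix summary.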
We need wire_api = "responses". Hope these will be merged soon
✅ Successfully tested with Claude Code CLI. Thanks @caozhiyuan for this excellent work. We extensively tested your fork with our project cc-copilot-bridge, a multi-provider wrapper for Claude Code CLI. Test Results: 6/6 Passed ✅
What we tested
Our setup: We created a fork launcher script that:
Script: launch-responses-fork.sh

Recommendation: Strongly recommend merging this PR. It unlocks all Codex models for Claude Code users via Copilot, which is a significant improvement. We've documented our findings in detail here:
Thanks again for the great work! 🚀
- CHANGELOG.md: Add v1.5.0 section documenting Codex models via fork - README.md: Add "GPT Codex Models" section with setup instructions - CLAUDE.md: Update Model Compatibility Matrix (Codex now supported) - scripts/VERSION: Bump to 1.5.0 PR tracking: ericc-ch/copilot-api#170 Co-Authored-By: Claude <noreply@anthropic.com>
Nice feature, I have been testing it and it works as expected:
This pull request introduces a new configuration system, structured logging, support for the /v1/responses endpoint, and support for the Claude native message API, along with improvements to model selection and request handling. The most important changes are grouped below:
Responses API Integration:
Claude Native Message API:
Configuration Management:
Added a `src/lib/config.ts` module to provide persistent application configuration, including support for model-specific prompts, reasoning effort levels, and default model selection. Configuration is stored in a new `config.json` file in the app data directory, with automatic creation and safe permissions. [1] [2]
Added `src/lib/logger.ts` for handler-level logging, with log rotation, retention, and structured output. Integrated this logger into key request handlers for better diagnostics. [1] [2] [3] [4] [5]
Refined `src/lib/tokenizer.ts` to more accurately account for tool calls, array parameters, and model-specific behaviors (including GPT and Anthropic/Grok models). Added support for excluding certain schema keys and improved calculation for nested parameters. [1] [2] [3] [4] [5] [6] [7] [8]

Fix Credit Consumption Inconsistency: