
Conversation

@roomote (Contributor)

roomote bot commented Jan 12, 2026

Related GitHub Issue

Closes: #10601

Description

This PR attempts to address Issue #10601 by enabling prompt caching for the Cerebras zai-glm-4.7 model.

The change is straightforward: supportsPromptCache is updated from false to true in the model configuration at packages/types/src/providers/cerebras.ts.

This aligns with the Cerebras documentation, which confirms that zai-glm-4.7 supports prompt caching.
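
For context, here is a minimal sketch of what the flag change might look like, assuming a cerebrasModels record in packages/types/src/providers/cerebras.ts. The export name and the surrounding fields (maxTokens, contextWindow, supportsImages) are illustrative assumptions; only supportsPromptCache reflects the actual change in this PR.

```typescript
// Hypothetical excerpt of packages/types/src/providers/cerebras.ts.
// Field names other than supportsPromptCache are illustrative assumptions.
export const cerebrasModels = {
	"zai-glm-4.7": {
		maxTokens: 16_384, // illustrative value
		contextWindow: 131_072, // illustrative value
		supportsImages: false, // illustrative value
		supportsPromptCache: true, // previously false; the one-line change in this PR
	},
	// ...other Cerebras models unchanged
} as const
```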

Test Procedure

  • Ran existing Cerebras tests via cd src && npx vitest run api/providers/__tests__/cerebras.spec.ts; all 17 tests pass (a minimal example assertion is sketched after this list)
  • The change is a configuration flag update with no logic changes
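
As a hedged illustration, a single vitest assertion like the following could lock in the new flag value going forward. The @roo-code/types import path and the cerebrasModels export are assumptions, not verified against the repository.

```typescript
// Hypothetical addition to src/api/providers/__tests__/cerebras.spec.ts.
// The "@roo-code/types" package name and cerebrasModels export are assumed.
import { describe, it, expect } from "vitest"
import { cerebrasModels } from "@roo-code/types"

describe("zai-glm-4.7 model info", () => {
	it("advertises prompt cache support", () => {
		expect(cerebrasModels["zai-glm-4.7"].supportsPromptCache).toBe(true)
	})
})
```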

Pre-Submission Checklist

  • Issue Linked: This PR is linked to an approved GitHub Issue (see "Related GitHub Issue" above).
  • Scope: My changes are focused on the linked issue (one major feature/fix per PR).
  • Self-Review: I have performed a thorough self-review of my code.
  • Testing: New and/or updated tests have been added to cover my changes (if applicable).
  • Documentation Impact: I have considered if my changes require documentation updates (see "Documentation Updates" section below).
  • Contribution Guidelines: I have read and agree to the Contributor Guidelines.

Documentation Updates

  • No documentation updates are required.

Additional Notes

Feedback and guidance are welcome.


Important

Enable prompt caching for zai-glm-4.7 by updating supportsPromptCache to true in cerebras.ts.

  • Behavior:
    • Enable prompt caching for zai-glm-4.7 by setting supportsPromptCache to true in cerebras.ts.
  • Testing:
    • All 17 existing tests in cerebras.spec.ts pass, confirming existing behavior is unchanged.

This description was created by Ellipsis for 856ec7a.

@roomote (Contributor Author)

roomote bot commented Jan 12, 2026

Rooviewer: See task on Roo Cloud

Review complete. No issues found.

The change correctly enables prompt caching for the zai-glm-4.7 model, aligning with Cerebras documentation.

Mention @roomote in a comment to request specific changes to this pull request or fix all unresolved issues.

