
feat: add /context command to display context information #817

Open
anilzeybek wants to merge 2 commits into MoonshotAI:main from anilzeybek:feat/context-command

Conversation


@anilzeybek anilzeybek commented Jan 30, 2026

Related Issue

No related issue.

Description

This commit introduces the /context command, which shows information about the current context, such as the token usage ratio. The command is especially useful when using Kimi through ACP.
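
For illustration, the command's output might look roughly like this (all numbers invented; exact wording and indentation follow the implementation reviewed below, and the maximum context size here is just a placeholder):

Context Info:
  Total messages: 12
  Checkpoints: 2
  Token usage: 8,421 / 131,072 (6.4%)
  Messages by role:
    assistant: 5
    system: 1
    tool: 2
    user: 4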

Checklist

  • I have read the CONTRIBUTING document.
  • I have linked the related issue, if any.
  • I have added tests that prove my fix is effective or that my feature works.
  • I have run make gen-changelog to update the changelog.
  • I have run make gen-docs to update the user documentation.


Copilot AI review requested due to automatic review settings January 30, 2026 19:01

@devin-ai-integration devin-ai-integration bot left a comment


Devin Review found 1 potential issue.

View issue and 3 additional flags in Devin Review.



Copilot AI left a comment


Pull request overview

This PR adds a new /context command to the CLI that displays contextual information about the current session, including message counts, token usage, checkpoints, and message distribution by role. This is particularly useful for users accessing Kimi through ACP (Agent Communication Protocol) to monitor their context usage.

Changes:

  • Added /context slash command implementation in src/kimi_cli/soul/slash.py
  • Updated English and Chinese documentation with command description and usage examples
  • Updated changelogs to document the new feature

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 3 comments.

Summary per file:

  • src/kimi_cli/soul/slash.py: Implements the new /context command that displays session info including messages, tokens, checkpoints, and role distribution
  • docs/zh/release-notes/changelog.md: Adds Chinese changelog entry for the new command
  • docs/zh/reference/slash-commands.md: Adds Chinese documentation for the /context command with usage examples
  • docs/en/release-notes/changelog.md: Adds English changelog entry for the new command
  • docs/en/reference/slash-commands.md: Adds English documentation for the /context command with usage examples
  • CHANGELOG.md: Adds top-level changelog entry for the new command


Comment on lines 96 to 106
f" Total messages: {len(history)}\n",
f" Checkpoints: {ctx.n_checkpoints}\n",
]

# Add token usage with percentage if LLM is available
if soul.runtime.llm is not None:
max_context = soul.runtime.llm.max_context_size
usage_percent = (token_count / max_context * 100) if max_context > 0 else 0
lines.append(f" Token usage: {token_count:,} / {max_context:,} ({usage_percent:.1f}%)\n")
else:
lines.append(f" Token count: {token_count:,}\n")

Copilot AI Jan 30, 2026


The newline characters are being added inconsistently. Lines 96 and 97 append \n to the strings, but this creates extra blank lines in the output since "\n".join(lines) on line 118 will already insert newlines between elements. This results in double line breaks for these entries. Remove the \n characters from these lines to match the pattern used for other lines in the output.

Suggested change
f" Total messages: {len(history)}\n",
f" Checkpoints: {ctx.n_checkpoints}\n",
]
# Add token usage with percentage if LLM is available
if soul.runtime.llm is not None:
max_context = soul.runtime.llm.max_context_size
usage_percent = (token_count / max_context * 100) if max_context > 0 else 0
lines.append(f" Token usage: {token_count:,} / {max_context:,} ({usage_percent:.1f}%)\n")
else:
lines.append(f" Token count: {token_count:,}\n")
f" Total messages: {len(history)}",
f" Checkpoints: {ctx.n_checkpoints}",
]
# Add token usage with percentage if LLM is available
if soul.runtime.llm is not None:
max_context = soul.runtime.llm.max_context_size
usage_percent = (token_count / max_context * 100) if max_context > 0 else 0
lines.append(f" Token usage: {token_count:,} / {max_context:,} ({usage_percent:.1f}%)")
else:
lines.append(f" Token count: {token_count:,}")

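For anyone skimming, a minimal standalone Python snippet (independent of this PR) showing why the trailing \n characters double the line breaks when combined with "\n".join():

lines = ["Context Info:", " Total messages: 3\n", " Checkpoints: 1\n"]
print("\n".join(lines))
# Context Info:
#  Total messages: 3
#                       <- blank line: embedded "\n" plus the join separator
#  Checkpoints: 1
#                       <- trailing blank line from the last "\n"

lines = ["Context Info:", " Total messages: 3", " Checkpoints: 1"]
print("\n".join(lines))
# Context Info:
#  Total messages: 3
#  Checkpoints: 1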
Comment on lines 96 to 106
f" Total messages: {len(history)}\n",
f" Checkpoints: {ctx.n_checkpoints}\n",
]

# Add token usage with percentage if LLM is available
if soul.runtime.llm is not None:
max_context = soul.runtime.llm.max_context_size
usage_percent = (token_count / max_context * 100) if max_context > 0 else 0
lines.append(f" Token usage: {token_count:,} / {max_context:,} ({usage_percent:.1f}%)\n")
else:
lines.append(f" Token count: {token_count:,}\n")

Copilot AI Jan 30, 2026


The newline character is being added inconsistently. Lines 104 and 106 append \n to the strings, but this creates extra blank lines in the output since "\n".join(lines) on line 118 will already insert newlines between elements. This results in double line breaks for these entries. Remove the \n characters from these lines to match the pattern used for other lines in the output.

Suggested change
f" Total messages: {len(history)}\n",
f" Checkpoints: {ctx.n_checkpoints}\n",
]
# Add token usage with percentage if LLM is available
if soul.runtime.llm is not None:
max_context = soul.runtime.llm.max_context_size
usage_percent = (token_count / max_context * 100) if max_context > 0 else 0
lines.append(f" Token usage: {token_count:,} / {max_context:,} ({usage_percent:.1f}%)\n")
else:
lines.append(f" Token count: {token_count:,}\n")
f" Total messages: {len(history)}",
f" Checkpoints: {ctx.n_checkpoints}",
]
# Add token usage with percentage if LLM is available
if soul.runtime.llm is not None:
max_context = soul.runtime.llm.max_context_size
usage_percent = (token_count / max_context * 100) if max_context > 0 else 0
lines.append(f" Token usage: {token_count:,} / {max_context:,} ({usage_percent:.1f}%)")
else:
lines.append(f" Token count: {token_count:,}")

Comment on lines 83 to 118
@registry.command
async def context(soul: KimiSoul, args: str):
    """Display context information (messages, tokens, checkpoints)"""
    ctx = soul.context
    history = ctx.history

    if not history:
        wire_send(TextPart(text="Context is empty - no messages yet."))
        return

    token_count = ctx.token_count
    lines = [
        "Context Info:",
        f" Total messages: {len(history)}\n",
        f" Checkpoints: {ctx.n_checkpoints}\n",
    ]

    # Add token usage with percentage if LLM is available
    if soul.runtime.llm is not None:
        max_context = soul.runtime.llm.max_context_size
        usage_percent = (token_count / max_context * 100) if max_context > 0 else 0
        lines.append(f" Token usage: {token_count:,} / {max_context:,} ({usage_percent:.1f}%)\n")
    else:
        lines.append(f" Token count: {token_count:,}\n")

    # Count messages by role
    role_counts: dict[str, int] = {}
    for msg in history:
        role_counts[msg.role] = role_counts.get(msg.role, 0) + 1

    if role_counts:
        lines.append(" Messages by role:")
        for role, count in sorted(role_counts.items()):
            lines.append(f" {role}: {count}")

    wire_send(TextPart(text="\n".join(lines)))

Copilot AI Jan 30, 2026


The new /context command lacks test coverage. The PR description claims "I have added tests that prove my fix is effective or that my feature works," but no tests were found for this command in the test suite. While other slash commands like /yolo, /compact, /clear, and /debug may also lack dedicated tests, the PR explicitly states that tests were added, which is not reflected in the changes. Consider adding tests similar to the pattern in tests/core/test_kimisoul_slash_commands.py to verify the command's functionality, especially for different scenarios like empty context, context with messages, and with/without LLM availability.
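
A rough sketch of what such tests could look like, assuming the registered command remains directly awaitable, that wire_send can be patched on kimi_cli.soul.slash, and that pytest-asyncio (or the project's equivalent) is available; the real fixtures in tests/core/test_kimisoul_slash_commands.py may differ, and the stand-in objects below are purely hypothetical:

from types import SimpleNamespace
from unittest.mock import patch

import pytest

from kimi_cli.soul import slash


def make_soul(roles, token_count=0, llm=None):
    """Build a minimal fake soul exposing only the attributes /context reads."""
    history = [SimpleNamespace(role=r) for r in roles]
    ctx = SimpleNamespace(history=history, token_count=token_count, n_checkpoints=1)
    return SimpleNamespace(context=ctx, runtime=SimpleNamespace(llm=llm))


@pytest.mark.asyncio
async def test_context_empty_history():
    soul = make_soul(roles=[])
    with patch.object(slash, "wire_send") as send:
        await slash.context(soul, "")
    text = send.call_args.args[0].text
    assert "Context is empty" in text


@pytest.mark.asyncio
async def test_context_reports_usage_with_llm():
    llm = SimpleNamespace(max_context_size=1000)
    soul = make_soul(roles=["user", "assistant", "user"], token_count=250, llm=llm)
    with patch.object(slash, "wire_send") as send:
        await slash.context(soul, "")
    text = send.call_args.args[0].text
    # 250 / 1000 tokens is 25.0%, formatted with thousands separators
    assert "Token usage: 250 / 1,000 (25.0%)" in text
    assert "user: 2" in text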

@anilzeybek (Author)

I've addressed the review comments. Could you resolve them if the changes look good?

