[ENHANCEMENT] Expose more powerful VS Code language-analysis APIs as tools for the Roo Code LLM #11903

@tinyvan

Description

Problem (one or two sentences)

Roo Code's LLM has good tools for basic file read/write/edit operations, but none for VS Code's powerful built-in language features: find references, go to definition, workspace-wide symbol search, call hierarchy, and others. This limits how deeply and accurately the agent can understand and reason about large codebases.

Context (who is affected and when)

The Roo Code LLM should be able to call additional high-value VS Code commands as tools — the same way it can already read/write files — so it can ask the editor for definitions, references, symbols across the workspace, call hierarchies, incoming/outgoing calls, etc. This would let the agent truly understand project structure and give much smarter, more precise answers and code changes.

Desired behavior (conceptual, not technical)

  • These calls should be fast and lightweight when possible (most are already very efficient in VS Code).
  • Return only the necessary information (e.g. locations, symbol names, short previews) to keep context size reasonable.
  • Prefer simple, clear tool schemas (e.g. uri + position → list of locations or symbols).
  • No breaking changes to existing tools/behavior.
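
As a sketch of what "simple, clear tool schemas" could look like (the type and field names below are illustrative, not an existing Roo Code API), a references tool might take a URI plus a position and return only a compact list of locations with optional previews:

```typescript
// Hypothetical tool schema sketch — names are illustrative, not an existing Roo Code API.

// Input: a document URI plus a zero-based position, mirroring VS Code's Position.
interface FindReferencesInput {
  uri: string;       // e.g. "file:///src/app.ts"
  line: number;      // zero-based line of the symbol
  character: number; // zero-based column of the symbol
}

// Output: only what the LLM needs — locations and a short preview, no ASTs.
interface ReferenceLocation {
  uri: string;
  range: { startLine: number; startChar: number; endLine: number; endChar: number };
  preview?: string; // single-line snippet around the reference
}

interface FindReferencesOutput {
  references: ReferenceLocation[];
}

// Example payloads the tool might exchange:
const input: FindReferencesInput = { uri: "file:///src/app.ts", line: 12, character: 8 };
const output: FindReferencesOutput = {
  references: [
    {
      uri: "file:///src/checkout.ts",
      range: { startLine: 40, startChar: 10, endLine: 40, endChar: 24 },
      preview: "await processPayment(order);",
    },
  ],
};
```

Keeping the output flat and small like this is what makes the calls cheap in context terms, independent of how fast the underlying provider is.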

Constraints / preferences (optional)

No response

Request checklist

  • I've searched existing Issues and Discussions for duplicates
  • This describes a specific problem with clear context and impact

Roo Code Task Links (optional)

No response

Acceptance criteria (optional)

Given a user asks Roo Code something like "where is processPayment used across the project?"
When the agent needs to answer
Then it can call a tool (e.g. executeReferenceProvider) and correctly list files/locations where the symbol is referenced

Given a user says "show me the call hierarchy for authenticateUser"
When the agent analyzes impact
Then it can use prepareCallHierarchy + outgoing/incoming calls tools to build and describe the call graph

Given a user asks "find me the definition of UserService"
When the agent navigates code
Then it can use executeDefinitionProvider and jump to (or describe) the correct location

And the tool responses are returned in a format that's easy for the LLM to parse and reason over
But unnecessary large ASTs or full file contents are not included unless requested
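
For the call-hierarchy case, the "easy for the LLM to parse" format could be one short line per edge. As a sketch (a hypothetical helper, not existing Roo Code code), a function over objects mirroring the relevant fields of `vscode.CallHierarchyIncomingCall`:

```typescript
// Hypothetical helper: compress call-hierarchy results into short lines an LLM
// can reason over, instead of returning raw CallHierarchyItem objects.

// Minimal shape mirroring the fields we need from vscode.CallHierarchyIncomingCall.
interface IncomingCall {
  fromName: string; // caller symbol name
  fromUri: string;  // caller file
  line: number;     // zero-based line of the call site
}

function describeIncomingCalls(target: string, calls: IncomingCall[]): string[] {
  // Convert the zero-based line to the one-based numbering editors display.
  return calls.map(
    (c) => `${c.fromName} -> ${target} (${c.fromUri}:${c.line + 1})`
  );
}

// Example: two callers of authenticateUser.
const summary = describeIncomingCalls("authenticateUser", [
  { fromName: "loginHandler", fromUri: "src/routes/login.ts", line: 41 },
  { fromName: "refreshSession", fromUri: "src/auth/session.ts", line: 9 },
]);
// summary[0] === "loginHandler -> authenticateUser (src/routes/login.ts:42)"
```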

Proposed approach (optional)

Add new tool functions that wrap these VS Code API commands (available via vscode.commands.executeCommand):

    - vscode.executeWorkspaceSymbolProvider(query)
    - vscode.executeReferenceProvider(uri, position)
    - vscode.executeDefinitionProvider(uri, position)
    - vscode.executeDocumentSymbolProvider(uri)
    - vscode.prepareCallHierarchy(uri, position)
    - vscode.provideIncomingCalls(item)
    - vscode.provideOutgoingCalls(item)

Start with the most commonly useful ones: references, definitions, workspace symbols, and document symbols.

Return clean JSON-like structures (locations as {uri, range}, symbols with name/kind/location, etc.) to keep token usage low.
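
The trimming step could look like the sketch below. Inside an extension the locations would come from `vscode.commands.executeCommand('vscode.executeReferenceProvider', uri, position)`, which needs the extension host; the trimming itself is shown as a pure function over location-shaped objects:

```typescript
// Sketch of the "clean JSON-like structures" idea. In an extension, locations
// would come from:
//   const locs = await vscode.commands.executeCommand<vscode.Location[]>(
//     'vscode.executeReferenceProvider', uri, position);
// The trimming is a pure function, shown here over plain location-shaped objects.

interface RawLocation {
  uri: { toString(): string };
  range: {
    start: { line: number; character: number };
    end: { line: number; character: number };
  };
}

// Keep only {uri, range} with flat numeric fields — compact and easy to parse.
function toCompactLocations(locations: RawLocation[]) {
  return locations.map((loc) => ({
    uri: loc.uri.toString(),
    range: [
      loc.range.start.line,
      loc.range.start.character,
      loc.range.end.line,
      loc.range.end.character,
    ],
  }));
}

const compact = toCompactLocations([
  {
    uri: { toString: () => "file:///src/app.ts" },
    range: { start: { line: 3, character: 2 }, end: { line: 3, character: 16 } },
  },
]);
// compact[0] === { uri: "file:///src/app.ts", range: [3, 2, 3, 16] }
```

Encoding the range as a four-number array rather than nested objects is one way to shave tokens without losing information; the exact encoding is an open design choice.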

Trade-offs / risks (optional)

  • Some calls (especially workspace-wide) can be slow on very large projects → consider optional limits or pagination.
  • Slightly larger context size if many locations are returned → the LLM can be instructed to ask for summaries or focus on specific files.
  • Dependency on language server quality — but that's already true for VS Code itself.
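
One way to realize the "optional limits or pagination" mitigation (a sketch, not existing Roo Code behavior) is to cap each response and report whether more results are available, so the LLM can explicitly ask for the next page:

```typescript
// Hypothetical pagination helper: cap workspace-wide results and tell the LLM
// whether more are available, so it can request the next page when needed.

function paginate<T>(items: T[], offset: number, limit: number) {
  const page = items.slice(offset, offset + limit);
  return {
    items: page,
    total: items.length,
    // null signals "no more pages"; otherwise the offset to request next.
    nextOffset: offset + page.length < items.length ? offset + page.length : null,
  };
}

const refs = ["a.ts:10", "b.ts:4", "c.ts:22", "d.ts:7"];
const first = paginate(refs, 0, 2);
// first.items = ["a.ts:10", "b.ts:4"], first.nextOffset = 2
const second = paginate(refs, 2, 2);
// second.nextOffset = null — no more pages
```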

Metadata

Assignees

No one assigned

Labels

Enhancement (New feature or request)
