fix(agents): persist output_key when before_agent_callback short-circuits LlmAgent#4838

Open
gautamvarmadatla wants to merge 1 commit into google:main from gautamvarmadatla:fix/output-key-before-agent-callback
Conversation

@gautamvarmadatla
Problem:
When LlmAgent is configured with both output_key and before_agent_callback, and the before-agent callback returns a types.Content, execution is correctly short-circuited and the response is returned to the user. However, session.state[output_key] is never updated, because the normal LlmAgent output-saving path is bypassed.

Solution:
Override _handle_before_agent_callback in LlmAgent to call __maybe_save_output_to_state() on any event returned by the callback. This keeps the fix scoped to LlmAgent (which owns output_key), avoids introducing BaseAgent changes for subclass-specific behavior, and covers both run_async and run_live through the shared callback path.
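The pattern being fixed can be sketched with a minimal, self-contained stand-in (all class and method names here mirror the PR's description but are simplified stand-ins, not the real google.adk implementation):

```python
# Minimal sketch of the fix: when a before-agent callback short-circuits
# execution, the agent must still persist the returned content under
# output_key. Names are hypothetical simplifications of the ADK classes.

from dataclasses import dataclass, field


@dataclass
class Session:
    state: dict = field(default_factory=dict)


class LlmAgent:
    def __init__(self, output_key=None, before_agent_callback=None):
        self.output_key = output_key
        self.before_agent_callback = before_agent_callback

    def _maybe_save_output_to_state(self, session, text):
        # Mirrors the private __maybe_save_output_to_state helper: persist
        # the final text under output_key, if one was configured.
        if self.output_key is not None:
            session.state[self.output_key] = text

    def run(self, session, user_message):
        if self.before_agent_callback is not None:
            cached = self.before_agent_callback(session)
            if cached is not None:
                # Short-circuit: skip the model call, but still save the
                # output -- this save is exactly what the PR adds.
                self._maybe_save_output_to_state(session, cached)
                return cached
        return self._call_model(user_message)  # normal path, not shown

    def _call_model(self, user_message):
        raise NotImplementedError
```

With this in place, a callback returning a cached answer both short-circuits the model call and leaves `session.state["result"]` populated, matching the behavior described in the Solution above.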

Testing Plan

Added a regression test that runs LlmAgent with output_key and a short-circuiting before_agent_callback, then checks that session.state["result"] contains the returned text.
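The regression test described above takes roughly the following shape (sketched against a hypothetical stub agent rather than the real LlmAgent and testing_utils, so the identifiers here are illustrative only):

```python
# Sketch of the regression test: configure output_key plus a
# short-circuiting before-agent callback, then assert the state write.
# StubAgent is a stand-in, not the real google.adk LlmAgent.

class StubAgent:
    def __init__(self, output_key, before_agent_callback):
        self.output_key = output_key
        self.before_agent_callback = before_agent_callback

    def run(self, session_state):
        content = self.before_agent_callback()
        if content is not None:
            # Post-fix behavior: the short-circuit path also saves output.
            session_state[self.output_key] = content
            return content


def test_output_key_saved_when_callback_short_circuits():
    state = {}
    agent = StubAgent(output_key="result",
                      before_agent_callback=lambda: "cached answer")
    returned = agent.run(state)
    assert returned == "cached answer"
    assert state["result"] == "cached answer"
```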

Unit Tests:

  • I have added or updated unit tests for my change.
  • All unit tests pass locally.

Manual End-to-End (E2E) Tests:
Ran the repro code from #4837; in that example, session.state["result"] should equal "cached answer".

Checklist

  • I have read the CONTRIBUTING.md document.
  • I have performed a self-review of my own code.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have added tests that prove my fix is effective or that my feature works.
  • New and existing unit tests pass locally with my changes.
  • I have manually tested my changes end-to-end.
  • Any dependent changes have been merged and published in downstream modules.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a bug in LlmAgent where the output_key was not being updated in the session state when a before_agent_callback returned a value, effectively short-circuiting the agent's normal execution path. The fix involves overriding the _handle_before_agent_callback method to ensure that the __maybe_save_output_to_state() method is explicitly called if the callback returns an event, thereby preserving the intended output persistence behavior. A new unit test has been added to confirm this fix.

Highlights

  • LlmAgent output persistence: Ensured that the output specified by output_key is correctly saved to the session state even when the before_agent_callback short-circuits the LlmAgent's execution.
  • Callback handling: Overrode the _handle_before_agent_callback method in LlmAgent to explicitly call __maybe_save_output_to_state() for any event returned by the callback.
  • Regression testing: Added a new regression test to validate that the output_key is correctly updated in session state when a before_agent_callback returns a value, preventing the agent from running.


Changelog
  • src/google/adk/agents/llm_agent.py
    • Overrode the _handle_before_agent_callback method.
    • Added logic to call __maybe_save_output_to_state() if the callback returns an event.
  • tests/unittests/agents/test_llm_agent_output_save.py
    • Imported CallbackContext and testing_utils.
    • Added a new asynchronous test case to verify output_key persistence when before_agent_callback short-circuits.

@adk-bot added the core label ("This issue is related to the core interface and implementation") on Mar 14, 2026.
@gemini-code-assist (bot) left a comment:
Code Review

This pull request addresses a bug where output_key was not being persisted when a before_agent_callback short-circuits an LlmAgent. The fix involves overriding _handle_before_agent_callback in LlmAgent to ensure __maybe_save_output_to_state is called on the event returned from the callback. This is a clean and well-scoped solution. The addition of a specific regression test confirms the fix is effective. The changes are correct and well-implemented.


Successfully merging this pull request may close these issues.

LlmAgent output_key is not written to session state when before_agent_callback short-circuits execution
