fix: handle direct model answers in ReACT loop #763
markstur wants to merge 1 commit into generative-computing:main
Conversation
The ReACT framework now properly handles cases where the model provides a direct answer without calling tools. Previously, these answers were ignored and the loop would continue until exhausting the budget. Added test coverage for both scenarios (no tools, unused tools). Fixes: generative-computing#762 Signed-off-by: Mark Sturdevant <mark.sturdevant@ibm.com>
The PR description has been updated. Please fill out the template for your PR to be reviewed.
Note to reviewers: I do have concerns about my limited test environment and side effects. I'm seeing this as a good fix for when react_using_mellea with DuckDuckGo starts to find nothing. I suspect it could be affected by DDGS rate limiting, but I'm not sure why it presents this way. The bottom line is that there is a case where we have an answer in the step value (is_complete) but we miss our is_final handling. I wonder if there is any reason to only do this check after running out of iterations (last-ditch handling), but it seems more right to me to just use the value when this elif case happens.
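Roughly, the elif I'm describing looks like this. This is a minimal sketch of the control flow only; `Step`, `tool_calls`, `is_complete`, and `budget` are illustrative names, not the actual Mellea API:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    content: str
    tool_calls: list = field(default_factory=list)
    is_complete: bool = False  # the model produced an answer in this step

def react_loop(steps, budget=5):
    """Use a direct answer when it appears instead of spinning until the budget is gone."""
    for step in steps[:budget]:
        if step.tool_calls:
            # normal path: execute tools and feed observations back (stubbed here)
            continue
        elif step.is_complete:
            # the fix: a step with an answer and no tool calls is returned
            # immediately, rather than being ignored until iterations run out
            return step.content
    return None  # budget exhausted without an answer

print(react_loop([Step("The answer is 42.", is_complete=True)]))
```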
jakelorocco
left a comment
Hi @markstur, thanks for the PR! I think this might not be an ideal way to fix the issue. I do agree that our current version of the react thinking pattern does get stuck in loops (especially for simpler answers that can be accomplished in one response).
However, I don't think we should automatically assume that a step with no tool calls and a response is the final answer. There are moments where the model will output its thoughts as intermediate steps and then continue.
The issue I see the most (especially with smaller models) is that the model thinks its final tool has already been called. As a result, it just keeps repeating the same output and gets stuck until the loop budget is exhausted.
I think there are a few potential solutions:
- We could change the requirements for calling a tool to finalize. A lot of react patterns just look for "final_answer:" and parse the output. We could add this in addition to the current tool call approach (see the sketch after this list).
- We could try to detect repetitions and prompt the model out of those situations. I'm not quite sure whether the exact repetitions are specific to granite models / small models / our prompts, though.
- We could add a subsequent LLM call after each step: a requirement that validates whether the question has been answered. This adds overhead to each loop iteration, but the cost is likely low since the context should be cached. Then, if that requirement passes, we could do one more prompt to extract the final answer using the tool / the current approach.
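As a rough illustration of the first bullet, here is a minimal sketch of "final_answer:" parsing. The marker string and regex are assumptions for the sketch, not current Mellea behavior; repetition detection from the second bullet could be layered on in the same place:

```python
import re

# hypothetical convention: the model ends with "final_answer: <text>"
FINAL_ANSWER_RE = re.compile(r"final_answer:\s*(.+)", re.IGNORECASE | re.DOTALL)

def extract_final_answer(model_output: str):
    """Return the answer text if the model signalled completion, else None."""
    match = FINAL_ANSWER_RE.search(model_output)
    return match.group(1).strip() if match else None

assert extract_final_answer("Thought: done.\nfinal_answer: 42") == "42"
assert extract_final_answer("Thought: I should call the search tool.") is None
```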
Thanks @jakelorocco. Yes, I agree my naive approach probably assumes too much. I was unsuccessful when I first tried to fix this with some alternative approaches, but I think I need to revisit them because the problem was flaky at the time. I think I can reproduce it more reliably now (maybe). I'll see if I can get good results that align with your bulleted suggestions.