story/HOP-8 #9
base: develop
Conversation
Can one of the admins verify this patch?
hospexplorer/ask/views.py
Outdated

answer_text = ""
if "choices" in llm_response and llm_response["choices"]:
    answer_text = llm_response["choices"][0].get("message", {}).get("content", "")
I think the try/except block from the development branch is missing here.
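A minimal sketch of what that guard might look like, assuming the development branch wrapped the parsing in a try/except; the exception types here are assumptions, not the branch's actual code.

answer_text = ""
try:
    if "choices" in llm_response and llm_response["choices"]:
        answer_text = llm_response["choices"][0].get("message", {}).get("content", "")
except (TypeError, IndexError, AttributeError):
    # Malformed payloads (e.g. choices[0] not being a dict) fall back to "".
    answer_text = ""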
hospexplorer/ask/views.py
Outdated

elif "message" in llm_response:  # Fallback for mock response format
    answer_text = llm_response["message"]

QARecord.objects.create(
I think you want to create the record before sending it to the LLM, since there can be a significant lag between sending and answering, and then update it with the answer once you have it. This will change a little anyway when polling is added, I presume.
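Roughly the create-then-update flow being suggested, I think. QARecord comes from the diff; the field names and the send_to_llm helper are hypothetical placeholders.

# Create the record first so the question is persisted even if the LLM
# call is slow or fails; field names here are assumptions.
record = QARecord.objects.create(
    question=question_text,
    user=request.user,
)
llm_response = send_to_llm(question_text)  # hypothetical helper for the LLM call
record.answer = answer_text  # parsed from llm_response as in the diff above
record.save(update_fields=["answer"])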
And one thing I'm just thinking of: we probably want to be able to indicate error responses (e.g. when the request times out), so the model will need to accommodate that.
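One way the model could accommodate that, sketched with assumed field and choice names; the existing fields are elided.

from django.db import models

class QARecord(models.Model):
    # Existing fields elided; status and error_message are assumptions.
    STATUS_PENDING = "pending"
    STATUS_ANSWERED = "answered"
    STATUS_ERROR = "error"
    STATUS_CHOICES = [
        (STATUS_PENDING, "Pending"),
        (STATUS_ANSWERED, "Answered"),
        (STATUS_ERROR, "Error"),
    ]

    status = models.CharField(max_length=16, choices=STATUS_CHOICES, default=STATUS_PENDING)
    error_message = models.TextField(blank=True)  # e.g. "request timed out"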
…and recording answer based on llm behavior
hospexplorer/ask/views.py
Outdated

answer_text = ""
if "choices" in llm_response and llm_response["choices"]:
    answer_text = llm_response["choices"][0].get("message", {}).get("content", "")
elif "message" in llm_response:
I think there was a comment here in some version of the file explaining this was for the mock server? That should be here as well. Or, even better, the mock server should return the same format as the real one.
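For the second option, the mock server could simply return the same shape the real server uses, making the elif fallback unnecessary; the payload below is an assumed example, not the project's actual mock.

# Mock response shaped like the real server's response, so views.py
# only ever needs the "choices" parsing branch.
mock_response = {
    "choices": [
        {"message": {"role": "assistant", "content": "This is a mock answer."}}
    ]
}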
…nse is now the same" This reverts commit 215f66a.
     error_msg = f"Unexpected response from server: {e}"
 except Exception as e:
-    return JsonResponse({"error": f"Failed to connect to server: {e}"}, status=500)
+    error_msg = f"Failed to connect to server: {e}"
The exceptions need to be logged.
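A minimal sketch of that logging, assuming a module-level logger; the exception types mirror the messages in the diff but are otherwise guesses.

import logging

logger = logging.getLogger(__name__)

try:
    llm_response = send_to_llm(question_text)  # hypothetical helper, as above
except (KeyError, ValueError) as e:
    # logger.exception records the message plus the full traceback.
    logger.exception("Unexpected response from server")
    error_msg = f"Unexpected response from server: {e}"
except Exception as e:
    logger.exception("Failed to connect to server")
    error_msg = f"Failed to connect to server: {e}"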
Guidelines for Pull Requests
If you haven't yet read our code review guidelines, please do so. You can find them here.
Please confirm the following by adding an x for each item (turn [ ] into [x]).
Please provide a brief description of your ticket
Store questions and answers for each user
Description
Properties:
Question
- question
- timestamp
- user asking the question
- answer
Answer
- all the details sent from the AI
- timestamp
- corresponding question
- user that asked the question
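A minimal Django sketch of the two models described above; the field types, on_delete choices, and related_name are assumptions, not the PR's actual code.

from django.conf import settings
from django.db import models


class Question(models.Model):
    question = models.TextField()
    timestamp = models.DateTimeField(auto_now_add=True)
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    # "answer" is reachable via the related_name on Answer below.


class Answer(models.Model):
    details = models.JSONField()  # all the details sent from the AI
    timestamp = models.DateTimeField(auto_now_add=True)
    question = models.OneToOneField(Question, on_delete=models.CASCADE, related_name="answer")
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)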
HOP-8
Are there any other pull requests that this one depends on?
story/HOP-7
Anything else the reviewer needs to know?
... describe here ...