When I use transformers.js/ONNX, the LiquidAI/LFM2.5-1.2B-Instruct-ONNX model at q4 quantization works fine. Giving the model a weather tool and asking for the weather in SF results in: <|tool_call_start|>[get_weather(location="San Francisco")]<|tool_call_end|>
When I use node-llama-cpp with LiquidAI/LFM2.5-1.2B-Instruct-GGUF, it does not work at all at Q4, Q4_K_M, or even Q8_0. I just get plain text: "I need to check the current weather conditions for San Francisco. Let me retrieve that information for you."
If I switch to the thinking model, LiquidAI/LFM2.5-1.2B-Thinking-GGUF at Q4_K_M, it sort of works, but with lots of thinking noise: "<think> Okay, let's see. The user is asking for the weather in San Francisco. I need to check the available tools. The provided functions include get_weather, which requires a location parameter. Since the user specified San Francisco, I should call get_weather with location set to \"San Francisco\". \n\nThe instructions say I must use at least one tool from the provided list. The get_weather function is the right choice here. The parameters need to be a JSON object with location as a string. So I'll structure the tool call accordingly. \n\nWait, the user also mentioned that I should call the function without prior explanation, just the tool call. Also, the user wants at least one tool used, which I'm doing here. Since the function requires the location parameter, I have to make sure that's included. \n\nI should make sure not to add any extra text before calling the function. The response after calling the function will be the result, but since I can't show that here, just need to ensure the tool call is correct. \n\nSo the correct function call is get_weather with params.location: \"San Francisco\". I need to format that properly in the required syntax. The user also said to use at least one tool, which I am. Alright, that should do it.\n</think>\n[get_weather(params={'location': 'San Francisco'})]"
But look at the tool call difference:
[get_weather(params={'location': 'San Francisco'})]
vs <|tool_call_start|>[get_weather(location="San Francisco")]<|tool_call_end|>
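To make the difference concrete, here's a small sketch that pulls the function name and raw argument string out of both output shapes. The regex and the `extractToolCall` helper are purely illustrative (not part of node-llama-cpp or transformers.js):

```javascript
// Illustrative only: extract a tool call from the two observed output styles.
function extractToolCall(text) {
  // ONNX/transformers.js style:
  // <|tool_call_start|>[get_weather(location="San Francisco")]<|tool_call_end|>
  const wrapped = text.match(
    /<\|tool_call_start\|>\[(\w+)\((.*)\)\]<\|tool_call_end\|>/
  );
  if (wrapped) return { name: wrapped[1], args: wrapped[2] };

  // GGUF thinking style: [get_weather(params={'location': 'San Francisco'})]
  const bare = text.match(/\[(\w+)\((.*)\)\]/);
  if (bare) return { name: bare[1], args: bare[2] };

  return null;
}

const onnx = extractToolCall(
  '<|tool_call_start|>[get_weather(location="San Francisco")]<|tool_call_end|>'
);
const gguf = extractToolCall(
  "[get_weather(params={'location': 'San Francisco'})]"
);
console.log(onnx, gguf);
```

Both shapes carry the same function name, but the ONNX output uses keyword-argument syntax inside special delimiter tokens, while the GGUF thinking output uses a bare `params={...}` dict with no delimiters, so a parser written for one will miss the other.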
Is this a result of the model formats themselves, or maybe the node-llama-cpp binding?