Commit 575192e

Update Submodule vendor/llama.cpp e463bbd..4b385bf

1 parent 3dc3271 commit 575192e

2 files changed, 3 additions & 3 deletions

llama_cpp/llama_cpp.py (2 additions & 2 deletions)
@@ -3771,9 +3771,9 @@ def llama_detokenize(
 
 
 # /// Apply chat template. Inspired by hf apply_chat_template() on python.
-# /// Both "model" and "custom_template" are optional, but at least one is required. "custom_template" has higher precedence than "model"
+# ///
 # /// NOTE: This function does not use a jinja parser. It only support a pre-defined list of template. See more: https://github.com/ggerganov/llama.cpp/wiki/Templates-supported-by-llama_chat_apply_template
-# /// @param tmpl A Jinja template to use for this chat. If this is nullptr, the model's default chat template will be used instead.
+# /// @param tmpl A Jinja template to use for this chat.
 # /// @param chat Pointer to a list of multiple llama_chat_message
 # /// @param n_msg Number of llama_chat_message in this chat
 # /// @param add_ass Whether to end the prompt with the token(s) that indicate the start of an assistant message.
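The `chat` and `n_msg` parameters documented above take a C array of `llama_chat_message` structs, each a pair of C strings (`role`, `content`). A minimal ctypes sketch of how such an array can be packed from Python — the `build_chat` helper is hypothetical illustration, not part of the binding:

```python
import ctypes

# Mirror of llama.cpp's llama_chat_message struct: two C string pointers.
class llama_chat_message(ctypes.Structure):
    _fields_ = [
        ("role", ctypes.c_char_p),
        ("content", ctypes.c_char_p),
    ]

def build_chat(messages):
    """Pack (role, content) string pairs into a C array suitable for the
    `chat` / `n_msg` parameters of llama_chat_apply_template."""
    n_msg = len(messages)
    chat = (llama_chat_message * n_msg)()
    for i, (role, content) in enumerate(messages):
        chat[i].role = role.encode("utf-8")
        chat[i].content = content.encode("utf-8")
    return chat, n_msg

chat, n_msg = build_chat([
    ("system", "You are a helpful assistant."),
    ("user", "Hello!"),
])
# `chat` can now be passed as the `chat` argument and `n_msg` as `n_msg`;
# add_ass=True would append the assistant-start token(s) per the doc above.
```

Note that the actual call also needs a model (or template string) and an output buffer; this sketch only covers building the message array.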

vendor/llama.cpp (1 addition & 1 deletion): submodule pointer updated e463bbd..4b385bf

0 commit comments
