48 changes: 48 additions & 0 deletions .claude/settings.json
@@ -0,0 +1,48 @@
{
"permissions": {
"allow": [
"Bash(git log:*)",
"Bash(git branch:*)",
"Bash(git remote:*)",
"Bash(git fetch:*)",
"Bash(git stash:*)",
"Bash(git diff:*)",
"Bash(git status:*)",
"Bash(git show:*)",
"Bash(gh repo view:*)",
"Bash(gh run list:*)",
"Bash(gh run view:*)",
"Bash(gh pr view:*)",
"Bash(gh pr diff:*)",
"Bash(gh pr list:*)",
"Bash(gh issue view:*)",
"Bash(gh release:*)",
"Bash(gh api:*)",
"Bash(rake -T:*)",
"Bash(bundle show:*)",
"Bash(bundle list:*)",
"Bash(bundle platform:*)",
"Bash(gem search:*)",
"Bash(gem list:*)",
"Bash(tree:*)",
"Bash(grep:*)",
"Bash(find:*)",
"Bash(echo:*)",
"Bash(sort:*)",
"Bash(cat:*)",
"Bash(head:*)",
"Bash(tail:*)",
"Bash(less:*)",
"Bash(wc:*)",
"Bash(ls:*)",
"Bash(pwd:*)",
"Bash(which:*)",
"Bash(type:*)",
"Bash(file:*)",
"Bash(VCR_MODE=new_episodes bundle exec ruby:*)",
"Bash(VCR_MODE=new_episodes bundle exec rake:*)",
"Bash(VCR_MODE=new_episodes rake test:*)"
],
"deny": []
}
}
2 changes: 2 additions & 0 deletions Gemfile.lock
@@ -2,6 +2,7 @@ PATH
remote: .
specs:
braintrust (0.0.12)
mustache (~> 1.0)
openssl (~> 3.3.1)
opentelemetry-exporter-otlp (~> 0.28)
opentelemetry-sdk (~> 1.3)
@@ -49,6 +50,7 @@ GEM
builder
minitest (>= 5.0)
ruby-progressbar
mustache (1.1.1)
openssl (3.3.1)
opentelemetry-api (1.7.0)
opentelemetry-common (0.23.0)
1 change: 1 addition & 0 deletions braintrust.gemspec
@@ -31,6 +31,7 @@ Gem::Specification.new do |spec|
# Runtime dependencies
spec.add_runtime_dependency "opentelemetry-sdk", "~> 1.3"
spec.add_runtime_dependency "opentelemetry-exporter-otlp", "~> 0.28"
spec.add_runtime_dependency "mustache", "~> 1.0"
Collaborator: Why a dependency on mustache? We should try to avoid external dependencies wherever possible (it invites conflicts with user apps, which limits where the SDK can be deployed).

Contributor Author: We use mustache for templates. What should I do instead?

# OpenSSL 3.3.1+ fixes macOS CRL (Certificate Revocation List) verification issues
# that occur with OpenSSL 3.6 + Ruby (certificate verify failed: unable to get certificate CRL).
100 changes: 100 additions & 0 deletions examples/prompt.rb
@@ -0,0 +1,100 @@
#!/usr/bin/env ruby
# frozen_string_literal: true

# Example: Loading and executing prompts from Braintrust
#
# This example demonstrates how to:
# 1. Create a prompt (function) on the Braintrust server
# 2. Load it using Prompt.load
# 3. Build the prompt with Mustache variable substitution
# 4. Execute the prompt with OpenAI and get a response
#
# Benefits of loading prompts:
# - Centralized prompt management in Braintrust UI
# - Version control and A/B testing for prompts
# - No code deployment needed for prompt changes
# - Works with any LLM client (OpenAI, Anthropic, etc.)
# - Uses standard Mustache templating ({{variable}}, {{object.property}})

require "bundler/setup"
require "braintrust"
require "openai"

# Initialize Braintrust with tracing
Braintrust.init

# Wrap OpenAI client for tracing
openai = Braintrust::Trace::OpenAI.wrap(OpenAI::Client.new)
Collaborator: This should be updated to match the new instrumentation API.


project_name = "ruby-sdk-examples"
prompt_slug = "greeting-prompt-#{Time.now.to_i}"

# First, create a prompt on the server
# In practice, you would create prompts via the Braintrust UI
puts "Creating prompt..."

api = Braintrust::API.new
Collaborator: Just to clarify, do we want API to be a first-class public interface, rather than a support class for enabling other features (e.g. Evals)?

api.functions.create(
project_name: project_name,
slug: prompt_slug,
function_data: {type: "prompt"},
prompt_data: {
prompt: {
type: "chat",
messages: [
{
role: "system",
content: "You are a friendly assistant. Respond in {{language}}. Keep responses brief (1-2 sentences)."
},
{
role: "user",
content: "Say hello to {{name}} and wish them a great {{time_of_day}}!"
}
]
},
options: {
model: "gpt-4o-mini",
params: {temperature: 0.7, max_tokens: 100}
}
}
)
puts "Created prompt: #{prompt_slug}"

# Load the prompt using Prompt.load
puts "\nLoading prompt..."
prompt = Braintrust::Prompt.load(project: project_name, slug: prompt_slug)
Reviewer: I wonder if, now that prompts support multiple templating languages, loading should check which templating language the prompt uses and fail to load it if that language isn't available in the given SDK.
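
A minimal sketch of what such a guard might look like inside Prompt.load. Hedged: the "template_format" field name and its values are assumptions, not confirmed fields of the Braintrust API.

# Hypothetical guard: reject prompts whose templating language this SDK
# does not implement. "template_format" is an assumed field name.
SUPPORTED_TEMPLATE_FORMATS = [nil, "mustache"].freeze

format = full_data.dig("prompt_data", "template_format")
unless SUPPORTED_TEMPLATE_FORMATS.include?(format)
  raise Error, "Prompt '#{slug}' uses unsupported template format: #{format.inspect}"
end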


puts " ID: #{prompt.id}"
puts " Slug: #{prompt.slug}"
puts " Model: #{prompt.model}"

# Build the prompt with Mustache variable substitution
puts "\nBuilding prompt with variables..."
params = prompt.build(
name: "Alice",
language: "Spanish",
time_of_day: "morning"
)

puts " Model: #{params[:model]}"
puts " Temperature: #{params[:temperature]}"
puts " Messages:"
params[:messages].each do |msg|
puts " [#{msg[:role]}] #{msg[:content]}"
end

# Execute the prompt with OpenAI
puts "\nExecuting prompt with OpenAI..."
response = openai.chat.completions.create(**params)

puts "\nResponse:"
content = response.choices.first.message.content
puts " #{content}"

# Clean up - delete the test prompt
puts "\nCleaning up..."
api.functions.delete(id: prompt.id)
puts "Done!"

# Flush traces
OpenTelemetry.tracer_provider.shutdown
1 change: 1 addition & 0 deletions lib/braintrust.rb
@@ -5,6 +5,7 @@
require_relative "braintrust/state"
require_relative "braintrust/trace"
require_relative "braintrust/api"
require_relative "braintrust/prompt"
require_relative "braintrust/internal/experiments"
require_relative "braintrust/eval"

8 changes: 8 additions & 0 deletions lib/braintrust/api/functions.rb
@@ -85,6 +85,14 @@ def invoke(id:, input:)
http_post_json("/v1/function/#{id}/invoke", payload)
end

# Get a function by ID (includes full prompt_data)
# GET /v1/function/{id}
# @param id [String] Function UUID
# @return [Hash] Full function data including prompt_data
def get(id:)
http_get("/v1/function/#{id}")
end
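
For reference, a usage sketch of the new functions#get endpoint (the UUID below is a placeholder, not a real function ID):

api = Braintrust::API.new
function = api.functions.get(id: "00000000-0000-0000-0000-000000000000") # placeholder UUID
puts function.dig("prompt_data", "options", "model")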

# Delete a function by ID
# DELETE /v1/function/{id}
# @param id [String] Function UUID
172 changes: 172 additions & 0 deletions lib/braintrust/prompt.rb
@@ -0,0 +1,172 @@
# frozen_string_literal: true

require "mustache"

module Braintrust
# Prompt class for loading and building prompts from Braintrust
#
# @example Load and use a prompt
# prompt = Braintrust::Prompt.load(project: "my-project", slug: "summarizer")
# params = prompt.build(text: "Article to summarize...")
# client.messages.create(**params)
class Prompt
attr_reader :id, :name, :slug, :project_id

# Load a prompt from Braintrust
#
# @param project [String] Project name
# @param slug [String] Prompt slug
# @param version [String, nil] Specific version (default: latest)
# @param defaults [Hash] Default variable values for build()
# @param state [State, nil] Braintrust state (default: global)
# @return [Prompt]
def self.load(project:, slug:, version: nil, defaults: {}, state: nil)
state ||= Braintrust.current_state
Collaborator (@delner, Jan 14, 2026): Should Prompt define this dependency on State, or should API? If API is meant to be a centrally used/reused public object, perhaps we should be passing that in instead of State. Using State here seems like an implementation detail; State should arguably be an internal construct. I don't see Prompt using State beyond passing it through to API.
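
A sketch of the alternative shape that comment suggests, hedged as one possible design rather than a settled API:

# Hypothetical: accept an API client directly, keeping State internal.
def self.load(project:, slug:, version: nil, defaults: {}, api: nil)
  api ||= API.new(state: Braintrust.current_state)
  result = api.functions.list(project_name: project, slug: slug)
  # ...rest unchanged
end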

raise Error, "No state available - call Braintrust.init first" unless state

api = API.new(state: state)

# Find the function by project + slug
result = api.functions.list(project_name: project, slug: slug)
function = result.dig("objects")&.first
raise Error, "Prompt '#{slug}' not found in project '#{project}'" unless function

# Fetch full function data including prompt_data
full_data = api.functions.get(id: function["id"])

new(full_data, defaults: defaults)
end

# Initialize a Prompt from function data
#
# @param data [Hash] Function data from API
# @param defaults [Hash] Default variable values for build()
def initialize(data, defaults: {})
@data = data
@defaults = stringify_keys(defaults)

@id = data["id"]
@name = data["name"]
@slug = data["slug"]
@project_id = data["project_id"]
end

# Get the raw prompt definition
# @return [Hash, nil]
def prompt
@data.dig("prompt_data", "prompt")
end

# Get the prompt messages
# @return [Array<Hash>]
def messages
prompt&.dig("messages") || []
end

# Get the model name
# @return [String, nil]
def model
@data.dig("prompt_data", "options", "model")
end

# Get model options
# @return [Hash]
def options
@data.dig("prompt_data", "options") || {}
end

# Build the prompt with variable substitution
#
# Returns a hash ready to pass to an LLM client:
# {model: "...", messages: [...], temperature: ..., ...}
#
# @param variables [Hash] Variables to substitute (e.g., {name: "Alice"})
# @param strict [Boolean] Raise error on missing variables (default: false)
# @return [Hash] Built prompt ready for LLM client
#
# @example With keyword arguments
# prompt.build(name: "Alice", task: "coding")
#
# @example With explicit hash
# prompt.build({name: "Alice"}, strict: true)
def build(variables = nil, strict: false, **kwargs)
# Support both explicit hash and keyword arguments
variables_hash = variables.is_a?(Hash) ? variables : {}
vars = @defaults.merge(stringify_keys(variables_hash)).merge(stringify_keys(kwargs))

# Render Mustache templates in messages
built_messages = messages.map do |msg|
{
role: msg["role"].to_sym,
content: render_template(msg["content"], vars, strict: strict)
}
end

# Build result with model and messages
result = {
model: model,
messages: built_messages
}

# Add params (temperature, max_tokens, etc.) to top level
params = options.dig("params")
if params.is_a?(Hash)
params.each do |key, value|
result[key.to_sym] = value
end
end

result
end

private

# Render Mustache template with variables
def render_template(text, variables, strict:)
Reviewer: Did we want to support a no-template option so people don't have to escape double braces in their prompts if they have any? Technically you can work around it in mustache by changing the delimiters with something like {{=<% %>=}}, but it might be nice to have a no-template option as well.
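
A sketch of both routes mentioned above: the in-band delimiter swap mustache already supports, and a hypothetical raw flag (the flag name is an assumption) that would skip rendering entirely:

# In-band workaround: the set-delimiter tag makes {{ }} literal text.
Mustache.render("{{=<% %>=}}literal {{braces}}, real var: <%name%>", "name" => "Alice")
# => "literal {{braces}}, real var: Alice" (the set-delimiter tag is consumed)

# Hypothetical: a raw flag on render_template that bypasses templating.
def render_template(text, variables, strict:, raw: false)
  return text if raw || !text.is_a?(String)
  Mustache.render(text, variables)
end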

Collaborator: I don't see anything specific to Prompt in this function or the variable-manipulation methods. I think this should be extracted to Internal as a general utility.

return text unless text.is_a?(String)

if strict
# Check for missing variables before rendering
missing = find_missing_variables(text, variables)
if missing.any?
raise Error, "Missing required variables: #{missing.join(", ")}"
end
end

Mustache.render(text, variables)
Reviewer: I recently found out we have custom escape logic for mustache. It exists here in Python and here in TypeScript. We should port that custom escaping over to make the experience consistent.
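
For context, a quick demonstration of mustache's default HTML escaping, which is presumably what the custom logic in the Python/TypeScript SDKs adjusts; the exact rules should be ported from those SDKs rather than guessed here:

require "mustache"

# Double braces HTML-escape interpolated values; triple braces do not.
Mustache.render("{{val}} vs {{{val}}}", "val" => "a < b")
# => "a &lt; b vs a < b"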

Collaborator: Perhaps a naive thought here, but couldn't this be done with a simple gsub instead of using this dependency?
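
A dependency-free gsub sketch for plain {{var}} interpolation; note it would not handle mustache sections, partials, escaping rules, or custom delimiters:

# Minimal substitution for plain {{var}} and {{dotted.path}} tags only.
def naive_render(text, variables)
  text.gsub(/\{\{\s*([\w.]+)\s*\}\}/) do
    path = Regexp.last_match(1).split(".")
    value = path.reduce(variables) do |scope, key|
      scope.is_a?(Hash) ? (scope[key] || scope[key.to_sym]) : nil
    end
    value.to_s
  end
end

naive_render("Hi {{user.name}}!", "user" => {"name" => "Alice"}) # => "Hi Alice!"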

Reviewer: We use mustache for templating in the UI when you create a prompt. We recently also added the ability to use no template, or nunjucks (a Jinja-like templating language). This adds the same capability to the Ruby SDK that exists in the Python and TypeScript SDKs: if a user loads a prompt they created with mustache in the UI, it can be loaded from the SDK the same way.

end

# Find variables in template that are not provided
def find_missing_variables(text, variables)
# Extract {{variable}} and {{variable.path}} patterns
# Mustache uses {{name}} syntax
text.scan(/\{\{([^}#^\/!>]+)\}\}/).flatten.map(&:strip).uniq.reject do |var|
resolve_variable(var, variables)
end
end

# Check if a variable path exists in the variables hash
def resolve_variable(path, variables)
parts = path.split(".")
value = variables

parts.each do |part|
return nil unless value.is_a?(Hash)
# Try both string and symbol keys
value = value[part] || value[part.to_sym]
return nil if value.nil?
end

value
end

# Convert hash keys to strings (handles both symbol and string keys)
def stringify_keys(hash)
return {} unless hash.is_a?(Hash)

hash.transform_keys(&:to_s).transform_values do |v|
v.is_a?(Hash) ? stringify_keys(v) : v
end
end
end
end