
add armo rm verifier for stackexchange-tezos and stackexchange-overflow #19

Closed
sklisa wants to merge 4 commits into open-thoughts:main from sklisa:armo-rm-verifier

Conversation

@sklisa (Contributor) commented Feb 26, 2026

@gemini-code-assist (Contributor)

Summary of Changes

Hello @sklisa, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces the ArmoRM verifier into the existing dataset generation workflows for StackExchange Tezos and StackExchange Overflow. The primary goal is to automatically evaluate agent trajectories using a pre-trained reward model, assigning a raw score that can be used for reinforcement learning or performance analysis. This integration streamlines the process of creating datasets with built-in reward mechanisms, enhancing the ability to assess and improve agent behavior.

Highlights

  • ArmoRM Verifier Implementation: Implemented the core logic for the ArmoRM verifier, which includes parsing agent trajectories, loading the RLHFlow/ArmoRM-Llama3-8B-v0.1 sequence classification model, and calculating a raw reward score based on the trajectory.
  • StackExchange Overflow Integration: Integrated the ArmoRM verifier into the dataset generation pipeline for StackExchange Overflow, enabling automatic injection of the verifier into generated tasks and facilitating the creation of ArmoRM-verified datasets.
  • StackExchange Tezos Integration: Integrated the ArmoRM verifier into the dataset generation pipeline for StackExchange Tezos, similarly enabling automatic injection, handling data loading, task generation, post-processing, and optional trace generation and upload.
  • Task Configuration Updates: Updated task configurations to increase the verifier timeout from 720 to 1200 seconds and specified hardware resources (8 CPUs, 24GB memory, 10GB storage) required for the ArmoRM model to run effectively.
  • Dependency Pinning: Pinned the transformers library version to 4.41.2 within the verifier's test script to ensure compatibility and stability with the specific ArmoRM model being used.
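Put together, the task-configuration values above might be expressed as a task.toml fragment along these lines (a sketch: only the values come from this PR; the resource table and key names are assumptions, not the repository's actual template):

```toml
# Illustrative sketch — key names under [verifier.resources] are assumptions;
# the values (1200 s timeout, 8 CPUs, 24 GB memory, 10 GB storage) are the
# ones quoted in this PR summary.
[verifier]
timeout_sec = 1200.0

[verifier.resources]
cpus = 8
memory_gb = 24
storage_gb = 10
```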
Changelog
  • data/armo_rm_verifier/armorm_verifier.py
    • Added a new Python module containing the core logic for the ArmoRM verifier.
    • Included functions for parsing multi-turn agent trajectories.
    • Implemented loading and scoring using the 'RLHFlow/ArmoRM-Llama3-8B-v0.1' model.
    • Defined templates for 'task.toml' resource requirements and 'test.sh' setup scripts.
    • Provided an 'inject_armorm_verifier' function to apply these templates to task directories.
  • data/armo_rm_verifier/generate_overflow.py
    • Added a new Python script to generate the StackExchange Overflow dataset.
    • Automated downloading the source dataset, extracting tasks from parquet files, and injecting the ArmoRM verifier.
    • Included functionality for optional uploading of the processed dataset to Hugging Face.
  • data/armo_rm_verifier/generate_tezos.py
    • Added a new Python script to generate the StackExchange Tezos dataset.
    • Managed downloading and parsing raw data, generating base tasks, and injecting the ArmoRM verifier.
    • Incorporated standard post-processing steps like subsampling and upsampling.
    • Provided options for generating and uploading traces, as well as uploading the final tasks to Hugging Face.
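The trajectory-parsing function referenced in the changelog could look roughly like the following (a minimal sketch: the on-disk trajectory format — a JSON list of `{"role", "content"}` steps — is an assumption for illustration; only the function name appears in this PR):

```python
import json
from pathlib import Path


def parse_trajectory_authentic_multiturn(traj_path: Path) -> list[dict]:
    """Sketch of a multi-turn trajectory parser.

    Assumes the trajectory is a JSON list of {"role", "content"} steps;
    the real verifier's on-disk format may differ.
    """
    steps = json.loads(traj_path.read_text())
    messages = []
    for step in steps:
        role = step.get("role")
        content = step.get("content", "")
        # Keep only non-empty user/assistant turns for the reward model
        if role in ("user", "assistant") and content.strip():
            messages.append({"role": role, "content": content})
    return messages
```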
Activity
  • No specific activity (comments, reviews, approvals) was found in the provided context.

@gemini-code-assist (Bot) left a comment
Code Review

This pull request introduces an ArmoRM verifier and associated data generation scripts for stackexchange-tezos and stackexchange-overflow datasets. The changes are well-structured, adding a new verifier module and two new generation scripts.

My review focuses on improving the robustness and maintainability of the new scripts. I've suggested refactoring some duplicated code in the verifier template, making the TOML configuration update more resilient to changes, and ensuring the overflow data generation script correctly handles multiple parquet files. These changes will make the new data generation pipeline more reliable.

Comment on lines +57 to +62:

```python
# Use the converter directly to extract
tpc.from_parquet(
    parquet_path=str(parquet_files[0]),
    base=str(output_dir),
    on_exist="overwrite"
)
```
Severity: high

The current implementation only processes the first parquet file found (parquet_files[0]). If the source dataset mlfoundations-dev/stackexchange-overflow-sandboxes ever contains multiple parquet files, the others will be ignored, potentially leading to incomplete data. To make this more robust, you should iterate over all found parquet files.

Suggested change:

```diff
 # Use the converter directly to extract
-tpc.from_parquet(
-    parquet_path=str(parquet_files[0]),
-    base=str(output_dir),
-    on_exist="overwrite"
-)
+for parquet_file in parquet_files:
+    tpc.from_parquet(
+        parquet_path=str(parquet_file),
+        base=str(output_dir),
+        on_exist="overwrite"
+    )
```

@sklisa (Contributor, Author) replied:

This is consistent with how parquet is processed on the main branch: https://github.com/open-thoughts/OpenThoughts-Agent/blob/main/scripts/datagen/extract_tasks_from_parquet.py#L176

All 10k tasks are contained in a single parquet file.

Comment on lines +64 to +76:

```python
if not traj_path.exists():
    print(f"Error: {traj_path} not found.")
    reward_file.parent.mkdir(parents=True, exist_ok=True)
    reward_file.write_text("0")
    return

messages = parse_trajectory_authentic_multiturn(traj_path)

if not messages:
    print("Error: No valid steps found in trajectory.")
    reward_file.parent.mkdir(parents=True, exist_ok=True)
    reward_file.write_text("0")
    return
```

Severity: medium

The error handling logic for when the trajectory is not found or is empty is duplicated. This can be refactored into a helper function to improve readability and maintainability. Defining a nested helper function can encapsulate this logic cleanly.

Suggested change:

```diff
-if not traj_path.exists():
-    print(f"Error: {traj_path} not found.")
-    reward_file.parent.mkdir(parents=True, exist_ok=True)
-    reward_file.write_text("0")
-    return
-messages = parse_trajectory_authentic_multiturn(traj_path)
-if not messages:
-    print("Error: No valid steps found in trajectory.")
-    reward_file.parent.mkdir(parents=True, exist_ok=True)
-    reward_file.write_text("0")
-    return
+def _handle_verification_error(message: str):
+    """Logs an error, writes a zero reward, and ensures the reward directory exists."""
+    print(message)
+    reward_file.parent.mkdir(parents=True, exist_ok=True)
+    reward_file.write_text("0")
+
+if not traj_path.exists():
+    _handle_verification_error(f"Error: {traj_path} not found.")
+    return
+messages = parse_trajectory_authentic_multiturn(traj_path)
+if not messages:
+    _handle_verification_error("Error: No valid steps found in trajectory.")
+    return
```
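For reference, the suggested refactor runs as-is once the guards are wrapped in a function; a minimal sketch (`verify` and the placeholder score-writing step at the end are illustrative, not the verifier's actual code):

```python
from pathlib import Path


def verify(traj_path: Path, reward_file: Path) -> None:
    """Sketch of the suggested error-handling refactor; the real verifier
    parses the trajectory and scores it with ArmoRM after these guards."""

    def _handle_verification_error(message: str) -> None:
        # Log, ensure the reward directory exists, and write a zero reward.
        print(message)
        reward_file.parent.mkdir(parents=True, exist_ok=True)
        reward_file.write_text("0")

    if not traj_path.exists():
        _handle_verification_error(f"Error: {traj_path} not found.")
        return

    # ... parse the trajectory and score it with the reward model here ...
    reward_file.parent.mkdir(parents=True, exist_ok=True)
    reward_file.write_text("1")  # placeholder for the real score
```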

```python
# Get the baseline TOML and update the timeout for ArmoRM loading
base_toml = create_standard_task_toml()
# Increase verifier timeout from 720 to 1200 seconds
updated_toml = base_toml.replace("timeout_sec = 720.0", "timeout_sec = 1200.0")
```

Severity: medium

Using str.replace() to modify the TOML configuration is brittle. If the formatting in create_standard_task_toml() changes (e.g., extra space, or the value itself), this replacement will fail silently. It's more robust to parse the TOML, modify the value, and then write it back.

You could use a library like tomllib (for reading, standard in Python 3.11+) and tomli-w (for writing). This would require adding tomli-w as a dependency.

Example:

```python
import tomllib
import tomli_w

# ... in inject_armorm_verifier
base_toml_str = create_standard_task_toml()
toml_data = tomllib.loads(base_toml_str)
if 'verifier' in toml_data:
    toml_data['verifier']['timeout_sec'] = 1200.0
updated_toml = tomli_w.dumps(toml_data)
```

@sklisa sklisa closed this Mar 4, 2026
