Add CI build caching and improve benchmark workflow #1148
sbryngelson wants to merge 9 commits into MFlowCode:master from
Conversation
GitHub-hosted runners: Add `actions/cache@v4` to test.yml and coverage.yml, caching the `build/` directory keyed by matrix config and source file hashes. Partial cache hits via `restore-keys` enable incremental builds.

Self-hosted HPC runners (Phoenix, Frontier, Frontier AMD): Add a persistent build cache that symlinks `build/` to `$HOME/scratch/.mfc-ci-cache/<config>/build`. This ensures cached artifacts persist across CI runs regardless of which runner instance picks up the job.

Key details:
- Cross-runner workspace path fixup via sed on CMake files
- flock-based locking prevents concurrent builds from corrupting the cache
- Retry logic uses targeted `rm` (staging/install only) instead of `mfc.sh clean`
- Phoenix releases the lock after build, before tests

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
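For the GitHub-hosted side, a content-addressed cache key in the spirit of `actions/cache`'s `hashFiles()` can be sketched in shell. The config name, file layout, and key format below are illustrative, not the workflow's exact key:

```shell
#!/bin/sh
# Sketch: build a cache key from the matrix config plus a hash of the
# source files, so any source change produces a new key.
set -eu
cd "$(mktemp -d)"

mkdir -p src
printf 'program mfc\nend program\n' > src/main.f90

config="ubuntu-gpu-mpi"    # stands in for the matrix config
srchash="$(find src -type f | sort | xargs cat | sha256sum | cut -c1-16)"
key="build-$config-$srchash"
echo "$key"
```

Because the hash is deterministic, identical sources always map to the same key, which is what makes exact-match restores safe.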
CodeAnt AI is reviewing your PR.
Note: Reviews paused. This branch appears to be under active development, so CodeRabbit has automatically paused this review to avoid overwhelming you with comments as new commits arrive. This behavior is configurable.
📝 Walkthrough

Adds persistent per-config build caching for GitHub and self-hosted CI: a new cache-setup script with locking and workspace-path fixes, workflow cache restore steps for GitHub runners, and HPC build scripts updated to source the script and perform targeted cleanup on failures.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Workflow as Workflow
    participant Runner as Runner (job script)
    participant CacheSetup as setup-build-cache.sh
    participant LockFile as Lock File
    participant CacheDir as Cache Directory
    participant Build as Build Process
    Workflow->>Runner: start job
    Runner->>CacheSetup: source(cluster, device, interface)
    CacheSetup->>CacheSetup: compute cache key & path
    CacheSetup->>LockFile: acquire exclusive flock (1h timeout)
    alt lock acquired
        LockFile-->>CacheSetup: granted
        CacheSetup->>CacheDir: ensure dir, remove stale build symlink
        CacheSetup->>Runner: create symlink `build` -> cache path
        CacheSetup->>CacheDir: read/write workspace marker
        CacheSetup->>CacheDir: patch CMake-related paths if workspace changed
        Runner->>Build: run build using cached `build` dir
        Build-->>Runner: success / failure
        alt success
            Runner->>LockFile: release lock
        else failure
            Runner->>CacheDir: rm -rf build/staging/* build/lock.yaml
            Runner->>LockFile: release lock
        end
    else timeout
        LockFile-->>CacheSetup: timeout
        CacheSetup-->>Runner: fallback to local build (no cache)
    end
```
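The locking flow in the diagram can be sketched in shell. This is a simplified, hypothetical standalone version of what setup-build-cache.sh does, with a 5-second timeout standing in for the real 1-hour wait and made-up paths:

```shell
#!/bin/sh
# Sketch of the flock-protected cache attach with a local-build fallback.
set -eu
cd "$(mktemp -d)"

cache_dir="$PWD/cache"
mkdir -p "$cache_dir"

exec 9>"$cache_dir/.lock"       # open the lock file on fd 9
if flock -w 5 9; then
    rm -rf build                # drop any stale dir or symlink
    ln -s "$cache_dir" build    # safe: we hold the exclusive lock
    echo "cache attached"
else
    exec 9>&-                   # close the fd; we never got the lock
    rm -f build                 # a stale symlink must not survive here
    mkdir -p build
    echo "lock timeout: building locally without cache"
fi
```

With no contention the lock is granted immediately and `build` becomes a symlink into the cache; on timeout the script falls back to a plain local `build/` directory.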
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 5 passed | ❌ 1 failed (1 warning)
PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

CodeAnt AI finished reviewing your PR.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @.github/scripts/setup-build-cache.sh:
- Around line 74-78: The sed substitution uses the raw variable _old_workspace
as a regex which breaks on metacharacters; modify the script around the find ...
-exec sed -i call to first escape regex/sed-special characters in _old_workspace
(e.g. implement a small helper like sed_escape that uses printf '%s'
"$_old_workspace" piped to sed to backslash-escape characters such as / \ | & [
] * . + ? ^ $), then use the escaped value in the sed "s|ESCAPED_OLD|$(pwd)|g"
invocation so replacements work reliably for any workspace path.
- Around line 48-54: The fallback branch currently runs mkdir -p "build" which
is a no-op if build is a symlink to the shared cache; before creating the local
directory remove any existing symlink named build so we don't accidentally write
into the shared cache without the lock. In the else block (around the existing
exec 9>&-, mkdir -p "build", and return 0) add a check for a symlink named
"build" (e.g., test -L "build") and unlink/rm that symlink before running mkdir
-p "build" so the local directory is created safely while leaving real
directories untouched.
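The escaping suggestion can be sketched as follows. `sed_escape` is a hypothetical helper name; it escapes only BRE metacharacters that stay literal when backslashed (escaping `+` or `?` would instead activate GNU extensions), and a path containing the `|` delimiter itself is out of scope for this sketch:

```shell
#!/bin/sh
# Sketch: escape sed/BRE metacharacters in the old workspace path before
# using it as the pattern of s|OLD|NEW|g. Paths are made up.
set -eu

sed_escape() {
    # Escape ] [ \ . * ^ $ so they match literally in a BRE pattern.
    printf '%s' "$1" | sed 's/[][\.*^$]/\\&/g'
}

old='/home/ci/actions-runner+6/_work/MFC [gpu]/build'
new='/scratch/ws/build'

escaped="$(sed_escape "$old")"
result="$(printf '%s' "path=$old" | sed "s|$escaped|$new|g")"
echo "$result"    # path=/scratch/ws/build
```

Without the escaping, the literal `[gpu]` in the old path would be parsed as a bracket expression and the substitution would silently fail to match.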
3 issues found across 6 files

Confidence score: 3/5

- There's some risk here because the most severe issue is a race condition in `.github/workflows/frontier/build.sh`, where deleting `build/install` during retries can corrupt concurrent jobs still reading binaries. `.github/scripts/setup-build-cache.sh` can leave a shared cache symlink intact after a lock timeout, which undermines the fallback and can cause concurrent cache use without a lock. Severity is mid-to-high (5-7/10) and impacts build reliability rather than runtime behavior, so it's likely safe but could cause CI flakiness.
- Pay close attention to `.github/workflows/frontier/build.sh` and `.github/scripts/setup-build-cache.sh`: cache lock and retry/fallback handling.
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name=".github/scripts/setup-build-cache.sh">
<violation number="1" location=".github/scripts/setup-build-cache.sh:51">
P2: On cache-lock timeout, an existing build/ symlink is left intact. This means the job can keep using the shared cache without a lock, contradicting the "build locally without cache" fallback and risking concurrent cache corruption. Remove build/ (symlink or dir) before creating the local build directory in this path.</violation>
<violation number="2" location=".github/scripts/setup-build-cache.sh:53">
P2: The fallback path after lock timeout doesn't exit cleanly when the script is executed directly (not sourced). `return 0` fails in that context, and `|| true` masks the error, causing the script to incorrectly continue into cache setup logic even though it decided to build locally.</violation>
</file>
<file name=".github/workflows/frontier/build.sh">
<violation number="1" location=".github/workflows/frontier/build.sh:52">
P1: Race condition: deleting `build/install` during retry can corrupt test runs from other concurrent jobs that have released the cache lock but are still reading installed binaries. Only `build/staging` and `build/lock.yaml` should be cleared on retry - the install directory needs to remain intact for concurrent readers.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
- Only remove `build/staging` (not `build/install`) on retry, so concurrent test jobs reading installed binaries are not disrupted
- Remove stale symlink in lock-timeout fallback path to prevent writing into the shared cache without holding the lock
- Remove redundant `flock --unlock` (closing the fd is sufficient)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
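The targeted retry cleanup can be sketched like this; the directory layout and file names are made up for the demo:

```shell
#!/bin/sh
# Sketch: after a failed build, clear only staging/ and the lock file.
# build/install stays intact so concurrent test jobs can keep reading
# installed binaries.
set -eu
cd "$(mktemp -d)"

mkdir -p build/staging build/install
touch build/staging/obj.o build/install/simulation build/lock.yaml

# Targeted cleanup instead of a full `./mfc.sh clean`:
rm -rf build/staging build/lock.yaml

ls build    # install
```

The design choice is deliberate: object files in staging may be poisoned by the failed run, but installed binaries from the previous successful build are still in use by other jobs.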
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @.github/workflows/frontier_amd/build.sh:
- Around line 51-52: Update the echo message to accurately describe what is
being removed: change the existing log that says "Clearing staging/install" to
mention only the actual targets being deleted (e.g., "Clearing build/staging and
build/lock.yaml") so CI logs match the rm -rf command; keep the rm command (rm
-rf build/staging build/lock.yaml) unchanged to preserve the intentional
decision not to remove build/install.
2 issues found across 4 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name=".github/workflows/frontier/build.sh">
<violation number="1" location=".github/workflows/frontier/build.sh:52">
P2: Retry cleanup no longer removes `build/install` despite the log claiming staging/install are cleared; a failed install can persist and poison the retry build.</violation>
</file>
<file name=".github/workflows/frontier_amd/build.sh">
<violation number="1" location=".github/workflows/frontier_amd/build.sh:52">
P2: Retry cleanup no longer removes build/install even though the retry message says it does. Leaving a partial install can contaminate the next build attempt. Either clean build/install or update the log to reflect the new behavior.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Pull request overview
This PR implements CI build caching to speed up GitHub Actions workflows and reduce build times on self-hosted HPC runners. The implementation uses GitHub's native actions/cache@v4 for hosted runners and a custom persistent caching solution for self-hosted systems (Phoenix, Frontier, Frontier AMD).
Changes:
- GitHub-hosted runners cache the `build/` directory keyed by matrix config + source hashes
- Self-hosted HPC runners symlink `build/` to a persistent cache in `$HOME/scratch/.mfc-ci-cache/`
- Cross-runner workspace path fixup via sed enables incremental builds when jobs land on different runner instances
- Retry logic uses targeted removal (`build/staging`, `build/install`, `build/lock.yaml`) instead of a full clean to preserve the cache
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| `.github/workflows/test.yml` | Adds build cache restoration for GitHub-hosted runners with matrix-specific keys |
| `.github/workflows/coverage.yml` | Adds build cache restoration for coverage workflow with simpler key (no matrix) |
| `.github/scripts/setup-build-cache.sh` | New script implementing persistent cache with flock-based locking and path fixup |
| `.github/workflows/phoenix/test.sh` | Integrates cache setup and releases lock before long-running tests |
| `.github/workflows/frontier/build.sh` | Integrates cache setup and updates retry logic |
| `.github/workflows/frontier_amd/build.sh` | Integrates cache setup and updates retry logic |
The echo said "Clearing staging/install" but build/install is intentionally preserved to avoid disrupting concurrent test jobs. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
actions/checkout@v4 defaults to clean: true, which runs git clean -ffdx. This follows the build/ symlink into the shared cache directory and deletes all cached artifacts (staging, install, venv), defeating the purpose of the persistent cache and causing SIGILL errors from partially destroyed build artifacts. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Codecov Report: ✅ All modified and coverable lines are covered by tests.

```
@@           Coverage Diff            @@
##           master    #1148    +/-  ##
==========================================
+ Coverage   44.02%   44.07%   +0.04%
==========================================
  Files          70       70
  Lines       20659    20431     -228
  Branches     2059     1974      -85
==========================================
- Hits         9096     9004      -92
+ Misses      10373    10291      -82
+ Partials     1190     1136      -54
```

☔ View full report in Codecov by Sentry.
Benchmarks build PR and master in parallel; sharing a cache key causes collisions. Skip cache setup when `run_bench == "bench"` so each benchmark builds from scratch.

Also fix two issues in the benchmark workflow trigger:
- Cross-repo PRs don't populate `pull_requests[]`; fall back to searching by head SHA so the PR author is correctly detected.
- Only count approvals from users with write/maintain/admin permission, filtering out AI bot approvals (Copilot, Qodo).
- Remove wilfonba auto-run; only sbryngelson auto-runs benchmarks.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
CodeAnt AI is running incremental review.

CodeAnt AI incremental review completed.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @.github/workflows/bench.yml:
- Around line 63-75: The current APPROVERS extraction uses select(.state ==
"APPROVED") on all reviews and can pick stale approvals; update the jq pipeline
used to set APPROVERS so it first reduces reviews to each user's latest review
(e.g., sort_by(.submitted_at) | group_by(.user.login) | map(last)) and then
selects those with .state == "APPROVED"; in short, modify the gh api call that
populates APPROVERS to compute per-user last review before filtering for .state
== "APPROVED" so the rest of the loop (variables APPROVERS, APPROVED and the
permission check) works correctly.
When the cache moves between runner instances (e.g. actions-runner-6 to actions-runner-1), the sed path replacement only updated staging/ CMake files. Config files in install/ (.pc, .cmake) still had the old runner path, causing silo/HDF5 to link against nonexistent paths and h5dump to fail on all tests. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Updating .pc and .cmake config files with sed is insufficient — the MFC executables (simulation, pre_process, post_process) and static libraries have the old runner workspace path baked in at compile time. When the cache moves between runner instances, these binaries fail at runtime. Replace the install/ sed fix with rm -rf install/ so CMake re-links and re-installs all binaries with correct paths. The staging/ object files remain valid, so this is a re-link, not a full rebuild. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
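The re-link fix can be sketched like this; the marker file name and directory layout are illustrative, not the script's actual names:

```shell
#!/bin/sh
# Sketch: when the cached build came from a different workspace path, remove
# install/ so CMake re-links binaries with correct absolute paths, but keep
# staging/ (object files remain valid).
set -eu
cd "$(mktemp -d)"

build="$PWD/build"
mkdir -p "$build/staging" "$build/install"
touch "$build/staging/obj.o" "$build/install/simulation"
echo "/old/runner/workspace" > "$build/.workspace"

if [ "$(cat "$build/.workspace")" != "$PWD" ]; then
    rm -rf "$build/install"          # force a re-link, not a rebuild
    echo "$PWD" > "$build/.workspace"
fi

ls "$build"    # staging
```

Because only install/ is dropped, the next build reuses every compiled object and pays just the link and install cost.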
Replace the shared cache (with flock, sed path fixups, and workspace tracking) with per-runner caches keyed by RUNNER_NAME. Each runner always uses the same workspace path, so CMake's absolute paths are always correct — no cross-runner path issues, no locking needed. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
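The final per-runner design can be sketched as follows; the key format and default runner name are illustrative:

```shell
#!/bin/sh
# Sketch: key the cache by (cluster, device, interface, RUNNER_NAME) so each
# runner instance reuses only its own builds. Absolute CMake paths then never
# change between runs, and no locking is needed.
set -eu
cd "$(mktemp -d)"
HOME="$PWD"    # sandbox the demo under a tmpdir

runner="${RUNNER_NAME:-actions-runner-6}"
cache_dir="$HOME/scratch/.mfc-ci-cache/phoenix-gpu-acc-$runner/build"

mkdir -p "$cache_dir"
rm -rf build
ln -s "$cache_dir" build
echo "build -> $(readlink build)"
```

The trade-off is more disk usage (one cache per runner instance) in exchange for deleting the flock, sed fixup, and workspace-marker machinery entirely.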
CodeAnt AI incremental review completed.
The prefix fallback can restore a cache built on a runner with AVX-512 onto a runner without it, causing SIGILL in Chemistry tests. Without restore-keys, only exact key matches are used — source changes trigger a full rebuild but binaries are always compatible with the runner. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
/config

🛠️ Local configuration file settings:

# .pr_agent.toml
[github_app]
pr_commands = [
"/review",
"/improve",
]
handle_push_trigger = true
push_commands = ["/improve"]
[pr_reviewer] # (all fields optional)
num_max_findings = 10 # how many items to surface
require_tests_review = true
extra_instructions = """
Project context and review priorities: .github/copilot-instructions.md
Coding standards and common pitfalls: docs/documentation/contributing.md
GPU macro API: docs/documentation/gpuParallelization.md
Prioritize correctness over style (formatting is enforced by pre-commit hooks).
Key areas: logic bugs, numerical issues,
array bounds (non-unity lower bounds with ghost cells), MPI halo exchange
correctness (pack/unpack offsets, GPU data coherence), precision mixing
(stp vs wp), ALLOCATE/DEALLOCATE pairing (GPU memory leaks), physics model
consistency (pressure formula must match model_eqns), missing case_validator.py
constraints for new parameters, and compiler portability across all four
supported compilers.
Python toolchain requires Python 3.10+; do not suggest __future__ imports
or other backwards-compatibility shims.
"""
[pr_code_suggestions]
commitable_code_suggestions = true
apply_suggestions_checkbox = true
Fixes #1145
Summary
Adds build caching to CI for both GitHub-hosted and self-hosted HPC runners, and improves the benchmark workflow's PR detection and approval logic.
Build caching
GitHub-hosted runners: `actions/cache@v4` caches `build/` keyed by matrix config + source file hashes, with prefix-based fallback for partial cache hits.

Self-hosted HPC runners (Phoenix, Frontier, Frontier AMD): A shared helper script (`setup-build-cache.sh`) symlinks `build/` to a persistent per-runner cache directory at `$HOME/scratch/.mfc-ci-cache/<key>/build/`. Each runner gets its own cache keyed by (cluster, device, interface, `RUNNER_NAME`), so CMake's absolute paths are always correct; no cross-runner path fixups or locking are needed.

Key details:
- `actions/checkout` uses `clean: false` on self-hosted runners to prevent `git clean -ffdx` from following the `build` symlink and destroying cached artifacts
- On retry after a failed build, only `build/staging` and `build/lock.yaml` are removed (not the full cache)

Benchmark workflow improvements
- Fall back to searching by head SHA when `pull_requests[]` is empty (cross-repo PRs)
- Only count approvals from users with write/maintain/admin permission
- Only sbryngelson auto-runs benchmarks

Files changed
- `.github/scripts/setup-build-cache.sh`: new per-runner cache setup script
- `.github/workflows/test.yml`: `actions/cache` for GH-hosted jobs; `clean: false` for self-hosted
- `.github/workflows/coverage.yml`: `actions/cache` for coverage job
- `.github/workflows/phoenix/test.sh`: sources `setup-build-cache.sh` before build
- `.github/workflows/frontier/build.sh`: sources `setup-build-cache.sh` (skipped for benchmarks)
- `.github/workflows/frontier_amd/build.sh`: sources `setup-build-cache.sh` (skipped for benchmarks)
- `.github/workflows/bench.yml`: benchmark trigger fixes

Test plan
- Verify `actions/cache` hit