perf: benchmarks #35
Conversation
6 issues found across 20 files
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="benchmarks/bench/common.py">
<violation number="1" location="benchmarks/bench/common.py:52">
P2: `enable_memory_tracking` ignores the value supplied via `options`/`DEFAULT_OPTIONS`, so callers cannot configure memory tracking unless they set an environment variable.</violation>
</file>
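A minimal sketch of the suggested fix, assuming `options` is the merge of `DEFAULT_OPTIONS` with caller overrides; the environment variable name below is hypothetical. The point is precedence: the env var should override the configured option, not replace it.

```python
import os

def enable_memory_tracking(options: dict) -> bool:
    """Env var wins if set; otherwise honor options/DEFAULT_OPTIONS."""
    env = os.environ.get("BENCHMARK_MEMORY_TRACKING")  # hypothetical variable name
    if env is not None:
        return env.strip().lower() in ("1", "true", "yes")
    return bool(options.get("enable_memory_tracking", False))
```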
<file name="benchmarks/run_benchmarks.sh">
<violation number="1" location="benchmarks/run_benchmarks.sh:23">
P2: The runner bootstraps psutil and Flask but never installs the required `requests` dependency, so `run_benchmarks` will crash with `ModuleNotFoundError` on a clean environment. Add the same import-check/install block for `requests` to keep the script self-contained.</violation>
</file>
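The same guard the script already applies to psutil and Flask, extended to cover `requests`. Sketched in Python rather than shell (the script could invoke it inline via `python -`); the exact bootstrap mechanics in run_benchmarks.sh are assumptions here.

```python
import importlib.util
import subprocess
import sys

# Mirror the existing psutil/Flask bootstrap; `requests` was the missing entry.
for module, package in (("psutil", "psutil"), ("flask", "Flask"), ("requests", "requests")):
    if importlib.util.find_spec(module) is None:
        subprocess.check_call([sys.executable, "-m", "pip", "install", package])
```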
<file name="benchmarks/server/test_server.py">
<violation number="1" location="benchmarks/server/test_server.py:378">
P2: Binding to an ephemeral port with a temporary socket before creating the Flask server introduces a TOCTOU race and can crash the benchmarks with “Address already in use”; let `make_server` bind to port 0 and capture its `server_port` instead.</violation>
</file>
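A sketch of the race-free version: instead of probing for a free port with a throwaway socket and then rebinding, pass port 0 to `make_server` and read back the port the OS chose.

```python
from flask import Flask
from werkzeug.serving import make_server

app = Flask(__name__)  # stand-in for the benchmark's real test app

# Port 0 lets the OS pick a free port at bind time, so there is no window
# between "find a port" and "bind the server" for another process to take it.
server = make_server("127.0.0.1", 0, app)
port = server.server_port  # the port that was actually assigned
```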
<file name="benchmarks/profile/profile.sh">
<violation number="1" location="benchmarks/profile/profile.sh:35">
P2: `sudo py-spy …` fails on macOS when py-spy is user-installed because root’s PATH doesn’t include the user pip bin, so the py-spy option never runs. Capture the resolved binary path (from `command -v`) and invoke that via sudo instead of relying on root’s PATH.</violation>
</file>
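The resolve-before-escalating idea, sketched in Python for consistency with the other examples (the shell equivalent is `sudo "$(command -v py-spy)" …`); the record arguments and target PID are illustrative.

```python
import shutil
import subprocess

# Resolve py-spy in the invoking user's PATH *before* sudo; root's PATH
# typically lacks the user pip bin directory (e.g. ~/.local/bin) on macOS.
py_spy = shutil.which("py-spy")
if py_spy is None:
    raise SystemExit("py-spy not found; install it with `pip install py-spy`")

target_pid = 12345  # hypothetical: PID of the benchmark process being profiled
subprocess.run(["sudo", py_spy, "record", "-o", "profile.svg", "--pid", str(target_pid)], check=True)
```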
<file name="benchmarks/profile/simple_profile.py">
<violation number="1" location="benchmarks/profile/simple_profile.py:108">
P2: Cleanup of the `requests` session, test server, and SDK only occurs when the workload finishes successfully. If an exception is raised during the loop, the server keeps running and the SDK is never shut down. Wrap the workload in a `try/finally` so these cleanup calls always execute.</violation>
</file>
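A sketch of the suggested shape; `start_test_server`, `init_sdk`, and `run_iteration` are hypothetical stand-ins for the script's real setup and workload code.

```python
import requests

session = requests.Session()
server = start_test_server()  # hypothetical setup helpers standing in for
sdk = init_sdk()              # whatever simple_profile.py actually does
try:
    for _ in range(100):      # the profiled workload loop
        run_iteration(session)
finally:
    # Executes on success and on any exception raised inside the loop,
    # so the server and SDK are always torn down.
    session.close()
    server.shutdown()
    sdk.shutdown()
```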
<file name="benchmarks/bench/resource_monitor.py">
<violation number="1" location="benchmarks/bench/resource_monitor.py:150">
P2: `_collect_memory_sample` checks `_current_task_stats` before updating it, but another thread can set it to `None` immediately after the check. This race can crash the monitor thread or lose samples when a task ends. Take a local snapshot of the task stats (or lock access) before the guard and use that reference for all subsequent updates.</violation>
</file>
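A sketch of the snapshot approach; only `_collect_memory_sample` and `_current_task_stats` come from the review, the surrounding structure is assumed.

```python
import psutil

class ResourceMonitor:
    def __init__(self) -> None:
        self._process = psutil.Process()
        self._current_task_stats = None  # set at task start, reset to None at task end

    def _collect_memory_sample(self) -> None:
        stats = self._current_task_stats  # one local snapshot, taken before the guard
        if stats is None:
            return
        rss = self._process.memory_info().rss
        stats.memory_samples.append(rss)  # every later access goes through the
                                          # snapshot, so a concurrent reset to
                                          # None cannot crash the monitor thread
```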
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
3 issues found across 10 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="benchmarks/scripts/compare_runs.py">
<violation number="1" location="benchmarks/scripts/compare_runs.py:6">
P2: The usage docstring advertises a --download option that isn't implemented in argparse. Running the documented command will error. Update the usage text or add the missing flag implementation.</violation>
</file>
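If the flag is meant to exist, wiring it up is a one-liner; the semantics below are guessed from the name and would need to match what the docstring promises.

```python
import argparse

parser = argparse.ArgumentParser(description="Compare two benchmark runs")
# Hypothetical implementation of the flag the usage docstring already advertises:
parser.add_argument("--download", metavar="RUN_ID",
                    help="download the given run's artifacts before comparing")
```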
<file name=".github/workflows/benchmarks.yml">
<violation number="1" location=".github/workflows/benchmarks.yml:63">
P2: The workflow declares an `iterations` input, but the realistic workload benchmark always runs with the script’s hardcoded `iterations = 200`. This means any value provided in the dispatch input is ignored, which is misleading and prevents tuning the workload from the workflow UI.</violation>
<violation number="2" location=".github/workflows/benchmarks.yml:70">
P2: The workflow declares a `qps_duration` input, but the fixed QPS benchmark always uses the script’s hardcoded `duration = 10`. Any value provided in the workflow dispatch input is ignored, so users can’t actually adjust the test duration.</violation>
</file>
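Both findings have the same shape: the dispatch input never reaches the script. One way to close the gap, assuming the workflow exports the inputs as environment variables (the variable names here are made up), is to have the scripts prefer the env value over their hardcoded defaults:

```python
import os

# In the workflow:  env: { BENCHMARK_ITERATIONS: "${{ inputs.iterations }}",
#                          BENCHMARK_QPS_DURATION: "${{ inputs.qps_duration }}" }
iterations = int(os.environ.get("BENCHMARK_ITERATIONS", "200"))
duration = int(os.environ.get("BENCHMARK_QPS_DURATION", "10"))
```

The unguarded `int(...)` here has the parsing pitfall flagged in the next review; the sketch there hardens it.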
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
1 issue found across 4 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="benchmarks/bench/fixed_qps_latency.py">
<violation number="1" location="benchmarks/bench/fixed_qps_latency.py:160">
P2: Parsing `BENCHMARK_QPS_DURATION` directly with `int(...)` will crash the benchmark on non-numeric input. Guard the conversion so invalid values fall back to the default instead of raising `ValueError`.</violation>
</file>
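A sketch of the guarded conversion; `_env_int` is a made-up helper name.

```python
import os

def _env_int(name: str, default: int) -> int:
    """Read an integer env var, falling back to the default when unset or invalid."""
    try:
        return int(os.environ.get(name, default))
    except (TypeError, ValueError):
        return default

duration = _env_int("BENCHMARK_QPS_DURATION", 10)  # 10 is the script's existing default
```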
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Resolves #32