cuda.bindings latency benchmarks #1736

Open

danielfrg wants to merge 12 commits into main from cuda-bindings-bench
Conversation

@danielfrg
Description

closes #1580


@leofang @mdboom I migrated one benchmark from the pytest suite to use pyperf and added a C++ equivalent.

  • Added a small benchmark discovery step that finds `bench_*.py` files containing `bench_*()` functions
  • Uses `bench_time_func`
  • C++ benchmarks output pyperf-compatible JSON so both sides can be analyzed with the same `pyperf stats` / `pyperf hist` commands.
  • The README explains how to run it in the different environments using pixi.
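
The discovery code itself isn't shown in this thread, but a minimal sketch of what "find `bench_*.py` files with `bench_*()` functions" could look like is below. The function name and layout are my assumptions, not the PR's actual implementation:

```python
import ast
from pathlib import Path

def discover_benchmarks(root):
    """Scan a directory for bench_*.py files and return, per file,
    the names of the top-level bench_*() functions defined in it."""
    found = {}
    for path in sorted(Path(root).glob("bench_*.py")):
        tree = ast.parse(path.read_text())
        names = [
            node.name
            for node in tree.body
            if isinstance(node, ast.FunctionDef) and node.name.startswith("bench_")
        ]
        if names:
            found[path.name] = names
    return found
```

Each discovered function would then be registered with pyperf (e.g. via `Runner.bench_time_func`); the real discovery in the PR may differ in details.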

The benchmark is cuPointerGetAttribute; both Python and C++ call the same driver API with error checking.
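
For context, pyperf's `bench_time_func` expects a callable that takes a loop count, runs the measured operation that many times, and returns the total elapsed time; pyperf divides by the loop count to get per-call latency. A rough, GPU-free sketch of that shape (the driver call is replaced by a placeholder here; the actual benchmark calls cuPointerGetAttribute through cuda.bindings):

```python
import time

def driver_call():
    # Placeholder for the measured operation; the real benchmark
    # invokes the CUDA driver API (cuPointerGetAttribute) with
    # error checking on the returned status.
    pass

def bench_pointer_get_attribute(loops):
    """pyperf-style time function: run the operation `loops` times
    and return the total elapsed wall-clock time in seconds."""
    t0 = time.perf_counter()
    for _ in range(loops):
        driver_call()
    return time.perf_counter() - t0
```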

These are one set of results for Python and C++ on my system, so we are OK under the <1 µs target. The two sides don't yet run the same warmup and run counts; I still need to finish that, but this should give you an idea.

# Python (pyperf bench_time_func)
bindings.pointer_attributes.pointer_get_attribute: Mean +- std dev: 603 ns +- 25 ns

# C++ (driver API baseline)
cpp.pointer_attributes.pointer_get_attribute: Mean +- std dev: 29 ns +- 1 ns

I still need to work on matching parameters across all the benchmarks, but I wanted to get feedback first on whether this looks fine to keep going.

Checklist

  • New or existing tests cover these changes.
  • The documentation is up to date with these changes.

@copy-pr-bot
Contributor

copy-pr-bot bot commented Mar 6, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

Contributor

@mdboom mdboom left a comment


I'm marking this as "approve" even though I have some questions inline, since I think it's totally fine to merge this and iterate if that's the easiest way forward.

(I am not a regular pixi user...) I tried to follow the instructions but I get:

 pixi run -e source bench
Error:   × failed to solve requirements of environment 'source' for platform 'linux-64'
  ├─▶   × failed to solve the environment
  │
  ╰─▶ Cannot solve the request because of: cuda-bindings * cannot be installed because there are no viable options:
      └─ cuda-bindings 13.1.0 would require
         └─ cuda-nvrtc >=13.2.51,<14.0a0, which cannot be installed because there are no viable options:
            └─ cuda-nvrtc 13.2.51 would require
               └─ cuda-version >=13.2,<13.3.0a0, for which no candidates were found.

 pixi run -e wheel bench
Error:   × failed to solve requirements of environment 'source' for platform 'linux-64'
  ├─▶   × failed to solve the environment
  │
  ╰─▶ Cannot solve the request because of: cuda-bindings * cannot be installed because there are no viable options:
      └─ cuda-bindings 13.1.0 would require
         └─ cuda-nvrtc >=13.2.51,<14.0a0, which cannot be installed because there are no viable options:
            └─ cuda-nvrtc 13.2.51 would require
               └─ cuda-version >=13.2,<13.3.0a0, for which no candidates were found.


- `bench`: Runs the Python benchmarks
- `bench-cpp`: Runs the C++ benchmarks

Contributor


Maybe mention `pyperf system tune` here?

@danielfrg
Author

Thanks for the comments! I don't think we need to merge now. I'll address the comments, and once we're happy with the template we have here we can commit; then in another PR I can just add more benchmarks.

@danielfrg
Author

Addressed the comments, and I relaxed one of the dependencies in pixi, so I think you should be able to try again.

@cpcloud
Contributor

cpcloud commented Mar 17, 2026

Are these going to run in CI or in any sort of regular way? I'm not sure we should have this much additional code that is going to go stale immediately.

Can we run one iteration of the benchmarks in CI so they don't go stale?


Development

Successfully merging this pull request may close these issues.

Python latency testing & benchmarking

4 participants