Conversation
mdboom
left a comment
I'm marking this as "approve" even though I have some questions inline, since I think it's totally fine to merge this and iterate if that's the easiest way forward.
(I am not a regular pixi user...) I tried to follow the instructions but I get:
pixi run -e source bench
Error: × failed to solve requirements of environment 'source' for platform 'linux-64'
├─▶ × failed to solve the environment
│
╰─▶ Cannot solve the request because of: cuda-bindings * cannot be installed because there are no viable options:
└─ cuda-bindings 13.1.0 would require
└─ cuda-nvrtc >=13.2.51,<14.0a0, which cannot be installed because there are no viable options:
└─ cuda-nvrtc 13.2.51 would require
└─ cuda-version >=13.2,<13.3.0a0, for which no candidates were found.
pixi run -e wheel bench
Error: × failed to solve requirements of environment 'source' for platform 'linux-64'
├─▶ × failed to solve the environment
│
╰─▶ Cannot solve the request because of: cuda-bindings * cannot be installed because there are no viable options:
└─ cuda-bindings 13.1.0 would require
└─ cuda-nvrtc >=13.2.51,<14.0a0, which cannot be installed because there are no viable options:
└─ cuda-nvrtc 13.2.51 would require
└─ cuda-version >=13.2,<13.3.0a0, for which no candidates were found.
- `bench`: Runs the Python benchmarks
- `bench-cpp`: Runs the C++ benchmarks
Thanks for the comments! I don't think we need to merge now. I'll address the comments, and once we are happy with the template we have here, we can commit; then in another PR I can just add more benchmarks.
Addressed the comments, and I relaxed one of the deps in pixi, so I think you should be able to try again.
Are these going to run in CI or in any sort of regular way? I'm not sure we should have this much additional code that is going to go stale immediately. Can we run one iteration of the benchmarks in CI so they don't go stale?
Description
closes #1580
@leofang @mdboom I migrated one benchmark from the pytest suite to use pyperf and added a C++ equivalent.
Benchmarks live in `bench_*.py` files with `bench_*()` functions, timed through pyperf's `bench_time_func`; results can be inspected with the `pyperf stats`/`pyperf hist` commands.

The benchmark is `cuPointerGetAttribute`; both Python and C++ call the same driver API with error checking. Here is one set of results for Python and C++ on my system, so we are OK under the <1us target. They don't yet run the same warmups and runs for each; I still need to finish that, but this should give you an idea.
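For context, a minimal sketch of the `bench_*.py` shape described above, assuming pyperf's time-func protocol (the callback receives a loop count and returns total elapsed seconds). The benchmark name and the stand-in workload here are illustrative, not the actual driver call from the PR:

```python
import time

def bench_pointer_attr(loops):
    """pyperf time-func callback: receives `loops`, must return the
    total elapsed time in seconds for that many iterations."""
    t0 = time.perf_counter()
    for _ in range(loops):
        pass  # stand-in for the cuPointerGetAttribute driver call under test
    return time.perf_counter() - t0

# In a real bench_*.py file this would be registered with pyperf, e.g.:
#   runner = pyperf.Runner()
#   runner.bench_time_func("pointer_get_attribute", bench_pointer_attr)
```

Keeping the timed region to a single driver call per iteration is what makes the sub-microsecond comparison against the C++ equivalent meaningful.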
I still need to work on matching params for all the benchmarks, and so on, but I wanted to get feedback first on whether this looks fine to keep going.
Checklist