⚡️ Speed up function bit_insert by 17%
#106
Open
📄 17% (0.17x) speedup for `bit_insert` in `aerospike_helpers/operations/bitwise_operations.py`
⏱️ Runtime: 14.7 microseconds → 12.5 microseconds (best of 5 runs)
📝 Explanation and details
The optimization achieves a 17% runtime improvement (from 14.7 μs to 12.5 μs) by caching the `aerospike.OP_BIT_INSERT` constant as a module-level variable `_OP_BIT_INSERT`.

What changed:
A single line was added at module scope, `_OP_BIT_INSERT = aerospike.OP_BIT_INSERT`, and the function now references this cached value instead of performing the attribute lookup each time.

Why this is faster:
In Python, an attribute access like `aerospike.OP_BIT_INSERT` takes two steps: a LOAD_GLOBAL to look up the `aerospike` name in the global namespace, then a LOAD_ATTR to fetch `OP_BIT_INSERT` from the module. By caching the constant at module load time, the function only needs a single LOAD_GLOBAL operation instead of LOAD_GLOBAL + LOAD_ATTR on every invocation. The line profiler shows this improvement clearly: the line with the OP_KEY assignment dropped from 16,253 ns to 14,752 ns (~9% faster for that line alone).
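A minimal sketch of the pattern, assuming the function builds an operation dictionary; the dict keys below are placeholders, not the module's internal key constants:

```python
# Sketch of the cached-constant pattern (dict keys are illustrative placeholders).
import aerospike

_OP_BIT_INSERT = aerospike.OP_BIT_INSERT  # resolved once at module import


def bit_insert(bin_name, byte_offset, value_byte_size, value, policy=None):
    """Build a bit_insert operation dict without re-resolving the constant."""
    return {
        "op": _OP_BIT_INSERT,  # one LOAD_GLOBAL instead of LOAD_GLOBAL + LOAD_ATTR
        "bin": bin_name,
        "byte_offset": byte_offset,
        "value_byte_size": value_byte_size,
        "value": value,
        "policy": policy,
    }
```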
Impact:
This optimization benefits any workload that calls `bit_insert` repeatedly, as the per-call overhead is reduced. Since this function creates operation dictionaries that are typically used in batch operations or loops when working with Aerospike's bitwise operations, the cumulative savings can be significant: the 17% speedup compounds when the function is called thousands of times in typical database operation scenarios. The optimization is purely mechanical with no behavioral changes; the same constant value is used, just accessed more efficiently.
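As a rough illustration of that cumulative effect (bin name, offsets, and values below are made up, assuming the documented `bit_insert(bin_name, byte_offset, value_byte_size, value)` signature):

```python
# Hypothetical usage: the per-call saving compounds when many operation
# dictionaries are built in a loop and sent in a single operate() call.
from aerospike_helpers.operations import bitwise_operations

ops = [
    bitwise_operations.bit_insert("bits_bin", i, 1, bytearray([0xFF]))
    for i in range(1000)
]
# client.operate(key, ops)  # with a connected aerospike client and record key
```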
✅ Correctness verification report:
⚙️ Existing Unit Tests
test_bitwise_operations.py::TestBitwiseOperations.test_bit_insert
test_bitwise_operations.py::TestBitwiseOperations.test_bit_insert_bad_arg_type
test_bitwise_operations.py::TestBitwiseOperations.test_bit_insert_multiple_bytes
test_bitwise_operations.py::TestBitwiseOperations.test_bit_insert_multiple_bytes_with_offset
test_bitwise_operations.py::TestBitwiseOperations.test_bit_insert_nonexistent_bin_name
test_bitwise_operations.py::TestBitwiseOperations.test_bit_insert_offset_out_of_range
test_bitwise_operations.py::TestBitwiseOperations.test_bit_insert_value_byte_size_smaller_than_value
test_bitwise_operations.py::TestBitwiseOperations.test_bit_insert_value_byte_size_too_large

To edit these changes, run `git checkout codeflash/optimize-bit_insert-ml0jidhf` and push.