feat: grpc client instrumentation #47
Conversation
5 issues found across 21 files
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="drift/instrumentation/grpc/e2e-tests/Dockerfile">
<violation number="1" location="drift/instrumentation/grpc/e2e-tests/Dockerfile:1">
P2: Pin the base image to a specific version/digest instead of `latest` to keep e2e builds reproducible and avoid unexpected upstream changes.</violation>
</file>
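A minimal sketch of the suggested Dockerfile fix, pinning by explicit tag (and optionally digest) rather than `latest` — the tag and digest below are placeholders, not the project's actual base image:

```dockerfile
# Pin to an explicit tag, optionally with a digest, so e2e builds are
# reproducible; `latest` can change under you between builds.
# (Tag and digest here are illustrative placeholders.)
FROM python:3.12-slim@sha256:<digest>
```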
<file name="drift/instrumentation/grpc/e2e-tests/src/app.py">
<violation number="1" location="drift/instrumentation/grpc/e2e-tests/src/app.py:145">
P2: `has_initial_metadata` will always be true because `len(...) >= 0` is always true. Use a positive check so the flag reflects whether metadata is present.</violation>
<violation number="2" location="drift/instrumentation/grpc/e2e-tests/src/app.py:146">
P2: `has_trailing_metadata` will always be true because `len(...) >= 0` is always true. Use a positive check so the flag reflects whether metadata is present.</violation>
</file>
<file name="drift/instrumentation/grpc/e2e-tests/requirements.txt">
<violation number="1" location="drift/instrumentation/grpc/e2e-tests/requirements.txt:5">
P2: The e2e test requirements allow protobuf 4.x even though the SDK’s supported minimum is protobuf>=6.0. This can run tests against an unsupported version and miss compatibility issues. Align the test requirement with the SDK minimum.</violation>
</file>
<file name="drift/instrumentation/grpc/instrumentation.py">
<violation number="1" location="drift/instrumentation/grpc/instrumentation.py:881">
P2: Converting the streaming request iterator to a list consumes the whole stream up front, which breaks client/bidi streaming semantics (no incremental sending) and can cause unbounded memory usage for large or infinite streams. Consider lazily wrapping the iterator to record elements while allowing the RPC to stream normally.</violation>
</file>
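The two metadata-flag findings come down to the same arithmetic fact: `len(...)` is never negative, so `>= 0` is always true. A small illustration (variable names are hypothetical, mirroring the flags in `app.py`):

```python
initial_metadata = []  # e.g. the call returned no initial metadata

# Buggy: len() is never negative, so this flag is always True
has_initial_metadata_buggy = len(initial_metadata) >= 0

# Fixed: True only when metadata is actually present
has_initial_metadata_fixed = len(initial_metadata) > 0

print(has_initial_metadata_buggy, has_initial_metadata_fixed)  # True False
```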
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
```python
try:
    # For client streaming, we need to consume the iterator to capture all requests
    # This changes the behavior slightly - the iterator is consumed upfront
    requests_list = list(request_iterator)
```
P2: Converting the streaming request iterator to a list consumes the whole stream up front, which breaks client/bidi streaming semantics (no incremental sending) and can cause unbounded memory usage for large or infinite streams. Consider lazily wrapping the iterator to record elements while allowing the RPC to stream normally.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At drift/instrumentation/grpc/instrumentation.py, line 881:
<comment>Converting the streaming request iterator to a list consumes the whole stream up front, which breaks client/bidi streaming semantics (no incremental sending) and can cause unbounded memory usage for large or infinite streams. Consider lazily wrapping the iterator to record elements while allowing the RPC to stream normally.</comment>
<file context>
@@ -0,0 +1,1976 @@
+ try:
+ # For client streaming, we need to consume the iterator to capture all requests
+ # This changes the behavior slightly - the iterator is consumed upfront
+ requests_list = list(request_iterator)
+ metadata = kwargs.get("metadata")
+ input_value = self._build_stream_input_value(method, requests_list, metadata)
</file context>
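One way to realize the lazy-wrapping suggestion is a recording iterator: the RPC still pulls requests one at a time, while a side list accumulates what has been sent so far. The sketch below uses a plain Python iterator standing in for the gRPC request stream; the class and attribute names are hypothetical, not part of the SDK:

```python
from typing import Iterator, List, TypeVar

T = TypeVar("T")

class RecordingIterator:
    """Wraps a request iterator and records each item as it is consumed.

    Unlike list(request_iterator), this preserves streaming semantics:
    items are forwarded lazily, and `recorded` only ever holds what has
    actually been sent.
    """

    def __init__(self, inner: Iterator[T]) -> None:
        self._inner = iter(inner)
        self.recorded: List[T] = []

    def __iter__(self) -> "RecordingIterator":
        return self

    def __next__(self) -> T:
        item = next(self._inner)  # propagates StopIteration at end of stream
        self.recorded.append(item)
        return item

# Usage: hand the wrapper to the streaming stub instead of the raw iterator.
wrapped = RecordingIterator(iter([1, 2, 3]))
consumed = list(wrapped)  # stands in for the gRPC library draining the stream
print(consumed, wrapped.recorded)  # → [1, 2, 3] [1, 2, 3]
```

As the author notes below, this trade-off cuts both ways: lazy recording fits record mode, but replay-style matching needs the full input up front.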
This is a documented limitation in notes.md. It's required for record/replay: we need the full input to serialize and match. A lazy wrapper won't work for replay mode.
Thanks for the feedback! I've saved this as a new learning to improve future reviews.
Summary
Adds comprehensive gRPC client instrumentation for the Python SDK, enabling record and replay of all 4 gRPC communication patterns. This mirrors and extends the Node SDK's gRPC support, with the Python SDK now covering more patterns (client streaming, bidirectional streaming) than the Node SDK.
Changes
- New gRPC instrumentation (drift/instrumentation/grpc/)
  - Intercepts the call variants, including with_call() and future() (async)
  - Patches grpc._channel.Channel (the concrete implementation, not the abstract base class) for proper method resolution
- Auto-initialization: the SDK automatically initializes gRPC instrumentation when grpcio is installed
- E2E tests: Docker-based test suite covering all gRPC patterns with a Flask gateway + gRPC backend architecture
- Documentation: added notes.md with gRPC communication patterns, Python API details, use cases in the Python ecosystem, and implementation notes