Support flutter #947
Conversation
Walkthrough: The changes add a Flutter test runner (`launchable/test_runners/flutter.py`), introducing `TestCase`, `TestSuite`, and `ReportParser` classes to parse Flutter test reports and record results.
Sequence diagram:

```mermaid
sequenceDiagram
    participant Client
    participant ReportParser
    participant TestSuite
    participant TestCase
    Client->>ReportParser: record_tests(reports)
    ReportParser->>ReportParser: _parse_json(report)
    ReportParser->>TestSuite: create suite
    ReportParser->>TestCase: create test case
    ReportParser->>TestSuite: add test case
    ReportParser->>Client: return results
```
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (4)
launchable/test_runners/flutter.py (4)
3-3: Remove unused import `Path` from `pathlib`.

The import `from pathlib import Path` is unused in the code and can be safely removed to clean up the imports. Apply this diff:

```diff
- from pathlib import Path
```

🧰 Tools: 🪛 Ruff — 3-3: `pathlib.Path` imported but unused. Remove unused import: `pathlib.Path` (F401)
85-87: Avoid accessing private methods of another class.

The method `_get_test_case` is intended to be private (indicated by the leading underscore) but is accessed from the `ReportParser` class. It's generally discouraged to access another class's private methods. Consider making `_get_test_case` a public method, or accessing the `_test_cases` attribute directly if appropriate. Apply this diff to rename the method and update its usage:

In `TestSuite`:

```diff
- def _get_test_case(self, id: int) -> Optional[TestCase]:
+ def get_test_case(self, id: int) -> Optional[TestCase]:
```

In `ReportParser`:

```diff
- c = s._get_test_case(test_id)
+ c = s.get_test_case(test_id)
```
135-219: Refactor `_parse_json` for better maintainability.

The `_parse_json` method handles multiple `data_type` cases within a single method, making it lengthy and harder to maintain. Consider refactoring each `data_type` case into a separate handler method to enhance readability and adhere to the Single Responsibility Principle:

```diff
 def _parse_json(self, data: Dict):
     if not isinstance(data, Dict):
         # Note: array sometimes comes in but won't use it
         return
     data_type = data.get("type")
     if data_type is None:
         return
-    elif data_type == "suite":
-        # Handle suite data
-        ...
-    elif data_type == "testStart":
-        # Handle testStart data
-        ...
-    elif data_type == "testDone":
-        # Handle testDone data
-        ...
-    elif data_type == "error":
-        # Handle error data
-        ...
-    elif data_type == "print":
-        # Handle print data
-        ...
-    else:
-        return
+    handler = getattr(self, f"_handle_{data_type}", None)
+    if callable(handler):
+        handler(data)

+def _handle_suite(self, data: Dict):
+    # Existing code for handling the 'suite' data_type
+    ...
+def _handle_testStart(self, data: Dict):
+    # Existing code for handling the 'testStart' data_type
+    ...
+def _handle_testDone(self, data: Dict):
+    # Existing code for handling the 'testDone' data_type
+    ...
+def _handle_error(self, data: Dict):
+    # Existing code for handling the 'error' data_type
+    ...
+def _handle_print(self, data: Dict):
+    # Existing code for handling the 'print' data_type
+    ...
```
125-126: Add exception handling when opening report files.

The `open` call may raise an exception if the file does not exist or is unreadable. Consider adding exception handling to provide a clear error message and prevent the program from crashing. Modify the `parse_func` method to handle exceptions:

```diff
 def parse_func(self, report_file: str) -> Generator[CaseEvent, None, None]:
-    with open(report_file, "r") as ndjson:
-        for j in ndjson:
-            if not j.strip():
-                continue
-            data = json.loads(j)
-            self._parse_json(data)
+    try:
+        with open(report_file, "r") as ndjson:
+            for j in ndjson:
+                if not j.strip():
+                    continue
+                data = json.loads(j)
+                self._parse_json(data)
+    except FileNotFoundError:
+        click.echo(click.style(
+            f"Error: Report file '{report_file}' not found.", fg='red'), err=True)
+    except IOError as e:
+        click.echo(click.style(
+            f"Error: Unable to read report file '{report_file}': {e}", fg='red'), err=True)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
launchable/test_runners/flutter.py (1 hunks)
🧰 Additional context used
🪛 Ruff
launchable/test_runners/flutter.py
3-3: pathlib.Path imported but unused
Remove unused import: pathlib.Path
(F401)
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (4)
launchable/test_runners/flutter.py (4)
3-3: Remove unused import.

The `pathlib.Path` import is redundant since `pathlib` is already imported.

```diff
-from pathlib import Path
```

🧰 Tools: 🪛 Ruff — 3-3: `pathlib.Path` imported but unused. Remove unused import: `pathlib.Path` (F401)
207-209: Document retry behavior implications.

The comment about retry handling should be expanded to explain the potential impact on test results and any planned future improvements.

```diff
- # It's difficult to identify the "Retry" case because Flutter reports it with the same test ID
- # So we won't handle it at the moment.
+ # TODO: Retry handling is currently not supported because Flutter reports retried tests
+ # with the same test ID. This means that for retried tests:
+ # - Only the last attempt's results are recorded
+ # - The total duration might be incorrect
+ # - Previous attempt outputs might be lost
+ # Future improvement: Consider aggregating results from all attempts
```
169-172: Refactor warning message handling.

Multiple similar warning messages about missing tests/suites should be consolidated into a helper method for consistency and maintainability.

```diff
+ def _warn_missing_entity(self, entity_type: str, entity_id: int, context: str = None) -> None:
+     message = f"Warning: Cannot find a {entity_type} (id: {entity_id})"
+     if context:
+         message += f". {context}"
+     click.echo(click.style(message, fg='yellow'), err=True)

 # Replace existing warning messages with calls to _warn_missing_entity:
-click.echo(click.style(
-    "Warning: Cannot find a parent test suite (id: {}). So won't send test result of {}".format(
-        suite_id, test_data.get("name")), fg='yellow'), err=True)
+self._warn_missing_entity("test suite", suite_id,
+    f"So won't send test result of {test_data.get('name')}")
```

Also applies to: 200-203, 212-215
223-230: Add validation for report file extensions.

The `record_tests` function should validate that the provided report files have the correct extension for Flutter test reports.

```diff
+def is_flutter_report(path: str) -> bool:
+    return path.endswith('.jsonl') or path.endswith('.json')
+
 @click.argument('reports', required=True, nargs=-1)
 @launchable.record.tests
 def record_tests(client, reports):
+    invalid_reports = [r for r in reports if not is_flutter_report(r)]
+    if invalid_reports:
+        click.echo(click.style(
+            f"Warning: Skipping files with invalid extensions: {invalid_reports}. "
+            "Flutter test reports should end with .jsonl or .json",
+            fg='yellow'), err=True)
+        reports = [r for r in reports if is_flutter_report(r)]
+
     file_path_normalizer = FilePathNormalizer(base_path=client.base_path,
                                               no_base_path_inference=client.no_base_path_inference)
     client.parse_func = ReportParser(file_path_normalizer).parse_func
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
launchable/test_runners/flutter.py (1 hunks)
🧰 Additional context used
🪛 Ruff
launchable/test_runners/flutter.py
3-3: pathlib.Path imported but unused
Remove unused import: pathlib.Path
(F401)
🔇 Additional comments (1)
launchable/test_runners/flutter.py (1)
46-47: Verify the default test status behavior
The safe fallback to TEST_PASSED for unknown status values could mask potential issues. Consider if failing by default would be safer, or at least logging a warning when an unknown status is encountered.
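One way to make that fallback visible without changing behavior is to log a warning whenever an unrecognized result value appears. A sketch under the assumption that Flutter reports `"success"`/`"error"` result strings; the constant values below are local to the example, not the actual `CaseEvent` values:

```python
import sys

# Illustrative status values; the real code uses CaseEvent constants.
TEST_PASSED, TEST_FAILED, TEST_SKIPPED = "passed", "failed", "skipped"

def map_status(result, is_skipped=False, warn=sys.stderr):
    """Map a Flutter 'testDone' result string to an internal status,
    warning (instead of silently passing) on unknown values."""
    if is_skipped:
        return TEST_SKIPPED
    if result == "success":
        return TEST_PASSED
    if result in ("failure", "error"):
        return TEST_FAILED
    # Keep the safe fallback, but leave a trace for debugging.
    print(f"Warning: unknown test result {result!r}; treating as passed", file=warn)
    return TEST_PASSED
```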
Force-pushed: 35b4213 to bb08f4a (Compare)
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (6)
tests/data/flutter/record_test_result.json (3)
15-16: Document status code meanings. The status codes (0, 1, 2) are used across test cases but their meanings aren't documented. Consider adding documentation or using named constants:
- 0: Failed
- 1: Passed
- 2: Skipped
Consider adding a comment block at the top of the file or in the README documenting these status codes.
Also applies to: 33-34, 51-52
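If named constants are adopted, an `IntEnum` keeps the 0/1/2 values self-documenting. A sketch based on the meanings listed above; the enum name is hypothetical:

```python
from enum import IntEnum

class RecordedTestStatus(IntEnum):
    """Status codes as used in record_test_result.json (per the list above)."""
    FAILED = 0
    PASSED = 1
    SKIPPED = 2
```

Serialization stays unchanged (`int(RecordedTestStatus.PASSED)` is still `1`), so adopting the enum would not alter the JSON format.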
1-61: LGTM! Well-structured test result format. The JSON structure effectively captures Flutter test results with appropriate detail:
- Comprehensive test case information including paths and durations
- Detailed error reporting with both stdout and stderr
- Good metadata with testRunner, group, and build settings
Consider documenting the schema structure in a separate schema.json file to help maintain compatibility and enable validation. This would be particularly helpful for:
- Validating test result outputs
- Maintaining backward compatibility
- Helping other developers integrate with the format
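Until a formal schema.json exists, a lightweight structural check can guard the format in tests. A stdlib-only sketch; the field names below are illustrative, not the actual record format:

```python
def check_result(record):
    """Minimal structural check for a test-result record.
    Field names here are illustrative; adapt them to the real JSON format."""
    errors = []
    for field, typ in (("testPath", str), ("status", int), ("duration", (int, float))):
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], typ):
            errors.append(f"wrong type for {field}: {type(record[field]).__name__}")
    # Status codes 0/1/2 per the documentation suggestion above.
    if isinstance(record.get("status"), int) and record["status"] not in (0, 1, 2):
        errors.append(f"unknown status: {record['status']}")
    return errors
```

A full JSON Schema would add range/enum constraints and backward-compatibility guarantees; this sketch only catches missing or mistyped fields.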
34-36: Consider standardizing error output formatThe error output in stdout contains Unicode box-drawing characters which might cause display issues in some environments. Consider standardizing the error output format to use plain ASCII or storing the formatted output separately.
launchable/test_runners/flutter.py (3)
3-3: Remove unused import.

The `Path` import from `pathlib` is unused, as `pathlib.Path` is used directly.

```diff
-from pathlib import Path
```

🧰 Tools: 🪛 Ruff — 3-3: `pathlib.Path` imported but unused. Remove unused import: `pathlib.Path` (F401)
38-47: Consider using constants for status codes.

The status property uses magic numbers for status codes. Consider defining constants for these status codes to improve maintainability and readability.

```diff
+# At the top of the file with other constants
+TEST_STATUS_SKIPPED = 3  # CaseEvent.TEST_SKIPPED
+TEST_STATUS_PASSED = 1   # CaseEvent.TEST_PASSED
+TEST_STATUS_FAILED = 2   # CaseEvent.TEST_FAILED

 @property
 def status(self) -> int:
     # status code see: case_event.py
     if self._is_skipped:
-        return CaseEvent.TEST_SKIPPED
+        return TEST_STATUS_SKIPPED
     elif self._status == FLUTTER_TEST_RESULT_SUCCESS:
-        return CaseEvent.TEST_PASSED
+        return TEST_STATUS_PASSED
     elif self._status == FLUTTER_TEST_RESULT_FAILURE:
-        return CaseEvent.TEST_FAILED
+        return TEST_STATUS_FAILED
     # safe fallback
-    return CaseEvent.TEST_PASSED
+    return TEST_STATUS_PASSED
```
125-126: Document the pub get information format. The TODO comment mentions supporting cases with `pub get` information but doesn't specify the expected format. Consider documenting the format or creating a GitHub issue to track this enhancement.
Would you like me to create a GitHub issue to track this enhancement with more details about the pub get information format?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (3)
- launchable/test_runners/flutter.py (1 hunks)
- tests/data/flutter/record_test_result.json (1 hunks)
- tests/data/flutter/report.json (1 hunks)
🧰 Additional context used
🪛 Ruff
launchable/test_runners/flutter.py
3-3: pathlib.Path imported but unused
Remove unused import: pathlib.Path
(F401)
🪛 Biome
tests/data/flutter/report.json
[error] End of file expected — Use an array for a sequence of values: [1, 2] (parse). This error is reported for every line of the file: the report is newline-delimited JSON (one object per line), which Biome parses as a single JSON document.
🔇 Additional comments (6)
tests/data/flutter/report.json (4)
1-8: LGTM: Well-structured test metadata.
The header section properly includes protocol version, runner version, and test suite initialization events.
15-28: LGTM: Comprehensive test execution coverage.
The test data effectively covers all major test scenarios:
- Successful test execution
- Failed test with detailed error messages and stack traces
- Skipped test case
The error reporting for the failed test is particularly well-structured, including:
- Clear error message
- Stack trace
- Test file location
- Test description
29-29: LGTM: Accurate completion status.
The overall test run is correctly marked as failed, which is consistent with the failed test case in the execution.
10-14: Verify if null observatory URIs are intentional.
All three test processes show null values for vmServiceUri and observatoryUri. This might limit debugging capabilities if these were intended to be configured.
launchable/test_runners/flutter.py (2)
78-87: LGTM!
The TestSuite class implementation is clean and efficient, using a dictionary for O(1) test case lookups.
225-235: LGTM!
The CLI implementation is clean and follows click patterns correctly. The initialization of the parser with file path normalization is properly handled.
Force-pushed: 31a8343 to 9a0ab94 (Compare)
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (3)
launchable/test_runners/flutter.py (3)
1-4: Remove unused import.

The `Path` import from `pathlib` is redundant since we're already importing the entire `pathlib` module.

```diff
 import json
 import pathlib
-from pathlib import Path
 from typing import Dict, Generator, List, Optional
```

🧰 Tools: 🪛 Ruff — 3-3: `pathlib.Path` imported but unused. Remove unused import: `pathlib.Path` (F401)
78-87: Consider adding helper methods for common operations.

The `TestSuite` class could benefit from additional helper methods for common operations: adding test cases, getting the test case count, and checking whether a test case exists.

```diff
 class TestSuite:
     def __init__(self, id: int, platform: str, path: str):
         self._id = id
         self._platform = platform
         self._path = path
         self._test_cases: Dict[int, TestCase] = {}

     def _get_test_case(self, id: int) -> Optional[TestCase]:
         return self._test_cases.get(id)
+
+    def add_test_case(self, test_case: TestCase) -> None:
+        self._test_cases[test_case.id] = test_case
+
+    def has_test_case(self, id: int) -> bool:
+        return id in self._test_cases
+
+    @property
+    def test_count(self) -> int:
+        return len(self._test_cases)
```
125-126: Address the TODO about `flutter pub get` support. The TODO indicates missing support for cases whose output includes `flutter pub get` information, which could affect test result reporting accuracy. Would you like me to help implement support for these cases? I can analyze the example run mentioned in the comment (https://github.com/launchableinc/examples/actions/runs/11884312142/job/33112309450) and propose a solution.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
launchable/test_runners/flutter.py (1 hunks)
🧰 Additional context used
🪛 Ruff
launchable/test_runners/flutter.py
3-3: pathlib.Path imported but unused
Remove unused import: pathlib.Path
(F401)
🔇 Additional comments (4)
launchable/test_runners/flutter.py (4)
97-107: Previous optimization suggestion still applies
239-246: Implementation looks good!
The record_tests function follows good practices:
- Uses dependency injection for `FilePathNormalizer`
- Properly integrates with the CLI framework
- Maintains separation of concerns
248-249: Implementation looks good!
The subset implementations properly utilize the common implementations and correctly specify the file pattern for Dart test files.
46-47: Review the default test status behavior
The safe fallback to TEST_PASSED might mask potential issues. Consider if failing by default would be safer, or if logging a warning when hitting this fallback would be helpful for debugging.
✅ Verification successful
Safe fallback to TEST_PASSED is appropriate in this context
After examining the Flutter test reports and implementation, the current fallback behavior is correct and safe because:
1. The code explicitly handles all known Flutter test states:
   - Skipped tests via the `is_skipped` flag → `TEST_SKIPPED`
   - Success result → `TEST_PASSED`
   - Error result → `TEST_FAILED`
2. The test report shows Flutter only emits the standard test states ("success", "error") and the skipped flag, with no other result values observed.
3. The fallback only triggers if Flutter somehow reports an unknown state, which based on the protocol and test data doesn't occur in practice.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Check if there are any Flutter test reports where tests have undefined status
# This helps assess the impact of the fallback behavior
rg -l '"type":\s*"testDone".*"result":\s*(?!"success"|"error")' $(fd -e json)
```

Length of output: 2874
Script:

```shell
#!/bin/bash
# Let's try a different approach to check Flutter test reports for non-standard test statuses
# First, let's examine the test report structure in the test data

# Check the content of Flutter test reports
cat tests/data/flutter/record_test_result.json tests/data/flutter/report.json

# Also check the Flutter test runner implementation to understand status mapping
cat launchable/test_runners/flutter.py
```

Length of output: 17218
The reviewed code:

```python
def _parse_json(self, data: Dict):
    if not isinstance(data, Dict):
        # Note: array sometimes comes in but won't use it
        return

    data_type = data.get("type")
    if data_type is None:
        return
    elif data_type == "suite":
        suite_data = data.get("suite")
        if suite_data is None:
            # it's might be invalid suite data
            return

        suite_id = suite_data.get("id")
        if self._get_suite(suite_data.get("id")) is not None:
            # already recorded
            return

        self._suites[suite_id] = TestSuite(suite_id, suite_data.get("platform"), suite_data.get("path"))
    elif data_type == "testStart":
        test_data = data.get("test")

        if test_data is None:
            # it's might be invalid test data
            return
        if test_data.get("line") is None:
            # Still set up test case, should skip
            return

        suite_id = test_data.get("suiteID")
        suite = self._get_suite(suite_id)

        if suite_id is None or suite is None:
            click.echo(click.style(
                "Warning: Cannot find a parent test suite (id: {}). So won't send test result of {}".format(
                    suite_id, test_data.get("name")), fg='yellow'), err=True)
            return

        test_id = test_data.get("id")
        test = self._get_test(test_id)
        if test is not None:
            # already recorded
            return

        name = test_data.get("name")
        metadata = test_data.get("metadata", {})
        is_skipped = metadata.get("skip", False)
        suite._test_cases[test_id] = TestCase(test_id, name, is_skipped)

    elif data_type == "testDone":
        test_id = data.get("testID")
        test = self._get_test(test_id)

        if test is None:
            return

        test.status = data.get("result", "success")  # safe fallback
        duration_msec = data.get("time", 0)
        test.duration = duration_msec / 1000  # to sec

    elif data_type == "error":
        test_id = data.get("testID")
        test = self._get_test(test_id)
        if test is None:
            click.echo(click.style(
                "Warning: Cannot find a test (id: {}). So we skip update stderr".format(test_id), fg='yellow'),
                err=True)
            return
        test.stderr += ("\n" if test.stderr else "") + data.get("error", "")

    elif data_type == "print":
        # It's difficult to identify the "Retry" case because Flutter reports it with the same test ID
        # So we won't handle it at the moment.
        test_id = data.get("testID")
        test = self._get_test(test_id)
        if test is None:
            click.echo(click.style(
                "Warning: Cannot find a test (id: {}). So we skip update stdout".format(test_id), fg='yellow'),
                err=True)
            return

        test.stdout += ("\n" if test.stdout else "") + data.get("message", "")

    else:
        return
```
🛠️ Refactor suggestion

Refactor _parse_json into smaller methods

The method is handling multiple event types with similar patterns. Consider breaking it down into separate methods for each event type to improve maintainability.

```diff
 def _parse_json(self, data: Dict):
     if not isinstance(data, Dict):
         return
     data_type = data.get("type")
     if data_type is None:
         return
-    elif data_type == "suite":
-        suite_data = data.get("suite")
-        # ... suite handling ...
-    elif data_type == "testStart":
-        test_data = data.get("test")
-        # ... test start handling ...
+
+    handlers = {
+        "suite": self._handle_suite_event,
+        "testStart": self._handle_test_start_event,
+        "testDone": self._handle_test_done_event,
+        "error": self._handle_error_event,
+        "print": self._handle_print_event
+    }
+
+    handler = handlers.get(data_type)
+    if handler:
+        handler(data)

+def _handle_suite_event(self, data: Dict):
+    suite_data = data.get("suite")
+    if suite_data is None:
+        return
+    # ... suite handling ...
```

Committable suggestion skipped: line range outside the PR's diff.
ninjinkun left a comment
LGTM! Thanks for your great work. And I was surprised that CodeRabbit worked well.



This PR adds Flutter support. An example project is at cloudbees-oss/smart-tests-integration-examples#84.
Please see the usage at https://github.com/launchableinc/examples/pull/84/files?short_path=316b034#diff-316b034d126691f8d97422721b134f6bfb0106f18b95eced6b2220cde52a0d6a
Summary by CodeRabbit
New classes `TestCase`, `TestSuite`, and `ReportParser` improve the organization and processing of test data. These updates provide improved insights and management capabilities for Flutter testing outcomes.