Conversation

@srkreddy1238
Contributor

Verified for the entire range of classification nets.
Quantization is disabled at the moment.
There are a few unsupported ops in the conversion maps that still need to be mapped as the Relax op inventory grows.

@gemini-code-assist
Contributor

Summary of Changes

Hello @srkreddy1238, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates a TensorFlow Lite (TFLite) frontend into the TVM Relax framework, allowing for the conversion and execution of TFLite models. The implementation includes a specialized FlexBuffer parser to handle TFLite's custom options and is validated by a broad range of tests, ensuring compatibility with common TensorFlow operations and popular Keras classification models. This lays the groundwork for broader TFLite model support in Relax.
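
For orientation, here is a rough usage sketch of what converting and running a TFLite model through a Relax frontend typically looks like; the module path, the `from_tflite` entry point, and its signature below are assumptions for illustration only and are not taken from this pull request.

```python
# Hypothetical sketch (converter name and signature assumed, not from this PR):
# load a .tflite flatbuffer, convert it to a Relax IRModule, build, and run.
import numpy as np
import tvm
from tvm import relax

with open("mobilenet_v2.tflite", "rb") as f:  # placeholder model file
    tflite_buf = f.read()

# Assumed entry point; consult the frontend module added by this PR for the real API.
mod = relax.frontend.tflite.from_tflite(tflite_buf, shape_dict={"input": (1, 224, 224, 3)})

ex = relax.build(mod, target="llvm")
vm = relax.VirtualMachine(ex, tvm.cpu())
data = tvm.nd.array(np.zeros((1, 224, 224, 3), dtype="float32"))
print(vm["main"](data))  # assumes the converted module exposes a "main" function
```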

Highlights

  • New TFLite Frontend: Introduced a new frontend for TensorFlow Lite (TFLite) within the TVM Relax framework, enabling the conversion of TFLite models into Relax graphs.
  • FlexBuffer Decoder: Implemented a partial FlexBuffer decoder to correctly parse custom options embedded within TFLite models.
  • Comprehensive Test Coverage: Added an extensive test suite covering various TensorFlow operations (e.g., binary, element-wise, logical, reduce, conv2d, pooling) and end-to-end validation with several Keras classification networks.
  • Current Limitations: Quantization support is currently disabled, and a few TFLite operations are not yet mapped, pending future expansion of the Relax op inventory.


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • /gemini review (Code Review): Performs a code review for the current pull request in its current state.
  • /gemini summary (Pull Request Summary): Provides a summary of the current pull request in its current state.
  • @gemini-code-assist (Comment): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • /gemini help (Help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment


Code Review

This pull request introduces the TFLite frontend for Relax, enabling conversion of TFLite graphs into Relax graphs. It includes a Flexbuffer decoder for custom options and a comprehensive set of tests for various TensorFlow operations and Keras applications. While the overall structure is good, there are several critical issues related to missing imports, incorrect handling of data types, and potential bugs in the Flexbuffer deserialization and quantized operations that need to be addressed before merging. Additionally, some security and maintainability concerns were identified.

    unpack_str = "<B"
elif byte_width == 4:
    unpack_str = "<i"
assert unpack_str != ""

critical

The indirect_jump function currently only handles byte_width values of 1 and 4. However, the BitWidth enum (lines 25-31) indicates that byte_width can also be 2 or 8 (derived from 1 << BitWidth(root_packed_type & 3)). This will lead to an assert unpack_str != "" failure or incorrect parsing for byte_width values of 2 or 8.

Suggested change

    unpack_str = "<B"
elif byte_width == 2:
    unpack_str = "<H"
elif byte_width == 4:
    unpack_str = "<i"
elif byte_width == 8:
    unpack_str = "<q"
assert unpack_str != ""
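
A minimal sketch of the same dispatch as a lookup table is shown below; it is illustrative only (the helper name is made up), and the struct format strings simply mirror the suggestion above, so whether each width should be read as signed or unsigned is an assumption to verify against the FlexBuffer layout.

```python
import struct

# Illustrative alternative, not the PR's code: map byte_width -> struct format.
# Formats mirror the suggested change above; signedness per width is an assumption.
_UNPACK_FORMATS = {1: "<B", 2: "<H", 4: "<i", 8: "<q"}

def read_scalar(buf: bytes, offset: int, byte_width: int) -> int:
    """Read one byte_width-sized scalar from buf at offset."""
    try:
        fmt = _UNPACK_FORMATS[byte_width]
    except KeyError:
        raise ValueError(f"Unsupported FlexBuffer byte width: {byte_width}")
    return struct.unpack_from(fmt, buf, offset)[0]
```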

root_end -= root_byte_width

root_type = FlexBufferType(root_packed_type >> 2)
byte_width = 1 << BitWidth(root_packed_type & 3)

critical

The byte_width calculation 1 << BitWidth(root_packed_type & 3) can result in values 1, 2, 4, or 8. However, the indirect_jump function (lines 82-86) only handles byte_width values of 1 and 4. This discrepancy will cause incorrect behavior or errors when processing Flexbuffers with 2-byte or 8-byte widths.
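
As a concrete illustration of the packing, consider the small worked example below; the FBT_MAP value and the bit-width mapping follow the upstream flexbuffers.h enums, which this decoder is assumed to mirror.

```python
# Worked example of the packed-type decoding shown above.
packed = 0x25                    # 0b100101
root_type = packed >> 2          # 0b1001 = 9, i.e. FBT_MAP in flexbuffers.h
byte_width = 1 << (packed & 3)   # low bits 0b01 -> 16-bit width -> 2 bytes
assert (root_type, byte_width) == (9, 2)
```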


import ssl

ssl._create_default_https_context = ssl._create_unverified_context

security-high

Disabling SSL certificate verification globally using ssl._create_unverified_context is a security risk and generally discouraged. For testing purposes, consider using a more targeted approach, such as a context manager, or ensuring the test environment has proper certificates.
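
One narrower alternative, sketched below with a placeholder URL and variable names (this is not code from the PR), is to pass an unverified context only to the single download that needs it rather than patching the ssl module globally:

```python
import ssl
import urllib.request

# Sketch: scope the unverified context to one request instead of monkey-patching
# ssl._create_default_https_context for the whole process.
insecure_ctx = ssl._create_unverified_context()

url = "https://example.com/model.tflite"  # placeholder URL
with urllib.request.urlopen(url, context=insecure_ctx) as resp:
    model_bytes = resp.read()
```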

elif value_type == FlexBufferType.FBT_FLOAT:
    value = struct.unpack("<f", value_bytes)[0]
else:
    raise Exception

medium

Raising a generic Exception is not ideal. It's better to raise a more specific exception type (e.g., NotImplementedError or ValueError) with a descriptive message to help in debugging.

Suggested change
raise Exception
raise NotImplementedError(f"FlexBufferType {value_type} not supported for vector decoding.")

Comment on lines +771 to +809
# Limiting the tests for CI
(keras_app.Xception, (1, 299, 299, 3)),
# (keras_app.VGG16, (1, 224, 224, 3)),
# (keras_app.VGG19, (1, 224, 224, 3)),
(keras_app.ResNet50, (1, 224, 224, 3)),
# (keras_app.ResNet50V2, (1, 224, 224, 3)),
# (keras_app.ResNet101, (1, 224, 224, 3)),
# (keras_app.ResNet101V2, (1, 224, 224, 3)),
# (keras_app.ResNet152, (1, 224, 224, 3)),
# (keras_app.ResNet152V2, (1, 224, 224, 3)),
(keras_app.InceptionResNetV2, (1, 299, 299, 3)),
# (keras_app.MobileNet, (1, 224, 224, 3)),
(keras_app.MobileNetV2, (1, 224, 224, 3)),
(keras_app.DenseNet121, (1, 224, 224, 3)),
# (keras_app.DenseNet169, (1, 224, 224, 3)),
# (keras_app.DenseNet201, (1, 224, 224, 3)),
(keras_app.NASNetMobile, (1, 224, 224, 3)),
# (keras_app.NASNetLarge, (1, 331, 331, 3)),
(keras_app.EfficientNetB0, (1, 224, 224, 3)),
# (keras_app.EfficientNetB1, (1, 240, 240, 3)),
# (keras_app.EfficientNetB2, (1, 260, 260, 3)),
# (keras_app.EfficientNetB3, (1, 300, 300, 3)),
# (keras_app.EfficientNetB4, (1, 380, 380, 3)),
# (keras_app.EfficientNetB5, (1, 456, 456, 3)),
# (keras_app.EfficientNetB6, (1, 528, 528, 3)),
# (keras_app.EfficientNetB7, (1, 600, 600, 3)),
(keras_app.EfficientNetV2B0, (1, 224, 224, 3)),
# (keras_app.EfficientNetV2B1, (1, 240, 240, 3)),
# (keras_app.EfficientNetV2B2, (1, 260, 260, 3)),
# (keras_app.EfficientNetV2B3, (1, 300, 300, 3)),
# (keras_app.EfficientNetV2S, (1, 384, 384, 3)),
# (keras_app.EfficientNetV2M, (1, 480, 480, 3)),
# (keras_app.EfficientNetV2L, (1, 480, 480, 3)),
(keras_app.ConvNeXtTiny, (1, 224, 224, 3)),
# (keras_app.ConvNeXtSmall, (1, 224, 224, 3)),
# (keras_app.ConvNeXtBase, (1, 224, 224, 3)),
# (keras_app.ConvNeXtLarge, (1, 224, 224, 3)),
# (keras_app.ConvNeXtXLarge, (1, 224, 224, 3)),
],

medium

Many network tests are commented out with the note "Limiting the tests for CI". While this might be necessary for CI resource constraints, it means a significant portion of the TFLite frontend's functionality for these Keras models is not being tested. This could lead to regressions or undetected issues. Consider enabling these tests for local development or in a less constrained CI environment, or adding a tracking issue to re-enable them.
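
One way to do that, sketched below with a made-up environment variable and marker name, is to gate the heavier networks behind an opt-in flag so they still run locally or in a nightly job without burdening regular CI:

```python
import os

import pytest

# Hypothetical opt-in switch for the full Keras model list.
RUN_FULL_SUITE = os.environ.get("TVM_TFLITE_FULL_TESTS", "0") == "1"

full_suite_only = pytest.mark.skipif(
    not RUN_FULL_SUITE,
    reason="set TVM_TFLITE_FULL_TESTS=1 to run the full Keras model list",
)

@full_suite_only
def test_vgg16_placeholder():
    # Placeholder body: the real test would convert and verify keras_app.VGG16.
    ...
```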

@srkreddy1238 changed the title from "[FRONTEND][TFLITE] TFLite frontend ported from Relax" to "[FRONTEND][TFLITE] TFLite frontend ported from Relay" on Jan 27, 2026