
Conversation

@LudovicoYIN
Contributor

Motivation

The Relax Torch frontend crashes when importing an exported program that includes
a torch.sparse_csr_tensor registered as a buffer. The crash happens during
from_exported_program because run_decompositions() triggers a PyTorch
layout_impl error for sparse tensors.

This PR avoids the crash while keeping the import pipeline functional for such
models, even though Relax does not yet support sparse tensors.

Changes

  • Skip run_decompositions() when the exported program contains sparse tensors
  • Treat aten.to_sparse.default as a no-op in the Relax Torch frontend
  • Add a regression test that imports a model with a sparse CSR buffer
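
The first two changes can be sketched roughly as follows. This is a hedged, self-contained illustration, not the actual TVM implementation: `contains_sparse_tensor` and `maybe_decompose` are hypothetical helper names, and the layout check duck-types against the string form of PyTorch layout objects (e.g. `str(torch.sparse_csr)` is `"torch.sparse_csr"`) so the sketch does not depend on importing torch.

```python
# Illustrative sketch only: helper names are hypothetical, not TVM's API.
# Sparse layouts print as e.g. "torch.sparse_csr", so we compare string
# forms rather than importing torch here.
_SPARSE_LAYOUTS = {
    "torch.sparse_coo",
    "torch.sparse_csr",
    "torch.sparse_csc",
    "torch.sparse_bsr",
    "torch.sparse_bsc",
}


def contains_sparse_tensor(state_dict):
    """Return True if any parameter or buffer uses a sparse layout."""
    for tensor in state_dict.values():
        layout = getattr(tensor, "layout", None)
        if layout is not None and str(layout) in _SPARSE_LAYOUTS:
            return True
    return False


def maybe_decompose(exported_program):
    """Run run_decompositions() only when it is safe to do so.

    PyTorch's decomposition pass raises a layout_impl error on sparse
    buffers, so the program is returned undecomposed in that case.
    """
    if contains_sparse_tensor(exported_program.state_dict):
        return exported_program
    return exported_program.run_decompositions()
```

The trade-off is that a program with sparse buffers is imported without decompositions, which keeps the pipeline functional at the cost of seeing non-decomposed ops downstream.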

Testing

  • test_frontend_from_exported_program.py

Fixes: [Bug] Relax Torch frontend crash with sparse CSR buffer in ExportedProgram (https://github.com/apache/tvm/issues/18648)

@gemini-code-assist
Contributor

Summary of Changes

Hello @LudovicoYIN, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical issue where the Relax Torch frontend would crash when attempting to import PyTorch ExportedPrograms that included sparse CSR tensors. The fix involves a strategic workaround: detecting the presence of sparse tensors and, if found, bypassing the run_decompositions() step that caused the crash. Additionally, aten.to_sparse.default is now handled as a pass-through operation. This ensures the import pipeline remains functional for such models, even though full sparse tensor support in Relax is still under development.

Highlights

  • Crash Prevention: The PR prevents a crash in the Relax Torch frontend when importing ExportedPrograms containing torch.sparse_csr_tensor buffers.
  • Decomposition Skip: run_decompositions() is now conditionally skipped if the ExportedProgram is found to contain sparse tensors, avoiding a PyTorch layout_impl error.
  • Sparse No-op: The aten.to_sparse.default operation is now treated as a no-op within the Relax Torch frontend, as Relax does not yet support sparse tensors.
  • Regression Test: A new regression test has been added to ensure the fix works and prevents future regressions for models with sparse CSR buffers.
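
In the same spirit, the no-op handling of aten.to_sparse.default can be sketched as a converter that simply forwards its input. `convert_to_sparse_noop` and the `env` mapping are illustrative stand-ins for the frontend's node-conversion machinery, not its real method names:

```python
def convert_to_sparse_noop(env, node):
    # Relax has no sparse tensor type yet, so the already-converted dense
    # input is returned unchanged and downstream ops keep using it as-is.
    # `env` maps FX nodes (or their names) to converted Relax values;
    # node.args[0] is the tensor the model asked to convert to sparse.
    return env[node.args[0]]
```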



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request effectively resolves a crash in the Relax Torch frontend when handling models with sparse CSR tensors. The approach of skipping the decomposition pass for such models is a pragmatic solution. The addition of a no-op for to_sparse and a dedicated regression test are also well-executed. My only suggestion is a minor refactoring to improve code conciseness.

@LudovicoYIN LudovicoYIN force-pushed the fix/sparse-csr-exported-program branch from fc30d61 to ef3e4a3 on January 19, 2026 03:06

def test_from_exported_program_sparse_csr_buffer():
    if not hasattr(torch, "sparse_csr"):
        pytest.skip("sparse CSR tensor is not supported in this PyTorch build")
Member


what happens if we don't skip the test?

Contributor Author


My intent was to run this test only on PyTorch builds that actually support torch.sparse_csr, to avoid false failures on builds without CSR support. In my local environment the test still passes even if I remove the skip, but I added it for portability. Do you prefer keeping the skip for compatibility, or should I remove it?

@LudovicoYIN LudovicoYIN requested a review from yongwww January 22, 2026 14:39
@LudovicoYIN
Contributor Author

@yongwww Thanks for the earlier feedback!
I removed the pytest.skip as discussed and merged main.
Could you please take another look and approve if it looks good? Thanks!



Development

Successfully merging this pull request may close these issues.

[Bug] relax.frontend.torch.from_exported_program aborts on sparse CSR buffer (layout_impl is only implemented for TensorImpl subclasses)
