
Merged two runtimes #4164

Open
cehongwang wants to merge 1 commit into main from cehongw/python-runtime-rework

Conversation

@cehongwang
Collaborator

Description

Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.

Fixes # (issue)

Type of change

Please delete options that are not relevant and/or add your own.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that relevant reviewers are notified

@cehongwang cehongwang requested a review from narendasan April 4, 2026 01:22
@meta-cla meta-cla bot added the cla signed label Apr 4, 2026
@cehongwang cehongwang requested a review from zewenli98 April 4, 2026 01:22
@github-actions github-actions bot added the documentation, component: tests, component: conversion, component: core, component: api [Python], component: runtime, and component: dynamo labels Apr 4, 2026
@cehongwang cehongwang force-pushed the cehongw/python-runtime-rework branch 3 times, most recently from 878f4b4 to 27703d5 Compare April 4, 2026 01:56
Signed-off-by: Torch-TensorRT Github Bot <torch-tensorrt.github.bot@nvidia.com>
@cehongwang cehongwang force-pushed the cehongw/python-runtime-rework branch from 27703d5 to ef0662c Compare April 4, 2026 02:05
Collaborator

@narendasan narendasan left a comment

Just reviewed the core stuff for now. I don't think this really solves the issue. The core idea is that we want a Python implementation of the Torchbind endpoints (execute_engine / TRTEngine) that lets us run the same programs with either standard torch-trt or Python-only builds, rather than two implementations that are kind of mixed together.
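The drop-in pattern described here could be sketched roughly as below. This is an illustrative mock, not the real torch-trt API: the registry, the `cpp_runtime_available` probe, and all class/op names are hypothetical stand-ins for the actual torchbind/`torch.library` machinery.

```python
# Hypothetical sketch: one operator/class name, with the Python
# implementation registered only when the C++ runtime is unavailable.
registry = {}

def register_engine_impl(name, impl):
    # First registration wins, so a present C++ implementation
    # keeps priority over the Python fallback.
    if name not in registry:
        registry[name] = impl
    return registry[name]

def cpp_runtime_available():
    # The real check would probe the torchbind registration
    # (e.g. via torch.classes); hardcoded False for illustration.
    return False

class PyTRTEngineSketch:
    """Stand-in for a Python TRTEngine implementation."""
    def execute(self, inputs):
        return list(inputs)  # placeholder for enqueueing on TensorRT

# Register the Python fallback under the same name the C++ runtime
# would use, so callers never see two distinct code paths.
if not cpp_runtime_available():
    register_engine_impl("tensorrt::TRTEngine", PyTRTEngineSketch)
```

Because both runtimes would live behind one name, the captured graph stays identical regardless of which build is installed.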

)


@torch.library.register_fake("tensorrt::execute_engine_python") # type: ignore
Collaborator

Why do we need a separate operator for this? Aren't we just changing the implementation of TRTEngine to be either Python or C++?

Collaborator

The problem is that if it's a separate op, then you can't interchange between C++ and Python-only builds.

# ---------------------------------------------------------------------------


class PythonTRTEngine:
Collaborator

I think this class should be "TRTEngine" and only "registered" if the C++ runtime is unavailable. It should also be a valid script object so that the same operator works with the Python and C++ versions of the objects, and it should use the exact same APIs as the ones we expose in the JIT_hooks file.

register_opaque_type(PythonTRTEngine, typ="reference")


@torch.library.custom_op( # type: ignore[misc]
Collaborator

Same thing here: this operator should only get registered if the C++ library is not available, and it should take the name of the C++ op.

def execute_engine_python(
    input_tensors: List[torch.Tensor], engine: PythonTRTEngine
) -> List[torch.Tensor]:
    outputs = engine.execute(input_tensors)
Collaborator

Would rather use a struct + function design than a masked call to a method, similar to the C++ structure.
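The struct + function shape being suggested could look roughly like this. All names here (`EngineState`, `execute_engine`) are illustrative stand-ins mirroring the C++ `execute_engine(inputs, engine)` call shape, not the real module:

```python
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class EngineState:
    """Plain data holder, analogous to the C++ TRTEngine struct."""
    name: str
    input_names: List[str] = field(default_factory=list)
    output_names: List[str] = field(default_factory=list)

def execute_engine(inputs: List[Any], engine: EngineState) -> List[Any]:
    # A real implementation would bind tensors and enqueue on a
    # TensorRT execution context; echoing inputs shows the call shape.
    return list(inputs)

state = EngineState(name="demo", input_names=["x"], output_names=["y"])
outputs = execute_engine([1, 2, 3], state)
```

Keeping execution as a free function over a plain state object makes the Python path structurally parallel to the C++ op, rather than hiding the call behind a method.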

Collaborator

It's cool that we have this, but we should look into whether there is a way to drop/mask registrations to change the runtime implementation, rather than relying on distinct graph constructions.

return

if self._is_python_runtime:
    self.engine = PythonTRTEngine(
Collaborator

Yeah, we should be trying to monitor the torchbind registration and register a class if there is no C++ API, rather than having two code paths.
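One way to sketch that probe: attempt to resolve the torchbind class and fall back if it is absent. The attribute path `torch.classes.tensorrt.Engine` below is an assumed example, not necessarily the actual registered name:

```python
def cpp_engine_registered() -> bool:
    """Probe whether a C++ torchbind engine class is registered.

    Assumed sketch: torch.classes raises when a class namespace or
    class was never registered (i.e. the runtime .so was not loaded).
    """
    try:
        import torch
        torch.classes.tensorrt.Engine  # noqa: B018 - probe only
        return True
    except Exception:
        return False

# A single flag, computed once, decides which implementation registers.
use_python_runtime = not cpp_engine_registered()
```

The point is that the decision happens once at import time, and everything downstream sees exactly one engine class and one operator.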

    metadata = pickle.loads(dumped_metadata)
    return metadata

def decode_metadata(encoded_metadata: bytes | str) -> Any:
    if isinstance(encoded_metadata, str):
Collaborator

Why was this rewritten?

)

def set_extra_state(self, state: SerializedTorchTensorRTModuleFmt) -> None:
def set_extra_state(self, state: TorchTensorRTModuleExtraState) -> None:
Collaborator

Why are we changing any of this? It should be the same.

metadata["output_tensors_are_unowned"]
)

def __del__(self) -> None:
Collaborator

Is this necessary? Can't we just use `__del__` in the actual TRTEngine class?
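Putting cleanup on the engine class itself, as suggested, could look like this. `TRTEngineSketch` and its members are illustrative, not the real class:

```python
class TRTEngineSketch:
    """Illustrative engine that owns its own teardown via __del__."""

    def __init__(self):
        self.context = object()  # stands in for a TRT execution context
        self.released = False

    def release(self):
        # Idempotent teardown, so explicit release and GC-triggered
        # __del__ can both run safely in either order.
        if not self.released:
            self.context = None
            self.released = True

    def __del__(self):
        self.release()

eng = TRTEngineSketch()
eng.release()
```

With the finalizer on the engine, the wrapping module never needs its own `__del__`; the resource and its lifetime live in one place.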

Collaborator

Generally, there are a lot of changes in this file that I don't understand why we are making. Isn't the entire point of this feature to detect when the C++ runtime is not available and drop in compatible Python implementations of the C++ runtime APIs, rather than folding two separate implementations into one class?
