
Improve Docstring and Debugging #160

Open
Xeratec wants to merge 7 commits into pulp-platform:devel from Xeratec:pr/docs-and-debug

Conversation


@Xeratec Xeratec commented Feb 5, 2026

This PR adds many missing docstring comments and improves debugging, especially in GUI debuggers, by providing a more helpful `__repr__()` for the `_ReferenceBuffer` class. Additionally, it moves the `MemoryAwareClosureGeneration` and `MemoryAwarePrint*` passes from `CommonExtensions` to the `MemoryLevelExtension`.

Added

  • Add many missing docstrings
  • Add __repr__() function for the _ReferenceBuffer class (see the sketch below)
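As a rough illustration of the debugging improvement, a __repr__() along these lines makes reference buffers readable in a GUI debugger. This is a hedged sketch: the field names (name, _referenceName, _memoryLevel) are assumptions based on attributes mentioned elsewhere in this PR, not the exact Deeploy implementation.

```python
class _ReferenceBuffer:
    """Illustrative stand-in for Deeploy's _ReferenceBuffer."""

    def __init__(self, name: str, referenceName: str, memoryLevel: str = "L2"):
        self.name = name
        self._referenceName = referenceName  # name of the referenced buffer
        self._memoryLevel = memoryLevel

    def __repr__(self) -> str:
        # Surface the reference target and memory level so a GUI debugger
        # shows something more useful than the default <object at 0x...>.
        return (f"_ReferenceBuffer(name={self.name!r}, "
                f"references={self._referenceName!r}, "
                f"memoryLevel={self._memoryLevel!r})")
```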

Changed

  • Move MemoryAwareClosureGeneration pass to MemoryLevelExtension
  • Move MemoryAwarePrint* passes to MemoryLevelExtension
  • Make sizeInBytes a property instead of a method (see the sketch after this list)
  • Move AnnotateNeurekaWeightMemoryLevel to the Neureka-specific folder
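The sizeInBytes change is an API cleanup from a method call to a read-only attribute. A minimal sketch of the idea, assuming a buffer with an element count and a per-element type width (illustrative names, not the exact Deeploy fields):

```python
class VariableBuffer:
    """Illustrative stand-in for Deeploy's VariableBuffer."""

    def __init__(self, numElements: int, typeWidthBits: int):
        self.numElements = numElements
        self.typeWidthBits = typeWidthBits

    @property
    def sizeInBytes(self) -> int:
        # Call sites change from `buf.sizeInBytes()` to `buf.sizeInBytes`.
        return (self.numElements * self.typeWidthBits + 7) // 8  # round up to whole bytes
```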

PR Merge Checklist

  1. The PR is rebased on the latest devel commit and pointing to devel.
  2. Your PR has been reviewed and approved.
  3. All checks are passing.
  4. The CHANGELOG.md file has been updated.
  5. If the Docker was modified, change its link back after review.

@Xeratec Xeratec added this to the Release 0.2.2 milestone Feb 5, 2026
@Xeratec Xeratec self-assigned this Feb 5, 2026
@Xeratec Xeratec requested a review from Victor-Jung as a code owner February 5, 2026 10:20
@Xeratec Xeratec added the Documentation Improvements or additions to documentation label Feb 5, 2026
@Xeratec Xeratec added this to Deeploy Feb 5, 2026

coderabbitai bot commented Feb 5, 2026

📝 Walkthrough

Summary by CodeRabbit

Release Notes

  • New Features

    • Added pattern matching framework for graph optimization with support for branching and non-branching patterns.
    • Introduced dedicated print transformation passes for debug instrumentation of inputs, outputs, and constants.
    • Added memory-aware closure generation in memory extension.
    • Introduced TilerDeployerWrapper for tiling-enhanced code deployment.
  • Refactoring

    • Changed sizeInBytes from method to property for improved API consistency.
    • Reorganized memory-aware transformation passes into dedicated extension module.
    • Moved Neureka-specific memory annotation to target-specific folder.
  • Documentation

    • Added comprehensive docstrings across code transformation passes and data types.
    • Enhanced documentation for memory management and tiling infrastructure.

Walkthrough

Reorganizes memory-aware passes into a new MemoryLevelExtension, converts sizeInBytes from a method to a @property, adds buffer/reference __repr__/__str__ implementations, moves the MemoryAware print/closure passes out of common code into MemoryLevelExtension, adds a Neureka-specific memory-annotation pass, and inserts extensive docstrings along with tiling/tiler API additions.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Changelog & release<br>`CHANGELOG.md` | Added an Unreleased entry describing doc/debug updates and API moves. |
| Core types & debug representations<br>`Deeploy/DeeployTypes.py`, `Deeploy/CommonExtensions/DataTypes.py` | Converted `VariableBuffer.sizeInBytes` to a `@property`; added `TransientBuffer.fromVariableBuffer`; added/updated `__str__`/`__repr__` for `_ReferenceBuffer`/`VariableBuffer`; added docstrings for numeric types. |
| Closure generation relocation<br>`Deeploy/CommonExtensions/CodeTransformationPasses/Closure.py`, `Deeploy/MemoryLevelExtension/CodeTransformationPasses/Closure.py` | Removed `MemoryAwareClosureGeneration` from common code; added a memory-aware `MemoryAwareClosureGeneration` under MemoryLevelExtension with startRegion/endRegion filtering; common `ClosureGeneration` gained new params, docs, and `ClosureExecutionBlock.baseBlock`. |
| Print / debug instrumentation<br>`Deeploy/CommonExtensions/CodeTransformationPasses/PrintInputs.py`, `Deeploy/MemoryLevelExtension/CodeTransformationPasses/PrintInputs.py` | Introduced generic `PrintInput/Output/ConstantGeneration` in the common area; created a `_MemoryAwareGeneration` mixin and `MemoryAwarePrint*Generation` classes in MemoryLevelExtension; removed the old `MemoryAware*` classes from the common area; updated filtering logic and imports. |
| Memory allocation & management<br>`Deeploy/CommonExtensions/CodeTransformationPasses/MemoryAllocation.py` | Added docstrings and topological buffer sorting helpers (`topologicallySortBuffers`); switched internal calls from `sizeInBytes()` to the `sizeInBytes` property; minor allocation/deallocation adjustments. |
| Profiling & introspection docs<br>`Deeploy/CommonExtensions/CodeTransformationPasses/CycleMeasurement.py`, `Deeploy/CommonExtensions/CodeTransformationPasses/IntrospectiveCodeTransformation.py` | Added extensive docstrings; `_reconstructCode` now executes the compiled template module and exposes `template.module` and `template.callable_`; other behaviors are largely documentation-only. |
| Pattern matching / matchers<br>`Deeploy/CommonExtensions/OptimizationPasses/Matchers.py` | Added `Match`, `SubgraphMatcher`, `NonBranchingMatcher`, and `BranchingMatcher` with anchor-based, non-overlapping matching logic, new methods, and comprehensive docstrings/type hints. |
| Memory-level annotation<br>`Deeploy/MemoryLevelExtension/OptimizationPasses/MemoryLevelAnnotationPasses.py`, `Deeploy/Targets/Neureka/OptimizationPasses/MemoryLevelAnnotationPasses.py` | Centralized default memory-level annotation; removed the generic Neureka weight annotator from the extension and added a Neureka-specific `AnnotateNeurekaWeightMemoryLevel` under Targets/Neureka that annotates weight buffers based on occupancy and level capacity. |
| Memory-level package & deployer<br>`Deeploy/MemoryLevelExtension/CodeTransformationPasses/__init__.py`, `Deeploy/MemoryLevelExtension/NetworkDeployers/MemoryLevelDeployer.py` | Added a package initializer (`from . import *`); the memory summary now uses `sizeInBytes` property access. |
| Tiling extension & codegen docs<br>`Deeploy/TilingExtension/TilerExtension.py`, `Deeploy/TilingExtension/TilingCodegen.py` | Large documentation additions and helper methods; added `TilerDeployerWrapper`, public tiler APIs, and many scheduling/memory-constraint utilities and docstrings; minimal signature-breaking changes. |
| Imports / bindings / tests<br>`Deeploy/Targets/*/Bindings.py`, `DeeployTest/*` | Updated imports to point to the moved MemoryLevelExtension modules; tests adjusted import paths; small formatting/import splits. |

Sequence Diagram(s)

(omitted)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs

Suggested labels

Refactor

Suggested reviewers

  • Victor-Jung
  • lukamac
🚥 Pre-merge checks | ✅ 3

✅ Passed checks (3 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title 'Improve Docstring and Debugging' directly matches the main objectives: adding docstrings and improving debugging via `__repr__()`. It is concise and clearly summarizes the primary change. |
| Description check | ✅ Passed | The description clearly relates to the changeset by listing the added docstrings and `__repr__()` function, and by detailing the moved components and the `sizeInBytes` property change. It provides sufficient detail about the changes made. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 88.19%, which is sufficient. The required threshold is 80.00%. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
Deeploy/CommonExtensions/DataTypes.py (1)

68-98: ⚠️ Potential issue | 🟡 Minor

Fix float64_t docstring mantissa/exponent swap (Line 93).
The docstring currently inverts mantissa/exponent widths relative to the class attributes.

Proposed fix
-class float64_t(FloatImmediate):
-    """64-bit float type with 11-bit mantissa and 52-bit exponent."""
+class float64_t(FloatImmediate):
+    """64-bit float type with 52-bit mantissa and 11-bit exponent."""
Deeploy/Targets/PULPOpen/Bindings.py (1)

63-75: ⚠️ Potential issue | 🟡 Minor

Duplicate closure definitions detected.

FunctionCallClosure and ForkClosure are defined twice:

  • Lines 63-69: First definitions
  • Lines 71-75: Duplicate definitions that overwrite the first

This appears to be an unintentional artifact from the refactoring. Please remove the duplicate definitions.

🔧 Proposed fix to remove duplicates
 FunctionCallClosure = partial(ClosureGeneration, closureSuffix = "_closure")
 ClusterClosure = partial(ClosureGeneration,
                          closureSuffix = "_cluster_entry",
                          closureCallTemplate = _clusterEntryClosureCallTemplate)
 ForkClosure = partial(ClosureGeneration,
                       closureSuffix = "_cluster_fork",
                       closureCallTemplate = _clusterForkClosureCallTemplate)
 
 TilingCallClosure = partial(ClosureGeneration, closureSuffix = "_tiling_closure")
-FunctionCallClosure = partial(ClosureGeneration, closureSuffix = "_closure")
-ForkClosure = partial(ClosureGeneration,
-                      closureSuffix = "_cluster_fork",
-                      closureCallTemplate = _clusterForkClosureCallTemplate)
 
 MemoryAwareClusterClosure = partial(MemoryAwareClosureGeneration,
Deeploy/CommonExtensions/OptimizationPasses/Matchers.py (2)

367-390: ⚠️ Potential issue | 🟠 Major

Guard against leaf nodes before indexing outputs.
When the graph node is a leaf but the pattern expects more nodes, gn.outputs[0] / pn.outputs[0] will raise IndexError. Short‑circuit when outputs are missing.

🛠️ Proposed fix
         # End of pattern
         if pattern_length == 1:
             return nodes_map
 
+        # If either graph or pattern stops here, it can't match a longer chain
+        if len(gn.outputs) == 0 or len(pn.outputs) == 0:
+            return None
+
         # if we are in the "active" pattern, the graph node has to be
         # single-output and single-use

581-633: ⚠️ Potential issue | 🟠 Major

Avoid in-place nodes_map contamination during branch backtracking.
Reusing the same dict across alternative branch attempts can leave partial mappings after a failed recursion and block valid matches. Clone per attempt and only commit on success.

🛠️ Proposed fix
                 if len(pn_input.inputs) > 0 and pn_input.inputs[0].name not in nodes_map.keys():
                     tmp = None
                     for gn_input in gn.inputs:
                         # Check if parent node of gn is constant or input node (in this case it has no additional inputs)
                         # and if node was already matched
                         if len(gn_input.inputs) > 0 and gn_input.inputs[0] not in nodes_map.values():
                             # Search for valid subgraphs
-                            tmp = self._match_nodes_recursive(pn_input.inputs[0],
-                                                              gn_input.inputs[0],
-                                                              nodes_map,
-                                                              direction = 'Reverse')
-                            if tmp is not None:
-                                nodes_map = tmp
+                            candidate_map = dict(nodes_map)
+                            tmp = self._match_nodes_recursive(pn_input.inputs[0],
+                                                              gn_input.inputs[0],
+                                                              candidate_map,
+                                                              direction='Reverse')
+                            if tmp is not None:
+                                nodes_map = tmp
+                                break
@@
                 if len(pn_input.outputs) > 0 and pn_input.outputs[0].name not in nodes_map.keys():
                     tmp = None
                     for gn_input in gn.outputs:
                         # Check if parent node of gn is is output node (in this case it has no additional outputs)
                         # and if node was already matched
                         if len(gn_input.outputs) > 0 and gn_input.outputs[0] not in nodes_map.values():
                             # Search for valid subgraphs
-                            tmp = self._match_nodes_recursive(pn_input.outputs[0],
-                                                              gn_input.outputs[0],
-                                                              nodes_map,
-                                                              direction = 'Forward')
-                            if tmp is not None:
-                                nodes_map = tmp
+                            candidate_map = dict(nodes_map)
+                            tmp = self._match_nodes_recursive(pn_input.outputs[0],
+                                                              gn_input.outputs[0],
+                                                              candidate_map,
+                                                              direction='Forward')
+                            if tmp is not None:
+                                nodes_map = tmp
+                                break
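The pattern behind this fix generalizes beyond this matcher: clone the partial mapping per candidate and commit it only on success, so a failed recursive attempt cannot leave stale entries behind. A minimal Python sketch with hypothetical names (match_fn stands in for _match_nodes_recursive):

```python
from typing import Callable, Dict, Iterable, Optional


def try_branches(pattern_node, candidates: Iterable, nodes_map: Dict[str, object],
                 match_fn: Callable) -> Optional[Dict[str, object]]:
    for candidate in candidates:
        candidate_map = dict(nodes_map)  # isolated copy per attempt
        result = match_fn(pattern_node, candidate, candidate_map)
        if result is not None:
            return result  # commit only the successful mapping
    return None  # every attempt failed; the caller's nodes_map is untouched
```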
Deeploy/CommonExtensions/CodeTransformationPasses/MemoryAllocation.py (1)

172-213: ⚠️ Potential issue | 🟠 Major

Guard against already-sorted buffers in topological sort loop.

The while loop iterates through all buffers multiple times, but the for loop processes each buffer without checking if it was already removed in a previous iteration. When a buffer is removed from unsortedBufferNames in one pass and encountered again in a subsequent pass, calling .remove() on an already-removed name raises ValueError. Add the guard check before attempting removal.

Proposed fix
         while len(unsortedBufferNames) > 0:
             for buffer in buffers:
+                if buffer.name not in unsortedBufferNames:
+                    continue
                 if isinstance(buffer, _ReferenceBuffer) and buffer._referenceName in unsortedBufferNames:
                     continue
 
                 sortedBuffers.append(buffer)
                 unsortedBufferNames.remove(buffer.name)
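For context, a self-contained sketch of the sorting loop with the guard applied. This is simplified relative to the actual pass: it assumes buffers expose .name, that reference buffers carry ._referenceName, and that every reference target appears in the input list.

```python
from typing import List


class Buffer:

    def __init__(self, name: str):
        self.name = name


class _ReferenceBuffer(Buffer):

    def __init__(self, name: str, referenceName: str):
        super().__init__(name)
        self._referenceName = referenceName


def topologicallySortBuffers(buffers: List[Buffer]) -> List[Buffer]:
    sortedBuffers: List[Buffer] = []
    unsortedBufferNames = [buf.name for buf in buffers]

    while unsortedBufferNames:
        for buffer in buffers:
            # Guard: skip buffers sorted in an earlier pass; without it the
            # .remove() below raises ValueError on the second while-iteration.
            if buffer.name not in unsortedBufferNames:
                continue
            # Defer reference buffers until their target has been sorted.
            if isinstance(buffer, _ReferenceBuffer) and buffer._referenceName in unsortedBufferNames:
                continue
            sortedBuffers.append(buffer)
            unsortedBufferNames.remove(buffer.name)

    return sortedBuffers
```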
🤖 Fix all issues with AI agents
In `@CHANGELOG.md`:
- Line 37: Fix the typo in the CHANGELOG entry: change "calss" to "class" in the
line that reads "Add `__repr__()` function for `_ReferenceBuffer` calss" so it
reads "Add `__repr__()` function for `_ReferenceBuffer` class"; update the text
around the `_ReferenceBuffer` and `__repr__()` mention if necessary to preserve
formatting consistency.

In
`@Deeploy/CommonExtensions/CodeTransformationPasses/IntrospectiveCodeTransformation.py`:
- Line 42: parseTreeDict is a mutable class attribute and should be annotated as
typing.ClassVar to make its shared, class-level nature explicit; update the
declaration of parseTreeDict (the class attribute named parseTreeDict: Dict[int,
TemplateNode]) to use ClassVar[Dict[int, TemplateNode]] and add/import ClassVar
from typing in the module so linters and readers see it's intentionally shared
across instances.

In `@Deeploy/CommonExtensions/CodeTransformationPasses/MemoryAllocation.py`:
- Around line 257-260: The allocation loop is reversing the output of
topologicallySortBuffers which already yields dependencies first, so using
reversed(...) causes dependents to be allocated before their references; remove
the reversed(...) call and iterate directly over
self.topologicallySortBuffers(outputs + transients) in the allocation loop (the
block that asserts buffer._live is False and allocates buffers referenced by
buffer.name) so reference buffers are allocated prior to their dependents.

In `@Deeploy/CommonExtensions/DataTypes.py`:
- Around line 136-148: The docstring for minimalFloatType has a parameter
name/type mismatch and a missing bracket: update the "Parameters" section to
refer to "value" (singular, matching the function signature) and correct the
type annotation to Union[float, Iterable[float], npt.NDArray] (ensure the
closing bracket and NDArray type are present) so it matches
minimalIntegerType's style and the actual signature.
- Around line 110-122: The docstring for minimalIntegerType mismatches the
function signature: change the Parameters section to refer to "value" (singular)
and fix the type annotation to match the function signature (Union[int,
Iterable[int], npt.NDArray]) including the missing closing bracket and NDArray
type; also ensure the param description reflects that a single int, an iterable
of ints, or a numpy NDArray may be passed, and keep the Returns section as
Type[IntegerImmediate] to match the function's return type.

In `@Deeploy/DeeployTypes.py`:
- Around line 390-396: The fromVariableBuffer classmethod constructs a
TransientBuffer instance into the local variable ret (using cls(name =
buffer.name, size = buffer.sizeInBytes)) but never returns it; update the method
(fromVariableBuffer) to return ret (i.e., add a return ret at the end) so
callers receive the constructed instance when passing a VariableBuffer.

In `@Deeploy/Targets/Neureka/OptimizationPasses/__init__.py`:
- Line 24: The package __init__.py currently uses "from . import *" which
doesn't import submodule symbols; replace it by explicitly importing the class
from the submodule and re-exporting it so callers can do "from
Deeploy.Targets.Neureka.OptimizationPasses import
AnnotateNeurekaWeightMemoryLevel": import AnnotateNeurekaWeightMemoryLevel from
the MemoryLevelAnnotationPasses module (referencing MemoryLevelAnnotationPasses
and AnnotateNeurekaWeightMemoryLevel) and set __all__ to include
'AnnotateNeurekaWeightMemoryLevel'.

In `@Deeploy/Targets/Neureka/OptimizationPasses/MemoryLevelAnnotationPasses.py`:
- Line 53: The list comprehension creating neurekaNodes assumes every node has
an "engine" key and can raise KeyError; change the membership test to use safe
lookup (e.g., node.attrs.get("engine")) when
comparing to self.neurekaEngineName so nodes missing "engine" are treated as
non-matching; update the comprehension that builds neurekaNodes (referencing
neurekaNodes, graph.nodes, node.attrs, and self.neurekaEngineName) accordingly.

In `@Deeploy/TilingExtension/TilerExtension.py`:
- Around line 46-91: The docstring example for class Tiler uses an undefined
variable `hierarchy` when constructing the example instance; update the example
to use the defined variable name (`memoryHierarchy`) or define `hierarchy`
explicitly so it matches the earlier example variables—e.g. change `tiler =
Tiler(hierarchy)` to `tiler = Tiler(memoryHierarchy)` in the Tiler class
docstring.
- Around line 1792-1809: The docstring for the worstCaseBufferSize property
claims it includes input/output buffers but the current implementation returns
self.tiler.worstCaseBufferSize only; either update the implementation to
actually add IO sizes or soften the docstring. To fix, either (A) compute IO
buffer sizes from the class' IO sources (e.g., attributes or helper methods that
expose input/output sizes such as any inputBuffers/outputBuffers or
get_input_buffer_sizes/get_output_buffer_sizes), merge those per memory-level
with the dictionary returned by self.tiler.worstCaseBufferSize (summing values
for matching keys) and return the combined dict from worstCaseBufferSize, or (B)
change the worstCaseBufferSize docstring to state it delegates to
self.tiler.worstCaseBufferSize and does not include input/output buffers. Ensure
you modify the worstCaseBufferSize property accordingly and reference
self.tiler.worstCaseBufferSize in the implementation or docstring.
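For option (A) of the worstCaseBufferSize prompt above, merging per-memory-level size dictionaries reduces to summing values on matching keys. A hedged sketch; the helper name and dict shapes are assumptions:

```python
from collections import Counter
from typing import Dict


def mergeBufferSizes(tileSizes: Dict[str, int], ioSizes: Dict[str, int]) -> Dict[str, int]:
    merged = Counter(tileSizes)
    merged.update(ioSizes)  # Counter.update sums values for keys present in both dicts
    return dict(merged)
```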
🧹 Nitpick comments (6)
Deeploy/CommonExtensions/CodeTransformationPasses/IntrospectiveCodeTransformation.py (1)

42-42: Consider adding cache size management.

The parseTreeDict cache has no size limit or eviction policy. While this may not be an issue for typical workloads with a limited number of unique templates, it could lead to unbounded memory growth in long-running processes that generate many unique templates.

Consider implementing a cache size limit (e.g., using functools.lru_cache pattern or a bounded dictionary) if memory consumption becomes a concern in production environments.
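If a bound ever becomes necessary, one lightweight option is an insertion-ordered dict with FIFO eviction. A sketch only, not the current Deeploy cache:

```python
from collections import OrderedDict
from typing import Any


class BoundedCache(OrderedDict):
    """Evict the oldest entry once maxSize is exceeded (FIFO policy)."""

    def __init__(self, maxSize: int = 128):
        super().__init__()
        self.maxSize = maxSize

    def __setitem__(self, key: Any, value: Any) -> None:
        super().__setitem__(key, value)
        if len(self) > self.maxSize:
            self.popitem(last = False)  # drop the oldest inserted entry
```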

Deeploy/Targets/Neureka/OptimizationPasses/MemoryLevelAnnotationPasses.py (1)

61-63: Boundary condition uses < instead of <=.

A buffer that would exactly fill the remaining memory capacity is excluded. If this is intentional (e.g., reserving some headroom), consider adding a comment. Otherwise, using <= would allow buffers that exactly fit.

Suggested change if exact fit should be allowed
-                if weightMemoryOccupation + _neurekaWeightBufferSize(buffer) < self._weightMemoryLevel.size:
+                if weightMemoryOccupation + _neurekaWeightBufferSize(buffer) <= self._weightMemoryLevel.size:
Deeploy/MemoryLevelExtension/CodeTransformationPasses/Closure.py (2)

79-79: Misleading comment: "Don't override this".

This method is an override of the base class's _generateClosureStruct. The comment likely means "subclasses of this class should not override this method," but it reads as if the current implementation shouldn't override the parent. Consider clarifying.

Suggested clarification
-    # Don't override this
+    # Subclasses should not override this method
     def _generateClosureStruct(self, ctxt: NetworkContext, executionBlock: ExecutionBlock):

120-121: Filter condition can be simplified.

The condition buf._memoryLevel == self.startRegion or buf._memoryLevel != self.endRegion is equivalent to buf._memoryLevel != self.endRegion (assuming startRegion != endRegion). The startRegion check is only meaningful if startRegion == endRegion, which would be an unusual configuration.

If the intent is simply "exclude endRegion buffers," the condition could be simplified:

Simplified filter condition
-            if buf._memoryLevel == self.startRegion or buf._memoryLevel != self.endRegion:
+            if buf._memoryLevel != self.endRegion:

If the explicit startRegion check serves a purpose (e.g., documentation or future extensibility), consider adding a comment explaining why.

Deeploy/MemoryLevelExtension/CodeTransformationPasses/__init__.py (1)

24-24: from . import * does not import submodules; consider explicit imports for better code organization.

While this pattern technically doesn't expose MemoryAwareClosureGeneration, MemoryAwarePrintInputGeneration, or MemoryAwarePrintOutputGeneration via the package namespace, the codebase imports these classes directly from their submodules (e.g., from ...CodeTransformationPasses.Closure import MemoryAwareClosureGeneration), so there is no current functional issue. For improved code clarity and package organization, consider using explicit imports with __all__.

Suggested improvement for explicit imports
-from . import *
+from .Closure import MemoryAwareClosureGeneration
+from .PrintInputs import MemoryAwarePrintInputGeneration, MemoryAwarePrintOutputGeneration
+
+__all__ = [
+    "MemoryAwareClosureGeneration",
+    "MemoryAwarePrintInputGeneration",
+    "MemoryAwarePrintOutputGeneration",
+]
Deeploy/CommonExtensions/CodeTransformationPasses/MemoryAllocation.py (1)

310-372: Optional: silence unused verbose if you keep Ruff clean.
This parameter is unused and triggers ARG002; consider a no-op assignment or a # noqa: ARG002.

🧹 Example (no-op use)
     def apply(self,
               ctxt: NetworkContext,
               executionBlock: ExecutionBlock,
               name: str,
               verbose: CodeGenVerbosity = _NoVerbosity) -> Tuple[NetworkContext, ExecutionBlock]:
+        _ = verbose
         references = self.extractDynamicReferences(ctxt,
                                                    executionBlock,
                                                    unrollStructs = True,
                                                    includeGlobalReferences = False)

Comment on lines 110 to +122
def minimalIntegerType(value: Union[int, Iterable[int], npt.NDArray]) -> Type[IntegerImmediate]:
    """Returns the minimal integer type that can represent all values in the given list.

    Parameters
    ----------
    values : Union[int, Iterable[int]
        The list of integer values to analyze.

    Returns
    -------
    Type[IntegerImmediate]
        The minimal integer type that can represent all values.
    """

⚠️ Potential issue | 🟡 Minor

Docstring parameter name/type mismatch in minimalIntegerType.
The docstring uses values (plural) and is missing the closing bracket and NDArray type.

Proposed fix
-    Parameters
-    ----------
-    values : Union[int, Iterable[int]
-        The list of integer values to analyze.
+    Parameters
+    ----------
+    value : Union[int, Iterable[int], npt.NDArray]
+        The integer value(s) to analyze.
🤖 Prompt for AI Agents
In `@Deeploy/CommonExtensions/DataTypes.py` around lines 110 - 122, The docstring
for minimalIntegerType mismatches the function signature: change the Parameters
section to refer to "value" (singular) and fix the type annotation to match the
function signature (Union[int, Iterable[int], npt.NDArray]) including the
missing closing bracket and NDArray type; also ensure the param description
reflects that a single int, an iterable of ints, or a numpy NDArray may be
passed, and keep the Returns section as Type[IntegerImmediate] to match the
function's return type.

Comment on lines 136 to +148
def minimalFloatType(value: Union[float, Iterable[float], npt.NDArray]) -> Type[FloatImmediate]:
    """Returns the minimal float type that can represent all values in the given list.

    Parameters
    ----------
    values : Union[float, Iterable[float]
        The list of float values to analyze.

    Returns
    -------
    Type[FloatImmediate]
        The minimal float type that can represent all values.
    """

⚠️ Potential issue | 🟡 Minor

Docstring parameter name/type mismatch in minimalFloatType.
Same issue as minimalIntegerType: values vs value, and missing closing bracket/NDArray.

Proposed fix
-    Parameters
-    ----------
-    values : Union[float, Iterable[float]
-        The list of float values to analyze.
+    Parameters
+    ----------
+    value : Union[float, Iterable[float], npt.NDArray]
+        The float value(s) to analyze.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 def minimalFloatType(value: Union[float, Iterable[float], npt.NDArray]) -> Type[FloatImmediate]:
-    """Returns the minimal float type that can represent all values in the given list.
-
-    Parameters
-    ----------
-    values : Union[float, Iterable[float]
-        The list of float values to analyze.
-
-    Returns
-    -------
-    Type[FloatImmediate]
-        The minimal float type that can represent all values.
-    """
+    """Returns the minimal float type that can represent all values in the given list.
+
+    Parameters
+    ----------
+    value : Union[float, Iterable[float], npt.NDArray]
+        The float value(s) to analyze.
+
+    Returns
+    -------
+    Type[FloatImmediate]
+        The minimal float type that can represent all values.
+    """
🤖 Prompt for AI Agents
In `@Deeploy/CommonExtensions/DataTypes.py` around lines 136 - 148, the
docstring for minimalFloatType has a parameter name/type mismatch and a missing
bracket: update the "Parameters" section to refer to "value" (singular,
matching the function signature) and correct the type annotation to
Union[float, Iterable[float], npt.NDArray] (ensure the closing bracket and
NDArray type are present) so it matches minimalIntegerType's style and the
actual signature.

Comment on lines +390 to 396
    @classmethod
    def fromVariableBuffer(cls, buffer: VariableBuffer):
        ret = cls(name = buffer.name, size = buffer.sizeInBytes)

    @property
    def sizeInBytes(self) -> int:
        return int(self.size)

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find . -name "DeeployTypes.py" -type f

Repository: pulp-platform/Deeploy

Length of output: 90


🏁 Script executed:

sed -n '385,400p' ./Deeploy/DeeployTypes.py

Repository: pulp-platform/Deeploy

Length of output: 607


Add return statement to fromVariableBuffer method.

The fromVariableBuffer classmethod constructs a ret instance but never returns it, causing the method to implicitly return None. Callers will receive None instead of the expected TransientBuffer instance.

Fix
     @classmethod
     def fromVariableBuffer(cls, buffer: VariableBuffer):
         ret = cls(name = buffer.name, size = buffer.sizeInBytes)
+        return ret
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
     @classmethod
     def fromVariableBuffer(cls, buffer: VariableBuffer):
-        ret = cls(name = buffer.name, size = buffer.sizeInBytes)
+        ret = cls(name = buffer.name, size = buffer.sizeInBytes)
+        return ret

     @property
     def sizeInBytes(self) -> int:
         return int(self.size)
🧰 Tools
🪛 Ruff (0.14.14)

[error] 392-392: Local variable ret is assigned to but never used

Remove assignment to unused variable ret

(F841)

🤖 Prompt for AI Agents
In `@Deeploy/DeeployTypes.py` around lines 390 - 396, The fromVariableBuffer
classmethod constructs a TransientBuffer instance into the local variable ret
(using cls(name = buffer.name, size = buffer.sizeInBytes)) but never returns it;
update the method (fromVariableBuffer) to return ret (i.e., add a return ret at
the end) so callers receive the constructed instance when passing a
VariableBuffer.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
Deeploy/CommonExtensions/CodeTransformationPasses/MemoryAllocation.py (1)

172-213: ⚠️ Potential issue | 🟠 Major

topologicallySortBuffers can re-process already-sorted buffers.
A second while-iteration can hit unsortedBufferNames.remove(buffer.name) for buffers already removed, raising ValueError.

🛠️ Proposed fix
         while len(unsortedBufferNames) > 0:
             for buffer in buffers:
+                if buffer.name not in unsortedBufferNames:
+                    continue
                 if isinstance(buffer, _ReferenceBuffer) and buffer._referenceName in unsortedBufferNames:
                     continue
 
                 sortedBuffers.append(buffer)
                 unsortedBufferNames.remove(buffer.name)
Deeploy/CommonExtensions/OptimizationPasses/Matchers.py (2)

498-633: ⚠️ Potential issue | 🟠 Major

Backtracking uses shared nodes_map and can poison later candidates.

Recursive calls mutate nodes_map in-place even when a candidate path fails, which can incorrectly block subsequent candidates and lead to false negatives. Use a copy per branch and only merge on success.

🧩 Proposed fix (copy-on-branch)
-                    tmp = None
+                    tmp = None
                     for gn_input in gn.inputs:
@@
-                            tmp = self._match_nodes_recursive(pn_input.inputs[0],
-                                                              gn_input.inputs[0],
-                                                              nodes_map,
-                                                              direction = 'Reverse')
+                            candidate_map = dict(nodes_map)
+                            tmp = self._match_nodes_recursive(pn_input.inputs[0],
+                                                              gn_input.inputs[0],
+                                                              candidate_map,
+                                                              direction = 'Reverse')
                             if tmp is not None:
                                 nodes_map = tmp
+                                break
@@
-                    if tmp == None:
+                    if tmp == None:
                         return None
@@
-                    tmp = None
+                    tmp = None
                     for gn_input in gn.outputs:
@@
-                            tmp = self._match_nodes_recursive(pn_input.outputs[0],
-                                                              gn_input.outputs[0],
-                                                              nodes_map,
-                                                              direction = 'Forward')
+                            candidate_map = dict(nodes_map)
+                            tmp = self._match_nodes_recursive(pn_input.outputs[0],
+                                                              gn_input.outputs[0],
+                                                              candidate_map,
+                                                              direction = 'Forward')
                             if tmp is not None:
                                 nodes_map = tmp
+                                break
@@
-                    if tmp == None:
+                    if tmp == None:
                         return None

198-248: ⚠️ Potential issue | 🟡 Minor

Non-overlap tracking depends on unique node names.

Line 239 uses node.name for overlap detection. If the graph contains nodes with duplicate names, legitimate matches could be incorrectly rejected or overlapping matches could be allowed. Consider tracking identity via id(node) instead, or enforce name uniqueness upstream when constructing graphs.

🛠️ Safer overlap tracking by identity
-        matched_node_names = set()
+        matched_node_ids = set()

-        def node_names(match: Match):
-            return [node.name for node in match.nodes_map.values()]
+        def node_ids(match: Match):
+            return [id(node) for node in match.nodes_map.values()]

         def is_overlap(match: Match):
-            return not matched_node_names.isdisjoint(node_names(match))
+            return not matched_node_ids.isdisjoint(node_ids(match))
@@
-                matched_node_names.update(node_names(match))
+                matched_node_ids.update(node_ids(match))
🤖 Fix all issues with AI agents
In `@Deeploy/MemoryLevelExtension/CodeTransformationPasses/__init__.py`:
- Line 24: Replace the wildcard import in
Deeploy.MemoryLevelExtension.CodeTransformationPasses.__init__ (currently "from
. import *") with explicit re-exports: import the public classes/functions from
their submodules (e.g., from .submodule_name import ClassName, function_name)
and then define __all__ = ["ClassName", "function_name", ...] so the package
namespace exposes the intended API; update the list to include every public
symbol you want exported from the package.

In `@Deeploy/MemoryLevelExtension/CodeTransformationPasses/PrintInputs.py`:
- Around line 110-119: The code calls
self.regex.findall(ctxt.lookup(key)._memoryLevel) without guarding against
_memoryLevel being None; change the logic in the PrintInputs method to use the
already-retrieved _buffer (from ctxt.lookup(key)), check hasattr(_buffer,
"_memoryLevel") and also if getattr(_buffer, "_memoryLevel") is None return
False when self.regex is set, and only call self.regex.findall on a non-None
string value (avoid calling ctxt.lookup(key) twice); ensure the returned boolean
remains ret != []. (A hedged sketch of this guard follows this list.)

In `@Deeploy/TilingExtension/TilingCodegen.py`:
- Around line 486-528: The docstring example for minimizeRectangle is incorrect:
update the example output at the first example (the one after ">>>
minimizeRectangle(rect, (4, 4))") so that the returned reference shape remains
(4, 4) (i.e. the tuple in the docstring should show (HyperRectangle(offset=(0,
0), dims=(2, 2)), (4, 4))). Locate the minimizeRectangle docstring and replace
the wrong output tuple with the corrected one, ensuring HyperRectangle, offset
and dims text matches the actual repr used elsewhere in the docstring.
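As a concrete reading of the PrintInputs prompt above, the None-safe filter could look roughly like this. It is a sketch under assumed names: ctxt.lookup and self.regex follow the prompt, while the class and method names are hypothetical.

```python
import re
from typing import Optional


class _MemoryAwareGenerationSketch:

    def __init__(self, memoryHierarchyRegex: Optional[str] = None):
        self.regex = re.compile(memoryHierarchyRegex) if memoryHierarchyRegex else None

    def _matchesMemoryLevel(self, ctxt, key: str) -> bool:
        _buffer = ctxt.lookup(key)  # look up once and reuse
        if self.regex is None:
            return True  # no filter configured: everything matches
        level = getattr(_buffer, "_memoryLevel", None)
        if level is None:
            return False  # unannotated buffers never match an explicit regex
        return self.regex.findall(level) != []
```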
🧹 Nitpick comments (2)
Deeploy/TilingExtension/TilingCodegen.py (1)

709-749: Prefer zip(..., strict=True) after the length assert.

This makes the intent explicit and keeps a length mismatch check even if asserts are optimized out; it also resolves Ruff B905. Please verify the project targets Python ≥ 3.10 before adopting.

Suggested change
-    return sum(offset * stride for offset, stride in zip(offsets, strides))
+    return sum(offset * stride for offset, stride in zip(offsets, strides, strict = True))
Deeploy/CommonExtensions/OptimizationPasses/Matchers.py (1)

111-165: Add NotImplementedError to abstract methods for fail-fast behavior.

The base class methods _valid_pattern and _nodes_map_from_anchor are currently silent no-ops. Since SubgraphMatcher is never instantiated directly and both subclasses (NonBranchingMatcher and BranchingMatcher) properly override these methods with substantive implementations, raising NotImplementedError is safe and improves error handling if a new subclass accidentally fails to override them.

♻️ Proposed fix
 def _valid_pattern(self, pattern: gs.Graph) -> None:
@@
-        _ = pattern
+        raise NotImplementedError(
+            "SubgraphMatcher._valid_pattern must be implemented by subclasses"
+        )
@@
 def _nodes_map_from_anchor(self, anchor: gs.Node, pattern: gs.Graph) -> Optional[Dict[str, gs.Node]]:
@@
-        _, _ = anchor, pattern
+        raise NotImplementedError(
+            "SubgraphMatcher._nodes_map_from_anchor must be implemented by subclasses"
+        )


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
Deeploy/TilingExtension/TilingCodegen.py (1)

810-843: ⚠️ Potential issue | 🟡 Minor

Docstring claims "row-major order" but tile iteration is column-major.

Lines 815–816 state tiles are generated "in row-major order, iterating through dimensions from outermost to innermost," and line 832 repeats this. However, nextTileIndex (lines 848–853) increments dimension 0 first and carries to higher indices, meaning the leftmost (outermost) index varies fastest — this is column-major (Fortran) order, not row-major (C) order where the rightmost index varies fastest.

Proposed docstring fix
-    The tiles are generated in row-major order, iterating through dimensions
-    from outermost to innermost.
+    The tiles are generated in column-major order, iterating through dimensions
+    from innermost to outermost (i.e., the first/leftmost index varies fastest).
-        Generate tile indices in row-major order.
+        Generate tile indices in column-major order.
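To see why this is column-major, here is a sketch of the carry-based increment described for nextTileIndex (illustrative Python, not the exact Deeploy code): dimension 0 is bumped first, and overflow carries into the slower-varying higher dimensions.

```python
from typing import List


def nextTileIndex(index: List[int], numTiles: List[int]) -> bool:
    """Advance `index` in place; return False once all tiles were visited."""
    for dim in range(len(index)):
        index[dim] += 1
        if index[dim] < numTiles[dim]:
            return True
        index[dim] = 0  # carry into the next (slower-varying) dimension
    return False


# For numTiles = [2, 2] the visiting order is (0,0), (1,0), (0,1), (1,1):
# the leftmost index varies fastest, i.e., column-major order.
```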
Deeploy/CommonExtensions/CodeTransformationPasses/MemoryAllocation.py (1)

172-215: ⚠️ Potential issue | 🟠 Major

topologicallySortBuffers can re‑remove already‑sorted names.

When any references remain after the first pass, the next iteration tries to remove names that are already removed, which raises ValueError. Skip buffers that are already sorted before removing.

🩹 Proposed fix
         while len(unsortedBufferNames) > 0:
             for buffer in buffers:
+                if buffer.name not in unsortedBufferNames:
+                    continue
                 if isinstance(buffer, _ReferenceBuffer) and buffer._referenceName in unsortedBufferNames:
                     continue

                 sortedBuffers.append(buffer)
                 unsortedBufferNames.remove(buffer.name)
🤖 Fix all issues with AI agents
In `@CHANGELOG.md`:
- Around line 16-19: The changelog line "Move `AnnotateNeurekaWeightMemoryLevel`
to `Neureka` specific folder" needs hyphenation: update it to "Move
`AnnotateNeurekaWeightMemoryLevel` to `Neureka`-specific folder" so the term
Neureka-specific is correctly hyphenated; leave the other lines
(MemoryAwareClosureGeneration, MemoryAwarePrint*, sizeInBytes) unchanged.

In `@Deeploy/MemoryLevelExtension/CodeTransformationPasses/Closure.py`:
- Around line 114-121: The current condition incorrectly makes self.startRegion
ineffective; change the filtering so buffers are included when they actually
belong to the start region or fall within the intended start..end range: inside
the loop (makoDynamicReferences / ctxt.lookup -> buf._memoryLevel), replace the
redundant "buf._memoryLevel == self.startRegion or buf._memoryLevel !=
self.endRegion" with an explicit check such as "buf._memoryLevel ==
self.startRegion or (self.startRegion <= buf._memoryLevel < self.endRegion)" if
memory levels are orderable, or otherwise implement the correct membership test
for the start..end region; if startRegion is truly unused, remove it and
simplify to a single clear condition.

In `@Deeploy/Targets/Neureka/OptimizationPasses/MemoryLevelAnnotationPasses.py`:
- Around line 61-63: The condition using a strict less-than excludes buffers
that exactly fit remaining memory; in the allocation logic inside
MemoryLevelAnnotationPasses.py update the check in the block that references
weightMemoryOccupation, _neurekaWeightBufferSize(buffer), and
self._weightMemoryLevel.size to use <= instead of < so an exact-fit buffer is
allowed and still sets buffer._memoryLevel and updates weightMemoryOccupation.
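The annotation logic being discussed amounts to a greedy first-fit over the weight memory level. A hedged sketch of that loop with the <= fix applied; _neurekaWeightBufferSize is stubbed here and all names are assumptions based on this review thread:

```python
def _neurekaWeightBufferSize(buffer) -> int:
    # Stand-in for the real Neureka weight-size computation.
    return buffer.sizeInBytes


def annotateWeightBuffers(buffers, weightMemoryLevel) -> None:
    weightMemoryOccupation = 0
    for buffer in buffers:
        bufSize = _neurekaWeightBufferSize(buffer)
        # <= admits a buffer that exactly fills the remaining capacity.
        if weightMemoryOccupation + bufSize <= weightMemoryLevel.size:
            buffer._memoryLevel = weightMemoryLevel.name
            weightMemoryOccupation += bufSize
```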
🧹 Nitpick comments (5)
Deeploy/CommonExtensions/CodeTransformationPasses/IntrospectiveCodeTransformation.py (1)

180-191: Minor docstring style inconsistency: list of str vs List[str].

Line 189 uses list of str while every other docstring in this file uses List[str] (e.g., lines 142, 219, 244, 280). Pick one convention — the rest of this file uses List[str].

Proposed fix
-        varNames : list of str
-            List of variable names to dereference within the template.
+        varNames : List[str]
+            The variable names to dereference within the template.
Deeploy/CommonExtensions/OptimizationPasses/Matchers.py (1)

253-273: Missing blank line before Notes section header.

All other docstrings in this file have a blank line separating section headers (e.g., before Notes, Parameters, Returns). Line 263 is missing this separator, breaking the NumPy docstring convention and potentially causing doc-generation tools (e.g., Sphinx with numpydoc) to misparse the section.

📝 Proposed fix
     The matching algorithm follows edges from the anchor node to build a complete
     mapping between pattern nodes and graph nodes, verifying operation types
     and attributes at each step.
+
     Notes
     -----
Deeploy/TilingExtension/TilingCodegen.py (1)

288-311: __add__ docstring for TilingSchedule is accurate.

One subtlety worth noting: the method uses self's inputBaseOffsets/outputBaseOffsets for the new schedule and silently discards other's base offset values (only their keys are validated). The current docstring phrase "maintaining the same base offsets" is technically correct but a reader might not realize other's offset values are dropped. Consider adding a brief note if this is intentional.

Deeploy/Targets/Neureka/OptimizationPasses/MemoryLevelAnnotationPasses.py (1)

34-39: SequentialPass base class is unnecessary here.

AnnotateNeurekaWeightMemoryLevel defines its own apply and registers no sub-passes. Inheriting from SequentialPass adds unused machinery. Consider inheriting from Pass directly.

Deeploy/CommonExtensions/CodeTransformationPasses/PrintInputs.py (1)

397-401: Consider extracting duplicated apply bodies into a shared helper.

All three apply methods (PrintInputGeneration, PrintOutputGeneration, PrintConstantGeneration) share the same structure: extract references → filter via _getRepDict → call addLeft/addRight. The only variation is the insertion side. A small base-class helper parameterized by addLeft vs addRight would eliminate the repetition.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
Deeploy/CommonExtensions/DataTypes.py (1)

12-97: ⚠️ Potential issue | 🟡 Minor

Docstrings for data type classes look good overall, but float64_t has swapped mantissa/exponent.

Line 93: The docstring reads "11-bit mantissa and 52-bit exponent", but the class attributes are typeMantissa = 52 and typeExponent = 11. IEEE 754 double-precision has a 52-bit mantissa and 11-bit exponent, so the docstring has the values reversed.

Proposed fix
 class float64_t(FloatImmediate):
-    """64-bit float type with 11-bit mantissa and 52-bit exponent."""
+    """64-bit float type with 52-bit mantissa and 11-bit exponent."""
     typeName = "float64_t"
     typeWidth = 64
     typeMantissa = 52
     typeExponent = 11
Deeploy/CommonExtensions/CodeTransformationPasses/MemoryAllocation.py (1)

199-215: ⚠️ Potential issue | 🔴 Critical

Bug: topologicallySortBuffers will crash or produce duplicates on multi-pass iterations.

The inner for loop iterates over all buffers (the original list) on every pass of the while loop. Buffers that were already sorted in a prior pass will be re-processed: sortedBuffers.append(buffer) creates a duplicate, and unsortedBufferNames.remove(buffer.name) raises ValueError because the name was already removed.

This manifests whenever at least one _ReferenceBuffer is deferred to a later pass (i.e., any real dependency chain exists in the input).

Proposed fix — skip already-sorted buffers
         while len(unsortedBufferNames) > 0:
             for buffer in buffers:
+                if buffer.name not in unsortedBufferNames:
+                    continue
                 if isinstance(buffer, _ReferenceBuffer) and buffer._referenceName in unsortedBufferNames:
                     continue
 
                 sortedBuffers.append(buffer)
                 unsortedBufferNames.remove(buffer.name)
DeeployTest/testPrintInputOutputTransformation.py (1)

58-61: ⚠️ Potential issue | 🔴 Critical

Type mismatch: MemoryLevel object passed where a str is expected.

defaultTargetMemoryLevel at lines 59-60 is a MemoryLevel object, but _MemoryAwareGeneration.__init__ (line 65 in Deeploy/MemoryLevelExtension/CodeTransformationPasses/PrintInputs.py) expects memoryHierarchyRegex: Optional[str]. This will raise TypeError when re.compile(defaultTargetMemoryLevel) is called at line 77.

Pass defaultTargetMemoryLevel.name instead:

Suggested fix
                        binding.codeTransformer.passes += [
-                            MemoryAwarePrintInputGeneration(defaultTargetMemoryLevel),
-                            MemoryAwarePrintOutputGeneration(defaultTargetMemoryLevel),
+                            MemoryAwarePrintInputGeneration(defaultTargetMemoryLevel.name),
+                            MemoryAwarePrintOutputGeneration(defaultTargetMemoryLevel.name),
                        ]
🤖 Fix all issues with AI agents
In `@Deeploy/CommonExtensions/OptimizationPasses/Matchers.py`:
- Around line 253-272: The NumPy-style docstring in the sequential pattern
matcher block is missing a blank line before the "Notes" section; update the
docstring in Deeploy/CommonExtensions/OptimizationPasses/Matchers.py (the
sequential matcher docstring that starts "Pattern matcher for sequential
computational graphs...") by inserting a single blank line between the end of
the preceding bullet list and the "Notes" header so the sections are separated
per NumPy docstring conventions.
🧹 Nitpick comments (4)
Deeploy/CommonExtensions/CodeTransformationPasses/MemoryAllocation.py (2)

260-260: Use idiomatic boolean checks instead of == False / == True.

Flagged by Ruff (E712). Same applies to lines 348 and 365 in MemoryPassthroughGeneration.

Proposed fix for MemoryManagementGeneration
-            assert buffer._live == False, f"Tried to allocate already live buffer {buffer.name}"
+            assert not buffer._live, f"Tried to allocate already live buffer {buffer.name}"
...
-            assert buffer._live == True, f"Tried to deallocate already dead buffer {buffer.name}"
+            assert buffer._live, f"Tried to deallocate already dead buffer {buffer.name}"

Also applies to: 278-278


292-307: Docstring style inconsistency: Args: (Google-style) vs numpydoc used elsewhere.

Line 304 uses Args: while every other docstring in this file uses the Parameters\n---------- numpydoc convention.

Proposed fix
-        """Initialize the memory management passthrough pass.
-
-        Args:
-            memoryHierarchyRegex (Optional[str], optional): A regex pattern to match memory hierarchy.
-            Defaults to None.
-        """
+        """Initialize the memory management passthrough pass.
+
+        Parameters
+        ----------
+        memoryHierarchyRegex : Optional[str]
+            A regex pattern to match memory hierarchy. Defaults to None.
+        """
Deeploy/MemoryLevelExtension/CodeTransformationPasses/PrintInputs.py (1)

143-193: The three apply methods are nearly identical — consider extracting a shared helper.

MemoryAwarePrintInputGeneration.apply, MemoryAwarePrintOutputGeneration.apply, and MemoryAwarePrintConstantGeneration.apply share the same structure: extract references → filter by regex → get rep dict → add template. The only variation is addLeft vs addRight. A shared method in _MemoryAwareGeneration parameterized by side would eliminate this duplication.

Also applies to: 217-267, 296-348

Deeploy/CommonExtensions/CodeTransformationPasses/PrintInputs.py (1)

153-156: Minor: inconsistent local variable naming across apply methods.

PrintInputGeneration.apply uses refDict (Line 154) while PrintOutputGeneration.apply and PrintConstantGeneration.apply use rep (Lines 280, 398). Consider unifying for readability.

Also applies to: 279-282, 397-400

Comment on lines +253 to +272
"""
Pattern matcher for sequential computational graphs without branching.

This matcher is optimized for patterns that form a simple chain of operations
without splits or merges in the computational flow. It uses a recursive
algorithm to follow the linear path of operations.

The matching algorithm follows edges from the anchor node to build a complete
mapping between pattern nodes and graph nodes, verifying operation types
and attributes at each step.
Notes
-----
This matcher is efficient for linear operation sequences such as:
- Conv -> BatchNorm -> ReLU chains
- Linear -> Dropout -> Activation sequences
- Simple preprocessing pipelines

The algorithm assumes that each node in the pattern has at most one
output connection to the next node in the sequence.
"""

⚠️ Potential issue | 🟡 Minor

Missing blank line before the Notes section header.

NumPy-style docstrings require a blank line separating each section. The blank line is missing between the paragraph ending "...and attributes at each step." and the Notes header.

Proposed fix
     The matching algorithm follows edges from the anchor node to build a complete
     mapping between pattern nodes and graph nodes, verifying operation types
     and attributes at each step.
+
     Notes
     -----
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
    """
    Pattern matcher for sequential computational graphs without branching.

    This matcher is optimized for patterns that form a simple chain of operations
    without splits or merges in the computational flow. It uses a recursive
    algorithm to follow the linear path of operations.

    The matching algorithm follows edges from the anchor node to build a complete
    mapping between pattern nodes and graph nodes, verifying operation types
    and attributes at each step.

    Notes
    -----
    This matcher is efficient for linear operation sequences such as:
    - Conv -> BatchNorm -> ReLU chains
    - Linear -> Dropout -> Activation sequences
    - Simple preprocessing pipelines

    The algorithm assumes that each node in the pattern has at most one
    output connection to the next node in the sequence.
    """
🤖 Prompt for AI Agents
In `@Deeploy/CommonExtensions/OptimizationPasses/Matchers.py` around lines 253 -
272, The NumPy-style docstring in the sequential pattern matcher block is
missing a blank line before the "Notes" section; update the docstring in
Deeploy/CommonExtensions/OptimizationPasses/Matchers.py (the sequential matcher
docstring that starts "Pattern matcher for sequential computational graphs...")
by inserting a single blank line between the end of the preceding bullet list
and the "Notes" header so the sections are separated per NumPy docstring
conventions.

@Xeratec Xeratec moved this to In progress in Deeploy Feb 6, 2026

Labels

Documentation (Improvements or additions to documentation)

Projects

Status: In progress

Development

Successfully merging this pull request may close these issues.

1 participant