
[ET-VK][conv1d] Implement height-packed depthwise conv1d operator #18333

Merged
meta-codesync[bot] merged 5 commits into gh/SS-JIA/495/base from gh/SS-JIA/495/head on Mar 27, 2026

Conversation

@SS-JIA
Contributor

SS-JIA commented Mar 19, 2026

Stack from ghstack (oldest at bottom):

Implement a depthwise conv1d operator using height-packed layout where channels
are the packed dimension (WHCN dim 1). Depthwise conv applies a separate filter
to each channel independently (groups=C), so 4 channels can be processed in
parallel using element-wise vec4 FMA over kernel positions.

Thread mapping: X=C/4, Y=L_out, Z=N. Each thread computes one output texel
(4 channels at one spatial position). Inner loop iterates over kernel positions
K with bounds-checked input access for padding.
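
To make the mapping concrete, here is a minimal Python sketch of the work a single thread performs, with nested lists standing in for texels and a hypothetical `w_packed[c4][k]` view standing in for the channels-packed weight texel described below; the real kernel is a GLSL compute shader operating on vec4s, so this is illustrative only.

```python
# Illustrative Python sketch of one thread's work (assumed names, not the actual shader).
# Thread indices: c4 = X (channel block), l_out = Y (output position), n = Z (batch).
def conv1d_dw_texel(x, w_packed, bias, n, c4, l_out,
                    stride, padding, dilation, K, L_in):
    acc = [0.0, 0.0, 0.0, 0.0]                 # vec4 accumulator
    for k in range(K):                         # inner loop over kernel positions
        l_in = l_out * stride - padding + k * dilation
        if l_in < 0 or l_in >= L_in:           # bounds check stands in for zero padding
            continue
        wk = w_packed[c4][k]                   # vec4: weights of 4 channels at position k
        for i in range(4):                     # element-wise vec4 FMA
            acc[i] += x[n][c4 * 4 + i][l_in] * wk[i]
    if bias is not None:
        for i in range(4):
            acc[i] += bias[c4 * 4 + i]
    return acc                                 # one output texel: 4 channels at l_out
```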

Weight [C,1,K] is prepacked as channels-packed so each vec4 load gives 4
channels' weights at one kernel position. Supports both buffer and texture3d
storage, fp32/fp16, optional bias, and arbitrary stride/padding/dilation.
Registered as et_vk.conv1d_dw.default (standalone custom op).
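
A rough NumPy sketch of that channels-packed layout, assuming C is a multiple of 4 for brevity (the actual repacking happens in the backend's prepacking pass; the function name here is made up):

```python
import numpy as np

def pack_depthwise_weights(weight):
    """weight: [C, 1, K] -> [C/4, K, 4].

    w_packed[c4][k] holds the weights of channels c4*4 .. c4*4+3 at kernel
    position k, i.e. one vec4 per load in the sketch above.
    """
    C, _, K = weight.shape
    return weight[:, 0, :].reshape(C // 4, 4, K).transpose(0, 2, 1)
```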

Performance on Adreno 750 (S24):

  • [1,128,4096] K=31 buffer f16: 231 GFLOP/s
  • [1,128,4096] K=31 buffer f32: 155 GFLOP/s
  • [1,512,2048] K=5 buffer f32: 66 GFLOP/s

Differential Revision: D97344091
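
Since depthwise conv1d is just conv1d with groups=C and a [C, 1, K] weight, a quick correctness check against the ATen reference for the first benchmarked shape might look like the sketch below; `et_vk_conv1d_dw` is a placeholder for however the lowered custom op ends up being invoked, not a confirmed API.

```python
import torch
import torch.nn.functional as F

N, C, L, K = 1, 128, 4096, 31                  # first benchmarked shape
x = torch.randn(N, C, L)
w = torch.randn(C, 1, K)                       # depthwise weight: one filter per channel
b = torch.randn(C)

# ATen reference: depthwise conv1d == conv1d with groups=C
ref = F.conv1d(x, w, b, stride=1, padding=K // 2, dilation=1, groups=C)

# out = et_vk_conv1d_dw(x, w, b, [1], [K // 2], [1])   # placeholder invocation
# torch.testing.assert_close(out, ref, rtol=1e-3, atol=1e-3)
```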

@pytorch-bot

pytorch-bot commented Mar 19, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18333

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (2 Unrelated Failures)

As of commit 1b33653 with merge base 7c79395:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@github-actions

This PR needs a `release notes:` label

If your change should be included in the release notes (i.e., would users of this library care about this change?), please use a label starting with `release notes:`. This helps us keep track of your work and include it in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

ssjia added 2 commits March 19, 2026 15:48
…perator"

…perator"

ssjia added 2 commits March 27, 2026 10:40
…perator"

…perator"

meta-codesync bot merged commit a65231b into gh/SS-JIA/495/base on Mar 27, 2026
157 of 162 checks passed
meta-codesync bot deleted the gh/SS-JIA/495/head branch on March 27, 2026 22:00
meta-codesync bot temporarily deployed to cherry-pick-bot on March 27, 2026 22:00 (Inactive)
SS-JIA pushed a commit that referenced this pull request Mar 27, 2026
Pull Request resolved: #18333

ghstack-source-id: 358903219
@exported-using-ghexport

Differential Revision: [D97344091](https://our.internmc.facebook.com/intern/diff/D97344091/)
rascani pushed a commit to rascani/executorch that referenced this pull request Apr 1, 2026
Pull Request resolved: pytorch#18333


Labels

- CLA Signed (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed)
- fb-exported
- meta-exported


2 participants