Add 16-bit Xtensa depthwise conv kernel support #3481
Conversation
Hi @narrietal, thank you for this PR. Can you please resolve the failed test cases for the HiFi3z and Fusion F1 platforms and update the PR?
@rameshkunasi I pushed a new commit which should resolve the previous conflicts. Could you approve the execution of the automated test suite to verify it?
@rameshkunasi I just pushed a commit with the formatted code. Could you approve the execution of the automated tests? Thanks.
@rameshkunasi the CI/CD test pipeline went through 👍 It seems ready to be merged.
@unmeshna017 Can you please have a look into these changes?
Hi,
Adding @vp-cad and @joshih-cad as watchers.
Hi @narrietal, can you please create a new PR with the suggested changes?
Hi @unmeshna017 and @rameshkunasi,
Given this, I would suggest removing the `DepthwiseConvReferenceEvalInt8` declaration from xtensa_depthwise_conv.h to avoid any future misunderstanding.
Hi @narrietal, apologies for the delayed response.
This PR adds support for the optimized Xtensa depthwise convolution kernel when using 16-bit activations and 8-bit weights. Previously, this configuration would fall back to the reference implementation.
Changes:
bug=fixes #3484