Add tensor_image_aliasing and postprocessing_with_vgf samples #1476
matthewsloyanARM wants to merge 4 commits into KhronosGroup:main from
Conversation
Signed-off-by: Matthew Sloyan <matthew.sloyan@arm.com>
I see build errors with this change:
Looks like this is caused by one of the new third-party dependencies. I see two new ones in there:
Adding new submodules means that people who have already cloned the repo will need to update these submodules, or else they'll be empty. Not sure if that's also the cause of the CI failures. Given that the samples already have serious issues with build times and complexity, I'm not sure we want to add even more dependencies (we're already just shy of 20 submodules). Are these really required for this sample?
Aside from that, the sample works fine for me on Windows with an NVIDIA RTX 4070 using the latest ML emulation layer (0.8.0).
Sorry about the delayed reply, but thank you @SaschaWillems and @gary-sweet for taking a look at this and trying it out. I will look into the build failures, and I will also update the pull request description and checkboxes. Regarding the additional dependencies, the
On today's call we agreed that the new dependencies are okay. We'll now try to review as soon as possible.
Signed-off-by: Matthew Sloyan <matthew.sloyan@arm.com>
Signed-off-by: Matthew Sloyan <matthew.sloyan@arm.com>
Thanks for your patience with this and for agreeing to accept the new dependencies. I have updated the description to give a better overview of the new samples and dependencies, including the minor build-time implications. I have also updated the ai-ml-sdk-vgf-library to fix the build issues for Android. It is still failing for iOS though, which it doesn't support. I am also unable to test on iOS, so would it be possible to disable this dependency and sample for iOS in the code? If so, how would I go about this?
We have several samples not supported on all platforms. You can disable that platform in your CMakeLists.txt like this:
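A sketch of such a guard (the `IOS` variable is set by CMake when targeting iOS; the exact placement and any framework-specific macros should be mirrored from an existing platform-restricted sample in the repo):

```cmake
# Sketch only: skip this sample (and its VGF dependency) on iOS.
# Place near the top of the sample's CMakeLists.txt, mirroring how
# other platform-restricted samples in the repository handle this.
if(IOS)
    return()
endif()
```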
Description
These are the fourth and fifth samples in a series introducing tensor/image aliasing and a better way of loading a neural network using the VGF Library. Below is an overview of each sample.
samples/extensions/tensor_and_data_graph/tensor_image_aliasing:
Demonstrates zero-copy tensor/image aliasing by sharing memory between VkImage render targets and tensors, then running a simple CNN-based post-processing pass (edge/sharpening) via a data graph pipeline on a rendered 3D scene. Highlights the required usage flags, layout transitions, and synchronization for aliased resources. One caveat: the current emulation layer doesn't support proper aliasing, so a copy is added as a workaround; the technique itself is still demonstrated correctly as you would implement it on hardware that supports it. See tensor_image_aliasing.h:63 for a more detailed description.
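The aliasing flow described above can be sketched roughly as follows. This is a pseudocode sketch using names from the VK_ARM_tensors extension; struct initialization, exact fields, and synchronization are abbreviated and should be checked against the extension spec and the sample source:

```
// Pseudocode sketch — abbreviated, no error handling.

// 1. Create the render target with the tensor-aliasing usage bit set.
image_create_info.usage |= VK_IMAGE_USAGE_TENSOR_ALIASING_BIT_ARM;

// 2. Describe a tensor whose format/dimensions/strides match the image,
//    with the matching aliasing usage bit, and create it.
VkTensorDescriptionARM description{/* sType, format, dims, strides */};
description.usage = VK_TENSOR_USAGE_IMAGE_ALIASING_BIT_ARM |
                    VK_TENSOR_USAGE_DATA_GRAPH_BIT_ARM;

VkTensorCreateInfoARM tensor_info{/* sType */};
tensor_info.pDescription = &description;
vkCreateTensorARM(device, &tensor_info, nullptr, &tensor);

// 3. Bind the tensor to the *same* VkDeviceMemory already bound to the
//    VkImage — this is the zero-copy aliasing step.
VkBindTensorMemoryInfoARM bind_info{/* sType */};
bind_info.tensor = tensor;
bind_info.memory = image_memory;
vkBindTensorMemoryARM(device, 1, &bind_info);

// 4. Before the data graph dispatch, transition the image into the
//    aliasing layout so the tensor view of the memory is valid.
barrier.newLayout = VK_IMAGE_LAYOUT_TENSOR_ALIASING_ARM;
```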
samples/extensions/tensor_and_data_graph/postprocessing_with_vgf:
Demonstrates loading a VGF model file (modules, resources, sequence, constants), decoding it via the VGF library, and using the extracted SPIR-V, tensor metadata, and constants to build and run a data graph pipeline, again applying a neural post-process to a rendered scene with tensor/image aliasing. The goal in future patches is to update this to use a more interesting model; currently it uses the same sharpening model as sample four (tensor_image_aliasing).
The first, second and third samples can be found here: #1394, #1437 and #1438
Build & run guide
If you would like to try out this new sample locally, we provide an emulation layer which allows them to run. This can be found here: https://github.com/arm/ai-ml-emulation-layer-for-vulkan. There is also a pre-built Windows and Linux package available here: https://github.com/arm/ai-ml-emulation-layer-for-vulkan/releases/tag/v0.8.0 or you can grab it through our tutorial: https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample/. This tutorial is for this first sample already merged (simple_tensor_and_data_graph), so it shows you everything needed to try it out. The guide is only for Windows though, but the README in the ai-ml-emulation-layer-for-vulkan repository provides a build and usage guide for Linux.
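On Linux, enabling the emulation layer for a run typically looks something like the sketch below. The layer names and paths here are assumptions for illustration; check the emulation layer README and the contents of the release package for the exact values:

```shell
# Sketch only — layer names/paths below are assumed, not verified;
# see the ai-ml-emulation-layer-for-vulkan README for the real values.
export VK_ADD_LAYER_PATH=/path/to/ml-emulation-layer/manifests
export VK_INSTANCE_LAYERS=VK_LAYER_ML_Tensor_Emulation:VK_LAYER_ML_Graph_Emulation

# Then run the sample as usual, e.g.:
# ./vulkan_samples sample tensor_image_aliasing
```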
New dependencies
The ai-ml-sdk-vgf-library and flatbuffers have been added to the third_party submodules. flatbuffers is required to build the ai-ml-sdk-vgf-library.
The ai-ml-sdk-vgf-library provides an API for encoding and decoding a VGF model file, which contains information such as the operators, constants, and model structure. In the postprocessing_with_vgf sample, a VGF file is loaded using the decoder and the decoded data is fed into the data_graph_pipeline. This is a more scalable approach for bigger models than manually creating the inputs, constants, and outputs as in the previous examples. It is the recommended approach, but the previous samples still provide a useful guide to creating the other aspects of the data_graph_pipeline.
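Conceptually, the decode-and-build flow is as below. This is a pseudocode sketch — the function names are illustrative only, NOT the real VGF library API; consult the ai-ml-sdk-vgf-library documentation and the sample source for the actual entry points:

```
// Pseudocode sketch — all names here are hypothetical placeholders.

// 1. Read the .vgf file into memory and create a decoder over it.
bytes   = read_file("postprocess.vgf");
decoder = vgf_create_decoder(bytes);

// 2. Pull out what the data graph pipeline needs.
spirv     = decoder.module_code(0);      // SPIR-V for the graph module
tensors   = decoder.tensor_resources();  // formats, shapes, usage
constants = decoder.constant_data();     // e.g. convolution weights

// 3. Feed these into the data graph pipeline and tensor creation code,
//    instead of hand-writing them as in the earlier samples.
```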
Build time
I have tested the build times of this: it only takes a few seconds to build ai-ml-sdk-vgf-library, as we are only targeting the vgf target (a very small encoder/decoder library). It doesn't build flatbuffers itself, but the target does unfortunately require it.
General Checklist:
Please ensure the following points are checked:
Note: The Samples CI runs a number of checks including:
If this PR contains framework changes:
batch command line argument to make sure all samples still work properly
Sample Checklist
If your PR contains a new or modified sample, these further checks must be carried out in addition to the General Checklist: