
Update display size scale tests with video frame from video file #4614

Open

haoxli wants to merge 1 commit into gpuweb:main from haoxli:import-external-texture-video-display-size

Conversation

Contributor

@haoxli haoxli commented Mar 17, 2026

For display‑size scale tests, a VideoFrame created from a video file is the right source to validate importExternalTexture because it exercises the real decode, color conversion, and display‑size metadata path. A canvas ImageData frame bypasses the video decode path, so it’s removed.

Added a focused video list to validate scaling across codec and color‑space variation, covering both codecs (VP9, H.264) and both color spaces (bt.601, bt.709).

Issue: #4605
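
The parameterization described above (a focused video list crossed with display scales, yielding the 9 cases in the CI report below) can be sketched as follows. This is illustrative only: the exact file names are assumptions modeled on names that appear elsewhere in this PR, not the literal list from the change.

```typescript
// Sketch of the case parameterization: 3 assumed video files x 3 scales = 9 cases.
const kVideoNames = [
  'four-colors-vp9-bt601.webm',
  'four-colors-vp9-bt709.webm',
  'four-colors-h264-bt601.mp4',
] as const;

type DisplayScale = 'smaller' | 'same' | 'larger';
const kDisplayScales: DisplayScale[] = ['smaller', 'same', 'larger'];

// Cross product of videos and scales, mirroring the CTS .combine() expansion.
const cases = kVideoNames.flatMap(videoName =>
  kDisplayScales.map(displayScale => ({ videoName, displayScale }))
);
```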


Requirements for PR author:

  • All missing test coverage is tracked with "TODO" or .unimplemented().
  • New helpers are /** documented */ and new helper files are found in helper_index.txt.
  • Test behaves as expected in a WebGPU implementation. (If not passing, explain above.)
  • Tests have been run with compatibility mode validation enabled and behave as expected. (If not passing, explain above.)

Requirements for reviewer sign-off:

  • Tests are properly located.
  • Test descriptions are accurate and complete.
  • Tests provide complete coverage (including validation control cases). Missing coverage MUST be covered by TODOs.
  • Tests avoid over-parameterization (see case count report).

When landing this PR, be sure to make any necessary issue status updates.

@github-actions

Results for build job (at 183d872):

-webgpu:web_platform,external_texture,video:importExternalTexture,video_frame_display_size_diff_with_coded_size:* - 3 cases, 3 subcases (~1/case)
-webgpu:web_platform,external_texture,video:importExternalTexture,video_frame_display_size_from_textureDimensions:* - 3 cases, 3 subcases (~1/case)
+webgpu:web_platform,external_texture,video:importExternalTexture,video_frame_display_size_scale:* - 9 cases, 9 subcases (~1/case)
+webgpu:web_platform,external_texture,video:importExternalTexture,video_frame_display_size_from_textureDimensions:* - 9 cases, 9 subcases (~1/case)
-TOTAL: 280752 cases, 2322091 subcases
+TOTAL: 280764 cases, 2322103 subcases

@haoxli haoxli requested a review from kainino0x March 17, 2026 08:50
@shaoboyan091 shaoboyan091 self-requested a review March 19, 2026 01:12
const canvasContext = canvas.getContext('2d', { colorSpace: 'srgb' });
if (canvasContext === null) {
frame.close();
t.skip(' onscreen canvas 2d context not available');
Contributor

Leading space, please remove.

}

function createVideoFrameWithDisplayScale(
async function createVideoFrameWithDisplayScale(
Contributor

Would you mind adding a comment, similar to your commit message, briefly describing why we need a real video file instead of canvas-created frames?


if (canvasContext === null) {
t.skip(' onscreen canvas 2d context not available');
if (sourceFrame === undefined || codedWidth === undefined || codedHeight === undefined) {
Contributor

Use an assert here? Or skip with the correct info.


let displayWidth = kWidth;
let displayHeight = kHeight;
let displayWidth = codedWidth;
Contributor

Or just assign 0 instead of codedWidth.
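
As a minimal sketch of the scale handling discussed in this hunk, assuming the test derives the target display dimensions by applying a per-scale factor to the coded size. The 0.5x/2x factors are illustrative assumptions, not values taken from the PR:

```typescript
type DisplayScale = 'smaller' | 'same' | 'larger';

// Hypothetical helper: map a DisplayScale to concrete display dimensions
// derived from the frame's coded size.
function scaledDisplaySize(
  codedWidth: number,
  codedHeight: number,
  scale: DisplayScale
): { displayWidth: number; displayHeight: number } {
  const factor = scale === 'smaller' ? 0.5 : scale === 'larger' ? 2 : 1;
  return {
    displayWidth: Math.round(codedWidth * factor),
    displayHeight: Math.round(codedHeight * factor),
  };
}
```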

'four-colors-vp9-bt709.webm',
] as const;
type DisplayScale = 'smaller' | 'same' | 'larger';
const kDisplayScales: DisplayScale[] = ['smaller', 'same', 'larger'];
Contributor

If this won't expand in the future, I slightly lean toward keeping the original format (using 'smaller' | 'same' | 'larger' inline in all three places).

// Build expected sampled colors by drawing the same source frame to a 2D canvas.
// This makes the check robust across codecs/container metadata while still validating
// that importExternalTexture sampling matches browser video rendering behavior.
const canvas = createCanvas(t, 'onscreen', kWidth, kHeight);
Contributor

Just to double-check: are we using kWidth, kHeight instead of codedWidth, codedHeight for some purpose?

The original code used kWidth and kHeight because the generated video frame came from a kWidth × kHeight canvas (which maps to codedWidth and codedHeight here).

Contributor Author

That's not entirely accurate. I only used kWidth and kHeight to avoid having to define an additional size variable here. We can also use displayWidth and displayHeight from the VideoFrame for that. In the original PR, I used a canvas because I found it could also reproduce the issue and let me control the sizes easily.

Contributor

@shaoboyan091 shaoboyan091 left a comment

format: 'RGBA',
codedWidth: kWidth,
codedHeight: kHeight,
const frame = new VideoFrame(sourceFrame, {
Collaborator

Is there a way to encode this data in a video file rather than going through VideoFrame? Doing it with VideoFrame seems kind of contrived - the way people would actually run into issues in the wild is through video files that have different dimensions encoded in them.

Contributor Author

It doesn't seem easy to create a video file with a display-size scale. I see we can use setsar=2/1 to make the video display size 2x.

ffmpeg.exe -loop 1 -i .\four-colors.png -c:v libx264 -pix_fmt yuv420p -frames 50 -colorspace smpte170m -color_primaries smpte170m -color_trc smpte170m -color_range tv -vf "setsar=2/1" four-colors-h264-bt601-sar2.mp4

The display height becomes two times the coded height, but the width does not, and the four colored squares are still displayed according to the coded size.
Image

If I force DAR using setsar=2/1,setdar=4/3, the SAR will be 1:1.
ffmpeg.exe -loop 1 -i .\four-colors.png -c:v libx264 -pix_fmt yuv420p -frames 50 -colorspace smpte170m -color_primaries smpte170m -color_trc smpte170m -color_range tv -vf "setsar=2/1,setdar=4/3" four-colors-h264-bt601-sar2.mp4

Verify:
ffprobe.exe -v error -select_streams v:0 -show_entries stream=width,height,coded_width,coded_height,sample_aspect_ratio,display_aspect_ratio -of default=noprint_wrappers=1 .\four-colors-h264-bt601-sar2.mp4

width=320
height=240
coded_width=320
coded_height=240
sample_aspect_ratio=1:1
display_aspect_ratio=4:3

I will further investigate how to make it work.

If it works, I think we need to add at least 6 new video files (3 codec/color-space variants × 2 display scales (larger/smaller)).
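
For reference, the conventional sample-aspect-ratio math behind the ffmpeg experiments above (display width = coded width × SAR; the height is unaffected) can be sketched as a small helper. The function name is hypothetical:

```typescript
// Illustrative helper: derive display size from coded size and the stream's
// sample aspect ratio (SAR), per the usual container/codec convention.
// e.g. setsar=2/1 on a 320x240 stream yields a 640x240 display size.
function displaySizeFromSar(
  codedWidth: number,
  codedHeight: number,
  sarNum: number,
  sarDen: number
): { displayWidth: number; displayHeight: number } {
  return {
    displayWidth: Math.round((codedWidth * sarNum) / sarDen),
    displayHeight: codedHeight,
  };
}
```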

Collaborator

That should be fine as long as they're not too large. Thank you for investigating!

// Build expected sampled colors by drawing the same source frame to a 2D canvas.
// This makes the check robust across codecs/container metadata while still validating
// that importExternalTexture sampling matches browser video rendering behavior.
const canvas = createCanvas(t, 'onscreen', kWidth, kHeight);
Collaborator

There are too many different widths and heights. Can you rename kWidth and kHeight to something more specific? (kCanvasWidth kCanvasHeight)

Contributor Author

OK, I will return displayWidth and displayHeight from createVideoFrameWithDisplayScale, and use them in the cases instead of kWidth and kHeight.
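
One possible shape for that revised helper's return value, sketched under the assumption described in the comment above. The frame field is stubbed here because VideoFrame is a browser API; the interface and factory names are hypothetical:

```typescript
// Hypothetical result shape: the helper hands back the frame together with
// its display dimensions, so test cases no longer reach for kWidth/kHeight.
interface VideoFrameWithDisplayScale {
  frame: unknown; // a VideoFrame in the real test; typed loosely in this sketch
  displayWidth: number;
  displayHeight: number;
}

// Illustrative factory with a stubbed frame, just to show the shape.
function makeScaledFrameResult(
  displayWidth: number,
  displayHeight: number
): VideoFrameWithDisplayScale {
  return { frame: null, displayWidth, displayHeight };
}
```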

@haoxli
Contributor Author

haoxli commented Mar 20, 2026

BTW, this http://127.0.0.1:8080/standalone/?q=webgpu:web_platform,external_texture,video:importExternalTexture,video_frame_display_size_scale:videoName=%22four-colors-h264-bt601.mp4%22;displayScale=%22same%22 fails in Canary.

I tested it: it passes on Intel GPUs but fails on NVIDIA GPUs. It seems to be a tolerance issue or a decoder difference on NVIDIA. I will check it again after we update the video file.
