diff --git a/src/content/docs/waf/detections/malicious-uploads/get-started.mdx b/src/content/docs/waf/detections/malicious-uploads/get-started.mdx
index a734cfcbe15e808..f64193f08a5322c 100644
--- a/src/content/docs/waf/detections/malicious-uploads/get-started.mdx
+++ b/src/content/docs/waf/detections/malicious-uploads/get-started.mdx
@@ -59,6 +59,12 @@ Use a `POST` request similar to the following:
+:::note
+
+<Render file="content-scanning-latency" product="waf" />
+
+:::
+
## 2. Validate the content scanning behavior
Use [Security Analytics](/waf/analytics/security-analytics/) and HTTP logs to validate that malicious content objects are being detected correctly.
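For context, the step above refers to the API call that enables content scanning for a zone. A minimal sketch of that request, assuming the `content-upload-scan` endpoints and placeholder values for the zone ID and API token:

```bash
# Enable WAF content scanning for a zone
# ($ZONE_ID and $CLOUDFLARE_API_TOKEN are placeholders).
curl --request POST \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/content-upload-scan/enable" \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN"

# Optionally confirm the current status afterwards (settings endpoint assumed):
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/content-upload-scan/settings" \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN"
```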
diff --git a/src/content/docs/waf/detections/malicious-uploads/index.mdx b/src/content/docs/waf/detections/malicious-uploads/index.mdx
index c80b7ddf1bcb393..afbe33a76dd5b0e 100644
--- a/src/content/docs/waf/detections/malicious-uploads/index.mdx
+++ b/src/content/docs/waf/detections/malicious-uploads/index.mdx
@@ -7,7 +7,7 @@ sidebar:
label: Malicious uploads
---
-import { GlossaryTooltip, Type } from "~/components";
+import { GlossaryTooltip, Type, Render } from "~/components";
The malicious uploads detection, also called uploaded content scanning, is a WAF [traffic detection](/waf/concepts/#detection-versus-mitigation) that scans content being uploaded to your application.
@@ -25,11 +25,11 @@ For every request with one or more detected content objects, the content scanner
Cloudflare uses the same [anti-virus (AV) scanner used in Cloudflare Zero Trust](/cloudflare-one/traffic-policies/http-policies/antivirus-scanning/) for WAF content scanning.
-:::note
+:::note[Notes]
-Content scanning will not apply any mitigation actions to requests with content objects considered malicious. It only provides a signal that you can use to define your attack mitigation strategy. You must create rules — [custom rules](/waf/custom-rules/) or [rate limiting rules](/waf/rate-limiting-rules/) — to perform actions based on detected signals.
+Content scanning will not apply any mitigation actions to requests with content objects considered malicious. It only provides a signal that you can use to define your attack mitigation strategy. You must create rules — [custom rules](/waf/custom-rules/) or [rate limiting rules](/waf/rate-limiting-rules/) — to perform actions based on detected signals.
For more information on detection versus mitigation, refer to [Concepts](/waf/concepts/#detection-versus-mitigation).
-For more information on detection versus mitigation, refer to [Concepts](/waf/concepts/#detection-versus-mitigation).
+<Render file="content-scanning-latency" product="waf" />
:::
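To illustrate the note above, a sketch of a custom rule that acts on the content scanning signal. The `cf.waf.content_scan.has_malicious_obj` field is the signal exposed by this detection; the zone ID, ruleset ID, and upload path are placeholders for illustration:

```bash
# Add a blocking rule to the zone's custom rules entrypoint ruleset
# ($ZONE_ID and $RULESET_ID are placeholders).
curl --request POST \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/rulesets/$RULESET_ID/rules" \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
    "description": "Block uploads flagged as malicious by content scanning",
    "expression": "cf.waf.content_scan.has_malicious_obj and http.request.uri.path eq \"/upload\"",
    "action": "block"
  }'
```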
diff --git a/src/content/partials/waf/content-scanning-latency.mdx b/src/content/partials/waf/content-scanning-latency.mdx
new file mode 100644
index 000000000000000..ef5d6425a6bcd03
--- /dev/null
+++ b/src/content/partials/waf/content-scanning-latency.mdx
@@ -0,0 +1,5 @@
+---
+{}
+---
+
+Enabling the malicious uploads detection can introduce latency, since content objects in incoming requests must be scanned. The added latency varies with the size of the scanned objects.