From 8253a3e5d15c4c85cc33741b2b0c05cbc8b4c2a1 Mon Sep 17 00:00:00 2001
From: Chris Martinelli <56095825+chris-martinelli@users.noreply.github.com>
Date: Fri, 2 Jan 2026 05:39:39 -0600
Subject: [PATCH 1/2] Update get-started.mdx
add note about Malicious uploads detection latency
---
.../docs/waf/detections/malicious-uploads/get-started.mdx | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/src/content/docs/waf/detections/malicious-uploads/get-started.mdx b/src/content/docs/waf/detections/malicious-uploads/get-started.mdx
index a734cfcbe15e808..4dbe10a08991df7 100644
--- a/src/content/docs/waf/detections/malicious-uploads/get-started.mdx
+++ b/src/content/docs/waf/detections/malicious-uploads/get-started.mdx
@@ -67,6 +67,10 @@ You can use the [EICAR anti-malware test file](https://www.eicar.org/download-an
Alternatively, create a custom rule as described in the next step using a _Log_ action instead of a mitigation action like _Block_. This rule will generate [security events](/waf/analytics/security-events/) that will allow you to validate your configuration.
+:::note
+Enabling Malicious uploads detection can introduce latency as files will be scanned. Latency can vary depending on file size.
+:::
+
## 3. Create a custom rule
[Create a custom rule](/waf/custom-rules/create-dashboard/) that blocks detected malicious content objects uploaded to your application.
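
For illustration only (the expression below is not part of either patch), the custom rule that step 3 describes would typically match the documented WAF content scanning field:

```txt
# Minimal custom rule sketch, assuming the documented content scanning field.
Expression: cf.waf.content_scan.has_malicious_obj
Action:     Block   (or Log while validating, as step 2 suggests)
```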
From 95c7b3b702b4e560d404c46e0a89aa1d3019e4ad Mon Sep 17 00:00:00 2001
From: Pedro Sousa <680496+pedrosousa@users.noreply.github.com>
Date: Mon, 5 Jan 2026 10:20:46 +0000
Subject: [PATCH 2/2] Update note (PCX review)
---
.../waf/detections/malicious-uploads/get-started.mdx | 10 ++++++----
.../docs/waf/detections/malicious-uploads/index.mdx | 8 ++++----
src/content/partials/waf/content-scanning-latency.mdx | 5 +++++
3 files changed, 15 insertions(+), 8 deletions(-)
create mode 100644 src/content/partials/waf/content-scanning-latency.mdx
diff --git a/src/content/docs/waf/detections/malicious-uploads/get-started.mdx b/src/content/docs/waf/detections/malicious-uploads/get-started.mdx
index 4dbe10a08991df7..f64193f08a5322c 100644
--- a/src/content/docs/waf/detections/malicious-uploads/get-started.mdx
+++ b/src/content/docs/waf/detections/malicious-uploads/get-started.mdx
@@ -59,6 +59,12 @@ Use a `POST` request similar to the following:
+:::note
+
+<Render file="content-scanning-latency" />
+
+:::
+
## 2. Validate the content scanning behavior
Use [Security Analytics](/waf/analytics/security-analytics/) and HTTP logs to validate that malicious content objects are being detected correctly.
@@ -67,10 +73,6 @@ You can use the [EICAR anti-malware test file](https://www.eicar.org/download-an
Alternatively, create a custom rule as described in the next step using a _Log_ action instead of a mitigation action like _Block_. This rule will generate [security events](/waf/analytics/security-events/) that will allow you to validate your configuration.
-:::note
-Enabling Malicious uploads detection can introduce latency as files will be scanned. Latency can vary depending on file size.
-:::
-
## 3. Create a custom rule
[Create a custom rule](/waf/custom-rules/create-dashboard/) that blocks detected malicious content objects uploaded to your application.
diff --git a/src/content/docs/waf/detections/malicious-uploads/index.mdx b/src/content/docs/waf/detections/malicious-uploads/index.mdx
index c80b7ddf1bcb393..afbe33a76dd5b0e 100644
--- a/src/content/docs/waf/detections/malicious-uploads/index.mdx
+++ b/src/content/docs/waf/detections/malicious-uploads/index.mdx
@@ -7,7 +7,7 @@ sidebar:
label: Malicious uploads
---
-import { GlossaryTooltip, Type } from "~/components";
+import { GlossaryTooltip, Type, Render } from "~/components";
The malicious uploads detection, also called uploaded content scanning, is a WAF [traffic detection](/waf/concepts/#detection-versus-mitigation) that scans content being uploaded to your application.
@@ -25,11 +25,11 @@ For every request with one or more detected content objects, the content scanner
Cloudflare uses the same [anti-virus (AV) scanner used in Cloudflare Zero Trust](/cloudflare-one/traffic-policies/http-policies/antivirus-scanning/) for WAF content scanning.
-:::note
+:::note[Notes]
-Content scanning will not apply any mitigation actions to requests with content objects considered malicious. It only provides a signal that you can use to define your attack mitigation strategy. You must create rules — [custom rules](/waf/custom-rules/) or [rate limiting rules](/waf/rate-limiting-rules/) — to perform actions based on detected signals.
+Content scanning will not apply any mitigation actions to requests with content objects considered malicious. It only provides a signal that you can use to define your attack mitigation strategy. You must create rules — [custom rules](/waf/custom-rules/) or [rate limiting rules](/waf/rate-limiting-rules/) — to perform actions based on detected signals. For more information on detection versus mitigation, refer to [Concepts](/waf/concepts/#detection-versus-mitigation).
-For more information on detection versus mitigation, refer to [Concepts](/waf/concepts/#detection-versus-mitigation).
+<Render file="content-scanning-latency" />
:::
diff --git a/src/content/partials/waf/content-scanning-latency.mdx b/src/content/partials/waf/content-scanning-latency.mdx
new file mode 100644
index 000000000000000..ef5d6425a6bcd03
--- /dev/null
+++ b/src/content/partials/waf/content-scanning-latency.mdx
@@ -0,0 +1,5 @@
+---
+{}
+---
+
+Enabling malicious uploads detection can introduce latency since content objects will be scanned. Latency can vary depending on object size.
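
A possible way to exercise the EICAR test referenced in step 2 (not part of the patch), assuming a placeholder hostname and upload path:

```sh
# Sketch only: "example.com" and "/upload" stand in for the protected
# application; eicar.txt contains the EICAR anti-malware test string.
curl "https://example.com/upload" \
  --form "file=@eicar.txt"
```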