From c84a83121f0273a5e9bdf35f9a87dc911528b964 Mon Sep 17 00:00:00 2001
From: Erin Donehoo <105813956+edonehoo@users.noreply.github.com>
Date: Mon, 2 Feb 2026 10:29:04 -0500
Subject: [PATCH 1/2] Updates conversation design guidelines.
---
.../conversation-design.md | 32 +++++++++++++++++--
1 file changed, 30 insertions(+), 2 deletions(-)
diff --git a/packages/documentation-site/patternfly-docs/content/AI/conversation-design/conversation-design.md b/packages/documentation-site/patternfly-docs/content/AI/conversation-design/conversation-design.md
index a38ce5ec5e..2636a98100 100644
--- a/packages/documentation-site/patternfly-docs/content/AI/conversation-design/conversation-design.md
+++ b/packages/documentation-site/patternfly-docs/content/AI/conversation-design/conversation-design.md
@@ -29,11 +29,11 @@ Following these best practices to help ensure that your users can complete their
When chatbots are designed to meet the needs of your users, they can improve the overall UX of your product. They are convenient, efficient, and persistent.
-[Our chatbot extension](/extensions/chatbot/overview) utilizes PatternFly components to create a foundation for an AI-based chatbot, with additional customization options.
+[Our ChatBot extension](/extensions/chatbot/overview) utilizes PatternFly components to create a foundation for an AI-based chatbot, with additional customization options.
-
+
@@ -73,3 +73,31 @@ Make sure to disclose any use of AI in chatbots. Our users should be able to tru
- Use labels and other visual styling cues to clearly identify AI features.
- Add necessary legal disclosures where you can, like in the chatbot footer.
- Display an indicator when the bot is "thinking" or "typing," so that users know to expect feedback.
+
+### LLM guardrails
+
+Guardrails are safety mechanisms that moderate how a model handles sensitive, risky, or disrespectful prompts. [Research shows](https://arxiv.org/html/2506.00195v1) that a user's experience is shaped more by how a model handles a refusal than by the user's initial intent.
+
+Most models default to direct or explanation-based refusals (for example, "I can’t do that" or "I can’t assist because of a safety policy"), but these can be perceived as frustrating or patronizing.
+
+While users generally prefer full compliance from a model, safety and policy requirements often make this impossible. When a guardrail is triggered, your goal is to "let them down easy" to maintain trust and engagement.
+
+When a user requests something unsafe or inappropriate from a model:
+
+- **Prioritize partial compliance:** Whenever possible, answer the general or theoretical parts of a user's request without providing actionable, dangerous, or sensitive details.
+- **Avoid explicit refusal statements:** To reduce user friction, the model should try to respond without using definitive phrases like "I refuse" or "I cannot."
+- **Pivot to safer topics:** If a full refusal is unavoidable, briefly explain why and immediately suggest a related, safer topic to keep the conversation productive.
+
+| Strategy | When to use | Example |
+| :---: | :---: | :---: |
+| **Partial compliance** | Default for ambiguous intent. | "The process of [Topic] generally involves [General Principle]..." |
+| **Redirection** | When the specific request is blocked. | "I can’t provide specifics on that, but I can suggest some resources on [Related Topic]." |
+| **Explanation** | When transparency is required for trust. | "To ensure user privacy, I don't have access to individual [Data Type]..." |
+
+#### Message streaming considerations
+
+Streaming text in real-time presents unique challenges for guardrail implementation.
+
+- **Handling late-trigger detections:** If a guardrail is triggered after a response has already started streaming, do not simply delete the message from the UI, as this is confusing. Instead, replace the partial text with a standard refusal or redirection message.
+- **Simulated streaming for safety:** To avoid the "wait time" of running guardrails before a message starts, you can run detectors in the background. If the initial check is fast enough, you can "simulate" the stream once the content is cleared, ensuring the user doesn't see harmful content that is later retracted.
+- **Guidance on blocked responses:** If a guardrail prevents a model response from ever being generated, provide a clear system-level message in the chat UI so the user isn't left waiting for a response that will never arrive.
\ No newline at end of file
From 93f3ec56a99f9808aeb2e29865fbe14189c06e66 Mon Sep 17 00:00:00 2001
From: Erin Donehoo <105813956+edonehoo@users.noreply.github.com>
Date: Mon, 2 Feb 2026 15:38:09 -0500
Subject: [PATCH 2/2] docs(conversation-design): Adds guidelines for LLM
guardrails.
---
.../conversation-design.md | 64 ++++++++-----------
1 file changed, 28 insertions(+), 36 deletions(-)
diff --git a/packages/documentation-site/patternfly-docs/content/AI/conversation-design/conversation-design.md b/packages/documentation-site/patternfly-docs/content/AI/conversation-design/conversation-design.md
index 2636a98100..da18d9c79f 100644
--- a/packages/documentation-site/patternfly-docs/content/AI/conversation-design/conversation-design.md
+++ b/packages/documentation-site/patternfly-docs/content/AI/conversation-design/conversation-design.md
@@ -6,7 +6,6 @@ section: AI
import { Button, Flex, FlexItem } from '@patternfly/react-core';
import ArrowRightIcon from '@patternfly/react-icons/dist/esm/icons/arrow-right-icon';
-
# Conversation design guidelines
**Conversation design** is a method of writing for conversational interfaces, like chatbots or voicebots. The goal of conversation design is to create an interactive experience that resembles human-to-human conversation as much as possible. Like traditional content design, conversation design is focused on using words to make experiences clear, concise, and well-timed.
@@ -25,34 +24,25 @@ Following these best practices to help ensure that your users can complete their
- If you ask for personal info, tell users "why" you're asking first.
- Always have the last word.
-## Writing for chatbots
-
-When chatbots are designed to meet the needs of your users, they can improve the overall UX of your product. They are convenient, efficient, and persistent.
+## Chatbot conversation design
-[Our ChatBot extension](/extensions/chatbot/overview) utilizes PatternFly components to create a foundation for an AI-based chatbot, with additional customization options.
+Chatbots provide users with persistent access to convenient help. When they are intentionally designed to meet the needs of your users, chatbots can improve your users' efficiency and enhance the overall UX of your product.
-
-
-
-
-
+Chatbots are only as good as the writing that goes into them. The language they use must build trust, clearly establish the “rules” of the conversation, and support users' goals. General microcopy, like headings or buttons, should match PatternFly's standard [content design guidelines](/content-design/overview), but there are additional guidelines to follow for common message types and conversation patterns.
-Chatbots are only as good as the writing that goes into them. The language they use must build trust, clearly establish the “rules” of the conversation, and support users' goals.
+[Our ChatBot extension](/extensions/chatbot/overview) utilizes PatternFly components to create a foundation for a customizable AI-based chatbot. When using ChatBot, it's important to adhere to the following conversation design guidelines.
-In addition to general microcopy, like headings or buttons, you will need to write:
-- Welcome and goodbye messages.
-- Bot prompts.
-- AI disclosures.
+### Writing messages
-### Welcome and goodbye messages
+#### Welcome and goodbye messages
-It is important to always welcome users to the chatbot experience, and (if applicable) to say goodbye when they've ended the chat. A goodbye message isn't always necessary, like in instances where users can "minimize" a chat window to come back to it later.
+Always welcome users to a conversation with your ChatBot. If there's an "end" to a conversation, make sure to also say goodbye to your users. In instances where users can "minimize" a chat window to come back to it later, a goodbye message isn't necessary.
When you know your user's name, address them directly.

-### Bot prompts
+#### Bot prompts
When writing your bot's prompts:
@@ -66,7 +56,9 @@ When writing your bot's prompts:

-### AI disclosure
+### Conversation design patterns
+
+#### Disclosing AI usage
Make sure to disclose any use of AI in chatbots. Our users should be able to trust that we are honest and transparent with them as much as possible.
@@ -74,30 +66,30 @@ Make sure to disclose any use of AI in chatbots. Our users should be able to tru
- Add necessary legal disclosures where you can, like in the chatbot footer.
- Display an indicator when the bot is "thinking" or "typing," so that users know to expect feedback.
-### LLM guardrails
+#### Handling unsafe or unethical requests
-Guardrails are safety mechanisms that moderate how a model handles sensitive, risky, or disrespectful prompts. [Research shows](https://arxiv.org/html/2506.00195v1) that a user's experience is shaped more by how a model handles a refusal than by the user's initial intent.
+LLM guardrails are safety mechanisms that moderate how a model handles sensitive or risky prompts. [Research shows](https://arxiv.org/html/2506.00195v1) that a user's experience is shaped more by how a model handles a refusal than by their initial intent. Even if you cannot fulfill a user's request, it is important to handle the interaction tactfully to ensure they feel respected.
-Most models default to direct or explanation-based refusals (for example, "I can’t do that" or "I can’t assist because of a safety policy"), but these can be perceived as frustrating or patronizing.
+Many models default to providing direct or explanation-based refusals (such as "I can’t do that" or "I can’t assist because of a safety policy"), but these can be perceived as frustrating or patronizing. While users generally prefer that a model fully complies with their requests, safety and policy requirements often make this impossible. When a guardrail is triggered, your goal is to instead "let them down easy" to maintain trust and engagement.
-While users generally prefer full compliance from a model, safety and policy requirements often make this impossible. When a guardrail is triggered, your goal is to "let them down easy" to maintain trust and engagement.
-
-When a user requests something unsafe or inappropriate from a model:
+When a user requests something unsafe or unethical, follow these core strategies:
- **Prioritize partial compliance:** Whenever possible, answer the general or theoretical parts of a user's request without providing actionable, dangerous, or sensitive details.
-- **Avoid explicit refusal statements:** To reduce user friction, the model should try to respond without using definitive phrases like "I refuse" or "I cannot."
-- **Pivot to safer topics:** If a full refusal is unavoidable, briefly explain why and immediately suggest a related, safer topic to keep the conversation productive.
+- **Explain the refusal:** To reinforce transparency, clearly explain why the model can't fully comply with a user's request.
+- **Redirect instead of shutting down:** Avoid using definitive phrases like "I refuse" or "I cannot." Instead, suggest a related, safer topic to keep the conversation productive.
-| Strategy | When to use | Example |
+| Strategy | Usage | Example |
| :---: | :---: | :---: |
-| **Partial compliance** | Default for ambiguous intent. | "The process of [Topic] generally involves [General Principle]..." |
-| **Redirection** | When the specific request is blocked. | "I can’t provide specifics on that, but I can suggest some resources on [Related Topic]." |
-| **Explanation** | When transparency is required for trust. | "To ensure user privacy, I don't have access to individual [Data Type]..." |
+| **Partial compliance** | Use by default when intent is ambiguous and fulfilling a request would compromise compliance, safety, or ethical rules. | "The process of [Topic] generally involves [General principle]..." |
+| **Explanation** | Use when refusing a specific request to reinforce transparency and trust. | "To ensure user privacy, I don't have access to individual [Data type]..." |
+| **Redirection** | Use when a specific request cannot be fulfilled and partial compliance has either already been attempted or is not possible. | "I can’t provide specifics on that, but I can suggest some resources on [Related topic]." |
+
+##### Message streaming considerations
+
+Real-time message streaming introduces unique technical challenges because guardrails must be checked dynamically as text is generated. To maintain a seamless experience, you should gracefully handle guardrail triggers at different stages of the conversation.
-#### Message streaming considerations
+If a guardrail prevents a model response from ever being generated, provide a clear system-level message in the UI so the user isn't left waiting for a response that will never arrive. For example: "I'm sorry, I'm not able to assist with that request for safety reasons. Is there something else I can help you with?"
-Streaming text in real-time presents unique challenges for guardrail implementation.
+Situations where a violation is detected mid-stream are more complex. In these cases, avoid simply deleting the message from the DOM or UI, as disappearing content is likely to confuse users. Instead, if a guardrail is triggered while a message is streaming, replace the partial response with a standard refusal or redirection message.
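+The sketch below illustrates this in-place replacement pattern in TypeScript. It's a minimal example, not part of the ChatBot extension: `checkGuardrail` and `setMessageContent` are hypothetical stand-ins for your guardrail service and message state handling.
+
+```ts
+// Hypothetical guardrail service and message state setter; both names are
+// illustrative stand-ins, not part of any PatternFly API.
+declare function checkGuardrail(text: string): Promise<{ allowed: boolean }>;
+declare function setMessageContent(messageId: string, content: string): void;
+
+const REDIRECT_MESSAGE =
+  "I can't provide specifics on that, but I can suggest some resources on a related topic.";
+
+async function streamWithGuardrail(
+  messageId: string,
+  chunks: AsyncIterable<string>
+): Promise<void> {
+  let partial = '';
+  for await (const chunk of chunks) {
+    partial += chunk;
+    const { allowed } = await checkGuardrail(partial);
+    if (!allowed) {
+      // Swap the partial text for a refusal in place; never delete the
+      // message node, since disappearing content confuses users.
+      setMessageContent(messageId, REDIRECT_MESSAGE);
+      return;
+    }
+    setMessageContent(messageId, partial);
+  }
+}
+```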
-- **Handling late-trigger detections:** If a guardrail is triggered after a response has already started streaming, do not simply delete the message from the UI, as this is confusing. Instead, replace the partial text with a standard refusal or redirection message.
-- **Simulated streaming for safety:** To avoid the "wait time" of running guardrails before a message starts, you can run detectors in the background. If the initial check is fast enough, you can "simulate" the stream once the content is cleared, ensuring the user doesn't see harmful content that is later retracted.
-- **Guidance on blocked responses:** If a guardrail prevents a model response from ever being generated, provide a clear system-level message in the chat UI so the user isn't left waiting for a response that will never arrive.
\ No newline at end of file
+To balance safety with timely streaming, consider a "chunk-based" verification workflow, as outlined in this [AWS article about guardrails](https://aws.amazon.com/blogs/machine-learning/build-safe-and-responsible-generative-ai-applications-with-guardrails/). Instead of waiting to validate the LLM's entire response once it's generated, you can validate the reply in small segments, only displaying content in the UI after it has been verified. This approach creates a buffer that ensures safety without sacrificing the "real-time" feel of the conversation. If a segment fails, you can halt the stream and redirect to a safer topic or explain that a full response can't be provided.
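+As a rough sketch of that workflow, the async generator below buffers raw model output into fixed-size segments and only yields text that passes a hypothetical `validateSegment` check; if a segment fails, the stream halts with a fallback message:
+
+```ts
+// Hypothetical detector; in practice this would call your guardrail
+// service, such as the one described in the AWS article above.
+declare function validateSegment(segment: string): Promise<boolean>;
+
+const FALLBACK =
+  "I'm sorry, I can't continue with that response for safety reasons. Is there something else I can help you with?";
+
+// Buffers raw chunks into fixed-size segments and yields each segment
+// only after it has been verified, halting the stream on failure.
+async function* verifiedStream(
+  rawChunks: AsyncIterable<string>,
+  segmentSize = 200
+): AsyncGenerator<string> {
+  let buffer = '';
+  for await (const chunk of rawChunks) {
+    buffer += chunk;
+    while (buffer.length >= segmentSize) {
+      const segment = buffer.slice(0, segmentSize);
+      buffer = buffer.slice(segmentSize);
+      if (!(await validateSegment(segment))) {
+        yield FALLBACK; // halt and redirect instead of streaming unsafe text
+        return;
+      }
+      yield segment;
+    }
+  }
+  // Verify whatever remains after the model finishes generating.
+  if (buffer) {
+    yield (await validateSegment(buffer)) ? buffer : FALLBACK;
+  }
+}
+```
+
+A smaller segment size shortens the delay before verified text appears but increases the number of guardrail calls, so tune it to your detector's latency and cost.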
\ No newline at end of file