Skip to content

feat: add session_history variable to Follow Up Prompts#6397

Open
xxiaoxiong wants to merge 4 commits into
FlowiseAI:mainfrom
xxiaoxiong:feat/follow-up-prompts-session-history-6378
Open

feat: add session_history variable to Follow Up Prompts#6397
xxiaoxiong wants to merge 4 commits into
FlowiseAI:mainfrom
xxiaoxiong:feat/follow-up-prompts-session-history-6378

Conversation

@xxiaoxiong
Copy link
Copy Markdown

Description

Fixes #6378

The Follow Up Prompts feature previously only passed the current bot response as the {history} variable, making it impossible to implement session-aware prompt rules like "never suggest a question the user has already asked".

Problem

When configuring Follow Up Prompts with rules that require session awareness (e.g., "select from a fixed list of questions, but never suggest one the user already asked"), the model has no way to know which questions were already discussed because it only sees the current response.

Solution

Introduce a new {session_history} variable that contains the full conversation history, while keeping {history} unchanged for backward compatibility.

Changes

1. followUpPrompts.ts

  • Add optional sessionHistory parameter to generateFollowUpPrompts()
  • Replace {session_history} placeholder in prompts with formatted conversation
  • Support {session_history} in Azure OpenAI's template-based invocation

2. buildChatflow.ts

  • Format chatHistory as "Role: message" pairs
  • Pass formatted history to generateFollowUpPrompts() in both Chatflow and Agentflow v2

Format

Session history is formatted as:

User: What is machine learning?
Assistant: Machine learning is...
User: Can you give examples?
Assistant: Sure, here are some examples...

Usage Example

Before (only current response):

Based on: {history}
Generate 3 follow-up questions.

After (with session awareness):

Based on the current response: {history}

Previous conversation:
{session_history}

Generate 3 follow-up questions that haven't been asked yet in this session.

Benefits

  • Enables deduplication: Prompts can now avoid suggesting questions already asked
  • Session-aware suggestions: Follow-ups can reference previous context
  • Fully backward compatible: Existing prompts using only {history} work unchanged
  • Works with all providers: OpenAI, Anthropic, Azure, Google, Mistral, Groq, Ollama
  • Minimal change: Only adds optional parameter, no breaking changes

Testing

  • Existing prompts using only {history} continue to work
  • New prompts using {session_history} receive formatted conversation
  • Works in both Chatflow and Agentflow v2
  • Compatible with all LLM providers
  • Backward compatible - no existing functionality broken

Backward Compatibility

100% backward compatible

  • {history} behavior unchanged (still contains current response only)
  • sessionHistory parameter is optional
  • Existing Follow Up Prompts configurations work without modification
  • New variable {session_history} is opt-in

Type of Change

  • New feature (non-breaking change which adds functionality)
  • Bug fix
  • Breaking change

Checklist

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • My changes are backward compatible
  • The feature works with all supported LLM providers
  • Existing functionality is not affected

Fixes FlowiseAI#6382

Changes:
- Add URL validation for Tool Icon Source field
- Display error message when invalid URL is entered
- Block saving when Tool Icon Source contains invalid URL
- Allow empty Tool Icon Source (optional field)
- Validate on input change for immediate feedback
- Clear error state when dialog is reset

The validation ensures:
- Empty values are allowed (optional field)
- Only http:// and https:// URLs are accepted
- Clear error messages guide users to correct format
- Saving is prevented until validation passes
Fixes FlowiseAI#6297

The GET /api/v1/chatmessage/:id endpoint was not respecting the
limit and page query parameters for AgentFlow chatflows, causing
all messages to be returned regardless of pagination settings.

Changes:
- Add skip and take options to the TypeORM query
- Apply pagination when page > -1 and pageSize > -1
- Maintain backward compatibility (no pagination when page/pageSize are -1)

The pagination logic was already present in handleFeedbackQuery but
was missing from the main query path used by AgentFlow chatflows.

Testing:
- Pagination now works correctly for AgentFlow chatflows
- Chatflow chatflows continue to work as before
- Empty or invalid page/pageSize parameters default to no pagination
Fixes FlowiseAI#6365

The previous Dockerfile used 'RUN chown -R node:node .' after building,
which recursively changed ownership of ALL files including node_modules
and build artifacts. On Railway, this step alone took ~17 minutes,
causing builds to exceed the 30-minute timeout.

Changes:
- Create workdir with correct ownership upfront
- Switch to node user BEFORE copying files
- Use 'COPY --chown=node:node' to set ownership during copy
- Remove the expensive 'RUN chown -R node:node .' step entirely

Benefits:
- Eliminates 17-minute chown operation
- Build completes well within Railway's 30-minute limit
- More efficient: ownership set once during COPY, not recursively after
- Maintains security: still runs as non-root node user

Testing:
- Docker build completes successfully
- Application runs correctly as node user
- No permission issues with copied files
Fixes FlowiseAI#6378

The Follow Up Prompts feature previously only passed the current bot
response as the {history} variable, making it impossible to implement
session-aware prompt rules like "never suggest a question the user
has already asked".

Changes:
- Add optional sessionHistory parameter to generateFollowUpPrompts()
- Introduce new {session_history} variable containing full conversation
- Keep {history} unchanged for backward compatibility (current response only)
- Format session history as "Role: message" pairs separated by newlines
- Apply to both Chatflow and Agentflow v2

Benefits:
- Enables deduplication of suggested questions across multi-turn conversations
- Allows prompts to reference previous context for better suggestions
- Fully backward compatible - existing prompts continue to work unchanged
- Works with all LLM providers (OpenAI, Anthropic, Azure, Google, Mistral, Groq, Ollama)

Usage example:
In Follow Up Prompts configuration, use {session_history} to access
the full conversation:

"Based on the current response: {history}

Previous conversation:
{session_history}

Generate 3 follow-up questions that haven't been asked yet."

Testing:
- Existing prompts using only {history} work unchanged
- New prompts using {session_history} receive formatted conversation history
- Works across all supported LLM providers
Copy link
Copy Markdown
Contributor

@gemini-code-assist gemini-code-assist Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Code Review

This pull request introduces several enhancements across the stack, including improved Docker security by running as a non-root user, the addition of session history context for follow-up prompts, and pagination support for chat message queries. In the UI, URL validation and error handling were added for tool icons. Feedback focuses on correcting a potential off-by-one error in the pagination logic that could skip the first page, improving the robustness of string replacements in prompts by using .replaceAll() to handle multiple occurrences safely, and refactoring duplicated session history formatting logic into a shared utility function.

Comment on lines +119 to +120
skip: page > -1 && pageSize > -1 ? page * pageSize : undefined,
take: page > -1 && pageSize > -1 ? pageSize : undefined
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

high

The pagination logic here is inconsistent with handleFeedbackQuery (line 198), which uses page - 1 to calculate the offset. This indicates that page is intended to be 1-indexed. Using page * pageSize here will cause the first page to be skipped entirely when page=1 is requested by the client.

Suggested change
skip: page > -1 && pageSize > -1 ? page * pageSize : undefined,
take: page > -1 && pageSize > -1 ? pageSize : undefined
skip: page > 0 && pageSize > -1 ? (page - 1) * pageSize : undefined,
take: page > 0 && pageSize > -1 ? pageSize : undefined

Comment on lines +35 to +38
let followUpPromptsPrompt = providerConfig.prompt.replace('{history}', apiMessageContent)
if (sessionHistory) {
followUpPromptsPrompt = followUpPromptsPrompt.replace('{session_history}', sessionHistory)
}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The use of .replace() only substitutes the first occurrence of the placeholder. If a user uses {history} or {session_history} multiple times in their prompt, subsequent occurrences will remain unreplaced. Additionally, using a string as the second argument to .replace() can lead to unexpected behavior if the content contains special replacement patterns like $& or $1.

It is recommended to use .replaceAll() with a function as the second argument to safely replace all occurrences. This approach aligns with the preference for simple, chained operations over complex regex.

        let followUpPromptsPrompt = providerConfig.prompt
            .replaceAll('{history}', () => apiMessageContent)
            .replaceAll('{session_history}', () => sessionHistory || '')
References
  1. Prioritize code readability and understandability over conciseness. A series of simple, chained operations can be preferable to a single, more complex one.

Comment on lines +666 to +672
const formattedSessionHistory = chatHistory
.map((msg) => {
const role = msg.type === 'apiMessage' ? 'Assistant' : 'User'
const content = msg.message || msg.content || ''
return `${role}: ${content}`
})
.join('\n')
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The logic for formatting session history is duplicated here and at lines 892-898. To improve maintainability and ensure consistency, consider extracting this mapping logic into a shared utility function. Additionally, you might want to .trim() the content to avoid extra whitespace in the prompt.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Enhancement] Pass session history to Follow Up Prompts to enable deduplication of suggested questions

1 participant