AI Agents in Workflows
Strongly AI provides three categories of AI-powered workflow nodes: AI nodes for model inference, Agent nodes for autonomous reasoning and multi-agent patterns, and Memory nodes for persistent state and retrieval. Together they cover everything from a single LLM call to fully autonomous multi-agent systems.
AI Nodes
AI nodes connect your workflows to language models and other AI capabilities through the AI Gateway.
AI Gateway
The primary node for calling LLMs (GPT-4, Claude, Llama, etc.) for text generation, summarization, extraction, classification, and question answering.
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
userPrompt | string | Yes | The user message/prompt to send to the AI model |
systemPrompt | string | No | System message to guide AI behavior |
temperature | number | No | Temperature value (0-2) to control randomness |
maxTokens | number | No | Maximum number of tokens to generate |
Outputs:
| Output | Type | Description |
|---|---|---|
response | string | The AI model's text response |
model | object | Model info: id, name, provider, type, contextWindow, maxOutputTokens |
usage | object | Token usage: promptTokens, completionTokens, totalTokens |
finishReason | string | Why generation stopped (stop, length, etc.) |
responseTimeMs | number | Response time in milliseconds |
Configuration:
| Setting | Description |
|---|---|
| AI Model | Select from available models via model-selector |
| Default Temperature | Default temperature if not provided via input (0-2, default: 0.7) |
| Default Max Tokens | Default max tokens if not provided via input (1-8000, default: 1000) |
| Default System Prompt | Default system message if not provided via input |
| Default User Prompt | Default user prompt if not provided via input mapping |
| Info Only | Return model capabilities without making an LLM call |
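For example, a typical set of input values mapped into the node might look like the following (values are illustrative; any input left unmapped falls back to the configured defaults above):

```json
{
  "userPrompt": "Summarize this support ticket in two sentences.",
  "systemPrompt": "You are a concise, factual assistant.",
  "temperature": 0.3,
  "maxTokens": 500
}
```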
Accessing Output:
{{ aiGateway.response }}
{{ aiGateway.model.id }}
{{ aiGateway.usage.totalTokens }}
{{ aiGateway.finishReason }}
{{ aiGateway.responseTimeMs }}
Learn more about AI Gateway models.
LLM
Text and chat completion via AI Gateway. Functionally equivalent to the AI Gateway node, with the same inputs, outputs, and configuration options. Use this node when you want a dedicated text/chat completion step in your workflow.
Inputs/Outputs: Same as AI Gateway (userPrompt, systemPrompt, temperature, maxTokens --> response, model, usage, finishReason, responseTimeMs).
Configuration: Same as AI Gateway (model-selector, defaultTemperature, defaultMaxTokens, defaultSystemPrompt, defaultUserPrompt, infoOnly).
Embeddings
Generate vector embeddings from text for semantic search and similarity operations.
Inputs:
| Input | Type | Description |
|---|---|---|
text | string | Single text to embed |
texts | array | Array of texts to embed (batch) |
Outputs:
| Output | Type | Description |
|---|---|---|
embeddings | array | Array of embedding vectors |
model | string | Model used |
usage | object | Token usage statistics |
dimensions | number | Embedding dimensions |
count | number | Number of embeddings generated |
responseTimeMs | number | Response time in ms |
Configuration:
| Setting | Description |
|---|---|
| Embedding Model | Select an embedding model via model-selector |
| Dimensions | Override dimensions (0 = model default, max 4096) |
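To embed several strings in one call, pass the `texts` array (values are illustrative):

```json
{
  "texts": ["refund policy", "shipping times", "warranty terms"]
}
```

The `embeddings` output then contains one vector per input string, and `count` reflects the number of embeddings generated.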
Vision
Analyze images and visual content using vision-capable AI models (GPT-4V, Claude Vision, etc.).
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
image | string | Yes | Image URL (http/https) or base64-encoded image data |
prompt | string | Yes | Text prompt describing what to analyze |
systemPrompt | string | No | System message to guide AI behavior |
temperature | number | No | Temperature value (0-2) |
maxTokens | number | No | Maximum tokens to generate |
Outputs: Same structure as AI Gateway (response, model, usage, responseTimeMs).
Configuration: model-selector (vision-capable model), defaultTemperature, defaultMaxTokens, defaultSystemPrompt, defaultPrompt.
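A minimal input example (the URL is hypothetical; a base64-encoded image string works the same way):

```json
{
  "image": "https://example.com/receipt.jpg",
  "prompt": "Extract the merchant name, date, and total from this receipt."
}
```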
Image Generation
Generate images from text prompts with async job-based processing.
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
prompt | string | Yes | Text prompt describing the image to generate |
negativePrompt | string | No | What to avoid in the generated image |
Outputs:
| Output | Type | Description |
|---|---|---|
images | array | Generated images with url or b64_json |
jobId | string | Async job ID (if applicable) |
model | string | Model used for generation |
revisedPrompt | string | Model-revised prompt (if applicable) |
responseTimeMs | number | Total response time in ms |
Configuration:
| Setting | Options |
|---|---|
| Image Size | 256x256, 512x512, 1024x1024, 1792x1024 (Landscape), 1024x1792 (Portrait) |
| Quality | Standard, HD |
| Number of Images | 1-4 |
| Style | Vivid, Natural |
| Poll Interval | 500-10000ms (for async jobs) |
| Max Wait | 10-600 seconds |
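An illustrative input example:

```json
{
  "prompt": "A lighthouse at dusk in watercolor style",
  "negativePrompt": "text, watermarks, frames"
}
```

When the provider processes the request as an async job, `jobId` is populated and the node polls at the configured interval until the images are ready or Max Wait is exceeded.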
Speech to Text
Transcribe audio to text using AI speech recognition models (Whisper, etc.).
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
audio | string | No | Base64-encoded audio data (provide this OR audioPath) |
audioPath | string | No | Path to audio file from previous node |
language | string | No | Language code hint (e.g. 'en', 'es', 'fr') |
prompt | string | No | Context hint to guide transcription |
Outputs:
| Output | Type | Description |
|---|---|---|
text | string | Transcribed text from the audio |
language | string | Detected or specified language code |
duration | number | Audio duration in seconds |
segments | array | Timestamped segments (verbose_json format only) |
Configuration:
| Setting | Description |
|---|---|
| Transcription Model | Select a transcription model |
| Language | Language code hint (leave empty for auto-detection) |
| Response Format | JSON, Plain Text, SRT (Subtitles), VTT (Web Subtitles), Verbose JSON (with segments) |
| Transcription Prompt | Context hint with domain-specific terms |
Text to Speech
Convert text to speech audio via AI Gateway.
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
text | string | Yes | Text to convert to speech |
Outputs:
| Output | Type | Description |
|---|---|---|
audioPath | string | Path to cached audio file |
format | string | Audio format (mp3/opus/aac/flac/wav) |
voice | string | Voice used |
inputLength | number | Input text length |
audioSize | number | Audio file size in bytes |
model | string | Model used |
responseTimeMs | number | Response time in ms |
Configuration:
| Setting | Options |
|---|---|
| Voice | Alloy, Echo, Fable, Onyx, Nova, Shimmer |
| Speed | 0.25 - 4.0 |
| Audio Format | MP3, Opus, AAC, FLAC, WAV |
Agent Nodes
Agent nodes provide autonomous reasoning, multi-agent collaboration, and specialized AI-powered processing. Most agent nodes connect to an AI Gateway via a bottom "ai" dependency connector, and optionally to MCP tools providers and memory nodes.
Connector Pattern
Most agent nodes have three bottom dependency connectors:
- AI (bottom-left) -- Connect to an AI Gateway node for LLM reasoning
- Memory (bottom-center) -- Connect to a memory node for persistent state
- Tools (bottom-right) -- Connect to an MCP Tools Provider for external tool access
        [Agent Node]
       /     |     \
    [AI] [Memory] [Tools]
ReAct Agent
Autonomous AI agent using the ReAct (Reasoning + Acting) pattern. Iteratively thinks, acts using tools, and observes results until the goal is achieved.
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
task | string | Yes | The goal or task for the agent to accomplish |
context | string | No | Additional context to help the agent |
tools | array | No | Available tools (can also come from connected tool nodes) |
Outputs:
| Output | Type | Description |
|---|---|---|
finalAnswer | string | The agent's final response |
success | boolean | Whether the agent achieved its goal |
stopReason | string | Why the agent stopped (goal_achieved, max_iterations_reached, cost_budget_exceeded, error) |
iterations | number | Number of think-act-observe cycles completed |
toolsCalled | array | List of tools executed with arguments and results |
trajectory | array | Complete trajectory of think/act/observe steps |
totalTokens | number | Total tokens used across all AI calls |
Configuration:
| Setting | Default | Description |
|---|---|---|
| Max Iterations | 10 | Maximum think-act-observe cycles (1-50) |
| System Prompt | -- | Additional instructions for agent behavior |
| Stop Patterns | "FINAL ANSWER:", "Task completed", "I have completed" | Patterns indicating task completion |
| Temperature | 0.7 | Creativity level for reasoning (0-1) |
| Max Tokens per Call | 2000 | Maximum tokens for each AI reasoning call (100-8000) |
| Token Budget | unlimited | Maximum total tokens to use |
Dependencies: AI Gateway (required), Tools (optional), Memory (optional).
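Each `toolsCalled` entry records one tool execution with its arguments and result. The exact field names below are illustrative, but an entry is shaped roughly like:

```json
[
  {
    "tool": "web_search",
    "arguments": { "query": "current EUR to USD rate" },
    "result": "1 EUR = 1.09 USD"
  }
]
```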
Agent Loop
Configurable autonomous think-act-observe agent loop. Similar to the ReAct Agent but with additional control over stop conditions.
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
goal | string | Yes | The goal for the agent to accomplish |
tools | array | No | Available tool definitions |
context | string | No | Additional context |
Outputs:
| Output | Type | Description |
|---|---|---|
finalAnswer | any | The agent's final response |
success | boolean | Whether goal was achieved |
stopReason | string | Why the agent stopped |
iterations | number | Number of think iterations |
toolsCalled | array | Tools that were executed |
trajectory | array | Full think/act/observe trajectory |
totalTokens | number | Total tokens used |
Configuration:
| Setting | Default | Description |
|---|---|---|
| Max Iterations | 10 | Maximum iterations (1-100) |
| System Prompt | -- | Custom system prompt |
| Stop Patterns | "FINAL ANSWER:", "Task completed" | Patterns that halt the loop |
| Stop Condition | Pattern Match | Pattern Match, Token Budget, or Max Tool Calls |
| Token Budget | 0 (unlimited) | Total token limit |
| Temperature | 0.7 | Temperature (0-2) |
| Max Tokens Per Call | 2000 | Tokens per AI call (100-8000) |
| Output Format | Text | Text or JSON |
Dependencies: AI Gateway (required), Tools (optional), Memory (optional).
Supervisor Agent
Orchestrates multiple sub-agents to accomplish complex tasks. Creates execution plans, delegates work, and synthesizes results. Similar to CrewAI and AutoGen patterns.
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
task | string | Yes | The complex task requiring multi-agent collaboration |
context | string | No | Additional context for the supervisor |
agents | array | No | Agent definitions (can also come from connections) |
tools | array | No | Tool definitions (can also come from mcp-tools-provider) |
Outputs:
| Output | Type | Description |
|---|---|---|
finalResult | string | Synthesized result from all agents |
agentResults | object | Individual results from each agent |
executionPlan | array | The execution plan that was followed |
success | boolean | Whether all required tasks completed |
agentsUsed | number | Number of agents used |
failedAgents | number | Number of agents that failed |
Configuration:
| Setting | Default | Description |
|---|---|---|
| Orchestration Mode | Adaptive (AI decides) | Adaptive, Sequential, Parallel, or Hierarchical |
| Supervisor Instructions | -- | Additional instructions for supervisor behavior |
| Synthesize Results | true | Combine all agent outputs into a coherent final answer |
| Max Retries | 2 | Maximum retries for failed agent tasks (0-5) |
Dependencies: AI Gateway (required), Sub-Agents (optional), Tools (optional).
Multi-Agent Chat
Multiple AI personas collaborate on a shared discussion thread.
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
topic | string | Yes | Discussion topic |
context | string | No | Additional context |
agents | array | No | Agent definitions (objects with name, role, systemPrompt) |
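A minimal `agents` input, using the documented `name`, `role`, and `systemPrompt` fields (persona content is illustrative):

```json
[
  {
    "name": "Optimist",
    "role": "advocate",
    "systemPrompt": "Argue for the proposal and highlight its benefits."
  },
  {
    "name": "Skeptic",
    "role": "critic",
    "systemPrompt": "Probe the proposal for risks and weaknesses."
  }
]
```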
Outputs:
| Output | Type | Description |
|---|---|---|
transcript | array | Full discussion transcript |
finalConsensus | string | Synthesized conclusion |
roundsCompleted | number | Rounds completed |
terminationReason | string | Why discussion ended |
agentCount | number | Number of agents |
Configuration:
| Setting | Default | Description |
|---|---|---|
| Max Rounds | 5 | Maximum discussion rounds (1-20) |
| Termination Condition | Fixed Rounds | Fixed Rounds, Consensus Detected, or Keyword Match |
| Termination Keyword | CONSENSUS_REACHED | Keyword to end discussion |
| Enable Moderator | false | Add a moderator to guide discussion |
| Temperature | 0.7 | Temperature (0-2) |
Dependencies: AI Gateway (required).
Debate Agent
Multi-agent debate pattern for reaching consensus through structured argumentation, critique, and synthesis.
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
topic | string | Yes | The topic or question to debate |
context | string | No | Background information for the debate |
agents | array | No | Agent configurations (optional, can use connected agents) |
Outputs:
| Output | Type | Description |
|---|---|---|
conclusion | string | Synthesized conclusion from the debate |
converged | boolean | Whether agents reached natural consensus |
totalRounds | number | Number of debate rounds executed |
debateHistory | array | Full history of debate rounds and arguments |
votes | object | Voting results from agents |
consensusReached | boolean | Whether consensus was achieved |
agentCount | number | Number of agents that participated |
Configuration:
| Setting | Default | Description |
|---|---|---|
| Debate Mode | Structured | Structured (Propose/Critique/Rebut), Round Robin, Free Form, or Adversarial |
| Max Rounds | 3 | Maximum debate rounds (1-10) |
| Convergence Threshold | 0.8 | Agreement level to stop early (0.5-1.0) |
| Synthesize Conclusion | true | Generate a final synthesized conclusion |
| Enable Voting | true | Have agents vote on conclusions |
Dependencies: AI Gateway (required), Debaters/Sub-Agents (optional).
Planner
Decompose complex goals into ordered sub-tasks with dependencies.
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
goal | string | Yes | Goal to decompose into tasks |
context | string | No | Additional context |
capabilities | array | No | Available tools/capabilities |
Outputs:
| Output | Type | Description |
|---|---|---|
plan | array | Ordered list of sub-tasks with dependencies |
reasoning | string | Planning reasoning |
criticalPath | array | Critical path task IDs |
totalTasks | number | Total number of tasks |
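A `plan` entry might look like the following (the exact field names are illustrative, not the node's guaranteed schema):

```json
[
  { "id": "t1", "task": "Collect requirements from stakeholders", "dependsOn": [] },
  { "id": "t2", "task": "Draft the proposal outline", "dependsOn": ["t1"] }
]
```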
Configuration:
| Setting | Default | Description |
|---|---|---|
| Planning Strategy | Flat | Flat (single decomposition), Hierarchical (phases then tasks), or Iterative (plan then refine) |
| Max Sub-tasks | 10 | Maximum sub-tasks (3-20) |
| Include Complexity Estimates | true | Add complexity estimates to tasks |
| Temperature | 0.3 | Temperature (0-1) |
| System Prompt | -- | Custom planning instructions |
Dependencies: AI Gateway (required).
Reflection
Self-review and iterative content improvement via critique-revise cycles.
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
content | string | Yes | Content to reflect on and improve |
originalPrompt | string | No | Original prompt that generated the content |
criteria | array | No | Override evaluation criteria |
Outputs:
| Output | Type | Description |
|---|---|---|
revisedContent | string | Final improved content |
originalContent | string | Original content before revision |
reflections | array | Array of critique-revise cycles |
improvementScore | number | Score improvement from first to last |
finalScore | number | Final evaluation score (0-1) |
totalRevisions | number | Number of revisions performed |
Configuration:
| Setting | Default | Description |
|---|---|---|
| Evaluation Criteria | accuracy, completeness, coherence | Criteria tags for evaluation |
| Custom Criteria | -- | Free-text custom criteria |
| Max Revisions | 2 | Maximum revision cycles (1-5) |
| Auto-Accept Threshold | 0.8 | Score threshold to auto-accept (0-1) |
| Temperature | 0.3 | Temperature (0-1) |
Dependencies: AI Gateway (required).
RAG Agent
Retrieval-Augmented Generation agent that combines retrieved documents with AI generation. Builds prompts by combining user queries with retrieved documents from vector search.
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
retrievedDocs | array | Yes | Documents retrieved from vector search |
query | string | No | Query override (uses config if not provided) |
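`retrievedDocs` typically comes from a Semantic Memory search, whose results are objects with `text`, `metadata`, and `similarity` (values here are illustrative):

```json
[
  {
    "text": "Refunds are accepted within 30 days of purchase.",
    "metadata": { "source": "policy.md" },
    "similarity": 0.91
  }
]
```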
Outputs:
| Output | Type | Description |
|---|---|---|
rag_prompt | string | Formatted prompt with context and query |
query | string | The original query |
docs_used | number | Number of documents included in context |
context_length | number | Character count of the context |
Configuration:
| Setting | Default | Description |
|---|---|---|
| Query | -- | User question to answer (required) |
| Top K Documents | 5 | Number of documents to include in context |
Dependencies: AI (optional), Memory (optional), Tools (optional).
Entity Extraction
LLM-powered entity extraction agent. Extracts named entities from documents using configurable entity type definitions with descriptions and examples. This is NOT a traditional NER system -- it uses an LLM to identify entities based on your definitions.
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
filename | string | Yes | Path to document file (.md, .html, .txt) |
entities | array | Yes | Entity type definitions to extract |
outputMode | string | No | Output format: flat, grouped, or annotated |
confidence | number | No | Minimum confidence threshold (0-1) |
validate | boolean | No | Validate entities exist in document |
Outputs:
| Output | Type | Description |
|---|---|---|
entities | object | Extracted entities grouped by type |
documents | array | Per-document extraction stats |
summary | object | Summary with total entities and processing time |
Configuration:
| Setting | Description |
|---|---|
| Entities to Extract | Array of entity type definitions, each with name, description, examples, and output format (string/normalized/structured) |
| Output Mode | Flat List, Grouped by Type, or With Position Info |
| Confidence Threshold | Minimum confidence score (0-1, default: 0.7) |
| Validate Entities | Verify extracted entities exist in document text using LLM correction |
Dependencies: AI (required), Memory (optional), Tools (optional).
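An illustrative entity-definitions array, using the documented name, description, examples, and output-format fields (the `format` key name is an assumption; check the node's entity editor for the exact property):

```json
[
  {
    "name": "company",
    "description": "Names of organizations mentioned in the document",
    "examples": ["Acme Corp", "Globex Inc."],
    "format": "string"
  },
  {
    "name": "contract_date",
    "description": "Dates on which agreements take effect",
    "examples": ["January 5, 2024"],
    "format": "normalized"
  }
]
```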
Document Classification
LLM-powered document classification agent. Classifies documents into configurable labels using an LLM with keyword hints.
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
filename | string | Yes | Path to document file (.md, .html, .txt) |
labels | array | Yes | List of classification labels |
keywords | object | No | Keywords per label for classification hints |
confidence | number | No | Minimum confidence threshold (0-1) |
Outputs:
| Output | Type | Description |
|---|---|---|
classifications | array | Array of results with filename, label, confidence, reasoning, alternativeLabels, keywords, metadata |
summary | object | Summary with totalDocuments, labelDistribution, averageConfidence, processingTime |
passThroughValues | object | Pass-through values from input |
Configuration:
| Setting | Description |
|---|---|
| Classification Labels | List of labels (e.g., Invoice, Contract, Receipt, Report, Other) |
| Keywords per Label | JSON mapping of labels to keyword arrays for classification hints |
| Confidence Threshold | Minimum confidence score (0-1, default: 0.7) |
Dependencies: AI (required), Memory (optional), Tools (optional).
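An example of the Keywords per Label mapping, as a JSON object of labels to keyword arrays (keywords are illustrative):

```json
{
  "Invoice": ["invoice", "amount due", "bill to"],
  "Receipt": ["receipt", "paid", "change due"],
  "Contract": ["agreement", "party", "term"]
}
```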
Column Mapper Agent
Uses LLM to intelligently map source columns to a target schema. Handles varying column names across different data sources by understanding semantic meaning. Supports database caching for known mappings.
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
rows | array | Yes | Array of row objects with source column names (sample of 5-10 rows) |
headers | array | No | Source column names (inferred from rows if not provided) |
filename | string | No | Source filename for context (passed through) |
file | string | No | File path to pass through to downstream nodes |
parsedTableKey | string | No | Cache key from table-parser |
Outputs:
| Output | Type | Description |
|---|---|---|
columnMappings | object | Column mapping dictionary (target column --> source column) |
success | boolean | True if mapping meets confidence threshold |
confidenceScores | object | Confidence score (0-1) for each mapping |
overallConfidence | number | Average confidence across all mappings |
needsReview | boolean | True if any mappings are low confidence |
usedCache | boolean | True if cached mapping was used instead of LLM |
mappingCacheKey | string | Cache key for this mapping lookup |
Configuration:
| Setting | Description |
|---|---|
| Target Schema | Array of target columns with name, description, type, required flag, and examples |
| Confidence Threshold | Minimum confidence for successful mappings (0-1, default: 0.7) |
| Sample Size | Number of sample rows to send to LLM (1-20, default: 5) |
| Use Known Mappings | Use cached mappings from database if available (default: true) |
| Always Find New | Always use LLM even if cached mapping exists (default: false) |
| Memory Collection | Collection name for storing cached mappings (default: column_mappings) |
Dependencies: AI (required), Memory (optional), Tools (optional).
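An illustrative Target Schema, using the documented name, description, type, required, and examples fields:

```json
[
  {
    "name": "invoice_date",
    "description": "Date the invoice was issued",
    "type": "date",
    "required": true,
    "examples": ["2024-01-15"]
  },
  {
    "name": "total_amount",
    "description": "Total amount due as a number",
    "type": "number",
    "required": true,
    "examples": ["1250.00"]
  }
]
```

The resulting `columnMappings` output maps each target column to the source column the LLM matched it to, e.g. `invoice_date` to a source header like "Inv. Date".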
Data Cleanup Agent
Uses LLM to validate and fix malformed data rows from PDF extraction. Detects shifted columns, merged values, and data type mismatches.
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
current_item | object | Yes | Row data to validate and clean |
mappings | object | No | Column mappings from column-mapper node |
Outputs:
| Output | Type | Description |
|---|---|---|
current_item | object | Cleaned row data (or original if no issues) |
data_quality | string | Quality status: valid, cleaned, invalid, unfixable |
was_cleaned | boolean | True if the row was modified by LLM |
validation_issues | array | List of detected data quality issues |
original_issues | array | Issues that were fixed (when was_cleaned is true) |
Configuration:
| Setting | Description |
|---|---|
| Date Columns | Column names that should contain date values |
| Numeric Columns | Column names that should contain numeric values |
| Validate Only | If enabled, only detect issues without fixing them |
Dependencies: AI (required), Memory (optional), Tools (optional).
Function Calling Agent
Orchestrates function calls from AI responses, extracting and managing tool calls.
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
aiResponse | object | Yes | Response from AI containing function calls |
availableFunctions | array | No | List of available functions the AI can call |
Outputs:
| Output | Type | Description |
|---|---|---|
function_calls | array | Extracted function calls from AI response |
call_count | number | Number of function calls extracted |
Configuration:
| Setting | Description |
|---|---|
| Available Functions | JSON list of functions the agent can call |
Dependencies: AI (optional), Memory (optional), Tools (optional).
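An illustrative Available Functions entry. The JSON-Schema-style `parameters` shape shown here is an assumption based on common function-calling conventions, not a confirmed schema for this node:

```json
[
  {
    "name": "get_weather",
    "description": "Look up the current weather for a city",
    "parameters": {
      "type": "object",
      "properties": { "city": { "type": "string" } },
      "required": ["city"]
    }
  }
]
```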
Tool Router
LLM-based dynamic tool selection for a given task. Analyzes a task and selects the best tools from a list of available options.
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
task | string | Yes | Task or query to route |
tools | array | Yes | Available tools (objects with name, description) |
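A minimal `tools` input, using the documented `name` and `description` fields (tool names are illustrative):

```json
[
  { "name": "web_search", "description": "Search the web for current information" },
  { "name": "calculator", "description": "Evaluate arithmetic expressions" },
  { "name": "send_email", "description": "Send an email to a recipient" }
]
```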
Outputs:
| Output | Type | Description |
|---|---|---|
selectedTools | array | Selected tools (objects with name, confidence, reasoning) |
totalAvailable | number | Total available tools |
strategy | string | Routing strategy used |
Configuration:
| Setting | Default | Description |
|---|---|---|
| Routing Strategy | LLM-based | Keyword (no LLM), LLM-based, or Hybrid (keyword + LLM) |
| Max Selections | 3 | Maximum tools to select (1-10) |
| Confidence Threshold | 0.5 | Minimum confidence for selection (0-1) |
| Temperature | 0.2 | Temperature (0-1) |
Dependencies: AI Gateway (optional, required for LLM and hybrid strategies).
Agent Handoff
Package and transfer context between agent nodes. Supports full pass-through, LLM-compressed summary, or selective field extraction.
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
currentState | any | Yes | Current agent's state/results |
conversationHistory | array | No | Conversation history |
nextAgentInstructions | string | No | Instructions for next agent |
Outputs:
| Output | Type | Description |
|---|---|---|
handoffPackage | object | Packaged context for next agent |
strategy | string | Strategy used |
originalSize | number | Original context size |
compressedSize | number | Packaged size |
compressionRatio | number | Compression ratio |
Configuration:
| Setting | Default | Description |
|---|---|---|
| Handoff Strategy | Full | Full (pass everything), Summary (LLM-compressed), or Selective (specific fields) |
| Selective Fields | [] | Fields to extract (dot notation supported) |
| Summary Max Tokens | 500 | Max tokens for LLM summary (100-2000) |
| Include Key Findings | true | Include key findings in handoff |
Dependencies: AI Gateway (optional, required for Summary strategy).
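An illustrative Selective Fields value, using the documented dot-notation support (field paths depend on your upstream agent's output shape):

```json
["finalAnswer", "analysis.keyFindings", "usage.totalTokens"]
```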
Memory Nodes
Memory nodes provide persistent state, conversation history, vector storage, and cross-agent communication for your workflows.
Conversation Memory
Store and retrieve conversation history using MongoDB. Supports add, retrieve, summarize, and clear operations.
Operations:
| Operation | Description |
|---|---|
| Add | Store a new message in the conversation |
| Retrieve | Get conversation history |
| Summarize | Get conversation summary and formatted text |
| Clear | Clear conversation history |
Inputs:
| Input | Type | Description |
|---|---|---|
operation | string | Operation: add, retrieve, summarize, clear |
message | any | Message content to add |
role | string | Message role: user, assistant, system |
sessionId | string | Conversation session identifier |
windowSize | number | Number of recent messages to retrieve |
includeMetadata | boolean | Include message metadata |
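An example add-message input (session ID and message content are illustrative):

```json
{
  "operation": "add",
  "role": "user",
  "message": "What is the status of my order?",
  "sessionId": "support-chat-42"
}
```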
Outputs (vary by operation):
| Output | Type | Description |
|---|---|---|
success | boolean | Whether operation succeeded |
messages | array | Retrieved messages (retrieve operation) |
messageCount | number | Number of messages |
session_id | string | Session identifier |
totalMessages | number | Total messages in conversation |
summary | object | Conversation summary (summarize operation) |
conversation_text | string | Formatted conversation text (summarize operation) |
cleared_count | number | Messages cleared (clear operation) |
Configuration:
| Setting | Default | Description |
|---|---|---|
| Max Messages | 50 | Maximum messages to store (1-1000) |
| Session Key | -- | Unique identifier for the session (required) |
| Include Metadata | true | Store timestamps and user info |
| Retention Days | 30 | Days to keep history (1-365) |
| Connection Type | Data Source | Use Add-on or Data Source (MongoDB) |
| Database | memory | MongoDB database name |
| Operation | Retrieve | Add Message, Retrieve History, Summarize, or Clear History |
Knowledge Base
Store and query structured knowledge with support for Neo4j graph database and Milvus vector database. Supports store, query, update, delete, and connect operations.
Operations:
| Operation | Description |
|---|---|
| Query | Search the knowledge base (semantic, keyword, hybrid, or exact) |
| Store | Store entities with optional relationships and embeddings |
| Update | Update existing entity properties |
| Delete | Remove an entity |
| Connect | Create relationships between entities |
Key Inputs:
| Input | Type | Description |
|---|---|---|
operation | string | Operation: store, query, update, delete, connect |
query | string | Search query string |
queryType | string | Query type: semantic, keyword, hybrid, exact |
entity | object | Entity/node to store |
relationships | array | Relationships to create |
embeddings | array | Vector embeddings for vector DB |
topK | number | Number of results to return |
entityId | string | Entity ID for update/delete |
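An illustrative store operation. The property names inside `entity` and `relationships` are assumptions for the sketch; the node accepts arbitrary entity properties and relationship definitions:

```json
{
  "operation": "store",
  "entity": { "name": "Acme Corp", "type": "Company", "industry": "Manufacturing" },
  "relationships": [
    { "type": "SUPPLIES", "target": "Globex Inc." }
  ]
}
```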
Key Outputs:
| Output | Type | Description |
|---|---|---|
success | boolean | Whether operation succeeded |
results | array | Query results |
entity_id | string | Entity ID (store/update operations) |
neo4j_stored | boolean | Stored in Neo4j |
milvus_stored | boolean | Stored in Milvus |
total_count | number | Total results count |
Configuration:
| Setting | Default | Description |
|---|---|---|
| Connection Type | Add-on | Use Add-on or Data Source |
| Query Type | Semantic | Semantic, Keyword, Hybrid, or Exact Match |
| Max Results | 10 | Maximum results (1-100) |
| Similarity Threshold | 0.7 | Minimum similarity score (0-1) |
| Top K | 10 | Top results from search (1-100) |
Semantic Memory
Vector store with embedding-based retrieval via Milvus. Requires an AI Gateway connection for generating embeddings.
Inputs:
| Input | Type | Description |
|---|---|---|
text | string | Text to store |
query | string | Search query |
metadata | object | Additional metadata |
id | string | Document ID (for delete) |
Outputs:
| Output | Type | Description |
|---|---|---|
results | array | Search results (objects with text, metadata, similarity) |
count | number | Number of results |
storedId | string | ID of stored document |
success | boolean | Operation success |
Configuration:
| Setting | Default | Description |
|---|---|---|
| Operation | Search | Store, Search, or Delete |
| Milvus Add-on | -- | Select Milvus add-on (required) |
| Collection Name | semantic_memory | Milvus collection name |
| Embedding Dimensions | 1536 | Embedding vector dimensions |
| Top K Results | 5 | Number of results (1-100) |
Dependencies: AI Gateway (required, for generating embeddings).
Context Buffer
Manage working memory and context windows with configurable strategies for handling token limits.
Inputs:
| Input | Type | Description |
|---|---|---|
operation | string | Operation: store, retrieve, clear |
data | any | Data to store in context buffer |
query | string | Query string for filtering context |
lastN | number | Number of recent entries to retrieve |
metadata | object | Additional metadata for stored data |
Outputs:
| Output | Type | Description |
|---|---|---|
success | boolean | Whether operation succeeded |
stored_id | string | ID of stored context |
buffer_size | number | Current buffer size |
buffer_capacity | number | Maximum buffer capacity |
context | array | Retrieved context items |
items_retrieved | number | Items retrieved |
cleared_items | number | Items cleared |
Configuration:
| Setting | Default | Description |
|---|---|---|
| Max Tokens | 4000 | Maximum tokens in context (100-128000) |
| Buffer Strategy | Sliding Window | Sliding Window, Summarize Old Content, or Keep Important Content |
| Compression Ratio | 0.3 | How much to compress when summarizing (0.1-1) |
| Preserve System Messages | true | Always keep system messages in context |
| Buffer Size | 10 | Maximum items in buffer (1-1000) |
| Operation | Retrieve | Store, Retrieve, or Clear |
Working Memory
Short-term key-value scratchpad with TTL (time-to-live) support for temporary state during workflow execution.
Inputs:
| Input | Type | Description |
|---|---|---|
key | string | Key for the entry |
value | any | Value to store |
ttl | number | Time-to-live in seconds |
Outputs:
| Output | Type | Description |
|---|---|---|
value | any | Retrieved value |
found | boolean | Whether key was found |
success | boolean | Operation success |
key | string | Key operated on |
keys | array | All keys (list operation) |
entriesCount | number | Current entry count |
Configuration:
| Setting | Default | Description |
|---|---|---|
| Operation | Get | Set, Get, Update, Delete, List All, or Clear All |
| Max Entries | 100 | Maximum entries (1-10000) |
| Default TTL | 0 (no expiry) | Default time-to-live in seconds |
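The TTL semantics can be sketched as a lazy-expiry key-value store. The `WorkingMemory` class and its lazy expiry-on-read are illustrative assumptions about how such a scratchpad typically behaves:

```python
import time

class WorkingMemory:
    """Illustrative TTL key-value scratchpad (not the node's actual code)."""

    def __init__(self, max_entries=100, default_ttl=0):
        self.max_entries = max_entries
        self.default_ttl = default_ttl  # 0 means no expiry
        self.store = {}  # key -> (value, expires_at or None)

    def set(self, key, value, ttl=None):
        if len(self.store) >= self.max_entries and key not in self.store:
            return {"success": False, "key": key}  # at capacity
        ttl = self.default_ttl if ttl is None else ttl
        expires_at = time.time() + ttl if ttl > 0 else None
        self.store[key] = (value, expires_at)
        return {"success": True, "key": key}

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return {"found": False, "value": None}
        value, expires_at = entry
        if expires_at is not None and time.time() >= expires_at:
            del self.store[key]  # lazy expiry: purge on read
            return {"found": False, "value": None}
        return {"found": True, "value": value}

mem = WorkingMemory()
mem.set("draft", {"step": 1}, ttl=60)
print(mem.get("draft"))
```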
Episodic Memory
Store and retrieve past workflow experiences via MongoDB. Useful for agents that learn from previous executions.
Inputs:
| Input | Type | Description |
|---|---|---|
task | string | Task description (for recording) |
decisions | array | Decisions made |
outcome | string | Episode outcome |
success | boolean | Whether episode was successful |
query | string | Search query (for retrieval) |
filter | object | MongoDB filter (for search) |
episodeMetadata | object | Additional metadata |
Outputs:
| Output | Type | Description |
|---|---|---|
episodes | array | Retrieved episodes |
count | number | Number of episodes returned |
episodeId | string | ID of recorded episode |
success | boolean | Operation success |
Configuration:
| Setting | Default | Description |
|---|---|---|
| Operation | Retrieve Recent | Record Episode, Retrieve Similar, Retrieve Recent, or Search |
| Connection Type | Data Source | Data Source or Add-on (MongoDB) |
| Database | memory | MongoDB database name |
| Collection | workflow_episodes | Collection name |
| Max Episodes | 10 | Maximum episodes to return (1-100) |
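The shape of a recorded episode and the Retrieve Recent operation can be sketched as follows; the field names and helper functions are illustrative, and an in-memory list stands in for the MongoDB collection:

```python
from datetime import datetime, timezone

episodes = []  # stand-in for the workflow_episodes collection

def record_episode(task, decisions, outcome, success, metadata=None):
    """Append one episode document (illustrative schema)."""
    doc = {
        "task": task,
        "decisions": decisions,
        "outcome": outcome,
        "success": success,
        "metadata": metadata or {},
        "recordedAt": datetime.now(timezone.utc),
    }
    episodes.append(doc)
    return {"success": True, "episodeId": str(len(episodes) - 1)}

def retrieve_recent(max_episodes=10):
    """Return the newest episodes first, like the Retrieve Recent operation."""
    recent = episodes[-max_episodes:][::-1]
    return {"episodes": recent, "count": len(recent)}
```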
Memory Retriever
Meta-node that queries multiple memory sources and merges/ranks results. Connect up to 3 memory sources as dependencies.
Inputs:
| Input | Type | Required | Description |
|---|---|---|---|
query | string | Yes | Search query across memory sources |
Outputs:
| Output | Type | Description |
|---|---|---|
results | array | Ranked results from all sources |
totalResults | number | Total results returned |
sourcesQueried | number | Number of sources queried |
sourceBreakdown | object | Results count per source |
rankingStrategy | string | Strategy used |
Configuration:
| Setting | Default | Description |
|---|---|---|
| Ranking Strategy | Relevance | Relevance, Recency, or Hybrid (relevance + recency) |
| Max Results | 10 | Maximum results (1-100) |
| Source Weights | (empty) | JSON weights per source (e.g., {"memory_1": 1.0, "memory_2": 0.8}) |
Dependencies: Memory Source 1 (required), Memory Source 2 (optional), Memory Source 3 (optional).
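Merging and ranking across sources can be illustrated with a small scoring function. The `rank_results` helper, the recency decay, and the 70/30 hybrid blend are assumptions for illustration, not the node's documented weighting:

```python
def rank_results(results, strategy="relevance", source_weights=None):
    """Illustrative merge-and-rank across memory sources.
    Each result: {"source": str, "score": float, "age_s": float, "text": str}."""
    source_weights = source_weights or {}

    def key(r):
        weight = source_weights.get(r["source"], 1.0)
        relevance = r["score"] * weight
        recency = 1.0 / (1.0 + r["age_s"] / 3600.0)  # decays over hours
        if strategy == "relevance":
            return relevance
        if strategy == "recency":
            return recency
        return 0.7 * relevance + 0.3 * recency  # hybrid: assumed blend

    return sorted(results, key=key, reverse=True)
```

With `{"memory_2": 0.8}` as Source Weights, a 0.9-scored hit from `memory_2` effectively ranks as 0.72, letting you down-weight noisier sources.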
Shared Blackboard
Cross-agent shared key-value state via MongoDB. Enables multiple agents in a workflow to read and write shared state organized by sections.
Inputs:
| Input | Type | Description |
|---|---|---|
section | string | Section name |
key | string | Key name |
value | any | Value to write |
since | string | ISO timestamp for subscribe operation |
Outputs:
| Output | Type | Description |
|---|---|---|
value | any | Retrieved value |
found | boolean | Whether key was found |
sectionData | object | All data in section |
changes | array | Changes since last read |
success | boolean | Operation success |
entriesCount | number | Number of entries |
Configuration:
| Setting | Default | Description |
|---|---|---|
| Operation | Read | Write, Read, Subscribe (get changes), or Clear Section |
| Board ID | (empty = workflow-scoped) | Board identifier |
| Connection Type | Data Source | Data Source or Add-on (MongoDB) |
| Database | memory | MongoDB database name |
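The section/key model and the Subscribe operation can be sketched in a few lines; the `Blackboard` class and its change log are illustrative, with an in-memory dict in place of MongoDB:

```python
from datetime import datetime, timezone

class Blackboard:
    """Illustrative section-scoped shared state with change tracking."""

    def __init__(self):
        self.sections = {}  # section -> {key: value}
        self.changes = []   # (timestamp, section, key, value)

    def write(self, section, key, value):
        self.sections.setdefault(section, {})[key] = value
        self.changes.append((datetime.now(timezone.utc), section, key, value))
        return {"success": True}

    def read(self, section, key):
        value = self.sections.get(section, {}).get(key)
        return {"found": value is not None, "value": value}

    def subscribe(self, since):
        """Return changes newer than the given ISO timestamp."""
        cutoff = datetime.fromisoformat(since)
        return [c for c in self.changes if c[0] > cutoff]

bb = Blackboard()
bb.write("plan", "current_step", 2)       # agent A publishes progress
print(bb.read("plan", "current_step"))    # agent B reads it
```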
Key Workflow Patterns
Agent + AI Gateway + MCP Tools Pattern
The most common agent pattern connects three nodes via the agent's bottom dependency connectors:
[Trigger] --> [ReAct Agent] --> [Output]
| | |
[AI] [M] [Tools]
| |
[AI Gateway] [MCP Tools Provider]
- The agent's ai port connects to an AI Gateway node for LLM reasoning
- The agent's tools port connects to an MCP Tools Provider for external tool access
- The agent's memory port connects to any memory node for state persistence
Agent Loop Pattern
The ReAct Agent and Agent Loop nodes follow an iterative think-act-observe cycle:
1. THINK - Analyze the current situation and decide next action
2. ACT - Call a tool or produce a response
3. OBSERVE - Process the tool result
4. REPEAT - Continue until goal achieved or limits reached
The loop terminates when:
- A stop pattern is matched in the response (e.g., "FINAL ANSWER:")
- Maximum iterations are reached
- Token budget is exhausted
- An error occurs
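The cycle and its termination conditions can be sketched as a loop. Everything here is illustrative: `llm` and `tools` are caller-supplied callables, and the `ACTION: tool(arg)` convention parsed by `parse_action` is a hypothetical format, not the agent's actual protocol:

```python
import re

def parse_action(thought):
    """Hypothetical convention: 'ACTION: tool_name(argument)'."""
    m = re.search(r"ACTION:\s*(\w+)\((.*)\)", thought)
    return (m.group(1), m.group(2)) if m else (None, None)

def agent_loop(llm, tools, goal, max_iterations=10, stop_patterns=(r"FINAL ANSWER:",)):
    """Illustrative think-act-observe loop with the termination rules above."""
    history = [f"Goal: {goal}"]
    for i in range(max_iterations):
        thought = llm("\n".join(history))                      # THINK
        if any(re.search(p, thought) for p in stop_patterns):  # stop pattern matched
            return {"response": thought, "stopReason": "stop_pattern", "iterations": i + 1}
        tool, arg = parse_action(thought)
        if tool is not None and tool in tools:
            observation = tools[tool](arg)                     # ACT
            history.append(f"Thought: {thought}\nObservation: {observation}")  # OBSERVE
        else:
            history.append(f"Thought: {thought}")
    return {"response": history[-1], "stopReason": "max_iterations",
            "iterations": max_iterations}
```

A production loop would also track a token budget and catch tool errors, which map to the remaining two termination conditions.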
Multi-Agent Patterns
Supervisor Pattern:
[Trigger] --> [Supervisor Agent] --> [Output]
| | |
[AI] [Agents] [Tools]
|
+---------+---------+
| | |
[Agent A] [Agent B] [Agent C]
The supervisor creates a plan, delegates to sub-agents, and synthesizes results.
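The plan-delegate-synthesize flow reduces to a simple sketch. The function shapes are illustrative: `plan_llm` stands in for the supervisor's planning call, and `sub_agents` for its connected agent dependencies:

```python
def supervise(plan_llm, sub_agents, task):
    """Illustrative supervisor: plan -> delegate -> synthesize."""
    plan = plan_llm(task)  # e.g. [("Agent A", "research ..."), ("Agent B", "draft ...")]
    results = []
    for agent_name, subtask in plan:
        results.append(sub_agents[agent_name](subtask))  # delegate to sub-agent
    return " | ".join(results)  # synthesize (trivially concatenated here)
```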
Debate Pattern:
[Trigger] --> [Debate Agent] --> [Output]
| |
[AI] [Agents]
|
+---------+---------+
| | |
[Agent A] [Agent B] [Agent C]
Multiple agents debate a topic through structured rounds until consensus.
Handoff Pattern:
[Trigger] --> [Agent A] --> [Agent Handoff] --> [Agent B] --> [Output]
Context is packaged and transferred between specialized agents.
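A handoff payload might look like the following sketch; the field names are illustrative, not the Agent Handoff node's actual schema:

```python
def package_handoff(source_agent, conversation, task_state, target_agent):
    """Illustrative handoff payload passed from one agent to the next."""
    return {
        "from": source_agent,
        "to": target_agent,
        "summary": conversation[-3:],  # recent turns only, to bound payload size
        "taskState": task_state,
    }
```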
RAG (Retrieval Augmented Generation) Pattern
[Trigger] --> [Embeddings] --> [Semantic Memory (search)] --> [RAG Agent] --> [AI Gateway] --> [Output]
- Generate embeddings for the user query
- Search semantic memory (Milvus) for relevant documents
- RAG Agent builds a prompt combining the query with retrieved documents
- AI Gateway generates the final answer with context
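The prompt-building step in the middle of this pattern can be sketched as follows; the template wording and `build_rag_prompt` name are illustrative, not the RAG Agent's actual template:

```python
def build_rag_prompt(query, documents, max_docs=5):
    """Illustrative RAG prompt assembly: retrieved docs become grounding context."""
    context = "\n\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(documents[:max_docs]))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )
```

The resulting string is what flows into the AI Gateway's `userPrompt` input in the diagram above.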
Memory-Augmented Agent Pattern
[Trigger] --> [Conversation Memory (retrieve)] --> [ReAct Agent] --> [Conversation Memory (add)] --> [Output]
|
[AI Gateway]
- Retrieve conversation history from memory
- Agent uses history as context for reasoning
- New exchange is saved back to memory
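The retrieve-reason-save cycle amounts to a few lines per turn. This sketch uses a plain dict for the memory store and a callable for the agent, both illustrative stand-ins for the workflow nodes:

```python
def run_turn(memory, agent, user_message):
    """Illustrative memory-augmented turn: retrieve history, reason, persist the exchange."""
    history = memory.get("history", [])                                    # retrieve
    reply = agent(history + [{"role": "user", "content": user_message}])   # reason with context
    memory["history"] = history + [                                        # save exchange back
        {"role": "user", "content": user_message},
        {"role": "assistant", "content": reply},
    ]
    return reply
```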
Best Practices
Model Selection
- Use smaller/faster models for simple classification or extraction tasks
- Use larger models for complex reasoning, planning, and multi-step tasks
- Set appropriate `maxTokens` limits to control costs
Agent Configuration
- Start with low `maxIterations` (5-10) and increase it only if tasks are left incomplete
- Use specific `systemPrompt` instructions to guide agent behavior
- Add `stopPatterns` that match your expected output format
- Set token budgets to prevent runaway costs
Memory Usage
- Use Working Memory for temporary scratch state within a single execution
- Use Conversation Memory for persistent chat history across executions
- Use Semantic Memory for vector-based retrieval (requires Milvus add-on)
- Use Episodic Memory for learning from past workflow runs
- Use Shared Blackboard when multiple agents need to coordinate via shared state
- Use Memory Retriever to query across multiple memory sources at once
Error Handling
- Check the `success` output from agent and memory nodes
- Use the `stopReason` output from agent nodes to determine why an agent terminated
- Set `maxRetries` on Supervisor Agent for resilience
- Monitor the `totalTokens` output to track costs