AI Agents in Workflows

Strongly AI provides three categories of AI-powered workflow nodes: AI nodes for model inference, Agent nodes for autonomous reasoning and multi-agent patterns, and Memory nodes for persistent state and retrieval. Together these enable sophisticated AI workflows from simple LLM calls to fully autonomous multi-agent systems.

AI Nodes

AI nodes connect your workflows to language models and other AI capabilities through the AI Gateway.

AI Gateway

The primary node for calling LLM models (GPT-4, Claude, Llama, etc.) for text generation, summarization, extraction, classification, and question answering.

Inputs:

Input | Type | Required | Description
userPrompt | string | Yes | The user message/prompt to send to the AI model
systemPrompt | string | No | System message to guide AI behavior
temperature | number | No | Temperature value (0-2) to control randomness
maxTokens | number | No | Maximum number of tokens to generate

Outputs:

Output | Type | Description
response | string | The AI model's text response
model | object | Model info: id, name, provider, type, contextWindow, maxOutputTokens
usage | object | Token usage: promptTokens, completionTokens, totalTokens
finishReason | string | Why generation stopped (stop, length, etc.)
responseTimeMs | number | Response time in milliseconds

Configuration:

Setting | Description
AI Model | Select from available models via model-selector
Default Temperature | Default temperature if not provided via input (0-2, default: 0.7)
Default Max Tokens | Default max tokens if not provided via input (1-8000, default: 1000)
Default System Prompt | Default system message if not provided via input
Default User Prompt | Default user prompt if not provided via input mapping
Info Only | Return model capabilities without making an LLM call

Accessing Output:

{{ aiGateway.response }}
{{ aiGateway.model.id }}
{{ aiGateway.usage.totalTokens }}
{{ aiGateway.finishReason }}
{{ aiGateway.responseTimeMs }}

Learn more about AI Gateway models.

LLM

Text and chat completion via AI Gateway. Functionally similar to the AI Gateway node with the same inputs, outputs, and configuration options. Use this node when you want a dedicated text/chat completion step in your workflow.

Inputs/Outputs: Same as AI Gateway (userPrompt, systemPrompt, temperature, maxTokens --> response, model, usage, finishReason, responseTimeMs).

Configuration: Same as AI Gateway (model-selector, defaultTemperature, defaultMaxTokens, defaultSystemPrompt, defaultUserPrompt, infoOnly).

Embeddings

Generate vector embeddings from text for semantic search and similarity operations.

Inputs:

Input | Type | Description
text | string | Single text to embed
texts | array | Array of texts to embed (batch)

Outputs:

Output | Type | Description
embeddings | array | Array of embedding vectors
model | string | Model used
usage | object | Token usage statistics
dimensions | number | Embedding dimensions
count | number | Number of embeddings generated
responseTimeMs | number | Response time in ms

Configuration:

Setting | Description
Embedding Model | Select an embedding model via model-selector
Dimensions | Override dimensions (0 = model default, max 4096)
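
The embedding vectors this node returns are typically compared downstream with cosine similarity. As an illustration (plain Python, independent of any Strongly-specific API), this is how two returned vectors can be scored:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

Vectors pointing in the same direction score 1.0; orthogonal vectors score 0.0. Higher scores mean more semantically similar texts.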

Vision

Analyze images and visual content using vision-capable AI models (GPT-4V, Claude Vision, etc.).

Inputs:

Input | Type | Required | Description
image | string | Yes | Image URL (http/https) or base64-encoded image data
prompt | string | Yes | Text prompt describing what to analyze
systemPrompt | string | No | System message to guide AI behavior
temperature | number | No | Temperature value (0-2)
maxTokens | number | No | Maximum tokens to generate

Outputs: Same structure as AI Gateway (response, model, usage, responseTimeMs).

Configuration: model-selector (vision-capable model), defaultTemperature, defaultMaxTokens, defaultSystemPrompt, defaultPrompt.

Image Generation

Generate images from text prompts with async job-based processing.

Inputs:

Input | Type | Required | Description
prompt | string | Yes | Text prompt describing the image to generate
negativePrompt | string | No | What to avoid in the generated image

Outputs:

Output | Type | Description
images | array | Generated images with url or b64_json
jobId | string | Async job ID (if applicable)
model | string | Model used for generation
revisedPrompt | string | Model-revised prompt (if applicable)
responseTimeMs | number | Total response time in ms

Configuration:

Setting | Options
Image Size | 256x256, 512x512, 1024x1024, 1792x1024 (Landscape), 1024x1792 (Portrait)
Quality | Standard, HD
Number of Images | 1-4
Style | Vivid, Natural
Poll Interval | 500-10000ms (for async jobs)
Max Wait | 10-600 seconds

Speech to Text

Transcribe audio to text using AI speech recognition models (Whisper, etc.).

Inputs:

Input | Type | Required | Description
audio | string | No | Base64-encoded audio data (provide this OR audioPath)
audioPath | string | No | Path to audio file from previous node
language | string | No | Language code hint (e.g. 'en', 'es', 'fr')
prompt | string | No | Context hint to guide transcription

Outputs:

Output | Type | Description
text | string | Transcribed text from the audio
language | string | Detected or specified language code
duration | number | Audio duration in seconds
segments | array | Timestamped segments (verbose_json format only)

Configuration:

Setting | Description
Transcription Model | Select a transcription model
Language | Language code hint (leave empty for auto-detection)
Response Format | JSON, Plain Text, SRT (Subtitles), VTT (Web Subtitles), Verbose JSON (with segments)
Transcription Prompt | Context hint with domain-specific terms

Text to Speech

Convert text to speech audio via AI Gateway.

Inputs:

Input | Type | Required | Description
text | string | Yes | Text to convert to speech

Outputs:

Output | Type | Description
audioPath | string | Path to cached audio file
format | string | Audio format (mp3/opus/aac/flac/wav)
voice | string | Voice used
inputLength | number | Input text length
audioSize | number | Audio file size in bytes
model | string | Model used
responseTimeMs | number | Response time in ms

Configuration:

Setting | Options
Voice | Alloy, Echo, Fable, Onyx, Nova, Shimmer
Speed | 0.25 - 4.0
Audio Format | MP3, Opus, AAC, FLAC, WAV

Agent Nodes

Agent nodes provide autonomous reasoning, multi-agent collaboration, and specialized AI-powered processing. Most agent nodes connect to an AI Gateway via a bottom "ai" dependency connector, and optionally to MCP tools providers and memory nodes.

Connector Pattern

Most agent nodes have three bottom dependency connectors:

  • AI (bottom-left) -- Connect to an AI Gateway node for LLM reasoning
  • Memory (bottom-center) -- Connect to a memory node for persistent state
  • Tools (bottom-right) -- Connect to an MCP Tools Provider for external tool access
        [Agent Node]
        /    |    \
     [AI] [Memory] [Tools]

ReAct Agent

Autonomous AI agent using the ReAct (Reasoning + Acting) pattern. Iteratively thinks, acts using tools, and observes results until the goal is achieved.

Inputs:

Input | Type | Required | Description
task | string | Yes | The goal or task for the agent to accomplish
context | string | No | Additional context to help the agent
tools | array | No | Available tools (can also come from connected tool nodes)

Outputs:

Output | Type | Description
finalAnswer | string | The agent's final response
success | boolean | Whether the agent achieved its goal
stopReason | string | Why the agent stopped (goal_achieved, max_iterations_reached, cost_budget_exceeded, error)
iterations | number | Number of think-act-observe cycles completed
toolsCalled | array | List of tools executed with arguments and results
trajectory | array | Complete trajectory of think/act/observe steps
totalTokens | number | Total tokens used across all AI calls

Configuration:

Setting | Default | Description
Max Iterations | 10 | Maximum think-act-observe cycles (1-50)
System Prompt | -- | Additional instructions for agent behavior
Stop Patterns | "FINAL ANSWER:", "Task completed", "I have completed" | Patterns indicating task completion
Temperature | 0.7 | Creativity level for reasoning (0-1)
Max Tokens per Call | 2000 | Maximum tokens for each AI reasoning call (100-8000)
Token Budget | unlimited | Maximum total tokens to use

Dependencies: AI Gateway (required), Tools (optional), Memory (optional).

Agent Loop

Configurable autonomous think-act-observe agent loop. Similar to the ReAct Agent but with additional control over stop conditions.

Inputs:

Input | Type | Required | Description
goal | string | Yes | The goal for the agent to accomplish
tools | array | No | Available tool definitions
context | string | No | Additional context

Outputs:

Output | Type | Description
finalAnswer | any | The agent's final response
success | boolean | Whether goal was achieved
stopReason | string | Why the agent stopped
iterations | number | Number of think iterations
toolsCalled | array | Tools that were executed
trajectory | array | Full think/act/observe trajectory
totalTokens | number | Total tokens used

Configuration:

Setting | Default | Description
Max Iterations | 10 | Maximum iterations (1-100)
System Prompt | -- | Custom system prompt
Stop Patterns | "FINAL ANSWER:", "Task completed" | Patterns that halt the loop
Stop Condition | Pattern Match | Pattern Match, Token Budget, or Max Tool Calls
Token Budget | 0 (unlimited) | Total token limit
Temperature | 0.7 | Temperature (0-2)
Max Tokens Per Call | 2000 | Tokens per AI call (100-8000)
Output Format | Text | Text or JSON

Dependencies: AI Gateway (required), Tools (optional), Memory (optional).

Supervisor Agent

Orchestrates multiple sub-agents to accomplish complex tasks. Creates execution plans, delegates work, and synthesizes results. Similar to CrewAI and AutoGen patterns.

Inputs:

Input | Type | Required | Description
task | string | Yes | The complex task requiring multi-agent collaboration
context | string | No | Additional context for the supervisor
agents | array | No | Agent definitions (can also come from connections)
tools | array | No | Tool definitions (can also come from mcp-tools-provider)

Outputs:

Output | Type | Description
finalResult | string | Synthesized result from all agents
agentResults | object | Individual results from each agent
executionPlan | array | The execution plan that was followed
success | boolean | Whether all required tasks completed
agentsUsed | number | Number of agents used
failedAgents | number | Number of agents that failed

Configuration:

Setting | Default | Description
Orchestration Mode | Adaptive (AI decides) | Adaptive, Sequential, Parallel, or Hierarchical
Supervisor Instructions | -- | Additional instructions for supervisor behavior
Synthesize Results | true | Combine all agent outputs into a coherent final answer
Max Retries | 2 | Maximum retries for failed agent tasks (0-5)

Dependencies: AI Gateway (required), Sub-Agents (optional), Tools (optional).

Multi-Agent Chat

Multiple AI personas collaborate on a shared discussion thread.

Inputs:

Input | Type | Required | Description
topic | string | Yes | Discussion topic
context | string | No | Additional context
agents | array | No | Agent definitions (objects with name, role, systemPrompt)

Outputs:

Output | Type | Description
transcript | array | Full discussion transcript
finalConsensus | string | Synthesized conclusion
roundsCompleted | number | Rounds completed
terminationReason | string | Why discussion ended
agentCount | number | Number of agents

Configuration:

Setting | Default | Description
Max Rounds | 5 | Maximum discussion rounds (1-20)
Termination Condition | Fixed Rounds | Fixed Rounds, Consensus Detected, or Keyword Match
Termination Keyword | CONSENSUS_REACHED | Keyword to end discussion
Enable Moderator | false | Add a moderator to guide discussion
Temperature | 0.7 | Temperature (0-2)

Dependencies: AI Gateway (required).

Debate Agent

Multi-agent debate pattern for reaching consensus through structured argumentation, critique, and synthesis.

Inputs:

Input | Type | Required | Description
topic | string | Yes | The topic or question to debate
context | string | No | Background information for the debate
agents | array | No | Agent configurations (optional, can use connected agents)

Outputs:

Output | Type | Description
conclusion | string | Synthesized conclusion from the debate
converged | boolean | Whether agents reached natural consensus
totalRounds | number | Number of debate rounds executed
debateHistory | array | Full history of debate rounds and arguments
votes | object | Voting results from agents
consensusReached | boolean | Whether consensus was achieved
agentCount | number | Number of agents that participated

Configuration:

Setting | Default | Description
Debate Mode | Structured | Structured (Propose/Critique/Rebut), Round Robin, Free Form, or Adversarial
Max Rounds | 3 | Maximum debate rounds (1-10)
Convergence Threshold | 0.8 | Agreement level to stop early (0.5-1.0)
Synthesize Conclusion | true | Generate a final synthesized conclusion
Enable Voting | true | Have agents vote on conclusions

Dependencies: AI Gateway (required), Debaters/Sub-Agents (optional).
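
The node's internals aren't documented here, but one plausible reading of the Convergence Threshold is illustrated below: when voting is enabled, debate can stop early once the most popular position wins at least the threshold share of votes. A minimal sketch (the function name and vote shape are assumptions for illustration):

```python
def consensus_reached(votes: dict[str, str], threshold: float = 0.8) -> bool:
    """True when the most popular position wins at least `threshold` of the votes.

    `votes` maps agent name -> the position that agent voted for.
    """
    if not votes:
        return False
    counts: dict[str, int] = {}
    for position in votes.values():
        counts[position] = counts.get(position, 0) + 1
    return max(counts.values()) / len(votes) >= threshold
```

With the default threshold of 0.8, four of five agents agreeing is enough to converge, while a 3-2 split keeps the debate going.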

Planner

Decompose complex goals into ordered sub-tasks with dependencies.

Inputs:

Input | Type | Required | Description
goal | string | Yes | Goal to decompose into tasks
context | string | No | Additional context
capabilities | array | No | Available tools/capabilities

Outputs:

Output | Type | Description
plan | array | Ordered list of sub-tasks with dependencies
reasoning | string | Planning reasoning
criticalPath | array | Critical path task IDs
totalTasks | number | Total number of tasks

Configuration:

Setting | Default | Description
Planning Strategy | Flat | Flat (single decomposition), Hierarchical (phases then tasks), or Iterative (plan then refine)
Max Sub-tasks | 10 | Maximum sub-tasks (3-20)
Include Complexity Estimates | true | Add complexity estimates to tasks
Temperature | 0.3 | Temperature (0-1)
System Prompt | -- | Custom planning instructions

Dependencies: AI Gateway (required).

Reflection

Self-review and iterative content improvement via critique-revise cycles.

Inputs:

Input | Type | Required | Description
content | string | Yes | Content to reflect on and improve
originalPrompt | string | No | Original prompt that generated the content
criteria | array | No | Override evaluation criteria

Outputs:

Output | Type | Description
revisedContent | string | Final improved content
originalContent | string | Original content before revision
reflections | array | Array of critique-revise cycles
improvementScore | number | Score improvement from first to last
finalScore | number | Final evaluation score (0-1)
totalRevisions | number | Number of revisions performed

Configuration:

Setting | Default | Description
Evaluation Criteria | accuracy, completeness, coherence | Criteria tags for evaluation
Custom Criteria | -- | Free-text custom criteria
Max Revisions | 2 | Maximum revision cycles (1-5)
Auto-Accept Threshold | 0.8 | Score threshold to auto-accept (0-1)
Temperature | 0.3 | Temperature (0-1)

Dependencies: AI Gateway (required).

RAG Agent

Retrieval Augmented Generation agent that combines retrieved documents with AI generation. Builds prompts by combining user queries with retrieved documents from vector search.

Inputs:

Input | Type | Required | Description
retrievedDocs | array | Yes | Documents retrieved from vector search
query | string | No | Query override (uses config if not provided)

Outputs:

Output | Type | Description
rag_prompt | string | Formatted prompt with context and query
query | string | The original query
docs_used | number | Number of documents included in context
context_length | number | Character count of the context

Configuration:

Setting | Default | Description
Query | -- | User question to answer (required)
Top K Documents | 5 | Number of documents to include in context

Dependencies: AI (optional), Memory (optional), Tools (optional).

Entity Extraction

LLM-powered entity extraction agent. Extracts named entities from documents using configurable entity type definitions with descriptions and examples. This is NOT a traditional NER system -- it uses an LLM to identify entities based on your definitions.

Inputs:

Input | Type | Required | Description
filename | string | Yes | Path to document file (.md, .html, .txt)
entities | array | Yes | Entity type definitions to extract
outputMode | string | No | Output format: flat, grouped, or annotated
confidence | number | No | Minimum confidence threshold (0-1)
validate | boolean | No | Validate entities exist in document

Outputs:

Output | Type | Description
entities | object | Extracted entities grouped by type
documents | array | Per-document extraction stats
summary | object | Summary with total entities and processing time

Configuration:

Setting | Description
Entities to Extract | Array of entity type definitions, each with name, description, examples, and output format (string/normalized/structured)
Output Mode | Flat List, Grouped by Type, or With Position Info
Confidence Threshold | Minimum confidence score (0-1, default: 0.7)
Validate Entities | Verify extracted entities exist in document text using LLM correction

Dependencies: AI (required), Memory (optional), Tools (optional).

Document Classification

LLM-powered document classification agent. Classifies documents into configurable labels using an LLM with keyword hints.

Inputs:

Input | Type | Required | Description
filename | string | Yes | Path to document file (.md, .html, .txt)
labels | array | Yes | List of classification labels
keywords | object | No | Keywords per label for classification hints
confidence | number | No | Minimum confidence threshold (0-1)

Outputs:

Output | Type | Description
classifications | array | Array of results with filename, label, confidence, reasoning, alternativeLabels, keywords, metadata
summary | object | Summary with totalDocuments, labelDistribution, averageConfidence, processingTime
passThroughValues | object | Pass-through values from input

Configuration:

Setting | Description
Classification Labels | List of labels (e.g., Invoice, Contract, Receipt, Report, Other)
Keywords per Label | JSON mapping of labels to keyword arrays for classification hints
Confidence Threshold | Minimum confidence score (0-1, default: 0.7)

Dependencies: AI (required), Memory (optional), Tools (optional).

Column Mapper Agent

Uses LLM to intelligently map source columns to a target schema. Handles varying column names across different data sources by understanding semantic meaning. Supports database caching for known mappings.

Inputs:

Input | Type | Required | Description
rows | array | Yes | Array of row objects with source column names (sample of 5-10 rows)
headers | array | No | Source column names (inferred from rows if not provided)
filename | string | No | Source filename for context (passed through)
file | string | No | File path to pass through to downstream nodes
parsedTableKey | string | No | Cache key from table-parser

Outputs:

Output | Type | Description
columnMappings | object | Column mapping dictionary (target column --> source column)
success | boolean | True if mapping meets confidence threshold
confidenceScores | object | Confidence score (0-1) for each mapping
overallConfidence | number | Average confidence across all mappings
needsReview | boolean | True if any mappings are low confidence
usedCache | boolean | True if cached mapping was used instead of LLM
mappingCacheKey | string | Cache key for this mapping lookup

Configuration:

Setting | Description
Target Schema | Array of target columns with name, description, type, required flag, and examples
Confidence Threshold | Minimum confidence for successful mappings (0-1, default: 0.7)
Sample Size | Number of sample rows to send to LLM (1-20, default: 5)
Use Known Mappings | Use cached mappings from database if available (default: true)
Always Find New | Always use LLM even if cached mapping exists (default: false)
Memory Collection | Collection name for storing cached mappings (default: column_mappings)

Dependencies: AI (required), Memory (optional), Tools (optional).
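
One way the confidence outputs could relate to each other, shown as a sketch (the node's actual logic may differ; the function name is hypothetical): overall confidence is an average of the per-column scores, and a mapping needs review when any single column falls below the threshold.

```python
def summarize_mapping(confidence_scores: dict[str, float], threshold: float = 0.7) -> dict:
    """Derive overallConfidence, success, and needsReview from per-column scores."""
    overall = sum(confidence_scores.values()) / len(confidence_scores)
    return {
        "overallConfidence": overall,
        "success": overall >= threshold,          # mapping meets the threshold overall
        "needsReview": any(s < threshold for s in confidence_scores.values()),
    }
```

For example, scores of {"date": 0.9, "amount": 0.5} average to 0.7 (success at the default threshold) but still flag needsReview because one column is weak.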

Data Cleanup Agent

Uses LLM to validate and fix malformed data rows from PDF extraction. Detects shifted columns, merged values, and data type mismatches.

Inputs:

Input | Type | Required | Description
current_item | object | Yes | Row data to validate and clean
mappings | object | No | Column mappings from column-mapper node

Outputs:

Output | Type | Description
current_item | object | Cleaned row data (or original if no issues)
data_quality | string | Quality status: valid, cleaned, invalid, unfixable
was_cleaned | boolean | True if the row was modified by LLM
validation_issues | array | List of detected data quality issues
original_issues | array | Issues that were fixed (when was_cleaned is true)

Configuration:

Setting | Description
Date Columns | Column names that should contain date values
Numeric Columns | Column names that should contain numeric values
Validate Only | If enabled, only detect issues without fixing them

Dependencies: AI (required), Memory (optional), Tools (optional).

Function Calling Agent

Orchestrates function calls from AI responses, extracting and managing tool calls.

Inputs:

Input | Type | Required | Description
aiResponse | object | Yes | Response from AI containing function calls
availableFunctions | array | No | List of available functions the AI can call

Outputs:

Output | Type | Description
function_calls | array | Extracted function calls from AI response
call_count | number | Number of function calls extracted

Configuration:

Setting | Description
Available Functions | JSON list of functions the agent can call

Dependencies: AI (optional), Memory (optional), Tools (optional).

Tool Router

LLM-based dynamic tool selection for a given task. Analyzes a task and selects the best tools from a list of available options.

Inputs:

Input | Type | Required | Description
task | string | Yes | Task or query to route
tools | array | Yes | Available tools (objects with name, description)

Outputs:

Output | Type | Description
selectedTools | array | Selected tools (objects with name, confidence, reasoning)
totalAvailable | number | Total available tools
strategy | string | Routing strategy used

Configuration:

Setting | Default | Description
Routing Strategy | LLM-based | Keyword (no LLM), LLM-based, or Hybrid (keyword + LLM)
Max Selections | 3 | Maximum tools to select (1-10)
Confidence Threshold | 0.5 | Minimum confidence for selection (0-1)
Temperature | 0.2 | Temperature (0-1)

Dependencies: AI Gateway (optional, required for LLM and hybrid strategies).
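
The Keyword strategy needs no LLM at all. A minimal sketch of how keyword routing could work (an illustration, not the node's actual implementation): score each tool by word overlap between its description and the task, then keep the top matches.

```python
def route_by_keywords(task: str, tools: list[dict], max_selections: int = 3) -> list[str]:
    """Select tool names whose descriptions share the most words with the task."""
    task_words = set(task.lower().split())
    scored = []
    for tool in tools:
        desc_words = set(tool["description"].lower().split())
        overlap = len(task_words & desc_words)  # crude relevance signal
        if overlap:
            scored.append((overlap, tool["name"]))
    scored.sort(reverse=True)  # highest overlap first
    return [name for _, name in scored[:max_selections]]
```

A hybrid strategy would use this cheap pass to shortlist candidates and then ask the LLM to pick among them with confidence and reasoning.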

Agent Handoff

Package and transfer context between agent nodes. Supports full pass-through, LLM-compressed summary, or selective field extraction.

Inputs:

Input | Type | Required | Description
currentState | any | Yes | Current agent's state/results
conversationHistory | array | No | Conversation history
nextAgentInstructions | string | No | Instructions for next agent

Outputs:

Output | Type | Description
handoffPackage | object | Packaged context for next agent
strategy | string | Strategy used
originalSize | number | Original context size
compressedSize | number | Packaged size
compressionRatio | number | Compression ratio

Configuration:

Setting | Default | Description
Handoff Strategy | Full | Full (pass everything), Summary (LLM-compressed), or Selective (specific fields)
Selective Fields | [] | Fields to extract (dot notation supported)
Summary Max Tokens | 500 | Max tokens for LLM summary (100-2000)
Include Key Findings | true | Include key findings in handoff

Dependencies: AI Gateway (optional, required for Summary strategy).
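
To make the Selective strategy concrete, here is a sketch of dot-notation field extraction from a nested state object (illustrative only; the node may resolve paths differently):

```python
def extract_fields(state: dict, fields: list[str]) -> dict:
    """Pull dot-notation paths (e.g. 'result.summary') out of a nested state dict."""
    package = {}
    for path in fields:
        value = state
        for part in path.split("."):
            if not isinstance(value, dict) or part not in value:
                value = None  # path doesn't exist; skip it
                break
            value = value[part]
        if value is not None:
            package[path] = value
    return package
```

Selective handoff keeps the package small without an LLM call: only the named fields travel to the next agent, and missing paths are silently dropped.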


Memory Nodes

Memory nodes provide persistent state, conversation history, vector storage, and cross-agent communication for your workflows.

Conversation Memory

Store and retrieve conversation history using MongoDB. Supports add, retrieve, summarize, and clear operations.

Operations:

Operation | Description
Add | Store a new message in the conversation
Retrieve | Get conversation history
Summarize | Get conversation summary and formatted text
Clear | Clear conversation history

Inputs:

Input | Type | Description
operation | string | Operation: add, retrieve, summarize, clear
message | any | Message content to add
role | string | Message role: user, assistant, system
sessionId | string | Conversation session identifier
windowSize | number | Number of recent messages to retrieve
includeMetadata | boolean | Include message metadata

Outputs (vary by operation):

Output | Type | Description
success | boolean | Whether operation succeeded
messages | array | Retrieved messages (retrieve operation)
messageCount | number | Number of messages
session_id | string | Session identifier
totalMessages | number | Total messages in conversation
summary | object | Conversation summary (summarize operation)
conversation_text | string | Formatted conversation text (summarize operation)
cleared_count | number | Messages cleared (clear operation)

Configuration:

Setting | Default | Description
Max Messages | 50 | Maximum messages to store (1-1000)
Session Key | -- | Unique identifier for the session (required)
Include Metadata | true | Store timestamps and user info
Retention Days | 30 | Days to keep history (1-365)
Connection Type | Data Source | Use Add-on or Data Source (MongoDB)
Database | memory | MongoDB database name
Operation | Retrieve | Add Message, Retrieve History, Summarize, or Clear History
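
The windowSize input on Retrieve corresponds to a simple "last N messages" cut. A sketch of what that retrieval semantics could look like (illustrative, not the node's source):

```python
def retrieve_window(messages: list[dict], window_size: int,
                    include_metadata: bool = True) -> list[dict]:
    """Return the most recent `window_size` messages, optionally stripping metadata."""
    recent = messages[-window_size:]
    if include_metadata:
        return recent
    # Keep only the fields an LLM prompt needs.
    return [{"role": m["role"], "content": m["content"]} for m in recent]
```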

Knowledge Base

Store and query structured knowledge with support for Neo4j graph database and Milvus vector database. Supports store, query, update, delete, and connect operations.

Operations:

Operation | Description
Query | Search the knowledge base (semantic, keyword, hybrid, or exact)
Store | Store entities with optional relationships and embeddings
Update | Update existing entity properties
Delete | Remove an entity
Connect | Create relationships between entities

Key Inputs:

Input | Type | Description
operation | string | Operation: store, query, update, delete, connect
query | string | Search query string
queryType | string | Query type: semantic, keyword, hybrid, exact
entity | object | Entity/node to store
relationships | array | Relationships to create
embeddings | array | Vector embeddings for vector DB
topK | number | Number of results to return
entityId | string | Entity ID for update/delete

Key Outputs:

Output | Type | Description
success | boolean | Whether operation succeeded
results | array | Query results
entity_id | string | Entity ID (store/update operations)
neo4j_stored | boolean | Stored in Neo4j
milvus_stored | boolean | Stored in Milvus
total_count | number | Total results count

Configuration:

Setting | Default | Description
Connection Type | Add-on | Use Add-on or Data Source
Query Type | Semantic | Semantic, Keyword, Hybrid, or Exact Match
Max Results | 10 | Maximum results (1-100)
Similarity Threshold | 0.7 | Minimum similarity score (0-1)
Top K | 10 | Top results from search (1-100)

Semantic Memory

Vector store with embedding-based retrieval via Milvus. Requires an AI Gateway connection for generating embeddings.

Inputs:

Input | Type | Description
text | string | Text to store
query | string | Search query
metadata | object | Additional metadata
id | string | Document ID (for delete)

Outputs:

Output | Type | Description
results | array | Search results (objects with text, metadata, similarity)
count | number | Number of results
storedId | string | ID of stored document
success | boolean | Operation success

Configuration:

Setting | Default | Description
Operation | Search | Store, Search, or Delete
Milvus Add-on | -- | Select Milvus add-on (required)
Collection Name | semantic_memory | Milvus collection name
Embedding Dimensions | 1536 | Embedding vector dimensions
Top K Results | 5 | Number of results (1-100)

Dependencies: AI Gateway (required, for generating embeddings).

Context Buffer

Manage working memory and context windows with configurable strategies for handling token limits.

Inputs:

Input | Type | Description
operation | string | Operation: store, retrieve, clear
data | any | Data to store in context buffer
query | string | Query string for filtering context
lastN | number | Number of recent entries to retrieve
metadata | object | Additional metadata for stored data

Outputs:

Output | Type | Description
success | boolean | Whether operation succeeded
stored_id | string | ID of stored context
buffer_size | number | Current buffer size
buffer_capacity | number | Maximum buffer capacity
context | array | Retrieved context items
items_retrieved | number | Items retrieved
cleared_items | number | Items cleared

Configuration:

Setting | Default | Description
Max Tokens | 4000 | Maximum tokens in context (100-128000)
Buffer Strategy | Sliding Window | Sliding Window, Summarize Old Content, or Keep Important Content
Compression Ratio | 0.3 | How much to compress when summarizing (0.1-1)
Preserve System Messages | true | Always keep system messages in context
Buffer Size | 10 | Maximum items in buffer (1-1000)
Operation | Retrieve | Store, Retrieve, or Clear
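
The Sliding Window strategy with Preserve System Messages can be sketched as follows (an illustration under the stated assumptions, not the node's implementation; `count_tokens` is a caller-supplied estimator):

```python
def sliding_window(messages: list[dict], max_tokens: int, count_tokens,
                   preserve_system: bool = True) -> list[dict]:
    """Drop the oldest non-system messages until the token total fits the budget."""
    system = [m for m in messages if m["role"] == "system"] if preserve_system else []
    rest = [m for m in messages if not (preserve_system and m["role"] == "system")]
    while rest and sum(count_tokens(m["content"]) for m in system + rest) > max_tokens:
        rest.pop(0)  # evict the oldest conversational turn first
    return system + rest
```

A rough estimator such as `lambda text: len(text.split())` is enough to see the behavior: system messages survive, old turns fall off the front.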

Working Memory

Short-term key-value scratchpad with TTL (time-to-live) support for temporary state during workflow execution.

Inputs:

Input | Type | Description
key | string | Key for the entry
value | any | Value to store
ttl | number | Time-to-live in seconds

Outputs:

Output | Type | Description
value | any | Retrieved value
found | boolean | Whether key was found
success | boolean | Operation success
key | string | Key operated on
keys | array | All keys (list operation)
entriesCount | number | Current entry count

Configuration:

Setting | Default | Description
Operation | Get | Set, Get, Update, Delete, List All, or Clear All
Max Entries | 100 | Maximum entries (1-10000)
Default TTL | 0 (no expiry) | Default time-to-live in seconds
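
The TTL semantics above (a default of 0 meaning no expiry, per-entry overrides, and a found flag on Get) can be sketched as a small class. This is an illustration of the concept, not the node's internal storage:

```python
import time

class WorkingMemory:
    """Key-value scratchpad whose entries expire after a per-entry TTL in seconds."""

    def __init__(self, default_ttl: float = 0):
        self._store: dict[str, tuple[object, float]] = {}
        self._default_ttl = default_ttl  # 0 means entries never expire

    def set(self, key, value, ttl=None):
        ttl = self._default_ttl if ttl is None else ttl
        expires = time.monotonic() + ttl if ttl > 0 else float("inf")
        self._store[key] = (value, expires)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None or time.monotonic() > entry[1]:
            self._store.pop(key, None)  # lazily evict expired entries
            return None, False          # mirrors the node's (value, found) outputs
        return entry[0], True
```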

Episodic Memory

Store and retrieve past workflow experiences via MongoDB. Useful for agents that learn from previous executions.

Inputs:

Input | Type | Description
task | string | Task description (for recording)
decisions | array | Decisions made
outcome | string | Episode outcome
success | boolean | Whether episode was successful
query | string | Search query (for retrieval)
filter | object | MongoDB filter (for search)
episodeMetadata | object | Additional metadata

Outputs:

Output | Type | Description
episodes | array | Retrieved episodes
count | number | Number of episodes returned
episodeId | string | ID of recorded episode
success | boolean | Operation success

Configuration:

Setting | Default | Description
Operation | Retrieve Recent | Record Episode, Retrieve Similar, Retrieve Recent, or Search
Connection Type | Data Source | Data Source or Add-on (MongoDB)
Database | memory | MongoDB database name
Collection | workflow_episodes | Collection name
Max Episodes | 10 | Maximum episodes to return (1-100)

Memory Retriever

Meta-node that queries multiple memory sources and merges/ranks results. Connect up to 3 memory sources as dependencies.

Inputs:

Input | Type | Required | Description
query | string | Yes | Search query across memory sources

Outputs:

Output | Type | Description
results | array | Ranked results from all sources
totalResults | number | Total results returned
sourcesQueried | number | Number of sources queried
sourceBreakdown | object | Results count per source
rankingStrategy | string | Strategy used

Configuration:

Setting | Default | Description
Ranking Strategy | Relevance | Relevance, Recency, or Hybrid (relevance + recency)
Max Results | 10 | Maximum results (1-100)
Source Weights | (empty) | JSON weights per source (e.g., {"memory_1": 1.0, "memory_2": 0.8})

Dependencies: Memory Source 1 (required), Memory Source 2 (optional), Memory Source 3 (optional).

Shared Blackboard

Cross-agent shared key-value state via MongoDB. Enables multiple agents in a workflow to read and write shared state organized by sections.

Inputs:

Input | Type | Description
section | string | Section name
key | string | Key name
value | any | Value to write
since | string | ISO timestamp for subscribe operation

Outputs:

Output | Type | Description
value | any | Retrieved value
found | boolean | Whether key was found
sectionData | object | All data in section
changes | array | Changes since last read
success | boolean | Operation success
entriesCount | number | Number of entries

Configuration:

Setting | Default | Description
Operation | Read | Write, Read, Subscribe (get changes), or Clear Section
Board ID | (empty = workflow-scoped) | Board identifier
Connection Type | Data Source | Data Source or Add-on (MongoDB)
Database | memory | MongoDB database name

Key Workflow Patterns

Agent + AI Gateway + MCP Tools Pattern

The most common agent pattern connects three nodes via the agent's bottom dependency connectors:

[Trigger] --> [ReAct Agent] --> [Output]
               /    |    \
            [AI]  [M]  [Tools]
             |             |
      [AI Gateway]   [MCP Tools Provider]
  1. The agent's ai port connects to an AI Gateway node for LLM reasoning
  2. The agent's tools port connects to an MCP Tools Provider for external tool access
  3. The agent's memory port connects to any memory node for state persistence

Agent Loop Pattern

The ReAct Agent and Agent Loop nodes follow an iterative think-act-observe cycle:

1. THINK  - Analyze the current situation and decide next action
2. ACT - Call a tool or produce a response
3. OBSERVE - Process the tool result
4. REPEAT - Continue until goal achieved or limits reached

The loop terminates when:

  • A stop pattern is matched in the response (e.g., "FINAL ANSWER:")
  • Maximum iterations are reached
  • Token budget is exhausted
  • An unrecoverable error occurs
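
The cycle and its stop conditions can be sketched as a skeleton (illustrative only; `think` and `act` stand in for the LLM call and tool execution, and the output keys mirror the ReAct Agent's documented outputs):

```python
STOP_PATTERNS = ("FINAL ANSWER:", "Task completed")

def react_loop(think, act, goal, max_iterations=10, token_budget=0):
    """Skeleton think-act-observe loop with the documented stop conditions.

    `think(goal, trajectory)` returns (text, tokens_used, tool_call_or_None);
    `act(tool_call)` executes a tool and returns its observation.
    A token_budget of 0 means unlimited.
    """
    trajectory, total_tokens = [], 0
    for iteration in range(1, max_iterations + 1):
        text, tokens, tool_call = think(goal, trajectory)           # THINK
        total_tokens += tokens
        trajectory.append({"think": text})
        if any(p in text for p in STOP_PATTERNS):                   # stop pattern matched
            return {"finalAnswer": text, "success": True, "stopReason": "goal_achieved",
                    "iterations": iteration, "totalTokens": total_tokens}
        if tool_call:
            trajectory.append({"act": tool_call, "observe": act(tool_call)})  # ACT + OBSERVE
        if token_budget and total_tokens >= token_budget:           # budget exhausted
            return {"finalAnswer": text, "success": False, "stopReason": "cost_budget_exceeded",
                    "iterations": iteration, "totalTokens": total_tokens}
    return {"finalAnswer": None, "success": False, "stopReason": "max_iterations_reached",
            "iterations": max_iterations, "totalTokens": total_tokens}
```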

Multi-Agent Patterns

Supervisor Pattern:

[Trigger] --> [Supervisor Agent] --> [Output]
               /     |      \
            [AI] [Agents] [Tools]
                     |
           +---------+---------+
           |         |         |
       [Agent A] [Agent B] [Agent C]

The supervisor creates a plan, delegates to sub-agents, and synthesizes results.

Debate Pattern:

[Trigger] --> [Debate Agent] --> [Output]
               /       \
            [AI]    [Agents]
                       |
             +---------+---------+
             |         |         |
         [Agent A] [Agent B] [Agent C]

Multiple agents debate a topic through structured rounds until consensus.

Handoff Pattern:

[Trigger] --> [Agent A] --> [Agent Handoff] --> [Agent B] --> [Output]

Context is packaged and transferred between specialized agents.

RAG (Retrieval Augmented Generation) Pattern

[Trigger] --> [Embeddings] --> [Semantic Memory (search)] --> [RAG Agent] --> [AI Gateway] --> [Output]
  1. Generate embeddings for the user query
  2. Search semantic memory (Milvus) for relevant documents
  3. RAG Agent builds a prompt combining the query with retrieved documents
  4. AI Gateway generates the final answer with context
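
Step 3, where the RAG Agent combines the query with the top-k retrieved documents, can be sketched as a prompt builder (an illustration of the pattern; the node's actual prompt template is not documented here):

```python
def build_rag_prompt(query: str, retrieved_docs: list[dict], top_k: int = 5) -> str:
    """Combine the user query with the top-k retrieved documents into one prompt."""
    context = "\n\n".join(
        f"[Document {i + 1}]\n{doc['text']}" for i, doc in enumerate(retrieved_docs[:top_k])
    )
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )
```

The resulting string is what a node like AI Gateway would receive as its userPrompt in step 4.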

Memory-Augmented Agent Pattern

[Trigger] --> [Conversation Memory (retrieve)] --> [ReAct Agent] --> [Conversation Memory (add)] --> [Output]
                                                        |
                                                  [AI Gateway]
  1. Retrieve conversation history from memory
  2. Agent uses history as context for reasoning
  3. New exchange is saved back to memory

Best Practices

Model Selection

  • Use smaller/faster models for simple classification or extraction tasks
  • Use larger models for complex reasoning, planning, and multi-step tasks
  • Set appropriate maxTokens limits to control costs

Agent Configuration

  • Start with low maxIterations (5-10) and increase only if agents keep hitting the iteration limit before completing their tasks
  • Use specific systemPrompt instructions to guide agent behavior
  • Add stopPatterns that match your expected output format
  • Set token budgets to prevent runaway costs

Memory Usage

  • Use Working Memory for temporary scratch state within a single execution
  • Use Conversation Memory for persistent chat history across executions
  • Use Semantic Memory for vector-based retrieval (requires Milvus add-on)
  • Use Episodic Memory for learning from past workflow runs
  • Use Shared Blackboard when multiple agents need to coordinate via shared state
  • Use Memory Retriever to query across multiple memory sources at once

Error Handling

  • Check the success output from agent and memory nodes
  • Use the stopReason output from agent nodes to determine why an agent terminated
  • Set maxRetries on Supervisor Agent for resilience
  • Monitor totalTokens output to track costs

Next Steps