
Workflow Nodes

Workflow nodes are the building blocks of your automation pipelines. The Strongly platform provides 255 node types across 11 categories, covering data ingestion, transformation, AI processing, control flow, evaluation, and output delivery.

Node Categories Overview

Category | Count | Purpose
Sources | 121 | Read data from external systems and services
Transform | 33 | Parse, extract, reshape, and process data
Destinations | 23 | Write data to external systems and services
Control Flow | 20 | Manage execution paths, loops, and branching
Agents | 15 | Orchestrate multi-step AI reasoning and task execution
Triggers | 11 | Initiate workflow execution from events or schedules
Evaluation | 10 | Assess AI output quality and enforce guardrails
Memory | 8 | Store and retrieve context, conversation history, and knowledge
AI | 7 | Connect to language models, embeddings, vision, and speech
Tools | 6 | General-purpose utilities for code execution, API calls, and search
Operators | 1 | Infrastructure-level integration providers

Sources

Source nodes read data from external systems and services. Configure connection credentials once in Data Sources and reuse them across multiple workflows.

Common Configuration:

  • Connection credentials (via Data Sources)
  • Query or filter parameters
  • Response data mapping
  • Error handling and retries

Databases -- Relational

Node ID | Display Name | Description
clickhouse | ClickHouse | Real-time analytics database operations on ClickHouse
cockroachdb | CockroachDB | Execute queries on CockroachDB
cratedb | CrateDB | Execute queries on CrateDB distributed databases
greenplum-source | Greenplum | Query data from Greenplum MPP database
mssql | Microsoft SQL Server | Execute queries on Microsoft SQL Server databases
mysql-source | MySQL | Query data from MySQL database
oracle | Oracle | Execute queries on Oracle databases
postgresql-source | PostgreSQL | Query data from PostgreSQL database
questdb | QuestDB | Execute queries on QuestDB time-series databases
singlestore | SingleStore | Execute queries on SingleStore distributed databases
snowflake | Snowflake | Execute queries on Snowflake data warehouse
timescaledb | TimescaleDB | Execute queries on TimescaleDB time-series databases

Databases -- NoSQL and Document

Node ID | Display Name | Description
arangodb | ArangoDB | Execute operations on ArangoDB multi-model databases
couchbase | Couchbase | Execute operations on Couchbase databases
couchdb | CouchDB | Execute operations on Apache CouchDB
dynamodb | DynamoDB | AWS NoSQL database operations on DynamoDB
faunadb | FaunaDB | Execute operations on FaunaDB serverless databases
firestore | Firestore | Execute operations on Google Cloud Firestore
mongodb-source | MongoDB | Document database operations on MongoDB
surrealdb-source | SurrealDB | Query data from SurrealDB

Databases -- Graph

Node ID | Display Name | Description
neo4j-source | Neo4j | Query data from Neo4j graph database
neptune | Neptune | Execute queries on Amazon Neptune graph databases
tigergraph | TigerGraph | Execute operations on TigerGraph graph databases

Databases -- Vector

Node ID | Display Name | Description
chroma | Chroma | Vector database operations on Chroma
lancedb | LanceDB | Serverless vector database operations on LanceDB
marqo | Marqo | Execute operations on Marqo tensor search engines
milvus-source | Milvus | Vector similarity search with Milvus
pgvector | pgvector | Vector operations using PostgreSQL pgvector extension
pinecone | Pinecone | Vector database operations on Pinecone
qdrant | Qdrant | Vector database operations on Qdrant
vespa | Vespa | Execute operations on Vespa search engines
weaviate | Weaviate | Vector database operations on Weaviate

Databases -- Search and Cache

Node ID | Display Name | Description
elasticsearch | Elasticsearch | Search and analytics on Elasticsearch
memcached | Memcached | Execute operations on Memcached caching systems
redis-source | Redis | In-memory database operations on Redis

Databases -- Low-Code and Spreadsheet

Node ID | Display Name | Description
airtable | Airtable | Database operations on Airtable
baserow | Baserow | Execute operations on Baserow tables and databases
grist | Grist | Execute operations on Grist documents and tables
nocodb | NocoDB | Execute operations on NocoDB tables and databases
seatable | SeaTable | Execute operations on SeaTable bases and tables
supabase | Supabase | Execute operations on Supabase databases

Cloud Storage and File Systems

Node ID | Display Name | Description
dropbox | Dropbox | Execute operations on Dropbox files and folders
ftp | FTP | Execute file operations on FTP and SFTP servers
google-cloud-storage | Google Cloud Storage | Execute operations on Google Cloud Storage buckets
google-drive | Google Drive | Execute operations on Google Drive files and folders
minio | MinIO | Execute operations on MinIO and S3-compatible storage
onedrive | OneDrive | Execute operations on Microsoft OneDrive files and folders
s3-source | Amazon S3 | List or download files from Amazon S3
sftp | SFTP | Download files from SFTP server. Supports file patterns, recursive downloads, and file filtering
sharepoint | SharePoint | Execute operations on Microsoft SharePoint sites and lists

CRM and Sales

Node ID | Display Name | Description
dynamics | Microsoft Dynamics 365 | Interact with Microsoft Dynamics 365 CRM
freshdesk | Freshdesk | Interact with Freshdesk support platform
hubspot | HubSpot | Interact with HubSpot CRM
pipedrive | Pipedrive | Interact with Pipedrive Sales CRM
salesforce | Salesforce | Interact with Salesforce CRM
zendesk | Zendesk | Interact with Zendesk support

Project Management and Productivity

Node ID | Display Name | Description
asana | Asana | Interact with Asana project management
clickup | ClickUp | Interact with ClickUp project management
jira | Jira | Interact with Jira project management
linear | Linear | Interact with Linear project management
monday | monday.com | Interact with monday.com boards
notion | Notion | Execute operations on Notion workspaces
trello | Trello | Interact with Trello boards

Communication and Messaging

Node ID | Display Name | Description
discord-source | Discord | Interact with Discord servers
gmail | Gmail | Interact with Gmail API
microsoft-outlook | Microsoft Outlook | Interact with Microsoft Outlook via Graph API
microsoft-teams | Microsoft Teams | Interact with Microsoft Teams
ms-exchange-source | Microsoft Exchange | Read emails from Microsoft Exchange and save as .eml files with embedded attachments
slack-source | Slack | Interact with Slack workspaces
telegram | Telegram | Interact with Telegram Bot API
twilio | Twilio | Send SMS, make calls, and interact with Twilio
whatsapp | WhatsApp | Send and receive messages via WhatsApp Business Cloud API

Social Media

Node ID | Display Name | Description
facebook | Facebook | Interact with Facebook Graph API for pages and ads
linkedin | LinkedIn | Interact with LinkedIn professional network
twitter | Twitter/X | Interact with Twitter/X API
youtube | YouTube | Interact with YouTube Data API

Analytics and Monitoring

Node ID | Display Name | Description
google-ads | Google Ads | Manage Google Ads campaigns and reporting
google-analytics | Google Analytics | Interact with Google Analytics 4
grafana | Grafana | Interact with Grafana monitoring
metabase | Metabase | Interact with Metabase BI platform
posthog | PostHog | Interact with PostHog product analytics
segment | Segment | Interact with Segment customer data platform
sentry | Sentry | Interact with Sentry error tracking

Google and Microsoft Workspace

Node ID | Display Name | Description
excel-online | Excel Online | Execute operations on Microsoft Excel 365 workbooks
google-calendar | Google Calendar | Interact with Google Calendar
google-docs | Google Docs | Interact with Google Docs
google-forms | Google Forms | Interact with Google Forms
google-meet | Google Meet | Interact with Google Meet
google-sheets | Google Sheets | Read and write data to Google Sheets

DevOps and CI/CD

Node ID | Display Name | Description
cloudflare | Cloudflare | Interact with Cloudflare infrastructure
github | GitHub | Interact with GitHub repositories
gitlab | GitLab | Interact with GitLab repositories
jenkins | Jenkins | Interact with Jenkins CI/CD

Identity and Access Management

Node ID | Display Name | Description
entra-id | Microsoft Entra ID | Interact with Microsoft Entra ID (Azure AD)
ldap | LDAP | Query and manage LDAP directories
okta | Okta | Interact with Okta Identity Management

Message Queues and Streaming

Node ID | Display Name | Description
amqp-source | AMQP | Execute operations on AMQP brokers like RabbitMQ
kafka | Kafka | Execute operations on Apache Kafka topics
mqtt | MQTT | Execute operations on MQTT brokers
nats | NATS | Execute operations on NATS messaging system
pulsar | Pulsar | Execute operations on Apache Pulsar
rabbitmq-source | RabbitMQ | Consume messages from RabbitMQ
sns-source | AWS SNS | Execute operations on AWS Simple Notification Service
sqs | AWS SQS | Execute operations on AWS Simple Queue Service

E-Commerce and Payments

Node ID | Display Name | Description
paypal | PayPal | Interact with PayPal payments
quickbooks | QuickBooks | Interact with QuickBooks Online accounting
shopify | Shopify | Interact with Shopify stores
stripe | Stripe | Interact with Stripe payments
woocommerce | WooCommerce | Interact with WooCommerce stores

IT Service Management

Node ID | Display Name | Description
pagerduty | PagerDuty | Interact with PagerDuty incident management
servicenow | ServiceNow | Interact with ServiceNow ITSM platform
workday | Workday | Interact with Workday HR and Finance platform

APIs and General Connectivity

Node ID | Display Name | Description
bigquery | BigQuery | Execute queries on Google BigQuery
exec | Exec | Execute shell commands and scripts
graphql | GraphQL | Execute GraphQL queries and mutations
rest-api | REST API | Fetch data from REST API endpoints with authentication and flexible configuration
ssh | SSH | Execute commands and transfer files via SSH

Other Integrations

Node ID | Display Name | Description
intercom | Intercom | Interact with Intercom customer messaging platform
typeform | Typeform | Interact with Typeform forms and surveys
wordpress | WordPress | Interact with WordPress REST API
zoom | Zoom | Interact with Zoom video conferencing

Data Sources

Configure connection credentials once in Data Sources and reuse them across multiple workflows. This avoids embedding secrets in workflow definitions.


Transform

Transform nodes parse, extract, reshape, and process data between source and destination nodes. They handle format conversion, aggregation, filtering, and custom logic.

Node ID | Display Name | Description
aggregate | Aggregate | Aggregate data with sum, average, count, and more
ai-transform | AI Transform | Transform data using AI
code | Code | Execute custom Python code for data transformation
compare | Compare | Compare two datasets to find differences
compression | Compression | Compress and decompress data
crypto | Crypto | Encryption, hashing, encoding, and cryptographic operations
data-aggregator | Data Aggregator | Aggregate, flatten, filter, and transform array data from loop results
data-lookup | Data Lookup | High-performance in-memory lookup. Supports cached mode (O(1) hash lookups from file) or inline mode (reference array). Perfect for database lookups in row loops.
datetime | Date/Time | Date and time parsing, formatting, and arithmetic
dedupe | Dedupe | Remove duplicate items from arrays
email-parser | Email Parser | Parse email files and extract content, attachments, and metadata
excel-parser | Excel Parser | Parse Excel files and extract data as structured JSON
file-extractor | File Extractor | Extract files from ZIP, TAR, GZ, and other archive formats. Supports nested archives and file filtering
filter | Filter | Filter data based on conditions and rules
html-extract | HTML Extract | Extract data from HTML content
jwt | JWT | Create, verify, and decode JSON Web Tokens
limit | Limit | Limit the number of items in an array
markdown | Markdown | Parse and convert Markdown content
merge-data | Merge Data | Merge data from multiple sources using various strategies like concatenation, object merge, or combine
pdf-generator | PDF Generator | Generate PDF documents from templates, data, or HTML/markdown content
pdf-parser | PDF Parser | Extract data from PDF files
pdf-redactor | PDF Redactor | Redact or obfuscate sensitive content in PDF documents. Supports row-based redaction and keyword/pattern modes with S3-cached indexing for faster processing
redaction-list-builder | Redaction List Builder | Build a list of values to redact from all rows EXCEPT the current/matched row. Perfect for creating per-row redacted PDFs
rename-keys | Rename Keys | Rename object keys and transform naming conventions
report-builder | Report Builder | Generate formatted reports (HTML, PDF, Markdown) with tables, sections, grouping, and master-detail layouts
set-fields | Set Fields | Set, edit, rename, and delete data fields
sort | Sort | Sort arrays of data by specified fields
summarize | Summarize | Create statistical summaries of data
table-parser | Table Parser | Extract structured data from tables with optional filtering, validation, and repair
text-chunker | Text Chunker | Split text into chunks for embedding and RAG pipelines. Supports multiple chunking strategies with overlap
to-file | To File | Convert data to file format
totp | TOTP | Generate and verify Time-based One-Time Passwords
xml-parser | XML Parser | Parse XML to JSON or convert JSON to XML
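
The Text Chunker's fixed-size strategy with overlap can be sketched as follows. This is a minimal illustration of the technique, not the node's implementation; `chunk_size` and `overlap` are hypothetical parameter names:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks; consecutive chunks share `overlap` characters."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # final chunk reached the end of the text
    return chunks

chunks = chunk_text("a" * 500, chunk_size=200, overlap=50)
```

The overlap preserves context across chunk boundaries, which tends to improve retrieval quality in RAG pipelines.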

Destinations

Destination nodes write processed data to external systems and services.

Common Configuration:

  • Destination credentials (via Data Sources)
  • Data mapping and field selection
  • Success/failure handling
  • Delivery confirmation

Node ID | Display Name | Description
amqp-dest | AMQP | Publish messages via AMQP (RabbitMQ)
chat-response | Chat Response | Send response to chat interface
discord-dest | Discord | Send messages and interact with Discord
greenplum-dest | Greenplum | Write data to Greenplum MPP database
mailchimp | Mailchimp | Email marketing and automation with Mailchimp
milvus-dest | Milvus | Store vectors in Milvus
mongodb-dest | MongoDB | Save data to MongoDB
ms-exchange-dest | Microsoft Exchange | Send emails via Microsoft 365/Exchange with attachment support
mysql-dest | MySQL | Save data to MySQL database
neo4j-dest | Neo4j | Store data in Neo4j graph database
notification | Notification | Send multi-channel notifications (email, Slack, webhook, SMS)
postgresql-dest | PostgreSQL | Save data to PostgreSQL database
rabbitmq-dest | RabbitMQ | Publish messages to RabbitMQ
redis-dest | Redis | Write data to Redis
s3-dest | Amazon S3 | Upload files to S3 bucket
sendgrid | SendGrid | Send email via SendGrid API with support for templates, attachments, and scheduling
slack-dest | Slack | Send messages and interact with Slack
smtp | SMTP | Send email via SMTP server with support for HTML, attachments, and templates
sns-dest | AWS SNS | Send notifications via AWS Simple Notification Service
streaming-response | Streaming Response | Stream data chunks to clients
surrealdb-dest | SurrealDB | Write data to SurrealDB
teams | Microsoft Teams | Send messages to Microsoft Teams
webhook-response | Webhook Response | Send HTTP response to webhook caller

Control Flow

Control flow nodes manage execution paths, looping, branching, and data routing within workflows.

Node ID | Display Name | Description
backtrack | Backtrack | Checkpoint and restore workflow state for backtracking
conditional | Conditional | If/Else conditional branching
consensus | Consensus | Multi-agent voting and decision-making
event-wait | Event Wait | Wait for events from multiple sources before continuing
goal-loop | Goal Loop | Loop until LLM determines the goal is achieved
human-checkpoint | Human Checkpoint | Pause workflow for human review, approval, or input. Essential for AI safety and human oversight in agentic workflows
human-feedback | Human Feedback | Collect structured human input mid-workflow
loop | Loop | Iterate over arrays or repeat actions
map | Map | Process array items in parallel using visual scope boxes
merge | Merge | Merge data from multiple workflow branches
noop | No-Op | Pass-through node that does nothing
parallel-branch | Parallel Branch | Execute multiple branches in parallel with configurable join strategies (all, any, first-N, majority)
priority-queue | Priority Queue | Queue items and process in priority order
retry | Retry | Implement retry logic for failed operations with configurable backoff strategies
split | Split | Split arrays into individual items for separate processing
stop-error | Stop/Error | Stop workflow execution with an error
sub-workflow | Sub-Workflow | Execute another workflow
switch-case | Switch/Case | Multi-way branching based on value matching with support for patterns, ranges, and multiple cases
wait | Wait | Pause workflow execution for a duration or until a condition is met
while-loop | While Loop | Execute a branch repeatedly while a condition is true, with configurable limits and break/continue support

Conditional Node

Execute different branches based on conditions:

// Condition examples
{{ input.status }} === "approved"
{{ input.amount }} > 1000
{{ input.tags }}.includes("urgent")

Supported Operators:

  • Comparison: ==, !=, >, >=, <, <=
  • String: contains, starts_with, ends_with, regex
  • Null checks: is_null, is_not_null, is_empty, is_not_empty
  • List: in, not_in
  • Boolean: is_true, is_false
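
A plain-Python sketch of how such an operator set might be evaluated (illustrative only; this is not the platform's expression engine):

```python
import re

# Illustrative operator table; names mirror the list above.
OPS = {
    "==": lambda a, b: a == b,
    "!=": lambda a, b: a != b,
    ">": lambda a, b: a > b,
    ">=": lambda a, b: a >= b,
    "<": lambda a, b: a < b,
    "<=": lambda a, b: a <= b,
    "contains": lambda a, b: b in a,
    "starts_with": lambda a, b: str(a).startswith(b),
    "ends_with": lambda a, b: str(a).endswith(b),
    "regex": lambda a, b: re.search(b, str(a)) is not None,
    "is_null": lambda a, b: a is None,
    "is_not_null": lambda a, b: a is not None,
    "is_empty": lambda a, b: not a,
    "is_not_empty": lambda a, b: bool(a),
    "in": lambda a, b: a in b,
    "not_in": lambda a, b: a not in b,
    "is_true": lambda a, b: a is True,
    "is_false": lambda a, b: a is False,
}

def evaluate(value, operator, operand=None):
    """Apply one operator to a resolved input value and optional operand."""
    return OPS[operator](value, operand)
```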

Loop Node

Iterate over array data with accumulator support:

// Loop over items
items: {{ apiResponse.data.users }}

// Access current item in loop
{{ loop.item.name }}
{{ loop.index }}

// Accumulator collects results from each iteration
// Access via final_results when loop completes
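
The accumulator semantics described above can be sketched in plain Python (illustrative; `loop` and `final_results` mirror the variable names in the snippet):

```python
def run_loop(items, body):
    """Illustrative loop semantics: `body` sees the current item and index,
    and the accumulator collects each iteration's result as final_results."""
    final_results = []
    for index, item in enumerate(items):
        loop = {"item": item, "index": index}
        final_results.append(body(loop))
    return final_results

users = [{"name": "Ada"}, {"name": "Lin"}]
results = run_loop(users, lambda loop: f"{loop['index']}:{loop['item']['name']}")
```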

Map Node

Transform each item in an array with parallel processing:

// Input array
{{ source.products }}

// Transform expression
{
"id": {{ item.id }},
"price": {{ item.price * 1.1 }}
}
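
Conceptually this is a parallel map; a minimal sketch with a thread pool (illustrative, not the node's implementation — the transform mirrors the expression above):

```python
from concurrent.futures import ThreadPoolExecutor

def transform(item):
    # Mirrors the transform expression above: keep the id, add 10% to the price.
    return {"id": item["id"], "price": round(item["price"] * 1.1, 2)}

products = [{"id": 1, "price": 10.0}, {"id": 2, "price": 20.0}]
with ThreadPoolExecutor() as pool:
    mapped = list(pool.map(transform, products))  # order of results is preserved
```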

Data Aggregator

Process loop results with multiple operations:

// Operations
[
  {"type": "extract", "field": "pdfPath", "outputField": "allPdfs"},
  {"type": "flatten", "field": "errors", "outputField": "allErrors"},
  {"type": "filter", "condition": {"field": "status", "operator": "==", "value": "failed"}},
  {"type": "count", "outputField": "totalCount"}
]
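
The operation list above might be applied along these lines (a sketch of plausible semantics, not the node's actual code; note that `filter` narrows the rows seen by later operations):

```python
def aggregate(rows, operations):
    """Apply extract/flatten/filter/count operations to loop results."""
    output = {}
    current = rows
    for op in operations:
        if op["type"] == "extract":
            output[op["outputField"]] = [r[op["field"]] for r in current if op["field"] in r]
        elif op["type"] == "flatten":
            output[op["outputField"]] = [x for r in current for x in r.get(op["field"], [])]
        elif op["type"] == "filter":
            c = op["condition"]
            current = [r for r in current if r.get(c["field"]) == c["value"]]  # only "==" shown
        elif op["type"] == "count":
            output[op["outputField"]] = len(current)
    return output

rows = [
    {"pdfPath": "a.pdf", "status": "ok", "errors": []},
    {"pdfPath": "b.pdf", "status": "failed", "errors": ["bad row"]},
]
summary = aggregate(rows, [
    {"type": "extract", "field": "pdfPath", "outputField": "allPdfs"},
    {"type": "flatten", "field": "errors", "outputField": "allErrors"},
    {"type": "filter", "condition": {"field": "status", "operator": "==", "value": "failed"}},
    {"type": "count", "outputField": "totalCount"},
])
```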

Agents

Agent nodes orchestrate complex, multi-step AI workflows using specialized reasoning patterns for autonomous task execution.

Node ID | Display Name | Description
agent-handoff | Agent Handoff | Package and transfer context between agent nodes
agent-loop | Agent Loop | Configurable autonomous think-act-observe agent loop
column-mapper | Column Mapper | Uses LLM to intelligently map source columns to a target schema. Handles varying column names across different data sources by understanding semantic meaning. Supports database caching for known mappings
data-cleanup | Data Cleanup | Uses LLM to validate and fix malformed data rows from PDF extraction. Detects shifted columns, merged values, and data type mismatches
debate-agent | Debate Agent | Multi-agent debate pattern for reaching consensus through structured argumentation, critique, and synthesis
document-classification | Document Classification | Intelligent document classification agent
entity-extraction | Entity Extraction | Intelligent entity extraction agent
function-calling | Function Calling | Orchestrates function calls from AI responses, extracting and managing tool calls
multi-agent-chat | Multi-Agent Chat | Multiple AI personas collaborate on a shared discussion thread
planner | Planner | Decompose complex goals into ordered sub-tasks with dependencies
rag | RAG | Retrieval Augmented Generation agent that combines retrieved documents with AI generation
react-agent | ReAct Agent | Autonomous AI agent using the ReAct (Reasoning + Acting) pattern. Iteratively thinks, acts using tools, and observes results until the goal is achieved
reflection | Reflection | Self-review and iterative content improvement via critique-revise cycles
supervisor-agent | Supervisor Agent | Orchestrates multiple sub-agents to accomplish complex tasks. Creates execution plans, delegates work, and synthesizes results
tool-router | Tool Router | LLM-based dynamic tool selection for a given task

Agent Patterns:

  • ReAct: Autonomous reasoning and acting loop with tool use
  • Debate: Multiple AI perspectives argue and converge on conclusions
  • Supervisor: Hierarchical task delegation and result synthesis
  • RAG: Knowledge-grounded generation with document retrieval
  • Function Calling: Tool use orchestration for AI actions
  • Reflection: Self-critique and iterative improvement
  • Multi-Agent Chat: Collaborative discussion between AI personas
  • Planner: Goal decomposition into ordered sub-tasks
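
The think-act-observe loop shared by the ReAct and Agent Loop patterns can be sketched as follows. Here `think` stands in for the LLM call and `tools` for connected tool nodes; both names are assumptions for illustration:

```python
def react_loop(goal, think, tools, max_steps=5):
    """Minimal think-act-observe loop: `think` returns either
    ("final", answer) or ("act", tool_name, tool_input)."""
    history = []
    for _ in range(max_steps):
        decision = think(goal, history)           # think
        if decision[0] == "final":
            return decision[1]
        _, tool_name, tool_input = decision
        observation = tools[tool_name](tool_input)  # act
        history.append((tool_name, tool_input, observation))  # observe
    return None  # goal not reached within the step budget

# Toy run: a scripted "LLM" that looks something up, then answers.
def scripted_think(goal, history):
    if not history:
        return ("act", "lookup", "capital of France")
    return ("final", history[-1][2])

answer = react_loop("capital?", scripted_think, {"lookup": lambda q: "Paris"})
```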

Learn more about AI Agents


Triggers

Triggers initiate workflow execution. Every workflow must start with exactly one trigger node.

Node ID | Display Name | Description
chat-trigger | Chat Trigger | Trigger workflows from chat/conversational interfaces
email-trigger | Email Trigger | Trigger workflows when new emails arrive via IMAP
error-trigger | Error Trigger | Trigger workflows from error events in other workflows
file-trigger | File Trigger | Trigger workflows based on local file system changes
form | Form | Accept public form submissions with file uploads and CAPTCHA protection
multi-modal-input | Multi-Modal Input | Accept mixed media types as workflow input (text, image, audio, file)
rest-api-trigger | REST API Trigger | Expose workflow as authenticated REST API endpoint
rss-trigger | RSS Trigger | Trigger workflows when new RSS/Atom feed items appear
schedule | Schedule | Trigger workflow on a schedule
sse-trigger | SSE Trigger | Trigger workflow on Server-Sent Events
webhook | Webhook | Receive webhooks from external services with maximum security

Learn more about triggers


Evaluation

Evaluation nodes assess AI output quality, detect hallucinations, enforce guardrails, and enable systematic testing of AI workflows. All LLM-based evaluation nodes connect to an AI Gateway for assessment.

Node ID | Display Name | Description
answer-quality | Answer Quality | Evaluates overall answer quality using a composite of metrics: correctness, completeness, helpfulness, and coherence. Provides a holistic assessment of LLM response quality
cost-tracker | Cost Tracker | Track token usage and estimated costs with budget limits
faithfulness-checker | Faithfulness Checker | Detects hallucinations by checking if the generated response is grounded in the provided context. Essential for RAG systems to ensure answers do not contain fabricated information
guardrails | Guardrails | Content validation with PII detection, toxicity checking, and custom rules
llm-as-judge | LLM as Judge | Uses an LLM to evaluate outputs based on configurable criteria. Supports single scoring, multi-criteria evaluation, and chain-of-thought reasoning for reliable assessments
output-parser | Output Parser | Parse and validate LLM output into structured formats
pairwise-comparator | Pairwise Comparator | Compares two outputs/responses and determines which is better. Ideal for A/B testing, model comparison, prompt optimization, and relative quality assessment
rag-metrics | RAG Metrics | Comprehensive RAG pipeline evaluation with context precision, context recall, answer relevancy, and faithfulness metrics. Essential for optimizing retrieval-augmented generation systems
rate-limiter | Rate Limiter | Enforce rate limits with token bucket or sliding window
relevance-grader | Relevance Grader | Evaluates whether retrieved documents or context are relevant to the query. Essential for RAG pipeline evaluation and retrieval quality assessment

Use Cases:

  • RAG pipeline quality monitoring
  • Hallucination detection and prevention
  • A/B testing prompts and models
  • Automated quality gates in production
  • Continuous evaluation of AI outputs
  • Cost tracking and budget enforcement
  • Content safety and PII detection

Configuration:

  • Select evaluation criteria
  • Configure scoring scales (0-1, 1-5, 1-10, binary)
  • Set pass/fail thresholds
  • Enable chain-of-thought reasoning
  • Connect to AI Gateway for judge model
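
As an illustration of threshold gating across scoring scales, a judge score can be normalized before the pass/fail check (the scale names and normalization here are assumptions, not the platform's API):

```python
# Map each scoring scale to its (min, max) range; "binary" behaves like 0-1.
SCALES = {"0-1": (0.0, 1.0), "binary": (0.0, 1.0), "1-5": (1.0, 5.0), "1-10": (1.0, 10.0)}

def passes(score, scale="0-1", threshold=0.7):
    """Normalize a judge score to 0-1, then compare against the threshold."""
    lo, hi = SCALES[scale]
    normalized = (score - lo) / (hi - lo)
    return normalized >= threshold

gate = passes(4.5, scale="1-5", threshold=0.7)  # (4.5 - 1) / 4 = 0.875
```
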

Metrics Logging

Evaluation nodes automatically log metrics (scores, pass rates) that can be viewed in the Workflow Monitor's execution trace and compared across runs.


Memory

Memory nodes store and retrieve conversation context, knowledge, and workflow state for multi-turn and agent-based workflows.

Node ID | Display Name | Description
context-buffer | Context Buffer | Manage working memory and context windows
conversation-memory | Conversation Memory | Store and retrieve conversation history
episodic-memory | Episodic Memory | Store and retrieve past workflow experiences via MongoDB
knowledge-base | Knowledge Base | Store and query structured knowledge
memory-retriever | Memory Retriever | Query multiple memory sources and merge/rank results
semantic-memory | Semantic Memory | Vector store with embedding-based retrieval via Milvus
shared-blackboard | Shared Blackboard | Cross-agent shared key-value state via MongoDB
working-memory | Working Memory | Short-term key-value scratchpad with TTL support

Use Cases:

  • Multi-turn conversations with context retention
  • Context-aware AI responses
  • RAG (Retrieval Augmented Generation) knowledge stores
  • Long-term memory for autonomous agents
  • Cross-agent state sharing in multi-agent workflows
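
A TTL scratchpad of the kind the Working Memory node describes can be sketched in a few lines (illustrative; the node's actual storage backend and API are not shown here):

```python
import time

class WorkingMemory:
    """Illustrative short-term key-value scratchpad with per-key TTL."""
    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl_seconds=60.0):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily evict expired entries
            return default
        return value

mem = WorkingMemory()
mem.set("draft", "hello", ttl_seconds=0.05)
fresh = mem.get("draft")
time.sleep(0.1)
expired = mem.get("draft", default="gone")
```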

AI

AI nodes connect to language models for inference, embeddings, vision, and speech processing.

Node ID | Display Name | Description
ai-gateway | AI Gateway | Process with AI models
embeddings | Embeddings | Generate vector embeddings via AI Gateway
image-generation | Image Generation | Generate images via AI Gateway with async job-based processing
llm | LLM | Text and chat completion via AI Gateway
speech-to-text | Speech to Text | Transcribe audio to text
text-to-speech | Text to Speech | Generate speech audio from text via AI Gateway
vision | Vision | Analyze images and visual content with AI models

Configuration:

  • Select model from AI Gateway
  • Set prompt template
  • Configure parameters (temperature, max tokens)
  • Define response format
  • Token usage tracking

Advanced Features:

  • Streaming responses
  • Function calling
  • Multi-turn conversations
  • Prompt engineering
  • Cost tracking

Learn more about AI in workflows


Tools

Tool nodes provide general-purpose utilities for code execution, HTTP calls, file operations, and web interaction.

Node ID | Display Name | Description
api-caller | API Caller | Make dynamic HTTP API calls with optional datasource authentication
calculator | Calculator | Safe mathematical expression evaluator (no eval, uses AST)
code-interpreter | Code Interpreter | Execute Python or JavaScript code in a sandboxed subprocess
file-manager | File Manager | Read, write, and copy files via workflow storage
web-browser | Web Browser | Chromium-based browser with full JS rendering, screenshots, and PDF generation
web-search | Web Search | Search the web using SerpAPI, Brave Search, Tavily, or a generic search API endpoint
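
The Calculator's "no eval, uses AST" approach can be illustrated with a small AST-walking evaluator (a sketch of the technique, not the tool's actual code):

```python
import ast
import operator

# Only these arithmetic operators are allowed; anything else is rejected.
_BIN_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.Mod: operator.mod,
}
_UNARY_OPS = {ast.USub: operator.neg, ast.UAdd: operator.pos}

def safe_eval(expression: str) -> float:
    """Evaluate an arithmetic expression by walking its AST; never calls eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _BIN_OPS:
            return _BIN_OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _UNARY_OPS:
            return _UNARY_OPS[type(node.op)](walk(node.operand))
        raise ValueError("disallowed expression")
    return walk(ast.parse(expression, mode="eval"))

result = safe_eval("2 * (3 + 4) ** 2")  # 98
```

Because only whitelisted node types are walked, names, calls, and attribute access all raise instead of executing.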

Operators

Operators are infrastructure-level nodes that provide integration capabilities to other nodes in the workflow.

Node ID | Display Name | Description
mcp-tools-provider | MCP Tools Provider | Provides MCP server tools to agents. Connect to an agent's 'tools' connector

MCP (Model Context Protocol) servers provide 139 pre-integrated tools that can be exposed to agent nodes through this operator.

Learn more about MCP Tools


Node Configuration

Common Settings

All nodes share these basic settings:

Identity

  • Name: Display name on canvas
  • Description: Purpose and notes
  • Enabled: Toggle execution on/off

Execution

  • Retry Count: Number of retry attempts
  • Retry Delay: Wait time between retries
  • Timeout: Maximum execution time

Error Handling

  • On Error: Continue, stop workflow, or branch
  • Fallback Value: Default value on failure
  • Error Output: Capture error details

Data Mapping

Reference data from previous nodes:

// Simple field reference
{{ triggerNode.userId }}

// Nested fields
{{ apiCall.response.data.items[0].name }}

// Conditional mapping
{{ condition ? value1 : value2 }}

// Array operations
{{ array }}.map(item => item.id)
{{ array }}.filter(item => item.active)
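
Resolving simple `{{ node.field }}` references against prior node outputs works roughly like this sketch (regex-based and illustrative; the real expression engine also handles the indexing, conditionals, and array operations shown above):

```python
import re
from functools import reduce

def resolve(template: str, context: dict) -> str:
    """Replace each {{ dotted.path }} with the value found in `context`."""
    def lookup(match):
        path = match.group(1).strip().split(".")
        return str(reduce(lambda obj, key: obj[key], path, context))
    return re.sub(r"\{\{\s*([^}]+?)\s*\}\}", lookup, template)

context = {"triggerNode": {"userId": "u-42"}}
rendered = resolve("User: {{ triggerNode.userId }}", context)
```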

Variable Context

Each node has access to:

Context | Description
{{ trigger }} | Trigger node output
{{ nodeName }} | Output from specific node
{{ env }} | Environment variables
{{ workflow }} | Workflow metadata
{{ execution }} | Current execution details

Best Practices

Node Organization

  1. Left-to-Right Flow: Arrange nodes to show progression
  2. Vertical Spacing: Group related processing paths
  3. Descriptive Names: Use clear, purpose-driven names
  4. Documentation: Add notes to complex nodes

Performance Optimization

  1. Minimize Sequential Chains: Use parallel execution where possible
  2. Cache Results: Store frequently accessed data
  3. Batch Operations: Process multiple items together
  4. Filter Early: Remove unnecessary data early in pipeline

Error Handling

  1. Add Retries: Configure retries for network operations
  2. Fallback Values: Provide defaults for optional data
  3. Error Branches: Route errors to notification/logging
  4. Validation: Check data format before processing
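
The retry settings above amount to retry-with-backoff; a minimal sketch (parameter names mirror the node settings but are illustrative):

```python
import time

def with_retries(operation, retry_count=3, retry_delay=0.01, backoff=2.0):
    """Run `operation`, retrying on exception with exponential backoff."""
    delay = retry_delay
    for attempt in range(retry_count + 1):
        try:
            return operation()
        except Exception:
            if attempt == retry_count:
                raise          # out of retries: surface the error
            time.sleep(delay)
            delay *= backoff   # exponential backoff between attempts

calls = {"n": 0}
def flaky():
    # Simulate a network operation that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = with_retries(flaky)
```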

Security

  1. Credentials: Use Data Sources for sensitive credentials
  2. Input Validation: Sanitize user inputs
  3. Output Filtering: Do not expose sensitive data
  4. Access Control: Review workflow permissions

Node Examples

Example: Data Enrichment Pipeline

Webhook Trigger
-> REST API (Fetch user data)
-> AI Gateway (Analyze sentiment)
-> MongoDB (Store results)
-> Webhook Response (Notify completion)

Example: Document Processing

Schedule Trigger
-> Amazon S3 (List new PDFs)
-> Loop (For each PDF)
-> PDF Parser (Extract text)
-> Entity Extraction (Find entities)
-> Neo4j (Store relationships)

Example: Batch Processing with Aggregation

Schedule Trigger
-> SFTP Source (Download files)
-> Loop (For each file)
-> Conditional (Check file type)
-> [.zip] File Extractor -> PDF Parser
-> [.pdf] PDF Parser (direct)
-> Table Parser (Extract data)
-> MySQL (Lookup)
-> PDF Redactor (Redact sensitive data)
-> Data Aggregator (Collect results)
-> [allPdfs] Amazon S3 (Upload batch)
-> [allErrors] PDF Generator (Create report)
-> Microsoft Exchange (Email report)

Example: Conditional Routing

Form Trigger
-> Conditional (Check priority)
-> [High Priority]
-> AI Gateway (Urgent response)
-> Gmail (Send immediately)
-> [Normal Priority]
-> MongoDB (Queue for later)

Example: Multi-Agent RAG Pipeline

Chat Trigger
-> Embeddings (Generate query vector)
-> Semantic Memory (Retrieve relevant documents)
-> RAG Agent (Generate grounded response)
-> Faithfulness Checker (Verify no hallucinations)
-> Conditional (Check faithfulness score)
-> [Pass] Chat Response (Return answer)
-> [Fail] Reflection (Revise answer)
-> Chat Response (Return revised answer)

Next Steps