Workflow Nodes

Workflow nodes are the building blocks of your automation pipelines. Each node type serves a specific purpose in the data processing flow.

Node Categories

Triggers

Triggers initiate workflow execution. Every workflow must start with exactly one trigger node.

Learn more about triggers →

Sources

Sources read data from external systems and services.

  • Amazon S3: List or download files from Amazon S3
  • Greenplum: Query data from Greenplum MPP database
  • Microsoft Exchange: Read emails from Microsoft Exchange and save as .eml files with embedded attachments
  • Milvus: Vector similarity search with Milvus
  • MongoDB: Query data from MongoDB
  • MySQL: Query data from MySQL database
  • Neo4j: Query data from Neo4j graph database
  • PostgreSQL: Query data from PostgreSQL database
  • RabbitMQ: Consume messages from RabbitMQ
  • Redis: Read data from Redis
  • REST API: Fetch data from REST API endpoints with authentication and flexible configuration
  • SFTP: Download files from an SFTP server. Supports file patterns, recursive downloads, and file filtering
  • SurrealDB: Query data from SurrealDB

Common Configuration

  • Connection credentials (via Data Sources)
  • Query or filter parameters
  • Response data mapping
  • Error handling and retries

Data Sources

Configure connection credentials once in Data Sources and reuse across multiple workflows.
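
As an illustrative sketch, a REST API source might combine these settings like so (field names are hypothetical, not the exact schema):

// Hypothetical REST API source configuration
{
  "dataSource": "crm-api",            // credentials managed in Data Sources
  "method": "GET",
  "path": "/v1/users",
  "query": { "status": "active" },    // query or filter parameters
  "responseMapping": "{{ response.data.users }}",
  "retryCount": 3                     // error handling and retries
}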

Transform

Transform nodes parse, extract, and process data from documents and various formats.

  • Data Aggregator: Aggregate, flatten, filter, and transform array data from loop results
  • Data Lookup: Look up values in a reference array without making additional database queries. Efficient in-memory matching for batch processing
  • Email Parser: Parse email files and extract content, attachments, and metadata
  • Excel Parser: Parse Excel files and extract data as structured JSON
  • File Extractor: Extract files from ZIP, TAR, GZ, and other archive formats. Supports nested archives and file filtering
  • Merge Data: Merge data from multiple sources using various strategies like concatenation, object merge, or combine
  • PDF Generator: Generate PDF documents from templates, data, or HTML/markdown content
  • PDF Parser: Extract data from PDF files
  • PDF Redactor: Redact or obfuscate sensitive content in PDF documents
  • Redaction List Builder: Build a list of values to redact from all rows except the current row. Perfect for creating per-row redacted PDFs
  • Table Parser: Extract structured data from tables in various formats (CSV, TSV, markdown, text)
  • Text Chunker: Split text into chunks for embedding and RAG pipelines. Supports multiple chunking strategies with overlap (see the sketch after this list)
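
For example, the Text Chunker might be configured along these lines (field names are illustrative, not the exact schema):

// Hypothetical Text Chunker configuration
{
  "strategy": "recursive",    // one of several supported chunking strategies
  "chunkSize": 1000,          // target characters per chunk
  "chunkOverlap": 200         // overlap between consecutive chunks
}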

Features

  • Automatic format detection
  • Table extraction
  • Metadata parsing
  • Attachment handling
  • Text cleaning and normalization

AI

AI nodes connect to language models for inference and generation.

  • AI Gateway: Process data with AI models via the Strongly AI Gateway

Configuration

  • Select model from AI Gateway
  • Set prompt template
  • Configure parameters (temperature, max tokens)
  • Define response format
  • Enable token usage tracking
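
A minimal sketch of an AI Gateway node configuration (field names and model are illustrative):

// Hypothetical AI Gateway node configuration
{
  "model": "gpt-4o",                            // selected from the AI Gateway
  "prompt": "Summarize: {{ pdfParser.text }}",  // prompt template with node references
  "temperature": 0.2,
  "maxTokens": 500,
  "responseFormat": "json"
}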

Advanced Features

  • Streaming responses
  • Function calling
  • Multi-turn conversations
  • Prompt engineering
  • Cost tracking

Learn more about AI in workflows →

Memory

Memory nodes store and retrieve conversation context and knowledge.

  • Context Buffer: Manage working memory and context windows
  • Conversation Memory: Store and retrieve conversation history
  • Knowledge Base: Store and query structured knowledge

Use Cases

  • Multi-turn conversations
  • Context-aware responses
  • RAG (Retrieval Augmented Generation)
  • Long-term memory for agents
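
For instance, a downstream AI Gateway prompt might pull from memory nodes like this (node and field names are illustrative):

// Prompt template combining retrieved knowledge and conversation history
Context: {{ knowledgeBase.results }}
History: {{ conversationMemory.history }}
Question: {{ trigger.message }}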

Evaluation

Evaluation nodes assess AI output quality, detect hallucinations, and enable systematic testing of your AI workflows. All evaluation nodes connect to an AI Gateway for LLM-based assessment.

  • LLM as Judge: General-purpose evaluation with configurable criteria. Supports single-criterion, multi-criteria, and custom rubric evaluation with chain-of-thought reasoning
  • Relevance Grader: Evaluate if retrieved documents are relevant to queries. Ideal for RAG pipeline quality assessment with binary, graded, or ternary classification
  • Faithfulness Checker: Detect hallucinations by verifying answers are grounded in source context. Uses claim extraction and verification
  • Answer Quality: Multi-dimensional quality scoring across correctness, completeness, helpfulness, coherence, conciseness, safety, and fluency
  • RAG Metrics: Comprehensive RAG pipeline evaluation with context precision, context recall, answer relevancy, faithfulness, and context utilization
  • Pairwise Comparator: Compare two responses to determine which is better. Includes position-bias mitigation through order shuffling. Ideal for A/B testing and model comparison

Use Cases

  • RAG pipeline quality monitoring
  • Hallucination detection and prevention
  • A/B testing prompts and models
  • Automated quality gates in production
  • Continuous evaluation of AI outputs

Configuration

  • Select evaluation criteria
  • Configure scoring scales (0-1, 1-5, 1-10, binary)
  • Set pass/fail thresholds
  • Enable chain-of-thought reasoning
  • Connect to AI Gateway for judge model

Metrics Logging

Evaluation nodes automatically log metrics (scores, pass rates) that can be viewed in the Workflow Monitor's execution trace and compared across runs.
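
As a sketch, an LLM as Judge node might be configured along these lines (field names are illustrative, not the exact schema):

// Hypothetical LLM as Judge configuration
{
  "criteria": ["correctness", "conciseness"],   // evaluation criteria
  "scale": "1-5",                               // 0-1, 1-5, 1-10, or binary
  "passThreshold": 4,                           // scores below this fail the quality gate
  "chainOfThought": true                        // include reasoning alongside each score
}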

Control Flow

Control flow nodes manage execution paths and data routing.

  • Conditional: If/Else conditional branching
  • Human Checkpoint: Pause workflow for human review, approval, or input. Essential for AI safety and human oversight in agentic workflows
  • Loop: Iterate over arrays or repeat actions
  • Map: Process array items in parallel using visual scope boxes
  • Merge: Merge multiple data sources
  • Parallel Branch: Execute multiple branches in parallel with configurable join strategies (all, any, first-N, majority)
  • Retry: Implement retry logic for failed operations with configurable backoff strategies (fixed, linear, exponential)
  • Switch/Case: Multi-way branching based on value matching with support for patterns, ranges, and multiple cases
  • While Loop: Execute a branch repeatedly while a condition is true, with configurable limits and break/continue support

Conditional Node

Execute different branches based on conditions:

// Condition examples
{{ input.status }} === "approved"
{{ input.amount }} > 1000
{{ input.tags }}.includes("urgent")

Supported Operators:

  • Comparison: ==, !=, >, >=, <, <=
  • String: contains, starts_with, ends_with, regex
  • Null checks: is_null, is_not_null, is_empty, is_not_empty
  • List: in, not_in
  • Boolean: is_true, is_false
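
Conditions using these operators can also be written in the structured form that appears in the Data Aggregator filter later on this page (the exact schema may differ):

// Structured condition (shape is illustrative)
{"field": "input.status", "operator": "in", "value": ["approved", "pending"]}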

Loop Node

Iterate over array data with accumulator support:

// Loop over items
items: {{ apiResponse.data.users }}

// Access current item in loop
{{ loop.item.name }}
{{ loop.index }}

// Accumulator collects results from each iteration
// Access via final_results when loop completes
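
After the loop completes, a downstream node can reference the accumulated results. Assuming the loop node is named processFiles (name illustrative), that looks like:

// Accumulator output collected across all iterations
{{ processFiles.final_results }}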

Map Node

Transform each item in an array with parallel processing:

// Input array
{{ source.products }}

// Transform expression
{
  "id": {{ item.id }},
  "price": {{ item.price * 1.1 }}
}

Data Aggregator

Process loop results with multiple operations:

// Operations
[
{"type": "extract", "field": "pdfPath", "outputField": "allPdfs"},
{"type": "flatten", "field": "errors", "outputField": "allErrors"},
{"type": "filter", "condition": {"field": "status", "operator": "==", "value": "failed"}},
{"type": "count", "outputField": "totalCount"}
]
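
Given loop results where each iteration produced a pdfPath, an errors array, and a status, the output would be shaped roughly like this (output field naming is illustrative):

// Illustrative aggregated output
{
  "allPdfs": ["a.pdf", "b.pdf"],           // extract: one pdfPath per iteration
  "allErrors": ["timeout", "bad header"],  // flatten: per-iteration error arrays merged
  "totalCount": 2                          // count after the failed-status filter
}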

Destinations

Destinations send processed data to external systems.

  • Amazon S3: Upload files to an S3 bucket
  • Greenplum: Write data to Greenplum MPP database
  • Microsoft Exchange: Send emails via Microsoft 365/Exchange with attachment support
  • Milvus: Store vectors in Milvus
  • MongoDB: Save data to MongoDB
  • MySQL: Save data to MySQL database
  • Neo4j: Store data in Neo4j graph database
  • PostgreSQL: Save data to PostgreSQL database
  • RabbitMQ: Publish messages to RabbitMQ
  • Redis: Write data to Redis
  • SurrealDB: Write data to SurrealDB

Common Configuration

  • Destination credentials
  • Data mapping
  • Success/failure handling
  • Delivery confirmation
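
For instance, a PostgreSQL destination might map incoming data like this (field names are hypothetical, not the exact schema):

// Hypothetical PostgreSQL destination configuration
{
  "dataSource": "warehouse-pg",                // credentials from Data Sources
  "table": "processed_orders",
  "mapping": {
    "order_id": "{{ apiCall.response.id }}",   // data mapping from upstream nodes
    "total": "{{ apiCall.response.amount }}"
  },
  "onFailure": "retry"                         // success/failure handling
}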

AI Agents

AI Agent nodes orchestrate complex, multi-step AI workflows using specialized patterns for autonomous task execution.

  • Debate Agent: Multi-agent debate pattern for reaching consensus through structured argumentation, critique, and synthesis
  • Function Calling Agent: Orchestrates function calls from AI responses, extracting and managing tool calls for AI agents
  • RAG Agent: Retrieval Augmented Generation agent that combines retrieved documents with AI generation for context-aware responses
  • Supervisor Agent: Orchestrates multiple sub-agents to accomplish complex tasks. Creates execution plans, delegates work, and synthesizes results

Agent Patterns:

  • Debate: Multiple AI perspectives argue and converge on conclusions
  • Supervisor: Hierarchical task delegation and result synthesis
  • RAG: Knowledge-grounded generation with document retrieval
  • Function Calling: Tool use orchestration for AI actions

Learn more about AI Agents →

MCP Tools

MCP (Model Context Protocol) servers provide 139 pre-integrated tools.

Learn more about MCP Tools →

Node Configuration

Common Settings

All nodes share these basic settings:

Identity

  • Name: Display name on canvas
  • Description: Purpose and notes
  • Enabled: Toggle execution on/off

Execution

  • Retry Count: Number of retry attempts
  • Retry Delay: Wait time between retries
  • Timeout: Maximum execution time

Error Handling

  • On Error: Continue, stop workflow, or branch
  • Fallback Value: Default value on failure
  • Error Output: Capture error details
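
Put together, a node's execution and error-handling settings might look like this sketch (setting names are illustrative):

// Hypothetical per-node execution settings
{
  "retryCount": 3,                // retry attempts for transient failures
  "retryDelay": "5s",             // wait time between retries
  "timeout": "60s",               // maximum execution time
  "onError": "continue",          // continue, stop workflow, or branch
  "fallbackValue": {"items": []}  // default output on failure
}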

Data Mapping

Reference data from previous nodes:

// Simple field reference
{{ triggerNode.userId }}

// Nested fields
{{ apiCall.response.data.items[0].name }}

// Conditional mapping
{{ condition ? value1 : value2 }}

// Array operations
{{ array }}.map(item => item.id)
{{ array }}.filter(item => item.active)

Variable Context

Each node has access to:

  • {{ trigger }}: Trigger node output
  • {{ nodeName }}: Output from a specific node
  • {{ env }}: Environment variables
  • {{ workflow }}: Workflow metadata
  • {{ execution }}: Current execution details
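
For example (the specific fields on env, workflow, and execution are illustrative):

// Referencing each context
{{ trigger.userId }}      // field from the trigger payload
{{ pdfParser.text }}      // output of a node named pdfParser
{{ env.API_BASE_URL }}    // environment variable
{{ workflow.name }}       // workflow metadata
{{ execution.id }}        // current execution details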

Best Practices

Node Organization

  1. Left-to-Right Flow: Arrange nodes to show progression
  2. Vertical Spacing: Group related processing paths
  3. Descriptive Names: Use clear, purpose-driven names
  4. Documentation: Add notes to complex nodes

Performance Optimization

  1. Minimize Sequential Chains: Use parallel execution where possible
  2. Cache Results: Store frequently accessed data
  3. Batch Operations: Process multiple items together
  4. Filter Early: Remove unnecessary data early in pipeline

Error Handling

  1. Add Retries: Configure retries for network operations
  2. Fallback Values: Provide defaults for optional data
  3. Error Branches: Route errors to notification/logging
  4. Validation: Check data format before processing

Security

  1. Credentials: Use Data Sources for sensitive credentials
  2. Input Validation: Sanitize user inputs
  3. Output Filtering: Don't expose sensitive data
  4. Access Control: Review workflow permissions

Node Examples

Example: Data Enrichment Pipeline

Webhook Trigger
→ REST API (Fetch user data)
→ AI Gateway (Analyze sentiment)
→ MongoDB (Store results)
→ Webhook (Notify completion)

Example: Document Processing

Schedule Trigger
→ Amazon S3 (List new PDFs)
→ Loop (For each PDF)
    → PDF Parser (Extract text)
    → Entity Extraction (Find entities)
    → Neo4j (Store relationships)

Example: Batch Processing with Aggregation

Schedule Trigger
→ SFTP Source (Download files)
→ Loop (For each file)
    → Conditional (Check file type)
        → [.zip] File Extractor → PDF Parser
        → [.pdf] PDF Parser (direct)
    → Table Parser (Extract data)
    → MySQL (Lookup)
    → PDF Redactor (Redact sensitive data)
→ Data Aggregator (Collect results)
    → [allPdfs] Amazon S3 (Upload batch)
    → [allErrors] PDF Generator (Create report)
→ Microsoft Exchange (Email report)

Example: Conditional Routing

Form Trigger
→ Conditional (Check priority)
    → [High Priority]
        → AI Gateway (Urgent response)
        → Gmail (Send immediately)
    → [Normal Priority]
        → MongoDB (Queue for later)

Next Steps