Workflow Nodes
Workflow nodes are the building blocks of your automation pipelines. Each node type serves a specific purpose in the data processing flow.
Node Categories
Triggers
Triggers initiate workflow execution. Every workflow must start with exactly one trigger node.
Sources
Sources read data from external systems and services.
| Node | Description |
|---|---|
| Amazon S3 | List or download files from Amazon S3 |
| Greenplum | Query data from Greenplum MPP database |
| Microsoft Exchange | Read emails from Microsoft Exchange and save as .eml files with embedded attachments |
| Milvus | Vector similarity search with Milvus |
| MongoDB | Query data from MongoDB |
| MySQL | Query data from MySQL database |
| Neo4j | Query data from Neo4j graph database |
| PostgreSQL | Query data from PostgreSQL database |
| RabbitMQ | Consume messages from RabbitMQ |
| Redis | Read data from Redis |
| REST API | Fetch data from REST API endpoints with authentication and flexible configuration |
| SFTP | Download files from an SFTP server. Supports file patterns, recursive downloads, and file filtering |
| SurrealDB | Query data from SurrealDB |
Common Configuration
- Connection credentials (via Data Sources)
- Query or filter parameters
- Response data mapping
- Error handling and retries
Configure connection credentials once in Data Sources and reuse across multiple workflows.
Transform
Transform nodes parse, extract, and process data from documents in a wide range of formats.
| Node | Description |
|---|---|
| Data Aggregator | Aggregate, flatten, filter, and transform array data from loop results |
| Data Lookup | Look up values in a reference array without making additional database queries. Efficient in-memory matching for batch processing |
| Email Parser | Parse email files and extract content, attachments, and metadata |
| Excel Parser | Parse Excel files and extract data as structured JSON |
| File Extractor | Extract files from ZIP, TAR, GZ, and other archive formats. Supports nested archives and file filtering |
| Merge Data | Merge data from multiple sources using various strategies like concatenation, object merge, or combine |
| PDF Generator | Generate PDF documents from templates, data, or HTML/markdown content |
| PDF Parser | Extract data from PDF files |
| PDF Redactor | Redact or obfuscate sensitive content in PDF documents |
| Redaction List Builder | Build a list of values to redact from all rows except the current row. Perfect for creating per-row redacted PDFs |
| Table Parser | Extract structured data from tables in various formats (CSV, TSV, markdown, text) |
| Text Chunker | Split text into chunks for embedding and RAG pipelines. Supports multiple chunking strategies with overlap |
Features
- Automatic format detection
- Table extraction
- Metadata parsing
- Attachment handling
- Text cleaning and normalization
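As a concrete illustration of the Text Chunker's overlap behavior, one common strategy (fixed-size chunks with a sliding window) can be sketched as follows. This is an assumption about the general approach, not the node's exact implementation, which may also chunk by tokens, sentences, or paragraphs:

```javascript
// Sketch: fixed-size chunking with overlap (one possible strategy).
// chunkSize and overlap are measured in characters here for simplicity.
function chunkText(text, chunkSize, overlap) {
  const step = chunkSize - overlap; // how far the window advances each time
  const chunks = [];
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last window reached the end
  }
  return chunks;
}

// Adjacent chunks share `overlap` characters, preserving context at boundaries:
chunkText("abcdefgh", 4, 2); // ["abcd", "cdef", "efgh"]
```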
AI
AI nodes connect to language models for inference and generation.
| Node | Description |
|---|---|
| AI Gateway | Process with AI models via the Strongly AI Gateway |
Configuration
- Select model from AI Gateway
- Set prompt template
- Configure parameters (temperature, max tokens)
- Define response format
- Track token usage
Advanced Features
- Streaming responses
- Function calling
- Multi-turn conversations
- Prompt engineering
- Cost tracking
Learn more about AI in workflows →
Memory
Memory nodes store and retrieve conversation context and knowledge.
| Node | Description |
|---|---|
| Context Buffer | Manage working memory and context windows |
| Conversation Memory | Store and retrieve conversation history |
| Knowledge Base | Store and query structured knowledge |
Use Cases
- Multi-turn conversations
- Context-aware responses
- RAG (Retrieval Augmented Generation)
- Long-term memory for agents
Evaluation
Evaluation nodes assess AI output quality, detect hallucinations, and enable systematic testing of your AI workflows. All evaluation nodes connect to an AI Gateway for LLM-based assessment.
| Node | Description |
|---|---|
| LLM as Judge | General-purpose evaluation with configurable criteria. Supports single-criterion, multi-criteria, and custom rubric evaluation with chain-of-thought reasoning |
| Relevance Grader | Evaluate if retrieved documents are relevant to queries. Ideal for RAG pipeline quality assessment with binary, graded, or ternary classification |
| Faithfulness Checker | Detect hallucinations by verifying answers are grounded in source context. Uses claim extraction and verification |
| Answer Quality | Multi-dimensional quality scoring across correctness, completeness, helpfulness, coherence, conciseness, safety, and fluency |
| RAG Metrics | Comprehensive RAG pipeline evaluation with context precision, context recall, answer relevancy, faithfulness, and context utilization |
| Pairwise Comparator | Compare two responses to determine which is better. Includes position-bias mitigation through order shuffling. Ideal for A/B testing and model comparison |
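The order-shuffling idea behind the Pairwise Comparator's position-bias mitigation can be sketched like this. It is a minimal illustration, not the node's actual logic; `judge` stands in for the LLM judge call:

```javascript
// Sketch: judge both orderings and map the verdicts back.
// A position-biased judge contradicts itself across the two orders,
// which surfaces as a tie instead of a false winner.
function pairwiseCompare(judge, responseA, responseB) {
  const forward = judge(responseA, responseB) === "first" ? "A" : "B";
  const reversed = judge(responseB, responseA) === "first" ? "B" : "A";
  return forward === reversed ? forward : "tie";
}

// With a judge that always prefers the first position, the result is "tie":
pairwiseCompare(() => "first", "answer one", "answer two"); // "tie"
```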
Use Cases
- RAG pipeline quality monitoring
- Hallucination detection and prevention
- A/B testing prompts and models
- Automated quality gates in production
- Continuous evaluation of AI outputs
Configuration
- Select evaluation criteria
- Configure scoring scales (0-1, 1-5, 1-10, binary)
- Set pass/fail thresholds
- Enable chain-of-thought reasoning
- Connect to AI Gateway for judge model
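To show how a single pass/fail threshold can work across the different scoring scales, here is a hedged sketch. The scale names mirror the list above; the node's actual normalization may differ:

```javascript
// Sketch: map each supported scale onto 0-1, then apply one threshold.
function normalizeScore(score, scale) {
  switch (scale) {
    case "0-1":    return score;
    case "1-5":    return (score - 1) / 4;
    case "1-10":   return (score - 1) / 9;
    case "binary": return score ? 1 : 0;
    default:       throw new Error(`unknown scale: ${scale}`);
  }
}

const passes = (score, scale, threshold) =>
  normalizeScore(score, scale) >= threshold;

passes(4, "1-5", 0.7); // true (4 on a 1-5 scale normalizes to 0.75)
```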
Evaluation nodes automatically log metrics (scores, pass rates) that can be viewed in the Workflow Monitor's execution trace and compared across runs.
Control Flow
Control flow nodes manage execution paths and data routing.
| Node | Description |
|---|---|
| Conditional | If/Else conditional branching |
| Human Checkpoint | Pause workflow for human review, approval, or input. Essential for AI safety and human oversight in agentic workflows |
| Loop | Iterate over arrays or repeat actions |
| Map | Process array items in parallel using visual scope boxes |
| Merge | Merge multiple data sources |
| Parallel Branch | Execute multiple branches in parallel with configurable join strategies (all, any, first-N, majority) |
| Retry | Implement retry logic for failed operations with configurable backoff strategies (fixed, linear, exponential) |
| Switch/Case | Multi-way branching based on value matching with support for patterns, ranges, and multiple cases |
| While Loop | Execute a branch repeatedly while a condition is true, with configurable limits and break/continue support |
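The Retry node's three backoff strategies can be sketched as a delay schedule. This is illustrative only; the base delay and 1-based attempt numbering are assumptions:

```javascript
// Sketch: delay before retry attempt `attempt` (1-based), in milliseconds.
function retryDelay(strategy, attempt, baseMs) {
  switch (strategy) {
    case "fixed":       return baseMs;                      // 500, 500, 500, ...
    case "linear":      return baseMs * attempt;            // 500, 1000, 1500, ...
    case "exponential": return baseMs * 2 ** (attempt - 1); // 500, 1000, 2000, ...
    default:            throw new Error(`unknown strategy: ${strategy}`);
  }
}

retryDelay("exponential", 3, 500); // 2000
```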
Conditional Node
Execute different branches based on conditions:
```
// Condition examples
{{ input.status }} === "approved"
{{ input.amount }} > 1000
{{ input.tags }}.includes("urgent")
```
Supported Operators:
- Comparison: `==`, `!=`, `>`, `>=`, `<`, `<=`
- String: `contains`, `starts_with`, `ends_with`, `regex`
- Null checks: `is_null`, `is_not_null`, `is_empty`, `is_not_empty`
- List: `in`, `not_in`
- Boolean: `is_true`, `is_false`
Loop Node
Iterate over array data with accumulator support:
```
// Loop over items
items: {{ apiResponse.data.users }}

// Access current item in loop
{{ loop.item.name }}
{{ loop.index }}

// Accumulator collects results from each iteration
// Access via final_results when loop completes
```
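The accumulator behavior above can be sketched as follows. `processItem` stands in for the nodes inside the loop body; `final_results` mirrors the field name mentioned above:

```javascript
// Sketch: each iteration sees the current item and index, and its result
// is appended to the accumulator, exposed as final_results on completion.
function runLoop(items, processItem) {
  const accumulator = [];
  for (const [index, item] of items.entries()) {
    accumulator.push(processItem(item, index)); // like {{ loop.item }} / {{ loop.index }}
  }
  return { final_results: accumulator };
}

runLoop(["alice", "bob"], (item, index) => `${index}:${item}`);
// { final_results: ["0:alice", "1:bob"] }
```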
Map Node
Transform each item in an array with parallel processing:
```
// Input array
{{ source.products }}

// Transform expression
{
  "id": {{ item.id }},
  "price": {{ item.price * 1.1 }}
}
```
Data Aggregator
Process loop results with multiple operations:
```
// Operations
[
  { "type": "extract", "field": "pdfPath", "outputField": "allPdfs" },
  { "type": "flatten", "field": "errors", "outputField": "allErrors" },
  { "type": "filter", "condition": { "field": "status", "operator": "==", "value": "failed" } },
  { "type": "count", "outputField": "totalCount" }
]
```
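To make the four operations above concrete, here is a hedged sketch of their semantics over loop results. Field names match the example; the node's real behavior may differ in details such as operation ordering:

```javascript
// Sketch: apply extract / flatten / filter / count to an array of rows.
function aggregate(rows, operations) {
  const output = {};
  let current = rows; // filter narrows this working set
  for (const op of operations) {
    switch (op.type) {
      case "extract": // pull one field out of every row
        output[op.outputField] = current.map(row => row[op.field]);
        break;
      case "flatten": // concatenate array-valued fields across rows
        output[op.outputField] = current.flatMap(row => row[op.field] ?? []);
        break;
      case "filter": { // keep only rows matching the condition
        const { field, operator, value } = op.condition;
        if (operator === "==") current = current.filter(row => row[field] === value);
        break;
      }
      case "count": // count the rows currently in the working set
        output[op.outputField] = current.length;
        break;
    }
  }
  return output;
}

const rows = [
  { pdfPath: "a.pdf", errors: [], status: "ok" },
  { pdfPath: "b.pdf", errors: ["bad table"], status: "failed" },
];
aggregate(rows, [
  { type: "extract", field: "pdfPath", outputField: "allPdfs" },
  { type: "flatten", field: "errors", outputField: "allErrors" },
  { type: "filter", condition: { field: "status", operator: "==", value: "failed" } },
  { type: "count", outputField: "totalCount" },
]);
// { allPdfs: ["a.pdf", "b.pdf"], allErrors: ["bad table"], totalCount: 1 }
```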
Destinations
Destinations send processed data to external systems.
| Node | Description |
|---|---|
| Amazon S3 | Upload files to S3 bucket |
| Greenplum | Write data to Greenplum MPP database |
| Microsoft Exchange | Send emails via Microsoft 365/Exchange with attachment support |
| Milvus | Store vectors in Milvus |
| MongoDB | Save data to MongoDB |
| MySQL | Save data to MySQL database |
| Neo4j | Store data in Neo4j graph database |
| PostgreSQL | Save data to PostgreSQL database |
| RabbitMQ | Publish messages to RabbitMQ |
| Redis | Write data to Redis |
| SurrealDB | Write data to SurrealDB |
Common Configuration
- Destination credentials
- Data mapping
- Success/failure handling
- Delivery confirmation
AI Agents
AI Agent nodes orchestrate complex, multi-step AI workflows using specialized patterns for autonomous task execution.
| Node | Description |
|---|---|
| Debate Agent | Multi-agent debate pattern for reaching consensus through structured argumentation, critique, and synthesis |
| Function Calling Agent | Orchestrates function calls from AI responses, extracting and managing tool calls for AI agents |
| RAG Agent | Retrieval Augmented Generation agent that combines retrieved documents with AI generation for context-aware responses |
| Supervisor Agent | Orchestrates multiple sub-agents to accomplish complex tasks. Creates execution plans, delegates work, and synthesizes results |
Agent Patterns:
- Debate: Multiple AI perspectives argue and converge on conclusions
- Supervisor: Hierarchical task delegation and result synthesis
- RAG: Knowledge-grounded generation with document retrieval
- Function Calling: Tool use orchestration for AI actions
MCP Tools
MCP (Model Context Protocol) servers provide 139 pre-integrated tools.
Node Configuration
Common Settings
All nodes share these basic settings:
Identity
- Name: Display name on canvas
- Description: Purpose and notes
- Enabled: Toggle execution on/off
Execution
- Retry Count: Number of retry attempts
- Retry Delay: Wait time between retries
- Timeout: Maximum execution time
Error Handling
- On Error: Continue, stop workflow, or branch
- Fallback Value: Default value on failure
- Error Output: Capture error details
Data Mapping
Reference data from previous nodes:
```
// Simple field reference
{{ triggerNode.userId }}

// Nested fields
{{ apiCall.response.data.items[0].name }}

// Conditional mapping
{{ condition ? value1 : value2 }}

// Array operations
{{ array }}.map(item => item.id)
{{ array }}.filter(item => item.active)
```
Variable Context
Each node has access to:
| Context | Description |
|---|---|
| `{{ trigger }}` | Trigger node output |
| `{{ nodeName }}` | Output from specific node |
| `{{ env }}` | Environment variables |
| `{{ workflow }}` | Workflow metadata |
| `{{ execution }}` | Current execution details |
Best Practices
Node Organization
- Left-to-Right Flow: Arrange nodes to show progression
- Vertical Spacing: Group related processing paths
- Descriptive Names: Use clear, purpose-driven names
- Documentation: Add notes to complex nodes
Performance Optimization
- Minimize Sequential Chains: Use parallel execution where possible
- Cache Results: Store frequently accessed data
- Batch Operations: Process multiple items together
- Filter Early: Remove unnecessary data early in pipeline
Error Handling
- Add Retries: Configure retries for network operations
- Fallback Values: Provide defaults for optional data
- Error Branches: Route errors to notification/logging
- Validation: Check data format before processing
Security
- Credentials: Use Data Sources for sensitive credentials
- Input Validation: Sanitize user inputs
- Output Filtering: Don't expose sensitive data
- Access Control: Review workflow permissions
Node Examples
Example: Data Enrichment Pipeline
```
Webhook Trigger
→ REST API (Fetch user data)
→ AI Gateway (Analyze sentiment)
→ MongoDB (Store results)
→ Webhook (Notify completion)
```
Example: Document Processing
```
Schedule Trigger
→ Amazon S3 (List new PDFs)
→ Loop (For each PDF)
  → PDF Parser (Extract text)
  → Entity Extraction (Find entities)
  → Neo4j (Store relationships)
```
Example: Batch Processing with Aggregation
```
Schedule Trigger
→ SFTP Source (Download files)
→ Loop (For each file)
  → Conditional (Check file type)
    → [.zip] File Extractor → PDF Parser
    → [.pdf] PDF Parser (direct)
  → Table Parser (Extract data)
  → MySQL (Lookup)
  → PDF Redactor (Redact sensitive data)
→ Data Aggregator (Collect results)
  → [allPdfs] Amazon S3 (Upload batch)
  → [allErrors] PDF Generator (Create report)
→ Microsoft Exchange (Email report)
```
Example: Conditional Routing
```
Form Trigger
→ Conditional (Check priority)
  → [High Priority]
    → AI Gateway (Urgent response)
    → Gmail (Send immediately)
  → [Normal Priority]
    → MongoDB (Queue for later)
```