Testing Workflows

Test your workflows before deployment to ensure they work correctly and handle edge cases. Workflow testing creates real executions with results tracked per node.

Test Run Overview

Test runs execute your workflow through the same pipeline as production executions. There is no separate "test environment": testing creates a real execution record and processes each node.

Starting a Test Run

  1. Click the Run button in the workflow builder toolbar
  2. Provide test input data (required for trigger nodes)
  3. The execution is submitted for processing
  4. Watch real-time node status updates on the canvas
  5. Review execution traces and output data

What Happens During a Test

  1. Execution record created with status pending
  2. STRONGLY_SERVICES generated dynamically based on the workflow's node dependencies (add-ons, data sources, AI models)
  3. Execution submitted for processing
  4. Nodes processed sequentially through the workflow graph
  5. Traces recorded for each node execution
  6. Execution status updated to completed or failed with ended_at timestamp
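The lifecycle above can be sketched as a small state model. This is an illustrative Python sketch, not the product's actual schema; the statuses and the ended_at timestamp follow the steps listed in this doc.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Execution:
    """Hypothetical execution record mirroring the statuses described above."""
    status: str = "pending"
    started_at: Optional[datetime] = None
    ended_at: Optional[datetime] = None
    error: Optional[str] = None

    def start(self) -> None:
        # Submission picks the execution up and begins processing nodes.
        self.status = "running"
        self.started_at = datetime.now(timezone.utc)

    def finish(self, error: Optional[str] = None) -> None:
        # Terminal status is "failed" if any node errored, else "completed".
        self.status = "failed" if error else "completed"
        self.error = error
        self.ended_at = datetime.now(timezone.utc)

run = Execution()
assert run.status == "pending"
run.start()
run.finish()
assert run.status == "completed" and run.ended_at is not None
```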

Providing Test Input

Different trigger types accept different test input formats.

Webhook Trigger

Provide a JSON body simulating an incoming HTTP request:

{
  "event": "user.created",
  "user_id": "12345",
  "email": "test@example.com"
}
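Because a test run is a real execution, a malformed payload wastes a run. A quick local pre-flight check like the sketch below can catch missing fields first; the function and the required field names are illustrative, taken from the example payload above.

```python
import json

def validate_webhook_input(raw: str, required: set[str]) -> dict:
    """Parse a test payload and fail fast if expected fields are missing.

    Illustrative helper, not part of the product; 'required' holds whatever
    fields your workflow's downstream nodes actually read.
    """
    payload = json.loads(raw)
    missing = required - payload.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return payload

payload = validate_webhook_input(
    '{"event": "user.created", "user_id": "12345", "email": "test@example.com"}',
    required={"event", "user_id"},
)
assert payload["event"] == "user.created"
```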

Schedule Trigger

No input data is required. The schedule trigger fires immediately when testing, using the current timestamp.

REST API Trigger

Provide a JSON body simulating an API request:

{
  "input": "data to process",
  "options": {
    "format": "json"
  }
}

Form Trigger

Provide a JSON object simulating form field submissions:

{
  "name": "John Smith",
  "email": "john@example.com",
  "message": "This is a test submission"
}

Real-Time Execution Visualization

During test runs, the workflow builder canvas shows live node status updates.

Node States

| State | Appearance | Meaning |
| --- | --- | --- |
| Idle | Gray | Not yet reached by execution |
| Running | Blue (pulsing) | Currently executing |
| Completed | Green | Finished successfully |
| Failed | Red | Encountered an error |

Execution Flow

  • Nodes update their visual state as execution progresses through the graph
  • Parallel branches (via parallel-branch node) show multiple nodes executing simultaneously
  • Conditional branches (switch-case, conditional) show which path was taken

Node Tracing

Every node execution produces a span that provides detailed tracing for each step.

Span Types

| Type | Description |
| --- | --- |
| WORKFLOW | Root span covering the entire execution |
| NODE | Individual node execution |
| LLM | AI Gateway / LLM call within a node |
| TOOL | Tool call (MCP or native) |
| RETRIEVAL | RAG or knowledge base retrieval |
| AGENT | Agent reasoning loop |
| CHAIN | Chain of operations |
| EMBEDDING | Embedding generation |
| PARSER | Parsing operation |

Span Data

Each span records:

| Field | Description |
| --- | --- |
| execution_id | Parent execution ID |
| span_type | One of the types above |
| parent_span_id | ID of the parent span (for tree structure) |
| start_time | When the span started |
| end_time | When the span completed |
| status | running, completed, or failed |
| inputs | Data received by the node |
| outputs | Data produced by the node |
| error | Error message if failed |
| metadata | Additional context (model used, token count, etc.) |

Viewing Spans

After an execution completes, click on any node to inspect its span data:

  • Output data: The JSON output produced by the node
  • Input data: What the node received from upstream nodes
  • Timing: Start time, end time, and duration
  • Errors: Error message and details if the node failed
  • Nested spans: For agent nodes, see sub-spans for each LLM call and tool invocation

Span Tree

Spans form a hierarchical tree:

WORKFLOW (root)
├── NODE: webhook (trigger)
├── NODE: ai-gateway
│   └── LLM: claude-3-sonnet call
├── NODE: react-agent
│   ├── AGENT: reasoning loop
│   ├── LLM: tool selection
│   ├── TOOL: web-search call
│   └── LLM: final response
└── NODE: webhook-response

Use workflow.spans.tree to retrieve the full span tree for an execution.
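Because each span carries a parent_span_id, a flat list of span rows can be rebuilt into the tree shown above. The sketch below does this with plain dicts; the row shape is illustrative, using the field names from the Span Data table, not the actual API response format.

```python
from collections import defaultdict

def build_span_tree(spans: list[dict]) -> list[str]:
    """Render flat span rows into an indented tree via parent_span_id links.

    Illustrative: row keys mirror the Span Data table; 'name' and 'span_id'
    are assumed fields used here for display and linking.
    """
    children = defaultdict(list)
    for s in spans:
        children[s.get("parent_span_id")].append(s)
    lines: list[str] = []

    def walk(parent_id, depth):
        for s in children[parent_id]:
            lines.append("  " * depth + f'{s["span_type"]}: {s["name"]}')
            walk(s["span_id"], depth + 1)

    walk(None, 0)  # roots have no parent_span_id
    return lines

spans = [
    {"span_id": "1", "parent_span_id": None, "span_type": "WORKFLOW", "name": "execution"},
    {"span_id": "2", "parent_span_id": "1", "span_type": "NODE", "name": "ai-gateway"},
    {"span_id": "3", "parent_span_id": "2", "span_type": "LLM", "name": "model call"},
]
assert build_span_tree(spans) == [
    "WORKFLOW: execution",
    "  NODE: ai-gateway",
    "    LLM: model call",
]
```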

Single-Node Testing

Individual nodes can be tested in isolation by providing input data directly. This is useful for verifying node configuration before running the full workflow.

The workflow builder allows you to:

  1. Select a single node on the canvas
  2. Provide test input data in the node's configuration panel
  3. Execute just that node to verify its output

This sends a targeted execution request that processes only the selected node with the provided input.

Inspecting Execution Results

Execution Details

Click on a completed execution to view:

  • Execution ID and status (completed, failed, pending, running)
  • Start time (started_at) and end time (ended_at)
  • Duration calculated from start to end
  • Trigger inputs that initiated the execution
  • Error message if the execution failed

Node-Level Output

Click any node in a completed execution to view:

  • The JSON output produced by that node
  • Input data received from upstream connections
  • Duration of that specific node
  • Error details if the node failed

Data flows between nodes using JSONPath input mappings. For example, a node might reference $.output.text from a previous node's output.
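To make the mapping concrete, the sketch below resolves a simple dotted path such as $.output.text against an upstream node's output. It handles only dotted keys; real JSONPath also supports array indexing, filters, and wildcards, and the engine's resolver is not shown here.

```python
def resolve(path: str, data: dict):
    """Resolve a simple '$.a.b' path against a node's output dict.

    Minimal illustration of JSONPath-style input mapping; only dotted
    object keys are supported in this sketch.
    """
    if not path.startswith("$."):
        raise ValueError(f"unsupported path: {path}")
    value = data
    for key in path[2:].split("."):
        value = value[key]  # KeyError here means the mapping is stale
    return value

upstream = {"output": {"text": "hello", "tokens": 5}}
assert resolve("$.output.text", upstream) == "hello"
assert resolve("$.output.tokens", upstream) == 5
```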

Execution Logs

Execution logs are stored in the workflow_logs collection:

  • info level: Execution start, node completions
  • error level: Failures with error messages and stack traces
  • Each log entry includes workflowId, executionId, userId, and createdAt

Testing Best Practices

Test Data Preparation

  1. Use realistic data that matches production input formats
  2. Test edge cases: empty arrays, null values, missing fields
  3. Test error paths: invalid data that should trigger error handling
  4. Test conditional branches: provide input that exercises each branch of switch-case and conditional nodes

Iterative Testing

  1. Start simple: Test with basic valid input first
  2. Review spans: Check each node's input/output in the span data
  3. Fix configuration: Update node settings based on span inspection
  4. Test edge cases: Gradually add boundary and error conditions
  5. Verify full flow: Ensure data passes correctly through all connections

Testing Agent Workflows

Agent nodes (react-agent, supervisor-agent) produce multiple sub-spans:

  1. Review the agent's reasoning loop in the span tree
  2. Check which tools the agent selected and why
  3. Verify LLM call spans for correct prompt and response
  4. Test with different inputs to ensure the agent handles varied scenarios

Testing Data Source Connections

When testing nodes that connect to external data sources (databases, APIs):

  1. Verify data source credentials are configured correctly
  2. Check that the connection type (addon or datasource) and IDs are set
  3. Test with a small query first before running full data operations
  4. Review the node's span output to confirm data was retrieved correctly

Common Testing Issues

Issue: Execution Stays in Pending Status

Possible causes:

  • Workflow is not deployed (no active worker pod)
  • Backend service is unreachable
  • STRONGLY_SERVICES generation failed

Solutions:

  • Deploy the workflow first using the Deploy button
  • Contact your administrator
  • Review execution logs for STRONGLY_SERVICES errors

Issue: Node Fails with Connection Error

Possible causes:

  • Data source credentials are invalid or expired
  • Add-on service is not running
  • Network connectivity issue between worker and external service

Solutions:

  • Verify data source configuration in the Data Sources section
  • Check add-on health status
  • Review the node's span error message for specific details

Issue: Agent Not Using Tools

Possible causes:

  • MCP Tools Provider not connected to the agent's tools connector
  • MCP server not deployed or unreachable
  • Agent system prompt does not mention available tools

Solutions:

  • Verify the tools connector (bottom of agent node) has a connection
  • Check MCP server deployment status
  • Update the agent's system prompt to reference the available tools

Issue: Incorrect Data Passed Between Nodes

Possible causes:

  • Input mapping references wrong field path
  • Upstream node output format changed
  • JSONPath expression is incorrect

Solutions:

  • Inspect span outputs for the upstream node to see actual data structure
  • Update input mappings to use correct JSONPath references (e.g., $.output.field)
  • Use set-fields node to reshape data between nodes
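The reshaping a set-fields node performs can be pictured as mapping each target field to a path into the upstream output. The sketch below is an illustration of that idea in Python, not the node's implementation; it reuses the simple dotted-path convention and supports only object keys.

```python
def set_fields(source: dict, mapping: dict[str, str]) -> dict:
    """Reshape upstream output into the structure a downstream node expects.

    Each mapping value is a simple '$.a.b' path into the source data;
    illustrative sketch of what a set-fields node does.
    """
    def get(path: str):
        value = source
        for key in path[2:].split("."):
            value = value[key]
        return value

    return {field: get(path) for field, path in mapping.items()}

upstream = {"output": {"text": "summary here", "meta": {"model": "gpt"}}}
reshaped = set_fields(upstream, {"summary": "$.output.text", "model": "$.output.meta.model"})
assert reshaped == {"summary": "summary here", "model": "gpt"}
```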

Execution Resume and Checkpoints

Workflow executions support resume capabilities:

  • If an execution fails at a specific node, the checkpoint state allows re-running from the failed node
  • The execution retains its ID and accumulated trace data
  • Failed executions can be retried with the automatic retry mechanism

Testing Checklist

Before deploying to production, verify:

Functionality

  • All nodes execute successfully with valid input
  • Data flows correctly through all connections
  • Conditional branches route correctly
  • Agent nodes select appropriate tools

Error Handling

  • Invalid input is handled gracefully
  • External service failures produce clear error spans
  • Retry logic works for transient failures

Data Validation

  • Node outputs match expected format
  • Input mappings reference correct JSONPath fields
  • Data transformations produce correct results

Performance

  • Execution completes within acceptable time
  • No unnecessary sequential dependencies
  • Large data sets are handled without timeout

Next Steps

Once your workflow is thoroughly tested: