Testing Workflows
Test your workflows before deployment to ensure they work correctly and handle edge cases. Workflow testing creates real executions with results tracked per node.
Test Run Overview
Test runs execute your workflow through the same pipeline as production executions. There is no separate "test environment": testing creates a real execution record and processes each node.
Starting a Test Run
- Click the Run button in the workflow builder toolbar
- Provide test input data (required for trigger nodes)
- The execution is submitted for processing
- Watch real-time node status updates on the canvas
- Review execution traces and output data
What Happens During a Test
- Execution record created with status `pending`
- STRONGLY_SERVICES generated dynamically based on the workflow's node dependencies (add-ons, data sources, AI models)
- Execution submitted for processing
- Nodes processed sequentially through the workflow graph
- Traces recorded for each node execution
- Execution status updated to `completed` or `failed`, with an `ended_at` timestamp (a sketch of the record follows this list)
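A minimal sketch of what that execution record might look like, using the statuses and timestamp fields described above (the exact schema in your deployment may differ):

```typescript
// Hypothetical shape of a test-run execution record. Field names mirror
// the statuses and timestamps documented in this section.
interface WorkflowExecution {
  id: string;                                             // execution ID
  status: "pending" | "running" | "completed" | "failed";
  started_at: string;                                     // ISO 8601 timestamp
  ended_at?: string;                                      // set on completion or failure
  trigger_inputs: Record<string, unknown>;                // test input that started the run
}
```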
Providing Test Input
Different trigger types accept different test input formats.
Webhook Trigger
Provide a JSON body simulating an incoming HTTP request:
```json
{
  "event": "user.created",
  "user_id": "12345",
  "email": "test@example.com"
}
```
Schedule Trigger
No input data is required. The schedule trigger fires immediately when testing, using the current timestamp.
REST API Trigger
Provide a JSON body simulating an API request:
```json
{
  "input": "data to process",
  "options": {
    "format": "json"
  }
}
```
Form Trigger
Provide a JSON object simulating form field submissions:
```json
{
  "name": "John Smith",
  "email": "john@example.com",
  "message": "This is a test submission"
}
```
Real-Time Execution Visualization
During test runs, the workflow builder canvas shows live node status updates.
Node States
| State | Appearance | Meaning |
|---|---|---|
| Idle | Gray | Not yet reached by execution |
| Running | Blue/Pulsing | Currently executing |
| Completed | Green | Finished successfully |
| Failed | Red | Encountered an error |
Execution Flow
- Nodes update their visual state as execution progresses through the graph
- Parallel branches (via the `parallel-branch` node) show multiple nodes executing simultaneously
- Conditional branches (`switch-case`, `conditional`) show which path was taken
Node Tracing
Every node execution produces a span that provides detailed tracing for each step.
Span Types
| Type | Description |
|---|---|
| WORKFLOW | Root span covering the entire execution |
| NODE | Individual node execution |
| LLM | AI Gateway / LLM call within a node |
| TOOL | Tool call (MCP or native) |
| RETRIEVAL | RAG or knowledge base retrieval |
| AGENT | Agent reasoning loop |
| CHAIN | Chain of operations |
| EMBEDDING | Embedding generation |
| PARSER | Parsing operation |
Span Data
Each span records:
| Field | Description |
|---|---|
| `execution_id` | Parent execution ID |
| `span_type` | One of the types above |
| `parent_span_id` | ID of parent span (for tree structure) |
| `start_time` | When the span started |
| `end_time` | When the span completed |
| `status` | `running`, `completed`, or `failed` |
| `inputs` | Data received by the node |
| `outputs` | Data produced by the node |
| `error` | Error message if failed |
| `metadata` | Additional context (model used, token count, etc.) |
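The same fields can be expressed as a type, which is handy when processing spans programmatically. This is a sketch based on the table above, not a definitive schema:

```typescript
// Hypothetical span record mirroring the fields in the table above.
type SpanType =
  | "WORKFLOW" | "NODE" | "LLM" | "TOOL" | "RETRIEVAL"
  | "AGENT" | "CHAIN" | "EMBEDDING" | "PARSER";

interface Span {
  execution_id: string;                        // parent execution ID
  span_type: SpanType;                         // one of the types above
  parent_span_id?: string;                     // absent on the root WORKFLOW span
  start_time: string;                          // when the span started
  end_time?: string;                           // when the span completed
  status: "running" | "completed" | "failed";
  inputs: Record<string, unknown>;             // data received by the node
  outputs?: Record<string, unknown>;           // data produced by the node
  error?: string;                              // error message if failed
  metadata?: Record<string, unknown>;          // model used, token count, etc.
}
```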
Viewing Spans
After an execution completes, click on any node to inspect its span data:
- Output data: The JSON output produced by the node
- Input data: What the node received from upstream nodes
- Timing: Start time, end time, and duration
- Errors: Error message and details if the node failed
- Nested spans: For agent nodes, see sub-spans for each LLM call and tool invocation
Span Tree
Spans form a hierarchical tree:
```
WORKFLOW (root)
├── NODE: webhook (trigger)
├── NODE: ai-gateway
│   └── LLM: claude-3-sonnet call
├── NODE: react-agent
│   ├── AGENT: reasoning loop
│   ├── LLM: tool selection
│   ├── TOOL: web-search call
│   └── LLM: final response
└── NODE: webhook-response
```
Use `workflow.spans.tree` to retrieve the full span tree for an execution.
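A hedged sketch of what that call might look like from a script, assuming a client object with a matching method (adapt to your SDK or HTTP client):

```typescript
// Fetch an execution's span tree and print one line per span, indented by depth.
interface SpanTreeNode {
  span_type: string;
  status: string;
  children?: SpanTreeNode[];
}

// The client shape is an assumption for illustration.
declare const client: {
  workflow: { spans: { tree(args: { executionId: string }): Promise<SpanTreeNode> } };
};

async function inspectExecution(executionId: string): Promise<void> {
  const root = await client.workflow.spans.tree({ executionId });
  const print = (span: SpanTreeNode, depth = 0): void => {
    console.log(`${"  ".repeat(depth)}${span.span_type} [${span.status}]`);
    (span.children ?? []).forEach((child) => print(child, depth + 1));
  };
  print(root); // e.g. "WORKFLOW [completed]" followed by indented NODE spans
}
```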
Single-Node Testing
Individual nodes can be tested in isolation by providing input data directly. This is useful for verifying node configuration before running the full workflow.
The workflow builder allows you to:
- Select a single node on the canvas
- Provide test input data in the node's configuration panel
- Execute just that node to verify its output
This sends a targeted execution request that processes only the selected node with the provided input.
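As a rough sketch, the request resembles the following; the endpoint path and payload shape are assumptions for illustration, since the builder issues the equivalent request for you:

```typescript
// Hypothetical single-node test request.
const workflowId = "wf_123"; // hypothetical workflow ID

const response = await fetch(`/api/workflows/${workflowId}/executions`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    nodeId: "ai-gateway-1", // process only this node
    input: { text: "sample input to verify the node's output" },
  }),
});
const result = await response.json(); // node output plus its span data
```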
Inspecting Execution Results
Execution Details
Click on a completed execution to view:
- Execution ID and status (`completed`, `failed`, `pending`, or `running`)
- Start time (`started_at`) and end time (`ended_at`)
- Duration calculated from start to end
- Trigger inputs that initiated the execution
- Error message if the execution failed
Node-Level Output
Click any node in a completed execution to view:
- The JSON output produced by that node
- Input data received from upstream connections
- Duration of that specific node
- Error details if the node failed
Data flows between nodes using JSONPath input mappings. For example, a node might reference `$.output.text` from a previous node's output.
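To see how such a mapping resolves, here is a small illustration using the jsonpath-plus package as a stand-in for the engine's own evaluator (any JSONPath implementation behaves the same way for this expression):

```typescript
import { JSONPath } from "jsonpath-plus";

// Output produced by an upstream node (illustrative).
const upstreamOutput = {
  output: { text: "Generated summary...", tokens: 128 },
};

// The mapping "$.output.text" selects the nested text field.
const [text] = JSONPath({ path: "$.output.text", json: upstreamOutput });
console.log(text); // "Generated summary..."
```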
Execution Logs
Execution logs are stored in the `workflow_logs` collection:
- `info` level: Execution start, node completions
- `error` level: Failures with error messages and stack traces
- Each log entry includes `workflowId`, `executionId`, `userId`, and `createdAt`
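A sketch of a single entry, based on the fields listed above (the message field and timestamp format are assumptions):

```typescript
// Hypothetical workflow_logs entry.
interface WorkflowLogEntry {
  level: "info" | "error"; // info: execution start, node completions; error: failures
  message: string;         // assumed field, e.g. "Node ai-gateway completed"
  workflowId: string;
  executionId: string;
  userId: string;
  createdAt: string;       // timestamp (format assumed to be ISO 8601)
}
```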
Testing Best Practices
Test Data Preparation
- Use realistic data that matches production input formats
- Test edge cases: empty arrays, null values, missing fields
- Test error paths: invalid data that should trigger error handling
- Test conditional branches: provide input that exercises each branch of `switch-case` and `conditional` nodes (example payloads below)
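For example, a set of webhook test payloads covering these cases (shapes are illustrative; match them to your trigger's actual schema):

```typescript
// Happy path: complete, valid input.
const happyPath = { event: "user.created", user_id: "12345", email: "test@example.com" };

// Edge cases: empty array, missing field, null value.
const emptyArray = { event: "user.created", user_id: "12345", tags: [] };
const missingField = { event: "user.created" };              // no user_id
const nullValue = { event: "user.created", user_id: null };  // should trigger error handling
```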
Iterative Testing
- Start simple: Test with basic valid input first
- Review spans: Check each node's input/output in the span data
- Fix configuration: Update node settings based on span inspection
- Test edge cases: Gradually add boundary and error conditions
- Verify full flow: Ensure data passes correctly through all connections
Testing Agent Workflows
Agent nodes (`react-agent`, `supervisor-agent`) produce multiple sub-spans:
- Review the agent's reasoning loop in the span tree
- Check which tools the agent selected and why
- Verify LLM call spans for correct prompt and response
- Test with different inputs to ensure the agent handles varied scenarios
Testing Data Source Connections
When testing nodes that connect to external data sources (databases, APIs):
- Verify data source credentials are configured correctly
- Check that the connection type (`addon` or `datasource`) and IDs are set
- Test with a small query first before running full data operations (sketched below)
- Review the node's span output to confirm data was retrieved correctly
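A hedged sketch of a database node configured for a small first test; the property names are assumptions, but the connection types match those described above:

```typescript
// Hypothetical node configuration for a cautious first test.
const testNodeConfig = {
  connection: { type: "datasource", id: "ds_postgres_01" }, // or { type: "addon", ... }
  query: "SELECT id, email FROM users LIMIT 10",            // small query before full runs
};
```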
Common Testing Issues
Issue: Execution Stays in Pending Status
Possible causes:
- Workflow is not deployed (no active worker pod)
- Backend service is unreachable
- STRONGLY_SERVICES generation failed
Solutions:
- Deploy the workflow first using the Deploy button
- Contact your administrator
- Review execution logs for STRONGLY_SERVICES errors
Issue: Node Fails with Connection Error
Possible causes:
- Data source credentials are invalid or expired
- Add-on service is not running
- Network connectivity issue between worker and external service
Solutions:
- Verify data source configuration in the Data Sources section
- Check add-on health status
- Review the node's span error message for specific details
Issue: Agent Not Using Tools
Possible causes:
- MCP Tools Provider not connected to the agent's `tools` connector
- MCP server not deployed or unreachable
- Agent system prompt does not mention available tools
Solutions:
- Verify the tools connector (bottom of agent node) has a connection
- Check MCP server deployment status
- Update the agent's system prompt to reference the available tools
Issue: Incorrect Data Passed Between Nodes
Possible causes:
- Input mapping references wrong field path
- Upstream node output format changed
- JSONPath expression is incorrect
Solutions:
- Inspect span outputs for the upstream node to see actual data structure
- Update input mappings to use correct JSONPath references (e.g., `$.output.field`)
- Use a `set-fields` node to reshape data between nodes (see the sketch below)
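For instance, a `set-fields` node might reshape an upstream output like this; the configuration property names are assumptions for illustration:

```typescript
// Hypothetical set-fields configuration: each output field is assigned
// from a JSONPath into the upstream node's output.
const setFieldsConfig = {
  fields: [
    { name: "summary", value: "$.output.text" },          // pull nested text up a level
    { name: "model", value: "$.output.metadata.model" },  // surface a metadata field
  ],
};
```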
Execution Resume and Checkpoints
Workflow executions support resume capabilities:
- If an execution fails at a specific node, the checkpoint state allows re-running from the failed node
- The execution retains its ID and accumulated trace data
- Failed executions can be retried with the automatic retry mechanism
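A hedged sketch of triggering a resume; the endpoint is an assumption, and the key behavior is that the execution keeps its ID and accumulated trace data when it re-runs from the failed node:

```typescript
// Hypothetical resume request for a failed execution.
await fetch("/api/executions/exec_123/resume", { method: "POST" });
```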
Testing Checklist
Before deploying to production, verify:
Functionality
- All nodes execute successfully with valid input
- Data flows correctly through all connections
- Conditional branches route correctly
- Agent nodes select appropriate tools
Error Handling
- Invalid input is handled gracefully
- External service failures produce clear error spans
- Retry logic works for transient failures
Data Validation
- Node outputs match expected format
- Input mappings reference correct JSONPath fields
- Data transformations produce correct results
Performance
- Execution completes within acceptable time
- No unnecessary sequential dependencies
- Large data sets are handled without timeout
Next Steps
Once your workflow is thoroughly tested, deploy it using the Deploy button in the workflow builder.