Testing Workflows
Thoroughly test your workflows before deployment to ensure they work correctly and handle edge cases. The workflow builder provides powerful testing and debugging tools.
Test Run Overview
Test runs allow you to execute workflows in a controlled environment without affecting production data or triggering external systems.
Starting a Test Run
- Click the Test Run button in the workflow builder toolbar
- Provide test input data (if required by trigger)
- Watch real-time execution on the canvas
- Review results and logs
Test Run Features
| Feature | Description |
|---|---|
| Real-Time Visualization | Nodes light up during execution |
| Node Output Inspection | View data at each step |
| Execution Logs | Console output and errors |
| Performance Metrics | Duration per node |
| Debug Mode | Verbose logging |
Providing Test Input
Different trigger types require different test inputs.
Webhook Trigger
Simulate an HTTP request:
```json
{
  "method": "POST",
  "headers": {
    "Content-Type": "application/json",
    "X-Custom-Header": "value"
  },
  "body": {
    "event": "user.created",
    "user_id": "12345",
    "email": "test@example.com"
  },
  "query": {
    "source": "api"
  }
}
```
Schedule Trigger
No input is required; the test run uses the current timestamp:

```json
{
  "scheduledTime": "2024-01-15T09:00:00Z",
  "actualTime": "2024-01-15T09:00:01Z"
}
```
REST API Trigger
Simulate an API request:

```json
{
  "method": "POST",
  "path": "/api/process",
  "headers": {
    "Authorization": "Bearer test-token"
  },
  "body": {
    "input": "data to process"
  }
}
```
Form Trigger
Simulate a form submission:

```json
{
  "formData": {
    "name": "John Smith",
    "email": "john@example.com",
    "message": "This is a test submission"
  },
  "metadata": {
    "submittedAt": "2024-01-15T10:30:00Z",
    "userAgent": "Mozilla/5.0...",
    "ipAddress": "192.168.1.1"
  }
}
```
Real-Time Execution Visualization
During test runs, the canvas provides visual feedback:
Node States
| State | Appearance | Meaning |
|---|---|---|
| Pending | Gray | Not yet executed |
| Running | Blue/Pulsing | Currently executing |
| Success | Green | Completed successfully |
| Error | Red | Failed with error |
| Skipped | Yellow | Skipped by condition |
Execution Flow
- Connection Highlights: Active data flow shown in blue
- Progress Indicator: Shows current execution position
- Parallel Execution: Multiple nodes light up simultaneously
- Completion Status: Final state for each node
Inspecting Node Output
Click any node to view its execution details:
Output Data Tab
View the data produced by the node:
```json
{
  "status": "success",
  "data": {
    "id": "12345",
    "processed": true,
    "timestamp": "2024-01-15T10:30:00Z"
  },
  "metadata": {
    "duration": 245,
    "nodeId": "node_abc123"
  }
}
```
Features:
- JSON formatting with syntax highlighting
- Expand/collapse nested objects
- Copy data to clipboard
- Search within output
Execution Logs Tab
View console output and logs:
```text
[10:30:00.123] Starting node execution
[10:30:00.156] Fetching data from API
[10:30:00.401] API response received (200 OK)
[10:30:00.405] Parsing response data
[10:30:00.410] Node execution complete
```
Log Levels:
- INFO: General information
- DEBUG: Detailed execution steps
- WARN: Warnings and issues
- ERROR: Error messages and stack traces
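When triaging a failed run from exported logs, filtering by severity helps cut through debug noise. A small sketch, assuming the bracketed `[LEVEL]` line format shown in the debug-mode examples on this page; adjust the pattern to whatever your actual log export produces:

```python
import re

# Assumed severity ordering and line format; not guaranteed by the builder.
LEVELS = {"DEBUG": 0, "INFO": 1, "WARN": 2, "ERROR": 3}
LINE = re.compile(r"\[(?P<level>DEBUG|INFO|WARN|ERROR)\]\s*(?P<msg>.*)")

def filter_logs(lines, min_level="WARN"):
    """Keep only the messages at or above the given severity."""
    threshold = LEVELS[min_level]
    out = []
    for line in lines:
        m = LINE.match(line)
        if m and LEVELS[m.group("level")] >= threshold:
            out.append(m.group("msg"))
    return out

logs = [
    "[DEBUG] Request headers: {...}",
    "[INFO] Node execution complete",
    "[WARN] Response truncated to 1 MB",
    "[ERROR] Upstream returned 500",
]
print(filter_logs(logs, "WARN"))
```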
Performance Tab
View timing and resource metrics:
| Metric | Value |
|---|---|
| Start Time | 10:30:00.123 |
| End Time | 10:30:00.410 |
| Duration | 287ms |
| CPU Usage | 15% |
| Memory Usage | 45MB |
Debug Mode
Enable debug mode for verbose logging and detailed execution information.
Enabling Debug Mode
- Click Settings in the toolbar
- Toggle Debug Mode on
- Run test again
- Review enhanced logs
Debug Information
Enhanced Logs:
```text
[DEBUG] Node: apiCall
[DEBUG] Input variables:
{
  "userId": "12345",
  "endpoint": "/users/12345"
}
[DEBUG] Executing HTTP request
[DEBUG] Request headers: {...}
[DEBUG] Request body: {...}
[DEBUG] Response status: 200
[DEBUG] Response headers: {...}
[DEBUG] Response body: {...}
[DEBUG] Parsing response...
[DEBUG] Output mapped to: {...}
```
Variable Inspection:
- View all variables at each node
- See variable transformations
- Track data flow through pipeline
- Inspect intermediate values
Testing Best Practices
Test Data Preparation
- Realistic Data: Use production-like test data
- Edge Cases: Test boundary conditions
- Invalid Data: Test error handling
- Various Formats: Test different input formats
Example Test Cases:
```jsonc
// Happy path
{
  "userId": "12345",
  "email": "valid@example.com",
  "age": 30
}

// Edge case - minimum values
{
  "userId": "1",
  "email": "a@b.c",
  "age": 18
}

// Edge case - maximum values
{
  "userId": "999999999",
  "email": "very.long.email.address@subdomain.example.com",
  "age": 120
}

// Invalid data
{
  "userId": null,
  "email": "invalid-email",
  "age": -5
}

// Missing fields
{
  "userId": "12345"
  // email and age missing
}
```
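If your workflow includes an input-validation step, it should reject the invalid and missing-field cases above. A minimal sketch of such a check; the rules (email shape, age 18-120) mirror the sample payloads and are illustrative assumptions, not the builder's behavior:

```python
import re

# Loose email shape check for illustration only.
EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate(payload):
    """Return a list of problems; an empty list means the payload passes."""
    errors = []
    if not payload.get("userId"):
        errors.append("userId is required")
    email = payload.get("email")
    if not email or not EMAIL.match(email):
        errors.append("email is missing or malformed")
    age = payload.get("age")
    if not isinstance(age, int) or not 18 <= age <= 120:
        errors.append("age must be an integer between 18 and 120")
    return errors

# Happy path passes; invalid data trips every rule.
assert validate({"userId": "12345", "email": "valid@example.com", "age": 30}) == []
assert len(validate({"userId": None, "email": "invalid-email", "age": -5})) == 3
```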
Testing Scenarios
1. Success Path
- All nodes execute successfully
- Data flows as expected
- Output format is correct
2. Error Handling
- API failures
- Invalid data formats
- Network timeouts
- Authentication errors
3. Conditional Logic
- Test all branches
- Verify routing conditions
- Check default paths
4. Edge Cases
- Empty arrays
- Null values
- Very large datasets
- Special characters
Iterative Testing
- Initial Test: Run with basic data
- Review Results: Check output and logs
- Fix Issues: Update node configurations
- Retest: Run again with fixes
- Expand Tests: Add edge cases
- Final Validation: Test all scenarios
Common Testing Issues
Issue: Node Not Executing
Symptoms:
- Node remains gray during test
- No output data
- No logs
Causes:
- Missing connection from previous node
- Conditional logic skipping node
- Disabled node
Solutions:
- Verify node connections
- Check conditional expressions
- Ensure node is enabled
Issue: Incorrect Output
Symptoms:
- Output doesn't match expected format
- Missing or incorrect data
- Data transformation errors
Causes:
- Wrong variable references
- Incorrect data mapping
- Type conversion issues
Solutions:
- Review variable syntax: `{{ nodeName.field }}`
- Check data types (string vs. number)
- Validate mapping expressions
- Use debug mode to inspect variables
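As an illustration of how `{{ nodeName.field }}` references resolve against earlier node outputs, here is a toy resolver; the builder's actual template syntax and error behavior may differ:

```python
import re

# Matches "{{ node.path.to.field }}" with optional inner whitespace.
REF = re.compile(r"\{\{\s*([\w.]+)\s*\}\}")

def resolve(template, outputs):
    """Replace each {{ node.path }} with the value found in `outputs`."""
    def lookup(match):
        value = outputs
        for part in match.group(1).split("."):
            value = value[part]   # a KeyError here means a bad reference
        return str(value)
    return REF.sub(lookup, template)

# Hypothetical upstream node output.
outputs = {"apiCall": {"data": {"id": "12345"}}}
assert resolve("User {{ apiCall.data.id }} processed", outputs) == "User 12345 processed"
```

A wrong reference fails loudly here, which is the same symptom debug mode surfaces: the variable simply is not where the expression points.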
Issue: API Call Failures
Symptoms:
- Node shows error state
- HTTP error codes
- Timeout errors
Causes:
- Invalid credentials
- Wrong endpoint URL
- Network issues
- Rate limiting
Solutions:
- Verify data source credentials
- Check endpoint URLs
- Add retry logic
- Test with an API testing tool first
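If the builder does not offer a built-in retry setting, retry-with-backoff can be added in a code node. A generic sketch that simulates a flaky endpoint rather than calling a real API; the status codes treated as transient are a common convention, not a requirement:

```python
import time

def with_retries(call, attempts=3, base_delay=0.5, retry_on=(429, 500, 502, 503)):
    """Retry `call` (returns a status code and body) on transient HTTP errors."""
    for attempt in range(attempts):
        status, body = call()
        if status not in retry_on:
            return status, body
        if attempt < attempts - 1:
            time.sleep(base_delay * 2 ** attempt)   # 0.5s, 1s, 2s, ...
    return status, body

# Simulated flaky endpoint: fails twice with 503, then succeeds.
responses = iter([(503, ""), (503, ""), (200, "ok")])
status, body = with_retries(lambda: next(responses), base_delay=0)
assert (status, body) == (200, "ok")
```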
Issue: Slow Execution
Symptoms:
- Long test run duration
- Individual nodes taking too long
- Timeout errors
Causes:
- Large data processing
- Sequential dependencies
- Inefficient queries
- External API latency
Solutions:
- Optimize data transformations
- Use parallel execution
- Add caching
- Batch operations
- Increase timeout values
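The parallel-execution suggestion can be sketched generically: independent, I/O-bound steps (such as calls to separate APIs) can run concurrently instead of one after another. The sleeps below stand in for node work; nothing here is specific to the workflow builder:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(source):
    time.sleep(0.2)          # stand-in for a ~200 ms API call
    return f"{source}-result"

sources = ["users", "orders", "inventory"]

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    results = list(pool.map(fetch, sources))
elapsed = time.perf_counter() - start

assert results == ["users-result", "orders-result", "inventory-result"]
assert elapsed < 0.5         # ~0.2 s in parallel vs ~0.6 s sequentially
```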
Execution Timeline
View the execution timeline to analyze performance:
Waterfall Chart
Visualizes node execution order and duration:
```text
Trigger          |▓|
└─ API Call      | ▓▓▓▓▓|
   └─ Parse      |      ▓|
   └─ AI         |       ▓▓▓▓▓▓▓▓▓▓|
      └─ DB      |                 ▓▓|
└─ Parallel Node | ▓▓▓|
```
Features:
- Identify bottlenecks
- See parallel execution
- Measure node duration
- Detect slow operations
Performance Analysis
| Node | Start | Duration | % of Total |
|---|---|---|---|
| Trigger | 0ms | 2ms | 0.1% |
| API Call | 2ms | 450ms | 22.5% |
| Parse | 452ms | 5ms | 0.25% |
| AI Gateway | 457ms | 1500ms | 75% |
| MongoDB | 1957ms | 45ms | 2.25% |
Optimization Tips:
- Focus on nodes with highest % of total time
- Look for unnecessary sequential dependencies
- Consider caching for repeated operations
- Batch operations where possible
Testing Checklist
Before deploying to production, verify:
Functionality
- All nodes execute successfully
- Data flows correctly through pipeline
- Output format matches requirements
- Error handling works as expected
Data Validation
- Input validation is working
- Output data is correct
- Data transformations are accurate
- Edge cases are handled
Error Handling
- Network failures are handled
- Invalid data is caught
- Retry logic works
- Error messages are clear
Performance
- Execution time is acceptable
- No unnecessary sequential operations
- Parallel execution is used where possible
- Timeouts are appropriate
Security
- Credentials are not exposed
- Input is sanitized
- Output doesn't leak sensitive data
- Authentication is working
Advanced Testing
Load Testing
Test workflow performance under load:
- Create Test Script: Generate multiple requests
- Gradual Increase: Start with low volume
- Monitor Metrics: Watch performance degradation
- Identify Limits: Find breaking points
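A minimal sketch of such a test script, firing increasing batches of concurrent requests and timing each batch. `send_request` is a stand-in; in practice it would POST to your workflow's test webhook URL:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def send_request():
    time.sleep(0.01)         # simulated round trip; replace with a real HTTP call
    return 200

def ramp(batch_sizes=(1, 5, 10)):
    """Run each batch concurrently and report (size, all_ok, elapsed)."""
    report = []
    for size in batch_sizes:
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=size) as pool:
            futures = [pool.submit(send_request) for _ in range(size)]
            statuses = [f.result() for f in futures]
        report.append((size, all(s == 200 for s in statuses),
                       time.perf_counter() - start))
    return report

for size, ok, elapsed in ramp():
    print(f"batch={size:3d} ok={ok} elapsed={elapsed:.3f}s")
```

Watch for the batch size at which latency climbs or failures appear; that is the breaking point to investigate.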
Integration Testing
Test with real external systems:
- Test Environment: Use sandbox/test accounts
- Real Data: Use production-like data
- End-to-End: Test complete workflow
- Verify Results: Check external system state
Regression Testing
Test after making changes:
- Baseline: Record initial test results
- Make Changes: Update workflow
- Retest: Run same tests
- Compare: Verify no regressions
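Baseline comparison can be automated with a small structural diff; the payloads below reuse the node-output shape shown earlier on this page and are illustrative only. In practice the baseline would be saved to a file after the first run:

```python
import json

def diff(baseline, current, path=""):
    """Return the dotted paths at which two outputs disagree."""
    if isinstance(baseline, dict) and isinstance(current, dict):
        mismatches = []
        for key in sorted(set(baseline) | set(current)):
            mismatches += diff(baseline.get(key), current.get(key),
                               f"{path}.{key}".lstrip("."))
        return mismatches
    return [] if baseline == current else [path]

baseline = json.loads('{"status": "success", "data": {"id": "12345", "processed": true}}')
current  = json.loads('{"status": "success", "data": {"id": "12345", "processed": false}}')
assert diff(baseline, current) == ["data.processed"]
```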
Next Steps
Once your workflow is thoroughly tested, it is ready for deployment. Invest the time up front: well-tested workflows are more reliable and easier to maintain.