Deploying Workflows
Once your workflow is tested, deploy it to make it available for execution. Deployment provisions the infrastructure needed to run your workflow.
Deployment Process
Step 1: Prepare for Deployment
Before deploying, ensure:
- All nodes are configured correctly
- Workflow has been tested with representative inputs
- Data source credentials are valid
- Required add-ons are running
- AI Gateway models are accessible (if using AI nodes)
Step 2: Configure Deployment
Click the Deploy button in the workflow builder toolbar. Configure deployment settings:
| Setting | Description | Default |
|---|---|---|
| Environment | Custom environment ID (Docker image or Dockerfile) | custom (default base image) |
| CPU | CPU allocation for the worker pod | 0.5 |
| Memory | Memory allocation | 1GB |
| Disk | Disk allocation | 5GB |
| GPU | GPU allocation (for ML workloads) | 0 |
| GPU Type | Specific GPU type if GPU > 0 | None |
Step 3: Deploy
Click Deploy Workflow. The deployment process:
- Undeploys the existing deployment if `forceRedeploy` is set and a deployment already exists
- Generates STRONGLY_SERVICES dynamically based on workflow node dependencies (data sources, add-ons, AI models)
- Resolves the custom environment if `environmentId` is set (Docker image or Dockerfile)
- Provisions infrastructure for the workflow in the organization's namespace
- Updates the workflow document with deployment metadata
- Sets workflow status to `active`
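The deployment settings and their documented defaults can be sketched as a small config builder. This is an illustrative helper, not the platform's actual API; the field names are assumptions based on the table above.

```javascript
// Build a deployment configuration, applying the documented defaults.
// Field names are illustrative; the real deploy API may differ.
function buildDeploymentConfig(overrides = {}) {
  const config = {
    environmentId: 'custom', // default base image
    cpu: 0.5,                // CPU allocation for the worker pod
    memory: '1GB',
    disk: '5GB',
    gpu: 0,
    gpuType: null,
    forceRedeploy: false,
    ...overrides,
  };
  // GPU Type is required whenever GPU > 0.
  if (config.gpu > 0 && !config.gpuType) {
    throw new Error('gpuType is required when gpu > 0');
  }
  return config;
}
```

Passing only the values you want to change keeps the documented defaults for everything else.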
Deployment Result
After successful deployment, the workflow document is updated with:
```
status: 'active'
deploymentEnvironmentId: <environmentId>
deploymentCpu: <cpu>
deploymentMemory: <memory>
deploymentDisk: <disk>
deploymentGpu: <gpu>
deploymentNamespace: <organizationId>
deploymentName: <deployment_name>
deploymentPodName: <pod_name>
deployedAt: <timestamp>
```
For webhook triggers, the deployed workflow receives a webhook URL for external systems to call.
For schedule triggers, the scheduler begins firing at the configured cron interval.
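As an illustration, an external system calling the webhook URL would send a request along these lines. The URL shape is whatever deployment returned; the `X-Organization-ID` header is taken from the execution requirements described later in this document, and the helper itself is hypothetical.

```javascript
// Construct the HTTP request an external system would send to a deployed
// webhook trigger. The helper and payload shape are illustrative.
function buildWebhookRequest(webhookUrl, organizationId, payload) {
  return {
    method: 'POST',
    url: webhookUrl,
    headers: {
      'Content-Type': 'application/json',
      'X-Organization-ID': organizationId, // organization context for the execution
    },
    body: JSON.stringify(payload),
  };
}
```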
Environments
Workflows use a custom `environmentId` to specify the runtime environment. There are no fixed "Development" or "Production" environments -- instead, each environment ID maps to a custom Docker configuration.
Default Environment
When `environmentId` is set to `custom` (the default), the workflow uses the platform's standard base image with all common Python dependencies pre-installed.
Custom Environments
Custom environments allow you to specify your own runtime:
- Pre-built Docker image: If the environment has a built image (`build_status: 'ready'`), it is used as the base image
- Dockerfile: If the environment has a Dockerfile but no built image, the Dockerfile is sent to the controller for building
- Base image reference: If the environment specifies a `base_image`, that image is used directly
Custom environments are managed in the Environments section and can include additional Python packages, system libraries, or custom configurations.
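The resolution priority above can be sketched as a single function. The shape of the `env` object here is an assumption for illustration; the real environment document may use different field names.

```javascript
// Resolve how an environment maps to a runtime image, following the
// priority described above: built image, then Dockerfile, then base image.
function resolveEnvironment(env) {
  if (!env || env.id === 'custom') {
    return { type: 'default' }; // platform's standard base image
  }
  if (env.build_status === 'ready' && env.built_image) {
    return { type: 'image', image: env.built_image }; // pre-built image wins
  }
  if (env.dockerfile) {
    return { type: 'dockerfile', dockerfile: env.dockerfile }; // controller builds it
  }
  if (env.base_image) {
    return { type: 'image', image: env.base_image }; // direct base image reference
  }
  throw new Error(`Environment ${env.id} has no image, Dockerfile, or base image`);
}
```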
GPU Support
For workflows that require GPU resources (e.g., ML inference, image generation):
- Set GPU to the number of GPUs needed (e.g., `1`)
- Set GPU Type to the specific GPU type (e.g., `nvidia-a100`)
- The workflow is scheduled on a GPU-enabled node group
Undeployment
To undeploy a workflow:
- Click the Undeploy button on the workflow details page
- The system tears down the provisioned infrastructure
- Workflow document is updated: deployment fields cleared, status set to `paused`
Undeployment removes all provisioned infrastructure. The workflow definition and execution history are preserved.
When redeploying (force redeploy), the system undeploys the existing deployment first, then creates a new one. There is a brief period where the workflow is not available. This is by design to ensure clean resource management.
Execution Infrastructure
Deployed workflows use pre-provisioned compute resources for fast execution dispatch. When an execution is triggered, work is dispatched to an available resource with the workflow's runtime environment and dependencies pre-loaded.
STRONGLY_SERVICES
The STRONGLY_SERVICES environment variable is generated dynamically for each execution based on the workflow's node dependencies:
- AI Gateway models: Base AI Gateway configuration is always included, plus any explicitly selected models
- Data sources: Database connection details for source/destination nodes
- Add-ons: Add-on service endpoints and credentials
- Service discovery: Automatically scans workflow nodes and builds the service configuration
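A minimal sketch of how the service configuration might be assembled from the workflow's nodes, assuming a simple node shape (`type`, `dataSourceId`, `addonId`, `model`) and a service registry; both are illustrative, not the platform's actual data model.

```javascript
// Assemble a STRONGLY_SERVICES-style configuration by scanning workflow
// nodes. Node and registry shapes are assumptions for illustration.
function buildStronglyServices(workflow, registry) {
  const services = {
    aiGateway: { ...registry.aiGateway, models: [] }, // base config always included
    dataSources: {},
    addons: {},
  };
  for (const node of workflow.nodes) {
    if (node.type === 'source' || node.type === 'destination') {
      // Database connection details for source/destination nodes
      services.dataSources[node.dataSourceId] = registry.dataSources[node.dataSourceId];
    } else if (node.type === 'addon') {
      // Add-on service endpoints and credentials
      services.addons[node.addonId] = registry.addons[node.addonId];
    } else if (node.type === 'ai' && node.model) {
      services.aiGateway.models.push(node.model); // explicitly selected models
    }
  }
  return services;
}
```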
Version Management
Versioning
Workflow versions use a simple integer counter that auto-increments on each save:
```
version: 1 → (save) → version: 2 → (save) → version: 3
```
The version field is set to 1 when a workflow is created and increments with each update. There is no semantic versioning (MAJOR.MINOR.PATCH).
Versions are not independently deployable snapshots -- only the current state of the workflow is deployed.
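The counter behavior can be sketched in a few lines. Persistence is omitted; the function names are illustrative, not the platform's API.

```javascript
// Integer version counter: set to 1 on create, incremented on every save.
function createWorkflow(fields) {
  return { ...fields, version: 1 };
}

function saveWorkflow(workflow, changes) {
  // Every update bumps the counter; no MAJOR.MINOR.PATCH semantics.
  return { ...workflow, ...changes, version: workflow.version + 1 };
}
```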
Sharing
Share Workflow
Workflows use a simple sharedWith array of user IDs:
```javascript
// Share with a user
workflows.share(workflowId, userId)
// → $addToSet: { sharedWith: userId }

// Unshare with a user
workflows.unshare(workflowId, userId)
// → $pull: { sharedWith: userId }

// Get shared users
workflows.getSharedUsers(workflowId)
// → Returns user details for all IDs in sharedWith array
```
Access control:
- The workflow `ownerId` has full access
- Users in the `sharedWith` array have read and write access
- In multi-tenant mode, sharing is restricted to users in the same organization (validated via `canShareWith()`)
- There are no granular permission levels (Viewer/Editor/Admin) -- shared users get write access
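The access rules above reduce to a short check. The field names (`ownerId`, `sharedWith`, `isPublic`, `organizationId`) follow the document; the functions themselves are an illustrative sketch, not the platform's implementation.

```javascript
// Owner and sharedWith users get full read/write access.
function canWrite(workflow, userId) {
  return workflow.ownerId === userId ||
    (workflow.sharedWith || []).includes(userId);
}

// Public workflows are additionally readable by anyone in the same organization.
function canRead(workflow, user) {
  return canWrite(workflow, user.id) ||
    (workflow.isPublic === true && workflow.organizationId === user.organizationId);
}
```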
Public Workflows
Workflows can be marked as `isPublic: true` to make them visible to all users in the organization.
Cloning
Clone a workflow to create an independent copy:
```javascript
workflows.clone(workflowId, { name: 'My Clone' })
```
Cloning:
- Deep-copies nodes, connections, and settings
- Resets status to `draft`
- Clears all deployment metadata
- Sets the current user as owner
- Resets `sharedWith` to an empty array
- Resets execution count to 0
- Preserves tags from the source workflow
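The clone behavior above can be sketched as follows. Field names follow the document; the function and the JSON-based deep copy are an illustrative sketch, not the platform's implementation.

```javascript
// Clone a workflow: deep-copy the definition, reset status, ownership,
// sharing, and counters, and preserve tags.
function cloneWorkflow(source, { name, userId }) {
  // Deep-copy so edits to the clone never touch the source workflow.
  const definition = JSON.parse(JSON.stringify({
    nodes: source.nodes,
    connections: source.connections,
    settings: source.settings,
  }));
  return {
    ...definition,
    name,
    status: 'draft',                 // reset to draft
    ownerId: userId,                 // current user becomes owner
    sharedWith: [],                  // sharing reset
    executionCount: 0,               // execution count reset
    tags: [...(source.tags || [])],  // tags preserved
    // deployment metadata (deploymentName, deployedAt, ...) intentionally absent
  };
}
```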
Templates
Create Template from Workflow
Save a workflow as a reusable template:
```javascript
workflows.createTemplate(workflowId)
```
This creates a new workflow document with `isTemplate: true` and a reference to the source workflow via `templateId`.
Create Workflow from Template
Create a new workflow based on a template:
```javascript
workflows.createFromTemplate(templateId, { name: 'My New Workflow' })
```
The new workflow copies the template's nodes, connections, and settings, then sets `isTemplate: false` and resets ownership to the current user.
Browse Templates
Use `workflows.getTemplates()` to list available templates. Templates respect the same access control as regular workflows (owned, shared, or public templates are visible).
Monitoring Executions
Execution List
View all executions for a workflow:
- Navigate to the workflow detail page
- Click the Executions tab
- Filter by status, date range, or trigger type
Each execution shows:
- Execution ID
- Status (`pending`, `running`, `completed`, `failed`)
- Start time (`started_at`) and end time (`ended_at`)
- Duration
- User who triggered the execution
Execution Details
Click an execution to view:
- Span tree: Hierarchical view of all node execution traces
- Node outputs: Data produced by each node
- Errors: Error messages and details for failed nodes
- Timing: Duration per node and total execution time
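The span tree can be reconstructed from flat trace records with parent references. The span shape (`id`, `parentId`, `name`, `durationMs`) is an assumption for illustration; the actual trace schema may differ.

```javascript
// Build a hierarchical span tree from flat execution traces.
function buildSpanTree(spans) {
  // Wrap each span with a children array, indexed by id.
  const byId = new Map(spans.map(s => [s.id, { ...s, children: [] }]));
  const roots = [];
  for (const span of byId.values()) {
    const parent = span.parentId ? byId.get(span.parentId) : null;
    // Spans with no (or unknown) parent become roots of the tree.
    (parent ? parent.children : roots).push(span);
  }
  return roots;
}
```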
Execution Summaries
The `execution-summary-sync` module maintains a fast-loading summary table for the execution list, synced after each execution status change.
Managing Deployments
Workflow Statuses
| Status | Description |
|---|---|
| draft | Initial state, not deployed |
| active | Deployed and running |
| paused | Undeployed, preserves configuration |
| archived | No longer in active use |
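One way to read the lifecycle above is as a transition table. This is an inference from the statuses described in this document, not a documented state machine; the actual platform may permit other transitions.

```javascript
// Illustrative status transitions: deploy moves draft/paused to active,
// pause/undeploy moves active to paused, archived is terminal.
const TRANSITIONS = {
  draft: ['active', 'archived'],
  active: ['paused'],
  paused: ['active', 'archived'],
  archived: [],
};

function canTransition(from, to) {
  return (TRANSITIONS[from] || []).includes(to);
}
```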
Pause Workflow
Pausing a workflow undeploys the workflow infrastructure and sets status to `paused`.
Delete Workflow
Deleting a workflow:
- Undeploys infrastructure first
- Removes associated data: status records, logs, sessions, executions, and execution traces
- Removes the workflow document from the database
Deleting a workflow removes all versions, execution history, and execution traces. This cannot be undone. Consider pausing instead if you may need the workflow later.
Automatic Retry
Failed execution submissions are automatically retried:
- Configurable retries: Default 3 attempts, configurable via workflow settings
- Automatic retry: Failed submissions are retried automatically
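A minimal retry sketch with the documented default of 3 attempts. The `submit` callback stands in for whatever submits the execution; the helper itself is illustrative.

```javascript
// Retry a failing async submission up to maxAttempts times (default 3),
// rethrowing the last error if every attempt fails.
async function submitWithRetry(submit, maxAttempts = 3) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await submit(attempt);
    } catch (err) {
      lastError = err; // remember the failure and try again
    }
  }
  throw lastError;
}
```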
Troubleshooting
Deployment Fails
Possible causes:
- Workflow controller URL not configured (`WORKFLOW_CONTROLLER_URL` env var or settings)
- Custom environment image not built or not found
- Insufficient cluster resources (CPU, memory, GPU)
- Workflow definition exceeds maximum size
Solutions:
- Verify workflow controller URL in environment or settings
- Check custom environment build status
- Review cluster resource availability
- Reduce workflow definition size if it exceeds maximum size
Workflow Not Receiving Webhooks
Possible causes:
- Workflow not deployed (status is not `active`)
- Webhook URL not configured in the external system
- Network connectivity between external system and cluster
Solutions:
- Verify workflow is deployed and status is `active`
- Check the webhook URL provided after deployment
- Test connectivity to the webhook endpoint
Execution Fails Immediately
Possible causes:
- STRONGLY_SERVICES generation failed (missing data source or add-on)
- Worker pod is not ready
- Organization context missing (no `X-Organization-ID` header)
Solutions:
- Review execution logs for STRONGLY_SERVICES errors
- Check worker pod status in Kubernetes
- Verify the workflow has a valid `organizationId`
Slow Executions
Possible causes:
- External API latency (data sources, AI models)
- Large data volumes
- Sequential node dependencies that could be parallelized
Solutions:
- Review span timing to identify bottleneck nodes
- Use `parallel-branch` for independent operations
- Optimize database queries in source nodes
- Increase worker pod resources (CPU, memory)
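Reviewing span timing can be as simple as picking the span with the largest duration. The span shape (`nodeId`, `durationMs`) is an assumption; the helper is illustrative.

```javascript
// Return the slowest span to locate the bottleneck node in an execution.
function slowestSpan(spans) {
  if (spans.length === 0) throw new Error('no spans to compare');
  return spans.reduce((max, s) => (s.durationMs > max.durationMs ? s : max));
}
```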
Production Checklist
Pre-Deployment
- Tested with realistic data
- All data source credentials verified
- Error handling nodes in place
- AI Gateway models accessible
- Custom environment image built (if using custom environment)
Deployment
- Environment and resource allocation configured
- Deployment succeeded (status is `active`)
- Webhook URL distributed to external systems (if webhook trigger)
- Schedule verified (if schedule trigger)
Post-Deployment
- First execution completed successfully
- Execution traces show expected node flow
- Execution time is acceptable
- Error scenarios handled gracefully