Creating Workflows
Strongly.ai provides two methods for creating workflows: the STAN AI assistant, which builds workflows conversationally through natural language, and the visual workflow builder, which provides a drag-and-drop canvas interface. Both methods produce the same workflow structure and can be used interchangeably -- a workflow created by STAN can be edited in the visual builder, and vice versa.
Method 1: STAN AI Assistant (MCP)
STAN is an AI-powered workflow builder integrated into the platform. It uses the Model Context Protocol (MCP) to discover available nodes, create workflows, add and configure nodes, connect them, and save or deploy the result -- all through conversational interaction.
Starting a Conversation
- Open the STAN assistant panel from the platform interface
- Describe the workflow you want to build in natural language
- STAN will create the workflow step by step, confirming each action
How STAN Builds Workflows
STAN follows a structured process using MCP tools:
- Discover nodes -- STAN checks the available node catalog to find the right node types for your use case. It validates each node type before adding it.
- Create the workflow -- `create_workflow` initializes a new workflow with a name and description.
- Add nodes -- `add_node` adds each node to the workflow canvas, using exact node types from the catalog (e.g., `webhook`, `s3`, `pdf-parser`, `ai-gateway`).
- Connect nodes -- `connect_nodes` creates data flow connections between nodes using source and target port names.
- Configure nodes -- `configure_node` sets node-specific settings. STAN uses service discovery tools to find your available AI models, add-ons, and data sources before configuring nodes that require them.
- Set input mappings -- `set_input_mapping` maps data from one node's output to another node's input using JSONPath syntax.
- Validate and save -- `validate_workflow` checks for issues, then `finalize_workflow` saves and optionally deploys the workflow.
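This sequence can be sketched as a short Python script against an in-memory mock client. The tool names (`create_workflow`, `add_node`, `connect_nodes`) and the 18-character name limit are from this document; the `MockStanClient` class and its call signatures are illustrative stand-ins, not the real MCP API.

```python
# Illustrative sketch of the MCP tool sequence STAN follows.
# MockStanClient is a hypothetical stand-in for the real MCP client.
import uuid

class MockStanClient:
    def __init__(self):
        self.workflows = {}

    def create_workflow(self, name, description=""):
        # STAN-created workflow names are capped at 18 characters
        assert len(name) <= 18, "workflow name too long"
        wf_id = str(uuid.uuid4())
        self.workflows[wf_id] = {"name": name, "description": description,
                                 "nodes": [], "connections": []}
        return wf_id

    def add_node(self, wf_id, node_type):
        # Instance IDs follow the documented "<type>-<suffix>" shape
        node_id = f"{node_type}-{uuid.uuid4().hex[:6]}"
        self.workflows[wf_id]["nodes"].append({"id": node_id, "type": node_type})
        return node_id

    def connect_nodes(self, wf_id, source, target,
                      source_port="output", target_port="input"):
        self.workflows[wf_id]["connections"].append(
            {"id": f"{source}-{target}", "source": source, "target": target,
             "sourcePort": source_port, "targetPort": target_port})

client = MockStanClient()
wf = client.create_workflow("S3 PDF Extract", "Read PDFs from S3, extract text")
s3 = client.add_node(wf, "s3")
parser = client.add_node(wf, "pdf-parser")
client.connect_nodes(wf, s3, parser)
```

In a real session, STAN would follow this with `configure_node`, `validate_workflow`, and `finalize_workflow` calls.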
STAN MCP Tools Reference
The following tools are available to STAN during workflow creation:
Node Discovery
| Tool | Description |
|---|---|
validate_node_type | Verify a node type exists before adding it. Returns suggestions if not found. |
search_nodes | Search for nodes by keyword or category. Categories include: triggers, sources, transform, ai, evaluation, memory, agents, control-flow, destinations, tools, utilities. |
get_node_schema | Get the full configuration schema for a node type, including input/output definitions and available config fields. |
Workflow Management
| Tool | Description |
|---|---|
create_workflow | Create a new empty workflow with a name (max 18 characters) and description. Returns a workflowId. |
rename_workflow | Rename an existing workflow. |
save_workflow | Save workflow changes without deploying. |
finalize_workflow | Save and optionally deploy. Pass deploy: true to make the workflow active. |
Node Management
| Tool | Description |
|---|---|
add_node | Add a node to the workflow by its type (e.g., webhook, ai-gateway). Returns a nodeId. |
remove_node | Remove a node and all its connections. |
configure_node | Update a node's configuration. Only fields defined in the node's editor schema are accepted; unknown fields are rejected. |
set_input_mapping | Map data between nodes using JSONPath expressions. |
move_node | Reposition a node on the canvas. |
Connections
| Tool | Description |
|---|---|
connect_nodes | Create a connection from a source node's output port to a target node's input port. |
disconnect_nodes | Remove a connection between two nodes. |
Workflow State and Testing
| Tool | Description |
|---|---|
get_workflow_summary | View the current workflow state including all nodes and connections. |
validate_workflow | Check for issues: missing trigger nodes, disconnected nodes, missing required configuration. |
test_workflow | Run a test execution with optional sample inputs. Returns an execution ID. |
get_execution_status | Check execution progress and node statuses. |
get_node_output | Get the output data from a specific node in an execution. |
stop_execution | Stop a running execution and clean up resources. |
Service Discovery
| Tool | Description |
|---|---|
list_user_models | List available AI models. Required before configuring ai-gateway nodes. |
list_user_addons | List managed add-ons (databases, caches). Required before configuring nodes with connectionType: "addon". |
list_user_datasources | List external data source connections. Required before configuring nodes with connectionType: "datasource". |
Layout
| Tool | Description |
|---|---|
auto_layout | Automatically arrange all nodes in a left-to-right DAG layout. |
Example STAN Conversation
User: "Build a workflow that reads PDFs from S3 and extracts text"
STAN: I'll create that workflow for you. Let me set it up step by step.
1. Creating workflow "S3 PDF Extract"...
2. Adding S3 source node...
3. Adding PDF Parser node...
4. Connecting S3 Source -> PDF Parser...
5. Let me check your available S3 data sources...
[calls list_user_datasources("s3")]
6. Configuring S3 source with your data source...
7. Workflow saved! View at /workflow-builder/<id>
Method 2: Visual Workflow Builder
The visual builder provides a canvas-based interface for assembling workflows by hand.
Step 1: Create a New Workflow
- Click Workflows in the main navigation
- Click Create Workflow
- Enter a workflow name and optional description
- You are taken to the visual workflow builder canvas
The builder interface includes:
- A canvas for positioning and connecting nodes
- A node palette on the left listing available node types by category
- A configuration panel on the right for editing node settings
- A toolbar with save, validate, deploy, and test options
Step 2: Add Nodes
From the node palette, select a node type and add it to the canvas. Node categories include:
| Category | Examples |
|---|---|
| Triggers | Webhook, Schedule, REST API, Form |
| Sources | S3, MySQL Source, PostgreSQL Source, MongoDB Source, REST API |
| Transform | Code, Filter, Map, Merge, Router |
| AI | AI Gateway, Entity Extraction, React Agent |
| Control Flow | Switch-Case, Loop, Parallel Branch |
| Destinations | S3, MongoDB Dest, PostgreSQL Dest, Webhook Response, Email |
| Tools | MCP servers, external tool integrations |
| Utilities | Various helper nodes |
Every workflow must include at least one trigger node, which determines how the workflow is initiated (HTTP request, scheduled interval, form submission, etc.).
Step 3: Connect Nodes
Connect nodes to define the data flow:
- Click on the output port (right side) of a source node
- Drag to the input port (left side) of a target node
- Release to create the connection
Connections have labeled ports. The default ports are `output` and `input`, but some nodes expose additional ports:
- Switch-Case nodes have one output port per branch
- Parallel Branch nodes distribute to multiple paths
- AI Agent nodes may have `ai` and `tools` ports
After connecting nodes, use the Auto Layout button in the toolbar to arrange all nodes in a clean left-to-right flow. STAN's auto_layout tool does the same thing programmatically.
Step 4: Configure Nodes
Click any node to open its configuration panel. Configuration options vary by node type.
Node Configuration Structure
Each node has a definition from the node catalog. The definition includes:
- Type -- The node type identifier (e.g., `ai-gateway`, `pdf-parser`, `mysql-source`)
- Label -- Display name shown on the canvas
- Category -- The functional category
- Editor config fields -- The available configuration fields with types, defaults, and validation rules
- Default data -- Default configuration values from the node definition
- Input/Output definitions -- Schema describing what data the node accepts and produces
When you configure a node, the configuration is stored in the node's data object. The final configuration is determined by a merge order:
`default_config` (from node definition) -> `node_data` -> `node_config` (user settings win)
This means user-provided settings always take precedence over defaults.
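The merge order can be illustrated with a plain shallow dict merge in Python (illustrative only; the platform's actual merge may be deeper than this sketch):

```python
# Later dicts win: default_config -> node_data -> node_config
default_config = {"defaultTemperature": 0.2, "defaultMaxTokens": 500}  # from node definition
node_data = {"defaultMaxTokens": 1000}                                 # stored node data
node_config = {"defaultTemperature": 0.7}                              # user settings

final = {**default_config, **node_data, **node_config}
```

The user's `defaultTemperature` overrides the catalog default, while the node-data `defaultMaxTokens` survives because the user never set it.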
Common Configuration Examples
AI Gateway Node
```json
{
  "model": "<model_id from list_user_models>",
  "defaultTemperature": 0.7,
  "defaultMaxTokens": 1000,
  "defaultSystemPrompt": "You are a helpful assistant",
  "defaultUserPrompt": "Analyze the following data:"
}
```
MySQL Source Node
```json
{
  "connectionType": "addon",
  "addonId": "<addon_id from list_user_addons>",
  "query": "SELECT * FROM customers WHERE active = 1",
  "limit": 1000
}
```
S3 Source Node
```json
{
  "dataSourceId": "<datasource_id from list_user_datasources>",
  "operation": "download",
  "prefix": "invoices/2025/",
  "maxFiles": 100
}
```
REST API Node
```json
{
  "url": "https://api.example.com/data",
  "method": "GET",
  "timeout": 30,
  "authType": "bearer",
  "authConfig": { "token": "..." }
}
```
Error Handling
Nodes support a continueOnError flag that allows the workflow to continue executing subsequent nodes even if the current node fails. This is the only error handling mechanism at the node level -- there are no per-node fallback values, error notification settings, or retry configuration beyond the workflow-level settings.
Workflow-level settings include:
- `timeout` -- Maximum execution time in milliseconds (default: 300,000 ms / 5 minutes)
- `retryAttempts` -- Number of retry attempts (default: 3)
- `logLevel` -- Logging verbosity (`info`, `debug`, `error`)
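A settings object populated with the documented defaults might look like this (the surrounding `settings` key matches the workflow data model below):

```json
{
  "settings": {
    "timeout": 300000,
    "retryAttempts": 3,
    "logLevel": "info"
  }
}
```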
Step 5: Set Input Mappings
Input mappings define how data flows from one node's output to another node's input. Mappings use JSONPath syntax to reference fields from upstream node outputs.
JSONPath Syntax
Mappings are stored in a node's inputMappings field as key-value pairs where:
- The key is the input field name on the target node
- The value is a JSONPath expression referencing a source node's output
```json
{
  "inputMappings": {
    "text": "$.output.rows",
    "metadata": "$.output.metadata.count",
    "filename": "$.output.files[0].name"
  }
}
```
The JSONPath expressions reference the output data structure of connected upstream nodes. The $ symbol represents the root of the upstream node's output object.
The platform does not use template variable syntax like {{ nodeName.field }}. All data flow between nodes is configured through inputMappings with JSONPath expressions, or by using the set_input_mapping MCP tool.
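To make the mapping semantics concrete, here is a minimal resolver for the JSONPath subset shown above (`$.a.b[0].c`). This is an illustrative sketch only; the platform's actual JSONPath support is richer than this.

```python
# Minimal resolver for dotted/indexed paths like "$.output.files[0].name".
# Illustrative only -- not the platform's JSONPath engine.
import re

def resolve(path, data):
    """Walk a path rooted at $ (the upstream node's output object)."""
    assert path.startswith("$"), "paths are rooted at the upstream output"
    current = data
    # Tokens are either dotted keys or [index] accessors
    for part in re.findall(r"[^.\[\]$]+|\[\d+\]", path):
        if part.startswith("["):
            current = current[int(part[1:-1])]   # array index
        else:
            current = current[part]              # object key
    return current

upstream_output = {
    "output": {
        "rows": [{"id": 1}, {"id": 2}],
        "metadata": {"count": 2},
        "files": [{"name": "invoice.pdf"}],
    }
}

mappings = {
    "text": "$.output.rows",
    "metadata": "$.output.metadata.count",
    "filename": "$.output.files[0].name",
}
inputs = {field: resolve(expr, upstream_output) for field, expr in mappings.items()}
```

Each key of `inputs` is a target-node input field, populated from the upstream node's output, mirroring what `set_input_mapping` configures.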
Step 6: Validate and Save
Before saving, validate the workflow to check for issues:
Validation checks include:
- At least one trigger node exists
- All non-trigger nodes are connected
- Required configuration fields are populated
Save process:
When a workflow is saved (via the Save button or workflows.update method), the following steps occur:
- Execution data stripped -- Output data, status fields, and progress indicators from any previous test runs are removed from nodes
- Node references normalized -- Each node's `_id` and `templateId` are resolved against the node catalog to ensure they reference the correct system node definitions
- Configuration merged -- Component configurations are automatically merged into an enhanced workflow definition used at execution time
- Scopes computed -- For control flow nodes (loops, switch-case, parallel branches), scope boundaries are calculated automatically from nodes and connections. Scopes are stored in the workflow document and used at execution time
- Version incremented -- The workflow version number is incremented on each update
- Timestamp updated -- The `lastUpdated` field is set to the current time
- Workflow persisted -- The workflow document is saved to MongoDB
Workflow Data Model
A workflow document contains the following fields:
| Field | Type | Description |
|---|---|---|
name | String | Workflow name (max 18 characters for STAN-created workflows) |
description | String | Description of what the workflow does |
status | String | Current status: draft, saved, active, paused, archived |
nodes | Array | List of node instances with their configuration |
connections | Array | List of connections between nodes |
scopes | Object | Pre-computed control flow scope boundaries |
tags | Array | Tags for categorization and filtering |
settings | Object | Workflow-level settings (timeout, retryAttempts, logLevel) |
version | Number | Auto-incremented version number |
ownerId | String | User ID of the workflow owner |
organizationId | String | Organization ID for multi-tenant isolation |
sharedWith | Array | List of user IDs the workflow is shared with |
isTemplate | Boolean | Whether this workflow is a template |
isPublic | Boolean | Whether this workflow is publicly visible |
createdAt | Date | Creation timestamp |
lastUpdated | Date | Last modification timestamp |
Node Structure
Each node in the nodes array has:
| Field | Type | Description |
|---|---|---|
id | String | Unique instance ID (e.g., webhook-abc123) |
type | String | Node type from the catalog (e.g., webhook, ai-gateway) |
category | String | Node category (e.g., triggers, ai, sources) |
label | String | Display name on the canvas |
icon | String | Icon identifier |
color | String | Node color on the canvas |
position | Object | Canvas position with x and y coordinates |
data | Object | Node configuration including inputMappings |
Connection Structure
Each connection in the connections array has:
| Field | Type | Description |
|---|---|---|
id | String | Connection ID (format: sourceNodeId-targetNodeId) |
source | String | Source node instance ID |
target | String | Target node instance ID |
sourcePort | String | Output port name (default: output) |
targetPort | String | Input port name (default: input) |
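Putting the two tables together, a minimal `nodes` and `connections` fragment could look like the following (IDs, positions, and labels are illustrative):

```json
{
  "nodes": [
    { "id": "webhook-abc123", "type": "webhook", "category": "triggers",
      "label": "Webhook", "position": { "x": 100, "y": 200 }, "data": {} },
    { "id": "ai-gateway-def456", "type": "ai-gateway", "category": "ai",
      "label": "AI Gateway", "position": { "x": 400, "y": 200 }, "data": {} }
  ],
  "connections": [
    { "id": "webhook-abc123-ai-gateway-def456",
      "source": "webhook-abc123", "target": "ai-gateway-def456",
      "sourcePort": "output", "targetPort": "input" }
  ]
}
```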
Execution Order
The execution order of nodes is determined automatically based on the connections between them. Nodes execute after all their upstream dependencies have completed. There is no manual dependency configuration -- the graph structure defined by connections is the sole determinant of execution order.
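Dependency-driven ordering of this kind is a topological sort of the connection graph. The sketch below uses Kahn's algorithm over the documented connection shape; the scheduler itself is an illustrative assumption, not the platform's implementation.

```python
# A node runs only after all of its upstream nodes have completed.
# Illustrative Kahn's topological sort over the documented connection shape.
from collections import deque

def execution_order(nodes, connections):
    indegree = {n: 0 for n in nodes}
    downstream = {n: [] for n in nodes}
    for conn in connections:
        indegree[conn["target"]] += 1
        downstream[conn["source"]].append(conn["target"])
    # Nodes with no upstream dependencies (triggers) are ready first
    ready = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for nxt in downstream[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    return order

order = execution_order(
    ["webhook-1", "rest-api-1", "ai-gateway-1", "response-1"],
    [{"source": "webhook-1", "target": "rest-api-1"},
     {"source": "rest-api-1", "target": "ai-gateway-1"},
     {"source": "ai-gateway-1", "target": "response-1"}])
```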
For control flow nodes:
- Switch-Case routes data to one or more branches based on conditions. Inactive branches are automatically detected, and downstream nodes on inactive branches are skipped.
- Loop nodes iterate over arrays, executing their scoped child nodes for each item.
- Parallel Branch nodes execute multiple paths concurrently.
Common Workflow Patterns
Data Retrieval and Processing
Webhook -> REST API Source -> AI Gateway -> Webhook Response
Document Processing
S3 Source -> PDF Parser -> AI Gateway (Entity Extraction) -> MongoDB Dest
Conditional Routing
Webhook -> Switch-Case -> [Branch A] -> Email
-> [Branch B] -> Slack
-> [Default] -> Log
Database ETL with Per-Row Processing
Schedule -> PostgreSQL Source -> Loop -> AI Gateway -> MongoDB Dest
Parallel API Aggregation
Webhook -> Parallel Branch -> REST API 1 -> Merge -> Webhook Response
-> REST API 2 ->
Best Practices
Naming
- Use descriptive workflow names that convey purpose (e.g., "Invoice Processor" not "Workflow 1")
- Workflow names are limited to 18 characters when created through STAN
Service Discovery
- Always use `list_user_models`, `list_user_addons`, or `list_user_datasources` before configuring nodes that connect to external services
- This ensures you reference valid, active service connections
Configuration Validation
- The `configure_node` tool validates fields against the node's editor schema and rejects unknown fields
- Use `get_node_schema` to understand what fields are available before configuring
Testing
- Use `test_workflow` or the Test Run button to execute the workflow with sample data before deploying
- Use `get_execution_status` and `get_node_output` to inspect results and debug issues