
Workflow Examples

Practical workflow examples demonstrating common patterns with correct node IDs, connection structures, and configuration approaches. All node types referenced here are from the actual workflow node catalog.

Example 1: MySQL to S3 Data Export

Use Case: Export data from a MySQL database to S3 as a CSV file on a daily schedule.

Workflow Structure

schedule → mysql-source → to-file → s3-dest

Node Configuration

schedule (trigger):

{
  "cronExpression": "0 0 * * *",
  "timezone": "America/New_York",
  "description": "Daily at midnight"
}

mysql-source (source):

{
  "connectionType": "datasource",
  "dataSourceId": "<your-mysql-datasource-id>",
  "query": "SELECT id, name, email, created_at FROM customers WHERE created_at >= DATE_SUB(NOW(), INTERVAL 1 DAY)"
}

Input mapping: receives the trigger output from the schedule node.
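
For reference, the shape downstream nodes will query with JSONPath: a plausible result envelope for this query, assuming rows land under $.output.rows as in the mapping conventions used throughout these examples (the exact envelope may differ by platform version), is:

{
  "output": {
    "rows": [
      {
        "id": 101,
        "name": "Ada Lovelace",
        "email": "ada@example.com",
        "created_at": "2025-01-15T09:30:00Z"
      }
    ]
  },
  "metadata": { "processedAt": "2025-01-16T00:00:05Z" }
}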

to-file (transform):

{
  "format": "csv",
  "filename": "customers-export.csv"
}

Input mapping: $.output from the mysql-source node provides rows to convert.

s3-dest (destination):

{
  "connectionType": "addon",
  "addonId": "<your-s3-addon-id>",
  "bucket": "data-exports",
  "key": "customers/daily-export.csv"
}

Input mapping: $.output.file from to-file node provides the file to upload.

Connections

[
  { "source": "schedule", "target": "mysql-source" },
  { "source": "mysql-source", "target": "to-file" },
  { "source": "to-file", "target": "s3-dest" }
]
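
Putting the pieces together, a complete workflow definition might combine the node definitions and connections in a single document. The top-level shape below (nodes carrying an id, a type, and their config) is an assumption for illustration, not a guaranteed schema:

{
  "nodes": [
    { "id": "schedule", "type": "schedule", "config": { "cronExpression": "0 0 * * *", "timezone": "America/New_York" } },
    { "id": "mysql-source", "type": "mysql-source", "config": { "connectionType": "datasource", "dataSourceId": "<your-mysql-datasource-id>", "query": "SELECT ..." } },
    { "id": "to-file", "type": "to-file", "config": { "format": "csv", "filename": "customers-export.csv" } },
    { "id": "s3-dest", "type": "s3-dest", "config": { "connectionType": "addon", "addonId": "<your-s3-addon-id>", "bucket": "data-exports", "key": "customers/daily-export.csv" } }
  ],
  "connections": [
    { "source": "schedule", "target": "mysql-source" },
    { "source": "mysql-source", "target": "to-file" },
    { "source": "to-file", "target": "s3-dest" }
  ]
}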

Example 2: Webhook API with AI Processing

Use Case: Receive a webhook request, process the data with an AI model, and return a structured response.

Workflow Structure

webhook → ai-gateway → set-fields → webhook-response

Node Configuration

webhook (trigger):

{
  "method": "POST",
  "path": "/analyze"
}

ai-gateway (AI):

{
  "connectionType": "addon",
  "addonId": "<your-ai-gateway-addon-id>",
  "model": "claude-3-sonnet",
  "systemPrompt": "Analyze the provided text and return a JSON object with: sentiment (positive/negative/neutral), summary (one sentence), and key_topics (array of strings).",
  "temperature": 0.3
}

Input mapping: the user prompt is mapped from the webhook trigger output; $.output.body.text provides the text to analyze.

set-fields (transform):

{
  "fields": {
    "analysis": "$.output.response",
    "processed_at": "$.metadata.processedAt",
    "model_used": "$.output.model"
  }
}

Input mapping: $.output from ai-gateway provides the LLM response.
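
To make the mapping concrete: assuming the ai-gateway node returns an envelope like the one below (the response and model fields follow the JSONPath expressions in the set-fields config; the values are illustrative), then:

{
  "output": {
    "response": "{\"sentiment\": \"positive\", \"summary\": \"The reviewer is happy with the product.\", \"key_topics\": [\"quality\"]}",
    "model": "claude-3-sonnet"
  },
  "metadata": { "processedAt": "2025-01-16T12:00:00Z" }
}

set-fields would emit:

{
  "analysis": "{\"sentiment\": \"positive\", \"summary\": \"The reviewer is happy with the product.\", \"key_topics\": [\"quality\"]}",
  "processed_at": "2025-01-16T12:00:00Z",
  "model_used": "claude-3-sonnet"
}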

webhook-response (destination):

{
  "statusCode": 200,
  "contentType": "application/json"
}

Input mapping: $.output from set-fields provides the response body.

Connections

[
  { "source": "webhook", "target": "ai-gateway" },
  { "source": "ai-gateway", "target": "set-fields" },
  { "source": "set-fields", "target": "webhook-response" }
]

Example 3: PDF Document Processing Pipeline

Use Case: Upload a PDF via webhook, extract text, generate embeddings, and store in a vector database.

Workflow Structure

webhook → pdf-parser → text-chunker → embeddings → pinecone

Node Configuration

webhook (trigger):

{
  "method": "POST",
  "path": "/upload-document"
}

Receives file upload via multipart form data.

pdf-parser (transform):

{
  "extractText": true,
  "extractTables": true
}

Input mapping: $.output.body.file from the webhook trigger provides the PDF file.

text-chunker (transform):

{
  "chunkSize": 512,
  "chunkOverlap": 50,
  "separator": "\n\n"
}

Input mapping: $.output.text from pdf-parser provides the extracted text.
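
A note on how chunkSize and chunkOverlap interact, assuming character-based units where each chunk starts chunkOverlap characters before the previous one ends (a common convention; the node's exact semantics, and how the "\n\n" separator adjusts boundaries, may differ): a 1,200-character document would produce offsets roughly like:

{
  "chunks": [
    { "index": 0, "start": 0, "end": 512 },
    { "index": 1, "start": 462, "end": 974 },
    { "index": 2, "start": 924, "end": 1200 }
  ]
}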

embeddings (AI):

{
  "connectionType": "addon",
  "addonId": "<your-ai-gateway-addon-id>",
  "model": "text-embedding-ada-002"
}

Input mapping: $.output.chunks from text-chunker provides the text chunks to embed.

pinecone (destination):

{
  "connectionType": "addon",
  "addonId": "<your-pinecone-addon-id>",
  "index": "documents",
  "namespace": "uploads"
}

Input mapping: $.output.embeddings from the embeddings node provides vectors to store.

Connections

[
  { "source": "webhook", "target": "pdf-parser" },
  { "source": "pdf-parser", "target": "text-chunker" },
  { "source": "text-chunker", "target": "embeddings" },
  { "source": "embeddings", "target": "pinecone" }
]

Example 4: AI Agent with MCP Tools

Use Case: An AI agent that can search the web and create GitHub issues based on user requests.

Workflow Structure

webhook → react-agent → webhook-response

mcp-tools-provider → react-agent (via the agent's "tools" connector)

Node Configuration

webhook (trigger):

{
  "method": "POST",
  "path": "/agent"
}

react-agent (agent):

{
  "connectionType": "addon",
  "addonId": "<your-ai-gateway-addon-id>",
  "model": "claude-3-sonnet",
  "systemPrompt": "You are a helpful assistant that can search the web and create GitHub issues. When the user asks about a topic, search the web first. When they report a bug, create a GitHub issue.",
  "maxIterations": 5
}

Input mapping: $.output.body.message from the webhook provides the user message.

mcp-tools-provider (operator):

{
  "mcpServerId": "brave-search",
  "mcpServerIds": ["github"],
  "filterTools": [],
  "cacheTimeout": 300
}

Connected to the react-agent via the tools connector (not the regular data flow).

webhook-response (destination):

{
  "statusCode": 200,
  "contentType": "application/json"
}

Input mapping: $.output from react-agent provides the agent's final response.

Connections

[
  { "source": "webhook", "target": "react-agent" },
  { "source": "mcp-tools-provider", "target": "react-agent", "type": "tools" },
  { "source": "react-agent", "target": "webhook-response" }
]
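
A hypothetical exchange: a POST to /agent with the body below (the message field matches the $.output.body.message mapping) would let the agent loop up to maxIterations times, calling the brave-search and github MCP tools as needed, before its final answer flows to webhook-response:

{ "message": "Search for recent reports of flaky CI runners and open a GitHub issue summarizing them." }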

Example 5: Conditional Routing with Switch-Case

Use Case: Receive webhook events, route them based on event type, and process each type differently.

Workflow Structure

webhook → guardrails → switch-case → [branch: rest-api]
                                   → [branch: slack-dest]
                                   → [branch: smtp]

Node Configuration

webhook (trigger):

{
  "method": "POST",
  "path": "/events"
}

guardrails (evaluation):

{
  "rules": [
    { "field": "event_type", "required": true },
    { "field": "payload", "required": true }
  ]
}

Input mapping: $.output.body from the webhook trigger.

switch-case (control-flow):

{
  "field": "$.output.event_type",
  "cases": [
    { "value": "order.created", "output": "api" },
    { "value": "alert.triggered", "output": "slack" },
    { "value": "user.signup", "output": "email" }
  ],
  "default": "api"
}

Input mapping: $.output from guardrails.
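
For example, the event below would match the second case and be routed to the slack branch; an event_type with no matching case (say, "invoice.paid") would fall through to the default "api" output. Note that payload.message is exactly what the slack-dest mapping ($.output.payload.message) picks up:

{
  "event_type": "alert.triggered",
  "payload": { "message": "CPU usage above 90% on worker-3" }
}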

rest-api (for order.created branch):

{
  "url": "https://api.internal.example.com/orders",
  "method": "POST"
}

Input mapping: $.output.payload from switch-case.

slack-dest (for alert.triggered branch):

{
  "connectionType": "addon",
  "addonId": "<your-slack-addon-id>",
  "channel": "#alerts"
}

Input mapping: $.output.payload.message from switch-case.

smtp (for user.signup branch):

{
  "connectionType": "datasource",
  "dataSourceId": "<your-smtp-datasource-id>",
  "to": "welcome@example.com",
  "subject": "New User Signup"
}

Input mapping: $.output.payload from switch-case.

Connections

[
  { "source": "webhook", "target": "guardrails" },
  { "source": "guardrails", "target": "switch-case" },
  { "source": "switch-case", "target": "rest-api", "label": "order.created" },
  { "source": "switch-case", "target": "slack-dest", "label": "alert.triggered" },
  { "source": "switch-case", "target": "smtp", "label": "user.signup" }
]

Example 6: Scheduled Report with Database and Email

Use Case: Generate a weekly report from PostgreSQL data and email it to stakeholders.

Workflow Structure

schedule → postgresql-source → report-builder → s3-dest → smtp

Node Configuration

schedule (trigger):

{
  "cronExpression": "0 8 * * 1",
  "timezone": "America/New_York",
  "description": "Every Monday at 8 AM"
}

postgresql-source (source):

{
  "connectionType": "datasource",
  "dataSourceId": "<your-postgresql-datasource-id>",
  "query": "SELECT date_trunc('day', created_at) as day, COUNT(*) as orders, SUM(amount) as revenue FROM orders WHERE created_at >= NOW() - INTERVAL '7 days' GROUP BY 1 ORDER BY 1"
}

report-builder (transform):

{
  "format": "xlsx",
  "title": "Weekly Sales Report",
  "sheets": [
    {
      "name": "Daily Summary",
      "dataSource": "$.output.rows"
    }
  ]
}

Input mapping: $.output from postgresql-source.

s3-dest (destination):

{
  "connectionType": "addon",
  "addonId": "<your-s3-addon-id>",
  "bucket": "reports",
  "key": "weekly/sales-report.xlsx"
}

Input mapping: $.output.file from report-builder.

smtp (destination):

{
  "connectionType": "datasource",
  "dataSourceId": "<your-smtp-datasource-id>",
  "to": ["team@example.com"],
  "subject": "Weekly Sales Report",
  "body": "Please find attached the weekly sales report.",
  "attachments": ["$.output.url"]
}

Input mapping: $.output from s3-dest provides the uploaded file URL.

Connections

[
  { "source": "schedule", "target": "postgresql-source" },
  { "source": "postgresql-source", "target": "report-builder" },
  { "source": "report-builder", "target": "s3-dest" },
  { "source": "s3-dest", "target": "smtp" }
]

Example 7: Parallel API Aggregation

Use Case: Call multiple APIs in parallel, merge results, and return a combined response.

Workflow Structure

webhook → parallel-branch → [rest-api-1 (API 1)]
                          → [rest-api-2 (API 2)]
(both branches) → merge → webhook-response

Node Configuration

webhook (trigger):

{
  "method": "GET",
  "path": "/aggregate"
}

parallel-branch (control-flow): Splits execution into two parallel paths.

rest-api-1 (API 1):

{
  "url": "https://api.service-a.com/data",
  "method": "GET",
  "headers": {
    "Authorization": "Bearer <token>"
  }
}

rest-api-2 (API 2):

{
  "url": "https://api.service-b.com/data",
  "method": "GET",
  "headers": {
    "Authorization": "Bearer <token>"
  }
}

merge (control-flow):

{
  "mode": "combine",
  "outputField": "combined"
}

Waits for both parallel branches to complete, then merges their outputs.
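
The merged result might look like the sketch below; keying the combined object by source node ID is an assumption for illustration, as is the response shape of each branch:

{
  "output": {
    "combined": {
      "rest-api-1": { "status": 200, "body": { "items": [] } },
      "rest-api-2": { "status": 200, "body": { "items": [] } }
    }
  }
}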

webhook-response (destination):

{
  "statusCode": 200,
  "contentType": "application/json"
}

Input mapping: $.output from merge provides the combined data.

Connections

[
  { "source": "webhook", "target": "parallel-branch" },
  { "source": "parallel-branch", "target": "rest-api-1" },
  { "source": "parallel-branch", "target": "rest-api-2" },
  { "source": "rest-api-1", "target": "merge" },
  { "source": "rest-api-2", "target": "merge" },
  { "source": "merge", "target": "webhook-response" }
]

Example 8: RAG Question-Answering

Use Case: A RAG pipeline that retrieves relevant documents and generates answers.

Workflow Structure

webhook → rag → webhook-response

knowledge-base → rag (connected via the "tools" connector)

Node Configuration

webhook (trigger):

{
  "method": "POST",
  "path": "/ask"
}

rag (agent):

{
  "connectionType": "addon",
  "addonId": "<your-ai-gateway-addon-id>",
  "model": "claude-3-sonnet",
  "systemPrompt": "Answer the user's question based on the retrieved context. If the context does not contain the answer, say so.",
  "topK": 5,
  "scoreThreshold": 0.7
}

Input mapping: $.output.body.question from the webhook trigger.

knowledge-base (memory):

{
  "connectionType": "addon",
  "addonId": "<your-vector-db-addon-id>",
  "collection": "company-docs",
  "embeddingModel": "text-embedding-ada-002"
}

Connected to the rag node's retrieval connector.

webhook-response (destination):

{
  "statusCode": 200,
  "contentType": "application/json"
}

Input mapping: $.output from rag provides the generated answer and source documents.
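
A hypothetical exchange: a POST to /ask with { "question": "What is our refund policy?" } might return a body like the sketch below. The field names are illustrative, following the description above (generated answer plus source documents):

{
  "answer": "Refunds are available within 30 days of purchase.",
  "sources": [
    { "collection": "company-docs", "score": 0.82, "excerpt": "Customers may request a refund within 30 days..." }
  ]
}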


Common Workflow Patterns

Pattern: Data Pipeline (ETL)

[trigger] → [source node] → [transform node(s)] → [destination node]

Node types used:

  • Triggers: schedule, webhook
  • Sources: mysql-source, postgresql-source, mongodb-source, s3-source
  • Transforms: set-fields, filter, sort, to-file, aggregate
  • Destinations: mysql-dest, postgresql-dest, mongodb-dest, s3-dest
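
A minimal wiring of this pattern as a connections array, using node IDs from the catalog above (the filter step is optional):

[
  { "source": "schedule", "target": "postgresql-source" },
  { "source": "postgresql-source", "target": "filter" },
  { "source": "filter", "target": "s3-dest" }
]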

Pattern: AI Processing

[trigger] → [ai-gateway] → [destination]

Node types used:

  • AI: ai-gateway, embeddings, vision, text-to-speech, speech-to-text
  • Supporting: set-fields for prompt construction, guardrails for output validation

Pattern: Agent with Tools

[trigger] → [agent node] → [destination]

[mcp-tools-provider] → [agent node] (tools connector)
[knowledge-base] → [agent node] (tools connector)

Node types used:

  • Agents: react-agent, supervisor-agent, function-calling, rag
  • Tool providers: mcp-tools-provider, web-search, code-interpreter, calculator
  • Memory: conversation-memory, knowledge-base, semantic-memory

Pattern: Event-Driven Routing

[webhook] → [guardrails] → [switch-case] → [branch A]
                                         → [branch B]
                                         → [branch C]

Node types used:

  • Validation: guardrails
  • Routing: switch-case, conditional
  • Actions: rest-api, slack-dest, smtp, mongodb-dest

Pattern: Batch Processing with Loop

[schedule] → [source] → [loop] → [process each item] → [destination]

Node types used:

  • Iteration: loop, map
  • Transform: set-fields, filter, code
  • Control: while-loop, retry

Configuration Patterns

Data Source Connection

Nodes that connect to external databases use the connectionType and dataSourceId pattern:

{
  "connectionType": "datasource",
  "dataSourceId": "<id-from-data-sources-page>"
}

Add-on Connection

Nodes that connect to platform add-ons (AI Gateway, S3, etc.) use:

{
  "connectionType": "addon",
  "addonId": "<id-from-addons-page>"
}

Input Mapping (JSONPath)

Nodes reference data from upstream nodes using JSONPath expressions (a worked example follows this list):

  • $.output - The full output object from the connected upstream node
  • $.output.rows - A specific field from the upstream output
  • $.output.body.text - Nested field access
  • $.metadata.processedAt - Metadata from the upstream node
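
As a worked example, given the (illustrative) upstream result below, $.output resolves to the whole output object, $.output.rows to the two-element array, $.output.body.text to "hello", and $.metadata.processedAt to the timestamp:

{
  "output": {
    "rows": [ { "id": 1 }, { "id": 2 } ],
    "body": { "text": "hello" }
  },
  "metadata": { "processedAt": "2025-01-16T12:00:00Z" }
}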

MongoDB Source (Special Case)

The mongodb-source node uses dataSource (name) instead of dataSourceId:

{
  "dataSource": "my-mongodb-connection",
  "collection": "users",
  "query": { "status": "active" }
}

Best Practices

Node Selection

  1. Use specific nodes over generic ones (e.g., postgresql-source over rest-api for database queries)
  2. Use native destinations over MCP tools for deterministic operations (e.g., slack-dest over MCP Slack server)
  3. Use agents when the workflow needs dynamic decision-making
  4. Use control flow nodes (switch-case, loop, parallel-branch) to build complex logic

Error Handling

  1. Add guardrails nodes to validate data before processing
  2. Use conditional nodes to check for error conditions
  3. Configure retry nodes for transient failures
  4. Use stop-error to halt execution with a clear error message

Performance

  1. Use parallel-branch for independent operations that can run concurrently
  2. Use filter early to reduce data volume before expensive operations
  3. Set appropriate timeouts in node configuration
  4. Use set-fields to select only needed fields before passing to downstream nodes

Data Flow

  1. Use descriptive node labels to document the workflow
  2. Keep workflows linear when possible; use branching only when needed
  3. Use merge to combine parallel branch outputs
  4. Use set-fields to reshape data between nodes with different schemas
