Python SDK

The official Python SDK for Strongly.AI. Build, deploy, and manage AI applications, workflows, and infrastructure from Python.

Installation

pip install strongly

Requires Python 3.9+. Pre-installed in all Strongly workspaces.

Quick Start

from strongly import Strongly

client = Strongly()

# Deploy an app
app = client.apps.create({"name": "my-service", "runtime": "python3.11"})
client.apps.deploy(app.id)

# Run a workflow
result = client.workflows.execute("wf-abc123")

# Chat with an AI model
response = client.ai.inference.chat_completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)

Authentication

Create an API key in the Strongly UI under Profile > Security > API Keys.

# Pass directly
client = Strongly(api_key="sk-prod-...")

# Or set an environment variable (recommended)
# export STRONGLY_API_KEY=sk-prod-...
client = Strongly()

The SDK also auto-detects credentials from ~/.strongly/config and inside Strongly workspaces.

Credential Resolution Order

  1. Explicit api_key parameter
  2. STRONGLY_API_KEY environment variable
  3. Workspace file at /tmp/strongly/api-key
  4. Config file at ~/.strongly/config
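The lookup order above can be pictured with a small standard-library sketch. This is illustrative only: the file paths mirror the documented locations, but the SDK's internal logic (and its config-file parsing, which is simplified here to a raw read) may differ.

```python
import os
from pathlib import Path
from typing import Optional

# Candidate credential sources, checked in the documented order.
WORKSPACE_KEY_FILE = Path("/tmp/strongly/api-key")
CONFIG_FILE = Path.home() / ".strongly" / "config"

def resolve_api_key(explicit: Optional[str] = None) -> Optional[str]:
    """Return the first API key found, following the documented order."""
    if explicit:                                   # 1. explicit parameter
        return explicit
    env_key = os.environ.get("STRONGLY_API_KEY")   # 2. environment variable
    if env_key:
        return env_key
    if WORKSPACE_KEY_FILE.is_file():               # 3. workspace file
        return WORKSPACE_KEY_FILE.read_text().strip()
    if CONFIG_FILE.is_file():                      # 4. user config file
        return CONFIG_FILE.read_text().strip()
    return None
```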

Resources

The SDK organizes platform capabilities into resource namespaces:

Core

Resource               Description
client.apps            Deploy, manage, and monitor applications
client.addons          Managed databases and services (PostgreSQL, Redis, etc.)
client.datasources     External data connections
client.workflows       Workflow pipelines — create, execute, version, share
client.executions      Execution history, node traces, logs, progress
client.workflow_nodes  Node catalog for the workflow builder

AI & ML

Resource                 Description
client.ai.inference      Chat completions, text completions, embeddings with streaming
client.ai.models         AI model catalog and lifecycle
client.ai.provider_keys  Provider API key management (OpenAI, Anthropic, etc.)
client.ai.analytics      AI usage and cost analytics
client.fine_tuning       Fine-tune language models
client.experiments       ML experiment tracking
client.automl            Automated machine learning
client.model_registry    Model versioning and deployment

Infrastructure

Resource              Description
client.projects       Project management and collaboration
client.workspaces     Development environments
client.volumes        Persistent storage
client.users          User management
client.organizations  Organization management, members, invitations

Governance & FinOps

Resource                        Description
client.governance.policies      Policy management and enforcement
client.governance.solutions     Compliance solutions and snapshots
client.governance.attestations  Compliance attestations
client.governance.templates     Policy templates
client.finops.costs             Cost tracking, forecasting, anomaly detection
client.finops.budgets           Budget management and alerts
client.finops.schedules         Cost optimization schedules
client.finops.resource_groups   Resource grouping

Async Support

Every operation is available asynchronously with AsyncStrongly:

import asyncio
from strongly import AsyncStrongly

async def main():
    async with AsyncStrongly() as client:
        async for workflow in client.workflows.list(status="active"):
            print(workflow.name)

        response = await client.ai.inference.chat_completion(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": "Hello!"}],
        )
        print(response.choices[0].message.content)

asyncio.run(main())

Pagination

List methods return auto-paginating iterators:

# Iterate through everything
for app in client.apps.list():
    print(app.name)

# Get all items as a Python list
all_apps = client.apps.list().to_list()

# Just get the first match
first = client.apps.list(status="running").first()

# Control batch size
for app in client.apps.list(limit=10):
    print(app.name)

# Check total count
paginator = client.apps.list()
next(paginator)  # fetch first batch
print(f"Total: {paginator.total}")

Error Handling

Errors are raised as typed Python exceptions:

from strongly import Strongly, NotFoundError, RateLimitError

client = Strongly()

try:
    app = client.apps.retrieve("nonexistent")
except NotFoundError as e:
    print(f"Not found: {e.message}")
except RateLimitError as e:
    print(f"Rate limited — retrying in {e.retry_after}s")

See Error Handling for the full list of exceptions.
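The retry_after attribute on RateLimitError makes a simple backoff loop straightforward. A minimal sketch follows; the local RateLimitError class is a stand-in for strongly.RateLimitError so the example is self-contained, and with_retries is a hypothetical helper, not part of the SDK:

```python
import time

class RateLimitError(Exception):
    """Stand-in for strongly.RateLimitError in this self-contained sketch."""
    def __init__(self, retry_after: float = 1.0):
        self.retry_after = retry_after

def with_retries(call, max_attempts: int = 3):
    """Invoke `call`, sleeping for `retry_after` seconds when rate limited."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimitError as e:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            time.sleep(e.retry_after)

# With the real SDK this would look like:
# app = with_retries(lambda: client.apps.retrieve("app-123"))
```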

Idempotency

For safe retries on mutating operations, pass an idempotency key:

app = client.apps.create(
    {"name": "my-service"},
    idempotency_key="create-my-service-v1",
)

The SDK sets the Idempotency-Key header on POST, PUT, and PATCH requests. The server guarantees at-most-once execution for a given key.
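Key naming is up to you; one common pattern is deriving a deterministic key from the request payload, so identical retries reuse the same key automatically. A standard-library sketch (idempotency_key here is a hypothetical helper, not an SDK function):

```python
import hashlib
import json

def idempotency_key(operation: str, payload: dict) -> str:
    """Derive a stable key from the operation name and its payload."""
    body = json.dumps(payload, sort_keys=True)  # canonical ordering
    digest = hashlib.sha256(body.encode()).hexdigest()[:16]
    return f"{operation}-{digest}"

# The same payload always yields the same key, so a blind retry is safe:
# client.apps.create(payload, idempotency_key=idempotency_key("apps.create", payload))
```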

Event Hooks

Monitor requests and responses with callback hooks:

def on_request(method, url, **kwargs):
    print(f"→ {method} {url}")

def on_response(method, url, status_code, **kwargs):
    print(f"← {status_code} {method} {url}")

client = Strongly(
    on_request=on_request,
    on_response=on_response,
)

Logging

Enable structured logging with the STRONGLY_LOG environment variable:

export STRONGLY_LOG=DEBUG   # DEBUG, INFO, WARNING, or ERROR

The SDK logs request/response details under the strongly logger name, making it easy to filter in your application's logging configuration.
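Because everything is emitted under the strongly logger, standard Python logging configuration applies. For example, to surface SDK debug output while keeping the rest of your application at INFO:

```python
import logging

logging.basicConfig(level=logging.INFO)                 # app-wide default
logging.getLogger("strongly").setLevel(logging.DEBUG)   # verbose SDK logs only

# Any child loggers the SDK creates under the "strongly" namespace
# inherit this level through the standard logger hierarchy.
```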

Workspace Helpers

Inside Strongly workspaces, convenience functions are available at the top level:

import strongly

# Experiment tracking
strongly.set_experiment("my-experiment")
with strongly.start_run(run_name="run-1"):
    strongly.log_params({"lr": 0.01})
    strongly.log_metrics({"accuracy": 0.95})

# AI Gateway
from strongly import gateway
response = gateway.complete("Explain machine learning:")

# AutoML
from strongly.mlops import automl
job = automl.create_job(name="my-model", data=df, target_column="label", problem_type="binary")

See Experiments, AI Gateway Helpers, AutoML, and Model Registry for details.
