# Python SDK

The official Python SDK for Strongly.AI. Build, deploy, and manage AI applications, workflows, and infrastructure from Python.
## Installation

```shell
pip install strongly
```

Requires Python 3.9+. Pre-installed in all Strongly workspaces.
## Quick Start

```python
from strongly import Strongly

client = Strongly()

# Deploy an app
app = client.apps.create({"name": "my-service", "runtime": "python3.11"})
client.apps.deploy(app.id)

# Run a workflow
result = client.workflows.execute("wf-abc123")

# Chat with an AI model
response = client.ai.inference.chat_completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
## Authentication

Create an API key in the Strongly UI under Profile > Security > API Keys.

```python
# Pass directly
client = Strongly(api_key="sk-prod-...")

# Or set an environment variable (recommended)
# export STRONGLY_API_KEY=sk-prod-...
client = Strongly()
```

The SDK also auto-detects credentials from `~/.strongly/config` and inside Strongly workspaces.
### Credential Resolution Order

1. Explicit `api_key` parameter
2. `STRONGLY_API_KEY` environment variable
3. Workspace file at `/tmp/strongly/api-key`
4. Config file at `~/.strongly/config`
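The lookup behaves roughly like this pure-Python sketch. The helper function itself is illustrative, not part of the SDK; only the paths and variable names come from the list above:

```python
import os

def resolve_api_key(explicit_key=None):
    """Illustrative sketch of the credential lookup order; not an SDK function."""
    # 1. An explicit api_key parameter wins outright
    if explicit_key:
        return explicit_key
    # 2. STRONGLY_API_KEY environment variable
    env_key = os.environ.get("STRONGLY_API_KEY")
    if env_key:
        return env_key
    # 3. Workspace file, then 4. user config file
    for path in ("/tmp/strongly/api-key", os.path.expanduser("~/.strongly/config")):
        if os.path.isfile(path):
            with open(path) as f:
                return f.read().strip()
    return None
```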
## Resources
The SDK organizes platform capabilities into resource namespaces:
### Core

| Resource | Description |
|---|---|
| `client.apps` | Deploy, manage, and monitor applications |
| `client.addons` | Managed databases and services (PostgreSQL, Redis, etc.) |
| `client.datasources` | External data connections |
| `client.workflows` | Workflow pipelines: create, execute, version, share |
| `client.executions` | Execution history, node traces, logs, progress |
| `client.workflow_nodes` | Node catalog for the workflow builder |
### AI & ML

| Resource | Description |
|---|---|
| `client.ai.inference` | Chat completions, text completions, and embeddings, with streaming support |
| `client.ai.models` | AI model catalog and lifecycle |
| `client.ai.provider_keys` | Provider API key management (OpenAI, Anthropic, etc.) |
| `client.ai.analytics` | AI usage and cost analytics |
| `client.fine_tuning` | Fine-tune language models |
| `client.experiments` | ML experiment tracking |
| `client.automl` | Automated machine learning |
| `client.model_registry` | Model versioning and deployment |
### Infrastructure

| Resource | Description |
|---|---|
| `client.projects` | Project management and collaboration |
| `client.workspaces` | Development environments |
| `client.volumes` | Persistent storage |
| `client.users` | User management |
| `client.organizations` | Organization management, members, invitations |
### Governance & FinOps

| Resource | Description |
|---|---|
| `client.governance.policies` | Policy management and enforcement |
| `client.governance.solutions` | Compliance solutions and snapshots |
| `client.governance.attestations` | Compliance attestations |
| `client.governance.templates` | Policy templates |
| `client.finops.costs` | Cost tracking, forecasting, anomaly detection |
| `client.finops.budgets` | Budget management and alerts |
| `client.finops.schedules` | Cost optimization schedules |
| `client.finops.resource_groups` | Resource grouping |
## Async Support

Every operation is available asynchronously with `AsyncStrongly`:

```python
import asyncio
from strongly import AsyncStrongly

async def main():
    async with AsyncStrongly() as client:
        async for workflow in client.workflows.list(status="active"):
            print(workflow.name)

        response = await client.ai.inference.chat_completion(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": "Hello!"}],
        )
        print(response.choices[0].message.content)

asyncio.run(main())
```
## Pagination

List methods return auto-paginating iterators:

```python
# Iterate through everything
for app in client.apps.list():
    print(app.name)

# Get all items as a Python list
all_apps = client.apps.list().to_list()

# Just get the first match
first = client.apps.list(status="running").first()

# Control batch size
for app in client.apps.list(limit=10):
    print(app.name)

# Check total count
paginator = client.apps.list()
next(paginator)  # fetch first batch
print(f"Total: {paginator.total}")
```
## Error Handling

Errors are raised as typed Python exceptions:

```python
from strongly import Strongly, NotFoundError, RateLimitError

client = Strongly()

try:
    app = client.apps.retrieve("nonexistent")
except NotFoundError as e:
    print(f"Not found: {e.message}")
except RateLimitError as e:
    print(f"Rate limited; retrying in {e.retry_after}s")
```
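A retry loop driven by `retry_after` looks roughly like this. The exception class below is a local stand-in so the sketch stays self-contained; it is not the SDK's own class:

```python
import time

class RateLimitError(Exception):
    """Local stand-in for strongly.RateLimitError, for illustration only."""
    def __init__(self, retry_after):
        super().__init__(f"rate limited, retry in {retry_after}s")
        self.retry_after = retry_after

def call_with_retry(fn, max_attempts=3):
    """Call fn, sleeping for the server-suggested retry_after between attempts."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RateLimitError as e:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            time.sleep(e.retry_after)
```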
See Error Handling for the full list of exceptions.
## Idempotency

For safe retries on mutating operations, pass an idempotency key:

```python
app = client.apps.create(
    {"name": "my-service"},
    idempotency_key="create-my-service-v1",
)
```

The SDK sets the `Idempotency-Key` header on POST, PUT, and PATCH requests. The server guarantees at-most-once execution for a given key.
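One way to make retries line up is to derive the key deterministically from the request payload, so the same logical operation always reuses the same key. This is a stdlib sketch; the helper and key format are illustrative, not something the platform requires:

```python
import hashlib
import json

def idempotency_key_for(payload, prefix="create"):
    """Derive a stable key from the payload so identical retries share it."""
    # Canonical JSON: sorted keys, no whitespace, so equal dicts hash equally
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode()).hexdigest()[:16]
    return f"{prefix}-{digest}"
```

Retrying `client.apps.create(payload, idempotency_key=idempotency_key_for(payload))` then reuses the same key automatically.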
## Event Hooks

Monitor requests and responses with callback hooks:

```python
def on_request(method, url, **kwargs):
    print(f"→ {method} {url}")

def on_response(method, url, status_code, **kwargs):
    print(f"← {status_code} {method} {url}")

client = Strongly(
    on_request=on_request,
    on_response=on_response,
)
```
## Logging

Enable structured logging with the STRONGLY_LOG environment variable:

```shell
export STRONGLY_LOG=DEBUG  # DEBUG, INFO, WARNING, or ERROR
```

The SDK logs request/response details under the `strongly` logger name, making it easy to filter in your application's logging configuration.
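To control this from code rather than the environment, you can adjust the `strongly` logger directly with Python's standard `logging` module. A minimal sketch; only the logger name comes from the text above:

```python
import logging

# App-wide default stays quiet...
logging.basicConfig(level=logging.WARNING)

# ...while the SDK's logger is turned up to DEBUG for this process,
# similar in effect to STRONGLY_LOG=DEBUG.
logging.getLogger("strongly").setLevel(logging.DEBUG)
```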
## Workspace Helpers

Inside Strongly workspaces, convenience functions are available at the top level:

```python
import strongly

# Experiment tracking
strongly.set_experiment("my-experiment")
with strongly.start_run(run_name="run-1"):
    strongly.log_params({"lr": 0.01})
    strongly.log_metrics({"accuracy": 0.95})

# AI Gateway
from strongly import gateway

response = gateway.complete("Explain machine learning:")

# AutoML
from strongly.mlops import automl

job = automl.create_job(
    name="my-model",
    data=df,
    target_column="label",
    problem_type="binary",
)
```

See Experiments, AI Gateway Helpers, AutoML, and Model Registry for details.
## Documentation
- Getting Started — Installation, auth, client configuration
- API Reference — Every method on every resource
- Error Handling — Exceptions and pagination
- Changelog