AI Applications
Deploy and manage containerized applications with integrated access to databases, AI models, and workflows.
What are AI Applications?
AI Applications on the Strongly platform are containerized workloads that can access platform services seamlessly. Whether you're deploying a React SPA, Node.js API, Flask backend, R Shiny dashboard, or custom application, the platform provides:
- Automated builds from your code archives, GitHub repos, or project bundles
- Integrated service connections (databases, AI models, workflows, ML models, MCP servers)
- Auto-scaling based on demand via Horizontal Pod Autoscaler (HPA)
- Health monitoring and resource metrics
- GPU deployment support for compute-intensive workloads
Supported Application Types
| Type | Default Port | Use Case |
|---|---|---|
| `react` | 3000 | React/Vue SPA with client-side routing |
| `nodejs` | 3000 | Node.js APIs, Express, NestJS |
| `flask` | 5000 | Python Flask/FastAPI applications |
| `rshiny` | 3838 | R Shiny dashboards and interactive apps |
| `static` | 80 | Static HTML/CSS/JS sites |
| `fullstack` | 3000, 8000 | Monorepo with frontend + backend |
| `mcp_server` | 8080 | MCP protocol tool servers |
| `custom` | 8080 | Any other application type |
Key Features
Service Integration
Connect your application to platform resources automatically:
- Add-ons: Managed MongoDB, PostgreSQL, Redis, RabbitMQ, Kafka, Neo4j, Milvus, Greenplum, SurrealDB instances
- Data Sources: External databases, S3, Snowflake, BigQuery, and many more
- AI Models: OpenAI, Anthropic, or self-hosted LLM models via AI Gateway
- Workflows: Invoke Strongly workflows via REST API
- ML Models: Traditional and AutoML models from the model registry
- MCP Servers: Model Context Protocol servers for tool integration
All connections are injected via the `STRONGLY_SERVICES` environment variable; no manual configuration is needed.
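As a minimal sketch, an application can read its injected connections at startup. The exact JSON shape of `STRONGLY_SERVICES` is an assumption here (the `addons` key, field names, and sample values are hypothetical); inspect the variable in your own deployment for the real structure:

```python
import json
import os

# Hypothetical payload standing in for the platform-injected value;
# the real variable is already set inside a deployed container.
os.environ.setdefault(
    "STRONGLY_SERVICES",
    json.dumps({
        "addons": [
            {"type": "postgresql", "name": "appdb",
             "connection_url": "postgresql://user:pass@host:5432/appdb"},
        ],
    }),
)

services = json.loads(os.environ["STRONGLY_SERVICES"])

def find_addon(services: dict, addon_type: str):
    """Return the first connected add-on of the given type, if any."""
    for addon in services.get("addons", []):
        if addon.get("type") == addon_type:
            return addon
    return None

db = find_addon(services, "postgresql")
if db:
    print(f"Connecting to {db['name']} via {db['connection_url']}")
```

Because the variable is plain JSON in an environment variable, the same pattern works unchanged in Node.js, R, or any other supported runtime.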
Auto-Scaling
Enable intelligent auto-scaling based on CPU and memory thresholds:
- Scale from 1 to 50 instances automatically
- Custom thresholds and polling intervals
- Configurable cooldown periods to prevent flapping
- Horizontal Pod Autoscaler (HPA) integration
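The cooldown behavior above can be illustrated with a small sketch: scale events that arrive before the cooldown window has elapsed are suppressed, so brief metric spikes don't cause flapping. This is a conceptual illustration only, not the platform's HPA implementation; all names are hypothetical:

```python
import time

class ScaleDecider:
    """Illustrative cooldown: ignore scale events that arrive before
    the cooldown window since the last action has elapsed."""

    def __init__(self, cooldown_seconds: float = 300.0):
        self.cooldown_seconds = cooldown_seconds
        self.last_scaled_at = None

    def should_scale(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        if (self.last_scaled_at is not None
                and now - self.last_scaled_at < self.cooldown_seconds):
            return False  # still cooling down; suppress flapping
        self.last_scaled_at = now
        return True

decider = ScaleDecider(cooldown_seconds=300)
print(decider.should_scale(now=0.0))    # first event: allowed
print(decider.should_scale(now=60.0))   # within cooldown: suppressed
print(decider.should_scale(now=400.0))  # cooldown elapsed: allowed
```

A longer cooldown trades responsiveness for stability; tune it against how quickly your traffic actually ramps.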
Resource Tiers
Choose from predefined resource tiers or configure custom allocations:
| Tier | CPU (cores) | Memory | Disk | Use Case |
|---|---|---|---|---|
| Small | 0.5 | 512MB | 5GB | Development, testing |
| Medium | 1 | 1GB | 10GB | Small production apps |
| Large | 2 | 2GB | 20GB | Production apps with moderate traffic |
| XLarge | 4 | 4GB | 40GB | High-traffic production apps |
| Custom | - | - | - | Specialized requirements (including GPU) |
Quick Start
- Prepare your code: Include a `Dockerfile` and `strongly.manifest.yaml`
- Create archive: Package as `.zip`, `.tar`, `.tar.gz`, or `.tgz`
- Deploy: Navigate to Apps and deploy your application
- Configure: Set resources, environment variables, and service connections
- Monitor: View logs, metrics, and health status
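The archive step above can be sketched with Python's standard library alone. The file layout (`app/` containing the `Dockerfile` and `strongly.manifest.yaml` next to the source) and the manifest contents are assumptions for illustration:

```python
import pathlib
import shutil
import tempfile

# Hypothetical layout: app/ holds the Dockerfile and manifest
# alongside the source code to be deployed.
workdir = pathlib.Path(tempfile.mkdtemp())
app = workdir / "app"
app.mkdir()
(app / "Dockerfile").write_text("FROM python:3.12-slim\n")
(app / "strongly.manifest.yaml").write_text("type: flask\n")

# Produce app.tar.gz, one of the accepted upload formats.
archive = shutil.make_archive(str(workdir / "app"), "gztar", root_dir=app)
print(archive)  # path ends with .tar.gz
```

`shutil.make_archive` also supports the `zip` and `tar` formats, matching the other accepted upload extensions.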
Deployment Sources
Applications can be deployed from multiple sources:
- File upload: Upload a `.zip`, `.tar`, `.tar.gz`, or `.tgz` archive
- GitHub: Connect a GitHub repository directly
- Project bundle: Deploy from platform project bundles