AI Applications

Deploy and manage containerized applications with integrated access to databases, AI models, and workflows.

What are AI Applications?

AI Applications on the Strongly platform are containerized workloads that can access platform services seamlessly. Whether you're deploying a React SPA, Node.js API, Flask backend, R Shiny dashboard, or custom application, the platform provides:

  • Automated builds from your code archives, GitHub repos, or project bundles
  • Integrated service connections (databases, AI models, workflows, ML models, MCP servers)
  • Auto-scaling based on demand via Horizontal Pod Autoscaler (HPA)
  • Health monitoring and resource metrics
  • GPU deployment support for compute-intensive workloads

Supported Application Types

| Type | Default Port | Use Case |
|---|---|---|
| react | 3000 | React/Vue SPA with client-side routing |
| nodejs | 3000 | Node.js APIs, Express, NestJS |
| flask | 5000 | Python Flask/FastAPI applications |
| rshiny | 3838 | R Shiny dashboards and interactive apps |
| static | 80 | Static HTML/CSS/JS sites |
| fullstack | 3000, 8000 | Monorepo with frontend + backend |
| mcp_server | 8080 | MCP protocol tool servers |
| custom | 8080 | Any other application type |

Key Features

Service Integration

Connect your application to platform resources automatically:

  • Add-ons: Managed MongoDB, PostgreSQL, Redis, RabbitMQ, Kafka, Neo4j, Milvus, Greenplum, SurrealDB instances
  • Data Sources: External databases, S3, Snowflake, BigQuery, and many more
  • AI Models: OpenAI, Anthropic, or self-hosted LLMs via the AI Gateway
  • Workflows: Invoke Strongly workflows via REST API
  • ML Models: Traditional and AutoML models from the model registry
  • MCP Servers: Model Context Protocol servers for tool integration

All connections are injected via the STRONGLY_SERVICES environment variable, so no manual configuration is needed.
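As an illustration, an application can read its injected connections at startup. The exact encoding of STRONGLY_SERVICES is platform-defined; this sketch assumes, purely for illustration, a JSON object keyed by connection name:

```python
import json
import os

# Illustrative only: seed the variable the way the platform might inject it.
# The real encoding of STRONGLY_SERVICES is defined by the platform.
os.environ.setdefault(
    "STRONGLY_SERVICES",
    json.dumps({"postgres-main": {"host": "db.internal", "port": 5432}}),
)

# Read and parse the injected service connections.
services = json.loads(os.environ["STRONGLY_SERVICES"])
pg = services["postgres-main"]  # hypothetical connection name
print(f"connecting to {pg['host']}:{pg['port']}")
```

Reading connection details from the environment at startup keeps credentials out of your code and lets the platform rotate them without a rebuild.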

Auto-Scaling

Enable intelligent auto-scaling based on CPU and memory thresholds:

  • Scale from 1 to 50 instances automatically
  • Custom thresholds and polling intervals
  • Configurable cooldown periods to prevent flapping
  • Horizontal Pod Autoscaler (HPA) integration
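The platform manages the HPA for you, but conceptually the settings above map onto a standard Kubernetes HorizontalPodAutoscaler. A rough sketch (resource names and thresholds are illustrative, not platform defaults):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa            # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 50
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300   # cooldown to prevent flapping
```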

Resource Tiers

Choose from predefined resource tiers or configure custom allocations:

| Tier | CPU (cores) | Memory | Disk | Use Case |
|---|---|---|---|---|
| Small | 0.5 | 512MB | 5GB | Development, testing |
| Medium | 1 | 1GB | 10GB | Small production apps |
| Large | 2 | 2GB | 20GB | Production apps with moderate traffic |
| XLarge | 4 | 4GB | 40GB | High-traffic production apps |
| Custom | - | - | - | Specialized requirements (including GPU) |

Quick Start

  1. Prepare your code: Include Dockerfile and strongly.manifest.yaml
  2. Create archive: Package as .zip, .tar, .tar.gz, or .tgz
  3. Deploy: Navigate to Apps and deploy your application
  4. Configure: Set resources, environment variables, and service connections
  5. Monitor: View logs, metrics, and health status
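For step 1, a minimal Dockerfile for a Flask app listening on the default port 5000 might look like this (the `app:app` module path and the use of gunicorn are assumptions, not platform requirements):

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
# Assumes gunicorn is listed in requirements.txt and the Flask
# application object is named "app" in app.py.
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]
```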

Deployment Sources

Applications can be deployed from multiple sources:

  • File upload: Upload a .zip, .tar, .tar.gz, or .tgz archive
  • GitHub: Connect a GitHub repository directly
  • Project bundle: Deploy from platform project bundles
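For the file-upload path, the archive can be produced with standard tooling. A self-contained sketch using Python's standard library (file names and contents are placeholders):

```python
import tarfile
from pathlib import Path

# Create a placeholder project layout.
root = Path("myapp")
(root / "src").mkdir(parents=True, exist_ok=True)
(root / "Dockerfile").write_text("FROM python:3.12-slim\n")
(root / "strongly.manifest.yaml").write_text("# manifest placeholder\n")

# Package the project root as a .tar.gz archive for upload.
with tarfile.open("myapp.tar.gz", "w:gz") as tar:
    tar.add(root, arcname=".")

# Verify the archive contains the required files before uploading.
with tarfile.open("myapp.tar.gz") as tar:
    names = tar.getnames()
print(names)
```

Packaging from the project root (via `arcname="."`) keeps Dockerfile and strongly.manifest.yaml at the top level of the archive, where build systems typically expect them.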

Next Steps