MLOps
End-to-end machine learning operations including AutoML, experiment tracking, model registry, and deployment pipelines.
AutoML
Supported Tasks
- Classification: Binary and multi-class
- Regression: Numerical prediction
- Time Series: Forecasting and anomaly detection
- Clustering: Unsupervised grouping
Features
- Automatic feature engineering
- Hyperparameter optimization
- Model selection and ensembling
- Cross-validation and evaluation
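To make the search loop behind these features concrete, here is a minimal pure-Python sketch (independent of the platform API; `random_search`, `k_fold_splits`, and the toy objective are illustrative names, not part of the product) showing random-search hyperparameter optimization paired with k-fold cross-validation splits:

```python
import random

def k_fold_splits(n_samples, k=5, seed=0):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    fold = n_samples // k
    for i in range(k):
        val = idx[i * fold:(i + 1) * fold]
        train = idx[:i * fold] + idx[(i + 1) * fold:]
        yield train, val

def random_search(score_fn, space, n_trials=20, seed=0):
    """Sample hyperparameter configs at random; keep the best-scoring one."""
    rng = random.Random(seed)
    best_params, best_score = None, float('-inf')
    for _ in range(n_trials):
        params = {name: rng.choice(choices) for name, choices in space.items()}
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective standing in for "mean CV score of a trained model".
space = {'lr': [0.01, 0.1, 0.3], 'n_trees': [50, 100, 200]}
best, score = random_search(
    lambda p: -abs(p['lr'] - 0.1) - abs(p['n_trees'] - 200) / 1000,
    space,
)
```

In a real AutoML run, the lambda would be replaced by a function that trains a candidate model on each fold's training indices and averages its validation score.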
Experiment Tracking
Track all training runs with comprehensive metrics and artifact logging:
- Metrics: Loss, accuracy, precision, recall, F1, custom metrics
- Parameters: Hyperparameters, model config, dataset versions
- Artifacts: Model weights, plots, feature importance, predictions
- Comparisons: Side-by-side experiment comparison, metric charts
- Reproducibility: Environment snapshots, random seeds, data versioning
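The shape of the data a tracker records can be sketched in a few lines of plain Python (a toy in-memory model, not the product's API; `ExperimentTracker` and its methods are hypothetical names):

```python
import time
import uuid

class ExperimentTracker:
    """Minimal in-memory run tracker: params, step-wise metrics, artifacts."""
    def __init__(self):
        self.runs = {}

    def start_run(self, params, seed=None):
        run_id = uuid.uuid4().hex[:8]
        self.runs[run_id] = {
            'params': dict(params),
            'seed': seed,          # recorded so the run can be reproduced
            'metrics': {},         # metric name -> list of (step, value)
            'artifacts': {},       # artifact name -> payload or path
            'started_at': time.time(),
        }
        return run_id

    def log_metric(self, run_id, name, value, step=0):
        self.runs[run_id]['metrics'].setdefault(name, []).append((step, value))

    def log_artifact(self, run_id, name, payload):
        self.runs[run_id]['artifacts'][name] = payload

    def compare(self, metric, at=-1):
        """Side-by-side view: one metric's value (default: latest) per run."""
        return {rid: run['metrics'][metric][at][1]
                for rid, run in self.runs.items() if metric in run['metrics']}

tracker = ExperimentTracker()
a = tracker.start_run({'lr': 0.1}, seed=42)
b = tracker.start_run({'lr': 0.01}, seed=42)
tracker.log_metric(a, 'f1', 0.81, step=1)
tracker.log_metric(b, 'f1', 0.78, step=1)
```

A production tracker persists the same structure to a backing store and renders `compare` as metric charts rather than a dict.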
Model Registry
Version Control for Models
- Model Versioning: Track all model versions with metadata
- Model Cards: Documentation, performance metrics, bias analysis
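As a rough sketch of the versioning and promotion flow (again illustrative pure Python, with `ModelRegistry` and its stage names as assumptions rather than the platform's actual interface):

```python
class ModelRegistry:
    """Toy registry: versioned models with metadata and stage promotion."""
    def __init__(self):
        self.models = {}   # name -> list of version dicts (index 0 = v1)

    def register(self, name, weights, metadata=None):
        versions = self.models.setdefault(name, [])
        versions.append({
            'version': len(versions) + 1,
            'weights': weights,
            'metadata': metadata or {},   # e.g. metrics for the model card
            'stage': 'staging',
        })
        return versions[-1]['version']

    def promote(self, name, version, stage='production'):
        # Demote whichever version currently holds the target stage.
        for v in self.models[name]:
            if v['stage'] == stage:
                v['stage'] = 'archived'
        self.models[name][version - 1]['stage'] = stage

    def get(self, name, stage='production'):
        return next(v for v in self.models[name] if v['stage'] == stage)

registry = ModelRegistry()
registry.register('customer-churn', weights=b'v1', metadata={'auc': 0.91})
v2 = registry.register('customer-churn', weights=b'v2', metadata={'auc': 0.94})
registry.promote('customer-churn', v2)
```

The key design point is that promotion is a metadata change, not a copy: the artifact stays put while its stage label moves.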
A/B Testing Routers
Compare model variants in production with traffic splitting:
- Weighted Random: Split traffic by percentage between variants
- Feature-Based: Route based on input features for segment testing
- Multi-Armed Bandit: Auto-optimize traffic to best performers
Learn more about A/B Testing →
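The weighted-random strategy is the simplest of the three; a minimal sketch of how such a router splits traffic (illustrative code, not the platform's routing implementation):

```python
import random

def weighted_router(variants, seed=None):
    """Return a function that routes each request to a variant by weight."""
    names = list(variants)
    weights = [variants[n] for n in names]
    rng = random.Random(seed)
    def route(request=None):
        return rng.choices(names, weights=weights, k=1)[0]
    return route

# Send roughly 90% of traffic to the incumbent, 10% to the challenger.
route = weighted_router({'model-a': 0.9, 'model-b': 0.1}, seed=7)
counts = {'model-a': 0, 'model-b': 0}
for _ in range(1000):
    counts[route()] += 1
```

A multi-armed bandit router follows the same shape but updates the weights online from observed rewards, shifting traffic toward the better-performing variant instead of keeping the split fixed.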
Drift Detection
Monitor model performance and detect data distribution changes:
- PSI Monitoring: Population Stability Index for drift detection
- Feature Drift: Per-feature drift analysis with statistical tests
- Ground Truth: Upload actual outcomes to track accuracy over time
- Alerts: Automated notifications when drift exceeds thresholds
Learn more about Drift Detection →
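PSI compares the binned distribution of a baseline sample against current data; each bin contributes `(actual% - expected%) * ln(actual% / expected%)`, and a common rule of thumb treats values above 0.25 as significant drift. A minimal pure-Python computation (the `psi` helper is illustrative, not the product's API) might look like:

```python
import math

def psi(expected, actual, bins=10, eps=1e-6):
    """Population Stability Index between a baseline and a current sample.

    Bin edges are fixed from the baseline's range; eps avoids log(0)
    for empty bins.
    """
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(values):
        counts = [0] * bins
        for v in values:
            i = sum(v > e for e in edges)   # index of the bin holding v
            counts[i] += 1
        return [c / len(values) + eps for c in counts]

    p = proportions(expected)
    q = proportions(actual)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

baseline = [i / 100 for i in range(100)]              # uniform scores
same = psi(baseline, baseline)                        # identical: PSI ~ 0
shifted = psi(baseline, [v * 0.5 for v in baseline])  # mass moved left
```

Per-feature drift analysis applies the same computation to each input feature separately, alerting when any feature's PSI crosses the configured threshold.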
Deployment Pipelines
```python
# Example: Deploy model from registry
from strongly import MLOps

mlops = MLOps()

# Get production model
model = mlops.get_model('customer-churn', stage='production')

# Deploy as API endpoint
mlops.deploy(
    model_id=model.id,
    endpoint_name='churn-prediction',
    instances=2,
    cpu='500m',
    memory='1Gi'
)

# Monitor predictions
mlops.enable_monitoring(
    endpoint='churn-prediction',
    log_predictions=True,
    detect_drift=True
)
```