Logging
Access and analyze application and system logs through the Platform interface.
Viewing Logs
Pod Logs
View logs from individual pods:
- Go to Platform → Workloads → Pods
- Click on pod name
- Click Logs tab
- Logs stream in real-time
Log Controls
Available log viewing options:
- Container Selection: Choose container in multi-container pods
- Tail Lines: Show last N lines (default: 100)
- Since Time: Show logs since timestamp
- Follow: Auto-refresh to stream new logs
- Previous Logs: View logs from the previous container instance (after a crash or restart)
- Download: Save logs as text file
Filtering Logs
Filter logs in the viewer:
- Search: Text search within logs
- Regex: Regular expression filtering
- Level: Filter by log level (ERROR, WARN, INFO, DEBUG)
- Time Range: Show logs from specific time period
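The same level and pattern filters can be reproduced offline, for example over a downloaded log file. A minimal sketch (the line format with a space-delimited level field is an assumption):

```python
import re

def filter_logs(lines, level=None, pattern=None):
    """Yield log lines matching a level and/or a regular expression."""
    regex = re.compile(pattern) if pattern else None
    for line in lines:
        # Assumes the level appears as a space-delimited token in each line
        if level and f" {level} " not in line:
            continue
        if regex and not regex.search(line):
            continue
        yield line

logs = [
    "2024-01-01T10:00:00 INFO  server started",
    "2024-01-01T10:00:01 ERROR db timeout after 30s",
    "2024-01-01T10:00:02 ERROR auth failed for user 42",
]
print(list(filter_logs(logs, level="ERROR", pattern=r"timeout")))
```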
Log Aggregation
Centralized Logging
All logs are collected and stored centrally:
- Automatic Collection: Logs from all pods automatically collected
- Retention: 30 days of searchable retention by default (see Log Retention below for storage tiers)
- Indexing: Logs indexed for fast search
- Archival: Export old logs to S3 for long-term storage
Multi-Pod Search
Search across multiple pods:
- Go to Platform → Monitoring → Logs
- Enter search query
- Filter by:
- Namespace
- Deployment
- Pod labels
- Time range
- View aggregated results
Log Levels
Standard Log Levels
Use consistent log levels:
- ERROR: Error events that might still allow the app to continue
- WARN: Potentially harmful situations
- INFO: Informational messages highlighting progress
- DEBUG: Fine-grained debugging information
- TRACE: Detailed trace information
Configuring Log Levels
Set log level per application:
# Python logging example
import logging
import os

# Set the log level via an environment variable, defaulting to INFO;
# fall back to INFO if an unknown level name is supplied
log_level = os.getenv('LOG_LEVEL', 'INFO').upper()
logging.basicConfig(level=getattr(logging, log_level, logging.INFO))

logger = logging.getLogger(__name__)
logger.info("Application started")
logger.debug("Debug information")
logger.error("An error occurred")
Structured Logging
JSON Logs
Output logs in JSON format for better parsing:
# Python structured logging
import json
import logging

class JSONFormatter(logging.Formatter):
    def format(self, record):
        log_obj = {
            'timestamp': record.created,
            'level': record.levelname,
            'message': record.getMessage(),
            'logger': record.name,
            'path': record.pathname,
            'line': record.lineno
        }
        return json.dumps(log_obj)

# Configure the root logger to emit JSON
handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
logger = logging.getLogger()
logger.addHandler(handler)
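A quick way to check the formatter is to format a `logging.LogRecord` directly, giving one JSON object per log line (the formatter is redefined here so the snippet is self-contained):

```python
import json
import logging

class JSONFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            'timestamp': record.created,
            'level': record.levelname,
            'message': record.getMessage(),
            'logger': record.name,
        })

# Build a record by hand instead of going through a handler
record = logging.LogRecord(
    name="payments", level=logging.ERROR, pathname="app.py",
    lineno=12, msg="charge failed: %s", args=("card declined",), exc_info=None,
)
line = JSONFormatter().format(record)
parsed = json.loads(line)
print(parsed["level"], parsed["message"])   # ERROR charge failed: card declined
```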
Log Fields
Include useful fields in structured logs:
- timestamp: When event occurred
- level: Log level (ERROR, INFO, etc.)
- message: Log message
- service: Service/application name
- version: Application version
- request_id: Trace requests across services
- user_id: Who triggered the action
- error_stack: Full stack trace for errors
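As a sketch, a helper assembling an entry with the fields above might look like this (field names follow the list; values are illustrative):

```python
import json
import time
import uuid

def make_log_entry(level, message, service, version, request_id=None,
                   user_id=None, error_stack=None):
    """Build a JSON log entry carrying the recommended structured fields."""
    entry = {
        'timestamp': time.time(),
        'level': level,
        'message': message,
        'service': service,
        'version': version,
        # Generate a request_id if the caller has none, so entries stay traceable
        'request_id': request_id or str(uuid.uuid4()),
    }
    # Optional fields are included only when present, keeping entries compact
    if user_id is not None:
        entry['user_id'] = user_id
    if error_stack is not None:
        entry['error_stack'] = error_stack
    return json.dumps(entry)

line = make_log_entry('INFO', 'order created', 'checkout', '1.4.2',
                      request_id='req-123', user_id='u-42')
```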
Log Analysis
Common Queries
Pre-built queries for common scenarios:
- Errors in Last Hour: Find all ERROR logs in last 60 minutes
- Slow Requests: Find requests taking > 1 second
- Failed Logins: Search for authentication failures
- API Errors: Find 4xx and 5xx HTTP responses
- Database Errors: Search for database connection issues
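The "Errors in Last Hour" query can be approximated offline over exported JSON logs. A minimal sketch, assuming each line carries `timestamp` (epoch seconds) and `level` fields:

```python
import json
import time

def errors_in_last_hour(lines, now=None):
    """Return parsed entries at ERROR level from the last 60 minutes."""
    now = now if now is not None else time.time()
    cutoff = now - 3600
    hits = []
    for line in lines:
        entry = json.loads(line)
        if entry['level'] == 'ERROR' and entry['timestamp'] >= cutoff:
            hits.append(entry)
    return hits

now = 1_700_000_000
lines = [
    json.dumps({'timestamp': now - 120, 'level': 'ERROR', 'message': 'db timeout'}),
    json.dumps({'timestamp': now - 7200, 'level': 'ERROR', 'message': 'old error'}),
    json.dumps({'timestamp': now - 60, 'level': 'INFO', 'message': 'ok'}),
]
print([e['message'] for e in errors_in_last_hour(lines, now=now)])  # ['db timeout']
```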
Creating Saved Searches
Save frequently used queries:
- Build your search query
- Click Save Search
- Name and describe the search
- Optionally share with team
- Access from Saved Searches dropdown
Log Exports
Exporting Logs
Download logs for offline analysis:
- Go to log viewer or search results
- Click Export
- Select format:
- Plain Text (.txt)
- JSON (.json)
- CSV (.csv)
- Choose time range
- Click Download
Scheduled Exports
Automatically export logs:
- Create a saved search
- Enable Scheduled Export
- Configure:
- Frequency (daily, weekly)
- Time of day
- Destination (S3 bucket)
- Format
- Save configuration
Log Retention
Retention Policies
Configure how long logs are kept:
- Hot Storage: Last 7 days, fast search
- Warm Storage: 8-30 days, slower search
- Cold Storage: 31-90 days, S3 archive
- Deletion: After 90 days (configurable)
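The tiering above amounts to a simple age-to-tier mapping; a sketch using the default thresholds (the deletion cutoff is configurable, as noted):

```python
def storage_tier(age_days, delete_after=90):
    """Map a log entry's age in days to its storage tier."""
    if age_days <= 7:
        return 'hot'       # fast search
    if age_days <= 30:
        return 'warm'      # slower search
    if age_days <= delete_after:
        return 'cold'      # S3 archive
    return 'deleted'

print(storage_tier(3), storage_tier(15), storage_tier(60), storage_tier(120))
# hot warm cold deleted
```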
Compliance Requirements
Adjust retention for compliance:
- HIPAA: Minimum 6 years
- SOX: Minimum 7 years
- GDPR: No fixed minimum; keep logs containing personal data only as long as needed for the purpose they were collected for
- PCI DSS: Minimum 1 year
Troubleshooting with Logs
Application Crashes
- Find crashed pod in Pods list
- View logs with Previous Logs enabled
- Look for ERROR or stack traces before crash
- Check for out-of-memory events (e.g. OOMKilled) or segmentation faults
Request Failures
- Search logs for request ID or user ID
- Filter by ERROR level
- Trace request across services
- Identify which service failed
Performance Issues
- Search for slow query or timeout messages
- Look for database connection pool exhaustion
- Check for high GC pause times
- Identify resource constraints
Best Practices
Logging Guidelines
- Log Meaningful Events: Don't log everything, focus on important events
- Use Appropriate Levels: ERROR for errors, INFO for key events, DEBUG for diagnostics
- Include Context: Request ID, user ID, resource IDs for traceability
- Avoid Sensitive Data: Don't log passwords, tokens, PII
- Use Structured Logging: JSON format for better parsing and analysis
Performance Considerations
- Async Logging: Don't block application on log writes
- Log Sampling: Sample high-frequency debug logs
- Buffer Logs: Batch log writes to reduce I/O
- Rotate Logs: If logging to files, rotate to prevent disk fill
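In Python, async logging is available in the standard library via `QueueHandler` and `QueueListener`: application threads only enqueue records, and a background thread does the actual I/O. A minimal sketch:

```python
import logging
import logging.handlers
import queue

log_queue = queue.Queue(-1)                       # unbounded queue
queue_handler = logging.handlers.QueueHandler(log_queue)

# The listener owns the real (slow) handler and runs it in a background thread
stream_handler = logging.StreamHandler()
listener = logging.handlers.QueueListener(log_queue, stream_handler)
listener.start()

logger = logging.getLogger("async-demo")
logger.setLevel(logging.INFO)
logger.addHandler(queue_handler)                  # app threads never block on I/O

logger.info("handled without blocking")
listener.stop()                                   # flush remaining records on shutdown
```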
Security
Never log sensitive information such as passwords, API keys, credit card numbers, or personally identifiable information (PII).
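One way to enforce this defensively is a logging filter that masks known sensitive patterns before records are emitted. A sketch (the patterns are illustrative, not exhaustive, and no filter replaces avoiding secrets in log statements in the first place):

```python
import logging
import re

# Illustrative patterns only; extend for your own secret formats
SENSITIVE = [
    (re.compile(r'password=\S+'), 'password=***'),
    (re.compile(r'\b\d{13,16}\b'), '[CARD REDACTED]'),   # likely card numbers
    (re.compile(r'Bearer\s+\S+'), 'Bearer ***'),
]

class RedactingFilter(logging.Filter):
    def filter(self, record):
        msg = record.getMessage()
        for pattern, replacement in SENSITIVE:
            msg = pattern.sub(replacement, msg)
        # Freeze the redacted message so later formatting can't re-expand args
        record.msg, record.args = msg, None
        return True
```

Attach it with `logger.addFilter(RedactingFilter())` so every handler on that logger sees only redacted messages.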