Logging

Access and analyze application and system logs through the Platform interface.

Viewing Logs

Pod Logs

View logs from individual pods:

  1. Go to Platform → Workloads → Pods
  2. Click on pod name
  3. Click Logs tab
  4. Logs stream in real time

Log Controls

Available log viewing options:

  • Container Selection: Choose container in multi-container pods
  • Tail Lines: Show last N lines (default: 100)
  • Since Time: Show logs since timestamp
  • Follow: Auto-refresh to stream new logs
  • Previous Logs: View logs from the previous container instance (after a restart)
  • Download: Save logs as text file
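
The same controls are generally available outside the UI if your platform exposes the underlying Kubernetes API. A minimal sketch using the official kubernetes Python client (the pod name, namespace, and container name are hypothetical placeholders):

# Fetch pod logs with the official Kubernetes Python client
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

logs = v1.read_namespaced_pod_log(
    name="my-app-6d4cf56db9-abcde",  # hypothetical pod name
    namespace="default",
    container="app",        # container selection in multi-container pods
    tail_lines=100,         # last N lines (matches the UI default)
    since_seconds=3600,     # logs since a point in time
    previous=False,         # True = previous container after a restart
    timestamps=True,
)
print(logs)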

Filtering Logs

Filter logs in the viewer:

  • Search: Text search within logs
  • Regex: Regular expression filtering
  • Level: Filter by log level (ERROR, WARN, INFO, DEBUG)
  • Time Range: Show logs from specific time period
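
The same filters can be applied offline against a downloaded log file; a rough Python equivalent (the file name and patterns are illustrative):

# Offline equivalent of the viewer filters: level filter plus regex
import re

pattern = re.compile(r"timeout|connection refused")  # regex filter
levels = {"ERROR", "WARN"}                           # level filter

with open("app.log") as f:                           # hypothetical export
    for line in f:
        if any(level in line for level in levels) and pattern.search(line):
            print(line, end="")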

Log Aggregation

Centralized Logging

All logs are collected and stored centrally:

  • Automatic Collection: Logs from all pods are collected automatically
  • Retention: 30-day default retention period
  • Indexing: Logs indexed for fast search
  • Archival: Export old logs to S3 for long-term storage

Search across multiple pods:

  1. Go to Platform → Monitoring → Logs
  2. Enter search query
  3. Filter by:
    • Namespace
    • Deployment
    • Pod labels
    • Time range
  4. View aggregated results
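
Where the platform runs on Kubernetes, a similar cross-pod search can be approximated with a label selector. A sketch using the kubernetes Python client (the namespace and the app=web label are hypothetical):

# Aggregate matching log lines from every pod behind one label selector
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod("production", label_selector="app=web")
for pod in pods.items:
    logs = v1.read_namespaced_pod_log(pod.metadata.name, "production",
                                      tail_lines=200)
    for line in logs.splitlines():
        if "ERROR" in line:  # the search query
            print(f"{pod.metadata.name}: {line}")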

Log Levels

Standard Log Levels

Use consistent log levels:

  • ERROR: Error events that might still allow the app to continue
  • WARN: Potentially harmful situations
  • INFO: Informational messages highlighting progress
  • DEBUG: Fine-grained debugging information
  • TRACE: Detailed trace information

Configuring Log Levels

Set log level per application:

# Python logging example
import logging
import os

# Set log level via environment variable
log_level = os.getenv('LOG_LEVEL', 'INFO')
logging.basicConfig(level=getattr(logging, log_level.upper(), logging.INFO))

logger = logging.getLogger(__name__)
logger.info("Application started")
logger.debug("Debug information")
logger.error("An error occurred")

Structured Logging

JSON Logs

Output logs in JSON format for better parsing:

# Python structured logging
import json
import logging

class JSONFormatter(logging.Formatter):
    def format(self, record):
        log_obj = {
            'timestamp': record.created,
            'level': record.levelname,
            'message': record.getMessage(),
            'logger': record.name,
            'path': record.pathname,
            'line': record.lineno
        }
        return json.dumps(log_obj)

# Configure logger
handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
logger = logging.getLogger()
logger.addHandler(handler)

Log Fields

Include useful fields in structured logs:

  • timestamp: When event occurred
  • level: Log level (ERROR, INFO, etc.)
  • message: Log message
  • service: Service/application name
  • version: Application version
  • request_id: Trace requests across services
  • user_id: Who triggered the action
  • error_stack: Full stack trace for errors
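
With Python's standard logging, fields like these can be attached per call through the extra argument; they land on the LogRecord, where a formatter like the one above can serialize them. A sketch (the service name and IDs are hypothetical):

# Attach contextual fields via `extra` and serialize them as JSON
import json
import logging

class ContextJSONFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            'level': record.levelname,
            'message': record.getMessage(),
            'service': getattr(record, 'service', None),
            'request_id': getattr(record, 'request_id', None),
        })

handler = logging.StreamHandler()
handler.setFormatter(ContextJSONFormatter())
logger = logging.getLogger('payments')
logger.addHandler(handler)

logger.warning('charge failed',
               extra={'service': 'payments',     # hypothetical values
                      'request_id': 'req-8f3a'})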

Log Analysis

Common Queries

Pre-built queries for common scenarios:

  • Errors in Last Hour: Find all ERROR logs in last 60 minutes
  • Slow Requests: Find requests taking > 1 second
  • Failed Logins: Search for authentication failures
  • API Errors: Find 4xx and 5xx HTTP responses
  • Database Errors: Search for database connection issues

Creating Saved Searches

Save frequently used queries:

  1. Build your search query
  2. Click Save Search
  3. Name and describe the search
  4. Optionally share with team
  5. Access from Saved Searches dropdown

Log Exports

Exporting Logs

Download logs for offline analysis:

  1. Go to log viewer or search results
  2. Click Export
  3. Select format:
    • Plain Text (.txt)
    • JSON (.json)
    • CSV (.csv)
  4. Choose time range
  5. Click Download
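
A JSON export is straightforward to post-process. For example, counting ERROR entries per logger, assuming one JSON object per line (the exact schema depends on the export; the file name is hypothetical):

# Count ERROR entries per logger in a JSON-lines export
import json
from collections import Counter

errors = Counter()
with open("export.json") as f:  # hypothetical export file
    for line in f:
        entry = json.loads(line)
        if entry.get("level") == "ERROR":
            errors[entry.get("logger", "unknown")] += 1

for name, count in errors.most_common():
    print(f"{name}: {count}")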

Scheduled Exports

Automatically export logs:

  1. Create a saved search
  2. Enable Scheduled Export
  3. Configure:
    • Frequency (daily, weekly)
    • Time of day
    • Destination (S3 bucket)
    • Format
  4. Save configuration
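
Retrieving a scheduled export from the destination bucket is a standard S3 download; a sketch with boto3 (the bucket name and keys are hypothetical and depend on how you configured the export):

# List and download scheduled log exports from S3
import boto3

s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket="my-log-exports",  # hypothetical bucket
                          Prefix="daily/")          # hypothetical prefix
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

s3.download_file("my-log-exports", "daily/2024-01-15.json",
                 "2024-01-15.json")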

Log Retention

Retention Policies

Configure how long logs are kept:

  • Hot Storage: Last 7 days, fast search
  • Warm Storage: 8-30 days, slower search
  • Cold Storage: 31-90 days, S3 archive
  • Deletion: After 90 days (configurable)

Compliance Requirements

Adjust retention for compliance:

  • HIPAA: Minimum 6 years
  • SOX: Minimum 7 years
  • GDPR: As long as data is processed
  • PCI DSS: Minimum 1 year

Troubleshooting with Logs

Application Crashes

  1. Find crashed pod in Pods list
  2. View logs with Previous Logs enabled
  3. Look for ERROR entries or stack traces preceding the crash
  4. Check for OutOfMemory errors or segmentation faults
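
Step 2 has a one-line equivalent through the Kubernetes client shown earlier, assuming a hypothetical crashed pod name:

# Logs from the container instance that ran before the restart
from kubernetes import client, config

config.load_kube_config()
crash_logs = client.CoreV1Api().read_namespaced_pod_log(
    name="my-app-6d4cf56db9-abcde",  # hypothetical crashed pod
    namespace="default",
    previous=True,                   # the "Previous Logs" toggle
)
print(crash_logs)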

Request Failures

  1. Search logs for request ID or user ID
  2. Filter by ERROR level
  3. Trace request across services
  4. Identify which service failed

Performance Issues

  1. Search for slow query or timeout messages
  2. Look for database connection pool exhaustion
  3. Check for high GC pause times
  4. Identify resource constraints

Best Practices

Logging Guidelines

  • Log Meaningful Events: Don't log everything; focus on important events
  • Use Appropriate Levels: ERROR for errors, INFO for key events, DEBUG for diagnostics
  • Include Context: Request ID, user ID, resource IDs for traceability
  • Avoid Sensitive Data: Don't log passwords, tokens, or PII
  • Use Structured Logging: JSON format for better parsing and analysis

Performance Considerations

  • Async Logging: Don't block application on log writes
  • Log Sampling: Sample high-frequency debug logs
  • Buffer Logs: Batch log writes to reduce I/O
  • Rotate Logs: If logging to files, rotate to prevent disk fill
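
The async-logging and buffering points map directly onto the standard library's queue-based handlers; a minimal sketch using logging.handlers.QueueHandler and QueueListener:

# Non-blocking logging: the app thread only enqueues records; a
# background listener thread performs the actual (slow) writes
import logging
import logging.handlers
import queue

log_queue = queue.Queue(-1)  # unbounded in-memory buffer
logging.getLogger().addHandler(logging.handlers.QueueHandler(log_queue))
logging.getLogger().setLevel(logging.INFO)

listener = logging.handlers.QueueListener(
    log_queue, logging.StreamHandler())  # the real output handler
listener.start()

logging.getLogger().info("logged without blocking on I/O")
listener.stop()  # flushes queued records on shutdown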

Security

Never log sensitive information such as passwords, API keys, credit card numbers, or personally identifiable information (PII).
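
One defensive backstop is a logging filter that masks known secret shapes before records reach any handler; a sketch (the patterns are illustrative, not exhaustive, and the primary control is still to never pass secrets to the logger):

# Redact obvious secrets before a record is emitted
import logging
import re

SECRET_PATTERNS = [
    re.compile(r"(password|token|api[_-]?key)\s*[=:]\s*\S+", re.IGNORECASE),
    re.compile(r"\b\d{13,16}\b"),  # naive credit-card-like numbers
]

class RedactFilter(logging.Filter):
    def filter(self, record):
        message = record.getMessage()
        for pattern in SECRET_PATTERNS:
            message = pattern.sub("[REDACTED]", message)
        record.msg, record.args = message, None  # store sanitized text
        return True  # keep the (now sanitized) record

handler = logging.StreamHandler()
handler.addFilter(RedactFilter())
logging.getLogger().addHandler(handler)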