Logging
Access and analyze application logs through the Platform interface. The platform provides comprehensive log retrieval from Kubernetes pods with advanced filtering and streaming capabilities.
Pod Logs
Viewing Logs
Pod logs are accessible through two endpoints:
GET /api/v1/pods/{namespace}/{name}/logs
Query parameters:
- container: Select specific container in multi-container pods
- tail_lines: Number of lines to return from the end (default varies)
- since_seconds: Return logs from the last N seconds
- since_time: Return logs since a specific timestamp
- follow: Stream logs in real-time (auto-refresh)
- previous: View logs from the previous container instance (useful for crashed containers)
- timestamps: Include timestamps in log output
POST /api/v1/pods/{namespace}/{name}/logs
Provides an advanced query interface with the same parameters in a request body, allowing for more complex queries.
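As a sketch of how the two endpoints relate (the namespace, pod name, and client wiring are placeholders; only the parameters listed above are used), the GET form carries the parameters in the query string while the POST form carries the same parameters in a JSON body:

```python
import json
from urllib.parse import urlencode

def build_log_query(container=None, tail_lines=None, since_seconds=None,
                    follow=False, previous=False, timestamps=False):
    """Build the query string for GET /api/v1/pods/{ns}/{name}/logs."""
    params = {}
    if container:
        params["container"] = container
    if tail_lines is not None:
        params["tail_lines"] = tail_lines
    if since_seconds is not None:
        params["since_seconds"] = since_seconds
    if follow:
        params["follow"] = "true"
    if previous:
        params["previous"] = "true"
    if timestamps:
        params["timestamps"] = "true"
    return urlencode(params)

# GET form: parameters travel in the query string
query = build_log_query(container="app", tail_lines=100, timestamps=True)
url = f"/api/v1/pods/default/my-pod/logs?{query}"

# POST form: the same parameters go in a JSON request body instead
body = json.dumps({"container": "app", "tail_lines": 100, "timestamps": True})
```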
Log Viewing in the UI
- Navigate to Platform > Pods
- Click on a pod name to view its details
- Click the Logs tab
- Use the controls:
- Container dropdown: Select which container to view (for multi-container pods)
- Tail lines: Adjust how many lines to display
- Follow: Enable auto-refresh to stream new log lines
- Previous: Toggle to see logs from the previous container instance (if the container has restarted)
- Download: Save the current log output as a text file
Job and Workload Logs
Logs are also accessible at the workload level:
- Job logs: GET /api/v1/workloads/jobs/{namespace}/{name}/logs -- aggregates logs from all pods in a job
- Pod logs by owner: GET /api/v1/pods/by-owner/{namespace}/{owner_name} -- find pods by their owner (Deployment, StatefulSet, etc.) and then view their individual logs
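The owner-based flow is two steps: list the pods, then fetch each pod's logs individually. A minimal helper (assuming the by-owner response has already been parsed into a list of pod names) might look like:

```python
def log_urls_for_owner(namespace, pod_names):
    """Build per-pod log URLs for pods found via the by-owner endpoint.

    `pod_names` is assumed to be a list of pod names parsed from the
    response of GET /api/v1/pods/by-owner/{namespace}/{owner_name}.
    """
    return [f"/api/v1/pods/{namespace}/{name}/logs" for name in pod_names]

# Example: two replicas of a hypothetical "web" deployment
urls = log_urls_for_owner("default", ["web-7f9c-abcde", "web-7f9c-fghij"])
```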
Log Sources
Container Logs
Kubernetes captures stdout and stderr from every container:
- All output written to stdout/stderr is captured by the container runtime
- Logs persist as long as the pod exists
- When a container restarts, logs from the previous instance are available via the `previous` flag
- When a pod is deleted, its logs are permanently lost
Event Logs
Kubernetes events provide system-level information that complements container logs:
- Pod scheduling decisions
- Image pull successes and failures
- Container start/stop events
- Resource limit violations (OOMKilled)
- Volume mount successes and failures
- Health check failures
Access events via:
- GET /api/v1/events -- all cluster events
- GET /api/v1/events/resource/{kind}/{name} -- events for a specific resource
- GET /api/v1/pods/{namespace}/{name}/events -- events for a specific pod
Structured Logging Best Practices
JSON Format
Output logs in JSON format for better parsing and searchability:
```python
import json
import logging
from datetime import datetime, timezone

class JSONFormatter(logging.Formatter):
    """Emit each log record as a single JSON object per line."""

    def format(self, record):
        log_entry = {
            # Timezone-aware UTC timestamp in ISO 8601 format
            'timestamp': datetime.now(timezone.utc).isoformat(),
            'level': record.levelname,
            'message': record.getMessage(),
            'logger': record.name,
            'module': record.module,
            'line': record.lineno,
        }
        if record.exc_info:
            log_entry['exception'] = self.formatException(record.exc_info)
        return json.dumps(log_entry)

handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
logging.basicConfig(handlers=[handler], level=logging.INFO)
```
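The payoff of one JSON object per line is on the consuming side: logs can be filtered by field instead of by fragile substring matching. The sample lines below are illustrative, in the shape a formatter like the one above would emit:

```python
import json

# Sample lines in the shape emitted by a JSON log formatter
lines = [
    '{"timestamp": "2024-01-01T12:00:00+00:00", "level": "INFO", "message": "request processed"}',
    '{"timestamp": "2024-01-01T12:00:01+00:00", "level": "ERROR", "message": "db connection failed"}',
]

# Because each line is valid JSON, filtering by field is exact
errors = [entry for entry in map(json.loads, lines) if entry["level"] == "ERROR"]
```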
Standard Log Levels
Use consistent log levels across all applications:
| Level | Usage | Examples |
|---|---|---|
| ERROR | Operation failed, requires attention | Database connection failed, unhandled exception |
| WARNING | Unexpected condition, operation continued | Deprecated API used, retry succeeded after failure |
| INFO | Normal operation milestones | Request processed, user logged in, job completed |
| DEBUG | Diagnostic information | SQL query executed, cache hit/miss, function entry/exit |
Recommended Log Fields
Include these fields in structured logs for traceability:
- timestamp: When the event occurred (ISO 8601 format)
- level: Log severity (ERROR, WARNING, INFO, DEBUG)
- message: Human-readable description
- service: Application or service name
- version: Application version
- request_id: Unique ID for tracing requests across services
- user_id: Which user triggered the action (if applicable)
- error_stack: Full stack trace for errors
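A small helper can keep these fields consistent across services. The sketch below uses the field names listed above; the service name and version values are placeholders:

```python
import json
import uuid
from datetime import datetime, timezone

def make_log_entry(level, message, service, version,
                   request_id=None, user_id=None, error_stack=None):
    """Assemble a structured log entry with the recommended fields."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # ISO 8601
        "level": level,
        "message": message,
        "service": service,
        "version": version,
        # Generate a request ID if the caller did not propagate one
        "request_id": request_id or str(uuid.uuid4()),
    }
    if user_id is not None:
        entry["user_id"] = user_id
    if error_stack is not None:
        entry["error_stack"] = error_stack
    return json.dumps(entry)

line = make_log_entry("INFO", "user logged in", "auth-service", "1.4.2", user_id="u-123")
```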
What Not to Log
Never log sensitive information:
- Passwords or password hashes
- API keys or authentication tokens
- Credit card numbers or financial data
- Personally identifiable information (PII) beyond user IDs
- Database connection strings with credentials
- Secret values or encryption keys
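One way to enforce this list mechanically is to redact known-sensitive keys before a log entry is serialized. A minimal sketch (the key list here is illustrative and should be extended for your applications):

```python
# Keys whose values must never reach the logs (illustrative, not exhaustive)
SENSITIVE_KEYS = {"password", "api_key", "token", "secret", "authorization"}

def redact(fields):
    """Return a copy of `fields` with sensitive values masked."""
    return {k: "[REDACTED]" if k.lower() in SENSITIVE_KEYS else v
            for k, v in fields.items()}

safe = redact({"user_id": "u-123", "password": "hunter2", "api_key": "abcd"})
```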
Troubleshooting with Logs
Application Crashes (CrashLoopBackOff)
- Find the crashing pod in the Pods list
- Click the pod name and go to the Logs tab
- Enable Previous to see logs from the previous container instance
- Look for ERROR messages or stack traces at the end of the logs
- Common causes:
- Missing environment variables or config files
- Database connection failures
- Port binding conflicts
- Out of memory (check events for OOMKilled)
- Application-level errors during startup
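When triaging a crash, a first pass is often just scanning the downloaded log tail for error markers. A rough helper (the marker list is an assumption, tune it to your stack):

```python
def find_error_lines(log_text, markers=("ERROR", "Traceback", "panic:", "FATAL")):
    """Return lines from a log dump that look like errors or stack traces."""
    return [line for line in log_text.splitlines()
            if any(m in line for m in markers)]

# Example tail from a crashed container (illustrative)
tail = """INFO starting app
ERROR could not connect to database at db:5432
Traceback (most recent call last):
INFO shutting down"""
hits = find_error_lines(tail)
```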
Image Pull Failures
- Check pod events for `ImagePullBackOff` or `ErrImagePull`
- The event message indicates the specific error:
- "manifest unknown" -- image tag does not exist
- "unauthorized" -- registry requires authentication
- "connection refused" -- cannot reach the registry
Failed Health Checks
- Check pod events for `Unhealthy` events with liveness or readiness probe details
- The event shows the HTTP status code or exec exit code
- View container logs to understand why the health endpoint is failing
- Consider increasing `initialDelaySeconds` for slow-starting applications
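The `initialDelaySeconds` change is a small patch to the workload spec. A sketch of the strategic-merge-patch fragment (the container name "app" and the 60-second value are placeholders; `livenessProbe.initialDelaySeconds` is the standard Kubernetes probe field):

```python
import json

# Merge-patch fragment raising the liveness probe's start delay
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [{
                    "name": "app",  # must match the container being patched
                    "livenessProbe": {"initialDelaySeconds": 60},
                }]
            }
        }
    }
}
body = json.dumps(patch)
```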
Volume Mount Issues
- Check pod events for `FailedMount` or `FailedAttachVolume` events
- Common causes visible in events:
- PVC not found or not bound
- Volume already attached to another node (ReadWriteOnce)
- Permission denied on the volume
Missing Environment Variables
- Exec into the pod: use the Terminal tab or POST /api/v1/pods/{namespace}/{name}/exec
- Run `env` to list all environment variables
- Use the Environment endpoint: GET /api/v1/pods/{namespace}/{name}/environment
- Compare with expected values from the deployment configuration
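The comparison step can be automated: given the pod's environment (parsed from `env` output or the environment endpoint) and the list of variables the deployment is expected to set, report what is absent or empty. A minimal sketch:

```python
import os

def missing_env_vars(required, environ=None):
    """Return required variables that are absent or empty in the environment."""
    environ = os.environ if environ is None else environ
    return sorted(v for v in required if not environ.get(v))

# Illustrative pod environment: LOG_LEVEL is set but empty, REDIS_URL is absent
pod_env = {"DATABASE_URL": "postgres://db:5432/app", "LOG_LEVEL": ""}
gaps = missing_env_vars(["DATABASE_URL", "LOG_LEVEL", "REDIS_URL"], pod_env)
```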
File Access for Debugging
The platform provides direct filesystem access to running containers:
| Operation | Endpoint | Use Case |
|---|---|---|
| Browse files | GET /pods/{ns}/{name}/files | Navigate the filesystem |
| Download file | GET /pods/{ns}/{name}/files/download | Download log files, config files |
| Upload file | POST /pods/{ns}/{name}/files/upload | Upload debug tools or test data |
| Edit file | PUT /pods/{ns}/{name}/files | Modify configuration in-place |
| Delete file | DELETE /pods/{ns}/{name}/files | Clean up temporary files |
This is useful when:
- Application writes logs to files instead of stdout
- You need to inspect configuration files mounted from ConfigMaps/Secrets
- You want to check temporary data or cache contents
- You need to upload diagnostic tools to a running container
Ephemeral Debug Containers
For production pods that lack debugging tools, use ephemeral containers:
- Add a debug container: POST /api/v1/pods/{namespace}/{name}/ephemeralcontainers
- The debug container shares the pod's network namespace and can access mounted volumes
- Use the debug container to run diagnostic commands
- List ephemeral containers: GET /api/v1/pods/{namespace}/{name}/ephemeralcontainers
- Remove when done: DELETE /api/v1/pods/{namespace}/{name}/ephemeralcontainers/{container_name}
This approach avoids modifying production containers while still allowing in-depth debugging.
Audit Logging
The platform itself maintains comprehensive audit logs for security and compliance:
- All security events (login, logout, MFA, account locks) are logged to the audit system
- Audit logs are stored securely with long-term archival
- API operations against the platform are also tracked in the audit log
For platform audit logs, see the Governance section for details on the audit log viewer.
When troubleshooting, always start with pod events before looking at container logs. Events provide context about scheduling, volume mounting, and image pulling -- issues that happen before your application code runs. Container logs show what happens after the application starts.