# PostgreSQL
PostgreSQL is a powerful, open-source relational database system with strong support for ACID compliance, complex queries, and advanced data types.
## Overview
- Versions: 17, 16, 15
- Cluster Support: ❌ No (Single node only)
- Use Cases: Relational data, analytics, ACID compliance
- Features: Extensions, backups, full-text search
## Key Features
- ACID Compliant: Full support for transactions with atomicity, consistency, isolation, and durability
- Rich Extensions: PostGIS for geospatial data, pg_trgm for text search, and many more
- Advanced Data Types: JSON/JSONB, arrays, hstore, and custom types
- Full-Text Search: Built-in text search capabilities with ranking and stemming
- Concurrent Access: Multi-version concurrency control (MVCC) for high performance
- Foreign Data Wrappers: Query external data sources as if they were local tables
## Resource Tiers
| Tier | CPU | Memory | Disk | Best For |
|---|---|---|---|---|
| Small | 0.5 | 1GB | 10GB | Development, testing |
| Medium | 1 | 2GB | 25GB | Small production apps |
| Large | 2 | 4GB | 50GB | Production workloads |
| XLarge | 4 | 8GB | 100GB | High-traffic applications |
## Creating a PostgreSQL Add-on
- Navigate to Add-ons → Create Add-on
- Select PostgreSQL as the type
- Choose a version (17, 16, or 15)
- Configure:
  - Label: Descriptive name (e.g., "User Database")
  - Description: Purpose and notes
  - Environment: Development or Production
  - Resource Tier: Based on your workload requirements
- Configure backups:
  - Schedule: Daily recommended for production
  - Retention: 7+ days for production
- Click Create Add-on
## Connection Information
After deployment, connection details are available in the add-on details page and automatically injected into your apps via STRONGLY_SERVICES.
### Connection String Format

```text
postgresql://username:password@host:5432/database
```
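If you need the individual fields out of a connection string, Python's standard `urllib.parse` can split it. A minimal sketch, using the placeholder format above rather than real credentials:

```python
from urllib.parse import urlparse

# Placeholder URL in the documented format, not real credentials
url = urlparse("postgresql://username:password@host:5432/database")

host = url.hostname              # "host"
port = url.port                  # 5432
database = url.path.lstrip("/")  # "database"
user = url.username              # "username"
password = url.password          # "password"
```

This is handy when a client library wants individual parameters but you only have the connection string.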
### Accessing Connection Details

**Python**

```python
import os
import json

import psycopg2

# Parse STRONGLY_SERVICES
services = json.loads(os.environ.get('STRONGLY_SERVICES', '{}'))

# Get the PostgreSQL add-on connection details
pg_addon = services['addons']['addon-id']

# Connect using the connection string
conn = psycopg2.connect(pg_addon['connectionString'])
cursor = conn.cursor()

# Or connect using individual parameters
conn = psycopg2.connect(
    host=pg_addon['host'],
    port=pg_addon['port'],
    database=pg_addon['database'],
    user=pg_addon['username'],
    password=pg_addon['password']
)
```
**Node.js**

```javascript
const { Pool } = require('pg');

// Parse STRONGLY_SERVICES
const services = JSON.parse(process.env.STRONGLY_SERVICES || '{}');
const pgAddon = services.addons['addon-id'];

// Connect using the connection string
const pool = new Pool({
  connectionString: pgAddon.connectionString
});

// Or connect using individual parameters
const poolFromParams = new Pool({
  host: pgAddon.host,
  port: pgAddon.port,
  database: pgAddon.database,
  user: pgAddon.username,
  password: pgAddon.password
});

// Query example (await requires an async context in CommonJS)
async function getActiveUsers() {
  const result = await pool.query('SELECT * FROM users WHERE active = $1', [true]);
  return result.rows;
}
```
**Go**

```go
package main

import (
	"database/sql"
	"encoding/json"
	"os"

	_ "github.com/lib/pq"
)

type Services struct {
	Addons map[string]Addon `json:"addons"`
}

type Addon struct {
	ConnectionString string `json:"connectionString"`
	Host             string `json:"host"`
	Port             int    `json:"port"`
	Database         string `json:"database"`
	Username         string `json:"username"`
	Password         string `json:"password"`
}

func main() {
	var services Services
	json.Unmarshal([]byte(os.Getenv("STRONGLY_SERVICES")), &services)
	pgAddon := services.Addons["addon-id"]

	// Connect using the connection string
	db, err := sql.Open("postgres", pgAddon.ConnectionString)
	if err != nil {
		panic(err)
	}
	defer db.Close()
}
```
## Common Operations

### Creating Tables
```sql
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) UNIQUE NOT NULL,
    username VARCHAR(100) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    metadata JSONB
);

CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_users_metadata ON users USING GIN (metadata);
```
### Full-Text Search
```sql
-- Add a full-text search column
ALTER TABLE articles ADD COLUMN search_vector tsvector;

-- Update the search vector
UPDATE articles SET search_vector =
    to_tsvector('english', title || ' ' || content);

-- Create an index
CREATE INDEX idx_articles_search ON articles USING GIN (search_vector);

-- Search
SELECT * FROM articles
WHERE search_vector @@ to_tsquery('english', 'postgresql & database');
```
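Search terms usually arrive as free text, while `to_tsquery` expects `&`-joined operands. A small helper can do the join in application code (an illustrative sketch, not a platform API; the `cursor.execute` call is commented out because it needs a live connection):

```python
def to_tsquery_input(raw: str) -> str:
    """Join whitespace-separated terms with ' & ' for to_tsquery()."""
    return " & ".join(raw.split())

query = to_tsquery_input("postgresql database")

# Pass the result as a bound parameter:
# cursor.execute(
#     "SELECT * FROM articles WHERE search_vector @@ to_tsquery('english', %s)",
#     (query,),
# )
```

PostgreSQL can also do this server-side: `plainto_tsquery('english', 'postgresql database')` accepts plain text directly.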
### JSON Operations
```sql
-- Query JSONB data
SELECT * FROM users
WHERE metadata @> '{"premium": true}';

-- Update a JSONB field
UPDATE users
SET metadata = metadata || '{"last_login": "2025-01-26"}'
WHERE id = 1;

-- Extract a JSONB field
SELECT email, metadata->>'plan' AS plan
FROM users;
```
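From application code, pass JSONB filters as bound parameters instead of interpolating them into the SQL text. A hedged sketch with `psycopg2` (the `cursor.execute` call is commented out because it needs a live connection):

```python
import json

# Build the containment filter as a JSON string
premium_filter = json.dumps({"premium": True})

# Cast the bound parameter to jsonb on the server side
sql = "SELECT * FROM users WHERE metadata @> %s::jsonb"
# cursor.execute(sql, (premium_filter,))
```

Binding the value keeps user-controlled JSON out of the SQL string entirely.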
## Popular Extensions
PostgreSQL supports many extensions. Common ones include:
- PostGIS: Geospatial data support
- pg_trgm: Trigram matching for fuzzy text search
- uuid-ossp: UUID generation
- hstore: Key-value store within PostgreSQL
- pg_stat_statements: Query performance tracking
### Enabling Extensions
```sql
-- Enable an extension
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "pg_trgm";

-- List installed extensions
SELECT * FROM pg_extension;
```
## Backup & Restore
PostgreSQL add-ons use pg_dump for backups, creating complete SQL dumps of your database.
### Backup Configuration

- Tool: `pg_dump`
- Format: `.sql`
- Includes: Schema, data, indexes, constraints
- Storage: AWS S3 (`s3://strongly-backups/backups/<addon-id>/`)
### Manual Backup

- Go to the add-on details page
- Click Backup Now
- Monitor progress in the job logs
- The backup is saved as `backup-YYYYMMDDHHMMSS.sql`
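The timestamp in the backup name follows the `YYYYMMDDHHMMSS` pattern. If you generate matching names yourself, for example when mirroring backups to your own storage, `datetime.strftime` produces the same shape (a sketch; the platform's actual naming code is not shown here):

```python
import re
from datetime import datetime, timezone

# Build a name in the documented backup-YYYYMMDDHHMMSS.sql pattern
timestamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
name = f"backup-{timestamp}.sql"

# Sanity-check against the documented pattern
assert re.fullmatch(r"backup-\d{14}\.sql", name)
```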
### Scheduled Backups
Configure during add-on creation or in settings:
- Daily backups: Recommended for production
- Retention: 7-14 days minimum for production
- Custom cron: For specific schedules (e.g., `0 2 * * *` for 2 AM daily)
### Restore Process
- Navigate to Backups tab
- Select backup from list
- Click Restore
- Confirm (add-on will stop temporarily)
- Data restored and add-on restarts
**Data Loss:** Restoring from a backup replaces ALL current data. Create a fresh backup first if needed.
## Performance Optimization

### Connection Pooling
Use connection pooling for better performance:
```python
from psycopg2 import pool

# Create a connection pool
connection_pool = pool.SimpleConnectionPool(
    minconn=1,
    maxconn=20,
    host='host',
    database='database',
    user='user',
    password='password'
)

# Get a connection from the pool
conn = connection_pool.getconn()

# Use the connection
# ...

# Return it to the pool
connection_pool.putconn(conn)
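To guarantee connections are returned even when a query raises, the getconn/putconn pair can be wrapped in a context manager. This is an illustrative helper, not part of psycopg2; it is shown against a stand-in pool object so the sketch runs without a server:

```python
from contextlib import contextmanager

@contextmanager
def pooled_connection(pool):
    """Borrow a connection from the pool and always return it."""
    conn = pool.getconn()
    try:
        yield conn
    finally:
        pool.putconn(conn)

# Stand-in pool to demonstrate the borrow/return cycle without a server
class FakePool:
    def __init__(self):
        self.checked_out = 0
    def getconn(self):
        self.checked_out += 1
        return object()
    def putconn(self, conn):
        self.checked_out -= 1

demo_pool = FakePool()
with pooled_connection(demo_pool) as conn:
    pass  # run queries here
# The connection was returned when the block exited
```

With a real `SimpleConnectionPool` in place of `FakePool`, the same `with` block ensures `putconn` is always called.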
### Indexing Strategy
```sql
-- B-tree index (default; good for equality and range queries)
CREATE INDEX idx_users_email ON users(email);

-- GIN index (good for JSONB and full-text search)
CREATE INDEX idx_users_metadata ON users USING GIN (metadata);

-- Partial index (index only a subset of rows)
CREATE INDEX idx_active_users ON users(email) WHERE active = true;

-- Analyze index usage
SELECT schemaname, tablename, indexname, idx_scan
FROM pg_stat_user_indexes
ORDER BY idx_scan;
```
### Query Optimization
```sql
-- Use EXPLAIN to analyze queries
EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'user@example.com';

-- Use VACUUM to reclaim space
VACUUM ANALYZE users;

-- Monitor slow queries (requires the pg_stat_statements extension;
-- the timing columns are named total_exec_time/mean_exec_time since PostgreSQL 13)
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```
## Monitoring
Monitor your PostgreSQL add-on through the Strongly platform:
- CPU Usage: Track CPU utilization
- Memory Usage: Monitor memory consumption
- Disk Space: Watch disk utilization
- Connection Count: Active connections
- Query Performance: Slow query detection
## Best Practices
- Use Indexes Wisely: Index frequently queried columns, but avoid over-indexing
- Enable Connection Pooling: Reduce connection overhead
- Regular VACUUM: Keep database healthy with regular maintenance
- Monitor Query Performance: Use `EXPLAIN ANALYZE` for slow queries
- Use Transactions: Wrap related operations in transactions
- Backup Regularly: Enable daily backups for production databases
- Test Restores: Periodically verify backup restoration
- Use Prepared Statements: Prevent SQL injection and improve performance
- Separate Dev/Prod: Never connect production apps to development databases
- Resource Planning: Start with appropriate tier, scale up as needed
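The prepared-statements point above deserves a concrete contrast: string formatting lets user input become SQL, while bound parameters keep it data. A short sketch (the `cursor.execute` call is commented out because it needs a live connection):

```python
user_input = "x' OR '1'='1"

# Unsafe: the input is spliced directly into the SQL text
unsafe_sql = f"SELECT * FROM users WHERE email = '{user_input}'"

# Safe: the driver sends the value separately from the SQL
safe_sql = "SELECT * FROM users WHERE email = %s"
# cursor.execute(safe_sql, (user_input,))
```

In the unsafe version the `OR '1'='1'` fragment becomes part of the query; in the safe version it is just a string the driver binds to `%s`.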
## Migration Guide

### From MySQL to PostgreSQL
Key differences to note:
```sql
-- MySQL: AUTO_INCREMENT
-- PostgreSQL: SERIAL or IDENTITY
CREATE TABLE users (
    id SERIAL PRIMARY KEY
    -- or: id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY
);

-- MySQL: LIMIT offset, count
-- PostgreSQL: LIMIT count OFFSET offset
SELECT * FROM users LIMIT 10 OFFSET 20;

-- MySQL: CONCAT()
-- PostgreSQL: || operator
SELECT first_name || ' ' || last_name AS full_name FROM users;

-- MySQL: IF()
-- PostgreSQL: CASE WHEN
SELECT CASE WHEN age >= 18 THEN 'adult' ELSE 'minor' END FROM users;
```
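The `LIMIT offset, count` to `LIMIT count OFFSET offset` rewrite is mechanical enough to automate when porting generated queries. An illustrative sketch, not a full SQL translator:

```python
import re

def mysql_limit_to_postgres(sql: str) -> str:
    """Rewrite a trailing MySQL 'LIMIT offset, count' into PostgreSQL syntax."""
    return re.sub(
        r"LIMIT\s+(\d+)\s*,\s*(\d+)\s*$",
        r"LIMIT \2 OFFSET \1",
        sql,
        flags=re.IGNORECASE,
    )

rewritten = mysql_limit_to_postgres("SELECT * FROM users LIMIT 20, 10")
```

Note that MySQL also accepts `LIMIT count OFFSET offset`, so queries already in that form pass through unchanged.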
## Troubleshooting

### Connection Issues
```shell
# Test the connection
psql "postgresql://username:password@host:5432/database"
```

```sql
-- Check the connection limit
SELECT setting FROM pg_settings WHERE name = 'max_connections';

-- View current connections
SELECT count(*) FROM pg_stat_activity;
```
### Performance Issues
```sql
-- Find long-running active queries
SELECT pid, now() - query_start AS duration, query
FROM pg_stat_activity
WHERE state = 'active'
ORDER BY duration DESC;

-- Find the largest tables (a starting point when investigating bloat)
SELECT schemaname, tablename,
       pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
FROM pg_tables
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;
```
### Disk Space Issues
```sql
-- Check database size
SELECT pg_size_pretty(pg_database_size(current_database()));

-- Check table sizes
SELECT tablename,
       pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
FROM pg_tables
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC
LIMIT 10;
```
## Support
For issues or questions:
- Check add-on logs in the Strongly dashboard
- Review PostgreSQL official documentation
- Contact Strongly support through the platform