Redis
Redis is an in-memory data structure store used as a database, cache, message broker, and streaming engine, known for its exceptional performance.
Overview
- Versions: 7.4, 7.2, 7.0
- Cluster Support: ❌ No (Single node only)
- Use Cases: Caching, sessions, real-time data, pub/sub messaging
- Features: Persistence, pub/sub, data structures, Lua scripting
Key Features
- Blazing Fast: In-memory operations with microsecond latency
- Rich Data Structures: Strings, lists, sets, sorted sets, hashes, streams, and more
- Persistence Options: RDB snapshots and AOF (Append-Only File)
- Pub/Sub Messaging: Real-time message broadcasting
- Atomic Operations: Individual commands always execute atomically
- Lua Scripting: Execute complex multi-step operations atomically (see the sketch after this list)
- TTL Support: Automatic key expiration
- Transactions: Multi-command transactions with WATCH/MULTI/EXEC
- Streams: Log-like data structure for event sourcing
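For example, Lua scripting lets a read-check-write sequence run as one atomic server-side step. A minimal sketch using redis-py's register_script (the key name and token below are illustrative):
import redis
r = redis.Redis(host='host', port=6379, decode_responses=True)
# Delete a key only if it still holds the value we wrote; the GET and DEL
# execute as a single atomic Lua script on the server.
script = """
if redis.call('GET', KEYS[1]) == ARGV[1] then
    return redis.call('DEL', KEYS[1])
else
    return 0
end
"""
compare_and_delete = r.register_script(script)
deleted = compare_and_delete(keys=['lock:resource'], args=['owner-token'])
print(deleted)  # 1 if deleted, 0 if the value no longer matched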
Resource Tiers
| Tier | CPU | Memory | Disk | Best For |
|---|---|---|---|---|
| Small | 0.5 | 1GB | 10GB | Development, testing |
| Medium | 1 | 2GB | 25GB | Small production apps |
| Large | 2 | 4GB | 50GB | Production caching |
| XLarge | 4 | 8GB | 100GB | High-traffic applications |
Creating a Redis Add-on
- Navigate to Add-ons → Create Add-on
- Select Redis as the type
- Choose a version (7.4, 7.2, or 7.0)
- Configure:
- Label: Descriptive name (e.g., "Session Cache")
- Description: Purpose and notes
- Environment: Development or Production
- Resource Tier: Based on your workload requirements
- Configure backups:
- Schedule: Daily recommended for persistent data
- Retention: 7+ days for production
- Click Create Add-on
Connection Information
After deployment, connection details are available in the add-on details page and automatically injected into your apps via STRONGLY_SERVICES.
Connection String Format
redis://[:password@]host:6379[/database]
Accessing Connection Details
The examples below read STRONGLY_SERVICES in Python, Node.js, and Go.
Python:
import os
import json
import redis
# Parse STRONGLY_SERVICES
services = json.loads(os.environ.get('STRONGLY_SERVICES', '{}'))
# Get Redis add-on connection
redis_addon = services['addons']['addon-id']
# Connect using connection string
r = redis.from_url(redis_addon['connectionString'])
# Or connect using individual parameters
r = redis.Redis(
host=redis_addon['host'],
port=redis_addon['port'],
password=redis_addon['password'],
db=redis_addon.get('database', 0),
decode_responses=True
)
# Set and get values
r.set('key', 'value')
value = r.get('key')
print(value)
Node.js:
const redis = require('redis');
// Parse STRONGLY_SERVICES
const services = JSON.parse(process.env.STRONGLY_SERVICES || '{}');
const redisAddon = services.addons['addon-id'];
// Connect using connection string
const client = redis.createClient({
  url: redisAddon.connectionString
});
// Or connect using individual parameters instead:
// const client = redis.createClient({
//   socket: {
//     host: redisAddon.host,
//     port: redisAddon.port
//   },
//   password: redisAddon.password,
//   database: redisAddon.database || 0
// });
await client.connect();
// Set and get values
await client.set('key', 'value');
const value = await client.get('key');
console.log(value);
await client.disconnect();
Go:
package main
import (
"context"
"encoding/json"
"fmt"
"os"
"github.com/go-redis/redis/v8"
)
type Services struct {
	Addons map[string]Addon `json:"addons"`
}

type Addon struct {
	Host     string `json:"host"`
	Port     int    `json:"port"`
	Password string `json:"password"`
	Database int    `json:"database"`
}

func main() {
	var services Services
	if err := json.Unmarshal([]byte(os.Getenv("STRONGLY_SERVICES")), &services); err != nil {
		panic(err)
	}
	redisAddon := services.Addons["addon-id"]

	ctx := context.Background()

	// Connect
	rdb := redis.NewClient(&redis.Options{
		Addr:     fmt.Sprintf("%s:%d", redisAddon.Host, redisAddon.Port),
		Password: redisAddon.Password,
		DB:       redisAddon.Database,
	})

	// Set and get values
	if err := rdb.Set(ctx, "key", "value", 0).Err(); err != nil {
		panic(err)
	}
	val, err := rdb.Get(ctx, "key").Result()
	if err != nil {
		panic(err)
	}
	fmt.Println("key:", val)
}
Common Operations
Basic Key-Value Operations
import redis
r = redis.Redis(host='host', port=6379, decode_responses=True)
# SET and GET
r.set('user:1000:name', 'John Doe')
name = r.get('user:1000:name')
# SET with expiration (in seconds)
r.setex('session:abc123', 3600, 'session_data')
# SET if not exists
r.setnx('lock:resource', 'locked')
# Multiple SET
r.mset({'key1': 'value1', 'key2': 'value2', 'key3': 'value3'})
# Multiple GET
values = r.mget(['key1', 'key2', 'key3'])
# INCREMENT and DECREMENT
r.incr('page:views')
r.incrby('score', 10)
r.decr('inventory:item:123')
# DELETE
r.delete('key1')
# Check existence
exists = r.exists('key1')
# Set expiration
r.expire('key1', 300) # 300 seconds
# Get TTL
ttl = r.ttl('key1')
Lists
# Push to list
r.lpush('queue:tasks', 'task1', 'task2', 'task3') # Left push
r.rpush('queue:tasks', 'task4') # Right push
# Pop from list
task = r.lpop('queue:tasks') # Left pop
task = r.rpop('queue:tasks') # Right pop
# Blocking pop (wait for item)
task = r.blpop('queue:tasks', timeout=5)
# Get list length
length = r.llen('queue:tasks')
# Get range
tasks = r.lrange('queue:tasks', 0, -1) # All items
# Trim list
r.ltrim('queue:tasks', 0, 99) # Keep first 100 items
Sets
# Add members to set
r.sadd('tags:post:1', 'python', 'redis', 'database')
# Check membership
is_member = r.sismember('tags:post:1', 'python')
# Get all members
tags = r.smembers('tags:post:1')
# Remove member
r.srem('tags:post:1', 'database')
# Set operations
r.sadd('set1', 'a', 'b', 'c')
r.sadd('set2', 'b', 'c', 'd')
union = r.sunion('set1', 'set2') # {'a', 'b', 'c', 'd'}
inter = r.sinter('set1', 'set2') # {'b', 'c'}
diff = r.sdiff('set1', 'set2') # {'a'}
# Random member
random_tag = r.srandmember('tags:post:1')
# Pop random member
tag = r.spop('tags:post:1')
Sorted Sets (Leaderboards)
# Add members with scores
r.zadd('leaderboard', {'player1': 100, 'player2': 150, 'player3': 120})
# Increment score
r.zincrby('leaderboard', 10, 'player1')
# Get rank (0-based)
rank = r.zrank('leaderboard', 'player1')
reverse_rank = r.zrevrank('leaderboard', 'player1') # Highest score = rank 0
# Get score
score = r.zscore('leaderboard', 'player1')
# Get top N players
top_players = r.zrevrange('leaderboard', 0, 9, withscores=True)
# Get players by score range
players = r.zrangebyscore('leaderboard', 100, 200, withscores=True)
# Count members in score range
count = r.zcount('leaderboard', 100, 200)
# Remove member
r.zrem('leaderboard', 'player1')
Hashes
# Set hash fields
r.hset('user:1000', mapping={
'name': 'John Doe',
'email': 'john@example.com',
'age': '30'
})
# Get single field
name = r.hget('user:1000', 'name')
# Get all fields
user = r.hgetall('user:1000')
# Get multiple fields
fields = r.hmget('user:1000', ['name', 'email'])
# Increment hash field
r.hincrby('user:1000', 'login_count', 1)
# Check field existence
exists = r.hexists('user:1000', 'name')
# Get all keys
keys = r.hkeys('user:1000')
# Get all values
values = r.hvals('user:1000')
# Delete field
r.hdel('user:1000', 'age')
Pub/Sub Messaging
# Publisher
import redis
r = redis.Redis(host='host', port=6379)
r.publish('channel:notifications', 'Hello, World!')
# Subscriber
p = r.pubsub()
p.subscribe('channel:notifications')
for message in p.listen():
    if message['type'] == 'message':
        print(message['data'])
# Pattern subscription
p.psubscribe('channel:*')
# Unsubscribe
p.unsubscribe('channel:notifications')
Transactions
# MULTI/EXEC transaction
pipe = r.pipeline()
pipe.set('key1', 'value1')
pipe.set('key2', 'value2')
pipe.incr('counter')
results = pipe.execute()
# Optimistic locking with WATCH
pipe = r.pipeline()
while True:
    try:
        # Watch key for changes
        pipe.watch('balance:1000')
        current_balance = int(r.get('balance:1000'))
        if current_balance >= 100:
            # Start transaction
            pipe.multi()
            pipe.decrby('balance:1000', 100)
            pipe.incrby('balance:2000', 100)
            pipe.execute()
            break
        else:
            pipe.unwatch()
            break
    except redis.WatchError:
        # Key was modified, retry
        continue
Streams
# Add to stream
r.xadd('events', {'user': 'john', 'action': 'login'})
# Read from stream
messages = r.xread({'events': '0'}, count=10)
# Consumer groups
r.xgroup_create('events', 'mygroup', id='0')
messages = r.xreadgroup('mygroup', 'consumer1', {'events': '>'}, count=10)
# Acknowledge message
r.xack('events', 'mygroup', message_id)  # message_id comes from the xreadgroup result
Caching Patterns
Cache-Aside Pattern
def get_user(user_id):
    # Try cache first
    cache_key = f'user:{user_id}'
    cached = r.get(cache_key)
    if cached:
        return json.loads(cached)
    # Cache miss - get from database
    user = db.query("SELECT * FROM users WHERE id = ?", user_id)
    # Store in cache with 1 hour TTL
    r.setex(cache_key, 3600, json.dumps(user))
    return user
Write-Through Cache
def update_user(user_id, data):
    # Update database
    db.query("UPDATE users SET ... WHERE id = ?", user_id, data)
    # Update cache
    cache_key = f'user:{user_id}'
    r.setex(cache_key, 3600, json.dumps(data))
Cache Invalidation
# Delete specific key
r.delete(f'user:{user_id}')
# Delete pattern
for key in r.scan_iter('user:*'):
    r.delete(key)
# Set TTL on existing key
r.expire(f'user:{user_id}', 60)
Session Management
from flask import Flask, session
from flask_session import Session
import redis
app = Flask(__name__)
app.config['SESSION_TYPE'] = 'redis'
app.config['SESSION_REDIS'] = redis.from_url(redis_url)  # redis_url from the add-on connection details
Session(app)
@app.route('/login')
def login():
    session['user_id'] = 1000
    session['username'] = 'johndoe'
    return 'Logged in'

@app.route('/profile')
def profile():
    user_id = session.get('user_id')
    return f'User: {user_id}'
Rate Limiting
def is_rate_limited(user_id, limit=100, window=60):
    """
    Allow 'limit' requests per 'window' seconds
    """
    key = f'rate_limit:{user_id}'
    current = r.incr(key)
    if current == 1:
        r.expire(key, window)
    return current > limit

# Usage (inside a request handler)
if is_rate_limited('user:1000', limit=10, window=60):
    return 'Rate limit exceeded', 429
Distributed Locking
import time
import uuid
def acquire_lock(lock_name, timeout=10):
    """
    Acquire a distributed lock with automatic expiration
    """
    lock_key = f'lock:{lock_name}'
    identifier = str(uuid.uuid4())
    end = time.time() + timeout
    while time.time() < end:
        if r.set(lock_key, identifier, nx=True, ex=timeout):
            return identifier
        time.sleep(0.001)
    return False
def release_lock(lock_name, identifier):
    """
    Release a distributed lock
    """
    lock_key = f'lock:{lock_name}'
    pipe = r.pipeline(True)
    while True:
        try:
            pipe.watch(lock_key)
            # Requires decode_responses=True; otherwise GET returns bytes
            if pipe.get(lock_key) == identifier:
                pipe.multi()
                pipe.delete(lock_key)
                pipe.execute()
                return True
            pipe.unwatch()
            break
        except redis.WatchError:
            pass
    return False
# Usage
lock_id = acquire_lock('resource:123')
if lock_id:
    try:
        # Critical section
        pass
    finally:
        release_lock('resource:123', lock_id)
Backup & Restore
Redis add-ons use redis-cli --rdb for backups, creating point-in-time RDB snapshots.
Backup Configuration
- Tool: redis-cli --rdb
- Format: .rdb
- Type: Point-in-time snapshot
- Storage: AWS S3 (s3://strongly-backups/backups/<addon-id>/)
Persistence Options
Redis supports two persistence mechanisms:
- RDB (Redis Database): Point-in-time snapshots at intervals
- AOF (Append-Only File): Logs every write operation
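Persistence is managed through the add-on settings. As a minimal sketch for inspecting the active configuration from a client (assuming CONFIG commands are permitted on your instance; managed add-ons sometimes restrict them):
print(r.config_get('save'))        # RDB snapshot schedule
print(r.config_get('appendonly'))  # 'yes' when AOF is enabled
print(r.lastsave())                # time of the last successful RDB save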
Manual Backup
- Go to add-on details page
- Click Backup Now
- Monitor progress in job logs
- Backup saved as backup-YYYYMMDDHHMMSS.rdb
Scheduled Backups
Configure during add-on creation or in settings:
- Daily backups: Recommended if using Redis for persistent data
- Retention: 7-14 days minimum
- Custom cron: For specific schedules (e.g., 0 3 * * * for 03:00 daily)
Restore Process
- Navigate to Backups tab
- Select backup from list
- Click Restore
- Confirm (add-on will stop temporarily)
- RDB file loaded and add-on restarts
Data Loss
Restoring from backup replaces ALL current data. Create a current backup first if needed.
Performance Optimization
Connection Pooling
import redis
# Connection pooling is built-in
pool = redis.ConnectionPool(
host='host',
port=6379,
password='password',
max_connections=50,
decode_responses=True
)
r = redis.Redis(connection_pool=pool)
Pipeline Commands
# Without pipeline (multiple round-trips)
for i in range(1000):
    r.set(f'key:{i}', f'value:{i}')

# With pipeline (single round-trip)
pipe = r.pipeline()
for i in range(1000):
    pipe.set(f'key:{i}', f'value:{i}')
pipe.execute()
Memory Optimization
# Use hashes for multiple related fields (more memory efficient)
# Instead of:
r.set('user:1000:name', 'John')
r.set('user:1000:email', 'john@example.com')
r.set('user:1000:age', '30')
# Use:
r.hset('user:1000', mapping={
'name': 'John',
'email': 'john@example.com',
'age': '30'
})
# Set memory limits and eviction policy (configured in add-on settings)
# maxmemory-policy options:
# - noeviction: Return errors when memory limit is reached
# - allkeys-lru: Evict least recently used keys
# - volatile-lru: Evict least recently used keys with TTL
# - allkeys-random: Evict random keys
# - volatile-random: Evict random keys with TTL
# - volatile-ttl: Evict keys with nearest expiration
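As a brief sketch (again assuming CONFIG commands are permitted on your instance), you can check the active memory limit and eviction policy from a client:
print(r.config_get('maxmemory'))         # e.g. {'maxmemory': '1073741824'}
print(r.config_get('maxmemory-policy'))  # e.g. {'maxmemory-policy': 'allkeys-lru'}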
Monitoring
Monitor your Redis add-on through the Strongly platform:
- CPU Usage: Track CPU utilization
- Memory Usage: Monitor memory consumption (critical for in-memory database)
- Connection Count: Active client connections
- Command Stats: Operations per second
- Hit Rate: Cache hit ratio
- Eviction Count: Keys evicted due to memory pressure
Redis INFO Command
# Get all server information
info = r.info()
# Specific sections
memory_info = r.info('memory')
stats = r.info('stats')
replication = r.info('replication')
# Key metrics
print(f"Used Memory: {info['used_memory_human']}")
print(f"Connected Clients: {info['connected_clients']}")
print(f"Total Commands: {info['total_commands_processed']}")
print(f"Keyspace Hits: {info['keyspace_hits']}")
print(f"Keyspace Misses: {info['keyspace_misses']}")
# Calculate hit rate (guard against division by zero on a fresh instance)
total = info['keyspace_hits'] + info['keyspace_misses']
hit_rate = info['keyspace_hits'] / total if total else 0.0
print(f"Hit Rate: {hit_rate:.2%}")
Best Practices
- Use Connection Pooling: Reuse connections for better performance
- Set Appropriate TTLs: Prevent memory overflow with expiration
- Use Pipelines: Batch commands to reduce network overhead
- Choose Right Data Structure: Use the most appropriate data type for your use case
- Monitor Memory Usage: Redis is in-memory, watch your memory consumption
- Use Hashes for Objects: More memory-efficient than individual keys
- Implement Cache Invalidation: Keep cache consistent with source of truth
- Enable Persistence: Use RDB or AOF if data loss is unacceptable
- Use Transactions Wisely: For atomic multi-step operations
- Avoid Large Keys: Break large collections into smaller chunks (see the sketch after this list)
- Set Eviction Policy: Configure appropriate maxmemory-policy
- Test Backup Restore: Verify backups work before you need them
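To illustrate "Avoid Large Keys" above, here is a hypothetical sketch that spreads one huge hash across fixed buckets so no single key dominates memory or blocks the server; the bucket count and helper names are illustrative:
import hashlib

NUM_BUCKETS = 64  # illustrative; size to your data set

def bucketed_key(base, field):
    # A stable hash of the field picks one of NUM_BUCKETS smaller hashes
    bucket = int(hashlib.md5(field.encode()).hexdigest(), 16) % NUM_BUCKETS
    return f'{base}:{bucket}'

def hset_bucketed(r, base, field, value):
    r.hset(bucketed_key(base, field), field, value)

def hget_bucketed(r, base, field):
    return r.hget(bucketed_key(base, field), field)

# Usage: fields are spread across user:profile:0 .. user:profile:63
# hset_bucketed(r, 'user:profile', 'user:1000', profile_json)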
Troubleshooting
Connection Issues
# Test connection
try:
    r.ping()
    print("Connected to Redis")
except redis.ConnectionError:
    print("Cannot connect to Redis")
Memory Issues
# Check memory usage
info = r.info('memory')
print(f"Used Memory: {info['used_memory_human']}")
print(f"Peak Memory: {info['used_memory_peak_human']}")
print(f"Memory Fragmentation: {info['mem_fragmentation_ratio']}")
# Find large keys (assumes decode_responses=True so type names are strings)
for key in r.scan_iter():
    key_type = r.type(key)
    if key_type == 'string':
        size = len(r.get(key))
    elif key_type == 'list':
        size = r.llen(key)
    elif key_type == 'set':
        size = r.scard(key)
    elif key_type == 'zset':
        size = r.zcard(key)
    elif key_type == 'hash':
        size = r.hlen(key)
    else:
        continue  # skip other types (e.g., streams)
    if size > 10000:  # Arbitrary threshold
        print(f"Large key: {key} ({key_type}) - {size}")
Performance Issues
# Check slow log
slowlog = r.slowlog_get(10)
for entry in slowlog:
    print(f"Duration: {entry['duration']}μs, Command: {entry['command']}")
# Monitor commands in real-time
# Use redis-cli MONITOR (not in production - high overhead)
Support
For issues or questions:
- Check add-on logs in the Strongly dashboard
- Review Redis official documentation
- Contact Strongly support through the platform