Logging
Fluxbase provides comprehensive structured logging using zerolog, a fast and lightweight logging library that outputs JSON-formatted logs for easy parsing and analysis.
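To give a concrete picture of what this looks like at the code level, here is a minimal zerolog sketch (ordinary use of the library, not Fluxbase source code) that produces a single-line JSON event:

```go
package main

import (
	"os"

	"github.com/rs/zerolog"
)

func main() {
	// zerolog writes each event as one JSON object per line.
	logger := zerolog.New(os.Stdout).With().Timestamp().Logger()

	// Context fields become JSON keys next to level, time, and message.
	logger.Info().
		Str("request_id", "req_abc123").
		Msg("HTTP request completed")
}
```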
Overview
All Fluxbase logs are structured JSON messages that include:
- Timestamp: ISO 8601 format with timezone
- Level: Log level (debug, info, warn, error, fatal)
- Message: Human-readable description
- Context Fields: Additional structured data (user_id, request_id, etc.)
Log Levels
Fluxbase uses standard log levels, from least to most severe:
| Level | Description | Use Case | Production |
|---|---|---|---|
| debug | Detailed diagnostic information | Development debugging | ❌ Disabled |
| info | General informational messages | Normal operations | ✅ Enabled |
| warn | Warning messages, degraded state | Non-critical issues | ✅ Enabled |
| error | Error messages, recoverable errors | Failed operations | ✅ Enabled |
| fatal | Fatal errors, application crash | Critical failures | ✅ Enabled |
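As a rough sketch of how a debug toggle such as FLUXBASE_DEBUG maps onto these levels in zerolog terms (hypothetical code, not the actual Fluxbase startup path):

```go
package main

import (
	"os"

	"github.com/rs/zerolog"
	"github.com/rs/zerolog/log"
)

func configureLogLevel() {
	// With FLUXBASE_DEBUG=true, debug events are emitted; otherwise info and above.
	if os.Getenv("FLUXBASE_DEBUG") == "true" {
		zerolog.SetGlobalLevel(zerolog.DebugLevel)
	} else {
		zerolog.SetGlobalLevel(zerolog.InfoLevel)
	}
}

func main() {
	configureLogLevel()
	log.Debug().Msg("only visible when FLUXBASE_DEBUG=true")
	log.Info().Msg("always visible in production")
}
```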
Configuration
Enable Debug Logging
Environment Variable:
```bash
# Enable debug logging (development)
FLUXBASE_DEBUG=true

# Disable debug logging (production - default)
FLUXBASE_DEBUG=false
```

Docker:
```bash
docker run -e FLUXBASE_DEBUG=true fluxbase/fluxbase:latest
```

Docker Compose:
```yaml
services:
  fluxbase:
    image: ghcr.io/fluxbase-eu/fluxbase:latest
    environment:
      - FLUXBASE_DEBUG=true
```

Kubernetes:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluxbase-config
data:
  FLUXBASE_DEBUG: "false"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fluxbase
spec:
  template:
    spec:
      containers:
        - name: fluxbase
          image: ghcr.io/fluxbase-eu/fluxbase:latest
          envFrom:
            - configMapRef:
                name: fluxbase-config
```

Log Format
JSON Structure
All logs are output as single-line JSON:
```json
{
  "level": "info",
  "time": "2024-01-15T10:30:00.123Z",
  "message": "HTTP request completed",
  "method": "POST",
  "path": "/api/v1/tables/users",
  "status": 200,
  "duration_ms": 25.5,
  "ip": "192.168.1.100",
  "user_id": "550e8400-e29b-41d4-a716-446655440000",
  "request_id": "req_abc123"
}
```

Field Descriptions
| Field | Type | Description |
|---|---|---|
| level | string | Log level (debug, info, warn, error, fatal) |
| time | string | ISO 8601 timestamp with timezone |
| message | string | Human-readable log message |
| method | string | HTTP method (GET, POST, etc.) |
| path | string | Request path |
| status | integer | HTTP status code |
| duration_ms | float | Request duration in milliseconds |
| ip | string | Client IP address |
| user_id | string | Authenticated user ID (if available) |
| request_id | string | Unique request identifier |
| error | string | Error message (for error logs) |
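As an illustration of how these fields are attached to a single event, here is a hypothetical zerolog call of the kind request middleware might make; the field names follow the table above, and the literal values are placeholders:

```go
package main

import (
	"os"
	"time"

	"github.com/rs/zerolog"
)

// logRequest emits one request log entry with the documented fields.
// In a real handler the values would come from the incoming request.
func logRequest(logger zerolog.Logger, status int, started time.Time) {
	logger.Info().
		Str("method", "POST").
		Str("path", "/api/v1/tables/users").
		Int("status", status).
		Float64("duration_ms", float64(time.Since(started).Microseconds())/1000.0).
		Str("ip", "192.168.1.100").
		Str("request_id", "req_abc123").
		Msg("HTTP request completed")
}

func main() {
	logger := zerolog.New(os.Stdout).With().Timestamp().Logger()
	logRequest(logger, 200, time.Now())
}
```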
Log Events
HTTP Requests
Every HTTP request is logged with details:
```json
{
  "level": "info",
  "time": "2024-01-15T10:30:00Z",
  "message": "HTTP request",
  "method": "POST",
  "path": "/api/v1/tables/users",
  "status": 200,
  "duration_ms": 25.5,
  "ip": "192.168.1.100",
  "user_agent": "Mozilla/5.0...",
  "request_id": "req_abc123"
}
```

Authentication Events
Successful Login:
```json
{
  "level": "info",
  "time": "2024-01-15T10:30:00Z",
  "message": "User authenticated successfully",
  "method": "email",
  "user_id": "550e8400-e29b-41d4-a716-446655440000",
  "email": "user@example.com",
  "ip": "192.168.1.100"
}
```

Failed Login:

```json
{
  "level": "warn",
  "time": "2024-01-15T10:30:00Z",
  "message": "Authentication failed",
  "method": "email",
  "reason": "invalid_credentials",
  "email": "user@example.com",
  "ip": "192.168.1.100"
}
```

Database Operations
Query Execution:
```json
{
  "level": "debug",
  "time": "2024-01-15T10:30:00Z",
  "message": "Database query executed",
  "operation": "SELECT",
  "table": "users",
  "duration_ms": 5.2,
  "rows_affected": 10
}
```

Slow Query:

```json
{
  "level": "warn",
  "time": "2024-01-15T10:30:00Z",
  "message": "Slow database query detected",
  "operation": "SELECT",
  "table": "posts",
  "duration_ms": 1250.5,
  "query": "SELECT * FROM posts WHERE..."
}
```

Realtime Events
WebSocket Connection:
```json
{
  "level": "info",
  "time": "2024-01-15T10:30:00Z",
  "message": "WebSocket connection established",
  "connection_id": "conn_xyz789",
  "ip": "192.168.1.100",
  "user_id": "550e8400-e29b-41d4-a716-446655440000"
}
```

Channel Subscription:

```json
{
  "level": "info",
  "time": "2024-01-15T10:30:00Z",
  "message": "Channel subscription created",
  "connection_id": "conn_xyz789",
  "channel": "public:posts",
  "user_id": "550e8400-e29b-41d4-a716-446655440000"
}
```

WebSocket Error:

```json
{
  "level": "error",
  "time": "2024-01-15T10:30:00Z",
  "message": "WebSocket error",
  "connection_id": "conn_xyz789",
  "error": "connection closed unexpectedly",
  "error_type": "connection_error"
}
```

Storage Operations
File Upload:
```json
{
  "level": "info",
  "time": "2024-01-15T10:30:00Z",
  "message": "File uploaded",
  "bucket": "avatars",
  "file_path": "user-123/avatar.png",
  "size_bytes": 524288,
  "content_type": "image/png",
  "duration_ms": 125.5,
  "user_id": "550e8400-e29b-41d4-a716-446655440000"
}
```

File Download:

```json
{
  "level": "info",
  "time": "2024-01-15T10:30:00Z",
  "message": "File downloaded",
  "bucket": "avatars",
  "file_path": "user-123/avatar.png",
  "size_bytes": 524288,
  "duration_ms": 45.2
}
```

Webhook Events
Webhook Triggered:
```json
{
  "level": "info",
  "time": "2024-01-15T10:30:00Z",
  "message": "Webhook triggered",
  "webhook_id": "webhook_123",
  "event": "insert",
  "table": "users",
  "url": "https://example.com/webhooks/users"
}
```

Webhook Delivery Success:

```json
{
  "level": "info",
  "time": "2024-01-15T10:30:00Z",
  "message": "Webhook delivered successfully",
  "webhook_id": "webhook_123",
  "delivery_id": "delivery_456",
  "url": "https://example.com/webhooks/users",
  "status": 200,
  "duration_ms": 250.5
}
```

Webhook Delivery Failure:

```json
{
  "level": "error",
  "time": "2024-01-15T10:30:00Z",
  "message": "Webhook delivery failed",
  "webhook_id": "webhook_123",
  "delivery_id": "delivery_456",
  "url": "https://example.com/webhooks/users",
  "status": 500,
  "error": "connection timeout",
  "retry_count": 2,
  "duration_ms": 5000
}
```

Security Events
CSRF Validation Failure:
```json
{
  "level": "warn",
  "time": "2024-01-15T10:30:00Z",
  "message": "CSRF token validation failed",
  "ip": "192.168.1.100",
  "path": "/api/v1/tables/users",
  "method": "POST"
}
```

Rate Limit Hit:

```json
{
  "level": "warn",
  "time": "2024-01-15T10:30:00Z",
  "message": "Rate limit exceeded",
  "ip": "192.168.1.100",
  "path": "/api/v1/auth/login",
  "limit": 10,
  "window": "1m"
}
```

RLS Policy Violation:

```json
{
  "level": "warn",
  "time": "2024-01-15T10:30:00Z",
  "message": "Row Level Security policy violation",
  "user_id": "550e8400-e29b-41d4-a716-446655440000",
  "table": "private_data",
  "operation": "SELECT",
  "policy": "user_isolation"
}
```

System Events
Server Started:
```json
{
  "level": "info",
  "time": "2024-01-15T10:00:00Z",
  "message": "Fluxbase server started",
  "version": "v1.0.0",
  "address": ":8080",
  "environment": "production"
}
```

Database Connected:

```json
{
  "level": "info",
  "time": "2024-01-15T10:00:01Z",
  "message": "Database connection established",
  "host": "postgres",
  "port": 5432,
  "database": "fluxbase",
  "max_connections": 25
}
```

Graceful Shutdown:

```json
{
  "level": "info",
  "time": "2024-01-15T18:00:00Z",
  "message": "Graceful shutdown initiated",
  "uptime_seconds": 28800
}
```

Log Aggregation
Sending Logs to External Services
1. Docker Logs
View Logs:
```bash
# Follow logs
docker logs -f fluxbase

# Last 100 lines
docker logs --tail 100 fluxbase

# Since 1 hour ago
docker logs --since 1h fluxbase
```

Filter Logs:
```bash
# Only error logs
docker logs fluxbase 2>&1 | grep '"level":"error"'

# Only authentication events
docker logs fluxbase 2>&1 | grep '"message":"User authenticated"'
```

2. Loki (Grafana Loki)
Docker Compose Setup:
```yaml
services:
  fluxbase:
    image: ghcr.io/fluxbase-eu/fluxbase:latest
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
    labels:
      logging: "promtail"

  promtail:
    image: grafana/promtail:latest
    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - ./promtail-config.yml:/etc/promtail/config.yml
    command: -config.file=/etc/promtail/config.yml

  loki:
    image: grafana/loki:latest
    ports:
      - "3100:3100"

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
```

Promtail Configuration (promtail-config.yml):
```yaml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: fluxbase
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s
    relabel_configs:
      - source_labels: ["__meta_docker_container_name"]
        regex: "/(.*)"
        target_label: "container"
      - source_labels: ["__meta_docker_container_log_stream"]
        target_label: "stream"
```

3. Elasticsearch (ELK Stack)
Filebeat Configuration:
```yaml
filebeat.inputs:
  - type: container
    paths:
      - "/var/lib/docker/containers/*/*.log"
    json.keys_under_root: true
    json.add_error_key: true

processors:
  - add_docker_metadata: ~

output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "fluxbase-%{+yyyy.MM.dd}"

setup.kibana:
  host: "localhost:5601"
```

4. CloudWatch Logs (AWS)
Docker Log Driver:
```yaml
services:
  fluxbase:
    image: ghcr.io/fluxbase-eu/fluxbase:latest
    logging:
      driver: awslogs
      options:
        awslogs-region: us-east-1
        awslogs-group: /fluxbase/production
        awslogs-stream: fluxbase-app
```

5. Google Cloud Logging
Docker Log Driver:
```yaml
services:
  fluxbase:
    image: ghcr.io/fluxbase-eu/fluxbase:latest
    logging:
      driver: gcplogs
      options:
        gcp-project: your-project-id
        gcp-log-cmd: "true"
```

Querying Logs
Using jq (Command Line)
Install jq:
```bash
# macOS
brew install jq

# Ubuntu/Debian
sudo apt-get install jq

# Alpine
apk add jq
```

Query Examples:
```bash
# Filter by log level
docker logs fluxbase 2>&1 | jq 'select(.level == "error")'

# Filter by message
docker logs fluxbase 2>&1 | jq 'select(.message | contains("authentication"))'

# Extract specific fields
docker logs fluxbase 2>&1 | jq '{time, level, message, user_id}'

# Count errors by type
docker logs fluxbase 2>&1 | jq -r 'select(.level == "error") | .error' | sort | uniq -c

# Find slow requests (> 1000ms)
docker logs fluxbase 2>&1 | jq 'select(.duration_ms > 1000)'

# Get unique IP addresses
docker logs fluxbase 2>&1 | jq -r '.ip' | sort | uniq

# Calculate average request duration
docker logs fluxbase 2>&1 | jq -s 'map(.duration_ms) | add / length'
```
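If you prefer to post-process logs in code rather than with jq, a small Go sketch along these lines can read the JSON stream line by line; the field names match the log format documented above, and reading from stdin (for example piped from docker logs) is an assumption:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// logLine mirrors a subset of the documented fields; unknown fields are ignored.
type logLine struct {
	Level      string  `json:"level"`
	Message    string  `json:"message"`
	Path       string  `json:"path"`
	DurationMs float64 `json:"duration_ms"`
}

func main() {
	// e.g. docker logs fluxbase 2>&1 | go run main.go
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		var entry logLine
		if err := json.Unmarshal(scanner.Bytes(), &entry); err != nil {
			continue // skip non-JSON lines
		}
		// Report errors and slow requests, similar to the jq filters above.
		if entry.Level == "error" || entry.DurationMs > 1000 {
			fmt.Printf("%s %s (%.1f ms) %s\n", entry.Level, entry.Path, entry.DurationMs, entry.Message)
		}
	}
}
```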
Using Grafana Loki (LogQL)
Query Examples:
```logql
# All logs from fluxbase
{container="fluxbase"}

# Only error logs
{container="fluxbase"} |= "error"

# Authentication failures
{container="fluxbase"} | json | level="warn" | message="Authentication failed"

# Requests taking > 1 second
{container="fluxbase"} | json | duration_ms > 1000

# Rate of errors
rate({container="fluxbase"} | json | level="error" [5m])

# Top 10 slowest requests
topk(10, sum by (path) (avg_over_time({container="fluxbase"} | json | unwrap duration_ms [5m])))
```

Using Elasticsearch (Kibana)
Query Examples:
```json
// All error logs
{
  "query": {
    "match": { "level": "error" }
  }
}

// Authentication failures in last hour
{
  "query": {
    "bool": {
      "must": [
        { "match": { "message": "Authentication failed" } },
        { "range": { "time": { "gte": "now-1h" } } }
      ]
    }
  }
}

// Slow requests
{
  "query": {
    "range": { "duration_ms": { "gte": 1000 } }
  }
}
```

Log Retention
Docker Log Rotation
Configure log rotation to prevent disk space issues:
```yaml
services:
  fluxbase:
    image: ghcr.io/fluxbase-eu/fluxbase:latest
    logging:
      driver: "json-file"
      options:
        max-size: "10m"   # Max size per log file
        max-file: "3"     # Keep 3 log files
        compress: "true"  # Compress rotated logs
```

Kubernetes Log Rotation
Kubernetes automatically rotates logs:
```yaml
# Pod configuration
apiVersion: v1
kind: Pod
metadata:
  name: fluxbase
spec:
  containers:
    - name: fluxbase
      image: ghcr.io/fluxbase-eu/fluxbase:latest
      # Logs are automatically rotated by kubelet
      # Default: 10MB per file, max 5 files
```

External Log Storage
Loki Retention:
```yaml
table_manager:
  retention_deletes_enabled: true
  retention_period: 720h  # 30 days
```

Elasticsearch Retention:
```json
// Index Lifecycle Policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "50GB",
            "max_age": "7d"
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

Best Practices
1. Production Configuration
Disable Debug Logging:
```bash
FLUXBASE_DEBUG=false
```

Configure Log Rotation:
```yaml
logging:
  driver: "json-file"
  options:
    max-size: "10m"
    max-file: "3"
```

Send to External Service:
Use Loki, Elasticsearch, or CloudWatch for long-term storage and analysis.
2. Monitoring Critical Events
Set up alerts for:
```logql
# High error rate
rate({container="fluxbase"} | json | level="error" [5m]) > 1

# Authentication failures
rate({container="fluxbase"} | json | message="Authentication failed" [5m]) > 10

# Slow requests
rate({container="fluxbase"} | json | duration_ms > 1000 [5m]) > 5
```

3. Log Sampling
For high-traffic applications, consider sampling:
```go
// Sample 10% of debug logs
if level == "debug" && rand.Float64() > 0.1 {
	return
}
```
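If you emit your own logs with zerolog, the library also ships with built-in samplers; a minimal sketch of application-side sampling (a library feature, not a Fluxbase setting):

```go
package main

import (
	"os"

	"github.com/rs/zerolog"
)

func main() {
	base := zerolog.New(os.Stdout).With().Timestamp().Logger()

	// BasicSampler{N: 10} lets roughly every 10th event through,
	// which keeps high-frequency debug logging affordable.
	sampled := base.Sample(&zerolog.BasicSampler{N: 10})

	for i := 0; i < 100; i++ {
		sampled.Debug().Int("iteration", i).Msg("high-frequency event")
	}
}
```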
4. Redact Sensitive Data
Fluxbase automatically redacts:
- Passwords
- API keys
- JWT tokens
- Credit card numbers
Never log:
- ❌ User passwords (plaintext or hashed)
- ❌ API keys or secrets
- ❌ JWT tokens (except for debugging)
- ❌ Credit card information
- ❌ Social security numbers
- ❌ Personal health information
5. Use Structured Fields
Always use structured fields instead of string concatenation:
```go
// ✅ GOOD: Structured logging
log.Info().
	Str("user_id", userID).
	Str("action", "login").
	Msg("User logged in")

// ❌ BAD: String concatenation
log.Info().Msg(fmt.Sprintf("User %s logged in", userID))
```

6. Include Context
Always include relevant context:
```go
log.Info().
	Str("request_id", requestID).
	Str("user_id", userID).
	Str("ip", clientIP).
	Int("status", statusCode).
	Float64("duration_ms", duration).
	Msg("Request completed")
```

7. Avoid Log Spam
Rate limit or sample high-frequency events:
```go
// Rate limit "user online" events to once per minute
if time.Since(lastLog) < time.Minute {
	return
}
```

Troubleshooting
No Logs Appearing
Check log level:
```bash
# Ensure debug mode is enabled if needed
FLUXBASE_DEBUG=true
```

Check log driver:
```bash
# View Docker log driver
docker inspect fluxbase | jq '.[0].HostConfig.LogConfig'
```

Check log permissions:
```bash
# Ensure Fluxbase can write to stdout/stderr
ls -la /dev/stdout /dev/stderr
```

Logs Too Verbose
Disable debug logging:
```bash
FLUXBASE_DEBUG=false
```

Filter logs:
```bash
# Only show warnings and errors
docker logs fluxbase 2>&1 | jq 'select(.level == "warn" or .level == "error")'
```

Disk Space Issues
Enable log rotation:
```yaml
logging:
  options:
    max-size: "10m"
    max-file: "3"
```

Clean old logs:
```bash
# Docker: prune stopped containers (and their log files), unused images, and build cache
docker system prune -a
```

On Kubernetes, old container logs are rotated and removed by the kubelet automatically; deleting a pod also deletes its log files.

Summary
Fluxbase provides comprehensive structured logging:
- ✅ Structured JSON logs for easy parsing
- ✅ Multiple log levels (debug, info, warn, error, fatal)
- ✅ Automatic request logging with detailed context
- ✅ Security event logging (auth, CSRF, RLS)
- ✅ Integration with log aggregators (Loki, ELK, CloudWatch)
- ✅ Redaction of sensitive data
Configure appropriate log levels, set up log rotation, send logs to an aggregator, and use structured queries to monitor and troubleshoot your Fluxbase instance.