# Backup & Restore
This guide covers backup and restore procedures for Fluxbase deployments. Regular backups are critical for disaster recovery and should be part of your operational practices.
## Overview

Fluxbase stores data in two primary locations:
- PostgreSQL Database: User data, authentication, metadata, jobs, webhooks
- Storage Backend: Uploaded files (local filesystem or S3-compatible storage)
Both must be backed up together to ensure consistent recovery.
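One simple way to keep the two in sync is to stamp both backups with a single label so a restore always pairs matching snapshots. A minimal sketch (the `BACKUP_ROOT` layout and file names here are illustrative, not part of Fluxbase):

```shell
#!/usr/bin/env bash
# Sketch: derive one UTC timestamp label and use it for both the database
# dump and the storage copy. Paths and names are illustrative assumptions.
set -eu

TS=$(date -u +%Y%m%dT%H%M%SZ)
BACKUP_ROOT="${BACKUP_ROOT:-/backups}"

DB_TARGET="$BACKUP_ROOT/$TS/database.dump"
STORAGE_TARGET="$BACKUP_ROOT/$TS/storage"

# The real backup would now run, e.g.:
#   pg_dump -Fc -d fluxbase -f "$DB_TARGET"
#   rsync -a /var/fluxbase/storage/ "$STORAGE_TARGET/"
echo "$DB_TARGET"
echo "$STORAGE_TARGET"
```

Because both targets live under the same `$TS` directory, a restore never has to guess which storage snapshot matches which dump.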
## Quick Start

Fluxbase provides backup and restore scripts for common scenarios:

```bash
# Full backup (database + storage)
./scripts/backup.sh --output /backups/$(date +%Y%m%d)

# Database-only backup
./scripts/backup.sh --database-only --output /backups/db-$(date +%Y%m%d).sql

# Restore from backup
./scripts/restore.sh --backup /backups/20260118
```

## Database Backup
### Using pg_dump (Recommended for Small-Medium Databases)

```bash
# Full backup with custom format (supports parallel restore)
pg_dump -Fc -h localhost -U postgres -d fluxbase \
  -f fluxbase_$(date +%Y%m%d_%H%M%S).dump

# Plain SQL backup (readable, portable)
pg_dump -h localhost -U postgres -d fluxbase \
  -f fluxbase_$(date +%Y%m%d_%H%M%S).sql

# Schema-only backup (useful for migrations)
pg_dump -h localhost -U postgres -d fluxbase \
  --schema-only -f fluxbase_schema.sql
```

Recommended Options:
| Option | Description |
|---|---|
| `-Fc` | Custom format: compressed, supports parallel restore |
| `-j N` | Number of parallel dump jobs (requires directory format `-Fd`; pg_dump 9.3+) |
| `--no-owner` | Don't include ownership commands |
| `--no-privileges` | Don't include privilege grants |
| `-T pattern` | Exclude tables matching pattern |
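To apply these flags consistently across environments, a small command builder can help. This is a sketch, not the actual `backup.sh` logic; the database name and output path are illustrative, and the command is printed rather than executed so the example is self-contained:

```shell
#!/usr/bin/env bash
# Sketch: assemble a pg_dump invocation from the recommended options.
# A real script would execute the command instead of printing it.
set -eu

build_dump_cmd() {
  db=$1
  outfile=$2
  printf 'pg_dump -Fc --no-owner --no-privileges -d %s -f %s\n' "$db" "$outfile"
}

CMD=$(build_dump_cmd fluxbase /backups/fluxbase.dump)
echo "$CMD"
```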
### Using pg_basebackup (For Large Databases)

For databases larger than 10 GB, use physical backups:

```bash
# Base backup with WAL files
pg_basebackup -h localhost -U postgres -D /backups/base_$(date +%Y%m%d) \
  -Ft -z -P -Xs

# Options:
#   -Ft: tar format
#   -z:  gzip compression
#   -P:  show progress
#   -Xs: stream WAL files during backup
```

### Continuous Archiving (WAL Archiving)
For point-in-time recovery, configure WAL archiving in `postgresql.conf`:

```ini
# Enable WAL archiving
wal_level = replica
archive_mode = on
archive_command = 'cp %p /wal_archive/%f'
archive_timeout = 300  # Force archive every 5 minutes
```
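Note that a plain `cp` will silently overwrite an existing archive file. The PostgreSQL documentation suggests guarding the copy with `test ! -f`, e.g. `archive_command = 'test ! -f /wal_archive/%f && cp %p /wal_archive/%f'`. A self-contained sketch of that guard, using temp paths in place of the real WAL directories:

```shell
#!/usr/bin/env bash
# Demonstrates the overwrite guard used in the safer archive_command:
# the second attempt to archive the same segment name must fail.
set -eu

ARCHIVE_DIR=$(mktemp -d)

archive_wal() {
  src=$1
  name=$2
  # Return non-zero if the segment was already archived
  test ! -f "$ARCHIVE_DIR/$name" && cp "$src" "$ARCHIVE_DIR/$name"
}

WAL=$(mktemp)
echo "segment-data" > "$WAL"

archive_wal "$WAL" 000000010000000000000001 && echo "archived"
archive_wal "$WAL" 000000010000000000000001 || echo "duplicate rejected"
```

Failing on duplicates matters because PostgreSQL treats a non-zero exit as "not archived" and retries, instead of corrupting an already-archived segment.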
```ini
# Retention (optional, using pg_archivecleanup)
# archive_cleanup_command = 'pg_archivecleanup /wal_archive %r'
```

## Storage Backup
### Local Filesystem Storage

```bash
# Using rsync (incremental, efficient)
rsync -avz --delete /var/fluxbase/storage/ /backups/storage/

# Using tar (full archive)
tar -czf storage_$(date +%Y%m%d).tar.gz /var/fluxbase/storage/
```

### S3-Compatible Storage
```bash
# Using AWS CLI (works with MinIO, Wasabi, etc.)
aws s3 sync s3://fluxbase-storage s3://fluxbase-backup-$(date +%Y%m%d) \
  --source-region us-east-1

# Using rclone (supports many providers)
rclone sync fluxbase:storage backup:storage-$(date +%Y%m%d)
```

Enable S3 versioning for automatic file history:

```bash
aws s3api put-bucket-versioning \
  --bucket fluxbase-storage \
  --versioning-configuration Status=Enabled
```

## Automated Backup Script
The provided `scripts/backup.sh` handles common backup scenarios:

```bash
# Environment variables
export PGHOST=localhost
export PGUSER=postgres
export PGDATABASE=fluxbase
export STORAGE_PATH=/var/fluxbase/storage
export BACKUP_RETENTION_DAYS=30

# Run backup
./scripts/backup.sh --output /backups
```

### Script Options
| Option | Description |
|---|---|
| `--output DIR` | Backup destination directory |
| `--database-only` | Skip storage backup |
| `--storage-only` | Skip database backup |
| `--compress` | Compress backup files (default: enabled) |
| `--parallel N` | Parallel jobs for pg_dump (default: 4) |
| `--retention DAYS` | Delete backups older than DAYS days |
| `--verify` | Verify backup integrity after creation |
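As an illustration of how a `--retention` flag might work (not necessarily how `backup.sh` implements it), backups can be pruned by file modification time. This sketch fabricates an expired and a recent backup in a temp directory; GNU `find` and `touch -d` are assumed:

```shell
#!/usr/bin/env bash
# Sketch: delete backup files older than the retention window.
set -eu

RETENTION_DAYS=30
BACKUP_DIR=$(mktemp -d)

# Simulate one expired and one recent backup (GNU touch -d)
touch -d "40 days ago" "$BACKUP_DIR/fluxbase_old.dump"
touch "$BACKUP_DIR/fluxbase_new.dump"

# -mtime +N matches files modified more than N*24h ago
find "$BACKUP_DIR" -name 'fluxbase_*.dump' -mtime "+$RETENTION_DAYS" -delete

ls "$BACKUP_DIR"
```

Only `fluxbase_new.dump` survives; the 40-day-old file exceeds the 30-day window and is deleted.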
### Scheduling with Cron

```bash
# Daily backup at 2 AM
0 2 * * * /opt/fluxbase/scripts/backup.sh --output /backups --retention 30 >> /var/log/fluxbase-backup.log 2>&1

# Weekly full backup on Sunday
0 3 * * 0 /opt/fluxbase/scripts/backup.sh --output /backups/weekly >> /var/log/fluxbase-backup.log 2>&1
```

## Database Restore
### From Custom Format Dump

```bash
# Create target database
createdb -h localhost -U postgres fluxbase_restored

# Restore with parallel jobs
pg_restore -h localhost -U postgres -d fluxbase_restored \
  -j 4 --no-owner --no-privileges \
  fluxbase_20260118.dump

# Verify restoration
psql -h localhost -U postgres -d fluxbase_restored \
  -c "SELECT count(*) FROM auth.users;"
```

### From Plain SQL Dump
```bash
# Create target database
createdb -h localhost -U postgres fluxbase_restored

# Restore from SQL file
psql -h localhost -U postgres -d fluxbase_restored \
  -f fluxbase_20260118.sql
```

### Point-in-Time Recovery (PITR)
For WAL-archived databases:

```bash
# 1. Stop PostgreSQL
sudo systemctl stop postgresql

# 2. Clear the data directory
rm -rf /var/lib/postgresql/14/main/*

# 3. Restore the base backup
tar -xzf base_20260118.tar.gz -C /var/lib/postgresql/14/main/

# 4. Create recovery.signal and configure the recovery target
touch /var/lib/postgresql/14/main/recovery.signal
cat > /var/lib/postgresql/14/main/postgresql.auto.conf << EOF
restore_command = 'cp /wal_archive/%f %p'
recovery_target_time = '2026-01-18 10:30:00'
recovery_target_action = 'promote'
EOF

# 5. Start PostgreSQL (it will recover to the target time)
sudo systemctl start postgresql
```

## Storage Restore
### Local Filesystem

```bash
# Stop Fluxbase to prevent writes
sudo systemctl stop fluxbase

# Restore from rsync backup
rsync -avz /backups/storage/ /var/fluxbase/storage/

# Or restore from a tar archive
tar -xzf storage_20260118.tar.gz -C /

# Fix permissions
chown -R fluxbase:fluxbase /var/fluxbase/storage

# Restart Fluxbase
sudo systemctl start fluxbase
```

### S3-Compatible Storage
```bash
# Sync from the backup bucket
aws s3 sync s3://fluxbase-backup-20260118 s3://fluxbase-storage \
  --delete

# Or restore a specific version (if versioning is enabled)
aws s3api list-object-versions --bucket fluxbase-storage --prefix uploads/
aws s3api get-object --bucket fluxbase-storage --key uploads/file.jpg \
  --version-id "abc123" restored-file.jpg
```

## Automated Restore Script
The provided `scripts/restore.sh` handles common restore scenarios:

```bash
# Full restore
./scripts/restore.sh --backup /backups/20260118

# Database-only restore to a different database
./scripts/restore.sh --backup /backups/20260118 \
  --database-only --target-db fluxbase_test

# Dry run (verify without restoring)
./scripts/restore.sh --backup /backups/20260118 --dry-run
```

### Script Options
| Option | Description |
|---|---|
| `--backup DIR` | Backup directory to restore from |
| `--database-only` | Restore only the database |
| `--storage-only` | Restore only storage |
| `--target-db NAME` | Restore to a different database name |
| `--dry-run` | Verify backup without restoring |
| `--no-stop` | Don't stop Fluxbase during restore |
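As an illustration of the kind of sanity checks a `--dry-run` could perform before touching anything, the sketch below validates a backup directory's layout. The layout itself (`database.dump` plus a `storage/` directory) is a hypothetical example, not the documented format of `backup.sh` output:

```shell
#!/usr/bin/env bash
# Sketch: pre-restore sanity checks against an assumed backup layout.
set -eu

check_backup_dir() {
  dir=$1
  [ -s "$dir/database.dump" ] || { echo "missing or empty database.dump"; return 1; }
  [ -d "$dir/storage" ]       || { echo "missing storage/ directory"; return 1; }
  echo "backup looks complete"
}

# Demo against a fabricated backup directory
DEMO=$(mktemp -d)
echo "dummy" > "$DEMO/database.dump"
mkdir "$DEMO/storage"

RESULT=$(check_backup_dir "$DEMO")
echo "$RESULT"
```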
## Backup Verification

Always verify that backups can be restored:

```bash
# 1. Restore to a test database
createdb fluxbase_verify
pg_restore -d fluxbase_verify --no-owner fluxbase_backup.dump

# 2. Run verification queries
psql -d fluxbase_verify << 'EOF'
-- Check table counts
SELECT 'auth.users' as table_name, count(*) FROM auth.users
UNION ALL
SELECT 'storage.objects', count(*) FROM storage.objects
UNION ALL
SELECT 'jobs.jobs', count(*) FROM jobs.jobs;

-- Check for data integrity
SELECT 'orphaned_identities' as check_name, count(*)
FROM auth.identities i
LEFT JOIN auth.users u ON i.user_id = u.id
WHERE u.id IS NULL;
EOF

# 3. Clean up
dropdb fluxbase_verify
```

### Automated Verification
Add to your backup script:

```bash
#!/bin/bash
# Verify backup after creation
verify_backup() {
  local backup_file=$1
  local test_db="fluxbase_verify_$(date +%s)"

  createdb "$test_db" || return 1
  pg_restore -d "$test_db" --no-owner "$backup_file" || return 1

  # Check critical tables
  local user_count
  user_count=$(psql -d "$test_db" -t -c "SELECT count(*) FROM auth.users")
  if [ "$user_count" -eq 0 ]; then
    echo "WARNING: No users found in backup"
  fi

  dropdb "$test_db"
  return 0
}
```

## Disaster Recovery Checklist
### Before Disaster

- Automated daily backups configured
- Backups stored in separate location/region
- Backup verification running weekly
- Recovery procedure documented and tested
- Recovery time objective (RTO) defined
- Recovery point objective (RPO) defined
- Team trained on recovery procedures
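To make the RPO item concrete: worst-case data loss is bounded by how often a recoverable point is produced. A back-of-envelope comparison using the schedules from this guide (illustrative values, not measurements):

```shell
#!/usr/bin/env bash
# Back-of-envelope RPO: worst-case loss = interval between recovery points.
set -eu

daily_dump_rpo=$(( 24 * 60 * 60 ))  # one pg_dump per day -> up to 86400s lost
wal_rpo=300                         # archive_timeout = 300 -> up to ~300s lost

echo "daily dump RPO: ${daily_dump_rpo}s"
echo "WAL archiving RPO: ~${wal_rpo}s"
```

If your defined RPO is tighter than what the backup schedule can deliver, either the schedule or the objective needs to change.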
### During Recovery

- **Assess the situation**
  - Identify what failed (database, storage, both)
  - Determine data loss window
  - Choose recovery target time

- **Communicate**
  - Notify stakeholders
  - Set expectations for downtime

- **Execute recovery**

  ```bash
  # Stop application
  kubectl scale deployment fluxbase --replicas=0

  # Restore database
  ./scripts/restore.sh --backup /backups/latest --database-only

  # Restore storage
  ./scripts/restore.sh --backup /backups/latest --storage-only

  # Verify data
  ./scripts/restore.sh --backup /backups/latest --dry-run

  # Start application
  kubectl scale deployment fluxbase --replicas=3
  ```

- **Verify recovery**
  - Check application health endpoints
  - Verify user authentication works
  - Test file uploads/downloads
  - Review application logs for errors

- **Post-mortem**
  - Document what happened
  - Identify improvements
  - Update procedures if needed
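The health checks in the verification step can be scripted as a polling loop. A sketch with an injectable probe command; in production the probe might be `curl -fsS http://localhost:8080/health`, an assumed endpoint you would adjust to your deployment:

```shell
#!/usr/bin/env bash
# Sketch: poll a health probe until it succeeds or attempts run out.
# The probe command is injectable so the function works with any check.
set -eu

wait_healthy() {
  probe=$1
  attempts=${2:-30}
  delay=${3:-2}
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if $probe > /dev/null 2>&1; then
      echo "healthy"
      return 0
    fi
    sleep "$delay"
    i=$((i + 1))
  done
  echo "gave up after $attempts attempts" >&2
  return 1
}

# Demo with a probe that succeeds immediately; zero delay keeps it instant
RESULT=$(wait_healthy true 3 0)
echo "$RESULT"
```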
## Backup Strategy Recommendations

### Development/Testing
Section titled “Development/Testing”- Daily pg_dump backups
- 7-day retention
- Manual verification monthly
### Production (Small-Medium)

- Daily pg_dump with custom format
- Hourly WAL archiving
- 30-day retention
- Weekly automated verification
- Off-site backup replication
### Production (Large/Critical)

- Continuous WAL archiving (5-minute intervals)
- Physical backups (pg_basebackup) weekly
- Real-time replication to standby
- 90-day retention
- Daily automated verification
- Multi-region backup storage
- Disaster recovery tested quarterly
## Monitoring Backup Health

### Prometheus Metrics

If you're using the provided backup script, it exports metrics:

```
# HELP fluxbase_backup_last_success_timestamp Last successful backup timestamp
# TYPE fluxbase_backup_last_success_timestamp gauge
fluxbase_backup_last_success_timestamp{type="database"} 1705574400
fluxbase_backup_last_success_timestamp{type="storage"} 1705574400

# HELP fluxbase_backup_size_bytes Backup size in bytes
# TYPE fluxbase_backup_size_bytes gauge
fluxbase_backup_size_bytes{type="database"} 1073741824
fluxbase_backup_size_bytes{type="storage"} 5368709120
```

### Alerting Rules
Section titled “Alerting Rules”groups:- name: backup rules: - alert: BackupMissing expr: time() - fluxbase_backup_last_success_timestamp > 86400 for: 1h labels: severity: critical annotations: summary: "Fluxbase backup is missing" description: "No successful backup in the last 24 hours"
- alert: BackupSizeAnomaly expr: | abs(fluxbase_backup_size_bytes - fluxbase_backup_size_bytes offset 1d) / fluxbase_backup_size_bytes offset 1d > 0.5 for: 1h labels: severity: warning annotations: summary: "Backup size changed significantly" description: "Backup size changed by more than 50%"Learn More
- Deployment Overview - Production deployment guide
- Production Checklist - Pre-production checklist
- Monitoring & Observability - Monitoring setup
- Scaling - Scaling Fluxbase