# File Storage
Fluxbase provides file storage backed by the local filesystem or any S3-compatible service (MinIO, AWS S3, Wasabi, DigitalOcean Spaces, etc.).
## Features

- Local filesystem or S3-compatible storage
- Bucket management
- File upload, download, delete, list operations
- Custom metadata support
- Signed URLs for temporary access (S3 only)
- Range requests for partial downloads
- Copy and move operations
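
Range requests use the standard HTTP `Range` header against an object URL (for example a public or signed URL). A minimal sketch — the helper names here are illustrative, not part of the SDK, and a `206 Partial Content` status confirms the server honored the range:

```typescript
// Illustrative helpers, not part of the Fluxbase SDK.
function rangeHeader(start: number, end: number): string {
  // Inclusive byte range, e.g. "bytes=0-1023" for the first 1 KiB
  return `bytes=${start}-${end}`;
}

async function downloadRange(url: string, start: number, end: number): Promise<Blob> {
  const res = await fetch(url, {
    headers: { Range: rangeHeader(start, end) },
  });
  if (res.status !== 206) {
    // 206 Partial Content means the server returned only the requested bytes
    throw new Error(`Expected partial content, got ${res.status}`);
  }
  return res.blob();
}
```
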
## Configuration

```yaml
storage:
  provider: "local" # or "s3"
  local_path: "./storage"
  max_upload_size: 10485760 # 10MB

  # S3 configuration (when provider: "s3")
  s3_endpoint: "s3.amazonaws.com"
  s3_access_key: "your-access-key"
  s3_secret_key: "your-secret-key"
  s3_region: "us-east-1"
  s3_bucket: "default-bucket"
```

### Environment Variables
```bash
FLUXBASE_STORAGE_PROVIDER=local # or s3
FLUXBASE_STORAGE_LOCAL_PATH=./storage
FLUXBASE_STORAGE_MAX_UPLOAD_SIZE=10485760

# S3 configuration
FLUXBASE_STORAGE_S3_ENDPOINT=s3.amazonaws.com
FLUXBASE_STORAGE_S3_ACCESS_KEY=your-access-key
FLUXBASE_STORAGE_S3_SECRET_KEY=your-secret-key
FLUXBASE_STORAGE_S3_REGION=us-east-1
```

## Provider Comparison
Local Storage:
- Simple setup, no external dependencies
- Best for development and single-server deployments
- Not scalable across multiple servers
S3-Compatible:
- Highly scalable and distributed
- Best for production with multiple servers
- Requires external service (AWS S3, MinIO, etc.)
## Architecture Comparison

### Local Storage Architecture
```mermaid
graph TB
    A[Client App 1] -->|Upload/Download| B[Fluxbase Server]
    C[Client App 2] -->|Upload/Download| B
    B -->|Read/Write| D[Local Filesystem<br/>/storage]

    E[Load Balancer] -.->|Cannot scale| F[Multiple Instances]
    F -.->|No shared filesystem| D

    style B fill:#3178c6,color:#fff
    style D fill:#f39c12,color:#fff
    style E fill:#e74c3c,color:#fff,stroke-dasharray: 5 5
    style F fill:#e74c3c,color:#fff,stroke-dasharray: 5 5
```

Limitations:
- Single server only - files stored locally cannot be accessed by multiple Fluxbase instances
- No horizontal scaling possible
- Server failure means data loss (unless backups exist)
### S3-Compatible Storage Architecture (MinIO/S3)

```mermaid
graph TB
    A[Client 1] -->|API Request| LB[Load Balancer]
    B[Client 2] -->|API Request| LB
    C[Client 3] -->|API Request| LB

    LB --> FB1[Fluxbase Instance 1]
    LB --> FB2[Fluxbase Instance 2]
    LB --> FB3[Fluxbase Instance 3]

    FB1 -->|S3 API| S3[MinIO / S3 Cluster]
    FB2 -->|S3 API| S3
    FB3 -->|S3 API| S3

    S3 -->|Distributed| S3A[Storage Node 1]
    S3 -->|Distributed| S3B[Storage Node 2]
    S3 -->|Distributed| S3C[Storage Node 3]

    style LB fill:#ff6b6b,color:#fff
    style FB1 fill:#3178c6,color:#fff
    style FB2 fill:#3178c6,color:#fff
    style FB3 fill:#3178c6,color:#fff
    style S3 fill:#c92a2a,color:#fff
    style S3A fill:#5c940d,color:#fff
    style S3B fill:#5c940d,color:#fff
    style S3C fill:#5c940d,color:#fff
```

Benefits:
- Multiple Fluxbase instances can access the same storage
- Horizontally scalable - add more instances as needed
- High availability - storage cluster handles redundancy
- No single point of failure
Use Cases:
- Local Storage: Development, testing, single-server deployments
- MinIO: Self-hosted production with horizontal scaling needs
- AWS S3/DigitalOcean Spaces: Cloud production with managed infrastructure
## Installation

```bash
npm install @nimbleflux/fluxbase-sdk
```

## Basic Usage
```typescript
import { createClient } from "@nimbleflux/fluxbase-sdk";

const client = createClient("http://localhost:8080", "your-anon-key");

// Upload file
const file = document.getElementById("fileInput").files[0];
const { data, error } = await client.storage
  .from("avatars")
  .upload("user1.png", file);

// Download file
const { data: blob } = await client.storage
  .from("avatars")
  .download("user1.png");

// List files
const { data: files } = await client.storage.from("avatars").list();

// Delete file
await client.storage.from("avatars").remove(["user1.png"]);
```

## Bucket Operations
| Method | Purpose | Parameters |
|---|---|---|
| `createBucket()` | Create new bucket | name, options (public, file_size_limit, allowed_mime_types) |
| `listBuckets()` | List all buckets | None |
| `getBucket()` | Get bucket details | name |
| `deleteBucket()` | Delete bucket | name |
Example:
```typescript
// Create bucket
await client.storage.createBucket("avatars", {
  public: false,
  file_size_limit: 5242880,
  allowed_mime_types: ["image/png", "image/jpeg"],
});

// List/get/delete
const { data: buckets } = await client.storage.listBuckets();
const { data: bucket } = await client.storage.getBucket("avatars");
await client.storage.deleteBucket("avatars");
```

## File Operations
| Method | Purpose | Parameters |
|---|---|---|
| `upload()` | Upload file | path, file, options (contentType, cacheControl, upsert) |
| `download()` | Download file | path |
| `list()` | List files | path, options (limit, offset, sortBy) |
| `remove()` | Delete files | paths[] |
| `copy()` | Copy file | from, to |
| `move()` | Move file | from, to |
Example:
```typescript
// Upload
await client.storage
  .from("avatars")
  .upload("user1.png", file, { upsert: true });

// Download
const { data } = await client.storage.from("avatars").download("user1.png");

// List
const { data: files } = await client.storage
  .from("avatars")
  .list("subfolder/", { limit: 100 });

// Delete
await client.storage.from("avatars").remove(["file1.png", "file2.png"]);

// Copy/Move
await client.storage.from("avatars").copy("old.png", "new.png");
await client.storage.from("avatars").move("old.png", "new.png");
```

## Upload Progress Tracking
Track upload progress by providing an `onUploadProgress` callback in the upload options:
### Vanilla SDK

```typescript
import { createClient } from "@nimbleflux/fluxbase-sdk";

const client = createClient("http://localhost:8080", "your-anon-key");
const file = document.getElementById("fileInput").files[0];

// Upload with progress tracking
const { data, error } = await client.storage
  .from("avatars")
  .upload("user1.png", file, {
    onUploadProgress: (progress) => {
      console.log(`Upload progress: ${progress.percentage}%`);
      console.log(`Loaded: ${progress.loaded} / ${progress.total} bytes`);

      // Update UI progress bar
      const progressBar = document.getElementById("progress");
      progressBar.value = progress.percentage;
    },
  });
```

### React Hook (with automatic state management)
```tsx
import { useStorageUploadWithProgress } from "@nimbleflux/fluxbase-sdk-react";

function UploadComponent() {
  const { upload, progress, reset } = useStorageUploadWithProgress("avatars");

  const handleUpload = (file: File) => {
    upload.mutate({
      path: "user-avatar.png",
      file: file,
    });
  };

  return (
    <div>
      <input type="file" onChange={(e) => handleUpload(e.target.files[0])} />

      {progress && (
        <div>
          <progress value={progress.percentage} max="100" />
          <p>{progress.percentage}% uploaded</p>
          <p>
            {progress.loaded} / {progress.total} bytes
          </p>
        </div>
      )}

      {upload.isSuccess && <p>Upload complete!</p>}
      {upload.isError && <p>Upload failed: {upload.error?.message}</p>}
    </div>
  );
}
```

### React Hook (with custom callback)
```tsx
import { useState } from "react";
import { useStorageUpload } from "@nimbleflux/fluxbase-sdk-react";

function UploadComponent() {
  const upload = useStorageUpload("avatars");
  const [uploadProgress, setUploadProgress] = useState(0);

  const handleUpload = (file: File) => {
    upload.mutate({
      path: "user-avatar.png",
      file: file,
      options: {
        onUploadProgress: (progress) => {
          setUploadProgress(progress.percentage);
        },
      },
    });
  };

  return (
    <div>
      <input type="file" onChange={(e) => handleUpload(e.target.files[0])} />

      {uploadProgress > 0 && uploadProgress < 100 && (
        <div>
          <progress value={uploadProgress} max="100" />
          <p>{uploadProgress}% uploaded</p>
        </div>
      )}

      {upload.isSuccess && <p>Upload complete!</p>}
    </div>
  );
}
```

### Progress Object
The `onUploadProgress` callback receives an object with the following properties:
```typescript
interface UploadProgress {
  loaded: number;     // Number of bytes uploaded so far
  total: number;      // Total number of bytes to upload
  percentage: number; // Upload percentage (0-100)
}
```

- Progress tracking uses `XMLHttpRequest` instead of `fetch()` for better progress events
- Progress tracking is optional and backward-compatible
- When no progress callback is provided, the standard `fetch()` API is used
- Progress updates may not be perfectly linear depending on network conditions
- The progress callback is called multiple times during the upload
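
For reference, `percentage` is derived from `loaded` and `total`. A small helper like the following — an illustration only, not part of the SDK — shows how a progress object might be built from an `XMLHttpRequest` "progress" event:

```typescript
interface UploadProgress {
  loaded: number;
  total: number;
  percentage: number;
}

// Illustration only - not part of the SDK. Guards against a zero total,
// which can occur before the browser knows the upload size.
function toProgress(loaded: number, total: number): UploadProgress {
  const percentage = total > 0 ? Math.round((loaded / total) * 100) : 0;
  return { loaded, total, percentage };
}
```
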
## Public vs Private Files

```typescript
// Public bucket (no auth required)
await client.storage.createBucket("public-images", { public: true });
const url = client.storage.from("public-images").getPublicUrl("logo.svg");

// Private bucket (requires auth or signed URL)
await client.storage.createBucket("private-docs", { public: false });
```

## Signed URLs (S3 Only)
```typescript
const { data } = await client.storage
  .from("private-docs")
  .createSignedUrl("document.pdf", 3600); // 1 hour expiry
```

## Metadata
```typescript
// Upload with metadata
await client.storage.from("avatars").upload("profile.png", file, {
  metadata: { user_id: "123", description: "Profile picture" },
});

// Get file info
const { data } = await client.storage
  .from("avatars")
  .getFileInfo("profile.png");
```

## S3 Provider Setup
### AWS S3

```yaml
storage:
  provider: "s3"
  s3_endpoint: "s3.amazonaws.com"
  s3_access_key: "AKIAIOSFODNN7EXAMPLE"
  s3_secret_key: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
  s3_region: "us-east-1"
  s3_bucket: "my-app-storage"
```

### MinIO (Self-Hosted)
```yaml
storage:
  provider: "s3"
  s3_endpoint: "localhost:9000"
  s3_access_key: "minioadmin"
  s3_secret_key: "minioadmin"
  s3_region: "us-east-1"
  s3_bucket: "fluxbase"
  s3_use_ssl: false # for development
```

Start MinIO with Docker:

```bash
docker run -d \
  -p 9000:9000 \
  -p 9001:9001 \
  --name minio \
  -e "MINIO_ROOT_USER=minioadmin" \
  -e "MINIO_ROOT_PASSWORD=minioadmin" \
  -v ./minio-data:/data \
  minio/minio server /data --console-address ":9001"
```

### DigitalOcean Spaces
```yaml
storage:
  provider: "s3"
  s3_endpoint: "nyc3.digitaloceanspaces.com"
  s3_access_key: "your-spaces-key"
  s3_secret_key: "your-spaces-secret"
  s3_region: "us-east-1"
  s3_bucket: "my-space"
```

## Best Practices
File Naming:
- Use consistent naming conventions
- Avoid special characters
- Use lowercase for better compatibility
- Include file extensions
Security:
- Keep buckets private by default
- Use signed URLs for temporary access
- Validate file types before upload
- Set file size limits
- Never expose S3 credentials in client code
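
A client-side pre-check along these lines can enforce type and size limits before calling `upload()`. The MIME list, size limit, and name pattern below are arbitrary examples; bucket-level `allowed_mime_types` and `file_size_limit` remain the real enforcement point, since client-side checks only improve UX:

```typescript
// Example limits - match them to your bucket's configuration.
const ALLOWED_MIME_TYPES = ["image/png", "image/jpeg"];
const MAX_SIZE_BYTES = 5 * 1024 * 1024; // 5MB

// Returns an error message, or null if the file looks acceptable.
// Never rely on client-side validation for security.
function validateUpload(name: string, type: string, size: number): string | null {
  if (!ALLOWED_MIME_TYPES.includes(type)) {
    return `Unsupported type: ${type}`;
  }
  if (size > MAX_SIZE_BYTES) {
    return `File too large: ${size} bytes (max ${MAX_SIZE_BYTES})`;
  }
  if (!/^[a-z0-9._/-]+$/i.test(name)) {
    return "Name contains unsupported characters";
  }
  return null;
}
```

If `validateUpload(file.name, file.type, file.size)` returns null, proceed with `client.storage.from(...).upload(...)`; otherwise surface the message to the user.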
Performance:
- Use appropriate file size limits
- Implement client-side compression for large files
- Use CDN for public files
- Cache control headers for static assets
Organization:
- Use path prefixes to organize files (e.g., `users/123/avatar.png`)
- Separate buckets by access level
- Use metadata for searchability
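
The path-prefix convention above can be kept consistent with a small helper; the function name and sanitization rule here are illustrative, not part of the SDK:

```typescript
// Illustrative helper for the users/<id>/<file> convention.
// Lowercases the filename and replaces awkward characters, per the
// file-naming practices above.
function userObjectPath(userId: string, filename: string): string {
  const safe = filename.toLowerCase().replace(/[^a-z0-9._-]/g, "-");
  return `users/${userId}/${safe}`;
}
```
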
## S3 Credential Management

SECURITY WARNING: Never hardcode S3 credentials in configuration files or commit them to version control. Always use environment variables or secret management systems.
### Environment Variables (Recommended for Development)

```yaml
# fluxbase.yaml - DO NOT hardcode credentials here
storage:
  provider: "s3"
  s3_endpoint: "${FLUXBASE_STORAGE_S3_ENDPOINT}"
  s3_access_key: "${FLUXBASE_STORAGE_S3_ACCESS_KEY}"
  s3_secret_key: "${FLUXBASE_STORAGE_S3_SECRET_KEY}"
  s3_region: "${FLUXBASE_STORAGE_S3_REGION}"
  s3_bucket: "${FLUXBASE_STORAGE_S3_BUCKET}"
```

Set environment variables:
```bash
# .env file (NEVER commit this)
export FLUXBASE_STORAGE_S3_ENDPOINT="s3.amazonaws.com"
export FLUXBASE_STORAGE_S3_ACCESS_KEY="AKIAIOSFODNN7EXAMPLE"
export FLUXBASE_STORAGE_S3_SECRET_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export FLUXBASE_STORAGE_S3_REGION="us-east-1"
export FLUXBASE_STORAGE_S3_BUCKET="my-app-storage"

# Load and run
source .env
fluxbase server
```

### Secret Management Systems (Recommended for Production)
Section titled “Secret Management Systems (Recommended for Production)”HashiCorp Vault
Section titled “HashiCorp Vault”storage: provider: "s3" s3_endpoint: "{{ vault `secret/storage/s3#endpoint` }}" s3_access_key: "{{ vault `secret/storage/s3#access_key` }}" s3_secret_key: "{{ vault `secret/storage/s3#secret_key` }}"AWS Secrets Manager
```yaml
storage:
  provider: "s3"
  s3_endpoint: "{{ aws_secret `fluxbase/storage/s3_endpoint` }}"
  s3_access_key: "{{ aws_secret `fluxbase/storage/access_key` }}"
  s3_secret_key: "{{ aws_secret `fluxbase/storage/secret_key` }}"
```

#### Docker Secrets
```yaml
services:
  fluxbase:
    environment:
      - FLUXBASE_STORAGE_S3_ENDPOINT=file:/run/secrets/s3_endpoint
      - FLUXBASE_STORAGE_S3_ACCESS_KEY=file:/run/secrets/s3_access_key
      - FLUXBASE_STORAGE_S3_SECRET_KEY=file:/run/secrets/s3_secret_key
    secrets:
      - s3_endpoint
      - s3_access_key
      - s3_secret_key

secrets:
  s3_endpoint:
    external: true
  s3_access_key:
    external: true
  s3_secret_key:
    external: true
```

### IAM Roles (Best for AWS Deployments)
For applications running on AWS EC2, ECS, or Lambda, use IAM roles instead of credentials:
```yaml
storage:
  provider: "s3"
  s3_endpoint: "s3.amazonaws.com"
  s3_region: "us-east-1"
  s3_bucket: "my-app-storage"
  # No access_key/secret_key needed - uses IAM role from EC2/ECS
```

Attach an IAM policy to your EC2 instance or ECS task:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-app-storage",
        "arn:aws:s3:::my-app-storage/*"
      ]
    }
  ]
}
```

## Credential Rotation
Regularly rotate S3 credentials:
1. Generate new credentials in AWS IAM or your S3 provider
2. Update environment variables or the secret management system
3. Restart Fluxbase to load the new credentials
4. Revoke the old credentials after confirming the new ones work
Automation tools like AWS Secrets Manager or HashiCorp Vault can automate this process.
## Malware Scanning Integration

Fluxbase supports integration with malware scanning services to automatically scan uploaded files for viruses, malware, and malicious content.
### ClamAV (Self-Hosted)

Install ClamAV and configure Fluxbase to scan files:

```yaml
storage:
  malware_scanning:
    enabled: true
    provider: "clamav"
    clamav_socket: "/var/run/clamav/clamd.ctl"
    scan_on_upload: true
    quarantine_infected: true
    quarantine_bucket: "quarantine"
```

Install ClamAV:
```bash
# Ubuntu/Debian
sudo apt-get install clamav clamav-daemon

# Start ClamAV daemon
sudo systemctl start clamav-daemon
sudo systemctl enable clamav-daemon
```

### AWS Advanced Virus Protection
For AWS S3 buckets, enable AWS Advanced Virus Protection:

```yaml
storage:
  malware_scanning:
    enabled: true
    provider: "aws_avp"
    aws_region: "us-east-1"
    scan_on_upload: true
    quarantine_infected: true
```

Enable in AWS S3:

```bash
# Enable AVP for S3 bucket
aws s3api put-bucket-configuration \
  --bucket my-app-storage \
  --configuration '{
    "AdvancedVirusProtectionConfiguration": {
      "AdvancedVirusProtection": "Enabled"
    }
  }'
```

### VirusTotal API
Use the VirusTotal API for cloud-based scanning:

```yaml
storage:
  malware_scanning:
    enabled: true
    provider: "virustotal"
    virustotal_api_key: "${VIRUSTOTAL_API_KEY}"
    scan_on_upload: true
    quarantine_infected: true
```

Get an API key from the VirusTotal Developer Portal.
### Scanning Behavior

When malware scanning is enabled:
1. Upload receives file -> file is stored temporarily
2. Scanner processes file -> asynchronous scan
3. Scan results:
   - Clean: file moved to permanent storage
   - Infected: file moved to quarantine bucket, upload rejected
   - Error: configurable fail-open or fail-closed
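
The fail-open/fail-closed choice could be expressed in configuration along these lines. This is a hypothetical sketch: the `on_scan_error` key is an assumption for illustration, so verify the actual option name for your Fluxbase version:

```yaml
storage:
  malware_scanning:
    enabled: true
    provider: "clamav"
    scan_on_upload: true
    # Hypothetical option - check your version for the real key name:
    #   "reject" = fail-closed (block the upload when the scanner errors)
    #   "allow"  = fail-open  (accept the upload, log the scan failure)
    on_scan_error: "reject"
```

Fail-closed is the safer default for untrusted uploads; fail-open trades safety for availability when the scanner is down.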
## Quarantine Management

```bash
# List quarantined files
fluxbase-cli storage list --bucket quarantine

# Download quarantined file for analysis
fluxbase-cli storage download --bucket quarantine --path suspicious.exe

# Delete quarantined files (after review)
fluxbase-cli storage delete --bucket quarantine --path suspicious.exe
```

### Best Practices
- Enable in production: Always scan files from untrusted sources
- Quarantine first: Review quarantined files before permanent deletion
- Rate limiting: Malware scanning can be slow; apply rate limiting to uploads
- Async scanning: Consider async scanning for better UX
- False positives: Whitelist known-safe files as needed
- Regular updates: Keep ClamAV definitions updated
## Error Handling

```typescript
try {
  const { data, error } = await client.storage
    .from("avatars")
    .upload("file.png", file);

  if (error) {
    if (error.message.includes("already exists")) {
      // File exists, use upsert: true or a different name
    } else if (error.message.includes("too large")) {
      // File exceeds size limit
    } else {
      // Other error
      console.error("Upload error:", error);
    }
  }
} catch (err) {
  console.error("Network error:", err);
}
```

## REST API
For direct HTTP access without the SDK, see the Storage SDK Documentation.
## Related Documentation

- Authentication - Secure file access
- Row-Level Security - File access policies
- Configuration - All storage options