# File Storage
Fluxbase provides file storage backed by the local filesystem or any S3-compatible service (MinIO, AWS S3, Wasabi, DigitalOcean Spaces, etc.).
## Features

- Local filesystem or S3-compatible storage
- Bucket management
- File upload, download, delete, list operations
- Custom metadata support
- Signed URLs for temporary access (S3 only)
- Range requests for partial downloads
- Copy and move operations
## Configuration

```yaml
storage:
  provider: "local" # or "s3"
  local_path: "./storage"
  max_upload_size: 10485760 # 10MB

  # S3 configuration (when provider: "s3")
  s3_endpoint: "s3.amazonaws.com"
  s3_access_key: "your-access-key"
  s3_secret_key: "your-secret-key"
  s3_region: "us-east-1"
  s3_bucket: "default-bucket"
```

### Environment Variables
```bash
FLUXBASE_STORAGE_PROVIDER=local # or s3
FLUXBASE_STORAGE_LOCAL_PATH=./storage
FLUXBASE_STORAGE_MAX_UPLOAD_SIZE=10485760

# S3 configuration
FLUXBASE_STORAGE_S3_ENDPOINT=s3.amazonaws.com
FLUXBASE_STORAGE_S3_ACCESS_KEY=your-access-key
FLUXBASE_STORAGE_S3_SECRET_KEY=your-secret-key
FLUXBASE_STORAGE_S3_REGION=us-east-1
```

## Provider Comparison
Local Storage:
- Simple setup, no external dependencies
- Best for development and single-server deployments
- Not scalable across multiple servers
S3-Compatible:
- Highly scalable and distributed
- Best for production with multiple servers
- Requires external service (AWS S3, MinIO, etc.)
## Architecture Comparison
### Local Storage Architecture
```mermaid
graph TB
    A[Client App 1] -->|Upload/Download| B[Fluxbase Server]
    C[Client App 2] -->|Upload/Download| B
    B -->|Read/Write| D[Local Filesystem<br/>/storage]

    E[Load Balancer] -.->|Cannot scale| F[Multiple Instances]
    F -.->|No shared filesystem| D

    style B fill:#3178c6,color:#fff
    style D fill:#f39c12,color:#fff
    style E fill:#e74c3c,color:#fff,stroke-dasharray: 5 5
    style F fill:#e74c3c,color:#fff,stroke-dasharray: 5 5
```

Limitations:
- Single server only - files stored locally cannot be accessed by multiple Fluxbase instances
- No horizontal scaling possible
- Server failure means data loss (unless backups exist)
### S3-Compatible Storage Architecture (MinIO/S3)
```mermaid
graph TB
    A[Client 1] -->|API Request| LB[Load Balancer]
    B[Client 2] -->|API Request| LB
    C[Client 3] -->|API Request| LB

    LB --> FB1[Fluxbase Instance 1]
    LB --> FB2[Fluxbase Instance 2]
    LB --> FB3[Fluxbase Instance 3]

    FB1 -->|S3 API| S3[MinIO / S3 Cluster]
    FB2 -->|S3 API| S3
    FB3 -->|S3 API| S3

    S3 -->|Distributed| S3A[Storage Node 1]
    S3 -->|Distributed| S3B[Storage Node 2]
    S3 -->|Distributed| S3C[Storage Node 3]

    style LB fill:#ff6b6b,color:#fff
    style FB1 fill:#3178c6,color:#fff
    style FB2 fill:#3178c6,color:#fff
    style FB3 fill:#3178c6,color:#fff
    style S3 fill:#c92a2a,color:#fff
    style S3A fill:#5c940d,color:#fff
    style S3B fill:#5c940d,color:#fff
    style S3C fill:#5c940d,color:#fff
```

Benefits:
- Multiple Fluxbase instances can access the same storage
- Horizontally scalable - add more instances as needed
- High availability - storage cluster handles redundancy
- No single point of failure
Use Cases:
- Local Storage: Development, testing, single-server deployments
- MinIO: Self-hosted production with horizontal scaling needs
- AWS S3/DigitalOcean Spaces: Cloud production with managed infrastructure
## Installation

```bash
npm install @fluxbase/sdk
```

## Basic Usage
```ts
import { createClient } from "@fluxbase/sdk";

const client = createClient("http://localhost:8080", "your-anon-key");

// Upload a file
const file = document.getElementById("fileInput").files[0];
const { data, error } = await client.storage
  .from("avatars")
  .upload("user1.png", file);

// Download a file
const { data: blob } = await client.storage
  .from("avatars")
  .download("user1.png");

// List files
const { data: files } = await client.storage.from("avatars").list();

// Delete a file
await client.storage.from("avatars").remove(["user1.png"]);
```

## Bucket Operations
| Method | Purpose | Parameters |
|---|---|---|
| `createBucket()` | Create new bucket | `name`, `options` (`public`, `file_size_limit`, `allowed_mime_types`) |
| `listBuckets()` | List all buckets | None |
| `getBucket()` | Get bucket details | `name` |
| `deleteBucket()` | Delete bucket | `name` |
Example:
```ts
// Create a bucket
await client.storage.createBucket("avatars", {
  public: false,
  file_size_limit: 5242880,
  allowed_mime_types: ["image/png", "image/jpeg"],
});

// List, get, and delete buckets
const { data: buckets } = await client.storage.listBuckets();
const { data: bucket } = await client.storage.getBucket("avatars");
await client.storage.deleteBucket("avatars");
```

## File Operations
| Method | Purpose | Parameters |
|---|---|---|
| `upload()` | Upload file | `path`, `file`, `options` (`contentType`, `cacheControl`, `upsert`) |
| `download()` | Download file | `path` |
| `list()` | List files | `path`, `options` (`limit`, `offset`, `sortBy`) |
| `remove()` | Delete files | `paths[]` |
| `copy()` | Copy file | `from`, `to` |
| `move()` | Move file | `from`, `to` |
Example:
```ts
// Upload (overwrite if the file already exists)
await client.storage
  .from("avatars")
  .upload("user1.png", file, { upsert: true });

// Download
const { data } = await client.storage.from("avatars").download("user1.png");

// List
const { data: files } = await client.storage
  .from("avatars")
  .list("subfolder/", { limit: 100 });

// Delete
await client.storage.from("avatars").remove(["file1.png", "file2.png"]);

// Copy and move
await client.storage.from("avatars").copy("old.png", "new.png");
await client.storage.from("avatars").move("old.png", "new.png");
```

## Upload Progress Tracking
Track upload progress by providing an `onUploadProgress` callback in the upload options:
### Vanilla SDK
```ts
import { createClient } from "@fluxbase/sdk";

const client = createClient("http://localhost:8080", "your-anon-key");
const file = document.getElementById("fileInput").files[0];

// Upload with progress tracking
const { data, error } = await client.storage
  .from("avatars")
  .upload("user1.png", file, {
    onUploadProgress: (progress) => {
      console.log(`Upload progress: ${progress.percentage}%`);
      console.log(`Loaded: ${progress.loaded} / ${progress.total} bytes`);

      // Update the UI progress bar
      const progressBar = document.getElementById("progress");
      progressBar.value = progress.percentage;
    },
  });
```

### React Hook (with automatic state management)
```tsx
import { useStorageUploadWithProgress } from "@fluxbase/sdk-react";

function UploadComponent() {
  const { upload, progress, reset } = useStorageUploadWithProgress("avatars");

  const handleUpload = (file: File) => {
    upload.mutate({
      path: "user-avatar.png",
      file: file,
    });
  };

  return (
    <div>
      <input type="file" onChange={(e) => handleUpload(e.target.files[0])} />

      {progress && (
        <div>
          <progress value={progress.percentage} max="100" />
          <p>{progress.percentage}% uploaded</p>
          <p>
            {progress.loaded} / {progress.total} bytes
          </p>
        </div>
      )}

      {upload.isSuccess && <p>Upload complete!</p>}
      {upload.isError && <p>Upload failed: {upload.error?.message}</p>}
    </div>
  );
}
```

### React Hook (with custom callback)
```tsx
import { useState } from "react";
import { useStorageUpload } from "@fluxbase/sdk-react";

function UploadComponent() {
  const upload = useStorageUpload("avatars");
  const [uploadProgress, setUploadProgress] = useState(0);

  const handleUpload = (file: File) => {
    upload.mutate({
      path: "user-avatar.png",
      file: file,
      options: {
        onUploadProgress: (progress) => {
          setUploadProgress(progress.percentage);
        },
      },
    });
  };

  return (
    <div>
      <input type="file" onChange={(e) => handleUpload(e.target.files[0])} />

      {uploadProgress > 0 && uploadProgress < 100 && (
        <div>
          <progress value={uploadProgress} max="100" />
          <p>{uploadProgress}% uploaded</p>
        </div>
      )}

      {upload.isSuccess && <p>Upload complete!</p>}
    </div>
  );
}
```

### Progress Object
The `onUploadProgress` callback receives an object with the following properties:
```ts
interface UploadProgress {
  loaded: number;     // Number of bytes uploaded so far
  total: number;      // Total number of bytes to upload
  percentage: number; // Upload percentage (0-100)
}
```

- Progress tracking uses `XMLHttpRequest` instead of `fetch()` for better progress events
- Progress tracking is optional and backward-compatible
- When no progress callback is provided, the standard `fetch()` API is used
- Progress updates may not be perfectly linear depending on network conditions
- The progress callback is called multiple times during the upload
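To make the arithmetic behind `percentage` concrete, here is a small helper that derives an `UploadProgress` object from raw byte counts, the way the callback receives it. This is an illustrative sketch, not part of the SDK; the `toProgress` name is hypothetical:

```typescript
// Hypothetical helper, not part of @fluxbase/sdk: builds an UploadProgress
// object from the raw byte counts an XMLHttpRequest progress event reports.
interface UploadProgress {
  loaded: number;     // bytes uploaded so far
  total: number;      // total bytes to upload
  percentage: number; // 0-100
}

function toProgress(loaded: number, total: number): UploadProgress {
  // Guard against total === 0 (e.g. an empty file) to avoid NaN.
  const percentage = total > 0 ? Math.round((loaded / total) * 100) : 0;
  return { loaded, total, percentage };
}

console.log(toProgress(5_242_880, 10_485_760).percentage); // 50 (halfway through a 10 MB upload)
```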
## Public vs Private Files
```ts
// Public bucket (no auth required)
await client.storage.createBucket("public-images", { public: true });
const url = client.storage.from("public-images").getPublicUrl("logo.svg");

// Private bucket (requires auth or a signed URL)
await client.storage.createBucket("private-docs", { public: false });
```

## Signed URLs (S3 Only)
```ts
const { data } = await client.storage
  .from("private-docs")
  .createSignedUrl("document.pdf", 3600); // 1 hour expiry
```

## Metadata
```ts
// Upload with metadata
await client.storage.from("avatars").upload("profile.png", file, {
  metadata: { user_id: "123", description: "Profile picture" },
});

// Get file info
const { data } = await client.storage
  .from("avatars")
  .getFileInfo("profile.png");
```

## S3 Provider Setup
### AWS S3
```yaml
storage:
  provider: "s3"
  s3_endpoint: "s3.amazonaws.com"
  s3_access_key: "AKIAIOSFODNN7EXAMPLE"
  s3_secret_key: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
  s3_region: "us-east-1"
  s3_bucket: "my-app-storage"
```

### MinIO (Self-Hosted)
```yaml
storage:
  provider: "s3"
  s3_endpoint: "localhost:9000"
  s3_access_key: "minioadmin"
  s3_secret_key: "minioadmin"
  s3_region: "us-east-1"
  s3_bucket: "fluxbase"
  s3_use_ssl: false # for development
```

Start MinIO with Docker:
```bash
docker run -d \
  -p 9000:9000 \
  -p 9001:9001 \
  --name minio \
  -e "MINIO_ROOT_USER=minioadmin" \
  -e "MINIO_ROOT_PASSWORD=minioadmin" \
  -v ./minio-data:/data \
  minio/minio server /data --console-address ":9001"
```

### DigitalOcean Spaces
```yaml
storage:
  provider: "s3"
  s3_endpoint: "nyc3.digitaloceanspaces.com"
  s3_access_key: "your-spaces-key"
  s3_secret_key: "your-spaces-secret"
  s3_region: "us-east-1"
  s3_bucket: "my-space"
```

## Best Practices
File Naming:
- Use consistent naming conventions
- Avoid special characters
- Use lowercase for better compatibility
- Include file extensions
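These naming conventions can be enforced in code before upload. A minimal sketch (the `sanitizeFileName` helper and its exact rules are illustrative, not part of the SDK):

```typescript
// Hypothetical helper: normalizes a file name per the conventions above
// (lowercase, no special characters, extension preserved).
function sanitizeFileName(name: string): string {
  return name
    .toLowerCase()
    // Replace any run of characters outside [a-z0-9._-] with a single hyphen
    .replace(/[^a-z0-9._-]+/g, "-")
    // Collapse repeated hyphens and trim them from the ends
    .replace(/-+/g, "-")
    .replace(/^-|-$/g, "");
}

console.log(sanitizeFileName("My Avatar Photo.PNG")); // "my-avatar-photo.png"
```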
Security:
- Keep buckets private by default
- Use signed URLs for temporary access
- Validate file types before upload
- Set file size limits
- Never expose S3 credentials in client code
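Type and size validation can also happen client-side as a fast pre-flight check; the bucket's `file_size_limit` and `allowed_mime_types` remain the server-side enforcement. A sketch (the `validateUpload` helper is hypothetical; the limits mirror the bucket example earlier):

```typescript
// Hypothetical pre-upload check mirroring the bucket limits shown earlier.
// Server-side limits still apply; this just fails fast in the client.
const ALLOWED_TYPES = ["image/png", "image/jpeg"];
const MAX_SIZE = 5_242_880; // 5 MB, matching file_size_limit above

function validateUpload(type: string, size: number): string | null {
  if (!ALLOWED_TYPES.includes(type)) return `type ${type} not allowed`;
  if (size > MAX_SIZE) return `file too large (${size} > ${MAX_SIZE} bytes)`;
  return null; // null means the file passes
}

console.log(validateUpload("image/png", 1024));       // null
console.log(validateUpload("application/pdf", 1024)); // "type application/pdf not allowed"
```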
Performance:
- Use appropriate file size limits
- Implement client-side compression for large files
- Use CDN for public files
- Cache control headers for static assets
Organization:
- Use path prefixes to organize files (e.g., `users/123/avatar.png`)
- Separate buckets by access level
- Use metadata for searchability
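Centralizing prefix construction in one helper keeps paths consistent across the codebase. A sketch of the `users/<id>/<file>` layout from the example above (the helper name is illustrative):

```typescript
// Hypothetical path builder for the users/<id>/<file> layout shown above.
// Keeping this in one place avoids ad-hoc string concatenation at call sites.
function userFilePath(userId: string, fileName: string): string {
  return `users/${userId}/${fileName}`;
}

console.log(userFilePath("123", "avatar.png")); // "users/123/avatar.png"
```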
## Error Handling
```ts
try {
  const { data, error } = await client.storage
    .from("avatars")
    .upload("file.png", file);

  if (error) {
    if (error.message.includes("already exists")) {
      // File exists; use upsert: true or a different name
    } else if (error.message.includes("too large")) {
      // File exceeds the size limit
    } else {
      // Other error
      console.error("Upload error:", error);
    }
  }
} catch (err) {
  console.error("Network error:", err);
}
```

## REST API
For direct HTTP access without the SDK, see the Storage SDK Documentation.
## Related Documentation
- Authentication - Secure file access
- Row-Level Security - File access policies
- Configuration - All storage options