
File Storage

Fluxbase provides file storage backed by either the local filesystem or any S3-compatible object store (MinIO, AWS S3, Wasabi, DigitalOcean Spaces, etc.).

  • Local filesystem or S3-compatible storage
  • Bucket management
  • File upload, download, delete, list operations
  • Custom metadata support
  • Signed URLs for temporary access (S3 only)
  • Range requests for partial downloads
  • Copy and move operations
```yaml
storage:
  provider: "local" # or "s3"
  local_path: "./storage"
  max_upload_size: 10485760 # 10MB

  # S3 Configuration (when provider: "s3")
  s3_endpoint: "s3.amazonaws.com"
  s3_access_key: "your-access-key"
  s3_secret_key: "your-secret-key"
  s3_region: "us-east-1"
  s3_bucket: "default-bucket"
```
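The `max_upload_size` value is expressed in bytes. A tiny helper (hypothetical, not part of Fluxbase) makes such limits easier to read when generating configuration:

```typescript
// Convert a human-readable megabyte count to the byte value expected by
// max_upload_size. Hypothetical helper, not part of the Fluxbase SDK.
function mbToBytes(mb: number): number {
  return mb * 1024 * 1024;
}

console.log(mbToBytes(10)); // 10485760, matching the config above
```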
```shell
FLUXBASE_STORAGE_PROVIDER=local # or s3
FLUXBASE_STORAGE_LOCAL_PATH=./storage
FLUXBASE_STORAGE_MAX_UPLOAD_SIZE=10485760

# S3 Configuration
FLUXBASE_STORAGE_S3_ENDPOINT=s3.amazonaws.com
FLUXBASE_STORAGE_S3_ACCESS_KEY=your-access-key
FLUXBASE_STORAGE_S3_SECRET_KEY=your-secret-key
FLUXBASE_STORAGE_S3_REGION=us-east-1
FLUXBASE_STORAGE_S3_BUCKET=default-bucket
```

Local Storage:

  • Simple setup, no external dependencies
  • Best for development and single-server deployments
  • Not scalable across multiple servers

S3-Compatible:

  • Highly scalable and distributed
  • Best for production with multiple servers
  • Requires external service (AWS S3, MinIO, etc.)
Local Storage Architecture

```mermaid
graph TB
  A[Client App 1] -->|Upload/Download| B[Fluxbase Server]
  C[Client App 2] -->|Upload/Download| B
  B -->|Read/Write| D[Local Filesystem<br/>/storage]
  E[Load Balancer] -.->|Cannot scale| F[Multiple Instances]
  F -.->|No shared filesystem| D
  style B fill:#3178c6,color:#fff
  style D fill:#f39c12,color:#fff
  style E fill:#e74c3c,color:#fff,stroke-dasharray: 5 5
  style F fill:#e74c3c,color:#fff,stroke-dasharray: 5 5
```

Limitations:

  • Single server only - files stored locally cannot be accessed by multiple Fluxbase instances
  • No horizontal scaling possible
  • Server failure means data loss (unless backups exist)

S3-Compatible Storage Architecture (MinIO/S3)

```mermaid
graph TB
  A[Client 1] -->|API Request| LB[Load Balancer]
  B[Client 2] -->|API Request| LB
  C[Client 3] -->|API Request| LB
  LB --> FB1[Fluxbase Instance 1]
  LB --> FB2[Fluxbase Instance 2]
  LB --> FB3[Fluxbase Instance 3]
  FB1 -->|S3 API| S3[MinIO / S3 Cluster]
  FB2 -->|S3 API| S3
  FB3 -->|S3 API| S3
  S3 -->|Distributed| S3A[Storage Node 1]
  S3 -->|Distributed| S3B[Storage Node 2]
  S3 -->|Distributed| S3C[Storage Node 3]
  style LB fill:#ff6b6b,color:#fff
  style FB1 fill:#3178c6,color:#fff
  style FB2 fill:#3178c6,color:#fff
  style FB3 fill:#3178c6,color:#fff
  style S3 fill:#c92a2a,color:#fff
  style S3A fill:#5c940d,color:#fff
  style S3B fill:#5c940d,color:#fff
  style S3C fill:#5c940d,color:#fff
```

Benefits:

  • Multiple Fluxbase instances can access the same storage
  • Horizontally scalable - add more instances as needed
  • High availability - storage cluster handles redundancy
  • No single point of failure

Use Cases:

  • Local Storage: Development, testing, single-server deployments
  • MinIO: Self-hosted production with horizontal scaling needs
  • AWS S3/DigitalOcean Spaces: Cloud production with managed infrastructure
```shell
npm install @nimbleflux/fluxbase-sdk
```

```typescript
import { createClient } from "@nimbleflux/fluxbase-sdk";

const client = createClient("http://localhost:8080", "your-anon-key");

// Upload file
const file = document.getElementById("fileInput").files[0];
const { data, error } = await client.storage
  .from("avatars")
  .upload("user1.png", file);

// Download file
const { data: blob } = await client.storage
  .from("avatars")
  .download("user1.png");

// List files
const { data: files } = await client.storage.from("avatars").list();

// Delete file
await client.storage.from("avatars").remove(["user1.png"]);
```
| Method | Purpose | Parameters |
| --- | --- | --- |
| `createBucket()` | Create new bucket | `name`, `options` (`public`, `file_size_limit`, `allowed_mime_types`) |
| `listBuckets()` | List all buckets | None |
| `getBucket()` | Get bucket details | `name` |
| `deleteBucket()` | Delete bucket | `name` |

Example:

```typescript
// Create bucket
await client.storage.createBucket("avatars", {
  public: false,
  file_size_limit: 5242880,
  allowed_mime_types: ["image/png", "image/jpeg"],
});

// List/get/delete
const { data: buckets } = await client.storage.listBuckets();
const { data: bucket } = await client.storage.getBucket("avatars");
await client.storage.deleteBucket("avatars");
```
| Method | Purpose | Parameters |
| --- | --- | --- |
| `upload()` | Upload file | `path`, `file`, `options` (`contentType`, `cacheControl`, `upsert`) |
| `download()` | Download file | `path` |
| `list()` | List files | `path`, `options` (`limit`, `offset`, `sortBy`) |
| `remove()` | Delete files | `paths[]` |
| `copy()` | Copy file | `from`, `to` |
| `move()` | Move file | `from`, `to` |

Example:

```typescript
// Upload
await client.storage
  .from("avatars")
  .upload("user1.png", file, { upsert: true });

// Download
const { data } = await client.storage.from("avatars").download("user1.png");

// List
const { data: files } = await client.storage
  .from("avatars")
  .list("subfolder/", { limit: 100 });

// Delete
await client.storage.from("avatars").remove(["file1.png", "file2.png"]);

// Copy/Move
await client.storage.from("avatars").copy("old.png", "new.png");
await client.storage.from("avatars").move("old.png", "new.png");
```

Track upload progress by providing an onUploadProgress callback in the upload options:

```typescript
import { createClient } from "@nimbleflux/fluxbase-sdk";

const client = createClient("http://localhost:8080", "your-anon-key");
const file = document.getElementById("fileInput").files[0];

// Upload with progress tracking
const { data, error } = await client.storage
  .from("avatars")
  .upload("user1.png", file, {
    onUploadProgress: (progress) => {
      console.log(`Upload progress: ${progress.percentage}%`);
      console.log(`Loaded: ${progress.loaded} / ${progress.total} bytes`);

      // Update UI progress bar
      const progressBar = document.getElementById("progress");
      progressBar.value = progress.percentage;
    },
  });
```

React Hook (with automatic state management)

```tsx
import { useStorageUploadWithProgress } from "@nimbleflux/fluxbase-sdk-react";

function UploadComponent() {
  const { upload, progress, reset } = useStorageUploadWithProgress("avatars");

  const handleUpload = (file: File) => {
    upload.mutate({
      path: "user-avatar.png",
      file: file,
    });
  };

  return (
    <div>
      <input type="file" onChange={(e) => handleUpload(e.target.files[0])} />
      {progress && (
        <div>
          <progress value={progress.percentage} max="100" />
          <p>{progress.percentage}% uploaded</p>
          <p>
            {progress.loaded} / {progress.total} bytes
          </p>
        </div>
      )}
      {upload.isSuccess && <p>Upload complete!</p>}
      {upload.isError && <p>Upload failed: {upload.error?.message}</p>}
    </div>
  );
}
```
Alternatively, manage the progress state manually with `useStorageUpload`:

```tsx
import { useState } from "react";
import { useStorageUpload } from "@nimbleflux/fluxbase-sdk-react";

function UploadComponent() {
  const upload = useStorageUpload("avatars");
  const [uploadProgress, setUploadProgress] = useState(0);

  const handleUpload = (file: File) => {
    upload.mutate({
      path: "user-avatar.png",
      file: file,
      options: {
        onUploadProgress: (progress) => {
          setUploadProgress(progress.percentage);
        },
      },
    });
  };

  return (
    <div>
      <input type="file" onChange={(e) => handleUpload(e.target.files[0])} />
      {uploadProgress > 0 && uploadProgress < 100 && (
        <div>
          <progress value={uploadProgress} max="100" />
          <p>{uploadProgress}% uploaded</p>
        </div>
      )}
      {upload.isSuccess && <p>Upload complete!</p>}
    </div>
  );
}
```

The onUploadProgress callback receives an object with the following properties:

```typescript
interface UploadProgress {
  loaded: number; // Number of bytes uploaded so far
  total: number; // Total number of bytes to upload
  percentage: number; // Upload percentage (0-100)
}
```
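Since the raw `XMLHttpRequest` progress event reports only byte counts, the `percentage` field can be derived from `loaded` and `total` as in this sketch (a hypothetical helper; the SDK computes this internally):

```typescript
// Sketch: deriving UploadProgress.percentage from the byte counts an
// XMLHttpRequest "progress" event reports. Hypothetical helper only.
interface UploadProgress {
  loaded: number;
  total: number;
  percentage: number;
}

function toUploadProgress(loaded: number, total: number): UploadProgress {
  // Guard against total === 0 (e.g. a non-computable content length).
  const percentage = total > 0 ? Math.round((loaded / total) * 100) : 0;
  return { loaded, total, percentage };
}

console.log(toUploadProgress(524288, 1048576).percentage); // 50
```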
  • Progress tracking uses XMLHttpRequest instead of fetch() for better progress events
  • Progress tracking is optional and backward-compatible
  • When no progress callback is provided, the standard fetch() API is used
  • Progress updates may not be perfectly linear depending on network conditions
  • The progress callback is called multiple times during the upload
```typescript
// Public bucket (no auth required)
await client.storage.createBucket("public-images", { public: true });
const url = client.storage.from("public-images").getPublicUrl("logo.svg");

// Private bucket (requires auth or signed URL)
await client.storage.createBucket("private-docs", { public: false });
const { data } = await client.storage
  .from("private-docs")
  .createSignedUrl("document.pdf", 3600); // 1 hour expiry

// Upload with metadata
await client.storage.from("avatars").upload("profile.png", file, {
  metadata: { user_id: "123", description: "Profile picture" },
});

// Get file info
const { data: info } = await client.storage
  .from("avatars")
  .getFileInfo("profile.png");
```
AWS S3:

```yaml
storage:
  provider: "s3"
  s3_endpoint: "s3.amazonaws.com"
  s3_access_key: "AKIAIOSFODNN7EXAMPLE"
  s3_secret_key: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
  s3_region: "us-east-1"
  s3_bucket: "my-app-storage"
```
MinIO (self-hosted):

```yaml
storage:
  provider: "s3"
  s3_endpoint: "localhost:9000"
  s3_access_key: "minioadmin"
  s3_secret_key: "minioadmin"
  s3_region: "us-east-1"
  s3_bucket: "fluxbase"
  s3_use_ssl: false # for development
```

Start MinIO with Docker:

```shell
docker run -d \
  -p 9000:9000 \
  -p 9001:9001 \
  --name minio \
  -e "MINIO_ROOT_USER=minioadmin" \
  -e "MINIO_ROOT_PASSWORD=minioadmin" \
  -v ./minio-data:/data \
  minio/minio server /data --console-address ":9001"
```
DigitalOcean Spaces:

```yaml
storage:
  provider: "s3"
  s3_endpoint: "nyc3.digitaloceanspaces.com"
  s3_access_key: "your-spaces-key"
  s3_secret_key: "your-spaces-secret"
  s3_region: "us-east-1"
  s3_bucket: "my-space"
```

File Naming:

  • Use consistent naming conventions
  • Avoid special characters
  • Use lowercase for better compatibility
  • Include file extensions
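The naming rules above can be enforced with a small normalizer before calling `upload()`. This is a sketch with hypothetical names, not an SDK feature:

```typescript
// Sketch: normalize a user-supplied file name per the conventions above:
// lowercase, no special characters, extension preserved.
// Hypothetical helper, not part of the Fluxbase SDK.
function normalizeFileName(name: string): string {
  return name
    .toLowerCase()
    // Replace any run of characters outside [a-z0-9._-] with a hyphen.
    .replace(/[^a-z0-9._-]+/g, "-")
    // Collapse hyphen runs and trim hyphens from the ends.
    .replace(/-+/g, "-")
    .replace(/^-|-$/g, "");
}

console.log(normalizeFileName("User Photo.PNG")); // "user-photo.png"
```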

Security:

  • Keep buckets private by default
  • Use signed URLs for temporary access
  • Validate file types before upload
  • Set file size limits
  • Never expose S3 credentials in client code
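"Validate file types before upload" and "set file size limits" can also be done client-side for faster feedback. This is only defense in depth: the bucket's `allowed_mime_types` and `file_size_limit` remain the authoritative checks. The limits below are illustrative assumptions:

```typescript
// Sketch: client-side pre-upload validation mirroring an assumed bucket
// policy. Server-side limits are still enforced; this just fails fast.
const ALLOWED_TYPES = ["image/png", "image/jpeg"]; // assumed policy
const MAX_BYTES = 5 * 1024 * 1024; // 5 MB, assumed limit

function validateUpload(file: { type: string; size: number }): string | null {
  if (!ALLOWED_TYPES.includes(file.type)) {
    return `Unsupported type: ${file.type}`;
  }
  if (file.size > MAX_BYTES) {
    return `File too large: ${file.size} bytes (max ${MAX_BYTES})`;
  }
  return null; // valid
}

console.log(validateUpload({ type: "image/png", size: 1024 })); // null
```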

Performance:

  • Use appropriate file size limits
  • Implement client-side compression for large files
  • Use CDN for public files
  • Cache control headers for static assets

Organization:

  • Use path prefixes to organize files (e.g., users/123/avatar.png)
  • Separate buckets by access level
  • Use metadata for searchability
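Path prefixes like `users/123/avatar.png` can be built consistently with a tiny helper (hypothetical naming), so that `list("users/123/")` scopes results to one user:

```typescript
// Sketch: build per-user object paths so listing by prefix returns only
// that user's files. Hypothetical helper, not part of the SDK.
function userPath(userId: string, fileName: string): string {
  return `users/${userId}/${fileName}`;
}

console.log(userPath("123", "avatar.png")); // "users/123/avatar.png"
```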

SECURITY WARNING: Never hardcode S3 credentials in configuration files or commit them to version control. Always use environment variables or secret management systems.

Environment Variables (Recommended for Development)
```yaml
# fluxbase.yaml - DO NOT hardcode credentials here
storage:
  provider: "s3"
  s3_endpoint: "${FLUXBASE_STORAGE_S3_ENDPOINT}"
  s3_access_key: "${FLUXBASE_STORAGE_S3_ACCESS_KEY}"
  s3_secret_key: "${FLUXBASE_STORAGE_S3_SECRET_KEY}"
  s3_region: "${FLUXBASE_STORAGE_S3_REGION}"
  s3_bucket: "${FLUXBASE_STORAGE_S3_BUCKET}"
```

Set environment variables:

```shell
# .env file (NEVER commit this)
export FLUXBASE_STORAGE_S3_ENDPOINT="s3.amazonaws.com"
export FLUXBASE_STORAGE_S3_ACCESS_KEY="AKIAIOSFODNN7EXAMPLE"
export FLUXBASE_STORAGE_S3_SECRET_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export FLUXBASE_STORAGE_S3_REGION="us-east-1"
export FLUXBASE_STORAGE_S3_BUCKET="my-app-storage"

# Load and run
source .env
fluxbase server
```
Secret Management Systems (Recommended for Production)
HashiCorp Vault:

fluxbase.yaml
```yaml
storage:
  provider: "s3"
  s3_endpoint: "{{ vault `secret/storage/s3#endpoint` }}"
  s3_access_key: "{{ vault `secret/storage/s3#access_key` }}"
  s3_secret_key: "{{ vault `secret/storage/s3#secret_key` }}"
```
AWS Secrets Manager:

fluxbase.yaml
```yaml
storage:
  provider: "s3"
  s3_endpoint: "{{ aws_secret `fluxbase/storage/s3_endpoint` }}"
  s3_access_key: "{{ aws_secret `fluxbase/storage/access_key` }}"
  s3_secret_key: "{{ aws_secret `fluxbase/storage/secret_key` }}"
```
Docker secrets:

docker-compose.yml
```yaml
services:
  fluxbase:
    environment:
      - FLUXBASE_STORAGE_S3_ENDPOINT=file:/run/secrets/s3_endpoint
      - FLUXBASE_STORAGE_S3_ACCESS_KEY=file:/run/secrets/s3_access_key
      - FLUXBASE_STORAGE_S3_SECRET_KEY=file:/run/secrets/s3_secret_key
    secrets:
      - s3_endpoint
      - s3_access_key
      - s3_secret_key

secrets:
  s3_endpoint:
    external: true
  s3_access_key:
    external: true
  s3_secret_key:
    external: true
```

For applications running on AWS EC2, ECS, or Lambda, use IAM roles instead of credentials:

fluxbase.yaml
```yaml
storage:
  provider: "s3"
  s3_endpoint: "s3.amazonaws.com"
  s3_region: "us-east-1"
  s3_bucket: "my-app-storage"
  # No access_key/secret_key needed - uses IAM role from EC2/ECS
```

Attach an IAM policy to your EC2 instance or ECS task:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-app-storage",
        "arn:aws:s3:::my-app-storage/*"
      ]
    }
  ]
}
```

Regularly rotate S3 credentials:

  1. Generate new credentials in AWS IAM or S3 provider
  2. Update environment variables or secret management system
  3. Restart Fluxbase to load new credentials
  4. Revoke old credentials after confirming new ones work

Automation tools like AWS Secrets Manager or HashiCorp Vault can automate this process.

Fluxbase supports integration with malware scanning services to automatically scan uploaded files for viruses, malware, and malicious content.

Install ClamAV and configure Fluxbase to scan files:

fluxbase.yaml
```yaml
storage:
  malware_scanning:
    enabled: true
    provider: "clamav"
    clamav_socket: "/var/run/clamav/clamd.ctl"
    scan_on_upload: true
    quarantine_infected: true
    quarantine_bucket: "quarantine"
```

Install ClamAV:

```shell
# Ubuntu/Debian
sudo apt-get install clamav clamav-daemon

# Start ClamAV daemon
sudo systemctl start clamav-daemon
sudo systemctl enable clamav-daemon
```

For AWS S3 buckets, enable AWS Advanced Virus Protection:

fluxbase.yaml
```yaml
storage:
  malware_scanning:
    enabled: true
    provider: "aws_avp"
    aws_region: "us-east-1"
    scan_on_upload: true
    quarantine_infected: true
```

Enable in AWS S3:

```shell
# Enable AVP for S3 bucket
aws s3api put-bucket-configuration \
  --bucket my-app-storage \
  --configuration '{
    "AdvancedVirusProtectionConfiguration": {
      "AdvancedVirusProtection": "Enabled"
    }
  }'
```

Use VirusTotal API for cloud-based scanning:

fluxbase.yaml
```yaml
storage:
  malware_scanning:
    enabled: true
    provider: "virustotal"
    virustotal_api_key: "${VIRUSTOTAL_API_KEY}"
    scan_on_upload: true
    quarantine_infected: true
```

Get API key from VirusTotal Developer Portal.

When malware scanning is enabled:

  1. Upload receives file -> File stored temporarily
  2. Scanner processes file -> Asynchronous scan
  3. Scan results:
    • Clean: File moved to permanent storage
    • Infected: File moved to quarantine bucket, upload rejected
    • Error: Configurable fail-open or fail-closed
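The flow above, including the configurable fail-open vs. fail-closed handling of scanner errors, can be sketched as a pure decision function (all names here are illustrative, not Fluxbase internals):

```typescript
// Sketch: deciding a file's fate after a malware scan, including the
// configurable fail-open / fail-closed behavior for scanner errors.
type ScanResult = "clean" | "infected" | "error";
type Action = "store" | "quarantine" | "reject";

function decide(result: ScanResult, failOpen: boolean): Action {
  switch (result) {
    case "clean":
      return "store"; // move to permanent storage
    case "infected":
      return "quarantine"; // move to quarantine bucket, reject upload
    case "error":
      // Fail-open accepts the file despite the failed scan;
      // fail-closed rejects it.
      return failOpen ? "store" : "reject";
  }
}

console.log(decide("error", false)); // "reject"
```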
```shell
# List quarantined files
fluxbase-cli storage list --bucket quarantine

# Download quarantined file for analysis
fluxbase-cli storage download --bucket quarantine --path suspicious.exe

# Delete quarantined files (after review)
fluxbase-cli storage delete --bucket quarantine --path suspicious.exe
```
  • Enable in production: Always scan files from untrusted sources
  • Quarantine first: Review quarantined files before permanent deletion
  • Rate limiting: Malware scanning can be slow; rate-limit uploads accordingly
  • Async scanning: Consider async scanning for better UX
  • False positives: Whitelist known-safe files as needed
  • Regular updates: Keep ClamAV definitions updated
```typescript
try {
  const { data, error } = await client.storage
    .from("avatars")
    .upload("file.png", file);

  if (error) {
    if (error.message.includes("already exists")) {
      // File exists: use upsert: true or a different name
    } else if (error.message.includes("too large")) {
      // File exceeds size limit
    } else {
      // Other error
      console.error("Upload error:", error);
    }
  }
} catch (err) {
  console.error("Network error:", err);
}
```

For direct HTTP access without the SDK, see the Storage API documentation.