
Testing Guide

Fluxbase has comprehensive testing infrastructure covering unit tests, integration tests, end-to-end tests, and performance tests for both backend (Go) and SDK (TypeScript).

┌───────────────────────────────────────────────────────────┐
│                      Testing Pyramid                      │
│                                                           │
│                  ┌─────────────┐                          │
│                  │  E2E Tests  │  (Slowest, Full Stack)   │
│                  └─────────────┘                          │
│             ┌───────────────────────┐                     │
│             │   Integration Tests   │  (Database + API)   │
│             └───────────────────────┘                     │
│        ┌──────────────────────────────────┐               │
│        │            Unit Tests            │  (Fastest)    │
│        └──────────────────────────────────┘               │
│                                                           │
└───────────────────────────────────────────────────────────┘

Fluxbase backend uses Go’s built-in testing framework with testify for assertions and test suites.

Test individual functions in isolation. Location: internal/*/

make test-fast # Run all unit tests
go test -v ./internal/auth/... # Test specific package
go test -v -run TestPasswordHasher ... # Run specific test

Example:

func TestPasswordHasher(t *testing.T) {
    hasher := auth.NewPasswordHasher()
    hash, err := hasher.HashPassword("password123")
    assert.NoError(t, err)
    assert.NoError(t, hasher.VerifyPassword(hash, "password123"))
    assert.Error(t, hasher.VerifyPassword(hash, "wrong"))
}

Test components with real database. Location: internal/api/*_integration_test.go

go test -v ./internal/api/... -run Integration # Run all integration tests

Example:

func (s *Suite) TestUploadFile() {
    resp := s.tc.NewRequest("POST", "/api/v1/storage/buckets/test/files").
        WithFile("file", "test.txt", []byte("Hello")).
        Send()
    resp.AssertStatus(201)
}

Test full application stack. Location: test/e2e/

make test-e2e # Run all E2E tests
make test-full # Run unit + integration + E2E

Example:

func (s *Suite) TestSignUpAndSignIn() {
    resp := s.tc.NewRequest("POST", "/api/v1/auth/signup").
        WithBody(map[string]interface{}{"email": "test@example.com", "password": "pass"}).
        Send()
    resp.AssertStatus(201)
    s.Contains(resp.JSON(), "access_token")
}

Test system performance under load. Location: test/performance/

go test -bench=. -benchmem ./test/performance/... # Run benchmarks

TestContext provides test dependencies:

tc := test.NewTestContext(t)
defer tc.Close()
// tc.DB, tc.Server, tc.App, tc.Config

HTTP requests:

resp := tc.NewRequest("POST", "/api/v1/tables/users").
    WithBody(map[string]interface{}{"name": "John"}).
    WithAuth("Bearer token").
    Send()
resp.AssertStatus(201)

Database:

tc.ExecuteSQL("INSERT INTO users...")
tc.CleanDatabase()

The TypeScript SDK uses Vitest for testing with mocking and assertions.

cd sdk
# Run all tests
npm test
# Run tests in watch mode
npm test -- --watch
# Run specific test file
npm test -- src/auth.test.ts
# Run with UI
npm run test:ui
# Type checking
npm run type-check

Location: sdk/src/*.test.ts

Available Test Files:

  • src/auth.test.ts - Authentication tests
  • src/admin.test.ts - Admin SDK tests
  • src/management.test.ts - Management SDK tests
  • src/query-builder.test.ts - Query builder tests
  • src/realtime.test.ts - Realtime tests
  • src/storage.test.ts - Storage tests
  • src/aggregations.test.ts - Aggregation tests

Authentication Test (from sdk/src/auth.test.ts):

import { describe, it, expect, beforeEach, vi } from "vitest";
import { FluxbaseAuth } from "./auth";
import type { FluxbaseFetch } from "./fetch";

describe("FluxbaseAuth", () => {
  let auth: FluxbaseAuth;
  let mockFetch: FluxbaseFetch;

  beforeEach(() => {
    // Create mock fetch
    mockFetch = {
      get: vi.fn(),
      post: vi.fn(),
      patch: vi.fn(),
      delete: vi.fn(),
    } as unknown as FluxbaseFetch;
    auth = new FluxbaseAuth(mockFetch);
  });

  describe("signUp", () => {
    it("should sign up a new user", async () => {
      const mockResponse = {
        user: {
          id: "user-123",
          email: "test@example.com",
        },
        access_token: "token-123",
        refresh_token: "refresh-123",
      };
      vi.mocked(mockFetch.post).mockResolvedValue(mockResponse);

      const result = await auth.signUp({
        email: "test@example.com",
        password: "password123",
      });

      expect(mockFetch.post).toHaveBeenCalledWith("/api/v1/auth/signup", {
        email: "test@example.com",
        password: "password123",
      });
      expect(result).toEqual(mockResponse);
      expect(result.user.email).toBe("test@example.com");
      expect(result.access_token).toBe("token-123");
    });

    it("should handle sign up errors", async () => {
      vi.mocked(mockFetch.post).mockRejectedValue(
        new Error("Email already exists")
      );

      await expect(
        auth.signUp({
          email: "test@example.com",
          password: "password123",
        })
      ).rejects.toThrow("Email already exists");
    });
  });

  describe("signIn", () => {
    it("should sign in existing user", async () => {
      const mockResponse = {
        user: { id: "user-123", email: "test@example.com" },
        access_token: "token-123",
        refresh_token: "refresh-123",
      };
      vi.mocked(mockFetch.post).mockResolvedValue(mockResponse);

      const result = await auth.signIn({
        email: "test@example.com",
        password: "password123",
      });

      expect(mockFetch.post).toHaveBeenCalledWith("/api/v1/auth/signin", {
        email: "test@example.com",
        password: "password123",
      });
      expect(result.access_token).toBe("token-123");
    });
  });

  describe("signOut", () => {
    it("should sign out user", async () => {
      vi.mocked(mockFetch.post).mockResolvedValue({});
      await auth.signOut();
      expect(mockFetch.post).toHaveBeenCalledWith("/api/v1/auth/signout", {});
    });
  });
});

Mock HTTP Fetch:

import { vi } from "vitest";

const mockFetch = {
  get: vi.fn(),
  post: vi.fn(),
  patch: vi.fn(),
  delete: vi.fn(),
};

// Set mock return value
vi.mocked(mockFetch.post).mockResolvedValue({
  data: { id: 123, name: "Test" },
});

// Verify mock was called
expect(mockFetch.post).toHaveBeenCalledWith("/api/endpoint", { data: "test" });

// Check call count
expect(mockFetch.post).toHaveBeenCalledTimes(1);

Mock Modules:

import { vi } from "vitest";

vi.mock("./storage", () => ({
  FluxbaseStorage: vi.fn(() => ({
    upload: vi.fn().mockResolvedValue({ url: "https://example.com/file.jpg" }),
    download: vi.fn().mockResolvedValue(new Blob()),
  })),
}));

Tests use a separate fluxbase_test database:

Environment Variables:

# Test database
FLUXBASE_DATABASE_HOST=postgres
FLUXBASE_DATABASE_USER=fluxbase_app
FLUXBASE_DATABASE_PASSWORD=fluxbase_app_password
FLUXBASE_DATABASE_DATABASE=fluxbase_test
# Test services
FLUXBASE_EMAIL_SMTP_HOST=mailhog
FLUXBASE_STORAGE_S3_ENDPOINT=minio:9000

Test Config (from test/e2e_helpers.go):

func GetTestConfig() *config.Config {
    return &config.Config{
        Server: config.ServerConfig{
            Address:      ":8081",
            ReadTimeout:  15 * time.Second,
            WriteTimeout: 15 * time.Second,
            IdleTimeout:  60 * time.Second,
            BodyLimit:    10 * 1024 * 1024,
        },
        Database: config.DatabaseConfig{
            Host:           "postgres",
            Port:           5432,
            User:           "fluxbase_app",
            Password:       "fluxbase_app_password",
            Database:       "fluxbase_test",
            SSLMode:        "disable",
            MaxConnections: 25,
            MinConnections: 5,
        },
        Auth: config.AuthConfig{
            JWTSecret:     "test-secret-key-for-testing-only",
            JWTExpiry:     15 * time.Minute,
            RefreshExpiry: 168 * time.Hour,
            EnableSignup:  true,
        },
        Debug: true,
    }
}

SDK tests use mocked HTTP calls and don’t require a running server.

Vitest Configuration (sdk/vitest.config.ts):

import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    globals: true,
    environment: "node",
    coverage: {
      provider: "v8",
      reporter: ["text", "json", "html"],
      exclude: ["node_modules/", "dist/"],
    },
  },
});

Tests run automatically in GitHub Actions on every push and pull request.

GitHub Actions Workflow (.github/workflows/test.yml):

name: Tests

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main, develop]

jobs:
  test-backend:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgis/postgis:18-3.6
        env:
          POSTGRES_PASSWORD: postgres
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432
    steps:
      - uses: actions/checkout@v5
      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: "1.22"
      - name: Setup test database
        run: ./test/scripts/setup_test_db.sh
      - name: Run tests
        run: make test-full
      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage.out

  test-sdk:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20"
      - name: Install dependencies
        working-directory: ./sdk
        run: npm install
      - name: Run SDK tests
        working-directory: ./sdk
        run: npm test
      - name: Type check
        working-directory: ./sdk
        run: npm run type-check

Generate test coverage report:

# Generate coverage report
go test -cover -coverprofile=coverage.out ./...
# View coverage in terminal
go tool cover -func=coverage.out
# Generate HTML report
go tool cover -html=coverage.out -o coverage.html
# Open in browser
open coverage.html # macOS
xdg-open coverage.html # Linux

Coverage Targets:

  • Overall: > 70%
  • Critical paths (auth, RLS, security): > 90%
  • Utilities: > 80%
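One way to enforce the backend target is a small CI gate that parses `go tool cover -func` output; a minimal sketch, assuming a 70% threshold (the sample `total:` line below stands in for real output):

```shell
# Sample "total:" line as printed by `go tool cover -func=coverage.out`;
# in CI, capture the real command's output instead of this variable.
cover_output="total:  (statements)  82.4%"

# Pull the percentage out of the total line.
total=$(printf '%s\n' "$cover_output" | awk '/^total:/ {gsub(/%/, "", $NF); print $NF}')
echo "total coverage: $total%"

# Exit non-zero when coverage falls below the 70% target.
if awk -v t="$total" 'BEGIN { exit !(t >= 70) }'; then
  echo "coverage OK"
else
  echo "coverage below target" >&2
  exit 1
fi
```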
cd sdk
# Run tests with coverage
npm test -- --coverage
# View coverage report
open coverage/index.html

Coverage Targets:

  • Overall: > 80%
  • Core modules: > 90%

Best practices:

  • Use test suites: organize related tests with testify/suite for shared setup/teardown
  • Test isolation: clean data between tests (TRUNCATE TABLE, vi.clearAllMocks())
  • Descriptive names: TestCreateUserWithValidEmail, not TestUser
  • Test error cases: cover the happy path AND the error conditions
  • Parallel tests: use t.Parallel() for independent unit tests
  • Skip slow tests: guard with if testing.Short() { t.Skip() }, then run go test -short
  • Table-driven tests: cover multiple scenarios efficiently with test tables

Example table-driven test:

func TestValidateEmail(t *testing.T) {
    tests := []struct {
        email   string
        wantErr bool
    }{
        {"user@example.com", false},
        {"invalid", true},
    }
    for _, tt := range tests {
        err := ValidateEmail(tt.email)
        if (err != nil) != tt.wantErr {
            t.Errorf("ValidateEmail(%q) error = %v, wantErr %v", tt.email, err, tt.wantErr)
        }
    }
}

Always mock external services in tests:

// Mock HTTP client
vi.mock("./http-client", () => ({
  httpClient: {
    post: vi.fn().mockResolvedValue({ data: "mocked" }),
  },
}));

// Mock WebSocket
vi.mock("./websocket", () => ({
  WebSocketClient: vi.fn(() => ({
    connect: vi.fn(),
    send: vi.fn(),
    close: vi.fn(),
  })),
}));

Run Single Test:

go test -v -run TestSpecificTest ./test/e2e/...

Run with Race Detector:

go test -race ./...
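When the detector does flag something, the fix is usually to guard the shared state. A minimal sketch of the pattern; the `Counter` type here is illustrative, not part of Fluxbase:

```go
package main

import (
    "fmt"
    "sync"
)

// Counter serializes access to n with a mutex, so concurrent tests
// (for example, ones using t.Parallel()) can share it safely.
type Counter struct {
    mu sync.Mutex
    n  int
}

func (c *Counter) Inc() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.n++
}

func (c *Counter) Value() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.n
}

func main() {
    var wg sync.WaitGroup
    c := &Counter{}
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            c.Inc()
        }()
    }
    wg.Wait()
    // Always 100; without the mutex, `go test -race` would report a data race.
    fmt.Println(c.Value())
}
```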

Enable Debug Logging:

FLUXBASE_DEBUG=true go test -v ./test/e2e/...

Use Debugger (Delve):

# Install delve
go install github.com/go-delve/delve/cmd/dlv@latest
# Debug specific test
dlv test ./test/e2e -- -test.run TestAuthSuite

Run with Verbose Output:

npm test -- --reporter=verbose

Debug in VS Code:

Add to .vscode/launch.json:

{
  "type": "node",
  "request": "launch",
  "name": "Debug Vitest Tests",
  "runtimeExecutable": "npm",
  "runtimeArgs": ["test", "--", "--run"],
  "console": "integratedTerminal",
  "internalConsoleOptions": "neverOpen"
}

Watch Mode:

npm test -- --watch

Common issues and solutions:

  • Database connection: check with pg_isready -h postgres -U postgres, then restart the database
  • Port conflicts: find the process with lsof -i :8080, then kill -9 <PID>
  • Race conditions: run go test -race ./... and fix any shared-state access it reports
  • Flaky tests: use deterministic time, proper mocking, and ensure cleanup between tests

func BenchmarkPasswordHashing(b *testing.B) {
    hasher := auth.NewPasswordHasher()
    password := "MySecurePassword123!"
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        _, _ = hasher.HashPassword(password)
    }
}

func BenchmarkQueryExecution(b *testing.B) {
    tc := test.NewTestContext(&testing.T{})
    defer tc.Close()
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        _ = tc.QuerySQL("SELECT * FROM users LIMIT 100")
    }
}

Run Benchmarks:

# Run all benchmarks
go test -bench=. ./...
# Run specific benchmark
go test -bench=BenchmarkPasswordHashing ./internal/auth/...
# With memory profiling
go test -bench=. -benchmem ./...
# CPU profiling
go test -bench=. -cpuprofile=cpu.prof ./...
go tool pprof cpu.prof

Fluxbase provides comprehensive testing infrastructure:

  • Backend Testing: Unit, integration, E2E, performance tests with Go
  • SDK Testing: Vitest-based TypeScript testing with mocking
  • Test Helpers: TestContext, HTTP builders, database utilities
  • CI/CD Integration: Automated tests on every push
  • Coverage Reporting: Track test coverage over time
  • Performance Testing: Benchmarks and load tests
  • Debugging Tools: Race detector, verbose logging, debugger support

Write tests early, test all paths (happy and error), maintain high coverage, and run tests frequently to ensure code quality and prevent regressions.