Credit-aware multi-cloud storage behind one endpoint
Experimental Guide: this may contain errors as our technology continues to evolve. If you encounter any problems, please do not hesitate to contact us on Discord and share your feedback.
The Problem
Today, developers juggle credits across AWS, GCP, and Azure (and sometimes DePIN providers like Storj or Akave), while each application usually speaks only one storage API. You end up maintaining multiple SDKs, keys, and endpoints; switching providers or enforcing hard limits (“stop writes when free credits run out”) is painful. Visibility is fragmented, and failover is manual.
Flashback gives you a single integration that works across backends. Bridge Nodes speak S3, GCS, or Azure Blob; your app uses one protocol and one set of repo-scoped keys, while Flashback routes to the bucket/provider you select via policy and statistics. Repositories aggregate multiple vendor buckets and expose usage/latency stats and quotas so you can automatically shift writes as credits are consumed.
Prerequisites
Step-by-Step Deployment Recommendations
Model your policy (“where do writes go?”)
Two common strategies:
A. Credit-harvesting (recommended initially):
Maintain one bucket per provider (e.g., app-logs-aws, app-logs-gcp, app-logs-azure).
At write time, choose the target bucket based on the remaining monthly credit/soft cap per provider.
Reads may come from any bucket (or a designated “primary”).
B. Hot+Cold:
Write to a hot bucket (closest/fastest), mirror to a cheaper “cold” bucket (DePIN or another cloud) on a schedule.
Use native provider copy when possible; otherwise Flashback streams cross-provider. (See storage ops/limits.)
We’ll implement A below and show where to adapt for B.
Create Buckets and Repository in Flashback
2.1 Create/Link Buckets
In Storage → Buckets, add one bucket per provider. Provide least-privilege credentials/role and validate. (See Bucket endpoints and validation)
2.2 Create a Repository
In Storage → Repositories, create app-data (example) and attach the buckets you created. (Repo CRUD and stats endpoints: /repo, /repo/stats.)
2.3 Generate Repository API keys
In the repository’s API Keys tab, create a WRITE key for your writers and a READ key for read-only flows. Save secrets to your vault. (API key endpoints live under /repo/{repoId}/apikey.)
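If you want to script key creation rather than use the dashboard, the backend exposes the API key endpoints under /repo/{repoId}/apikey. The request method, body fields, and response shape in this sketch are assumptions; check the API reference before relying on them.
// createRepoKey.ts (sketch; the request/response shape here is an assumption)
const BASE = "https://backend.flashback.tech";

export async function createRepoApiKey(repoId: string, permission: "READ" | "WRITE") {
  // Hypothetical body: the real endpoint may expect different field names.
  const r = await fetch(`${BASE}/repo/${repoId}/apikey`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.FB_JWT}`, // token with repo access
    },
    body: JSON.stringify({ permission }),
  });
  if (!r.ok) throw new Error(`create api key: ${r.status}`);
  return r.json(); // store the returned key id/secret in your vault
}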
Wire your backend to Flashback
Create clients that point to a Bridge Node endpoint. Keep a small per-endpoint client cache so you can swap endpoints quickly if you move traffic. (Bridge Nodes and the endpoint pattern are covered at docs.flashback.tech.)
3.1 S3 — Node.js (@aws-sdk/client-s3)
// fbS3Client.ts
import { S3Client } from "@aws-sdk/client-s3";

const clients = new Map<string, S3Client>();

export function s3ClientFor(endpoint: string, keyId: string, secret: string) {
  if (!clients.has(endpoint)) {
    clients.set(endpoint, new S3Client({
      endpoint,
      region: "us-east-1", // required by the SDK, not used by Flashback
      credentials: { accessKeyId: keyId, secretAccessKey: secret },
      forcePathStyle: true // recommended with custom endpoints
    }));
  }
  return clients.get(endpoint)!;
}
Usage:
import { PutObjectCommand } from "@aws-sdk/client-s3";
import { s3ClientFor } from "./fbS3Client";

const ENDPOINT = process.env.FB_S3_ENDPOINT!;  // e.g., https://s3-us-east-1-aws.flashback.tech
const KEY_ID = process.env.FB_KEY_ID!;         // repo API key id
const KEY_SECRET = process.env.FB_KEY_SECRET!; // repo API key secret

export async function putS3(bucket: string, key: string, body: Buffer) {
  const s3 = s3ClientFor(ENDPOINT, KEY_ID, KEY_SECRET);
  await s3.send(new PutObjectCommand({ Bucket: bucket, Key: key, Body: body }));
}
3.2 S3 — Python (boto3)
# fb_s3_client.py
import boto3
from botocore.client import Config

def s3_client_for(endpoint: str, key_id: str, key_secret: str):
    session = boto3.session.Session(
        aws_access_key_id=key_id, aws_secret_access_key=key_secret
    )
    return session.client(
        "s3",
        endpoint_url=endpoint,
        config=Config(signature_version="s3v4")
    )
Usage:
from fb_s3_client import s3_client_for
import os

ENDPOINT = os.environ["FB_S3_ENDPOINT"]
KEY_ID = os.environ["FB_KEY_ID"]
KEY_SECRET = os.environ["FB_KEY_SECRET"]

def put_s3(bucket: str, key: str, body: bytes):
    s3 = s3_client_for(ENDPOINT, KEY_ID, KEY_SECRET)
    s3.put_object(Bucket=bucket, Key=key, Body=body)
3.3 GCS — Node.js (@google-cloud/storage) with custom endpoint
// fbGcsClient.ts
import { Storage } from "@google-cloud/storage";

let cached: Storage | null = null;

export function gcsClientFor(endpoint: string, keyId: string, keySecret: string) {
  if (!cached) {
    cached = new Storage({
      apiEndpoint: endpoint, // e.g., https://gcs-eu-central-1-gcp.flashback.tech
      credentials: {
        client_email: keyId,  // if your lib expects SA-shaped creds; otherwise prefer the S3 SDK against the GCS Bridge
        private_key: keySecret,
      } as any,
      projectId: "flashback",
    });
  }
  return cached;
}
Note: many teams standardize on S3 SDKs against Flashback’s S3 Bridge endpoint even when the underlying bucket is GCS, to avoid OAuth SA constraints.
3.4 Azure Blob — Node.js (@azure/storage-blob) with custom endpoint
// fbBlobClient.ts
import { BlobServiceClient, StorageSharedKeyCredential } from "@azure/storage-blob";

let cached: BlobServiceClient | null = null;

export function blobClientFor(endpoint: string, keyId: string, keySecret: string) {
  if (!cached) {
    const cred = new StorageSharedKeyCredential(keyId, keySecret);
    cached = new BlobServiceClient(endpoint, cred);
  }
  return cached;
}
Tip: You can also stick to S3 across services for simplicity—just use the S3 Bridge endpoint closest to your buckets. Flashback translates calls to the underlying provider.
Pull usage/latency statistics (for policy decisions)
You’ll periodically poll usage to know how much credit you’ve consumed and when to switch providers. Minimal fetchers:
// flashbackStats.ts
const BASE = "https://backend.flashback.tech";

const headers = {
  Accept: "application/json",
  Authorization: `Bearer ${process.env.FB_JWT}` // token with repo access
};

export async function getRepoStats(repoId?: string) {
  const url = repoId ? `${BASE}/repo/stats?repoId=${repoId}` : `${BASE}/repo/stats`;
  const r = await fetch(url, { headers });
  if (!r.ok) throw new Error(`repo stats: ${r.status}`);
  return r.json();
}

export async function getBucketStats(bucketId?: string) {
  const url = bucketId ? `${BASE}/bucket/stats?bucketId=${bucketId}` : `${BASE}/bucket/stats`;
  const r = await fetch(url, { headers });
  if (!r.ok) throw new Error(`bucket stats: ${r.status}`);
  return r.json();
}

export async function getDailyStats() {
  const r = await fetch(`${BASE}/stats/daily`, { headers });
  if (!r.ok) throw new Error(`daily stats: ${r.status}`);
  return r.json();
}

export async function getMinuteStats() {
  const r = await fetch(`${BASE}/stats/minute`, { headers });
  if (!r.ok) throw new Error(`minute stats: ${r.status}`);
  return r.json();
}
Repo statistics endpoint: GET /repo/stats (returns repository-level stats).
Bucket statistics endpoint: GET /bucket/stats (returns totalUploadBytes, totalDownloadBytes, etc.; see the helper sketch after this list).
Performance statistics: GET /stats/daily and GET /stats/minute for daily/minute aggregates.
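If you prefer to feed the chooser below with per-bucket numbers instead of repo-level aggregates, you can derive a rough used-bytes figure from GET /bucket/stats. Only totalUploadBytes/totalDownloadBytes are mentioned above; the response nesting in this sketch is an assumption, so adapt it to the payload you actually receive.
// bucketUsage.ts (sketch; the response nesting is an assumption)
import { getBucketStats } from "./flashbackStats";

export type BucketUsage = { bucketId: string; usedBytes: number };

export async function bucketUsage(bucketId: string): Promise<BucketUsage> {
  const stats = await getBucketStats(bucketId);
  // Assumed shape: { data: [{ totalUploadBytes, totalDownloadBytes, ... }] }
  const row = stats?.data?.[0] ?? {};
  // Treat cumulative uploads as a rough proxy for consumed credit on this bucket.
  return { bucketId, usedBytes: Number(row.totalUploadBytes ?? 0) };
}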
Implement the credit-aware bucket chooser
Define soft caps per provider and pick the first bucket still under its cap. If a write fails due to quota or a transient node issue, fall back to the next provider.
// chooseBucket.ts
import { getRepoStats } from "./flashbackStats";

const ORDER = ["app-logs-gcp", "app-logs-aws", "app-logs-azure"]; // preference order
const SOFT_CAPS: Record<string, number> = {
  "app-logs-gcp": 0.8,
  "app-logs-aws": 0.8,
  "app-logs-azure": 1.0
};

type BucketMonth = { used_bytes?: number; soft_limit_bytes?: number };

export async function chooseWriteBucket(repoId: string) {
  const stats = await getRepoStats(repoId);
  // Shape may evolve; adapt to your actual response fields.
  const buckets: Record<string, BucketMonth> = stats?.data?.[0]?.buckets ?? {};
  for (const name of ORDER) {
    const used = buckets[name]?.used_bytes ?? 0;
    const limit = buckets[name]?.soft_limit_bytes ?? Number.POSITIVE_INFINITY;
    if (used < SOFT_CAPS[name] * limit) return name;
  }
  return ORDER[ORDER.length - 1];
}
Write path with fallback (S3 example):
// writeObject.ts
import { PutObjectCommand } from "@aws-sdk/client-s3";
import { s3ClientFor } from "./fbS3Client";
import { chooseWriteBucket } from "./chooseBucket";

const ENDPOINT = process.env.FB_S3_ENDPOINT!;
const KEY_ID = process.env.FB_KEY_ID!;
const KEY_SECRET = process.env.FB_KEY_SECRET!;
const REPO_ID = process.env.FB_REPO_ID!;

async function fallbackBucket(current: string) {
  const order = ["app-logs-gcp", "app-logs-aws", "app-logs-azure"];
  const i = order.indexOf(current);
  return order[(i + 1) % order.length];
}

export async function putObject(key: string, body: Buffer) {
  const s3 = s3ClientFor(ENDPOINT, KEY_ID, KEY_SECRET);
  const bucket = await chooseWriteBucket(REPO_ID);
  try {
    await s3.send(new PutObjectCommand({ Bucket: bucket, Key: key, Body: body }));
  } catch (e: any) {
    const msg = String(e?.message ?? e);
    const quota = msg.includes("QUOTA_EXCEEDED") || msg.includes("403") || msg.includes("429");
    if (quota) {
      const alt = await fallbackBucket(bucket);
      await s3.send(new PutObjectCommand({ Bucket: alt, Key: key, Body: body }));
    } else {
      throw e;
    }
  }
}
For latency-first routing, consult node status and minute stats, then apply your credit cap as a gate. (Bridge Node endpoint pattern & status guidance are documented under Bridge Nodes.)
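A minimal sketch of that gate, assuming the minute-stats response can be reduced to a recent latency figure per bucket (the bucket/latencyMs fields below are placeholders) and that you supply an isUnderCap predicate, e.g. built from the soft-cap check in chooseBucket.ts:
// latencyFirst.ts (sketch; the minute-stats field names are assumptions)
import { getMinuteStats } from "./flashbackStats";

const CANDIDATES = ["app-logs-gcp", "app-logs-aws", "app-logs-azure"];

export async function latencyFirstBucket(isUnderCap: (bucket: string) => Promise<boolean>) {
  const minute = await getMinuteStats();
  // Assumed shape: { data: [{ bucket: string, latencyMs: number }, ...] }
  const rows: Array<{ bucket: string; latencyMs: number }> = minute?.data ?? [];
  const latencyOf = (name: string) => rows.find(r => r.bucket === name)?.latencyMs ?? Infinity;
  const byLatency = [...CANDIDATES].sort((a, b) => latencyOf(a) - latencyOf(b));
  for (const name of byLatency) {
    if (await isUnderCap(name)) return name; // credit cap applied as the gate
  }
  return byLatency[0]; // everything over cap: fall back to the fastest
}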
Configure quotas and alerts
In your policy, treat QUOTA_EXCEEDED (or HTTP 403/429 as mapped by your SDK) as a soft read-only signal for that bucket/repo and switch to the next provider automatically.
Poll stats/daily to drive dashboards/alerts on usage growth, and stats/minute for spikes or SLOs (a polling sketch follows this list).
If you attach per-bucket soft limits in your internal config, keep them visible to ops so they match provider credit allocations.
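A small polling sketch for the dashboards/alerts item above, assuming the daily-stats response can be reduced to a per-day upload total (the date/totalUploadBytes fields and the 50 GiB threshold are placeholders):
// usageAlert.ts (sketch; response fields and threshold are assumptions)
import { getDailyStats } from "./flashbackStats";

const DAILY_GROWTH_ALERT_BYTES = 50 * 1024 ** 3; // example threshold: 50 GiB/day

export async function checkDailyGrowth(notify: (msg: string) => void) {
  const daily = await getDailyStats();
  // Assumed shape: { data: [{ date: string, totalUploadBytes: number }, ...] }, newest last
  const rows: Array<{ date: string; totalUploadBytes: number }> = daily?.data ?? [];
  const today = rows[rows.length - 1];
  if (today && today.totalUploadBytes > DAILY_GROWTH_ALERT_BYTES) {
    notify(`Upload volume ${today.totalUploadBytes} bytes on ${today.date} exceeds the alert threshold`);
  }
}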
(Optional) Add a cold tier (Hot+Cold strategy)
Attach a DePIN (or cheaper cloud) bucket to the same repository.
Run a periodic copy job from hot → cold. Prefer native provider copy when both buckets are on the same provider/region; otherwise Flashback will stream cross-provider. (See “Storage API operations” for current coverage/limits.)
Periodically verify restores (sample reads from cold weekly).
Skeleton (Node):
import { CopyObjectCommand } from "@aws-sdk/client-s3";
import { s3ClientFor } from "./fbS3Client";

const s3 = s3ClientFor(process.env.FB_S3_ENDPOINT!, process.env.FB_KEY_ID!, process.env.FB_KEY_SECRET!);

export async function mirrorToCold(hotBucket: string, coldBucket: string, key: string) {
  // Use native copy if hot/cold share the same provider/region behind the same Bridge;
  // otherwise stream GetObject -> PutObject to the other provider.
  await s3.send(new CopyObjectCommand({
    Bucket: coldBucket,
    Key: key,
    CopySource: `/${hotBucket}/${encodeURIComponent(key)}`
  }));
}
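When hot and cold live on different providers, native copy is not an option and you stream instead, as the comment above notes. A minimal sketch using the same client (GetObject from hot, PutObject to cold); it buffers the object in memory, so large files should use multipart upload, subject to the limitations noted later:
// mirrorStream.ts (sketch; buffers the whole object, so keep it for small/medium files)
import { GetObjectCommand, PutObjectCommand } from "@aws-sdk/client-s3";
import { s3ClientFor } from "./fbS3Client";

const s3 = s3ClientFor(process.env.FB_S3_ENDPOINT!, process.env.FB_KEY_ID!, process.env.FB_KEY_SECRET!);

export async function mirrorToColdStreaming(hotBucket: string, coldBucket: string, key: string) {
  const src = await s3.send(new GetObjectCommand({ Bucket: hotBucket, Key: key }));
  const body = Buffer.from(await src.Body!.transformToByteArray());
  await s3.send(new PutObjectCommand({
    Bucket: coldBucket,
    Key: key,
    Body: body,
    ContentType: src.ContentType,
  }));
}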
Validate & roll out
Smoke tests
Put/Get/Delete small objects to each provider bucket via Flashback (a sketch follows these checks).
Force a quota breach in staging; verify writers switch providers or enter read-only mode.
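A throwaway smoke-test sketch covering the first check (bucket names match the examples in this guide; adjust them to yours):
// smokeTest.ts (sketch; run against staging buckets)
import { PutObjectCommand, GetObjectCommand, DeleteObjectCommand } from "@aws-sdk/client-s3";
import { s3ClientFor } from "./fbS3Client";

const BUCKETS = ["app-logs-aws", "app-logs-gcp", "app-logs-azure"];

export async function smokeTest() {
  const s3 = s3ClientFor(process.env.FB_S3_ENDPOINT!, process.env.FB_KEY_ID!, process.env.FB_KEY_SECRET!);
  for (const bucket of BUCKETS) {
    const key = `smoke/${Date.now()}.txt`;
    await s3.send(new PutObjectCommand({ Bucket: bucket, Key: key, Body: "ping" }));
    const got = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
    if ((await got.Body!.transformToString()) !== "ping") throw new Error(`readback mismatch on ${bucket}`);
    await s3.send(new DeleteObjectCommand({ Bucket: bucket, Key: key }));
    console.log(`ok: ${bucket}`);
  }
}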
Performance baseline
For each region/provider, measure HeadObject/PutObject latency and set SLOs (a timing sketch follows).
Prefer the Bridge Node endpoint closest to your traffic patterns (the Bridge Nodes doc shows the pattern and examples).
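A quick way to capture that baseline per endpoint, assuming a small existing probe object (the key and sample count below are placeholders):
// latencyBaseline.ts (sketch; probe object and sample count are placeholders)
import { HeadObjectCommand } from "@aws-sdk/client-s3";
import { s3ClientFor } from "./fbS3Client";

export async function headLatencyMs(endpoint: string, bucket: string, key: string, samples = 5) {
  const s3 = s3ClientFor(endpoint, process.env.FB_KEY_ID!, process.env.FB_KEY_SECRET!);
  const times: number[] = [];
  for (let i = 0; i < samples; i++) {
    const start = Date.now();
    await s3.send(new HeadObjectCommand({ Bucket: bucket, Key: key }));
    times.push(Date.now() - start);
  }
  times.sort((a, b) => a - b);
  return { p50: times[Math.floor(samples / 2)], max: times[samples - 1] };
}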
Gradual rollout
Start with low-risk object classes (logs, analytics artifacts).
Expand to user-visible data once stability and observability are proven.
Operations Playbook (TL;DR)
Endpoints: Use https://<api>-<region>-<provider>.flashback.tech. Keep a per-endpoint client cache.
Keys: Repo-scoped READ/WRITE keys. Store them in a vault. Rotate them regularly.
Routing: Policy chooses the bucket by credit usage (soft caps). On QUOTA_EXCEEDED/403/429 → switch. (Use /repo/stats and /bucket/stats.)
Observability: Poll /stats/daily and /stats/minute for dashboards/alerts.
Resilience: Keep at least one alternate provider in the repository. Test fallback monthly.
Hot+Cold: Optional mirroring to DePIN or cheaper cloud. Verify restores weekly.
Limits: Some cross-provider operations have constraints; check “Storage API operations” as coverage expands.
Environment Variables (example)
# Bridge endpoint (choose the one that matches your primary protocol/region/provider)
FB_S3_ENDPOINT=https://s3-us-east-1-aws.flashback.tech
# Repo keys (WRITE for writers, READ for read-only flows)
FB_KEY_ID=...
FB_KEY_SECRET=...
# Backend API access (for stats, admin calls)
FB_JWT=...
# Repo identity (used by chooser)
FB_REPO_ID=...
Notes on limitations & compatibility
Storage operation coverage is improving; basic CRUD is supported. Cross-provider multipart uploads and some advanced features may be limited—design large-file flows accordingly. See Storage API operations status.
When in doubt, standardize your app on one protocol (usually S3) and let Flashback translate to the underlying bucket/provider.
References
Bridge Nodes: endpoint pattern and examples (S3/GCS/Azure).
Repositories: create/update/list and stats (/repo, /repo/stats).
API Keys: repo-scoped key management.
Bucket stats: GET /bucket/stats fields (e.g., totalUploadBytes, etc.).
Performance statistics: GET /stats/daily, GET /stats/minute.
Explore use cases: endpoints summary and prerequisites.
Appendix: Minimal Go example (S3)
// fb_s3_client.go
package flashback

import (
    "context"
    "net/http"
    "os"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/credentials"
    "github.com/aws/aws-sdk-go-v2/service/s3"
    "github.com/aws/smithy-go/logging"
)

var clients = map[string]*s3.Client{}

func S3ClientFor(endpoint, keyId, keySecret string) *s3.Client {
    if c, ok := clients[endpoint]; ok {
        return c
    }
    cfg, _ := config.LoadDefaultConfig(context.TODO())
    client := s3.NewFromConfig(cfg, func(o *s3.Options) {
        o.BaseEndpoint = aws.String(endpoint)
        o.UsePathStyle = true
        o.Credentials = aws.NewCredentialsCache(
            credentials.NewStaticCredentialsProvider(keyId, keySecret, ""))
        o.Region = "us-east-1" // required by the SDK, not used by Flashback
        o.HTTPClient = &http.Client{}
        o.Logger = logging.NewStandardLogger(os.Stdout)
    })
    clients[endpoint] = client
    return client
}
// put.go (usage sketch; imports context, os, strings, the s3 package, and the flashback package above)
func putExample() error {
    bucket := "app-logs-gcp"
    key := "hello.txt"
    body := strings.NewReader("hi")
    cli := flashback.S3ClientFor(os.Getenv("FB_S3_ENDPOINT"), os.Getenv("FB_KEY_ID"), os.Getenv("FB_KEY_SECRET"))
    _, err := cli.PutObject(context.TODO(), &s3.PutObjectInput{Bucket: &bucket, Key: &key, Body: body})
    if err != nil {
        // fallback to the next bucket in your preference order
    }
    return err
}