Credit-aware multi-cloud storage behind one endpoint
Experimental Guide: this guide may contain errors as our technology continues to evolve. If you encounter any problems, please do not hesitate to contact us on Discord and share your feedback.
The Problem
Today, developers juggle credits across AWS, GCP, and Azure (and sometimes DePIN providers like Storj or Akave), while each application usually speaks only one storage API. You end up maintaining multiple SDKs, keys, and endpoints; switching providers or enforcing hard limits (“stop writes when free credits run out”) is painful. Visibility is fragmented, and failover is manual.
Flashback gives you a single integration that works across backends. Bridge Nodes speak S3, GCS, or Azure Blob; your app uses one protocol and one set of repo-scoped keys, while Flashback routes to the bucket/provider you select via policy and statistics. Repositories aggregate multiple vendor buckets and expose usage/latency stats and quotas so you can automatically shift writes as credits are consumed.
Prerequisites
Step-by-Step Deployment Recommendations
1. Model your policy (“where do writes go?”)
Two common strategies:
A. Credit-harvesting (recommended initially):
Maintain one bucket per provider (e.g., app-logs-aws, app-logs-gcp, app-logs-azure).
At write time, choose the target bucket based on the remaining monthly credit/soft cap per provider.
Reads may come from any bucket (or a designated “primary”).
B. Hot+Cold:
Write to a hot bucket (closest/fastest), mirror to a cheaper “cold” bucket (DePIN or another cloud) on a schedule.
Use native provider copy when possible; otherwise Flashback streams cross-provider. (See storage ops/limits.)
We’ll implement A below and show where to adapt for B.
2. Create Buckets and Repository in Flashback
2.1 Create/Link Buckets
In Storage → Buckets, add one bucket per provider. Provide least-privilege credentials/role and validate. (See Bucket endpoints and validation)
2.2 Create a Repository
In Storage → Repositories, create app-data (example) and attach the buckets you created. (Repo CRUD and stats endpoints: /repo, /repo/stats.)
2.3 Generate Repository API keys
In the repository’s API Keys tab, create a WRITE key for your writers and a READ key for read-only flows. Save secrets to your vault. (API keys endpoints under /repo/{repoId}/apikey.)
3. Wire your backend to Flashback
Create clients that point to a Bridge Node endpoint. Keep a small per-endpoint client cache so you can swap endpoints quickly if you move traffic. (Bridge Nodes and the endpoint pattern are documented at docs.flashback.tech.)
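A per-endpoint cache can be sketched independently of any SDK. In this sketch, makeClient is a hypothetical factory you supply (for example, a function wrapping new S3Client({ endpoint, ... })):

```javascript
// Cache one client per Bridge Node endpoint so moving traffic between
// endpoints is cheap and does not re-create clients on every request.
// `makeClient` is any factory, e.g. (endpoint) => new S3Client({ endpoint, ... }).
function createClientCache(makeClient) {
  const clients = new Map();
  return function getClient(endpoint) {
    if (!clients.has(endpoint)) {
      clients.set(endpoint, makeClient(endpoint));
    }
    return clients.get(endpoint);
  };
}
```

Swapping traffic to another Bridge Node is then just a matter of calling getClient with a different endpoint string; the old client stays cached if you need to move back.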
3.1 S3 — Node.js (@aws-sdk/client-s3)
Usage:
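A hedged sketch of the client setup: the endpoint follows the pattern from this guide, while the key-variable names and the nominal region value are assumptions. The config is plain data, so it can be built and inspected before being handed to new S3Client(...) from @aws-sdk/client-s3:

```javascript
// Build the config for an S3 client pointed at a Flashback S3 Bridge endpoint.
// keyId/keySecret are the repo-scoped WRITE (or READ) API key pair.
function flashbackS3Config(endpoint, keyId, keySecret) {
  return {
    endpoint,                  // e.g. "https://s3-us-east-1-aws.flashback.tech"
    region: "us-east-1",       // the SDK requires a region; the value may be nominal behind the bridge
    credentials: {
      accessKeyId: keyId,
      secretAccessKey: keySecret,
    },
    forcePathStyle: true,      // path-style addressing is the safer default behind a bridge
  };
}

// Hand-off to the SDK (sketch):
//   const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");
//   const s3 = new S3Client(flashbackS3Config(endpoint, keyId, keySecret));
//   await s3.send(new PutObjectCommand({ Bucket: "app-logs-aws", Key: "k", Body: "v" }));
```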
3.2 S3 — Python (boto3)
Usage:
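The same setup in Python, sketched as a helper that builds the kwargs for boto3.client; the environment-variable names are assumptions, and the nominal region value mirrors the Node example:

```python
import os

def flashback_s3_kwargs(endpoint: str) -> dict:
    """Build kwargs for boto3.client('s3', ...) pointed at a Flashback S3 Bridge.

    FLASHBACK_WRITE_KEY_ID / FLASHBACK_WRITE_KEY_SECRET are assumed names for
    the repo-scoped WRITE key pair stored in your environment or vault.
    """
    return {
        "endpoint_url": endpoint,  # e.g. "https://s3-us-east-1-aws.flashback.tech"
        "aws_access_key_id": os.environ.get("FLASHBACK_WRITE_KEY_ID", ""),
        "aws_secret_access_key": os.environ.get("FLASHBACK_WRITE_KEY_SECRET", ""),
        "region_name": "us-east-1",  # value may be nominal behind the bridge
    }

# Hand-off to the SDK (sketch):
#   import boto3
#   s3 = boto3.client("s3", **flashback_s3_kwargs(endpoint))
#   s3.put_object(Bucket="app-logs-aws", Key="k", Body=b"v")
```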
3.3 GCS — Node.js (@google-cloud/storage) with custom endpoint
Note: many teams standardize on S3 SDKs against Flashback’s S3 Bridge endpoint even when the underlying bucket is GCS, to avoid OAuth service-account constraints.
3.4 Azure Blob — Node.js (@azure/storage-blob) with custom endpoint
Tip: You can also stick to S3 across services for simplicity—just use the S3 Bridge endpoint closest to your buckets. Flashback translates calls to the underlying provider.
4. Pull usage/latency statistics (for policy decisions)
You’ll periodically poll usage to know how much credit you’ve consumed and when to switch providers. Minimal fetchers:
Repo statistics: GET /repo/stats returns repository-level stats.
Bucket statistics: GET /bucket/stats returns totalUploadBytes, totalDownloadBytes, and related fields.
Performance statistics: GET /stats/daily and GET /stats/minute return daily and minute aggregates.
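The fetchers above reduce to a plain HTTP GET plus a little arithmetic on the payload. A sketch, assuming the bucket-stats response carries the totalUploadBytes/totalDownloadBytes fields listed above; the query-parameter name and Authorization header shape are assumptions:

```javascript
// Estimate consumed bytes for a bucket from a /bucket/stats-style payload.
function consumedBytes(stats) {
  return (stats.totalUploadBytes || 0) + (stats.totalDownloadBytes || 0);
}

// Remaining headroom against a soft cap (in bytes); negative means over cap.
function remainingUnderCap(stats, softCapBytes) {
  return softCapBytes - consumedBytes(stats);
}

// Minimal fetcher (Node 18+ global fetch); param/header names are assumptions.
async function fetchBucketStats(apiBase, bucketId, apiKey) {
  const res = await fetch(`${apiBase}/bucket/stats?bucketId=${bucketId}`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`stats fetch failed: ${res.status}`);
  return res.json();
}
```

Run the fetcher on a schedule (e.g., every few minutes) and feed remainingUnderCap into the bucket chooser in the next step.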
5. Implement the credit-aware bucket chooser
Define soft caps per provider and pick the first bucket still under its cap. If a write fails due to quota or a transient node issue, fall back to the next provider.
Write path with fallback (S3 example):
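A minimal sketch of the chooser and fallback loop. The bucket records (name, usedBytes, softCapBytes) are an assumed internal shape, and putFn stands in for the actual S3 write (e.g., a PutObjectCommand via the client from step 3):

```javascript
// Pick the first bucket still under its soft cap; usage and caps in bytes.
// buckets: [{ name, usedBytes, softCapBytes }, ...]
function chooseBucket(buckets) {
  return buckets.find((b) => b.usedBytes < b.softCapBytes) || null;
}

// Try the chosen bucket first, then fall back through the remaining
// providers if a write fails (quota or a transient node issue).
async function putWithFallback(buckets, putFn) {
  const first = chooseBucket(buckets);
  if (!first) throw new Error("all buckets are over their soft cap");
  const order = [first, ...buckets.filter((b) => b !== first)];
  let lastErr;
  for (const bucket of order) {
    try {
      return await putFn(bucket.name); // e.g. s3.send(new PutObjectCommand({ Bucket: bucket.name, ... }))
    } catch (err) {
      lastErr = err; // try the next provider
    }
  }
  throw lastErr; // every provider failed; surface the last error
}
```

Because the soft cap is advisory, the fallback loop still tries over-cap buckets as a last resort rather than dropping the write.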
For latency-first routing, consult node status and minute stats, then apply your credit cap as a gate. (Bridge Node endpoint pattern & status guidance are documented under Bridge Nodes.)
6. Configure quotas and alerts
In your policy, treat QUOTA_EXCEEDED (or HTTP 403/429 as mapped by your SDK) as a soft read-only signal for that bucket/repo and switch to the next provider automatically.
Poll stats/daily to drive dashboards and alerts on usage growth, and stats/minute for spikes or SLOs.
If you attach per-bucket soft limits in your internal config, keep them visible to ops so they match provider credit allocations.
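The soft read-only signal can be isolated in one small classifier. The error shape here is an assumption: AWS-style SDK errors commonly expose a code/name plus an HTTP status, so the sketch checks both:

```javascript
// Decide whether an error should flip a bucket to soft read-only
// and trigger a switch to the next provider.
function isSoftReadOnlySignal(err) {
  const status = err.$metadata?.httpStatusCode ?? err.statusCode;
  const code = err.Code || err.code || err.name;
  return code === "QUOTA_EXCEEDED" || status === 403 || status === 429;
}
```

Wire this into the fallback loop’s catch block so quota/throttle errors advance to the next provider while other errors (e.g., 500s) can be retried or surfaced.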
7. (Optional) Add a cold tier (Hot+Cold strategy)
Attach a DePIN (or cheaper cloud) bucket to the same repository.
Run a periodic copy job from hot → cold. Prefer native provider copy when both buckets are on the same provider/region; otherwise Flashback will stream cross-provider. (See “Storage API operations” for current coverage/limits.)
Periodically verify restores (sample reads from cold weekly).
Skeleton (Node):
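A minimal sketch of the periodic copy job, with the storage calls injected so the same skeleton works for native provider copy or cross-provider streaming; listFn and copyFn are hypothetical wrappers around your SDK calls:

```javascript
// Periodic hot → cold mirror: copy any hot object missing from cold.
// listFn(bucket) -> array of object keys in that bucket;
// copyFn(key) performs the actual hot-to-cold copy (native copy when
// possible, otherwise a cross-provider stream through Flashback).
async function mirrorHotToCold(listFn, copyFn, hotBucket, coldBucket) {
  const hotKeys = await listFn(hotBucket);
  const coldKeys = new Set(await listFn(coldBucket));
  const copied = [];
  for (const key of hotKeys) {
    if (!coldKeys.has(key)) {
      await copyFn(key);
      copied.push(key);
    }
  }
  return copied; // keys mirrored in this run (useful for logging/verification)
}
```

Run it from a scheduler (cron, a worker queue, etc.), and use the returned key list to drive the weekly restore verification mentioned above.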
8. Validate & roll out
Smoke tests
Put/Get/Delete small objects to each provider bucket via Flashback.
Force a quota breach in staging; verify writers switch providers or enter read-only mode.
Performance baseline
For each region/provider, measure HeadObject/PutObject latency and set SLOs.
Prefer the Bridge Node endpoint closest to your traffic patterns (the Bridge Nodes doc shows the pattern and examples).
Gradual rollout
Start with low-risk object classes (logs, analytics artifacts).
Expand to user-visible data once stability and observability are proven.
Operations Playbook (TL;DR)
Endpoints: Use https://<api>-<region>-<provider>.flashback.tech. Keep a per-endpoint client cache.
Keys: Repo-scoped READ/WRITE keys. Store them in a vault. Rotate regularly.
Routing: Policy chooses the bucket by credit usage (soft caps). On QUOTA_EXCEEDED/403/429, switch providers. (Use /repo/stats and /bucket/stats.)
Observability: Poll /stats/daily and /stats/minute for dashboards and alerts.
Resilience: Keep at least one alternate provider in the repository. Test fallback monthly.
Hot+Cold: Optional mirroring to DePIN or cheaper cloud. Verify restores weekly.
Limits: Some cross-provider operations have constraints; check “Storage API operations” as coverage expands.
Environment Variables (example)
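A sketch of the variables used throughout this guide; every name and value below is illustrative, not canonical:

```shell
# Bridge Node endpoint pattern: https://<api>-<region>-<provider>.flashback.tech
export FLASHBACK_S3_ENDPOINT="https://s3-us-east-1-aws.flashback.tech"

# Repo-scoped API keys (keep real values in your vault, not in files)
export FLASHBACK_WRITE_KEY_ID="..."
export FLASHBACK_WRITE_KEY_SECRET="..."
export FLASHBACK_READ_KEY_ID="..."
export FLASHBACK_READ_KEY_SECRET="..."

# One bucket per provider for the credit-harvesting strategy
export BUCKET_AWS="app-logs-aws"
export BUCKET_GCP="app-logs-gcp"
export BUCKET_AZURE="app-logs-azure"

# Soft caps (bytes) consumed by the credit-aware bucket chooser
export SOFT_CAP_AWS_BYTES="5000000000"
```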
Notes on limitations & compatibility
Storage operation coverage is improving; basic CRUD is supported. Cross-provider multipart uploads and some advanced features may be limited—design large-file flows accordingly. See Storage API operations status.
When in doubt, standardize your app on one protocol (usually S3) and let Flashback translate to the underlying bucket/provider.
References
Bridge Nodes: endpoint pattern and examples (S3/GCS/Azure).
Repositories: create/update/list and stats (/repo, /repo/stats).
API Keys: repo-scoped key management.
Bucket stats: GET /bucket/stats fields (e.g., totalUploadBytes).
Performance statistics: GET /stats/daily, GET /stats/minute.
Explore use cases: endpoints summary and prerequisites.
Appendix: Minimal Go example (S3)
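A standard-library-only sketch: the function assembles a Bridge Node URL from the endpoint pattern in the playbook above, and the commented tail shows where aws-sdk-go-v2 would plug in (that SDK usage is an assumption, not verified against Flashback):

```go
package main

import "fmt"

// buildEndpoint assembles a Bridge Node URL from the documented pattern:
// https://<api>-<region>-<provider>.flashback.tech
func buildEndpoint(api, region, provider string) string {
	return fmt.Sprintf("https://%s-%s-%s.flashback.tech", api, region, provider)
}

func main() {
	endpoint := buildEndpoint("s3", "us-east-1", "aws")
	fmt.Println(endpoint)

	// With aws-sdk-go-v2 (sketch, not verified):
	//   cfg, _ := config.LoadDefaultConfig(ctx)
	//   client := s3.NewFromConfig(cfg, func(o *s3.Options) {
	//       o.BaseEndpoint = aws.String(endpoint)
	//       o.UsePathStyle = true
	//   })
	//   _, err := client.PutObject(ctx, &s3.PutObjectInput{ /* Bucket, Key, Body */ })
}
```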