
Credit-aware multi-cloud storage behind one endpoint


The Problem

Today, developers juggle credits across AWS, GCP, and Azure (and sometimes DePIN providers like Storj or Akave), while each application usually speaks only one storage API. You end up maintaining multiple SDKs, keys, and endpoints; switching providers or enforcing hard limits (“stop writes when free credits run out”) is painful. Visibility is fragmented, and failover is manual.

Flashback gives you a single integration that works across backends. Bridge Nodes speak S3, GCS, or Azure Blob; your app uses one protocol and one set of repo-scoped keys, while Flashback routes to the bucket/provider you select via policy and statistics. Repositories aggregate multiple vendor buckets and expose usage/latency stats and quotas so you can automatically shift writes as credits are consumed.


Prerequisites

Accounts & resources
  • At least one object bucket/container per provider you want to use (AWS S3, GCS, Azure Blob, optionally DePIN). You can create/link them in Flashback.

  • Use least-privilege credentials for each bucket when you link it.

Flashback setup
  • Flashback account with dashboard/API access.

  • Create Buckets in Flashback (one per cloud bucket/container you’ll use).

  • Create a Repository and attach those Buckets.

  • Issue API Keys for the repository (WRITE for writers, READ for readers). See the API Keys reference for create/list/update/delete of repo-scoped keys, secured by Bearer Token.

Pick a protocol (client SDK)

Choose the API you will speak from your app; each client instance uses one protocol:

  • S3: boto3 (Python), @aws-sdk/client-s3 (Node.js), minio (Go/Java/etc.)

  • GCS: @google-cloud/storage (Node.js), google-cloud-storage (Python), etc.

  • Azure: azure-storage-blob

You can mix protocols across services if needed, but each process should keep one active client per protocol.

Bridge Node endpoints (examples with Public Nodes)

Bridge Node URL pattern: https://<api>-<region>-<provider>.flashback.tech (where <api> is s3, gcs, or blob). Examples:

  • S3: https://s3-us-east-1-aws.flashback.tech

  • GCS: https://gcs-eu-central-1-gcp.flashback.tech

  • Azure Blob: https://blob-eu-central-1-azure.flashback.tech

Networking & security
  • Outbound HTTPS to *.flashback.tech.

  • Store Flashback repository API keys in your vault (AWS Secrets Manager, GCP Secret Manager, Azure Key Vault).

  • Treat keys like you would cloud provider access keys (rotate, least privilege).


Step-by-Step Deployment Recommendations

1. Model your policy (“where do writes go?”)

Two common strategies:

A. Credit-harvesting (recommended initially):

  • Maintain one bucket per provider (e.g., app-logs-aws, app-logs-gcp, app-logs-azure).

  • At write-time, choose the target bucket based on remaining monthly credit/soft-cap per provider.

  • Reads may come from any bucket (or a designated “primary”).

B. Hot+Cold:

  • Write to a hot bucket (closest/fastest), mirror to a cheaper “cold” bucket (DePIN or another cloud) on a schedule.

  • Use native provider copy when possible; otherwise Flashback streams cross-provider. (See storage ops/limits.)

We’ll implement A below and show where to adapt for B.

2. Create Buckets and Repository in Flashback

2.1 Create/Link Buckets

In Storage → Buckets, add one bucket per provider. Provide least-privilege credentials/role and validate. (See Bucket endpoints and validation.)

2.2 Create a Repository

In Storage → Repositories, create app-data (example) and attach the buckets you created. (Repo CRUD and stats endpoints: /repo, /repo/stats.)

2.3 Generate Repository API keys

In the repository’s API Keys tab, create a WRITE key for your writers and a READ key for read-only flows. Save secrets to your vault. (API keys endpoints under /repo/{repoId}/apikey.)

3. Wire your backend to Flashback

Create clients that point to a Bridge Node endpoint. Keep a small per-endpoint client cache so you can swap endpoints quickly if you move traffic. (Bridge Nodes and endpoint pattern: see docs.flashback.tech.)
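A minimal sketch of such a cache, assuming the AWS SDK v3 for Node.js and repo-scoped keys exposed through environment variables (the variable names here are illustrative, not required by Flashback):

```js
// Minimal per-endpoint client cache (sketch, not an official Flashback SDK).
// Assumes @aws-sdk/client-s3 and repo-scoped keys in env vars (illustrative names).
const { S3Client } = require("@aws-sdk/client-s3");

const clients = new Map();

function s3ClientFor(endpoint) {
  if (!clients.has(endpoint)) {
    clients.set(endpoint, new S3Client({
      endpoint,                          // e.g. https://s3-us-east-1-aws.flashback.tech
      region: "us-east-1",               // region label expected by the SDK
      forcePathStyle: true,              // safest default for custom S3 endpoints
      credentials: {
        accessKeyId: process.env.FLASHBACK_WRITE_KEY_ID,
        secretAccessKey: process.env.FLASHBACK_WRITE_KEY_SECRET,
      },
    }));
  }
  return clients.get(endpoint);
}

module.exports = { s3ClientFor };
```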

3.1 S3 — Node.js (@aws-sdk/client-s3)

Usage:
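A hedged sketch, reusing the s3ClientFor helper from the client-cache example above; the bucket and key names are illustrative:

```js
const { PutObjectCommand, GetObjectCommand } = require("@aws-sdk/client-s3");
const { s3ClientFor } = require("./s3-client-cache"); // helper from the sketch above (path is illustrative)

const ENDPOINT = "https://s3-us-east-1-aws.flashback.tech";

async function putAndGet() {
  const s3 = s3ClientFor(ENDPOINT);

  // Write a small object to the repository-backed bucket (bucket name is illustrative).
  await s3.send(new PutObjectCommand({
    Bucket: "app-logs-aws",
    Key: "smoke-tests/hello.txt",
    Body: "hello from Flashback",
  }));

  // Read it back and print the body.
  const res = await s3.send(new GetObjectCommand({
    Bucket: "app-logs-aws",
    Key: "smoke-tests/hello.txt",
  }));
  console.log(await res.Body.transformToString());
}

putAndGet().catch(console.error);
```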

3.2 S3 — Python (boto3)

Usage:
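A minimal boto3 sketch under the same assumptions (illustrative env var names and bucket name):

```python
import os
import boto3

# Point boto3 at the Flashback S3 Bridge endpoint; env var names are illustrative.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3-us-east-1-aws.flashback.tech",
    aws_access_key_id=os.environ["FLASHBACK_WRITE_KEY_ID"],
    aws_secret_access_key=os.environ["FLASHBACK_WRITE_KEY_SECRET"],
    region_name="us-east-1",
)

# Write and read back a small object (bucket name is illustrative).
s3.put_object(Bucket="app-logs-aws", Key="smoke-tests/hello.txt", Body=b"hello from Flashback")
obj = s3.get_object(Bucket="app-logs-aws", Key="smoke-tests/hello.txt")
print(obj["Body"].read().decode())
```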

3.3 GCS — Node.js (@google-cloud/storage) with custom endpoint

Note: many teams standardize on S3 SDKs against Flashback’s S3 Bridge endpoint even when the underlying bucket is GCS, to avoid OAuth SA constraints.
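A hedged sketch using the client’s apiEndpoint option. How repo-scoped keys map onto GCS-style authentication is not covered here, so the credential wiring is an assumption to confirm against the Bridge Node docs:

```js
// Sketch only: points @google-cloud/storage at the Flashback GCS Bridge endpoint.
// How repo-scoped keys are supplied for the GCS protocol is an assumption here;
// many teams simply use the S3 Bridge endpoint instead (see the note above).
const { Storage } = require("@google-cloud/storage");

const storage = new Storage({
  apiEndpoint: "https://gcs-eu-central-1-gcp.flashback.tech",
  // keyFilename / credentials would normally go here for native GCS auth.
});

async function listObjects() {
  const [files] = await storage.bucket("app-logs-gcp").getFiles({ maxResults: 10 });
  files.forEach((f) => console.log(f.name));
}

listObjects().catch(console.error);
```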

3.4 Azure Blob — Node.js (@azure/storage-blob) with custom endpoint

Tip: You can also stick to S3 across services for simplicity—just use the S3 Bridge endpoint closest to your buckets. Flashback translates calls to the underlying provider.
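A hedged sketch against the Blob Bridge endpoint. Treating the repo key ID/secret as a shared-key account name/key is an assumption to confirm; if it does not fit your setup, fall back to the S3 protocol as suggested in the tip above:

```js
// Sketch only: @azure/storage-blob pointed at the Flashback Blob Bridge endpoint.
// Mapping repo-scoped keys onto a shared-key credential is an assumption here.
const { BlobServiceClient, StorageSharedKeyCredential } = require("@azure/storage-blob");

const credential = new StorageSharedKeyCredential(
  process.env.FLASHBACK_WRITE_KEY_ID,      // illustrative env var names
  process.env.FLASHBACK_WRITE_KEY_SECRET
);

const blobService = new BlobServiceClient(
  "https://blob-eu-central-1-azure.flashback.tech",
  credential
);

async function upload() {
  const container = blobService.getContainerClient("app-logs-azure"); // illustrative name
  const blob = container.getBlockBlobClient("smoke-tests/hello.txt");
  const content = "hello from Flashback";
  await blob.upload(content, Buffer.byteLength(content));
}

upload().catch(console.error);
```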

4. Pull usage/latency statistics (for policy decisions)

You’ll periodically poll usage to know how much credit you’ve consumed and when to switch providers. The relevant endpoints are listed below, with a minimal fetcher sketch after the list:

  • Repo statistics endpoint: GET /repo/stats (returns repository-level stats).

  • Bucket statistics endpoint: GET /bucket/stats (returns totalUploadBytes, totalDownloadBytes, etc.).

  • Performance statistics: GET /stats/daily, GET /stats/minute for daily/minute aggregates.
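A minimal fetcher sketch using Node’s built-in fetch. The API base URL and Bearer-token header are assumptions; confirm the exact paths and response shapes in the API reference:

```js
// Sketch: poll Flashback stats endpoints. The base URL and Bearer-token auth
// are assumptions; confirm paths and response shapes in the API reference.
const API_BASE = process.env.FLASHBACK_API_BASE;      // Flashback API base URL (illustrative)
const API_KEY = process.env.FLASHBACK_READ_KEY;       // repo-scoped READ key

async function getStats(path) {
  const res = await fetch(`${API_BASE}${path}`, {
    headers: { Authorization: `Bearer ${API_KEY}` },
  });
  if (!res.ok) throw new Error(`${path} -> HTTP ${res.status}`);
  return res.json();
}

// Feed these into your credit-aware chooser (step 5) and dashboards.
async function snapshot() {
  const [repo, bucket, daily] = await Promise.all([
    getStats("/repo/stats"),
    getStats("/bucket/stats"),
    getStats("/stats/daily"),
  ]);
  return { repo, bucket, daily };
}
```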

5. Implement the credit-aware bucket chooser

Define soft caps per provider and pick the first bucket still under its cap. If a write fails due to quota or a transient node issue, fall back to the next provider.

Write path with fallback (S3 example):
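A hedged sketch that combines the chooser and the fallback, reusing s3ClientFor from step 3. The soft caps, bucket names, extra S3 Bridge hostnames (built from the documented pattern), and the shape of the usage map are illustrative assumptions:

```js
// Sketch: pick the first provider still under its soft cap, write, and fall
// back to the next provider on quota or transient errors. Soft caps, bucket
// names, and the usage map shape are illustrative assumptions.
const { PutObjectCommand } = require("@aws-sdk/client-s3");
const { s3ClientFor } = require("./s3-client-cache");

const PROVIDERS = [
  { name: "aws",   bucket: "app-logs-aws",   endpoint: "https://s3-us-east-1-aws.flashback.tech",      softCapBytes: 500 * 1024 ** 3 },
  { name: "gcp",   bucket: "app-logs-gcp",   endpoint: "https://s3-eu-central-1-gcp.flashback.tech",   softCapBytes: 300 * 1024 ** 3 },
  { name: "azure", bucket: "app-logs-azure", endpoint: "https://s3-eu-central-1-azure.flashback.tech", softCapBytes: 300 * 1024 ** 3 },
];

function chooseProviders(usageByBucket) {
  // Prefer providers still under their soft cap; keep the rest as fallbacks.
  const under = PROVIDERS.filter((p) => (usageByBucket[p.bucket] ?? 0) < p.softCapBytes);
  const over = PROVIDERS.filter((p) => !under.includes(p));
  return [...under, ...over];
}

async function writeWithFallback(key, body, usageByBucket) {
  let lastErr;
  for (const p of chooseProviders(usageByBucket)) {
    try {
      await s3ClientFor(p.endpoint).send(
        new PutObjectCommand({ Bucket: p.bucket, Key: key, Body: body })
      );
      return p.name; // record which provider took the write
    } catch (err) {
      lastErr = err; // quota exceeded or transient node issue: try the next provider
    }
  }
  throw lastErr;
}
```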

For latency-first routing, consult node status and minute stats, then apply your credit cap as a gate. (Bridge Node endpoint pattern & status guidance are documented under Bridge Nodes.)

6. Configure quotas and alerts

  • In your policy, treat QUOTA_EXCEEDED (or HTTP 403/429 mapped by your SDK) as a soft read-only signal for that bucket/repo and switch to the next provider automatically.

  • Poll stats/daily to drive dashboards/alerts on usage growth, and stats/minute for spikes or SLOs.

  • If you attach per-bucket soft limits in your internal config, keep them visible to ops so they match provider credit allocations.

7. (Optional) Add a cold tier (Hot+Cold strategy)

  • Attach a DePIN (or cheaper cloud) bucket to the same repository.

  • Run a periodic copy job from hot → cold. Prefer native provider copy when both buckets are on the same provider/region; otherwise Flashback will stream cross-provider. (See “Storage API operations” for current coverage/limits.)

  • Periodically verify restores (sample reads from cold weekly).

Skeleton (Node):
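A hedged sketch of the copy job, again via the S3 Bridge endpoints; the cold endpoint and bucket names are illustrative:

```js
// Sketch: periodic hot -> cold mirror via S3 Bridge endpoints. Bucket and
// endpoint names are illustrative; Flashback streams cross-provider copies.
const { ListObjectsV2Command, GetObjectCommand, PutObjectCommand } = require("@aws-sdk/client-s3");
const { s3ClientFor } = require("./s3-client-cache");

const HOT  = { endpoint: "https://s3-us-east-1-aws.flashback.tech", bucket: "app-logs-aws" };
const COLD = { endpoint: "https://s3-eu-central-1-depin.flashback.tech", bucket: "app-logs-cold" }; // illustrative

async function mirrorOnce(prefix) {
  const hot = s3ClientFor(HOT.endpoint);
  const cold = s3ClientFor(COLD.endpoint);

  const listed = await hot.send(new ListObjectsV2Command({ Bucket: HOT.bucket, Prefix: prefix }));
  for (const obj of listed.Contents ?? []) {
    // Stream each object from hot to cold; for large files, check the
    // multipart/copy limits in "Storage API operations" first.
    const src = await hot.send(new GetObjectCommand({ Bucket: HOT.bucket, Key: obj.Key }));
    await cold.send(new PutObjectCommand({
      Bucket: COLD.bucket,
      Key: obj.Key,
      Body: src.Body,
      ContentLength: obj.Size,
    }));
  }
}

// Run from cron or a scheduler, e.g. mirrorOnce("logs/2025-06-01/").
```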

8. Validate & roll out

Smoke tests

  • Put/Get/Delete small objects to each provider bucket via Flashback.

  • Force a quota breach in staging; verify writers switch providers or enter read-only mode.

Performance baseline

  • For each region/provider, measure HeadObject/PutObject latency and set SLOs.

  • Prefer the Bridge Node endpoint closest to your traffic patterns (Bridge Nodes doc shows pattern & examples).

Gradual rollout

  • Start with low-risk object classes (logs, analytics artifacts).

  • Expand to user-visible data once stability and observability are proven.


Operations Playbook (TL;DR)

  • Endpoints: Use https://<api>-<region>-<provider>.flashback.tech. Keep a per-endpoint client cache.

  • Keys: Repo-scoped READ/WRITE keys. Store in a vault. Rotate.

  • Routing: Policy chooses bucket by credit usage (soft caps). On QUOTA_EXCEEDED/403/429 → switch. (Use /repo/stats & /bucket/stats.)

  • Observability: Poll /stats/daily and /stats/minute for dashboards/alerts.

  • Resilience: Keep at least one alternate provider in the repository. Test fallback monthly.

  • Hot+Cold: Optional mirroring to DePIN or cheaper cloud. Verify restores weekly.

  • Limits: Some cross-provider operations have constraints; check “Storage API operations” as coverage expands.

Environment Variables (example)
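The sketches above read the following variables; all names are illustrative rather than required by Flashback:

```bash
# Illustrative variable names used by the sketches in this guide.
FLASHBACK_WRITE_KEY_ID=...          # repo-scoped WRITE key ID
FLASHBACK_WRITE_KEY_SECRET=...      # repo-scoped WRITE key secret (from your vault)
FLASHBACK_READ_KEY=...              # repo-scoped READ key for stats/read-only flows
FLASHBACK_API_BASE=...              # Flashback API base URL for /repo/stats etc.
FLASHBACK_S3_ENDPOINT=https://s3-us-east-1-aws.flashback.tech
FLASHBACK_GCS_ENDPOINT=https://gcs-eu-central-1-gcp.flashback.tech
FLASHBACK_BLOB_ENDPOINT=https://blob-eu-central-1-azure.flashback.tech
```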

Notes on limitations & compatibility

  • Storage operation coverage is improving; basic CRUD is supported. Cross-provider multipart uploads and some advanced features may be limited—design large-file flows accordingly. See Storage API operations status.

  • When in doubt, standardize your app on one protocol (usually S3) and let Flashback translate to the underlying bucket/provider.

References

  • Bridge Nodes: endpoint pattern and examples (S3/GCS/Azure).

  • Repositories: create/update/list and stats (/repo, /repo/stats).

  • API Keys: repo-scoped key management.

  • Bucket stats: GET /bucket/stats fields (e.g., totalUploadBytes, etc.).

  • Performance statistics: GET /stats/daily, GET /stats/minute.

  • Explore use cases: endpoints summary and prerequisites.

Appendix: Minimal Go example (S3)
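A minimal sketch with aws-sdk-go-v2 pointed at the S3 Bridge endpoint; env var names and the bucket are illustrative:

```go
// Sketch: minimal Go client against the Flashback S3 Bridge endpoint.
// Env var names and the bucket name are illustrative.
package main

import (
	"context"
	"io"
	"log"
	"os"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/credentials"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()

	client := s3.New(s3.Options{
		BaseEndpoint: aws.String("https://s3-us-east-1-aws.flashback.tech"),
		Region:       "us-east-1",
		UsePathStyle: true, // safest default for custom S3 endpoints
		Credentials: credentials.NewStaticCredentialsProvider(
			os.Getenv("FLASHBACK_WRITE_KEY_ID"),
			os.Getenv("FLASHBACK_WRITE_KEY_SECRET"),
			"",
		),
	})

	// Write then read back a small object (bucket name is illustrative).
	_, err := client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: aws.String("app-logs-aws"),
		Key:    aws.String("smoke-tests/hello.txt"),
		Body:   strings.NewReader("hello from Flashback"),
	})
	if err != nil {
		log.Fatal(err)
	}

	out, err := client.GetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String("app-logs-aws"),
		Key:    aws.String("smoke-tests/hello.txt"),
	})
	if err != nil {
		log.Fatal(err)
	}
	defer out.Body.Close()

	body, err := io.ReadAll(out.Body)
	if err != nil {
		log.Fatal(err)
	}
	log.Println(string(body))
}
```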
