Start Vibe‑Coding
This guide is a prompt‑first engineering playbook for Vibe Coders: people who use AI to write back‑end code. It explains what the Flashback platform does, provides guardrails to steer the AI toward correct and secure integrations, and offers ready‑to‑use prompt templates, code scaffolds, and checklists.
Know the components you are prompting the AI about
Bridge Nodes
Bridge Nodes translate standard storage APIs (S3, GCS or Azure Blob) to the underlying provider. Each Bridge Node exposes an endpoint with the pattern https://<api>-<region>-<provider>.flashback.tech. <api> may be s3, gcs or blob, and <region> corresponds to a public region such as us‑east‑1 or eu‑central‑1. Flashback hosts public Bridge Nodes, but you can deploy your own for lower egress costs. Latency is typically lowest when your Bridge Node is geographically close and on the same cloud provider as your bucket.
Endpoint pattern: https://<api>-<region>-<provider>.flashback.tech
Examples: https://s3-us-east-1-aws.flashback.tech, https://gcs-eu-central-1-gcp.flashback.tech, https://blob-us-east-1-aws.flashback.tech.
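The endpoint pattern above is easy to capture in a small helper. A sketch, assuming the URL pattern only; the function name bridgeEndpoint is illustrative and not part of any Flashback SDK:

```typescript
// Build a Bridge Node endpoint from its parts. Hypothetical helper;
// the docs specify only the URL pattern, not this function.
type BridgeApi = 's3' | 'gcs' | 'blob';

export function bridgeEndpoint(api: BridgeApi, region: string, provider: string): string {
  return `https://${api}-${region}-${provider}.flashback.tech`;
}

// Example: bridgeEndpoint('s3', 'us-east-1', 'aws')
// → 'https://s3-us-east-1-aws.flashback.tech'
```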
Repositories
A Repository aggregates multiple buckets from various providers and is the scope for API keys. Repository statistics are accessible via GET /repo/stats.
Stats endpoint: https://backend.flashback.tech/repo/stats (optionally filter by repoId).
Buckets
Each bucket corresponds to an object bucket or container in a provider. You register a bucket with Flashback and then attach it to one or more repositories. Bucket‑level statistics are available via GET /bucket/stats.
Stats endpoint: https://backend.flashback.tech/bucket/stats (optionally filter by bucketId).
Performance stats
Flashback exposes two time‑series endpoints: GET /stats/daily for daily aggregated storage activity and GET /stats/minute for minute‑level statistics. These help track usage trends and detect spikes.
Endpoints: https://backend.flashback.tech/stats/daily and https://backend.flashback.tech/stats/minute (both require BearerAuth and accept optional start/end dates, repoId and bucketId filters).
Node telemetry
For routing decisions, you can query node‑level latency and status via GET /stats/nodes/minute. This returns minute‑level stats for each Bridge Node and optionally filters by bucketId.
Endpoint: https://backend.flashback.tech/stats/nodes/minute
Golden rules (embed these into your AI prompts)
1 - Use environment variables for secrets. Never hard‑code credentials. Use variables such as FB_S3_ENDPOINT, FB_KEY_ID, FB_KEY_SECRET and FB_JWT, and store them in a secret manager.
2 - Construct the client using the Bridge Node pattern https://<api>-<region>-<provider>.flashback.tech. For S3 SDKs, set forcePathStyle: true and a dummy region (e.g. us-east-1).
3 - Keep one protocol per process. Flashback translates requests across providers, so a single S3 client can talk to GCS or Azure behind the Bridge Node; only use native GCS/Azure libraries if absolutely needed.
4 - Pull stats for observability. Use /repo/stats to monitor usage across attached buckets, /bucket/stats for per‑bucket metrics, /stats/daily and /stats/minute for aggregated usage trends, and /stats/nodes/minute for Bridge Node latency.
5 - Handle quotas and errors gracefully. Treat HTTP 403/429 or QUOTA_EXCEEDED errors as soft signals: pause writes to that bucket or switch to another provider.
6 - Include retries with exponential backoff. Use jitter and limit the number of attempts.
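Rule 5 can be reduced to a small classifier that downstream code uses to decide between pausing/switching and failing hard. A minimal sketch; the { status, code } error shape is an assumption for illustration, not a documented Flashback response format:

```typescript
// Decide whether an error is a soft quota signal (pause or switch provider)
// or a hard failure. The shape of StorageError is assumed for illustration;
// inspect the actual error your SDK surfaces before relying on these fields.
interface StorageError {
  status?: number;
  code?: string;
}

export function isSoftLimit(err: StorageError): boolean {
  if (err.code === 'QUOTA_EXCEEDED') return true;
  return err.status === 403 || err.status === 429;
}
```

A write loop would then branch on isSoftLimit(err) to pick the next bucket rather than retrying the same one.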
Vibe‑Coder prompt templates
Use these templates verbatim (with minor changes) to instruct your AI. They embed all the above guardrails.
General backend integration prompt
Role: Senior backend engineer integrating with Flashback. Context: Flashback exposes S3/GCS/Blob via Bridge Nodes. The endpoint pattern is https://<api>-<region>-<provider>.flashback.tech. I will provide env vars FB_S3_ENDPOINT, FB_KEY_ID, FB_KEY_SECRET, FB_JWT, FB_REPO_ID. Use path‑style addressing for S3 SDKs. Provide production‑ready code with retries, timeouts and minimal dependencies. Requirements: – Configure a client against the Bridge Node. – Implement functions putObject(bucket, key, bytes), getObject(bucket, key), deleteObject(bucket, key). – Add telemetry helpers calling GET /repo/stats and GET /stats/minute or GET /stats/nodes/minute. – Use env vars for secrets. – Include a smoke test. Docs: Bridge Node endpoint pattern; repository and performance statistics. Use https://docs.flashback.tech/ for technical references. Output: Code first, then short explanation.
S3 client scaffold (TypeScript)
Build fbS3Client.ts exporting a cached S3Client. – Use endpoint = process.env.FB_S3_ENDPOINT (Bridge Node). – Set forcePathStyle = true and region = "us-east-1". – Provide functions putObject, getObject and deleteObject. – Use retries with jitter (max 3 attempts) and 30‑second timeouts.
GCS/Azure variants
If the project demands native GCS or Azure SDKs, show how to point their apiEndpoint or BlobServiceClient to the Bridge Node (e.g., https://gcs-eu-central-1-gcp.flashback.tech for GCS). Note that Flashback recommends using S3 SDKs for simplicity since they translate across providers.
Migration worker prompt
Implement a migration worker that tries provider‑native copy first (S3 CopyObject, GCS RewriteObject, Azure copy_from_url). If the providers differ, stream via GetObject → PutObject. Expose a --dry-run option, iterate through keys, rate‑limit concurrency and publish counters. Pause or switch if a 403/429 or QUOTA_EXCEEDED error occurs.
Latency and credit‑aware routing prompt
Fetch GET /stats/nodes/minute to find the lowest‑latency Bridge Node. Then fetch GET /repo/stats and decide which bucket to use based on credit usage. Provide a function that re‑instantiates the client when the endpoint changes.
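The selection step of this prompt can be sketched as a pure function over the node stats. The NodeStat shape (an endpoint plus a latencyMs field) is an assumption; check the actual GET /stats/nodes/minute payload before using it:

```typescript
// Pick the Bridge Node with the lowest reported latency.
// NodeStat is a guessed shape for illustration; the real payload of
// GET /stats/nodes/minute may name these fields differently.
interface NodeStat {
  endpoint: string;
  latencyMs: number;
}

export function selectEndpoint(stats: NodeStat[]): string | undefined {
  if (stats.length === 0) return undefined;
  return stats.reduce((best, s) => (s.latencyMs < best.latencyMs ? s : best)).endpoint;
}
```

When the returned endpoint differs from the one the current client was built with, rebuild the client (the cache in fbS3Client.ts below keys clients by endpoint, so this is a single lookup).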
Ready‑to‑use scaffolds
S3 client with caching (Node.js)
// fbS3Client.ts
import { S3Client } from '@aws-sdk/client-s3';
const cache = new Map<string, S3Client>();
export function s3ClientFor(endpoint: string, accessKeyId: string, secretAccessKey: string) {
if (!cache.has(endpoint)) {
cache.set(endpoint, new S3Client({
endpoint,
region: 'us-east-1', // unused by Flashback but required
credentials: { accessKeyId, secretAccessKey },
forcePathStyle: true,
}));
}
return cache.get(endpoint)!;
}
Basic operations with retries
// s3Ops.ts
import { PutObjectCommand, GetObjectCommand, DeleteObjectCommand } from '@aws-sdk/client-s3';
import { s3ClientFor } from './fbS3Client';
const ENDPOINT = process.env.FB_S3_ENDPOINT!;
const KEY_ID = process.env.FB_KEY_ID!;
const KEY_SECRET = process.env.FB_KEY_SECRET!;
async function withRetry<T>(fn: () => Promise<T>, attempts = 3, delay = 200): Promise<T> {
try {
return await fn();
} catch (e) {
if (attempts <= 1) throw e;
await new Promise((res) => setTimeout(res, delay + Math.random() * 100));
return withRetry(fn, attempts - 1, delay * 2);
}
}
export async function putObject(bucket: string, key: string, body: Buffer) {
const s3 = s3ClientFor(ENDPOINT, KEY_ID, KEY_SECRET);
return withRetry(() => s3.send(new PutObjectCommand({ Bucket: bucket, Key: key, Body: body })));
}
export async function getObject(bucket: string, key: string) {
const s3 = s3ClientFor(ENDPOINT, KEY_ID, KEY_SECRET);
const res = await withRetry(() => s3.send(new GetObjectCommand({ Bucket: bucket, Key: key })));
return res.Body;
}
export async function deleteObject(bucket: string, key: string) {
const s3 = s3ClientFor(ENDPOINT, KEY_ID, KEY_SECRET);
return withRetry(() => s3.send(new DeleteObjectCommand({ Bucket: bucket, Key: key })));
}
Stats helpers (Node.js)
// fbStats.ts
const BASE = 'https://backend.flashback.tech';
const H = { Accept: 'application/json', Authorization: `Bearer ${process.env.FB_JWT}` };
export async function getRepoStats(repoId?: string) {
const url = repoId ? `${BASE}/repo/stats?repoId=${repoId}` : `${BASE}/repo/stats`;
const r = await fetch(url, { headers: H });
if (!r.ok) throw new Error(`repo stats: ${r.status}`);
return r.json();
}
export async function getBucketStats(bucketId?: string) {
const url = bucketId ? `${BASE}/bucket/stats?bucketId=${bucketId}` : `${BASE}/bucket/stats`;
const r = await fetch(url, { headers: H });
if (!r.ok) throw new Error(`bucket stats: ${r.status}`);
return r.json();
}
export async function getDailyStats(startDate?: string, endDate?: string) {
const qs = [];
if (startDate) qs.push(`startDate=${startDate}`);
if (endDate) qs.push(`endDate=${endDate}`);
const url = `${BASE}/stats/daily${qs.length ? '?' + qs.join('&') : ''}`;
const r = await fetch(url, { headers: H });
if (!r.ok) throw new Error(`daily stats: ${r.status}`);
return r.json();
}
export async function getMinuteStats(repoId?: string, bucketId?: string) {
const qs: string[] = [];
if (repoId) qs.push(`repoId=${repoId}`);
if (bucketId) qs.push(`bucketId=${bucketId}`);
const url = `${BASE}/stats/minute${qs.length ? '?' + qs.join('&') : ''}`;
const r = await fetch(url, { headers: H });
if (!r.ok) throw new Error(`minute stats: ${r.status}`);
return r.json();
}
export async function getNodeMinuteStats(bucketId?: string) {
const url = bucketId ? `${BASE}/stats/nodes/minute?bucketId=${bucketId}` : `${BASE}/stats/nodes/minute`;
const r = await fetch(url, { headers: H });
if (!r.ok) throw new Error(`node minute stats: ${r.status}`);
return r.json();
}
Migration helper (S3)
// migrate.ts
import { CopyObjectCommand, GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';
import { s3ClientFor } from './fbS3Client';
const s3 = s3ClientFor(process.env.FB_S3_ENDPOINT!, process.env.FB_KEY_ID!, process.env.FB_KEY_SECRET!);
// Native provider copy when source and destination share the same provider via the Bridge Node
export async function copyNative(srcBucket: string, srcKey: string, dstBucket: string, dstKey: string) {
await s3.send(new CopyObjectCommand({ Bucket: dstBucket, Key: dstKey, CopySource: `${srcBucket}/${encodeURIComponent(srcKey)}` }));
}
// Streamed copy for cross‑provider migration
export async function copyStreamed(srcBucket: string, srcKey: string, dstBucket: string, dstKey: string) {
const obj = await s3.send(new GetObjectCommand({ Bucket: srcBucket, Key: srcKey }));
await s3.send(new PutObjectCommand({ Bucket: dstBucket, Key: dstKey, Body: obj.Body as any }));
}
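The migration worker prompt asks for rate‑limited concurrency, which migrate.ts above does not show. One way to sketch it, as a generic helper (not a Flashback API) that you would pair with copyNative or copyStreamed:

```typescript
// Run an async task over many items with at most `limit` in flight.
// Generic helper for illustration; results keep the input order.
export async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  task: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker() {
    while (next < items.length) {
      const i = next++; // safe: no await between read and increment
      results[i] = await task(items[i]);
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker));
  return results;
}
```

For a migration run this would look like mapWithConcurrency(keys, 8, (key) => copyStreamed(src, key, dst, key)), with the limit tuned to provider rate limits.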
Prompt cookbook
Use these high‑level prompt recipes to generate code or runbooks via AI.
Bootstrap a Flashback service: “Create a Node.js service that defines fbS3Client.ts, s3Ops.ts, fbStats.ts, loads env vars, and provides npm scripts smoke:putgetdel and stats:print. Use the Bridge Node endpoint pattern, set forcePathStyle = true, call /repo/stats and /stats/minute, and include a 60‑second smoke test that writes, reads and deletes an object.”
Latency‑aware selector: “Add a selectEndpoint() function that fetches GET /stats/nodes/minute and chooses the Bridge Node with the lowest latency. If the selected endpoint changes, rebuild the S3 client and log the change.”
Credit‑aware routing: “Before writing, call GET /repo/stats. Compare usage to configured caps for each bucket/provider; choose the first bucket under its cap. On 403/429 or QUOTA_EXCEEDED, switch to the next provider.”
List available buckets: “Call GET /bucket/stats or GET /bucket/available to show attachable bucket IDs. Suggest to an operator how to attach them to a repository.”
Operator runbook: “Generate a runbook for rotating repo keys, validating bucket connectivity, executing a dry‑run migration, verifying billing paths, and defining a rollback plan.”
Guardrails to embed in prompts
Use only the environment variables FB_S3_ENDPOINT, FB_KEY_ID, FB_KEY_SECRET, FB_JWT, FB_REPO_ID.
Never print secrets or tokens in logs or code.
Use timeouts (30 seconds) and exponential backoff with jitter.
Prefer the S3 SDK unless explicitly asked to use GCS or Azure; the Bridge Node will translate across providers.
When using the S3 SDK, set forcePathStyle: true and pass the Bridge endpoint.
Emit metrics by calling /stats/minute and /stats/nodes/minute regularly for dashboards and alerts.
Acceptance checklist
✅ Smoke test passes: the script writes, reads and deletes an object through the Bridge Node.
✅ Secrets stay secret: all credentials come from env vars and are not logged.
✅ Stats wired: the code queries /repo/stats and /stats/minute or /stats/nodes/minute and exposes data for dashboards.
✅ Migration tool: native and streamed copy paths implemented; supports --dry-run and concurrency control.
✅ Fallback tested: on QUOTA_EXCEEDED or HTTP 403/429 errors, the system pauses or switches providers and logs the event.
Troubleshooting playbook
403 / 429 or QUOTA_EXCEEDED: Treat this as a soft limit violation. Switch to another bucket/provider or pause writes and alert the operator.
High latency: Query GET /stats/nodes/minute to identify a faster Bridge Node; update the endpoint accordingly.
Invalid credentials: Ensure you’re using repository‑scoped access keys for object operations and a Bearer token for management/statistics endpoints.
Stats endpoints fail: Check that your Bearer token (FB_JWT) has not expired and that the Accept: application/json header is present on requests.
Minimal end‑to‑end example
// index.ts
import 'dotenv/config';
import { putObject, getObject, deleteObject } from './s3Ops';
import { getRepoStats, getDailyStats, getNodeMinuteStats } from './fbStats';
async function main() {
const bucket = process.env.FB_BUCKET!;
await putObject(bucket, 'hello.txt', Buffer.from('hi'));
const body = await getObject(bucket, 'hello.txt');
console.log('read:', (await body?.transformToString?.()) || '<stream>');
await deleteObject(bucket, 'hello.txt');
const repoStats = await getRepoStats(process.env.FB_REPO_ID);
console.log('repo stats sample:', JSON.stringify(repoStats).slice(0, 200), '…');
const daily = await getDailyStats();
const nodes = await getNodeMinuteStats();
console.log('daily stats fetched:', !!daily, 'node stats fetched:', !!nodes);
}
main().catch((e) => { console.error(e); process.exit(1); });
.env.example:
FB_S3_ENDPOINT=https://s3-us-east-1-aws.flashback.tech
FB_KEY_ID=REPO_ACCESS_KEY_ID
FB_KEY_SECRET=REPO_SECRET_ACCESS_KEY
FB_JWT=BEARER_TOKEN_FOR_MANAGEMENT
FB_REPO_ID=repo-xxxxxxxx
FB_BUCKET=my-bucket-name
This example demonstrates the Bridge Node endpoint pattern, retrieves repository and node statistics, and shows how to write, read and delete an object.
Appendix – Prompt macros
Keep these macros handy for reuse in your Vibe prompts:
Flashback S3 client – “Create fbS3Client.ts that returns a cached S3 client per endpoint; set forcePathStyle: true; use env vars; add retries and a 30‑second timeout.”
Stats wiring – “Implement fbStats.ts with functions calling /repo/stats, /bucket/stats, /stats/daily, /stats/minute and /stats/nodes/minute; each function returns parsed JSON or throws on failure.”
Migration (native → streamed) – “Write migrate.ts that tries provider‑native copy first, then streams via S3 GetObject/PutObject if providers differ; add --dry-run, concurrency control and counters.”
Latency selector – “Add selectEndpoint() that reads /stats/nodes/minute; if the difference in latency between nodes exceeds 20%, switch to the faster node and rebuild the client.”
By following this guide, you can confidently instruct AI to generate secure, efficient and observability‑friendly integrations with the Flashback platform, avoiding vendor lock‑in and delivering production‑quality backends on the first try.