# Start Vibe‑Coding

> This guide is a prompt‑first engineering playbook for **Vibe Coders**, people who use AI to write back‑end code. It explains what the Flashback platform does, provides guardrails to steer the AI toward correct and secure integrations for both **Storage** and **AI LLM**, and offers ready‑to‑use prompt templates, code scaffolds, and checklists.

## Know the components you are prompting the AI about

| Component                          | Purpose/notes                                                                                                                                                                                 | Key endpoints & examples                                                                                                                                                                                                                                                      |
| ---------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Bridge Nodes (Storage Gateway)** | Translate standard storage APIs (S3, GCS, Azure Blob) to underlying providers.                                                                                                                 | Pattern: `https://<api>-<region>-<provider>.flashback.tech`, where `<api>` is `s3`, `gcs`, or `blob`. Examples: `https://s3-us-east-1-aws.flashback.tech`, `https://gcs-eu-central-1-gcp.flashback.tech`, `https://blob-us-east-1-aws.flashback.tech`.                          |
| **Repositories**                   | Aggregate buckets and AI resources; also the scope for API keys and governance.                                                                                                               | Storage stats: `GET https://backend.flashback.tech/repo/stats` (optional `repoId`).                                                                                                                                                                                           |
| **Buckets**                        | Provider buckets/containers registered in Flashback and attached to repositories.                                                                                                             | Bucket stats: `GET https://backend.flashback.tech/bucket/stats` (optional `bucketId`).                                                                                                                                                                                        |
| **Storage observability**          | Daily/minute storage trends for usage monitoring and incident detection.                                                                                                                      | `GET https://backend.flashback.tech/stats/daily`, `GET https://backend.flashback.tech/stats/minute`.                                                                                                                                                                          |
| **Node telemetry**                 | Bridge-node latency/status for routing and fallback decisions.                                                                                                                                | `GET https://backend.flashback.tech/stats/nodes/minute`.                                                                                                                                                                                                                      |
| **AI LLM configurations**          | Workspace-level provider configs (OpenAI/Gemini/Anthropic/custom) used by repositories through a stable Flashback gateway.                                                                    | Management endpoints under AI LLM APIs (for example available models/configs), and runtime endpoint like `https://openai-<region>-<provider>.flashback.tech/v1`.                                                                                                              |
| **AI API keys**                    | Repository-scoped keys dedicated to AI requests (separate from storage credentials).                                                                                                          | `POST/GET/PUT/DELETE /repo/{repoId}/ai/apikey...`                                                                                                                                                                                                                             |
| **AI Policy**                      | Governance layer to log/alert/block prompt/response flows by scope (org/workspace/repository).                                                                                                | AI Policy endpoints under AI APIs (`/ai/policy...`).                                                                                                                                                                                                                          |
| **Conversation API**               | Multi-turn chat container with history and context per repository; use when you need persistent conversations instead of single-shot completions.                                             | `POST /conversation`, `GET /conversation`, `POST /conversation/{conversationId}/prompt`, `GET /conversation/{conversationId}/messages`.                                                                                                                                       |
| **AI usage statistics**            | Daily AI metrics across model, key, conversation, and policy dimensions for cost/compliance tracking.                                                                                         | `GET https://backend.flashback.tech/aistats/daily`.                                                                                                                                                                                                                           |

***

## Golden rules (embed these into your AI prompts)

**1 - Use environment variables for secrets**. Never hard‑code credentials. Use variables such as `FB_S3_ENDPOINT`, `FB_KEY_ID`, `FB_KEY_SECRET`, `FB_JWT` and store them in a secret manager.

**2 - Construct the client using the Bridge Node pattern** `https://<api>-<region>-<provider>.flashback.tech`. For S3 SDKs, set `forcePathStyle: true` and a placeholder region (e.g. `us-east-1`).

**3 - Keep one protocol per process**. Flashback translates requests across providers, so a single S3 client can talk to GCS or Azure behind the Bridge Node; only use native GCS/Azure libraries if absolutely needed.

**4 - Pull stats for observability**. Use `/repo/stats` to monitor usage across attached buckets, `/bucket/stats` for per‑bucket metrics, `/stats/daily` and `/stats/minute` for aggregated usage trends, and `/stats/nodes/minute` for Bridge Node latency.

**5 - Handle quotas and errors gracefully**. Treat HTTP 403/429 or `QUOTA_EXCEEDED` errors as soft signals: pause writes to that bucket or switch to another provider.
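As a sketch of rule 5, a small classifier can decide whether an error should pause writes or trigger a provider switch. The `ApiError` shape is illustrative; map your SDK's error object onto it.

```typescript
// Decide whether an error is a soft quota signal (per rule 5) that should
// pause writes to the bucket or trigger a switch to another provider.
interface ApiError {
  status?: number; // HTTP status, if available
  code?: string;   // provider/platform error code, if available
}

export function isSoftQuotaSignal(err: ApiError): boolean {
  if (err.status === 403 || err.status === 429) return true;
  if (err.code === 'QUOTA_EXCEEDED') return true;
  return false;
}
```

Call this in your write path's catch block before deciding whether to retry, pause, or fail hard.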

**6 - Include retries with exponential backoff**. Use jitter and limit the number of attempts.
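Rule 6 can be reduced to one pure helper, sketched here with full jitter and a hard cap (the base and cap values are assumptions to tune for your workload):

```typescript
// Delay before the nth retry: exponential backoff with full jitter,
// capped at capMs. attempt is 1-based.
export function backoffDelayMs(attempt: number, baseMs = 200, capMs = 30_000): number {
  const exp = Math.min(capMs, baseMs * 2 ** (attempt - 1));
  return Math.random() * exp; // full jitter: uniform in [0, exp)
}
```

A retry loop then sleeps for `backoffDelayMs(attempt)` between attempts and gives up after a fixed count.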

***

## Vibe‑Coding for AI LLM (OpenAI‑compatible via Gateway)

Use this section when you want your AI assistant to generate application code for prompts/completions, not only storage operations.

### AI LLM building blocks

* **AI LLM resources**: provider configurations managed in **AI → AI LLM** (provider type, base URL, secret key).
* **Repositories**: attach one or more AI LLM resources to expose a stable OpenAI-compatible endpoint from Flashback.
* **AI API keys**: repository-scoped keys used as `Authorization: Bearer <key>` for AI calls.
* **Endpoint pattern**: OpenAI-compatible gateway URL, typically like `https://openai-<region>-<provider>.flashback.tech/v1`.
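The building blocks above can be combined into a single pure function that assembles a gateway request. This is a sketch: it assumes `FB_OPENAI_BASE_URL` already includes `/v1`, and it only builds the request so the network call stays in your client wrapper.

```typescript
// Assemble an OpenAI-compatible chat completion request from the
// Flashback gateway building blocks. Pure: returns URL, headers, and
// body without performing the network call.
export function buildChatRequest(
  baseUrl: string, // e.g. https://openai-<region>-<provider>.flashback.tech/v1
  apiKey: string,  // repository-scoped AI API key
  model: string,
  userPrompt: string,
) {
  return {
    url: `${baseUrl.replace(/\/$/, '')}/chat/completions`,
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model,
      messages: [{ role: 'user', content: userPrompt }],
    }),
  };
}
```

Keeping request assembly pure makes it easy to unit-test key handling without hitting the gateway.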

### AI LLM golden rules

**A1 - Use dedicated AI env vars**: `FB_OPENAI_BASE_URL`, `FB_OPENAI_API_KEY`, `FB_MODEL` and optionally `FB_AI_TIMEOUT_MS`.

**A2 - Keep OpenAI contract stable**: code against `/v1/chat/completions` (or OpenAI SDK equivalents) and keep provider switching in Flashback config, not in application code.

**A3 - Separate keys by workload**: never reuse storage keys for AI calls, and avoid sharing one AI key across multiple services.

**A4 - Build safe fallbacks**: handle transient errors (429/5xx/timeouts) with retries, jitter, and model/provider fallback logic.

**A5 - Add prompt governance checks**: include basic output validation and redact sensitive data before sending prompts where required.
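A minimal sketch of the A5 redaction step, masking email addresses and strings that look like API keys before a prompt leaves your service. The patterns are illustrative only; extend them to match your own PII and secret formats.

```typescript
// Minimal pre-send redaction pass: masks email addresses and anything
// that looks like an API key or token. Illustrative patterns only.
export function redactPrompt(text: string): string {
  return text
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, '[REDACTED_EMAIL]')
    .replace(/\b(?:sk|key|tok)[-_][A-Za-z0-9]{16,}\b/g, '[REDACTED_KEY]');
}
```

Run this on user-supplied input before it reaches the chat completion request builder.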

### Quick AI smoke test (cURL)

```bash
curl -sS "$FB_OPENAI_BASE_URL/chat/completions" \
  -H "Authorization: Bearer $FB_OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "'"$FB_MODEL"'",
    "messages": [
      {"role":"system","content":"You are concise."},
      {"role":"user","content":"Return exactly: pong"}
    ],
    "temperature": 0
  }'
```

Expected result: HTTP 200 and a response payload with `choices[0].message.content`.
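A small validator for that expected payload keeps the smoke test strict, failing loudly when the response shape is wrong instead of silently passing on an empty reply:

```typescript
// Extract and validate the assistant reply from an OpenAI-compatible
// chat completion payload; throws if choices[0].message.content is absent.
export function extractReply(payload: unknown): string {
  const content = (payload as { choices?: { message?: { content?: unknown } }[] })
    ?.choices?.[0]?.message?.content;
  if (typeof content !== 'string' || content.length === 0) {
    throw new Error('unexpected completion payload: missing choices[0].message.content');
  }
  return content;
}
```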

### Unified AI prompt template for Vibe coders

```txt
Role: Senior backend engineer integrating Flashback AI Gateway.

Context:
- I provide FB_OPENAI_BASE_URL, FB_OPENAI_API_KEY, FB_MODEL, and optionally FB_AI_TIMEOUT_MS.
- The endpoint is OpenAI-compatible and app code must stay provider-agnostic.
- Use Flashback AI components correctly: AI LLM config, repository-scoped AI API key, and (if needed) Conversation API.

Requirements:
1) Build a typed AI client wrapper with timeout, retries, jittered exponential backoff, and structured errors.
2) Implement:
   - generateAnswer(input)
   - healthcheck() using a minimal completion call
   - optional conversation helpers using:
     - POST /conversation
     - POST /conversation/{conversationId}/prompt
     - GET /conversation/{conversationId}/messages
3) Handle 429/5xx/timeouts with capped retries and fallback model/provider logic.
4) Read secrets only from env vars and never log prompt secrets, API keys, or raw PII.
5) Add basic governance hooks: redact sensitive input before sending and validate outputs.
6) Add observability hooks for AI usage checks (for example daily stats endpoint).

Output format:
- Production-ready code first.
- Then a concise runbook for key rotation, fallback behavior, and incident response.
```

***

## Vibe‑Coder prompt templates

Use these templates as-is, or with minor changes, to instruct your AI. They embed all of the guardrails above.

### General backend integration prompt

> **Role**: Senior backend engineer integrating with Flashback.\
> **Context**: Flashback exposes S3/GCS/Blob via Bridge Nodes. The endpoint pattern is `https://<api>-<region>-<provider>.flashback.tech` . I will provide env vars `FB_S3_ENDPOINT`, `FB_KEY_ID`, `FB_KEY_SECRET`, `FB_JWT`, `FB_REPO_ID`. Use path‑style addressing for S3 SDKs. Provide production‑ready code with retries, timeouts and minimal dependencies.\
> **Requirements**:\
> – Configure a client against the Bridge Node.\
> – Implement functions `putObject(bucket, key, bytes)`, `getObject(bucket, key)`, `deleteObject(bucket, key)`.\
> – Add telemetry helpers calling `GET /repo/stats` and `GET /stats/minute` or `GET /stats/nodes/minute`.\
> – Use env vars for secrets.\
> – Include a smoke test.\
> **Docs**: Bridge Node endpoint pattern; repository and performance statistics. Use <https://docs.flashback.tech/> for technical references.\
> **Output**: Code first, then short explanation.

### S3 client scaffold (TypeScript)

> Build `fbS3Client.ts` exporting a cached `S3Client`.\
> – Use `endpoint = process.env.FB_S3_ENDPOINT` (Bridge Node).\
> – Set `forcePathStyle = true` and `region = "us-east-1"`.\
> – Provide functions `putObject`, `getObject` and `deleteObject`.\
> – Use retries with jitter (max 3 attempts) and 30‑second timeouts.

### GCS/Azure variants

> If the project demands native GCS or Azure SDKs, show how to point their `apiEndpoint` or `BlobServiceClient` to the Bridge Node (e.g., `https://gcs-eu-central-1-gcp.flashback.tech` for GCS). Note that Flashback recommends using S3 SDKs for simplicity since they translate across providers.

### Migration worker prompt

> Implement a migration worker that tries provider‑native copy first (S3 `CopyObject`, GCS `RewriteObject`, Azure `copy_from_url`). If the providers differ, stream via `GetObject` → `PutObject`. Expose a `--dry-run` option, iterate through keys, rate‑limit concurrency and publish counters. Pause or switch if a 403/429 or `QUOTA_EXCEEDED` error occurs.

### Latency and credit‑aware routing prompt

> Fetch `GET /stats/nodes/minute` to find the lowest‑latency Bridge Node. Then fetch `GET /repo/stats` and decide which bucket to use based on credit usage. Provide a function that re‑instantiates the client when the endpoint changes.
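The selection step of that prompt can be sketched as a pure function. The `{ endpoint, latencyMs }` shape is an assumption for illustration; check the actual `/stats/nodes/minute` response schema before relying on it.

```typescript
// Pick the lowest-latency Bridge Node from /stats/nodes/minute data.
// NodeStat field names are illustrative, not the documented schema.
interface NodeStat {
  endpoint: string;
  latencyMs: number;
}

export function lowestLatencyEndpoint(stats: NodeStat[]): string | undefined {
  if (stats.length === 0) return undefined;
  return stats.reduce((best, s) => (s.latencyMs < best.latencyMs ? s : best)).endpoint;
}
```

When the returned endpoint differs from the current one, rebuild the S3 client against it.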

***

## Ready‑to‑use scaffolds

### S3 client with caching (Node.js)

```ts
// fbS3Client.ts
import { S3Client } from '@aws-sdk/client-s3';

const cache = new Map<string, S3Client>();

export function s3ClientFor(endpoint: string, accessKeyId: string, secretAccessKey: string) {
  if (!cache.has(endpoint)) {
    cache.set(endpoint, new S3Client({
      endpoint,
      region: 'us-east-1', // unused by Flashback but required
      credentials: { accessKeyId, secretAccessKey },
      forcePathStyle: true,
    }));
  }
  return cache.get(endpoint)!;
}
```

### Basic operations with retries

```ts
// s3Ops.ts
import { PutObjectCommand, GetObjectCommand, DeleteObjectCommand } from '@aws-sdk/client-s3';
import { s3ClientFor } from './fbS3Client';

const ENDPOINT = process.env.FB_S3_ENDPOINT!;
const KEY_ID = process.env.FB_KEY_ID!;
const KEY_SECRET = process.env.FB_KEY_SECRET!;

async function withRetry<T>(fn: () => Promise<T>, attempts = 3, delay = 200): Promise<T> {
  try {
    return await fn();
  } catch (e) {
    if (attempts <= 1) throw e;
    await new Promise((res) => setTimeout(res, delay + Math.random() * 100));
    return withRetry(fn, attempts - 1, delay * 2);
  }
}

export async function putObject(bucket: string, key: string, body: Buffer) {
  const s3 = s3ClientFor(ENDPOINT, KEY_ID, KEY_SECRET);
  return withRetry(() => s3.send(new PutObjectCommand({ Bucket: bucket, Key: key, Body: body })));
}

export async function getObject(bucket: string, key: string) {
  const s3 = s3ClientFor(ENDPOINT, KEY_ID, KEY_SECRET);
  const res = await withRetry(() => s3.send(new GetObjectCommand({ Bucket: bucket, Key: key })));
  return res.Body;
}

export async function deleteObject(bucket: string, key: string) {
  const s3 = s3ClientFor(ENDPOINT, KEY_ID, KEY_SECRET);
  return withRetry(() => s3.send(new DeleteObjectCommand({ Bucket: bucket, Key: key })));
}
```

### Stats helpers (Node.js)

```ts
// fbStats.ts
const BASE = 'https://backend.flashback.tech';
const H = { Accept: 'application/json', Authorization: `Bearer ${process.env.FB_JWT}` };

export async function getRepoStats(repoId?: string) {
  const url = repoId ? `${BASE}/repo/stats?repoId=${repoId}` : `${BASE}/repo/stats`;
  const r = await fetch(url, { headers: H });
  if (!r.ok) throw new Error(`repo stats: ${r.status}`);
  return r.json();
}

export async function getBucketStats(bucketId?: string) {
  const url = bucketId ? `${BASE}/bucket/stats?bucketId=${bucketId}` : `${BASE}/bucket/stats`;
  const r = await fetch(url, { headers: H });
  if (!r.ok) throw new Error(`bucket stats: ${r.status}`);
  return r.json();
}

export async function getDailyStats(startDate?: string, endDate?: string) {
  const qs = [];
  if (startDate) qs.push(`startDate=${startDate}`);
  if (endDate) qs.push(`endDate=${endDate}`);
  const url = `${BASE}/stats/daily${qs.length ? '?' + qs.join('&') : ''}`;
  const r = await fetch(url, { headers: H });
  if (!r.ok) throw new Error(`daily stats: ${r.status}`);
  return r.json();
}

export async function getMinuteStats(repoId?: string, bucketId?: string) {
  const qs: string[] = [];
  if (repoId) qs.push(`repoId=${repoId}`);
  if (bucketId) qs.push(`bucketId=${bucketId}`);
  const url = `${BASE}/stats/minute${qs.length ? '?' + qs.join('&') : ''}`;
  const r = await fetch(url, { headers: H });
  if (!r.ok) throw new Error(`minute stats: ${r.status}`);
  return r.json();
}

export async function getNodeMinuteStats(bucketId?: string) {
  const url = bucketId ? `${BASE}/stats/nodes/minute?bucketId=${bucketId}` : `${BASE}/stats/nodes/minute`;
  const r = await fetch(url, { headers: H });
  if (!r.ok) throw new Error(`node minute stats: ${r.status}`);
  return r.json();
}
```

### Migration helper (S3)

```ts
// migrate.ts
import { CopyObjectCommand, GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';
import { s3ClientFor } from './fbS3Client';
const s3 = s3ClientFor(process.env.FB_S3_ENDPOINT!, process.env.FB_KEY_ID!, process.env.FB_KEY_SECRET!);

// Native provider copy when source and destination share the same provider via the Bridge Node
export async function copyNative(srcBucket: string, srcKey: string, dstBucket: string, dstKey: string) {
  await s3.send(new CopyObjectCommand({ Bucket: dstBucket, Key: dstKey, CopySource: `${srcBucket}/${encodeURIComponent(srcKey)}` }));
}

// Streamed copy for cross‑provider migration
export async function copyStreamed(srcBucket: string, srcKey: string, dstBucket: string, dstKey: string) {
  const obj = await s3.send(new GetObjectCommand({ Bucket: srcBucket, Key: srcKey }));
  await s3.send(new PutObjectCommand({ Bucket: dstBucket, Key: dstKey, Body: obj.Body as any }));
}
```

***

## Prompt cookbook

Use these high‑level prompt recipes to generate code or runbooks via AI.

* **Bootstrap a Flashback service**: “Create a Node.js service that defines `fbS3Client.ts`, `s3Ops.ts`, `fbStats.ts`, loads env vars, and provides npm scripts `smoke:putgetdel` and `stats:print`. Use the Bridge Node endpoint pattern, set `forcePathStyle = true`, call `/repo/stats` and `/stats/minute`, and include a 60‑second smoke test that writes, reads and deletes an object.”
* **Latency‑aware selector**: “Add a `selectEndpoint()` function that fetches `GET /stats/nodes/minute` and chooses the Bridge Node with the lowest latency. If the selected endpoint changes, rebuild the S3 client and log the change.”
* **Credit‑aware routing**: “Before writing, call `GET /repo/stats`. Compare usage to configured caps for each bucket/provider; choose the first bucket under its cap. On 403/429 or `QUOTA_EXCEEDED`, switch to the next provider.”
* **List available buckets**: “Call `GET /bucket/stats` or `GET /bucket/available` to show attachable bucket IDs. Show an operator how to attach them to a repository.”
* **Operator runbook**: “Generate a runbook for rotating repo keys, validating bucket connectivity, executing a dry‑run migration, verifying billing paths, and defining a rollback plan.”
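The credit-aware routing recipe above reduces to one selection function. The `BucketUsage` fields are illustrative; map them from your `/repo/stats` response and configured caps.

```typescript
// Choose the first bucket whose usage is under its configured cap,
// per the credit-aware routing recipe. Field names are illustrative.
interface BucketUsage {
  bucketId: string;
  usedBytes: number;
  capBytes: number;
}

export function pickBucketUnderCap(buckets: BucketUsage[]): string | undefined {
  return buckets.find((b) => b.usedBytes < b.capBytes)?.bucketId;
}
```

If this returns `undefined`, every bucket is at or over its cap: pause writes and alert the operator.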

***

## Guardrails to embed in prompts

* Use only the required environment variables for your path: Storage (`FB_S3_ENDPOINT`, `FB_KEY_ID`, `FB_KEY_SECRET`, `FB_JWT`, `FB_REPO_ID`) and/or AI (`FB_OPENAI_BASE_URL`, `FB_OPENAI_API_KEY`, `FB_MODEL`).
* **Never** print secrets or tokens in logs or code.
* Use timeouts (30 seconds) and exponential backoff with jitter.
* Prefer the S3 SDK for storage unless explicitly asked to use GCS or Azure; for AI calls, prefer OpenAI-compatible SDK flows against the Flashback endpoint.
* When using the S3 SDK, set `forcePathStyle: true` and pass the Bridge endpoint.
* Emit metrics by calling `/stats/minute` and `/stats/nodes/minute` regularly for dashboards and alerts; for AI, also track request latency, token usage, and error rates per model.

***

## Acceptance checklist

* ✅ **Smoke test passes**: the script writes, reads and deletes an object through the Bridge Node.
* ✅ **Secrets stay secret**: all credentials come from env vars and are not logged.
* ✅ **Stats wired**: the code queries `/repo/stats` and `/stats/minute` or `/stats/nodes/minute` and exposes data for dashboards.
* ✅ **Migration tool**: native and streamed copy paths implemented; supports `--dry-run` and concurrency control.
* ✅ **Fallback tested**: on `QUOTA_EXCEEDED` or HTTP 403/429 errors, the system pauses or switches providers and logs the event.
* ✅ **AI smoke test passes**: a `/chat/completions` call via `FB_OPENAI_BASE_URL` succeeds with the repository AI key.
* ✅ **AI isolation respected**: storage and AI keys are separated by workload and never logged.

***

## Troubleshooting playbook

* **403 / 429 or `QUOTA_EXCEEDED`**: Treat this as a soft limit violation. Switch to another bucket/provider or pause writes and alert the operator.
* **High latency**: Query `GET /stats/nodes/minute` to identify a faster Bridge Node; update the endpoint accordingly.
* **Invalid credentials**: Ensure you’re using repository‑scoped access keys for object operations and a Bearer token for management/statistics endpoints.
* **Stats endpoints fail**: Check that your Bearer token (`FB_JWT`) has not expired and that the `Accept: application/json` header is present on requests.
* **AI 401/403**: Confirm you are using a repository AI API key (Bearer), not S3-style key pairs.
* **AI 404 or model errors**: Verify `FB_OPENAI_BASE_URL` includes `/v1` and `FB_MODEL` exists in attached AI LLM providers.

***

## Minimal end‑to‑end example

```ts
// index.ts
import 'dotenv/config';
import { putObject, getObject, deleteObject } from './s3Ops';
import { getRepoStats, getDailyStats, getNodeMinuteStats } from './fbStats';

async function main() {
  const bucket = process.env.FB_BUCKET!;
  await putObject(bucket, 'hello.txt', Buffer.from('hi'));
  const body = await getObject(bucket, 'hello.txt');
  console.log('read:', (await body?.transformToString?.()) || '<stream>');
  await deleteObject(bucket, 'hello.txt');

  const repoStats = await getRepoStats(process.env.FB_REPO_ID);
  console.log('repo stats sample:', JSON.stringify(repoStats).slice(0, 200), '…');

  const daily = await getDailyStats();
  const nodes = await getNodeMinuteStats();
  console.log('daily stats fetched:', !!daily, 'node stats fetched:', !!nodes);
}

main().catch((e) => { console.error(e); process.exit(1); });
```

`.env.example`:

```
FB_S3_ENDPOINT=https://s3-us-east-1-aws.flashback.tech
FB_KEY_ID=REPO_ACCESS_KEY_ID
FB_KEY_SECRET=REPO_SECRET_ACCESS_KEY
FB_JWT=BEARER_TOKEN_FOR_MANAGEMENT
FB_REPO_ID=repo-xxxxxxxx
FB_BUCKET=my-bucket-name
```

This example demonstrates the Bridge Node endpoint pattern, retrieves repository and node statistics, and shows how to write, read, and delete an object.

***

## Appendix – Prompt macros

Keep these macros handy for reuse in your Vibe prompts:

* **Flashback S3 client** – “Create `fbS3Client.ts` that returns a cached S3 client per endpoint; set `forcePathStyle: true`; use env vars; add retries and 30‑second timeout.”
* **Stats wiring** – “Implement `fbStats.ts` with functions calling `/repo/stats`, `/bucket/stats`, `/stats/daily`, `/stats/minute` and `/stats/nodes/minute`; each function returns parsed JSON or throws on failure.”
* **Migration (native → streamed)** – “Write `migrate.ts` that tries provider‑native copy first, then streams via S3 `GetObject`/`PutObject` if providers differ; add `--dry-run`, concurrency control and counters.”
* **Latency selector** – “Add `selectEndpoint()` that reads `/stats/nodes/minute`; if the difference in latency between nodes exceeds 20%, switch to the faster node and rebuild the client.”
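The 20% switching threshold in the latency-selector macro can be expressed directly, which avoids flapping between nodes with near-identical latency:

```typescript
// Switch endpoints only when the candidate node is more than `threshold`
// (default 20%) faster than the current node, per the latency-selector macro.
export function shouldSwitch(currentMs: number, candidateMs: number, threshold = 0.2): boolean {
  return candidateMs < currentMs * (1 - threshold);
}
```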

By following this guide, you can confidently instruct AI to generate secure, efficient and observability‑friendly integrations with the Flashback platform, avoiding vendor lock‑in and delivering production‑quality backends on the first try.
