# Send a Prompt

Use this guide to validate AI/LLM integration through the **Cloud and AI Gateway** after configuring your repository.

Unlike storage tests, AI tests use the repository's **OpenAI-compatible endpoint type** and an **AI LLM API key** generated for that repository.

{% hint style="info" %}
This is the AI equivalent of **Store An Object**: you confirm connectivity, authentication, routing, and response behavior before shipping production traffic.
{% endhint %}

## Prerequisites

Before running the examples, make sure all of the following are complete:

1. At least one AI provider is configured in **AI → AI LLM**.
2. The AI resource is attached to your repository.
3. The repository endpoint type is **OPENAI**.
4. You created a repository **AI LLM API key**.

Related setup guides:

* [Configure an AI LLM](https://docs.flashback.tech/guides/setup-the-cloud-and-ai-gateway/start-with-cloud-storage/create-a-bucket-1)
* [Create a Repository](https://docs.flashback.tech/guides/setup-the-cloud-and-ai-gateway/start-with-cloud-storage-1/create-a-repository)
* [Configure a Repository](https://docs.flashback.tech/guides/setup-the-cloud-and-ai-gateway/start-with-cloud-storage-1/program-with-flashback)

## Required endpoint and credentials

From your repository details, collect:

* **OpenAI-compatible base URL** (example: `https://openai-us-east-1-aws.flashback.tech/v1`)
* **AI LLM API key** (used as a Bearer token)
* **Model identifier** available in your configured provider(s)

Then export them as environment variables:

```bash
export FB_OPENAI_BASE_URL="https://openai-us-east-1-aws.flashback.tech/v1"
export FB_OPENAI_API_KEY="YOUR_AI_API_KEY"
export FB_MODEL="gpt-4o-mini"
```
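
Before sending any requests, you can sanity-check these values locally. The following is a minimal Python sketch (the `FB_*` variable names and the `/v1` suffix match the export commands above; the specific checks are illustrative, not an official validation routine):

```python
import os

def check_ai_env() -> list[str]:
    """Return a list of problems found in the FB_* environment variables."""
    problems = []
    base_url = os.environ.get("FB_OPENAI_BASE_URL", "")
    api_key = os.environ.get("FB_OPENAI_API_KEY", "")
    model = os.environ.get("FB_MODEL", "")

    if not base_url.startswith("https://"):
        problems.append("FB_OPENAI_BASE_URL should start with https://")
    if not base_url.rstrip("/").endswith("/v1"):
        problems.append("FB_OPENAI_BASE_URL should include the /v1 path")
    if not api_key:
        problems.append("FB_OPENAI_API_KEY is not set")
    if not model:
        problems.append("FB_MODEL is not set")
    return problems
```

An empty list means the variables look plausible; a missing `/v1` suffix is a common cause of the 404 errors described later in this guide.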

***

## Quick connectivity check (cURL)

Run a simple chat completion request through Flashback:

```bash
curl -sS "$FB_OPENAI_BASE_URL/chat/completions" \
  -H "Authorization: Bearer $FB_OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "'"$FB_MODEL"'",
    "messages": [
      {"role": "system", "content": "You are a concise assistant."},
      {"role": "user", "content": "Explain in one sentence what Flashback Gateway does."}
    ],
    "temperature": 0.2
  }'
```

If the request succeeds, you receive a JSON response with the generated text at `choices[0].message.content`.
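
The response follows the standard OpenAI chat-completion shape. As a sketch, extracting the text from a truncated, illustrative response body looks like this (real responses include additional fields such as `id`, `created`, and `usage`):

```python
import json

# Truncated, illustrative chat-completion response body.
raw = """
{
  "object": "chat.completion",
  "model": "gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Flashback Gateway routes ..."},
      "finish_reason": "stop"
    }
  ]
}
"""

body = json.loads(raw)
# The generated text lives at choices[0].message.content.
print(body["choices"][0]["message"]["content"])
```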

***

## Python example

Install the dependency:

```bash
pip install openai
```

```python
# send_prompt.py
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["FB_OPENAI_API_KEY"],
    base_url=os.environ["FB_OPENAI_BASE_URL"],
)

response = client.chat.completions.create(
    model=os.environ.get("FB_MODEL", "gpt-4o-mini"),
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "List 3 benefits of using an API gateway for AI workloads."}
    ],
    temperature=0.3,
)

print(response.choices[0].message.content)
```

Run:

```bash
python send_prompt.py
```

## JavaScript example

Install the dependency:

```bash
npm install openai
```

```javascript
// sendPrompt.mjs
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.FB_OPENAI_API_KEY,
  baseURL: process.env.FB_OPENAI_BASE_URL,
});

const response = await client.chat.completions.create({
  model: process.env.FB_MODEL || "gpt-4o-mini",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Give me 3 best practices for AI key rotation." }
  ],
  temperature: 0.3,
});

console.log(response.choices[0].message.content);
```

Run:

```bash
node sendPrompt.mjs
```

## Gateway integration validation checklist

When the call succeeds, validate these integration points:

* Authentication works with the repository-level **AI LLM API key**.
* The request is routed through your Flashback Gateway endpoint (not directly to the provider).
* The selected `model` is authorized and available in your provider configuration.
* The same application code keeps a stable OpenAI-compatible contract while the backend provider and policies evolve in Flashback.
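
One simple way to confirm traffic targets the gateway host rather than a provider directly is to inspect the hostname of your configured base URL. A minimal sketch, assuming the `flashback.tech` domain from the example endpoint above:

```python
from urllib.parse import urlparse

def is_gateway_url(base_url: str, gateway_suffix: str = "flashback.tech") -> bool:
    """Check whether base_url points at the gateway domain rather than a provider."""
    host = urlparse(base_url).hostname or ""
    return host == gateway_suffix or host.endswith("." + gateway_suffix)

print(is_gateway_url("https://openai-us-east-1-aws.flashback.tech/v1"))  # True
print(is_gateway_url("https://api.openai.com/v1"))                       # False
```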

## Common errors and fixes

* **401 Unauthorized / 403 Forbidden**
  * Verify you are using an AI key (not a storage key).
  * Check key status and access mode in repository API keys.
* **404 Not Found**
  * Confirm `FB_OPENAI_BASE_URL` includes `/v1`.
  * Ensure repository endpoint type is OPENAI.
* **400 model not found / invalid model**
  * Use a model exposed by your configured provider(s).
  * Recheck provider mapping/policies in AI LLM configuration.
* **Timeout / network issues**
  * Validate DNS/network access to your Flashback endpoint.
  * Confirm bridge node health and connectivity.
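
Timeouts and transient network errors are usually worth retrying with exponential backoff. The following is a generic client-side sketch, not a Flashback-specific API; the `send` callable stands in for any of the chat-completion calls shown above (the `openai` Python SDK also exposes its own `max_retries` client option):

```python
import time

def with_retries(send, attempts: int = 3, base_delay: float = 0.5):
    """Call send() up to `attempts` times, doubling the delay between tries."""
    for attempt in range(attempts):
        try:
            return send()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** attempt))
```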

## Next steps

After this test succeeds, you can:

* implement fallback and reliability patterns for multi-provider AI routing,
* apply governance/policy controls,
* move to production with key rotation and observability.
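
As a preview of the fallback pattern, here is a minimal client-side sketch that tries models in order and returns the first success. The `complete` callable and model names are illustrative assumptions; in practice, routing and fallback can also be handled by your Flashback AI LLM configuration:

```python
def complete_with_fallback(complete, models):
    """Try each model in order; return (model, result) for the first success."""
    last_error = None
    for model in models:
        try:
            return model, complete(model)
        except Exception as exc:
            last_error = exc  # remember the failure and try the next model
    raise RuntimeError(f"all models failed: {models}") from last_error
```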

Explore practical patterns in [AI LLM Use Cases](https://docs.flashback.tech/guides/explore-use-cases/ai-llm).
