Send a Prompt
Use this guide to validate AI/LLM integration through the Cloud and AI Gateway after configuring your repository.
Unlike storage tests, AI tests use the repository's OpenAI-compatible endpoint type and an AI LLM API key generated for that repository.
This is the AI equivalent of Store An Object: you confirm connectivity, authentication, routing, and response behavior before shipping production traffic.
Prerequisites
Before running examples, make sure all of the following are complete:
At least one AI provider is configured in AI → AI LLM.
The AI resource is attached to your repository.
The repository endpoint type is OPENAI.
You created a repository AI LLM API key.
Related setup guides:
Required endpoint and credentials
From your repository details, collect:
OpenAI-compatible base URL (example: https://openai-us-east-1-aws.flashback.tech/v1)
AI API key (Bearer token format)
Model identifier available in your configured provider(s)
Then export them as environment variables:
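A minimal sketch of the exports. FB_OPENAI_BASE_URL is the variable name used later in this guide; FB_AI_API_KEY and FB_MODEL are illustrative names, and the key and model values below are placeholders you must replace with your own:

```shell
# Base URL from your repository details (must include /v1)
export FB_OPENAI_BASE_URL="https://openai-us-east-1-aws.flashback.tech/v1"

# Repository AI LLM API key (placeholder value)
export FB_AI_API_KEY="sk-your-repository-ai-key"

# A model exposed by your configured provider (example value)
export FB_MODEL="gpt-4o-mini"
```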
Quick connectivity check (cURL)
Run a simple chat completion request through Flashback:
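A sketch of the request using the standard OpenAI-compatible /chat/completions route and the environment variables exported above (FB_AI_API_KEY and FB_MODEL are the illustrative names assumed earlier):

```shell
curl "$FB_OPENAI_BASE_URL/chat/completions" \
  -H "Authorization: Bearer $FB_AI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "'"$FB_MODEL"'",
    "messages": [{"role": "user", "content": "Say hello from Flashback"}]
  }'
```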
If successful, you should receive a JSON response containing choices[0].message.content.
Python example
Install dependency:
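The guide does not name the package; since the endpoint is OpenAI-compatible, the official openai SDK is assumed here:

```shell
pip install openai
```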
Run:
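A minimal sketch using the openai Python SDK pointed at the gateway, assuming the environment variables from the setup step (FB_AI_API_KEY and FB_MODEL are illustrative names):

```python
import os

from openai import OpenAI

# Point the OpenAI SDK at the Flashback gateway instead of api.openai.com
client = OpenAI(
    base_url=os.environ["FB_OPENAI_BASE_URL"],
    api_key=os.environ["FB_AI_API_KEY"],
)

response = client.chat.completions.create(
    model=os.environ["FB_MODEL"],
    messages=[{"role": "user", "content": "Say hello from Flashback"}],
)

print(response.choices[0].message.content)
```

Because the base URL is the only gateway-specific part, the same code works unchanged if the backend provider behind the repository changes.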
JavaScript example
Install dependency:
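As with the Python example, the package is not named in the guide; the official openai Node SDK is assumed:

```shell
npm install openai
```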
Run:
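A minimal sketch using the openai Node SDK (ES modules, Node 18+ for top-level await), assuming the environment variables from the setup step (FB_AI_API_KEY and FB_MODEL are illustrative names):

```javascript
import OpenAI from "openai";

// Point the OpenAI SDK at the Flashback gateway instead of api.openai.com
const client = new OpenAI({
  baseURL: process.env.FB_OPENAI_BASE_URL,
  apiKey: process.env.FB_AI_API_KEY,
});

const response = await client.chat.completions.create({
  model: process.env.FB_MODEL,
  messages: [{ role: "user", content: "Say hello from Flashback" }],
});

console.log(response.choices[0].message.content);
```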
Gateway integration validation checklist
When the call succeeds, validate these integration points:
Authentication works with the repository-level AI LLM API key.
The request is routed through your Flashback Gateway endpoint (not directly to provider).
The selected model is authorized and available in your provider configuration.
The same app code can keep a stable OpenAI-compatible contract while the backend provider/policy evolves in Flashback.
Common errors and fixes
401 Unauthorized / 403 Forbidden
Verify you are using an AI key (not a storage key).
Check key status and access mode in repository API keys.
404 Not Found
Confirm FB_OPENAI_BASE_URL includes /v1.
Ensure the repository endpoint type is OPENAI.
400 model not found / invalid model
Use a model exposed by your configured provider(s).
Recheck provider mapping/policies in AI LLM configuration.
Timeout / network issues
Validate DNS/network access to your Flashback endpoint.
Confirm bridge node health and connectivity.
Next steps
After this test succeeds, you can:
implement fallback and reliability patterns for multi-provider AI routing,
apply governance/policy controls,
move to production with key rotation and observability.
Explore practical patterns in AI LLM Use Cases.