# Configure an AI LLM

{% hint style="info" %}
Before following this guide, we strongly recommend reading [AI LLM](https://docs.flashback.tech/flashback-platform/cloud-and-ai-gateway/ai-llm) to understand how provider configurations, repositories, AI API keys, and policies work together in Flashback. You can also review the [AI LLM Management APIs](https://docs.flashback.tech/support-reference/platform-api-reference/ai-apis/ai-llms) if you want to automate setup.
{% endhint %}

{% hint style="danger" %}
This guide is experimental and may evolve with the platform. If something behaves differently in your workspace, please contact us on [Discord](https://discord.com/invite/yy8kyM5qFB).
{% endhint %}

## Properties

Each AI LLM configuration uses the following properties:

* **Configuration Name** (required)\
  Human-readable label of the provider connection in your workspace.
* **AI LLM Type** (required)\
  Provider compatibility type used by Flashback to route and validate calls.
* **API Endpoint** (required)\
  Base URL of the provider endpoint (for example `https://api.openai.com/v1`).
* **API Key** (optional)\
  Additional access key, required only by some providers or setups.
* **API Secret** (required)\
  Main secret/token used to authenticate requests.
* **Workspace** (required, auto-selected)\
  The current workspace where the configuration is created.
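The properties above can be sketched as a draft payload with a simple required-field check. This is an illustrative sketch only: the snake_case field names are assumptions, not the exact API schema.

```python
# Required properties from the list above; field names are assumptions.
REQUIRED_FIELDS = {"configuration_name", "ai_llm_type", "api_endpoint",
                   "api_secret", "workspace"}

def missing_fields(config: dict) -> set:
    """Return the required properties that are absent or empty in a draft."""
    return REQUIRED_FIELDS - {k for k, v in config.items() if v}

draft = {
    "configuration_name": "prod-openai-gateway",
    "ai_llm_type": "OPENAI",
    "api_endpoint": "https://api.openai.com/v1",
    "api_key": None,            # optional
    "api_secret": "sk-...",     # required; placeholder value
    "workspace": "my-workspace",
}
```

A pre-flight check like `missing_fields(draft)` returning an empty set corresponds to a form the UI would accept.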

## Access & security model

AI LLM credentials in Flashback follow the same security principles used for other connected resources:

* Secrets are encrypted at rest.
* Secrets are never returned in clear text by the APIs.
* Access is scoped by workspace permissions.
* Configuration names should be unique and explicit to avoid confusion during repository setup.

{% hint style="info" %}
For the full security model (encryption/decryption behavior, scope, and best practices), see [Security and Secret Encryption](https://docs.flashback.tech/support-reference/security-and-secret-encryption).
{% endhint %}
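Because secrets are never returned in clear text, any value you see after creation is masked. A minimal sketch of that masking behavior (the exact display format is an assumption, not the platform's actual rendering):

```python
def mask_secret(secret: str, visible: int = 4) -> str:
    """Show only the last few characters, as a UI might after creation."""
    if len(secret) <= visible:
        return "*" * len(secret)
    return "*" * (len(secret) - visible) + secret[-visible:]
```

The practical consequence: store the original secret in your own secret manager, because you cannot read it back from Flashback later.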

## AI LLM Type

When creating a configuration, select the provider type that matches your endpoint:

* **OPENAI**: OpenAI-compatible endpoints.
* **GOOGLE**: Google AI/Gemini-compatible setup.
* **ANTHROPIC**: Anthropic Claude-compatible setup.
* **AWS**: AWS AI services (e.g., Bedrock-compatible usage through your chosen endpoint/proxy).
* **OTHER**: Any custom endpoint compatible with one of the supported API patterns (commonly OpenAI-compatible on-prem/private providers).

All provider types currently use the same form fields in the UI:

* Configuration Name
* API Endpoint
* API Key (optional)
* API Secret
* Workspace (auto-defined)
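The provider types map naturally to well-known public base URLs. The mapping below is a convenience sketch: the listed URLs are the providers' commonly documented endpoints, but you should verify them against your provider's own documentation, and AWS/OTHER always depend on your deployment.

```python
# Commonly documented public base URLs per provider type.
# Verify against your provider's docs; AWS and OTHER are deployment-specific.
DEFAULT_ENDPOINTS = {
    "OPENAI": "https://api.openai.com/v1",
    "GOOGLE": "https://generativelanguage.googleapis.com",
    "ANTHROPIC": "https://api.anthropic.com",
    "AWS": None,    # region/proxy specific (e.g. a Bedrock-compatible endpoint)
    "OTHER": None,  # self-hosted or private endpoint
}

def suggested_endpoint(llm_type: str):
    """Return a starting-point base URL for a provider type, if one exists."""
    return DEFAULT_ENDPOINTS.get(llm_type.upper())
```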

## Instructions

Follow these steps to connect an AI LLM provider in the Flashback Platform.

{% stepper %}
{% step %}

#### Open AI LLM management

In the left menu, go to **AI** (or **AI → Overview**), then open **AI LLM**.\
You can start creation from either:

* the **+ / Add** action in **Overview**, or
* the **+ / Add** action in **AI LLM**.

{% endstep %}

{% step %}

#### Start a new AI LLM configuration

Click **Add AI LLM** (or equivalent **+** action) to open the creation form.
{% endstep %}

{% step %}

#### Select the AI LLM type

Choose the provider compatibility group:

* OpenAI
* Google Cloud
* Anthropic Cloud
* AWS
* Other

Use **Other** for self-hosted/decentralized endpoints that are API-compatible with supported provider patterns.
{% endstep %}

{% step %}

#### Fill in configuration fields

Complete the form with your provider values:

* **Configuration Name**: clear unique name (example: `prod-openai-gateway`).
* **API Endpoint**: provider base URL.
* **API Key**: optional, only if your provider requires this extra key.
* **API Secret**: required provider secret/token.
* **Workspace**: already selected from your current workspace context.

{% hint style="warning" %}
Double-check endpoint protocol (`https://`), path (`/v1` when needed), and token format before saving. Most validation failures come from endpoint typos or invalid/expired secrets.
{% endhint %}
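The pre-save checks in the hint above can be expressed as a small helper. This is an illustrative sketch of the kinds of endpoint typos to catch, not the platform's actual validation logic:

```python
from urllib.parse import urlparse

def endpoint_warnings(url: str) -> list:
    """Flag common endpoint typos before saving (illustrative, not exhaustive)."""
    parsed = urlparse(url)
    issues = []
    if parsed.scheme != "https":
        issues.append("use https://")
    if not parsed.netloc:
        issues.append("missing host")
    if url.endswith("/"):
        issues.append("drop the trailing slash")
    return issues
```

For example, `endpoint_warnings("http://api.openai.com/v1")` flags the missing TLS, while the correct `https://` form passes cleanly.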
{% endstep %}

{% step %}

#### Create the configuration

Click **Create configuration**. The AI LLM resource is now attached to your workspace and appears in the AI LLM list.
{% endstep %}

{% step %}

#### (Recommended) Validate and operate safely

After creation:

1. Validate connectivity (UI/API validation flow).
2. Attach the AI LLM to a Repository when needed.
3. Rotate keys regularly and update configuration when credentials change.
4. Remove unused configurations to keep workspace governance clean.
{% endstep %}
{% endstepper %}
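If you automate setup through the [AI LLM Management APIs](https://docs.flashback.tech/support-reference/platform-api-reference/ai-apis/ai-llms), the creation call is an authenticated JSON `POST`. The sketch below only builds the request without sending it; the API host, path, and field names are assumptions for illustration, so check the API reference for the real schema.

```python
import json
from urllib.request import Request

API_BASE = "https://api.flashback.tech"  # assumption: replace with your API host
CREATE_PATH = "/ai-llms"                 # hypothetical path; see the API reference

def build_create_request(token: str, config: dict) -> Request:
    """Build (but do not send) the HTTP request that would create a configuration."""
    return Request(
        API_BASE + CREATE_PATH,
        data=json.dumps(config).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending the request (e.g. with `urllib.request.urlopen`) would then create the configuration in the workspace tied to your token.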

## Troubleshooting tips

* **Validation fails with authentication error**: verify API Secret (and API Key if used), then rotate credentials.
* **Provider returns endpoint/model errors**: verify endpoint base URL and provider compatibility type.
* **Configuration not visible where expected**: check current workspace and your workspace permissions.
* **Calls blocked by policy**: review applied AI Policies at organization/workspace/repository scope.
