# AI LLMs

The AI LLM Management APIs allow you to configure, manage, and monitor AI/Large Language Model provider integrations within the Flashback platform. These APIs enable you to connect to various AI providers and use them for AI-powered features across your repositories.

## Supported AI Providers

* **OpenAI** - GPT-5, GPT-4, GPT-3.5, and other OpenAI models
* **Google** - Gemini, PaLM, and Google AI services
* **Anthropic** - Claude models

{% hint style="info" %}
Our platform is fully OpenAI-compatible: you can also connect any on-prem or decentralized solution that exposes an OpenAI-compatible API.
{% endhint %}
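
For instance, a self-hosted OpenAI-compatible server could be registered using the same configuration shape shown in the examples below. This is a sketch: the endpoint URL, key, and workspace ID are placeholders.

```typescript
// Hypothetical payload for a self-hosted, OpenAI-compatible server.
// Only the endpoint differs from a hosted OpenAI configuration.
const onPremConfig = {
  name: "On-Prem LLM Server",
  aiType: "OPENAI", // same wire format as hosted OpenAI
  endpoint: "https://llm.internal.example.com/v1", // placeholder URL
  secret: "local-api-key", // placeholder credential
  workspaceId: "workspace-123",
};

// Registered exactly like a hosted provider:
// await client.createAiLlm(onPremConfig);
```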

## Key Features

* **Centralized Configuration Management** - Store and manage AI provider credentials securely in one place
* **Multi-Provider Support** - Configure multiple AI providers and switch between them
* **Workspace Integration** - AI configurations are scoped to workspaces with proper access controls
* **Usage Statistics** - Track API calls, token consumption, and policy enforcement
* **Credential Security** - All API keys and secrets are encrypted at rest and never returned in API responses
* **Configuration Validation** - Test configurations to ensure connectivity and valid credentials

## AI LLMs API Calls

| Method                                                           | API Reference                                                                                                                                                    | Description                                                      |
| ---------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------- |
| <mark style="color:orange;">`POST`</mark>`/ai/llm`               | [**post\_\_ai\_llm**](https://docs.flashback.tech/support-reference/platform-api-reference/ai-apis/ai-llms/post_ai_llm)                                          | Create a new AI LLM configuration.                               |
| <mark style="color:green;">`GET`</mark>`/ai/llm`                 | [**get\_\_ai\_llm**](https://docs.flashback.tech/support-reference/platform-api-reference/ai-apis/ai-llms/get_ai_llm)                                            | List all AI LLM configurations (with optional workspace filter). |
| <mark style="color:green;">`GET`</mark>`/ai/llm/available`       | [**get\_\_ai\_llm\_available**](https://docs.flashback.tech/support-reference/platform-api-reference/ai-apis/ai-llms/get_ai_llm_available)                       | Get available configurations ready for use.                      |
| <mark style="color:blue;">`PUT`</mark>`/ai/llm/{id}`             | [**put\_\_ai\_llm\_{aillmId}**](https://docs.flashback.tech/support-reference/platform-api-reference/ai-apis/ai-llms/put__ai_llm_-aillmid)                       | Update an existing AI LLM configuration.                         |
| <mark style="color:red;">`DELETE`</mark>`/ai/llm/{id}`           | [**delete\_\_ai\_llm\_{aillmId}**](https://docs.flashback.tech/support-reference/platform-api-reference/ai-apis/ai-llms/delete__ai_llm_-aillmid)                 | Delete an AI LLM configuration.                                  |
| <mark style="color:orange;">`POST`</mark>`/ai/llm/{id}/validate` | [**post\_\_ai\_llm\_{aillmId}\_validate**](https://docs.flashback.tech/support-reference/platform-api-reference/ai-apis/ai-llms/post__ai_llm_-aillmid-_validate) | Validate an AI LLM configuration.                                |
| <mark style="color:green;">`GET`</mark>`/ai/llm/stats`           | [**get\_\_ai\_llm\_stats**](https://docs.flashback.tech/support-reference/platform-api-reference/ai-apis/ai-llms/get__ai_llm_stats)                              | Get usage statistics for AI LLM configurations.                  |

## Common Use Cases

### 1. Setting Up an AI Provider

```typescript
// Create OpenAI configuration
const response = await client.createAiLlm({
  name: "Production OpenAI",
  aiType: "OPENAI",
  endpoint: "https://api.openai.com/v1",
  secret: "sk-proj-xxxxxxxxxxxx",
  workspaceId: "workspace-123"
});

// Validate the configuration
const validation = await client.validateAiLlm(response.aiLlmId);
console.log(validation.message);
```

### 2. Listing Available Configurations

```typescript
// Get all available AI configurations
const available = await client.getAvailableAiLlms();

// Filter by workspace
const workspaceConfigs = await client.getAiLlms("workspace-123");
```

### 3. Monitoring Usage

```typescript
// Get statistics for a specific configuration
const stats = await client.getAiLlmStats("ai-llm-id-123");

console.log(`Total API Calls: ${stats.stats[0].totalApiCalls}`);
console.log(`Total Tokens In: ${stats.stats[0].totalTokensIn}`);
console.log(`Total Tokens Out: ${stats.stats[0].totalTokensOut}`);
console.log(`Policy Violations: ${stats.stats[0].totalPolicyViolations}`);
```
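
Since `stats` is an array, a configuration may report more than one entry. A small helper could aggregate totals across entries; the field names follow the example above, while the interface and function names are illustrative:

```typescript
// Shape of one stats entry, as used in the monitoring example above.
interface AiLlmStatEntry {
  totalApiCalls: number;
  totalTokensIn: number;
  totalTokensOut: number;
  totalPolicyViolations: number;
}

// Sum input and output token usage across all stat entries.
function totalTokens(entries: AiLlmStatEntry[]): number {
  return entries.reduce(
    (sum, e) => sum + e.totalTokensIn + e.totalTokensOut,
    0
  );
}

// Usage: totalTokens(stats.stats)
```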

### 4. Updating Credentials

```typescript
// Update API credentials
const updated = await client.updateAiLlm("ai-llm-id-123", {
  secret: "new-api-key-xxxxxxxxxxxx"
});

// Validate the new credentials
await client.validateAiLlm("ai-llm-id-123");
```

## Security Considerations

1. **Credential Storage**: All API keys and secrets are encrypted using industry-standard encryption before being stored in the database.
2. **Never Returned**: Credentials are never returned in API responses. The `key` field in response objects is always `null` or masked.
3. **Workspace Access Controls**: AI configurations respect workspace-level permissions. Users can only access configurations in workspaces they have permission to access.
4. **Secure Deletion**: When a configuration is deleted, all associated credentials are securely removed from the system.
5. **Validation Security**: The validation endpoint makes real API calls to providers, so ensure you trust the endpoint URLs before validation.

## Permissions

All AI LLM Management API endpoints require authentication via BearerAuth. The following access rules apply:

* Users must have access to the workspace to create, view, update, or delete configurations
* Only configurations within accessible workspaces are returned in list operations
* Workspace administrators have full access to manage configurations within their workspaces
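
Assuming BearerAuth means a standard `Authorization: Bearer <token>` header, an authenticated request could be built as sketched below; the base URL matches the client example later on this page, and the helper name is illustrative.

```typescript
// Sketch: building an authenticated request for an AI LLM endpoint.
// Requires a runtime with the Fetch API (Node 18+ or a browser).
function aiLlmRequest(token: string, path: string): Request {
  return new Request(`https://backend.flashback.tech${path}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
}

// Usage: const res = await fetch(aiLlmRequest(myToken, "/ai/llm"));
```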

## Error Handling

Common error codes across AI LLM APIs:

| Status Code | Description                                                  |
| ----------- | ------------------------------------------------------------ |
| 400         | Bad Request - Invalid parameters or validation error         |
| 403         | Forbidden - Insufficient permissions or configuration in use |
| 404         | Not Found - Configuration or resource not found              |
| 500         | Internal Server Error - Server-side error occurred           |
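
A client-side handler for these codes might look like the following sketch; the `{ status, message }` error shape is an assumption for illustration, not a documented contract.

```typescript
// Assumed error shape: an HTTP status plus an optional server message.
interface ApiError {
  status: number;
  message?: string;
}

// Map AI LLM API status codes to actionable descriptions,
// following the error table above.
function describeAiLlmError(err: ApiError): string {
  switch (err.status) {
    case 400:
      return `Bad request: ${err.message ?? "invalid parameters or validation error"}`;
    case 403:
      return "Forbidden: insufficient permissions, or the configuration is in use";
    case 404:
      return "Not found: the configuration may have been deleted";
    case 500:
      return "Internal server error: retry later";
    default:
      return `Unexpected status ${err.status}`;
  }
}
```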

## Best Practices

1. **Test Configurations**: Always use the validate endpoint after creating or updating configurations to ensure they work correctly.
2. **Monitor Usage**: Regularly check statistics to monitor token consumption and identify potential issues or policy violations.
3. **Secure Credentials**: Rotate API keys periodically and update configurations using the PUT endpoint.
4. **Use Available Endpoint**: When building UI components, use the `/ai/llm/available` endpoint to get only ready-to-use configurations.
5. **Handle Validation Failures**: The validation endpoint returns a 200 status even for invalid configurations, so always check the `success` field in the response.
6. **Delete Unused Configurations**: Clean up configurations that are no longer needed to maintain a tidy workspace.
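
Practices 1 and 5 above can be combined in a small guard. The `success` and `message` fields follow the validation example earlier on this page; the helper name is illustrative.

```typescript
// Treat a 200 response from the validate endpoint as inconclusive
// until the `success` flag has been checked.
interface ValidationResult {
  success: boolean;
  message: string;
}

// Returns the configuration ID only when validation actually
// succeeded; throws otherwise so callers cannot ignore the failure.
function assertValidated(id: string, result: ValidationResult): string {
  if (!result.success) {
    throw new Error(`AI LLM ${id} failed validation: ${result.message}`);
  }
  return id;
}

// Usage: assertValidated(id, await client.validateAiLlm(id));
```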

## TypeScript Client Library

The Flashback TypeScript client provides convenient methods for all AI LLM operations:

```typescript
import { FlashbackClient } from '@flashback/client';

const client = new FlashbackClient({
  apiKey: 'your-api-key',
  baseUrl: 'https://backend.flashback.tech'
});

// All AI LLM methods are available on the client instance
await client.createAiLlm(data);
await client.getAiLlms(workspaceId);
await client.getAvailableAiLlms();
await client.updateAiLlm(id, data);
await client.deleteAiLlm(id);
await client.validateAiLlm(id);
await client.getAiLlmStats(aiLlmId);
```

## Next Steps

* Explore the [Repository APIs](https://github.com/flashbacknetwork/flashback-usermanual/blob/main/support-reference/platform-api-reference/storage-apis/repository-management/README.md) to learn how to associate AI configurations with repositories
* Check out the [Policy APIs](https://github.com/flashbacknetwork/flashback-usermanual/blob/main/support-reference/platform-api-reference/policy-apis/README.md) to understand how to enforce AI usage policies
* Review the [Statistics APIs](https://docs.flashback.tech/support-reference/platform-api-reference/ai-apis/ai-llms/get__ai_llm_stats) to monitor and optimize your AI usage
