
AI LLMs

The AI LLM Management APIs allow you to configure, manage, and monitor AI large language model (LLM) provider integrations within the Flashback platform. These APIs enable you to connect to various AI providers and use them for AI-powered features across your repositories.

Supported AI Providers

  • OpenAI - GPT-5, GPT-4, GPT-3.5, and other OpenAI models

  • Google - Gemini, PaLM, and Google AI services

  • Anthropic - Claude models

Our solution is fully OpenAI-compatible. You can also add any on-prem or decentralized solution that is OpenAI-compatible.
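As a sketch of what OpenAI compatibility means in practice (the base URLs, paths, and model names below are placeholders, not values defined by this page), the same chat-completions request shape can be pointed at a hosted provider or an on-prem server just by swapping the base URL:

```typescript
// Sketch: building an OpenAI-style chat-completions request against any
// OpenAI-compatible base URL. URLs and model names are illustrative.
interface ChatRequest {
  url: string;
  body: { model: string; messages: { role: "user" | "system"; content: string }[] };
}

function buildChatRequest(baseUrl: string, model: string, prompt: string): ChatRequest {
  return {
    // Strip a trailing slash so both "https://host" and "https://host/" work.
    url: `${baseUrl.replace(/\/$/, "")}/v1/chat/completions`,
    body: { model, messages: [{ role: "user", content: prompt }] },
  };
}

// The same request shape works for a hosted provider or an on-prem server:
const hosted = buildChatRequest("https://api.openai.com", "gpt-4", "Hello");
const onPrem = buildChatRequest("http://localhost:8000/", "local-model", "Hello");
```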

Key Features

  • Centralized Configuration Management - Store and manage AI provider credentials securely in one place

  • Multi-Provider Support - Configure multiple AI providers and switch between them

  • Workspace Integration - AI configurations are scoped to workspaces with proper access controls

  • Usage Statistics - Track API calls, token consumption, and policy enforcement

  • Credential Security - All API keys and secrets are encrypted at rest and never returned in API responses

  • Configuration Validation - Test configurations to ensure connectivity and valid credentials

Available Endpoints

Configuration Management

| Method | API Reference | Description |
| ------ | ------------- | ----------- |
| POST | /ai/llm | Create a new AI LLM configuration. |
| GET | /ai/llm | List all AI LLM configurations (with optional workspace filter). |
| GET | /ai/llm/available | Get available configurations ready for use. |
| PUT | /ai/llm/{id} | Update an existing AI LLM configuration. |
| DELETE | /ai/llm/{id} | Delete an AI LLM configuration. |
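As a hedged sketch of the create operation, the helper below builds a POST /ai/llm request. The field names in `LlmConfigInput` (`name`, `provider`, `key`, `endpoint`) are assumptions for illustration; consult the API reference for the actual schema.

```typescript
// Illustrative request builder for POST /ai/llm; field names are assumed.
interface LlmConfigInput {
  name: string;
  provider: "openai" | "google" | "anthropic";
  key: string;        // stored encrypted; never returned in responses
  endpoint?: string;  // optional, for OpenAI-compatible servers
}

function createConfigRequest(baseUrl: string, token: string, input: LlmConfigInput) {
  return {
    url: `${baseUrl}/ai/llm`,
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(input),
    },
  };
}

const req = createConfigRequest("https://api.example.com", "TOKEN", {
  name: "team-openai",
  provider: "openai",
  key: "sk-...",
});
// Then send it: await fetch(req.url, req.init);
```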

Operations

| Method | API Reference | Description |
| ------ | ------------- | ----------- |
| POST | /ai/llm/{id}/validate | Validate an AI LLM configuration. |
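Because validation returns a 200 status even for invalid configurations (see Best Practices below), callers must inspect the response body. The response shape sketched here (`{ success, error? }`) is an assumption based on that documented behavior:

```typescript
// Sketch: interpreting a POST /ai/llm/{id}/validate response.
// The { success, error? } shape is assumed for illustration.
interface ValidationResult {
  success: boolean;
  error?: string;
}

function interpretValidation(result: ValidationResult): string {
  // A 200 status alone does not mean the configuration is valid;
  // the success field is authoritative.
  return result.success
    ? "configuration is valid"
    : `validation failed: ${result.error ?? "unknown error"}`;
}
```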

Monitoring

| Method | API Reference | Description |
| ------ | ------------- | ----------- |
| GET | /ai/llm/stats | Get usage statistics for AI LLM configurations. |
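The stats endpoint tracks API calls, token consumption, and policy enforcement. As a sketch of how a caller might summarize its output (the field names `calls`, `tokens`, and `policyViolations` are illustrative assumptions, not the actual response schema):

```typescript
// Sketch: summarizing GET /ai/llm/stats results; fields are assumed.
interface ConfigStats {
  configId: string;
  calls: number;
  tokens: number;
  policyViolations: number;
}

function totalTokens(stats: ConfigStats[]): number {
  return stats.reduce((sum, s) => sum + s.tokens, 0);
}

function flagged(stats: ConfigStats[]): string[] {
  // Configurations with any policy violations deserve a closer look.
  return stats.filter((s) => s.policyViolations > 0).map((s) => s.configId);
}

const sample: ConfigStats[] = [
  { configId: "a", calls: 1, tokens: 100, policyViolations: 0 },
  { configId: "b", calls: 2, tokens: 50, policyViolations: 3 },
];
```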

Common Use Cases

1. Setting Up an AI Provider

2. Listing Available Configurations

3. Monitoring Usage

4. Updating Credentials
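Tying the first two use cases together: once providers are set up, a UI would typically fetch ready-to-use configurations from /ai/llm/available and pick one. The configuration shape below (`id`, `name`, `provider`, `available`) is an assumption for illustration:

```typescript
// Sketch: choosing a default from GET /ai/llm/available results.
// The config shape is assumed, not the actual response schema.
interface LlmConfig {
  id: string;
  name: string;
  provider: string;
  available: boolean;
}

function pickDefault(configs: LlmConfig[], preferredProvider?: string): LlmConfig | undefined {
  const ready = configs.filter((c) => c.available);
  // Prefer a matching provider, otherwise fall back to the first ready config.
  return ready.find((c) => c.provider === preferredProvider) ?? ready[0];
}

const configs: LlmConfig[] = [
  { id: "1", name: "old", provider: "openai", available: false },
  { id: "2", name: "claude", provider: "anthropic", available: true },
  { id: "3", name: "gpt", provider: "openai", available: true },
];
```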

Security Considerations

  1. Credential Storage: All API keys and secrets are encrypted using industry-standard encryption before being stored in the database.

  2. Never Returned: Credentials are never returned in API responses. The key field in response objects is always null or masked.

  3. Workspace Access Controls: AI configurations respect workspace-level permissions. Users can only access configurations in workspaces they have permission to access.

  4. Secure Deletion: When a configuration is deleted, all associated credentials are securely removed from the system.

  5. Validation Security: The validation endpoint makes real API calls to providers, so ensure you trust the endpoint URLs before validation.

Permissions

All AI LLM Management API endpoints require authentication via BearerAuth. The following access rules apply:

  • Users must have access to the workspace to create, view, update, or delete configurations

  • Only configurations within accessible workspaces are returned in list operations

  • Workspace administrators have full access to manage configurations within their workspaces

Error Handling

Common error codes across AI LLM APIs:

| Status Code | Description |
| ----------- | ----------- |
| 400 | Bad Request - Invalid parameters or validation error |
| 403 | Forbidden - Insufficient permissions or configuration in use |
| 404 | Not Found - Configuration or resource not found |
| 500 | Internal Server Error - Server-side error occurred |

Best Practices

  1. Test Configurations: Always use the validate endpoint after creating or updating configurations to ensure they work correctly.

  2. Monitor Usage: Regularly check statistics to monitor token consumption and identify potential issues or policy violations.

  3. Secure Credentials: Rotate API keys periodically and update configurations using the PUT endpoint.

  4. Use Available Endpoint: When building UI components, use the /ai/llm/available endpoint to get only ready-to-use configurations.

  5. Handle Validation Failures: The validation endpoint returns a 200 status even for invalid configurations - always check the success field in the response.

  6. Delete Unused Configurations: Clean up configurations that are no longer needed to maintain a tidy workspace.
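Best practice 3 (rotate keys via the PUT endpoint) can be sketched as a request builder. The body field name (`key`) is an assumption for illustration:

```typescript
// Sketch: rotating a credential with PUT /ai/llm/{id}; body field assumed.
function rotateKeyRequest(baseUrl: string, token: string, configId: string, newKey: string) {
  return {
    url: `${baseUrl}/ai/llm/${encodeURIComponent(configId)}`,
    init: {
      method: "PUT",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ key: newKey }),
    },
  };
}

const rotate = rotateKeyRequest("https://api.example.com", "TOKEN", "cfg-1", "sk-new");
// After rotating, re-validate the configuration with POST /ai/llm/{id}/validate.
```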

TypeScript Client Library

The Flashback TypeScript client provides convenient methods for all AI LLM operations:
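The snippet below is a hypothetical sketch of what a thin client wrapper could look like; the class and method names are NOT the actual Flashback client API, so consult the client library's documentation for the real signatures:

```typescript
// Hypothetical client wrapper over the AI LLM endpoints; names are invented
// for illustration and do not reflect the actual Flashback client.
class AiLlmClient {
  constructor(private baseUrl: string, private token: string) {}

  private request(method: string, path: string, body?: unknown) {
    const headers: Record<string, string> = { Authorization: `Bearer ${this.token}` };
    if (body !== undefined) headers["Content-Type"] = "application/json";
    return {
      url: `${this.baseUrl}${path}`,
      init: { method, headers, body: body === undefined ? undefined : JSON.stringify(body) },
    };
  }

  createConfig(input: unknown) { return this.request("POST", "/ai/llm", input); }
  listAvailable() { return this.request("GET", "/ai/llm/available"); }
  validate(id: string) { return this.request("POST", `/ai/llm/${id}/validate`); }
  stats() { return this.request("GET", "/ai/llm/stats"); }
}

const client = new AiLlmClient("https://api.example.com", "TOKEN");
```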

Next Steps
