AI LLMs
The AI LLM Management APIs allow you to configure, manage, and monitor AI/Large Language Model provider integrations within the Flashback platform. These APIs enable you to connect to various AI providers and use them for AI-powered features across your repositories.
Supported AI Providers
OpenAI - GPT-5, GPT-4, GPT-3.5, and other OpenAI models
Google - Gemini, PaLM, and Google AI services
Anthropic - Claude models
Key Features
Centralized Configuration Management - Store and manage AI provider credentials securely in one place
Multi-Provider Support - Configure multiple AI providers and switch between them
Workspace Integration - AI configurations are scoped to workspaces with proper access controls
Usage Statistics - Track API calls, token consumption, and policy enforcement
Credential Security - All API keys and secrets are encrypted at rest and never returned in API responses
Configuration Validation - Test configurations to ensure connectivity and valid credentials
Available Endpoints
Configuration Management
Operations
Monitoring
Common Use Cases
1. Setting Up an AI Provider
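For example, an OpenAI configuration might be created with a single POST request. The sketch below is illustrative only: the `/ai/llm` path, the request body fields, and the `FLASHBACK_API_URL` variable are assumptions, not the confirmed contract; check the endpoint reference above for exact details.

```typescript
// Hypothetical sketch: create an OpenAI configuration.
// The /ai/llm path and body fields below are assumptions for illustration.
const baseUrl = process.env.FLASHBACK_API_URL ?? "https://api.flashback.example";

async function createConfiguration(token: string) {
  const response = await fetch(`${baseUrl}/ai/llm`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`, // BearerAuth, required on every endpoint
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      provider: "openai",               // assumed field: supported provider id
      model: "gpt-4",                   // assumed field: model identifier
      key: process.env.OPENAI_API_KEY,  // encrypted at rest, never echoed back
      workspaceId: "your-workspace-id", // assumed field: workspace scope
    }),
  });
  if (!response.ok) {
    throw new Error(`Failed to create configuration: ${response.status}`);
  }
  return response.json();
}
```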
2. Listing Available Configurations
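Listing might look like the sketch below, assuming a GET on the same hypothetical `/ai/llm` path. Note that the `key` field comes back `null` or masked, as described under Security Considerations below.

```typescript
// Hypothetical sketch: list the configurations visible to the caller.
// Only configurations in workspaces you can access are returned.
const baseUrl = process.env.FLASHBACK_API_URL ?? "https://api.flashback.example";

async function listConfigurations(token: string) {
  const response = await fetch(`${baseUrl}/ai/llm`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!response.ok) {
    throw new Error(`Failed to list configurations: ${response.status}`);
  }
  const configs = await response.json();
  for (const config of configs) {
    // `key` is always null or masked in responses.
    console.log(config.id, config.provider, config.key);
  }
  return configs;
}
```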
3. Monitoring Usage
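Usage statistics could be read per configuration, for example as below. The `/ai/llm/{id}/stats` path and the `apiCalls`/`tokensConsumed` response fields are assumptions chosen to match the Usage Statistics feature described above; verify them against the actual schema.

```typescript
// Hypothetical sketch: read usage statistics for one configuration.
// Path and response field names are assumptions, not the confirmed schema.
const baseUrl = process.env.FLASHBACK_API_URL ?? "https://api.flashback.example";

async function getUsageStats(token: string, configId: string) {
  const response = await fetch(`${baseUrl}/ai/llm/${configId}/stats`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!response.ok) {
    throw new Error(`Failed to fetch stats: ${response.status}`);
  }
  const stats = await response.json();
  console.log(`API calls: ${stats.apiCalls}, tokens: ${stats.tokensConsumed}`);
  return stats;
}
```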
4. Updating Credentials
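Credential rotation uses the PUT endpoint mentioned under Best Practices. The sketch below assumes a `PUT /ai/llm/{id}` route and a `key` body field; both are illustrative assumptions.

```typescript
// Hypothetical sketch: rotate the stored API key for a configuration.
// The PUT /ai/llm/{id} route and `key` field are assumptions.
const baseUrl = process.env.FLASHBACK_API_URL ?? "https://api.flashback.example";

async function rotateKey(token: string, configId: string, newKey: string) {
  const response = await fetch(`${baseUrl}/ai/llm/${configId}`, {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ key: newKey }), // new secret; encrypted at rest
  });
  if (!response.ok) {
    throw new Error(`Failed to update configuration: ${response.status}`);
  }
  return response.json();
}
```

After rotating a key, re-run validation to confirm the new credentials work before relying on the configuration.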
Security Considerations
Credential Storage: All API keys and secrets are encrypted using industry-standard encryption before being stored in the database.
Never Returned: Credentials are never returned in API responses. The `key` field in response objects is always `null` or masked.
Workspace Access Controls: AI configurations respect workspace-level permissions. Users can only access configurations in workspaces they have permission to access.
Secure Deletion: When a configuration is deleted, all associated credentials are securely removed from the system.
Validation Security: The validation endpoint makes real API calls to providers, so ensure you trust the endpoint URLs before validation.
Permissions
All AI LLM Management API endpoints require authentication via BearerAuth. The following access rules apply:
Users must have access to the workspace to create, view, update, or delete configurations
Only configurations within accessible workspaces are returned in list operations
Workspace administrators have full access to manage configurations within their workspaces
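In practice, every request carries an `Authorization: Bearer` header, as in the sketch below; the token source and base URL are assumptions for illustration.

```typescript
// Hypothetical sketch: authenticated call to the documented
// /ai/llm/available endpoint using BearerAuth.
const token = process.env.FLASHBACK_TOKEN ?? ""; // assumed token source

const response = await fetch(
  "https://api.flashback.example/ai/llm/available", // assumed base URL
  { headers: { Authorization: `Bearer ${token}` } },
);
if (response.status === 403) {
  console.warn("Forbidden: check your workspace permissions.");
}
```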
Error Handling
Common error codes across AI LLM APIs:
400 Bad Request - Invalid parameters or validation error
403 Forbidden - Insufficient permissions or configuration in use
404 Not Found - Configuration or resource not found
500 Internal Server Error - Server-side error occurred
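One way to centralize handling of these codes in a client is sketched below; the messages paraphrase the list above, and no particular error body shape is assumed.

```typescript
// Hypothetical sketch: map the common AI LLM error codes to exceptions.
async function unwrap<T>(response: Response): Promise<T> {
  switch (response.status) {
    case 400:
      throw new Error("Bad Request: invalid parameters or validation error");
    case 403:
      throw new Error("Forbidden: insufficient permissions or configuration in use");
    case 404:
      throw new Error("Not Found: configuration or resource not found");
    case 500:
      throw new Error("Internal Server Error: server-side error occurred");
    default:
      return response.json() as Promise<T>;
  }
}
```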
Best Practices
Test Configurations: Always use the validate endpoint after creating or updating configurations to ensure they work correctly.
Monitor Usage: Regularly check statistics to monitor token consumption and identify potential issues or policy violations.
Secure Credentials: Rotate API keys periodically and update configurations using the PUT endpoint.
Use Available Endpoint: When building UI components, use the `/ai/llm/available` endpoint to get only ready-to-use configurations.
Handle Validation Failures: The validation endpoint returns a 200 status even for invalid configurations, so always check the `success` field in the response (see the sketch after this list).
Delete Unused Configurations: Clean up configurations that are no longer needed to maintain a tidy workspace.
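The validation pitfall is worth a concrete sketch. Assuming a hypothetical `POST /ai/llm/{id}/validate` route, the point is that the HTTP status alone is not enough:

```typescript
// Hypothetical sketch: validate a configuration and inspect `success`.
// The /ai/llm/{id}/validate path is an assumption; the 200-even-on-failure
// behavior is documented above.
const baseUrl = process.env.FLASHBACK_API_URL ?? "https://api.flashback.example";

async function isConfigurationValid(token: string, configId: string) {
  const response = await fetch(`${baseUrl}/ai/llm/${configId}/validate`, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}` },
  });
  const result = await response.json();
  // A 200 status does not mean the configuration works:
  // failures are reported through the `success` field.
  if (!result.success) {
    console.warn("Validation failed; check credentials and endpoint URL.");
  }
  return Boolean(result.success);
}
```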
TypeScript Client Library
The Flashback TypeScript client provides convenient methods for all AI LLM operations:
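The sketch below shows what client usage could look like. The package name `@flashback/client` and the method names are assumptions, not the published client API; consult the client's own documentation for the real surface.

```typescript
// Hypothetical sketch of the TypeScript client surface; names are assumptions.
import { FlashbackClient } from "@flashback/client"; // assumed package name

const client = new FlashbackClient({
  token: process.env.FLASHBACK_TOKEN ?? "", // BearerAuth token
});

// Create, validate, and list configurations (assumed method names).
const config = await client.createAiLlmConfig({
  provider: "anthropic",
  model: "claude-3-opus",
  key: process.env.ANTHROPIC_API_KEY,
});

const { success } = await client.validateAiLlmConfig(config.id);
const available = await client.listAvailableAiLlmConfigs();
```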
Next Steps
Explore the Repository APIs to learn how to associate AI configurations with repositories
Check out the Policy APIs to understand how to enforce AI usage policies
Review the Statistics APIs to monitor and optimize your AI usage