AI LLMs
The AI LLM Management APIs allow you to configure, manage, and monitor AI/Large Language Model provider integrations within the Flashback platform. These APIs enable you to connect to various AI providers and use them for AI-powered features across your repositories.
Supported AI Providers
OpenAI - GPT-4, GPT-3.5, and other OpenAI models
Google - Gemini, PaLM, and Google AI services
Anthropic - Claude models
AWS - Amazon Bedrock and AWS AI services
Other - Custom or additional AI provider endpoints
Key Features
Centralized Configuration Management - Store and manage AI provider credentials securely in one place
Multi-Provider Support - Configure multiple AI providers and switch between them
Workspace Integration - AI configurations are scoped to workspaces with proper access controls
Usage Statistics - Track API calls, token consumption, and policy enforcement
Credential Security - All API keys and secrets are encrypted at rest and never returned in API responses
Configuration Validation - Test configurations to ensure connectivity and valid credentials
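Taken together, these features imply a configuration object roughly like the following. This is a sketch inferred from the examples later in this guide, not an authoritative schema — verify field names against the endpoint reference:

```typescript
// Provider types listed in "Supported AI Providers" above.
type AiProviderType = "OPENAI" | "GOOGLE" | "ANTHROPIC" | "AWS" | "OTHER";

// Assumed configuration shape, based on the fields used in the examples below.
interface AiLlmConfig {
  name: string;          // Human-readable label, e.g. "Production OpenAI"
  aiType: AiProviderType;
  endpoint: string;      // Provider API base URL
  secret: string;        // Encrypted at rest; never returned in responses
  workspaceId: string;   // Scopes the configuration to a workspace
}

const config: AiLlmConfig = {
  name: "Production OpenAI",
  aiType: "OPENAI",
  endpoint: "https://api.openai.com/v1",
  secret: "sk-proj-xxxxxxxxxxxx",
  workspaceId: "workspace-123",
};
```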
Available Endpoints
Configuration Management
POST /ai/llm - Create a new AI LLM configuration
GET /ai/llm - List all AI LLM configurations (with optional workspace filter)
GET /ai/llm/available - Get available configurations ready for use
PUT /ai/llm/{id} - Update an existing AI LLM configuration
DELETE /ai/llm/{id} - Delete an AI LLM configuration
Operations
POST /ai/llm/{id}/validate - Validate an AI LLM configuration
Monitoring
GET /ai/llm/stats - Get usage statistics for AI LLM configurations
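If you call these endpoints directly rather than through the TypeScript client, every request follows the same shape: a Bearer token header plus a JSON body. A minimal sketch using the standard `Request` type (the base URL comes from the client example later in this guide; the token and body fields are placeholders, and nothing is sent until you pass the request to `fetch`):

```typescript
const BASE_URL = "https://backend.flashback.tech";

// Build (but do not send) a request for an AI LLM endpoint.
function buildRequest(
  method: string,
  path: string,
  token: string,
  body?: unknown
): Request {
  return new Request(`${BASE_URL}${path}`, {
    method,
    headers: {
      Authorization: `Bearer ${token}`, // BearerAuth, required on all endpoints
      "Content-Type": "application/json",
    },
    body: body === undefined ? undefined : JSON.stringify(body),
  });
}

// e.g. create a configuration: POST /ai/llm
const req = buildRequest("POST", "/ai/llm", "your-api-key", {
  name: "Production OpenAI",
  aiType: "OPENAI",
});
```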
Common Use Cases
1. Setting Up an AI Provider
```typescript
// Create OpenAI configuration
const response = await client.createAiLlm({
  name: "Production OpenAI",
  aiType: "OPENAI",
  endpoint: "https://api.openai.com/v1",
  secret: "sk-proj-xxxxxxxxxxxx",
  workspaceId: "workspace-123"
});

// Validate the configuration
const validation = await client.validateAiLlm(response.aiLlmId);
console.log(validation.message);
```
2. Listing Available Configurations
```typescript
// Get all available AI configurations
const available = await client.getAvailableAiLlms();

// Filter by workspace
const workspaceConfigs = await client.getAiLlms("workspace-123");
```
3. Monitoring Usage
```typescript
// Get statistics for a specific configuration
const stats = await client.getAiLlmStats("ai-llm-id-123");
console.log(`Total API Calls: ${stats.stats[0].totalApiCalls}`);
console.log(`Total Tokens In: ${stats.stats[0].totalTokensIn}`);
console.log(`Total Tokens Out: ${stats.stats[0].totalTokensOut}`);
console.log(`Policy Violations: ${stats.stats[0].totalPolicyViolations}`);
```
4. Updating Credentials
```typescript
// Update API credentials
const updated = await client.updateAiLlm("ai-llm-id-123", {
  secret: "new-api-key-xxxxxxxxxxxx"
});

// Validate the new credentials
await client.validateAiLlm("ai-llm-id-123");
```
Security Considerations
Credential Storage: All API keys and secrets are encrypted using industry-standard encryption before being stored in the database.
Never Returned: Credentials are never returned in API responses. The key field in response objects is always null or masked.
Workspace Access Controls: AI configurations respect workspace-level permissions. Users can only access configurations in workspaces they have permission to access.
Secure Deletion: When a configuration is deleted, all associated credentials are securely removed from the system.
Validation Security: The validation endpoint makes real API calls to providers, so ensure you trust the endpoint URLs before validation.
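Because secrets are never returned, a UI that needs to show that a credential exists should render a masked placeholder rather than the value. A purely illustrative helper (`maskSecret` is not part of the API or client):

```typescript
// Illustrative only: display a stored secret as a masked placeholder.
function maskSecret(secret: string): string {
  if (secret.length <= 8) return "********";
  // Keep a short recognizable prefix; hide the rest.
  return `${secret.slice(0, 7)}${"*".repeat(secret.length - 7)}`;
}

console.log(maskSecret("sk-proj-xxxxxxxxxxxx")); // "sk-proj*************"
```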
Permissions
All AI LLM Management API endpoints require authentication via BearerAuth. The following access rules apply:
Users must have access to the workspace to create, view, update, or delete configurations
Only configurations within accessible workspaces are returned in list operations
Workspace administrators have full access to manage configurations within their workspaces
Error Handling
Common error codes across AI LLM APIs:
400
Bad Request - Invalid parameters or validation error
403
Forbidden - Insufficient permissions or configuration in use
404
Not Found - Configuration or resource not found
500
Internal Server Error - Server-side error occurred
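One way to surface these codes consistently in client code is a simple lookup. The messages below just restate the table above; `describeError` is a hypothetical helper, not part of the Flashback client:

```typescript
// Map the documented AI LLM API error codes to user-facing messages.
const ERROR_MESSAGES: Record<number, string> = {
  400: "Bad Request - invalid parameters or validation error",
  403: "Forbidden - insufficient permissions or configuration in use",
  404: "Not Found - configuration or resource not found",
  500: "Internal Server Error - server-side error occurred",
};

function describeError(status: number): string {
  return ERROR_MESSAGES[status] ?? `Unexpected error (HTTP ${status})`;
}
```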
Best Practices
Test Configurations: Always use the validate endpoint after creating or updating configurations to ensure they work correctly.
Monitor Usage: Regularly check statistics to monitor token consumption and identify potential issues or policy violations.
Secure Credentials: Rotate API keys periodically and update configurations using the PUT endpoint.
Use Available Endpoint: When building UI components, use the /ai/llm/available endpoint to get only ready-to-use configurations.
Handle Validation Failures: The validation endpoint returns a 200 status even for invalid configurations - always check the success field in the response.
Delete Unused Configurations: Clean up configurations that are no longer needed to maintain a tidy workspace.
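The validation point above is easy to get wrong, since a 200 status does not mean the configuration is valid. A minimal guard, assuming the validation response carries `success` and `message` fields as the earlier examples suggest:

```typescript
// Assumed response shape: the validate endpoint returns 200 with a success flag.
interface ValidationResult {
  success: boolean;
  message: string;
}

// Throw if validation actually failed, regardless of HTTP status.
function assertValid(result: ValidationResult): void {
  if (!result.success) {
    throw new Error(`AI LLM validation failed: ${result.message}`);
  }
}

// Usage sketch:
//   const validation = await client.validateAiLlm(id);
//   assertValid(validation);
```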
TypeScript Client Library
The Flashback TypeScript client provides convenient methods for all AI LLM operations:
```typescript
import { FlashbackClient } from '@flashback/client';

const client = new FlashbackClient({
  apiKey: 'your-api-key',
  baseUrl: 'https://backend.flashback.tech'
});

// All AI LLM methods are available on the client instance
await client.createAiLlm(data);
await client.getAiLlms(workspaceId);
await client.getAvailableAiLlms();
await client.updateAiLlm(id, data);
await client.deleteAiLlm(id);
await client.validateAiLlm(id);
await client.getAiLlmStats(aiLlmId);
```
Next Steps
Explore the Repository APIs to learn how to associate AI configurations with repositories
Check out the Policy APIs to understand how to enforce AI usage policies
Review the Statistics APIs to monitor and optimize your AI usage