AI LLMs

The AI LLM Management APIs allow you to configure, manage, and monitor AI / large language model (LLM) provider integrations within the Flashback platform. These APIs enable you to connect to various AI providers and use them for AI-powered features across your repositories.

Supported AI Providers

  • OpenAI - GPT-4, GPT-3.5, and other OpenAI models

  • Google - Gemini, PaLM, and Google AI services

  • Anthropic - Claude models

  • AWS - Amazon Bedrock and AWS AI services

  • Other - Custom or additional AI provider endpoints
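The provider list above maps to the `aiType` field used when creating a configuration. As a sketch (the exact union of accepted values is an assumption based on this list), a type guard keeps user input honest before it reaches the API:

```typescript
// Hypothetical union of aiType values, inferred from the provider list above.
type AiProviderType = "OPENAI" | "GOOGLE" | "ANTHROPIC" | "AWS" | "OTHER";

const AI_PROVIDER_TYPES: readonly string[] = [
  "OPENAI", "GOOGLE", "ANTHROPIC", "AWS", "OTHER",
];

// Narrow an arbitrary string to a known provider type before calling the API.
function isAiProviderType(value: string): value is AiProviderType {
  return AI_PROVIDER_TYPES.includes(value);
}
```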

Key Features

  • Centralized Configuration Management - Store and manage AI provider credentials securely in one place

  • Multi-Provider Support - Configure multiple AI providers and switch between them

  • Workspace Integration - AI configurations are scoped to workspaces with proper access controls

  • Usage Statistics - Track API calls, token consumption, and policy enforcement

  • Credential Security - All API keys and secrets are encrypted at rest and never returned in API responses

  • Configuration Validation - Test configurations to ensure connectivity and valid credentials

Available Endpoints

The endpoints fall into three groups:

  • Configuration Management - create, update, and delete AI LLM configurations

  • Operations - list configurations and validate credentials

  • Monitoring - retrieve usage statistics

Common Use Cases

1. Setting Up an AI Provider

// Create OpenAI configuration
const response = await client.createAiLlm({
  name: "Production OpenAI",
  aiType: "OPENAI",
  endpoint: "https://api.openai.com/v1",
  secret: "sk-proj-xxxxxxxxxxxx",
  workspaceId: "workspace-123"
});

// Validate the configuration
const validation = await client.validateAiLlm(response.aiLlmId);
console.log(validation.message);

2. Listing Available Configurations

// Get all available AI configurations
const available = await client.getAvailableAiLlms();

// Filter by workspace
const workspaceConfigs = await client.getAiLlms("workspace-123");

3. Monitoring Usage

// Get statistics for a specific configuration
const stats = await client.getAiLlmStats("ai-llm-id-123");

console.log(`Total API Calls: ${stats.stats[0].totalApiCalls}`);
console.log(`Total Tokens In: ${stats.stats[0].totalTokensIn}`);
console.log(`Total Tokens Out: ${stats.stats[0].totalTokensOut}`);
console.log(`Policy Violations: ${stats.stats[0].totalPolicyViolations}`);

4. Updating Credentials

// Update API credentials
const updated = await client.updateAiLlm("ai-llm-id-123", {
  secret: "new-api-key-xxxxxxxxxxxx"
});

// Validate the new credentials
await client.validateAiLlm("ai-llm-id-123");

Security Considerations

  1. Credential Storage: All API keys and secrets are encrypted using industry-standard encryption before being stored in the database.

  2. Never Returned: Credentials are never returned in API responses. The key field in response objects is always null or masked.

  3. Workspace Access Controls: AI configurations respect workspace-level permissions. Users can only access configurations in workspaces they have permission to access.

  4. Secure Deletion: When a configuration is deleted, all associated credentials are securely removed from the system.

  5. Validation Security: The validation endpoint makes real API calls to providers, so ensure you trust the endpoint URLs before validation.
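Point 2 above is worth enforcing defensively on the client side. A minimal sketch, assuming a configuration response carries a `key` field that the server returns as `null` (or a masked placeholder, which is an assumption here):

```typescript
// Hypothetical response shape; the real credential is never echoed back.
interface AiLlmConfig {
  aiLlmId: string;
  name: string;
  key: string | null; // always null or masked in responses
}

// Fail loudly if a response ever appears to contain a live credential.
function assertCredentialsMasked(config: AiLlmConfig): void {
  if (config.key !== null && !/^\*+$/.test(config.key)) {
    throw new Error(
      `Configuration ${config.aiLlmId} unexpectedly exposes a credential`
    );
  }
}
```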

Permissions

All AI LLM Management API endpoints require authentication via BearerAuth. The following access rules apply:

  • Users must have access to the workspace to create, view, update, or delete configurations

  • Only configurations within accessible workspaces are returned in list operations

  • Workspace administrators have full access to manage configurations within their workspaces

Error Handling

Common error codes across AI LLM APIs:

Status Code   Description

400           Bad Request - Invalid parameters or validation error
403           Forbidden - Insufficient permissions or configuration in use
404           Not Found - Configuration or resource not found
500           Internal Server Error - Server-side error occurred
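These codes can be folded into a small helper when surfacing errors to users. A sketch, assuming the client exposes the HTTP status on failures (the exact error shape is not specified here):

```typescript
// Map the documented status codes to a human-readable explanation.
// Anything outside the table is reported as unexpected.
function describeAiLlmError(status: number): string {
  switch (status) {
    case 400:
      return "Bad Request: invalid parameters or validation error";
    case 403:
      return "Forbidden: insufficient permissions or configuration in use";
    case 404:
      return "Not Found: configuration or resource not found";
    case 500:
      return "Internal Server Error: server-side error occurred";
    default:
      return `Unexpected status ${status}`;
  }
}
```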

Best Practices

  1. Test Configurations: Always use the validate endpoint after creating or updating configurations to ensure they work correctly.

  2. Monitor Usage: Regularly check statistics to monitor token consumption and identify potential issues or policy violations.

  3. Secure Credentials: Rotate API keys periodically and update configurations using the PUT endpoint.

  4. Use Available Endpoint: When building UI components, use the /ai/llm/available endpoint to get only ready-to-use configurations.

  5. Handle Validation Failures: The validation endpoint returns a 200 status even for invalid configurations - always check the success field in the response.

  6. Delete Unused Configurations: Clean up configurations that are no longer needed to maintain a tidy workspace.
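Best practice 5 is an easy one to miss: a 200 response from the validate endpoint does not mean the credentials work. A minimal sketch, assuming the validation response carries the `success` and `message` fields referenced above:

```typescript
// Assumed shape of the validate endpoint's 200 response.
interface ValidationResult {
  success: boolean;
  message: string;
}

// Convert a "successful" HTTP response with success=false into a real error.
function ensureValidated(validation: ValidationResult): void {
  if (!validation.success) {
    throw new Error(`AI LLM validation failed: ${validation.message}`);
  }
}
```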

TypeScript Client Library

The Flashback TypeScript client provides convenient methods for all AI LLM operations:

import { FlashbackClient } from '@flashback/client';

const client = new FlashbackClient({
  apiKey: 'your-api-key',
  baseUrl: 'https://backend.flashback.tech'
});

// All AI LLM methods are available on the client instance
await client.createAiLlm(data);
await client.getAiLlms(workspaceId);
await client.getAvailableAiLlms();
await client.updateAiLlm(id, data);
await client.deleteAiLlm(id);
await client.validateAiLlm(id);
await client.getAiLlmStats(aiLlmId);

Next Steps

  • Explore the Repository APIs to learn how to associate AI configurations with repositories

  • Check out the Policy APIs to understand how to enforce AI usage policies

  • Review the Statistics APIs to monitor and optimize your AI usage
