AI
Flashback’s AI layer combines AI LLM management with a flexible AI Policy system. You can securely configure multiple AI providers (OpenAI, Google Cloud's Gemini, Anthropic) per workspace, then govern their usage with natural-language policies scoped at the organization, workspace, or repository level. Together, these features give you centralized control over credentials, usage, and compliance, along with real-time enforcement of responsible AI behavior across your applications and repositories.
Reference Table
AI LLM Management: Configure and manage connections to external AI providers (e.g., OpenAI-compatible endpoints, cloud LLMs), validate their credentials, and make these models available to workspaces and repositories under a unified interface.
AI Policies: Define and enforce natural-language governance rules (e.g., PII handling, security, content boundaries) with risk levels and enforcement actions (log, alert, block), scoped at the organization, workspace, or repository level.
AI API Keys: Create and manage scoped API keys that applications use to call AI endpoints via Flashback, controlling which LLM configurations and repositories each key can access and how much usage it is allowed.
Illustrative sketches of each follow below.
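As a rough illustration of the AI LLM Management entry, the TypeScript sketch below registers an OpenAI-compatible provider for a workspace. The base URL, endpoint path, field names, and the FLASHBACK_ADMIN_TOKEN variable are placeholders rather than Flashback's documented API; the point is the kind of information a configuration carries (provider, model, credential) and that the credential is validated when the configuration is saved.

```typescript
// Hypothetical sketch: registering an OpenAI-compatible LLM configuration for a workspace.
// The endpoint path, field names, and environment variables are illustrative assumptions.
const FLASHBACK_API = "https://api.flashback.example"; // placeholder base URL

async function addLlmConfiguration(): Promise<void> {
  const response = await fetch(`${FLASHBACK_API}/workspaces/my-workspace/llm-configurations`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.FLASHBACK_ADMIN_TOKEN}`,
    },
    body: JSON.stringify({
      name: "openai-prod",                 // name that repositories and API keys refer to
      provider: "openai",                  // e.g. openai | gemini | anthropic
      model: "gpt-4o-mini",
      apiKey: process.env.OPENAI_API_KEY,  // credential stored and validated by Flashback
    }),
  });

  if (!response.ok) {
    // A rejected provider credential would surface here.
    throw new Error(`Configuration rejected: ${response.status} ${await response.text()}`);
  }
  console.log("LLM configuration saved and credential validated.");
}

addLlmConfiguration().catch(console.error);
```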
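For the AI Policies entry, the next sketch shows the kind of information a policy definition combines: a natural-language rule, a risk level, an enforcement action (log, alert, block), and a scope. The interface and field names are illustrative assumptions, not Flashback's actual schema.

```typescript
// Hypothetical sketch of an AI Policy definition; the shape of this object is illustrative.
type PolicyAction = "log" | "alert" | "block";

interface AiPolicy {
  name: string;
  rule: string;                       // natural-language governance rule
  riskLevel: "low" | "medium" | "high";
  action: PolicyAction;               // what happens when the rule is violated
  scope: { level: "organization" | "workspace" | "repository"; id: string };
}

const piiPolicy: AiPolicy = {
  name: "no-pii-in-prompts",
  rule: "Block any prompt or completion that contains personally identifiable information such as email addresses, phone numbers, or government IDs.",
  riskLevel: "high",
  action: "block",
  scope: { level: "workspace", id: "my-workspace" },
};

console.log(JSON.stringify(piiPolicy, null, 2));
```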
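Finally, for the AI API Keys entry, the sketch below shows an application calling an AI endpoint through Flashback with a scoped key, using a standard OpenAI-compatible client. The base URL, the FLASHBACK_AI_KEY variable, and the configuration name passed as the model are assumptions; in practice you would use the endpoint and key issued for your workspace.

```typescript
// Minimal sketch, assuming an OpenAI-compatible interface exposed by Flashback.
// Base URL, key variable, and configuration name are placeholders, not documented values.
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.FLASHBACK_AI_KEY,          // scoped Flashback API key
  baseURL: "https://api.flashback.example/v1",   // placeholder Flashback endpoint
});

async function main(): Promise<void> {
  const completion = await client.chat.completions.create({
    model: "openai-prod",                        // an LLM configuration this key is allowed to use
    messages: [{ role: "user", content: "Summarize this repository's README." }],
  });
  console.log(completion.choices[0].message.content);
}

main().catch(console.error);
```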