Policies

The Policies section is where your organization defines AI governance and safety rules.

Policy scopes

Policies can be defined at three levels:

  • Organization scope (global defaults)

  • Workspace scope (team or environment-specific)

  • Repository scope (fine-grained, endpoint-level control)
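When a request matches policies at more than one level, a common resolution strategy is for the narrower scope to take precedence. This page does not specify the product's precedence rules, so the sketch below is an illustration under that assumption, with hypothetical names:

```python
# Hypothetical sketch of scope resolution. Assumes (not confirmed by
# this page) that narrower scopes override broader ones:
# repository > workspace > organization.

def effective_policy(org_policy, workspace_policy=None, repo_policy=None):
    """Return the most specific policy defined for a request."""
    for policy in (repo_policy, workspace_policy, org_policy):
        if policy is not None:
            return policy
    raise ValueError("no policy defined at any scope")
```

For example, `effective_policy("org-default", repo_policy="repo-override")` would return the repository-level policy under this assumed precedence.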

What policies are used for

Policies act as guardrails for LLM traffic and broader AI usage. They can be used to:

  • reduce privacy/compliance risk,

  • detect or block sensitive data exposure,

  • enforce usage constraints,

  • standardize behavior across teams.

Examples include:

  • PII handling constraints,

  • redaction requirements,

  • disallowed content classes,

  • logging and alerting behavior.
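To make the redaction example concrete, here is a minimal sketch of what a redaction requirement might enforce before a prompt reaches a model. The patterns and function names are illustrative only; production PII detection would be far more robust than two regexes:

```python
import re

# Illustrative only: mask email addresses and US-style phone numbers.
# Real redaction policies would use more robust PII detection.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

def redact(text: str) -> str:
    """Replace detected PII with placeholder tokens."""
    text = EMAIL.sub("[REDACTED_EMAIL]", text)
    return PHONE.sub("[REDACTED_PHONE]", text)
```

A policy with a "block" action might instead reject the request outright when such patterns are detected, rather than rewriting it.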

Policy building blocks

A robust policy usually includes:

  • a scope (where it applies),

  • a risk level,

  • one or more actions (log, alert, block, etc.),

  • a clear natural-language rule describing intended behavior.
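The building blocks above can be pictured as a single record. The field names below are hypothetical, not the product's actual schema:

```python
from dataclasses import dataclass

# Hypothetical shape combining the policy building blocks above.
# Field names are illustrative, not the product's actual schema.

@dataclass
class Policy:
    scope: str        # "organization", "workspace", or "repository"
    risk_level: str   # e.g. "low", "medium", "high"
    actions: list     # one or more of "log", "alert", "block", etc.
    rule: str         # natural-language description of intended behavior

pii_policy = Policy(
    scope="organization",
    risk_level="high",
    actions=["alert", "block"],
    rule="Block prompts containing unredacted customer PII.",
)
```

Keeping the natural-language rule alongside the machine-readable fields makes the intended behavior auditable by humans as well as enforceable by the system.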

Relationship with Private Chat

Private Chat traffic that uses repository AI resources is subject to policy evaluation. This gives admins visibility and control over how AI interactions are handled across the organization.

API mapping

See: AI Policy API
