# Configure delegated access for GCP Vertex AI

## Overview

For Vertex AI, delegated access is typically implemented with a Google service account and short-lived OAuth access tokens generated through secure identity flows.

In production, prefer Workload Identity Federation or managed identity paths over long-lived JSON key files.

## When to use this

* Production systems where static service-account keys are discouraged.
* Centralized IAM/security operations with scoped permissions.
* Environments requiring frequent credential rotation and strong audit trails.

## Prerequisites

* GCP project with Vertex AI enabled.
* Permissions to create/manage service accounts and IAM roles.
* A service account dedicated to Flashback AI provider access.
* Flashback AI LLM setup path: [Configure an AI LLM](https://docs.flashback.tech/guides/setup-the-cloud-and-ai-gateway/start-with-cloud-storage/create-a-bucket-1).

## Step-by-step (provider side)

{% stepper %}
{% step %}

#### Create a service account with least privilege

Create a service account for Vertex AI calls and assign only required permissions.

Grant only the minimum roles required for your use case (for example, roles that allow prediction/invocation rather than broad project-admin roles such as Owner or Editor).
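
As an illustration, a least-privilege IAM policy binding might grant only the predefined Vertex AI User role to the dedicated service account. The project ID and service-account name below are placeholders; adjust the role to whatever minimum your use case actually needs.

```json
{
  "bindings": [
    {
      "role": "roles/aiplatform.user",
      "members": [
        "serviceAccount:vertex-caller@PROJECT_ID.iam.gserviceaccount.com"
      ]
    }
  ]
}
```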
{% endstep %}

{% step %}

#### Prefer short-lived credentials

Use Workload Identity Federation or token-exchange flows to obtain short-lived OAuth access tokens.

Where possible, avoid storing long-lived service-account JSON keys in production.
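
With Workload Identity Federation, your workload authenticates with an external credential config file instead of a private key. The fragment below is a sketch of that file; `PROJECT_NUMBER`, `POOL_ID`, `PROVIDER_ID`, `SA_EMAIL`, and the token file path are placeholders, and in practice the file is generated for you by `gcloud iam workload-identity-pools create-cred-config`.

```json
{
  "type": "external_account",
  "audience": "//iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/providers/PROVIDER_ID",
  "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
  "token_url": "https://sts.googleapis.com/v1/token",
  "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/SA_EMAIL:generateAccessToken",
  "credential_source": {
    "file": "/var/run/secrets/tokens/workload-token"
  }
}
```

Note that no private key appears anywhere in this file: the credential config only describes where to find the workload's own identity token and how to exchange it for a short-lived Google access token.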
{% endstep %}

{% step %}

#### Generate and refresh access tokens

Implement a secure token refresh mechanism (for example in a trusted backend service) that obtains fresh OAuth access tokens before expiry.

If your architecture needs a stable upstream interface, expose a small proxy that handles token acquisition and forwards requests to Vertex AI.
{% endstep %}
{% endstepper %}
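
The refresh mechanism described in the last step can be sketched as a small cache that hands out a token and fetches a new one shortly before expiry. This is a minimal illustration, not a Google API: `fetch` stands in for whatever your backend uses to mint tokens (for example, a wrapper around google-auth's credential refresh), and the 300-second skew is an arbitrary safety margin.

```python
import time
from typing import Callable, Optional, Tuple


class TokenCache:
    """Caches an OAuth access token and refreshes it before expiry.

    `fetch` is any callable returning (access_token, expires_in_seconds).
    The name and signature are illustrative placeholders for your own
    token-minting logic.
    """

    def __init__(self, fetch: Callable[[], Tuple[str, int]], skew: int = 300):
        self._fetch = fetch
        self._skew = skew              # refresh this many seconds before expiry
        self._token: Optional[str] = None
        self._expires_at = 0.0

    def token(self) -> str:
        # Refresh when there is no token yet, or we are inside the skew window.
        if self._token is None or time.time() >= self._expires_at - self._skew:
            self._token, expires_in = self._fetch()
            self._expires_at = time.time() + expires_in
        return self._token
```

Callers always go through `token()`, so requests never carry a credential that is about to expire, and the fetch cost is amortized across many calls.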

## Configure in Flashback

Use [Configure an AI LLM](https://docs.flashback.tech/guides/setup-the-cloud-and-ai-gateway/start-with-cloud-storage/create-a-bucket-1) with existing fields:

* Choose the appropriate **AI LLM Type** for your Vertex integration.
* Set **API Endpoint** to the Vertex AI endpoint (or your proxy endpoint).
* Set **API Secret** to the token/credential expected by that endpoint.
* Set **API Key** only if your integration endpoint requires an additional key.

{% hint style="info" %}
If your provider flow cannot be mapped directly to endpoint + secret/token, place the identity logic in a controlled proxy and configure Flashback to call that proxy.
{% endhint %}
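
If you route through a controlled proxy as the hint suggests, its core job is to strip the credential Flashback presents and attach a fresh Google access token before forwarding to Vertex AI. A minimal sketch of that header rewrite (the function name and behavior are illustrative; a real proxy would also validate the incoming secret, stream the body, and handle retries):

```python
def forward_headers(incoming: dict, access_token: str) -> dict:
    """Replace the caller's credential with a fresh bearer token.

    `incoming` is the header map received from Flashback; `access_token`
    is a short-lived Google OAuth token obtained by the proxy.
    """
    # Drop the caller's auth header and any hop-by-hop Host header.
    headers = {
        k: v
        for k, v in incoming.items()
        if k.lower() not in ("authorization", "host")
    }
    headers["Authorization"] = f"Bearer {access_token}"
    return headers
```

This keeps the Google identity material entirely inside the proxy: Flashback only ever sees the proxy endpoint and whatever secret you chose for it.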
