# PII-safe support assistant with policy enforcement

## The Problem

Customer support copilots process sensitive text (emails, addresses, order data, account identifiers). Without safeguards, prompts may leak PII or generate non-compliant responses.

## The Flashback Pattern

Combine:

1. **Repository-scoped AI access** (isolated keys),
2. **AI policies** (log / alert / block by risk),
3. **Application redaction** before model calls,
4. **Violation monitoring** for audits.
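Taken together, the four components form a simple request pipeline. A minimal sketch with placeholder stubs (every function body here is illustrative, not a Flashback API):

```python
def redact_pii(text: str) -> str:
    # Placeholder for the app-layer redaction shown later in this guide.
    return text.replace("@", "[at]")

def call_flashback_llm(prompt: str) -> str:
    # Placeholder for the gateway call; policies run server-side on this hop.
    return f"draft reply for: {prompt}"

def record_policy_events(reply: str) -> None:
    # Placeholder for the violation/alert logging used in the monitoring step.
    pass

def handle_ticket(ticket_text: str) -> str:
    safe_text = redact_pii(ticket_text)    # 3. redact before the model sees it
    reply = call_flashback_llm(safe_text)  # 1. scoped key, 2. policy enforcement
    record_policy_events(reply)            # 4. audit trail
    return reply
```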

## Prerequisites

* AI repository and API key dedicated to support workflows.
* AI policy configured for PII and restricted disclosures.
* Ticketing payload schema with fields that can contain PII.

References:

* [AI Policy API reference](https://docs.flashback.tech/support-reference/platform-api-reference/ai-apis/ai-policy)
* [AI LLM configuration](https://docs.flashback.tech/guides/setup-the-cloud-and-ai-gateway/start-with-cloud-storage/create-a-bucket-1)

## Implementation blueprint

{% stepper %}
{% step %}

#### Create and scope policy

Define policies at repository scope for support use cases:

* block full payment-card patterns,
* alert on personal addresses and phone numbers,
* disallow speculation outside the official knowledge base.

Use policy actions by severity:

* **Block** for critical data exfiltration patterns,
* **Alert** for risky but reviewable outputs,
* **Log** for observability-only checks.
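The exact schema for policy definitions is documented in the AI Policy API reference linked above; the structure below is only a hypothetical illustration of mapping detectors to severity-based actions, not the real payload.

```python
# Hypothetical policy definition -- field and detector names are illustrative.
# Consult the AI Policy API reference for the actual schema.
support_policy = {
    "scope": "repository",
    "rules": [
        {"detector": "payment_card",   "action": "block"},  # critical exfiltration
        {"detector": "postal_address", "action": "alert"},  # risky but reviewable
        {"detector": "phone_number",   "action": "alert"},
        {"detector": "kb_speculation", "action": "log"},    # observability only
    ],
}
```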
{% endstep %}

{% step %}

#### Redact sensitive input in app layer

```python
import re

def redact_pii(text: str) -> str:
    # Order matters: redact card numbers before phone numbers,
    # since both patterns match long digit runs.
    # Payment cards: 13-16 digits, optionally separated by spaces or dashes.
    text = re.sub(r"\b(?:\d[ -]?){12,15}\d\b", "[REDACTED_CARD]", text)
    # Email addresses (broad match; tune for your traffic).
    text = re.sub(r"[\w\.-]+@[\w\.-]+", "[REDACTED_EMAIL]", text)
    # Phone numbers: optional leading +, then 9+ digits with spaces/dashes.
    text = re.sub(r"\+?\d[\d\s\-]{7,}\d", "[REDACTED_PHONE]", text)
    return text
```

Retain the original, unredacted payload only in your secure system of record; the model should only ever see the redacted text.
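Redaction should run over every free-text field of the ticket payload before it leaves your boundary. A sketch (the payload field names are illustrative, and the inline redactor is a minimal stand-in for whichever helper you use):

```python
import re

def redact_card_numbers(text: str) -> str:
    # Minimal stand-in for a fuller PII redactor.
    return re.sub(r"\b\d{16}\b", "[REDACTED_CARD]", text)

# Hypothetical ticket payload; field names are illustrative.
ticket = {
    "subject": "Double charge",
    "body": "My card 4111111111111111 was charged twice.",
}

# Apply redaction field-by-field before any model call.
safe_ticket = {k: redact_card_numbers(v) for k, v in ticket.items()}
```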
{% endstep %}

{% step %}

#### Enforce answer boundaries

System message example:

```
You are a customer support assistant.
Use only approved knowledge snippets provided in context.
Never reveal secrets, internal IDs, or personal user data.
If information is missing, ask for escalation.
```

Keep this instruction template versioned.
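One lightweight way to version the template is a keyed registry checked into source control. The structure below is an assumption; any config store with version history works:

```python
# Versioned system-prompt registry; the storage format is illustrative.
PROMPTS = {
    "support-guardrails": {
        "v1": (
            "You are a customer support assistant.\n"
            "Use only approved knowledge snippets provided in context.\n"
            "Never reveal secrets, internal IDs, or personal user data.\n"
            "If information is missing, ask for escalation."
        ),
    },
}

def get_prompt(name: str, version: str = "v1") -> str:
    """Look up a specific, pinned version of a system prompt."""
    return PROMPTS[name][version]
```

Pinning the version in your request path makes prompt changes auditable and easy to roll back.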
{% endstep %}

{% step %}

#### Invoke AI through Flashback endpoint

```bash
curl -sS "$FB_OPENAI_BASE_URL/chat/completions" \
  -H "Authorization: Bearer $FB_API_KEY_SECRET" \
  -H "Content-Type: application/json" \
  -d '{
    "model":"gpt-4.1-mini",
    "messages":[
      {"role":"system","content":"You are a compliant support assistant..."},
      {"role":"user","content":"Help me answer this ticket safely."}
    ]
  }'
```
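The same call can be issued from application code. The sketch below only assembles the request (env var names mirror the curl example above; actually sending it requires live credentials):

```python
import json
import os

def build_chat_request(user_text: str) -> tuple[str, dict, str]:
    """Assemble the URL, headers, and JSON body for the gateway call."""
    base = os.environ.get("FB_OPENAI_BASE_URL", "https://example.invalid")
    url = f"{base}/chat/completions"
    headers = {
        "Authorization": f"Bearer {os.environ.get('FB_API_KEY_SECRET', '')}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": "gpt-4.1-mini",
        "messages": [
            {"role": "system", "content": "You are a compliant support assistant..."},
            {"role": "user", "content": user_text},
        ],
    })
    return url, headers, body
```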

{% endstep %}

{% step %}

#### Monitor violations and alerts

Operationalize a daily review of:

* policy violations trend,
* blocked request samples,
* false positives requiring policy tuning,
* escalations triggered by assistant uncertainty.

Route alerts to Slack or PagerDuty when the violation rate spikes.
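A simple spike check over daily violation counts is enough to start (the two-sigma threshold and the variance floor are assumptions; tune them to your baseline):

```python
from statistics import mean, pstdev

def violation_spike(history: list[int], today: int, sigmas: float = 2.0) -> bool:
    """Flag today's count if it exceeds the historical mean by `sigmas` std devs."""
    mu = mean(history)
    sd = pstdev(history)
    # Floor the deviation at 1.0 so a flat history doesn't flag tiny wiggles.
    return today > mu + sigmas * max(sd, 1.0)

# e.g. page on-call when violation_spike(last_30_days, todays_count) is True
```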
{% endstep %}
{% endstepper %}

## Expected outcome

A support assistant architecture with clear compliance guardrails, auditable controls, and reduced sensitive-data exposure risk.
