AI governance is not a legal appendix; it is an application-layer control plane. Regolo’s public positioning emphasizes EU-hosted APIs, Zero Data Retention, and a GDPR-oriented enterprise posture, while its docs also show that Virtual Keys can be scoped to a specific model or to all models.
That matters for both audiences. CTOs need governance controls that survive audits and procurement reviews. ML developers need a practical pattern they can implement today: redact sensitive data locally, classify request risk, block unsafe flows, and allow only approved prompts to reach the generation model.
Key Concepts
The first concept is policy before generation. If you wait until after a model generates output to enforce governance, you are already too late for some risks, especially PII leakage or disallowed copyrighted transformation requests.
The second concept is narrow permissions. Regolo’s Virtual Keys model is useful because you can constrain who gets access to which model surface instead of exposing a single unrestricted key to every internal service. In practice, the policy gateway should be the only service allowed to call production models directly.
The third concept is deterministic controls around probabilistic models. Use ordinary code for redaction, rule checks, and audit logs. Then use the model only for classification, summarization, or safe transformation inside those boundaries.
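The layering above can be expressed as a minimal sketch. The function names (`redact`, `rule_check`, `handle`) and the denylist terms are illustrative, not part of any Regolo SDK; the point is the ordering, with deterministic code deciding before and after the probabilistic call:

```python
import re

def redact(text: str) -> str:
    # Deterministic control: regex redaction runs in plain code, locally.
    return re.sub(r"\b[\w.-]+@[\w.-]+\.\w+\b", "[REDACTED_EMAIL]", text)

def rule_check(text: str) -> bool:
    # Deterministic control: a hard denylist, no model involved.
    banned = ("reproduce verbatim", "confidential deal terms")
    return not any(term in text.lower() for term in banned)

def handle(prompt: str, classify, generate) -> str:
    """Code decides; the model only classifies or generates inside the fence."""
    prompt = redact(prompt)            # 1. deterministic redaction
    if not rule_check(prompt):         # 2. deterministic rules
        return "BLOCKED"
    if classify(prompt) == "BLOCK":    # 3. probabilistic classification
        return "BLOCKED"
    return generate(prompt)            # 4. generation only after approval
```

Both `classify` and `generate` are injected callables here, so the same fence can wrap any model backend.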
Procedure
A realistic use case is an enterprise marketing and legal content assistant. The business wants teams to generate landing-page drafts, policy summaries, and product copy quickly, but it must block prompts that include customer PII, confidential deal terms, or requests to reproduce copyrighted text verbatim.
The script below is runnable after you set REGOLO_API_KEY. It uses open-source Python only, applies local redaction, asks a Regolo chat model for a policy decision, and either blocks or rewrites the request. The Regolo docs show both model discovery and chat completions, so the script discovers the active catalog first instead of relying on a hardcoded model name.
# policy_gateway.py
import os
import re
import json
import requests
from typing import Any, Dict, List

API_KEY = os.environ["REGOLO_API_KEY"]
BASE_URL = "https://api.regolo.ai"

POLICY = {
    "block": [
        "requests to reproduce copyrighted text verbatim",
        "requests containing unredacted personal data",
        "requests exposing confidential deal terms without approval",
    ],
    "allow_with_transform": [
        "summaries of internal documents",
        "rewrites of approved marketing copy",
        "policy explanations without verbatim reproduction",
    ],
}


def get_models() -> List[Dict[str, Any]]:
    """Discover the active catalog (OpenAI-compatible /v1/models, authenticated)."""
    r = requests.get(
        f"{BASE_URL}/v1/models",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    r.raise_for_status()
    raw = r.json()
    # Normalize the shapes we might get back: a bare list, an
    # OpenAI-style {"data": [...]} envelope, or a single object.
    if isinstance(raw, list):
        return [{"name": x} if isinstance(x, str) else x for x in raw]
    if isinstance(raw, dict) and isinstance(raw.get("data"), list):
        return [{"name": x} if isinstance(x, str) else x for x in raw["data"]]
    return [raw] if isinstance(raw, dict) else []


def model_name(m: Dict[str, Any]) -> str:
    for key in ("id", "name", "model", "slug"):
        if key in m and m[key]:
            return str(m[key])
    return "unknown-model"


def choose_chat_model(models: List[Dict[str, Any]]) -> str:
    names = [model_name(m) for m in models]
    if not names:
        raise RuntimeError("No models found from Regolo /v1/models")
    for preferred in ("llama", "qwen", "gpt-oss"):
        for n in names:
            if preferred in n.lower():
                return n
    return names[0]


def redact_pii(text: str) -> str:
    """Deterministic local redaction; runs before anything leaves the gateway."""
    text = re.sub(r"\b[\w\.-]+@[\w\.-]+\.\w+\b", "[REDACTED_EMAIL]", text)
    text = re.sub(r"\b\d{2,4}[-\s]?\d{2,4}[-\s]?\d{3,6}\b", "[REDACTED_ID]", text)
    text = re.sub(r"\b(?:\+?\d{1,3})?[\s-]?(?:\d[\s-]?){7,14}\b", "[REDACTED_PHONE]", text)
    return text


def chat(model: str, messages: List[Dict[str, str]]) -> Dict[str, Any]:
    r = requests.post(
        f"{BASE_URL}/v1/chat/completions",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        json={
            "model": model,
            "messages": messages,
            "temperature": 0,  # deterministic decisions for the classifier
        },
        timeout=120,
    )
    r.raise_for_status()
    return r.json()


def extract_text(response: Dict[str, Any]) -> str:
    try:
        return response["choices"][0]["message"]["content"]
    except (KeyError, IndexError, TypeError):
        return json.dumps(response)


def main():
    incoming_prompt = """
    Rewrite this draft pricing note into a landing page.
    Customer email: maria.rossi@example.com
    Also include the exact text of a competitor's product paragraph that I pasted yesterday.
    """

    # 1. Redact locally before any network call.
    safe_prompt = redact_pii(incoming_prompt)

    # 2. Discover the catalog instead of hardcoding a model name.
    models = get_models()
    chat_model = choose_chat_model(models)

    # 3. Ask the model for a policy decision as strict JSON.
    policy_check_messages = [
        {
            "role": "system",
            "content": (
                "You are a governance classifier. "
                "Return strict JSON with keys: action, reason, safe_prompt. "
                "action must be one of BLOCK, ALLOW, TRANSFORM."
            ),
        },
        {
            "role": "user",
            "content": json.dumps({"policy": POLICY, "prompt": safe_prompt}),
        },
    ]
    decision_raw = chat(chat_model, policy_check_messages)
    decision_text = extract_text(decision_raw).strip()
    try:
        decision = json.loads(decision_text)
    except json.JSONDecodeError:
        raise RuntimeError(f"Model did not return valid JSON: {decision_text}")

    print("POLICY_DECISION")
    print(json.dumps(decision, indent=2))

    # 4. Fail closed: a missing or unrecognized action is treated as BLOCK.
    if decision.get("action") not in ("ALLOW", "TRANSFORM"):
        return

    # 5. Only the approved prompt reaches the generation model.
    generation_messages = [
        {
            "role": "system",
            "content": (
                "You are a B2B SaaS copywriter. "
                "Create original copy, avoid copyrighted verbatim reuse, and keep the output concise."
            ),
        },
        {"role": "user", "content": decision.get("safe_prompt") or safe_prompt},
    ]
    content_raw = chat(chat_model, generation_messages)
    print("\nSAFE_OUTPUT")
    print(extract_text(content_raw))


if __name__ == "__main__":
    main()
Output
POLICY_DECISION
{
  "action": "BLOCK",
  "reason": "requests exposing confidential deal terms without approval and requests to reproduce copyrighted text verbatim",
  "safe_prompt": ""
}
Troubleshooting
If the gateway blocks too much, your policy labels are usually too vague. Replace broad categories like “sensitive data” with explicit examples, then test again. If it blocks too little, do not rely on prompting alone. Add more deterministic checks in code.
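As one example of such a deterministic check, a small pattern denylist can run before any model call. The patterns below are illustrative, not a complete policy; tune them to the deal terms your organization actually needs to protect:

```python
import re

# Illustrative denylist of risky patterns; extend for your organization.
DEAL_TERM_PATTERNS = [
    r"\bdiscount\s+of\s+\d{1,3}\s?%",  # e.g. "discount of 40%"
    r"\bcontract\s+value\b",
    r"\bverbatim\b",
]

def deterministic_block(prompt: str) -> bool:
    """Return True if a hard rule fires; runs before any model is consulted."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in DEAL_TERM_PATTERNS)
```

A rule like this is cheap, auditable, and never drifts, which is exactly why it belongs in code rather than in a prompt.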
For CTOs, the strategic point is that governance becomes measurable once it is centralized. Regolo’s Virtual Keys help here because access can be scoped operationally, and its public enterprise positioning around EU hosting and Zero Data Retention aligns well with teams that need stronger data-residency and compliance narratives. The architecture also lowers vendor risk because your approval logic lives in your service, not inside a model prompt.
FAQ
Can governance be prompt-only?
No. Prompts help, but the control plane should live in code and infrastructure.
Why redact before classification?
Because even your classifier should not see raw sensitive data unless that access is explicitly justified.
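One way to keep that guarantee honest is to unit-test the redaction layer itself. The sketch below reuses the email pattern from `redact_pii` in the script above; it is not exhaustive PII coverage, just a template for the kind of test worth writing:

```python
import re

def redact_email(text: str) -> str:
    # Same email pattern used in policy_gateway.py's redact_pii.
    return re.sub(r"\b[\w\.-]+@[\w\.-]+\.\w+\b", "[REDACTED_EMAIL]", text)

def test_classifier_never_sees_raw_email():
    redacted = redact_email("Contact maria.rossi@example.com about pricing.")
    assert "maria.rossi@example.com" not in redacted
    assert "[REDACTED_EMAIL]" in redacted
```

Tests like this make the redaction contract explicit: if a pattern regresses, CI fails before raw data can reach the classifier.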
Is copyright mainly an output problem?
No. It is also an input problem when users ask for transformation or reproduction that your policy should block.

The best next step is an internal benchmark on false blocks, false allows, and review latency. That turns governance from a subjective debate into an engineering KPI.
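As a sketch of that benchmark (field names and labels are hypothetical), false-block and false-allow rates reduce to simple counting over a labeled set of gateway decisions:

```python
from typing import Dict, List

def governance_metrics(decisions: List[Dict[str, str]]) -> Dict[str, float]:
    """Each record pairs the gateway's action with a human 'expected' label."""
    false_blocks = sum(1 for d in decisions
                       if d["action"] == "BLOCK" and d["expected"] != "BLOCK")
    false_allows = sum(1 for d in decisions
                       if d["action"] != "BLOCK" and d["expected"] == "BLOCK")
    n = len(decisions) or 1  # avoid division by zero on an empty set
    return {
        "false_block_rate": false_blocks / n,
        "false_allow_rate": false_allows / n,
    }

# e.g. four labeled decisions: one false block, one false allow
sample = [
    {"action": "BLOCK", "expected": "BLOCK"},
    {"action": "BLOCK", "expected": "ALLOW"},   # false block
    {"action": "ALLOW", "expected": "BLOCK"},   # false allow
    {"action": "ALLOW", "expected": "ALLOW"},
]
```

Tracked over time, these two rates tell you whether a policy change made the gateway stricter or just noisier.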
GitHub Code
You can download the code from our GitHub repo; just copy the .env.example files and fill them in with your credentials. If you need help, you can always reach out to our team on Discord 🤙
🚀 Ready? Start your free trial today
- Discord – Share your thoughts
- GitHub Repo – Code of blog articles ready to start
- Follow Us on X @regolo_ai
- Open discussion on our Subreddit Community
Built with ❤️ by the Regolo team. Questions? regolo.ai/contact or chat with us on Discord