AI Governance Policy
Effective Date: February 27, 2026
Last Updated: February 27, 2026
Introduction
hyperfocus.tech deploys AI agents that perform e-commerce operations on behalf of brands and agencies — financial reconciliation, creator management, catalog optimization, campaign execution, and compliance monitoring across platforms like TikTok Shop, Shopify, and Amazon.
Because our agents take real actions with real business consequences, we hold ourselves to a higher standard than typical SaaS. This policy documents how we build, deploy, and govern AI systems in alignment with the NIST AI Risk Management Framework (AI RMF 1.0), with consideration for requirements under the Texas Responsible AI Governance Act (TRAIGA), the Colorado AI Act, and California privacy and AI transparency regulations.
This is a living document. We update it as regulations evolve and as we learn from operating AI agents in production.
Seven Characteristics of Trustworthy AI
The NIST AI RMF defines seven characteristics that AI systems should exhibit. Here is how we implement each one.
1. Valid & Reliable
Our agents produce accurate, consistent results that match what they were designed to do.
- Every agent is tested against accuracy benchmarks before production deployment
- Outputs are validated against source data (invoices against platform reports, reconciliation against bank records)
- Agents that fall below accuracy thresholds are automatically suspended and flagged for review (an illustrative sketch of this rule follows this list)
- Results are reproducible — same inputs produce same recommendations
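To make the suspension rule concrete, here is a minimal sketch of a pre-deployment accuracy gate. The function names, the `BenchmarkResult` structure, and the 0.95 threshold are illustrative assumptions, not our production configuration; real thresholds are set per agent and per task.

```python
# Illustrative sketch only: names and the 0.95 threshold are examples,
# not production values.
from dataclasses import dataclass

ACCURACY_THRESHOLD = 0.95  # example value; real thresholds are configured per agent

@dataclass
class BenchmarkResult:
    agent_id: str
    correct: int
    total: int

    @property
    def accuracy(self) -> float:
        return self.correct / self.total if self.total else 0.0

def suspend_agent(agent_id: str, reason: str) -> None:
    # Placeholder for the real suspension-and-review workflow.
    print(f"Agent {agent_id} suspended for review: {reason}")

def evaluate_for_deployment(result: BenchmarkResult) -> bool:
    """Allow deployment only when benchmark accuracy clears the threshold."""
    if result.accuracy < ACCURACY_THRESHOLD:
        suspend_agent(result.agent_id, f"accuracy {result.accuracy:.1%} below threshold")
        return False
    return True
```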
2. Safe
Our agents are designed to avoid causing harm, even when they encounter unexpected situations.
- Guardrails enforce hard limits on financial actions, data modifications, and external communications
- When confidence is low, agents escalate to humans rather than acting autonomously
- Failure mode is always “stop and ask” — never “guess and act” (see the sketch after this list)
- Actions are bounded — no agent can exceed its defined scope of authority
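A minimal sketch of the "stop and ask" rule, under the assumption of a single hard dollar limit per action type and a single confidence floor (both values below are hypothetical). Any action outside these bounds is routed to a human rather than executed.

```python
# Illustrative guardrail; limits, field names, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    kind: str          # e.g. "refund", "message", "catalog_update"
    amount_usd: float  # 0 for non-financial actions
    confidence: float  # agent confidence in [0, 1]

HARD_LIMITS_USD = {"refund": 200.0}  # example hard limit per action kind
CONFIDENCE_FLOOR = 0.80              # example value

def decide(action: ProposedAction) -> str:
    """Return "execute" only when the action is inside every guardrail."""
    limit = HARD_LIMITS_USD.get(action.kind)
    if limit is not None and action.amount_usd > limit:
        return "stop_and_ask"  # exceeds a hard financial limit
    if action.confidence < CONFIDENCE_FLOOR:
        return "stop_and_ask"  # low confidence: escalate, never guess
    return "execute"
```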
3. Secure & Resilient
Our platform is built to resist attacks and recover gracefully from failures.
- Input sanitization protects against prompt injection and data poisoning
- All agent communications are encrypted in transit (TLS 1.3) and at rest (AES-256)
- SOC 2 Type II compliant infrastructure on Google Cloud
- Google Cloud enterprise security stack: VPC network isolation, IAM authentication, Cloud KMS encryption, Security Command Center monitoring, and container scanning
- Regular security testing including adversarial red-team exercises
4. Accountable & Transparent
Every agent action is traceable, and every agent has a named owner responsible for its behavior.
- Complete decision logs: what the agent did, why, and with what confidence (an example entry follows this list)
- Every agent has a designated owner accountable for its performance and behavior
- Audit trails are immutable and available to customers on request
- We clearly disclose when AI is performing work (no hidden automation)
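To show what "what, why, and with what confidence" looks like in practice, here is a hedged sketch of a single decision-log entry. The field names and the frozen-dataclass representation are illustrative assumptions, not our actual schema; note that the entry carries only an opaque decision ID, never PII.

```python
# Illustrative decision-log entry; field names are examples, not our schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: entries are written once and never mutated
class DecisionLogEntry:
    decision_id: str        # opaque identifier, no PII
    agent_id: str
    owner: str              # named owner accountable for the agent
    action: str             # what the agent did
    reasoning: list[str]    # reasoning chain behind the decision
    confidence: float       # confidence score in [0, 1]
    logged_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

entry = DecisionLogEntry(
    decision_id="dec_0001",
    agent_id="reconciliation-agent",
    owner="finance-ops",
    action="flagged_invoice_mismatch",
    reasoning=["invoice total differs from platform report", "difference exceeds tolerance"],
    confidence=0.91,
)
```

The frozen dataclass mirrors the immutability commitment above: once written, an entry cannot be edited in place.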
5. Explainable & Interpretable
You can understand why an agent made any given decision.
- Confidence scores accompany every recommendation and action
- Reasoning chains explain the logic behind each decision
- Decision logs are accessible through the dashboard — no black boxes
- Customers can request detailed explanations of any agent action
6. Privacy-Enhanced
Data minimization is a core design principle, not an afterthought.
- Each agent has scoped, least-privilege access — agents only see data relevant to their function (sketched in the example after this list)
- No PII is stored in decision logs (only decision IDs, actions, and confidence levels)
- Data access is logged and auditable
- We comply with platform-specific data retention requirements (TikTok 48h deletion, Amazon 30-day PII limits)
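A hedged sketch of how per-agent scoping could be expressed. The agent names and scope strings are hypothetical examples, not our actual access matrix; the point is that access is an explicit allowlist and every lookup is logged.

```python
# Illustrative least-privilege scoping; agent names and scopes are examples.
AGENT_SCOPES = {
    "reconciliation-agent": {"invoices:read", "bank_records:read"},
    "creator-outreach-agent": {"creator_profiles:read", "messages:draft"},
}

def audit_log(agent_id: str, scope: str, allowed: bool) -> None:
    # Placeholder for the real audit sink: agent, scope, and outcome only, no PII.
    print(f"{agent_id} requested {scope}: {'granted' if allowed else 'denied'}")

def can_access(agent_id: str, scope: str) -> bool:
    """An agent may touch only the data scopes explicitly granted to it."""
    allowed = scope in AGENT_SCOPES.get(agent_id, set())
    audit_log(agent_id, scope, allowed)
    return allowed
```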
7. Fair — with Harmful Bias Managed
Our agents treat all customers, creators, and partners equitably.
- Creator selection and outreach algorithms are tested for demographic bias
- Campaign budget allocation does not discriminate based on protected characteristics
- Regular bias audits with diverse test cases (one illustrative check appears after this list)
- When bias is detected, agents are retrained or recalibrated before resuming operations
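As one hedged example of what a bias audit can check, the sketch below compares selection rates across groups in a labeled test set and fails the audit when the gap exceeds a tolerance. The single metric (selection-rate gap) and the 0.10 tolerance are illustrative assumptions; real audits combine multiple metrics with human review.

```python
# Illustrative bias check for a creator-selection agent; the metric and the
# 0.10 tolerance are examples only.
from collections import defaultdict

MAX_SELECTION_RATE_GAP = 0.10  # example tolerance

def selection_rate_gap(test_cases: list[dict]) -> float:
    """test_cases: [{"group": str, "selected": bool}, ...] from a diverse benchmark."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for case in test_cases:
        counts[case["group"]][0] += int(case["selected"])
        counts[case["group"]][1] += 1
    rates = [selected / total for selected, total in counts.values() if total]
    return max(rates) - min(rates) if rates else 0.0

def audit_passes(test_cases: list[dict]) -> bool:
    """False triggers retraining or recalibration before the agent resumes."""
    return selection_rate_gap(test_cases) <= MAX_SELECTION_RATE_GAP
```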
Human Oversight Model
Our agents operate on a tiered autonomy model. The higher the business impact of an action, the more human involvement is required.
Tiered Autonomy Levels
| Level | Action | Human Role |
|---|---|---|
| Read | Agent reads and collects data | None required |
| Analyze | Agent performs analysis and identifies patterns | None required |
| Recommend | Agent suggests actions with reasoning | Human reviews recommendations |
| Draft | Agent prepares outputs (reports, messages) | Human reviews before sending |
| Execute | Agent takes action within defined scope | Human approval required for high-impact actions |
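The tiered model above could be expressed as configuration along these lines; the keys, values, and helper function are illustrative, not our actual configuration format.

```python
# Illustrative autonomy configuration mirroring the table above;
# names and values are examples, not a production config format.
AUTONOMY_LEVELS = {
    "read":      {"human_role": None},
    "analyze":   {"human_role": None},
    "recommend": {"human_role": "review_recommendation"},
    "draft":     {"human_role": "review_before_send"},
    "execute":   {"human_role": "approve_if_high_impact"},
}

def human_step_required(level: str, high_impact: bool) -> bool:
    """Map an autonomy level (and impact) to whether a human must be in the loop."""
    role = AUTONOMY_LEVELS[level]["human_role"]
    if role is None:
        return False
    if role == "approve_if_high_impact":
        return high_impact
    return True
```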
Mandatory Human Approval
The following actions always require explicit human approval, regardless of agent confidence:
- Financial actions above configured thresholds (refunds, adjustments, transfers)
- External communications that create business liability
- Data deletion or modification of historical records
- Changes to pricing, product listings, or inventory levels
- Compliance-sensitive decisions (disputes, vendor changes)
- New vendor or partner relationship creation
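A hedged sketch of how this list can be enforced as a rule set. The category names mirror the bullets above, and the $500 financial threshold is a placeholder for a customer-configured value.

```python
# Illustrative mandatory-approval rules; category names mirror the policy list
# above, and the 500.00 threshold is a placeholder for a configured value.
ALWAYS_APPROVE_CATEGORIES = {
    "external_communication_with_liability",
    "historical_data_deletion_or_modification",
    "pricing_listing_or_inventory_change",
    "compliance_sensitive_decision",
    "new_vendor_or_partner",
}
FINANCIAL_APPROVAL_THRESHOLD_USD = 500.00  # example customer-configured threshold

def requires_human_approval(category: str, amount_usd: float = 0.0) -> bool:
    """Approval is required regardless of how confident the agent is."""
    if category in ALWAYS_APPROVE_CATEGORIES:
        return True
    if category == "financial_action" and amount_usd > FINANCIAL_APPROVAL_THRESHOLD_USD:
        return True
    return False
```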
Escalation Triggers
Agents automatically escalate to human review when:
- Confidence score falls below the configured threshold
- Daily or monthly spending limits are approached
- An anomaly is detected in the data
- The action falls outside the agent's defined scope of authority
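A minimal sketch of these triggers, assuming hypothetical threshold values and an upstream anomaly detector; any single trigger is enough to route the action to human review.

```python
# Illustrative escalation check; thresholds and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class ActionContext:
    confidence: float       # agent confidence in [0, 1]
    spend_today_usd: float  # spend so far today, including this action
    daily_limit_usd: float  # configured daily spending limit
    anomaly_detected: bool  # flag from an upstream anomaly detector
    in_scope: bool          # whether the action is within the agent's authority

CONFIDENCE_THRESHOLD = 0.80  # example value
SPEND_WARNING_RATIO = 0.90   # escalate once 90% of the daily limit is reached

def should_escalate(ctx: ActionContext) -> bool:
    return (
        ctx.confidence < CONFIDENCE_THRESHOLD
        or ctx.spend_today_usd >= SPEND_WARNING_RATIO * ctx.daily_limit_usd
        or ctx.anomaly_detected
        or not ctx.in_scope
    )
```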
Prohibited Uses
In alignment with NIST guidance and applicable state law, including the Texas Responsible AI Governance Act (TRAIGA), our AI agents are explicitly prohibited from the following:
- Behavioral manipulation: Agents must not exploit cognitive biases or use deceptive techniques to influence user decisions
- Discrimination: Agents must not make decisions that discriminate based on race, gender, age, disability, religion, or any other protected characteristic
- Social scoring: Agents must not evaluate or rank individuals based on behavioral profiling or social characteristics
- Biometric identification: Agents must not perform real-time biometric identification or facial recognition
- Unauthorized surveillance: Agents must not monitor individuals without their knowledge and consent
- Autonomous high-stakes decisions: Agents must not make binding financial, legal, or compliance decisions without human approval
Customer Rights
Because our agents perform work on your behalf — not just provide software — we adopt a service-level accountability model. These rights are built into our customer agreements.
1. Delegation of Authority
You define what each agent can and cannot do. Agent scope is explicitly documented — what actions are permitted, what requires approval, and what is prohibited. Agents cannot exceed their defined authority.
2. Service Warranties
We warrant that our service is performed in a professional, diligent manner in accordance with industry standards. Unlike typical SaaS “as-is” disclaimers, we guarantee the quality of work itself — as if performed by qualified professionals.
3. Outcome SLAs
We commit to measurable performance guarantees: accuracy of processed documents, the rate of autonomous actions that result in customer complaints, and the timeliness of recommended actions within defined service windows.
4. Audit Rights
You have the right to access decision logs, assess agent performance against SLAs, and request detailed explanations of any agent action. Audit data is available through your dashboard at any time.
5. Policy Guardrails
Certain actions always require human approval — these are hard limits in our system, not user-configurable settings. Financial refunds above thresholds, new vendor relationships, and compliance-sensitive decisions are never fully automated.
Data Governance
Ownership
- You own all outputs, reports, and recommendations generated by our agents
- We do not sell, share, or use your data for model training without explicit consent
- You can request complete data deletion at any time
Access Controls
- Each agent operates under least-privilege access — agents only see data relevant to their function
- No single agent has access to all customer data
- All data access is logged and auditable
- Role-based access control for team members within your organization
Encryption & Security
- All data encrypted at rest using AES-256 on Google Cloud SQL
- All data encrypted in transit using TLS 1.3
- No PII stored in decision logs or agent memory
- Platform-specific retention: TikTok Shop data deleted within 48 hours of disconnection, Amazon PII within 30 days
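An illustrative view of the platform-specific retention rules above; the dictionary structure and helper are assumptions made for the example, while the windows themselves come from the platform requirements cited in the bullet.

```python
# Illustrative retention schedule mirroring the bullet above; structure is an
# example, the windows come from platform-specific requirements.
from datetime import timedelta

RETENTION_WINDOWS = {
    "tiktok_shop_data": timedelta(hours=48),  # after store disconnection
    "amazon_pii":       timedelta(days=30),   # Amazon's 30-day PII limit
}

def purge_due(age: timedelta, data_class: str) -> bool:
    """True once data of this class has exceeded its retention window."""
    return age >= RETENTION_WINDOWS[data_class]
```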
Regulatory Alignment
We actively monitor and prepare for the evolving regulatory landscape around AI:
- NIST AI RMF 1.0: Our primary governance framework; all seven trustworthy AI characteristics are implemented and documented.
- Texas TRAIGA (HB 149): Substantial compliance with the NIST AI RMF provides a safe harbor under this law.
- Colorado AI Act (SB24-205): Risk management policy, impact assessments, human oversight, and consumer notification requirements are addressed.
- California AB 2013: Generative AI training data transparency requirements (effective January 1, 2026).
- California CCPA/CPRA: Automated decision-making technology provisions under the CPPA's regulations and consumer data privacy protections.
Contact
Questions about our AI governance practices? We welcome the conversation.
Email: trust@hyperfocus.tech
Address: 28 Geary St, Suite 650, San Francisco, CA 94108