AI Governance & Risk Control

Managing AI-augmented development in regulated environments. This framework describes the governance, risk controls, and compliance measures we apply when using AI agents in software delivery.

Foundational Principles

AI as Tooling

AI agents are development tooling, not autonomous decision-makers. AI does not make business decisions or approve its own outputs.

Human Accountability

Every AI-assisted output has a named human accountable for its quality and correctness. Humans remain accountable for all outcomes.

Transparency

Clients are informed that AI agents are used. AI-generated content is identifiable. We do not claim AI outputs as purely human work.

Data Handling Principles

Core Commitment

Client data is not used to train AI models. Client code and data are processed only for the immediate task.

What We Include in AI Prompts

  • Sanitised code snippets (credentials removed)
  • Anonymised requirements and specifications
  • Generic architectural patterns
  • Public documentation references

What We Exclude from AI Prompts

  • Production credentials or secrets
  • Personally identifiable information (PII)
  • Client proprietary business logic (unless authorised)
  • Security vulnerability details
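As a minimal sketch of how the include/exclude rules above might be enforced in practice, the snippet below strips obvious credential assignments and email addresses before a code fragment enters a prompt. The patterns are illustrative only; a production pipeline would use a vetted secrets scanner and a dedicated PII detection step.

```python
import re

# Illustrative redaction patterns only -- not an exhaustive secrets/PII list.
REDACTION_PATTERNS = [
    # credential-style assignments, e.g. api_key=abc123 or password: hunter2
    (re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*\S+"),
     r"\1=<REDACTED>"),
    # crude email-address match as a stand-in for PII detection
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "<EMAIL-REDACTED>"),
]

def sanitise_for_prompt(snippet: str) -> str:
    """Strip obvious credentials and PII before a snippet enters an AI prompt."""
    for pattern, replacement in REDACTION_PATTERNS:
        snippet = pattern.sub(replacement, snippet)
    return snippet
```

A redacted snippet can still carry enough structure for the AI to reason about, while the original values never leave the development environment.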

Agent Access Controls

Least Privilege

  • Read access to relevant repos only
  • No direct production access
  • No secrets management access
  • No deploy without human approval

Environment Isolation

  • Development environments only
  • Sandboxed code execution
  • Network restrictions
  • No persistent access between sessions

Authentication

  • Service accounts with limited permissions
  • Access tokens with short expiry
  • Activity logging for all actions
  • Regular access review and rotation
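The short-expiry rule above can be expressed as a simple policy check. The one-hour threshold here is an assumed value for illustration; actual token issuance and lifetimes depend on the identity provider in use.

```python
from datetime import datetime, timedelta, timezone

# Assumed "short expiry" threshold for agent access tokens (illustrative).
MAX_TOKEN_LIFETIME = timedelta(hours=1)

def token_within_policy(issued_at: datetime, expires_at: datetime) -> bool:
    """Reject tokens whose lifetime exceeds the agent-access policy."""
    return expires_at - issued_at <= MAX_TOKEN_LIFETIME
```

Checks like this belong in the token-issuing service, so an over-long token is never minted rather than caught later.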

Logging & Traceability

What We Log

  • Timestamp of interaction
  • AI service/model used
  • Prompt content (sanitised if necessary)
  • Response content
  • Human reviewer identity
  • Acceptance/rejection decision
  • Modifications made to AI output
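The fields above can be captured as a structured, append-only log record. The record shape and field names below are illustrative, not a mandated schema; the example values are placeholders.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIInteractionLog:
    timestamp: str      # ISO 8601, UTC
    model: str          # AI service/model used
    prompt: str         # prompt content (sanitised if necessary)
    response: str       # response content
    reviewer: str       # human reviewer identity
    decision: str       # "accepted" | "rejected" | "modified"
    modifications: str  # summary of changes made to the AI output

def record(entry: AIInteractionLog) -> str:
    """Serialise one interaction as a JSON line for an append-only log."""
    return json.dumps(asdict(entry))

entry = AIInteractionLog(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model="example-model-v1",
    prompt="Refactor the retry helper",
    response="def retry(...): ...",
    reviewer="a.dev@example.com",
    decision="modified",
    modifications="Tightened exception handling",
)
```

One JSON line per interaction keeps the log greppable and easy to ship into whatever audit store a client already runs.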

Traceability Chain

1. Production Code
2. Commit (author, reviewer, timestamp)
3. Pull Request (review, approval)
4. Ticket (requirements, acceptance criteria)
5. AI Interaction Log (if AI-assisted)

Risk Assessment Summary

| Risk | Likelihood | Impact | Mitigation | Residual |
|---|---|---|---|---|
| AI generates incorrect code | Medium | Medium | Human review, testing | Low |
| AI generates insecure code | Medium | High | Security review, scanning | Low |
| Client data exposed via AI | Low | High | Data controls, enterprise agreements | Low |
| Model update causes issues | Medium | Medium | Version pinning, testing | Low |
| Regulatory non-compliance | Low | High | Governance framework, monitoring | Low |

Regulatory Compliance

Our AI usage is designed to support compliance with:

GDPR / UK Data Protection

No personal data is processed without a lawful basis; data minimisation is applied.

Financial Services Regulations

Audit trails and accountability maintained.

Public Sector Requirements

Data sovereignty and security standards respected.

Industry Standards

ISO 27001, SOC 2 principles applied.

We monitor and adapt to EU AI Act requirements, UK AI regulatory developments, and sector-specific guidance.

Need the full governance framework?

We provide complete governance documentation and can customise for your specific regulatory requirements.

Get in Touch