AI-Augmented Delivery Methodology

A process framework for regulated enterprises, describing how we use AI agents within strict human-defined boundaries to deliver software that meets governance, auditability, and quality requirements.

1. Project Initiation

Engagement Qualification

Before any engagement begins, we assess fit:

  • Organisational readiness for discovery and review cycles
  • Scope clarity and problem understanding
  • Governance compatibility with existing frameworks
  • Technical feasibility within constraints

Engagements that fail qualification are declined.

Governance Setup

Upon engagement confirmation:

  • Master services agreement executed
  • Project governance structure defined
  • Communication cadence established
  • Tooling access provisioned
  • Security and data handling agreements finalised
2. Discovery & Scoping

Context Gathering

  • Stakeholder interviews (business, technical, compliance)
  • Existing system documentation review
  • Technical environment assessment
  • Constraint identification
  • Risk and dependency mapping

Scope Definition Outputs

  • Scope document: Inclusions and exclusions
  • Requirements catalogue: Functional and non-functional
  • Constraint register: Technical, regulatory, business
  • Architecture decision record: Key choices with rationale
  • Risk register: Identified risks with mitigations
  • Definition of done: Acceptance criteria
3. Specification Standards

Every unit of work is captured as a formal ticket before AI agents are engaged.

Required Ticket Fields

All Tickets

  • Title: Clear, descriptive summary
  • Description: Detailed explanation
  • Acceptance criteria: Testable conditions
  • Constraints: Technical or business limitations
  • Dependencies: Other tickets or external factors
  • Priority: Business importance ranking
  • Estimate: Effort range

Feature Tickets Additionally

  • User story or job-to-be-done framing
  • Edge cases: Known boundary conditions
  • Error handling: Expected failure behaviour
  • Security considerations
  • Compliance notes: Regulatory implications

Quality Gate

Tickets must pass review before implementation begins. Acceptance criteria must be testable, constraints explicit, dependencies identified, and security considerations documented. Tickets that fail quality review are returned for refinement.
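The required fields and the quality gate above can be sketched as a simple validation check. This is a minimal sketch, not our tooling: the `Ticket` field names and the `passes_quality_gate` function are illustrative assumptions.

```python
from dataclasses import dataclass

# Hypothetical ticket structure; field names mirror the required
# fields listed above but are illustrative, not prescriptive.
@dataclass
class Ticket:
    title: str
    description: str
    acceptance_criteria: list  # each entry must be a testable condition
    constraints: list          # technical or business limitations
    dependencies: list         # may be empty if nothing blocks the work
    priority: str
    estimate: str

def passes_quality_gate(ticket: Ticket) -> bool:
    """Return True only if every required field is populated.

    Mirrors the quality gate: tickets with no acceptance criteria or
    undocumented constraints are returned for refinement.
    """
    return all([
        ticket.title.strip(),
        ticket.description.strip(),
        len(ticket.acceptance_criteria) > 0,
        len(ticket.constraints) > 0,
        ticket.priority.strip(),
        ticket.estimate.strip(),
    ])
```

For example, a ticket with an empty acceptance-criteria list fails the gate and is sent back for refinement before any AI agent is engaged.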

4. AI Agent Usage by Phase

Discovery Phase

AI agents assist with documentation analysis, codebase assessment, requirement extraction, and risk identification.

Human accountability: All AI outputs reviewed by senior team member before inclusion.

Specification Phase

AI agents assist with drafting acceptance criteria, identifying edge cases, generating test outlines, and consistency checking.

Human accountability: Specifications authored by humans, AI suggestions reviewed and edited.

Implementation Phase

AI agents operate as junior developers (implementing well-specified tasks), senior developers (code review, refactoring), and pair programmers (real-time assistance).

Human accountability: All AI-generated code reviewed before merge. Named reviewer on every PR.

Testing Phase

AI agents assist with test case generation, edge case tests, test data generation, and coverage analysis.

Human accountability: Test strategy defined by humans. AI-generated tests reviewed for correctness.

5. Human Review Checkpoints

The following require explicit human approval before proceeding:

Checkpoint                    Reviewer                    Criteria
Specification approval        Product owner + Tech lead   Complete, testable, constraints documented
Architecture decisions        Lead architect              Aligned with principles, risks assessed
Code merge                    Senior developer            Correct, maintainable, tested, secure
Security-sensitive changes    Security reviewer           No vulnerabilities introduced
Database migrations           DBA or senior engineer      Reversible, performant, data-safe
Release approval              Release manager             All gates passed, rollback plan ready
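The checkpoint table can be encoded so that tooling refuses to proceed without the named approver. A minimal sketch, assuming the roles and criteria exactly as tabled; the dictionary keys and `required_reviewer` function are illustrative.

```python
# Illustrative encoding of the checkpoint table; reviewer roles and
# criteria are taken verbatim from the table above.
CHECKPOINTS = {
    "specification_approval":     ("Product owner + Tech lead", "Complete, testable, constraints documented"),
    "architecture_decision":      ("Lead architect",            "Aligned with principles, risks assessed"),
    "code_merge":                 ("Senior developer",          "Correct, maintainable, tested, secure"),
    "security_sensitive_change":  ("Security reviewer",         "No vulnerabilities introduced"),
    "database_migration":         ("DBA or senior engineer",    "Reversible, performant, data-safe"),
    "release_approval":           ("Release manager",           "All gates passed, rollback plan ready"),
}

def required_reviewer(checkpoint: str) -> str:
    """Look up who must explicitly approve before work proceeds."""
    reviewer, _criteria = CHECKPOINTS[checkpoint]
    return reviewer
```

Encoding the table this way keeps the approval rules machine-checkable, so a pipeline can block a merge or release until the named role has signed off.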
6. Quality Assurance

Testing Pyramid

Unit Tests (Base)

80%+ coverage for business logic. Run on every commit.

Integration Tests (Middle)

API contracts, database integration. Run on every PR.

End-to-End Tests (Top)

Critical user journeys, regression suite. Run before release.
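The base of the pyramid is illustrated below: a fast, deterministic unit test of a piece of business logic. The function, the flat-fee rule, and the 30-day boundary are hypothetical examples, not rules from this methodology.

```python
# Hypothetical business rule used only to illustrate a unit test:
# charge a flat 15.00 late fee once an invoice is more than 30 days overdue.
def apply_late_fee(balance: float, days_overdue: int) -> float:
    return balance + 15.0 if days_overdue > 30 else balance

def test_apply_late_fee():
    assert apply_late_fee(100.0, 31) == 115.0   # fee applied past 30 days
    assert apply_late_fee(100.0, 30) == 100.0   # boundary: no fee at exactly 30 days
```

Tests like this run on every commit; note the explicit boundary case, which is the kind of condition the "Edge cases" ticket field is meant to capture.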

Automated Quality Gates

  • Unit test pass rate: 100%
  • Integration test pass rate: 100%
  • Code coverage threshold met
  • Static analysis: No critical issues
  • Security scanning: No high/critical vulnerabilities
  • Dependency audit: No known vulnerabilities
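The gate list above can be expressed as a single pass/fail evaluation. A minimal sketch: the `results` keys and the `all_gates_pass` function are assumptions, and the 80% coverage threshold is taken from the testing-pyramid section.

```python
def all_gates_pass(results: dict) -> bool:
    """Evaluate the automated quality gates.

    `results` maps gate names to measured values; thresholds mirror
    the bullet list above (100% test pass rates, coverage threshold,
    zero critical static-analysis issues, zero high/critical or known
    dependency vulnerabilities).
    """
    return (
        results["unit_pass_rate"] == 1.0
        and results["integration_pass_rate"] == 1.0
        and results["coverage"] >= 0.80
        and results["critical_static_issues"] == 0
        and results["high_or_critical_vulns"] == 0
        and results["known_vulnerable_deps"] == 0
    )
```

Because every gate is a hard threshold rather than a judgment call, the check can run unattended in CI and block a release automatically when any gate fails.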

Want the full methodology documentation?

We provide complete process documentation as part of every engagement.
