AI-Augmented Delivery Methodology
A process framework for regulated enterprises. It describes how we use AI agents within strict, human-defined boundaries to deliver software that meets governance, auditability, and quality requirements.
Project Initiation
Engagement Qualification
Before any engagement begins, we assess fit:
- Organisational readiness for discovery and review cycles
- Scope clarity and problem understanding
- Governance compatibility with existing frameworks
- Technical feasibility within constraints
Engagements that fail qualification are declined.
Governance Setup
Upon engagement confirmation:
- Master services agreement executed
- Project governance structure defined
- Communication cadence established
- Tooling access provisioned
- Security and data handling agreements finalised
Discovery & Scoping
Context Gathering
- Stakeholder interviews (business, technical, compliance)
- Existing system documentation review
- Technical environment assessment
- Constraint identification
- Risk and dependency mapping
Scope Definition Outputs
- Scope document: Inclusions and exclusions
- Requirements catalogue: Functional and non-functional
- Constraint register: Technical, regulatory, business
- Architecture decision record: Key choices with rationale
- Risk register: Identified risks with mitigations
- Definition of done: Acceptance criteria
Specification Standards
Every unit of work is captured as a formal ticket before AI agents are engaged.
Required Ticket Fields
All Tickets
- Title: Clear, descriptive summary
- Description: Detailed explanation
- Acceptance criteria: Testable conditions
- Constraints: Technical or business limitations
- Dependencies: Other tickets or external factors
- Priority: Business importance ranking
- Estimate: Effort range
Feature Tickets (Additional Fields)
- User story or job-to-be-done framing
- Edge cases: Known boundary conditions
- Error handling: Expected failure behaviour
- Security considerations
- Compliance notes: Regulatory implications
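The required fields above can be sketched as a simple data structure. This is an illustrative model only; the field names, types, and the feature-ticket subclass are assumptions, not a mandated schema.

```python
from dataclasses import dataclass, field


@dataclass
class Ticket:
    """Illustrative ticket record mirroring the required fields; not a mandated schema."""
    title: str                      # clear, descriptive summary
    description: str                # detailed explanation
    acceptance_criteria: list[str]  # testable conditions
    constraints: list[str]          # technical or business limitations
    dependencies: list[str]         # other tickets or external factors
    priority: int                   # business importance ranking (1 = highest)
    estimate: str                   # effort range, e.g. "2-4 days"


@dataclass
class FeatureTicket(Ticket):
    """Feature tickets carry the additional fields listed above."""
    user_story: str = ""            # user story or job-to-be-done framing
    edge_cases: list[str] = field(default_factory=list)   # known boundary conditions
    error_handling: str = ""        # expected failure behaviour
    security_considerations: str = ""
    compliance_notes: str = ""      # regulatory implications
```

Modelling tickets as typed records makes the quality gate below mechanically checkable: a ticket missing a required field cannot even be constructed.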
Quality Gate
Tickets must pass review before implementation begins. Acceptance criteria must be testable, constraints explicit, dependencies identified, and security considerations documented. Tickets that fail quality review are returned for refinement.
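The quality gate can be pre-checked automatically before the human review. The helper below is a hypothetical sketch of that pre-check; the dictionary keys are assumptions, and it supplements rather than replaces the human reviewer.

```python
def ticket_passes_quality_gate(ticket: dict) -> tuple[bool, list[str]]:
    """Return (passed, failure_reasons) for the gate criteria above.

    Illustrative only: a real gate would also judge whether the
    acceptance criteria are genuinely testable, which needs a human.
    """
    failures: list[str] = []
    if not ticket.get("acceptance_criteria"):
        failures.append("acceptance criteria missing")
    if "constraints" not in ticket:
        failures.append("constraints not made explicit")
    if "dependencies" not in ticket:
        failures.append("dependencies not identified")
    if not ticket.get("security_considerations"):
        failures.append("security considerations not documented")
    return (len(failures) == 0, failures)
```

A failing result routes the ticket back for refinement with the specific gaps listed, matching the "returned for refinement" path described above.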
AI Agent Usage by Phase
Discovery Phase
AI agents assist with documentation analysis, codebase assessment, requirement extraction, and risk identification.
Human accountability: All AI outputs are reviewed by a senior team member before inclusion.
Specification Phase
AI agents assist with drafting acceptance criteria, identifying edge cases, generating test outlines, and consistency checking.
Human accountability: Specifications are authored by humans; AI suggestions are reviewed and edited.
Implementation Phase
AI agents operate as junior developers (implementing well-specified tasks), senior developers (code review, refactoring), and pair programmers (real-time assistance).
Human accountability: All AI-generated code is reviewed before merge, with a named reviewer on every PR.
Testing Phase
AI agents assist with test case generation, edge case tests, test data generation, and coverage analysis.
Human accountability: Test strategy is defined by humans; AI-generated tests are reviewed for correctness.
Human Review Checkpoints
The following require explicit human approval before proceeding:
| Checkpoint | Reviewer | Criteria |
|---|---|---|
| Specification approval | Product owner + Tech lead | Complete, testable, constraints documented |
| Architecture decisions | Lead architect | Aligned with principles, risks assessed |
| Code merge | Senior developer | Correct, maintainable, tested, secure |
| Security-sensitive changes | Security reviewer | No vulnerabilities introduced |
| Database migrations | DBA or senior engineer | Reversible, performant, data-safe |
| Release approval | Release manager | All gates passed, rollback plan ready |
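The checkpoint table can be encoded as a lookup so that tooling can route approval requests to the right role. The mapping below simply restates the table; the role identifiers are illustrative assumptions.

```python
# Checkpoint -> required reviewer role(s), restating the table above.
REVIEW_CHECKPOINTS: dict[str, list[str]] = {
    "specification_approval": ["product_owner", "tech_lead"],
    "architecture_decision": ["lead_architect"],
    "code_merge": ["senior_developer"],
    "security_sensitive_change": ["security_reviewer"],
    "database_migration": ["dba_or_senior_engineer"],
    "release_approval": ["release_manager"],
}


def required_reviewers(checkpoint: str) -> list[str]:
    """Return who must explicitly approve before this checkpoint may proceed."""
    try:
        return REVIEW_CHECKPOINTS[checkpoint]
    except KeyError:
        # Unknown checkpoints fail closed: no silent default reviewer.
        raise ValueError(f"unknown checkpoint: {checkpoint}")
```

Failing closed on unknown checkpoints reflects the intent of the table: nothing proceeds without a named human approver.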
Quality Assurance
Testing Pyramid
Unit Tests (Base)
80%+ coverage for business logic. Run on every commit.
Integration Tests (Middle)
API contracts, database integration. Run on every PR.
End-to-End Tests (Top)
Critical user journeys, regression suite. Run before release.
Automated Quality Gates
- Unit test pass rate: 100%
- Integration test pass rate: 100%
- Code coverage threshold met
- Static analysis: No critical issues
- Security scanning: No high/critical vulnerabilities
- Dependency audit: No known vulnerabilities
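As a sketch, the gates above can be evaluated together in CI so the build fails with every unmet gate named. The metric keys and the 80% coverage threshold (taken from the testing pyramid above) are assumptions about how a pipeline might report them.

```python
def evaluate_quality_gates(metrics: dict) -> list[str]:
    """Return the names of failed gates; an empty list means all gates passed.

    Illustrative sketch: metric keys are assumed, not a real CI schema.
    """
    COVERAGE_THRESHOLD = 0.80  # matches the unit-test coverage target above
    gates = [
        ("unit tests", metrics["unit_pass_rate"] == 1.0),
        ("integration tests", metrics["integration_pass_rate"] == 1.0),
        ("coverage", metrics["coverage"] >= COVERAGE_THRESHOLD),
        ("static analysis", metrics["critical_static_issues"] == 0),
        ("security scan", metrics["high_or_critical_vulns"] == 0),
        ("dependency audit", metrics["known_vulnerable_deps"] == 0),
    ]
    return [name for name, passed in gates if not passed]
```

Reporting all failed gates at once, rather than stopping at the first, gives the team the full picture in a single CI run.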
Want the full methodology documentation?
We provide complete process documentation as part of every engagement.