Technology Choices for AI Development
Our technology stack is deliberately chosen to maximise the effectiveness of AI-assisted development. These aren't arbitrary preferences—they're strategic decisions that compound the productivity gains from LLM tooling.
Why Python for AI-Assisted Development
Python isn't just popular—it's structurally better suited for LLM-assisted development than alternatives like C# or Ruby. Here's why:
Natural Language Proximity
Python's syntax closely resembles natural language with keywords like in, is, not, and, or.
When an LLM sees if x in collection: versus C#'s if (collection.Contains(x)), the Python version maps more directly to how the problem would be described in English.
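The same readability holds across Python's other keyword operators. A minimal sketch (the names here are illustrative, not from any real codebase):

```python
# Python's keyword operators read close to plain English:
# "allowed users are those who are admins and are not suspended".
admins = {"alice", "bo"}
suspended = {"dave"}
active_users = ["alice", "carol", "dave"]

allowed = [u for u in active_users if u in admins and u not in suspended]
print(allowed)  # ["alice"]
```

The list comprehension maps almost word for word onto the English sentence describing it, which is exactly the property that helps an LLM translate a requirement into code.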
Reduced Syntactic Noise
Python's lack of mandatory type declarations, braces, and semicolons means less syntactic noise for an LLM to generate correctly.
C# requires precise placement of curly braces, semicolons, access modifiers, and type annotations—more opportunities for subtle errors that break compilation.
Smaller Context Window Footprint
Python code is typically 30-50% more concise than equivalent C# code.
This means an LLM can fit more of the codebase into its context window—critical for understanding relationships between files or making cross-cutting changes.
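As a rough illustration of that conciseness (a sketch with an invented class name): a value type that needs a constructor, properties, and equality overrides in pre-records C# is a few lines of Python.

```python
from dataclasses import dataclass


@dataclass
class Invoice:
    # The constructor, __eq__, and __repr__ are generated automatically;
    # the equivalent C# class would spell each of these out.
    number: str
    total: float


a = Invoice("INV-1", 99.0)
b = Invoice("INV-1", 99.0)
assert a == b  # structural equality for free
```

Fewer generated tokens per concept means more of the surrounding codebase fits alongside it in the context window.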
Training Data Dominance
LLMs have seen orders of magnitude more Python code across more domains than any other language.
This statistical advantage means the LLM has encountered similar patterns before—web development, data science, automation, AI/ML, scripting—and can generate contextually appropriate code.
One Obvious Way
Python's philosophy of "there should be one—and preferably only one—obvious way to do it" means LLMs encounter more consistent patterns across codebases. This predictability reduces the search space for correct solutions.
Python vs C# vs Ruby for LLM Work
A fair comparison of the three languages most commonly considered for enterprise web development:
| Factor | Python | Ruby | C# |
|---|---|---|---|
| LLM Training Data Volume | Excellent | Good | Moderate |
| Syntax Simplicity | Excellent | Good | Verbose |
| Pattern Consistency | High | Variable | Variable |
| Context Window Efficiency | Excellent | Good | Poor |
| Type Safety (Error Prevention) | Optional | None | Strong |
Python: Best Overall
The combination of massive training data volume, syntactic simplicity, and domain breadth is unbeatable. LLMs working with Python benefit from exposure across web, data science, automation, and AI/ML.
Ruby: Close Second
Ruby's expressiveness and "everything is an object" consistency create fewer edge cases. Convention-over-configuration (especially in Rails) helps LLMs leverage learned patterns.
C#: Distant Third
While C#'s type system provides guardrails, the cognitive overhead is significant. Mixing OOP patterns, LINQ, async/await, and DI frameworks requires maintaining multiple philosophies simultaneously.
The Honest Caveat
For maintaining a large existing enterprise codebase with complex business logic, C#'s type system might flip this ranking—compile-time checks catch LLM errors that would be runtime failures in Python. But for new development where AI acceleration matters most, Python's advantages dominate.
Django: Batteries Included, Decisions Made
We favour "batteries included" frameworks where sensible defaults eliminate decision fatigue. Django exemplifies this philosophy—and it compounds the benefits of AI-assisted development.
Convention Over Configuration
- ✓ ORM with migrations built-in
- ✓ Authentication system ready to use
- ✓ Admin interface auto-generated
- ✓ Form handling and validation included
- ✓ Security middleware by default
- ✓ Templating engine built-in
Why This Helps LLMs
- ✓ Consistent project structure across codebases
- ✓ Standard patterns LLMs have seen thousands of times
- ✓ Less custom code = fewer surprises
- ✓ Documentation and examples abundant
- ✓ Fewer integration decisions to get wrong
- ✓ Mature ecosystem with proven solutions
The Pragmatic Choice
Every decision you don't have to make is a decision you can't get wrong. Django makes hundreds of small decisions for you—database abstraction, session handling, CSRF protection, password hashing—and makes them well.
When an LLM generates Django code, it's generating code that follows patterns established over 18 years and battle-tested by millions of applications. That's not boring—that's smart.
The Pragmatic Monolith
Microservices have their place. For most applications, that place is "later, if ever." We build monoliths first—not because we don't understand distributed systems, but because we do.
Why Monoliths Win for AI-Accelerated Development
Single Codebase, Full Context
An LLM can understand your entire application. With microservices, context is fragmented across repositories, and the AI can't see the relationships between services.
Simpler Debugging
When something breaks, there's one place to look. No distributed tracing, no "which service owns this?" conversations, no network partitions to diagnose.
Faster Iteration
Change your data model and update every affected view in one commit. No API versioning, no backwards compatibility gymnastics, no coordinated deployments.
Lower Operational Burden
One deployment pipeline. One monitoring dashboard. One database to back up. One thing to scale. The operational simplicity compounds over time.
Team Efficiency
Small teams thrive with monoliths. You don't need a platform team, an infrastructure team, and service owners. You need developers who ship features.
Refactoring Freedom
Want to restructure your domain? Do it. Move code between modules without worrying about API contracts, service boundaries, or breaking other teams.
The Monolith Scales
Shopify runs on a monolith. GitHub ran on a monolith for years. Basecamp still does. These are not small applications.
A well-structured monolith can handle enormous scale. Premature decomposition into microservices usually creates problems, not solves them.
Modular Monolith = Future Options
Build with clear module boundaries inside your monolith. If you genuinely need to extract a service later, you can.
But you probably won't need to. And if you do, you'll know exactly where to cut because you understand your domain—not because you guessed upfront.
When Microservices Make Sense
We're not dogmatic. Microservices are appropriate when:
- You have genuinely independent scaling requirements (not "might need to scale differently someday")
- You have multiple large teams that need to deploy independently
- You have regulatory requirements for isolation (payment processing, healthcare data)
- You're integrating systems written in different languages that can't share a runtime
If none of these apply, start with a monolith. You can always evolve later with better information.
Infrastructure: Terraform on Google Cloud
Our default infrastructure stack is Terraform deploying to Google Cloud Platform. This isn't the only option—but it's a very good default with few downsides.
Why Terraform
- ✓ Infrastructure as Code: Version controlled, reviewable, repeatable
- ✓ LLM-Friendly: Declarative HCL syntax that AI understands well
- ✓ Cloud Agnostic: Same patterns work across GCP, AWS, Azure
- ✓ State Management: Track what exists, plan changes safely
- ✓ Mature Ecosystem: Modules for common patterns readily available
Why Google Cloud
- ✓ Cloud Run: Container deployment without Kubernetes complexity
- ✓ Cloud SQL: Managed PostgreSQL with automatic backups
- ✓ Simpler IAM: More intuitive than AWS's permission model
- ✓ Competitive Pricing: Often cheaper than AWS for similar workloads
- ✓ Excellent AI/ML: Vertex AI, BigQuery ML for future expansion
The Complete Stack
- Language: Python 3.11+
- Framework: Django / FastAPI
- Infrastructure: Terraform + GCP
- Database: PostgreSQL
Boring technology, chosen deliberately. Every component is well-understood, well-documented, and well-supported by LLMs.
Ready to build with a pragmatic stack?
We'll help you ship faster with technology choices that maximise AI-assisted development effectiveness.