Shift-left with AI

An ocean with hues of green and blue, symbolizing the shift-left of shadow towards blue. From Maldives.

The last decade of software engineering optimised for clean code, maintainable code, human-readable formats, faster debugging, better reviews, quicker onboarding. We built linters, formatters, style guides, PR templates. We argued about tabs versus spaces. We weren't optimising for machines... We were optimising for each other.

Think about every principle we held sacred... test-driven development, single responsibility, separation of concerns, meaningful variable names... all existed because another human was going to read the code. Maintain the code. Extend the code. Curse at the code at 2am during an incident. The entire craft of modern software engineering shaped itself around one constraint: someone else has to deal with the mess later.

Now agents dissolve the constraint. And shift-left... design upfront, think about testability, think about architecture... changes in ways we're not fully ready to talk about. We're entering an era where non-negotiable best practice starts to look like... overhead. Not because the practice was wrong. Because the practice solved for a bottleneck that's now moving.

History backs the pattern up. The printing industry optimised for typesetters. Desktop publishing killed the craft overnight. Journalists optimised for editors and newsroom workflows. Substack showed up and moved the bottleneck from production to distribution. Humanity has always built elaborate systems of practice around the current constraint, then adapted when the constraint shifted. AI does the same to software. Not destroying practices... questioning who the practices served.

High-level languages, readable syntax, expressive frameworks... all existed because humans needed to maintain things over time. Maintainability was the real product. Not the software. The ability to hand a codebase to the next developer without losing six weeks of context transfer.

Agents change the entire equation. An agent doesn't care about non-minified JavaScript. An agent doesn't need pretty code. An agent can bundle everything into a binary. An agent can refactor, rewrite, or restructure a whole codebase faster than you can open a pull request. Code becomes disposable... not because code lacks quality, but because regeneration cost drops to near zero.

What an agent will produce, though, is a human compatibility layer. Think of a declarative audit: what the agent did, what the agent plans to do, and a brief of how. The "how" used to live in programming languages. Expressed in Python, in TypeScript, in Go. From here on out, the "how" lives in English. Or whatever language you think in.
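One way to picture that compatibility layer is as a structured audit record whose fields are plain English rather than code. The shape below is purely illustrative... a sketch under assumptions, not any real agent's API; every name here is hypothetical:

```python
from dataclasses import dataclass


@dataclass
class AuditEntry:
    """One hypothetical entry in an agent's human compatibility layer.

    Every field is prose: the 'how' lives in English, not in a
    programming language.
    """
    did: str    # what the agent already did
    plans: str  # what the agent intends to do next
    how: str    # a brief of the approach

    def render(self) -> str:
        # The human-facing view: declarative prose, no syntax to parse.
        return f"Done: {self.did}\nNext: {self.plans}\nHow: {self.how}"


entry = AuditEntry(
    did="Moved payment retries to exponential backoff",
    plans="Add a dead-letter queue for retries that exhaust the budget",
    how="Wrap the existing retry loop; no schema changes needed",
)
print(entry.render())
```

The point isn't the dataclass... it's that the interface a human reviews is three sentences, not three hundred lines of diff.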

The compatibility layer becomes the new interface between humans and software systems. No more weird specialised syntax only a select group called programmers can parse and interact with. And fundamentally, a shift-left problem of a completely different kind. We're not shifting testing left. We're shifting understanding left. The barrier to participating in software creation drops from "can you code" to "can you describe what you want and verify what you got."

No more estimation meetings arguing about story points. Ditch those Jira ticket fields. No more decomposing work into two-week sprints so developers can context-switch between three projects. No more manually thinking through test cases, edge cases, error handling matrices. These rituals aren't disappearing overnight, but clinging to them as identity will hurt.


So what remains?

Trust but verify. The principle has always existed in engineering... implemented through code review, QA teams, integration tests in CI, staging environments, canary deploys. Agents make the principle more important, not less. But the implementation of the trust layer changes entirely. You're no longer reviewing lines of code to verify intent. You're reviewing outcomes against intent. You're building verification systems checking whether the agent did what the agent said, and whether what the agent said was the right thing to do. A fundamentally different skill. Less "does the function handle null inputs" and more "does the system behave the way the business needs." Verification moves up the stack... from syntax to semantics, from implementation to intent.
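A minimal sketch of what outcome-level verification could look like, assuming intent and observed behaviour can both be expressed as named outcomes (all names here are invented for illustration):

```python
# Instead of reviewing the diff line by line, assert the behaviour
# the business asked for against what the system actually does.

def verify_outcome(intent: dict, observed: dict) -> list[str]:
    """Compare stated intent against observed behaviour.

    Returns human-readable failures; an empty list means
    the system behaves the way the intent says it should.
    """
    failures = []
    for outcome, expected in intent.items():
        actual = observed.get(outcome)
        if actual != expected:
            failures.append(
                f"{outcome}: expected {expected!r}, observed {actual!r}"
            )
    return failures


# Intent, stated as outcomes, not implementation details.
intent = {
    "checkout_succeeds_with_expired_card": False,
    "refund_issued_within_minutes": 5,
}

# Observed behaviour, e.g. pulled from a staging run or canary metrics.
observed = {
    "checkout_succeeds_with_expired_card": False,
    "refund_issued_within_minutes": 7,
}

for failure in verify_outcome(intent, observed):
    print(failure)
```

Note what's absent: no inspection of how the agent implemented anything. The check lives entirely at the level of semantics and intent.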

Outcome over output. A cliché in product management for years, now becoming the only thing engineering cares about too. Nobody will care how many lines of code shipped, how elegant the architecture looks, how clever the abstraction feels. The question becomes purely: did the solution work? Did the solution solve the problem? Did the user get what the user needed? Code, infrastructure, pipelines... all become implementation details the agent manages. Humans own the outcome. Defining the outcome. Measuring the outcome. Knowing when the outcome has been achieved and when it hasn't. A harder skill than most engineers realise, because we've spent a decade hiding behind the complexity of how as a proxy for value.

Hopefully product managers and engineers stop fighting with each other. Too wishful? Haha!

Taste and judgement. The one nobody talks about enough. When the cost of building drops to near zero, the bottleneck moves to knowing what to build. And more importantly, knowing what not to build. Agents can generate anything. Human value becomes curation, direction, and the wisdom to say "wrong problem." Not a technical skill. Taste. And the thing separating people who use agents well from people generating a lot of noise very quickly.

We've seen this elsewhere too. Take the printing industry: there's a surplus of books being written, and the internet is full of random thought pieces. Of course, you're reading one. ;)

System thinking over system building. When you're not writing the code, your job shifts to understanding how systems interact, how systems fail, what the second and third-order effects of a change are. You stop being the bricklayer and start being the architect... not the old "solution architect" drawing diagrams nobody read, but the real kind understanding load-bearing walls. What can move. What can't. What breaks when you touch something three layers away. Agents build fast. Agents also break fast. The human in the loop needs enough system understanding to know when speed becomes a liability.

It's interesting how far back the philosophy goes. Einstein said as much, and I'm sure there are quotes from the pre-Socratic era along the same lines.

If you can't explain it simply, you don't understand it well enough. - Albert Einstein

Context as the new moat. Agents are general. Agents write code in any language, for any framework, for any domain. What agents can't do... yet... is hold the full context of your business, your users, your constraints, your history of what was tried and why the attempt failed. The person holding the context and translating the context into agent-legible direction becomes the most valuable person in the room. Not because the person can code. Because the person knows why.

The uncomfortable truth: a lot of what made someone a "good engineer" in the last decade... clean code, thoughtful abstractions, deep language expertise, fast debugging... responded to a specific set of constraints. The constraints are shifting. Skills built on top don't become worthless, but do become table stakes at best and irrelevant at worst.

What replaces them isn't nothing. A different set of skills we don't have good names for yet. The ability to specify intent precisely. The ability to verify outcomes at a systems level. The ability to hold context and make judgment calls. The ability to know when the agent is confidently wrong.

History is full of such replacements across industries... software engineering isn't special, and it isn't exempt.

We spent a decade making software engineering a craft optimised for human-to-human collaboration. The next decade optimises for human-to-agent collaboration. And the interface for human-to-agent collaboration isn't a programming language. The interface is clarity of thought.

The real shift left.