When Simulation Runs Faster Than Understanding
From Clay to Code (Part 3)

The automotive industry has undergone one of the most profound development shifts in its history.
Simulation is no longer a support activity. It is a primary validation engine. Crash, ADAS, powertrain, calibration, emissions, and software verification are now explored virtually at a scale that replaces large portions of physical development. Hardware testing increasingly exists for correlation and final proof.
This is not incremental improvement. It is a structural change.
And it is working.
Yet despite this progress, familiar problems persist. Programmes still slip. Integration remains painful. Quality risks are often discovered later than anyone would like.
This raises an uncomfortable question.
If simulation has accelerated so dramatically, why have these issues not disappeared?
Validation has scaled. Understanding has not.
Simulation is extremely good at one thing: validating behaviour against defined conditions.
What it cannot do is resolve ambiguity in the inputs it receives.
A vague requirement is not corrected by simulation.
A conflicting requirement is not reconciled.
A missing assumption is not inferred.
Instead, ambiguity is executed. Repeatedly. Across variants, suppliers, and software releases.
In effect, we have industrialised validation faster than we have industrialised understanding.
The hidden gap in software-defined vehicles
Software-defined vehicles have changed the economics of mistakes.
A weak requirement is no longer local. It propagates across platforms, contracts, test cases, documentation, and regulatory artefacts. By the time it is detected, it is embedded everywhere.
At that point, the cost is no longer just rework. It is delay, negotiation, loss of trust, and accumulated programme risk.
Many organisations still rely on manual reviews and individual expertise to manage this. That approach does not scale.
Requirements are often treated as static inputs rather than as living system constraints that must be continuously examined, challenged, and refined.
Experience is leaving. Complexity is compounding.
There is a second force at play.
Senior engineers and system thinkers carry deep, tacit knowledge. They recognise fragile requirements early. They understand where specifications tend to break under integration pressure.
That knowledge is rarely digitised or systematised.
As experienced people leave the industry, requirement volumes continue to grow across functional, safety, cybersecurity, regulatory, and lifecycle domains. The gap between system complexity and available experience is widening.
Simulation does not close that gap on its own.
Detection before correction
The organisations making the most progress are not stepping back from simulation. They are stepping further upstream.
They are focusing on detecting issues earlier.
Detecting ambiguity before code is written.
Detecting conflicts before suppliers diverge.
Detecting gaps before validation becomes negotiation.
This is not about perfection. It is about visibility.
You cannot resolve what you cannot clearly see. And today, many of the most expensive problems in automotive programmes are visible far earlier than we act on them.
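One way this kind of upstream detection is made visible in practice is automated "weak word" screening of requirement text, a simple lexical technique long used in requirements engineering. The sketch below is illustrative only, not a reference to any specific tool; the word list and requirement strings are invented for the example, and a real pipeline would go well beyond substring lexicons.

```python
import re

# Words and phrases that often signal an ambiguous, untestable requirement.
# This list is an illustrative assumption, not an industry standard.
WEAK_WORDS = {
    "appropriate", "adequate", "fast", "user-friendly", "as needed",
    "should", "may", "robust", "timely", "sufficient",
}

def flag_ambiguous(requirements):
    """Return (requirement, matched weak words) pairs for human review."""
    findings = []
    for req in requirements:
        lowered = req.lower()
        # Word-boundary match so "may" does not fire on e.g. "dismay".
        hits = sorted(
            w for w in WEAK_WORDS
            if re.search(rf"\b{re.escape(w)}\b", lowered)
        )
        if hits:
            findings.append((req, hits))
    return findings

reqs = [
    "The braking controller shall respond within 50 ms.",
    "The HMI should provide adequate feedback to the driver.",
]
for req, hits in flag_ambiguous(reqs):
    print(f"REVIEW: {req!r} -> weak words: {hits}")
```

The point of such a check is not that it resolves ambiguity, but that it surfaces candidates for review while correction is still cheap, before the requirement has propagated into contracts, test cases, and supplier deliverables.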
A question for leaders
Simulation tells us whether a system behaves correctly.
But who, or what, is systematically checking whether we are building the right thing in the first place?
Not at the end of the V-model.
Not during audits.
But at the point where ambiguity is still cheap.
If simulation now runs faster than understanding, where should leadership rebalance attention to prevent uncertainty from scaling with it?
