This is part two of a two-part series produced by Adam Blomerley in collaboration with Ataccama. Part one quantified the cost of untrustworthy engineering data; this piece sets out what to do about it.
In part one we traced the financial and operational impact of the hidden factory — the quiet drain that poor engineering data creates across every phase of the product lifecycle. The next question is the one that matters to programme directors: how do you actually stop it? What follows is the framework QR_ has sharpened on complex engineering programmes over two decades, rendered as a short playbook any leadership team can put to work.
First time right: a framework for engineering data you can trust
Data quality doesn't need to feel overwhelming. The organisations that get it right consistently pull on four levers — all of them practical, all of them within reach of any programme with the appetite to act.
- Standard language — Use ISO 8000 as the shared vocabulary. Engineers, stewards, and executives should be describing data quality the same way — not inventing local terms that break down at the boundary between teams.
- Ownership — Stand up a Data Governance Council, ideally chaired by the Chief Engineer. Empower data stewards inside every domain, and give each one a personal data quality score so accountability is visible, not theoretical.
- Process gates — Treat data quality the way you already treat safety. If the KPIs don't clear the bar, sign-off doesn't happen. No exceptions, no workarounds.
- Technology enablers — Combine QR_'s process discipline with platforms like Ataccama ONE to apply rule-based profiling to every record in real time. Catch issues where they originate, not three systems downstream where fixing them costs ten times as much.
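To make the fourth lever concrete, rule-based profiling is just a set of named checks applied to every record as it arrives. The sketch below is illustrative only: the rule names, field names, and record shape are assumptions for this example, not Ataccama ONE's API or any particular PLM schema.

```python
# A minimal sketch of rule-based profiling over engineering records.
# Rules, field names, and thresholds here are illustrative assumptions,
# not the API of any specific platform.

RULES = {
    "part_number_present": lambda r: bool(r.get("part_number")),
    "mass_is_positive": lambda r: isinstance(r.get("mass_kg"), (int, float)) and r["mass_kg"] > 0,
    "supplier_code_format": lambda r: str(r.get("supplier_code", "")).startswith("SUP-"),
}

def profile(record: dict) -> list[str]:
    """Return the names of every rule this record fails."""
    return [name for name, rule in RULES.items() if not rule(record)]

record = {"part_number": "PN-1001", "mass_kg": -2.5, "supplier_code": "ACME"}
print(profile(record))  # flags the mass and supplier-code rules
```

Because each rule is named, a failure points straight at the defect and the record that caused it, which is what makes catching issues at the point of origin cheap.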
None of this is novel in isolation. What makes the difference is applying all four together, with enough authority behind the rollout that the organisation believes it.
The programme director's playbook
Bringing engineering data quality into the heart of a programme is a matter of five concrete moves:
1. Declare the aim — Commit publicly to zero-defect data, first time right. Ambiguity here gives every other priority permission to outrank data quality.
2. Visualise performance — Put a Data Quality Scorecard on the same wall as your cost, schedule, and risk dashboards. If it isn't reported alongside them, it isn't managed like them.
3. Dedicate data stewards — Name the individuals who are accountable for data quality in each domain — and leave them in post. Rotating responsibility guarantees rotating results.
4. Root-cause like a defect — Apply the same problem-solving methods your engineers already trust — 5 Whys, A3s — to contain, eradicate, and verify data issues. The discipline travels cleanly from metal to data.
5. Shorten the feedback loops — Automate alerts so stewards can act on a failing record in minutes, not at the next weekly review.
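The fifth move, shortening the feedback loop, amounts to running the checks at the moment a record is saved and notifying the steward immediately rather than batching issues for a weekly review. The sketch below assumes a simple save hook and a generic `notify` callback; the check logic and field names are illustrative, and in practice the rules would come from the profiling platform.

```python
# Illustrative sketch: alert the domain steward the moment a record fails a rule.
# The checks and field names are assumptions for this example.

def check_record(record: dict) -> list[str]:
    """Return human-readable issues; real rules would come from the platform."""
    issues = []
    if not record.get("part_number"):
        issues.append("missing part number")
    if record.get("unit_cost", 0) <= 0:
        issues.append("non-positive unit cost")
    return issues

def on_record_saved(record: dict, notify) -> None:
    """Hypothetical save hook: fire an alert only when a record fails."""
    issues = check_record(record)
    if issues:
        part = record.get("part_number") or "<unknown>"
        notify(f"[DQ ALERT] {part}: {', '.join(issues)}")

alerts = []
on_record_saved({"part_number": "", "unit_cost": 12.0}, alerts.append)
# one alert is raised, naming the missing part number
```

The design point is that the alert carries the specific failing rule, so the steward's first action is a fix, not an investigation.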
Early wins from this sequence can be dramatic: duplicates collapse, phantom inventory deflates, and purchase orders that would have shipped the wrong part simply don't get raised.
Governance that sticks — and doesn't feel like extra work
Good governance isn't a separate overhead. It's a small set of visible KPIs embedded inside the processes teams already run. The turning point is authority: when data stewards can halt a release the way a quality manager can stop the line, the organisation learns quickly which signals to respect. And when the wins come — 99% BOM completeness, a duplicate part rate cut in half — they deserve more recognition than another PowerPoint slide.
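The two KPIs named above, BOM completeness and duplicate part rate, are simple enough to compute directly. A minimal sketch, assuming a BOM represented as a list of dicts with a `part_number` field; the field names and required-column choices are illustrative:

```python
from collections import Counter

# Minimal sketch of the two scorecard KPIs mentioned above.
# The BOM structure and field names are assumptions for this example.

def scorecard(bom: list[dict], required: tuple[str, ...]) -> dict:
    """Compute BOM completeness and duplicate part rate as percentages."""
    complete = sum(
        all(row.get(field) not in (None, "") for field in required)
        for row in bom
    )
    counts = Counter(row.get("part_number") for row in bom)
    duplicates = sum(c - 1 for c in counts.values() if c > 1)
    return {
        "bom_completeness_pct": round(100 * complete / len(bom), 1),
        "duplicate_part_rate_pct": round(100 * duplicates / len(bom), 1),
    }
```

Trending these two numbers on the same board as cost and schedule is what turns governance from a policy document into a signal the programme actually watches.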
Real value, real savings
The case for acting isn't theoretical. A handful of recent examples that QR_ and Ataccama have seen first-hand:
- Global automotive OEM — validated every BOM line before each prototype build, cutting material spend by 20% and saving $21 million on a single programme.
- Astec's Roadtec division — scrubbed the BOM, removed over a thousand unnecessary parts, and reduced manufacturing costs by 21.4%.
- A mid-size manufacturer — traced a $100,000 inventory write-down back to BOM errors — and closed the gap with targeted data-quality work.
Platforms like Ataccama ONE automate the checks, streamline governance, and keep audit readiness continuous — which frees engineering teams to spend their time on the higher-value work they were hired to do.
Overcoming the common objections
Three pushbacks come up in almost every conversation, and none of them hold up. If the PLM is legacy, that's fine — data rules sit above the system, so QR_ and Ataccama can profile, clean, and govern data from any source, new or old. If the teams look overloaded, eliminating the hidden factory is precisely what frees their time. And if the savings feel soft on a spreadsheet, ask the plant manager how it feels to stop a launch because of bad data.
Leadership behaviours that build a lasting data culture
Real change holds when leaders do three things, consistently:
1. Put data quality KPIs next to cost and schedule — at every programme review. Same board, same weight.
2. Celebrate zero-defect data — as enthusiastically as zero-defect builds. Data quality is a craft skill, and the people who practise it should be recognised.
3. Allocate around 5% of engineering resources to stewardship and data analysis — so the capability scales with programme complexity rather than lagging it.
The payoff is quiet but significant: engineers get time back to innovate, operators trust the instructions on the screen in front of them, and programme directors sleep a little better the week before launch.
A cultural payoff: innovation, trust, and peace of mind
When trusted data becomes the norm, everything downstream gets easier. Teams stop chasing errors and start building the future. QR_ and Ataccama help organisations make the shift from spreadsheet firefighting to decisions made confidently on data that is already known to be good — which, in the end, is the only version of data-driven engineering that ever actually delivers.
