
Assurance for the assurance tool.
Tomco built the safety and AI-governance case for an AI assurance tooling startup — then deployed the same tool across the client's downstream robotics customers' safety programs. One shared evidence spine across the platform and the field.
How the engagement ran.
The client builds AI assurance tooling that other safety-critical programs use to evidence their ML lifecycle, dataset coverage, and runtime monitoring. That places the tool itself inside the regulated chain — and the client needed a defensible argument that their platform was fit to be cited as evidence in a customer's signed safety case.
Tomco stood up an ISO/IEC 42001 AI management system, mapped controls to ISO/IEC 23894 risk and the NIST AI RMF, and built a UL 4600-style assurance argument explicitly addressing tool qualification — what the tool guarantees, where its limits sit, and how a downstream AFSP can rely on its outputs. AFSPs co-signed the tool-qualification dossier; agents keep the dataset, model, and conformity evidence current across every release.
The engagement didn't stop at the tool. Tomco then embedded with the client's downstream robotics customers — agricultural autonomy, mobile manipulation, last-mile delivery, warehouse picking, sidewalk robotics, humanoid teleop — and ran the same assurance evidence through their own safety cases. Eight robotics programs, one shared evidence spine, one set of AFSPs signing across the portfolio.
The case is now the client's commercial differentiator: the only AI assurance tool on the market shipping with its own AFSP-signed qualification dossier, EU AI Act conformity evidence, and a real-world portfolio of customer programs already using it under signature.
Who signed it.
Names withheld by policy. Credentials and program references verifiable on request under NDA.
The regime, line by line.
Where the tool went next.
Beyond the platform engagement, Tomco embedded with the client's downstream robotics customers and ran the same assurance evidence through their own safety cases. Codenames below; client names on request under NDA.
One signed thread, end to end.
- 01 AI MS scope — ISO/IEC 42001 statement of applicability
- 02 Risk register — ISO/IEC 23894 + NIST AI RMF mapped controls
- 03 Tool-qualification dossier — guarantees, limits, downstream reliance
- 04 EU AI Act technical file — conformity evidence ready to embed
- 05 Per-release evidence — datasets, models, eval results immutably stored
- 06 Signed release — AFSP co-signature on every qualification revision
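One common way to make per-release evidence tamper-evident, as the "immutably stored" and "co-signature" steps above imply, is a content-addressed release manifest: each artifact is hashed, and the manifest digest is what a signer co-signs. The sketch below is illustrative only — the function and artifact names are hypothetical, not the client's actual pipeline.

```python
import hashlib
import json


def sha256_hex(data: bytes) -> str:
    """Content-address a blob so any later change is detectable."""
    return hashlib.sha256(data).hexdigest()


def build_release_manifest(artifacts: dict, release: str) -> dict:
    """Hash every evidence artifact (datasets, models, eval results)
    and derive a single manifest digest a signer can co-sign."""
    entries = {name: sha256_hex(blob) for name, blob in artifacts.items()}
    # Canonical JSON (sorted keys) makes the digest deterministic.
    digest = sha256_hex(json.dumps(entries, sort_keys=True).encode())
    return {"release": release, "artifacts": entries, "digest": digest}


# Hypothetical release: two evidence artifacts for version v1.2.0.
manifest = build_release_manifest(
    {"dataset.csv": b"row1,row2", "eval_results.json": b"{}"},
    "v1.2.0",
)
```

Under this scheme, re-running any release with unchanged artifacts reproduces the same digest, while a single changed byte in any artifact changes it — which is what lets a revision-level signature stand in for signing every file individually.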
Want this for your program?
We embed AFSPs and agents into your safety case the same way we did on this engagement. Client references available under mutual NDA.