ATIS extracts complete tire specifications using dual AI vision models that cross-check each other. Works offline. No cloud required. Published methodology.
DOI: 10.5281/zenodo.19515682 | Selected over Amazon, Microsoft, IBM, SAS, NTT Data, Dell, and Oracle
Our testing shows traditional OCR achieves near-zero accuracy (0-10%) on embossed rubber text. The fundamental issue is training distribution mismatch: OCR engines are trained on ink-on-paper documents, not raised rubber molded into curved sidewalls. Manual inspection takes 15-25 minutes per vehicle and produces frequent transcription errors.
Misread specs mean incorrect coverage terms, disputed claims, and delayed settlements.
Manual tire inventory across hundreds of vehicles. Missed recalls. 50% industry-wide parts overallocation.
Trade-in tire condition undocumented. No spec sheets. Liability exposure on used vehicle sales.
NHTSA recalls go undetected when DOT codes are misread or never checked. Safety-critical gap.
From phone video to structured tire specification with confidence scoring and NHTSA recall cross-reference.
10-30 second video of tire sidewall with any smartphone. Handheld panning footage.
Two AI vision models independently read brand, size, DOT code, load rating, speed rating, and country of manufacture from sampled frames.
Deterministic voting separates high-confidence reads (both models agree, approximately 95% accurate) from uncertain fields flagged for review.
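The voting step can be sketched as a simple field-by-field comparison. This is an illustrative sketch, not the actual ATIS implementation; the function and field names are assumptions. Fields where both models independently return the same value are accepted as high confidence; everything else is flagged for human review.

```python
# Minimal sketch of deterministic field-level consensus voting.
# Names (FIELDS, vote) are illustrative, not the ATIS API.

FIELDS = ["brand", "size", "dot_code", "load_rating", "speed_rating", "country"]

def vote(read_a: dict, read_b: dict) -> dict:
    """Split two independent model reads into accepted vs. review fields."""
    result = {"accepted": {}, "review": {}}
    for field in FIELDS:
        a, b = read_a.get(field), read_b.get(field)
        if a is not None and a == b:
            # Both models agree: high-confidence read (~95% accurate per the paper).
            result["accepted"][field] = a
        else:
            # Disagreement or a missing value: flag for human review.
            result["review"][field] = {"model_a": a, "model_b": b}
    return result
```

In practice values would be normalized (case, whitespace) before comparison so that cosmetic differences between model outputs do not trigger spurious disagreements.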
NHTSA plant code database (2,166 entries) identifies manufacturer from partial DOT codes, even when brand text is unreadable.
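Manufacturer resolution from a partial DOT code can be sketched as a prefix lookup. The plant code is the leading characters after the "DOT" prefix (two characters historically, three on newer tires). The dictionary entries below are deliberately fictitious placeholders; the real system draws on the NHTSA vPIC plant code database (2,166 entries), and all names here are assumptions.

```python
# Hedged sketch of manufacturer resolution from a partial DOT code.
# PLANT_CODES entries are fictitious; real data comes from the NHTSA
# vPIC plant code database.

PLANT_CODES = {
    "XJ": {"manufacturer": "Example Tire Co.", "country": "USA"},
    "4B": {"manufacturer": "Sample Rubber Ltd.", "country": "Japan"},
}

def resolve_manufacturer(partial_dot: str):
    """Return plant info for a possibly partial DOT code, or None."""
    code = partial_dot.upper().replace(" ", "")
    if code.startswith("DOT"):
        code = code[3:]
    # Try 3-character (newer) plant codes first, then legacy 2-character codes.
    for width in (3, 2):
        info = PLANT_CODES.get(code[:width])
        if info:
            return info
    return None
```

Because only the leading plant code is needed, the manufacturer can be recovered even when the rest of the DOT string (date code, size code) is unreadable on the sidewall.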
Structured output ready for claims systems, fleet databases, or compliance records. Full audit trail included.
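The structured record might look roughly like the following. Every field name and value here is an assumption for illustration only, not the actual ATIS output schema.

```python
# Illustrative shape of a structured extraction record (all names assumed).
import json

record = {
    "vehicle_id": "FLEET-0042",           # hypothetical identifier
    "tire": {
        "brand": "Michelin",
        "size": "225/45R17",
        "dot_code": "DOT XJ A1 2319",     # fictitious DOT string
        "load_rating": "91",
        "speed_rating": "V",
        "country": "USA",
    },
    "confidence": {"brand": "high", "size": "high", "dot_code": "review"},
    "nhtsa_recall_match": False,
    "audit": {"frames_sampled": 12, "models": ["model_a", "model_b"]},
}

print(json.dumps(record, indent=2))
```

Because the record is plain JSON, it can be pushed directly into claims systems, fleet databases, or compliance archives without a bespoke integration layer.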
Privacy-sensitive field work runs entirely on-device. Connected environments can use cloud AI for maximum accuracy. Both produce identical structured results.
| | Offline Mode | Cloud-Assisted Mode |
|---|---|---|
| How | Runs on-device (MacBook, edge hardware) | Video sent to cloud AI (Gemini, Claude, GPT-4V) |
| Best for | Field inspections, remote sites, privacy-sensitive | Office environments, maximum accuracy |
| Accuracy | 52-63% overall, ~95% on agreed fields | Higher (larger vision encoders) |
| Privacy | Zero data transmitted externally | Requires data upload to provider |
| Connectivity | None required | Internet required |
| Cost per scan | $0 (local compute) | $0.01-0.05 (API cost) |
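The claim that both modes produce identical structured results can be sketched as a common backend interface: downstream systems call one function and never need to know which mode produced the record. The class and function names below are assumptions, not the ATIS codebase.

```python
# Sketch of a backend-agnostic extraction interface (names assumed).
from typing import Protocol

class TireReader(Protocol):
    def read(self, video_path: str) -> dict: ...

class OfflineReader:
    """On-device dual-VLM pipeline; no data leaves the machine."""
    def read(self, video_path: str) -> dict:
        return {"backend": "offline", "fields": {}}  # placeholder result

class CloudReader:
    """Cloud VLM (e.g. Gemini, Claude, GPT-4V) behind the same interface."""
    def read(self, video_path: str) -> dict:
        return {"backend": "cloud", "fields": {}}  # placeholder result

def extract(reader: TireReader, video_path: str) -> dict:
    record = reader.read(video_path)
    # Both backends must emit the same schema, so claims/fleet systems
    # consume one record format regardless of mode.
    assert set(record) == {"backend", "fields"}
    return record
```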
Lavinda & Meche, April 2026
Offline Tire Specification Extraction from Video Using Dual Vision Language Model Consensus
We demonstrate that traditional OCR categorically fails on embossed rubber text (0-10% accuracy). Our dual vision language model consensus pipeline achieves 52-63% field-level accuracy, rising to approximately 95% on fields where both models independently agree. Integration with the NHTSA vPIC plant code database enables automated manufacturer resolution from partial DOT codes. All processing occurs locally with no data transmitted externally.
DOI: 10.5281/zenodo.19515682

Test ATIS in your environment. We provide the tool; you provide the feedback. Free for qualified partners.
Insurance
One claims team, 50 vehicles, 2 weeks
Fleet
One fleet manager, 100+ vehicles, inventory scan
Dealership
One dealership group, trade-in documentation
Or email directly: olga@healthai.com