Redesigning the maintenance workflow for aircraft technicians — from 14 screens to 5, cutting input errors by 40%.
Aircraft maintenance technicians (AMTs) operate in physically demanding environments: noisy hangars, confined spaces, and variable lighting, all while wearing protective gloves. Their daily tools should help them move fast and stay safe. Instead, they were stuck with a legacy desktop system ported onto tablets, built for office use, not the hangar floor.
The existing MRO (Maintenance, Repair & Overhaul) application forced technicians through 14 sequential screens to complete a single part replacement validation — the most frequent task in their workflow. Touch targets were designed for a mouse cursor. Forms required precise text entry without glove-friendly input methods. Shift handover documentation was fragmented across five separate modules.
My mandate was to lead a full UX redesign of the mobile application, placing field usage patterns at the center of every decision. The constraint was real: the core data model could not change, only the interface layer. The goal was measurable — reduce input errors, reduce screen count, and reach a task completion rate above 90%.
Research was conducted directly in two active maintenance hangars. Observing technicians mid-task — gloves on, tools in hand — revealed usage patterns invisible in any requirements document.
In-hangar sessions across 2 sites (Toulouse & Hamburg). Technicians used the app during real tasks while narrating their experience. Recorded with consent for later affinity mapping.
Identified 8 critical failure points in the existing flow — moments where the interface forced incorrect actions or caused technicians to abort a task and restart from the beginning.
Compared 3 competing MRO tools (Boeing Toolbox, Ramco Aviation, SAP EAM Mobile) on task flow structure, touch interaction patterns, and offline capability design.
Technicians work with gloves throughout their shift. Touch targets below 56px were regularly missed, and text fields requiring keyboard input caused workflow interruption multiple times per hour.
The most frequent task — performed 4 to 12 times per shift — required navigating 14 screens. Technicians had developed workarounds, keeping paper sheets alongside the app.
Input errors were highest during shift handover documentation: technicians were fatigued, under time pressure, and the module required the most complex multi-field input in the entire app.
How might we enable full task completion with gloves on, without any keyboard input for the most frequent workflows?
How might we collapse the part replacement validation flow into a single gesture-based interaction that remains traceable and audit-compliant?
How might we make shift handover documentation the fastest interaction of the day rather than the most error-prone?
Starting from hand sketches, the core concept emerged quickly: eliminate deep navigation hierarchy, surface the most frequent action on every screen, and design every interactive element for a minimum 56px touch target.
High-fidelity screens designed in Figma with a custom design system built around three non-negotiable constraints: glove-friendly touch targets, high contrast for variable hangar lighting, and full offline capability.
All interactive elements respect a minimum of 56px height. Primary CTAs reach 64px to ensure reliable activation through thick work gloves. Tested on-site with six common glove types across both hangar sites.
Dark base theme chosen after observing screens wash out under hangar fluorescent lighting. Accent green (#0EA271) achieves 7.2:1 contrast against the dark background, exceeding the WCAG AA minimum for normal text (4.5:1) and clearing the stricter AAA threshold (7:1).
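Contrast ratios like the one above follow the WCAG 2.x formula: linearize each sRGB channel, compute relative luminance, then divide the lighter luminance (plus 0.05) by the darker. A minimal sketch of that check; the background hex used in any comparison is an assumption, since the case study does not state the theme's exact base color:

```typescript
// Convert an sRGB channel (0-255) to linear light, per WCAG 2.x.
function linearize(channel: number): number {
  const c = channel / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance of a "#RRGGBB" color.
function luminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) => parseInt(hex.slice(i, i + 2), 16));
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// WCAG contrast ratio; always >= 1, maxing out at 21 (white on black).
function contrastRatio(fg: string, bg: string): number {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// e.g. contrastRatio("#0EA271", "#121212") — "#121212" is an illustrative
// dark base, not the shipped theme color.
```

Running checks like this in the design-system pipeline keeps accent colors honest as the palette evolves.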
All critical flows function without network connectivity. Data queues locally and syncs when connection is restored. A persistent status indicator communicates sync state clearly at the top of every screen.
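The queue-then-sync pattern described above can be sketched as follows. Names (`OfflineQueue`, `SyncState`) are illustrative, not the shipped API:

```typescript
type SyncState = "synced" | "pending";

interface QueuedEntry {
  task: string;     // e.g. "part-validation"
  payload: unknown; // form data captured offline
  queuedAt: number; // timestamp, preserves audit ordering
}

class OfflineQueue {
  private entries: QueuedEntry[] = [];

  // Every write lands in the local queue first, regardless of connectivity.
  record(task: string, payload: unknown): void {
    this.entries.push({ task, payload, queuedAt: Date.now() });
  }

  // Drives the persistent status indicator at the top of each screen.
  state(): SyncState {
    return this.entries.length === 0 ? "synced" : "pending";
  }

  // Called when connectivity returns. The transport is injected so the
  // queue stays network-agnostic and testable.
  async flush(send: (entry: QueuedEntry) => Promise<void>): Promise<number> {
    let sent = 0;
    // Drain in capture order so the audit trail matches what happened.
    while (this.entries.length > 0) {
      await send(this.entries[0]);
      this.entries.shift();
      sent++;
    }
    return sent;
  }
}
```

A failed `send` leaves the remaining entries queued for the next flush, which is exactly the behavior the status indicator reports to the technician.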
Testing was conducted in an active maintenance environment. Participants completed standardized task scenarios using the Figma prototype on a tablet, wearing their standard work gloves throughout the session.
Focused on the core validation flow and touch target sizing. Revealed that the swipe gesture needed haptic feedback and that the scanner viewfinder required a significantly larger active area.
Tested the complete end-to-end flow, including the handover module. Round 1 participants were re-engaged to measure retention between sessions and early signs of long-term workflow adoption.
| Metric | Before redesign | After redesign |
|---|---|---|
| Task completion rate | 61% | 94% |
| Input error rate | 47% | 7% |
| Time-on-task (part validation) | 4m 12s | 1m 48s |
| SUS score (System Usability Scale) | 58 / 100 | 88 / 100 |
Without tactile feedback, technicians were unsure whether the swipe gesture had registered. Adding haptic confirmation reduced false completion attempts by 62% in Round 2 testing.
The initial scanner window was 160px wide. With gloves, technicians struggled to align barcodes within the frame. Expanding to full-width with corner guides cut scan failure rate from 31% to 6%.
Technicians appreciated voice notes but worried about accuracy in noisy hangars. Adding a transcription preview with a large single "Confirm" button resolved all hesitation around the feature.
The most significant design bet was consolidating the 3-screen validation flow into a single-screen swipe interaction. A/B testing quantified the impact before committing to the engineering handoff.
The existing approach: part details on screen 1, compliance checklist on screen 2, confirmation form on screen 3. Each screen required a tap to advance. Total: 3 screens, average 7 taps to complete.
All part details, compliance status, and confirmation on one screen. Swipe gesture (64px handle) with haptic feedback completes the validation. Total: 1 screen, 1 gesture, audit-traceable.
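The confirmation logic behind Version B can be sketched as a pure decision function. The 85% threshold and track geometry here are illustrative assumptions; the shipped values came from the glove testing described above:

```typescript
interface SwipeTrack {
  trackWidth: number;  // full width of the swipe lane, in px
  handleWidth: number; // 64px handle from the design system
}

// A swipe only validates if the handle is released past the threshold;
// releasing early snaps back instead of confirming, which is what keeps
// a single gesture audit-safe (no accidental validations).
function isConfirmed(track: SwipeTrack, releaseX: number): boolean {
  const maxTravel = track.trackWidth - track.handleWidth;
  return releaseX >= maxTravel * 0.85; // 85% threshold is illustrative
}
```

In the shipped design, crossing the threshold is also the moment the haptic confirmation fires, so the technician feels the commit before lifting their finger.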
| Metric | Version A | Version B |
|---|---|---|
| Average completion time | 3m 54s | 1m 30s |
| Input error rate | 41% | 5% |
| User preference | 32% | 91% |
| Gloves-on completion rate | 58% | 96% |
Involve the compliance and certification team from the research phase rather than at the handoff review. The swipe-to-confirm interaction initially raised audit traceability concerns that required a late-stage redesign of the confirmation data structure. Earlier cross-functional involvement would have saved two full sprints of rework.