1. Introduction: Usability Was Always There—MDR Just Started Asking the Right Questions
Usability engineering has never been new to medical devices. Long before the EU MDR came into force, manufacturers were already applying IEC 62366, ISO 14971, and user-centered design principles—at least on paper. Yet, for many organizations transitioning from the Medical Device Directive (MDD 93/42/EEC) to the Medical Device Regulation (EU MDR 2017/745), usability has emerged as one of the most frequent sources of Notified Body findings.
This has led to a common misconception:
“MDR introduced stricter usability requirements.”
In reality, MDR did not radically change usability principles. What it changed was:
- The regulatory visibility of usability
- The depth of evidence expected
- The degree of integration with risk management and clinical safety
Under MDR, usability is no longer something you claim—it is something you must demonstrate, justify, and trace.
2. Usability Under MDD
Under the MDD, usability typically existed in a gray zone:
- Mentioned indirectly in Essential Requirements
- Implemented inconsistently across manufacturers
- Rarely audited in depth unless a serious incident occurred
Typical MDD-Era Practices
Many manufacturers relied on:
- Informal formative testing
- Expert user reviews
- Training as the primary mitigation for user error
- A single “usability report” prepared late in development
Common characteristics of MDD usability files included:
- Limited task analysis
- Generic conclusions such as “no usability issues were observed”
- Weak or missing linkage to the risk management file
- Minimal differentiation between formative and summative evaluation
In many cases, usability was treated as a design quality attribute, not a safety control.
3. MDR Changed the Regulatory Requirement: Usability as a Safety and Performance Requirement
With the introduction of MDR, usability moved from the margins to the center of regulatory scrutiny.
Unlike MDD, MDR:
- Repeatedly references user error, intended users, and use environments
- Embeds usability expectations directly into Annex I – General Safety and Performance Requirements (GSPRs)
Usability and Risk Management Are Now Inseparable
Notified Bodies now expect to see:
- Use-related hazards identified systematically
- Clear identification of:
  - Use errors
  - Reasonably foreseeable misuse
- Demonstrated effectiveness of usability-related risk controls
If a risk arises from use, its mitigation must be verified through usability evidence.
A standalone usability report without risk traceability is no longer acceptable.
4. The Most Common Reasons MDD Usability Files Fail MDR Reviews
During MDR transition assessments, the same usability gaps appear repeatedly.
Lack of Iterative Design Evidence
MDR expects usability engineering to be a lifecycle process, not a single event.
A common pattern: formative evaluations were conducted and findings were addressed informally, but no documented rationale shows how user feedback influenced the design, labeling, or IFU.
Even a single improvement, such as clarifying instructions in the user manual, must be:
- Documented
- Justified
- Linked to risk reduction
“No Use Errors Observed” Without Objective Criteria
Statements like:
- “All tasks were completed successfully”
- “No critical use errors were identified”
often fail MDR scrutiny because they lack:
- Defined critical tasks
- Objective success criteria
- Pass/fail thresholds
- Performance measures (time, deviation, recovery)
Over-Reliance on Professional Users
A very common justification under MDD was:
“The device is used by trained healthcare professionals.”
MDR requires this claim to be proven, not assumed. Usability evaluation must account for the realistic conditions professional users face, including:
- Cognitive load
- Time pressure
- Environmental distractions
- Workflow interruptions
- Emergency conditions
Professional users still make errors, especially in high-throughput or stressful environments.
Weak Validation of IFU and Labeling
Under MDD, IFU was often treated as a paper mitigation.
Under MDR:
- IFU and labeling are part of the user interface
- Their effectiveness must be validated with users
Simply referencing the IFU in risk controls is no longer adequate.
5. What MDR-Compliant Usability Engineering Looks Like in Practice
MDR-compliant usability is characterized by structure, traceability, and realism.
Integration With Risk Management
Usability engineering must be:
- Planned alongside risk management
- Reflected in the risk management file
- Referenced in verification and validation activities
Each critical task should be traceable to:
- A potential use error
- A hazardous situation
- A specific risk control
- Validation evidence confirming effectiveness
Clear Separation of Formative and Summative Evaluation
- Formative evaluation – improves design, identifies weaknesses, supports iteration
- Summative evaluation – validates final design and confirms risk control adequacy
Even when formative findings are minor, their resolution must be documented.
Realistic Use Scenarios
Summative testing must reflect how the device is actually used, including:
- Worst-case scenarios
- Incorrect sequences
- Environmental constraints
- Typical user assumptions
6. Practical Roadmap: How to Upgrade an MDD-Era Usability File to Meet MDR Expectations
For many manufacturers, the biggest concern during MDR transition is the assumption that all usability activities must be repeated. In practice, this is rarely necessary.
Most gaps identified by Notified Bodies arise from how usability evidence is structured, justified, and linked—rather than from the absence of testing itself.
The following roadmap reflects a realistic, regulator-accepted approach to strengthening MDD-era usability documentation for MDR compliance.
Step 1: Establish the Intended Use, Users, and Use Environments—Explicitly
Under MDR, assumptions are no longer acceptable. What was previously implied must now be explicitly defined and documented.
Key Actions:
- Clearly define:
  - Intended medical purpose
  - Intended users (e.g., radiographers, clinicians, service engineers)
  - Use environments (e.g., radiology rooms, ICUs, mobile settings)
- Ensure consistency across:
  - IFU
  - Risk Management File
  - Clinical Evaluation
  - Usability Engineering File
Many usability gaps originate not from poor testing, but from inconsistent definitions across documents. MDR assessments frequently identify mismatches between usability assumptions and risk documentation.
If the intended user or environment is not clearly defined, it is impossible to defend task selection or usability validation scope.
Step 2: Identify and Re-Validate Critical User Tasks
MDR requires manufacturers to justify why specific tasks were tested.
Key Actions:
- Review existing task analyses and identify:
  - Tasks that are safety-critical
  - Tasks associated with potential use errors
  - Tasks performed infrequently but with high risk
- Confirm that these tasks:
  - Appear in the risk analysis
  - Are included in summative usability evaluation
Not every task must be tested. Focus on:
- Tasks that could result in harm if performed incorrectly
- Tasks that rely on user judgment rather than automation
- Tasks performed under time pressure or stress
Each task in summative testing should be traceable to a risk or safety concern—not convenience.
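The risk-based screening above can be sketched as a simple filter. This is an illustrative sketch only: the task names, risk attributes, and risk IDs below are hypothetical, not prescribed MDR fields.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    could_cause_harm: bool        # incorrect performance could lead to harm
    relies_on_user_judgment: bool # not protected by automation
    time_pressure: bool           # performed under stress or time pressure
    linked_risk_ids: tuple        # IDs from the risk analysis; empty = unlinked

def is_critical(task: Task) -> bool:
    """A task enters summative scope if any risk-based criterion applies."""
    return task.could_cause_harm or task.relies_on_user_judgment or task.time_pressure

def summative_scope(tasks):
    """Return the critical tasks, flagging any without a risk-analysis link."""
    critical = [t for t in tasks if is_critical(t)]
    unlinked = [t.name for t in critical if not t.linked_risk_ids]
    return critical, unlinked

# Hypothetical task inventory
tasks = [
    Task("Set exposure parameters", True, True, True, ("R-012",)),
    Task("Adjust display brightness", False, False, False, ()),
    Task("Emergency stop", True, False, True, ()),  # critical but not yet in risk file
]
critical, unlinked = summative_scope(tasks)
print([t.name for t in critical])  # tasks requiring summative evidence
print(unlinked)                    # critical tasks missing a risk-analysis entry
```

A check like the last line surfaces exactly the gap Notified Bodies query: a safety-critical task tested (or not) without any corresponding entry in the risk analysis.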
Step 3: Map Use-Related Hazards and Errors to Risk Controls
This is where many MDD files fall short.
Key Actions:
- Identify use-related hazards and hazardous situations using:
  - Existing risk management data
  - Complaint history
  - Service feedback
- Link each hazard to:
  - Specific use errors or misuse scenarios
  - Corresponding risk control measures
Examples of Risk Controls:
- Design features (interlocks, confirmations, defaults)
- User interface logic
- Visual cues or alarms
- IFU or labeling (secondary controls)
If a risk control relies on user interaction, its effectiveness must be validated through usability activities.
Step 4: Reconstruct and Document Formative Evaluation Iterations
MDR places strong emphasis on iterative design.
Key Actions:
- Review historical development records for:
  - Design reviews
  - User feedback
  - Prototype evaluations
- Identify any changes made as a result of usability insights, such as:
  - UI layout modifications
  - Workflow simplification
  - IFU clarifications
How to Handle Legacy Data:
Retrospectively reconstructed formative evidence remains acceptable, provided you:
- Document the changes transparently
- Explain the rationale behind them
- Link them to risk reduction
Step 5: Strengthen Summative Evaluation Objectivity (Without Re-Testing)
In many cases, summative testing already exists—but lacks clarity.
Key Actions:
- Define task success criteria clearly:
  - What constitutes a successful task?
  - What deviations are acceptable?
- Establish pass/fail logic:
  - Per task
  - Per participant
- Re-analyze existing observations to:
  - Identify close calls or recoverable errors
  - Document how design prevented harm
A summative evaluation with minor, non-harmful deviations can still pass—provided outcomes are objective and justified.
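One way to make that pass/fail logic explicit is to re-score existing observations against defined outcome categories. The categories, thresholds, and task names below are illustrative assumptions, not fixed MDR values; in particular, treating a recoverable error as a pass is a decision that must itself be justified in the usability file.

```python
from collections import Counter

# Assumed outcome categories: a recoverable ("close call") error counts as a
# pass here because the design prevented harm; a critical error never does.
PASS_OUTCOMES = {"success", "recoverable_error"}

def task_pass_rate(observations):
    """Fraction of participants whose outcome counts as a pass for one task."""
    counts = Counter(observations)
    passed = sum(n for outcome, n in counts.items() if outcome in PASS_OUTCOMES)
    return passed / len(observations)

def evaluate(results, threshold=1.0):
    """Per-task pass/fail. Critical tasks typically tolerate no unmitigated
    critical use errors, hence the assumed 100% threshold."""
    return {
        task: ("PASS" if task_pass_rate(obs) >= threshold else "FAIL")
        for task, obs in results.items()
    }

# Hypothetical re-scored observations from an existing summative study
results = {
    "Confirm patient ID": ["success"] * 14 + ["recoverable_error"],
    "Set infusion rate": ["success"] * 13 + ["critical_error", "success"],
}
print(evaluate(results))
# -> {'Confirm patient ID': 'PASS', 'Set infusion rate': 'FAIL'}
```

The point is not the arithmetic but the audit trail: every PASS or FAIL follows mechanically from criteria defined before the data are interpreted.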
Step 6: Validate IFU and Labeling as Usability Risk Controls
Under MDR, IFU and labeling are no longer passive documents.
Key Actions:
- Identify risks mitigated by information for safety
- Verify that:
- Users noticed the information
- Users understood it
- Users applied it correctly during use
This validation can often be integrated into:
- Existing summative usability data
- Simulated use scenarios
An IFU that is referenced in risk controls but never validated is a frequent MDR nonconformity.
Step 7: Strengthen Traceability Across the Usability File
Traceability is what turns usability evidence into a regulatory argument.
Key Actions:
- Build explicit traceability chains covering:
  - Intended use → user → environment
  - Critical tasks → use errors → hazards
  - Risk controls → usability evaluation results
  - Usability conclusions → risk acceptability
A well-structured traceability matrix often resolves multiple NB queries at once.
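A traceability matrix of this kind can also be checked mechanically for gaps. The record fields, task names, and section references below are hypothetical, chosen only to show the shape of the check.

```python
# Each row links one critical task through to its validation evidence.
# Any empty field is a traceability gap a Notified Body is likely to query.
matrix = [
    {"task": "Set exposure parameters", "use_error": "Wrong dose selected",
     "hazard": "Overexposure", "risk_control": "Confirmation dialog",
     "evidence": "Summative report §4.2"},
    {"task": "Emergency stop", "use_error": "Button not located in time",
     "hazard": "Delayed shutdown", "risk_control": "Red mushroom button",
     "evidence": ""},  # control exists but effectiveness never validated
]

REQUIRED = ("task", "use_error", "hazard", "risk_control", "evidence")

def find_gaps(matrix):
    """Return (task, missing_field) pairs for every incomplete trace chain."""
    return [(row["task"], field)
            for row in matrix
            for field in REQUIRED
            if not row.get(field)]

print(find_gaps(matrix))  # -> [('Emergency stop', 'evidence')]
```

Running such a completeness check before submission surfaces exactly the "control without validation evidence" findings described above, while the chains themselves remain the manufacturer's responsibility to justify.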
Step 8: Align Usability With Clinical and PMS Evidence
MDR expects consistency across lifecycle data.
Key Actions:
- Review:
  - Clinical evaluation
  - PMS and vigilance data
- Confirm that:
  - Known use-related issues are reflected in usability analysis
  - Usability conclusions align with real-world experience
Step 9: Justify When Re-Testing Is Not Required
Re-testing should be driven by risk, not regulation anxiety.
Acceptable Justifications Include:
- No significant design changes
- Established and stable user interface
- Strong PMS evidence
- Adequate summative coverage of critical tasks
This justification must be:
- Written
- Risk-based
- Defensible
Upgrading usability for MDR is less about doing more testing and more about:
- Making assumptions explicit
- Strengthening traceability
- Demonstrating risk-based thinking
- Documenting decisions transparently
When approached systematically, usability becomes one of the most defensible parts of the MDR technical documentation, not the weakest.
Under MDR, usability is no longer assessed in isolation. It is examined in direct connection with:
- Risk management
- Clinical safety
- Real-world use conditions
- Lifecycle evidence
Most MDR usability findings do not arise because manufacturers failed to perform usability activities, but because:
- Assumptions were left implicit
- Evidence was insufficiently structured
- Traceability between risks, tasks, controls, and validation was weak
A successful MDR usability strategy focuses on:
- Clearly defining intended users, uses, and environments
- Identifying and justifying critical user tasks
- Demonstrating the effectiveness of usability-related risk controls
- Validating IFU and labeling as part of the user interface
- Aligning usability conclusions with clinical and PMS data
Importantly, MDR compliance does not automatically require repeating usability testing. When supported by a strong, risk-based justification and consistent lifecycle evidence, existing usability data can remain fully defensible.
When approached systematically, usability engineering becomes one of the strongest and most credible elements of MDR technical documentation—providing regulators with clear, objective assurance that the device can be used safely and effectively in real-world conditions.