Policy Roadmap: Actionable Recommendations for Safe, Ethical, and Legally Compliant XR Development

Summary

This document provides a comprehensive overview of the regulatory challenges in the XR field and serves as a practical guide for policymakers, regulators, and stakeholders, supporting the safe, ethical, and legally compliant integration of XR technologies into European society. Building on a previous analysis of 25 EU legislative acts covering privacy & data, intellectual property, consumer & competition law, media & online services, cybersecurity, accessibility & non-discrimination, sectoral, technology, and finance law, it highlights a complex and overlapping policy landscape that can hinder XR development.

Recommendations

Based on the challenges analysed in this report, we propose reviewing and clarifying the application of the EU legal framework to XR technologies. Please consider this section as a policy roadmap rather than a strict legal assessment, since the legislative techniques required to achieve the same result may differ upon closer examination.

This set of recommendations involves various stakeholders, including the European Parliament (EP) for the legal reform of EU regulations and directives, the Court of Justice of the European Union (CJEU) for the judicial interpretation of such regulations and directives, the European Commission (EC) for its delegated legislative and interpretative powers (including its subdivisions, e.g. the AI Office), Member States for the implementation of directives, and other authorities such as the European Data Protection Board (EDPB).

1. Clarify application of regulations to XR technologies

Targets: Physical injuries and accidents; Cybersickness.

As demonstrated in our report, there are numerous regulations and directives that may be pertinent to XR technologies. However, in certain instances, the scope of such EU laws may be ambiguous. The achievement of this objective may be facilitated by the interpretation of the CJEU or a legislative initiative by the EP.

1.1.   Clarify the application of the GPSR to XR devices

Targets: Physical injuries and accidents; Cybersickness.

Clarification is needed as to whether risks of ergonomic stressors (such as physical strain from prolonged headset use or poor posture design), cognitive overload (such as disorientation, visual fatigue or cybersickness resulting from immersive environments) and spatial dissociation (such as accidents or injuries resulting from impaired awareness of the physical surroundings while using XR devices) are included in the concept of a ‘safe product’ under Article 3(1) GPSR. The EC already considered this a priority in its “2020 Report on AI, IoT, and Robotics”, which expressly recognised that mental and cognitive safety risks, especially for vulnerable users, must be addressed by product safety legislation.

Further clarification is required to determine whether integrated XR systems that combine physical hardware with real-time, sensory-driven software fall within the scope of the GPSR. XR hardware and software are not usually designed to function independently; rather, they operate as a convergent system in which the physical devices are inseparable from the immersive digital content, such as AR overlays, real-time spatial audio or AI-generated environments. In such scenarios, product safety cannot be assessed in isolation, but must be evaluated based on the interaction between the user, the device and the digital environment. Risk assessments must therefore account for potential harms arising from this interplay between hardware and software, such as visual overlays obscuring obstacles, delayed haptic feedback causing injury and spatial misalignment creating fall risks. This approach would reduce legal uncertainty for manufacturers and market surveillance authorities, while ensuring that consumers are adequately protected against the unique risks arising from embodied digital technologies.

1.2. Clarify application of the PLD and the Machinery Regulation to XR devices

Targets: Long-lasting consequential impairments and impacts; Physical injuries and accidents.

The concept of a ‘defective product’ under Article 7 of the PLD could be reviewed through court interpretation or legal reform to clarify whether design choices leading to cumulative or long-term harm, even in the absence of traditional malfunctions or immediate injuries, fall within its scope. XR devices, including head-mounted displays, haptic wearables and immersive smart interfaces, introduce novel forms of risk that emerge from prolonged and repetitive use. Poor ergonomic design, such as imbalanced weight distribution, displays that induce eye strain, or interfaces that constrain posture, can lead to chronic musculoskeletal strain, visual fatigue, or psychological distress over time. While these harms may not arise from a technical ‘defect’ in the conventional sense, they are foreseeable consequences of negligent or suboptimal design. The current framework focuses too narrowly on immediate impacts, overlooking foreseeable long-term impacts, including physical and psychological harm. Adopting this interpretation would align the PLD with its stated purpose of ensuring fair compensation for consumers, while also reflecting the realities of modern wearable and immersive technology ecosystems, where long-term physical interaction with the body and senses is an integral part of how the product is used. The EC will need to evaluate this issue as part of the PLD evaluation required by Article 20 PLD by 2030.

It is also necessary to clarify whether XR devices featuring haptic feedback, motion sensors, mechanical actuation or integrated mechanical components fall within the scope of the Machinery Regulation. These features may constitute ‘integrated machinery’, as defined by the Regulation, which would trigger compliance obligations, including CE marking, risk assessment and essential health and safety requirements. Such clarification would ensure that immersive devices with mechanical or kinetic interfaces, such as VR head-mounted displays, exoskeletons and force-feedback gloves, are properly regulated for mechanical safety. This would bridge the gap between digital product safety, ergonomics and physical risk, as outlined above under the GPSR.

1.3. Clarify status of XR hardware as medical device

Target: Use and handling of XR hardware in healthcare.

XR technologies are often developed for consumer use, such as entertainment, education and gaming. However, they are increasingly being repurposed for clinical applications, including immersive relaxation apps, physical rehabilitation and exposure therapy for PTSD and phobias. In such cases, while the hardware itself is not intended for medical use, its functional application may fulfil the criteria for a medical purpose, creating legal uncertainty regarding its regulation under the MDR. In this context, it is important to clarify whether consumer-grade XR technologies used in clinical environments can be considered ‘medical devices’ under the MDR, despite not being designed for this purpose. Currently, there is legal ambiguity over whether these repurposed devices meet the MDR’s definition of a ‘medical device’, particularly when their original intended purpose was not medical. This leads to regulatory gaps in risk classification, clinical oversight and safety assessment for XR devices.

1.4.  Encourage adoption of safety standards

Article 7(1) of the GPSR states that a product is considered safe if it complies with ‘relevant European standards’ published in the Official Journal of the EU. In this context, the EC may support the development of voluntary standards for XR devices and immersive environments through European standardisation organisations such as CEN and CENELEC.

Such voluntary standards can address specific physical, spatial, and cognitive safety concerns that are not yet clearly defined by current product safety legislation. They may include guidelines on optimal weight distribution, headset strapping mechanisms and load-bearing thresholds to reduce strains to the neck, shoulders and back; minimum technical specifications for display refresh rates, resolution, brightness and field of view to reduce visual fatigue and motion sickness; and standardised safety protocols such as time-based breaks, lockout prompts or in-device fatigue detection to prevent overexposure. The establishment of such voluntary standards can serve multiple objectives, including providing clear, measurable benchmarks for compliance with product safety obligations under the GPSR; helping manufacturers to integrate safety-by-design principles into XR hardware development; facilitating market surveillance and risk assessment by national authorities through uniform technical criteria; and enhancing consumer trust and reducing the likelihood of injuries or long-term health effects.
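To show how such benchmarks could become machine-checkable, the sketch below encodes a time-based break protocol and a display comfort threshold of the kind described above. It is purely illustrative: all names and threshold values are hypothetical placeholders, not figures drawn from any existing or proposed CEN/CENELEC standard.

```python
from dataclasses import dataclass

# Hypothetical thresholds for illustration only; a real standard would
# define the actual values and measurement conditions.
MAX_CONTINUOUS_SESSION_S = 30 * 60   # prompt a break after 30 minutes
MIN_REFRESH_RATE_HZ = 90             # example comfort threshold for displays

@dataclass
class SessionMonitor:
    """Tracks continuous headset use and signals when a lockout prompt is due."""
    session_start: float  # seconds, on the device's monotonic clock

    def continuous_use_s(self, now: float) -> float:
        return now - self.session_start

    def break_due(self, now: float) -> bool:
        # Time-based break: fire the lockout prompt once the limit is reached.
        return self.continuous_use_s(now) >= MAX_CONTINUOUS_SESSION_S

def display_compliant(refresh_rate_hz: float) -> bool:
    """Checks one display parameter against the example comfort threshold."""
    return refresh_rate_hz >= MIN_REFRESH_RATE_HZ

monitor = SessionMonitor(session_start=0.0)
print(monitor.break_due(now=29 * 60))  # False: under the 30-minute limit
print(monitor.break_due(now=31 * 60))  # True: lockout prompt should fire
print(display_compliant(120))          # True
```

Expressing benchmarks in this testable form is what would let manufacturers build safety-by-design checks into firmware and let market surveillance authorities verify compliance against uniform criteria.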

2.   Ensure GDPR compliance of XR-processed data

XR headsets can collect large amounts of data from users, most of which qualifies as personal data under the GDPR. Therefore, the GDPR generally applies to XR products and services, and XR providers must adhere to the regulation’s principles. However, there is less consensus on the applicability of the GDPR to specific types of data, such as soft biometric data, emotional data, and bystanders’ data. We propose including or clarifying the regulation of such types of data via legal reform and guidance from the EDPB.

2.1. Soft biometrics

Targets: Large-scale collection of personal data; Biometric and other sensitive data processing.

Under the GDPR, data containing physical, physiological or behavioural characteristics is recognised as biometric data only if it constitutes personal data (i.e. if it allows or confirms the unique identification of an individual). This means that non-identifying biometric data (or ‘soft’ biometrics) is outside the scope of the GDPR, even though processing it could affect users’ privacy and data protection rights.

Given the importance of this type of data for XR providers, we recommend that the EDPB issue guidelines on the regulation of soft biometrics. This guidance would not only be relevant for the XR ecosystem, but also for providers and users of AI systems. An alternative to technologically neutral guidance could be specific guidance on data processing techniques used by XR equipment, following the example of guidance on blockchain technologies currently under discussion. EDPB guidance will be essential in providing XR providers with clear rules for processing soft biometrics and other non-personal data.

Another way to address this regulatory gap, albeit less likely given the deregulation trends with regard to the GDPR, is to reform this regulation to explicitly address soft biometrics. One reform proposal is to expand the definition of biometric data under Article 4(14) of the GDPR to include soft biometrics. This could be achieved by removing the requirement for biometric data to allow or confirm the unique identification of a natural person. Another proposal is to amend Article 9 of the GDPR, which currently classifies ‘hard’ biometrics as a special category of personal data, to include soft biometrics as an additional type of data that is specially protected under the GDPR. This could be achieved by including specific types of soft biometrics, such as keystroke, gait or voice modulation analysis, as sensitive data.

2.2.  Emotion data

Targets: Large-scale collection of personal data; User targeting and emotional manipulation.

Emotion data has a similar status to soft biometrics in that it is not considered personal data per se, since it does not necessarily lead to the identification of an individual. When it does lead to identification, it is not included in the exhaustive list of special categories of data under Article 9 of the GDPR. To better protect XR users, given the considerable risk of such data being used for manipulation, we propose that Article 9 of the GDPR also be expanded to include emotion data. Guidance from the EDPB would also be welcome, whether general advice on the status of emotion data under the GDPR or guidance focused on its specific use by XR technologies.

2.3.  Bystanders’ personal data

Targets: Large-scale collection of personal data; Bystanders’ personal data collection.

XR providers need to comply with GDPR provisions when processing bystanders’ personal data, particularly with regard to enabling bystanders to exercise their data subject rights. We therefore believe that the EDPB should publish guidelines on processing bystanders’ personal data using XR technologies. Although previous guidance on similar technologies (particularly on video devices) exists, and some of its suggestions could be applied to XR devices, that guidance does not seem well suited to XR. This is not only because of the differences in technology and use cases, but also because it is outdated: it makes no mention of the potential use of integrated AI.

3. Address user manipulation on XR platforms

As demonstrated in our research, deceptive practices can be amplified by the immersive nature of XR technologies. It is therefore recommended that certain regulations explicitly address user manipulation in XR technologies as a distinct case, or at the very least, provide clarification as to whether they fall within the scope of such regulations. This would serve to ensure a level playing field for both XR providers and users.

3.1. Definition of deceptive practices on XR platforms

Targets: Deceptive practices and unfair competition; Addiction, isolation and other psychosocial risks.

In order to ensure legal clarity, it is recommended that the UCPD Annex I listing of blacklisted unfair commercial practices be amended by the EP to explicitly reference immersive or XR-specific manipulative interfaces. Examples of such practices that may be subject to blacklisting include: (i) haptics-based coercion, or using tactile feedback to pressure users into purchasing or consenting; (ii) immersive dark patterns, or spatial design choices that obscure exit points or mislead users into unwanted actions, and (iii) forced immersion, or forcing users to stay in immersive environments without clear, accessible options to opt out. This amendment has the potential to ensure that regulators and courts can effectively identify and sanction unfair XR business practices, thereby aligning XR consumer protection with traditional digital markets.

A more likely alternative to a reform of the UCPD is for the EC to exercise its supervisory role under the DSA by issuing guidance on how dark patterns manifest in XR environments. This guidance could provide concrete examples of the aforementioned immersive dark patterns, offer XR platform providers recommendations on how to detect and prevent manipulative design, and suggest monitoring mechanisms to enable authorities to audit XR platforms and enforce transparency. Additionally, such guidance could clarify the specific effects of XR platform design, such as addictive design, as recommended by the EP’s 2023 “Resolution on addictive design of online services and consumer protection”. By providing such tailored guidance, the EC will empower regulators, platform operators and developers to uphold consumer protection standards in the rapidly evolving context of XR.

Finally, we recommend that the upcoming Digital Fairness Act include a dedicated section addressing manipulative practices unique to XR environments. Immersive environments allow for new forms of user manipulation that are not adequately covered by current consumer protection legislation. The Act should include specific provisions defining XR-specific manipulative behaviours, such as exploiting immersive sensory inputs (visual, auditory and haptic) to unduly influence user decision-making; utilising immersive ads or notifications that interrupt or manipulate the user experience; and employing behavioural nudges in spatial interfaces that limit meaningful user consent. By codifying these behaviours, the Digital Fairness Act can establish clear legal boundaries against unfair manipulation in XR environments, thereby safeguarding consumer rights.

3.2.  Better enforcement of the legal framework on XR platforms

Targets: Deceptive practices and unfair competition; Harmful behaviour, offensive and violent interactions; Inauthentic identities and deepfakes; Cybersecurity vulnerabilities.

It is recommended that content moderation obligations under the DSA and AVMSD be strengthened through the evaluation of detection and reporting mechanisms for deepfake identity fraud. Furthermore, the EU cybersecurity framework should categorise deepfake identity fraud as a high-risk digital crime, thereby ensuring that scams facilitated by deepfake technology and identity theft are subject to legal repercussions under international cybercrime legislation.

Another way to ensure that these technologies remain fit for the future is to review the GPSR with a view to encouraging security measures against biometric spoofing in XR hardware. This would prevent biometric authentication mechanisms (e.g. iris scanning or facial recognition) from being easily bypassed by deepfake-generated identities. The standardisation of hardware-level authentication protocols has the potential to enhance cybersecurity protections across XR platforms, thereby mitigating the risk of identity fraud in immersive digital environments.
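As an illustration of the layered, hardware-level checks such standardisation could require, the sketch below gates biometric authentication on a liveness check and sensor attestation in addition to template matching. All class names, fields and thresholds are hypothetical, and real anti-spoofing pipelines are considerably more involved; the point is only that a perfect template match alone should never suffice.

```python
from dataclasses import dataclass

@dataclass
class BiometricSample:
    """Hypothetical sample from an XR headset's iris or face sensor."""
    match_score: float        # similarity to the enrolled template, 0..1
    liveness_score: float     # anti-spoofing (liveness) confidence, 0..1
    hardware_attested: bool   # captured by a trusted, attested sensor pipeline

# Illustrative thresholds; a standard would fix these per modality.
MATCH_THRESHOLD = 0.95
LIVENESS_THRESHOLD = 0.90

def authenticate(sample: BiometricSample) -> bool:
    """Accept only samples passing matching AND liveness AND attestation,
    so a deepfake replayed to the sensor fails even with a perfect match."""
    return (sample.hardware_attested
            and sample.liveness_score >= LIVENESS_THRESHOLD
            and sample.match_score >= MATCH_THRESHOLD)

# A deepfake can score a near-perfect match yet fail the other two layers.
spoof = BiometricSample(match_score=0.99, liveness_score=0.20, hardware_attested=False)
genuine = BiometricSample(match_score=0.97, liveness_score=0.95, hardware_attested=True)
print(authenticate(spoof))    # False
print(authenticate(genuine))  # True
```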

3.3. Promote literacy about XR-related risks and child protection

Targets: Digital literacy, access and affordability; Children and other vulnerable users.

The AVMSD currently requires Member States to encourage the development of media literacy skills among the public, particularly with regard to audiovisual content (Article 33a). However, XR systems, such as VR environments, AR overlays and MR simulations, are increasingly being used for news, entertainment, advertising and education, and pose new challenges that go beyond those of traditional media formats. The EC may therefore consider issuing a new communication to encourage Member States to extend the scope of national media literacy initiatives to explicitly include an understanding of the persuasive and immersive effects of XR experiences. This would cover simulated reality and identity manipulation, awareness of privacy risks and data profiling, psychological immersion in XR content and the ability to distinguish between real and synthetic experiences — especially in the context of XR’s use in journalism, advertising or political communication.

These initiatives would protect vulnerable users, especially children, from misinformation, manipulation and unsafe social experiences in immersive environments, while fostering informed digital citizenship. However, we believe that more can be done in this regard, including the EC issuing guidelines on the online protection of minors on XR platforms (Article 28(3) of the DSA), and special measures for XR platforms being incorporated into the upcoming EU Code of Conduct on age-appropriate design.

3.4. Address immersive political manipulation

Target: Impact on democracy and political manipulation.

The upcoming Regulation on Political Advertising should explicitly cover immersive political advertising delivered via XR technologies. This includes, but is not limited to, VR-based political rallies, town halls or candidate appearances; AR overlays promoting political messages or campaign materials in public or private spaces; and MR experiences designed to influence voter perceptions or engagement.

4. Address enhanced risks of XR-AI integration

AI technologies are used in XR platforms and virtual worlds to improve personalisation, make interfaces more intuitive, and generate virtual spaces, objects, and subjects. While this brings benefits, it also poses challenges to the XR environment. Many of the challenges described in this report for XR technologies could be exacerbated by integration with AI. This is where the implementation of the AI Act plays a pivotal role in the future of XR technologies. We propose the following actions:

4.1. Classifying AI-powered XR systems as high-risk

Targets: User targeting and emotional manipulation; Use and handling of XR hardware in healthcare.

The EC may consider leveraging its delegated powers to generally classify immersive AI-powered XR systems as high-risk AI systems under Article 6(2) AI Act, either through delegated legislation regarding Annex III under Article 7 AI Act or by evaluation under Article 112 AI Act. This move would formalize the regulation of AI systems whose immersive sensory interaction creates heightened risks. Immersive AI functionalities, such as AI-enabled agents and avatars, voice-cloning technology, or emotion-aware simulations, can interact directly with a user’s perception and body. Even when not used in traditional high-risk AI sectors, they can cause significant psychological, cognitive, or physical harm. Incorporating immersive AI systems into the “high-risk” category or formally recognizing them via guidance would ensure the Act remains effective and future-proof in an increasingly embodied digital landscape.

In addition, the EC may consider issuing guidance or interpretative documents under the AI Act to clarify that a “significant risk of harm”, as referenced in the context of high-risk AI systems, explicitly includes psychological and mental health-related harms, particularly those arising from extended or immersive interaction with XR-based AI systems. Immersive environments powered by AI, such as adaptive virtual therapy platforms, biometric emotion-recognition systems or AI-driven avatars, can deeply influence users’ cognitive and emotional states. This is especially relevant where XR systems are used in sensitive applications, including healthcare, education, behavioural nudging or workplace monitoring. Prolonged exposure to emotionally manipulative, overstimulating or immersive content may lead to cognitive overload, psychological distress, or even breakdowns and suicidal tendencies, particularly among vulnerable users such as adolescents, individuals with pre-existing mental health conditions or persons in high-pressure training environments. This interpretation would also support consistency with broader EU digital safety and consumer protection strategies, including the DSA and the Mental Health Strategy launched under the EU Health Union, which seeks to address mental health across all policies.

4.2. Clarify the role of XR technologies in AI-driven emotion recognition

Targets: Invasive uses in the workplace and education; User targeting and emotional manipulation.

The AI Act prohibits the use of AI systems to infer the emotions of a natural person in the areas of the workplace and education institutions, except where the AI system is intended to be put in place or placed on the market for medical or safety reasons. XR devices can accumulate biometric and emotional data which can be processed in a way that provides deep knowledge of, or inferences about, an individual’s actions, perceptions and emotions. These features could then be used for impromptu assessments in schools or workplaces, depending on the purpose that providers and deployers give to an AI system integrated into XR software. Specific guidelines are therefore needed to clearly disassociate emotion recognition applications from XR devices used in education and the workplace. It is our understanding that XR hardware manufacturers should not be held responsible if the device itself is not intended to monitor individuals; the responsibility should instead be assigned to the software/platform provider.

4.3. Encourage development of codes of conduct under the AI Act

An alternative proposal is for the EC, through its AI Office, to support the development of voluntary “Safety-by-Design” codes of conduct for non-high-risk systems under Article 95 AI Act, tailored to XR technologies. These voluntary but structured codes should offer detailed implementation guidance to developers of XR systems, particularly where AI-driven features influence user cognition, perception or physical interaction with the environment.

Given the intense sensory immersion and real-world detachment involved in XR, these systems pose unique challenges to user autonomy, safety and mental health. These codes can include provisions for: mandatory “panic buttons” or quick-exit mechanisms that allow users to immediately disengage from the immersive environment in case of distress, confusion or emergency; real-world override features; mandatory human oversight for critical XR applications, including those used in mental health, education, workplace monitoring or high-pressure training environments; and built-in usage time limits or fatigue-detection prompts to prevent overexposure and cognitive exhaustion. Such codes would not only support compliance with fundamental rights and user safety principles under the AI Act (including transparency, accountability and human oversight), but could also serve as a foundation for future standardisation efforts through CEN/CENELEC.
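A minimal sketch of how the quick-exit and usage-limit provisions might behave at the software level is given below. The class and state names are invented for illustration, and a real implementation would sit below the application layer in the device runtime so that no app could veto the override.

```python
from enum import Enum, auto

class SessionState(Enum):
    IMMERSED = auto()
    PASSTHROUGH = auto()   # real-world view restored

class XRSession:
    """Illustrative quick-exit ('panic button') and usage-limit mechanism:
    a single action that is always reachable and bypasses all in-app logic."""

    def __init__(self, time_limit_s: float = 45 * 60):
        self.state = SessionState.IMMERSED
        self.time_limit_s = time_limit_s  # built-in usage limit (hypothetical value)
        self.elapsed_s = 0.0

    def panic_exit(self) -> SessionState:
        # Immediately restore the real-world view: no confirmation dialogs,
        # no application veto.
        self.state = SessionState.PASSTHROUGH
        return self.state

    def tick(self, dt_s: float) -> SessionState:
        # Called by the runtime each frame/interval; enforces the usage limit.
        self.elapsed_s += dt_s
        if self.elapsed_s >= self.time_limit_s:
            self.state = SessionState.PASSTHROUGH  # forced fatigue exit
        return self.state

session = XRSession(time_limit_s=10.0)
session.tick(5.0)
print(session.state.name)         # IMMERSED
print(session.panic_exit().name)  # PASSTHROUGH
```

The design choice to model the exit as an unconditional state transition, rather than a request that applications handle, mirrors the human-oversight principle the codes would encode.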

*Please note that the full document (D3.3) below provides all references and the complete bibliography.

Download Deliverable

Code of Conduct

The XR4Human Code of Conduct sets forth the ethical obligations for developers involved in technological innovation and governance of immersive technologies, including Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), as well as all current and other emerging immersive environments. The Code is designed to ensure that these technologies respect human rights, protect user privacy, promote inclusivity, and safeguard the mental, physical, and social well-being of all users.

1. Learn

Read and become familiar with the XR4Human CoC. Learn by exploring the Educational Toolbox and the publications (on Ethics, Interoperability & Legal policy) in the Rating Repository

2. Assess

Conduct a self-assessment of your own XR technology concept via the Ethical Impact Assessment (EIA) and the CoC Compliance Checklist

3. Test and Explore

Test your idea and get new ideas by exploring the Experience Library

4. Reflect

Reflect on the rating information received after completing your self-assessment and join the XR4Human Forum to revise and improve your XR concept

Use our Tools to implement the Code of Conduct

The following guides provide step-by-step instructions and tools to help you implement the Code of Conduct during your development and deployment processes.