This document distils key findings from Deliverable 3.1, ‘State-of-the-Art in XR Policy Debates’, offering a structured mapping of regulatory gaps and risks across XR domains. It highlights shortcomings in current EU frameworks concerning biometric data privacy (pp. 47–48, 52–55), workplace surveillance and consent (pp. 52–53), child safety and psychological harms (pp. 47–48, 56), consumer protection (pp. 57–58), and interoperability (pp. 45–46). Drawing on sectoral analysis, it introduces a regulatory risk taxonomy (pp. 60–61) that supports policy design and governance. The summary provides XR developers and regulators with actionable insights into why domain-specific legal safeguards are urgently needed.
Introduction
Deliverable 3.1 maps the current European policy and regulatory landscape for Extended Reality (XR), identifying key challenges across sectors and use cases. It shows how XR technologies, due to their immersive, data-rich and deeply personal nature, outpace many of the assumptions behind existing legal frameworks. The report highlights where significant regulatory gaps exist and proposes a regulatory risk assessment approach (pp. 60–61) to better inform policy responses and governance mechanisms.
Key Regulatory Gaps Identified
Biometric Data Privacy:
XR applications collect large volumes of sensitive biometric data, including head/eye movement, gestures, emotional responses, and even voice tone. While the GDPR applies in principle, D3.1 highlights how XR-specific practices, such as biometric profiling for targeted advertising or employee surveillance, introduce novel risks not yet explicitly addressed in EU law.
→ Discussed on pp. 47–48 (Media), pp. 52–53 (Work), and pp. 54–55 (Health)
Workplace Surveillance:
XR can enable pervasive employee monitoring, with biometric and performance data used for intrusive evaluation. Existing consent models may be insufficient in employment contexts due to power imbalances, challenging GDPR’s reliance on informed, voluntary consent.
→ Detailed on pp. 52–53
Child Safety and Mental Health:
Age-rating systems are inadequate for immersive XR. Risks include harassment, grooming, addiction, and psychological harm. XR’s deep immersion amplifies these harms compared to traditional media, while current content moderation and child-protection frameworks lag behind.
→ Discussed on pp. 47–48 and 56
Consumer Protection:
Virtual goods and experiences raise issues of misleading advertising and manipulation, especially via biometric targeting. Existing consumer rights frameworks do not fully address such XR-specific practices, including false representation of virtual products.
→ Addressed on pp. 57–58
Interoperability Standards:
Fragmented hardware/software ecosystems hinder consistent safety, accessibility, and privacy protections. Internationally coordinated, user-centric standardization is needed but currently insufficiently developed.
→ See pp. 45–46 for general governance challenges and standardization gaps
Sectoral Challenges and Examples
Healthcare:
Insufficient regulation of biometric health data from XR devices.
Risks include data breaches, patient re-identification, and inadequate hygiene protocols for shared devices.
→ pp. 54–56
Education:
Privacy concerns about tracking student data.
Digital literacy gaps among educators.
Uneven funding limiting access and inclusivity.
→ pp. 50–52
Work and Production:
Risk of workplace surveillance and performance evaluation via XR.
Inequality in access to training opportunities.
Cybersecurity vulnerabilities exposing sensitive data.
→ pp. 52–54
Media and Entertainment:
Enhanced privacy risks through detailed behavioural tracking.
Psychological risks like addiction, depersonalisation, harassment in virtual spaces.
Weak content moderation for immersive social environments.
→ pp. 47–49
Marketing and Retail:
Use of biometric targeting for manipulation.
Misleading virtual representations of products.
IP infringement challenges in virtual shops.
→ pp. 57–58
Security and Law Enforcement:
Potential for data leaks of classified information in military/policing XR use.
Ethical concerns over surveillance and proportionality in policing.
Design risks (e.g., disorientation during critical operations).
→ p. 59
Risk Taxonomy (Regulatory Risk Assessment Approach)
D3.1 proposes categorising XR risks into five groups to guide governance (→ see pp. 60–61):
Physical Risks:
Injuries from use (e.g., VR-induced falls or repetitive strain).
Mental/Psychological Risks:
Addiction, PTSD, depersonalisation, simulator sickness.
Social Risks:
Harassment, manipulation, erosion of trust in democratic processes (e.g. deepfakes).
Legal/Privacy Risks:
Data breaches, insufficient consent, inadequate regulation of biometric data.
Abuse of Power:
Surveillance in the workplace, exploitation of users’ vulnerabilities (e.g. children or neurodiverse users).
The XR4Human Code of Conduct sets forth the ethical obligations of developers involved in the technological innovation and governance of immersive technologies, including Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), and other current and emerging immersive environments. The Code is designed to ensure that these technologies respect human rights, protect user privacy, promote inclusivity, and safeguard the mental, physical, and social well-being of all users.
Read and become familiar with the XR4Human CoC. Learn by exploring the Educational Toolbox and the publications (on Ethics, Interoperability & Legal policy) in the Rating Repository.
Conduct a self-assessment of your own XR technology concept via the Ethical Impact Assessment (EIA) and the CoC Compliance Checklist
Test your idea and get new ideas by exploring the Experience Library
Reflect on the rating information received after completing your self-assessment and join the XR4Human Forum to revise and improve your XR concept