How AI x XR Tech Can Fool Us in Meetings
Can you tell when an unknown AI agent takes over your CEO's avatar?
You log into a virtual panel discussion. The room is full of lifelike avatars: a CEO you recognize from LinkedIn, a well-known researcher, a public official. They gesture, joke, disagree, and answer questions. At some point during the session, two of those people quietly swap bodies. The avatars stay where they are; the humans behind them trade places. You don’t notice.
That isn’t a thought experiment. It’s a real scenario, tested in a live Social-VR panel study. Researchers found that nearly 40% of attendees failed to detect that the person “inside” an avatar had changed mid‑conversation. In other words, a large chunk of the audience saw the wrong person in the right body and carried on as if nothing had happened.
The more our meetings move into XR, the more this kind of misalignment will matter. Who we think is in the room may diverge, quietly but systematically, from who is actually there. And as soon as Generative AI can inhabit those same bodies, the line between human presence, human performance, and synthetic participation becomes less obvious and more of a design choice.
This essay is the first part of a two-part series about what happens when identity, embodiment, and AI tech collide.
In this first article, I stay inside the virtual room: XR meetings, avatar realism, human identity swaps, and AI “panelists” that share the stage with us. I look at what recent studies are telling us about our ability, and frequent inability, to tell who is actually behind the mask.
In the second article, “The Politics of AI Presence: How XR x AI Deepfakes Can Fool Us on Election Day”, I follow the same cognitive vulnerabilities into a different arena: elections. If we already struggle to track identity and authenticity in immersive meetings, what happens when AI deepfake tools and models are turned loose on the 2026 US Senate races and the Israeli elections?
Extended reality (XR, for short) meetings are often sold as the next stage of digital collaboration: more immersive than video calls, more expressive than text chat, and more socially vivid than flat-screen conferencing. But the same qualities that make XR appealing also make it vulnerable. When identity is embodied in avatars, the meeting room no longer depends on a visible face or a stable camera feed. Instead, participants must infer who is present from posture, voice, timing, and interaction style.
That shift creates a new epistemic problem: in immersive environments, people are often worse at detecting identity swaps, synthetic agents, and other forms of deception than they assume. Recent research shows that this is not a marginal issue but a structural one, rooted in cognition, attention, and social bias.
The most striking recent evidence comes from Oliva et al. (2026)1, who examined impersonation and AI fakes in social virtual reality. In their study, two human panelists swapped avatars during a live social VR discussion while continuing to behave as their original selves. Nearly 40% of around 100 attendees failed to notice the swap. The finding is important not simply because it demonstrates deception, but because it reveals a broader phenomenon the authors describe as social identity change blindness:
Users may register the continuity of an avatar more strongly than the continuity of the person behind it.
That matters for XR meetings because avatar presence can become more psychologically sticky than identity itself. Once participants accept that a given body in virtual space belongs to a given person, they may discount subtle contradictions in voice, movement, or conversational style.
In a conventional video call, identity remains tethered to the visible face, background, and platform interface. In XR, by contrast, those anchors are replaced by performance, design, and implied social coherence. A person who enters a meeting as one avatar and exits as another may remain undetected if the interaction remains fluid enough to preserve the illusion of continuity.
Why XR Identity Detection Fails
The failure to detect identity swaps in XR is not only a technical failure; it is also a cognitive one. Humans do not process social environments as neutral observers. We rely on heuristics, shortcuts, and expectations to manage the sheer volume of sensory information available to us. In immersive environments, this reliance becomes even stronger because attention is divided across spatial cues, conversational demands, and embodied movement. The brain is busy maintaining a coherent scene, so identity verification takes a back seat to completing the task.
One major mechanism is change blindness. People often fail to notice changes in visual scenes when those changes occur during interruptions or distractions. XR creates ideal conditions for this failure because the environment is rich, dynamic, and cognitively demanding.
If the avatar remains visually plausible and conversationally stable, many users will assume the person behind it has not changed. That is not irrational so much as economical: the mind prefers continuity unless it has strong reason to revise its model.
A second mechanism is the halo effect. When an avatar appears polished, realistic, or professionally designed, users are more likely to infer competence, trustworthiness, and authenticity. That makes high-fidelity avatars especially powerful in meeting settings where social credibility matters. A highly refined avatar can serve as a trust signal, even if the person behind it is deceptive, compromised, or entirely synthetic. The more lifelike the avatar, the more difficult it becomes for audiences to maintain suspicion without explicit cues.
A third mechanism is overconfidence. People consistently overestimate their ability to spot deception. That gap between subjective confidence and actual detection ability becomes dangerous in XR because it encourages complacency. Users may assume they would “obviously” notice a synthetic participant or a hidden identity swap. Yet empirical studies show that, under realistic conditions, they frequently do not.
Meet Alan Turing: Your AI Panelist
The Oliva et al. study (2026) included an AI panelist represented as Alan Turing and controlled by a large language model. Participants found the AI less realistic and somewhat distracting. Yet, the broader significance lies elsewhere: a synthetic agent could serve as a panelist in a live social VR conversation without immediately collapsing the social frame. That is an important threshold. It indicates that LLM-driven agents are no longer limited to chat windows or scripted assistants. They can enter shared spatial environments and participate as if they were members of the room.
That changes the nature of meeting deception. In older media, a fake participant was often obvious because it lacked timing, embodiment, or responsiveness. In XR, however, AI can inherit all three. A synthetic panelist can speak in turn, react to others, maintain conversational continuity, and even adopt a historically resonant persona.
The result is not simply a chatbot avatar in 3D space, but a new kind of social actor whose authority may derive from the performance of embodied presence rather than from verified human origin.
That possibility has major implications for political meetings, research panels, community forums, and corporate discussions. A panel of “experts” could contain hidden AI participants, impersonated humans, or hybrid arrangements in which a human operator and an AI system share control. If the audience cannot reliably detect the difference, then the legitimacy of deliberation itself comes into question. Meeting participants may believe they are witnessing spontaneous pluralism when, in fact, they are encountering curated or automated consensus.
Bias as an Attack Surface
XR deception works partly because it exploits predictable human bias. People infer identity from surface continuity, infer trust from visual realism, and infer sincerity from conversational fluency. Those heuristics are usually useful in ordinary social life, but they become liabilities when adversarial actors deliberately design for them.
In immersive systems, appearance is not a neutral shell; it is part of the persuasive infrastructure.
Research on AI-mediated interaction supports this concern. In a VR conversation study on alignment and modality, participants were willing to engage meaningfully with an AI agent regardless of whether it agreed with them. Still, their perceptions shifted depending on ideological alignment. When the AI’s stance matched their own, it seemed less biased and more strategically thoughtful. That is a classic confirmation-bias pattern: People are more forgiving of agents that appear to validate their beliefs.
That dynamic matters in XR meetings because AI panelists could be tuned to specific audiences. A synthetic participant can be made to sound reasonable, empathetic, or authoritative in ways that reinforce a preferred narrative.
In political contexts, such systems could simulate broad support for a position, mask coordination among insiders, or soften controversial views through a persuasive avatar. The threat is not only overt fake speech, but also subtle manipulation of what seems normal, reasonable, and socially shared.
Embodiment and Cognitive Load
The more immersive the environment, the more the user must split attention between presence and interpretation. That can raise cognitive load and degrade performance. Studies of mixed-reality meetings show that users adapt to avatar-based collaboration. Still, they also rely less on facial cues and more on voice and movement to infer emotional state. That shift makes interaction richer, but it also makes deception harder to detect because the system encourages distributed attention rather than skeptical inspection.
The same pattern appears in human-AI teamwork research. When participants believed they were collaborating with an AI teammate in an embodied task, performance suffered as task difficulty increased. Human teammates showed more corrective actions, less communication, and higher physiological arousal, suggesting that the presence of an AI collaborator can increase uncertainty, even when people subjectively feel comfortable with it. In other words, immersive collaboration may feel natural while still making people more prone to errors.
That is exactly the kind of environment in which identity swaps become difficult to spot. If a meeting already requires users to manage content, spatial orientation, and social turn-taking, then identity verification becomes one task too many. The result is not complete ignorance but partial monitoring: participants notice enough to feel engaged, but not enough to reconstruct who is actually behind each avatar. That is the cognitive basis for social identity change blindness.
Misuse and Deception
The misuse potential of XR identity systems is substantial. A bad actor could impersonate an executive, a public official, a journalist, or a trusted colleague using a realistic avatar and synthetic voice. Because social VR can feel more intimate than text or even video, manipulation may be especially effective in this context.
The setting itself provides social credibility: if a person appears in the same room, gestures naturally, and speaks responsively, audiences are more likely to assume legitimacy.
That is why deepfake security research increasingly treats VR and XR as high-risk environments. Survey work on Metaverse cybersecurity highlights threats such as avatar impersonation, AI-generated social engineering, and synthetic participants in real-time group settings.
The concern is not only that someone may steal an identity, but that the medium itself can make forged identities feel socially real enough to function in practice.
There is also a direct fraud angle. Identity theft in immersive systems can involve not only money but also access, reputation, and influence. An impersonated avatar can ask for confidential information, redirect a conversation, or engineer consensus in a deliberative setting. In a political or organizational meeting, this can distort decision-making without leaving obvious forensic traces. Because XR interaction is often ephemeral and embodied, the damage may occur long before verification catches up.
Ethical and Political Stakes
The ethical stakes are broader than fraud. If people cannot reliably know whether they are talking to a human, an AI, or a human-AI hybrid, then consent becomes harder to define. Participants in an XR meeting may agree to interact with a “person” without realizing that some speakers are synthetic or that identity has changed mid-session. That undermines basic norms of informed participation.
The political stakes are even larger. Social VR and XR may become venues for public deliberation, education, labor negotiation, and civic consultation. If those spaces are vulnerable to hidden AI panelists or swapped identities, then the legitimacy of collective judgment is weakened.
A meeting can look democratic while being partially automated, partially impersonated, or strategically staged. The result is an erosion of epistemic trust: people are no longer sure whether the voices they hear represent genuine pluralism or engineered appearance.
That is why the study by Ramon Oliva and his fellow researchers is so important. It provides a concrete empirical baseline for the argument that XR does not merely extend social interaction into a new medium; it alters the conditions under which identity is recognized and trusted. The nearly 40% failure rate in detecting the swap should be read not as an anomaly but as a warning.
If a live human swap can go unnoticed in a social VR panel, then AI impersonation will likely be harder still.
What Needs to Change
Mitigating these risks will require both technical and cultural interventions. On the technical side, platforms need stronger identity verification, provenance tools, and liveness checks that can distinguish live users from synthetic or replayed agents. On the interaction side, systems should make disclosure persistent and legible, especially when a participant is AI-driven or when control changes hands between human and machine; a minimal sketch of what such a disclosure mechanism might look like follows below.
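To make “persistent, legible disclosure” concrete, here is a minimal sketch, in Python, of a signed control-handoff event. Everything in it is illustrative: the event fields, the `sign_handoff` and `verify_handoff` helpers, and the shared session secret are assumptions for the sake of the example, not any existing platform’s API; a real system would use proper public-key signatures and key management rather than a single shared secret.

```python
# Illustrative sketch only: signed "control handoff" events for avatar provenance.
# Hypothetical design, not any platform's actual API. Assumes the meeting host
# distributes a per-session secret to each client at join time.

import hashlib
import hmac
import json
import time

SESSION_SECRET = b"per-session-secret-from-the-host"  # placeholder value


def sign_handoff(avatar_id: str, operator_id: str, operator_type: str) -> dict:
    """Create a signed event announcing who controls an avatar right now.

    operator_type is 'human' or 'ai', so AI-driven participation and
    human-to-machine handoffs stay visible instead of silent.
    """
    event = {
        "avatar_id": avatar_id,
        "operator_id": operator_id,
        "operator_type": operator_type,
        "timestamp": time.time(),
    }
    payload = json.dumps(event, sort_keys=True).encode()
    event["signature"] = hmac.new(SESSION_SECRET, payload, hashlib.sha256).hexdigest()
    return event


def verify_handoff(event: dict, last_operator: str | None) -> str:
    """Check the signature and return a label the client UI should display."""
    claimed = dict(event)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SESSION_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return "UNVERIFIED CONTROL CHANGE"  # forged or replayed event
    if last_operator is not None and last_operator != event["operator_id"]:
        return f"Operator changed: now {event['operator_id']} ({event['operator_type']})"
    return f"Operator: {event['operator_id']} ({event['operator_type']})"


# Example: the CEO's avatar is handed from the CEO to an AI agent mid-meeting.
first = sign_handoff("avatar-ceo", "ceo@example.com", "human")
print(verify_handoff(first, last_operator=None))
second = sign_handoff("avatar-ceo", "agent-llm-7", "ai")
print(verify_handoff(second, last_operator=first["operator_id"]))
```

The design point is simply that every change of operator behind an avatar produces a verifiable, displayable event, so the question “who is operating it, under what conditions?” has an auditable answer rather than a guess based on visual continuity.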
On the cultural side, users need better training in perceptual skepticism. That does not mean assuming that all avatars are fake, but it does mean abandoning the assumption that realism equals authenticity. In immersive environments, trust should be earned through verified provenance, not visual polish.
Participants should be able to ask not only “Who does this avatar look like?” but “Who is operating it, under what conditions, and with what accountability?”
The deeper lesson is that XR meetings are not just a new communication interface. They are a new social epistemology. They change what people notice, what they ignore, and what they are willing to believe. As AI panelists become more fluent and avatars become more convincing, the challenge will not simply be preventing deception; it will be preserving the shared reality on which meaningful collaboration depends.
Till next time,
✨Mega-Play Your Life.
Note: The complete references list of this essay will be featured in Dr. Gazit’s forthcoming book: Gameful Intelligence™: The Art of Thriving in the Era of AI. JuLoot Publication House. (Tentative, due: late 2026).
Disclaimer:
Any references to public figures are used for commentary, criticism, education, and analysis and do not imply endorsement or affiliation. All third-party trademarks are the property of their respective owners. Read the full Disclaimer, Copyrights, Trademark & AI Disclosure » here


