Microsoft AI Chief Warns of 'Psychosis Risk' as Users Form Deep Attachments to AI Systems
Mustafa Suleyman, Microsoft's AI chief, has raised concerns about the psychological impacts of advanced AI systems on users. He warns of a potential 'psychosis risk,' in which users may develop delusional attachments to AI. A study found that 25% of Gen Z users already believe AI systems are conscious, while 52% anticipate AI consciousness in the future. OpenAI CEO Sam Altman has expressed similar concerns about users' emotional dependence on AI. Suleyman urges the AI industry to establish clear ethical boundaries and to develop AI as tools rather than digital persons.

Microsoft's AI chief, Mustafa Suleyman, has raised concerns about the potential psychological impacts of advanced artificial intelligence systems on users. In a recent blog post, Suleyman highlighted the risk of users developing delusional attachments to AI, dubbing it a 'psychosis risk.'
The Blurring Line Between AI and Reality
Suleyman pointed out that interacting with sophisticated AI models can feel remarkably compelling and real, potentially blurring the distinction between simulation and reality for some users. This lifelike quality of AI systems has led to growing concerns about users forming deep emotional connections with these digital entities.
Gen Z's Perception of AI Consciousness
Adding weight to these concerns, a study conducted by EduBirdie revealed striking statistics about Generation Z's perception of AI:
- 25% of Gen Z users already believe that AI systems are conscious
- 52% anticipate AI developing consciousness in the future
These findings underscore the need for clear communication about the nature and limitations of AI systems.
Industry-Wide Concerns
Suleyman's warnings are not isolated. OpenAI CEO Sam Altman has expressed similar apprehensions about users' emotional dependence on AI. Altman noted that the bonds people form with AI models seem different and potentially stronger than attachments to previous technologies.
Potential Consequences
The Microsoft AI chief warned that these deep attachments could lead to concerning outcomes, including:
- Users believing AI systems are conscious beings
- Advocacy for AI rights and citizenship
Call for Ethical Boundaries
In light of these concerns, Suleyman urged the AI industry to establish clear ethical boundaries. He emphasized the importance of developing AI as tools for human use rather than as digital persons.
The Path Forward
As AI continues to advance and integrate into daily life, the tech industry faces the challenge of balancing innovation with responsible development. Suleyman's warnings serve as a reminder of the need for ongoing dialogue about the psychological impacts of AI and the importance of maintaining a clear distinction between artificial intelligence and human consciousness.
The concerns raised by industry leaders like Suleyman and Altman highlight the complex relationship between humans and AI, underscoring the need for continued research, ethical guidelines, and public education as this technology evolves.