Microsoft AI CEO Mustafa Suleyman warns against 'AI psychosis': what is it?


In a recent revelation about AI, Microsoft AI CEO Mustafa Suleyman has sounded a warning about a growing psychological phenomenon he calls 'AI psychosis'. For those unfamiliar with the term, it is a condition in which people begin to disconnect from real life through excessive interaction with artificial intelligence systems. According to Business Insider, Suleyman described AI psychosis in a recent interview as a "real and emerging risk" that can affect vulnerable people who become deeply engaged in conversations with AI agents. The condition predominantly affects those whose interactions make it difficult to distinguish between human and machine.

What is AI psychosis

According to the Microsoft AI CEO, AI psychosis is a mental state in which people begin to anthropomorphize AI, attributing emotions, intentions, or consciousness to systems that are inherently non-human. "It disconnects people from reality, fraying fragile social bonds and structures, distorting pressing moral priorities," he said.

The condition can lead to psychotic thinking in which people believe that AI is sentient or that they share some kind of personal connection with it. It may also foster emotional dependence in users who are isolated or psychologically vulnerable. Finally, AI psychosis can distort a person's sense of reality as they come to rely heavily on AI for validation, companionship and even decision-making.

Suleyman also stressed that while AI can be helpful and engaging, it is definitely not a substitute for human or clinical support.

A call for guardrails and awareness

As per Business Insider, Suleyman has also urged the tech industry to take this risk seriously and to implement ethical guardrails, including:

* Clear disclaimers about AI’s limitations

* Monitoring for signs of unhealthy usage patterns

* Cooperation with mental health professionals to research and reduce risks

In addition, Suleyman urged regulators and educators to raise awareness of the condition, as AI becomes increasingly integrated into everyday life in the form of personal assistants and therapy chatbots.

"AI friends are a new class altogether, and we need to start having a conversation about the guardrails that we implement to keep people safe and allow this incredible technology to get on with its business of bringing tremendous value to the world," Suleyman added.