If you asked large language models like Gemini or ChatGPT today whether you might suffer from bixonimania, you would get a clear answer:
Nope. Bixonimania is not a real disease.
Yet just a few weeks ago, AI chatbots treated it as a real, legitimate-sounding health problem, offering people false information, complete with pictures and advice on what to do if you suspected you were affected.
The term bixonimania – a supposed condition of painful, itchy eyes and eyelids caused by excessive exposure to blue light from screens – was coined by Swedish scientists from the University of Gothenburg.
It was part of an experiment to test how quickly and uncritically artificial intelligence can spread medical misinformation.
“It points to two failures – one on the side of the AI, the other in how people work with it,” mathematician and scientist Rudolf Kohulák, who works in London with artificial intelligence and uses mathematics to improve how computers think and learn, tells SME.
The experiment worked all too well
In science and medicine, artificial intelligence is now a powerful tool, increasingly shaping diagnostics, drug development and everyday clinical practice. It helps process huge amounts of data, uncover patterns a human would miss, and speed up doctors’ decision-making.
At the same time, however, it also raises new questions – for example, about the reliability of the information it offers.
“Not everything that is on the Internet is true. Since AI systems also work with data from the Internet, not everything they generate will be true. And we have to get used to that, we have to work with this awareness,” says Kohulák.
Scientist Mária Bieliková, who researches disinformation at the Kempelen Institute of Intelligent Technologies (KInIT), has been pointing this out for a long time. According to her, these models can produce inaccurate or false information, especially if they are not well trained or lack sufficient protective mechanisms.
With the imaginary disease, medical researcher Almira Osmanovic Thunström wanted to find out whether she could create a health condition that had never before existed in any database, and whether artificial intelligence would pick it up and spread it as a real diagnosis, writes Nature magazine.
She didn’t expect the experiment to work so well.