Recently, a research report led by Nataliya Kosmyna, a scientist at the MIT Media Lab, suggested that ChatGPT may harm the thinking abilities of ordinary users.
The study divided 54 participants (aged 18 to 39, from the Boston area) into three groups, each asked to write a series of SAT essays using OpenAI’s ChatGPT, Google’s search engine, or no tools at all. Researchers recorded brain activity with electroencephalography (EEG) across 32 brain regions.
The paper indicated that ChatGPT users showed the lowest brain engagement and performed worst at the neural, linguistic, and behavioral levels. Over the several months of the study, these users grew lazier with each successive essay, often resorting to simple copy-and-paste by the end.
EEG readings showed that these participants had lower levels of executive control and attention. By the third essay, many writers were simply handing the essay prompt over to ChatGPT. Two English teachers who graded the essays found them lacking in “soul” and originality, with heavy repetition of the same expressions and ideas.
In contrast, participants who relied only on their own brains showed the highest neural connectivity, especially in regions associated with creative ideation, memory load, and semantic processing. Researchers found that this group was more engaged and curious, felt greater ownership of their work, and reported higher satisfaction.
Participants using Google search also reported high satisfaction and showed active brain function. Notably, Google search, now more than two decades old, has itself seen usage decline as people turn to AI for information retrieval.
MIT researchers concluded that the use of large language models (LLMs) could impair people’s thinking abilities, especially among younger users.
Although the paper has not yet undergone peer review and the sample size was relatively small, lead author Kosmyna believes that as society increasingly relies on LLMs for instant convenience, brain development may suffer.
She chose to release the findings now rather than wait for full peer review because she fears that, within six to eight months, a decision could be made to introduce GPT into kindergarten settings, posing serious risks to developing brains.
Moreover, earlier this year the same lab found that, on the whole, the longer people used ChatGPT, the lonelier they felt.
Kosmyna emphasized the need for proactive legislation that keeps pace with the technology and, above all, for testing these tools before they are put to use.
Dr. Zishan Khan, a psychiatrist board-certified in child, adolescent, and adult psychiatry, said: “From a psychiatric perspective, excessive reliance on LLMs could lead to unforeseen psychological and cognitive consequences, particularly for young individuals whose brains are still developing.” He added that over-reliance on LLMs could weaken “the neural connections that help you acquire information, remember facts, and maintain resilience.”
Scientific research on the impacts of artificial intelligence is still in its early stages.
