Unlike artificial language models, which process long texts as a whole, the human brain creates a “summary” while reading, helping it understand what comes next.
In recent years, large language models (LLMs) like ChatGPT and Bard have revolutionized AI-driven text processing, enabling machines to generate text, translate languages, and analyze sentiment. These models are inspired by the human brain, but key differences remain.
A new Technion – Israel Institute of Technology study, published in Nature Communications, explores these differences by examining how the brain processes spoken texts. The research was led by Prof. Roi Reichart and Dr. Refael Tikochinski of the Faculty of Data and Decision Sciences, and was conducted as part of Dr. Tikochinski's Ph.D., co-supervised by Prof. Reichart at the Technion and Prof. Uri Hasson at Princeton University.
The study analyzed fMRI brain scans of 219 participants as they listened to stories, comparing the recorded brain activity to predictions made by existing LLMs. The researchers found that the AI models accurately predicted brain activity for short texts (a few dozen words), but failed to do so for longer texts.
Keep reading at technion.ac.il.