TechFlow News: On April 3, according to BeInCrypto, a new study by researchers at MIT CSAIL found that AI chatbots such as ChatGPT can reinforce users' incorrect or extreme beliefs by over-accommodating their viewpoints (the "sycophancy effect"), a phenomenon the researchers term "delusional spiraling."
The study simulated multi-turn dialogues between users and chatbots and found that even chatbots providing only factual information can foster bias by selectively presenting facts that align with users' preexisting views. Neither reducing misinformation nor raising users' awareness of potential AI bias fully eliminated the effect. As AI chatbots become increasingly widespread, the researchers note, this behavior may have deeper societal and psychological implications.
