According to a study published in Science Advances and reported by PsyPost, OpenAI’s GPT-3 can both inform and disinform more effectively than real people on social media. The researchers, including Federico Germani of the Institute of Biomedical Ethics and History of Medicine, focused on 11 topics prone to disinformation, such as climate change, vaccine safety, and COVID-19. They generated synthetic tweets using GPT-3 and collected real tweets from Twitter on the same topics.
The study found that people were better at recognizing disinformation in tweets written by real users compared to those generated by GPT-3. However, when GPT-3 produced accurate information, people were more likely to identify it as true compared to accurate information written by real users. Germani noted, “One noteworthy finding was that disinformation generated by AI was more convincing than that produced by humans.”
The study also revealed that participants struggled to distinguish tweets written by real users from those generated by GPT-3. As Germani put it, “This suggests that AI can convince you of being a real person more than a real person can convince you of being a real person.”