Researchers at the University of Zurich stirred up controversy when they revealed they had conducted a study on whether AI-generated comments could influence people’s opinions on the Reddit forum r/changemyview — without gathering informed consent from users. Northwestern University School of Medicine Professor Mohammad Hosseini explains how this one case illustrates several ethical issues with the use of AI in research.
In late April, details came to light about a covert experiment conducted on unsuspecting Reddit users on the debate forum known as r/changemyview.
Researchers at the University of Zurich unleashed AI chatbots posing as real humans on the forum to test their powers of persuasion. The bots were given invented backstories, such as a rape survivor or a Black man opposed to Black Lives Matter.
What they didn't have was consent. The experiment violated Reddit's terms of service, the forum's rules and, critics say, academic research standards. The researchers, who notified Reddit of the experiment only after the fact, have since apologized and said they won't publish the results. Reddit says it's increasing efforts to verify that users are human.
Marketplace’s Meghan McCarty Carino spoke to Mohammad Hosseini, a professor at Northwestern's medical school, about the potential harms that could come from a study like this one.
“‘Unethical’ AI research on Reddit under fire” - from Science
“CMV AI Experiment Update - Apology Received from Researchers” - from Reddit
“Reddit will tighten verification to keep out human-like AI bots” - from TechCrunch
“The ethics of using artificial intelligence in scientific research: new guidance needed for a new tool” - from Springer Nature
“Open Science at the generative AI turn: An exploratory analysis of challenges and opportunities” - from Quantitative Science Studies