Even with MIT Technology Review, which allows OpenAI’s crawlers, the chatbot cited a syndicated copy rather than the original.
The Tow Center found that all publishers risk misrepresentation by ChatGPT Search:
- Enabling crawlers doesn't guarantee visibility.
- Blocking crawlers doesn't prevent content from showing up.
These issues raise concerns about OpenAI's content filtering and its approach to journalism, both of which may push readers away from original publishers.
OpenAI’s Response
OpenAI responded to the Tow Center’s findings by stating that it supports publishers through clear attribution and helps users discover content with summaries, quotes, and links.
An OpenAI spokesperson stated:
“We support publishers and creators by helping 250M weekly ChatGPT users discover quality content through summaries, quotes, clear links, and attribution. We’ve collaborated with partners to improve in-line citation accuracy and respect publisher preferences, including enabling how they appear in search by managing OAI-SearchBot in their robots.txt. We’ll keep enhancing search results.”
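The "publisher preferences" OpenAI refers to are set in a site's robots.txt file. As a rough sketch (the directives below are illustrative, not guidance from OpenAI), a publisher that wants to appear in ChatGPT Search while opting out of model-training crawls might use something like:

# Allow OpenAI's search crawler, block its training crawler
User-agent: OAI-SearchBot
Allow: /

User-agent: GPTBot
Disallow: /

OAI-SearchBot governs how a site appears in ChatGPT Search, while GPTBot is OpenAI's separate crawler for collecting training data; the Tow Center's findings suggest these controls don't fully determine whether or how content surfaces.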
While the company says it has worked to improve citation accuracy, it acknowledges that specific misattribution issues are difficult to address. OpenAI says it remains committed to improving its search product.
Looking Ahead
If OpenAI wants to collaborate with the news industry, it should ensure publisher content is represented accurately in ChatGPT Search.
Publishers currently have little leverage and are closely watching the legal cases against OpenAI, whose outcomes could affect content usage rights and give publishers more control over how their work is used.
As generative search products like ChatGPT change how people engage with news, OpenAI must demonstrate a commitment to responsible journalism to earn user trust.