Business
AI startup Perplexity accused of ‘directly ripping off’ news outlets like Forbes, CNBC without proper credit
AI startup Perplexity operates a chatbot that has been “directly ripping off” articles written by news outlets such as CNBC and Forbes without giving proper credit or attribution, according to a scathing report.
The issue surfaced in a feature called “Perplexity Pages,” which displays articles that have been “curated” by the company while scraping details on various topics from third-party news outlets that have written stories about them, Forbes reported on Friday.
The news outlets aren’t credited by name within the curated article text, even though the wording of the articles closely matches that of the source. Instead, Perplexity includes what Forbes described as “small, easy-to-miss logos” that linked back to the original story.
In one case, Perplexity’s chatbot regurgitated a version of an exclusive, paywalled Forbes report on ex-Google CEO Eric Schmidt’s military drone project. Perplexity’s “curated” version, which has been viewed nearly 30,000 times, lifted near-verbatim passages and even what appeared to be an in-house graphic from Forbes’ original story.
“Our reporting on Eric Schmidt’s stealth drone project was posted this AM by @perplexity_ai,” Forbes Executive Editor John Paczkowski wrote on X. “It rips off most of our reporting. It cites us, and a few that reblogged us, as sources in the most easily ignored way possible.”
Forbes identified two other cases in which Perplexity Pages scraped news articles without giving proper credit – including an original CNBC report on Elon Musk’s decision to shift shipments of advanced computer chips to his xAI startup instead of Tesla, and a Bloomberg story on Apple’s plans to develop home robotics products.
In each instance, Perplexity used near-verbatim passages from the original article without naming the outlet in the copy.
Forbes, CNBC and Bloomberg did not immediately return requests for comment.
Perplexity AI is valued at more than $1 billion, with blue-chip investors that include Amazon founder Jeff Bezos, chipmaker Nvidia and billionaire Stanley Druckenmiller, Bloomberg reported in April.
Perplexity AI CEO Aravind Srinivas acknowledged the issue in an X post, but asserted that the chatbot cites third-party outlets more prominently than rival services such as Google Gemini, OpenAI’s ChatGPT and Microsoft’s Copilot.
Srinivas shared a screenshot with one view of Perplexity’s post on Eric Schmidt’s AI-powered drones in which a small hyperlink to the Forbes article was visible near the top of the page. He also sought to differentiate between Perplexity Pages and its separate core product, which is essentially an AI-powered chatbot.
“It has rough edges, and we are improving it with more feedback,” Srinivas wrote on X. “The core Perplexity product has, from day one, had appropriate source attribution in the most prominent way, unlike other chatbots on the market like ChatGPT, Gemini, and Copilot.”
“The pages and discover features will improve, and we agree with the feedback you’ve shared that it should be a lot easier to find the contributing sources and highlight them more prominently,” Srinivas added.
Forbes’ Paczkowski fired back, describing Perplexity’s actions as “little more than plagiarism.”
“There is no clear attribution, just tiny logos where our work is treated with the same weight as reblogs. It’s not ‘rough,’ it’s theft,” he said.
When reached for comment on Monday morning, a Perplexity AI spokesperson said the firm has “updated how we present sources on Pages” in response to Forbes’ reporting.
“Now, when a user opens a Page, all of the sources will be presented at the top, as well as in footnotes for each section,” the spokesperson said in a statement. “The sources are already live on the web version of Pages and will be rolling out to mobile this week.”
“We have always cared about giving attribution to content and have designed our core product (the answer engine) from the beginning to clearly cite its source materials, which most chatbots are unable to do reliably and prominently even today,” the spokesperson added.
Journalism outlets have regularly blasted AI firms in recent months for using their content to “train” chatbots without proper credit or compensation – and then using the chatbots to erode their audiences.
As The Post reported, critics have warned that the rise of AI ripoffs could decimate news publishers unless federal officials intervene.
Last November, the News Media Alliance – a nonprofit that represents more than 2,200 publishers, including The Post – warned that chatbots were creating “plagiarism stew” by lifting text in potential violation of copyright laws.
More recently, Google was blasted for adding auto-generated text summaries known as “AI Overviews” to the top of search results while demoting links to news outlets.
Google’s AI-powered search immediately began delivering bizarre answers, such as instructing users to eat rocks or add glue to their pizza.
Users later determined that the “pizza glue” response had been lifted directly from a decade-old, tongue-in-cheek Reddit post.