How to stop Perplexity and save the web from bad AI

We can still have the internet we want — but we have to try new business models

I.

For a while now, I’ve been gloomy about the state of the web. Plagiarism engines like Perplexity and Arc Search have attracted millions of users by ripping off other people’s work, depriving publishers of the traffic and advertising revenue that once sustained them. They have been successful enough that Google is now following their lead.

Today, I want to talk about a more positive vision for the future of the internet — one where AI companies and creators work hand in hand to grow the web again, sharing the wealth they create with one another.

Before I get there, though, it’s worth taking a moment to reflect on how bad the status quo has gotten.

Earlier this month, Forbes noticed that Perplexity had been stealing its journalism. The AI startup had taken a scoop about Eric Schmidt’s new drone project and repurposed it for its new “pages” product, which creates automated, book report-style web pages based on user prompts. Perplexity had apparently decided that Forbes’ reporting was a good way to show off what its plagiarism engine can do.

Here’s Randall Lane, Forbes’ chief content officer, in a blog post.

Not just summarizing (lots of people do that), but with eerily similar wording, some entirely lifted fragments — and even an illustration from one of Forbes’ previous stories on Schmidt. More egregiously, the post, which looked and read like a piece of journalism, didn’t mention Forbes at all, other than a line at the bottom of every few paragraphs that mentioned “sources,” and a very small icon that looked to be the “F” from the Forbes logo – if you squinted. [...]

Perplexity then sent this knockoff story to its subscribers via a mobile push notification. It created an AI-generated podcast using the same (Forbes) reporting — without any credit to Forbes, and that became a YouTube video that outranks all Forbes content on this topic within Google search. 

Any reporter who did what Perplexity did would be drummed out of the journalism business. But CEO Aravind Srinivas attributed the problem here to “rough edges” on a newly released product, and promised attribution would improve over time. “We agree with the feedback you've shared that it should be a lot easier to find the contributing sources and highlight them more prominently,” he wrote in an X post.

In person, Srinivas can come across as earnest and a bit naive, as I learned when he came on Hard Fork in February. But any notion that Perplexity’s problems stem from a simple misunderstanding was dashed this week when Wired published an investigation into how the company sources answers for users’ queries. In short, Wired found compelling evidence that Perplexity is ignoring the Robots Exclusion Protocol, which publishers and other websites use to grant or deny permission to automated crawlers and scrapers.

Here are Dhruv Mehrotra and Tim Marchman:

Until earlier this week, Perplexity published in its documentation a link to a list of the IP addresses its crawlers use—an apparent effort to be transparent. However, in some cases, as both Wired and Knight were able to demonstrate, it appears to be accessing and scraping websites from which coders have attempted to block its crawler, called Perplexity Bot, using at least one unpublicized IP address. The company has since removed references to its public IP pool from its documentation. [...]

Wired verified that the IP address in question is almost certainly linked to Perplexity by creating a new website and monitoring its server logs. Immediately after a Wired reporter prompted the Perplexity chatbot to summarize the website's content, the server logged that the IP address visited the site. This same IP address was first observed by Knight during a similar test.
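
For anyone unfamiliar with it, the Robots Exclusion Protocol is nothing exotic: a site publishes a plain-text robots.txt file at its root (for example, "User-agent: PerplexityBot" followed by "Disallow: /"), and well-behaved crawlers check it before fetching anything. Here is a minimal sketch of what compliance looks like using Python's standard library; the publisher URL and the user-agent string are stand-ins for illustration, not verified values:

```python
from urllib import robotparser

# A compliant crawler consults robots.txt before fetching a page.
# The site and article URLs here are hypothetical.
rp = robotparser.RobotFileParser()
rp.set_url("https://example-publisher.com/robots.txt")
rp.read()

if rp.can_fetch("PerplexityBot", "https://example-publisher.com/some-article"):
    print("robots.txt permits this fetch")
else:
    # This is the answer Wired's test site was giving, and the content
    # was reportedly scraped anyway, from an unlisted IP address.
    print("robots.txt disallows this fetch; a compliant crawler stops here")
```

The protocol has no enforcement mechanism. It is a handshake that works only as long as crawler operators choose to honor it, which is exactly what Wired's server-log test calls into question.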

Forbes sent Perplexity a cease-and-desist letter, and I imagine it won’t be the last publisher to do so. There are open legal questions about whether copyrighted material can be used to train large language models or answer chatbot queries, but I see no legal way Perplexity can get away with one of its other core techniques for building pages: using copyrighted images from Getty, the Wall Street Journal, Forbes and others. You simply are not allowed to re-publish other people’s copyrighted photos and illustrations without permission, even if your plagiarism engine is new and has “rough edges.”

Perhaps Perplexity will clean up its act; once it came under fire, the company ran to Semafor to promise that it is “working on” deals with publishers. In the meantime, though, I’ve come to think of it as the Clearview AI of generative artificial intelligence companies: scraping billions of pieces of data without permission and daring courts to stop it. 

Like Clearview, Perplexity’s core innovation is ethical rather than technical. In the recent past, it would have been considered bad form to steal and repurpose journalism at scale. Perplexity is making a bet that the advent of generative AI has somehow changed the moral calculus to its benefit. 

“I think we need to work together to build all these things, rather than trying to see it as, hey, like you’re taking my stuff and using it,” Srinivas told us in February. 

But then he just kept taking everyone’s stuff and using it. The working together part, I guess, is meant to come later.

II.

One path forward for the web, as I shared on a recent episode of Search Engine, is the Fediverse. Decentralized, federated apps; portable identities and follower graphs; permissionless innovation on open protocols: this is a way journalists can once again begin to build audiences — stable ones! — rather than simply courting traffic. This is a years-long project, and I can only barely see the outlines of it taking shape. But it’s an appealing alternative to a world where all content is subsumed into a large language model and accessed by an opaque and proprietary set of algorithms. 

But this is a long-term solution, and a partial one. And it carries with it the embedded assumption that today’s AI systems cannot be reshaped in ways that actually grow the web, and pay for the labor of the people who make it. The Fediverse is about giving up on the consumer internet as we know it today — the big walled gardens, the metastasizing LLMs — and trying to build something different.

Tim O’Reilly is thinking differently. As a publisher, investor, and open source advocate, O’Reilly sits at the intersection of many of the business problems and opportunities presented by AI. On Tuesday, he offered his solution to parasitic companies like Perplexity: new business models in which AI companies pay creators based on how much of their material they use.

O’Reilly is starting with his own publishing business, sharing a portion of subscription revenue with (or paying a fixed fee to) authors when it uses AI to generate summaries, test questions, translations, or other derivative works based on their writing. 

He concludes:

When someone reads a book, watches a video, or attends a live training, the copyright holder gets paid. Why should derivative content generated with the assistance of AI be any different? Accordingly, we have built tools to integrate AI-generated products directly into our payment system. This approach enables us to properly attribute usage, citations, and revenue to content and ensures our continued recognition of the value of our authors’ and teachers’ work.

And if we can do it, we know that others can too.
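
O’Reilly doesn’t publish the mechanics, and the arithmetic behind this kind of attribution is simple; the hard part is instrumenting AI products so that usage is actually logged per source, which is what he says his tools now do. Here is a hedged sketch, with every author name and dollar figure invented for illustration, of how a publisher might split a pool of AI-derived revenue among authors in proportion to how often their material was used:

```python
from collections import Counter

def split_revenue(revenue_pool: float, usage_counts: Counter) -> dict[str, float]:
    """Split a revenue pool among authors in proportion to usage.

    usage_counts maps each author to the number of times their material
    was used in AI-generated summaries, test questions, translations, etc.
    """
    total_uses = sum(usage_counts.values())
    if total_uses == 0:
        return {}
    return {
        author: revenue_pool * uses / total_uses
        for author, uses in usage_counts.items()
    }

# Hypothetical month: $1,000 of subscription revenue earmarked for
# AI-derived works, attributed across three invented authors.
usage = Counter({"author_a": 120, "author_b": 60, "author_c": 20})
print(split_revenue(1_000.00, usage))
# {'author_a': 600.0, 'author_b': 300.0, 'author_c': 100.0}
```

A fixed per-use fee, which O’Reilly also mentions, is even simpler: multiply each author’s usage count by the fee instead of dividing up a pool.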

To O’Reilly, this view of AI is a natural extension of the modern web, which is built on what he calls an “architecture of participation.” The earlier web consisted of giant walled gardens like AOL and MSN, which sought to keep as much activity within their own borders as possible. In this view, companies like Google, OpenAI, and Perplexity are all competing to become the next AOL. It is a vision in which most of the benefits of AI are reaped by a very small number of companies.

But this would be a mistake, he writes, if only because the current AI business models are ultimately self-defeating. “If the long-term health of AI requires the ongoing production of carefully written and edited content — as the currency of AI knowledge certainly does — only the most short-term of business advantage can be found by drying up the river AI companies drink from,” O’Reilly writes. “Facts are not copyrightable, but AI model developers standing on the letter of the law will find cold comfort in that if news and other sources of curated content are driven out of business.”

We know that AI companies are running out of data to train their frontier models on. Given that fact, it seems ludicrous that companies like Perplexity are building systems that all but ensure they will have less data to train on in the future.

O’Reilly is taking the opposite approach. And while it remains to be seen whether the average writer on his platform benefits meaningfully from AI royalties, if nothing else he has gotten the incentive structure right. Pay people to create high-quality writing and other content; use that content with permission to train powerful AI systems; and share the wealth that those systems create to fund and incentivize the production of further high-quality writing.

If Srinivas meant it when he said “we need to work together to build all these things,” he can now look to O’Reilly for a powerful example of what working together actually looks like.

On the podcast this week: Kevin and I debate the surgeon general's push for a warning about teens and social media. Then, Renee DiResta — most recently of the Stanford Internet Observatory — stops by to discuss what happened and tell us about her new book, Invisible Rulers. Plus: the Times' David Yaffe-Bellany joins to explain how crypto money is shaking up the 2024 election.

Apple | Spotify | Stitcher | Amazon | Google | YouTube

Those good posts

For more good posts every day, follow Casey’s Instagram stories.

Talk to us

Send us tips, comments, questions, and AI business models: casey@platformer.news and zoe@platformer.news.