Death by a Thousand Personality Quizzes

AI-assisted internet posting is already in a race to the bottom.


One might assume that when your boss finally comes to tell you that the robots are here to do your job, he won’t also point out with enthusiasm that they’re going to do it 10 times better than you did. Alas, this was not the case at BuzzFeed.

Yesterday, at a virtual all-hands meeting, BuzzFeed CEO Jonah Peretti had some news to discuss about the automated future of media. The brand, known for massively viral stories aggregated from social media and being the most notable progenitor of what some might call clickbait, would begin publishing content generated by artificial-intelligence programs. In other words: Robots would help make BuzzFeed posts.

“When you see this work in action it is pretty amazing,” Peretti had promised employees in a memo earlier in the day. During the meeting, which I viewed a recording of, he was careful to say that AI would not be harnessed to generate “low-quality content for the purposes of cost-saving.” (BuzzFeed cut its workforce by about 12 percent weeks before Christmas.) Instead, Peretti said, AI could be used to create “endless possibilities” for personality quizzes, a popular format that he called “a driving force on the internet.” You’ve surely come across one or two before: “Sorry, Millennials, but There’s No Way You Will Be Able to Pass This Super-Easy Quiz,” for instance, or “If You Were a Cat, What Color Would Your Fur Be?”

These quizzes and their results have historically been dreamed up by human brains and typed with human fingers. Now BuzzFeed staffers would write a prompt and a handful of questions for a user to fill out, like a form in a proctologist’s waiting room, and then the machine, reportedly constructed by OpenAI, the creator of the widely discussed chatbot ChatGPT, would spit out uniquely tailored text. Peretti wrote a bold promise about these quizzes on a presentation slide: “Integrating AI will make them 10x better & be the biggest change to the format in a decade.” The personality-quiz revolution is upon us.

Peretti offered the staff examples of these bigger, better personality quizzes: Answer 7 Simple Questions and AI Will Write a Song About Your Ideal Soulmate. Have an AI Create a Secret Society for Your BFFs in 5 Easy Questions. Create a Mythical Creature to Ride. This Quiz Will Write a RomCom About You in Less Than 30 Seconds. The rom-com, Peretti noted, would be “a great thing for an entertainment sponsor … maybe before Valentine’s Day.” He demonstrated how the quiz could play out: The user—in this example, a hypothetical person named Jess—would fill out responses to questions like “Tell us an endearing flaw you have” (Jess’s answer: “I am never on time, ever”), and the AI would spit out a story that incorporated those details. Here’s part of the 250-word result. Like a lot of AI-generated text, it may remind you of reading someone else’s completed Mad Libs:

Cher gets out of bed and calls everyone they know to gather outside while she serenades Jess with her melodic voice singing “Let Me Love You.” When the song ends everyone claps, showering them with adoration, making this moment one for the books—or one to erase.

Things take an unexpected turn when Ron Tortellini shows up—a wealthy man who previously was betrothed to Cher. As it turns out, Ron is a broke, flailing actor trying to using [sic] Cher to further his career. With this twist, our two heroines must battle these obstacles to be together against all odds—and have a fighting chance.

There are many fair questions one might ask reading this. “Why?” is one of them. “Ron Tortellini?” is another. But the most important is this: Who is the content for? The answer is no one in particular. The quiz’s result is machine-generated writing designed to run through other machines—content that will be parsed and distributed by tech platforms. AI may yet prove to be a wonderful assistive tool for humans doing interesting creative work, but right now it’s looking like robo-media’s future will be flooding our information ecosystem with even more junk.

Peretti did not respond to a request for comment, but there’s no mistaking his interest here. Quizzes are a major traffic-driver for BuzzFeed, bringing in 1.1 billion views in 2022 alone, according to his presentation. They can be sold as sponsored content, meaning an advertiser can pay for an AI-generated quiz about its brand. And they spread on social media, where algorithmic feeds put them in front of other people, who click onto the website to take the quiz themselves, and perhaps find other quizzes to take and share. Personality quizzes are a perfect fit for AI, because although they seem to say something about the individual posting them, they actually say nothing at all: “Make an Ice Cream Cone and We’ll Reveal Which Emoji You Are” was written by a person, but might as well have been written by a program.

Much the same could be said about content from CNET, which has recently started to publish articles written at least in part by an AI program, no doubt to earn easy placement in search engines. (Why else write the headline “What Are NSF Fees and Why Do Banks Charge Them?” but to anticipate something a human being might punch into Google? Indeed, CNET’s AI-“assisted” article is one of the top results for such a query.) The goal, according to the site’s editor in chief, Connie Guglielmo, is “to see if the tech can help our busy staff of reporters and editors with their job to cover topics from a 360-degree perspective.” Reporting from Futurism has revealed that these articles have contained factual errors and apparent plagiarism. Guglielmo has responded to the ensuing controversy by saying, in part, that “AI engines, like humans, make mistakes.”

Such is the immediate path for robot journalism, if we can call it that: Bots will write content that is optimized to circulate through tech platforms, a new spin on an old race-to-the-bottom dynamic that has always been present in digital media. BuzzFeed and CNET aren’t innovating, really: They’re using AI to reinforce an unfortunate status quo, where stories are produced to hit quotas and serve ads against—that is, they are produced because they might be clicked on. Many times, machines will even be the ones doing that clicking! The bleak future of media is human-owned websites profiting from automated banner ads placed on bot-written content, crawled by search-engine bots, and occasionally served to bot visitors.

This is not the apocalypse, but it’s not wonderful, either. To state what was once obvious, journalism and entertainment alike are supposed to be for people. Viral stories—be they 6,000-word investigative features or a quiz about what state you actually belong in—work because they have mass appeal, not because they are hypertargeted to serve an individual reader. BuzzFeed was once brilliant enough to livestream video of people wrapping rubber bands around a watermelon until it exploded. At the risk of over-nostalgizing a moment that was in fact engineered for a machine itself—Facebook had just started to pay publishers to use its live-video tool—this was at least content for everyone, rather than no one in particular. Bots can be valuable tools in the work of journalism. For years, the Los Angeles Times has experimented with a computer program that helps quickly disseminate information about earthquakes, for example. (Though not without error, I might add.) But new technology is not in and of itself valuable; it’s all in how you use it.

Much has been made of the potential for generative AI to upend education as we’ve known it, and destabilize white-collar work. These are real, valid concerns. But the rise of robo-journalism has introduced another: What will the internet look like when it is populated to a greater extent by soulless material devoid of any real purpose or appeal? The AI-generated rom-com is a pile of nonsense; CNET’s finance content can’t be trusted. And this is just the start.

In 2021, my colleague Kaitlyn Tiffany wrote about the dead-internet theory, a conspiracy rooted in 4chan’s paranormal message board that posits that the internet is now mostly synthetic. The premise is that most of the content seen on the internet “was actually created using AI” and fueled by a shadowy group that hopes to “control our thoughts and get us to purchase stuff.” It seemed absurd then. It feels a little more real today.

Damon Beres is a senior editor at The Atlantic, where he oversees the Technology section.