Fortune

‘Cesspool of AI crap’ or smash hit? LinkedIn’s AI-powered collaborative articles offer a sobering peek at the future of content

By Sharon Goldman

13 days ago

When Irene Malatesta learned that LinkedIn was launching collaborative articles last year, she was itching to participate. The collection of work-related tips and insights contributed by LinkedIn users on thousands of career topics—from global logistics to growth hacking—seemed like an ideal way for the San Francisco–based content strategist and marketing consultant to showcase her expertise and earn one of the platform’s coveted “top voice” badges.

When she took a look at the assortment of Q&As on the site, however, Malatesta quickly decided she wanted nothing to do with it. The prompt questions for various topics, which LinkedIn says are created with generative AI, and the answers provided by LinkedIn users, had an odd quality about them—many seemed overly basic or regurgitated; some veered into the nonsensical, “like AI-generated word salad,” she said.

Malatesta isn’t the only one to have had that reaction. Cassie Evans, a U.K.-based developer and tech educator who was prompted to contribute to LinkedIn’s collaborative articles, described them to Fortune as “laughably low-quality” and filled with “pointless AI junk.” On X, she got more colorful, calling the articles “an ever-deteriorating ouroboros of ‘thought leadership’ wank and AI word vomit.” Alex Cristache, a digital designer in Bucharest, Romania, shared his own unflattering verdict on X: a “cesspool of AI crap.”

The negative reactions stand in stark contrast to the cheer at LinkedIn, which touts collaborative articles as a slam-dunk success: The Microsoft-owned company announced last month that there have been over 10 million contributions and that weekly readership has increased more than 270% since September. And it says it plans to double down on the project with new features and capabilities.

The disconnect highlights a tricky problem that could become widespread in the AI era. As more businesses rely on generative AI for projects, the technology’s ability to produce instant content risks having a corrosive effect on quality that could drive away customers and users—even as companies celebrate gains and efficiencies with other metrics. More worrisome is that AI, by its very nature, lends itself to compounding the problem: Many of the ostensibly human users on LinkedIn appear to have contributed answers that were produced by AI, effectively creating an AI-to-AI conversation, which LinkedIn then uses to train its own AI models.

A LinkedIn spokesperson told Fortune that training AI models is “not the primary goal of the feature, and we don’t directly share that information outside of LinkedIn with others for the purpose of training their models.”

And while LinkedIn did not respond to user claims that many of the answers to collaborative articles appear to be generated by AI, it said the feature was driven by user demand: “We introduced collaborative articles because people on LinkedIn tell us they want a place for professionals to share knowledge and insights on everyday workplace challenges,” the spokesperson said, offering examples of contributions from Arianna Huffington and Barbara Corcoran. “They really want to learn from other people’s ‘been there, done that’ experience.”

To users like Cristache, the digital designer in Romania, however, the spate of AI content in collaborative articles, and the incentive for users to add more AI into the mix, have created an absurd feedback loop. And it’s not likely to stay confined to LinkedIn. Other social networks, including Instagram and Snapchat, are experimenting with AI-generated content in some form or another. And as Fortune recently reported, Elon Musk’s X platform is considering adding generative AI capabilities that can automatically compose tweets for premium users.

LinkedIn wants to tap into industry-specific user expertise

LinkedIn has close ties to generative AI. The social networking platform is owned by Microsoft, the largest partner and investor in ChatGPT-maker OpenAI. Within a few months of ChatGPT’s November 2022 launch, LinkedIn released a bevy of generative AI features, including AI-generated job descriptions, personalized writing suggestions, and collaborative articles.

The collaborative articles hub, launched in March 2023, is designed to tap into the deep well of industry-specific expertise within the network’s more than 1 billion users (a combined “10 billion years of professional knowledge,” according to LinkedIn’s blog post). The vast catalog of topics is suggested by AI, with the help of LinkedIn’s editorial team, and then turned into a sort of FAQ. The topic “Film Production,” for example, has more than 100 questions that LinkedIn users can weigh in on, including “How can you qualify to become a sound designer or recordist?” and “What is the best way to create tension and conflict in a documentary?”

LinkedIn’s Skills Graph, which uses machine learning models and natural language processing, matches each article with relevant member experts in certain topics “based on their work experience, skills proficiency, and prior engagement on the platform.” That’s how Evans, the U.K.-based developer, was invited to contribute.

To encourage user participation, LinkedIn offers a yellow “Community Top Voice” badge to top contributors of quality content, such as “Top Sales Voice” or “Top SEO Voice.” (The company makes clear, however, that this yellow badge is not the same as the blue LinkedIn “Top Voice” badge, which is invitation-only and features senior-level experts and leaders.) The badge is active for only 60 days, so users must keep contributing content to retain it.

The gamification can also encourage users to devise their own tactics to game the system and keep earning—and re-earning—badges.

Joe Apfelbaum, who runs a B2B digital marketing agency in Brooklyn, recently launched EvyAI, an AI assistant built on GPT-4 that helps people write and edit their LinkedIn collaborative article contributions. He told Fortune that the tool has 8,000 users from 35 countries.

“I started contributing to collaborative articles as soon as they came out, but I didn’t get a Top Voice badge till I started using EvyAI,” he said. He started the company, he explained, “because I trained over 1,000 companies on how to leverage LinkedIn, and the sales professionals and business development reps that we trained simply did not have time to do all the work themselves.”

Joshua Lee, who runs a LinkedIn-focused consultancy out of Austin, says he is a longtime LinkedIn partner who is often invited to test demos—and first heard about collaborative articles from the company’s product team. He says he has contributed to 200 to 300 articles since the feature launched and claims he “now has the most contributor badges on LinkedIn.”

He also helps all of his clients—who, he says, are “among the top 1% of leaders, entrepreneurs, and CEOs”—contribute to LinkedIn’s collaborative articles. And while he said he doesn’t use generative AI to write his own contributions (though he does use it for other things), he said LinkedIn’s AI strategy doesn’t bother him because his larger goal is ultimately to show up in generative AI search results.

“I would rather be able to have my information be the information that shows up in the AI,” he said. “I want to be the authority; I want to be the one that is cited. So if I know the easiest way is through collaborative articles, I’m going to spend my time, energy, and effort doing that.”

A potential AI-to-AI feedback loop

LinkedIn does not have a policy forbidding users from providing AI-generated contributions. And indeed, with AI “copilots” integrated into everything from email to word processing, machine-generated text is an increasingly accepted form of communication.

But like any tool, AI can be misused even by the well-intentioned. Jordan Bentley, a Massachusetts-based data scientist, told Fortune that some of the responses to collaborative articles questions should be “downright embarrassing” to the “otherwise well-credentialed” people posting them.

According to some observers, the problem starts with the AI-generated categories themselves. While LinkedIn says the topic questions are crafted with the assistance of its editorial team, many of the questions feature phrasing that betrays a robotic origin. And some are even worse.

Evans, the U.K.-based developer, pointed out that in one case she was invited to contribute to an article on ActionScript, a programming language used in the Flash era to build websites and games—but Flash is now a dead technology. “The article was titled ‘What are the current trends and challenges in ActionScript animation development and design?’” she recalled, adding: “There are no current trends—it was officially killed off in 2020. It was so ridiculous it almost seemed like a joke to be asked to contribute.”

Ryan Law, director of content marketing at Ahrefs, says some of the questions are just “word salad,” crying out for human review; in many cases, he said in a LinkedIn post, a simple glance by a real person is all that would be necessary to fix the problem. Instead, Law says, “a silly, time-wasting question that could have been disarmed by a simple ‘No, this doesn’t make sense,’ becomes a hot topic with dozens of people vying to answer it.”

And given that LinkedIn is training its AI on some of the content, the proverbial garbage data is being recycled ad infinitum.

LinkedIn acknowledges that it uses collaborative article contributions to train AI models, although that may not be clear to the average user. The spokesperson told Fortune the company “uses a variety of data sources, including collaborative article contributions in compliance with our user agreement and our privacy policy, to improve our products for our members. This includes training models to support and improve our ability to surface relevant content in the feed, job recommendations, and search, and personalized learning.” AI, data, and model training are not specifically mentioned in the user agreement and privacy policy, however.

Abhishek Gupta, founder and principal researcher at the Montreal AI Ethics Institute, pointed to a recent paper, “Self-Consuming Generative Models Go MAD,” which showed that AI-generated content entering the training feedback loop reduces the performance of future generations of models. “The underlying sources of the generative system largely remain the same, i.e., the internet-scale datasets—this means that source diversity for inputs would potentially be reduced,” he explained.

Richard Baraniuk, a professor of computer engineering at Rice University and coauthor of the MAD paper, added that the feedback loop can be compared to the spread of mad cow disease—when “diseased dead cow flesh was fed to young cows in an ever-repeating cycle that led to brain-destroying pathogens being amplified beyond control.”

Unfortunately, he said, there is no silver bullet to solve the problem. “You simply must have enough fresh real data,” he said. If not, over time, the collaborative article content will become “duller and duller and less and less informative.”

Some LinkedIn users are pushing back

Some are pushing back by contributing sarcastic comments to the collaborative articles. U.K.-based SEO consultant Nikki Pilkington, for example, was so frustrated that people she admired were earnestly answering AI-generated articles just to earn a Top Voice badge that she decided to take advantage of the lax content moderation with some humorous, clearly non-AI comments. In one collaborative article titled “What are the most common causes of burnout for content marketers?” she wrote: “Being encouraged to contribute to some of the most awful AI-written articles I have ever seen. Here. On LinkedIn. If it doesn’t contribute to burnout, it will definitely make you lose your will to live.”

Nevertheless, Pilkington was still awarded the Top Copywriting Voice badge. “I wanted to prove that the articles were useless and the badges meant nothing,” she told Fortune.

LinkedIn has a history of retiring features that don’t live up to expectations, including stories and reading lists. But at the moment, the increased engagement and the fact that LinkedIn is continuing to develop and push the product “suggests that there’s a lot to be gained here for them,” Pilkington said.

But Irene Malatesta, the San Francisco–based content strategist, told Fortune that she thinks LinkedIn is “at a tipping point.” Meaningless comments and article contributions are not just worthless, she argued. “They are detrimental to the platform as a whole because they get in the way of the genuine, valuable professional interactions and info sharing that make the platform special,” she said. “I would hate to lose what has been an incredibly useful platform for many years.”

This story was originally featured on Fortune.com
