Fake Review Ban Proposal

Online customer reviews for a product are displayed on a computer in New York in 2019. Jenny Kane/Associated Press, file

Fake reviews are ruining the web. But there’s some new hope to fight them.

The Federal Trade Commission on Friday proposed new rules aimed at businesses that buy, sell, and manipulate online reviews. If the rules are approved, they’ll carry a big stick: a fine of up to $50,000 for each fake review, for each time a consumer sees it.

That could add up fast.

It’s the biggest step to date by the federal government to deter the insidious market for buying and selling fake reviews, though the FTC’s rules don’t do as much to hold big review sites like Yelp, Google, Tripadvisor, and Amazon directly accountable. (Amazon founder Jeff Bezos owns The Washington Post. Interim chief executive Patty Stonesifer sits on Amazon’s board.)

You’ve seen it before: Thousands of conspicuously glowing five-star reviews for a mediocre product. Perhaps a merchant has even offered to pay you to leave a positive review. This kind of fraud undermines our collective power as consumers.

“Anyone who’s done any shopping online knows that trying to get objective information about the product is so fraught because there’s so much commercial misinformation, so many deceptive reviews,” says Samuel Levine, director of the FTC’s Bureau of Consumer Protection.

Consumer advocacy groups and researchers such as U.S. PIRG estimate that as much as 30 to 40% of online reviews are fabricated or otherwise not genuine, though the rate of fakes varies widely by type of product and website.

There are global businesses dedicated to generating fake reviews for scammers and merchants looking for a shortcut. And the problem threatens to explode in an era of artificial intelligence tools like ChatGPT that can generate remarkably humanlike writing.

Yet the federal government’s approach to the problem has been to address it largely case by case through lawsuits – until now.

WHAT THE FTC’S FAKE-REVIEW RULES DO

The FTC’s view is that fake reviews have always been against the law because they mislead consumers. But its proposed rules, which are open for two months of comment before they could be codified, would draw some bright red lines that clarify who’s responsible – and empower the FTC to take more action.

So what’s against the rules? No-gos include reviews that misrepresent someone’s experience with a product or that claim to be written by someone who doesn’t exist. Reviews also can’t be written by insiders such as company employees without clear disclosures.

The rules apply not only to the people who write fake reviews but also to the middlemen who procure them and the companies that pay for them and know – or should have known – that they’re fake.

There are some gray zones. What if a business asks its real customers to leave it a review? The FTC tells me that’s still allowed because it’s a critical tool for small businesses to build an online reputation. The rules also don’t specifically forbid giving legitimate customers a gift card for leaving a review, so long as they’re not required to express a particular opinion – though it’s a good idea to disclose the incentive if it’s a significant amount of money.

The rules also forbid a few more shady tactics, such as review “hijacking.” That’s when a merchant takes a product listing page filled with legitimate reviews and swaps in a different product that those customers never actually used. (Earlier this year, the FTC brought its first enforcement action against this practice, fining a supplement maker $600,000 for doing it on Amazon.)

A business can’t run a website that claims to host independent reviews while covertly selling its products and services, something that happens a lot with tech products. A business also can’t suppress negative reviews, such as by using intimidation or legal threats.

“It’s really important to deter the practice upfront so that the people or businesses that engage in these practices know that they could face a really heavy price,” says the FTC’s Levine. In addition to the fines, the FTC would also have the ability to recover money directly for consumers harmed by the fakes.

So how are they going to enforce this? The FTC says codifying rules can help it be much more efficient in court – but it isn’t getting any additional enforcement resources. It could also still face hurdles trying to go after offending businesses that are based overseas, if they’re in countries that don’t have a history of working with the FTC.

WHAT THE FTC’S FAKE-REVIEW RULES DON’T DO 

Many consumer advocates say fixing the problem requires addressing the entire fake-review economy.

For starters, Facebook, Twitter, and other social media sites provide all-too-easy forums where companies can recruit and hire fake-review writers. Facebook has taken down some fake-review groups, and Amazon last year sued the leaders of more than 10,000 fake-review groups.

And most of all, review platforms and retailers such as Yelp, Google, and Amazon have ultimate control over what they publish on their sites, as well as the most information about who’s leaving the reviews. These big companies decide which reviews they leave up and what kind of proof they require to leave one – and they also profit from hosting them.

Yet the FTC’s rules wouldn’t extend liability to either social media or the review sites themselves unless those companies are directly involved in procuring the fake reviews. There’s also no requirement for sites to verify a reviewer’s identity or confirm that they actually used the product.

“Many of them assert immunity under Section 230 of the Communications Decency Act,” says the FTC’s Levine, a reference to the law that shields online forums from responsibility for content that others publish on them. That would make it hard for the FTC to hold them accountable, even if it wanted to.

The companies say they take the problem seriously: Amazon says it blocked more than 200 million suspected fake reviews in 2022, and Yelp says its software flagged 19% of reviews that year as “not recommended.” Earlier this month, Google sued an individual and a company that it says posted 350 fraudulent Google business profiles and tried to bolster them with more than 14,000 fake reviews.

The problem is that even that’s not enough. “On any given day, I can . . . find thousands of fakes myself without any automation. That’s just me alone – one person,” says Kay Dean, who runs the organization Fake Review Watch. “They don’t have much incentive to self-police – there are no repercussions.”

One idea: Review sites could be more transparent about when they’ve taken down fake reviews so that consumers and outside investigators can better track the activity. “We deserve more data and more transparency into what led to review content being displayed,” says Saoud Khalifah, founder of Fakespot, which uses AI to try to flag fake reviews.

The big review sites are running out of excuses. “Regardless of the liability regime, it is in the interests of consumers and the businesses that use these platforms for them to be policing this problem better. They have the most visibility into what’s happening, they are in the best position often to stop it, and we want them to be doing more,” says Levine.