Anyone laboring under the impression that Facebook actually plans to do something about the waves of “fake news” flooding the social media platform got a wake-up call last week. Or at least it should have been a wake-up call.
A doctored video of Nancy Pelosi went viral in the days after the House speaker’s tense White House confrontation with President Donald Trump.
The video, taken from the first moments of a Pelosi press conference, was slowed just enough to make it appear the House speaker was slurring her words and was thus intoxicated. Posted on YouTube, Facebook and Twitter, it garnered millions of views and shares before the truth, in the form of fact-checkers, was able to catch up. Rudy Giuliani, Trump’s personal lawyer and the former mayor of New York, shared it with his 318,000 Twitter followers.
YouTube eventually took the video down. Twitter, as is its habit, ignored the controversy and did nothing. Facebook, still the dominant player in the online landscape, tried to find a way to have the best of both worlds, essentially decrying the video while leaving it up for the precious clicks that make the company money.
In a release, Facebook acknowledged the video was a fake, eventually tagging it as such, and said it would make sure it appeared less frequently in news feeds.
“We work hard to find the right balance between encouraging free expression and promoting a safe and authentic community,” a company spokesperson told the Los Angeles Times. “We believe that reducing the distribution of inauthentic content strikes that balance. But just because something is allowed to be on Facebook doesn’t mean it should get distribution. In other words, we allow people to post it as a form of expression, but we’re not going to show it at the top of News Feed.”
It’s a typically mealy-mouthed response from a company that has long been accused of — and has actually acknowledged — having a role in the spread of misinformation and outright lies since the days before the 2016 elections. When pressured, Facebook and its chief executive, Mark Zuckerberg, make doe-eyed promises to change, to do better. Then, once the heat is off, the hands-off attitude returns, and we are left with halfhearted “solutions.”
Kat Lo, a UC Irvine researcher, told the Times Facebook has “made progress” but hasn’t addressed the root of the problem.
“The changes are incremental,” Lo said. “It’s not like they’ve solved anything.”
“The best way to counter disinformation is to deplatform it,” Lo said. “To make it not visible and not shareable.”
Facebook, of course, wants everything to be shareable. It wants traffic — the company made $16.6 billion in advertising revenue last quarter. A viral video — even a fake like the one featuring the House speaker — helps drive profits, something the company slyly acknowledged in its statement to the Times.
“We don’t have a policy that stipulates that the information you post on Facebook must be true,” the company said.
Monika Bickert, the company’s head of global policy management, went one step further, saying on CNN, “We think it’s important for people to make their own informed choice for what to believe. Our job is to make sure we are getting them accurate information.”
The company is essentially saying, “Sure, we know plenty of people are posting lies. We even know what the lies are. But if you get fooled, it’s your own fault.”
That might seem fair enough, considering how much harmless embellishment fills what most of us post on Facebook. Our grandchildren can’t all be that cute, our cats and dogs that smart, our hair perfectly coiffed and our meals so photogenic.
But there’s a difference between kidding ourselves and crafting and sharing the lies that have contributed so greatly to the coarsening of our political discourse. It’s long past time that Zuckerberg and his company recognized that fact.