How Facebook Became the Opium of the Masses (Frantisek Vrabel)

In the war on disinformation, it is difficult to identify the enemy. Journalists, politicians, governments, and even grandparents have been accused of contributing to the spread of lies on the Internet.
While none of these groups is entirely innocent, the real adversary is more prosaic. As Facebook whistleblower Frances Haugen said late last year, the blame lies with the social media algorithms that amplify and spread misinformation.
Since its inception in 2004, Facebook has evolved from a student social network into a surveillance monster that is destroying social cohesion and democracy around the world. Facebook collects vast amounts of user data, including intimate facts such as body weight and pregnancy status, to map its users' social DNA. The company then sells this information to anyone who wants to "micro-target" its 2.9 billion users, from shampoo manufacturers to Russian and Chinese intelligence agencies. In this way, Facebook allows third parties to manipulate minds and trade in "human futures": predictive models of the choices people are likely to make.
All over the world, Facebook has been used to sow mistrust in democratic institutions. Its algorithms have contributed to real-world violence, from genocide in Myanmar to terrorist recruitment in South America, West Africa, and the Middle East. Lies about election fraud in the United States, promoted by former President Donald Trump, flooded Facebook in the run-up to the January 6 riot at the US Capitol. Meanwhile, in Europe, Facebook amplified Belarusian leader Alexander Lukashenko's perverse attempts to weaponize migrants against the European Union.
In the Czech Republic, disinformation emanating from Russia and disseminated on the platform has flooded Czech cyberspace, amplified by Facebook's algorithms. One analysis conducted by my company found that the average Czech citizen is exposed to 25 times more misinformation about the COVID-19 vaccine than the average American. The situation is so dire, and the government's response so inept, that Czechs rely on civil society, including volunteers known as the Czech Elves, to monitor and counter this influence.
Attempts to reduce the threat Facebook poses to democracy have so far failed miserably. In the Czech Republic, Facebook has partnered with Agence France-Presse to identify malicious content. But with one part-time employee and a monthly quota of just ten questionable posts, these efforts are a drop in the ocean of disinformation. The "Facebook Files," published by the Wall Street Journal, confirm that Facebook takes action against "only 3 to 5 percent of hate speech."
Facebook has given users the option to opt out of personalized and political ads, but this is a token gesture. Some organizations, such as Ranking Digital Rights, have called on the platform to disable ad targeting by default. Even that is not enough. Microtargeting, the heart of Facebook's business model, relies on artificial intelligence to capture users' attention, maximize engagement, and disable critical thinking.
In many ways, microtargeting is the digital equivalent of the opioid crisis. But the US Congress has taken aggressive steps to protect people from opioids through legislation aimed at increasing access to treatment, education, and alternative medicines. To break the world's addiction to fake news and lies, lawmakers must recognize the disinformation crisis for what it is and take similar steps, starting with meaningful regulation of microtargeting.
The problem is that no one outside Facebook knows how the company's complex algorithms work, and deciphering them could take months, if not years. This means regulators will have no choice but to rely on Facebook insiders to show them how the machinery works. To encourage this cooperation, Congress should grant these whistleblowers full civil and criminal immunity, as well as financial compensation.
Regulating social media algorithms may seem complicated, but it is easy compared to the even greater digital dangers on the horizon. Deepfakes, the large-scale manipulation of videos and images by artificial intelligence to influence public opinion, are barely discussed in Congress. While lawmakers worry about the threats posed by traditional content, deepfakes pose an even greater threat to privacy, democracy, and national security.
Meanwhile, Facebook is becoming ever more dangerous. A recent investigation by MIT Technology Review revealed that Facebook has been funding misinformation by "paying millions of advertising dollars to clickbait actors" through its advertising platform. And CEO Mark Zuckerberg's plans to build a metaverse, "a convergence of physical, augmented and virtual reality," should scare regulators around the world. Just imagine the damage these unregulated artificial intelligence algorithms could do if they were allowed to create a new immersive reality for billions of people.
In a statement after a recent hearing in Washington, DC, Zuckerberg reiterated a suggestion he has made before: regulate us. "I don't believe private companies should make all of these decisions on their own," he wrote on Facebook. "We're committed to doing the best work we can, but at some level, the right body to assess tradeoffs between social equities is our democratically elected Congress."
Zuckerberg is right: Congress does have an obligation to act. But Facebook has an obligation to act, too. It can start by showing Congress the tradeoffs between social equities that it continues to make, and how it makes them. Until Facebook opens its algorithms to scrutiny, conducted with the know-how of its own experts, the war on disinformation will remain hopeless, and democracies around the world will remain at the mercy of an unscrupulous, renegade industry.
Frantisek Vrabel
Frantisek Vrabel is the CEO and founder of Semantic Visions, a Prague-based analytics firm that collects and analyzes 90 percent of the world's online news content. -- Ed.