How “engagement” makes you vulnerable to manipulation and misinformation on social media

Illustration by Klawe Rzeczy, used under a Creative Commons license.

Facebook has been quietly experimenting with reducing the amount of political content it puts in users’ news feeds. The move is a tacit acknowledgment that the way the company’s algorithms work can be a problem.

The heart of the matter is the distinction between provoking a response and providing content people want. Social media algorithms — the rules their computers follow in deciding the content that you see — rely heavily on people’s behavior to make these decisions. In particular, they watch for content that people respond to or “engage” with by liking, commenting and sharing.
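As a rough illustration of the idea, the snippet below is a toy sketch of engagement-based ranking, not any platform's actual code: the post names, weights and scoring function are all made up for this example. The point is that a post provoking lots of comments and shares can outrank a better-liked but quieter one.

```python
# Toy sketch of engagement-based feed ranking (hypothetical weights,
# not any real platform's algorithm).

def engagement_score(post):
    # Comments and shares are weighted above likes here purely for
    # illustration; real systems tune such weights constantly.
    return post["likes"] + 2 * post["comments"] + 3 * post["shares"]

def rank_feed(posts):
    # Most "engaging" content floats to the top of the feed.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "calm-explainer", "likes": 120, "comments": 5, "shares": 2},
    {"id": "outrage-bait", "likes": 80, "comments": 60, "shares": 40},
]
print([p["id"] for p in rank_feed(posts)])
# → ['outrage-bait', 'calm-explainer']
```

Note that nothing in the scoring function asks whether the content is accurate or valuable, only whether people reacted to it.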

As a computer scientist who studies the ways large numbers of people interact using technology, I understand the logic of using the wisdom of the crowds in these algorithms. I also see substantial pitfalls in how the social media companies do so in practice.

From lions on the savanna to likes on Facebook

The concept of the wisdom of crowds assumes that using signals from others’ actions, opinions and preferences as a guide will lead to sound decisions. For example, collective predictions are normally more accurate than individual ones. Collective intelligence is used to predict financial markets, sports, elections and even disease outbreaks.
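The statistical intuition behind this can be shown with a small simulation (an illustrative sketch, with made-up numbers): if many people make independent, noisy guesses at a quantity, their errors tend to cancel, so the crowd average lands far closer to the truth than a typical individual.

```python
import random

random.seed(42)

true_value = 100.0  # the quantity everyone is trying to estimate
# Each of 1,000 individuals guesses with independent random error.
guesses = [true_value + random.gauss(0, 20) for _ in range(1000)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - true_value)
avg_individual_error = sum(abs(g - true_value) for g in guesses) / len(guesses)

# Independent errors largely cancel in the average, so the crowd's
# error is a small fraction of a typical individual's error.
print(f"crowd error: {crowd_error:.2f}")
print(f"average individual error: {avg_individual_error:.2f}")
```

The catch, as the rest of the article argues, is that this only works when the guesses are independent; on social media, everyone is watching everyone else.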

Throughout millions of years of evolution, these principles have been coded into the human brain in the form of cognitive biases that come with names like familiarity, mere exposure and the bandwagon effect. If everyone starts running, you should also start running; maybe someone saw a lion coming and running could save your life. You may not know why, but it’s wiser to ask questions later.

Your brain picks up clues from the environment — including your peers — and uses simple rules to quickly translate those signals into decisions: Go with the winner, follow the majority, copy your neighbor. These rules work remarkably well in typical situations because they are based on sound assumptions. For example, they assume that people often act rationally, it is unlikely that many are wrong, the past predicts the future, and so on.

Technology allows people to access signals from much larger numbers of other people, most of whom they do not know. Artificial intelligence applications make heavy use of these popularity or “engagement” signals, from selecting search engine results to recommending music and videos, and from suggesting friends to ranking posts on news feeds.

Not everything viral deserves to be

Our research shows that virtually all web technology platforms, such as social media and news recommendation systems, have a strong popularity bias. When applications are driven by cues like engagement rather than explicit search engine queries, popularity bias can lead to harmful unintended consequences.
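One way to see why popularity bias is self-reinforcing is a toy feedback-loop simulation (again, an illustration, not any real recommender): ten items start out identical in quality, but the system recommends in proportion to past engagement, so whichever items get lucky early accumulate an ever-larger share of attention.

```python
import random

random.seed(1)

NUM_ITEMS = 10
ROUNDS = 10_000

# All items start equally "engaging" -- any differences that emerge
# below are produced by the feedback loop alone, not by quality.
engagement = [1] * NUM_ITEMS

for _ in range(ROUNDS):
    # The recommender surfaces items in proportion to past engagement,
    # and being surfaced earns more engagement: a rich-get-richer loop.
    shown = random.choices(range(NUM_ITEMS), weights=engagement)[0]
    engagement[shown] += 1

print(sorted(engagement, reverse=True))
```

Running this typically produces a highly skewed distribution: a few items dominate even though all ten were interchangeable at the start, which is the sense in which "not everything viral deserves to be."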

Continue reading: NiemanLab
