It is no secret that the majority of modern media platforms exist and operate thanks to algorithms. Put simply, an algorithm is a set of rules or processes that can be followed in order to execute a calculation or solve a problem. They can be extremely simple, or extremely complex, accounting for multiple variables.
For both social media and search engines, algorithms are employed to quickly and simply present the most relevant information to a user. They are extremely useful for sifting through large amounts of information quickly and picking out what we are looking for. But they are also flawed.
In theory, the purpose of algorithms is to deliver the information that is most appropriate and relevant to your search or interests. On the surface, they are extremely useful – if you were searching for information about critical thinking, it would be next to impossible to find if your search returned every page that used the word “critical” or “thinking” as well as “critical thinking”, across the entire history of the internet. What’s more, algorithms prioritise information based on how current it is, whether it is an “authority” piece – detailed, trusted and well-sourced – and how closely it matches your search. The problem is that algorithms are fixed processes or rules: they cannot adapt or think critically themselves. This means that they can be manipulated, and that they lack the foresight to identify the patterns this creates.
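To make that concrete, here is a deliberately simplified sketch, in Python, of the kind of scoring a search algorithm might apply. The fields, weights and numbers are invented for illustration – they are not taken from any real search engine:

```python
# A minimal sketch of search-style ranking: recency + "authority" + query match.
# The weights and page fields here are assumptions made for illustration only.
from datetime import date

def score(page, query_terms):
    """Combine recency, trust and how closely the text matches the query."""
    days_old = (date.today() - page["published"]).days
    recency = 1 / (1 + days_old)                 # newer pages score higher
    authority = page["trust"]                    # e.g. how trusted the source is, 0..1
    words = page["text"].lower().split()
    match = sum(words.count(t) for t in query_terms) / len(words)
    return 0.3 * recency + 0.3 * authority + 0.4 * match

pages = [
    {"title": "Critical thinking guide", "published": date(2023, 5, 1), "trust": 0.9,
     "text": "critical thinking means questioning sources and evidence about thinking critically"},
    {"title": "Film review", "published": date(2024, 1, 10), "trust": 0.4,
     "text": "a critical review of a film about thinking machines"},
]

ranked = sorted(pages, key=lambda p: score(p, ["critical", "thinking"]), reverse=True)
print([p["title"] for p in ranked])   # the detailed, trusted guide outranks the film review
```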
The flaws in algorithms:
More of the same
In trying to present you with useful, relevant information, algorithms try to prioritise information that’s similar to your previous searches. On search engines, that might mean favouring a source you’ve visited before – making you more likely to see Fox News over CNN or vice versa – or it might mean profiling you and presenting you with information that other people with your interests have found useful. On social media, it means that your feed is not ordered chronologically, but instead shows you content similar to what you have engaged with before.
The problem with this is that algorithms cannot and do not account for natural human flaws. They don’t recognise that we have confirmation bias, and that the more often we see a piece of information, the more we start to believe it. Similarly, they don’t understand truth bias or the illusory truth effect, so they don’t know that we automatically assume information is true, and that repetition can override our knowledge that something is false, simply because of the way our brains are wired. Therefore, the more of the same information we see, the more of it we are likely to believe, and the less able we become to think critically.
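A toy example helps show what “more of the same” looks like in practice. The sketch below orders a pretend feed by overlap with topics you have engaged with before, rather than by time; the posts and topic tags are, of course, made up:

```python
# A toy illustration of a feed ordered by similarity to past engagement rather than time.
# The topics, posts and scoring rule are invented for illustration only.

def similarity(post_topics, liked_topics):
    """Fraction of the post's topics that overlap with topics you've engaged with before."""
    return len(post_topics & liked_topics) / len(post_topics)

previously_engaged = {"politics", "us-election"}

feed = [
    {"id": 1, "topics": {"politics", "us-election"}, "posted": "09:00"},
    {"id": 2, "topics": {"gardening"},               "posted": "12:00"},
    {"id": 3, "topics": {"politics", "economy"},     "posted": "08:00"},
]

# Chronological order would be 2, 1, 3; similarity ordering surfaces more of the same.
feed.sort(key=lambda p: similarity(p["topics"], previously_engaged), reverse=True)
print([p["id"] for p in feed])   # [1, 3, 2]
```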
Reinforcing the stereotype
We are each of us individuals, but in order to function, algorithms need to make assumptions that enable them to process large quantities of information in a short space of time. One of those assumptions involves categorising users by their behaviour and grouping similar users together. It’s a bit like assuming that you and all your friends like and want to do the same things – there is some truth in this, but you are still an individual, and that nuance is lost.
What this means is that you are more likely to see things that other people like you have already seen or engaged with. This might be based on the source of the information, the type of content, or even the medium through which it is delivered. That means you’re probably not seeing the things most relevant to you, but the things most relevant to people like you.
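Here is a small, hypothetical illustration of that grouping: the user is lumped in with “people like them”, and whatever that group has already engaged with is what gets recommended. The names, groups and items are all invented:

```python
# A toy sketch of "people like you" profiling: users are bundled into a group,
# and you are shown whatever that group engaged with most. All data is invented.
from collections import Counter

groups = {
    "news-junkies": {"alice", "bob", "you"},   # users assumed to behave alike
    "foodies": {"carol", "dan"},
}

engagement = [                                  # (user, item) interactions
    ("alice", "op-ed A"), ("alice", "op-ed B"),
    ("bob", "op-ed A"), ("carol", "recipe X"),
]

def recommend(user):
    group = next(members for members in groups.values() if user in members)
    seen_by_group = Counter(item for u, item in engagement if u in group and u != user)
    return [item for item, _ in seen_by_group.most_common()]

print(recommend("you"))   # ['op-ed A', 'op-ed B'] -- what people "like you" saw, not what you chose
```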
A different purpose
While attempting to be useful, algorithms also have a different purpose to you – you want relevant information, but the algorithm wants engagement, and that means it will prioritise information that is most likely to get a reaction from you. Sometimes those goals align. Sometimes they are at odds. Either way, you’re presented with whatever the algorithm deems most valuable at that moment in time.
Unfortunately, the antithesis of critical thinking is emotion. The more emotional we are, the less able we are to think critically. Emotion is also attractive, so the more outrageous, thought-provoking or inflammatory the content, the more engagement it is likely to get, and therefore the more the algorithms will prioritise it.
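As a rough illustration, imagine a ranking driven purely by the predicted chance of a reaction – the numbers below are made up, but the outcome is the point:

```python
# A toy sketch of engagement-optimised ranking: the score is the predicted chance of a
# reaction, so the more provocative post wins. The figures are invented for illustration.
posts = [
    {"headline": "Measured policy analysis", "predicted_reaction": 0.02},
    {"headline": "Outrageous claim!!!",      "predicted_reaction": 0.30},
]

# Your goal is relevance; the platform's goal is a reaction, so it sorts by predicted_reaction.
posts.sort(key=lambda p: p["predicted_reaction"], reverse=True)
print(posts[0]["headline"])   # the inflammatory post is shown first
```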
A self-fulfilling prophecy
As we’ve already touched on, algorithms treat the popularity of content as a proxy for “authority”. The more popular something is, the higher it will rank. The more you engage with certain things, the more you will be shown things that are the same or similar. The problem is that, by a process of elimination, your pool of content gets smaller and smaller. This is how people come to believe conspiracy theories, for example: they see more and more of the same thing, reinforcing something they might already be inclined to believe, and leading them down a rabbit hole of confirmation bias.
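That shrinking pool can be shown with a tiny, artificial simulation: each round, only the content you engaged with most gets re-surfaced, so the pool narrows round after round. The topics and engagement scores are invented for illustration:

```python
# A toy simulation of the self-reinforcing loop: each round the algorithm only re-surfaces
# the half of your feed you engaged with most, so the pool keeps shrinking.
feed = {"sport": 0.2, "science": 0.3, "music": 0.1, "politics": 0.6, "conspiracy": 0.8}

for round_number in range(1, 4):
    # keep only the most-engaged-with half of what you were shown last time
    keep = sorted(feed, key=feed.get, reverse=True)[: max(1, len(feed) // 2)]
    feed = {topic: feed[topic] for topic in keep}
    print(f"round {round_number}: {sorted(feed)}")

# round 1: ['conspiracy', 'politics']
# round 2: ['conspiracy']
# round 3: ['conspiracy']
```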
Magnifying misbehaviour
Sometimes, algorithms make mistakes. They are driven by keywords and common terms, but lack the intelligence to distinguish between tone and content. This means they can be exploited. When searching for a trending news story, there have been many instances where the stories at the top are in fact misinformation, simply because the authors have abused the way search engines rank keywords and geared their content to rank highly. That content then gains popularity – regardless of having no basis in truth – ranks at the top, and gains yet more momentum. This is often referred to as “magnifying misbehaviour”.
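To see how a keyword-driven ranking can be gamed, here is a deliberately naive scorer that simply counts keyword occurrences. Real search engines are far more sophisticated, but the sketch shows why stuffing a page with the right terms can pay off:

```python
# A toy illustration of keyword stuffing gaming a naive, keyword-driven ranking.
# The scoring is a deliberate over-simplification, not how any real search engine works.

def keyword_score(text, query_terms):
    """Counts raw keyword occurrences -- blind to tone, truth or quality."""
    words = text.lower().split()
    return sum(words.count(term) for term in query_terms)

genuine = "Officials confirmed the election results after a routine recount."
stuffed = ("election results election results SHOCKING election results fraud "
           "election results election results you won't believe the election results")

query = ["election", "results"]
print(keyword_score(genuine, query))   # 2
print(keyword_score(stuffed, query))   # far higher, despite having no basis in truth
```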
What can be done about it?
As we explored in our piece on misinformation, many of the search engines and social media platforms are actively trying to improve their algorithms in order to combat these flaws. Aside from this, practising the rule of five and actively seeking out multiple sources of information can help to broaden what the algorithms show you, and prevent your pool of information from shrinking. Finally, as always, self-led critical thinking is essential, and it is a skill we should all actively practise.