The Myth of the Objective Algorithm / by Christoffer Andersson

[Image: Euclid's algorithm as structured blocks]

We live in a time of rapid technological development, a time when the idea of “digitalisation” raises all sorts of hopes about efficiency and precision, increased productivity, and objective and rational decision-making processes. Behind many of these digital technologies is a pre-programmed algorithm that is presented as the key to improving processes: it moves work from subjective humans to objective machines that are not affected by circumstances in ways that disturb and distort the process. But is there really such a thing as an objective algorithm?

An article in the Wall Street Journal this week reported that most companies on the US Fortune 500 list use artificial intelligence (AI) in their recruitment processes. The technology is used to scan applications for keywords, as a way to identify the most interesting candidates for specific jobs. The keywords are connected to personality traits rather than to competences and skills, which means that recruitment takes place differently than it would if a human being had made the first scan.
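To make the point concrete, here is a minimal, hypothetical sketch of how such keyword screening could work in principle. The keyword list, the weights and the threshold are all invented for illustration and say nothing about how any actual vendor's system is built; the point is that every one of those choices is made by a person.

```python
# Hypothetical sketch of keyword-based application screening.
# The keywords, weights and threshold below are invented for illustration;
# they are exactly the kind of human choices that get built into an algorithm.

KEYWORDS = {
    "driven": 2.0,
    "competitive": 1.5,
    "team player": 1.0,
    "flexible": 1.0,
}
THRESHOLD = 3.0  # applications scoring below this never reach a human reader


def score_application(text: str) -> float:
    """Sum the weights of every chosen keyword that appears in the application."""
    text = text.lower()
    return sum(weight for keyword, weight in KEYWORDS.items() if keyword in text)


def passes_first_screen(text: str) -> bool:
    """Decide whether the application moves on to the next step of recruitment."""
    return score_application(text) >= THRESHOLD


if __name__ == "__main__":
    application = "I am a driven and competitive engineer with ten years of experience."
    print(passes_first_screen(application))  # True -- but only given these particular keywords
```

A candidate who describes the same experience without using the chosen words scores zero and is filtered out, which is precisely the kind of consequence discussed below.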

It is becoming more and more common for robots and AI to perform routine jobs with a certain degree of complexity. In Sweden, the manufacturing industry has undergone a structural transformation over the past 15 years as production has been automated. At the moment, many public and private organisations are implementing RPA – Robotic Process Automation – as a way to make white-collar work more efficient, for example payroll and financial reporting. The robots are here to stay.

But what consequences does the implementation of robots have, other than routine tasks being performed more efficiently?

A year ago, the social workers in the municipality of Kungsbacka in Sweden attracted a lot of attention in the media when a majority of them resigned in protest against the pre-programmed software that had been implemented in the organisation. The social workers argued that the algorithm in the software led to wrong decisions, and that the task it was performing should be done by a human being. In the Wall Street Journal article, a researcher says that although the technology used to scan job applications may help deal with a vast number of interesting applications for a position, it may also do more harm than good. This resembles the argument of Joy Buolamwini, the PhD candidate at MIT and founder of the Algorithmic Justice League, who discovered while doing research on face recognition software that the technology did not recognize her face. After some time she realized why: she is a woman and black. The face recognition technology had been developed by white men, and it had thus only learned to recognize white, male faces – all other faces were “wrong”.

It thus seems as if algorithms are not as objective as we would like to think they are. In his book “Our Robots, Ourselves: Robotics and the Myths of Autonomy” (2015), robotics researcher David Mindell argues that the notion of the fully autonomous robot is an unrealistic myth. Human intentions, ideas and assumptions are always built into machines, he argues. The creators of the machines are therefore “technical ghosts in the machines” (p. 102).

The interesting question is: what assumptions are built into the machines and digital technologies being developed now, and what are the consequences of these assumptions? Do the algorithms mean that competent individuals have difficulty finding a job because certain keywords are missing from their applications, so that their applications never pass the first step of a recruitment process? Do the algorithms construct humans of a certain sex or skin colour as “abnormal”? There are many questions about the consequences of algorithms that could – and should – be asked.

I believe that technological development is good and exciting. But we also need to critically scrutinize the effects of new technology as it is used and be attentive to its consequences. There is no objective algorithm – algorithms will always express, in one way or another, the (often unconscious) assumptions about the world held by the individuals who developed them.

 Anette Hallin, Program director