Digital Technologies: What defines a monster?
Late in 2018 I attended a conference with the intriguing title ‘Living with Monsters? Social Implications of Algorithmic Phenomena, Hybrid Agency and the Performativity of Technology’ at San Francisco State University, organised by the International Federation for Information Processing.
The conference was thought-provoking, with many points of view shared as to whether the technologies described in the title were becoming ‘monsters’ in our midst. Two key criteria seemed to define whether a technology becomes a monster: understandability and controllability.
Many of the presenters drew attention to case studies where a system, technology or algorithm is apparently successfully implemented in an industry or a company. Each of these constructs is a human creation but is not under constant human control – they simply run in the background, prompting multiple decisions. Their complexity makes it impossible for all but a small number of humans to grasp what is going on ‘under the hood’ and therefore to control the processes.
The fact that such a technology cannot be fully understood or controlled by anyone other than its originator is often cited as the central criterion underpinning its classification as ‘monstrous’. I question that, for reasons outlined below.
Technologies (hereafter a collective term, for the purposes of this article, encompassing algorithms, systems, IT, digital platforms and technologies) are increasingly autonomous. This was highlighted at the conference by Dr Lucy Suchman in relation to the increasing automation of military systems. She highlighted the emergence of automated target identification and the initiation of attack using military drones. The statistics presented on the accuracy of this autonomous technology are worrying, suggesting that a large degree of collateral damage and high civilian casualties are inflicted. In this case the potential implications for thousands of human lives are inarguably monstrous.
Dr Suchman went on to suggest that instead of arguing for further development of this technology aimed at improving accuracy, better outcomes might be achieved by increasing human participation and engagement with the technologies. With human beings involved at all decision points, responsibility could not be absolved through dissociation from executive actions.
With this in mind, we might ask ourselves where our moral duties begin and end. It could be said that the problem may be not that we create so-called digital ‘monsters’ but that we abandon them. Even if we accept that abandonment rather than creation is the problem, I still disagree with the classification of technology itself as being monstrous.
For me, the pertinent question in this debate is whether the technologies described are created as monsters, or whether they become monstrous because we do not know how to nurture them. Consider the analogy of raising a human from birth: whether a person grows into a good human or, in the language of the conference, a ‘monster’ depends heavily on how they are nurtured. The many apparent ‘monsters’ that fill our prison cells suggest society has a problem with the nurturing process. We increasingly appear to have similar challenges nurturing our emergent digital technologies.
Another presentation, drawing on research by Rachel Douglas-Jones et al., further adds to the debate. The presentation highlighted that descriptions of functionality are key in the analysis and subsequent classification of monsters. I agree with the authors that this is important; however, I am firmly of the belief that questioning why they are monsters in the first place is a prerequisite. To support their view, the authors make reference to monster theory (Cohen 1996). Cohen saw the impossibility of knowing something in its entirety as an important element of what makes a monster. Our inability to grasp their essence, to fully comprehend what they are and what they do, as many other authors at the conference illustrated, makes algorithms and digital technologies appear monstrous.
In my view, far too much time was spent highlighting monstrosity rather than fundamentally questioning the classification itself. Perhaps we simply like dramatic labels that attract attention? Surely it is the way we use a technology that makes it monstrous in the first place, rather than the technology itself. For example, the decision to use automated weapons targeting is a human decision. Human beings decide how to use a technology; other humans experience that same technology, and these two elements are central to the labelling of technology as monstrous. If we accept this, then it is human beings who are at the heart of the monster. This raises the question as to whether we are in fact the monster!
So, how helpful is the monster metaphor when talking about technologies and algorithms? It is of course very easy to criticise technology and label it in an alarmist fashion. But are we looking in the wrong direction? The social implications of algorithmic phenomena are undeniable, and have been well documented in the papers presented at the conference. That is not under scrutiny here. It is the monster and its location that is in question. Is it the human or is it the technology? Personally, I’m convinced it is the former.