Why even the most benevolent AI overlord is probably not better than democracy

Yes, we know: humans are faulty and biased. But so are machines – and much more so than you’d expect.

I don’t mind admitting that I am dissatisfied with the democratic structures that currently dominate politics – in fact, I’ll tell anyone who will listen. The attempt to govern a fast-paced, connected and digital society with methods from the early 20th century seems, frankly, ridiculous. My partner, who is a software engineer, agrees with the premise but takes it one step further: ‘We should just all be governed by benevolent AI overlords. They’d do a much better job of all of it’, he’ll say. One might think he’s being facetious, but he is actually (mostly) serious. And, worse, he is not alone.


Now, when we speak about AI overlords we are invited to think of a conscious super-intelligence, like the one we’ve all seen in the Matrix movies. This type of AI doesn’t exist, and most people with an opinion (including my partner) seem to agree that if it did, it would at best ignore us and at worst eradicate us.

Instead, it is important to understand that the term AI generally describes algorithms that get better at a task over time without human help. It is still humans who decide what the task is, and it is humans who create the algorithm’s initial learning environment.




I was surprised to find that the idea of automated governance has enough traction to merit its own terminology: algocracy is just one of many concepts relating to this idea. It describes governance by algorithms, which may or may not involve artificial intelligence (AI) technologies (only self-learning algorithms qualify as AI). In fact, algorithms are already being used for regulatory purposes, for example to dynamically adapt speed limits to current traffic patterns. There is a big difference, however, between handing over the power to set speed limits within a predetermined range and expecting AI to produce good social policy.
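To make that difference concrete, here is a minimal sketch of what such a regulatory algorithm might look like – all thresholds and values are invented for illustration. Note that nothing about it is ‘intelligent’: it only ever picks a value from a range that humans have fixed in advance.

```python
# Hypothetical rule-based speed limit adjustment. The algorithm only
# chooses a value inside a human-defined range; it cannot decide what
# the range should be, or whether speed limits are a good idea at all.

def dynamic_speed_limit(vehicles_per_minute: float) -> int:
    """Pick a speed limit (km/h) from the human-defined range 80-130."""
    if vehicles_per_minute > 60:   # heavy congestion
        return 80
    if vehicles_per_minute > 30:   # moderate traffic
        return 100
    return 130                     # free-flowing traffic

print(dynamic_speed_limit(45))  # -> 100
```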


Even ignoring the ideological misgivings that many of us may harbour regarding human autonomy and self-determination (which may be illusions anyway, as not-all-that-recent findings in neuroscience suggest), there is no reason to assume that AI would outperform us in political decision-making. The reason for this is threefold. One, the idea that objectivity is possible – and that algorithms could deliver it – is rooted in a misunderstanding of reality.

The meaning of reality is not ‘out there’; it is constructed by humans and exists only in our heads. A table is not a table at all: it is atoms that have combined in a certain way to create a texture and shape that can be used for human purposes – if humans interpret the object accordingly. Without human perception and interpretation there is no such thing as a ‘table’. The second we attempt to teach a machine the human interpretation of the world, we must imbue it with human subjectivity.

Two, even granting the assumption that some things can be objectively true or false (which is not beyond philosophical doubt), political decisions in particular are rarely a question of ‘true or false’ but of ‘right or wrong’, which is a very different premise. They are about value judgements, which are even more individual and subjective than tables. We may all think that fairness is a good thing, but whether fairness means equality (everyone gets the same), equity (everyone gets what they require) or meritocracy (everyone gets what they earn), to name but a few possible interpretations, is highly contested and context-dependent.
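How quickly these interpretations diverge becomes obvious in a toy computation – the people, needs and contributions below are entirely made up:

```python
# Three interpretations of 'fairness' applied to the same situation.
# Each rule is internally consistent; none is objectively correct.

people = [
    {"name": "A", "need": 1, "contribution": 5},
    {"name": "B", "need": 3, "contribution": 2},
    {"name": "C", "need": 2, "contribution": 3},
]
budget = 12

def equality(people, budget):
    """Everyone gets the same share."""
    return {p["name"]: budget / len(people) for p in people}

def equity(people, budget):
    """Shares proportional to need."""
    total_need = sum(p["need"] for p in people)
    return {p["name"]: budget * p["need"] / total_need for p in people}

def meritocracy(people, budget):
    """Shares proportional to contribution."""
    total_merit = sum(p["contribution"] for p in people)
    return {p["name"]: budget * p["contribution"] / total_merit for p in people}

for rule in (equality, equity, meritocracy):
    print(rule.__name__, rule(people, budget))
# equality    {'A': 4.0, 'B': 4.0, 'C': 4.0}
# equity      {'A': 2.0, 'B': 6.0, 'C': 4.0}
# meritocracy {'A': 6.0, 'B': 2.4, 'C': 3.6}
```

Person B receives 4, 6 or 2.4 depending on which definition the system’s designers happened to encode.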



Three, the idea that data (the stuff AI feeds on to get smart) can be objective is also an illusion. Data is never objective, not only (but especially) when it is data about past human behaviour. The questions of which data is gathered, how, about whom, by what means and for what purpose, how the data is categorised, and what is left invisible are deeply political in nature. There is a reason why the fight over gender categories, for example, is so fierce: the decision has very real implications for the lives of people who have been rendered invisible by the past data collection regime. So, should we use two, three, seventy-four or individualised attributes to describe someone’s gender, or should we scrap the category altogether? The answer will be based on someone’s values and their interpretation of reality. Neither of these is remotely objective, nor is the output of an AI system built on any such decision.
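A deliberately crude sketch shows how such a decision gets baked into software – the category list and field names here are invented, but the pattern is common: once the schema is fixed, anyone who falls outside it simply disappears from the statistics.

```python
# A data-cleaning step with a political decision hiding inside it:
# the set of 'valid' genders is a design choice, not a fact about
# the world, yet it silently determines who gets counted at all.

ALLOWED_GENDERS = {"female", "male"}

def clean_gender(raw: str) -> str | None:
    gender = raw.strip().lower()
    if gender in ALLOWED_GENDERS:
        return gender
    return None  # 'invalid' responses are dropped before analysis

responses = ["Female", "non-binary", "male", "agender"]
kept = [g for g in (clean_gender(r) for r in responses) if g]
print(kept)  # ['female', 'male'] -- half the respondents have vanished
```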

To be sure, there are technical approaches to some of these problems. You can try to ‘clean’ your dataset of bias or let AI generate new datasets for you. You can also let AI come up with its own categories based on observable patterns in the data. Yet at the end of the day, someone has to make the underlying decisions: how to define bias, and how to trade off different measures of fairness and different affected groups against each other. For example, correcting your model for gender bias may worsen its racial bias, or vice versa.
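An invented toy dataset makes that trade-off concrete. Here, equalising selection rates across genders – one common notion of ‘debiasing’ – widens the gap between racial groups. The numbers are fabricated, but the interaction they illustrate is not:

```python
# Toy candidate pool: (gender, race, score). Selection is by score
# threshold. Equalising the selection rate across genders widens
# the selection-rate gap between racial groups.

candidates = [
    ("man",   "white", 80), ("man",   "white", 75),
    ("man",   "black", 70), ("man",   "black", 65),
    ("woman", "white", 68), ("woman", "white", 66),
    ("woman", "black", 50), ("woman", "black", 45),
]

def rate(selected, index, group):
    """Selection rate within one group (index 0: gender, 1: race)."""
    members = [c for c in candidates if c[index] == group]
    return sum(c in selected for c in members) / len(members)

# Baseline: one global score threshold of 70.
before = [c for c in candidates if c[2] >= 70]

# 'Corrected': per-gender thresholds so that both genders are
# selected at the same 75% rate.
after = [c for c in candidates if c[2] >= (70 if c[0] == "man" else 50)]

for label, sel in (("before", before), ("after", after)):
    print(label,
          "women:", rate(sel, 0, "woman"),
          "white:", rate(sel, 1, "white"),
          "black:", rate(sel, 1, "black"))
# before women: 0.0  white: 0.5  black: 0.25  (racial gap: 25 points)
# after  women: 0.75 white: 1.0  black: 0.5   (racial gap: 50 points)
```

The gender gap closes, and the racial gap doubles. Which outcome counts as the ‘debiased’ one is, again, a value judgement.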


On an even deeper level, someone has to define the thresholds and parameters for pattern recognition and classification that determine the clever algorithm's output. Who gets to make these decisions on behalf of everyone, forever?



My partner thinks that the mathematical average should serve as a baseline for political decisions: compile everyone’s different views and opinions and calculate the middle ground, making the output ‘fair on average’. I am fairly certain that this approach would not create an experience of fair treatment for most people, and least of all for those on the statistical margins. This is not to say that we live in a fair world right now. But if the result is still unsatisfying, what is the point of automation in the first place?
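One last toy computation shows why – the opinions below are invented, but the pattern holds for any polarised question: the mean lands where almost nobody actually stands.

```python
# Positions on a polarised question, on a 0-10 scale.
opinions = [0, 0, 1, 1, 9, 9, 10, 10, 10, 2]

mean = sum(opinions) / len(opinions)
distances = [abs(o - mean) for o in opinions]

print(f"mean position: {mean:.1f}")                    # 5.2
print(f"average distance: {sum(distances) / len(distances):.1f}")  # 4.4
print(f"worst-off distance: {max(distances):.1f}")     # 5.2
```

The ‘fair on average’ outcome sits more than four points away from the typical person – and furthest of all from those at the edges.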