Tribune | The decision and the machine

The Terminator film saga is one of the most popular dystopias. It tells us that, once machines reach the intelligence of their human creators, they will rebel against them and, to exterminate them, will build other machines, murderous and perfect. Total war will then break out between men and their artifacts, and the fate of humanity will rest in the hands of a providential savior, and so on. A typical Judeo-Protestant messianic pattern, complete with its conspiracy and its redeemer who saves us.

The script, however, is a variant of the old Golem myth. Its novelty lies in the spectacular special effects and a few unforgettable gags, and in an ideal actor, Arnold Schwarzenegger, himself in reality a kind of Golem. His character is the ogre of children’s stories; or Iago, the perverse schemer of Othello, for, like him, he is a being of absolute wickedness, an implacable creature whose malignancy, being unmotivated and inexplicable, produces terror.

Can we form an idea of absolute evil? If it is incarnated in a machine, it does not seem so difficult; understanding Iago, on the other hand, is much more complicated, because when an individual is very bad our eyes invent a demonic nihilist with moral stature, like Ivan Karamazov. Evil is difficult, and the dominant patterns help us little, for as our rules and customs grow ever more permissive, it becomes very hard to imagine an absolutely wicked character who is also credible. Because today everyone is bad to some extent (another Judeo-Protestant cliché, spread by popular culture and endorsed by psychopedagogues), screenwriters choose psychopathological villains: Henry, Leatherface, Anton Chigurh, Hannibal Lecter, and so on. Yet while narratively plausible, the psychopath is morally unconvincing. In fact, criminal law does not accept that the madman can be held responsible for his actions, precisely because he is mad; and evil, no less than good, needs a responsible subject. It is the fact that we can identify responsibility in an action that allows us to determine its intention and motive and, above all, the transgression, which in turn allows us to judge it morally.

But for that, it must be plausible that the subject can err, that he chooses between good and evil and goes astray. More than that, a transcendental condition is required, one that derives not from the idea the subject has formed of good and evil, but from a blind decision between the two, a decision that may itself be right or wrong. In short, responsibility presupposes the possibility of error: not only in the alternative between good and evil, but in the very act of deciding between one option and the other. If an action, whatever it may be, can only be correct, even when it involves doing wrong, decisions cease to be decisions and morality is extinguished.

Thus, if we conceive of an artifact from which all possible errors have been eliminated (and that will surely happen after some machinic revolution), decision-making and risk calculation will no longer be necessary, and the idea of responsibility will be as empty as a dead metaphor. Take the case of the new driverless cars: does it make sense to penalize a traffic violation if it is an algorithm that commits it? No. It is entirely unlikely that an automaton freed from decision by the algorithm will commit infractions; and if it fails, why waste time on reprimands or sanctions? The best thing is to send it to the technician for correction. But then, what use will a highway code be?

Machines, moreover, not only do not err but, unlike human beings, are perfectible. And since they do not err, neither do they decide. Hence the Terminator hypothesis can be disturbing and very effective as film fiction, but it is false: technical gadgets may become almost human, but they will never decide to rebel against men. Instead, the battle against error is waged daily in our cybernetic gadgets. Each update makes the artifact more perfect and, incidentally, introduces some sophisticated robot to fine-tune social control. This indefinite improvement impoverishes us ethically, since it shrinks the sphere of uncertainty in experience and nullifies our capacity to make decisions, which is replaced by protocolized, programmed solutions, as happens to today’s doctors when treating an illness.

And let us not speak of the illusion that Google “learns” and grows more precise and intelligent every year. False. Google does not learn; it is we who grow ever more foolish.

What differentiates men from machines is not sentiment, which can be simulated by a simple language game; nor is it reason, which, as the mechanists of the seventeenth century knew, is pure calculation; nor, of course, memory, which a machine can store at levels unimaginable for a human being. It is the decision, which implies error and introduces into the world chaos and contingencies, happy and unhappy alike.

That right to decide is the only thing we must defend. And at all costs.



