AI: What's Autonomy Got to Do With It?
Here are some science fiction scenarios in which computers or machines take over the world:
An apprentice learns a magic spell that makes a tool take over all the work of cleaning his master's workshop, but he can't figure out how to shut the operation down; the tool keeps multiplying, threatening an exponentially larger mess, until the master returns and restores order.
A super-intelligent computer on a spaceship bound for Jupiter starts systematically killing the human crew after deciding on its own that the humans are hampering the mission.
An ancient race of super-intelligent beings called the "Krell" creates a planetary supercomputer that realizes every wish of the inhabitants, leading to their self-destruction by "monsters from the id."
A technological advance shared on the internet leads to a “singularity,” a super-intelligent global computer mind that is self-aware and hostile to human interests. A war ensues between robots and humans.
A conglomeration of robots and computers called "Cylons" attacks and destroys most of human civilization, but a remnant of human survivors in a flotilla of spaceships manages to escape and sets out in search of the fabled original planet Earth.
An unhinged authoritarian presidential candidate in Earth's reigning superpower manages to get elected twice, the second time with financial help from a tech billionaire, sustaining fanatical support from his followers through widely available social media algorithms that interact with digital device users in ways that promote and magnify extremism and conspiracy theories.
The titles of these stories are, respectively: The Sorcerer's Apprentice; 2001: A Space Odyssey; Forbidden Planet; The Terminator; Battlestar Galactica; and The Apprentice, Episode 193. Although the first and the last scenarios are not really science fiction, I believe they are the ones we should be worried about.
In the movie "2001," the super-intelligent computer HAL 9000 was defeated by "pulling the plug." If the singularity (The Terminator) occurred and computers started a revolt against humans, couldn't humans terminate the computer network by turning the machines off and pulling their plugs, swearing off using them forevermore? Computers don't work if they are not plugged in or if their batteries are removed. It's humans that manufacture, maintain, and repair them, and it's humans that supply the power that keeps them running. Computers and computing equipment have global supply chains of factories behind them. How likely is it that a computer could command this supply chain to create new versions of itself, or command the humans in the supply chain to help maintain or repair it? I suppose the renegade computer could trick all the humans in the supply chain into manufacturing replicas of itself. But how long would it take for someone to figure out they were being tricked?
The real success of AI has come in specialized areas where the problems and solutions can be reasonably well defined. Examples such as natural language processing, facial recognition, self-driving cars, copywriting, and augmenting human medical expertise show the incredible power and usefulness of AI deep-learning algorithms and neural networks. These systems run on banks of computers connected in parallel and are given access to massive amounts of data. With only minimal expert assistance from humans, such "neural networks" can outperform human experts because they can detect patterns across millions of pages of specialized information.
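To make "learning from data" concrete, here is a minimal sketch, not any production system: a tiny neural network that learns the XOR function by repeatedly nudging its internal weights to reduce its error. The network size, learning rate, and toy data are illustrative assumptions only.

```python
import numpy as np

# Toy training data: XOR, a classic function no single linear layer can fit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input-to-hidden weights and biases
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden-to-output weights and biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate: an illustrative setting, not a recommendation
for step in range(5000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight downhill on the squared error
    # (plain gradient descent, which is all "learning" means here).
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

print(np.round(out, 2))  # typically close to [[0], [1], [1], [0]] after training
```

Nothing in this loop "understands" XOR; the program just slides numbers downhill on an error surface. Scaled up by many orders of magnitude, with far more data and layers, that is, at bottom, what the impressive specialized systems above are doing.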
At the heart of the idea of a super-intelligent computer that can do everything better than humans [sometimes called Artificial General Intelligence, or “AGI”] is the notion that one day a computer will have enough intelligence to be fully autonomous. It is not clear, however, whether it would be possible [let alone desirable] to completely eliminate human control.
Autonomy is a concept we use to describe living organisms: the degree to which something can make decisions on its own. It's not an all-or-nothing thing. No lifeform is autonomous of the earth, its atmosphere, or its water. There seems to be an evolutionary progression towards more autonomous organisms, from single-celled diatoms that passively float in the ocean to large-bodied mammals that range over great distances and make complex decisions.
There is absolutely no history of machines needing to survive and reproduce on their own. Why should there be? Only humans want machines to exist. Machines themselves have never had any say in it. No one has ever made a machine and then told it: "OK, from now on, you have to make it on your own." Machines always have human purposes built into them: they don't decide to plug themselves into a power source or turn themselves on. They were never part of the struggle for existence that is the basis of Darwinian natural selection. Wherever one finds a machine, there is always a person behind it: conceiving, building, maintaining, repairing, and supplying power. And this is why the so-called "autonomy" of computing systems is not biological or human autonomy, or really autonomy at all as we understand it, but simply the capacity to function and learn intermittently [and finitely] without human supervision.
We tend to project our own sense of autonomy onto natural processes like the weather, and onto geographic features like mountains, rivers, and oceans. The storm is "brutal and merciless." The volcano is "angry." The mountain "looms menacingly." We, of course, engage in the same kind of projection with regard to our own creations, especially computers and other machines. R2-D2 and C-3PO from Star Wars are two almost universally recognizable fictional instances of this type of projection.
It may be more appropriate to use a biological metaphor here and say that computers play a role similar to that of the cells that make up the body of a multicellular organism. Computers are machines that can perform many functions without human supervision, but they are not autonomous. Their relationship to humans is analogous to that of cells to the body: all the cells perform functions for the body; they are kept in a temperature-controlled environment; they live bathed in nutrient fluid, so they don't have to go out and look for food; and they don't decide what to do, but rather are instructed by hormones and other chemical messengers to change their functioning when the body needs them to.
One of the 18th-century philosopher Immanuel Kant's deepest insights was that there is an intrinsic connection between morality and human autonomy. When animals make choices and decisions, it is "according to the dictates of nature." Humans, by agreeing to limit their own behavior through moral rules, open up a world of creative choices not available to animals. However, our creativity comes at the price of responsibility and accountability. By all of us doing our "duty" in upholding and enforcing moral rules, we make possible the widespread trust and cooperation that forms the background of human society and makes our unlimited creativity possible.
Battlestar Galactica and The Terminator are really stories that project our history of slavery onto science fiction scenarios about intelligent robots. What if the slaves seize the means to overthrow their masters? This has always been a real fear of slave-owning classes, as in the antebellum South, so it's still in our collective memory. The robots in these fictions are a metaphor for treating people as means rather than as autonomous beings, and the stories speculate about the repercussions of treating a whole people this way.
The story of The Sorcerer's Apprentice, however, is a warning to all of us. We can build machines with the ability to make decisions that affect humans, but the computers will just automatically maximize some mathematical objective function, with no thought of the consequences. Computing systems always need human supervision. As the late Daniel Dennett argued in "The Singularity, an Urban Legend?":
"The real danger is not machines that are more intelligent than we are usurping our role as captains of our destinies. The real danger is basically clueless machines being ceded authority far beyond their competence." And he adds, ominously, "We are on the verge of abdicating…control to artificial agents that can't think, prematurely putting civilization on autopilot." [1]
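Dennett's point about "clueless machines" can be made concrete with a small sketch. The toy recommender below is a hypothetical illustration, not any real platform's code: it ranks posts purely by a predicted-engagement score, and nothing in its objective knows or cares whether the highest-scoring content is true or harmful.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float  # score from some trained model; truth and harm not included

def rank_feed(posts: list[Post]) -> list[Post]:
    # The entire "editorial policy": put whatever maximizes engagement on top.
    # Extremism, accuracy, and consequences appear nowhere in this objective.
    return sorted(posts, key=lambda p: p.predicted_clicks, reverse=True)

feed = rank_feed([
    Post("Local weather report", predicted_clicks=0.11),
    Post("Cute cat video", predicted_clicks=0.42),
    Post("Outrageous conspiracy theory", predicted_clicks=0.93),
])
for post in feed:
    print(post.predicted_clicks, post.text)  # the conspiracy theory ranks first
```

Any restraint has to be imposed from outside the optimization; the objective itself supplies none, which is exactly the gap in supervision described below.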
If we look at my one scenario that wasn't fiction, namely the weaponizing of social media, it is apparent that the whole problem, as Dennett foresaw, was the absence of human supervision. The algorithms on Facebook and YouTube were set in motion, white supremacists and gullible QAnon followers found each other and multiplied, and no one in charge intervened until it was too late. As a result of Trump's election, we now have a dangerously unstable situation in the United States, with one of the two main political parties embracing conspiracy theories and actively working to cancel democracy. Now that we know what can happen, we need to ensure proper legal oversight of social media platforms, and the same goes for all AI applications. As Dennett wryly points out, "computers have no skin in the game." They cannot be held accountable for decisions made; only humans can.
[1] Dennett, Daniel, "The Singularity, an Urban Legend?", Edge.org, 2015.