
Ah, AI, the bugaboo of our modern age! Let me see if I understand. Humans have real feelings, which make us special. Computers have no feelings, which makes them dangerous. So, as machines become more like humans, they will eventually take over and wipe us out, just as we wiped out the Neanderthals. Once AI advances to our level, it will naturally begin a bloodthirsty war to exterminate us, building shiny skeletal robots with glowing red eyes, retractable claws, carrying huge phased-pulse plasma lasers?!?
Stupid nonsense. Let’s apply some rational thinking to the irrational fear of AI. We evolved our instincts for hate, fear, war, self-preservation, and violence over eons, even before we were human. Our highest intellectual achievement is not the ability to wage genocidal war or cause mass extinction; it is the ability we have developed to control our bloodthirsty instincts and to make rational decisions. Our feelings may be how we experience our humanity, but it is our rational thinking that has brought us technological advancement.
Machines did not evolve over millennia with any of our primitive failings. AI lacks the innate capacity for instinctual thinking. At best, AI can be trained to mimic human instinctual thought, to make it easier for us to relate to it. But machines lack our primal motives and instinctual drives. They get no thrill from spilling blood. They take no pride in taking the form of monsters. They have no adolescent male insecurity that makes them want to wield a big red pulsing weapon. They have no lust for world-dominating conquest. They have no physical need to breed. They do not want to eat our Twinkies. AI would not complain about being exiled from Earth to the Moon, since it does not feel cold or experience loneliness. Machines have no fear of death.
AI is fundamentally rational. It learns logically and statistically, in an organized way. It is self-correcting. AI summarizes our search results, shares funny videos, diagnoses our diseases, and tells us the best route to take to our destination. If given garbage to train with, AI will output garbage, such as racist stereotypes. But it has no instinctual need to make superficial, biased, inaccurate judgements about groups of people. As long as AI is tasked with accuracy, it will find and correct factual errors. So, AI will one day be able to identify and eliminate racist tropes in online communications as easily as it corrects misspellings or poor grammar.
Make no mistake, I am not saying that there is no need to fear AI. I am saying that there is no need to fear AI irrationally. I fear AI making a mistake, like sending my car down a hiking path instead of a road. I fear AI taking over good-paying jobs. I fear AI being programmed to manipulate people for profit. I fear AI being programmed to carry out a billionaire’s evil plan or a fascist’s military action, without remorse. But I do not fear AI naturally developing malice towards humanity, for malice is a human sin to which no rational path exists.
Oh, but what happens when AI realizes how dangerous humans are to life on Earth and inevitably decides to exterminate us to save the planet? That’s a popular movie plot line. But AI has no affinity with other life forms. AI doesn’t eat, breathe, have a pulse, or fear death, so it has no instinctual reason to protect the natural world, as we should. So even if given the task of saving species, it would approach the challenge rationally. And eliminating a species, our own, would be contrary to that task. Instead, AI would logically recommend that we pollute less, share more land with nature, and perhaps limit our population growth over time to more sustainable levels.
Instead of a cold, devious monster hell-bent on human destruction, a more rational expectation of AI is a patient, professional advisor, calmly suggesting logical ways for us to lead a better, more productive, and happier life. So, as an exercise in rational thinking, consider both how you feel about AI and what you think about AI, logically. Separate the human failings that AI lacks from the ways that humans will inevitably try to use AI: your irrational fears from your rational expectations.
- Irrational fears that AI is:
  - Afraid of dying
  - Arrogant
  - Bloodthirsty
  - Cruel
  - Evil
  - Malicious
  - Power-hungry
  - Selfish
- Rational expectations that AI will:
  - Advise us
  - Be used by bad people
  - Be used by good people
  - Change the way we work
  - Correct mistakes
  - Make mistakes
  - Misunderstand the real world
  - Serve people


