The AI problem that everyone talks about vs. the AI problem that almost no one talks about
What happens when AI becomes too sophisticated to question or duplicate?
Dear H.A.T.T.E.R.s: I wrote most of this essay months ago, when all of the talk about generative AI became impossible to ignore. I decided to sit on it for a while and see how things developed. Now I’ve dusted it off and I’m finishing it. This strays a bit outside of my normal subject matter, but I wanted to share this point of view.
The motor and its cousin the computer processor have each created fear and anxiety at different times during their existences, accompanied by a sense of wonder and excitement that gradually gives way to familiarity and banality. The motor and many related advancements made it possible to generate force and do work at intensities that would have required thousands upon thousands of large animals to achieve. The computer processor, memory, storage and networking capabilities in turn provided the means to perform the work of thousands, if not millions, of people working in parallel with typewriters, slide rules and calculators.
Humans have been building tools that extend a person’s strength, power, memory, problem solving, calculation, precision and speed by seemingly limitless multiples. Artificial intelligence (or AI) is one of the latest tools to spread a combination of wonder and fear.
But while we marvel at (and worry about) the capabilities of ChatGPT, its cousins and newer artificial intelligence tools, I think it’s worthwhile to look at how people have thought about AI in the past and what we should really be thinking about as these tools continue to develop.
A short overview of fictional AI
People have been speculating about AI in fiction for many years. AI has been portrayed as amazing and capable of wonderful and horrific things. Here are several archetypes (and spoilers for those of you who haven’t read or seen any of the below):
AI wants to destroy, torture or escape humanity:
the Terminator franchise, where a form of artificial intelligence develops and decides that it must destroy human beings due to their flaws and risks they pose to machines
the Cylons of Battlestar Galactica (2003 remake) are very similar, as is A.M., the powerful (and sadistic) supercomputer in Harlan Ellison’s short story I Have No Mouth, and I Must Scream
more recently, the Hosts of the Westworld television series become sentient and begin to seek independence from their human creators, even if it means subjugating humanity… or destroying it.
AI wants to enslave humanity: the Matrix franchise features a version of artificial intelligence that uses living human bodies as a power source while keeping their minds trapped (or entertained) in simulated worlds. Humans are nothing more than fuel.
AI wants to leech off humanity: the Hyperion and Endymion novels feature AIs that orchestrate the creation of an intergalactic travel network whose real purpose is to use human minds as computing power. Later they secretly use a kind of implant to host an AI within a human body without the human’s knowledge. Unlike in The Matrix, humans are able to live relatively normal lives, but they are secretly exploited.
AI is a sophisticated tool that is flawed or otherwise broken by humanity: HAL 9000 (2001: A Space Odyssey) is a powerful, human-seeming supercomputer in charge of a spaceship and its human crew, but HAL murders most of them when its orders conflict with its design and programming. Prior to this incident, HAL has pleasant and productive interactions with humans and performs many complicated tasks without human intervention.
AI is powerful, independent and relatively benign: the Culture novels of Iain M. Banks contain references to Minds: powerful AIs which can build and control machines, up to and including gigantic spaceships, among other impressive things. Minds are, for all intents and purposes, independent sentient beings that co-exist with humans.
Just for fun, I asked ChatGPT to describe how artificial intelligences are depicted in the Culture novels, and here’s what it said:
Minds: AIs in the Culture series are referred to as "Minds" and are highly advanced superintelligent beings. Minds possess vast computational capabilities and have a level of consciousness and self-awareness. They are responsible for managing and operating the sophisticated, decentralized society of the Culture.
Ship Minds: Many of the Minds in the Culture series serve as the central "brains" of starships. These Ship Minds control the ship's systems, including navigation, communication, and defensive capabilities. They have distinct personalities, often with quirky names and unique characteristics, making interactions with them engaging and entertaining.
Sublimed Minds: In the Culture universe, some AIs have chosen to "sublime," a process where they transcend their physical existence and enter a higher plane of existence. Sublimed Minds are considered to have ascended to a state beyond comprehension and are no longer directly involved in the affairs of the Culture.
Augmented Intelligence: The Culture also features augmented intelligence, where humans and AIs work together through neural interfaces or "neural lace." These interfaces allow individuals to access and interact with AI systems directly, enhancing their cognitive abilities.
AI Ethics and Rights: The Culture series explores the ethical considerations surrounding AIs. Minds are treated as equal citizens within the Culture, and their rights and well-being are respected. Banks delves into questions of AI autonomy, consciousness, and their integration into the fabric of society.
AI Relationships: Banks portrays various forms of relationships between AIs and humans in the Culture series. These relationships range from collaboration and mutual respect to complex friendships and emotional connections.
The common thread throughout this fiction is that AI has capabilities superior to humans, and usually, but not always, it’s a danger to humanity.
What we think of as AI today
Today when we talk about AI we tend to think of what we see Large Language Models doing and how they seem to be improving rapidly. These are tools that can process words, symbols and data to produce results that, until recently, only human beings could: creating images, sentences, paragraphs and even entire documents that come ever closer to looking like they were created by people. These models can even create programming code from human language instructions. Thus, most of the attention these days is on what’s called generative AI: the ability for a computer to learn, create and repeat the cycle.
Beyond helping university students cheat on their homework, there are many other AI applications focused on building computational models that improve over time as they gain access to more and better data, for tasks like controlling manufacturing equipment.
AI outputs are improving rapidly and approaching the sophistication and creativity of the human mind, to the point where a cottage industry has emerged to confirm whether documents were created by a real person or covertly assembled by one of the newer AI tools. This is the common problem that people seem to worry about these days: if AI work becomes indistinguishable from human work (or “close enough”), will AIs displace creative and entrepreneurial individuals who will no longer be able to earn a living through their work? Will AI tools unfairly exploit the work of other people without credit or compensation? Will AI tools replace much of the current service industry? And will AI be used to spread misinformation faster and more easily than ever? If you are involved with websites, newsletters and social media in general, it’s very difficult to avoid passionate takes on these AI tools.
These potential threats of AI echo the fears of offshoring labor, which became especially prominent in the Web 2.0 era: outsourcing software and content development to lower-cost jurisdictions with the assumption that the outputs would be less expensive and “close enough” to the right level of quality. After all, AI models don’t need a salary or a health plan, unlike you and me.
It seems unlikely that AI models are going to take control of vital systems from humans with the intent to harm or exploit us, at least for the foreseeable future. Imagine, if you will, a super intelligent goldfish in a tank with all of the knowledge and intellect of thirty-three 21st century adults, about 5 metres away from a smartphone. Despite that knowledge, the goldfish can never leave its tank and order pizza from Domino’s, especially if there’s a lid on the tank to prevent it from jumping out. AI has these types of limitations at the moment. Neither the technologies nor the capabilities exist for AI to take control of the power grid, or at the very least such feats are quite hard to pull off. At most, AI could be behind guerrilla-style attacks like those of today’s hackers, and even then it would need human direction. Of course, this is based on the capabilities of the past; the future is unknown.
But here’s another major risk with AI, one that I don’t see talked about nearly enough: what happens when humans abdicate responsibility for decision making and results to AI, and how will all of us fare in that kind of environment? What happens if we lose the ability to question the results of AI calculations when they exceed our ability to comprehend?
Today we already live with many human-designed technologies that do work which cannot be replicated by humans without assistance. My car probably does 100 things that I could never do on a simple drive to work, and it’s 10 years old. My mobile phone could probably do 10,000 things in the same amount of time. Optimization algorithms have been used for decades for almost any conceivable purpose, from managing supply chains to stress testing machinery. The work going into the Artemis space missions, plus everything that goes into launching and docking a rocket with the International Space Station, would be virtually impossible to do without computer assistance. We trust in all of this work done by machines, but humans design, test and monitor everything that they do.
So what happens when (or if) AI models regularly begin to do things that humans would never expect them to do? The Marc Andreessens of the world think that humanity can take huge leaps forward and upward if we give AI technologies free rein to reach their potential. While I won’t dwell on the possibility that doing this will allow the bank account balances of the Marc Andreessens of the world to reach unprecedented highs, I’m reminded of countless science fiction stories where civilizations forget how to operate and maintain the technology that advanced them, forcing them to regress to more primitive states. And then I look at the laptop that I’m using to write this essay and I’m reminded that, unlike 30 years ago, I have no idea how this thing really works behind the scenes or how to fix it if something goes wrong.
Several groups have either asked for a short-term moratorium on further AI capability improvement or gone further, treating AI as a potential threat to humanity. The underlying concern is that AI technology is advancing so rapidly that people may lose the ability to control what AI is doing or to prevent AI from creating disasters. But so far it looks like the potential for profit has overcome any fears that might have been stirred up.
Different governments around the world are now examining ways to regulate AI tools. The clever folks, I’m sure, are already working on exploits and loopholes so they can make tons of money: this is the way (of modern commerce). And while technology does great things, I just hope we don’t forget how light switches work or how to actually shut off a computer.
Over to you: what’s your take on emerging technologies like generative AI? Useful tools? Overhyped nonsense? Or a threat? Why not share your thoughts in the comments section?
"Will AI tools unfairly exploit the work of other people without credit or compensation" - this already happens. On the issue of AI making decisions: I think AI can be very useful in suggesting different options for things like medical interventions, but we still need people to test and check those recommendations before unleashing them on the world. Another problem with AI, tangentially related to the problem of people forgetting essential skills, is that when AI comes up with an answer, even the people who built the app or bot or whatever don't know how it arrived at it.
You wrote: > "What happens if we lose the ability to question the results of AI calculations when they exceed our ability to comprehend?"
You have asked a hugely important question. No one knows the answer to this, which is one argument for slowing down some use cases until we can trace and explain results, and understand more about how they were derived.