By Patrick Robinson
The media has exploded with stories about artificial intelligence. Large language models and image generation systems can produce work that feels genuinely compelling and intriguing. But the emotion you might feel is not part of the machine’s code. That’s completely human and will almost certainly remain so for the foreseeable future. Computers can mimic emotions through a process called sentiment analysis: an algorithm examines natural language and assigns weights to certain words, contexts, and sentence structures. Image and video processing can even analyze body language and facial expressions, map them to emotional states, and then recreate those states through image generation or, with recent advances, robotics that mimic humans.
But there’s no actual emotional expression in any of it.
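To make the point concrete, here is a minimal, hypothetical sketch of that word-weighting approach in Python. The lexicon, weights, and negation rule are invented for illustration; they are not drawn from any real sentiment library, and production systems are far more elaborate.

```python
# A toy lexicon-based sentiment scorer: it "detects" emotion by summing
# pre-assigned weights for words, with a crude adjustment for negation.
# All weights here are made up for the sake of the example.

WEIGHTS = {
    "love": 2.0, "great": 1.5, "happy": 1.5, "fine": 0.5,
    "bad": -1.5, "hate": -2.0, "terrible": -2.0, "angry": -1.5,
}
NEGATORS = {"not", "never", "no"}

def sentiment_score(text: str) -> float:
    """Return a positive/negative score for a sentence.

    The machine isn't feeling anything; it is adding up numbers
    attached to tokens it has been told matter.
    """
    tokens = text.lower().replace(".", "").replace(",", "").split()
    score = 0.0
    for i, token in enumerate(tokens):
        weight = WEIGHTS.get(token, 0.0)
        # Flip the weight if the previous word is a simple negator.
        if i > 0 and tokens[i - 1] in NEGATORS:
            weight = -weight
        score += weight
    return score

print(sentiment_score("I love this, it is great"))          # 3.5  -> "positive"
print(sentiment_score("I am not happy, this is terrible"))  # -3.5 -> "negative"
```

However much more sophisticated a real model is than this toy, the principle is the same: weights in, score out. Nothing in that arithmetic feels anything.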
Emotions are more than a model to be emulated. They are unique to the individual, and while nano-scale computing may well permit individual intelligent systems to each be unique to a degree, there’s something we won’t see: machines that get mad. Not that it isn’t possible. We just won’t permit it.
We won’t have petty machines, machines with vendettas, machines that seek some kind of revenge. The rules we give machines are far too rational.
As human beings grow, their experience is often random, sometimes traumatic, sometimes supported, but always unique. We won’t see machines that fall off their bike, get a blister, burn themselves on the stove, get bullied, get left out of a game, grow up in one-parent homes, get in a car accident, have to walk through the snow, feel hungry for days on end … all part of the tableau of human experience. Machines won’t know trauma, even if we teach them the models for it. They won’t exhibit mental illness or truly transcendent moments of genius.
Not to say that they won’t have moments that look like genius, or seem as if they are feeling something. But the complexities and nearly infinite variability of human emotion will likely be beyond the capability of even the most advanced AI.
Is it theoretically possible, say 20 years from now, to take a machine through experiences like childhood, adolescence, the teenage years, young adulthood, middle age, and the senior years, with its own end uncertain and its “health” subject to change, to give it multiple relationships and have it evolve over time as it “lives” a life? Perhaps. But we won’t be doing that. Not even in an accelerated fashion.
The goal, despite the human propensity for seeing and then doing, is not to “re-create human life”; it’s to create emulative forms that are better: no empathy, no anger, no traumatic past, no life “problems” that force changes in behavior.
Despite all the science fiction fears of robots taking over, bent on human destruction, that won’t happen.
We will never have “mean machines.”