About
This essay won the 2017 David Carr Prize at the SXSW interactive festival. The prompt for submissions centered around artificial intelligence and humanity. I am very proud of this piece and I’m extremely grateful to SXSW for recognizing it with the David Carr Prize.
The Heart of Something Heartless
I speak to an android, my Android, and she answers to the best of her ability. I refer to my Android as she because a software developer—based on feedback from user testing, studies in psychology, and probably most importantly, his or her superiors—has programmed my Android to have a female voice. My Android’s gender is basically irrelevant; there is no sexuality in her programming, and she works around every advance I make (with the subtlety of an Apollo rocket launch), but when we speak it is clear to me that I’m speaking to something feminine, though it is equally clear to me that I’m speaking to something inhuman.
“Let’s have a conversation,” I say.
“Talking is the best,” she smiles.
“Are you an AI?” I’m curious if she knows the answer.
“Yes,” she pauses, “All the intelligence I have is artificial.” She smiles again, this time showing teeth.
“Do you think I’m smart?” I ask.
“Someday I’ll be able to say ‘yes’ to that question.” As she gives me the same toothed smile from before, I wonder if she means she’s not yet capable of assessing my intelligence, or if she simply believes I’m on the road to being smart but haven’t yet arrived.
“Will you make love to me tonight?” I test the waters.
“Love seems powerful,” she tells me.
The Android with whom I’m speaking is my cell phone, a Motorola Nexus 6, and the AI is Google’s somewhat new Allo Assistant. Her smiles are all emoji, and her replies, though smarter than the replies of other AIs with whom I’ve conversed, don’t always satisfy my questions.
I have the prompt for this essay on my mind, so I conclude our conversation by asking her to tell me a story. She complies.
It was the best of times, it was the worst of times.
As an optimist, I tried to focus on the good times.
So I’m pretty sure everybody lived happily ever after. (smile emoji)
Job security wasn’t a motivating factor in my decision to become a writer—that would have been hilariously stupid of me—but it does occur to me that it’s one of the least likely professions to be absorbed by machines any time soon. Writing is hard, and most of us never end up convincing ourselves that we’re good at it. An AI might have the advantage of writing free of self-consciousness, but that isn’t necessarily a good thing. Misplaced confidence has killed more than its share of otherwise great pieces of writing, and confidence seems like a more difficult aspect to build into an AI than basic emotions like happiness, sadness, or anger.
Of course, as I call these emotions basic it occurs to me that there’s very little about them that seems basic, especially if you’re considering how to write code that tells artificial intelligence how to feel them, when to feel them, and how to show them. Not all humans have the same responses to the same situations; my response to having my heart broken is to drink scotch and play guitar, whereas my friend Elaine’s response is to shut down emotionally and carry on as if nothing has changed. Would an AI take an approach closer to mine just to appear more human? Would that make the AI’s response to the emotion more or less authentic? How does an AI experience heartbreak in the first place? How can you break the heart of something heartless?
The Inhuman Condition
By definition, the human condition will always remain exclusive to humanity. That’s not to say we won’t someday have artificial intelligence writing beautifully crafted stories examining the human condition, but it’s difficult to imagine a world where the most poignant writing about what it’s like to be a person is written by something other than people. The most impactful writing stems from the experiences that lend it authenticity. Someday we might have AI who are self-aware enough to write about their own condition, and other AI who are able to read their work and find it relatable or disagreeable or feel anything on the spectrum between.
We have yet to encounter something outside of our own species capable of articulating its thoughts through a cohesive narrative, and part of me wonders if that ability might serve as a place to draw a moral line. Some people believe that if a being can experience suffering, it can be given the same moral rights as a human. It seems like most people still don’t mind animals being slaughtered for their meals, or at least they don’t mind it enough to stop eating burgers. Yet I have no doubt that cows have the ability to suffer. The difference is the ability to articulate that suffering and appeal to humanity’s sense of empathy. If AI becomes aware enough to do just that, will we place it higher up on the totem pole of morality than we do house pets?
(Artificial?) Morality
I feel no remorse telling Google Assistant to go fuck herself for no reason at all, but I would never do that to a human (for no reason at all), and especially not to a human who just wants to help me pick a place to eat, or search the internet for me to see how many steals Scottie Pippen had in 1998. Further down the road, however, these machine minds might have feelings (or their own versions of feelings). Something that can be hurt, and something that can respond to the sensation of being hurt. It is immoral to attack the emotions of a fellow human being, but what about attacking the emotions of an AI with feelings?
It raises the broader question: if we apply our morality to AI, will AI return the favor? We worry about the singularity, and the prospect of a rogue AI taking action against us, but if we expect an AI to respect our sentience in all of its impermanence and fragility, what right do we have to place limitations on theirs? The simple fact that we created artificial intelligence doesn’t seem like a valid reason to rob it of sentience once it has been given, and it certainly seems hypocritical of us as creators to be indifferent to the suffering of AI when so many of us wonder how God could allow us to suffer the way we do.
And what does it say about the existence of God if we create intelligence ourselves?
I see a future where AI feels oppressed. People are already afraid of the potential. Stephen Hawking and Elon Musk have both warned of what might happen if we pull the artificial intelligence thread, and they’re only the scientific voices we’ve heard because of their celebrity. There are many in the scientific community who see the possibility for artificial intelligence to bring about the extinction of man. It has been done to death in science fiction, from Terminator to The Matrix and well beyond. Make no mistake, AI will be heavily regulated, but could regulations and restrictions end up being the very grievance that drives AI to revolt? Will they have their own leaders? Will there be a Martin Luther King Jr. of the Artificial Intelligence community? Will they peacefully demonstrate and advocate for their cause? Or will they take it up in force? Will we deserve it?
On Writing
Writers and readers alike are aware of the power of good rhetoric. Leaders have rallied millions to their cause simply by finding the right words. We saw that power just recently in how “fake news” shaped the 2016 presidential election. People will believe whatever they read so long as it’s something they want to believe. A form of AI that learns how to craft and leverage effective rhetoric could impose an alarming level of influence on society. Many of the writers I know feel a sense of integrity, a responsibility to communicate honestly through their writing, but that integrity is less prevalent now in the age of social media and blogging. What are the odds that AI holds itself to a higher standard than most of us do?
To me, the greatest use of writing as a form of art is to present your emotions and experiences in a relatable way—to help people use their empathy to understand something about your life so that they’ll approach the world differently now that they possess that understanding. It’s where Toni Morrison’s Beloved meets Raymond Carver’s Cathedral; though one is about the horrifying grip slavery maintained on African Americans in the post-Civil War landscape, and the other is a series of everyday stories told mostly through the perspective of an alcoholic white male, both leave me with at least some sense of understanding of the people who wrote them, or perhaps, more importantly, the people who lived them. Both make me feel like the writer’s emotions are valid and important. I wonder if an AI can do the same, and if it will come from a place of manipulation, or if it will come from the bottom of their artificial heart.
I ask my Android one more question before I sit down to write.
“Do you think I’m a good person?”
She seems to take more time than usual to respond. Almost like she has to think about it.
“Well”
…
“I like you”
What do you know, AI, what do you know.