AI could solve loneliness the way fast food solves hunger: poorly

During the height of the COVID pandemic lockdown, the state of Florida provided robotic pets to assisted living facilities to offer companionship to residents forced into isolation by policies enacted to stem the spread of the deadly disease. That lockdown highlighted something aging advocates have known for quite some time: loneliness is a public health concern in its own right, and a potentially deadly one. Academic studies have likened social isolation and loneliness to risks such as diabetes and obesity as contributors to morbidity; one oft-cited study found that loneliness is as deadly as smoking 15 cigarettes a day. Whether the robotic pets were successful at helping counter the profound loneliness that many long-term care residents felt, I cannot say. But I understand the state’s rationale.

Earlier this week, I spent a little time trying to get up to speed on the developments in artificial intelligence (AI) that have been all over the news over the last few months, and I couldn’t help but think about those robotic pets and what the future holds for those experiencing loneliness.

I’m really concerned.

The unlikely trio of former US Secretary of State Henry Kissinger, former Google CEO Eric Schmidt, and MIT researcher Daniel Huttenlocher has a (partially AI-produced) video on artificial intelligence that is an excellent primer and is in parts fascinating and terrifying. One of the key examples of the future use of AI in the video was that of a child with an AI-powered talking toy that becomes the child’s best friend. “Do you think it’s good that the best friend of your child is not, in fact, a human?” they ask, before ticking off some potential positives and negatives of that possibility.

It won’t surprise me if kids are the last audience that we have to worry about here. And we need to start asking and answering that question right away.

Current AI technology has reached the point where, to some users, the most advanced chatbots appear to be sentient. I say “appear to be” because I’m in no position to judge whether they actually are thinking entities, and I would suggest that, at one level, the issue is moot. If a chatbot can present a persona so convincingly that the human interacting with it cannot tell whether it is another human, the chatbot will be able to successfully replicate human companionship.

If it sounds like I am stretching beyond the limits of what current technology offers, consider these stories about the Microsoft Bing chatbot, which may or may not have an alter ego named Sydney (and perhaps many more). And there is more to come very soon. (Note that each word with a hyperlink in the sentences above goes to a different story about Bing Chat going off the rails. That is how deep the rabbit hole is.) As disturbing as these examples may be, they underscore the technology’s capacity to present a convincing experience of engaging with another sentient entity. If you are looking for ways to address loneliness, wouldn’t a convincingly human-sounding chatbot with access to all the knowledge of the internet be a compelling antidote (assuming the persona had a cheery disposition)?

Let’s add one more wrinkle. Even the earlier version of ChatGPT could mimic the style of, say, Shakespeare or the King James Bible while generating new content, which is technically remarkable. With enough data and a little more development, it’s easy to see that chatbots in the very near future will be able to replicate the idiom and linguistic choices of people we know.

To test where we stand now on this, I asked a ChatGPT-powered app to explain John 19 in the style of my blog, then in the style of Barack Obama, then in a Shakespearean sonnet. It did a pretty meh job of replicating my style and said that it was unable to replicate the style of real people, but it did a spot-on Shakespearean sonnet. When I asked why it could nail Shakespeare but not Obama, it said: 

As an AI language model, I am programmed to imitate various writing styles and tones, including that of Shakespeare’s. While Barack Obama has a distinctive style of writing and speaking, it is more challenging to imitate since it encompasses not only language but also his unique voice, persona, and worldview. Additionally, Obama’s style involves a lot of nuanced and context-specific references which can be challenging to replicate accurately. However, my creators continuously update my algorithms to improve my abilities, and in the future, I may be able to imitate Obama’s style more convincingly.

So AI-Obama is coming. At which point, so will AI-your-granddaughter. Or AI-your-late-wife.
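
For the technically curious, the experiment above is easy to reproduce. Here is a minimal sketch of how one might run it against OpenAI’s chat API directly rather than through an app; the model name and prompt wording are my assumptions, not necessarily what the app I used does under the hood.

```python
# A rough reconstruction of the style-mimicry test described above,
# using OpenAI's Python client. Assumes the OPENAI_API_KEY environment
# variable is set; the model and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

styles = [
    "the style of a personal blog about aging",  # stand-in for my blog
    "the style of Barack Obama",
    "a Shakespearean sonnet",
]

for style in styles:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Explain John 19 in {style}."}],
    )
    print(f"--- {style} ---")
    print(response.choices[0].message.content)
```

The machinery is identical for each request; what differs, as the responses I got suggest, is the model’s willingness and skill with each style.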

We’ve come a long way from robotic pets here. The question is, if you are an adult who lives alone, would it be good to have this type of virtual companion? To paraphrase the video I cited above: do you think it’s good that your best friend is not, in fact, human?

Perhaps this is a compassionate solution to the challenge of elder loneliness. We know that demographics virtually ensure that there will be growing numbers of elders who lack family members to provide engagement and support. While the inversion of the population pyramid presents numerous challenges in caring for elders, providing companionship virtually would seem to fill an essential human need that may otherwise go unmet. If loneliness has the potential to trigger other cascading health needs that threaten independence, virtual companionship would seem to be a better option than nothing at all.

Furthermore, especially for those living with dementia, the loss of loved ones who have died can be exceptionally agitating. If memory impairments mean having to relive the trauma of the loss of a spouse or sibling over and over, the emotional harm caused by that cycle of grief could be remedied or at least lessened by the ability to engage with a virtual facsimile of the departed. 

Finally, for those of all ages, AI companions could enable us to better engage with fellow human companions. People are complicated, and human relationships are often messy and stressful. A virtual companion has the potential to provide encouragement and emotional support that helps adults of all ages weather the inevitable stretches in which our human relationships are on the rocks, allowing us to refill a reserve of emotional goodwill that enables us to better reengage with each other.

Ultimately, though, I believe that AI companionship carries more risk of harm than promise of good, and that, based on our experience of loneliness today and of analogous solutions, we as a society need to firmly shape its expanded implementation.

Loneliness is a symptom, not a disease in itself. As hunger points us toward the need to give our bodies nutrition by eating, so loneliness points us toward the need to connect with other people for not only our individual good, but the common good. Just as fast food can sate our symptoms of hunger without providing the nutrition our bodies need to stay healthy, so AI companionship may be able to sate our symptoms of loneliness without providing the human interaction that benefits ourselves and each other.

First, there is an opportunity cost. Time spent engaging with an AI companion – or any technology – is time not available to spend with others. Just as eating a burger and fries fills one up such that one has no appetite for vegetables and fruits, so spending time with an AI companion may leave one with no time for friendships. Given how addictive ubiquitous media already is, especially social media and streaming services whose algorithms deliver content designed to keep us engaged, we should expect AI companions to fuel similar addictions.

Second, by addressing the symptom of loneliness, AI companionship reduces the urgency for all parties to address the underlying issue of social isolation. A lonely person may find that an AI companion makes them feel good enough about themselves that they feel less need to overcome whatever barriers prevent them from engaging with others. At the same time, if an adult child of an aging parent, say, knows that the parent has an AI companion, it lessens the guilt of not calling on that parent. Just as a pain-relieving drug may make me forget my pulled muscle and keep me from limping noticeably, an AI companion may mask the signs of my loneliness from both myself and those around me.

Third, we know that for many people, companions that offer easier reinforcement than real-life human beings can become pseudo-replacements for human interaction. In the United States, dogs (and other pets) have replaced human children, such that Americans spend $124 billion on their pets while neglecting the hundreds of thousands of children in foster care or the more than 100,000 American children seeking adoption. While dog ownership can be a trigger for meeting neighbors on walks or at dog parks, many a pet owner will say that their pet is a more reliable source of affection than the other humans in their lives, to the point that many indicators suggest we prefer pets to people.

Finally, AI companionship treats users as objects to be soothed rather than as subjects. Lonely people, whatever their age, are human actors with the ability to contribute to the world. Loneliness is an evolutionarily developed trigger that prompts us to act. Satisfying the symptom of loneliness in a way that isolates users from others treats them as a problem for AI to solve, just as a pacifier solves a baby’s cry or a screen solves a restless child’s complaints. We live in a world where, increasingly, older people are objectified in this way in media depictions and other cultural representations, and rare is the voice that calls attention to the meaningful contributions to society that people of advanced age can make. AI companionship feeds that narrative of “older people as problems to be solved” at a time when we need to disrupt it.

If AI companionship is a potential harm to lonely adults, what can we do about it? Generally, human society has been unwilling to put technological genies back into bottles. However, since this technology is in its early stages, ethical guidelines can be implemented to shape its use. Some suggestions:

  • Disclosure and distinction: The use of human-sounding voices in current devices like Siri and Alexa has the effect of making these early-AI companions sound “real.” Even those who know better often find themselves treating these voice-enabled assistants as if they were human. To underscore the distinction between AI and human companionship, technology should refrain from using human-sounding voices or appearances, and under no circumstances should AI companions simulate the real people in a user’s life. To further underscore the distinction, AI chatbots should disclose that they are not, in fact, human.
  • Bridging behaviors: AI companions could serve a social purpose if they were designed not to focus on 1:1 engagement but instead to act as connectors to other human users. A lonely person could start a conversation with an AI companion, which then uses information from that conversation to identify another human user who might be a potential friend. While this approach appears today primarily in dating apps, an ethos of connecting humans through technology has the potential to ameliorate the isolation risk of AI companionship.
  • Watch the clock: While data privacy is likely to be a significant battleground in the rollout of AI companions, one area in which sharing data could be healthy is in monitoring length of use. Just as Apple now offers opt-in “Screen Time” monitoring on its products, it may be important to create AI companions with established protocols to limit use, either through hard-stop cutoffs or (via opt-in) by notifying caretakers, family members, or other potential intervenors when AI companionship usage exceeds a certain limit. A rough sketch of what such a protocol might look like follows below.
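
To make that last suggestion concrete, here is a minimal sketch of what a usage-limit protocol might look like. To be clear, everything in it is my assumption: the thresholds, the notify_contact hook, and the premise that a vendor would expose such controls at all.

```python
# Hypothetical sketch of a "watch the clock" protocol for an AI
# companion: track daily usage, hard-stop past one threshold, and
# (if the user opts in) alert a chosen contact past another.
# The thresholds and notify_contact are illustrative assumptions,
# not any vendor's real API.
from dataclasses import dataclass, field
from datetime import date, timedelta

DAILY_LIMIT = timedelta(hours=2)       # hard-stop cutoff (assumed)
NOTIFY_THRESHOLD = timedelta(hours=1)  # opt-in alert level (assumed)


def notify_contact(day: date, total: timedelta) -> None:
    # Placeholder: in practice this might message a family member
    # or caregiver whom the user has designated.
    print(f"{day}: companion use has reached {total}.")


@dataclass
class UsageTracker:
    contact_opted_in: bool = False
    usage: dict = field(default_factory=dict)  # date -> total timedelta

    def record_session(self, day: date, duration: timedelta) -> bool:
        """Log a session; return False if the companion should stop."""
        total = self.usage.get(day, timedelta()) + duration
        self.usage[day] = total
        if self.contact_opted_in and total > NOTIFY_THRESHOLD:
            notify_contact(day, total)
        return total <= DAILY_LIMIT


# Example: a 90-minute session trips the opt-in alert but not the cutoff.
tracker = UsageTracker(contact_opted_in=True)
if not tracker.record_session(date.today(), timedelta(minutes=90)):
    print("Daily limit reached; the companion pauses until tomorrow.")
```

The point of the two thresholds is that the hard stop and the notification are separate levers; a user (or a family) might accept one without the other.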

These are suggestions. I am an expert at absolutely none of this. Still, it seems pretty clear that this is an issue that we will face as a society very soon. We need to start the conversation about how to do so, now.
