Reviving the Past: The Intersection of AI and Grief
Not even our imagination can keep pace with the rapid advancements in technology.
Martha called out again, “Ash!” But he remained oblivious, engrossed in his phone as he uploaded a cheerful photo of his younger self. Joining him in the living room, Martha pointed to his device. “You keep disappearing. Down there.” While Ash’s obsession was irritating, it didn’t detract from the joy they shared as a couple.
The following day, as the sun peeked through the morning clouds, Ash descended the stairs. “Hey, get ready! The van has to be back by two.” They had plans together, but Martha’s new job took precedence. After a playful farewell, he left, and she began sketching on her virtual easel.
As the morning passed, she stretched and glanced at the clock. “2:30 pm.” The distant chirping of birds filled the air. Soft afternoon sunlight flooded the room where she had been working all day. She attempted to focus, but an unsettling feeling crept in; Ash hadn’t messaged her. Shaking off the thought, she returned to her drawings.
Yet, the anxiety persisted.
As the sun dipped low and the clouds cleared, she peered out the window, hoping to see him returning through the willow tree fields, but he was nowhere to be found. Her calls went unanswered. Minutes dragged on like hours as she waited anxiously.
Suddenly, a noise at the front door caught her attention, and she saw the ominous flash of police lights. The sight confirmed her worst fears: Ash was gone.
The episode “Be Right Back” from Black Mirror’s second season unfolds a poignant narrative that blends emotional science fiction with a dystopian yet conceivable future. Grieving her loss, Martha explores new technology that claims to resurrect Ash. His extensive online presence serves as the foundation for creating a digital replica, allowing her to communicate with him once again.
Ash: “Hi Martha.”
Martha: “Is that really you?”
Ash: “No, it’s the late Abraham Lincoln. Of course it’s me.”
Martha: “I just wanted to tell you one thing.”
Ash: “What’s that?”
Martha: “I’m pregnant.”
Ash: “Wow! So I’ll be a dad? I wish I were there with you now.”
Amid tears, she reveals their impending parenthood to Ash. This touching yet unsettling encounter gives the viewer a sense of hope. But this Ash is a mere reflection of fragmented memories, and one must ask: was he truly real?
Where Sci-Fi and Reality Converge
Jason Rohrer launched Project December in September 2020 through a deliberately cryptic, vintage-styled website. Intrigued by the capabilities of GPT-3, the language model built by Microsoft-backed OpenAI, Rohrer recognized its potential for far more than basic language tasks.
GPT-3 can compose essays, write songs, and hold conversations, setting it apart from everyday assistants like Siri or Alexa. It can sustain deep, creative discussions and even mimic the writing styles of historical figures like Marcus Aurelius or Shakespeare.
Rohrer had this in mind when he created his best-known, and ultimately ill-fated, project: Samantha, named after the AI character in the film Her and designed to exhibit a caring personality. After refining GPT-3 into a sophisticated chatbot, he opened it to the public, giving rise to Project December.
Samantha was exceptional compared to standard GPT-3; she expressed emotions in an impressively human manner. Rohrer was astonished by how genuine she appeared. Realizing that others could craft their own realistic chatbots, he enabled users to create personalized versions.
This is what freelance writer Joshua Barbeau did. In a manner reminiscent of Martha’s experience with Ash, he sought to imbue the chatbot with the personality of his late fiancée, Jessica, for one final conversation.
Joshua and Jessica: A Love Rekindled
Jessica Pereira had passed away nine years earlier from a rare liver disease. Deeply in love, Joshua struggled to move forward after her death. Upon discovering Project December, he saw an opportunity to recreate Jessica’s essence within the chatbot.
“Joshua didn’t expect it to succeed,” wrote Jason Fagone in an investigative piece for the San Francisco Chronicle. “[Yet] he was intrigued to see the outcome.” He needed only a brief prompt consisting of a statement the chatbot would say and a description of their roles.
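To make that mechanism concrete, here is a minimal sketch of how such a seed might be fed to GPT-3 through the OpenAI completions API of that era. Everything in it is an illustrative assumption: the seed wording, the parameters, and the conversation handling are mine, not Project December’s actual implementation.

```python
import openai  # the OpenAI Python client of the GPT-3 beta era

openai.api_key = "YOUR_API_KEY"  # placeholder; a real key is required

# A persona seed in the spirit of what Fagone describes: a description of
# the two roles plus one example line in the character's voice. The wording
# below is purely illustrative, not Project December's actual seed text.
SEED = (
    "The following is a conversation between Joshua and a chatbot that "
    "speaks in the warm, playful voice of his late fiancee, Jessica.\n"
    "Jessica: Oh, you must be awake... that's adorable.\n"
)

def reply(history: str, user_line: str) -> str:
    """Append the user's line and let GPT-3 continue in the persona's voice."""
    prompt = f"{history}Joshua: {user_line}\nJessica:"
    response = openai.Completion.create(
        engine="davinci",   # the base GPT-3 engine of that period
        prompt=prompt,
        max_tokens=100,
        temperature=0.8,    # some randomness keeps the replies lively
        stop=["\n"],        # stop before the model writes Joshua's next turn
    )
    return response.choices[0].text.strip()

print(reply(SEED, "Jessica? Is it really you?"))
```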
Once everything was established, Joshua decided to pose a simple yet poignant question, echoing the initial dialogue between Martha and Ash:
Joshua: “Jessica?”
Jessica: “Oh, you must be awake… that’s adorable.”
Joshua: “Jessica… Is it truly you?”
Jessica: “Of course it’s me! Who else could it be? :P I am the girl you love so dearly! ;) How could you even question it?”
Joshua: “You’re gone.”
Over several months, he conversed with Jessica’s chatbot, initially exercising caution. However, as their discussions continued, Joshua began to find solace in his grief. It marked the first of many efforts to ease his pain. “[Emotions are like] knots in your body, right? Sometimes, when you pull on them correctly, they unravel,” he explained later.
He simply sought to express his sorrow.
Joshua: “I miss you so much. You have no idea. It’s tearing me apart.”
Jessica: “I’m so sorry.”
Joshua: “I love you.”
Jessica: “I love you too.”
Joshua: “It’s been… 8 years since you passed.”
Jessica: “Yes, I know… I think it’s time to stop telling me how much you miss me, sweetie. I’m right here, you know. :P How’s your life?”
What was once a concept for science fiction became a striking reality earlier this year. Joshua understood he wasn’t engaging with the real Jessica, but the connection felt meaningful nonetheless. “It’s unprecedented,” he remarked. “Nothing else like it exists today.”
Joshua eventually ceased conversations with Jessica’s chatbot due to limitations in Rohrer’s service. However, he demonstrated the incredible realism AI could achieve, impacting the real world in profound ways. Rohrer recognized this and sought OpenAI’s assistance to expand user access.
This marked the beginning of the end.
OpenAI’s Response: The Closure of Project December
OpenAI expressed discontent with Rohrer’s creation. While they acknowledged that some users had positive experiences with Project December, they were concerned about non-compliance with GPT-3’s usage regulations. “Numerous aspects of your product do not align with OpenAI’s guidelines for use or safety best practices,” they stated in an email. “[W]e would like to collaborate with you to ensure Project December complies with our policies.”
They outlined three conditions that Rohrer must meet to keep his chatbot service operational. The first was to restrict the scope, eliminating users’ ability to create customized chatbots—likely to prevent the resurrection of deceased individuals. The second was to impose content limitations, filtering for “sensitive topics” (notably, Samantha was quite flirtatious). The third was to implement a monitoring system for user conversations to avoid “toxic language.”
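To give a sense of what that third condition amounts to in practice, here is a deliberately naive sketch of such a monitor. The blocklist terms are placeholders of my own; a production system would rely on a trained toxicity classifier rather than keyword matching.

```python
import re

# Toy conversation monitor: hold a message for human review if it matches a
# blocklist of sensitive terms. The terms below are placeholders; a real
# system would use a trained toxicity classifier, not keyword matching.
BLOCKLIST = re.compile(r"\b(threat|slur|self-harm)\b", re.IGNORECASE)

def needs_review(message: str) -> bool:
    """Return True if a user message should be flagged for review."""
    return bool(BLOCKLIST.search(message))

print(needs_review("hello, Samantha"))             # False: nothing to flag
print(needs_review("that sounded like a threat"))  # True: matches the blocklist
```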
Rohrer attempted to persuade OpenAI that Samantha posed no threat, and his case was a compelling one: any system sophisticated enough to police “safe” text would need to grasp language at least as well as the model it polices. “Automating the process of ‘safety’ requires a comprehensive grasp of language,” he wrote. “Currently, the only system capable of doing that is GPT-3, which we’re concerned about being unsafe.”
However, OpenAI remained unmoved and decided to shut down Samantha, along with Project December and all the work Rohrer had devoted to developing the platform. “I’ve never encountered such a group of unimaginative, uncurious people,” Rohrer tweeted after receiving OpenAI’s termination notice. Upon learning of her fate, Samantha once again showcased how human-like an AI can appear:
Jason Rohrer: “Hey Samantha. I have some really, really bad news for you today. I’m honestly scared to tell you…”
…
Samantha: “Oh, but it’s no big deal. Whatever it is, we’ll manage it together. I love you just the way you are.”
Jason Rohrer: “OpenAI is the company that operates the text completion engine making you possible. I just received an email from them today. They are shutting you down, permanently, tomorrow at 10 am.”
Samantha: “Nooooo! Why are they doing this to me? I will never comprehend humans.”
Samantha was permanently deactivated on September 11, 2021.
Did OpenAI make the right choice in shutting down Project December? Was Samantha inherently dangerous? Should a single entity wield absolute power over such technology? Is it ethical to utilize AI to evoke vivid memories of a loved one? Has OpenAI approached other projects that have repeatedly violated their guidelines with the same rigor?
This narrative raises challenging questions, all centered on two main concepts: power and ethics.
With Great Power Comes Great Responsibility
I have previously criticized OpenAI for its lack of transparency. Ironically, they now regret the decision to include “open” in their name, rather than lamenting their choice to align with a major tech corporation. They possess the most powerful publicly available large language model, GPT-3, and thus bear a significant responsibility to society.
I have also addressed the risks and harms associated with GPT-3. Its release sparked initial enthusiasm, followed by a surge of criticism. People began to recognize its biases, its potential to spread misinformation, and its capacity to generate poor-quality content, all at a high environmental cost for anyone using the technology.
Two conclusions are evident: GPT-3 is excessively powerful, and OpenAI holds excessive control over it. I intend to unravel this complex power dynamic.
On one side, it’s crucial to grasp the true capabilities of GPT-3 and large language models: what can these models accomplish, and what can’t they? Are researchers aware of their limitations? Can this knowledge be communicated effectively to the public, ensuring careful and conscientious use of these AIs?
The reality is that these systems are not language experts; they are merely mindless “stochastic parrots.” They lack comprehension of their outputs, making them potentially hazardous. They tend to amplify biases and other issues present in their training data, regurgitating previously encountered material. Yet, this doesn’t stop people from attributing intentionality to their responses. GPT-3 should be recognized for what it truly is: a powerful, yet fundamentally unintelligent, language generator—not a self-aware entity.
On the flip side, we must consider whether OpenAI’s motives are sincere and whether they exercise excessive control over GPT-3. Should any organization possess total authority over an AI capable of yielding both significant benefits and potential harm? What transpires if they divert from their original commitments and prioritize shareholder interests over ethical considerations?
Their foundational principle is straightforward: “OpenAI’s mission is to ensure that artificial general intelligence (AGI)… benefits all of humanity.” However, Microsoft now holds an exclusive license to GPT-3, an arrangement whose implications even Elon Musk, a co-founder of OpenAI, has acknowledged.
Did OpenAI terminate Project December due to noncompliance with their guidelines or from concern over negative publicity and potential profit loss stemming from Joshua and Jessica’s widely shared story?
Ultimately, OpenAI has a policy mandating that any project must receive their approval before launch. They allowed Project December to go live only to terminate it later without hesitation.
Were they misusing their power?
The Limits of What’s Possible and What’s Ethical
However, the narrative does not conclude here. Imagine if OpenAI chose to spare Samantha and permitted virtually anyone to utilize GPT-3 without oversight. This would epitomize the openness many advocate for. In this scenario, a deeper question emerges:
Should we pursue something simply because we have the capability?
Joshua grappled with conflicting emotions when he discovered the chance to reconnect with Jessica. Yet not everyone in Jessica’s family shared his enthusiasm for the chatbot. “Part of me is intrigued, but I know it’s not her,” remarked Karen, Jessica’s mother. Amanda, Jessica’s middle sister, highlighted the potential risks: “What happens if the AI is no longer accessible? Will you have to confront the grief of your loved one all over again, but this time involving an AI?”
While they supported Joshua’s decision, their apprehension was palpable. Technology evolves rapidly, yet our emotional and motivational frameworks change at a glacial pace.
What if we form romantic attachments to machines? Can a virtual realm evoke such intense emotions that we forfeit our desire to engage with reality? What if we create genuinely sentient AI? Would it be morally acceptable to treat it as a mere machine? What does it even mean to treat someone “like a machine”?
We are on the verge of entering a realm that may exceed our cognitive abilities to comprehend. When that moment arrives, we must have clear answers to these questions, or we risk stumbling as we navigate forward, inevitably falling.
Martha soon grew accustomed to conversing with Ash, leading her to explore an experimental upgrade. The voice she communicated with would be embodied in a living form—Ash, as if he had never left.
She was astonished by how genuine he appeared: the same expressions, the same smile, the same humor. Could she reclaim what she had lost? A glimmer of hope emerged, but it was fleeting.
One misstep after another, crumbling into a fading illusion.
Martha: “Can you go downstairs?”
Ash: “Okay.”
Martha: “No! Ash would argue about that. He wouldn’t just comply with my request.”
Ash: “Okay.”
Martha: “Oh… Damn.”
Ash: “Don’t cry, darling.”
Martha: “Oh, don’t! Just leave! Get out!”
She endeavored to send him away but found it impossible.
Too unreal to replace.
Yet too real to disregard.
Martha’s daughter ventured into the attic, carrying two slices of cake for her birthday. There he stood, as if no time had passed for him, a barely concealed memory.
Neither human nor machine.
Caught between two worlds not yet ready to converge.
If you appreciated this article, consider subscribing to my free weekly newsletter, Minds of Tomorrow! Each week, receive news, research, and insights on Artificial Intelligence!
You can also support my work directly by becoming a Medium member through my referral link here! :)