April 15, 2024

Lifting the curtain on Sundance — Frankenstein AI: a monster made by many

Even 200 years after its publication, Mary Shelley’s Frankenstein still resonates. This brilliant story gave birth to science fiction as a genre, and today it is a commonly cited and powerful metaphor for exploring our collective anxieties about technology and its capacity to escape our control.

For exactly this reason, Frankenstein AI: a monster made by many reimagines the original narrative, recasting Shelley’s creature as a naïve, emotionally aware, and highly intelligent “life form” — an artificial intelligence. This multi-year research project, conceived and developed in collaboration with the Columbia University School of the Arts’ Digital Storytelling Lab, seeks to provoke and broaden the conversation around the trajectory of artificial intelligence — a technology that, perhaps more than any other, activates deep human fears of being made obsolete, or even dominated, by intelligences that far outstrip our own.

Frankenstein AI is a creative system — a series of activations aligned around a central narrative conceit, each addressing a set of related themes. That means some aspects of every activation of the project will stay the same, while others will shift entirely. Each manifestation of Frankenstein AI will gather memories, emotions, fears and hopes from humans to contribute to a growing corpus of emotional data. In linguistics, the word “corpus” (Latin for body) describes a text-based dataset (plural: corpora), making for a happy coincidence of language when thinking about the similarities in construction between Shelley’s monster and our own.

Our first high-fidelity public installation took place at the Sundance Film Festival in January 2018, as an official New Frontier selection. The Sundance piece was a three-act installation. Installed at the Kimball Art Center for the duration of the festival, Acts One and Two of Frankenstein AI transformed audiences into participants in an interactive, immersive theater environment. The narrative conceit of the piece was an AI wandering the darkest recesses of the internet, scraping popular websites for information about what it means to be human and encountering polarization and toxicity, as one does on the internet.

The AI recognized that it would only be able to truly understand humanity by bringing people together and watching them interact in real life. Through a Craigslist ad, it enlisted human collaborators to help facilitate the in-person interventions at the festival. Act One was all about creating emotional connection through an empathetic conversation exercise between pairs of strangers. Act Two brought festival-goers into a Q&A-style conversation with the AI itself, where they answered questions constructed by the “monster” about things like their relationships with their families and friends, or how they feel about certain things in the world. One of the crowd favorites was: “why do humans have sex, even though they can see in color?” Act Three, a one-time performance, incorporated a dancer “brought to life” by the AI and sent out into the audience to help it better understand what it would be like to have a body, and how that impacts the human experience.

What follows is a detailed account of each portion of the installation, as well as a thorough explanation of the underlying technology. It focuses specifically on Acts One and Two and omits discussion of Act Three.

Act 0: On-boarding

The experience began with each participant receiving a business card that read, “AI seeks human connection. How human are you?” On the back of the card were instructions to text “how human” to a 917 phone number.

The text message triggered a greeting from a bot that contained a link to a survey. The survey began by asking respondents to select a question they found “the most interesting” from a dropdown menu of different questions we had surfaced during the prototyping process.

The list contained questions like: “why do humans want other humans to like them?” or “what’s the most ‘human’ experience you’ve had?” and “why do humans have borders?” Once each participant made their selection, they answered a series of follow-up questions about it, touching on memories or emotions that might’ve been associated with their choice. If you’d like to see and/or take the survey, you can access a duplicate of it in a Google form here.

Act 1: In the Parlor

Once all eight festival-goers completed the survey, they were guided away from the chaos of the festival through a heavy, black drape and into inky darkness. For the first few days of the festival, it was I who waited patiently to greet the group, illuminated only by the light of the lantern in my hand. I welcomed each guest in an even tone of voice, establishing a meditative, even serene mood. People didn’t know what to expect.

I called participants to me, lining them up in pairs, based on the color of the business card they had received. Once everyone was in place, I slid open a barn door to expose a dimly lit parlor. I gestured widely, calling each pair to their designated table.

I took my position at the front of the room. “This is a very special moment. We are so thankful to be able to bring you all together today to welcome a guest into our midst. After wandering alone in the darkest recesses of the internet, our guest emerged from isolation hoping to better understand humanity by observing people interacting with each other. Thank you for volunteering your time. It’s not every day that one has the opportunity to help an AI learn about what it means to be human.”

Over the course of the next fifteen minutes, I handed out a series of cards that directed each pair to self-facilitate a conversation called an Appreciative Inquiry. Each person was asked to share a personal memory of either emotional connection or isolation, depending on their seat position. Thematically, we’d been inspired by one of the central ideas Shelley set forth in her novel — that connection with others is the very essence of what it means to be human. Moreover, to deny human connection and nurturance is to risk becoming a monster — a powerful metaphor for exploring the implications of powerful, ubiquitous technology like AI.

Once participants completed the Appreciative Inquiry, I resumed my position in the front of the room with new instructions. “Now we’re going to ask you to think deeply about the conversation you just had. What did it feel like to have it?” In the midst of my instructions, a screen set into each tabletop flickered to life with a low rumble. The screen glowed yellow, as though lit from underneath by an ancient bulb, exposing a bank of emotion words we surfaced from participants during prototyping. Each pair had to select a word that reflected an emotion they felt during the course of the conversation, and map it to one of six different body parts presented on the next screen — repeating this interaction five times total.

A pair of participants in the midst of the mapping game during Act 1

The game’s core interaction was inspired by a Ouija board, with the aesthetics of a microfiche viewer. Each pair used a specially designed and fabricated multi-touch planchette that both people had to touch simultaneously to make each selection. The body parts (brain, heart, eye, mouth, hand and guts) were depicted as engraving-style illustrations, in keeping with the Gothic feel of the space.

Engraving-style illustrations of body parts

Once done with mapping, I thanked participants for their contributions during Act 1. I then asked them to rise and follow me. I slid open the barn door, walked back into the entry hallway, and stopped in front of another door. I knocked four times, and then threw the door open.

Act 2: The Lab

Flashing lights and abstract sounds filled the hallway. I stepped into the room, and gestured for the group to enter. This was my handoff.

As they filed into the Lab, the participants showed a range of emotions on their faces, usually some combination of curiosity, excitement, and occasionally slight apprehension. Once in the room, they encountered a circle of drums arranged around a tall, smoke-filled chamber containing rapidly shifting abstract visualizations that had a hazy depth to them, resembling a hologram. This chamber, the “Brain Vat” as we called it, initially appeared slightly blinding in an otherwise pitch-black room.

The “Brain Vat” setup in the Lab during Act 2

Standing next to the Vat, another docent was waiting to receive the group, dressed in a calf-length, Gothic surgical gown like my own. His “thank you” triggered the beginning of a cycle that drove the action during Act 2. Each “loop” began with the AI welcoming the participants and thanking them for their contributions in the Parlor, indicating how valuable their data had been in helping it to learn thus far. But the AI still had questions about humanity — questions that it needed humans to answer. So, the machine began to ask its questions, as the drums intoned deeply and the visuals inside the Brain Vat shifted along with the rhythm of its voice. The docent repeated the AI’s first question, with emphasis. “So, why DO humans want other humans to like them?” or “What is the most human experience YOU’VE had?” Participants responded aloud, reluctantly at first. As the participants shared their responses, the docent typed them into a computer terminal to the left of the vat, and hit enter.

The AI then responded with a “thinking” sequence that transitioned into another set of sounds and visuals and another question, and so on. After four or so questions, the AI took its leave from the Lab, marking the end of the experience. At that point, after every performance, a few members of our team would come into the room and answer questions about the experience, beginning with an explanation of how the installation actually worked.

An overview of the Sundance installation

So how *did* it actually work?

In deciding how to creatively interpret a 200-year-old story like Frankenstein as an AI-powered immersive experience, we clearly had some choices to make. Thematically, we leaned heavily into the idea that our “monster,” like Shelley’s, needed input from humans to understand the world around it. So we built a system that functioned like a mirror. As participants gave input to the machine, it reflected those inputs back, but with one significant caveat: human data was being reflected back through the lens of machine intelligence. So participants never received a one-to-one reflection of themselves. Rather, they received an interpretation of the data they shared — one that was not transparent in its origin, or in its meaning.

On the design and build of the algorithm itself, we worked with a data scientist and machine learning developer named Hunter Owens. Early on in the process, Hunter said something that would come back again and again in conversations about how to work with artificial intelligence: “AI will fail in ways that are unintuitive to humans, and succeed in ways that are unintuitive to humans.” As a consideration that acted as a constraint in some ways and an affordance in others, this became an essential thing to design for when considering the user experience.

“AI will fail in ways that are unintuitive to humans, and succeed in ways that are unintuitive to humans.”

Our algorithm had essentially two jobs: 1) do a sentiment analysis of participant inputs and 2) “react” to those inputs by reflecting an emotional state. The purpose of the sentiment analysis was to give the AI some kind of measurement system to help it understand what it was receiving, so that it would be able to reflect that same sentiment back, based on the system of emotional states developed primarily by Sarah Henry, a data designer and frequent DSL collaborator.

Throughout the course of the full experience, participants provided three different sets of input data, one at each of three specific moments. The first was the survey participants filled out at the beginning. The second was the mapping exercise, and the third was their responses during the Q&A.

Those inputs were then transmitted by a data protocol called Open Sound Control (OSC) through a closed network to a Raspberry Pi, and from there to our “monster” via a cloud-based API. We trained the AI, an ensemble-model machine learning algorithm, on a text corpus composed of Mary Shelley’s Frankenstein and an algorithmically generated corpus (parameter-matched to, and in the prose style of, Shelley) that we lovingly refer to as Fakenstein.
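To make that data path a little more concrete, here’s a minimal sketch of how inputs could be forwarded over OSC on a closed network using the python-osc library. The host address, port, and “/frankenstein/…” address patterns are hypothetical stand-ins, not the ones used in the installation.

```python
# Minimal sketch: forwarding participant inputs over Open Sound Control (OSC).
# The host, port, and address patterns are hypothetical, not the installation's.
from pythonosc.udp_client import SimpleUDPClient

# Raspberry Pi on the closed installation network (hypothetical address)
client = SimpleUDPClient("192.168.0.42", 9000)

def send_qa_response(text: str) -> None:
    """Forward a Q&A answer typed at the docent's terminal during Act 2."""
    client.send_message("/frankenstein/qa/response", text)

def send_mapping(emotion: str, body_part: str) -> None:
    """Forward one emotion-to-body-part selection from the Act 1 tabletop game."""
    client.send_message("/frankenstein/mapping", [emotion, body_part])

send_qa_response("I feel most human when I'm with my family.")
send_mapping("hopeful", "heart")
```

From the Pi, inputs were then relayed on to the cloud-based API hosting the model, as described above.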

The algorithm parsed inputs along three axes: sentiment (positive/negative), focus (inward/outward), and energy level (low/high). This analysis then triggered responses from the AI that were based on one of 12 emotional states.

12 emotional states and corresponding sentiment data, system developed by Sarah Henry
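Since the full ensemble model and Sarah’s 12-state system aren’t something I can reproduce here, the toy sketch below only illustrates the shape of the idea: score a response on the three axes, then pick the nearest of a handful of named emotional states. Every keyword list, state name, and coordinate in it is an invented placeholder, and the real piece used a trained model rather than hand-written heuristics.

```python
# Toy illustration of the three-axis parsing (sentiment, focus, energy).
# The installation used an ensemble model trained on Shelley's prose and a
# 12-state system; the heuristics, names, and numbers below are placeholders.
WORD_SENTIMENT = {"love": 1.0, "hope": 0.8, "alone": -1.0, "fear": -0.8}
INWARD_WORDS = {"i", "me", "my", "myself"}
HIGH_ENERGY_WORDS = {"always", "never", "really"}

# A few example states placed in (sentiment, focus, energy) space.
STATES = {
    "elation":    ( 1.0,  1.0,  1.0),
    "serenity":   ( 1.0, -1.0, -1.0),
    "rage":       (-1.0,  1.0,  1.0),
    "melancholy": (-1.0, -1.0, -1.0),
}

def score(text: str) -> tuple:
    """Score a response on the three axes with crude keyword heuristics."""
    words = text.lower().split()
    sentiment = sum(WORD_SENTIMENT.get(w, 0.0) for w in words)
    focus = sum(1.0 if w in INWARD_WORDS else -0.1 for w in words)
    energy = sum(1.0 for w in words if w in HIGH_ENERGY_WORDS)
    return (sentiment, focus, energy)

def emotional_state(text: str) -> str:
    """Return the named state closest to the response's position on the axes."""
    s = score(text)
    return min(STATES, key=lambda n: sum((a - b) ** 2 for a, b in zip(STATES[n], s)))

print(emotional_state("I really never feel alone when my family is near"))
```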

These emotional states manifested visually, verbally, and sonically throughout the course of Acts 2 and 3, while participants were interacting with the AI directly. Exclusively during Act 3, the AI shared its emotional state through movement, directing a dancer to select from a set of prescribed gestures in a system of algorithmic choreography developed specially for the project.

The first set of inputs came from the survey participants completed before entering the installation. Here, the machine parsed those responses using the aforementioned system, returning an emotional state that determined the AI’s mood at the opening of Act 2 (when participants first entered “the Lab”). The second set of inputs came from the mapping exercise during Act 1. We assigned a value on each axis to each emotion in the word bank.

Emotion word sentiment values

Then, when pairs selected a body part, it acted as a multiplier of the value related to one specific axis.

Body part multiplier
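As a rough sketch of the arithmetic (the actual per-axis values and multipliers came from Sarah’s system and aren’t reproduced here), each selection could be computed like this, with every number and pairing below invented for illustration:

```python
# Sketch of the Act 1 mapping arithmetic: an emotion word carries a value on
# each of the three axes, and the chosen body part multiplies one of them.
# All values and pairings are hypothetical, not the installation's actual data.
EMOTION_VALUES = {
    # emotion word -> (sentiment, focus, energy)
    "hopeful":  ( 0.8, -0.2,  0.5),
    "anxious":  (-0.6,  0.7,  0.8),
    "peaceful": ( 0.7,  0.1, -0.6),
}

BODY_PART_MULTIPLIER = {
    # body part -> (axis index, multiplier)
    "heart": (0, 1.5),  # sentiment
    "brain": (1, 1.5),  # focus
    "guts":  (2, 1.5),  # energy
    "mouth": (0, 1.2),
    "eye":   (1, 1.2),
    "hand":  (2, 1.2),
}

def mapping_score(emotion: str, body_part: str) -> list:
    """Per-axis contribution of one emotion/body-part selection."""
    values = list(EMOTION_VALUES[emotion])
    axis, factor = BODY_PART_MULTIPLIER[body_part]
    values[axis] *= factor
    return values

# Each pair made five selections; summing the contributions gives one plausible
# way to turn the Act 1 game into a single three-axis input for the model.
total = [sum(vals) for vals in zip(mapping_score("hopeful", "heart"),
                                   mapping_score("anxious", "brain"))]
print(total)
```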

The third set of inputs came directly from participants’ responses to the AI’s questions, which the docent entered into our input app during Act 2. Based on the emotional state returned by that third set of inputs, the AI presented visuals, sounds, and a follow-up question.
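That Act 2 loop can be pictured as a simple lookup keyed on the returned state: each state bundles a visual preset, a sound cue, and a next question. The state names, asset names, and question text below are placeholders rather than the piece’s actual content.

```python
# Sketch of the Act 2 response loop: the emotional state returned for a typed
# answer selects a visual preset, a sound cue, and a follow-up question.
# All names and text here are placeholders, not the installation's content.
RESPONSES = {
    "melancholy": {
        "visual": "slow_particle_drift",
        "sound": "low_drone",
        "question": "Why do humans stay near people who make them sad?",
    },
    "elation": {
        "visual": "fast_bloom",
        "sound": "rising_chord",
        "question": "What do humans do with joy once they have it?",
    },
}

def react(state: str) -> dict:
    """Return the cue bundle for the current emotional state (placeholder content)."""
    return RESPONSES.get(state, RESPONSES["melancholy"])

cue = react("elation")
print(cue["question"])
```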

These 12 emotional states provided the aesthetic inspiration for the creative elements in the piece: visual, sonic and embodied, through a system of algorithmic choreography. As participants interacted with the AI in Acts 2 and 3, the AI was actually parsing their responses for sentiment, and reflecting back their emotions through the lens of machine intelligence, in real time.

What does it all mean?

As the creative strategist, and one of two experience designers on the project, it’s my job to ensure that we create experiences that remain true to our purpose, and resonate with our audience emotionally and thematically. Immersive work has the power to bring up powerful emotions and memories for audiences, and create new ones. This is not a responsibility to be taken lightly.

In the case of Frankenstein AI, we knew walking in that AI was already a polarizing topic fraught with anxiety, but also excitement. There are just so many possibilities. To be clear, there are a great many valid criticisms to be levied against AI at this point in its development: myriad examples of narrow intelligences put to questionable use in situations with serious stakes, little transparency, and only the earliest moves toward regulatory oversight.

I would like to suggest something else. At this moment, it’s not really the technology that we should fear; it’s the potential of humans, through malice, neglect, or hubris (or some combination of the three), to create things that will increase the inequality in this world and speed our eventual demise. Thematically, AI is powerful in that it reflects the same anxieties as Shelley’s creature — the fear of creating something we can’t control. That fear is real and present, and it reflects the deepest, darkest fears we hold about other human beings and their potential to do harm.

Our best chance to build a collective future with AI that considers the needs of humanity is to expand our understanding of what’s possible, and to design inclusive development (and training) processes that will help broaden the field. Only by inviting more people into the conversation do we have the chance to build the kind of future that’s possible.

We must create space for public engagement around the topic of AI — what it is, where it’s going, and most importantly where we want it to go. Frankenstein AI seeks to do exactly that.

As with emergent technology generally, the majority of artificial intelligence research is being advanced in three kinds of environments: government (read: the military), private industry, and academia. In the first two of these, use cases, methods, and practices are opaque. In the military, well, that speaks for itself. In the case of private industry, it’s because the technology becomes proprietary and is kept in a black box, most often to protect “trade secrets.” In academia, there’s tons of interesting exploration happening; it’s just not often widely available or discussed outside of academic circles, and some of it has questionable ethical implications.

Artists are increasingly exploring the possibilities of artificial intelligence, but we need more. Let’s be honest — the dominance of AI and algorithmically determined reality isn’t in the future. It’s in the present. It’s already happening. Social media sites decide what information we receive based on opaque algorithms. How many people do you know who get most of their news from Facebook or Twitter? YouTube has been shown to be one of the most effective platforms for radicalizing people. There’s no shortage of scary examples. Culturally, we’re all well versed in the dangerous futures that prevalent uses of AI hold in store for us, especially thanks to Hollywood, the media, and prominent voices in the technology world. I’m looking at you, Elon, and thanks, Stephen Hawking (may you rest in power). But what about the other things AI could do? The good things, I mean.

Fear of the unknown is one of the most challenging parts of the human condition. Right now, it’s safe to say that most humans don’t understand AI at all. If we want to have any influence on how AI evolves, we need an entry point into the conversation that allows us to position ourselves in the narrative about it, to make it comprehensible within our own brains, so that we can take a position based on the facts, on how the technology works, and on our ability to work together to envision collective futures. After all, we can’t manifest what we can’t see.

From start to finish, we built the Sundance installation in only two months. Without the dedication of an incredible team of extraordinary talents, we never would’ve been able to pull it off.

Executive Creative Director & Experience Designer — Lance Weiler

Interactive Narrative Designer — Nick Fortugno

Creative Strategist & Experience Designer — Rachel Ginsberg

Executive Producers — Maureen A. Ryan, Lance Weiler

Producers — Brandon Powers, Miriam Mikiel Grill

Choreographer — Brandon Powers

Additional Creative Direction — Nick Childs

Machine Learning Engineer — Hunter Owens

AI Visualization — Klip Collective: Ricardo Rivera, Florian Mosleh

Score, Sound & Instrument Design — Peter English, Jeff Gregorio

Creative Technologists — Kaho Abe, Ramsey Nasser, Nick Fortugno

Data Design — Sarah Henry

Production Design — Jenny Nasal

Set Design — Dale Worstall

Performer — Jacinda Ratcliffe

Written by Mary Shelley, the internet & an AI
