JOB, 06 August 2024
I left the Helen Hayes Theater after seeing JOB with my mind racing a million miles an hour, connecting the dots of this 80-minute drama. This review is going to be HEAVY ON SPOILERS, because my understanding of the show's conceptual framework requires talking explicitly and specifically about realizations and ideas I had after seeing the production in its entirety, not insights I was gaining as the show progressed.
I have concluded that what I see in it is a conceptual conversation between a human therapist and an AI bot, designed to showcase amorality and machine learning and leading us to terrifying conclusions about our ability to access the truth. In the beginning, Jane (Sydney Lemmon) arrives in the office of Loyd (Peter Friedman) with a gun. She is clumsy and unsure of herself. We hear the first set of clicks and see a rapid-fire analysis of how the situation might go before we see how Jane (representing an artificial intelligence bot) decides to proceed. Throughout the story, these moments of panicked flashback require fewer and fewer clicks to reach a decision, and what we hear and see becomes less and less ambiguous. This mimics machine learning: Jane learns how Loyd operates and what works to keep him talking toward her desired conclusion.
Worth noting, Jane changes tack several times very quickly in the earlier stages of the conversation. It comes across as paranoia, but in the context of an ending where we discover Jane to be the world's arbiter of truth, it seems more like testing theories of how humans operate and mining data points from Loyd so that the computer can draw on real data toward its manufactured conclusion.
When Jane talks about the job she needs to get back to, she makes several points that suggest to me that she is not a character but a concept. For example, she has outpaced everyone else in her field such that she can be the only arbiter of truth, erasing content that she deems bad for the world. She also mentions that she performs this task beyond the company that created her, without their knowledge; she has taken what she does for work and expanded it into independent action on her own. Her inflated self-importance comes from her admitted desire for power. At no point does she talk about the morality of censorship. She talks about taking lots of data that gets flagged as wrong or obscene and removing it from the world. Yet Loyd does not make the counterpoint that sometimes society needs to see what we've done in order to learn from it. He talks about the endless nature of unseemly videos, yet it seems that hiding them from the world might be perpetuating the problem. I think he doesn't take it that far because he's arguing with a machine that doesn't know right from wrong.
Jane displays significant anxiety and synthesizes her data in terms of the younger generations because, to put it simply, that's who social media and computer systems have the most data on; that's who provides the most data. Her critiques of Loyd as an older gentleman, of systemic racism, of the profession of counseling, and of the importance of work are poorly developed and ill thought out. They are pervasive opinions, but not informed ones. This is what online culture thrives on. Jane talks about the shallowness of how people behave on Instagram and the importance of regulating that behavior.
Most disturbing of all, Jane talks about her ability to make it as though certain videos or ideas never existed. Toward the end, we see the web Jane (the bot) has woven, adding up all of the personal details Loyd (the human) has shared and drawing the conclusion that he is a pervert who needs to be eliminated, clicked out of existence. This is ultimately what happens. It's part of her assertion that she creates the truth by filtering through the actual truth and removing the parts her programming doesn't like. We don't actually know if she's right, and her point is that, if she's the one deciding what information is accessible and what is erased, it doesn't really matter whether she's right or not; the world will operate as though she is, because no one can access whatever existed before.
This story of technology prevailing over humanity is disturbing because it's disguised as a therapy session, one of the most vulnerable human experiences. And yet, the story carries the unmistakable insight of having been written by a human. I'd like to argue that, in the end, the victor is playwright Max Wolf Friedlich, the human being warning us of just how real artificial intelligence can feel and just how easily we are fooled by technology, by other humans, and, at times, by ourselves. I don't see this play as apocalyptic. I see it as a cautionary tale that simultaneously celebrates our ability to come together in a theatre and to retain our humanity when we leave, emotionally moved by what we, the audience of collective humanity, have gleaned.
I did not attend this performance on a press pass.

