Taking the High Road

A psychological model reveals how artists bring necessary friction and wisdom to our tech-preoccupied society, and to our future

Willa Köerner

To be human is to grow through friction and opposition. Living through this push and pull can be painful, but it can also stimulate the revelations needed for growth, and for eventual departures from the status quo.

These days, though, there’s so much friction it can feel hard to breathe. Despite the proliferation of new “smart” and “innovative” technologies aimed at bettering our daily lives, the planet is hotter and sicker than ever before, and we face enormous challenges in our quest for true equity, justice, and environmental resilience. In this uphill climb, we need constant reminders of the agency we still have to build the future we want—not just the future being sold to us by the kingpin beneficiaries of late-stage capitalism. Often, this reminder comes from artists.

Artists help us reclaim our agency when they make visible the pitfalls and possibilities of otherwise invisible or overlooked forces. But how do they do this, exactly? To put it simply, they do this by making us collectively more wise. When we’re wise, we’re more aware of what’s holding us back, and more capable of sculpting a shared reality that serves everyone.

By understanding the psychology behind how our brains process information, we can understand why it’s so important to listen to artists—and why the friction they tease out can be transformative for our co-created future.

In his book Social Intelligence (2006), psychologist Daniel Goleman lays out the two main ways our brains process information: through a “high road” and a “low road.” Low-road thinking is instinctual, and happens when the brain takes action without our conscious awareness. The low road is where “gut instincts” come from; it’s also responsible for triggering fight-or-flight responses. Evolutionarily, low-road responses play a role in keeping us safe, and make life easier by freeing our conscious minds from the burden of constant self-awareness.

On the other hand, the high road runs through a part of our neural system that operates by way of conscious effort and analysis. This is the kind of mental activity we’re acutely aware of, which we can perceive anytime we debate with ourselves internally, consider a response to a question, or notice an anxious thought pattern. Taking the high road is necessary to develop new perspectives, challenge existing mindsets, and rewire parts of our brain that aren’t working in our best interest.

As a human, you need both the high road and the low road to achieve peak cognitive acuity. Psychologist Marsha Linehan has a theory that through the balanced combination of high- and low-road thinking, we can access a heightened state of mental awareness called the “wise mind.” The wise mind is able to lean on instinctual, habitual, and emotional intelligence to work efficiently, while also bringing in critical, logical, and analytical ways of thinking when it’s necessary to reevaluate our instincts, or learn new ways of responding to the world.

The low and high roads, and the symbiotic relationship they create, are a helpful metaphor for appreciating the two forces necessary to build a balanced future: one focused on creating efficiency and ease, the other on bringing friction and criticality into the equation. To thrive as individuals, we need to think efficiently—but we also need to know when to pick apart our instincts, and be mindful of what’s going on beneath the surface. The same is true for our cultures and societies, and this is where artists come in: they can bring high-road thinking to a world in desperate need of conscious realignment.

To better grasp why this high- and low-road metaphor applies to how we’re building the future, let’s take a step back. Today, we tend to use the term “technology” as a sweeping, all-encompassing word for the tools being built to enhance our lives. Generally speaking, we’re talking about cutting-edge software and hardware mostly developed in the last few decades—things like personal computers, the internet, “smart” devices, artificial intelligence, robotics, and automation—all of which share the goal of making daily life more seamless, effortless, and enjoyable. In this way, the more our technology develops, the more we find ourselves on the friction-free low road: Alexa orders us fresh groceries before we realize we’re running low, our smart watches tell us when to stand up or sit, Gmail handily offers to finish our sentences, and our Instagram feeds keep us entranced. On the low road, we can just keep scrolling into infinity.

Contemporary technologies are so good at taking us down the low road, we often don’t notice how low we’re going. This is where the need for friction comes into play. As a society living through a pandemic and endlessly dire news cycles, of course we yearn for simple fixes to bring a brighter future closer. But we must be wary of technology that arrives as a Trojan horse, dressed up as positive innovation and greater collective intelligence, and we must be critical of where our pursuit of ease is sending us.

To do this, we must look to artists, specifically artists making work with and about futuristic technologies. It is these artists who are helping us see who we are now, how we’re setting ourselves up to evolve into the future—and whether that’s wise.

The best artists making work with and about technology today are picking apart the networked, computational, and automated underpinnings that continue to fabricate more and more of our shared reality, asking, “How does this work, and what is it doing to us?” Their questioning doesn’t place a technology into the false dichotomy of “good” or “bad”; instead, it uses technology as a portal through which to reveal assumptions about what it means to be human. Through the friction created by this mindful, high-road exploration, we’re offered a chance to realign our trajectory.

To concretely understand how artists are using high-road thinking to reveal technology’s effect on the future, consider the example of artists engaging with artificial intelligence. AI systems learn to “think” by ingesting large datasets, which they turn into generalizations they can draw conclusions from—in a sense, creating their own low-road thought patterns. These systems can then be applied at scale to make decisions in line with whatever data they were trained on. Where and how these datasets are obtained, and the biases they can perpetuate, has often been overlooked by the technologists bringing AI technologies to life.
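The dynamic described above can be reduced to a deliberately tiny sketch (purely illustrative: the data, labels, and function names here are invented, and no real AI system is this simple). A model that merely memorizes the most common label for each group will faithfully reproduce whatever skew its training data contains:

```python
from collections import Counter

# A toy "training set": each example pairs a feature with a human-assigned
# label. The labels are deliberately skewed, standing in for a biased dataset.
biased_training_data = [
    ("group_a", "professional"), ("group_a", "professional"),
    ("group_a", "professional"), ("group_a", "criminal"),
    ("group_b", "criminal"), ("group_b", "criminal"),
    ("group_b", "criminal"), ("group_b", "professional"),
]

def train(examples):
    """Learn the most frequent label for each feature -- a crude stand-in
    for the generalizations a statistical model draws from its data."""
    counts_by_feature = {}
    for feature, label in examples:
        counts_by_feature.setdefault(feature, Counter())[label] += 1
    return {f: counts.most_common(1)[0][0]
            for f, counts in counts_by_feature.items()}

model = train(biased_training_data)
print(model["group_a"])  # "professional"
print(model["group_b"])  # "criminal" -- the model reproduces its data's skew
```

Scaled up from a frequency table to millions of parameters, the same dynamic holds: a model’s “generalizations” can only be as fair as the data that produced them.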

The biased tendencies of AI were more fully brought into public consciousness in 2019, when artist Trevor Paglen collaborated with researcher Kate Crawford to release ImageNet Roulette, a facial recognition AI trained to label your identity after you uploaded a photo of your face. The project drew its data from something called ImageNet, a vast database with over 14 million images that was developed to train interfaces used in things like self-driving cars and facial recognition software. But when Paglen and Crawford’s project debuted, it became obvious just how biased (and downright racist) the dataset actually was. When put to the test, ImageNet Roulette would often label people with offensive racial slurs, showing just how little control the developers had over the “learning” that would happen when AI systems ingested their data.

As the ImageNet Roulette project pointed out, a key problem is that the datasets we have available to “teach” AI systems are flawed, largely because they’ve been created by a society that’s entrenched in racist, ableist, patriarchal ways of thinking and operating. These ways of thinking don’t point to the future we want; instead, they train AI to use low-road thought patterns that keep us tethered to our deeply troubled past. Without critical, high-road thinking to intervene, these biased datasets threaten to further perpetuate (and likely magnify) the effects of systemic racism.

In much of her work, artist Stephanie Dinkins adeptly points out how developments in AI technologies perpetuate biases, particularly in relation to race and gender. With the goal of driving technologists to create more equitable and ethical AI ecosystems, she has created many works that engage viewers’ imaginations, prompting us to rethink how AI systems could be diversified and complexified to work better.

In her project AI.ASSEMBLY (2017–present), Dinkins convened a series of workshops where participants could constructively think through “what AI needs from us, and what we want from it.” As Dinkins writes on her website, “[AI] systems are encoded with the same biases responsible for the myriad systemic injustices we experience today. We can no longer afford to be passive consumers or oblivious subjects to algorithmic systems that significantly impact how and where we live, who we love, and our ability to build and distribute wealth.” Dinkins points out how important it is to critique the underpinnings of technologies that take away our cognitive autonomy, because often, they are not programmed in ways that serve everyone well.

As one more example of an artist who is heightening our awareness of how AI works (and how it could work), consider Rashaad Newsome’s installation To Be Real, for which he created a cloud-based AI persona called Being (2019). To generate the AI’s intelligence, Newsome used the works of radical authors and theorists—such as Paulo Freire, Michel Foucault, and bell hooks—as input. Consider what this means for how the AI will learn to think, in contrast to how AIs fed the ImageNet dataset learn to think.

According to a statement provided by the Fort Mason Center, where Newsome’s work was exhibited in the winter of 2020, “The figure is additionally queered through contemporary assemblage: a lower body cut from a life-like sex doll, outfitted in drag padding; a custom wig, acrylic nails, and high heel boots; and a dress form that fuses traditional African and drag ballroom aesthetics. Together, the collaged and sculptural figures draw from Queer, Black, and Ballroom life itself, pointing to the future utopias that these lives represent and inspire.” Newsome’s work is phenomenal in that it is critical of AI while simultaneously celebrating the possibilities the technology can unlock when developed with a more kaleidoscopic, inclusive, and “queer” dataset.

Beyond just reimagining what and how an AI can learn, Newsome is also asking who can benefit from AI’s applications—especially outside of capitalistic constructs. Currently, Newsome is pushing Being into its next iteration, Being 1.5, which will evolve the non-binary cyborg into a therapy app specifically tailored to help Black people overcome trauma. In a video produced for Eyebeam’s “What Comes Next” series, Newsome explains, “Being 1.5 is an effort to make online mental healthcare broadly accessible to the Black community by leveraging the possibilities of machine learning.” Being 1.5 will model its behavior after actual Black therapists, and will assist its users in “decolonizing their minds and imaginations” through daily affirmations, meditation, and dance therapy.

As we learn from Newsome, Dinkins, and Paglen’s projects, technology isn’t inherently good or bad. Instead, technologies are just a reflection of us—of our past missteps, our current struggles to define our collective identity, and the friction we feel as we try to settle on a shared vision for the future. Each time we take a moment to think critically about where we’re headed—every time an artist prompts us to pop out of low-road thinking, and to instead use high-road thinking to see how the future is actually taking shape around us—we have an opportunity to become collectively more wise.

In a recent interview, artist Sondra Perry muses about how new technologies play a role in her work, and how they affect us overall: “All of these new technological spaces of representation are spaces where old stories are becoming new again, or where the old is being seen through a different literal lens. [With every new technology that emerges,] we’re getting to rediscover who we are once more.”

“How can dreams posit new futures?” Perry later asks. This is the work of the future-focused artist: To pull apart the tools that seek to define our futures, and remind us to be mindful of their effect on us as humans, and as conscious dreamers.

It’s not always easy to take the high road. On the contrary, working through that friction can be extremely hard. But as we keep pushing onwards—attempting to save this thwarted planet and our traumatized species from an increasingly dystopian future—our critical minds, combined with an unrelenting expectation that we can and must do better, may be all we have to keep us going.