Cloudy with a Chance of Bias
Considering the terroir of new media
Rahel Aima
When we talk about the field of art and technology today, we’re generally talking about the developments of the last seventy or so years. It’s an arbitrary distinction. Art has always engaged with the new technologies of its time: the invention of woodblock printing during the Tang Dynasty, the rise of canvas during the Italian Renaissance, the development of various synthetic pigments, and, during the 19th century, the paint tubes that allowed artists to paint outside. These technologies emerged in response to their particular environments: the availability of natural resources, as well as the weather. Canvas, for example, rose in popularity when Venetian painters realized that it was much better suited to their humid environment. It was less likely to warp than wooden panels, held up far better than fresco in the damp, and happened to be in plentiful supply, as it was also used for ship sails during a period when the city was a major mercantile hub.
We tend to think of the technologies described as new media today as untethered and placeless, but they are products of their environments too. Many of them, including digital photography, the GPS that underwrites locational media, computing, and the internet, were originally invented for military applications. Today, many of these same surveillance and tracking operations are outsourced to corporations like Palantir or Amazon Web Services, which develop and host data mining and databasing software for organizations like ICE. Following the scholarship of critical race technologists like Simone Browne, Safiya Noble, and Ruha Benjamin, we generally accept that these technologies, in the hands of corporations like Google or Facebook, have bias built in at the level of the algorithm. Take facial recognition’s well-documented difficulty in recognizing Asian or Black faces and the way that this misidentification leads to wrongful arrests and convictions; data mining and the asymmetrical right to privacy (consider Europe’s General Data Protection Regulation); and the way search engines reproduce ethnic stereotypes.
Taking the temperature of new media art today reveals three major phenomena. Let’s call the first cloudy, with a chance of bias. The aforementioned scholars have located the origins of surveillance in racial capitalism, from plantations to New York City’s lantern laws, which required enslaved Black and Indigenous people to carry a lantern after dark—to be illuminated, be seen, and thus capturable at all times—or risk punishment. Elsewhere in the US, sundown towns and counties enacted another kind of light-based discrimination by decreeing that Black people were not allowed to be in certain areas after sunset. Today, there’s the digital redlining produced by technologies as disparate as Facebook advertising that discriminates by zip codes, themselves the results of decades of housing discrimination, and by perceived race, religion, disability, and national origin. Meanwhile, the growing phenomenon of predictive policing, in which police departments license proprietary software that crunches records of past arrests to predict where future crimes might occur and where squads should spend extra time patrolling, creates a self-fulfilling feedback loop. We tend to think of digital technologies as delocalized as well as dematerialized, but geography matters.
Increasingly, a number of artists—including Mimi Ọnụọha, American Artist, and micha cárdenas—are addressing these multivalent forms of algorithmic bias directly in their work. Others, like Dark Inquiry, whose Bail Bloc software allows users to passively mine cryptocurrency to raise money for bail funds, and Everest Pipkin, whose Image Scrubber app helps to wipe metadata and blur identifying features from protest footage, are leveraging their coding skills to build decarceral or protest-related tools. When the same technologies are taken up by artists not for social commentary but to produce sheer scopic or interactive pleasure, they are dangerously recast as neutral. Consider the immersive digital installations that are so popular today, in which interactivity is the result of cameras or sensors tracking visitors across their spaces, not unlike the way that advertisers now trail users as they navigate from site to site across the internet. In works like Julia Scher’s prescient Predictive Engineering (1993–) or Rafael Lozano-Hemmer’s Zoom Pavilion (2012), the surveillance systems that underpin the genre are made visible. Artists are not leveraging these technologies to harass and incarcerate people the way the various arms of law enforcement are, but it’s worth considering whether building technological environments to provoke, inform, and entertain people—and make money as a result—is all that different from what social media giants like Facebook do.
Black visual studies scholar Christina Sharpe’s concept of “Weather” is a useful way to think about our sociopolitical environment. In the beautifully elegiac In the Wake (2016), she writes that “the weather is the totality of our environments; the weather is the total climate; and that climate is anti-black. And while the air of freedom might linger around the ship it does not reach into the hold, or attend the bodies in the hold.” Centuries after enslavement was formally abolished in North America, the same meteorological forecast holds. From the ship’s hold to the digital installation, these closed environments are predicated on surveillance technology that tracks bodies in space. The climate is not only white supremacist but the product of a settler colonial state and a Manifest Destiny-style fetishization of constant innovation and progress. In new media art we see this most overtly in the excitement around augmented, virtual, and mixed realities—technologies that have been around since the 1990s but are remarkably successful in being marketed as cutting edge. Another meteorological condition—and consequence—here is resource extraction and the attendant ecological devastation of conflict materials such as the rare earth elements crucial to computer chips, LCD screens, and LEDs, among many other techno-military applications.
The rise of significant investment in this sphere is a second major characteristic of art and technology today. These economic and climatic phenomena cohere in California’s Silicon Valley, which has been a site of displacement, racism, and environmental degradation since long before its current inhabitants moved in. In their book The Silicon Valley of Dreams (2002), Lisa Sun-Hee Park and David Naguib Pellow trace these injustices back to the Spanish conquest to argue for a twinned decimation of local ecosystems and Indigenous populations. Silicon Valley, it should be noted, is increasingly the fount of the venture capital that bankrolls much of the experiential art and tech prevalent today.
Much of this pertains to the development of hardware like virtual reality headsets, but it also extends to financing collectives like Meow Wolf, who raised a staggering $158 million during their last round of funding in 2019. For context, the National Endowment for the Arts had a budget of $155 million during the same fiscal year. In October 2018, a piece of AI-generated artwork sold at Sotheby’s for an astonishing $432,500. As austerity measures erode arts funding, venture capital is increasingly stepping into the vacuum left by the state. But unlike the development of technologies like canvas or woodblocks, IP frameworks mean that these proprietary (and often prohibitively expensive) technologies end up being used and promoted in a way that is not too far from organic influencer marketing. All of this is to say nothing of the attendant rise of artwashing, in which a company and its subsidiaries (or, in the case of a hedge fund, its investments) might make tear gas or contract with the government to run detention camps while also indirectly funding many of the US’s premier arts institutions.
Alongside attention to race and bias and capital penetration, a third phenomenon in this field is an increased emphasis on access. This takes a number of forms, from work about net neutrality to technology-focused pedagogical initiatives that don’t just teach people how to code (for example) but work to provide people with computers and other hardware. Especially notable is the work being made around ableism—sometimes described as “crip tech”—by artists like Carolyn Lazard, Jordan Lord, Shannon Finnegan, and Yo-Yo Lin, among many others. Many of these works make use of technologies specifically designed to extend access, like subtitles or alt-text, rather than repurposing oppressive ones, which changes their timbre.
But what if we considered the field of art and technology in terms of its soil, its sand, its terroir beyond the rare earth it’s built from? We understand that wine or cheese or specific breeds of animal have a particular, irreplicable taste that is the result of the land and climate they grow in or graze on. Champagne, jamón Ibérico, Kobe beef, Vidalia onions. We understand this to be true even when they come from outside of the EU and the other Global North countries that codify terroir into trade law. Cultural production arguably has terroir too.
It is perhaps harder to extend this to digital production, though not impossible. We are used to thinking of data and code as immaterial, but we understand that they require significant physical infrastructure, with environmental impacts to match, from server farms to the considerable electricity consumed by cryptocurrency transactions. Infrastructural conditions like access to the internet or the prevalence of cellular networks already mean that some regions of the world produce very little new media art, while in others phone-based works predominate. Just as technology is not politically neutral, it isn’t geographically neutral. In the coming years, I hope we will see more new work that pays attention not just to the weather but to the soil too.