Episode XVII. How Trevor Paglen Sees the World

by Valentino Catricalà
November 28, 2022

Following the opening of “Slip.Stream.Slip: Resistance and Velocity in Game Engine Culture” at Modal, Manchester, Valentino Catricalà asks one of its artists, Trevor Paglen, about how he imagines alternative futures.

“The Uncanny Valley” is Flash Art’s digital column offering a window on the developing field of artificial intelligence and its relationship to contemporary art.

Trevor Paglen, CLOUD #211 Region Adjacency Graph, 2019. Dye sublimation print, 121,9 × 203,2 cm. Edition 1 of 5 + 2 AP. The work is currently on view at MODAL Gallery, Manchester. Photograph by Annik Wetter. Courtesy the artist and Pace Gallery.

Valentino Catricalà: I would like to start by asking you about the tension between nature and technology present in your work. It seems to me that your work tries to overcome this conventional dualism in order that we might understand our world anew. Do you agree with that?
Trevor Paglen: Nature is of course a tricky concept. When I was a bit younger, I spent a lot of time studying geography alongside art, specifically human geography. I wanted to understand the concept of “landscape” in as many ways as I possibly could. The basic premise of geography is that humans sculpt the world and are in turn sculpted by the ways in which we’ve sculpted the world. When you start thinking that way, the nature/culture dichotomy quickly breaks down. Or in the words of Robert Smithson, “Nature is simply another eighteenth- and nineteenth-century fiction.”

VC: You are well-known for investigating twenty-first-century mass surveillance, state secrets, and data collection. What prompted your study of surveillance technologies, such as satellite tracking, as a means by which technology is altering our relationship to the land around us?
TP: I’m deeply interested in all types of visuality, including the various apparatuses humans use to “see” the world around us. Ways of seeing — whether they’re cultural or historical conventions or whether they’re technological apparatuses — are never neutral, and, of course, they play an active role in creating the things that they purport to represent (there’s a version of the geographical premise above). So if you want to learn about how humans sculpt the world through technical forms of image-making — and this is something I very much try to do in my work — you have to learn how those technologies “see.”
Examples are everywhere: if you want to learn about how AI and computer vision systems classify people, for example, you have to develop the means to “see” what those AI systems are “seeing.” I guess to sum up, I’d say that imaging technologies — from oil paintings to computer vision systems — shape the world they purport to represent. If you want to see how this “shaping” occurs, you have to have an understanding of how those technologies “see.”

VC: The prospect of an alternative future and the vision of space as a possibility is often present in your work. How do you imagine this future?
TP: I think one of the things that art can help with is to imagine different types of futures, different types of life. This could be anything from different ways of thinking about representation, which can translate into different types of politics, to different ways of interacting with the environmental and ecological spheres we’re enmeshed within. Having said that, I’m pretty pessimistic about the future and certainly don’t think that space flight has much to contribute to the sustainability of life on earth. On the other hand, I don’t think it’s ethical to behave as if the prospects for human life on earth are as grim as I believe they are — in other words, we must behave as if the future exists and do our best to care for it.

VC: For your recent exhibition at Pace Gallery, “A Thousand Flowers,” you invoked art-historical traditions of conceptualism, minimalism, and naturalism, while investigating the use of technological structures that shape and control society. The exhibition featured a number of earlier works, including the series “Bloom” (2020–21) as well as the interactive work ImageNet Roulette (2020). What was your idea when developing this exhibition?
TP: The exhibition was really about how new types of imaging technologies, particularly computer vision technologies, go about trying to “see” the world and the various ways in which AI systems both overlap and wildly diverge from our commonsensical understandings of visual perception. To create this constellation of images I was putting allegorical tropes (flowers, seascapes, etc.) into tension with some extremely technical algorithms used in computer vision and AI systems to formally analyze images. In works like ImageNet Roulette I was trying to show how logics of classification, especially when it comes to humans, have an extremely high chance of being, at best, simply absurd while very easily slipping into producing harm.
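For readers curious about what such formal analysis can look like in practice, here is a minimal illustrative sketch of the structure named in the CLOUD work’s title, a region adjacency graph, built with scikit-image. The test photograph and segmentation parameters are assumptions chosen only for demonstration and are not Paglen’s actual pipeline.

```python
# Illustrative sketch (not Paglen's pipeline): build a region adjacency graph,
# the structure named in "CLOUD #211 Region Adjacency Graph", with scikit-image.
from skimage import data
from skimage.segmentation import slic
from skimage.graph import rag_mean_color  # in skimage < 0.19 this lives in skimage.future.graph

image = data.astronaut()                   # placeholder test photo; the CLOUD works use sky imagery
labels = slic(image, n_segments=200,       # oversegment the image into superpixels
              compactness=10, start_label=1)
rag = rag_mean_color(image, labels)        # nodes = regions; edges link adjacent regions,
                                           # weighted by mean-colour difference
print(rag.number_of_nodes(), "regions,", rag.number_of_edges(), "adjacencies")
```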

VC: In “Bloom,” you allowed visitors from all over the world to experience the London exhibition virtually through a live web portal called Octopus, connected to cameras placed in the gallery. Online participants could observe visitors experiencing the work in person and be present in the space themselves, their images transmitted to monitors inside the exhibition. What prompted this choice?
TP: “Bloom” opened during the middle of the COVID-19 lockdowns, and that body of work very much emerged from looking at some of the ways that the pandemic was accelerating technologies of surveillance and control. So I wanted to create a kind of meta-commentary on the great re-organization of space that happened during the pandemic, and the expansion of surveillance/control technologies into the most intimate corners of our everyday lives.

VC: Why do surveillance technologies permeate so much of your work?
TP: Because surveillance technologies permeate the entire world, and I am in the business of trying to see the world.

Trevor Paglen, Bloom (#79655d), 2020. Dye sublimation print, 66 × 49,5 cm unframed; 68 × 51,4 × 3,8 cm framed. Edition 1 of 5 + 2 AP. Photography by Damian Griffiths. Courtesy of the artist.

VC: In “The Last Pictures” project, you worked with scientists at MIT to develop an ultra-archival disk micro-etched with one hundred photographs and encased in a gold-plated shell. The disk, which is designed to last billions of years, was then launched into space on November 20, 2012. How did you select the images?
TP: The images used for “The Last Pictures” were curated by a study group that I put together in collaboration with Creative Time, who produced the project. It was a group of about ten people who met every week at Creative Time, and we’d have a seminar of sorts where we’d discuss research avenues and ways to think about the ethical concerns of the project, and we spent a lot of time talking about images. So the images really came out of that group process.

VC: You have investigated artificial intelligence in many ways. Recently, in the two-person exhibition “Training Humans” at the Fondazione Prada in Milan, you and Kate Crawford investigated the image datasets used to train artificial intelligence systems to recognize human beings, noting risks in the ways technological systems collect, label, and use these materials. What do you think are the most dangerous implications of this method of recognition?
TP: This is another huge question, and there are a lot of ways to answer it. On one level we could talk about the various politics of classifications in AI systems, which is what a piece like ImageNet Roulette was getting at. You could go down an epistemological route here and think about what sorts of politics emerge when you take concepts about people that are fundamentally historical and relational (i.e., a concept like “beauty” or “crime”) and attempt to quantify them and put them into a technical system (spoiler alert: it’s bad). Or you could ask the question from an economic/political standpoint and ask, “Who is creating what kinds of systems to benefit whom? At whose expense does that come?”
There are many other approaches to answering your question: you could talk about the environmental costs of training AI models, or the labor practices that go into the creation of training data, for example. I think all of these approaches can lead to some really important analytical work that needs to be done so that we can more clearly think about the opportunities and true costs of new technologies.

VC: We are in an age of catastrophe. Many scholars have underlined how our society is going through a major crisis and how our era is moving toward a new social, economic, and philosophical model. From artificial intelligence to climate change, from posthumanism to a post-nature condition, the feeling of facing a new existential condition is increasingly evident. But catastrophe is not only a negative term; it can also be turned into a positive one. Do you agree?
TP: I really do think that we are in the middle of a catastrophe, and I worry that catastrophes, as we have seen, are just as useful to fascism as they are to those of us who want to create a more progressive society. I’m having a very difficult time imagining something positive coming out of the climate crisis, which will make human life impossible across much of the earth. On the other hand, there are many times in history when people believed — based on ample evidence — that the world was coming to an end. So perhaps I’m just lacking imagination. But I don’t think so. I am deeply worried about the various ways in which climate crisis, fascism, and predatory technologies are strongly amplifying one another.
