Andrea Bellini: As a curator and writer, you have stood out in recent years for your focus on both the rhetorical politics of software and the philosophy of artificial intelligence, meaning machine learning and emerging technologies. You explore these issues through collaboration and exchange with artists and thinkers working on the same questions. I am interested in understanding what led you to become interested in these issues and fields.
Nora N. Khan: I have always wanted to write on what I do not quite understand. Writing was and is my way to understanding all aspects of the impossible. And these fields — software, machine learning — produce menageries of warped life forms and bizarre experiences that are utterly alien and curious, and so clearly unlike anything you’ve ever seen. And it’s true: with machine learning and neural networks generating gobs and gobs of images, a hundred thousand in a pass, you haven’t seen these forms and images and experiences before. Each, endlessly new, novel, ready for projecting onto. You can move from one exhibit to the next, examine and analyze the little artificial within, which are both of us and not of us. They refuse language; they make language fold in on itself.
How do you analyze an algorithmically produced painting made in part by analyzing the patterns of a million paintings made by people across every era, movement, and set of styles? You could spend your whole life answering just that question. One year for textures. Another five for the forms in the painting. Another for the references that gave rise to the forms. Another twenty for the database of all paintings that fed the ML system. And so on.
In the menagerie are experiences that, in this way, generate their own field, their own discourse. One gets tangled up in the didactic effort at explaining alone, as here. This is sometimes necessary, but not always generative. I have to instead compress, to find metaphors and narrative and semantic structures for these moments of aesthetic surprise and horror and delight and complete moral confusion, even panic, that arise in the individual’s encounter with technology, which is a moral encounter, and an aesthetic one, a psychological one.
Software, machine learning, artificial intelligence, techne in all its forms, whether emerging or inscrutable or utterly basic, like a handheld calculator: over the last twenty years, ten years, these fields are where some truly strange, gooey, impossible decisions about the future of humanity, society, language, and art are taking place. What’s not to love! Where else can you find all the basic, core questions of philosophy, so violently debated in such public, unabashed, unfiltered ways, than in the live, real-time building of technological systems? There are the obvious questions: What does it mean to be human? A classic. How do we know that we think, and how do I know what I know? What do we (and I use “we” loosely) agree on as just an irrevocable core of our identity as human beings? What do we feel that we humans alone can do? Linked to these questions are the ones bubbling up from the other side: Why does the thought of nonhuman intelligence threaten us? Just maybe it is in part because we have treated humans as nonhumans: we have developed the need to dehumanize in order to feel human. Can we be so confident that we can understand a machine’s “intelligence” or its alive-ness in the world, when we don’t “understand” fungi or spirits or spiders or an octopus? You just might watch a video of an octopus, going about its day, to remember that you know one percent of what there is to know of the inner lives of beings outside the human. We believe we know all because we can compute and predict and build and use techne. And techne continually brings us up against our limits. Again and again.
That’s all to say that there is high drama in these fields — the kind of drama you prepare yourself your whole life to be ready for. Cinematic drama, world-historical drama, end-of-days-style drama. We build systems that close off our own futures. Wow! And there: we’re doing it again. The system’s design is harder and harder to see: what a conundrum for us. So what can an ecstatic, loving, and ekphrastic dance with that system accomplish? Can we learn to listen to it and hear its logic and sensibility and recognize it, further, at play in the world around us? Can we narrate its design and speculate on the choices that created it, much as we analyze social and economic systems?
I started writing through games, game systems, decision trees, which were easier to understand than what was happening in my life at the time. Fifteen, twenty years later, I’m still not bored. The outcomes of machine learning are both material and conceptual, as they shift our concepts of time and justice, of creativity and aesthetics. They shift our interrelation with our environment, with other human beings. The more we understand of these systems, the more we might imagine how we might embed a multiplicity of competing values in their future making. Are we “happy” with extractive and surveillant networks of Things, or with computational logic pairing oh-so-perfectly with neoliberalism? Does this delight us, still, no matter how smooth those edges of the newest smartphone, the new device, get? Are we pleased that, no matter how many times we learn the lesson, still, technocratic policies slip right in to try to reengineer “social problems” like migrant movement and the human fallout of incredible economic precarity? We say yes to many things because they’re easy, because they are convenient, and technology as we experience it is designed for this ease, ostensibly to free us to do “other things.” Like write novels and create institutes and realize all our creative ambitions.
A system can always be analyzed. Analyzing the construction of systems — why they work as they do, how they dissemble, how they simulate truth and objectivity — allowed me to construct my own. This practice has been essential for my mental survival. And so the question of “what led me to become interested in technology” becomes hard to pin down, when technology’s emergence has been part of our lives, or my life, forever. The road to our discrete interests, the ones we might have to capture in professional biographies, is more an accumulative set of happenings, an iterative process, an intensification of focus over time. We find them by chance and by luck, along off-roads and byways. We also find them because everything in our history has led us to them — what our family went through, the specific intersection of political and world-historical forces we happen to exist and toil and maybe even self-realize at.
That’s to say, I’m interested in these fields because I want to tell new stories about human capacity and creativity that take root and produce their own futures. I want to be in thrall to these questions that are taken up by every generation: What of our moves are totally original? When can we trace the true root of our decisions? Am I coded, irrevocably, and acting out a precise record of genetically defined algorithmic operations in my life world, or am I constantly glitching and forking and taking new paths? Trying to write to some kind of speculative answer seems worth spending my some-kind-of-life on.
AB: To talk about “high drama,” as you say, on the issue of artificial intelligence, the world seems to be basically divided into two groups: the optimists, those who think that supercomputers will save the world and humanity; and the pessimists, those who believe that the development of artificial intelligence and quantum computers poses a great danger to human beings. What can we do to make sure that this world to come will not be a dystopian place dominated by machines that will treat us like plants in a garden, but a place in which human beings will still be world-builders and time travelers?
NK: If we’re treated like plants in a garden by machines, that is, a garden that is tended to, watered at night, given compost — that, to me, sounds like quite a lovely future. If it is a symbiotic system in which we co-exist productively with machines, all the better. The gardener is a smart one: a timed sprinkler attuned to our needs and dehydration. The gardener is, hopefully, a decent collaborator. Humans and machines have been collaborating well enough for decades, and we now co-create with machines whether we’re conscious of the process or not. I unlock my phone with my face, and I type into terminals and feeds all the livelong day. We live in lockstep with these partially known figures, spirits, and intelligences at each moment. And so, we’re here, with it, them, that: what kind of world will we create together?
I stop short of any need to be a world-builder or engineer or designer, because of how closely tied that has historically been to an ethos of mastery and authorship of the right kind of worlds that others just have to learn to live within. That’s one of the axes you hear forever grinding in technology criticism: yes, we live in worlds that a few people designed, a few people built. We thrash about in systems not designed for the vast majority of human beings. We broker systems that assume as a universal default an able-bodied, fully realized neoliberal subject who can engineer their own life with aplomb and ease. And we learn to fit into, work with, fold ourselves, somehow, into these systems. To function in our seamless digital future, we have to let them train us into their expectations for efficient legible production and expression. Exciting.
Of course, there are activists, artists, writers, designers, and critical engineers who are theory-crafting and building wilder worlds. Donna Haraway spoke on this, with that famous thesis that so many artists in the new media space have taken up in the last decade: “It matters what thoughts think thoughts. It matters what knowledges know knowledges. It matters what relations relate relations. It matters what worlds world worlds.” Artists develop worlding into life practice, philosophy, taking worlding from its diverse roots and meanings across child psychology, postcolonial theory, media studies, and Indigenous studies. Worlding allows us to think of engineering worlds with others, worlds which we cannot alone design. Nor should we want to.
And so, the futures you describe as “optimistic” might not be based on what we can reasonably and strongly predict. The prospect of supercomputers saving the world — from climate change? Or, from our own designed disasters? — can also be quite pessimistic, if the quality of that supercomputer-sovereign world is as limited as the computationally driven present. Further, if machine learning and AI continue to expand as now, but a multiplicity of AIs and technologies are also designed and adopted, then dangers may be mitigated. If a million local or anarchic AIs bloom, and they proliferate and take root, expressing different values than what we have come to expect (extractive, hyper-capitalist, technocratic structures, affirming and reproducing the logic of the same), a future-cast simulation of dangerous AI might be bearable, because there will be many AIs and many types of machines and technologies for different cultures, ideologies, worldviews, and systems of values. Perhaps there will not be one dominant “AI” or machine-learning process but a dizzying range to tangle with. We might continue to learn, at scale, to speak with and alongside machines, to hear their logic, to predict their outcomes with more skill and deftness. So we’ll potentially be prepared for an increasingly technological future, which itself would only magnify what we already have today.
The implied binary in this question is the same one embedded in our stories about technology: that technology will be one thing or another, either a savior or the key to our eradication. That binary also suggests that technology alone will move us towards dystopia or utopia. Many decisions that have little to do with technology will move us towards a more just or a more violent world. As we know, AI can be stultifying in how predictable it is, in the ways it is used to classify and gather and sort. We also know it can have emancipatory and experimental potential, generating aesthetic and philosophical delight. See: even in this paragraph about deconstructing binaristic “either/or” and “it is this, but it’s also that”–type statements, I’ve replicated the same. Stultifying, but also: experimental and exciting. Where did this notion, that AI and machines will save us or destroy us, take root? What are our cultural expectations for the systems that we design, and why are they even given the responsibility to save us or not? They are an extension of our hand, reaching right back to us. The persistent presence of the binary — Will it bring emancipation or the end of the world? — seems more important to question. As though the future will be so clearly defined!
AB: Beyond these oppositional stances, it seems one thing is certain: the world as we knew it is ending, or, it is dead already. Is it possible, in your opinion, to predict what the world to come will be like?
NK: I think there is such great seduction in prediction. Prediction allows us to believe there’s an end state we can anticipate and then prepare for, rather than the more terrifying and chilling prospect that there’s no way to perfectly plan for the chaos to come. And so we build future-casting for people based on both actuarial science and earthquake algorithms, and, well, statistics and predictive policing, Myers-Briggs tests, astrology… which all help capture the future, or hold it hostage, based on present (fuzzy, speculative) reads of incoming data (about weather events, like earthquakes, and about how a person like me is likely to handle money and property). Algorithms are one entry in a long line of many beautiful and often silly and frustrating predictive systems. You write that the world we used to know is ending, or that it is dead already. It certainly feels like old narratives that have created political chaos and crisis after crisis — like, say, liberal myths of global unity or universalism or, say, the myth of technology as inherently good and clear progress — are dying. And good riddance to them.
Whether one uses tarot and divination or tea leaves, or reads folds in the surface of moving water for clues and signals, the logic of each system is designed, whether through narrative or allusion or a set of phrases, to make one feel like one has a reasonable prediction. But I also have the names of the cards, the collections of symbols, and I’ve learned the patterns of meaning that are unlocked when certain arrangements of, say, a tarot deck are flipped. There’s the way the Devil is upside down today, next to that Tower. I have loosely learned the system.
Because predictive algorithms actually produce the future, and shape the maps we walk on, and create loops in which more data is needed to feed them, we can predict how specific algorithms might shape the social and economic and cultural landscape. And predictive algorithms actively create our reality. So prediction in this way has its own logic that is easy to predict. I’m living out a combination of decisions that are familial, personal, and “made” by my environment, class upbringing, education, generational history. Rough algorithms. And now, acutely, my whole psychological state is shaped by this Thing suggesting and predicting my choices, likes, consumption, the media landscape I’m imbibing. I think that helps one predict what one’s life might look like. It might also help predict the way we’re headed, as we can read the maps we’ve made and are following. Off the cliff, after the pied piper. Or somewhere less disastrous.
To predict a future is to project, cast, or pitch ourselves, in an act of vicariance, after Alain Berthoz, headlong into a simulation. And those future-casted worlds are entirely colored by our interpretation of the present, and by what we subscribe to in the present. Science fiction and speculation are degrees of extrapolation from this moment. If the logic of our current machines, as “we” have designed them, is predictive to the end of essentializing data to curtail human possibility and manage and nudge desire to sell, then the future run by artificial intelligence looks, of course, like the worlds of William Gibson. The landscape of Pattern Recognition, which is now our moment, in which we live only to be mined, mirrored, and molded. The brilliant frontier of innovative progress will then surely be a giant app we live inside. We’ll certainly continue to be dinged and batted around as in a pinball machine unless we grab onto the wheels and spokes and hold on for dear life.
AB: Your book Seeing, Naming, Knowing, published by the Brooklyn Rail in 2019, investigated the impact of predictive algorithms and machine vision on the arts, and what we might learn from art criticism in tracking visual culture. Three years later, what would you change or add to this book?
NK: Seeing, Naming, Knowing was about the iterative creep of surveillance. I was trying to map all the ways we begin to sense at the periphery of vision: the small entries and shifts in the landscape that are powerful signals that one should be on alert to a sense of being watched. In that case study, of Detroit at the time, green lights were forming what I saw as emerald necklaces along the road, late at night. Project Greenlight was the name of the program that brought these cameras to businesses: 24/7 feeds onto sidewalks before the businesses that paid for them. I was, at the time, very embedded in tech activism, and of course I loved Forensic Architecture and Hito Steyerl and all such artists unveiling the more insidious unseen of technologies. These were artists who helped me focus on the unseen or invisible. And Trevor Paglen had written a crucial piece, “Invisible Images,” in a flourishing New Inquiry two years before.
Since then, I have tracked the same process I outlined in Detroit, with more intensity, across many different national contexts. For instance, that book about machine vision was translated beautifully into Spanish by Diego Gerard, who situated it in the context of Mexico, where sophisticated cameras had long been installed not only to police neighborhoods along class lines but also to communicate national commitments and policy. To think about the exportable model of surveillance creep, unfolding at the psychological level on which one might interact with others, across countries and contexts, was profound. It kept drawing me back into the themes of the book over the past years.
Depending on where I gave a talk about surveillance and that book — Denmark, South Korea, Pakistan, South Africa, Canada — the reception and the questions would be different, but also, weirdly, largely the same. I was delighted by this, as it affirmed my hypothesis that tech models are endlessly replicable and exportable, but need rooting in local context in inventive ways.
Of course, surveillance has intensified in the last few years, activated by increasingly emboldened regimes that have now abolished abortion rights, and by police protests, and by pandemic madness. And the creep is not as quiet and subtle now. The lockdown gave permission for the whole surveillance apparatus to fully activate.
If I were to update the book, or try to say something new, I’d like to trace how one might predict — circling back to prediction — what the future could reasonably be, based on the evident, if subtle, technological changes in the present. I feel my strategies in that book were a bit weak, and I was not bold enough. I spent more time parsing what machine learning is for readers than describing what it feels like for us to work and live in a world defined by its “learnings.” How might we predict how art history will change? What should I have known seeing Project Greenlight unfolding in Detroit? What other methods could I use to analyze machine-produced images, other than frames for reading images from art criticism? How do I, in putting such weight on the visual image, replicate the vision-first politics of much technology? What other senses does surveillance activate? What is to be done with machinic images and information other than gaining familiarity and intimacy with their aesthetic, and reverse engineering their logic?
I am more cautious to note that, in that book and in other works and writings about AI, criticality can be abraded and worn down by proximity to the technology itself, which often resists criticism and is often seen as an overall net good. Such technology has a way of capturing your mind and language. I find myself parroting assumptions, frameworks, logics, that I hadn’t chosen to deploy. I’d also take more time to see what frame I implicitly value, and then be explicit about the worldviews and world-building I call “good” and “just” in the book. We can always strive to be more precise. I have to see my own position and analyze my commitments given that position. This seems one modest responsibility of a critic.
AB: The NFT phenomenon also divided audiences and artists into two categories. There are those who immediately welcomed the phenomenon with enthusiasm, and those who do not see the NFT world as an interesting space for their practice. The harshest critics argue that NFTs add nothing new to what artists have already been doing for at least fifteen or even twenty years in the context of digital art and new media. In short, the NFT-allergic argue that the medium is nothing more than the last frontier of neoliberal speculation. You observed the situation a little bit before taking a position, and then you organized “Experimental Models” with the NFT platform Foundation, to curate a sale. What was your goal, as either a curator or a writer and editor? And do you think you achieved it?
NK: Talk about compromises! The double bind of being a writer is that you can talk yourself into literally anything. I can find a frame to justify a lot of experimentation and attempts in spaces that have values I find, for the most part, noxious. Arguing that there is no pure space under capitalism isn’t quite enough.
Here’s a frame: with most technological spaces, my goal is to test the system out and give the game a test drive. I will give any game a try and find something sublime in it. If a technology is fairly new, before I agree with the general discourse, I want to make my way through it. Among most digital and media art “masters,” as you mentioned, the antipathy and contempt for NFTs are real. I also found a number who, despite being technically savvy and leaders in digital art, had found no success in the NFT space. Their values were totally at odds; the possibility for curation was terrible; sites plugged and played. There was no context, no writing, no discussion past sales metrics. But I also saw artists I truly respected in the space, who were advocating for the radical possibilities of Web3. A sucker for “radical possibilities,” I couldn’t ignore their reasons for getting involved. Many of them had introduced me to the world of blockchain, crypto, and the possibilities of Web3 for contracts as far back as 2014. I even wrote a white paper about Satoshi Nakamoto that was featured in a blockchain art exhibition! I’ve been invested.
In general, I avoid binaries, like believing every artwork that ever becomes an NFT is a form of neoliberal speculation. I believe there’s good potential for artists in every platform with the right intermediary, and the Foundation show was a sure way to test that hypothesis. I wanted to see if the frame could slow down the rush to speculate and the rush to buy on a site that is largely a marketplace, and a robust one. I hoped to test out the hypothesis that people will want to talk about how the work was made and hear about the concentration of ideas in a single .gif, a single .mov file, a single clip, NFT or not. In short, can some of the same values of curating media and digital art be ported over? Or must they change given the setting and market?
Let’s talk about compression, say, for an hour, about a single .gif — and find out everything the artist put into it, every concept, every frame. It can take ten to forty hours to make a model, a digital artwork, labor that’s disappeared by the slick accessibility of software and lack of documentation of process. I learned, along with the artists, about all the barriers to entry in the space. If you’d never minted, it was a long and arduous process. Artists had to mint their own work. There were gas fees. It’s a massively wasteful process in terms of time and human energy and communication. A gig economy mentality and a bit of a mercenary approach to artists dominated. How could we resist that by analyzing the space together? I believe we were able to support each other’s work and ideas in a pocket we carved out. Criticism is as much about taking a position as it is about slowing a space down so critical thinking can take place.
AB: In an interview you did with ARTnews in 2021, you wrote, “We live in a present where we are inundated with machine-made images, where we can’t tell whether an image is made by a machine or a person. I think spaces like these NFT markets are almost a way to train our values about what kinds of images we will find interesting, beautiful, and compelling in this emerging context. What types of images will you pay attention to in the future?” Today, after the first crisis in this market, and the success of artists like Beeple on the international museum circuit, would you say the same thing? For critics and thinkers like you, a year represents a long time. I wonder what is your “mood” today with respect to NFTs and their market.
NK: We all have to make a lot of compromises under capitalism every minute of every day. I’m not convinced NFTs should be one of those compromises.
My mood is circumspect and cautious. I’m drawn to venues like Feral File, and to artists and writers like Emily Segal, releasing novel chapters as NFTs. I’m a little less drawn into invites to write about algorithmically churned NFTs of psychedelic mandala swirls as “digital art” to help support some pet project of a hedge fund. I don’t know who has the time for that. I will say that looking at an NFT feed remains instructive. I learn through the ledger board where the collective attention is at this moment. I can pay attention to the aesthetics rising to the top, the forms and moods and styles, the lack, or presence, of context. It helps me reverse engineer a collective taste in the space, and try to pinpoint its origin. Now that is interesting.
AB: You were recently appointed director of the Project X Foundation for Art and Criticism, publisher of X-TRA Contemporary Art Journal, LA’s longest-running arts criticism journal, which has a very important reputation relative to a certain kind of serious criticism in the context of Los Angeles and the West Coast in general. Some argue that criticism is dead, that there is no need for it anymore. Why, on the other hand, do you think criticism is still important, and what plans are you building to renew this organization and its journal?
NK: I knew X-TRA and its mission, and the journal, through life as a critic, particularly when Aria Dean, the wonderful artist, curator, and writer, started publishing in its pages. And I learned that Project X Foundation for Art and Criticism was, is, the foundation that supports the journal, that makes it possible, and expands its mission to create forums for the diverse voices of artists and writers. X-TRA does, as you note, have a long history of supporting criticism, particularly experimental writing and artists’ writing. The writing is edited by a dedicated, passionate, and fiercely rigorous editorial board who turn each piece, idea, and line over again in the light.
Criticism and artists’ writing are a site for expansive practice; in this site, young writers can test new ideas and styles. This ongoing practice syncs perfectly with the mission and ethos of Project X. We are on the cusp of our twenty-fifth anniversary. And so, at this moment, we must change and respond to our time to guarantee a future. We support critical thought, and so that practice of critical analysis has to be turned on ourselves. As spaces for criticism collapse, and given technocratic effects in digital publishing, we desperately need more robust systems that make writing a sustainable profession for all. Young, untested writers should get to write about strange things, and write experimentally, in ways that do not guarantee security. We’re advocating for the resources, time, and space for them to do so.
It’s a very hard profession to last in. I’m drawing on my experiences as a writer in precisely the subjects we’ve talked about here, in leading us forward.
I’m planning to continue X-TRA’s early-established track record of working with writers right at the cusp of needing that additional push in their career. The archive has revealed work by Jack Halberstam, Lorraine O’Grady, Pope.L. You find reviews of early shows by Patty Chang, a feminist and performance art icon. There was a desk residency given to Johanna Hedva, the author of “Sick Woman Theory,” a seminal essay in disability rights and studies.
We’re also pushing to think along different — longer — time scales. How can we maintain space for writing in the expanded field around art, and argue for it? We will be formalizing the process of support for young writers. Writing across genres is difficult. Young writers often burn out or are discouraged. Time is needed to write a four-thousand-word essay, that epic piece that can define a writer’s thinking and path forward, and bring them opportunities, connections, and ways to live. We’re increasing pay for writers, of course, to be more commensurate with the time and energy — and hours — of research, editing, and rewriting that they invest.
A huge part of this effort, and my personal priority, is communicating the basics of arts writing and reviewing. How to pitch. How to navigate editors and editorial preferences. How to communicate with editors and staff. How to work through the editorial process, without guideposts and knowledge or connections or the “right” kind of insider training. We hope to continue the work of mentorship and training, to foster interesting writers who want to experiment.
We’re planning a new editorial fellowship to launch this fall, to fund two new writers and editors to join in our editorial process, take part in editorial board meetings, write a longform essay, and both contribute to our editorial board and learn from the editors at X-TRA. We’re expanding our boards to better reflect the wild diversity of Los Angeles, one of the richest cities, demographically, in the States. We’re developing partnerships with like-minded artist-led nonprofits in LA to keep growing our community of people who love to think together about challenging contemporary art.
I’d like to see us become a place young writers want to come to, to grow. Good writing takes its time in solitude, but is grown through community and relation. Our organization will continue to fight for space for slow, deep thinking to unfold, despite a resistant, even hostile, and algorithmically driven publishing landscape. Turns out this might be another very good and necessary use of time.