
Anaïs Nony

I am a research associate in the Centre for the Study of Race, Gender and Class at the University of Johannesburg and the author of Performative Images: A Philosophy of Video Art Technology in France, published in October 2023. Over the last decade, I have published articles in journals and outlets such as the Mail & Guardian, Philosophy Today, Cultural Critique, La Deleuziana, Intermediality, The Moving Image, French Review, and Parallax, as well as book chapters in edited volumes. Taken together, this body of work addresses debates about how technology and art shape and affect societies.


I write and speak widely about the relation between art and technology, culture and philosophy. I earned a PhD from the University of Minnesota and have held postdoctoral and lectureship positions at Université Sorbonne-Nouvelle (FR), Florida State University (US), the University of the Western Cape (SA), the University of Fort Hare (SA), and University College Cork (IR). My expertise in the performance arts and the philosophy of technology is complemented by two other areas of research: feminist methodologies and critical race studies.


Website: https://anaisnony.com/ | Instagram: anais_nony_


Project:

Artificial Intelligence: Between Instrument and Agency


In 2024, just over two-thirds of the world's population will be using the internet, while in 2020 one person in four still did not have access to safe drinking water at home. This parallel between digital networks and access to water reveals a distortion of international priorities in terms of civic and moral responsibility. While water meets a vital need of first necessity, drinking water is being deployed to cool massive data centres. The energy cost is alarming; the human cost is distressing. 75% of the world's supply of cobalt, the material essential to the lithium-ion batteries in our mobile phones, computers, tablets, and electric cars, comes from eastern Congo, where millions of people, children and adults alike, live and work in dehumanising conditions.

Furthermore, an investigation published on November 30th, 2023 interrogates the wider use of artificial intelligence in the Israeli war on Gaza and exposes a system called "Habsora", which deploys Artificial Intelligence technology to generate targets. Each target comes with a file that "stipulates the number of civilians who are likely to be killed in an attack". Whereas collateral damage was strictly limited in the past, these AI-generated targets are unprecedented: they are produced through automation, rely on AI-powered data-processing technologies, and allow potential collateral damage of hundreds of civilians. What does it mean to be statistically precise for the world to see, and yet to generate targets according to loose military protocols that can kill hundreds of civilians? In the case of the war on besieged Gaza, the IDF leverages algorithmic accuracy while dismissing fairness procedures and accountability. The so-called clinical efficiency of its AI-generated targets is portrayed by the political marketing wings of mainstream media as the mark of advanced tools that confer the right to kill in the name of technological sophistication. While AI is generally promoted as making warfare more precise, evidence from lived experience in Gaza shows that saving lives is not part of the model.


My proposed research project questions the instrumentarian power of artificial intelligence, its use in dismissing human responsibility in data-driven fascist regimes, as well as its promising force for the future of agency. Critical discussions of AI, as seen in web summits, press conferences, international colloquia, and interviews, often fall into two main categories. One seeks to prove that AI is not really intelligent, at least not in the human sense of knowledge-making. The other presents AI as a threat because the technology can surpass human capabilities in the field of cognition. These intellectual propositions, while somewhat valuable in themselves, often neglect to question the set of values and priorities that shape the theoretical models of their inquiry. Their claims fail to acknowledge that there is human reality in every single technical reality, from the spoon we use to eat to the rocket launched to kill civilians. The inability to apprehend the human reality of Artificial Intelligence, its biases and its undignified features, is a mistake that only serves a certain political marketing and its data-driven economy. Furthermore, these claims serve values organized around the ideas of inclusion and exclusion so central to supremacist ideologies: this data matters/this data does not, this person is human/this one is not, this life matters/this one does not. Exclusion is part of the systemic refusal to recognize the human reality integral to all objects and all people. This position does not mean that objects are equal to people. It shows that such exclusion is part of a hegemonic culture that fails to take responsibility for the violence it produces. The dismissal of human reality in technology is a strategic mode of cancelling out accountability in the name of advancement. It allows for the implementation of a new form of obedience that is algorithmically driven. To think critically about AI, its creation, modelling, and application, as well as its development as a technology of behavioural prediction, is a responsibility: such a technology must not be left in an apolitical blur.


What matters in the regime of truth promoted by fascist ideologies is the accuracy of the data collected, as well as the computation, control, and prediction of behaviours through systemic data surveillance. The digital regime of instrumentarian power is symptomatic of the rule of induction: forgetting about the causes of problems and focusing on predicting more outcomes, on creating total certainty. The digital regime of truth aims to shape the future according to a trajectory that validates the data already collected. In the context of the Israeli war on Palestinians, the future of civilians is being shaped according to the data collected: they become targets, automatically. In turn, the data collected validates the assault according to a single-minded mode of making truth that empties out accountability. Ecosystems of tools and smart devices create the fabric of everyday life by shaping the normative values of behaviour. In the context of surveillance capitalism, algorithmic modelling works to cancel all meaning and disrupts social trust. Surveillance capitalism is a new instrumentarian power that relies on computation and automation to overthrow people's sovereignty. At both the individual and collective levels, the mechanisms of subordination capitalize on indifference and media-driven connection to empty out dreams and desires for the future.
