FOREIGN OBJEKT

Stefka Hristova

Dr. Stefka D. Hristova is an Associate Professor of Digital Media at Michigan Technological University. She holds a Ph.D. in Visual Studies with an emphasis on Critical Theory from the University of California, Irvine. Her research analyzes digital and algorithmic visual culture. Hristova's work has been published in journals such as Transnational Subjects Journal, Visual Anthropology, Radical History Review, TripleC, Surveillance and Security, Interstitial, Cultural Studies, and Transformations. She was an NEH Summer Scholar for the "Material Maps in the Digital Age" seminar in 2019. Hristova is the lead editor of Algorithmic Culture: How Big Data and Artificial Intelligence Are Transforming Everyday Life (Lexington Books, 2021) and the author of Proto-Algorithmic War: How the Iraq War Became a Laboratory for Algorithmic Logics (Palgrave, 2022).


Project Statement:

My project engages with algorithmic structures of prediction and training that aim to increase the probability of a favorable outcome in war. As Martin van Creveld writes, "[B]y definition, training is a future-oriented activity and one cannot train without having at least a rough idea as to what one is training for. In other words, what the future may be like" (2020, 211). The forecasted future is thus a model that is already familiar: it aims to repeat the past, and in doing so it fails to acknowledge, and further hides, the distinction between past and present moments. Predictive models promise repetition, reproduction, and replication. Further, as Wendy Chun has argued, these forecasts are the desired repetition of a selected past: the repetition of an outcome in time. This is important to point out because simulations and training are already guided with an end in mind. It is this foreclosure of openness in the logic of simulation and forecasting that lends predictive models to automation.

In this project, I am interested in exploring the biopolitical implications of probability in the context of algorithms and war. Military training has increasingly shifted to training the algorithms themselves to make decisions autonomously. Whereas human soldiers train in order to improve their performance in real life, military algorithms are trained on training data that prepares them to operate on testing data. In particular, I am interested in exploring the visual discursive formation around questions of risk and ethics in this context. This exploration is situated in relation to platform technologies as well as military-industrial complexes. I seek to illuminate how models of probability are justified, visually as well as discursively, in the contemporary moment in relation to human-machine war assemblages.
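To make the training/testing distinction concrete, the sketch below shows the standard machine-learning workflow the statement describes: a model is fit on training data and then asked to forecast held-out testing data, with its output expressed as a probability of a given outcome. This is a minimal, purely illustrative example using scikit-learn's train_test_split and LogisticRegression; the data, features, and "outcome" label are hypothetical placeholders, not any military or platform system.

```python
# Illustrative sketch only: a generic predictive model, not any military system.
# All data, features, and the binary "outcome" label are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))           # hypothetical feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hypothetical binary outcome

# Training data prepares the model; held-out testing data stands in for the future.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)

# The forecast is a probability of the outcome, learned entirely from the
# selected past encoded in the training set.
print("P(outcome) for one test case:", model.predict_proba(X_test[:1])[0, 1])
print("held-out accuracy:", model.score(X_test, y_test))
```

The point of the example is the asymmetry the statement identifies: the model can only project forward the patterns it has already been shown, so its "future" is a repetition of the selected past encoded in its training data.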


