Computational Mama (Ambika)

Ambika started working with AI in its nascent stages. She now works at the intersection of motherhood and AI, exploring the liminal spaces in which generative AI identifies the mother as both a body and a concept. She runs her own solar-powered AI server and is engaging with AI's ecological impact. She has run workshops for Gooey in artistic domains and continues to explain these tools to non-technical audiences.

Khoj could engage her for a workshop that trains artists to interact and co-create with AI. This could open a space for more critical artistic engagement in the future.

Ambika shares:
BeFantastic team (Kamya and Archana): capacity building. An interesting starting point.
Ambika runs a studio called Ajaibghar with Nanditi. They are exploring how to bring coding to artists without first requiring them to scale the wall of STEM learning. They are working with Godrej and Goethe and are opening in Travancore on 11th April, in collaboration with Sandbox Collective.
Artists to look at:
- Aarati Akkapeddi, US-based artist. https://aarati.online/
- Anurati Srivastava, game designer and artist, based in Delhi.

Ecological damage: a personal AI server running on solar power



Agath - AI clone of David Attenborough's voice



strange exploratory practice

process of coding - not the product - community building

CARE AND TECH


Bodies-Machines-Publics
article on internet archiving, open source, and knowledge activism
School for Poetic Computation, started by Zachary Lieberman, a new media artist who used to teach at Parsons School of Design
Nasreen Mohamedi
Three Thousand Years of Algorithmic Rituals: The Emergence of AI from the Computation of Space - Matteo Pasquinelli

Life in Fifteen Gigabytes - Bami Oke
interesting list of artists and researchers w/ descriptions
bibliography
Summary from the Jnanpravaha Quarterly 58 newsletter: Pasquinelli gave a lecture at JPM as part of the Aesthetics, Criticism, & Theory course "A Spiralling Revolution – Technology, Culture and Crisis".

Professor Pasquinelli’s research into the book began with an intuition – that the perception of abstraction that AI holds across society hides a very concrete reality that is rooted in labour and social relations that can be studied and understood. Fascinatingly, this research led him further back than even he had anticipated, back to ancient Vedic mathematics and the tradition of the Agnicayana, a ritual in which the symbolic bird-like body of a god is reconstructed piece by piece in a precise arrangement of bricks following a set of instructions that have not changed in millennia. One of the most ancient rituals still practised today in India, the Agnicayana has also been called an algorithmic ritual, if we consider an algorithm at its most basic to be a set of step-by-step instructions, to be carried out mechanically to achieve a desired result. For Professor Pasquinelli, the Agnicayana is one prominent example from the ancient world of a logical form, an abstract algorithmic form arising from a social form – from labour and ritual, from discipline and power, from ritual and repetition.
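The minimal notion of an algorithm invoked above (a set of step-by-step instructions, carried out mechanically to achieve a desired result) can be illustrated with a toy sketch; the "bricks" and layout below are invented purely for illustration and are not drawn from the Agnicayana itself:

```python
# Toy illustration of an algorithm at its most basic: a fixed list of
# step-by-step instructions, followed mechanically to reach a result.
# The instruction list and grid positions are invented for this sketch.

instructions = [
    ("place", "brick", (0, 0)),
    ("place", "brick", (1, 0)),
    ("place", "brick", (0, 1)),
    ("place", "brick", (1, 1)),
]

def execute(instructions):
    """Follow the instructions mechanically, one step at a time."""
    layout = {}
    for action, item, position in instructions:
        if action == "place":
            layout[position] = item
    return layout

result = execute(instructions)
print(len(result))  # prints 4: every step executed in fixed order
```

The point of the sketch is only that the same instructions, followed by anyone (or anything), produce the same arrangement every time, which is what lets a ritual of precise, unchanging steps be read as "algorithmic".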

One of the central tenets of this book has been to investigate the possibility of arriving at a theory of automation that considers both the abstract as well as the social form. Here, Professor Pasquinelli further broke down the theory of automation into three formulations that have been acknowledged within the discourse so far, that is – an internalist theory, a culturalist theory and an externalist theory of automation. The internalist approach is the idea that scientific or mathematical paradigms like the notion of numbers are eternal ideas which are completely divorced from history and evolve purely through their own internal logic. Within contemporary discourse, Artificial Intelligence has often been seen through this internalist approach. The second theory of automation is a culturalist theory, which argues that automation is also a form of social constructivism. Finally, there is the externalist approach, which Professor Pasquinelli also traces more favourably in his book, which argues that technological innovation reflects larger socio-economic metrics.
In a 1996 interview with journalist Charlie Rose, filmed just a few months before his death at the end of that year, Sagan stressed the importance of public science education and pointed out that technology was progressing faster than the general public could understand it.

"We've arranged a society on science and technology in which nobody understands anything about science and technology, and this combustible mixture of ignorance and power sooner or later is going to blow up in our faces," the late "Cosmos" host said. "I mean, who is running the science and technology in a democracy if the people don't know anything about it?"

(From https://futurism.com/the-byte/carl-sagan-warned-charlatan)

Some quotes from and about Pasquinelli's work are below:

"As such, he develops a theory of technology that had roots in social relations and material design before showing how artificial intelligence is constructed through an incredible collectivity that remains invisible."

"For him, artificial intelligence is not about imitating biological intelligence; it’s about the statistical mapping of social relations—and he notes that statistics first emerged from techniques to control and measure society. In the end, Pasquinelli calls for a new culture of AI, so that the social relations become apparent in the design of artifacts."

"So, the long answer of what AI is is that it’s a technique to project human culture into a multi-dimensional space and navigate within it, to recognize patterns but also to generate unseen artifacts at will."

"AI is expanding our field of disbelief."

"The positive effect of AI is that, as I suggested in the beginning, we are rediscovering the capacities of our bodies to produce knowledge and cultural abstractions. However, as a result, we will have a different social composition of manual and mental labor, which means society will be organized according to new hierarchies. [Yes], AI will probably have a positive impact in different fields—but only under strict human supervision. For instance, in the digital humanities, the use of statistics helps us understand the history of art, design, fashion, and other realms in a different manner. We can map the development of styles and have a hyperstatistical perspective on artistic and design artifacts. It might be useful in medicine, too. The statistical model is very helpful in discovering patterns of symptoms, but only when the human is left in the loop and the scientific method is respected. In fact, there’s a long list of positive effects. Machine learning will probably be taught—and untaught—in schools in the future.

That said, the current oligarchical model of knowledge economy and organization of labor is worrying."
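The idea in the quotes above, projecting cultural artifacts into a multi-dimensional space and navigating it by distance to recognize patterns, can be sketched in a few lines; the items and the 2-D "embedding" vectors below are invented purely for illustration:

```python
# A minimal sketch of pattern recognition as navigation in a vector
# space: items of culture are mapped to coordinates, and "recognizing"
# means finding which known item lies closest to a query point.
# The items and coordinates here are made up for illustration only.
import math

embeddings = {
    "lullaby":   (0.9, 0.1),
    "work song": (0.8, 0.3),
    "blueprint": (0.1, 0.9),
}

def nearest(query, space):
    """Return the item whose vector is closest to the query vector."""
    return min(space, key=lambda k: math.dist(space[k], query))

print(nearest((0.9, 0.15), embeddings))  # prints "lullaby"
```

Note that everything the sketch "knows" is the statistics of the mapping itself, which is the sense in which Pasquinelli describes AI as a statistical mapping of social relations rather than an imitation of biological intelligence.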


In Surrogate Humanity Neda Atanasoski and Kalindi Vora trace the ways in which robots, artificial intelligence, and other technologies serve as surrogates for human workers within a labor system entrenched in racial capitalism and patriarchy. Analyzing myriad technologies, from sex robots and military drones to sharing-economy platforms, Atanasoski and Vora show how liberal structures of antiblackness, settler colonialism, and patriarchy are fundamental to human-machine interactions, as well as the very definition of the human. While these new technologies and engineering projects promise a revolutionary new future, they replicate and reinforce racialized and gendered ideas about devalued work, exploitation, dispossession, and capitalist accumulation. Yet, even as engineers design robots to be more perfect versions of the human—more rational killers, more efficient workers, and tireless companions—the potential exists to develop alternative modes of engineering and technological development in ways that refuse the racial and colonial logics that maintain social hierarchies and inequality.
Morgane Billuart
https://donotresearch.substack.com/p/morgane-billuart-virtuality-and-aesthetics

Becoming a Product podcast
Some thoughts for the open call (Aditi)

What are the roots of our technological infrastructures? How do we define technology? Is there a way to reconcile natural realities with our digital spheres? How do we go beyond criticising the inequalities heightened by various technologies and instead address technology as our collaborator? How do we reveal and teach what looks invisible? What are some radical ways of confronting and working with technology?

How does technology empower us? How does technology fail us? Is technology transparent or opaque? Do we own it, or does it own us? Is technology a friend and collaborator, or is it not to be trusted? How do we confront the fear of a digital god? How do we admit to its seduction? How do we use technology to collectivise rather than isolate?

In the second edition of the BMP residency, artists grappling with these questions are invited to use the Khoj space to work with technology in a dynamic way. Can we challenge technology, romance it, conspire with it, and play with it? Is it possible to trace a history of a technology, to demystify it, to teach it? Is there a softness and murkiness that can be found in our increasingly homogenised understanding of the world?

How does one hack it? In her seminal A Hacker Manifesto, McKenzie Wark writes: ‘We are the hackers of abstraction. We produce new concepts, new perceptions, new sensations, hacked out of raw data. Whatever code we hack, be it programming language, poetic language, math or music, curves or colorings, we are the abstracters of new worlds. Whether we come to represent ourselves as researchers or authors, artists or biologists, chemists or musicians, philosophers or programmers, each of these subjectivities is but a fragment of a class still becoming, bit by bit, aware of itself as such.’

Creative practitioners who are "hacking" at the intersections of art and technology are invited to apply to BMP with their proposals.
BMP READINGS - texts, books, and other pdfs!

Click here to reach the Google folder.