Humans In The Loop: Raising Intelligence, Owning Responsibility

Artificial Intelligence is often framed in extremes: either as an existential threat or as a flawless solution to human limitations. Humans in the Loop refuses both narratives. Instead, it asks a quieter but far more uncomfortable question: if intelligence is learned, shaped, and corrected through human intervention, who is responsible for what AI becomes?

Rather than treating AI as an omnipotent threat or a miraculous fix, the film grounds it within a quieter, more unsettling ethical framework: intelligence as something that is taught, shaped, corrected, and socially produced. Structured around three conceptual chapters, "To Learn Like a Child," "A Child Moulds Like Clay," and "The Child and the Bias," the film repeatedly returns to a central philosophical provocation: if AI learns like a child, then humanity is not its victim but its caretaker.

Rather than indulging in spectacle or technological mystification, the film directs attention toward the slow, invisible processes through which intelligence is formed. These processes are inseparable from human labour, cultural context, economic inequality, and ethical responsibility. In doing so, Humans in the Loop offers not a story about machines becoming dangerous, but one about humans refusing to acknowledge their role in shaping them.


I. To Learn Like a Child

The first chapter establishes AI as a learning entity whose intelligence is fundamentally dependent on human input and interpretation. Learning here is not presented as neutral data accumulation but as a fragile process shaped by context, selection, and omission. Much like a child encountering the world, AI does not arrive with innate understanding; it inherits meaning from the environments and narratives humans provide.

The Porcupine Scene: Embodied Knowledge and Indigenous Epistemology

The porcupine motif in Humans in the Loop is not framed as a simple allegory for error, fear, or misunderstanding in learning. Instead, it functions as a recurring symbol rooted in Nehmma’s lived environment and indigenous relationship with nature. The porcupine appears as a familiar forest presence—an animal that survives through adaptation, restraint, and coexistence rather than domination.

Its quills do not signify aggression but a form of embodied intelligence that is situational and defensive, emerging from long-term inhabitation of a specific ecology. This stands in contrast to the abstract, decontextualized logic of artificial intelligence systems, which often strip data from cultural, environmental, and relational grounding. The porcupine thus represents a mode of knowing that is experiential rather than computational.

Importantly, the porcupine connects Nehmma’s childhood memories with her present labour and her daughter’s future, establishing a generational continuity that challenges the narrative of technological modernity as a complete rupture from traditional life. The film subtly suggests that older forms of intelligence—indigenous, ecological, relational—continue to exist alongside AI, even as they are rendered invisible within technological systems.

Rather than imposing a metaphor onto AI learning, the porcupine operates as a counterpoint: it asks what kinds of intelligence are ignored or erased when machines are trained solely on client-driven, market-oriented datasets.


II. A Child Moulds Like Clay

The second chapter shifts focus from learning to shaping, foregrounding the human labour that actively moulds artificial intelligence. By depicting data labelling, annotation, correction, and verification, the film dismantles the myth of autonomous AI. Intelligence here is not self-generating; it is manufactured through repetitive, meticulous, and largely invisible human effort.

These scenes expose a critical contradiction in contemporary AI narratives. While AI is often described as self-learning, its functionality depends on continuous human supervision. The labour involved, often outsourced, feminized, and underpaid, remains obscured behind the sleek language of innovation. The film argues that ethical AI cannot be separated from the labour systems that sustain it.

Nehmma and Situated Knowledge

Nehmma’s role as a data labeller complicates conventional assumptions about who produces technological knowledge. As an Adivasi woman, her engagement with AI is shaped by lived experience rather than formal technical training. The film does not depict her as technologically deficient; instead, it reveals how her understanding of context, ambiguity, and relational meaning often exceeds the reductive logic demanded by datasets.

In depicting a workplace that recruits women from tribal and rural communities rather than formally educated urban professionals, the film challenges stereotypes that equate intelligence exclusively with institutional education. It suggests that tribal life is not disconnected from knowledge but embedded in alternative epistemologies that value sustainability, community, and contextual awareness.

Importantly, the film avoids romanticising indigeneity even as it insists that traditional ways of life are not inherently regressive or anti-modern. Many are ecologically sustainable and intellectually rich, offering insights into data, meaning, and truth that modern AI systems routinely ignore.


III. The Child and the Bias

The third chapter confronts the ethical consequences of shaping intelligence within unequal power structures. Bias, the film suggests, is not an accidental flaw in AI systems but a predictable outcome of selective data, economic priorities, and epistemic exclusion.

A pivotal moment occurs when tribal women question the nature of their work: "We are labelling the data provided by the client, but we should use our own data and scenarios and share them to get proper context or correct answers." This statement crystallizes the film's critique of data ownership and representation. It raises fundamental questions: whose realities are allowed to shape AI systems, and whose remain unacknowledged?

The women’s observation exposes the asymmetry at the heart of AI production. Marginalized communities are tasked with sustaining intelligent systems while being denied the authority to contribute their own experiences as valid data. This form of epistemic injustice transforms AI into a tool of extraction rather than collective knowledge-making.

Drawing on Mehrotra's concept of human-in-the-loop systems, the film emphasizes that ethical safeguards are not merely technical interventions but moral choices. When certain communities are reduced to annotators of external realities, AI inevitably reflects the worldview of those who control data pipelines.


Conclusion: Raising Intelligence, Owning Responsibility 

The closing sequence of Humans in the Loop advances a clear yet unsettling argument: artificial intelligence does not inherently produce biased, insufficient, or harmful outcomes. These failures emerge from the false, incomplete, or prejudiced data humans choose to feed into systems. AI mirrors human values—not because it is powerful, but because it is dependent.

The film challenges humanity’s tendency to fear artificial intelligence while ignoring its own ethical failures. Despite possessing natural intelligence, humans repeatedly evade responsibility by attributing harm to machines rather than confronting the social, economic, and moral conditions that shape them.

If AI learns like a child, then society must take responsibility for how it is raised. Humans in the Loop ultimately asks viewers to reconsider intelligence itself—not as a technological achievement, but as an ethical relationship between those who teach and those who learn.

References 

Sahay, Aranya, director. Humans in the Loop. Storiculture / Museum of Imagined Futures / SAUV Films, 2025.
