HUMANS IN THE LOOP (2024)
This blog post is part of the Sunday reading assigned by Dilip Barad. It analyses Humans in the Loop in depth, exploring AI, bias, and epistemic representation; labour and the politics of cinematic visibility; and film form, structure, and digital culture. Worksheet for the Task
🎬 PRE-VIEWING WORKSHEET: CONTEXT & KEY CONCEPTS
🔹 AI Bias & Indigenous Knowledge Systems
- AI bias refers to systematic distortions in algorithmic outputs that arise from the data, categories, and assumptions embedded in machine learning systems. Rather than being neutral computational errors, biases often reflect historical inequalities, dominant cultural norms, and selective representation within datasets.
- Machine learning systems depend on classification. However, classification is never ideologically neutral; it simplifies complex realities into fixed, standardized categories (see the sketch after this list). These categories are shaped by designers, institutions, and economic priorities, thereby embedding cultural assumptions into technical infrastructures.
- Indigenous ecological knowledge systems operate differently. They emphasize relationality, seasonal rhythms, oral transmission, and context-dependent understanding. Knowledge is experiential and collective rather than abstracted into universal taxonomies.
- When such situated knowledge encounters rigid algorithmic structures, tension emerges. Indigenous frameworks resist reduction because they are grounded in lived interaction with land, community, and environment.
- Thus, indigenous epistemologies challenge technological framings by exposing the limits of computational universality. They reveal that intelligence is plural and culturally situated, not singular or purely mathematical.
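To make the non-neutrality of classification concrete, here is a minimal Python sketch assuming a hypothetical three-label land-use taxonomy. The labels, keyword rules, and sample description are illustrative inventions, not taken from the film or any real annotation system.

```python
# Minimal, hypothetical sketch: a fixed taxonomy forces rich, situated
# descriptions into one of a few predefined labels, discarding context.
# All names and categories here are illustrative, not from any real dataset.

FIXED_TAXONOMY = {"forest", "farmland", "wasteland"}  # designer-chosen labels

def classify(description: str) -> str:
    """Map a nuanced, context-dependent description onto the rigid label set."""
    text = description.lower()
    if "crop" in text or "field" in text:
        return "farmland"
    if "tree" in text or "grove" in text:
        return "forest"
    # Whatever the taxonomy cannot express collapses into a residual label.
    return "wasteland"

# A relational, communal description loses its meaning in translation:
label = classify("sarna grove tended by the village across generations")
assert label in FIXED_TAXONOMY
print(label)  # -> "forest": the communal and spiritual dimensions have no label
```

The point is structural: whatever the taxonomy cannot name is not preserved as nuance but silently reassigned.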
🔹 Labour & Digital Economies
- Invisible labour in digital economies refers to forms of human work that sustain technological systems but remain socially and economically obscured. In AI production, this includes data annotation, content moderation, verification, and correction (a sketch of such a micro-task appears after this list).
- Although AI is frequently described as autonomous or self-learning, machine learning models require continuous human intervention. Workers classify images, interpret language, and refine datasets so algorithms can function effectively.
- This labour is often outsourced, precarious, and geographically marginalized. It operates within global digital capitalism, where value accumulates at the top of technological hierarchies while cognitive effort remains under-recognized.
- Highlighting invisible labour is politically significant because it disrupts the myth of automation. It reveals that so-called artificial intelligence is dependent on human judgment.
- Economically, invisible labour transforms human cognition into scalable data capital. Culturally, its invisibility reinforces assumptions that innovation is detached from embodied work. Bringing such labour into narrative focus exposes structural inequalities within contemporary digital economies.
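As a rough illustration of how such work is structured, the sketch below models an annotation micro-task in Python. The field names, piece rates, and timings are assumptions chosen only to make the economics visible, not figures from the film or any actual platform.

```python
# Illustrative sketch of the micro-task structure behind "self-learning" AI.
# Field names and values are hypothetical.

from dataclasses import dataclass

@dataclass
class AnnotationTask:
    image_id: str
    label: str           # the human judgment the model will learn from
    seconds_spent: float
    pay_per_task: float  # piece-rate wage, often a few cents

tasks = [
    AnnotationTask("img_0001", "cat", 11.2, 0.02),
    AnnotationTask("img_0002", "dog", 9.8, 0.02),
    AnnotationTask("img_0003", "cat", 14.5, 0.02),
]

# Every labelled example in a training set embodies this human effort;
# a dataset of a million images implies a million such acts of judgment.
total_pay = sum(t.pay_per_task for t in tasks)
total_time = sum(t.seconds_spent for t in tasks)
print(f"{len(tasks)} judgments, {total_time:.0f}s of work, ${total_pay:.2f} earned")
```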
🔹 Politics of Representation
- Representation in cinema is not mere depiction but the construction of meaning through framing, selection, and narrative emphasis. Media shapes how audiences understand both technology and marginalized identities.
- Public discourse often portrays AI as progressive, objective, and future-oriented. Conversely, Adivasi communities are frequently framed within developmental narratives as traditional or outside modernity. This contrast reinforces hierarchical binaries between technological modernity and indigenous life.
- By centering an Adivasi woman within an AI context, the film’s publicity and reviews suggest a disruption of this binary. Technology and indigenous identity are placed in dialogue rather than opposition.
- Representation thus becomes ideological: it determines whose knowledge is seen as innovative and whose as residual. If indigenous experience is framed as intellectually engaged rather than technologically excluded, dominant stereotypes are challenged.
- The politics of representation therefore operates at two levels: depicting technology not as neutral infrastructure but as a cultural construct, and depicting Adivasi culture not as static tradition but as an active participant in contemporary knowledge systems.
📖 POST-VIEWING REFLECTIVE ESSAY
TASK 1 — AI, Bias & Epistemic Representation
AI, Bias, and the Politics of Knowledge in Humans in the Loop
Artificial intelligence is frequently presented as neutral computation—objective, mathematical, and detached from social context. Humans in the Loop, directed by Aranya Sahay, challenges this assumption by representing AI as culturally produced and ideologically structured. The film argues that algorithmic systems do not merely process data; they inherit the assumptions, hierarchies, and exclusions embedded within the societies that design and sustain them. Through its focus on Nehma, an Adivasi woman engaged in data labelling work in Jharkhand, the film exposes algorithmic bias as socially situated and reveals the epistemic hierarchies that determine whose knowledge counts within technological systems.
The narrative foregrounds the human infrastructure behind machine learning. Rather than portraying AI as autonomous intelligence, the film repeatedly shows Nehma performing classification tasks—drawing bounding boxes, assigning labels, and verifying categories. These acts reveal that AI learning is dependent on human interpretation. The so-called “learning” of the machine is a structured repetition of human judgment. By situating the camera within the workspace, the film dismantles the myth of technical neutrality. AI emerges not as an independent entity but as a system shaped by selective data and predefined categories.
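The kind of record these on-screen actions might produce can be sketched as follows. The schema loosely echoes common object-detection formats such as COCO, but the specific fields and values are assumptions for illustration.

```python
# A hedged sketch of the record a single act of annotation work produces.
# Fields and values are assumed for illustration; the schema loosely
# resembles object-detection formats like COCO.

annotation = {
    "image_id": "frame_0042",
    "bbox": [120, 85, 64, 48],   # x, y, width, height of the drawn box
    "category": "person",        # chosen from a predefined label list
    "verified": True,            # a second human click confirms the label
}

# "Machine learning" here is the aggregation of such records: the model's
# later perception is a structured repetition of these human judgments.
print(annotation)
```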
Algorithmic bias is presented as structurally embedded rather than accidental. When Nehma labels images according to rigid taxonomies, the film highlights the reduction inherent in computational classification. Complex social identities and ecological realities are compressed into singular tags such as “professional,” “normal,” or “violent.” The repetition of bounding boxes visually reinforces this reduction. The screen fragments lived experience into measurable units, illustrating how algorithmic systems simplify multiplicity into standardized data points. Bias thus appears as a design consequence: it arises from the limitations and assumptions built into the classificatory framework itself.
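A minimal sketch can show how such an interface enforces this reduction. The three tags come from the film as described above; the function itself is a hypothetical illustration of a closed label set.

```python
# Minimal sketch of the reduction the film visualizes: an interface that
# accepts exactly one tag per image from a closed list. The tags are those
# named in the essay; the function is a hypothetical illustration.

ALLOWED_TAGS = {"professional", "normal", "violent"}

def tag_image(image_id: str, tag: str) -> dict:
    if tag not in ALLOWED_TAGS:
        # The interface cannot record ambiguity, context, or plurality;
        # whatever falls outside the taxonomy is simply unrepresentable.
        raise ValueError(f"'{tag}' is not in the permitted label set")
    return {"image_id": image_id, "tag": tag}

record = tag_image("frame_0107", "normal")  # one tag per image, no nuance
print(record)
```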
The film further demonstrates that such classifications are culturally situated. Nehma’s indigenous ecological knowledge—rooted in relational understanding of land, seasonality, and community—cannot easily be translated into fixed digital categories. Her pauses and hesitations while labelling forest imagery signal a disjunction between lived knowledge and algorithmic logic. What she understands contextually must be reformulated into abstract, decontextualized inputs. This tension reveals that AI systems privilege particular epistemologies—often standardized, Western, and market-oriented—while marginalizing others. Bias therefore reflects the dominance of one knowledge system over another.
This dynamic illustrates epistemic hierarchy. The authority to define categories lies with distant clients and designers, not with those performing interpretive labour. Nehma contributes her cognitive effort to shaping the dataset, yet she does not control the conceptual framework guiding classification. Her knowledge is instrumentalized but not recognized as epistemically authoritative. The film thereby exposes what scholars term epistemic injustice: the systematic devaluation of certain knowers within institutional structures. Indigenous knowledge becomes raw material for machine training but is denied legitimacy as knowledge in its own right.
Apparatus Theory offers a useful framework for interpreting this critique. Traditionally associated with the ideological operations of cinema, Apparatus Theory argues that film positions spectators within structured systems of meaning that appear natural but are socially constructed. In Humans in the Loop, the AI interface functions analogously to a cinematic apparatus. It organizes perception through framing, bounding, and categorization. Just as the cinematic frame directs the viewer’s gaze, the algorithmic interface directs machine perception. Both systems produce meaning by delimiting what can be seen and how it can be interpreted. By foregrounding the interface rather than concealing it, the film reveals this structuring power. The ideological function of technology becomes visible rather than naturalized.
Representation plays a central role in this exposure. The film does not depict Nehma as technologically deficient or culturally static. Instead, it presents her as intellectually reflective and ethically aware. This representation disrupts dominant media narratives that frame Adivasi communities as outside modern technological processes. By positioning her at the centre of AI production, the film challenges the binary between tradition and modernity. The narrative suggests that indigenous identity and technological labour coexist within contemporary digital culture, complicating simplistic developmental hierarchies.
At the same time, the film avoids romanticizing indigeneity. Nehma’s knowledge does not automatically resolve the contradictions of machine learning. Rather, her situated understanding exposes the limits of universal classification. The forest imagery intercut with screen-based labour reinforces this contrast. Natural spaces are depicted with depth and texture, emphasizing relational complexity. In contrast, the digital interface appears flat and segmented. This formal juxtaposition underscores the epistemic tension between contextual knowledge and algorithmic abstraction.
Power relations remain central to the film’s argument. The unseen clients who define the categories embody structural authority. Their absence from the frame intensifies the asymmetry: control is exercised through data pipelines rather than physical presence. The labourer sees the interface but not the institutional decision-makers shaping it. This invisibility mirrors broader dynamics within digital capitalism, where those who design systems remain detached from those who execute micro-tasks. Algorithmic bias is therefore not only cultural but economic; it reflects hierarchies embedded within global technological production.
The metaphor of the “human in the loop” operates beyond technical terminology. In engineering discourse, the phrase refers to systems requiring human oversight. In the film, it acquires political resonance. Humans are necessary for AI training, yet they lack decision-making power. The loop suggests continuity, but it does not imply equality. Nehma’s participation sustains the system, but her epistemic authority remains constrained. The film thus reframes the loop as a site of asymmetrical dependency rather than collaborative co-creation.
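For contrast, the engineering usage can be sketched in a few lines of Python: the model defers low-confidence cases to a human reviewer, yet the label set itself remains fixed upstream. The threshold, labels, and stub functions below are assumptions, not any real system.

```python
# A minimal human-in-the-loop sketch in the engineering sense: the model
# defers low-confidence predictions to a human reviewer. Threshold, labels,
# and stub functions are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.8

def predict(item: str) -> tuple[str, float]:
    # Stand-in for a trained model; returns (label, confidence).
    return ("forest", 0.55)

def ask_human(item: str) -> str:
    # Stand-in for the annotator's judgment; in production this would be
    # a task queue routed to workers like Nehma.
    return input(f"Label for {item!r}: ")

def human_in_the_loop(item: str) -> str:
    label, confidence = predict(item)
    if confidence < CONFIDENCE_THRESHOLD:
        label = ask_human(item)  # the human sustains the system...
    return label                 # ...but never chooses the label set itself.
```

In this sketch the stub model's confidence (0.55) always falls below the threshold, so every item is routed to the human, who can only choose among categories defined elsewhere: precisely the asymmetry the film dramatizes.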
Importantly, the narrative refrains from offering technological solutions. There is no suggestion that better coding alone can eliminate bias. Instead, the film situates bias within social structures. As long as datasets reflect unequal representation and categories privilege dominant frameworks, algorithmic outputs will reproduce those hierarchies. The absence of narrative closure reinforces this argument. Structural problems cannot be resolved through individual intervention.
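One way to see why better coding alone cannot remove this kind of bias is a sketch, assuming an invented 90/10 label skew, of how a model that merely minimizes error echoes its dataset.

```python
# Hedged sketch of why skewed data reproduces hierarchy: a classifier that
# simply minimizes error on an unbalanced dataset learns to echo the skew.
# The labels and counts are invented for illustration.

from collections import Counter

training_labels = ["professional"] * 90 + ["traditional"] * 10  # 90/10 skew

majority_label, _ = Counter(training_labels).most_common(1)[0]

def naive_model(_image) -> str:
    # Predicting the majority class is 90% "accurate" here, so optimizing
    # accuracy alone never corrects the imbalance; the skew is baked in.
    return majority_label

print(naive_model("any_image"))  # -> "professional"
```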
Cinematically, the restrained style supports this critique. Close-ups of Nehma’s concentrated gaze emphasize the cognitive labour behind machine learning. The rhythmic clicking of the interface contrasts with the layered sounds of the forest, symbolizing the narrowing of perception within digital systems. Editing connects micro-actions—such as a single mouse click—to broader technological consequences, suggesting that large-scale AI infrastructures are built from countless small judgments. Form and argument align: the film’s aesthetic choices render visible what digital discourse obscures.
Ultimately, Humans in the Loop positions artificial intelligence as a mirror of societal structures rather than an autonomous force. Algorithmic bias is revealed as culturally situated because it emerges from selective epistemologies embedded within data and design. Epistemic hierarchies become visible through the unequal distribution of authority between those who classify and those who define categories. By foregrounding indigenous knowledge without romanticization, the film challenges the universality claimed by technological systems and insists on the plurality of intelligence.
Through its narrative focus and formal strategies, the film transforms AI from a symbol of futuristic innovation into a site of contemporary ideological struggle. Technology does not transcend culture; it is shaped by it. The machine learns what it is taught, and what it is taught reflects power relations. In exposing this dynamic, Humans in the Loop compels viewers to reconsider not only how artificial intelligence functions, but whose knowledge it encodes and whose it leaves outside the frame.