Review: Coded Bias

Coded Bias - Documentary (2020)

The central problem posed by Coded Bias (2020) is not whether artificial intelligence works, nor whether it is improving in accuracy. The documentary advances a far more unsettling claim: algorithmic systems have become instruments of governance without democratic consent, transparency, or accountability. In doing so, they silently reorganise power—deciding who is visible, who is legible, who is trusted, and who is punished.

Rather than framing AI as a neutral technological evolution, Coded Bias interrogates the political life of algorithms. It insists that automated systems are not merely tools but decision-making infrastructures that increasingly determine access to employment, housing, credit, education, welfare, and freedom itself. The documentary’s argument is that power has migrated into code, while responsibility has evaporated behind technical opacity.

This concern places Coded Bias in direct conversation with George Orwell’s 1984. Orwell did not imagine oppression as the product of sadistic individuals alone, but as something embedded in systems, routines, and information architectures. In this sense, Coded Bias suggests that Orwell’s dystopia has not arrived through overt authoritarianism, but through bureaucratised computation, data extraction, and automated judgment.

The danger, the film argues, is not that machines will rebel against humans—but that humans will increasingly live under systems that classify, predict, and constrain them, while appearing objective, efficient, and inevitable.


Computers, as Coded Bias makes repeatedly clear, do not understand the future. They predict it by mining the past. Algorithms are trained on historical data—data shaped by inequality, exclusion, and structural violence—and are then tasked with forecasting human behaviour. What appears as innovation is therefore often historical repetition at machine speed.

This is the documentary’s foundational concern: when prediction replaces judgment, and efficiency replaces ethics, technology ceases to be neutral infrastructure and becomes political authority. The question Coded Bias asks is not whether AI has “bright and dark sides,” but rather who decides where those sides fall, and on whose lives they operate.

By situating facial recognition, predictive policing, and automated classification within global systems of surveillance and corporate power, Coded Bias argues that contemporary AI does not represent social progress. Instead, it replicates existing worlds, encoding inequality into software while claiming objectivity.


This is where George Orwell’s 1984 becomes analytically indispensable. Orwell’s insight was not merely that surveillance exists, but that power becomes most effective when embedded into systems that feel inevitable, invisible, and rational. Coded Bias demonstrates that algorithmic governance is precisely such a system.


The Central Argument of Coded Bias: From Assistance to Control

The documentary’s central argument can be distilled into a precise claim:

AI systems have shifted from assisting human decision-making to silently governing it, without public consent, democratic oversight, or ethical safeguards.

While facial recognition is often justified as a tool for preventing attacks or increasing security, Coded Bias interrogates this justification by asking: security for whom, and at what cost? The film does not deny that AI can function efficiently. Instead, it exposes how efficiency becomes the moral alibi for surveillance.


The film concedes that AI has both “bright and dark sides,” yet deployment occurs before safeguards exist, particularly when technologies are tested on poor and marginalised populations. Surveillance infrastructures are rarely trialled on the powerful; they are piloted on those with the least capacity to resist.

Thus, Coded Bias reframes AI development not as public innovation, but as corporate-led experimentation, where:

  • algorithms are designed for institutional convenience,

  • deployed in socially unequal environments,

  • and defended through technical opacity.


Algorithmic Bias as Political Architecture

Replication, Not Progress

One of the documentary’s most incisive claims is that machines are not creating new worlds; they are replicating existing ones. AI systems trained on biased data do not transcend history; they operationalise it.

This is why algorithmic bias cannot be reduced to error. It is the predictable outcome of systems designed within unequal social orders. The problem is not that algorithms occasionally fail, but that they work precisely as expected within unjust frameworks.
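
To make this mechanism concrete, here is a minimal, hypothetical sketch (not drawn from the documentary): a classifier is trained on synthetic “historical hiring” records in which one group was systematically penalised. The data, the scenario, and the penalty term are all invented for illustration; the point is that a model fitted to prejudiced records reproduces the prejudice, working exactly as designed.

```python
# Illustrative sketch only: synthetic, hypothetical "historical hiring" data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two inputs: a qualification score distributed identically across groups,
# and a group label that ought to be irrelevant to the decision.
qualification = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical outcomes: past gatekeepers rewarded qualification but
# systematically penalised group 1 -- the inequality baked into the record.
logits = qualification - 1.5 * group
hired = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# Train on the biased history.
X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, hired)

# Probe: two candidates with identical qualification, different groups.
probe = np.array([[0.0, 0.0], [0.0, 1.0]])
print(model.predict_proba(probe)[:, 1])  # group 1 scores markedly lower
```

By every conventional metric the model “works”: it predicts the historical labels well. The disparity only surfaces if someone thinks to probe for it, which is precisely the audit the film argues rarely happens.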

Corporate Surveillance and Institutional Power

Coded Bias is explicit: most AI systems are not built for public good but for corporate and institutional efficiency. Surveillance capitalism depends on continuous data extraction, and algorithms thrive on constant monitoring.

  • Corporations know what they want algorithms to do,

  • but often claim they cannot fully understand or control what those systems actually produce.

This contradiction allows responsibility to dissolve. When harm occurs, accountability is deflected onto “the system,” reinforcing what the documentary identifies as institutional opacity.


Global Geographies of Surveillance

The documentary’s movement across global locations—China, the United States, the United Kingdom, South Africa, and beyond—demonstrates that algorithmic governance is a planetary condition.

The film’s listing of Hankou, Huzhou, Philadelphia, London, Cape Town, Washington D.C., and the Soviet Union is not incidental. These locations reveal how:

  • surveillance infrastructures adapt to political contexts,

  • yet produce similar outcomes: classification, control, and behavioural prediction.

China’s social credit system is often invoked as dystopian, yet Coded Bias complicates this narrative: China is at least transparent about its surveillance. Citizens know they are being watched and are expected to behave accordingly.

In contrast, Western democracies often operate through invisible classification. Individuals are scored, ranked, and separated without knowing it. The absence of awareness does not indicate freedom—it indicates unconscious governance.


Orwell’s 1984: Power Without a Face

From Big Brother to the Black Box

In 1984, Big Brother is less a person than a symbol of systemic power. Surveillance is not merely visual but psychological. Similarly, Coded Bias replaces Big Brother with the Black Box—the algorithm that decides but cannot explain.

“I have many names. I am Algorithm. I am Black Box.”


Algorithms promise conclusions without reasoning, outcomes without explanations.
How can a system give a conclusion if it cannot tell us how it reached it?

This opacity transforms authority into something unchallengeable. As in Orwell’s world, truth becomes whatever the system outputs, regardless of lived reality.

Recorded → Logged → Analysed

Orwell imagined surveillance as constant observation. Today, surveillance is procedural:

  • actions are recorded,

  • data is logged,

  • behaviour is analysed,

  • consciousness itself becomes a data stream.

This is not speculative fiction—it is infrastructural reality.


Invisibility, Efficiency, and the Loss of Human Judgment

Algorithms are often described as “better than random,” yet Coded Bias insists this is insufficient when systems shape lives. Efficiency, the film reminds us, is the primary design goal; justice, empathy, and dignity are not.

The automation of work raises urgent questions, but the documentary goes further by asking: who controls the gatekeepers? When algorithms determine access to jobs, housing, or credit, exclusion becomes automated, and therefore harder to contest.

Crucially, Coded Bias exposes how algorithms increasingly claim to understand people better than people understand themselves. When prediction replaces self-knowledge, autonomy erodes.


Resistance, Ethics, and the Meaning of Being Human

The documentary does not end in despair. Its closing act offers its most human intervention: resistance.

To reject a particular technological future, whether by protest, regulation, or refusal, is not anti-progress. It is profoundly human. As the film insists:

  • To be human is to be vulnerable.

  • To be human is not always to be efficient.

  • Sometimes humanity means disobedience.

  • Sometimes it means saying no.

Automation performs what it is programmed to do. Ethics begins where programming ends.


Pathways Forward: Accountability Over Efficiency

Drawing from Coded Bias, meaningful responses must include:

  • democratic oversight of algorithmic systems,

  • regulation of facial recognition and biometric surveillance,

  • transparency mandates for high-stakes algorithms,

  • public education to recognise hidden governance,

  • and ethical responsibility embedded at institutional levels.

The goal is not to eliminate AI, but to reclaim agency over systems that increasingly govern social life.


Conclusion: From Dystopia to Infrastructure

Coded Bias reveals that Orwell’s 1984 was not a prophecy of totalitarian spectacle, but a blueprint for systemic, invisible control. Surveillance today does not require overt force; it relies on normalisation, efficiency, and data-driven authority.

The most dangerous aspect of algorithmic governance is not that it watches—but that it decides, quietly and conclusively.

To challenge this is not to reject technology. It is to insist that human values remain sovereign over automated systems.

In an age where prediction threatens to replace freedom, Coded Bias reminds us that the future is still a political choice.

This critique does not position artificial intelligence as an inherently harmful or regressive force. Rather, it challenges the uncritical delegation of social, political, and ethical authority to automated systems operating without transparency or accountability. AI, when governed responsibly, has the capacity to support human decision-making, reduce certain forms of bias, and improve institutional efficiency. The concern raised by Coded Bias is therefore not technological advancement itself, but the normalisation of algorithmic power in the absence of democratic oversight. To question how AI is designed, deployed, and regulated is not to reject technology, but to insist that it remains aligned with human values, legal responsibility, and social justice.

References

Kantayya, Shalini, director. Coded Bias. 7th Empire Media, 2020.

Orwell, George. Nineteen Eighty-Four. Penguin UK, 2004.
