Audrey Watters: ‘AI is ideological’

Think of computer code as a new and powerful accomplice to legal code – the rules by which society finds itself governed. Who gets to enforce it? asks Audrey Watters

Artificial intelligence is not developed in a vacuum. AI isn’t simply technological: it’s ideological. 

So when we talk about the future of AI – and how AI might threaten our ability to address social inequalities and to organize against power structures – we must remember that AI reflects beliefs and practices that are already in place.

And we must consider the ways in which these systems are already complex and designed to be impervious (or at least resistant) to change.

Who can understand the law, for example? Who gets to practise law or make it? Who gets to enforce it? It’s not a perfect analogy, of course, but we can think of computer code as a new and powerful accomplice to legal code – the rules by which society finds itself governed.

But computer code can also be a foe of those rules: it carries no obligations of transparency, disclosure, due process, or democracy.


We’re told that AI will help bring about more ‘personalization’, something that has great appeal in a culture that values individualism and consumption.

But we have very little insight into how the algorithms that drive AI actually make decisions – how they determine what ‘personalization’ means or entails – and very little recourse. Formidable as the work of dismantling oppressive power structures already is in practice – legally, politically, culturally – the opacity of AI will make it more challenging still. Take a good look at the world around us today: this opacity will likely make oppression even more entrenched.

Audrey Watters is an independent scholar and education tech’s Cassandra. hackeducation.com