Anna Charlton
The United Nations’ International Day of the World’s Indigenous Peoples theme for 2025 is Indigenous Peoples and AI: Defending Rights, Shaping Futures, with the upcoming commemoration event focusing discussion on ‘how Indigenous Peoples rights can be ensured in the age of AI, and the associated challenges and opportunities Indigenous Peoples face’. Timely and pertinent, this year’s theme sits within a broader context in which Indigenous rights continue to be undermined by systemic racism and structural disadvantage across the globe. Earlier this year, Aluki Kotierk, Chair of the 24th United Nations Permanent Forum on Indigenous Issues, described how Indigenous Peoples “remain excluded from decisions regarding the very foundation of our identity, survival, and self-determination.” As artificial intelligence begins to take a larger role in society, it is critical to examine how society is currently shaped, and how something as powerful as AI may exacerbate and amplify existing inequalities.
Australia – a site of current research into Indigenous and cultural rights surrounding mass graves, aligned to Pillars 1 and 2 of MaGPIE’s human rights framework – continues to hold a poor track record on human rights for First Peoples, with gross inequalities in health, wellbeing, life expectancy and education between Indigenous and non-Indigenous Australians. To look at Indigenous rights and AI in Australia is to examine the barriers to human rights that already exist, to consider how AI may heighten those barriers, and to ask what protocols would need to be in place for these new technologies to be used to bridge gaps instead.
Leading cultural and Indigenous rights lawyer and Wuthathi/Meriam woman Terri Janke describes the interplay between Indigenous culture and emerging AI as the ‘new frontier’. Outlining a number of examples of the problematic nature of this technology – from AI-generated ‘Indigenous characters’ used to lobby referendum no-votes and AI-generated ‘Aboriginal style art’ to inaccurate and inconsistent language translations – Janke and colleagues explore the serious ethical, legal and cultural challenges that AI presents for Indigenous Knowledges (IK). Because current AI models work by extracting data from source material and generating a summary of what they have been trained to ‘think’ is being asked for, they can surface secret or sensitive knowledge and strip culture from its context, which is highly problematic for Indigenous Cultural and Intellectual Property (ICIP).
Earlier this year, Adobe was criticised for selling AI-generated images of people and artworks as stock images, including AI-generated figures adorned with ‘random markings’ and tagged as ‘Aborigines’ (a term considered inherently racist and colonial): fake images devoid of cultural meaning and significance. Dr Hannah McGlade, a Kurin Minang Noongar woman and Senior Indigenous Fellow at the United Nations Office of the High Commissioner for Human Rights, responded by warning that Adobe is disregarding human rights principles in permitting and selling this content.
“Using AI-generated images as stock images is entirely inappropriate. UN Guiding Principles on Business and Human Rights need to be respected by all non-state entities, including Adobe.”
Dr Hannah McGlade
Image caption: ‘AI-generated “Indigenous Australian” children appear to resemble South-East Asian people rather than Indigenous Australians.’ (Image: screenshot of Adobe Stock, via National Indigenous Times)
Tokenistic, reductive AI-generated ‘cultural’ content that exploits Indigenous Knowledges is significant and damaging. It is all too easy to view online knowledge as neutral, an extensive repository of all there is to know at our fingertips, yet existing online systems are exclusive, available only to those with the resources, the access and the opportunity. The disparity in access to the tools of AI is one all-too-prevalent problem, commonly understood as the digital divide, but this is only part of the story of AI inequity. Exploitation, extraction and reduction have long been tools of colonisation. The rapid emergence of AI represents an evolution in dominant knowledge production: a landscape in which ‘vast computational resources and talent pools have become the gatekeepers of AI development’ further embeds certain knowledges as universal knowledge. Because AI models are trained predominantly on data from the West and China, these dominant knowledges will be repeated, reproduced and regurgitated with each use. What goes in must inevitably come out, and with it the potential for technological colonisation at the local and global level.
“When AI systems are embedded with a particular set of cultural values and perspectives, they inadvertently become agents of cultural construction. This process can lead to the subtle yet pervasive imposition of certain norms and values across various societies, particularly affecting those with different cultural backgrounds or value systems. This phenomenon is not limited to high-level concepts but permeates the minutiae of daily life and decision-making.”
AI and the Risk of Technological Colonialism
What does this mean for this research project? Universities are increasingly turning to AI platforms, joining industry in a ‘learning as you go’ effort to harness trained AI models for efficiency while balancing integrity and academic rigour. It is possible to see the many benefits of AI capabilities in supporting research aims; it is all too easy to see the risks too. As a non-Indigenous researcher examining Indigenous rights, the greatest and most immediate threat I see in AI is its ability to disrupt meaningful relationships and original, genuine knowledge sharing. An over-dependence on, or the privileging of, Western-programmed AI-generated information at any phase of research would be counterproductive when relationships and respect for local, Indigenous ways of doing lie at the heart of this journey. Utilising AI as it is currently offered through models like ChatGPT feels at odds with the guiding ethical principles of my research, which centres relationships and responsibility.
“What’s particularly concerning is that the growing reliance on AI technologies risks eliminating opportunities for relationship-building and meaningful learning that fosters critical self-reflection. This shift threatens to relegate us to the status of an ‘other’ in the digital world.”
Dr Tamika Gill
However, I also see hope in the development of AI models trained and programmed in alignment with Indigenous cultural values. The vast potential for programming AI means that it is possible to protect ICIP by embedding these protections in the models themselves. This of course means that First Nations peoples must be involved in developing AI that reflects and respects local cultural protocols and knowledges, and great work is already underway in this area. The promise of localised, ethical AI models aligned to IK is exciting. Not only can such models protect ICIP from misuse, but the far-reaching impact and usage of AI means they hold the power to bridge gaps previously unbridged. While the threat of technological colonisation remains significant, the possibilities of new freedoms exist too. Angie Abdilla et al. share a vision for this in their paper, Out of the Black Box: Indigenous protocols for AI, one that enables the “creation of policies, standards, and protocols for various software languages, systems and architecture, not only for the sake of representation, but in the hope of initiating a divergent evolution of intelligent autonomous machines.” A divergent model of AI in this context is one aligned not to existing Western knowledges but to an “Indigenous worldview that privileges communal wellbeing, wholeness and balance.”
When Indigenous worldviews are not only integrated but privileged in the development of AI, there is the potential to shape dominant cultural and socioeconomic landscapes and to deconstruct the illusion of homogenous cultural and knowledge norms online that continue to restrict and erode Indigenous rights.
Poignantly, one of the key findings of Abdilla et al. is that protecting and safeguarding IK in artificial intelligence will lie in the programming of ‘restrictive protocols’: building limitations into the models themselves that preserve sacred and secret knowledge that cannot be meaningfully integrated or understood in a synthetic technological data system.
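To make the idea concrete, here is a minimal sketch, in Python, of what a ‘restrictive protocol’ guard layer might look like. It is an illustration only: Abdilla et al. describe restrictive protocols conceptually rather than prescribing any implementation, and every name here (RestrictedCategory, ProtocolRegistry, guarded_respond) is hypothetical. Crucially, real protocols would be defined and governed by the communities whose knowledge is at stake; the crude keyword matching below is a stand-in for that governance, not a substitute for it.

```python
# A minimal, hypothetical sketch of a 'restrictive protocol' guard layer.
# Nothing here comes from Abdilla et al., who describe restrictive protocols
# conceptually; every name below is invented for illustration only.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class RestrictedCategory:
    """A class of knowledge a community has marked as not for machine use."""
    name: str
    keywords: frozenset[str]  # coarse trigger terms, maintained by the community
    refusal: str              # message returned instead of generated content


@dataclass
class ProtocolRegistry:
    """Community-governed registry consulted before any output is released."""
    categories: list[RestrictedCategory] = field(default_factory=list)

    def check(self, text: str) -> RestrictedCategory | None:
        tokens = set(text.lower().split())
        for category in self.categories:
            if tokens & category.keywords:
                return category
        return None


def guarded_respond(prompt: str, registry: ProtocolRegistry, model) -> str:
    """Refuse by default: restricted topics never reach the model at all."""
    hit = registry.check(prompt)
    if hit is not None:
        return hit.refusal
    answer = model(prompt)
    # Check the output too: restriction applies to what comes out,
    # not only to what goes in.
    hit = registry.check(answer)
    return hit.refusal if hit is not None else answer


# Example usage with a stand-in 'model' (any callable taking a prompt string).
registry = ProtocolRegistry([
    RestrictedCategory(
        name="sacred-sites",
        keywords=frozenset({"sacred", "ceremony"}),
        refusal="This knowledge is held by its custodians and is not available here.",
    ),
])
print(guarded_respond("Tell me about the ceremony held at the sacred site", registry, model=lambda p: p))
```

The design choice worth noticing is refusal by default: the guard sits in front of the model and checks both the prompt and the output, so restricted knowledge is never generated, summarised or paraphrased. That is the ‘based on less, not more’ principle described below.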
It is in this that I see the greatest gift and opportunity: AI models based on less, not more. Sensitive, boundaried and respectful – local, values-driven AI programming that respects limitations and acknowledges that all there is to know of this world and the people in it will never be found online.