Who owns AI? Digital literacy, inequality, and the future of learning

Published On: 14 May 2025

Tools like ChatGPT have become a daily part of life for students and educators alike. But beneath the surface of convenience and efficiency lie deeper, more uncomfortable questions: not just how AI is used, but who benefits from it, who is left out, and whether education systems are prepared for the digital future being sold to them.

These questions framed the challenge posed by Ms Helen Beetham, a UK-based researcher and consultant in digital education, during a recent webinar hosted by the Community of Practice on Digital Education in Learning and Teaching (DELT CoP) of Universities South Africa (USAf). The event, titled Digital Literacies in the Age of AI, took place on 10 April 2025 and was aimed at educators, learning designers, researchers, and institutional leaders working across South Africa’s higher education sector.

Why was this webinar held?

The session was held to equip higher education professionals with a critical understanding of the tools increasingly shaping academic life. The webinar forms part of ongoing efforts to support academic staff and digital education practitioners in responding to technological change through informed, ethical, and context-aware approaches.

AI systems are already being integrated into teaching, learning, assessment, and administration. The webinar aimed to raise awareness of the broader consequences of these technologies and to prompt institutions to examine their roles as adopters, regulators, and gatekeepers. Examining those roles, Beetham suggested, would reshape what AI literacies mean.

Who was it for?

The primary target audiences were educators, learning technologists, academic developers, and university staff tasked with teaching and supporting students. The webinar also spoke to the needs of students affected by the institutional use of AI, and of the policymakers who must make decisions about integrating AI tools into higher education systems.

Beetham pointed out that students are not passive observers in this space; they are already actively using AI tools, often with little guidance. As such, the value of the session lay in helping staff understand how to support students more effectively in this fast-changing environment.

What was discussed?

Beetham’s presentation offered an in-depth examination of generative artificial intelligence’s origins, risks, and consequences. The talk was positioned not as a showcase of AI’s benefits, but as a critical reflection on its societal and educational implications. Rather than opening with the potential of AI, Beetham deliberately began with concerns about inequality, labour, power, and digital infrastructure.

“Unlike many talks and guidance around AI, I tend to start with the risks, the problems, and the concerns… and then I get around to saying what a digital or AI literacy might look like to address these,” said Beetham.

This framing set the tone for a session that explored generative AI’s cultural and technical foundations. She explained how AI tools are trained on large datasets, predominantly in English and created by people with the privilege of digital access and visibility. This, she argued, results in models that disproportionately reflect dominant worldviews while underrepresenting minority cultures and languages.

Attendees were also introduced to critiques of the commercial structure of AI. Citing authors like Naomi Klein, Beetham noted how generative AI, while appearing to democratise creation, often mediates access through proprietary platforms that profit from user data and historical cultural archives.

Where can this knowledge be applied?

The information shared in the session informs decisions at various levels: curriculum development, teaching practices, policy frameworks, and digital transformation strategies. Universities may apply the insights in designing assessments, guiding students in ethical AI use, and selecting or regulating AI-powered platforms.

Institutions were also encouraged to consider the regulatory responsibilities they carry. Beetham referred to the European Union AI Act, which had initially categorised educational use of AI as “high risk.” That classification came with requirements for risk assessments, transparency, and human oversight. Beetham observed that current systems rarely meet these criteria.

Implementation will vary by institution, but practical steps discussed during the webinar include:

  • developing internal policies on AI use and academic integrity;
  • training lecturers to assess AI-assisted student work;
  • embedding critical AI literacy into student learning;
  • advocating for transparency from AI vendors and platforms; and
  • designing institutional safeguards with human oversight mechanisms in place.

Beetham highlighted that frameworks like the UNESCO AI Competency Framework offer guidance on ethics and safety, but warned that educators must be given the power and resources to act on those principles.

Throughout the presentation, Beetham linked AI’s current form to its industrial and colonial roots. She pointed to Charles Babbage’s early computing work, which included efforts to reduce labour costs on plantations by extracting “intelligence” from the worker and transferring it into machines. She showed how that logic persists in the design and deployment of modern AI systems.

The session also explored the gender gap in AI use. More than 75% of regular ChatGPT users, Beetham noted, are men—a statistic that opens up questions about who these tools are designed for, who feels included, and whose needs are overlooked.

Attendees also heard about the invisible labour underpinning AI technologies, particularly the low-paid data workers in the Global South who refine AI outputs. While marketed as “high-value” jobs locally, these roles are often precarious and poorly compensated, yet essential to the functioning of the tools now being promoted to students and educators worldwide.

The presentation extended into environmental concerns. Beetham cited data showing that data centres in countries like Ireland now consume more power than all residential households combined. These centres are often located in low-regulation zones and contribute to significant ecological pressure, despite AI being marketed as a clean, forward-looking technology.

Back on campus, AI is already disrupting peer review, publishing, information systems, and academic workflows. AI bots constantly scrape resources like Wikipedia, journals are flooded with AI-generated submissions, and predictive engines are replacing traditional research search methods. Beetham warned that fragile knowledge ecosystems—built on trust, collaboration, and peer validation—risk collapsing under the weight of automated, scale-driven information systems.

The webinar concluded by examining what a critical AI literacy should look like. While AI literacy is increasingly promoted in universities, Beetham noted that calls for fairness and ethical use can become meaningless without structural power.

The session encouraged educators to go beyond technical skill-building and focus instead on fostering critical awareness. This includes asking:

  • Who owns the AI tools we use?
  • What values are embedded in their design?
  • Who profits and who pays the price?

Beetham argued that true AI literacy must include historical context, power analysis, and questions of justice. It must prepare educators and students to challenge systems, not just operate within them.

The Digital Literacies in the Age of AI webinar offered an urgent and expansive view of artificial intelligence and its role in higher education. Rather than simply promoting responsible use, the session urged educators and institutions to interrogate the systems themselves, highlighting the importance of equity, sustainability, and critical thinking in shaping digital futures.

Mduduzi Mbiza is a commissioned writer for Universities South Africa.