Digital literacies in the era of AI: A lively discussion

Published On: 14 May 2025

The rapid rise of generative AI tools like ChatGPT has forced educators and policymakers to grapple with urgent questions: How should universities prepare students for an AI-driven world? Who bears responsibility for ethical integration? And what key questions should be asked about AI in teaching and learning?

The recent webinar on Digital Literacies in the Age of AI by Manchester University researcher Ms Helen Beetham, presented under the auspices of Universities South Africa’s Digital Education (DELT) Community of Practice (CoP), was provocative and thought-provoking. It was followed by a candid discussion about the role of institutions, AI literacy, equity, and student learning, with participants from universities across South Africa who are engaged in digital education.

Dr Nazira Hoosen, Associate Lecturer and Educational Developer who supports academics’ use of educational technologies at the University of the Witwatersrand (WITS), moderated the conversation.

Responsibilities: helping students navigate generative AI

Professor Deborah Blaine, Senior Lecturer in the Mechanical and Mechatronics Department at Stellenbosch University: I don’t have the answer as to who is responsible for helping students navigate AI. I just know that I see my students either using this technology, or being scared to use it, and I know there is an expectation in industry that our graduates will know how to use AI tools, just like they use a spreadsheet or CAD software. That’s what I feel responsible for, as an educator. But in my environment, there’s no formal expectation or institutional support. I’m pushing for more structured integration of AI into the curriculum, but I’m not sure how much hope I have. Institutions are happy if we do it in our own time, but are not necessarily willing to fund it.

If I want to use AI in my classroom, I don’t think there are people in the university equipped to evaluate whether I’m using it ethically. I need community spaces to have those discussions, so I don’t fall into traps I haven’t yet seen. We need a collective understanding.

On discomfort, critique, and responsibility

Ms Helen Beetham: Yes, when we’re being urged to use AI, we often feel that if we don’t, we’re failing in our pastoral duty to students. But it’s not about spending more classroom time; it’s about how we frame it. A more authentic response might take less effort than always trying to find ways to make AI useful. Universities said “yes” to AI, and now staff are expected to make that story true. It’s exhausting. And often, the tool isn’t great at what we need it to do. We could spend less time finding ‘use cases’ and more time helping students notice the limitations, and having open conversations about what kind of futures they want and we want.

Critical AI literacy, discomfort, and resistance

Ms Fatima Rahiman, Online Learning Programmes Project Manager at WITS: This is an evolving conversation. There needs to be space for critique. I once gave a webinar on critical AI literacy, and some colleagues dismissed it, as if I was being overly negative. But we’re expected to test these tools. I once spent four or five hours using ChatGPT to analyse data. It kept hallucinating. Yet we keep hearing that it’s been perfected. We shouldn’t send that message when we still don’t understand what’s happening under the hood.

Procurement, policy, and guilt

Dr Nicola Pallitt, Senior Lecturer and Educational Technology Specialist at Rhodes University: I’m both an academic and a service provider at a teaching and learning centre. I’ve been involved in AI policy and guidelines, like turning off AI detection tools and encouraging discussion. But I sometimes feel guilty and wonder: ‘Did we move too fast? Were we irresponsible in how we approached this?’ I think today has made me reflect deeply on that.

Ms Beetham: We can’t be individually responsible. We’re caught up in a systemic push. Calling it “AI literacy” already makes it sound like something everyone needs, just to participate in culture at all. Of course, we feel responsible, but guilt isn’t helpful. What matters is that we’re here having these conversations.

Equity and meaningful access

Ms Bianca Le Cornu, Learning Designer at the University of Pretoria: There is a tension. Generative AI can potentially support equity, especially with culturally diverse tools like DeepSeek. But access and meaningful use remain unequal. Tools may be available, but without guidance, foundational literacy, and facilitation, students outside formal education are left behind. The current landscape still favours those with digital access and institutional support. The question isn’t just access—it’s about who supports equitable use, and whose cultural starting points shape these tools.

Ethics and trust

Ms Toni Malgas, Instructional Designer at the WITS Centre for Learning and Teaching Development: We have a diverse student cohort. Some struggle with language or don’t fully understand classroom concepts, so they turn to AI to help them complete assignments. Should we encourage or discourage this? Is it the lecturer’s responsibility to explain how and when to use AI? Students may not know where to start, and while there are risks, they are using the tool anyway.

Dr Nazira Hoosen: We keep going back to Helen’s question: Who is responsible? But it also comes down to who we are, where we come from, and how we’re encultured.

Practical pedagogy and academic confidence

Mr Jeremiah Pietersen, Manager: Learning and Training at Stellenbosch University Library and Information Services: We’re caught up in the hype around AI policies and literacy. But really, this is a chance to return to fundamentals: information literacy, source evaluation, and critical thinking. I don’t need AI detection tools to spot AI-generated content. Flowery, vague language is usually a giveaway. We were hired for our expertise—we know what well-written academic work looks like.

Ms Beetham: Yes, Jeremiah’s point about implicit expertise is so important. This is an opportunity to discuss how knowledge is produced in our fields. That’s what makes us valuable as educators.

Industry readiness and educational values

Ms Le Cornu (UP): Are we preparing students for the world of work, for industry? Teaching AI isn’t just about the tools; it’s about building the human behind the tool. For example, anyone can generate a logo with AI now, but that’s not design thinking. We need to teach critical analysis and creativity, not just output.

Ms Beetham: Universities sometimes assume they know what industry wants. But many industries are pushing back against AI. Productivity gains might be achieved by experienced professionals who know which shortcuts are valuable and can check for errors. But that’s not true of graduates and junior staff. And where layoffs happen, they often affect junior staff, because AI is doing the entry-level work they would have used to gain experience. That’s dangerous for long-term talent development.

Professor Blaine (SU): I recently told my class, ‘If you already know the right answer, then there’s nothing to learn. It’s when you don’t know, and have to figure it out, that learning happens.’ Perfect answers don’t always reflect growth. We need to help students ask better questions.

On critical thinking and pedagogy

Ms Mei Luo, Learning Experience Designer at WITS: Before we ask students to critically engage with AI, we need to define critical thinking. It’s not just about logic or checking sources. It’s about questioning assumptions, understanding context, and recognising that knowledge is shaped by culture, history, and power. It’s discipline-specific, but also about developing the capacity to reflect and respond. Are we modelling that ourselves?

Ms Beetham: This is such a good point. I’ve been writing about disciplinary approaches to criticality. In the sciences, criticality often means choosing the right method. In the humanities, it’s about interrogating meaning and power. And critical pedagogy expands the context. But AI invites us into the role of user; it offers services. So, we should ask: what ends is this tool designed to serve? Do those align with our values? Students don’t want AI to think for them, but it takes work to resist that compulsion.

Trust, privacy, and paid tools

Ms Fatima Rahiman (WITS): We piloted a paid chatbot in a medical school. All data was verified and private. The uptake was high—it was factual and secure. But it was expensive. I worry that we’re naive to think paid tools are necessarily safe. Are we just ticking a box when we say ‘we’ve paid for it, so it’s fine’?

Ms Beetham: Small-scale language models are promising, but functionality is limited. Commercial models spend billions — not on better algorithms, but on refining the interface and improving responses with human data workers. That’s where they get the benefits of all their data power. Universities should invest in experimenting, but not assume we’ll match those capabilities.

Students’ awareness and community building

Ms Sukaina Walji, Director: Centre for Innovation in Learning and Teaching (CILT) at the University of Cape Town: There are contradictions. The more students use AI, the more they notice its flaws. Some realise their learning is being short-changed. If we move past fear, we can approach AI relationally, with students and teachers building a shared understanding, rather than treating AI as a threat.

The Q&A discussion with Helen Beetham demonstrated the many layers of complexity surrounding AI in higher education.

From questions of individual responsibility and institutional preparedness to issues of equity, privacy, and pedagogical values, the session underscored the urgency of approaching AI critically. The discussion explored how to realise AI’s practical benefits while warning against unquestioned adoption. It emphasised that AI literacies are not simply individual capabilities but must be understood as institutional and deeply structural.

Mduduzi Mbiza is a commissioned writer for Universities South Africa.