AI may not do your literature review, but it will identify the gaps, says Dr Nompilo Tshuma
At the recent colloquium on Exploring Preparedness for Postgraduate Studies, Professor Sioux McKenna of Rhodes University interrupted Dr Nompilo Tshuma as she stood before her PowerPoint presentation, about to begin her keynote address.
“So, did you write this yourself?” she asked.
The 100 or so delegates just laughed, and so did Tshuma. They were research leaders from almost all of South Africa’s 26 public universities and a few private higher education institutions, along with representatives of the Department of Higher Education and Training (DHET) and Universities South Africa (USAf).
McKenna, one of the coordinators of the Enabling Quality Postgraduate Education project that co-hosted the colloquium, was poking fun because Tshuma is an educational technology researcher, whose talk was on The implications of AI for postgraduate education.
Dr Tshuma, Senior Lecturer at the Centre for Higher and Adult Education at Stellenbosch University, asked the delegates to consider the purpose of postgraduate education in the context of artificial intelligence (AI) tools. “Do you think they are potentially supportive or disruptive?” she asked.
She said there is a misalignment between why students opt for postgraduate studies and why universities provide this education. Many students enter postgraduate studies not to make an original contribution but for reasons that must be considered when framing where generative artificial intelligence fits into this picture.
She said academics want to develop in their master’s and PhD students “the kind of people who see the world differently and critique it”. But that is not necessarily what the students are preoccupied with.
Citing research, Dr Tshuma said academic motivations may prompt people to enter postgraduate study, especially after performing well at the undergraduate level. The perceived quality of the postgraduate programme may also motivate them.
She said the labour market is also a big motivator, considering that many postgraduate students are part-time. Some believe that postgraduate study will increase their job prospects or earn them a promotion. Some want to learn new skills or deepen their knowledge in a particular area, or aspire to start a business. These reasons have nothing to do with research aspirations or contributing to knowledge, which is how academics understand the purpose of postgraduate studies.
Once enrolled, the students realise they lack the requisite skills. They might seek help but then discover ChatGPT and other tools designed specifically for research, said Dr Tshuma.
The perils of AI tools
“Everyone is screaming about how amazing generative AI is, but what exactly is it helping us to accomplish in postgraduate education?” asked Dr Tshuma.
“It’s become a buzzword,” she said. She had googled “artificial intelligence South African government” and could not believe the number of times it was mentioned. “Until you read it and think: ‘I don’t understand what they even mean. I don’t know if they understand what they mean.’ It’s like we just put in the word ‘artificial intelligence’ somewhere and talk about all the amazing things happening in the country,” said Dr Tshuma.
Problems with using generative AI for postgraduate research include that most tools cannot explain how they arrived at an answer. “So how do students learn from it if they can’t even ask ‘so how did you get from here to here’?” she said. Although it might work in a niche area of some research fields, it does not always provide helpful responses.
She said Turnitin, the plagiarism detector that claims it can catch students using AI, introduced Clarity in March, a tool that learns how people write. “Think of the way you write a journal article. For me to finish my introduction, I’ll have deleted the paragraph seven times and then move something to the other end and think, no, that should be in the conclusion. And then I come back. So, it’s learning my writing process from my keystrokes, to use it and monetise it later on,” she said.
“It’s such a political game, I don’t think our students realise that the data they enter is going to be monetised by these companies, and that’s how they get better and better.” She said she was not discouraging using the tools but cautioning students to be critical about how developers use them.
She added that some people are using AI to fabricate non-existent research. In 2024 about 10 000 papers were retracted, mostly because of inaccurate data and the use of ChatGPT.
QuillBot’s Humanizer is another tool, designed to make AI-generated text undetectable. “So, students will get the text from ChatGPT and put it on QuillBot, which then humanizes that text and does even more,” she said. The results can be revealing: when a researcher looked for “tortured phrases”, the tool had changed terms such as “big data” to “colossal information” and “artificial intelligence” to “counterfeit consciousness”.
Mishra’s framework for AI expertise
She had recently encountered a helpful new framework developed by Professor Punya Mishra, Director of Innovative Learning Futures at the Learning Engineering Institute at Arizona State University, and an expert in technology integration in teaching.
His framework looks at different types of knowledge and sets up four scenarios to understand the challenges people encounter when dealing with AI tools:
- the novice’s dilemma – concerns students who have neither AI expertise nor disciplinary knowledge. They cannot evaluate the AI’s outputs for accuracy and don’t know when AI leads them astray, because they lack grounding in the discipline.
- the expert’s advantage – refers to those who have disciplinary expertise but may not be AI literate. This is a better position because it is easier to learn how to use AI than to build disciplinary expertise. It means “you can start supporting your students even before you have full AI literacy,” said Dr Tshuma.
- the false confidence trap – those highly knowledgeable in AI but not in the discipline. They are confident in dealing with AI tools but don’t know when AI is lying to them.
- the dual expertise challenge – the one we are all hoping to teach towards, said Dr Tshuma. People highly knowledgeable in both AI and their discipline are the most engaged when working with AI because they spend a lot of time thinking not just about which tool they are using, but about what it’s giving them. They use prompts to push for better and better responses.
“So students are caught in a chicken-and-egg dilemma: expertise versus AI literacy. Which one should we support first?” questioned Dr Tshuma.
How researchers can use AI effectively
Students must learn to craft their prompts, she said. It is a skill to tell the bot what you want it to do, and how you want it to do it. “What example should it follow? What should be the output? From there, you can ask additional questions.
“But if you request ‘a poem about South Africa’, it will give you something very general and broad, that will not help you,” she said.
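By way of illustration (this example is not from the talk), a prompt along the lines she describes, one that states the task, the method, an example to follow and the desired output, might look like this sketch in Python:

```python
# A minimal illustration, not from Dr Tshuma's talk, of her advice:
# tell the tool what to do, how to do it, what example to follow,
# and what the output should be, instead of asking something broad.

vague_prompt = "Write a poem about South Africa."  # yields something general and broad

structured_prompt = (
    "You are helping a master's student in sociology.\n"
    "Task: suggest five themes that a literature review on student access "
    "to South African universities might have missed.\n"
    "How: for each theme, name the debate it belongs to and explain why it "
    "matters for the review.\n"
    "Example to follow: 'Theme: rurality. Debate: spatial inequality. "
    "Why: access studies often stop at enrolment figures.'\n"
    "Output: a numbered list of five entries in that format."
)

print(structured_prompt)  # paste into ChatGPT or a similar tool
```

From there, as she suggests, follow-up questions can push the tool towards better and better responses.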
She said generative AI can be used ethically in writing literature reviews: not to do the actual review, “but to help you to see what you might have missed in your literature review”.
She said students may use tools such as Research Rabbit, get summaries from ChatGPT or some other tool and think that is fine. “But so much more engagement needs to go into it. The tools can support but students still need to read those articles and not just the summaries.” She said although the tools can help with data collection, transcription and data analysis, human oversight remains critical.
“We need to recognise that reading and writing are so central to the postgraduate education experience that students turn to generative AI tools for support. I worry that the way we support or don’t support their reading and writing is what is pushing them to use these tools,” said Dr Tshuma.
AI workshops are not enough
To a question on how universities could help supervisors gain more understanding and expertise on the use of AI, Dr Tshuma responded: “We have to go beyond just AI literacy training”.
She said her experience in education technology abroad and running workshops on Moodle, the online learning platform, revealed that “people needed to be intrinsically motivated or inherently interested in what they are doing. They need to see its value for themselves.
“I can attend an AI literacy workshop and learn absolutely nothing. But it is different if I know my challenge and the workshop is tailored to address it.”
Gillian Anstey is a contract writer for Universities South Africa.