The majority of university students have used generative artificial intelligence (AI) to help them with their studies, a survey suggests.
More than one in eight undergraduate students (13%) said they have used generative AI to generate text for assessments, but edited the content themselves before submitting it, according to the report from the Higher Education Policy Institute (Hepi) think tank.
The rise of generative AI tools – such as ChatGPT and Google Bard – has sparked concerns about cheating among students in the education sector.
But the report by Hepi suggests that only 5% of students surveyed admitted incorporating AI-generated text into assessments without editing it personally.
The poll of 1,250 undergraduate students in the UK, conducted through the Ucas admissions service, suggests that 53% of students have used generative AI to help them prepare assessments.
Of these students, the most popular use was as an “AI private tutor”, with 36% reporting that they used AI to explain concepts to them.
Other popular uses of AI include suggesting ideas for research and summarising articles, the report found.
The study, which was carried out in partnership with EdTech company Kortext, suggests a “digital divide” in AI use may be emerging.
The survey, which was carried out in November 2023, found that nearly three-fifths of students (58%) from the most privileged backgrounds reported using generative AI to help prepare assessments, compared with just half (51%) from the least privileged backgrounds.
Students from Asian ethnic backgrounds are also much more likely to have used generative AI than white or black students, and male students use it more than female students.
The report calls on institutions to provide AI tools for those who cannot afford them – where the tools have been identified as benefiting learning – to prevent the “digital divide” from growing.
It adds that institutions should develop clear policies on what AI use is acceptable and what is unacceptable.
Nearly two in three students believe their institution has a “clear” policy on AI use (63%) and that their institution could spot work produced by AI (65%), the survey suggests.
Josh Freeman, policy manager at Hepi and author of the report, said: “Students trust institutions to spot the use of AI tools and they feel staff understand how AI works.
“As a result, rather than having AI chatbots write their essays, students are using AI in more limited ways: to help them study but not to do all the work.
“However, action is urgently needed to stop a new ‘digital divide’ from growing. AI tools are still new and often unknown. For every student who uses generative AI every day, there is another who has never opened ChatGPT or Google Bard, which gives some students a huge advantage.
“The divide will only grow larger as generative AI tools become more powerful. Rather than merely adopting a punitive approach, institutions should educate students in the effective use of generative AI – and be prepared to provide AI tools where they can aid learning.”
A spokesperson for Universities UK (UUK) said: “The findings in this study highlight the importance of crystal-clear policies surrounding the use of AI and we know universities are increasingly focusing on this.
“The potential risks and benefits posed by AI to higher education have been known to the sector for some time, and any new developments are closely monitored.
“Currently, all universities have codes of conduct that include severe penalties for students found to be submitting work that is not their own and engage with students from day one to underline the implications of cheating and how it can be avoided.”