The topic of Artificial Intelligence has been of huge interest in the early months of 2023. Our latest blog sees Abertay University Principal, Professor Liz Bacon, discuss the role AI could have in higher education, reflecting on her decades of work in this area.
With the disruption navigated by our universities during Covid-19 thankfully beginning to fade from memory, the sector has been operating in the realm of the ‘new normal’ for some time.
The complex challenges we grappled with as institutions throughout the pandemic years were unprecedented, all requiring a rapid shift in our use of existing technology alongside some major changes to regulations, processes and attitudes. It was an incredibly tough period, but we adapted, pulled together and slowly moved out of crisis mode and back into long-term planning.
Then ChatGPT was released.
While an AI chatbot tool clearly isn’t in the same bracket as a deadly virus, there should be no mistake that what’s coming over the hill is going to require a far more fundamental change in how universities operate than anything prompted by Covid – and at an even faster pace.
Much of the media commentary around the current AI advances has thus far focused on how to prevent students from using bots to cheat in exams, or on ways we might somehow block their use in our institutions.
But these early reactions are essentially based on side issues, predicated on a flawed notion that we can somehow control the growth of the technology and also a misreading of both the expanding capabilities of AI and the direction in which the tech is travelling.
The starting line for any discussion around the impact of AI should be an assumption that it will be embedded in our work and personal lives in some form within the next year, if not sooner. This approach views AI developments in the same broad vein as the ubiquitous use of calculators, for example. Recent calls for the explosive growth of AI bots to be slowed may serve to curb that rate a little in the short term, but it’s difficult to imagine the technology being permanently banned by policy makers, with tighter regulation of the industry perhaps a more likely outcome.
Existing AI models can already create artworks or video games, file lawsuits, generate research feedback and build computer code (among a vast range of other capabilities) so it won’t just be teaching and assessment models that need to shift. We are looking at a sea change in our approach to almost all aspects of running a higher education institution.
But far from seeing AI as a threat to HE, I believe this period should be viewed as a time of opportunity for any institution willing to embrace change. That’s not to say it will be easy. Indeed, it’s highly probable that our traditional definition of what it means to teach or support a student will have to evolve alongside this tech.
The future student who’s been brought up on AI is going to be accustomed to nothing less than accessing instant answers at the touch of a button. Will they really be satisfied with waiting to receive any sort of information from a tutor or a support professional unless it adds significant value over what they can receive from their personal bot in a fraction of the time? As ever, it will depend completely on the quality of the interactions we can provide.
As HE professionals we will reap the benefits of the work that we are willing to put in and our ability and willingness to adapt. If we can hand over the basic or repetitive tasks to AI assistants then we can shift the focus towards the types of activities we do best as human beings – making social connections, generating new ideas, challenging conventions, providing leadership and inspiring one another – to name but a few.
In terms of assisting our research capabilities, again there are clear areas of benefit around automated data analysis, assistance with curation of bids and applications, publishing research outputs and many other tasks. It’s AI’s capacity to sift enormous amounts of information in a very short time frame that’s the potential game changer here – it is already identifying patterns in data that we either can’t see as humans, or that would take years of traditional processing to achieve.
How schools take to the AI revolution is going to be equally as important as HE’s reaction. Although some tech-savvy students are already using the likes of ChatGPT, we need to prepare for an explosion in use among the generations coming up behind this one – young people who will be used to viewing AI assistants as we might view a calculator.
So, in future will all assessments have to be in person to authenticate authorship? Or will we ask our bots to interact with our students, questioning them to help develop their critical thinking, as they develop their assessment solutions and then help us with the grading of assignments? Or perhaps it will become normal for us all to ask a bot any question before trying to solve a problem ourselves.
At the moment we’re still in the foothills of this tech and only time will tell how society is going to react. But what’s clear is that we need to be thinking about how our approach is going to prepare students to operate in workplace environments where the likes of Microsoft’s Copilot, Google’s Bard and various other AI systems are already beginning to be rolled out into our suite of go-to tools.
While none of these systems is perfect and they may be prone to occasional errors or ‘hallucinations’, they are improving fast, and we must enact changes now to prepare for the future.
A university degree has always acted, and always will act, as proof for employers that a student has attained a certain standard of education. I believe there will always be a place for that trusted certification in the years to come, but while the end outcome may not change for our students, the shape of their journeys with us certainly will.