Australia is lagging behind the rest of the world in artificial intelligence investment and research, and risks being left behind.
Group of Eight chief Vicki Thomson sounded the alarm on the government’s need to step up on innovation during a parliamentary inquiry into the use of generative AI in the education sector.
But university peak bodies agreed equity and fair access to the groundbreaking technology should be front and centre of any future government response.
The House of Representatives inquiry is currently considering the impacts and opportunities created by generative AI for Australia’s education system.
Ms Thomson said there was “no question” generative AI would shake up higher education but providers needed to be ahead of the curve.
“What is concerning, given this is such an important subject, is that Australia is lagging behind competitor nations when it comes to our investment in AI and indeed research more broadly,” she told the inquiry.
The federal government committed $100m in the last budget towards helping businesses to integrate quantum and AI tech.
That compares with the European Union’s target of €20 billion per year in AI investment to 2030.
“We’ve got a lot of ground to make up,” she added.
Australian Technology Network of universities head Luke Sheehy warned MPs any future regulation should take into consideration financial barriers.
“We have to ensure that not only are students able to access it but that institutions have equitable resources to ensure that their students can equitably access it across the Australian higher education system,” he said.
Concerns were also raised about societal bias, structural racism and discrimination against vulnerable groups being replicated and amplified in generative AI models.
Australian Curriculum, Assessment and Reporting Authority executive Sharon Foster told the inquiry there was little the authority could do beyond raising awareness of those biases.
“We can’t manage to overcome that except to raise awareness with teachers … so that if they are using it in their classrooms there is potential for them to address it and discuss it and educate young people about it,” she said.
“From our perspective … if (students) understand how AI works, then they can actually begin to understand some of these challenges.”
A task force focused on generative AI is due to hand a draft report to education ministers shortly. The group of experts is putting together a framework to support schools and communities and to address concerns about students using the technology to cheat.
At a university level, Ms Thomson noted that unethical behaviour, the potential for cheating, and risks to privacy and intellectual property were issues, but said ignoring the technology was not an option.
“That train has left the station,” she said.
The Tertiary Education Quality and Standards Agency agreed. Higher Education Integrity Unit director Helen Gneil said any attempt to ban the tools would be “simplistic and ineffective”.
But Dr Gneil said AI’s capability had prompted a rethink of how the sector assesses students’ learning outcomes.
Asked if the body was concerned about the possibility of tertiary providers delivering entire courses generated by AI, Dr Gneil downplayed the risk, saying there was still a “human in the loop” accountable for its use.
“I think it really comes back to the effective and ethical use of these tools,” she said.
“I think there are lots of opportunities for these tools to help create meaningful assessment tasks and to help create meaningful rubrics.”
The director said she anticipated that, as the software continues to develop, disentangling an academic’s own work from what was prompted by AI will become “harder”.