Budget Boost for Australia’s National AI Centre, Safeguards Against High-Risk AI

The agency aims to standardise restrictions on AI to mitigate the threat nefarious actors pose to national security.
A photo shows a frame of a video generated by a new artificial intelligence tool, dubbed "Sora," unveiled by the company OpenAI, in Paris on February 16, 2024. (Stefano Rellandini/AFP via Getty Images)
Jim Birchall
5/14/2024
Updated:
5/14/2024

The May 14 federal budget allocated a $39 million (US$26 million) funding package, under the Future Made in Australia plan, to guard against the risks of AI technology across Australia’s businesses and infrastructure as the tool becomes more prominent in everyday society and increasingly exposes governments to national security risks.

The government’s attention for the past two years has been focused on AI’s potential within the tech sector, and the 2023/24 budget tagged $102 million for the integration of quantum and the adoption of AI technologies.

Over $21 million was also set aside to transform and expand the National AI Centre, shifting it from CSIRO to the Department of Industry, Science, and Resources (DISR).

The money will also fund, over the next four years, the work of the National AI advisory group, a panel of 12 experts appointed after a review of safe AI practices and operating under the DISR portfolio.

The group is tasked with identifying high-risk use of AI and making recommendations on implementing restrictions.

New standards will also be developed for watermarking, which refers to embedding a unique signal into the output of an artificial intelligence model to identify it as AI-made, improving creator transparency.

Around $15 million has been earmarked to review restrictions on AI use in areas such as healthcare, consumer law, and copyright.

Meanwhile, efforts to combat national security risks posed by AI have been boosted by $2.6 million over three years.

AVEVA Pacific vice-president Alexey Lebedev told AAP that the tech industry needed the rules set down by the National AI Centre to be straightforward.

“Digital technologies, including AI, could be worth $315 billion to the Australian economy by 2028. Australia’s budget can provide a clear direction and certainty on AI while helping to cement an industry here,” he said.

NVIDIA's CEO Jensen Huang speaks during the annual Nvidia GTC Artificial Intelligence Conference at SAP Center in San Jose, California, on March 18, 2024. (JOSH EDELSON/AFP via Getty Images)

Safety Amid Rapid Growth

Artificial intelligence (AI) has the potential to revolutionise industries and improve efficiency. However, with this potential comes the need to safeguard against the risks associated with AI.

In its published budget announcement, the DISR said the funding commitment was to combat AI that has the potential to “cause harm, without appropriate regulation to ensure the use of AI is safe and responsible, secure, fair, accessible, and does not discriminate.”

Mirroring global trends, AI in Australia has been experiencing steady growth in key areas such as health, where it is being used to improve patient care, diagnostics, and treatment planning.

In the financial sector, AI is used for fraud detection, risk management, and customer service. Chatbots are also becoming more common in banking and insurance.

Its applications for agriculture include precision farming, crop monitoring, and predictive analytics. AI can help farmers make data-driven decisions to improve crop yields and reduce waste.

In the defence sector, it is utilised for threat detection, surveillance, and tactical decision-making.

AI developers must be transparent about how their algorithms work and how they make decisions. They must also ensure that their algorithms are fair and unbiased and that they do not discriminate against any group of people.

Data privacy and security are also critical aspects of safeguarding AI, and the data these systems rely on must be protected from unauthorised access and use. This includes ensuring that data is collected, stored, and used in a way that complies with relevant laws and regulations.

Jim Birchall has written and edited for several regional New Zealand publications. He was most recently the editor of the Hauraki Coromandel Post.