Australia evaluates compulsory guardrails to ensure safer AI

Australia is considering mandatory guardrails to promote the safe use of AI technology while mitigating its risks.


Australia will soon establish an expert advisory group to evaluate and develop options for “mandatory guardrails” on AI research and development, Minister for Industry and Science Ed Husic announced on Wednesday.

“The government is now considering mandatory guardrails for AI development and deployment in high-risk settings, whether through changes to existing laws or the creation of new AI-specific laws,” the press release announcing the development said.

Even as it assesses measures to control development, Australia will work with industry to develop a voluntary AI Safety Standard, along with options for voluntary labeling and watermarking of AI-generated materials to ensure greater transparency.

The mandatory guardrails will include testing requirements to ensure the safety of products both before and after their release. In addition, the Australian federal government said it is keen to ensure accountability, which may include training for both developers and deployers of AI systems, certifications, and clearly defined expectations of accountability for organizations developing and deploying AI systems.

Australia also released its interim response to the consultation paper on Safe and responsible AI in Australia. The country now sees the need to go beyond voluntary restraints on the development of AI because the technology poses risks of bias, error, and limited transparency.

In its May 2023 budget, the Australian government announced an investment of $101.2 million to support businesses in adopting quantum and AI technologies in their operations.

Global momentum for AI regulations

Several countries are working to develop policies to leverage AI technology while mitigating the risks associated with it. Recently, the EU became the first region to introduce a comprehensive set of laws to ensure that the technology is being used for the economic and social benefit of the people.

Apart from the EU, the UK, the US, and China are also working on regulations to better manage the technology. Some 28 nations, including Australia, recently signed the Bletchley Declaration to identify the shared opportunities and risks posed by AI. Regulating AI is also a subject of discussion at the ongoing World Economic Forum in Davos, Switzerland.

Australia is keen that its AI regulations are in tune with the approach adopted by other countries. “As a relatively small, open economy, international harmonization of Australia’s governance framework will be important as it ultimately affects Australia’s ability to take advantage of AI-enabled systems supplied on a global scale and foster the growth of AI in Australia,” said the discussion paper, Safe and responsible AI in Australia, published by the Australian government last year.

As AI adoption gathers pace, countries are racing to put regulations in place to better manage and govern the technology’s impact. AI is known to enhance productivity and operational efficiency for businesses, but it can also lead to job losses, misinformation, and cyberattacks if exploited by malicious actors.

Copyright © 2024 IDG Communications, Inc.
