Australia Takes Steps to Regulate Artificial Intelligence

Australia has announced plans to establish an advisory body and introduce guidelines to mitigate the risks associated with artificial intelligence (AI). The move reflects a growing global trend toward closer oversight and regulation of AI technology.

The Australian government intends to collaborate with industry bodies to develop a set of guidelines, including a recommendation that technology companies label and watermark AI-generated content. Industry and Science Minister Ed Husic acknowledged that while AI has the potential to boost the economy, its adoption across the business sector has been inconsistent. He emphasized the need to address the trust issues surrounding the technology, which currently hinder its wider uptake.

In 2015, Australia became the first country to establish an eSafety Commissioner, demonstrating its commitment to addressing online safety concerns. However, it has fallen behind other nations in regulating AI.

The initial guidelines introduced by Australia will be voluntary, distinguishing the country from jurisdictions such as the European Union, where AI rules for technology companies are binding. To gather broad input, the Australian government held a consultation on AI that drew more than 500 responses. In its interim response, the government highlighted the need to differentiate between “low risk” uses of AI, such as filtering spam email, and “high risk” uses, such as the creation of manipulated content or “deep fakes.”

The government aims to release a comprehensive response to the consultation later this year, outlining its plans for AI regulation. Australia’s approach to managing the risks of AI underscores the challenge of drafting guidelines that strike a balance between enabling innovation and safeguarding against potential harm.

Source: radiohotmusic.it
