Australian Government Urges Tech Companies to Watermark AI-Generated Content

The Australian government is moving to address the challenges posed by rapidly evolving artificial intelligence (AI) products. The industry and science minister, Ed Husic, has unveiled the government’s response to its consultation on safe and responsible AI, acknowledging the need for new regulations covering “high risk” AI applications. While the government wants to support the growth of AI, it is also emphasizing the importance of addressing public concerns and putting safeguards in place.

Husic acknowledged that public trust in the safe and responsible use of AI remains low. The government’s response includes establishing an expert advisory group to develop AI policy and further guardrails, as well as creating a voluntary “AI Safety Standard” for businesses. Transparency measures, such as public reporting on the data used to train AI models, are also under consideration.

The government is also exploring whether to require tech companies to apply watermarks or labels to AI-generated content. The move responds to concerns about generative AI models, which can rapidly produce new text, images, audio, and video from existing material, making machine-made content hard to distinguish from human work. The government recognizes that legislation may lag behind the pace of AI development and wants to ensure that AI systems are designed, developed, and deployed responsibly.
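To make the labeling idea concrete, here is a minimal sketch of one way a provider might attach a machine-readable label to an AI-generated image, using the Python Pillow library to write PNG text metadata. The function names and the "ai_generated"/"generator" tag names are illustrative assumptions, not part of any Australian standard or proposed rule.

```python
# A minimal sketch of one way a provider might label AI output: writing a
# provenance tag into PNG text metadata with the Pillow library.
# The tag names ("ai_generated", "generator") are illustrative assumptions,
# not part of any standard mandated by the Australian government.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_ai_image(in_path: str, out_path: str, generator: str) -> None:
    """Copy an image to out_path with a machine-readable AI-provenance label."""
    img = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # hypothetical tag name
    meta.add_text("generator", generator)   # e.g. the model that produced it
    img.save(out_path, pnginfo=meta)

def read_label(path: str) -> dict:
    """Return the text metadata attached to a PNG, including any AI label."""
    return dict(Image.open(path).text)

# Example usage:
#   label_ai_image("model_output.png", "labeled.png", "example-model-v1")
#   read_label("labeled.png")  # -> {"ai_generated": "true", "generator": "..."}
```

A metadata tag like this is trivial to strip, which is one reason watermarking research also embeds signals directly in the generated pixels or tokens; the sketch only illustrates the labeling concept the government is canvassing.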

The government’s response sits alongside other efforts to regulate AI in Australia. The communications minister, Michelle Rowland, has pledged to update online safety laws to address AI-generated harmful material, such as deepfakes and hate speech. Reviews are also under way into the use of AI in schools, and the AI in Government Taskforce is examining its use in the public service.

By encouraging tech companies to watermark or label AI-generated content, the Australian government aims to give users clarity about what they are seeing and to hold producers accountable. The approach reflects its stated commitment to fostering innovation while safeguarding public trust and managing the risks posed by high-risk AI applications.

Source: qhubo.com.ni
