International Standard on AI Management Published by UK’s National Standards Body

The British Standards Institution (BSI) has released a groundbreaking international standard on how to manage artificial intelligence (AI) effectively and responsibly. The newly published guidance (BS ISO/IEC 42001) sets out how to establish and maintain an AI management system, with a strong emphasis on implementing safeguards. As AI technology continues to advance rapidly, debate over the need for regulation has intensified, and the advent of generative AI tools such as ChatGPT has propelled the conversation forward.

Last November, the UK hosted the world’s first global AI Safety Summit, which brought together world leaders and major tech firms to address the safe and responsible development of AI and the potential long-term threats it may pose. Among the concerns discussed were the use of AI to create malware for cyber attacks and the potential loss of control over the technology, leading to existential threats to humanity.

The publication of this international standard marks a significant milestone in the quest to ensure the responsible use of AI. Susan Taylor Martin, CEO of BSI, highlights the critical role that trust plays in harnessing the transformative power of AI for the benefit of society. The standard aims to empower organizations to manage AI technology responsibly, thereby accelerating progress towards a better future and a sustainable world.

The guidance included in the standard covers a range of requirements, including context-based risk assessments and additional controls for both internal and external AI products and services. Scott Steedman, Director General for Standards at BSI, points out the widespread use of AI technologies by organizations in the UK despite the absence of a regulatory framework. The publication of this standard fills a crucial gap by providing guidelines and guardrails to protect consumers and industry from potential risks associated with AI.

Business leaders are urged to adopt the best practices outlined in the AI standard, which strike a balance between innovation and risk mitigation. The aim is to ensure that the development of AI technologies does not inadvertently embed discrimination, create safety blind spots, or compromise privacy. By adhering to these guidelines, organizations can confidently harness the benefits of AI while building trust in its responsible use.

Source: macnifico.pt
