University of Oklahoma Joins National Consortium to Promote Safe and Trustworthy AI

The University of Oklahoma (OU) has announced its participation in a national artificial intelligence consortium aimed at developing safe and trustworthy AI. The U.S. Artificial Intelligence Safety Institute Consortium (AISIC), which comprises more than 200 organizations and institutions, is an initiative led by the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST).

AISIC was established under an executive order from the Biden-Harris Administration to address the safety and security concerns surrounding artificial intelligence and to promote responsible development and use of the technology. Through the Data Institute for Societal Challenges (DISC) and the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES), OU aims to contribute to the advancement of ethical AI practices.

OU’s involvement in AISIC brings expertise and guidance on several aspects of trustworthy AI, including responsible system design, human-AI collaboration, fairness, and interpretability. David Ebert, director of the Data Institute for Societal Challenges, emphasizes the importance of developing best practices and ethical guidelines that can help policymakers set regulations for AI technology.

DISC’s mission is to confront the challenges and safety issues associated with AI while ensuring the technology has a positive impact on a global scale. Through collaboration and policy discussions, DISC aims to establish standards and explore AI solutions that are both accurate and fair.

Meanwhile, AI2ES researches how AI can advance environmental science and improve safety around severe weather. Dr. Amy McGovern, principal investigator of AI2ES, emphasizes the need to deploy AI in an ethical and responsible manner.

OU’s participation in AISIC aligns with the university’s commitment to national policy efforts and its mission to contribute to the development of safe and trustworthy AI. The consortium’s work is expected to have a significant impact not only within the United States but globally.

Excited about the potential outcomes, Ebert encourages motivated students to explore opportunities to engage in research and join the team in shaping the future of AI.

In summary, OU’s participation in AISIC reflects a commitment to driving responsible AI development and use. By collaborating with other leading institutions, the university aims to establish best practices and ethical guidelines while addressing the challenges and safety concerns associated with artificial intelligence.

FAQ Section:

1. What is the U.S. Artificial Intelligence Safety Institute Consortium (AISIC)?
The U.S. Artificial Intelligence Safety Institute Consortium (AISIC) is a national initiative led by the U.S. Department of Commerce’s National Institute of Standards and Technology. It comprises over 200 organizations and institutions and aims to address safety and security concerns surrounding artificial intelligence while promoting responsible use and development of the technology.

2. Why has the University of Oklahoma (OU) joined AISIC?
OU has joined AISIC to contribute to the development and advancement of ethical AI practices. By collaborating with other institutions in the consortium, OU aims to provide expertise and guidance on various aspects of trustworthy AI, including responsible system design, human-AI collaboration, fairness, and interpretability.

3. What is the role of the Data Institute for Societal Challenges (DISC) in AISIC?
The Data Institute for Societal Challenges (DISC) is one of the institutes through which OU participates in AISIC. DISC’s mission is to confront the challenges and safety issues associated with AI while ensuring the technology has a positive impact on a global scale. Through collaboration and policy discussions, DISC aims to establish standards and explore AI solutions that are both accurate and fair.

4. What is the focus of the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES) in AISIC?
The NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES) is another institute through which OU contributes to AISIC. AI2ES researches how AI can advance environmental science and improve safety around severe weather, and it emphasizes the need to deploy AI in an ethical and responsible manner.

5. How does OU’s involvement in AISIC align with its mission?
OU’s participation in AISIC aligns with the university’s commitment to national policies and its mission to contribute to the development of safe and trustworthy AI. By collaborating with other leading institutions, OU aims to establish best practices and ethical guidelines while addressing the challenges and safety concerns associated with artificial intelligence.

Key terms:
– Artificial Intelligence (AI): Technology that enables machines to perform tasks that would typically require human intelligence.
– Trustworthy AI: AI that is reliable, transparent, and operates in an ethical manner.
– National Institute of Standards and Technology (NIST): A U.S. federal agency that promotes innovation and industrial competitiveness, including in the field of AI.
– Executive Order: An official directive issued by the President of the United States.
– Ethical guidelines: Principles and standards that ensure the responsible development and use of AI.

Related links:
NIST
University of Oklahoma
