New Guidelines in San Francisco Encourage Fact-Checking and Disclosure for AI Use

In a move to address the growing use of artificial intelligence (AI) technology, the city of San Francisco has released preliminary guidelines that urge city employees to fact-check AI-generated content and disclose their use of the technology. The guidelines emphasize responsible AI use and warn against entering sensitive information into public generative AI tools, where it could be accessed by the companies that operate them or by the public.

The guidelines highlight the potential benefits of AI in tasks such as drafting emails, adjusting the formality of writing, and automating repetitive work. However, they also acknowledge that AI systems can reflect biases present in their training data, emphasizing the need for caution.

These guidelines come in the wake of New York state’s encouragement of AI tool use among employees and Pennsylvania’s launch of a pilot program for state government employees to utilize ChatGPT. San Francisco’s guidelines draw from existing AI plans, including those implemented by San Jose, Boston, the state of California, the White House, and the United Kingdom.

While some experts believe these guidelines are a step in the right direction, others argue that more clarity and formal training are needed for city employees to navigate the use of AI effectively. Kevin Frazier, a law professor studying AI law and regulation, suggests that the city should consider banning the use of generative AI tools that have not undergone scrutiny by third-party experts.

Future steps in developing these guidelines include conducting a comprehensive survey of AI use in city departments, creating a user community, and consulting with AI experts. It remains to be seen how these guidelines will evolve to address the challenges and opportunities presented by AI technology in public functions.

