G7 countries to launch AI code of conduct: Report

The Group of Seven (G7) countries are set to establish a voluntary code of conduct for artificial intelligence (AI) developers. The initiative, scheduled for agreement on October 30, encompasses 11 points designed to foster “safe, secure, and trustworthy AI worldwide” while ensuring that AI's benefits are harnessed and its potential risks addressed. Drafted by G7 leaders in September, the code offers voluntary guidance for organizations developing advanced AI systems, including foundation models and generative AI. It encourages companies to publish reports detailing the capabilities, limitations, use, and misuse of the AI systems they create, and emphasizes the importance of robust security controls. The G7 comprises Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States, with the European Union also participating.

The effort comes as governments worldwide grapple with the challenges and opportunities presented by AI. The European Union has already set out guidelines through its landmark EU AI Act, while the United Nations recently formed a 39-member advisory body to address issues of global AI governance. China, meanwhile, introduced its own AI regulations, which took effect in August. Within the industry, OpenAI, the developer of the popular AI chatbot ChatGPT, has announced plans to create a “preparedness” team focused on assessing a range of AI-related risks.