U.S. Executive Order Outlines AI Standards for Safety and Security

A new executive order issued by U.S. President Biden outlines broad actions aimed at establishing standards for artificial intelligence (AI) safety and security.

The order “protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition,” and more, according to the White House fact sheet. Specific directives include:

  • Requiring that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.
  • Developing standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. 
  • Protecting against the risks of using AI to engineer dangerous biological materials.
  • Protecting Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. 

The directive follows voluntary commitments from 15 leading companies to manage AI risks, including a promise of transparency. According to Eliza Strickland at IEEE Spectrum, however, a recent Stanford report evaluated 10 models on 100 different metrics and found “a fundamental lack of transparency in the AI industry.” 

For example, when OpenAI moved from GPT-3 to GPT-4, the company chose “to withhold all information about architecture (including model size), hardware, training compute, dataset construction, [and] training method,” Strickland notes.
