President Biden Signs Executive Order on Artificial Intelligence

November 6, 2023

BACKGROUND


On October 30, 2023, President Biden signed an Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The Biden-Harris Administration’s sweeping artificial intelligence (AI) Executive Order comes on the heels of the White House’s 2022 release of its AI Bill of Rights, a set of non-binding, voluntary principles intended to guide AI design and deployment in both the public and private sectors. The administration’s action also comes amid ongoing debate in the U.S. Congress over whether a comprehensive federal AI framework is needed.

SUMMARY


The long-awaited Executive Order sets out six topline principles and priorities that should govern the development and use of AI systems in the private sector.

  • AI systems should be safe and secure, which requires robust, standardized evaluations of AI systems;
  • Policies should promote responsible innovation, competition, and collaboration;
  • AI should not be deployed in ways that undermine workers’ rights, worsen job quality, encourage undue worker surveillance, lessen market competition, introduce new health and safety risks, or cause harmful labor-force disruptions;
  • AI should not be used to enable harmful discrimination or bias;
  • Use of AI should not excuse organizations’ legal obligations or undermine consumer protection;
  • Privacy and civil liberties should be protected in connection with AI systems.

The Executive Order directs various federal agencies to take a number of actions, many of which are summarized below.

The National Institute of Standards and Technology (NIST) will develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy, including rigorous red-team testing to ensure the safety of an AI system before it is released to the public. Developers of AI systems that pose a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training such models and must share the results of all safety tests.

The Department of Commerce (Commerce) is tasked with proposing new regulations that will require U.S. infrastructure-as-a-service (IaaS) providers to notify the U.S. government when a foreign person transacts with them in order to train a “large AI model” (as defined by the EO) with potential capabilities that could be used in malicious cyber-enabled activity (a “training run”). Such reports shall, at a minimum, include the identity of the foreign person. Further, within 180 days of the EO, the Secretary of Commerce shall propose regulations that require IaaS providers to ensure that foreign resellers of United States IaaS products verify the identity of any foreign person who obtains an IaaS account from the foreign reseller.

Additionally, the Departments of Energy and Homeland Security will consult with industry and develop methods to mitigate the threats that AI systems pose to critical infrastructure, as well as chemical, biological, radiological, and nuclear (CBRN) and cybersecurity risks. At the conclusion of that process, the Department of Homeland Security must submit a report to the President. The Department of Commerce will also be tasked with developing guidance for content authentication and watermarking to clearly label AI-generated content. In an effort to combat synthetic content and deepfakes, Commerce, in coordination with the Director of the Office of Management and Budget (OMB), shall develop guidance regarding existing tools and practices for digital content authentication and synthetic content detection.

The Executive Order also promotes the protection of privacy interests. The President proposes strengthening privacy-preserving technologies, such as cryptographic tools, by funding research to advance their development. Federal agencies are directed to evaluate how they collect and use commercially available information, including information procured from data brokers, and to strengthen privacy guidance for federal agencies to account for AI risks.

The Executive Order also addresses civil rights issues. The appropriate agencies are to provide guidance to landlords, federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination. The Department of Justice and federal civil rights offices will collaborate on best practices for investigating and prosecuting civil rights violations related to AI. Additionally, the Department of Homeland Security and the Office of Science and Technology Policy will develop best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, police surveillance, crime forecasting and predictive policing, and forensic analysis.

To protect consumers while ensuring that AI can produce benefits for the economy, the Executive Order directs federal agencies to promote the responsible use of AI in healthcare and the development of pharmaceuticals. Additionally, the Department of Health and Human Services will establish a safety program to receive reports of and remedy unsafe healthcare practices involving AI.

The Executive Order directs the Department of Labor to develop principles and best practices designed to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement, labor standards, workplace health and safety, and data collection. These principles and best practices will also provide guidance designed to prevent employers from undercompensating workers, evaluating job applications unfairly, or impinging on workers’ ability to organize.

The Executive Order seeks to promote innovation and competition in several ways. The National Science Foundation is directed to launch a pilot program implementing the National AI Research Resource, previously recommended by the National AI Research Resource Task Force, to provide AI researchers and students access to key AI resources, data, and grants for AI research. The federal government will, across agencies, promote a fair, open, and competitive AI ecosystem by providing small developers and entrepreneurs access to technical assistance and resources and helping small businesses commercialize AI breakthroughs. The Department of State will expand the ability of skilled immigrants with expertise in critical areas to study, stay, and work in the United States by modernizing and streamlining visa criteria, interviews, and reviews.

Finally, the Executive Order establishes an AI interagency council to coordinate the use of AI across the Federal Government. The OMB Director will chair the council, the Director of the Office of Science and Technology Policy (OSTP) will serve as Vice Chair, and the council’s membership will include most federal civilian agencies. Each agency will also be responsible for appointing a Chief AI Officer.

As a result of the Executive Order, we are likely to see a large number of much-needed standards, best practices, and recommendations emerge from federal agencies over the next several years.

CONCLUSION


The Biden Administration has taken an important step in building upon the voluntary principles laid out in its earlier AI Bill of Rights document. Federal agencies will now be tasked with implementing the requirements set forth in the AI Executive Order as Congress concurrently continues to evaluate the need for, and structure of, competing AI legislative proposals. Public and private sector stakeholders should continue to closely monitor the EO’s implementation, in addition to staying on top of the latest regulatory and legislative developments.
