
Government Guidance for Operationalizing Risk Management in the Development and Deployment of AI Systems

Privacy, Security & Data Protection

The U.S. Department of Commerce’s National Institute of Standards and Technology (“NIST”) last week released an “Artificial Intelligence Risk Management Framework” (“AI RMF 1.0”) for voluntary use by organizations. With several states adopting laws that require data-centric risk assessments, such as data privacy impact assessments, and with Congress poised to consider national data privacy legislation containing economy-wide risk provisions, companies and organizations that develop or use AI systems should review their policies and procedures to ensure their approaches to AI risk management are comprehensive and comply with applicable laws and regulations.

The AI RMF has been in development at NIST for over a year, following passage of the National AI Initiative Act of 2020 (part of the National Defense Authorization Act for Fiscal Year 2021). In remarks introducing the AI RMF on January 26, 2023, Deputy Secretary of Commerce Don Graves said development of the framework had been an urgent priority for NIST. Dr. Laurie E. Locascio, Under Secretary of Commerce for Standards and Technology and Director of NIST, echoed his sentiments. She said that cultivating public trust in AI is key to driving innovation, and that AI risk management can reinforce positive practices by helping those who make and use AI systems think critically about the potential impacts of those systems. The AI RMF, she said, converts AI-specific principles, such as transparency, accountability, and explainability, into practice by providing a consensus-driven methodology for conducting AI risk assessments and a common lexicon for communicating risks to others. The AI RMF, it is hoped, will help companies and others operationalize AI governance. Rep. Frank Lucas, Chairman of the House Science, Space, and Technology Committee, along with Ranking Member Rep. Zoe Lofgren and Dr. Alondra Nelson of the White House Office of Science and Technology Policy, also spoke at the release about the need for the AI RMF.

The framework can be used to contextualize and manage the potential risks of harm posed by AI systems, technologies, and practices in all areas where they may be used. The AI RMF’s succinct “Govern, Map, Measure, and Manage” approach to AI self-governance makes it one of the most practical risk management protocols produced to date. Its flexible approach should appeal to businesses both small and large, and to others looking for ways to purposefully identify and mitigate the potential risks of harm from AI before they occur.

If you have questions about assessing risks related to data practices in your organization, whether related to AI or other technologies, contact Brian Wm. Higgins, Sharon R. Klein or another member of Blank Rome’s Privacy, Security & Data Protection or Artificial Intelligence Technology teams.

©2023 Blank Rome LLP. All rights reserved. Please contact Blank Rome for permission to reprint. Notice: The purpose of this update is to identify select developments that may be of interest to readers. The information contained herein is abridged and summarized from various sources, the accuracy and completeness of which cannot be assured. This update should not be construed as legal advice or opinion, and is not a substitute for the advice of counsel.