AI Guide for Government

A living and evolving guide to the application of Artificial Intelligence for the U.S. federal government.

Chapter 3: Responsible and Trustworthy AI Implementation

AI’s continually evolving landscape promises to change businesses, governments, society, and the world around us. As with the development of any other technology, we must carefully consider how AI affects the people who use it and are impacted by it.

Building responsible and trustworthy AI systems is an ongoing area of study. Industry, academia, civil society groups, and government entities are working to establish norms and best practices. The National Institute of Standards and Technology (NIST) defines the essential building blocks of AI responsibility and trustworthiness to include accuracy, explainability and interpretability, privacy, reliability, robustness, safety, security, and, importantly, the mitigation of harmful bias. These building blocks raise important questions for every AI system, such as “safety for whom?” or “how reliable is good enough?”

An essential (though not by itself sufficient) practice that can help answer these questions and enable responsible and trustworthy AI is to ensure that diversity, equity, inclusion, and accessibility (DEIA) are prioritized and promoted throughout design, development, implementation, iteration, and ongoing monitoring after deployment. Without responsible and trustworthy AI practices grounded in strong DEIA practices, unintended, even negligent, negative impacts are likely to occur.

Some of the worst negative impacts are due to harmful biases in AI system outcomes, including many cases where AI further entrenches inequality in both the private and public sectors. One of the key ways the outcomes of AI systems become biased is through a failure to carefully curate, evaluate, and monitor the underlying data and the subsequent outcomes. Without this preparation, there is a greater likelihood that the data used to train the AI is unrepresentative of the proposed use case. For many use cases, a biased dataset can contribute to discriminatory outcomes against people of color, women, people with disabilities, or other marginalized groups. Along with bias in the input datasets, the design of the AI system (such as what it optimizes for) or simply how its results are interpreted can also cause biased outcomes.
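As a minimal illustration of the kind of dataset check described above, the Python sketch below examines whether each group is represented in the training data and whether the favorable outcome appears at very different rates across groups. The records, group labels, and thresholds are hypothetical placeholders, not guidance for any particular federal dataset.

```python
# Hypothetical sketch: check group representation and outcome rates in
# training data before using it to build an AI system.
from collections import Counter

# Made-up training records; a real review would use the actual dataset.
records = [
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "A", "label": 1}, {"group": "B", "label": 0},
    {"group": "B", "label": 0}, {"group": "C", "label": 1},
]

# 1. Representation: how many records does each group contribute?
counts = Counter(r["group"] for r in records)
total = sum(counts.values())
for group, n in counts.items():
    print(f"group {group}: {n} records ({n / total:.0%} of the data)")

# 2. Outcome rates: does the favorable label (1) appear at very different
#    rates across groups? Large gaps are a signal to investigate further.
for group in counts:
    rows = [r for r in records if r["group"] == group]
    rate = sum(r["label"] for r in rows) / len(rows)
    print(f"group {group}: favorable-label rate {rate:.2f}")
```

A check like this does not prove a dataset is fair, but it can surface obvious representation gaps early, before they are baked into a model.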

An additional concern with biased outcomes is that the “black box” nature of many AI systems obscures how a decision was made or how particular decisions shaped the outcome. As a result, biased AI systems can easily perpetuate or even amplify existing biases and discrimination toward underserved communities. Algorithmic bias has been shown to amplify inequities in health systems and to cause discriminatory harm to already marginalized groups in housing, hiring, and education. The people who build, deploy, and monitor AI systems, particularly those deploying government technology, must not ignore these harmful impacts.
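One common way teams probe an otherwise opaque model is to measure how much each input feature drives its predictions. The sketch below uses permutation importance from scikit-learn on a synthetic dataset; the library choice, model, and data are illustrative assumptions, not requirements from this guide.

```python
# Illustrative probe of an opaque model's behavior using permutation
# importance (scikit-learn is an assumed, not mandated, library).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data; a real audit would use the system's own inputs.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and record how much accuracy drops:
# a large drop means the "black box" leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: mean accuracy drop when shuffled = {importance:.3f}")
```

If a protected characteristic, or a close proxy for one, turns out to dominate the model’s predictions, that is a signal to revisit the data and design choices discussed above.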

To implement responsible AI practices and prevent harms, including biased outcomes, AI systems must be both rigorously tested and continually monitored. Affected individuals and groups must be able to understand the decisions that these systems make. Because a broad set of topics, such as security, privacy, and explainability, is also important to the development of responsible AI, interdisciplinary and diverse teams are key to success. Interdisciplinary teams must include AI experts, other technical subject-matter experts, program-specific subject-matter experts, and, of course, the end users. At the same time, a diverse team helps ensure that a wide range of people with varied backgrounds and experiences oversee these models and their impact. In combination, knowledgeable and diverse interdisciplinary teams can ensure that multiple perspectives are considered when developing AI.
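Continual monitoring can be as simple as recomputing a chosen metric on each new batch of outcomes and comparing it against a baseline. The sketch below illustrates that pattern; the groups, rates, metric, and threshold are hypothetical placeholders a team would replace with values appropriate to its own system.

```python
# Illustrative post-deployment monitoring check: compare each period's
# selection rate per group against a baseline and flag large drifts.
# The groups, baseline rates, and 10-point threshold are hypothetical.
BASELINE = {"A": 0.42, "B": 0.40}   # selection rates observed at launch
THRESHOLD = 0.10                    # flag drifts larger than 10 points

def check_drift(current_rates: dict[str, float]) -> list[str]:
    """Return human-readable alerts for groups whose rate drifted."""
    alerts = []
    for group, baseline_rate in BASELINE.items():
        drift = abs(current_rates.get(group, 0.0) - baseline_rate)
        if drift > THRESHOLD:
            alerts.append(f"group {group}: selection rate drifted by {drift:.2f}")
    return alerts

# Example: this month's observed rates (hypothetical numbers).
print(check_drift({"A": 0.28, "B": 0.41}))
```

An alert from a check like this is not a verdict; it is a prompt for the interdisciplinary team to investigate whether the system is still behaving as intended for the people it serves.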

This is especially true for government innovation. The government’s mission is to serve the public, and when it uses AI to meet that mission, it must take extra precautions to ensure that AI is used responsibly. Part of the challenge is that AI is evolving so quickly that frameworks, tools, and guidance will need to be continuously updated and improved as we learn more. Along with this challenge, the technology industry has historically failed to recruit, develop, support, and promote talent from underrepresented and underserved communities, exacerbating the challenge of creating diverse interdisciplinary teams.