Artificial Intelligence Risk Management Framework: NIST AI 100-1 January 2023

$9.99

Brand National Institute of Standards and Technology
Merchant Amazon
Category Books
Availability In Stock
SKU B0CQHNGNLN
Age Group ADULT
Condition NEW
Gender UNISEX
Google Product Category Media > Books
Product Type Books > Subjects > Computers & Technology > Computer Science > AI & Machine Learning > Expert Systems

About this item

Artificial Intelligence Risk Management Framework: NIST AI 100-1 January 2023

Artificial intelligence (AI) technologies have significant potential to transform society and people's lives – from commerce and health to transportation and cybersecurity to the environment and our planet. AI technologies can drive inclusive economic growth and support scientific advancements that improve the conditions of our world. AI technologies, however, also pose risks that can negatively impact individuals, groups, organizations, communities, society, the environment, and the planet. Like risks for other types of technology, AI risks can emerge in a variety of ways and can be characterized as long- or short-term, high- or low-probability, systemic or localized, and high- or low-impact.

While there are myriad standards and best practices to help organizations mitigate the risks of traditional software or information-based systems, the risks posed by AI systems are in many ways unique (see Appendix B). AI systems, for example, may be trained on data that can change over time, sometimes significantly and unexpectedly, affecting system functionality and trustworthiness in ways that are hard to understand. AI systems and the contexts in which they are deployed are frequently complex, making it difficult to detect and respond to failures when they occur. AI systems are inherently socio-technical in nature, meaning they are influenced by societal dynamics and human behavior. AI risks – and benefits – can emerge from the interplay of technical aspects combined with societal factors related to how a system is used, its interactions with other AI systems, who operates it, and the social context in which it is deployed.

These risks make AI a uniquely challenging technology to deploy and utilize both for organizations and within society. Without proper controls, AI systems can amplify, perpetuate, or exacerbate inequitable or undesirable outcomes for individuals and communities. With proper controls, AI systems can mitigate and manage inequitable outcomes.

AI risk management is a key component of responsible development and use of AI systems. Responsible AI practices can help align the decisions about AI system design, development, and uses with intended aims and values. Core concepts in responsible AI emphasize human centricity, social responsibility, and sustainability. AI risk management can drive responsible uses and practices by prompting organizations and their internal teams who design, develop, and deploy AI to think more critically about context and potential or unexpected negative and positive impacts. Understanding and managing the risks of AI systems will help to enhance trustworthiness, and in turn, cultivate public trust.

As directed by the National Artificial Intelligence Initiative Act of 2020 (P.L. 116-283), the goal of the AI RMF is to offer a resource to organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems. The Framework is intended to be voluntary, rights-preserving, non-sector-specific, and use-case agnostic, providing flexibility to organizations of all sizes and in all sectors and throughout society to implement the approaches in the Framework.
The Framework is designed to equip organizations and individuals – referred to here as AI actors – with approaches that increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment, and use of AI systems over time. AI actors are defined by the Organisation for Economic Co-operation and Development (OECD) as "those who play an active role in the AI system lifecycle, including organizations and individuals that deploy or operate AI" [OECD (2019) Artificial Intelligence in Society—OECD iLibrary].

Compare with similar items

Billionaire Undeceived ~ Devon: (Montana...
Price $11.99
Brand J. S. Scott
Merchant Amazon
Availability In Stock

The New World Religion: How Occultism an...
Price $11.99
Brand Mark A. Palmer
Merchant Amazon
Availability In Stock

Callista's Adventures: The Historic Batt...
Price $10.00
Brand Joharra Harper
Merchant Amazon
Availability In Stock

Looking for The Stranger: Albert Camus a...
Price $22.00
Brand Alice Kaplan
Merchant Amazon
Availability In Stock