
Explaining decisions made with AI
Origin: Europe
Type: Framework
Creator: Information Commissioner's Office (ICO)
Mainly focused on explaining decisions made with AI, but it also addresses fairness issues in the model.

Ethics Canvas
Produced mainly for: Managers
Language: English
Type: Tech tool
Creator: ADAPT Centre
Helps you structure ideas about the ethical implications of the projects you are working on, visualize them, and resolve them.

Fairness-indicators: Tensorflow's Fairness Evaluation and Visualization Toolkit
Language: English
Type: Tech tool
Creator: Google (TensorFlow)
Designed to support teams in evaluating, improving, and comparing models for fairness concerns, working alongside the broader TensorFlow toolkit. The Perspective API is provided as a content moderation case study.
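To illustrate the kind of sliced evaluation that Fairness Indicators automates, here is a minimal plain-Python sketch; it does not use the TensorFlow API, and the groups, labels, and predictions are hypothetical. It computes a false positive rate per group and compares each slice with the overall baseline, which is the core comparison the toolkit computes at scale and renders visually.

from collections import defaultdict

def false_positive_rate(rows):
    """FPR = FP / (FP + TN) over (true_label, predicted_label) pairs."""
    fp = sum(1 for y, p in rows if y == 0 and p == 1)
    tn = sum(1 for y, p in rows if y == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

# Hypothetical evaluation output: (group, true_label, predicted_label).
results = [
    ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

# Slice the results by group, analogous to a slicing spec.
slices = defaultdict(list)
for group, y, p in results:
    slices[group].append((y, p))

overall = false_positive_rate([(y, p) for _, y, p in results])
print(f"overall FPR: {overall:.2f}")
for group, rows in sorted(slices.items()):
    fpr = false_positive_rate(rows)
    print(f"{group} FPR: {fpr:.2f} (vs overall {fpr - overall:+.2f})")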

CERTIFAI
Language: English
Type: Tech tool
Creator: Cognitive Scale (Cortex)
A tool developed by Cognitive Scale that helps data scientists evaluate their AI models for robustness, fairness, and explainability, and compare different models or model versions on these qualities.
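CERTIFAI's evaluation is built around counterfactual explanations: small changes to an input that would flip the model's decision. The sketch below is a rough, hypothetical illustration of that idea, not Cognitive Scale's implementation; the toy credit model, feature steps, and step-count distance are all invented.

def toy_credit_model(applicant):
    """Hypothetical scoring rule: approve (1) when a weighted score reaches 2.0."""
    score = (0.04 * applicant["income_k"]
             + 0.5 * applicant["years_employed"]
             - 0.03 * applicant["debt_k"])
    return 1 if score >= 2.0 else 0

def nearest_counterfactual(applicant, feature_steps, max_steps=100):
    """Brute-force search for the fewest single-feature steps that flip the decision.
    A real tool would use a genetic search and a proper distance over all features."""
    original = toy_credit_model(applicant)
    best = None
    for feature, step in feature_steps.items():
        for k in range(1, max_steps + 1):
            candidate = dict(applicant)
            candidate[feature] = applicant[feature] + k * step
            if toy_credit_model(candidate) != original:
                if best is None or k < best[0]:
                    best = (k, feature, candidate[feature])
                break
    return best

applicant = {"income_k": 30, "years_employed": 1, "debt_k": 10}
print("decision:", toy_credit_model(applicant))  # 0 = rejected
print("counterfactual (steps, feature, new value):",
      nearest_counterfactual(applicant, {"income_k": 1, "years_employed": 0.5, "debt_k": -1}))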

Guidelines for Quality Assurance of Machine Learning-based Artificial Intelligence
Language: Japanese
Type: Guide or manual
Creator: QA4AI
The Guidelines for the Quality Assurance of AI Systems offer a comprehensive technical assessment of quality measures for AI systems, although they are not, strictly speaking, a document on AI fairness. They are updated periodically in the original Japanese version, and an informal English translation is also available.

From Principles to Practice – An interdisciplinary framework to operationalize AI ethics
Language: English
Type: Guide
Creator: AIEI Group
The paper offers concrete guidance to decision-makers in organizations developing and using AI on how to incorporate values into algorithmic decision-making, and how to measure the fulfillment of those values using criteria, observables, and indicators combined with a context-dependent risk assessment.
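As a hedged illustration of the values-criteria-observables-indicators pattern the paper describes, the sketch below rolls a few hypothetical observables up into indicator ratings and a single criterion score; the field names, thresholds, and 0-4 scale are invented for the example, not taken from the paper.

# Hypothetical roll-up: observables feed indicators, indicators feed a criterion,
# and the criterion score reports how far a value such as fairness is fulfilled.
observables = {
    "documented_protected_attributes": True,   # yes/no observable
    "max_group_error_rate_gap": 0.04,          # measured observable
    "bias_audit_in_last_12_months": True,      # yes/no observable
}

def indicator_ratings(obs):
    """Map raw observables onto 0-4 indicator ratings (thresholds are invented)."""
    gap = obs["max_group_error_rate_gap"]
    return {
        "transparency_of_attributes": 4 if obs["documented_protected_attributes"] else 0,
        "error_rate_parity": 4 if gap <= 0.02 else 3 if gap <= 0.05 else 1,
        "audit_practice": 4 if obs["bias_audit_in_last_12_months"] else 1,
    }

def criterion_score(ratings):
    """Aggregate indicator ratings into one criterion rating, here a plain average."""
    return sum(ratings.values()) / len(ratings)

ratings = indicator_ratings(observables)
print("indicator ratings:", ratings)
print("criterion 'non-discrimination':", round(criterion_score(ratings), 2))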

Review into bias in algorithmic decision-making
Origin: Europe
Type: Guide
Creator: Centre for Data Ethics and Innovation
It is more an educational publication than a tool, as the title ("Review into...") suggests, but it also provides some high-level recommendations for governments and regulators.

Ethically Aligned Design
Produced mainly for: Tech teams
Language: English
Type: Guide or manual
Creator: IEEE Global A/IS Ethics Initiative
Identifies specific verticals and areas of interest, and provides highly granular and pragmatic papers and insights as a natural evolution of the initiative's work.

ML Fairness Gym
Language: English
Type: Tech tool
Creator: Google
Open-source development tool for building simple simulations that explore the potential long-run impacts of deploying machine learning-based decision systems.
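To make "long-run impacts" concrete, the following self-contained toy loop is written in the spirit of such simulations but does not use the ML Fairness Gym API: a fixed lending threshold is applied to two hypothetical groups, and repayments and defaults shift each group's future score distribution. All numbers are invented.

import random

random.seed(0)

# Two hypothetical groups of applicants; approval uses one fixed score threshold.
group_mean_score = {"group_a": 620.0, "group_b": 580.0}
THRESHOLD = 600
ROUNDS = 50

def repay_probability(score):
    """Invented link between score and repayment probability, clipped to [0.05, 0.95]."""
    return min(0.95, max(0.05, (score - 400) / 400))

for _ in range(ROUNDS):
    for group, mean in list(group_mean_score.items()):
        applicant_score = random.gauss(mean, 30)
        if applicant_score < THRESHOLD:
            continue                          # rejected: the group gets no feedback
        if random.random() < repay_probability(applicant_score):
            group_mean_score[group] += 2.0    # repayment slowly lifts the group
        else:
            group_mean_score[group] -= 5.0    # a default pushes the group down

print({group: round(mean, 1) for group, mean in group_mean_score.items()})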

Fairness feature testing
Produced mainly for: Tech teams
Language: English
Type: Tech tool
Creator: DataRobot
Allows you to flag protected features in your dataset and then actively guides you through selecting the fairness metric best suited to the specifics of your use case.
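For background on what choosing a fairness metric involves, the sketch below computes two common candidates, demographic parity difference and equal opportunity difference, over a hypothetical set of predictions; it is illustrative only and not DataRobot's implementation.

# Hypothetical predictions: (protected_group, true_label, predicted_label).
predictions = [
    ("group_a", 1, 1), ("group_a", 0, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 0),
]

def positive_rate(rows):
    """Share of individuals who receive the positive prediction (selection rate)."""
    return sum(p for _, _, p in rows) / len(rows)

def true_positive_rate(rows):
    """Share of truly positive individuals who receive the positive prediction."""
    positives = [(y, p) for _, y, p in rows if y == 1]
    return sum(p for _, p in positives) / len(positives)

group_a = [row for row in predictions if row[0] == "group_a"]
group_b = [row for row in predictions if row[0] == "group_b"]

# Demographic parity compares selection rates regardless of the true label;
# equal opportunity compares true positive rates (recall) between groups.
print("demographic parity difference:", positive_rate(group_a) - positive_rate(group_b))
print("equal opportunity difference:",
      round(true_positive_rate(group_a) - true_positive_rate(group_b), 3))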

RCModel, a Risk Chain Model for Risk Reduction in AI Services
Origin: Asia
Language: English
Type: Guide or manual
Creator: The University of Tokyo
The risk chain model (RCModel) supports AI service providers in proper risk assessment and control, and offers policy recommendations.

Algorithmic Accountability Policy Toolkit
Produced mainly for: Tech teams
Language: English
Type: Guide or manual
Creator: AI Now Institute
This toolkit includes resources for advocates interested in, or currently engaged in, work to uncover where algorithms are being used and to create transparency and accountability mechanisms.
Do you want to contribute?
This is a Living Global Library.
If you have a resource on AI fairness that has not been published in this Library and you would like it to be considered, or if you are the creator of a resource published here and would like to edit its information, please send us an email.
Disclosures:
The material included on this site is not necessarily endorsed by the World Economic Forum, the Global Future Council on AI for Humanity, C Minds, and/or other collaborators.
Readers and/or users of each resource should evaluate each tool for their specific intended purpose. This first iteration includes only free, publicly available resources.
The intellectual property of all resources belongs to the creators of each individual resource.
This material may be shared, provided it is clearly attributed to its creators. This material may not be used for commercial purposes.
Global Future Council on AI for Humanity, WEF, with the support of C Minds