Perceptron: AI bias can arise from annotation instructions


SOURCE: https://techcrunch.com/2022/05/08/perceptron-ai-bias-can-arise-from-annotation-instructions/
Summary

This column, Perceptron (previously Deep Science), aims to collect some of the most relevant recent discoveries and papers, particularly in, but not limited to, artificial intelligence, and explain why they matter.

This week in AI, a new study reveals how bias, a common problem in AI systems, can start with the instructions given to the people recruited to annotate the data from which AI systems learn to make predictions. The coauthors find that annotators pick up on patterns in the instructions, which condition them to contribute annotations that then become over-represented in the data, biasing the AI system toward those annotations.

Many AI systems today “learn” to make sense of images, videos, text, and audio from examples that have been labeled by annotators. For example, over half of the annotations in Quoref, a data set designed to test the ability of AI systems to understand when two or more expressions refer to the same person (or thing), start with the phrase “What is the name,” a phrase present in a third of the instructions for the data set.

The phenomenon, which the researchers call “instruction bias,” is particularly troubling because it suggests that systems trained on biased instruction/annotation data may not perform as well as initially thought.
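To make the Quoref statistic concrete, here is a minimal, hypothetical Python sketch of how such a skew could be measured: count how often annotations begin with a phrase lifted from the annotator instructions. The toy dataset, the phrase, and the prefix_share helper are illustrative assumptions, not code from the study.

```python
# Hypothetical sketch: quantifying "instruction bias" by measuring how
# often annotations begin with a phrase taken from the annotator
# instructions (e.g. Quoref's "What is the name"). The helper and the
# toy data below are illustrative, not taken from the study.

def prefix_share(annotations, phrase):
    """Return the fraction of annotations that start with `phrase`."""
    if not annotations:
        return 0.0
    hits = sum(1 for a in annotations if a.lower().startswith(phrase.lower()))
    return hits / len(annotations)

# Toy stand-in for a crowd-sourced QA annotation set.
annotations = [
    "What is the name of the ship's captain?",
    "What is the name of the narrator's sister?",
    "Who commands the fleet?",
    "What is the name of the first city mentioned?",
]

print(f"{prefix_share(annotations, 'What is the name'):.0%} "
      "of annotations start with the instruction phrase")
# A share far above what naturally varied phrasing would produce is a
# sign that annotators are copying patterns from their instructions.
```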

By Kyle Wiggers and Devin Coldewey