Learning systems, trained on large datasets, produce outputs that reflect any bias present in the training sets. Since the datasets were acquired in the past, using them to predict outcomes in the future propagates any bias from the past to the future. What if the future will not, or should not, resemble the past?
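As a concrete illustration (not from the text), consider the following minimal sketch with synthetic data. Historical decisions applied the same merit threshold to everyone but penalized group B; a logistic-regression-style model fit to those decisions then reproduces the disparity in its predictions about the future. All variable names, thresholds, and parameters here are illustrative assumptions.

```python
# Sketch: a model trained on biased past decisions propagates the bias forward.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)         # 0 = group A, 1 = group B (hypothetical)
merit = rng.normal(0.0, 1.0, n)       # true qualification, identical across groups

# Past decisions: same merit, but group B faced an extra (unfair) penalty.
past_label = (merit - 0.8 * group > 0).astype(int)

# Features seen by the learner: merit, group membership, and an intercept.
X = np.column_stack([merit, group, np.ones(n)])

# Fit logistic regression by plain gradient descent (no libraries assumed).
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
    w -= 0.1 * X.T @ (p - past_label) / n  # gradient step on log loss

pred = (X @ w > 0).astype(int)
for g in (0, 1):
    print(f"group {g}: past positive rate {past_label[group == g].mean():.2f}, "
          f"predicted positive rate {pred[group == g].mean():.2f}")
```

Although the two groups are equally qualified by construction, the learned model predicts positives for group B at roughly the same depressed rate found in the historical labels: the past bias becomes the predicted future.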
In machine learning, bias has a neutral technical meaning: “the tendency to prefer one hypothesis over another”. The no-free-lunch theorem implies that any effective learning algorithm must have a bias in this sense. In ordinary usage, however, bias has a negative connotation, meaning “prejudice in favor of or against one thing, person, or group compared with another, usually in a way considered to be unfair” [Stevenson and Lindberg, 2010].
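The technical sense of bias can be made concrete with a hypothetical sketch: two learners see the same five points, but one prefers simple linear hypotheses while the other prefers hypotheses that fit the data exactly. Both account for the training data, yet their differing biases yield very different predictions on a new input. The data and polynomial degrees are illustrative choices, not from the text.

```python
# Sketch: inductive bias as a preference among hypotheses consistent with the data.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.1, 1.9, 3.2, 3.9])   # roughly linear observations

linear = np.polyfit(x, y, 1)    # bias toward simple (linear) hypotheses
quartic = np.polyfit(x, y, 4)   # bias toward exact-fitting hypotheses

x_new = 6.0                     # a point outside the training data
print("linear prediction: ", np.polyval(linear, x_new))
print("quartic prediction:", np.polyval(quartic, x_new))
```

The quartic interpolates the training points exactly while the line only approximates them, yet at x = 6 their predictions diverge sharply: with finite data, only the learner's bias decides between hypotheses, which is what the no-free-lunch theorem formalizes.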
Training sets for facial recognition, often acquired without informed consent, typically do not represent people equitably, thereby causing misclassification, often with harmful effects, as discussed in Section 7.7. Large language models, pre-trained on vast text corpora, often produce text that is racist, sexist, or otherwise demeaning of human dignity when prompted.
Any AI-based decision system inherently reflects certain implicit values, or preferences. The key question to ask is: whose values are they? Typically, the values embedded in an AI system are the values of the designer or owner of the system, or the values implicit in a deep learning training set. Further questions arise. Can those values be made explicit? Can they be specified? Is it possible to ensure those are democratic values, avoiding discrimination and prejudice? Can they be transparent to the users, or targets, of the system? Do they reflect the values of everyone who may be affected, directly or indirectly? Can systems be designed that respect privacy, dignity, equity, diversity, and inclusion?
The role of social bias in training data is described in Section 7.7. Bias and other social impact concerns in modern deep learning systems trained on large corpora are discussed in Section 8.7. These questions, ongoing challenges to AI system designers, are examined critically by O’Neil [2016], Eubanks [2018], Noble [2018], Broussard [2018], Benjamin [2019], and Bender et al. [2021].