Part 1: Questions for Dr. Gebru
Introduction
Dr. Timnit Gebru of the Distributed AI Research Institute is virtually visiting our class (CSCI 0451 Machine Learning) and giving a public Zoom lecture on bias and the social impacts of artificial intelligence. The public talk will take place 7:00–8:15 PM on April 24, 2023, in Hillcrest 103.
Dr. Gebru is a prominent Black computer scientist and researcher whose work concentrates on algorithmic bias and data mining; she is also an advocate for ethical AI and for diversity in her field. Fortune honored her as one of the world’s 50 Greatest Leaders, and Nature recognized her as one of ten individuals who significantly influenced science in 2021.
Dr. Gebru joined Google in 2018, where she co-led an AI ethics team with Margaret Mitchell, focusing on AI’s societal implications and advocating for responsible technology. In 2019, she called on Amazon to stop selling biased facial recognition technology to law enforcement. In December 2020, her employment with Google ended controversially after a dispute over a research paper on the dangers of large language models. The incident drew widespread criticism of Google and support for Gebru from Google employees, academics, and civil society groups. Google’s CEO, Sundar Pichai, later apologized but did not clarify whether Gebru had been terminated or had resigned. In the wake of the incident, two Google employees resigned, and multiple investigations into the company’s treatment of minority employees were launched. In June 2021, Gebru announced plans to establish an independent research institute, and in December 2021 she launched the Distributed Artificial Intelligence Research Institute (DAIR), which focuses on AI’s impact on marginalized communities.
FATE/CV 2020 Talk Summary
Dr. Gebru’s talk, titled “Fairness, Accountability, Transparency, and Ethics in Computer Vision,” begins by addressing the biases and harms caused by computer vision technology. She asserts that white supremacy and colorism manifest differently around the world, but marginalized groups, often already harmed by existing discrimination, lack representation in technology. Citing a survey of people’s opinions on how computer vision is used and on its greatest potential benefits and harms, she shows that perceived benefits and drawbacks depend on context: the same technology can be advantageous for groups that are not surveilled but detrimental for those it targets. Dr. Gebru gives examples of harmful applications, such as software that misidentifies people as terrorists, emotion-detection tools like HireVue that misinterpret internal states, and police use of facial recognition to target protesters.
Expanding on the lack of diversity in datasets, she invokes Mimi Onuoha’s reminder that both data collectors and data subjects are human beings. Dr. Gebru cites examples of biased datasets that lead to higher error rates for Black women in facial recognition systems and to object classification systems that display biases against Asian and African cultures.
Dr. Gebru emphasizes that the response should not be limited to diversifying datasets: visibility is not inclusion, and social and structural problems cannot be ignored. She advocates for examining how data is acquired, how it is applied, and whether it could be used to target marginalized groups such as Black and brown people, protesters, and transgender communities.
Addressing structural representation in the computer vision field, she notes that current ethics boards often fail to adequately represent marginalized groups. Dr. Gebru argues that fairness transcends datasets and mathematics and encompasses societal issues, urging us to consider how technologies are used to marginalize certain groups. To move toward socially responsible, ethics-informed research practices, she reminds researchers that technology is not value-neutral and that they are accountable for both the intended and unintended consequences of their work. Researchers should consider multiple stakeholders and attend to the social relations and power dynamics that shape how technology is built and used.
In conclusion, Dr. Gebru’s talk underscores the importance of ethical considerations in computer vision technologies, highlighting potential biases and the need for transparency, accountability, and diversity in AI systems development.
tl;dr
Since both data collectors and data subjects are human beings, it is crucial to be aware of how technology is used to target vulnerable groups.
Questions
What are some practical ways for AI researchers and practitioners to stay informed about and incorporate ethical considerations into their work consistently?
In your opinion, what are the most effective strategies for fostering interdisciplinary collaboration between AI researchers, social scientists, and policymakers to address the ethical implications of AI technologies?