Like many other emerging technologies, artificial intelligence requires us to grapple with a variety of ethical concerns in research, teaching, and life in general. This page breaks down a number of those concerns and includes resources for understanding them and discussing them with students.
For more information, check out Leon Furze's AI Ethics posts.
Mural Photo by Chris Barbalis. (Public Domain CC0)
Algorithmic bias is computational discrimination in which machine learning systems (and thus AI) systematically disadvantage groups of people. Because algorithms learn from datasets, if those datasets are not diverse or contain bias, that bias will carry over into the algorithm's outputs. This video from the UCLA Institute for Technology, Law & Policy shows some of the ways AI can exhibit bias, from facial recognition software in law enforcement to job screening and more.
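To make that mechanism concrete, here is a minimal sketch, not drawn from this guide, using Python, scikit-learn, and entirely synthetic data for a hypothetical hiring scenario. The point it illustrates is the one above: when historical labels encode past discrimination, a model trained on them reproduces the same disparity.

```python
# Toy illustration (synthetic data, hypothetical scenario): bias in training
# labels propagates into a model's predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Feature 0: a skill score; feature 1: group membership (0 or 1).
skill = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)

# Historical hiring labels: equally skilled candidates from group 1 were
# hired less often, so the labels themselves encode past discrimination.
hired = (skill + rng.normal(0, 0.5, n) - 0.8 * group) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# At the same skill level, the model predicts different hiring probabilities
# for the two groups, reproducing the bias in the data.
same_skill = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(same_skill)[:, 1])
```

In this sketch the disparity is easy to see because group membership is an explicit feature; in practice, bias often persists even when such attributes are removed, since other features act as proxies for them.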
Kate Crawford, a professor at the University of Southern California, has written an important book, The Atlas of AI (Yale University Press, 2021), on how artificial intelligence is made and the costs involved in its production, from the minerals extracted to run the machines to data extraction, environmental degradation, labor conditions, and capital accumulation.
Another important book, written by Mary Gray, an anthropologist, and Siddharth Suri, a computational social scientist, both working for Microsoft Research, is Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (Harper Collins, 2019). The book focuses on on-demand workers who clean training data, fixing typos, adding descriptive tags to images, and performing other tasks that make information intelligible to software programs.
Generative AI can aid many aspects of academic work; however, how it is applied raises a number of questions, including: Can AI be used for idea generation or article summarization? Should AI be considered an author? How can AI help with research?
This new technology has many implications for privacy. Bias, data exploitation, and tracking through AI systems can have real-world consequences, and how one's personal data is processed, stored, and deleted will continue to be a challenge.
Should authors whose works trained AI be compensated? Can pictures and text generated by AI be copyrighted? Should users be able to prompt these tools with direct reference to other creators' copyrighted and trademarked works by name without their permission? These are some of the issues surrounding copyright and generative AI.
The environmental impact of generative AI is often overlooked when considering its costs. The water used for cooling, the data centers, and the energy consumption required to run AI have created a carbon footprint as large as that of a small nation.
Crawford, K. (2024, February 20). Generative AI's environmental costs are soaring--and mostly secret. Nature.
Council on Foreign Relations. (2022, June 28). Artificial intelligence's environmental cost and promises [Blog].
Li, P., Yang, J., Islam, M., & Ren, S. (2023, October 29). Making AI less "thirsty": Uncovering and addressing the secret water footprints of AI models. arXiv.
Ren, S. (2023, November 30). How much water does AI consume? The public deserves to know [Blog]. OECD: AI Policy Observatory.
Berreby, D. (2024, February 6). As use of AI soars, so does the energy and water it requires. Yale Environment 360.
AI's excessive water consumption threatens to drown out its environmental contributions. (2024, March 21). The Conversation.