Mansfield Library Research Guides

Artificial Intelligence (AI)

A regularly-updated guide to generative AI, AI tools in research, and more. Please email karli.cotton@umontana.edu with suggestions for resources to include.

Ethical Considerations

The rapid emergence of AI tools in the past few years has surfaced new dimensions of ethical concerns that technology users have been grappling with for decades. As with any technology, it is important to understand the ethical implications of its production and use.

Below is a very high-level introduction to some of the ethical concerns surrounding AI technology--keep in mind that these are multi-faceted and complicated issues that are difficult to distill into a single page! We encourage you to view our additional resources on these issues and do your own research on topics that have the potential to directly impact you, your students, and your discipline.

Key issues include:

Bias and Equity

AI systems inherit the biases of their human creators and the data they are trained on. This can result in discriminatory, prejudiced, or unfair outputs. As more pay-to-play versions of these tools appear, questions of inequitable access to advanced tools emerge as well.

Privacy and Copyright

AI models are trained on vast amounts of data, which can include personal information, copyrighted material, and intellectual property. The concept of "openness" touted by many technology companies and their advocates, for example, can directly undermine principles of Indigenous data sovereignty and cultural integrity. Another urgent concern is how user data is stored, shared, and reused as further training data for AI models.

Environment

Training and running AI systems takes a significant amount of energy, emitting large amounts of carbon and consuming fresh water for cooling systems--and hotter, drought-prone climates require even more water for cooling. Data centers also pose an inequitable risk to regions already bearing environmental or socioeconomic disadvantages, and as AI systems and algorithms grow more complex, they will demand more data storage and computing power. AI systems also rely on microchips, which contain rare earth metals mined around the world (though primarily in China); these mines pose significant environmental, human rights, and labor problems.

Low-Wage and Human Labor

AI models still need human intervention--often called "human in the loop"--to validate and oversee AI decisions and outputs. AI companies often rely on low-wage human labor, also called "ghost labor," to validate their models. Examples include overseas workers, prison inmates, and crowdsourcing programs like Amazon Mechanical Turk.

Accountability

Who is held responsible when AI makes critical decisions, produces mistakes, or is used to cause harm? When is the user liable? And who is ultimately responsible for improving AI systems and addressing issues like bias?

Transparency and Oversight

Often called the "black box problem," the processes and logic behind many AI tools and their outputs can be hard (or impossible) to trace or identify. This makes their behavior difficult to monitor or correct, and prevents users from fully understanding or mitigating potential ethical harms.

Take a deeper dive

[Embedded interactive content: an H5P object created by Rebecca Sweetman, available under a Creative Commons BY-NC-SA license.]

More Resources on Ethical Implications

Recommended Articles

Books at the Mansfield Library