Mansfield Library Research Guides

Artificial Intelligence (AI)

A regularly updated guide to generative AI, AI tools in research, and more. Please email karli.cotton@umontana.edu with suggestions for resources to include.

AI Hallucinations

Generative AI algorithms have been known to produce false, misleading, inaccurate, or completely fabricated outputs, commonly referred to as hallucinations*. Hallucinations can stem from insufficient training data, misidentified patterns, and biases or other flaws in the algorithms and training data. In one popular example, a lawyer in New York used ChatGPT to help research legal precedents for a case and ended up citing completely fictional court cases in a legal brief.

It's unlikely that the problems contributing to hallucinations will be fully solved, since the nature and purpose of LLMs is not to tell the truth but to produce a convincing response. That's why it's more important than ever to develop and apply our uniquely human critical thinking skills to determine the accuracy and usefulness of AI-generated information.

* Need New York Times access? The Mansfield Library now offers free access for UM students and staff! Activate your access by viewing the "See More..." dropdown here.

How to Critically Evaluate AI Output

You should always take a critical approach to information you find online, especially when viewing sources you're not familiar with or information you see on social media. The same principle extends to content generated by AI tools, which are built on human-created algorithms and trained on data from the internet, but you may need to ask some new questions to determine whether the generated content is trustworthy and usable.

The questions below, grouped by area of evaluation, are examples of what you can ask yourself to evaluate AI-generated content:

Authority
  1. Are there references to where the information came from? Are the citations real?
  2. Is the information from a trustworthy source? Do the author's credentials, prior publications, or lived experience make them an authority on this specific topic?
  3. Is the information reviewed or evaluated in some way before publication?
  4. Who might have paid for or sponsored the information that was used in the generated output?
  5. What information will you need to follow up on with your own research?
Bias
  1. What perspective is the source approaching the subject from? Does it acknowledge that perspective?
  2. Does the output consider or compare multiple viewpoints?
  3. Can you think of any perspectives that are missing from the information provided?
  4. Who would the strongest critic of this source be and why?
Credibility
  1. Is the information provided true and accurate?
  2. Are there any claims or information that have not been backed up with evidence?
  3. Can you follow the argument or logic of the output?
  4. Are you able to verify any claims from the citations? If not, where will you look to do so?
Currency
  1. How up to date is the data your tool has been trained on? Is that information available?
  2. When asked for citations, does it cite recent sources?
  3. For your work, how much does currency of information matter?
  4. Will you need to seek out more current information elsewhere?
Relevance
  1. Is the generated output relevant to your work? How might you adjust the prompt to get the information you want?
  2. Do you need to ask for clarification or more information on anything?
  3. Does the output communicate at the level you need for yourself and your audience?
  4. How does the information provided support, refute, or contextualize your work?
For more information on how to be a savvy information consumer in the age of AI, check out our page on Misinformation, Disinformation, Deepfakes, and More.