Generative AI algorithms have been known to produce false, misleading, inaccurate, or completely fabricated outputs, commonly referred to as hallucinations. Hallucinations can stem from insufficient training data, false patterns a model identifies in its data, and biases and other flaws in the algorithms and training data themselves. In one widely reported example, a lawyer in New York used ChatGPT to help research legal precedents for a case and ended up citing completely fictional court cases in a legal brief.
It's unlikely that the problems contributing to hallucinations will be fully solved, since the purpose of large language models (LLMs) is not to tell the truth but to produce a convincing response. This is why it's more important than ever to develop and apply our uniquely human critical thinking skills to determine the accuracy and usefulness of AI-generated information.
You should always take a critical approach to information you find online, especially when viewing sources you're not familiar with or information you see on social media. The same principle extends to content generated by AI tools, which are built on human-created algorithms and trained on data from the internet, but you may need to ask some new questions to determine whether the generated content is trustworthy and usable.
The table below provides some examples of questions you can ask yourself to evaluate AI-generated content:
| Area of Evaluation | Evaluative Questions |
|---|---|
| Authority | |
| Bias | |
| Credibility | |
| Currency | |
| Relevance | |