In this section you will find tips on how to write more succinct and effective prompts, how to use proven techniques to fact-check information generated by AI, how to verify whether an AI resource is legitimate, and how to protect your privacy while engaging with these tools. Click on the corresponding tabs to learn more.
Being AI literate doesn't mean that you must comprehend the complex mechanics of AI. It means that you are actively learning about the technology and that you examine any materials regarding AI, particularly news items, with a critical eye.
Use this tool when reading about AI applications to help evaluate the legitimacy of the technology:
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Hervieux, S., & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry. https://thelibrairy.wordpress.com/2020/03/11/the-robot-test
One of the most effective ways to check the information presented to you by AI (or the AI's programmers) is lateral reading: comparing what the AI produced in response to your prompt with outside sources.
View this video from the Stanford History Education Group for an overview of lateral reading:
Lateral reading with AI can look a little different from lateral reading with other online resources, because you won't have access to the cues usually present in those sources (such as the author or publisher). Instead, you must work with the factual claims in the AI tool's output. By breaking the output into smaller segments of information, you can take each one individually and ask, "Who can confirm this information?" This is particularly helpful because AI will often mix factual and false claims. From there, you should be able to find multiple independent sources that either verify or dispute the claims being made, allowing you to judge the validity of the AI's results.
To maximize the legitimacy of an AI's output, you should also make sure that all of the information in your prompt is factual. AI will not correct mismatched dates, people, or other flaws in a prompt; rather, it will generate false information to fit those parameters.
Use this step-by-step guide to fact-check AI outputs (a worked example follows the steps):
1. Break down the information into smaller, searchable claims
2. Look for supporting information from Google, Wikipedia, our library catalog, or our databases. (Make sure the AI is putting information in the correct context as well!)
3. Question the assumptions being made in your prompt and the output, the possible biases that exist around the topic, and the limits of the AI's training data
4. Make a judgment call: can you re-prompt the AI for more accurate information, or should you use a different resource you found while fact-checking?
5. Repeat for each claim made by the AI
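For example (a hypothetical exchange, not the output of any particular tool): suppose an AI tells you, "Marie Curie won two Nobel Prizes and was the first woman to teach at the University of Oxford." Breaking this into searchable claims gives you (1) Curie won two Nobel Prizes and (2) she was the first woman to teach at Oxford. A quick search of the Nobel Prize website confirms the first claim (Physics, 1903; Chemistry, 1911), while independent sources dispute the second: she was the first female professor at the University of Paris, not Oxford. One verified claim and one false claim in the same sentence is exactly the mix of fact and fabrication you should expect, so repeat the process for every claim.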
The CLEAR framework, created by librarian Leo S. Lo at the University of New Mexico, is a method for optimizing prompts given to generative AI tools. To follow the CLEAR framework, prompts must be:
Concise: "brevity and clarity in prompts"
Logical: "structured and coherent prompts"
Explicit: "clear output specifications"
Adaptive: "flexibility and customization in prompts"
Reflective: "continuous evaluation and improvement of prompts"
This information comes from the following article; read it in full if you would like to improve your prompt writing.
Lo, L. S. (2023). The CLEAR path: A framework for enhancing information literacy through prompt engineering. The Journal of Academic Librarianship, 49(4), Article 102720. https://doi.org/10.1016/j.acalib.2023.102720
Users should also be aware that most AI tools collect some form of data from your prompts and conversations. Some allow users to opt out; however, there are some general guidelines and recommendations that you should follow to protect your privacy.