Artificial Intelligence: Tips for Using AI

Questions?

  • If you are unsure whether a certain use of an AI-based tool violates an academic integrity policy, please contact the professor of that course. Each instructor may have different expectations regarding AI in the classroom, so it is best to double-check or seek clarity if you are unsure.
  • If you have any questions about AI literacy or finding additional resources, please email e-reference@brescia.edu or schedule an appointment with a librarian. We'd be happy to assist in person or online!
  • If you have any additional questions about AI that go beyond the scope of this guide, please check our catalog or databases for additional resources to explore or contact a librarian to assist you in your search.

 Please Note

  • Information regarding Artificial Intelligence (AI) changes at a rapid pace. Despite our efforts to keep this guide current, you may come across information that is out of date.
  • The Fr. Leonard Alvey Library does not endorse any specific AI technologies and encourages users to be cautious about sharing personal information when utilizing AI tools.

In this section you will find tips on how to write more succinct and effective prompts, how to use proven techniques to fact-check information generated by AI, how to verify whether an AI resource is legitimate, and how to protect your privacy while engaging with these tools. Click on the corresponding tabs to learn more.


 

The ROBOT Test

Being AI literate doesn't mean that you must comprehend the complex mechanics of AI. It means that you actively learn about the technology involved and examine any materials regarding AI, particularly news items, with a critical eye.

Use this tool when reading about AI applications to help consider the legitimacy of the technology:

Reliability

Objective

Bias

Ownership

Type


Reliability
  • How reliable is the information available about the AI technology?
  • If it's not produced by the party responsible for the AI, what are the author's credentials? Are they biased?
  • If it is produced by the party responsible for the AI, how much information are they making available? 
    • Is information only partially available due to trade secrets?
    • How biased is the information that they produce?
Objective
  • What is the goal or objective of the use of AI?
  • What is the goal of sharing information about it?
    • To inform?
    • To convince?
    • To find financial support?
Bias
  • What could create bias in the AI technology?
  • Are there ethical issues associated with this?
  • Are bias or ethical issues acknowledged?
    • By the source of information?
    • By the party responsible for the AI?
    • By its users?
Ownership
  • Who is the owner or developer of the AI technology?
  • Who is responsible for it?
    • Is it a private company?
    • The government?
    • A think tank or research group?
  • Who has access to it?
  • Who can use it?
Type
  • Which subtype of AI is it?
  • Is the technology theoretical or applied?
  • What kind of information system does it rely on?
  • Does it rely on human intervention? 

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Hervieux, S. & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry. https://thelibrairy.wordpress.com/2020/03/11/the-robot-test

One of the most effective methods of checking the information presented to you by an AI tool (or its developers) is a method of analysis known as lateral reading. Lateral reading means comparing the AI's response to your prompt against other, outside sources.

 

View this video from the Stanford History Education Group for an overview on Lateral Reading:

 

Lateral reading with AI can look a little different than with other online resources, because you won't have access to cues that are usually present elsewhere (such as an author or publisher). Instead, you must work with the factual claims in the AI tool's output. By breaking the output into smaller segments of information, you can take each claim individually and ask, "Who can confirm this information?" This is particularly helpful because AI will often mix factual and false claims. From there, you should be able to find multiple independent sources that either verify or dispute the claims being made, allowing you to judge the validity of the AI's results.

Another way to get more reliable output from an AI tool is to make sure that all of the information in your prompt is factual. AI will not correct mismatched dates, names, or other flaws in a prompt; instead, it will generate false information to fit those parameters.

 

Use this step-by-step guide to fact-check AI outputs:

1. Break down the information into smaller, searchable claims.

2. Look for supporting information in Google, Wikipedia, our library catalog, or our databases. (Make sure the AI is putting information in the correct context as well!)

3. Question what assumptions are being made regarding your prompt, the output, the possible biases surrounding the topic, and the limits of the AI's training data.

4. Make a judgment call: can you re-prompt the AI for more accurate information, or use a different resource you found while fact-checking?

5. Repeat for each claim made by the AI.

The CLEAR framework, created by librarian Leo S. Lo at the University of New Mexico, is a method for optimizing prompts given to generative AI tools. Under the CLEAR framework, prompts should be:

 

Concise: "brevity and clarity in prompts"

  • Keep your prompt specific and to the point.

Logical: "structured and coherent prompts" 

  • Maintain a logical flow and order of ideas within your prompt.

Explicit: "clear output specifications"

  •  Provide the AI tool with precise instructions on your desired output format, content, or scope to receive a stronger answer. 

Adaptive: "flexibility and customization in prompts"

  • Experiment with various prompt formulations and phrasings to frame an issue in different ways and see new answers from the generative AI tool.

Reflective: "continuous evaluation and improvement of prompts" 

  • Evaluate the AI tool's answers and use your own assessment of its performance to adjust and improve your prompts.

 

This information comes from the following article; read it if you would like to improve your prompt writing.

Lo, L. S. (2023). The CLEAR path: A framework for enhancing information literacy through prompt engineering. The Journal of Academic Librarianship, 49(4), 102720. https://doi.org/10.1016/j.acalib.2023.102720

Users should also be aware that most AI tools collect data from your prompts and conversations. Some allow users to opt out; however, there are some general guidelines and recommendations you should follow to protect your privacy.

 

  • Do not enter any personally identifiable information, such as your name, address, student ID, medical conditions, or other sensitive information, into an AI tool.
  • Read and understand the terms and conditions governing a tool's usage so that you know how the information you input will be used. You may not be able to remove what you input or create. This is especially important if you are entering information that belongs to someone else or exists behind a paywall, as you may be violating copyright law.
  • Question whether the privacy policies and settings are easy to find, clearly written, and transparent about how the tool uses your data. If they aren't, the creator's intentions may not be trustworthy.
  • Be conscious of what you share online and on social media, as these may become data sets used for AI training.
  • Consider using anonymous web browsers or a VPN to minimize tracking.