
AI Tools and Best Practices: AI Pitfalls/Ethics

An evolving collection of tools and best practices for AI.

Overview

While AI presents remarkable opportunities for innovation and discovery, it is also prone to a variety of problems.

AI Veracity

AI Hallucinations

  • AI hallucination is a phenomenon wherein an AI model perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate. (See this article from IBM.)
  • Hallucinations can lead to unreliable or outright wrong information. (This article from the BBC describes a real-life example.)

Data Quality

Bias

  • If bias is built into AI models (whether intentionally or unintentionally), the model will produce skewed results. This can mean racial profiling in facial recognition tools, bias in healthcare diagnoses, etc.
  • Bias may stem from unrepresentative sample groups, incomplete data, flawed measurement methods, etc.
  • This article from IBM covers many key topics related to AI bias.

Plagiarism

Image generated by craiyon.com

AI and Academic Rigor

When using AI tools for research, researchers should apply the same academic rigor they would to any other type of source. Researchers should consider questions like:

  • Who makes the tool being used?
  • On what data was the tool trained?
  • Is there bias in that data, and if so, does that bias impact the results?
  • Does the developer have financial interests that might call the results into question?
  • Can the results be confirmed by other, non-AI tools?

AI Usage at Penn

Penn has created a guiding document for the use of generative AI, which can be found here. Students who choose to use AI tools are encouraged to review this document to ensure their usage is appropriate.

Please note that individual departments at Penn (and even specific classes) may have different guidelines for AI usage.

Frameworks for AI Ethics

While there is no "one-size-fits-all" ethical framework for AI, there are some generally accepted principles surrounding responsible use of AI. These include:

  • AI should be used for socially beneficial purposes
  • It should avoid creating or reinforcing bias
  • It should be accountable to humans
  • It should uphold human rights and dignity

Two helpful frameworks for AI ethics are those from Google and UNESCO.
