
Chatting Up ChatGPT: Misinformation & Hallucinations

Generative AI, Prompts, and Misinformation in Health Science Research

Hallucination & Misinformation in the News

  • 2023: U.S. lawyers used AI to research a legal case; the tool hallucinated fake judicial decisions, quotes, and citations. The lawyers were "unaware that its content could be false."
  • 2023: Microsoft posted an AI-written article on must-see destinations in Ottawa, Canada, that listed a food bank as a tourist hotspot.
  • 2023: AI-generated mushroom-foraging guides sold on Amazon encouraged readers to forage species that are protected or poisonous.
  • 2024: In the U.K., TikTok users were fed misleading election news through AI-generated misinformation and deepfake videos.
  • 2025: A CBS Chicago segment, linked below, shows how an expert spots a deepfake created with AI.

Expert shows how to spot a deepfake created with AI

CBS Chicago. (2025, January 7). Expert shows how to spot a deepfake created with AI [Video]. YouTube. https://youtu.be/yiXzKN7M2f0?si=x56s1AYcpebXjtK2

Spotting AI Images

 

Infographic: how to spot AI-generated images

Source: Weber, J., Wesolowski, K., & Sparrow, T. (2023, April 9). Fact check: How can I spot AI-generated images? DW News. https://www.dw.com/en/fact-check-how-can-i-spot-ai-generated-images/a-65252602

Privacy Concerns

Can you trust AI with sensitive information? Can you trust the information that AI gives you?

  • An article from the University of Kentucky recommends never entering personal information into AI tools: once submitted, it may never be deleted, and hackers can breach these tools, leading to identity theft and fraud.
  • When you enter health data or research-related prompts, that information may be stored and used to improve the model, raising privacy concerns. For example, if you ask AI to analyze patient treatment plans, is that information secure?

How to Fact-Check AI

1. Cross-Check with Trusted Sources

AI generates content based on patterns in data, but it doesn’t know if that data is accurate. Always verify facts with credible, reliable sources.

What to do:

  • Reference authoritative websites like academic databases, government pages, or reputable organizations.
  • For statistics or specific data, look for the original report or study.
  • Use fact-checking platforms such as Snopes, FactCheck.org, or PolitiFact for verification.

2. Look for Citations and Sources

AI often produces content without clear citations or references, making it hard to trace its origins. Ask your AI tool to provide sources whenever possible, and verify those sources directly.

What to do:

  • If an AI tool mentions a study, article, or statistic, search for the original work to verify the claim.
  • Use academic search engines like Google Scholar or PubMed to track down studies (see the sketch after this list for one way to check PubMed programmatically).
  • If no source is provided, investigate the topic independently.
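For readers comfortable with a little scripting, the PubMed check above can be partly automated. The sketch below is an illustration only, not part of the cited guide: it queries NCBI's public E-utilities "esearch" endpoint to count PubMed records matching a title an AI tool has cited. The claimed_title value and the pubmed_hits helper name are hypothetical placeholders.

    # Check whether an AI-cited study title actually appears in PubMed,
    # using NCBI's E-utilities esearch API (returns a record count).
    import json
    import urllib.parse
    import urllib.request

    def pubmed_hits(title: str) -> int:
        """Return how many PubMed records match the given title."""
        query = urllib.parse.urlencode({
            "db": "pubmed",
            "term": f"{title}[Title]",  # restrict the search to the title field
            "retmode": "json",
        })
        url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?{query}"
        with urllib.request.urlopen(url) as response:
            result = json.load(response)
        return int(result["esearchresult"]["count"])

    # Hypothetical citation produced by an AI tool; zero hits is a red flag.
    claimed_title = "Effects of example intervention on hypothetical outcomes"
    print(pubmed_hits(claimed_title))

A count of zero does not prove the citation is fake (titles can be misquoted), but it is a strong signal to track down the original work before trusting the claim.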

3. Spot Inconsistencies or Contradictions

AI can sometimes create content with internal contradictions: a claim made in one part of the text may be contradicted later on. Catching these inconsistencies early helps ensure the information is coherent and trustworthy.

What to do:

  • Read through the AI-generated content carefully for any conflicting statements or logical inconsistencies.
  • Ensure that the key arguments or points align throughout the text and that there are no contradictions.

4. Verify Timeliness

AI tools might pull outdated information, especially when it comes to fast-moving fields like technology, science, or current events. Always confirm that the content is up-to-date, particularly when referencing recent events or trends.

What to do:

  • Check if the information is still relevant and accurate, especially for rapidly changing topics.
  • Look for the publication date of the sources cited and ensure they reflect the latest data or developments.

5. Consult Experts for Specialized Topics

AI excels in general knowledge but may fall short in highly specialized areas such as medicine, law, or engineering. For topics requiring deep expertise, consulting an expert can ensure accuracy and prevent errors.

What to do:

  • Reach out to professionals or specialists to verify complex or technical information.
  • In niche fields, even small inaccuracies can lead to significant misunderstandings or consequences, so expert validation is essential.

Source: Smith, J. (2025, January 24). How to fact-check AI content like a pro. Articulate. https://www.articulate.com/blog/how-to-fact-check-ai-content-like-a-pro/
