Decoding Hallucinations: Navigating Missteps in Physics Exams

In both high school physics exams and the field of artificial intelligence, "hallucinations" (instances where fabricated answers are presented as fact) are a common occurrence. This post compares students "hallucinating" answers in physics exams with large language models (LLMs) such as ChatGPT generating incorrect responses. We look at the factors contributing to these errors, their negative consequences, and targeted advice to help students and educators avoid these missteps, leading to a stronger grasp of physics concepts and more accurate answers.

Architectural/Design Angles

Aspect: Knowledge Base
Students 'hallucinating' answers: Students rely on their memory and prior knowledge, which may be incomplete or incorrect.
LLMs 'hallucinating' answers: LLMs rely on vast amounts of training data, which includes both correct and incorrect information.

Aspect: Processing Mechanism
Students 'hallucinating' answers: Students use cognitive processes to recall