A Stanford University study published in November 2025 used a "carefully curated" set of true and false statements combined with statements of belief or knowing ("I believe" or "know"; "John/Mary believes" or "knows") to test Large Language Models on "differences between belief, knowledge, and fact." LLMs had the most trouble "distinguish[ing] between whether a belief is held and whether that belief corresponds to reality." The results point toward "superficial pattern matching rather than robust epistemic understanding." Such a focus on patterns rather than meanings is built into information processing in general and LLMs in particular, especially with their gigantic data sets scraped, without any possible curation, from internally contradictory online data. (Andrew Shields, #111words, 10 February 2026)

Suzgun, M., Gur, T., Bianchi, F. et al. Language models cannot reliably distinguish belief from knowledge and fact. Nat Mach Intell 7, 1780–1790 (2025). https://doi.org/10.1038/s42256-025-01113-8

Pattern matching without knowledge and truth in LLMs