… don’t factor out pragmatics (context)
Thomas Dietterich, the machine learning pioneer and emeritus professor of computer science at Oregon State University, gave a talk titled “What’s Wrong with Large Language Models, and What We Should Be Building Instead.” In his talk at Johns Hopkins, the summary slide at the 1-hour, 2-minute mark captures the problem I see with LLMs:
“They are statistical models of knowledge bases rather than knowledge bases”
Without a knowledge base, how can an LLM avoid confabulating (commonly called hallucinating)?
To illustrate the point, I tried my favorite sandbox, Microsoft Bing Copilot. I started with a test of how it handles restaurants. It did well.
Experiment 1: Can Copilot handle events?
My search returned nothing relevant in Pensoe. Lacking a knowledge base that includes restaurants, Copilot made one up, presumably treating my input as factual. Sometimes that approach works, but not in this case. Sometimes I would be tricked!