Our brain’s evolution, and observations of how it functions, call for a consistent model to guide AI. Patom theory (PT) is such a model.

It stands to reason that, since the only human-level intelligence resides in humans, brain science should lead the way in creating artificial intelligence (AI). Patom theory (PT) was first described in the 1990s and has since proved effective at solving problems in natural language understanding (NLU). Its terse description: all a brain does is store, match, and use hierarchical, bidirectional linkset patterns[i]. That is, sets and lists are sufficient to explain everything a human brain is capable of. …
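To make “hierarchical, bidirectional linkset patterns” concrete, here is a minimal sketch in Python. It is my own illustration under stated assumptions, not PT’s actual machinery: each pattern is a named list of links to its parts, and every part keeps a link back to the wholes that contain it, so matching can proceed bottom-up or top-down.

```python
class Patom:
    """Illustrative pattern atom: a named set/list of links to other patterns.

    Links are bidirectional: each Patom knows its parts (downward links)
    and the larger patterns it participates in (upward links).
    """

    def __init__(self, name, parts=None):
        self.name = name
        self.parts = list(parts or [])   # downward links (a list preserves order)
        self.wholes = []                 # upward links, filled in by the wholes
        for part in self.parts:
            part.wholes.append(self)

    def matches(self, sequence):
        """Match a flat sequence of names bottom-up against this pattern."""
        if not self.parts:                        # leaf pattern: match one token
            return len(sequence) == 1 and sequence[0] == self.name
        # Non-leaf: each part must match the corresponding element, in order.
        return len(sequence) == len(self.parts) and all(
            part.matches([item]) for part, item in zip(self.parts, sequence)
        )


# Hypothetical example: "the dog" as a hierarchical pattern of word patterns.
the = Patom("the")
dog = Patom("dog")
np = Patom("NP", parts=[the, dog])

print(np.matches(["the", "dog"]))   # True: bottom-up match of the hierarchy
print(the.wholes[0].name)           # "NP": the upward link enables top-down use
```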


Understanding human language (NLU) is best seen, not discussed.

Today I want you to see what is required for natural language understanding (NLU) to meet the demands of tomorrow’s human-device interface. This will be mainly visual to keep it simple: your own English understanding can confirm what the machine is doing more effectively than a forensic explanation! If you’re not involved in NLU, you may wonder what all the fuss is about, since the responses seem obvious, but that’s exactly what our machines need to do to help and emulate us: make us think that a person responded. …


Understanding with Grice’s Maxims is helpful for NLU

Here’s an ambiguous sentence in English: “I’m going to jump in the shower.”

Is the speaker going to do jumping jacks in the shower!? Or is the speaker going to move into the shower in a single jump? Maybe the speaker is going up a tall building and will then jump and fall into a shower positioned at the bottom! What a scary utterance!

For AI to understand its meaning, how should we approach it?

Cognitive scientists look to emulate the human brain by drawing on any of the field’s constituent disciplines (philosophy, psychology, linguistics, computer science, neuroscience, and anthropology). …
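To show how a Gricean preference might be operationalized, here is a hypothetical sketch. The candidate readings and plausibility scores are invented for illustration; a real system would derive them from context rather than a lookup table.

```python
# Hypothetical illustration: choosing among readings of an ambiguous utterance
# by preferring the interpretation a cooperative speaker most plausibly meant
# (in the spirit of Grice's maxims of relevance and manner).

READINGS = {
    "I'm going to jump in the shower.": [
        ("start showering soon (idiomatic)", 0.90),   # scores are invented
        ("leap into the shower stall", 0.08),
        ("do jumping exercises while showering", 0.02),
    ],
}

def interpret(utterance):
    """Pick the reading with the highest plausibility score."""
    candidates = READINGS.get(utterance, [])
    return max(candidates, key=lambda pair: pair[1], default=(None, 0.0))

reading, score = interpret("I'm going to jump in the shower.")
print(reading)  # start showering soon (idiomatic)
```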


Representing language-independent meaning as a brain does won’t come from a modern system that represents meaning in English alone

Human language is like a code. Our brain takes meaningful ideas and converts them into sequences of muscle movements to communicate with others. It also does the reverse, converting the sounds received back into their meaning. This works across other modalities too, like writing, touch, and visual sign language. The communication is very rich and flexible, able to deal with real-world knowledge as well as exquisitely fine-grained detail within the immediate conversation.
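As a minimal sketch of this code view, assuming a toy meaning layer of my own invention, a single table can drive both directions: encoding meaning into a surface form and decoding a form back into meaning.

```python
# A minimal sketch of the "language as code" view, assuming a toy
# language-independent meaning layer. The dictionaries and function names
# are illustrative, not part of Patom theory's actual machinery.

MEANING_TO_FORM = {
    ("GREET",): {"en": "hello", "fr": "bonjour"},
    ("THANK",): {"en": "thank you", "fr": "merci"},
}

# Decoding inverts the same table, so generation and understanding share
# one representation (bidirectionality).
FORM_TO_MEANING = {
    (lang, form): meaning
    for meaning, forms in MEANING_TO_FORM.items()
    for lang, form in forms.items()
}

def encode(meaning, lang):
    """Meaning -> surface form (speaking/writing)."""
    return MEANING_TO_FORM[meaning][lang]

def decode(form, lang):
    """Surface form -> meaning (listening/reading)."""
    return FORM_TO_MEANING[(lang, form)]

assert encode(("GREET",), "fr") == "bonjour"
assert decode("bonjour", "fr") == ("GREET",)
```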

Today, I explore the code-breaking model that enables the ultimate goal of natural language understanding (NLU): understanding the full meaning of language, and then storing it for ongoing reference. Knowledge representation…


Meaning is the building block of human language, uncovered by context.

Meaning is core to language because the meaning of a sentence determines the forms of words and phrases that are selected and vice versa. Or as I say: Form follows meaning®. But what is meaning?

In language, the word forms that we use to communicate with others follow the meaning of what we want to say and, just as importantly, the meaning of what we say is far deeper than the words we can use to say it. Therefore, meaning needs to be at the core of our language understanding systems, not word forms.

What is missing from data science…


Knowledge representation is key to the future of natural language understanding because the right model enables all languages to share a common ‘repository of knowledge.’ But to date, models are immature. By analogy, we haven’t yet seen the kind of breakthrough in explaining knowledge that Copernicus delivered in astronomy. Fundamentally, today’s models are misaligned with what we know from the cognitive sciences.

Today, I’ll look into the arbitrary nature of the current approach to knowledge representation as an enabler of artificial intelligence (AI) and consider an alternative optimized for human language representation. My justification for the alternative…
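To preview what such an alternative could look like, here is a toy sketch of a shared, language-independent repository. The class names and structure are my assumptions, not the model argued for in the article: every language’s word forms resolve to the same concept node.

```python
# Toy language-independent knowledge store: one concept node, many labels.
# All names here are illustrative assumptions.

class Concept:
    def __init__(self, concept_id):
        self.id = concept_id
        self.labels = {}          # language -> set of word forms

    def add_label(self, lang, form):
        self.labels.setdefault(lang, set()).add(form)

class Repository:
    def __init__(self):
        self.concepts = {}
        self.index = {}           # (lang, form) -> concept

    def add(self, concept):
        self.concepts[concept.id] = concept
        for lang, forms in concept.labels.items():
            for form in forms:
                self.index[(lang, form)] = concept

    def lookup(self, lang, form):
        """Any language's word form resolves to the same shared concept."""
        return self.index.get((lang, form))

dog = Concept("DOG")
dog.add_label("en", "dog")
dog.add_label("de", "Hund")
repo = Repository()
repo.add(dog)

assert repo.lookup("en", "dog") is repo.lookup("de", "Hund")  # one shared node
```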


Representing knowledge in a language-independent, bidirectional manner is needed to make NLP more effective.

Yesterday, my copy of the book Rebooting AI, by Marcus and Davis[i], arrived. Although I’ve only looked at a couple of pages so far, it is going to be a good reference point for scientific observations about artificial intelligence (AI) because its authors are experts “at the forefront of AI research.” If they can’t explain the state of the art, nobody can!

Because my work doesn’t come from the academic world, its findings aren’t broadly known at the moment, but it’s easy to show solutions to the problems the book identifies. I want to share those solutions to help reboot AI and my…


Aiming at the target is the best way to hit it. An NLU benchmark needs the same target: NLU in conversation. Search is NOT language.

An NLU benchmark should progress NLP performance in conversation, making it as accurate as mathematics on a computer.

My SuperGLUE benchmark article notes that the consortium neither asks questions in language nor generates answers in language. It is more a test of search than of natural language understanding (NLU), which could explain the observable limitations of conversational AI built on technology that is improving at the GLUE benchmark.

I was immediately asked what a benchmark for natural language understanding should look like.

The benchmark for natural language processing (NLP), which comprises NLU and natural language generation (NLG), should test language, not knowledge. What’s the difference?

Language allows communications to take place, leveraging shared…
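Here is a hedged sketch of what one item in such a benchmark might look like, with the probe and the expected answer both in natural language. All field names are illustrative assumptions, and the string matcher stands in for real meaning comparison.

```python
# Hypothetical shape of one conversational NLU benchmark item: the probe and
# the expected response are both natural language, and scoring should compare
# meaning, not string overlap. Field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class BenchmarkItem:
    context: list[str]             # prior conversation turns, in language
    question: str                  # asked in language, not as a label or ID
    acceptable_answers: list[str]  # answers in language with the same meaning

item = BenchmarkItem(
    context=["A: I'm going to jump in the shower.", "B: OK, dinner's in ten."],
    question="What is A about to do?",
    acceptable_answers=["Take a shower.", "Have a shower.", "Shower."],
)

def score(response: str, item: BenchmarkItem) -> bool:
    # Placeholder matcher; a real benchmark would compare meanings,
    # not normalized strings.
    normalize = lambda s: s.lower().strip(". ")
    return normalize(response) in {normalize(a) for a in item.acceptable_answers}

print(score("take a shower", item))  # True
```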


Linguistic models scale exponentially when taught; NLP training data does not.

Linguistic models add knowledge exponentially as they are taught: that’s good. The data science training model, by comparison, scales slowly: that’s bad.

The “data” model promised effective NLP (natural language processing) given just “more data” and later, perhaps, AGI (artificial general intelligence). But data availability is terribly limited compared to the scale of a natural language, which may explain why the data model doesn’t scale to conversations.

I’ll use English to show the scale that machine learning systems need to deal with. …
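Some back-of-the-envelope arithmetic, with assumed numbers of my own rather than figures from the article, shows the gap:

```python
# Back-of-the-envelope arithmetic (my illustration, not the article's figures):
# even a tiny fraction of the possible word sequences in English dwarfs any
# training corpus, which is the scaling problem described above.

vocabulary = 50_000           # assumed working vocabulary size
sentence_length = 10          # words per sentence (assumption)

possible_strings = vocabulary ** sentence_length
grammatical_fraction = 1e-12  # generous guess: only a sliver is grammatical
grammatical_sentences = possible_strings * grammatical_fraction

print(f"{possible_strings:.2e} raw 10-word strings")       # ~9.77e+46
print(f"{grammatical_sentences:.2e} plausible sentences")  # ~9.77e+34
# For comparison, a trillion-word corpus contains only ~1e11 ten-word windows.
```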


I spent three days last week in Buffalo, New York, at the International Role and Reference Grammar (RRG) conference at the University at Buffalo, which reported on progress in humanity’s final frontier: how our languages work.

Or as I say: how human intelligence is enabled, because intelligence comes from language (language use is the differentiator between humans and other animals).

Amazingly, I was the sole industry representative at the conference! In this article, I want to explain some of the features of language that were discussed and why they are needed for natural language processing (NLP).

Understanding conversations explained at RRG 2019.

Why NLP needs this scientific progress

RRG models languages with three…
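For readers new to RRG, one standard flavor of its semantic side is the logical structure. Below is my own example sentence encoded in Python; the notation shown in the comments follows RRG’s published formalism, but the tuple encoding itself is just an illustration.

```python
# A standard RRG-style logical structure for "The dog broke the window",
# rendered as nested Python tuples (my own encoding; RRG itself writes
# [do'(dog, Ø)] CAUSE [BECOME broken'(window)]).

logical_structure = (
    "CAUSE",
    ("do'", "dog", None),                 # the causing activity predicate
    ("BECOME", ("broken'", "window")),    # the resulting change of state
)

def render(ls):
    """Pretty-print the nested structure in RRG-like notation."""
    if isinstance(ls, tuple) and ls[0] == "CAUSE":
        return f"[{render(ls[1])}] CAUSE [{render(ls[2])}]"
    if isinstance(ls, tuple) and ls[0] == "BECOME":
        return f"BECOME {render(ls[1])}"
    if isinstance(ls, tuple):
        pred, *args = ls
        return f"{pred}({', '.join(a if a else 'Ø' for a in args)})"
    return str(ls)

print(render(logical_structure))
# [do'(dog, Ø)] CAUSE [BECOME broken'(window)]
```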

John Ball

I'm a cognitive scientist working on NLU (Natural Language Understanding) systems based on RRG (Role and Reference Grammar). A mouthful, I know!
