Pat Inc

A scientific breakthrough in #ConversationalAI. Meaning-based NLU vs. Deep Learning Intent NLU. Sign up for early access: https://pat.ai/


Matching (not Processing) removes Search

John Ball
Published in Pat Inc · 17 min read · Jul 18, 2022


Human brains exploit language in many forms, apparently built up through evolutionary steps. Rather than trying to make our own machines from scratch, perhaps a focus on ways to imitate the brain will help us get better Artificial Intelligence (AI), at least for Natural Language Understanding (NLU). Image: Adobe Stock.

The hardest thing in science can be to change a basic building block of the current model.

Artificial Intelligence (AI) is stuck with a few of those today, some more than 60 years old. AI is still a young science in that many basics remain unsolved, which is why the fundamentals need to be addressed before diving into narrow solutions (and claiming false victories!).

I like the analogy of the geocentric model of the solar system, with its circular orbits and epicycles, which caused problems in astronomy until the elliptical model was discovered. Eventually, if I remember my high school math properly, F = ma (Newton's second law), combined with his law of universal gravitation, allows us to derive elliptical planetary motion mathematically.

To compare the digital computer with the brain: the epicycle model is like the digital computer (a bad model), while brain function is like the heliocentric model with elliptical orbits (a good model). Brains evolved to understand language; computers were designed to solve computational problems.

Although memory is an essential, foundational part of the digital computer, I'll show you how it causes nasty problems for brain emulation and AI. Those problems, which stem from the computational paradigm, were uncovered in the 1950s and have remained unsolved, at least until Patom theory came along.

The Better Model

The better model of brains for AI removes the processing model of digital computers and replaces it with a pattern-matching one. I call it Patom theory. Patterns can combine, based on experience, into multi-sensory referents, interactions of those referents (predicates), and sequences of those interactions (events).
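To make the hierarchy concrete, here is a minimal sketch in Python of how stored patterns might combine into referents, predicates, and events. All the names and data structures here are my own illustration, not Patom theory's actual representation:

```python
# Illustrative sketch: stored patterns combine into multi-sensory
# referents, referents into predicates (interactions), and predicates
# into events (sequences). Hypothetical names, for illustration only.

# A multi-sensory referent: one thing, known through several senses
referent_dog = {
    "sight": "four-legged furry shape",
    "sound": "bark",
    "word": "dog",
}
referent_ball = {"sight": "small round shape", "word": "ball"}

# A predicate links referents in an interaction
predicate_chase = ("chase", referent_dog, referent_ball)  # dog chases ball

# An event is a stored sequence of predicates
event = [
    ("see", referent_dog, referent_ball),
    predicate_chase,
]

def matches(stimulus, referent):
    """A stimulus matches a referent if any stored sensory pattern fits.
    Recognition is direct matching against stored patterns, not computation."""
    return stimulus in referent.values()

print(matches("bark", referent_dog))  # True: recognised by sound alone
print(matches("meow", referent_dog))  # False: no stored pattern fits
```

The point of the sketch is that one referent is reachable from any of its senses: hearing "bark" and reading "dog" both match the same stored thing.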

To make the case, I'll review the semiotics model of language and how it combines individual senses, such as vision (reading), sound (hearing), or touch (Braille), to interact in language, including the distinction between the types of signs English uses for regular and irregular verbs.
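The regular/irregular verb distinction can be sketched in a few lines: irregular forms behave like arbitrary signs that must be stored and matched whole, while regular forms follow a predictable rule. This is my own toy illustration of that distinction, not the article's system:

```python
# Sketch of the sign distinction for English past tense.
# Irregular verbs are arbitrary signs: stored whole and matched directly.
IRREGULAR_PAST = {"go": "went", "eat": "ate", "sing": "sang"}

def past_tense(verb: str) -> str:
    """Match a stored irregular form first; otherwise apply the
    predictable regular '-ed' rule."""
    if verb in IRREGULAR_PAST:
        return IRREGULAR_PAST[verb]  # stored pattern: matched, not computed
    if verb.endswith("e"):
        return verb + "d"            # e.g. "love" -> "loved"
    return verb + "ed"               # e.g. "walk" -> "walked"

print(past_tense("go"))    # went
print(past_tense("walk"))  # walked
```

Children's overgeneralisations like "goed" show what happens when the rule fires before the stored pattern is matched.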

Then, by removing the processing model, I'll introduce a simpler one that needs no search.

I. Background: Processing Assumptions

Key elements of digital computers are:



Written by John Ball

I'm a cognitive scientist working on NLU (Natural Language Understanding) systems based on RRG (Role and Reference Grammar). A mouthful, I know!
