What is Brainhat?

Back in the day, there were two distinct paths for making computers "think."  The first was a family of inventions that came to include multi-layer perceptrons, neural nets, genetic programming and machine learning.  If you have ever taken a business or statistics course, you have probably performed a linear regression: graph a set of points and derive a linear equation that models their behavior.  Neural nets and their relatives are forms of curve-fitting very much like that, except that the equations for the curves are unknown.  These systems are trained, and the products of training are matrices of numbers.  Almost magically, they reproduce and even embellish the data they were trained on.  Just a few years ago, curve-fitting appeared as the science behind deciding what kind of music you'd like to hear, or what else you might want to buy.  It's gotten a lot more sophisticated, and is now called "AI."
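
To make the curve-fitting idea concrete, here is a minimal sketch in Python--illustrative only, not Brainhat code--that fits a line to noisy points with ordinary least squares:

    # Curve-fitting in miniature: derive y = m*x + b from noisy points.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 50)
    y = 2.0 * x + 1.0 + rng.normal(scale=1.5, size=x.size)  # noisy "data"

    m, b = np.polyfit(x, y, deg=1)  # least-squares fit of a degree-1 curve
    print(f"fitted line: y = {m:.2f}x + {b:.2f}")

Neural nets do the same kind of thing, except the curve has millions of parameters and no known closed form; the matrices of numbers are discovered by training.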

The other old-school approach to making computers "think" was the product of linguistics and epistemology.  Language loosely captures knowledge.  Using a computer to model language and interact with it was called Natural Language Processing (NLP).  In the 1970s, classical NLP was a popular pursuit, and it seemed like an academic avenue that would bear intelligent computing more quickly than curve-fitting.  But NLP was difficult, brittle and not scalable.  Excitement waned, and other forms of artificial intelligence, such as Expert Systems, took the limelight until their lights dimmed as well.  NLP has since come to mean other things: text summarization, translation, data mining, and command and control, as with Siri.

Brainhat is a lonely adult orphan of the second approach to making computers "think," thirty years in the making.  It treats human language like a programming language.  The problems of brittleness and scale are addressed with massive coarse-grained parallelism: a Brainhat instance can include an unlimited number of computers, each with its own knowledge domains.  And that same swarm of machines can serve many users, all at once, across the globe.  Where toy implementations of old-school NLP systems failed, Brainhat approaches the problem with Gestalt.  More is better!
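
As a toy illustration of language-as-programming-language (a sketch of the idea only, with a much-simplified representation; Brainhat's internals are richer than this), declarative sentences can become stored facts, and questions can become lookups:

    # Hypothetical sketch: statements program a knowledge store; questions query it.
    facts = {}

    def tell(subject, attribute, value):
        # "the ball is red" -> tell("ball", "color", "red")
        facts.setdefault(subject, {})[attribute] = value

    def ask(subject, attribute):
        # "what color is the ball?" -> ask("ball", "color")
        return facts.get(subject, {}).get(attribute, "I don't know.")

    tell("ball", "color", "red")
    print(ask("ball", "color"))  # -> red

Scale the store across a swarm of machines, each holding its own knowledge domains, and you have the shape of the parallelism described above.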

ChatGPT and other generative AIs are from the curve-fitting family.  They are trained, intensely capable, but static.  Brainhat can be a front-end to ChatGPT that adds short-term memory, the ability to learn and share new information instantly and globally, a means to learn and execute novel processes on-the-fly, a way to assume perspectives and identities, and the power to reason openly and to explain its conclusions.  NLP and Neural Networks--two paths to making computers "think"--meet again in the mid-2020s.
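
One way to picture the pairing (a hypothetical sketch; ask_llm is a stand-in for whichever generative-model API you use, and the pattern matching is deliberately naive): a symbolic front-end holds short-term memory and learns instantly, falling back to the static model for everything else.

    # Hypothetical front-end sketch: symbolic memory first, generative model second.
    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your generative-model client here")

    memory = {}  # short-term memory, updated instantly with no retraining

    def converse(utterance: str) -> str:
        words = utterance.rstrip("?.").split()
        if len(words) == 3 and words[:2] == ["what", "is"]:
            if words[2] in memory:
                return memory[words[2]]      # answered from learned facts
        elif len(words) == 3 and words[1] == "is":
            memory[words[0]] = words[2]      # e.g. "sky is blue" teaches a fact
            return "OK."
        return ask_llm(utterance)            # everything else goes to the model

    print(converse("sky is blue"))   # OK.
    print(converse("what is sky?"))  # blue -- learned an instant ago

The point is the division of labor: the static model supplies breadth, while the symbolic layer supplies memory, instant learning and an inspectable line of reasoning.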

There is much to cover--from the basics of coding and parsing, to setting up information sharing across a network of Brainhats, to creating and sharing knowledge domains with colleagues or the rest of the world.  My approach will be to go through each function, verify that it works, write a "how-to" and make a release.  Writing the code took place over decades.  Verifying and releasing it may take a little while.  Have patience and join me!  It's pretty cool.

-Kevin