What is this all about?
Copyright 2020, Kevin Dowd
What is this?
We're going to program with knowledge. It's likely to be different from other kinds of programming you are familiar with. Most languages, like C or Python, are procedural, meaning that code executes in sequence.
    integer i,j,k
    string c
    j = 2
    k = 1
    i = j + k
    c = "here is the answer"
    print c, " ", i
    end
Procedural code executes from the top to the bottom. It might also have flow control statements, like 'if' or 'while,' that cause branches. There may be subroutines, and perhaps there's an event loop. The program may jump from place to place, but execution will always be somewhere within the code.
Brainhat is dataflow programming, and dataflow programming is different. There are no flow control statements. There's no program, in the regular sense. Rather, the data themselves direct execution.
"the dog is hungry."
Think of this statement as an assignment. It adds to the collection of things that are known.
"if the dog is hungry then the dog wants to eat."
This statement is executable; it becomes part of the program. When the two statements find each other, there will be side-effects, such as:
"the dog wants the dog to eat."
How do the statements find each other? They're hashed. When hashes match, Brainhat executes. This, in turn, can generate more hashes, knowledge and other side-effects. Hashing is the key to recalling memories, testing for the truth or existence of knowledge and for triggering inferences.
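The hash-and-match mechanism can be sketched in a few lines. This is a minimal illustration, not Brainhat's actual implementation: the names KB, add_inference and assert_fact are hypothetical, and a statement is reduced here to a plain tuple before hashing.

```python
# A sketch of hash-based matching: knowledge and inference premises are
# hashed, and when an asserted fact's hash matches a premise, the
# inference fires and its consequence becomes new knowledge.

class KB:
    def __init__(self):
        self.facts = set()       # hashes of known statements
        self.inferences = {}     # premise hash -> consequence

    def add_inference(self, premise, consequence):
        # "if the dog is hungry then the dog wants to eat"
        self.inferences[hash(premise)] = consequence

    def assert_fact(self, fact):
        # "the dog is hungry" -- adding knowledge may fire an inference
        h = hash(fact)
        self.facts.add(h)
        if h in self.inferences:
            side_effect = self.inferences[h]
            self.assert_fact(side_effect)   # consequences are knowledge too
            return side_effect
        return None

kb = KB()
kb.add_inference(("dog", "is", "hungry"), ("dog", "wants", "dog to eat"))
result = kb.assert_fact(("dog", "is", "hungry"))
```

Notice that firing an inference asserts the consequence back into the knowledge base, which may in turn match further premises — the cascade of side-effects described above.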
Procedural programs are deterministic. But in Brainhat, where the programming can be modified by additional input, the answer can change. Moreover, two copies of Brainhat running side-by-side might provide different results. This is because all input is interpreted against a context, and contexts can differ. If you say "the dog is hungry," I will understand what you mean. But the resulting knowledge probably won't be the same as yours. That's because I understand "the dog is hungry" in my context, using my image of a dog and what a dog likes to eat. You understand it in your context. My dog might be brown; yours could be tan. If you had said "the tan dog is hungry," I would have had an entirely different understanding. My dog is brown. Ergo, you must be talking about a different dog. That's the tricky part, and the fun of it all: human language makes knowledge portable, but it leaves out the details.
Here's how "the dog wants the dog to eat" looks inside Brainhat:
    Root
      SUBJECT: dog
      VERB: to want
      OBJECT: Root
                SUBJECT: dog
                VERB: to eat
This is a knowledge data structure (called a Complex Concept or CC). It's very much like a diagrammed sentence that you might have created in grammar school English. However, unlike a diagram or the variable j, this data can be modified by changes in the context around it. If we later learn that the dog is brown, this data structure changes too.
It's not all chaos; there is a great deal of formal input processing for shaping input and interpreting it correctly. A number of post-processing steps seek to discover patterns in the context, to pro-actively recall memories, and to exercise inferences.
To read human language, Brainhat has to recognize the tokens (words) and it has to make sense of their order. Tokens and order are prescribed by a vocabulary and a grammar. The vocabulary says what the tokens can be. The grammar says how they can be combined. The vocabulary and grammar create the possibilities for, and define the limits of, what Brainhat can understand.
Brainhat's vocabulary is defined hierarchically. A poodle is a dog, a dog is a pet, a pet is an animal, and so on. Concepts for hamster, cat, dog and wildebeest are children (hierarchically) of the concept for animal. So, we may ask if a wildebeest is an animal and whether it shares some features in common with dogs, and the answer will be "yes." However, there is no upward path from wildebeest to pet. Therefore a wildebeest is not a pet.
    things
      vegetable
      mineral
      animal
        wildebeest
        pet
          cat
          hamster
          dog
            poodle

    actions
      to eat
      to be
      to sense
        to hear
        to see

    adjectives
      happy
      pretty
      color
        red
          pink
          scarlet
        blue
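The upward-path test the taxonomy supports can be sketched briefly. This is an illustrative model, not Brainhat's internals: the parents table and the is_a helper are hypothetical names, and concepts are plain strings.

```python
# A minimal taxonomy walk, assuming each concept stores its parents.
# "Is a wildebeest an animal?" succeeds if any upward path reaches
# the ancestor; "is a wildebeest a pet?" fails because no path does.

parents = {
    "poodle": ["dog"], "dog": ["pet"], "cat": ["pet"], "hamster": ["pet"],
    "pet": ["animal"], "wildebeest": ["animal"], "animal": ["things"],
}

def is_a(child, ancestor):
    # climb every upward path; succeed if any reaches the ancestor
    if child == ancestor:
        return True
    return any(is_a(p, ancestor) for p in parents.get(child, []))
```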
A vocabulary definition appears below. This definition has multiple synonyms: "hot dog", "hot dogs" and the name "hotdog-1". To reference this definition, one could refer to it by any of the synonyms, e.g. "the dog ate a hot dog" or "the dog ate a hotdog." [footnote: You'll notice that we've grouped plural and singular forms together. This is not a requirement; the vocabulary can be built so that singular and plural are distinct. For the time-being, and for simplicity, we will work with combined forms.]
    define  hotdog-1
            label           hot dogs
            label           hot dog
            label           hotdogs
            label           hotdog
            orthogonal      food-1
            child-of        food-1
When a word has two or more distinct meanings, there may be two or more definitions. For example, a ball is round, a ball has the quality of being a certain color, a "ball" is a toy. A "ball" might also be a formal dance, with an orchestra and glass slippers.
    define  ball-1
            label           balls
            label           ball
            child-of        toy-1
            wants           color-1
            wants           size-1
            related         round-1
            related         play-1
            wants           shape-1

    define  ball-2
            label           balls
            label           ball
            child-of        party-1
            related         loud-1
            wants           volume-1
To tell the two forms of ball apart, we may add hints to the definitions. If we say "the ball is red," chances are that we'll get the correct sense of the word "ball" because of the hint that says "ball-1 wants color-1". Once Brainhat has been processing for a while, reliance on hints in the vocabulary becomes less important. Instead, Brainhat will look to memories and context when trying to determine the sense of a word.
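Hint-driven sense selection can be sketched as a simple scoring pass. This is an assumption-laden toy, not Brainhat's disambiguator: the senses table and pick_sense are hypothetical, and "context features" stand in for the surrounding words.

```python
# Hypothetical word-sense scoring: each sense lists what it "wants",
# and the sense whose wants best match the surrounding input wins.

senses = {
    "ball-1": {"wants": {"color", "size", "shape"}},   # the toy
    "ball-2": {"wants": {"volume"}},                   # the formal dance
}

def pick_sense(word_senses, context_features):
    # count how many hinted categories appear in the input context
    def score(sense):
        return len(word_senses[sense]["wants"] & context_features)
    return max(word_senses, key=score)

# "the ball is red" supplies a color, favoring the toy sense
chosen = pick_sense(senses, {"color"})
```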
Grammar defines how the vocabulary elements can be combined. Consider the statement: "I hear the mailman." The words "I" or "mailman" may play the part of a subject or object. The word "hear" can only take the part of a verb. Using a grammar rule like this, we can recognize "I hear the mailman":
    grammar_rule = subject + verb + object
Brainhat tags parts of speech--the subject, verb and object--and places them into a data structure that represents the statement. Once the knowledge is captured, the original statement is no longer needed.
    Root
      SUBJECT: I
      VERB: to hear
      OBJECT: mailman
In Brainhat terminology, each of the vocabulary elements--"I", "to hear" and "mailman"--is called a concept. Any combination of concepts is called a complex concept or CC. Compiling the statement "I hear the mailman" produces a CC data structure that resembles the diagram, above. A CC can be indexed, stored, recalled, compared and transformed:
    >> i hear the mailman
     You do hear the mailman.
    >> what do i hear?
     You do hear the mailman.
    >> do i hear the mailman?
     yes. You do hear the mailman.
    >> what do i do?
     You do ask do You hear the mailman.
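The subject + verb + object rule can be sketched as a toy recognizer. This assumes a tiny hand-built lexicon that tags each word with the roles it may play; the names lexicon and parse_svo are illustrative, and none of this is Brainhat's real parser.

```python
# A toy parse of "subject + verb + object". The result mirrors the CC
# structure above: parts of speech tagged and placed in a record, after
# which the original word order is no longer needed.

lexicon = {
    "i": {"subject", "object"},
    "mailman": {"subject", "object"},
    "hear": {"verb"},
}

def parse_svo(tokens):
    # drop articles, then try to fit the remaining words to the rule
    words = [t for t in tokens if t not in ("the", "a", "an")]
    if len(words) != 3:
        return None
    subj, verb, obj = words
    if ("subject" in lexicon.get(subj, set())
            and "verb" in lexicon.get(verb, set())
            and "object" in lexicon.get(obj, set())):
        return {"SUBJECT": subj, "VERB": verb, "OBJECT": obj}
    return None

cc = parse_svo("i hear the mailman".split())
```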
Returning to the definition of the concept hotdog-1: as a child of concept food-1, the concept hotdog-1 becomes part of a class that may include hamburgers, potato chips and chicken parmigiana, provided that they are also children of food-1.
    define  hotdog-1
            label           hot dogs
            label           hot dog
            label           hotdogs
            label           hotdog
            label           weiner
            orthogonal      food-1
            child-of        food-1

    define  hamburger-1
            label           burger
            label           hamburger
            orthogonal      food-1
            child-of        food-1
Hot dogs are food. Hamburgers are food. But hamburgers are not hotdogs. Indicating orthogonality makes it possible to tell them apart. The directive "orthogonal food-1" instructs Brainhat that hotdogs and hamburgers are exclusively different from other concepts that are also declared orthogonal to food-1 (or one of its parents).
    >> what are hotdogs?
     hotdog is food.
    >> are hotdogs hamburgers?
     no. hotdog is not hamburger.
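The effect of the orthogonal directive can be sketched as a small predicate. The table layout and the helper name mutually_exclusive are assumptions for illustration, not Brainhat's internal format.

```python
# Orthogonality as a sketch: two distinct concepts that are both
# declared orthogonal beneath the same parent are mutually exclusive,
# even though both inherit from that parent.

orthogonal_under = {
    "hotdog-1": "food-1",
    "hamburger-1": "food-1",
}

def mutually_exclusive(a, b):
    # distinct concepts, each orthogonal beneath a common parent
    return (a != b
            and orthogonal_under.get(a) is not None
            and orthogonal_under.get(a) == orthogonal_under.get(b))
```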
Just as orthogonality helps to differentiate objects, it applies to attributes, too. [Footnote: an ATTRIBUTE is typically an adjective or prepositional phrase, though it can also be a binary large object (BLOB), such as a thumbprint, picture or voice scan.] The "red dog" is not the "blue dog," the "first dog" is not the "second dog", and the "dog in the restaurant" is not the "dog in the library."
    >> the first dog sees a cat
     the first dog sees a cat.
    >> the second dog sees a squirrel
     the second dog sees a squirrel.
    >> does the first dog see a squirrel?
     maybe. the second dog sees a squirrel.
Orthogonality may apply to whole clauses playing the part of subject or object. For instance "the party at the beach" may be orthogonal to "the funeral for my uncle." Tests for orthogonality of more complex CCs like these may take place by semantic evaluation.
Inheritance, Orthogonality and Truth
Computing with knowledge, we need to be able to ask whether something is true or false, or whether we don't know the answer. Truth is important for steering processing, and invoking inferences and memories. Truth motivates computation.
Brainhat's vocabulary is built of taxonomies. Every concept, except for the very top concepts, is the child of another. Some are the children of many. A toy is a thing; a ball is a toy. Inheritance is functionally broken, however, when orthogonality is detected. A blue ball is not a red toy, for example.
Just as individual concepts can be the children of others, whole CCs (complex concepts, described above) can be the children of other CCs. For one CC to be the child of another, the child CC must contain the same basic parts of speech as the parent. [Footnote: In Brainhat parlance, we say that two CCs have the same shape.] Furthermore, each of the child's constituent concepts, taken in pairwise comparison to the parent's, must be child concepts, or be the same.
To take a few examples, the CC on the left is a proper child of the CC on the right. Each of the concepts in the left-hand CC is a child of each of the concepts on the right, and the two CCs have the same shape.
    Root                      Root
      SUBJ: dog                 SUBJ: animal
      VERB: to see              VERB: to sense
      OBJ:  ball                OBJ:  toy
The next two CCs do not have a child/parent relationship because they have different shapes: the links differ, OBJECT versus ATTRIBUTE. They also lack pairwise inheritance in all but the SUBJECT.
    Root                      Root
      SUBJ: dog                 SUBJ: animal
      VERB: to see              VERB: to be
      OBJ:  ball                ATTRIBUTE: happy
In this next comparison, the shape is the same. The corresponding concepts are identical. In fact, the CCs are identical. A concept can be a child of itself; the child/parent relationship is valid.
    Root                      Root
      SUBJ: dog                 SUBJ: dog
      VERB: to see              VERB: to see
      OBJ:  ball                OBJ:  ball
Next, we have two CCs that do not share a child/parent relationship because at least one of the constituent parts-of-speech is not a child of the other, even though they have the same shape.
    Root                      Root
      SUBJ: dog                 SUBJ: dog
      VERB: to see              VERB: to eat
      OBJ:  ball                OBJ:  ball
In this next example, the constituent concepts of "dog sees red ball" ("dog", "to see" and "ball") are all proper children of the corresponding concepts in the putative parent. The shape is the same, too. However, the concepts occupying the OBJECT positions are orthogonal because they bear orthogonal ATTRIBUTEs ("red" versus "blue"). Thus, even though each of the concepts that make up the two CCs has a child/parent relationship, orthogonality between the concepts breaks the child/parent relationship between the CCs; when we test to see if the left-hand CC is a child of the right, the answer is NO.
    Root                      Root
      SUBJ: dog                 SUBJ: dog
      VERB: to see              VERB: to see
      OBJ:  ball                OBJ:  toy
        ATTRIBUTE: red            ATTRIBUTE: blue
Reviewing all of the examples again, we ask the question:
is the CC on the left a child of the CC on the right?
The answers are TRUE, MAYBE, TRUE, MAYBE and FALSE, in order. "TRUE" signifies that the relationship is established; one CC is the child of another. An answer of "FALSE" means that the two are the same shape, but in opposition, due to an orthogonality. "MAYBE" means that there is insufficient shared structure to answer the question. For instance, in the second example, the CC representing 'the dog sees a ball' neither is nor isn't a child of 'the dog is happy'. Accordingly, the answer is MAYBE; the two CCs can't be compared.
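The three-valued comparison can be sketched compactly. This is a simplification under stated assumptions: CCs are dicts keyed by role, attributes are flattened into roles, and the parents and orthogonal_pairs tables are toy stand-ins for the vocabulary.

```python
# A sketch of the three-valued CC comparison: TRUE when shapes match
# and every role inherits, FALSE when same-shaped roles are orthogonal,
# MAYBE when shapes differ or an inheritance path is missing.

parents = {"dog": "animal", "to see": "to sense", "ball": "toy"}
orthogonal_pairs = {frozenset(("red", "blue"))}

def concept_is_child(child, parent):
    # a concept is a child of itself or of anything up its parent chain
    while child is not None:
        if child == parent:
            return True
        child = parents.get(child)
    return False

def cc_is_child(child_cc, parent_cc):
    # different shapes: the CCs cannot be compared at all
    if set(child_cc) != set(parent_cc):
        return "MAYBE"
    verdict = "TRUE"
    for role in child_cc:
        c, p = child_cc[role], parent_cc[role]
        if frozenset((c, p)) in orthogonal_pairs:
            return "FALSE"      # same shape, but in opposition
        if not concept_is_child(c, p):
            verdict = "MAYBE"   # no inheritance path for this role
    return verdict

left = {"SUBJ": "dog", "VERB": "to see", "OBJ": "ball"}
right = {"SUBJ": "animal", "VERB": "to sense", "OBJ": "toy"}
```

Running the five examples above through such a routine reproduces the TRUE, MAYBE, TRUE, MAYBE, FALSE sequence.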
Other parts of speech within CCs will affect the child/parent comparison, too. Negation, for example: comparing "the ball is not blue" (child) against "the ball is red" (parent) will return TRUE. If we reverse them, testing to see whether "the ball is red" is a child of "the ball is not blue", the answer will be MAYBE.
Tense can be attached to verbs and attributes. This also affects child/parent comparisons. Take, for example, "the ball is blue" tested as the child of "the toy was red." The answer is MAYBE because the two CCs occur in different tenses. Orthogonality really only applies when attributes are in the present tense.
    >> the ball is red
     the red ball is red.
    >> debug eval the ball is red
     the red ball is red. "True"
    >> debug eval the ball was red
     the red ball was red. "Maybe"
    >> debug eval the ball is blue
     the red blue ball is blue. "False"
    >> debug eval the ball is not grey
     the red not grey ball is not grey. "True"
There are a number of situations where Brainhat may ask itself "is this CC a child of that CC?" or "is this CC true?" The answer may be unknown; "is this CC true?" could return MAYBE. Given an answer of MAYBE, Brainhat will often initiate a more aggressive search for the answer in what is called "self talk." Brainhat will reformulate a question to itself (in English) and process it as if it came from the user. Self-talked input may cause inferences to fire, memories to be restored or motives to be advanced--all of which can lead to a more definitive answer to "is this CC true?"
As Brainhat processes, it builds and updates running context and discourse buffers. These provide a background against which Brainhat can better understand what might come next. A simple illustration of the role of the context is the resolution of pronouns. If we refer to something by "he", "she" or "it", we're hoping that the program will recognize these as wild cards for some definite noun or noun phrase that came before.
    >> i see the dog
     You see the dog.
    >> it is hungry
     the hungry dog is hungry.
The "it" we are referring to in the statement above is an immediate reference to a dog, fetched from the context. An example matching a noun phrase:
    >> the princess loves luigi
     the princess loves luigi.
    >> mario hates it
     mario dislikes the princess loves luigi.
"W" words (who, what, when, where and why) can be satisfied by the context, too.
    >> the cat meows
     the cat meows.
    >> the dog barks
     the dog barks.
    >> what meows
     the cat meows.
    >> does something bark?
     yes. the dog barks.
The context grows as the program runs. Older knowledge is pushed to the back. New entries are added to the front. In this way, the resolution of ambiguous references evolves to favor the new. This shows the context following "the cat meows" and "the dog barks":
    >> debug
    Break in debug at the start:
    debug> xspeak 1
     You say the dog barks. the dog barks.
     You say the cat meows. the cat meows.
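The recency-ordered context and its use in pronoun resolution can be sketched as follows. The names remember and resolve_pronoun are hypothetical, CCs are plain dicts, and the gating of "he"/"she"/"it" by gender or number is omitted.

```python
# Context as a recency-ordered list: new knowledge goes to the front,
# and ambiguous references resolve in favor of the front.

context = []            # newest entries first

def remember(cc):
    context.insert(0, cc)

def resolve_pronoun(candidates=None):
    # return the most recent subject, optionally filtered to nouns
    # that could plausibly stand in for the pronoun
    for cc in context:
        subj = cc["SUBJECT"]
        if candidates is None or subj in candidates:
            return subj
    return None

remember({"SUBJECT": "cat", "VERB": "meows"})
remember({"SUBJECT": "dog", "VERB": "barks"})
it = resolve_pronoun()      # favors the newest entry
```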
The CCs of the context are the product of parsing, disambiguation and an election process that arrives at a single input interpretation. The product may be constructed from a combination of clean and dirty concepts. Clean concepts are those that are sourced from the vocabulary, but have not been modified. Dirty concepts are those that have been modified. An example of a clean concept is "dog." An example of a dirty concept is "happy dog." Below, we see two concepts for dog--one dirty and one clean:
    >> happy dog
     dog to be happy.
    >> break
    Break in debug at the start:
    debug> list dog
    Meme: default
     (clean text) dog->dog-1 (clean), MID 10000
     (context text) dog->dog-14c57 (dirty), MID 10000
For any vocabulary concept, there will always be a clean original. The original is never touched. Rather, a dirty copy is created and modified when necessary. During processing, Brainhat strives to create as few dirty copies as possible. Two copies of "dog"--clean and dirty--may be sufficient for the life of a session. A case where Brainhat will create multiple dirty copies is where an orthogonality is detected.
    >> the happy dog has food
     the happy dog has food.
    >> the sad dog does not have food
     the sad dog does not have food.
    >> is the happy dog the sad dog
     no. the dog is not the happy dog.
    >> break
    Break in debug at the start:
    debug> list dog
    Meme: default
     (clean text) dog->dog-1 (clean), MID 10000
     (context text) dog->dog-17028 (dirty), MID 10000
     (context text) dog->dog-1f64f (dirty), MID 10000
Now, we see two dirty dogs: one is happy; one is sad.
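The copy-on-write behavior behind the two dirty dogs can be sketched as follows. Everything here is illustrative: the Concept class, the attribute helper, and the hex-suffixed names are assumptions that merely mimic the transcript's dog-17028 style of naming.

```python
# Copy-on-write concepts: the clean vocabulary original is never
# modified; attaching an attribute reuses a compatible dirty copy,
# and an orthogonal attribute ("sad" vs "happy") forces a new fork.

import itertools

counter = itertools.count(1)
orthogonal_pairs = {frozenset(("happy", "sad"))}

class Concept:
    def __init__(self, name, attrs=()):
        self.name = name
        self.attrs = set(attrs)

def attribute(clean, dirty_copies, attr):
    # reuse a compatible dirty copy; fork a new one on orthogonality
    for d in dirty_copies:
        if not any(frozenset((attr, a)) in orthogonal_pairs
                   for a in d.attrs):
            d.attrs.add(attr)
            return d
    fork = Concept(f"{clean.name}-{next(counter):x}", {attr})
    dirty_copies.append(fork)
    return fork

dog = Concept("dog-1")      # the clean original, never touched
dirty = []
happy = attribute(dog, dirty, "happy")
sad = attribute(dog, dirty, "sad")   # orthogonal -> a second dirty dog
```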