A little bit of hands-on
Copyright 2020, Kevin Dowd
Introductory Hands On
This chapter introduces Brainhat by example. As we look at examples, we will also have the opportunity to introduce terminology and discuss how knowledge is handled within Brainhat.
To run Brainhat from the command line, go to the directory where you have unpacked the program and enter "./run". The "run" script will check for changes you might have made in the configuration files, update as needed, and then start the program, like so:
% ./run
make: 'data' is up to date.
Initializing
>> hello
hello.
>> goodbye
goodbye!
%
Brainhat comes with a basic vocabulary and grammars that support all of the exercises we will present. You are encouraged to experiment as we proceed. Be aware that if something doesn't work the way you hope, it may be because the vocabulary or grammar don't currently cover it. We will talk about how to extend both much later on.
We refer to simple English as code. Once parsed, stored and indexed, the code becomes knowledge, and may be executable. The exact nature of the stored knowledge is affected by context, which is the product of code that came before.
In Brainhat, an individual knowledge entity is called a concept. A concept is similar to a vocabulary word. For example, "dog" is a concept:
>> dog
dog.
When we combine two or more concepts, we create a complex concept or CC:
>> dog is happy
happy dog is happy.
>> dog
dog is happy.
Attribute assignments are the combination of a concept that is a thing with a concept that is an attribute. Below, we build a dog with several attributes:
% ./run
>> dog is happy
happy dog is happy.
>> dog is hungry
happy hungry dog is hungry.
>> dog is big
hungry happy dog is large.
>> bye
Once we have a dog, we can ask questions about the dog's attributes.
% ./run
>> dog is happy
happy dog is happy.
>> dog is hungry
happy hungry dog is hungry.
>> is dog hungry?
yes. happy dog is hungry.
>> is dog happy?
yes. hungry dog is happy.
>> bye
Just as the concept for happy is an attribute, so is the concept for sad. These two concepts are orthogonal, meaning that you cannot attach both of them to a third concept at the same time. Orthogonality is useful because it makes it possible to differentiate things based on their attributes.
% ./run
>> dog is happy
happy dog is happy.
>> is dog happy?
yes. dog is happy.
>> is dog sad?
no. dog is happy.
>> bye
Attribute assignments can be explicit or implicit. The examples we have looked at so far demonstrate explicit assignments. Implicit attribute assignments occur when an attribute is adjacent to the subject. An example of an implicit assignment is the application of happy in the following example:
% ./run
>> happy dog is hungry
happy hungry dog is hungry.
>> is dog happy?
yes. hungry dog is happy.
>> is dog hungry?
yes. happy dog is hungry.
>> bye
Something to be careful about: the use of orthogonal implicit attributes can spawn multiple variants of a CC; though you may believe that you are dealing with just one copy of dog, you may find that you actually have two. Here's an example of how this can happen:
% ./run
>> happy dog is hungry
happy hungry dog is hungry.
>> sad dog is hungry
sad hungry dog is hungry.
>> describe dog
happy dog is hungry.
sad hungry dog is hungry.
>> bye
We have already seen that attributes sad and happy are orthogonal. The concept for dog cannot carry two orthogonal attributes at the same time and in the same tense; that is, we cannot have a "happy sad dog." Accordingly, the above code produces two CCs for dog--one that is happy, and one that is sad. This might be exactly what you want, in which case the example shows how it's done.
When making explicit attribute assignments, and when there is an orthogonality conflict, Brainhat will push the older attribute into the perfect past tense. In this way, a dog that was once sad may become happy:
% ./run
>> dog is sad
sad dog is sad.
>> dog is happy
past sad dog is happy.
>> bye
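The behavior above can be sketched as a small model. This is a hypothetical illustration, not Brainhat's actual implementation: attributes belong to orthogonality groups, and assigning a new attribute that conflicts with a present-tense one pushes the older attribute into the past.

```python
# Hypothetical sketch of orthogonal attribute assignment. The groups
# and the dict-based representation are illustrative only.

# Attributes in the same group are mutually orthogonal.
ORTHOGONAL_GROUPS = [{"happy", "sad"}, {"big", "small"}]

def orthogonal(a, b):
    """True if a and b cannot both hold in the present tense."""
    return a != b and any(a in g and b in g for g in ORTHOGONAL_GROUPS)

def assign(attributes, new_attr):
    """Assign new_attr in the present tense, pushing any orthogonal
    present-tense attribute into the past."""
    for entry in attributes:
        if entry["tense"] == "present" and orthogonal(entry["attr"], new_attr):
            entry["tense"] = "past"          # the dog *was* sad
    attributes.append({"attr": new_attr, "tense": "present"})

dog = []
assign(dog, "sad")
assign(dog, "happy")   # sad is pushed into the past
print(dog)  # [{'attr': 'sad', 'tense': 'past'}, {'attr': 'happy', 'tense': 'present'}]
```

Note that a non-orthogonal attribute such as hungry would simply accumulate alongside happy.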
Attributes can take the form of prepositional phrases, too.
% ./run
>> the dog is in the water.
the dog is in the water.
>> where is the dog.
the dog is in the water.
>> bye
As with simple attributes, prepositional phrases can be orthogonal. For example, the dog cannot be inside the house and in the yard at the same time. This is because the objects of the prepositions, house and yard, are orthogonal, but the preposition, in, is the same.
% ./run
>> dog is in the house.
dog is in the house.
>> is dog in the yard?
no. dog is in the house.
>> bye
We could keep the objects the same and vary the preposition and get a similar result. Here we find the orthogonality between the prepositions inside and outside:
% ./run
>> the dog is inside the house.
the dog is inside the house.
>> is the dog outside the house?
no. the dog is inside the house.
>> bye
As we have seen, orthogonal attributes can coexist if they are in different tenses. More precisely, orthogonal attributes can coexist as long as they are not both in the present tense. It is possible to have orthogonal attributes in future or past tenses, such as "the dog was happy" in combination with "the dog was sad" as both could have been true at different times, and so may co-exist. Here are some explicit attribute assignments in several tenses:
% ./run
>> i was sad.
You were sad.
>> i am angry.
You are angry.
>> i will be happy
You will be happy.
>> bye
Possession is another form of attribute based on the prepositional notion of belonging to, as demonstrated in the following example:
% ./run
>> the dog's ball is red.
dog's red ball belonging to dog is red.
>> the cat's ball is blue.
cat's blue ball belonging to cat is blue.
>> is cat's ball red?
no. the ball is blue.
>> bye
A name is another form of attribute. Names have the special property that they can be used in a position where a reference to a concept that is a thing would typically appear.
% ./run
>> the big dog's name is rover
large rover dog is rover.
>> the little dog's name is sparky
small sparky dog is sparky.
>> rover is hungry
large rover dog is hungry.
>> sparky is happy
small sparky dog is happy.
>> bye
Following the name assignment, we can use "rover" in place of "the large hungry dog" or "large dog" or "hungry dog."
Propositions (not prepositions)
In Brainhat, we refer to a statement that uses verbs other than to be as a proposition. Here's an example of a proposition. The verb is to want:
% ./run
>> the dog wants a bone.
the dog wants a bone.
>> does the dog want a bone?
yes. the dog wants a bone.
>> bye
Many complex concepts (CCs) have tense, number and person (TNP) attached. The tense, number and person are derived from the subject and a verb or auxiliary verb. The tense portion of a CC's TNP is one of past, present, future, conditional, future conditional, and so on. Here are a few examples of tense:
% ./run
>> i saw my mother.
You did see your mother belonging to You.
>> i see my country.
You see your country belonging to You.
>> i will see europe.
You will see europe.
>> bye
Auxiliary or helper verbs like do, did, will and would can also carry tense, like so:
% ./run
>> i saw the princess.
You did see the princess.
>> did i see the princess?
yes. You did see the princess.
>> do i see the princess?
maybe. You did see the princess.
>> bye
Tenses may be grouped for simplicity. For instance, the English gerund may be understood as the present tense, as in the following example:
% ./run
>> i am sleeping.
You sleep.
>> bye
One particular tense--the future imperfect--is programmed to be recognized by Brainhat post-processing as an opportunity for asking a question (if Brainhat doesn't already know the answer). This functionality is configurable.
% ./run
>> i might like french fries.
You might like fries. do You like fries?
>> bye
A CC's number is derived from the subject, and can be either singular or plural. For simplicity, most of the concepts within a Brainhat vocabulary distribution are in the singular, with plural forms as aliases for their singular counterparts, as demonstrated here. This, too, is configurable:
% ./run
>> the cats chase the dogs.
the cat chases the dog.
>> bye
A CC's person comes from the subject, too. I (or brainhat) is first-person; You (or speaker) is second-person; almost every other object is in the third-person. Combinations of TNP are derived from the components of the input and become part of the knowledge represented by the corresponding CC. Here, we demonstrate how the number and person found on the verb are subordinate to the number and person of the subject. In each case, we have used the verb infinitive, yet produce a sensible CC:
% ./run
>> i to be hungry.
You are hungry.
>> you to be happy.
I am happy.
>> the dog to eat his food.
the dog eats the dog's food belonging to the dog.
>> bye
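The rule that the subject, not the verb, supplies person and number can be sketched as follows. The tables and function names here are illustrative assumptions, not Brainhat's internals; the sketch conjugates "to be" purely from the subject, which is why an infinitive in the input still yields a sensible sentence.

```python
# Hypothetical sketch: derive person and number from the subject alone,
# then choose an agreeing form of "to be".

PRONOUN_TNP = {"i": (1, "singular"), "you": (2, "singular")}

def tnp_for_subject(subject):
    """Person and number come from the subject; other words default to
    third-person singular (plurals being aliased to singulars)."""
    return PRONOUN_TNP.get(subject, (3, "singular"))

def conjugate_to_be(subject):
    person, number = tnp_for_subject(subject)
    if person == 1:
        return "am"
    if person == 2:
        return "are"
    return "is" if number == "singular" else "are"

# "i to be hungry" -> first person forces "am", despite the infinitive.
print(conjugate_to_be("i"))    # am
print(conjugate_to_be("you"))  # are
print(conjugate_to_be("dog"))  # is
```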
Brainhat resolves pronouns from the immediate context. Gender is taken into consideration. Often, pronouns are ambiguous, in which case Brainhat attempts to resolve based on proximity or position.
% ./run
>> the princess is pretty.
the pretty princess is pretty.
>> mario is ugly.
ugly mario is ugly.
>> the food is delicious.
the delicious food is delicious.
>> she likes him to eat it.
she likes ugly mario eats the delicious food.
>> who is she?
the pretty princess to be pretty woman.
>> bye
In some cases, Brainhat will resolve anaphors. These are pronouns at the end of a statement that reference a noun at the beginning, such as:
% ./run
>> mario knows that he is happy
mario knows mario is happy.
>> bye
Sometimes, they're implied:
% ./run
>> mario wants to see the dog
mario wants mario to see the dog.
>> bye
As we have seen in the last few examples, the object in a proposition can be another proposition, as with "mario wants mario to see the dog", just above. Object propositions may also be referenced by (event) pronouns:
% ./run
>> the princess swam in the river.
the princess swam in the river.
>> mario liked it.
mario liked the princess swam in the river.
>> bye
Propositions built with stative verbs make an assertion about the subject. They are similar to attribute assignments made with the verb "to be". However, they describe an action performed by or on behalf of the subject; they don't bind the attribute to the subject. For example:
% ./run
>> mario seems happy
mario appears happy.
>> is mario happy
maybe. mario.
>> how does mario seem?
he seems happy.
>> bye
Later, we will see that implicit or explicit inferences provide a path to attach happy to mario when he seems to be happy. Or, on the contrary, inferences may provide a path to sad.
In the same way that a stative verb combined with an adjective says something non-binding about the subject, a stative verb combined with an object proposition makes the object non-binding. We say that the object proposition is in the subjunctive case. This will become clearer with an example:
% ./run
>> mario believes that the princess loves luigi
mario thinks the princess loves luigi.
>> does the princess love luigi?
maybe. I do not know.
>> does mario believe that the princess loves luigi?
yes. mario thinks the princess loves luigi.
>> bye
The fact that mario believes that the princess loves luigi doesn't necessarily make it so. What we do know is that mario believes it.
A little digression: thus far in our examples, Brainhat has believed everything I've said. Consider, however, that it could all be prefaced with "You say...". It's the subjunctive at work again, just like in the previous example. Why should Brainhat believe anything that I say just because I say it?
% ./run
>> the dog is happy
the happy dog is happy.
>> what do i say
You say the happy dog is happy.
>> bye
Brainhat believes what I say because that's the default setting. However, we can change it so that Brainhat registers what I say, but doesn't necessarily believe it. There is a debug flag, verbatim, that we will reset for this purpose:
% ./run
>> debug unset verbatim
>> the dog is happy
the dog is happy.
>> is the dog happy?
maybe. the dog.
>> do i say that the dog is happy?
yes. You say the dog is happy.
>> bye
Once Brainhat is free to believe as it wishes, it needn't trust the code from its interlocutor. In this example, Brainhat believes the opposite of what I tell it, based on an inference template:
% ./run
>> debug unset verbatim
>> if i say the dog is happy then the dog is sad
if You say the dog is happy then the dog is sad.
>> the dog is happy
the dog is happy. the sad dog is sad.
>> why
the sad dog is sad because You say the sad dog is happy.
>> bye
Concepts for things, attributes, verbs, etc., live in a taxonomically ordered vocabulary. A dog is a pet, a pet is an animal, and so on. At the same time, a dog is a mammal, a mammal is an animal, and so on. We say that pet and mammal are parents of dog and that dog is a child of mammal and pet. Each concept may have zero, one or more parents. The inter-relationships between parents and children can be arbitrary with the exception that no concept can be its own parent. A concept's relationships to its parents and children play a role in recognizing the concept as a specific instance of a general reference to its parent or vice versa.
% ./run
>> is a dog a pet?
yes. a dog is a pet.
>> is a dog an animal?
yes. a dog is a animal.
>> is a dog a mammal?
yes. a dog is a mammal.
>> is a dog a thing?
yes. a dog is a thing.
>> is a dog a fish?
no. a dog is not a fish.
>> bye
The answer to the last question, "is a dog a fish", is "no" because fish and dogs are orthogonal within the vocabulary. If I were to ask "is the dog a banana?", the answer would be "maybe." This may seem odd to you; a dog is certainly not a banana as we know it. However, we will see later that we can make a relationship between dogs and bananas by saying "the dog is a banana" or "the dog is not a banana", in which case the answer will be "yes" or "no."
Here's an important point: within Brainhat, yes means something is definitely the child of something else; no means something is definitely not the child of something else (due to orthogonality or knowledge to the contrary); and maybe really means that the two cannot be compared.
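These three-valued answers can be sketched against a toy taxonomy. The parent table and orthogonality list below are illustrative assumptions, not Brainhat's vocabulary files: "yes" when one concept descends from the other, "no" when the two are declared orthogonal, "maybe" when they simply cannot be compared.

```python
# Hypothetical sketch of the yes/no/maybe taxonomy answer.

PARENTS = {
    "dog": {"pet", "mammal"},
    "pet": {"animal"},
    "mammal": {"animal"},
    "fish": {"animal"},
    "animal": {"thing"},
    "banana": {"fruit"},
    "fruit": {"thing"},
}
ORTHOGONAL = [{"dog", "fish"}]   # declared in the vocabulary

def ancestors(concept):
    """All parents, grandparents, etc. of a concept."""
    seen, stack = set(), [concept]
    while stack:
        for p in PARENTS.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def is_a(child, parent):
    if parent == child or parent in ancestors(child):
        return "yes"
    if any(child in g and parent in g for g in ORTHOGONAL):
        return "no"
    return "maybe"   # the two cannot be compared

print(is_a("dog", "pet"))     # yes
print(is_a("dog", "fish"))    # no
print(is_a("dog", "banana"))  # maybe
```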
Whole CCs can be compared to other CCs as long as their piecewise components (subject, verb, attribute, object, whatever) have a homologous superior/inferior parent/child relationship.
% ./run
>> dog sees cat.
dog sees cat.
>> does dog see animal
yes. dog sees cat.
>> does something see cat
yes. dog sees cat.
>> does something sense something?
yes. dog sees cat.
>> does dog see fish?
maybe. I do not know.
>> bye
We start with the proposition "dog sees cat." We ask "does something sense something?", and this returns true because each of the concepts in the question (something, sense and something) is a parent of the corresponding concept in the answer (dog, sees and cat) and because "something senses something" is the same shape as "dog sees cat." That is: "thing verb thing."
The last question, "does dog see fish?" comes back "maybe" because fish is not a child of cat, therefore "dog see fish" is not a child of "dog sees cat." Accordingly, we never get to the point where we would ask if fish and cat are orthogonal because the two CCs don't have a parent/child relationship in the first place. The two CCs cannot be compared. This is what we want.
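The piecewise comparison can be sketched with the same toy taxonomy approach. Again, the parent table and function names are illustrative assumptions: a question answers a known CC only if every component of the question is the same as, or a parent of, the corresponding component of the knowledge.

```python
# Hypothetical sketch of piecewise CC comparison over
# (subject, verb, object) triples.

PARENTS = {
    "dog": {"animal"}, "cat": {"animal"}, "fish": {"animal"},
    "animal": {"something"},
    "see": {"sense"},
}

def subsumes(general, specific):
    """True if `general` is `specific` or one of its ancestors."""
    if general == specific:
        return True
    return any(subsumes(general, p) for p in PARENTS.get(specific, ()))

def answers(question, knowledge):
    """Compare the triples component by component."""
    return all(subsumes(q, k) for q, k in zip(question, knowledge))

known = ("dog", "see", "cat")
print(answers(("something", "sense", "something"), known))  # True
print(answers(("dog", "see", "animal"), known))             # True
print(answers(("dog", "see", "fish"), known))               # False -> "maybe"
```

The final comparison fails before any orthogonality test is reached, which is why Brainhat answers "maybe" rather than "no."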
An intransitive verb is one that takes no object. Examples of intransitive verbs are to sleep or to wonder. One does not say "I sleep my lunch" or "I wonder the banana." But, a reflexive "I sleep myself" almost makes sense.
A transitive verb, on the other hand, is one that takes a subject and an object. Examples of transitive verbs are to eat or to want, as in "I eat my lunch" or "He wants a doughnut." Transitive verbs can be used intransitively, too, with the object implied, such as I eat or he sings.
In Brainhat, intransitive objects are represented by the concept things, which is at the top of the taxonomy for all things--dogs, french fries, potato bugs. So, when you input "mario sleeps", the CC that's generated captures "mario sleep things." Likewise, when you ask "does mario sleep?", the question is internally represented as "does mario sleep things?"
% ./run
>> mario sleeps
mario sleeps.
>> luigi talks
luigi talks.
>> the princess sees bananas
the princess sees banana.
>> does mario sleep?
yes. mario sleeps.
>> does luigi talk?
yes. luigi talks.
>> does the princess see?
yes. the princess sees banana.
>> does mario sleep bananas?
maybe. I do not know.
>> bye
In Brainhat, the assignment of one concept as the child of another is termed an equivalence. Examples are "the dog is a banana" or "mario is the king". When "mario is the king" is input, it causes the concept for mario to become a child, hierarchically, of king. The term equivalence isn't perfect; the two concepts are not made equivalent; one becomes subordinate to the other. However, once we say "mario is the king", all of the parents of king become parents of mario.
% ./run
>> mario is an onion.
mario is a onion.
>> is mario a vegetable?
yes. he is a onion.
>> bye
We can also say that mario is not something, like so:
% ./run
>> mario is not the king.
mario is not the king.
>> is mario king?
no. he is not the king.
>> bye
Orthogonality applies to the relationships formed by equivalences:
% ./run
>> is a fruit a vegetable?
no. a fruit is not a vegetable.
>> mario is a fruit
mario is a fruit.
>> is mario a vegetable?
no. he is not a vegetable.
>> bye
Explicit inference templates are executable code stored in the subjunctive. Elements of the template--subject, verb, attribute, etc.--are unresolved references to concepts that may fill their spots if the inference is executed. Here's a simple example:
% ./run
>> if i love you then you love me.
if You love Me then I love You.
>> i love you
You love Me. I love You.
>> why?
I love You because You love Me.
>> bye
Here's an inference template with more complicated substitutions:
% ./run
>> if an animal sees something then an animal eats something.
if a animal sees thing then a animal eats thing.
>> bird sees a grape
bird sees a grape. bird eats a grape.
>> cat sees a bird
cat sees a bird. cat eats a bird.
>> bye
Orthogonality makes it possible to build inference templates that would otherwise be difficult to express:
% ./run
>> if a dog sees another dog then the dog barks at the other dog.
if a dog sees another dog then the dog barks at the dog.
>> a dog's name is fido.
fido is fido.
>> a dog's name is sparky.
sparky is sparky.
>> fido sees sparky.
fido sees sparky. fido barks at sparky.
>> bye
In this case, the concepts another and other signal orthogonality between the second dog and the first dog. By this, we can reference two dogs in an inference template and be able to tell them apart at runtime. Other orthogonality clues can come from first versus second or this versus that.
When the result of one inference provides the input for the next, we say that the inferences form a chain. Chaining is applicable to explicit and implicit inferences alike. Here's an example with explicit inference chaining:
% ./run
>> if i am happy then you are happy
if You are happy then I am happy.
>> if i see a toy then i am happy
if You see a toy then You are happy.
>> if i look then i see a ball
if You look then You see a ball.
>> i am looking
You do look. You see ball. You are happy. I am happy.
>> why
I am happy because You are happy.
>> why
You are happy because You see ball.
>> why
You see ball because You look.
>> bye
An explicit inference template is executable code. Execution of an inference depends on the conditions of the template being true. When Brainhat recognizes knowledge in the context that may invoke an inference, it substitutes elements of that knowledge into the inference template and tests for truth. To be successful, the substituted concepts must be children of the corresponding concepts of the template and there must be no orthogonalities.
% ./run
>> if i see a red house then i am happy.
if You see a red house then You are happy.
>> i see a house.
You see a house.
>> Am I happy?
maybe. You.
>> i see a blue house.
You see a blue house.
>> Am I happy?
maybe. You.
>> i see a red house.
You see a red house. You are happy.
>> bye
The inference does not fire the first time because "I see a house" does not satisfy "I see a red house." The inference does not fire the second time because "I see a blue house" does not satisfy "I see a red house." Moreover, it's orthogonal. When we say "I see a red house," the condition is satisfied and the inference fires.
% ./run
>> if i see a red house then i am happy.
if You see a red house then You are happy.
>> i see a house
You see a house.
>> debug eval i see a house
You see a house. "True"
>> debug eval i see a red house
You see a red house. "Maybe"
>> bye
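The red-house example can be sketched as a condition test. The taxonomy entries and function names below are illustrative assumptions, not Brainhat's data: a fact satisfies the condition only if it is the condition or a child of it; an orthogonal fact is definitely "no"; anything else is "maybe", and in either of the latter cases the inference does not fire.

```python
# Hypothetical sketch of explicit inference firing.

PARENTS = {"red house": {"house"}, "blue house": {"house"}}
ORTHOGONAL = [{"red house", "blue house"}]

def satisfies(fact, condition):
    """'yes' if fact is the condition or a child of it, 'no' if the
    two are orthogonal, otherwise 'maybe'."""
    if fact == condition or condition in PARENTS.get(fact, ()):
        return "yes"
    if any(fact in g and condition in g for g in ORTHOGONAL):
        return "no"
    return "maybe"

condition, conclusion = "red house", "You are happy"

for seen in ("house", "blue house", "red house"):
    verdict = satisfies(seen, condition)
    if verdict == "yes":
        print(f"i see a {seen} -> {conclusion}")     # the inference fires
    else:
        print(f"i see a {seen} -> no inference ({verdict})")
```

Note the asymmetry: a plain "house" is a parent, not a child, of "red house", so it evaluates to "maybe" rather than satisfying the condition.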
Implicit chaining occurs when memories are restored. We will look at implicit inferences when we explore memories, later on.