
Overview of operations inside Logicmoo's bot

There are multiple MUDs running inside a single Virtual Robot:

- MUD#1 - Space for learning from VirtualTrainer#1
- MUD#2 - The bot's vantage point in a simplistically imagined world
- MUD#3 - The MUD that real players play in

And one bot per MUD:

- BOT#1 - Bot in MUD#1 that is tethered to a VirtualTrainer
- BOT#2 - Bot in MUD#2 that records information from BOT#1
- BOT#3 - Bot in MUD#3, the Virtual Robot interacting with actual humans

We designed MUD#1 to transfer its logic to MUD#2 without too much fuss. We tethered BOT#1 to a VirtualTrainer in MUD#1 so that wherever the VirtualTrainer goes, the bot (LM489) goes. LM489 bots have an empty MUD that serves as their imagination, called MUD#2. MUD#1 is the playable PrologMUD. MUD#2 is a version of PrologMUD that is not playable; it only requires "sequences of percepts" and creates minimally stateful objects. Each time a "sequence" happens in MUD#1, MUD#2 records it as a "valid mud sequence". The idea here is that MUD#2 is slowly "programmed" as interaction in MUD#1 takes place. This will lead to many misunderstood, yet possibly "useful", simplifications.
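To make the recording idea concrete, here is a minimal Prolog sketch (PrologMUD being Prolog-based) of MUD#2 storing sequences observed in MUD#1. The predicate names are invented for illustration and are not LOGICMOO's actual API.

```prolog
% Minimal sketch (all names invented): MUD#2 records every percept sequence
% seen in MUD#1 as a "valid mud sequence" (VMS), timestamped so it can expire.
:- dynamic valid_mud_sequence/2.

% observe_sequence(+Percepts): call whenever a sequence completes in MUD#1.
observe_sequence(Percepts) :-
    get_time(Now),
    assertz(valid_mud_sequence(Percepts, Now)).

% Example: the trainer-tethered BOT#1 reports what it just saw.
% ?- observe_sequence([take(food, backpack), eat(food)]).
```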

CAS-AM:
Recap of what I'm guessing is happening: the NARRATION COMPONENT. We tether a bot to a human player in MUD#1 (the shared game world), so the bot is always with the player to create and document the map in MUD#2. It records the map and important events, but not in much detail.

Maybe we need LM489 (the AGI), the Bot (who interacts and lives with players), and the bots (which live inside the Bot and do unconscious things). In a world of spies, you have the spies themselves, who monitor things and sometimes do a small job (e.g. the Narration Component); their supervisor contact, who decides when they should take important actions (also the Narration Component); their Boss, who told them the goal for the political arena (???); and that supervisor's boss, the Director (LM489). The lower-level agents don't really make those big calls (the plot of every spy movie is a lower-level agent trying to act like a higher-level agent?). It's a hive mind, and you're trying to describe the jobs of a hive mind. And this is the part that tries to make sense of sensory input.

In other words, the "sequences" very quickly constitute "valid MUD sequences" (VMSes) in MUD#2. MUD#2 is extremely permissive and allows every sequence it has seen to magically take place. After all, it is not reality; it is the imaginary world for LM489. (Note: MUD#2 data expires rather quickly.) The idea is that eventually LM489 will attempt to model what worked in MUD#2 inside of MUD#1, which mostly should cause errors because MUD#1 follows actual rules. When errors take place, LM489 has to correct these things. LM489 has some canned correction dialogs (CCDs) built in already that are programmed/vetted in terms of what is already known to work in both MUDs. (CCDs themselves are NarrationPLLs.) The idea here is that CCDs will train LM489 to interact and become an expert at using MUD#1. Before getting our whatnots in a bunch: the point here is not to emulate anything at all close to the real world, or even real-world-type learning, but to ensure we have at least a "transfer model" for transferring bits and pieces between uninformed mind-slices that constitute the make-up of LM489's total mind areas. These are several simpletons, each of whom has only [mostly unshared] sub-slices of MUD#2.
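A hedged sketch of the transfer-back-and-correct loop just described: replay a MUD#2 sequence against MUD#1, and fall back to a CCD when it fails. All predicate names here are assumptions for illustration, not the real LOGICMOO interface.

```prolog
% Sketch only; predicate names are assumptions, not the real LOGICMOO interface.
:- dynamic replay_in_mud1/1.   % stub: succeeds only when MUD#1's real rules accept it

% A canned correction dialog (CCD), modelled on the example given later on this page.
open_ccd(Seq) :-
    format("I expected ~w to work, but it didn't, may we discuss ~w?~n", [Seq, Seq]).

% Replay a sequence learned in the permissive MUD#2 against MUD#1; if MUD#1
% rejects it (the usual case at first), fall back to a correction dialog.
try_transfer(Seq) :-
    (   replay_in_mud1(Seq)
    ->  true
    ;   open_ccd(Seq)
    ).
```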

CAS-AM: Recap of what I'm guessing is happening: MUD#2 records the map and the world's "valid mud sequences" (VMSes) without judgment. This data is being saved and serialized into PLL. When the two different MUDs are compared there will be incongruencies, and there need to be, as LM489 will seek to correct these errors in its internal world. LM489 has some "canned correction dialogs" (CCDs) built in already that are programmed/vetted in terms of what is already known to work in both MUDs. (CCDs themselves are NarrationPLLs.) Each player-connected bot is actually made of many smaller bots that work in unison, with each tiny bot tasked with successfully completing a relatively simple action (like navigation or an act like eating). (I'm imagining this like making 9-year-olds do the daily activities of congress; it's not great, but to an alien it would look equivalent. Then you can get a review board at a higher level to decide what the worst mistake was.) Our only goal above was to prove to our project that we can get our transfers written in PLL, and to test that the PLLs, when broken, have CCDs that mitigate the breakage with repairs. (This repair process is a developmental milestone.)

The bot's next goal is to convert as many VMSes (Valid MUD Sequences) as possible into VLSes (Valid Language Sequences). Next we up our game by having a human give a description of the VMS while it is happening. This means that as the human moves around and acts in MUD#1 (which sends percepts), the human announces what they want to do: "I am going to eat because I am hungry." Then they perform actions in MUD#1: "Take food from the backpack. Eat the food. I am no longer hungry." LM489 sees the food appear in the human's hand. LM489 makes food appear in a human's hand in MUD#2. LM489 sees the eating act. LM489 hears that the human is no longer hungry. (MUD#2 replicates the narrative speaking of the human as it is going on, as well as all the changes.) LM489 already has a model that allows it to replace the human with anybody in MUD#2. Sound like more canned stuff? Yes, we are still cheating. What we are doing in this phase, though, is ensuring that we can have Canned Monkey Scripts (CMSes), since we will have several a-priori CMSes (CMSes are again NarrationPLLs), which are called proto-memories and are analogous to the mechanism in animals that allows them to store very simple behaviours like walking. Why we adorn these with announcements like "I am going to eat because I am hungry" is that we are creating an infrastructure that operates from an "internal dialog" and not from any other seedlings.
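As a hypothetical illustration, the eating example above could be stored as a Canned Monkey Script and replayed with the human swapped for any agent, roughly like this. `cms/4` and `replay_cms/3` are invented names, not the real NarrationPLL format.

```prolog
% Illustrative only: a Canned Monkey Script (CMS) pairs a spoken intent, an
% action sequence, and a spoken outcome, as in the eating example above.
cms(eat_when_hungry,
    says('I am going to eat because I am hungry'),
    [take(food, backpack), eat(food)],
    says('I am no longer hungry')).

% MUD#2 can replay the script with the human swapped for any agent.
replay_cms(Name, Agent, [Agent:Intent | Rest]) :-
    cms(Name, Intent, Actions, Outcome),
    findall(Agent:Act, member(Act, Actions), Acted),
    append(Acted, [Agent:Outcome], Rest).

% ?- replay_cms(eat_when_hungry, joe, Trace).
```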

So far we have described a system that can look at MUD#1 and fully transfer the low-level description into the MUD#2 rule base, using three types of PLLs:

- VMS - Valid MUD Sequences (observed in MUD#1s and transferred to MUD#2s)
- CMS - Canned Monkey Scripts (VMSes that have spoken intents and outcomes)
- CCD - Canned Correction Dialogs (when a MUD#2 VMS won't transfer correctly back to MUD#1, we use CCDs to correct it)

Example: "I expected $A to work, but it didn't, may we discuss $A?" Wait for confirmation. Ask initial categorizations: "Is $A an action I can do?" Store the results of A. Convert this to a STRIPS notation:

    (always-rule
      (preconds (At ?User1) (Unknown ?ConceptA))
      (postconds (stable-system) (knownAbout ?ConceptA)))
    ...

Before getting too deep we will attempt to summarize the rest of the non-technical overview:

- How VMSes are combined
- How CMSes' spoken intents/outcomes are added to narrations, turning VMSes into what we now call VLSes (Valid Language Sequences)
- How CCDs are adapted to work for VLSes (Valid Language Sequences)

This correction process happens by having semi-canned dialogs with a ...

Non-Technical Overview

The restriction of using limited resources

The restriction of using limited resources is part of Logicmoo's AGI implementation (AIKR - the Assumption of Insufficient Knowledge and Resources). It not only brings the theory and implementation of the system closer together, but also brings together artificial and biological systems. Firstly, there is the restriction of having limited physical resources (in terms of time and memory), and secondly, the limitation of the information (in amount and truthfulness) the system can perceive. AIKR doesn't really allow using a conventional (axiomatic) logic to build such a system, at least not at its conceptual level. Regarding the inverse optics problem (which can be generalized to any type of perception, physical measurement, or perceived information), there is no way to track back to the source of the received information by means of any logical transformation on it. There are many other phenomena, such as the decision problem (the Entscheidungsproblem), where a first-order logic statement is not always (universally) provable by an axiomatic logic (a finite set of axioms), or the implication paradox, where using irrelevant data might be intuitively problematic although it gives us correct results.

In artificial intelligence, a procedural reasoning system (PRS) is a framework for constructing real-time scripted solutions for completing tasks. A user application, when defined, provides the PRS system with a set of knowledge areas. Each knowledge area is a piece of procedural knowledge that specifies how to do something, e.g., how to navigate down a corridor or how to plan a path (in contrast with robotic architectures where the programmer just provides a model of what the states of the world are and how the agent's primitive actions affect them). Such a program, together with a PRS interpreter, is used to control an agent.

An interpreter is responsible for maintaining current beliefs about the world state, choosing which goals to attempt to achieve next, and choosing which knowledge area to apply in the current situation. How exactly these operations are performed might depend on domain-specific meta-level knowledge areas. Unlike traditional AI planning systems that generate a complete plan at the beginning, and replan if unexpected things happen, PRS interleaves planning and doing actions in the world. At any point, the system might only have a partially specified plan for the future.
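A toy sketch of such a PRS-style interpreter loop, under the assumption that beliefs, goals, and knowledge areas are simple Prolog facts; none of these names come from LOGICMOO's actual PRS.

```prolog
% Toy PRS-style loop (all names invented): keep beliefs and goals as dynamic
% facts, pick an applicable knowledge area (KA), run its body, drop met goals.
:- dynamic belief/1, goal/1.

belief(at(corridor)).
goal(at(kitchen)).

% ka(Name, Goal, PreconditionBeliefs, BodySteps): a piece of procedural
% knowledge, e.g. "how to navigate down a corridor".
ka(walk_corridor, at(kitchen), [at(corridor)],
   [retract(belief(at(corridor))), assertz(belief(at(kitchen)))]).

prs_step :-
    goal(G),
    ka(_Name, G, Pre, Body),
    forall(member(P, Pre), belief(P)),      % KA applies in the current situation
    forall(member(Step, Body), call(Step)), % interleave acting with deliberation
    ( belief(G) -> retract(goal(G)) ; true ).
```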

PRS is based on the State, Goal, Action framework for intelligent agents. State consists of what the agent believes to be true about the current state of the world, Goals consist of the agent's goals, and Actions consist of the agent's current plans for achieving those goals. Furthermore, each of these three components is typically explicitly represented somewhere within the memory of the PRS agent at runtime, which is in contrast to purely reactive systems, such as the subsumption architecture. Our PRS system is in the business of organizing these:

Pseudonyms

| Item | Pseudonyms | Examples |
| --- | --- | --- |
| Actions (compound and simple) done by agents | Intentions, Plans, Action primitives | Taking a bite of food, Chewing food |
| Exemplars: objects and structures | Types of Objects, Agents, Smells, Tastes | Joe. Food. Myself. You. |
| States: properties of Exemplars | World State | Joe has some food. |
| Percepts: used to detect the above | Observations | Joe takes a bite of food |
| Goals by agents | Desires | Joe wants to be full |
| Beliefs about states of how the world is presently arranged | Imaginary world | Joe is hungry; Joe took a bite of food |
| Event Frames: narratives that may contain any or all of the above | Frames, Events, Memories | (All of the above) + Joe is a person who was hungry and then he took a bite of food |
| Explanation Narratives: rulify the why, procedural nature, and nuances of the above into containments and sequences | Text, PPLLL, English, PDDL | (All of the above) + Joe is a person who was hungry, so that is why he took a bite of food |
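Purely as an illustration of the table above, those item kinds might be written down as Prolog facts like the following (every functor here is invented):

```prolog
% Purely illustrative Prolog renderings of the rows above (functors invented).
exemplar(joe, agent).
exemplar(food, object).
state(has(joe, food)).                      % States: properties of Exemplars
percept(observed(take_bite(joe, food))).    % Percepts: used to detect the above
goal(joe, full(joe)).                       % Goals / Desires
belief(hungry(joe)).                        % Beliefs about the present world
event_frame(ef1, [hungry(joe), take_bite(joe, food)]).
explanation(ef1, because(take_bite(joe, food), hungry(joe))).
```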

Psychology Section:

The programs SAM and PAM by Roger Schank were among the first viable starts of AI. His theory may be viewed as one version of the Language of Thought hypothesis (which Schank calls 'Conceptual Dependency' theory, abbreviated as CD). Although much of his work was based on natural language "understanding", he defined, at minimum, what the tenets of understanding might look like. From the very start, opponents will use the Chinese Room argument against this language. I'll ignore this because we've agreed "Artificial" is fine when it comes to machine intelligence. Those who have seen the source code of SAM realize that it is a system whose job is to find a "best fit" on programmed patterns. What it does is create a language in which that "best fit" can exist. To really take that initial program into our world, millions of facts and rules are required to be put into the system. Before we attempt to add these millions of facts and rules, we have to define a very clear meta-language (above C.D.).

"Inner self" exists in some (game?) world that is separate from the outer environment. It probably has objects and actions not defined or restricted by spatial coordinates. It probably has bio-rythemy (dictated by some biochemistry) weather like system that is controlled autonomic-ally and may even be irrelevant to the situation a self-aware being is in. That process is an "internal dialog" like a computerized poetry or story generator simply constructing stories. Everything that the inner voice says has to be consistent and hopefully relevant to the rest of the system. The speed in which a system operates even in the real world processing is only at the speed of the internal voice.

"Self awareness" means that in order for a program to operate it must [be forced to] "observe" its execution transcript in the same language in which it interacts with it's environment. One's own thoughts and plans are just as much part of the world we live in as the outside environment. The inner environment has many cause-effect rules as the outside does of physics. We (and the program) strive for control (satisfaction of goals) of the inner world as much as the outside. One definition of "Personality" I learned in school was "The manner of skill in which a person exerts their intentions to the control of their environment'' We say a person has a well developed personality when they have found a way to make their environment (others around them) comfortable while they are satisfying their immediate goals. I believe that in order for a person to function at a high skill level here they must master and win at the games of their inner self. The concept of "inner self" is what is supposedly so hard to define for AI scientists. So before defining what "it is" we are better off implementing the framework in which an inner self could operate in. I think that C.D. representation or CycL might provide sufficient data types for whatever processor we define in this document.

"speech is a behavioral act" Also we can actually have silent speech acts called internal dialog. Usual internal dialog can be listened to. Think quietly "I can hear myself think" in your own voice. now do it in another person's voice "I can hear you think". Try to have a thought that has no voice. Now paraphrase that voiceless thought back with your own voice. My voiced version was "That chair is made out of wood" But i had to pick out something in my environment or some sensory memory that I never bothered voicing: "wow that was a salty steak last night" Perhaps you can come up with thoughts in which there are no words for. Generally with enough work you can write some sort of description in which words are used. This has led research to decide that all thoughts may be defined in speech acts. Maybe all thought is a behaviour (you are trained to do linguistics.. internal voices give us positive feedback (Pavlov comes in)). I mean from the very level of composing a lucid thought had to be done via some rules close to linguistics.

Items when the system starts out

- Actions (compound and simple) done by agents.
  - Intentions, Plans
  - Action primitives
  - Containment relationships, such as: Equivalencies and Implications (physical and otherwise); what is not contained in what
- Goals by agents.
  - Desires
  - DesiredStates
  - Percepts and Exemplars are used to form:
    - Containment relationships, such as: Equivalencies and Implications (physical and otherwise); what is not contained in what
    - Sequence relationships, such as: what happens automatically; what happens by choices made by agents; what has never happened; what can't ever happen
- Percepts, used to detect the above
  - Beliefs of Events, Observations
  - Percepts and Exemplars are used to form:
    - Containment relationships, such as: Equivalencies and Implications (physical and otherwise); what is not contained in what
    - Sequence relationships, such as: what happens automatically; what happens by choices made by agents; what has never happened; what can't ever happen
- Exemplars, Objects and Structures
  - Beliefs of types of: Objects, Agents, Smells, Tastes
  - Percepts and Exemplars are used to form:
    - Containment relationships, such as: Equivalencies and Implications (physical and otherwise); physical containership; what is not contained in what
    - Sequence relationships, such as: what happens automatically; what happens by choices made by agents; what has never happened; what can't ever happen
- States, properties of Exemplars
  - Beliefs of World State / Imaginary world
  - Percepts and Exemplars are used to form:
    - Containment relationships, such as: Equivalencies and Implications (physical and otherwise); what is not contained in what
    - Sequence relationships, such as: what happens automatically; what happens by choices made by agents; what has never happened; what can't ever happen
- Event Frames, narratives that may contain any or all of the above
  - Imagination and Memories
  - Combining States and Percepts and Event Frame Narratives
  - Combining existing ones with Explanation Narratives creates new ones
- Explanation Narratives, which rulify the why, procedural nature, and nuances of the above into containments and sequences
  - Text, PPLLL, English, PDDL
  - Explanations in Phrases/Words that could represent such Narrative Frames
  - Combining existing ones with Explanation Narratives creates new ones
  - Such Explanations organize these things into:
    - Containment relationships, such as: Equivalencies and Implications (physical and otherwise); what is not contained in what
    - Sequence relationships, such as: what happens automatically; what happens by choices made by agents

Runtime Item Creation

- Defining and discovering a narrative procedure that describes how the states are put together: some(State)-Implies-some(State)
- Defining and discovering a narrative procedure that describes how the actions are put together: some(Action)-Follows-some(Action), some(Action)-Implies-some(State), some(State)-Implies-some(Action)
- Defining and discovering a narrative procedure that describes how the goals are put together: some(Goal)-Implies-some(Goal)
- Defining and discovering a narrative procedure that describes how natural language is put together: some(WordClasses)-Follow-some(WordClasses), some(WordClasses)-Contain-some(Words) (a Prolog-style sketch of these patterns appears at the end of this subsection)

Non-LOGICMOO PRSs only do a subset of the above (see the list below). All other planning systems seem to be in the business only of defining and discovering a narrative procedure that describes how goals, states, and actions are put together, and of defining and discovering a narrative procedure that describes the goal.

More assumptions

Starting with the restaurant script by Schank, we might have an inner script called "the first things we think about at the start of the day". For some of us, in order for items to make it onto that list they have to first be qualified by "what is relevant for us to think about", "what do we have time to think about", "what deserves our attention", and "what things do I already think about each morning no matter what". My point is that we have definite rules (personality) which we use to keep our inner self compliant. At first this may sound like some phase of goal-based planning, but that is not the point of this paragraph; the point is that there is a sense of ontologizing our inner world just as we do the outside. Imagine how simple it would be to write a flowchart for diagnosing why an engine won't start, and realize it would be just as simple to pick out what the first things we need to think about at the start of the day would be. Again, not for a planner, but just to understand how we label the rules of such an enterprise. This meta-language can have vague operators such as "this is more important than that", "I want to talk to this person", and "each day I have to put gas in the car". The reason I declare this stuff "easy" is that if someone were to ask "why?", we'd be able to explain in some ready-made language script. The point where some things are harder to explain is when we've either formed a postulate that cannot be further simplified ("I am hungry"; "chicken tastes great and I can't explain it") or when the explanation is something that came from the autonomic instant weather system, like "it just came to my mind". Things will come to mind often because, by tradition, they just do.

In sci-fi, we like thinking androids will solve everything in life the same way they would play a game of chess. We imagine them short-circuiting when they encounter unexplainable emotions, situations, or people. So is that AI useful? I won't say short-circuiting is useful, but such an AI is exactly what we all want. We want a tireless logic machine taking in the big and small picture and computing the most brilliant "act" or "hypothesis" for the moment it is in. We want to sit by its side and explain how we think and feel so that it can inherit those same behaviors. We hope to do that in English, answering the many questions it has for us about the exciting new world we have brought it into. How far is that from reality? Initially, very very far. It is important to define the types of questions we'd enjoy answering, because those are the exact ones we think "make us human."
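Referring back to the Runtime Item Creation patterns listed above, here is one hypothetical way to write them down as discoverable relation facts; all functor names are invented for this sketch.

```prolog
% Sketch (functor names invented): the Runtime Item Creation patterns above,
% written down as plain relation facts that could be defined or discovered.
implies(state(hungry(agent)), action(eat(agent, food))).      % some(State)-Implies-some(Action)
implies(action(eat(agent, food)), state(sated(agent))).       % some(Action)-Implies-some(State)
follows(action(take(agent, food)), action(eat(agent, food))). % some(Action)-Follows-some(Action)
implies(goal(warm(agent)), goal(has(agent, fire))).           % some(Goal)-Implies-some(Goal)
follows(word_class(determiner), word_class(noun)).            % some(WordClasses)-Follow-some(WordClasses)
contains(word_class(noun), word(food)).                       % some(WordClasses)-Contain-some(Words)
```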

Steps (wrote these in 2006, so they need a contemporary rewrite that is less NL-ish):

1. Define a MUD world model in STRIPS notation, using Schank's C.D. language, of anything/everything that we'd like the robot to be able to do. "Here is how to gather wood and build a fire to achieve warmth." "You want warmth because it makes you feel good."
2. Simplify these models into the most concise, featureless version possible. "Do X, then do Y, to achieve Z." "Wanting Z because it makes you feel A." "A is good."
3. Extract the stop-words that are left: "do", "then", "wanting", "makes you", "feel", "is". Even: "good".
4. Decide the ontology of X, Y, Z, A.
5. Write a small system to create new X, Y, Z, A variables.
6. Define these mbuild rules in the original way you did step 1, and repeat until you get back to this rule.
7. Repeat the same steps 1-4 for your stop-words.
8. Save this off as a new STRIPS notation.

9. Put your rules of legal construction of such sentences back into STRIPS form so that only valid sentences can be generated.
10. Out comes: "do sit then do sit".. Find and create ways of stopping such exceptions (make a DSL).
11. Simplify your exceptions language created for detecting this new "exceptions language"; repeat steps 1-8 on it.
12. Run the sentence generator again. (When I say "sentence generator", I really mean a "rule generator", hopefully seemingly generating a great number of rules.)
13. Reduce the X, Y, Z, A into only a small set of literals and see if you can ever make the generator stop. You should be able to…
14. Rewrite the generator to allow yourself to predict exactly how many rules it can produce at any given time, if you haven't already done so.
15. Invent new sets of X, Y, Z, A that together make good sense. Determine what ontological basis you went by. Example: GoCabin->Sitting->Comfort->Good; ontologically: "chairs are comfortable and found in cabins".
16. Again, steps 1-8. On step 7.3: "foundIn", "is".. Remember, step 3 before had found "is". Are you creating a new language yet, or have you been reusing the same language you created the very first time?
17. Decide that your stop-word generation should not be the same as the first time; create new versions of "is", like "feeling_is_goal" and "goal_is_subgoal".
18. Define a program that will have generated everything you have done up to now, including automatically forking the definition of "is"... based in a DSL. Use no more than a [set number of] candidate items per datatype (the limit is imposed mainly for debugging).
19. Rewrite this program now entirely in a STRIPS format that will generate exactly the kind of template you just created.
20. Use a version of a STRIPS-like planner to generate the said templates.
21. Create a framework that pumps these templates into a generator system that consumes them.
22. In the framework, allow the generators to pump output into another STRIPS-like planner.
23. Decide why the first- and second-level planners' inputs are incompatible (due to collision?). If so, make sure collisions don't happen and they are totally separate. During this process you may have seen some capabilities; find sane ways to leverage those compatibilities. If none are found, worry not.
24. Figure out if you've created an optimization problem (size and scope of data). If so, find solutions shaped like a "taxonomic pairs solution". Decide these "shapes" are in fact tenets of your language.

(Taking a break but will resume the steps shortly.)

Much of this workflow sounds like writing a Prolog program that is domain-specific, then rewriting the program to remove the domain. In a way it very much is, except that ontologizing is added the same way as required in CycL. Correct; the point of this initial bit is to flex the C.D. representation into something more semantic than what Schank initially taught. The reason he stayed away from this is that he needed to build a working NL representation based on his 7-10 primitives (which are easily anglified to an explanation (see XP)). You are doing the same, except you are designing base primitives that have no definition other than to dictate the discourse of representation. It wasn't the solidness of the primitives that made his work easy; it was the fact that XPs (explanation patterns) make absolute sense (they are intended to do so!). You are going to make a system that cannot "think" but, in the Chinese Room sense, is stuck only transcribing things that can make sense.
No matter how many random number generators are used, the system will be incapable of a non-lucid thought. "Thought?" Yes, we are building a program that is forced into pretending it is always thinking. Schank's internal representation forced it to tell detailed and lucid descriptions of scenes. The process of explaining A, B, C, D, E, F proved that the listener would rather have heard the A, B, D, E steps and been left to create the missing pieces in their own mind. The user became impressed; then they asked "how did you get from B to D?" and this time around the program doesn't leave out C. I believe the dialog of the mind is a similar implementation. We have some very long thought chains but only have to deal with partial descriptions at a time. We are optimized to hide away C, and the robot would be well off to emulate that same behavior. (Not yet finished explaining...)

Back to some more steps... In step 5, "write a small system to create new X, Y, Z, A's", we were not using a dialog-based model. It would be time to explore what a dialog for this system would look like. It would also be good to next ontologize the phases of such a dialog. Dialog phases (pre as well): A was observed in some way and has not yet been in the system.

- "I have recognition A.. may we discuss A?" Wait for confirmation.
- Ask initial categorizations: "Is ?A an action I can do?" "Is ?A an object that exists in the world?"
- Store the results of A.
- Convert this to a STRIPS notation:

    (always-rule
      (preconds (At ?User1) (Unknown ?ConceptA))
      (postconds (stable-system) (knownAbout ?ConceptA)))
    ...
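Finally, a hedged sketch of step 2's featureless "do X, then do Y, to achieve Z" shape together with step 5's small generator. Every literal and predicate name here is invented; the point is only that, once X, Y, Z, A are reduced to a small set of literals (as the later steps require), you can predict exactly how many rules the generator can produce.

```prolog
% Sketch (all names invented): a featureless rule shape plus a tiny generator.
action_literal(gather_wood).
action_literal(build_fire).
action_literal(sit).
state_literal(warmth).
state_literal(comfort).

generated_rule(do(X), then(Y), achieve(Z)) :-
    action_literal(X),
    action_literal(Y),
    state_literal(Z).

% ?- aggregate_all(count, generated_rule(_, _, _), N).   % N = 3 * 3 * 2 = 18
% Degenerate outputs such as do(sit), then(sit) are the "exceptions" that the
% later steps filter out with a small DSL.
```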
