
Releases: opennars/opennars_core

OpenNARS v3.1.1 (Experimental)

25 May 17:57
10cb47f

New OpenNARS architecture

The second implementation attempt (by Xiang Li) of Dr. Wang's new design, based on v1.5.8. (The first one, based on v3.0.4, is here: https://github.com/opennars/opennars/releases/tag/v3.1.0 )
What is new in OpenNARS v3.1.1 over v1.5.8 ( https://github.com/patham9/opennars_declarative_core/releases/tag/v1.5.8 ):

  1. A new data structure, Buffer, is used for the overall experience buffer and the internal experience buffer, as well as for channels.
  2. EventBuffer is a subclass of Buffer that works with both eternal experience and events; every buffer that works with events inherits from it.
  3. The event buffer holds two lists, a PRIORITY list and a SEQUENCE list. The priority list stores everything that comes into the event buffer (both events and knowledge), sorted by priority.
    The sequence list stores only events and the sequence conjunctions generated by temporal induction, sorted by occurrence time.
  4. For each item to be inserted into the buffer: if it is eternal, a goal, or a question, it is inserted only into the priority list. An event is inserted into both
    the priority list and the sequence list.
  5. Each item can stay in the priority list only for a certain period; the current duration is 20 inference steps. Any item that stays in the list longer than the duration
    is removed from the list, and the corresponding item in the sequence list is removed as well.
  6. The take-out operation removes only the single item with the highest priority from the priority list.
  7. The take-out operation never removes items from the sequence list; an event is removed from the sequence list when the difference between its occurrence time and the current time exceeds
    the duration (the same duration used for the priority list).
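Items 1-7 can be summarized in a minimal sketch. This is illustrative Python, not the actual Java implementation; the class and field names are made up, and only the two-list layout, the 20-step duration, and the take-out behavior follow the description above.

```python
# Illustrative sketch of the event buffer described above: a priority
# list holding everything, and a sequence list holding only events,
# with duration-based eviction. Not the real OpenNARS classes.

DURATION = 20  # inference steps an item may stay in the buffer

class EventBuffer:
    def __init__(self):
        self.priority_list = []  # (priority, item, entry_step), sorted by priority
        self.sequence_list = []  # (occurrence_time, item), events only

    def put(self, item, priority, now, is_event):
        # Everything goes into the priority list, sorted by priority.
        self.priority_list.append((priority, item, now))
        self.priority_list.sort(key=lambda t: t[0], reverse=True)
        if is_event:
            # Events additionally go into the sequence list, sorted by time.
            self.sequence_list.append((now, item))
            self.sequence_list.sort(key=lambda t: t[0])

    def take_out(self, now):
        # Evict expired items first, then return the highest-priority one.
        # The sequence list is never consumed by take-out, only aged out.
        self.priority_list = [t for t in self.priority_list
                              if now - t[2] <= DURATION]
        self.sequence_list = [t for t in self.sequence_list
                              if now - t[0] <= DURATION]
        if self.priority_list:
            return self.priority_list.pop(0)[1]
        return None

buf = EventBuffer()
buf.put("eternal_judgment", 0.5, now=0, is_event=False)
buf.put("event_a", 0.9, now=0, is_event=True)
print(buf.take_out(now=1))     # event_a (highest priority)
print(len(buf.sequence_list))  # 1: the event stays in the sequence list
```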
  8. Temporal induction happens when an item is successfully inserted into the sequence list.
    The idea of temporal induction comes from ONA, but is not quite the same:
    a. Suppose there is an event "a" in the sequence list.
    b. When event "b" is inserted into the sequence list, temporal induction is triggered, generating the implication a =/> b and the sequence conjunction (&/, a, b).
    c. The implication is directly processed as a judgment and stored in memory. The reason for doing this early, instead of waiting until it is selected from the buffer, is that
    when different events are continuously input into the buffer, the priorities of the new events, goals, and questions are much higher than those of the induced implications, so there is
    a very low chance for the induced implications to be selected from the buffer before they get kicked out for staying in the buffer too long.
    d. The induced sequence conjunctions are put into the lists to wait for selection.
    e. No implication is generated if the subject or the predicate is an operation.
    f. There are no restrictions on the generation of sequence conjunctions.
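The induction step above can be sketched as follows. Again this is an illustrative Python toy, with made-up term representations; only rules (b), (e), and (f) are modeled, and truth-value calculation is omitted.

```python
# Toy sketch of step 8: when event b enters the sequence list after
# event a, derive the implication a =/> b (with its interval) and the
# sequence conjunction (&/, a, b).

def temporal_induction(prior_events, new_event, now):
    """prior_events: list of (time, name, is_operation) tuples already
    in the sequence list; new_event: (name, is_operation) being inserted."""
    name_b, b_is_op = new_event
    implications, conjunctions = [], []
    for t_a, name_a, a_is_op in prior_events:
        interval = now - t_a
        # (e) no implication when subject or predicate is an operation
        if not a_is_op and not b_is_op:
            implications.append(f"{name_a} =/>({interval}) {name_b}")
        # (f) sequence conjunctions are generated without restriction
        conjunctions.append(f"(&/, {name_a}, {name_b})")
    return implications, conjunctions

imps, conjs = temporal_induction([(10, "a", False)], ("b", False), now=18)
print(imps)   # ['a =/>(8) b']  -> processed as a judgment right away (c)
print(conjs)  # ['(&/, a, b)']  -> waits in the lists for selection (d)
```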
  9. When an implication is processed as a judgment, two new methods are triggered: one generates anticipation relations, and one generates goal relations. For anticipation, if the implication
    a =/> b is processed as a judgment, it is stored in the anticipation list inside the concept of a. The list is bounded: only a couple of implications can be stored per concept. They are sorted and compete by
    the expectation of the implication sentence, and when revision is triggered, the list may change based on the new evaluation.
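A minimal sketch of the bounded, expectation-ranked anticipation list in a concept. The capacity of 3 is an assumption ("only a couple" per concept), and the standard NAL expectation formula e = c*(f - 0.5) + 0.5 is used for the competition; the real constants and data layout may differ.

```python
# Sketch of step 9: each concept keeps a small anticipation list of
# implications, ranked by expectation; low-expectation entries lose.

CAPACITY = 3  # assumed limit; the actual per-concept bound may differ

def expectation(freq, conf):
    # NAL expectation of a truth value (f, c)
    return conf * (freq - 0.5) + 0.5

class Concept:
    def __init__(self, term):
        self.term = term
        self.anticipations = []  # (expectation, implication) pairs

    def add_anticipation(self, implication, freq, conf):
        self.anticipations.append((expectation(freq, conf), implication))
        # Keep only the highest-expectation implications.
        self.anticipations.sort(key=lambda t: t[0], reverse=True)
        del self.anticipations[CAPACITY:]

a = Concept("a")
a.add_anticipation("a =/> b", 0.9, 0.9)  # e = 0.86
a.add_anticipation("a =/> c", 0.6, 0.5)  # e = 0.55
a.add_anticipation("a =/> d", 0.8, 0.9)  # e = 0.77
a.add_anticipation("a =/> e", 0.5, 0.1)  # e = 0.50, loses the competition
print([imp for _, imp in a.anticipations])  # ['a =/> b', 'a =/> d', 'a =/> c']
```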
  10. Similarly to anticipation, when an implication is processed as a goal, the implication relation is stored in the precondition list of the concept of the postcondition.
  11. An anticipation is generated at the time an event is successfully inserted into the sequence list; if the event is not inserted, no anticipation is generated even if there is an implication with the event as
    a precondition. When an event is inserted into the sequence list, the system checks whether there is an implication relation with the event as the precondition; if so, it generates the anticipation by calling
    detachment, which produces the anticipated event and the truth value of the anticipation.
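The detachment step might look like the following sketch. The use of the NAL deduction truth function (f = f1*f2, c = f1*f2*c1*c2) for detachment is an assumption here, and the dictionary-based representation is purely illustrative.

```python
# Sketch of step 11: when event a enters the sequence list and an
# implication a =/> b exists with a as precondition, detachment yields
# the anticipated event b, its expected time, and a derived truth value.

def deduction(f1, c1, f2, c2):
    # NAL deduction truth function (assumed here for detachment)
    f = f1 * f2
    return f, f * c1 * c2

def generate_anticipation(event_truth, implication, interval, now):
    f1, c1 = event_truth
    f2, c2 = implication["truth"]
    f, c = deduction(f1, c1, f2, c2)
    return {"expected": implication["predicate"],
            "expected_time": now + interval,
            "truth": (f, c)}

impl = {"predicate": "b", "truth": (0.9, 0.9)}   # a =/> b, interval 8
ant = generate_anticipation((1.0, 0.9), impl, interval=8, now=10)
print(ant["expected"], ant["expected_time"])  # b 18
```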
  12. Disappointment mechanism: for each anticipation there is an expected event as well as an expected happening time. If the expected event does not happen at the expected time, negative evidence for
    the expected event is generated; its truth value depends on the truth value of the anticipation ("the more you expect, the more you are disappointed"). It is okay if the expected event happens shortly after the
    expected time. For example, if we have a =/> b with interval 8 and a happens at 10, an anticipation is generated that b will happen at 18. If b does not happen at 18, a =/> b with low frequency is generated for
    a =/> b with interval 8; even if b then happens at 21, temporal induction generates a =/> b with interval 11. Here the negative evidence is for a =/> b with interval 8, and the positive evidence is for a =/> b
    with interval 11.
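A sketch of the disappointment check. The tolerance of 0 steps and the use of the anticipation's expectation value as the strength of the negative evidence are assumptions; the description above only says the strength grows with how much was expected.

```python
# Sketch of step 12: any anticipation whose expected time has passed
# (plus a tolerance, simplified to 0 here) without the event occurring
# yields negative evidence; confirmed anticipations are simply dropped.

TOLERANCE = 0  # assumed grace period; the actual value may differ

def check_disappointment(anticipations, observed_events, now):
    """observed_events: set of event names that have occurred."""
    negatives, remaining = [], []
    for ant in anticipations:
        if ant["expected"] in observed_events:
            continue  # confirmed: drop the anticipation
        if now > ant["expected_time"] + TOLERANCE:
            f, c = ant["truth"]
            # "the more you expect, the more you are disappointed":
            # negative evidence (frequency 0) weighted by the expectation
            negatives.append((ant["expected"], 0.0, c * (f - 0.5) + 0.5))
        else:
            remaining.append(ant)  # still pending
    return negatives, remaining

ants = [{"expected": "b", "expected_time": 18, "truth": (0.9, 0.729)}]
neg, rest = check_disappointment(ants, observed_events=set(), now=19)
print(neg)  # negative evidence for b, since it missed time 18
```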
  13. Each take-out from the buffer clears the expired items and anticipations.
  14. When a goal is processed, revision is done with the old goal, and the best solution for the goal is found.
  15. An inferred goal is sent to the internal buffer first. If the goal is an operation, the operation is executed. If the goal is a sequence conjunction (&/, a, b), where a is an event and b is an operation,
    the system first checks whether a is happening (i.e., whether a is in the sequence list of the overall buffer); if yes, it executes b, and if not, it generates a subgoal for a.
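The goal-processing branch above can be sketched as a small function. The tuple representation of terms and the `execute` callback are purely illustrative; only the check-then-execute-or-subgoal logic follows the description.

```python
# Sketch of step 15: processing a goal (&/, a, b)! where b is an
# operation. If a is currently in the sequence list of the overall
# buffer, execute b; otherwise derive a! as a subgoal.

def process_seq_goal(goal, sequence_list, execute):
    """goal: ("seq", a, b) with b an operation name;
    sequence_list: names of events currently in the sequence list."""
    _, a, b = goal
    if a in sequence_list:
        execute(b)   # precondition holds: run the operation
        return None
    return a         # precondition missing: a! becomes a subgoal

executed = []
sub = process_seq_goal(("seq", "a", "^op"), {"a"}, executed.append)
print(executed, sub)  # ['^op'] None

sub2 = process_seq_goal(("seq", "a", "^op"), set(), executed.append)
print(sub2)  # a is returned as a subgoal
```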
  16. Event versions of the inference rules are added for the different rule types.
  17. Part of the emotion mechanisms from my research are included in the released code, but not all of them, since some are still under research.

Future works:

  1. Since we add time, the inference rules for events might not be as accurate as we expected. I have tried my best to make sure everything I can see is correct, but there is no guarantee that all the rules are correct.
  2. Goal processing should be examined further to see whether it works in different situations; so far it works pretty well for the plane example.