Friday, May 21, 2010

Recall

Knowledge can be represented as strings and trees of nodes connected together. In order to retrieve information and obtain logical behavior, these strings and trees are scanned in every direction. A thought is a flow of linked recollections. A recall starts from the sensors. Thought may follow its own course long after the sensors triggered an initial recall, but at least from time to time the sensors trigger new recalls. Thought from previous recalls may interfere with newly triggered ones: a reader receives stimuli from vision sensors, and they interfere with previously induced thoughts. However, in order to remain as simple as possible, the recall process is explained here starting from the sensors.
The following process would probably work:
A bunch of stimuli is memorized on a time line. This bunch activates a node. The node becomes a template, which will later be either dropped or reinforced. All new similar records are attached to the reinforced node. The recall process propagates both forward and backward:
- Forward from sensors to higher levels.
- Backward from higher levels to sensors and actuators.
Forward propagation is a kind of hypothesis and needs confirmation by a sufficient number of active entries. Backward propagation looks like a conclusion; true or not, it shall be taken into account.
If this conclusion fails, and the upper level fails too, then a new template shall appear.
The final conclusion may contain sequences of elementary actions.
A dynamic graph and a simulation would help: more in a few days.
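In the meantime, here is a minimal sketch of the template idea in Python. All class names, thresholds and stimuli are illustrative assumptions, not a specification: a bunch of stimuli becomes a template node, new similar records reinforce it, forward propagation plays the role of the hypothesis, and backward propagation exposes the inputs that were not observed.

```python
# Minimal sketch of the template / recall process described above.
# Class names, thresholds and stimuli are illustrative assumptions.

class TemplateNode:
    def __init__(self, stimuli, threshold=0.5):
        self.inputs = set(stimuli)   # stimuli recorded together on the time line
        self.strength = 1            # reinforced or dropped over time
        self.threshold = threshold   # fraction of inputs needed to activate

    def reinforce(self, stimuli):
        """A new similar record is attached to this node."""
        self.inputs |= set(stimuli)
        self.strength += 1

    def forward(self, active_stimuli):
        """Forward propagation: a hypothesis, confirmed by enough active entries."""
        hits = self.inputs & set(active_stimuli)
        return len(hits) / len(self.inputs) >= self.threshold

    def backward(self, active_stimuli):
        """Backward propagation: the missing inputs act as predictions or actions."""
        return self.inputs - set(active_stimuli)


node = TemplateNode({"hot", "red", "crackling"})
node.reinforce({"hot", "red", "smoke"})          # a new similar record
if node.forward({"hot", "red"}):                 # enough inputs: the hypothesis holds
    print(node.backward({"hot", "red"}))         # -> {'crackling', 'smoke'}
```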

Thought


Forecasting the future by replaying the past.
A thought is a sequential activation of linked pieces of information. Information may have been stored as crisp, unique experiences. Multiple crisp past experiences have often been merged into templates representing a fuzzy mean value.
From presently activated stimuli, or from already activated ideas, templates are activated. This activation tries to move forward through the upper levels of memorized knowledge.
Several kinds of levels shall be specified:
- Time-scale levels, from tenths of a second to hours.
- Association levels, from local sensor groups to large associations.
- Abstraction levels, from crisp automatic responses to philosophy.
In order to reach the upper levels, this activation needs support from the lower levels. Several rules shall drive this activation process (a minimal sketch in code follows the list):
- A node is activated if a sufficient number of its inputs are activated.
- An activated node may include non-activated inputs.
- Non-activated inputs in an activated node represent predictions or actions to be done.
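Here is a minimal sketch of these rules, assuming an illustrative toy hierarchy of levels. The node names, the support threshold and the data are invented for the example:

```python
# Illustrative sketch of activation climbing through levels; node names,
# the support threshold and the toy data are assumptions for the example.

def propagate(levels, active, min_support=2):
    """levels: list of dicts mapping node -> set of its lower-level inputs.
    A node is activated when at least `min_support` of its inputs are active;
    the non-activated inputs of activated nodes are returned as predictions."""
    predictions = set()
    for level in levels:                          # from sensor groups upward
        newly_active = set()
        for node, inputs in level.items():
            hits = inputs & active
            if len(hits) >= min_support:          # enough support from below
                newly_active.add(node)
                predictions |= inputs - active    # missing inputs = predictions
        active |= newly_active
    return active, predictions

level1 = {"edge+corner": {"edge", "corner", "shadow"}}
level2 = {"table": {"edge+corner", "flat top", "legs"}}
print(propagate([level1, level2], {"edge", "corner", "flat top"}))
```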

A thought is triggered either by a stimulus or by another thought.
A thought is driven by past experience.

Wednesday, May 19, 2010

Thought and percolation threshold

Persistence of thought
Thought may persist within a human brain long after the last significant burst of stimuli fired it up. For example, you read this sentence, close your eyes and think of your own experience of thinking... And now I am thinking of you thinking about thinking. The human brain is very good at this kind of role-playing exercise. These strange skills take part in self-consciousness and deserve a whole dedicated chapter.

Our present topic needs a simpler example of thought:
You are lying on your bed in the dark, planning your next weekend. For several minutes you build up a plan. Maybe you will put this project aside and resume it tomorrow.
Paths of thought
This continuous thought moves from one idea to the next. Thought may be considered as a flow of small pieces of knowledge linked together. This recollection is itself memorized: we can recall our past thoughts. An intelligent being shall link all the different events together in order to take a complex world into account. A path can be found from every idea to every other one. However, when we plan our next weekend, we avoid a total recollection of our past holidays. We avoid an entire recollection of our whole geographical knowledge. Not all information is activated. This is a matter of proximity: if a piece of knowledge is close enough to the present thought, then it is more or less activated. What happens if not enough knowledge is activated? The thought process breaks down. What happens if too much knowledge is activated? A huge number of irrelevant and useless paths has to be searched. An optimized thought would run just below the limit of the system's calculation power.
In order to obtain an efficient thought we need:
- Clever activation rules in order to trigger knowledge activation steps.
- Clever real-time control rules in order to monitor the total number of paths being explored. This feedback loop is compulsory.
Without this feedback loop, we encounter a percolation phenomenon.
Thought can be compared to a blaze in many ways.

Percolation
A forest fire needs propagation conditions. Let us suppose there is no wind, and consider tree density as the main propagation condition. Below a given density threshold the fire dies out, because gaps remain between groups of trees: "islands" of trees are all surrounded by paths of bare ground. Above that threshold the fire grows more and more, because paths of trees link most groups of trees together. The configuration has suddenly switched from islands of trees to islands of bare ground.
This kind of phenomenon is called percolation.
Thought proceeds in the same way. Below a given threshold, thought needs external stimulations to be fired up. Above this threshold, thought runs through the entire mind as a fire does in a dry forest.

Online demo applets of percolation
This simple applet comes from this page. Try it out first.
Another applet on this page.
A nice applet on this page simulates forest fires. It is worth playing with its control panel for a while. You can run it without reading the comments.
You will see how tree density alters fire propagation. A slight change in tree density from 0.58 to 0.62 has big consequences on fire propagation (tree density = the probability cursor).
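If the applets are unavailable, the same experiment can be reproduced with a few lines of Python. This is only a rough sketch: a square lattice, four-neighbour spreading, ignition along the left edge, and a handful of densities around the threshold.

```python
# Rough percolation sketch (pure Python; grid size and densities are arbitrary).
import random

def burned_fraction(density, size=50):
    """Plant trees with the given density, ignite the whole left column,
    spread fire to 4-neighbours, return the fraction of trees burned."""
    forest = [[random.random() < density for _ in range(size)] for _ in range(size)]
    burning = [(r, 0) for r in range(size) if forest[r][0]]
    burned = set(burning)
    while burning:
        r, c = burning.pop()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < size and 0 <= nc < size and forest[nr][nc] \
                    and (nr, nc) not in burned:
                burned.add((nr, nc))
                burning.append((nr, nc))
    trees = sum(sum(row) for row in forest)
    return len(burned) / trees if trees else 0.0

for density in (0.50, 0.58, 0.62, 0.70):
    print(density, round(burned_fraction(density), 2))
# Around the threshold (close to 0.59 for this lattice) the burned fraction jumps.
```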

Saturday, May 15, 2010

Learning, induction, deduction

The learning process starts with induction.
For example, I discover a chair. I learn that such a thing is a chair.
But what is "such a thing"? I have no previous experience of it.

I suppose I may ask questions about it. The next time I see a chair quite different from the first one, I probably won't recognize it as a chair.
But I ask and I get an answer again. Later, I recognize many chairs without any help.

From different chairs, I have built a general idea of what a chair may look like.
Induction
Inductive reasoning moves from a set of specific facts to a general conclusion. This type of reasoning leads to over-generalization: not all seats are called "chairs". The learning process tends to remove these errors through more experience.
Deduction
Deductive reasoning reaches a conclusion by following a logical inference from general rules applied to a specific fact.
Back to the first example of a chair. Deductive reasoning may be used in order to validate or invalidate a guess.
"From my point of view, this is a chair." How to be sure? I use general rules.
It must be of a size and shape that allows a human being to sit on it...

Induction is in process whenever experience is building up new knowledge.
Deduction is working whenever knowledge is being used.
This assertion remains true within the scientific domain. Theories are validated through inductive reasoning. Deductive reasoning applies once theories are validated.
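A toy sketch of the two modes of reasoning, with the chair example reduced to invented features (seat height and backrest); everything here is an illustrative assumption:

```python
# Toy sketch: induction builds an (over-general) rule from a few specific chairs,
# deduction applies the rule to a new object. Features and numbers are invented.

chairs_seen = [  # specific facts: (seat height in cm, has a backrest)
    (45, True), (42, True), (48, True),
]

# Induction: from specific facts to a general conclusion.
heights = [h for h, _ in chairs_seen]
rule = {"min_height": min(heights) - 5,
        "max_height": max(heights) + 5,
        "needs_backrest": all(back for _, back in chairs_seen)}

# Deduction: the general rule is applied to a specific new fact.
def looks_like_a_chair(height, backrest, rule=rule):
    return (rule["min_height"] <= height <= rule["max_height"]
            and (backrest or not rule["needs_backrest"]))

print(looks_like_a_chair(44, True))   # True
print(looks_like_a_chair(44, False))  # False: the induced rule demands a backrest
```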

Monday, May 10, 2010

Knowledge

Knowledge is the sum of all information accumulated since the birth of an intelligent being. What exactly is information for an intelligent being? Footprints of past days. In order to count as "information", these footprints must be "computable" by the entity.
Knowledge is different from skills. Knowledge is used to enhance skills. Not all skills come from knowledge, and not all knowledge is used to enhance skills.
Knowledge shall be stored. This storage must spare the available resources. How exactly is knowledge stored within an intelligent being's memory? Neuroscience has given clues, but the complexity of the whole system still remains largely out of reach. However, knowledge management can be studied from its external appearance.
First, knowledge shall be stored.
Moreover, knowledge shall be re-usable. A past experience has to be compared to a new similar situation.
No new real event is entirely identical to a previous one. The storage of knowledge shall ease approximate comparison and classification.
Of course, neural networks perform such a task rather well.
The ability to generalize is a powerful feature of the brains of evolved intelligent beings. Generalization operates in both time and space.
Storage
Stored knowledge always starts from stimuli.
Are elementary stimuli close together in space and in time?
If they are close enough in space and in time, this proximity is recorded as a piece of elementary knowledge.

For most sensor events, only time proximity is significant; the sequence doesn't matter. However, the chronology is recorded on a time line and can be used later if relevant. The elementary sense of cause and consequence derives later from this sense of sequence.
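A minimal sketch of this storage rule, assuming one-dimensional positions and invented thresholds for "close enough" in time and space:

```python
# Sketch of the storage rule: stimuli close enough in time and space are grouped
# into one elementary record on the time line. Thresholds are invented.

def group_stimuli(stimuli, max_dt=0.5, max_dist=1.0):
    """stimuli: list of (time, position, label) tuples sorted by time.
    Returns elementary knowledge records: groups close in time and space."""
    records, current = [], []
    for t, pos, label in stimuli:
        if current and (t - current[-1][0] > max_dt
                        or abs(pos - current[-1][1]) > max_dist):
            records.append(current)       # proximity broken: close the record
            current = []
        current.append((t, pos, label))
    if current:
        records.append(current)
    return records

timeline = [(0.0, 0.0, "touch"), (0.1, 0.2, "warm"), (3.0, 5.0, "sound")]
print(group_stimuli(timeline))  # two records: [touch, warm] and [sound]
```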
Recall
New stimuli activate stored elementary knowledge. The stimuli must be numerous enough and located in a zone of sensors previously involved in a piece of stored elementary knowledge. The recall can be triggered even if the new stimuli are slightly different from the stored ones.

Competitive recall
If several elementary pieces of knowledge in the same zone are activated by new stimuli, then they will compete. At a given time, in a given zone, the most activated piece of knowledge will be considered the one best suited to the present situation.
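A small sketch of competitive recall; the activation score (fraction of a record's stimuli found among the new stimuli) is an assumption made for the example:

```python
# Sketch of competitive recall: records of the same zone compete for activation.
# The scoring rule (fraction of a record's stimuli present) is an assumption.

def recall(new_stimuli, stored_records):
    """stored_records: dict name -> set of stimuli recorded earlier in this zone.
    Returns the most activated record; slight differences are tolerated."""
    def activation(record):
        return len(record & new_stimuli) / len(record)
    winner = max(stored_records, key=lambda name: activation(stored_records[name]))
    return winner, activation(stored_records[winner])

zone = {"cup":  {"round rim", "handle", "hot"},
        "bowl": {"round rim", "wide", "hot"}}
print(recall({"round rim", "handle", "warm"}, zone))  # ('cup', 0.66...)
```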

Sunday, May 9, 2010

Self-learning

Self-learning ability is a key feature of an intelligent being.
However, if you test an intelligent being over a short, limited time, self-learning does not immediately appear to be an essential component. You may still conclude from a quick test that you are more intelligent than a monkey; that a monkey is more intelligent than a rat; that the rat appears more intelligent than a lizard; and that a lizard is more intelligent than a clam!
When comparing Man and Chimpanzee, please don't be too condescending toward chimps, or you might be a little surprised if I selected the following test. Look at this video: a monkey performing a short-term memory test.
An intelligence test evaluates a whole set of skills, including genetically inherited behavior, learned behavior, learned knowledge, problem-solving skills, conceptualization...
Back to learning and self learning:
Learning includes memorization. The inherent value of learning is re-usability.
Personal experiences may apply whenever similar circumstances occur. Mere memorization is not reusable: given the huge number of possibilities, no two situations are ever identical. The learning process and memorization therefore include approximation and global description. This is a condition for the re-usability of experience. The need for a global, fuzzy description of a past experience is a first step on the way to conceptualization.
Self-learning requires self-marking of learned patterns.
Learned patterns are used to forecast the consequences of an action.
If this forecast fails, the pattern will be downgraded from a good forecast to a bad one.
But this pattern was selected precisely because it was in some way similar to the present situation. In order to tell the difference between the new situation and the old one, a complete memorization of each situation is needed alongside the global, fuzzy memorization. Thanks to the memorized details of the old situation, the learning process tries to shape the differences between the new situation and the old one. Both remain fully memorized for a future comparison with another situation, if needed. However, the content of learned patterns lies in fuzzy contours, just sufficient to underline the differences between two situations. If patterns were crisp and sharp, they would never be applicable to any new situation. These patterns may link to acute recalls, but learned patterns are of a different nature. However, "acute" recalls may be partially rebuilt from patterns linked together with short-term memory. This is a known issue for eyewitness accuracy (more about this topic). That is another story.
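A minimal sketch of this self-marking idea; the class, the scoring rule and the weather example are all illustrative assumptions:

```python
# Sketch of self-marking: a fuzzy pattern forecasts a consequence, its score is
# downgraded when the forecast fails, and the detailed episodes are kept so the
# differences between situations can be shaped later. Everything is invented.

class LearnedPattern:
    def __init__(self, features, forecast):
        self.features = features      # fuzzy contours, just enough to match
        self.forecast = forecast      # expected consequence
        self.score = 0                # self-marking: good versus bad forecasts
        self.episodes = []            # fully memorized situations, for comparison

    def matches(self, situation):
        return self.features <= situation           # subset test: fuzzy, so it generalizes

    def learn(self, situation, outcome):
        self.episodes.append((situation, outcome))  # keep the crisp detail
        self.score += 1 if outcome == self.forecast else -1

p = LearnedPattern(features={"dark cloud"}, forecast="rain")
p.learn({"dark cloud", "wind"}, "rain")       # forecast confirmed: score +1
p.learn({"dark cloud", "summer"}, "no rain")  # forecast failed: downgraded
print(p.matches({"dark cloud", "cold"}), p.score, len(p.episodes))
```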


If you liked the above video there is a longer one about the same topic: Ayumu the Chimp.

Stability, positive or negative feedback

A simple closed-loop controller must be stable. Feedback must be negative at low frequencies and in static operation. The stability of a closed-loop controller is predictable from its open-loop behaviour: gain and phase as functions of frequency. The open-loop gain must decrease with frequency, so that it falls below 1 before the phase shift reaches the critical point (both a phase margin and a gain margin are required).
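As a reminder of how the margins are read, here is a small numerical sketch. The open-loop transfer function and its gain are arbitrary choices made for the demonstration, not part of the argument:

```python
# Numerical sketch of the margin check for an arbitrary open loop
# G(s) = K / ((1+s)(1+0.1s)(1+0.01s)); K and the poles are chosen for the demo.
import numpy as np

K = 5.0
w = np.logspace(-2, 4, 2000)                         # frequency sweep (rad/s)
G = K / ((1 + 1j * w) * (1 + 0.1j * w) * (1 + 0.01j * w))
gain = np.abs(G)
phase = np.degrees(np.unwrap(np.angle(G)))

i = np.argmin(np.abs(gain - 1.0))                    # gain crossover frequency
j = np.argmin(np.abs(phase + 180.0))                 # phase crossover frequency
print("phase margin:", round(180 + phase[i], 1), "deg")   # positive => stable
print("gain margin:", round(1 / gain[j], 1))              # greater than 1 => stable
```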
An intelligent being is a non-linear system. The main closed loop of an intelligent being shall remain stable: the perception of the consequences of one's own reaction must not exceed the initial perception. However, this does happen to intelligent beings from time to time:
Pain ==> Quick reaction ==> more pain ==> more reaction.
Loss of stability may lead to loss of life: each new attempt of the same kind leads to a worse situation. The control elements of an intelligent being detect that a previous action is ineffective and select another strategy.
For example, attacking may be better than fleeing.

Learning capabilities facilitate the elaboration of new, accurate responses.
The control elements of an intelligent being switch from one possible behavior to another in order to recover stability.
The control elements of an intelligent being learn from experience both how to act and how to select what to do.
Increasing predictability in a control element increases stability and shortens response time.
Learning capability tends to increase the forecasting horizon of an intelligent being.


Model predictive control (see Wikipedia) uses predictive models in order to enhance control elements.

Saturday, May 8, 2010

Main feedback loop in a self-learning AI

An intelligent being produces decisions leading to actions. A self-learning intelligence also produces new knowledge from experience. The acquisition of new knowledge is considered a modification of an internal state.
External interactions are the relevant outputs for comparing animal, human and artificial intelligences.
The efficiency of the messages and actions of a given intelligent being can be checked.
We now consider these outputs.

The main elements of a closed loop system are:
- Reference input
- Comparator
- Control elements and actuators leading to output
- Feedback elements

An intelligent being is constantly comparing sensor signals to reference inputs. Reference inputs give, for example, limits on cold, heat, noise, hunger; in general, pain thresholds. A set of sensor signals can be considered a point in a multidimensional space. In this space there are areas to be avoided and other areas to be reached.

Reference inputs are characterized in the same multidimensional space by these "good" and "bad" areas. Most of the sensor signal space belongs to neutral areas (neither good nor bad), and most sensors are themselves neutral.

The comparator output gives the control elements a gradient to follow in order to reach certain areas or to avoid others.
Basically, there is always a hard-wired response to a strong signal located in a "painful" area. In addition, all intelligent beings can more or less learn a customized response from their own experience.
Less evolved intelligent beings give a "wired" response to each gradient.
More evolved intelligent beings can learn more and give learned responses.
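A tiny sketch of such a comparator, with a two-dimensional sensor space and invented "good" and "bad" reference areas; the pain radius and the repulsion rule are assumptions chosen only to make the gradient dominate near the painful area:

```python
# Sketch of the comparator: sensor signals form a point in a multidimensional
# space containing "good" and "bad" reference areas; the output is a gradient
# pushing toward the good area and away from the bad one. Values are invented.
import numpy as np

good_area = np.array([0.2, 0.5])    # e.g. comfortable temperature and noise
bad_area = np.array([0.9, 0.9])     # e.g. painful heat and noise

def comparator(sensors, pain_radius=0.3):
    """Direction the control elements should follow."""
    toward_good = good_area - sensors
    away_from_bad = sensors - bad_area
    dist = np.linalg.norm(away_from_bad)
    if dist < pain_radius:                      # inside the painful neighbourhood
        return away_from_bad / (dist + 1e-6)    # strong, wired-like repulsion
    return toward_good                          # otherwise drift toward comfort

print(comparator(np.array([0.8, 0.85])))   # dominated by "get away from pain"
print(comparator(np.array([0.3, 0.4])))    # gentle drift toward comfort
```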

The control elements command the actuators, which interact with the universe of the given intelligent being. This intelligent being perceives, at least from time to time, a part of this interaction thanks to its sensors.

The feedback loop is closed.

That's all for the main feedback loop in a self-learning AI.

Still to come:
What about stability, positive or negative feedback?
What about self-learning?

Wednesday, May 5, 2010

Strong AI shall be self-learning

We may "bootstrap" a strong AI from a kind of knowledge database. After this initial bootstrap, does this AI remain unchanged?
"Real" strong artificial intelligence implies ability of discovering acceptable solutions from past experience when new encountered situations are partially or completely unknown.
How to evaluate acceptability of solutions?
A comparison between expected computed consequences and real events gives an evaluation. The result of this comparison is new knowledge itself.
Human beings constantly increase their knowledge this way. Artificial Intelligence shall use self-learning in order to think in the same way as humans do.
Real artificial intelligence = getting smarter from accumulated experience.
Once such a property is implemented, is it still compulsory to "bootstrap" a huge amount of knowledge?
Self-learning capability is a key feature for strong AI.

Self-learning AI represented as a closed loop

The term "closed loop" is used for an automatic control system in which a process is regulated by feedback.
An input is compared to a set point.
The difference between the input and the set point is amplified and filtered in order to deliver a command output.
This output is filtered again and linked to the input (feedback loop).
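A minimal sketch of that loop in Python; the set point, gains, crude filter and toy process are all assumptions chosen for the demonstration:

```python
# Minimal closed-loop sketch of the description above: the input is compared to a
# set point, the difference is amplified and filtered into a command, and the
# output is fed back to the input. All numbers are arbitrary demo values.

set_point = 20.0            # desired value (e.g. a temperature)
measured = 5.0              # initial feedback value
kp, smoothing = 0.8, 0.5    # amplification gain and a crude low-pass filter
command = 0.0

for step in range(20):
    error = set_point - measured                                   # comparator
    command = smoothing * command + (1 - smoothing) * kp * error   # amplify + filter
    measured += 0.5 * command          # the process reacts; feedback closes the loop
print(round(measured, 2))              # settles near the set point
```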

A self-learning AI may be represented as multiple closed loops.
The main closed loop tries to keep the system within safe boundaries and leads it toward "satisfaction".
Other closed loops deal with utilities and help manage system resources.

Monday, May 3, 2010

Strong AI general requirements

Here is a list of minimum requirements from my point of view:
- Sensors bring information from the given universe.
- Sensors are given static or dynamic limit zones.
- The limit zones are expected not to be reached.
- Actuators are capable of modifying things within the given universe.
- The above modifications must be at least partially detectable by the sensors.
- If limit zones are reached, actions are required and will be triggered*.
- A memory associates sensor stimuli together, and with actuator commands.
- Simultaneity and distance affect the strength of memory associations.
- Memory matches associations together at upper levels.
- Ability to build a single template from several similar associations.
- Ability to retrieve a path leading to a previously memorized experience.
- This path may lead to a response supposed to suit the present situation.
* When limit zones are about to be reached, actions are triggered by following the preferred paths presently activated. The system tries to repeat previous responses to similar situations that are supposed to lead to an acceptable new situation.
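A minimal sketch of this triggering rule; the memory structure, threshold and responses are invented for the example:

```python
# Sketch of the triggering rule marked with * above: when a limit zone is about
# to be reached, repeat the memorized response that previously led to an
# acceptable situation. Structures, values and responses are invented.

limit_zone = 0.8                      # threshold not to be reached
memory = [                            # (situation, response, outcome acceptable?)
    (0.85, "back away", True),
    (0.90, "push harder", False),
]

def react(sensor_value):
    if sensor_value < limit_zone:
        return None                                   # limit zone not threatened
    preferred = [(abs(s - sensor_value), response)    # preferred paths: closest
                 for s, response, ok in memory if ok] # situations that ended well
    return min(preferred)[1] if preferred else "wired default response"

print(react(0.5))    # None
print(react(0.87))   # 'back away'
```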

The consequences of a given response are associated with the present situation, reinforcing the present template or creating a new one. This feature provides the learning capability.
This specification of a strong AI starts from the bottom level.
High-level process requirements will come later and will depend mostly on experiments and on competing programming techniques. This approach is close to a connectionist approach.
We must get rid of useless complexity. Complexity will come easily enough later!
The term "AI-complete" is built on the same template as "NP-complete" already existing in computational complexity theory. In order not to waste time in definitions, let us admit that AI-complete programs provide "strong" artificial intelligence. (see AI-complete and NP-complete on Wikipedia).
They tell us that AI-complete problems can't be solved by a simple algorithm.
Not a specialized one, for sure, but why not a simple one?...
Problems similar to AI-complete problems exist in reduced universes.
A reduced universe is either a part of our entire universe or a simplified simulation running on a computer. In order to get artificial intelligence, we don't need computer vision first. We don't need language recognition first. We need universal components of general intelligence. We will later use these universal components in a similar way for computer vision, language recognition, and abstract concept handling.

Within a reduced universe Ur1 an algorithm A is able to solve many problems.
Within a reduced universe Ur2 the same algorithm A is able to solve similar problems and other problems too.
Within our real, complete universe U, the same algorithm A is able to solve a large number of the problems it encounters.

How do you figure out this algorithm A?
Will it be exactly the same for all the given universes?
Hypothesis: provided that we adapt the sensors and actuators to each universe, I guess that we can keep the same algorithm.
I am not claiming that there is only ONE algorithm A. I guess that a general algorithm A, or B, may solve a significant number of known and newly encountered AI problems within different universes. "A" may be better than "B"; never mind. If these algorithms meet the minimum requirements, they can both run a strong AI system. Implementing and improving A or B is a matter of efficiency. This improvement will come later.