Here is a list of minimum requirements from my point of view:
-Sensors bring information from the given universe.
-Sensors are given static or dynamic limit zones.
-The limit zones are expected not to be reached.
-Actuators are capable of modifying things within the given universe.
-The above modifications must be at least partially detectable by the sensors.
-If limit zones are reached, actions are required and will be triggered*.
-A memory associates sensor stimuli with one another and with actuator commands.
-Simultaneity and distance affect the strength of memory associations.
-Memory matches associations together at higher levels.
-Ability to build a single template from several similar associations.
-Ability to retrieve a path leading to a previously memorized experience.
-This path may lead to a response expected to suit the present situation.
*When limit zones are about to be reached, actions are triggered by following the preferred paths currently activated. The system tries to repeat previous responses to similar situations that are expected to lead to an acceptable new situation.
The consequences of a given response are associated with the present situation, reinforcing the current template or creating a new one. This feature provides the learning capability.
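Taken together, these requirements describe a small agent loop. The following is a minimal sketch of such a loop, assuming a toy one-dimensional universe: one sensor with a static limit zone, one actuator, and an associative memory that reinforces or replaces templates depending on the consequences of a response. All names (Universe, Agent, the 0.8 threshold, etc.) are illustrative assumptions, not references to any existing system.

```python
import random

class Universe:
    """Toy 1-D universe: a single scalar value that drifts randomly."""
    def __init__(self):
        self.value = 0.0

    def step(self, command):
        # Actuators modify the universe; the sensor reads the same value
        # back, so the modification is (partially) detectable.
        self.value += command + random.uniform(-1.0, 1.0)

    def sense(self):
        return self.value

class Agent:
    """One sensor with a static limit zone, one actuator, one memory."""
    def __init__(self, limit=10.0):
        self.limit = limit      # static limit zone: keep |value| < limit
        self.memory = {}        # situation template -> (command, reinforcement)

    def situation(self, reading):
        # Crude template of the sensed value (stands in for association).
        return round(reading)

    def act(self, reading):
        if abs(reading) < 0.8 * self.limit:
            return 0.0          # limit zone not approached: stay idle
        # Limit zone about to be reached: follow the preferred path if one
        # is memorized for this situation, otherwise try a corrective guess.
        key = self.situation(reading)
        if key in self.memory:
            return self.memory[key][0]
        return -1.0 if reading > 0 else 1.0

    def learn(self, before, command, after):
        # Associate the consequence with the situation: reinforce the
        # template if the situation improved, otherwise create a new one.
        key = self.situation(before)
        _, score = self.memory.get(key, (command, 0))
        if abs(after) < abs(before):
            self.memory[key] = (command, score + 1)
        else:
            self.memory[key] = (-command, 0)

u, a = Universe(), Agent()
for _ in range(200):
    before = u.sense()
    command = a.act(before)
    u.step(command)
    if command != 0.0:
        a.learn(before, command, u.sense())
print(len(a.memory), "templates learned; final value:", round(u.sense(), 2))
```

This is only a sketch under the stated assumptions; the point is the shape of the loop (sense, compare to limit zones, act along a preferred path, associate the consequence), not the particular toy universe.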
This specification of a strong AI starts from the bottom level.
High-level process requirements will come later and will depend mostly on experiments and on competing programming techniques. This approach is close to a connectionist approach.
We must get rid of useless complexity. Complexity will come easily later!
The term "AI-complete" is built on the same template as "NP-complete" already existing in computational complexity theory. In order not to waste time in definitions, let us admit that AI-complete programs provide "strong" artificial intelligence. (see AI-complete and NP-complete on Wikipedia).
They tell us that AI-complete problems can't be solved by a simple algorithm.
Not a specialized one, for sure, but why not a simple one?...
Problems similar to AI-complete problems exist in reduced universes.
A reduced universe is either a part of our entire universe or a simplified simulation running on a computer. In order to get artificial intelligence, we don't need computer vision first. We don't need language recognition first. We need universal components of general intelligence. We will later use these universal components in a similar way for computer vision, language recognition, and abstract concept handling.
Within a reduced universe Ur1, an algorithm A is able to solve many problems.
Within a reduced universe Ur2, the same algorithm A is able to solve similar problems and other problems too.
Within our real, complete universe U, the same algorithm A is able to solve many of the problems it encounters.
How do you figure out this algorithm A?
Will it be exactly the same for all the given universes?
Hypothesis: provided that we adapt the sensors and actuators to each universe, I guess that we can keep the same algorithm.
I don't pretend that there is only ONE algorithm A. I guess that a general algorithm, A or B, may solve a significant number of known and newly encountered AI problems within different universes. "A" may be better than "B". Never mind. If these algorithms meet the minimum requirements, they can both run a strong AI system. Implementing and improving A or B is a matter of efficiency. This improvement will come later.
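As a hedged illustration of this hypothesis, the sketch below wires one and the same loop ("algorithm A", here reduced to a trivial corrector that pushes the sensed value back toward zero, away from its limit zones) to two different toy universes through adapted sense()/step() interfaces. The universes ThermalRoom and Cart, and the goal of keeping the sensed value near zero, are assumptions made purely for illustration.

```python
from typing import Protocol

class UniverseLike(Protocol):
    def sense(self) -> float: ...
    def step(self, command: float) -> None: ...

class ThermalRoom:
    """Ur1: the temperature drifts upward; the actuator is a cooler."""
    def __init__(self):
        self.temp = 5.0
    def sense(self) -> float:
        return self.temp
    def step(self, command: float) -> None:
        self.temp += 0.5 + command      # command < 0 cools the room

class Cart:
    """Ur2: the position drifts left; the actuator is a motor."""
    def __init__(self):
        self.pos = -3.0
    def sense(self) -> float:
        return self.pos
    def step(self, command: float) -> None:
        self.pos += -0.3 + command      # command > 0 pushes right

def algorithm_A(universe: UniverseLike, steps: int = 100) -> float:
    """One unchanged loop, reused in both universes: push the sensed
    value back toward zero, i.e. away from the limit zones."""
    for _ in range(steps):
        universe.step(-0.5 * universe.sense())
    return universe.sense()

print("Ur1 final state:", round(algorithm_A(ThermalRoom()), 2))
print("Ur2 final state:", round(algorithm_A(Cart()), 2))
```

Only the sensor/actuator adapters differ between Ur1 and Ur2; the loop itself is untouched, which is exactly what the hypothesis claims.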