GB2338315A: Artificial intelligence or common-sense machines

Publication number: GB2338315A
Authority: GB (United Kingdom)
Application number: GB9812300A
Inventor and applicant: John Vassell Jackson
Legal status: Withdrawn
Classification: G06N5/00 (Computer systems using knowledge-based models)
Prior art keywords: state, construct, means, perception, action

Abstract

A machine having artificial intelligence or common-sense includes: a collection of primitive, freely-associatable constructs (demons) 40, including an action construct, each of the constructs being switchable between a 'standby' state [in the stands 34] and an 'active' state [in the arena 32]; means [sub-arena 36] for controlling the states of the constructs; and means [e.g. motors 24, 26, 28] for performing a task related to the action construct. The constructs have links between each other, the strength of each link depending on how much association there is between the linked constructs for the operation for which they are designed. The sub-arena is operable to test, when the action construct is in its active state, whether the task performing means will be/is/has been able to perform its task, and to set a sub-state of the action construct, when in its active state, as between an 'unreal' state and a 'real' state in dependence upon the test. If the tested action construct cannot perform its task, it is replaced [in the arena] by one of the constructs from the stands, in the hope that the newly selected construct will be able to perform the task with the already selected constructs [those in the arena].

Description

TITLE Artificial Intelligence or Common-Sense Machines

DESCRIPTION

This invention relates to machines which employ artificial intelligence or common-sense.

In particular, the present invention is concerned with a machine including: a collection of primitive, freely-associatable constructs (or "demons") including an action construct, each of the constructs being switchable between a standby state ("in the stands") and an active state ("in the arena"); means (a "sub-arena") for controlling the states of the constructs; and means for performing a task related to the action construct.

Such a machine is derivable from the paper JACKSON, John V., "Idea for a Mind", Sigart Newsletter, No. 181, July 1987, pp 23-26, which is set out below for ease of reference:

'A computer can only do what it's been told to do.' Is this really true? The Japanese have made disproving this notion a national objective. One of the best answers to it is still 'What if you teach a computer how to think?' The counter riposte might then be 'But there are no rules for thinking. The best thinking is inspirational, creative. No one knows how this is done'.

However, some people believe that humans operate mainly through a complex structure of habits punctuated occasionally by copying and recombining concepts and actions, with even the processes for generating new actions and thought being higher level habits adjusted to fit the circumstances. Instead of trying to elevate computers to a higher plane, we might consider humans as 'systems that know how to think'.

What would we have to consider if we were trying to design a mind? What functions should it be capable of and what would be its likely characteristics? We should try to tackle many mental functions together in case some depend on each other. We might as well also assume that the model be surprisingly simple; it should be based on what we know of the essential characteristics from the working examples we have available, both natural and man-made, but we need not specify every characteristic ourselves. Having chosen the bare essentials well, we should expect further characteristics to appear spontaneously.

We should consider the useful contributions of man-made expert systems, and also of the behaviouristic rules of animal learning. The latter is a much maligned field which however does actually work within limits; it is worth remembering that many of its principles are successfully applied to millions of humans every day.

The study of simple behaviour suggests that links are established between stimuli, and between stimuli and responses. It also suggests that the strength of the links depends on the extent to which the stimuli and responses affect, or seem likely to affect, the drive levels of the subject - worthwhile rewards speed up learning. Direct links between most kinds of events formed more than 5 seconds apart are hard to create.

The 'behaviourists' studiously ignored invisible internal phenomena such as 'thoughts' - they claimed 'introspection' was unscientific. However, most people would agree that thoughts often seem to have similar characteristics to stimuli and actions. We should keep an open mind on internal mental events; if a philosopher frowns at us, we can always say we're technicians! More importantly, too-simple systems tend to have difficulty with complex pattern recognition.

Expert systems must be taken account of as they are earning their keep at tasks which were more difficult or impossible before their advent and which sometimes rival the wisdom of expert humans. Very roughly, their basic structure is often much like ordinary computer programs with lots of rules or statements of the form: IF (... conditions...) THEN (... actions...) but with the rigid structure dissolved away; the actions from a rule possibly providing the conditions for any other rule that can use it, not just the one immediately below it.

Sometimes such a system works in cycles with external conditions being applied to the mass of rules as a whole, with some actions acting internally as conditions and others acting externally.

This sort of thing resembles a behaviouristic reactive system run by rules instead of stimulus-response pairs. The basic operation of such systems involves a collection of entities, in many ways equal, occasionally coming into action, and sometimes bringing others into action.

There is already a psychological theory known as the 'Pandemonium Theory of Perception' (Oliver Selfridge) which works in a similar fashion. It uses 'demons', each of which is a kind of rule which responds immediately it is stimulated. The Pandemonium theory suggests that we identify an object by applying its component details to a crowd of demons which 'shout' to a degree determined by how well they match their input, the demon best matching the input shouting the loudest and being taken as the identification of the object. For example, a circle crossed by a diagonal line from its centre to its bottom right would stimulate the demons for the letters R and O slightly, but most of all the demon for the letter Q.
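The shouting-and-selection mechanism just described can be sketched, purely by way of illustration, in a few lines of Python (the feature names and scoring rule here are hypothetical, not taken from the paper):

```python
# Each letter demon knows the visual features it responds to; the
# feature names are made up for this sketch.
demons = {
    "O": {"circle"},
    "R": {"vertical-bar", "loop", "diagonal-tail"},
    "Q": {"circle", "diagonal-tail"},
}

def loudest(features):
    # A demon's "shout" is how many of its features appear in the input;
    # the loudest demon is taken as the identification of the object.
    shouts = {letter: len(parts & features) for letter, parts in demons.items()}
    return max(shouts, key=shouts.get)
```

With this sketch, a circle plus a diagonal tail stimulates O and R slightly but Q most of all, so loudest({"circle", "diagonal-tail"}) returns "Q".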

If we extend the Pandemonium theory beyond perception, we can envisage a system consisting almost entirely of demons, each occasionally shouting. Some are involved with external perception, some cause external actions, and some act purely internally on other demons. Since the mind seems able to concentrate on a small number of things at once, we might consider the mass of demons to be the crowd in a stadium, and the selected few at any time (half a dozen to a dozen) down in the arena causing the crowd to shout.

At any moment the demon from the crowd shouting the loudest is selected to take its place in the arena; one already there being displaced, returning to the crowd.

Remembering from behaviourism that we want to vary the strength of links between entities, we will try to strengthen links between demons in the arena proportionally to the time they have been in it together. Since we want this strengthening to depend on the motivational levels, we could turn up the 'gain' when things were going well, and turn it down (or even make it negative) if things got worse. Demons (eg action demons) would tend to reappear if they were associated with improved conditions and not to be selected if the outcome was disappointing. This would tend to cause the system's behaviour to steer it away from or out of disagreeable situations, and vice versa.
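The gain-modulated strengthening rule might be sketched as follows, assuming a simple symmetric update per cycle (the demon names and numeric values are illustrative only):

```python
from collections import defaultdict

links = defaultdict(float)      # (from_demon, to_demon) -> strength

def strengthen(arena, gain, dt=1.0):
    # Every ordered pair of demons sharing the arena has its link
    # strengthened in proportion to their time together (dt), scaled
    # by a motivational gain that may be negative when things worsen.
    for a in arena:
        for b in arena:
            if a != b:
                links[(a, b)] += gain * dt

strengthen(["see-food", "approach"], gain=+0.5)   # things going well
strengthen(["see-trap", "approach"], gain=-0.5)   # outcome disappointing
```

Demons associated with improved conditions thus accumulate positive links and tend to reappear, while those tied to disappointments are suppressed.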

If we arranged that links from old demons to recently arrived ones were stronger than the reverse we would cause a sequencing effect, especially if the demons gradually faded from the arena. Sequences of stimuli which had been experienced repeatedly in the past could be completed from memory if their initial sections were presented. The system would then be able to predict certain commonly occurring sequences of its environment, a very valuable characteristic.
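One hedged way to realise this old-to-new asymmetry is to weight the update by arrival order; the 'age' bookkeeping and the particular weights below are assumptions, not part of the paper:

```python
links = {}

def strengthen_sequence(arena_ages):
    # arena_ages maps each arena demon to the number of cycles it has
    # spent there; links from longer-resident (older) demons to newer
    # arrivals are strengthened more than the reverse direction.
    for a, age_a in arena_ages.items():
        for b, age_b in arena_ages.items():
            if a != b and age_a > age_b:                    # a arrived before b
                links[(a, b)] = links.get((a, b), 0.0) + 1.0   # forward link
                links[(b, a)] = links.get((b, a), 0.0) + 0.2   # weaker reverse

strengthen_sequence({"lightning": 5, "thunder": 3, "rain": 1})
```

After repetition, presenting the start of a familiar sequence ("lightning") biases selection towards its usual successors, giving the completion-from-memory effect described above.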

This system of demons recirculating through the arena constitutes an 'association engine'. 'Beneath' the arena is an entity (the 'sub-arena') which measures the 'well-being' of the system from moment to moment and adjusts the gain on the link strengths between demons in the arena. The sub-arena also performs sensory input by transferring from the crowd into the arena demons representing low-level sensations. Those demons brought into the arena by the association engine which represent low-level actions are 'carried out' by the sub-arena.

A further useful development would be to merge certain demons together. Those selected for merging would have appeared together very frequently, the links gradually becoming stronger and stronger. The component demons would probably survive the creation of the new 'concept demon'. Conceptualisation would allow larger problems to overcome the bottleneck of the arena's capacity. It would also allow higher level features of one problem to be shared by other problems, enabling solutions to be transferred between different problems with similar high level representations. This would often give the impression of creativity. Incidentally, explicit concept demons would not be vital for all forms of generalisation - different objects of the same class might arouse similar responses simply because they shared similar features.

The struggle to overcome the arena bottleneck looks like being a recurring theme with the system. Compound concept demons (which would allow the use of hierarchies) are one way of linking demons which would not usually share the arena. 'Dreaming' is another way: by turning off sub-arena interference, especially the external sensory channels, and letting the association engine 'free-wheel', demons would only be brought in by association. Since they will not now be interspersed by external sensory demons, memory demons which did not quite share the arena will now do so and links will be formed over longer distances. Another result will be that the common aspects of demon sequences with minor differences will be strengthened, even more than through repeated experience with reality. The variable aspects will be winnowed out and the internal representations tidied up. (To digress, it is interesting to note that mental abilities in the natural world are most apparent in warm-blooded animals whose brains can remain active 'at night'.) Ruminating over past experiences allows 'long distance learning'. Since connections can only be made between demons firing within a few seconds of one another, some learning (eg. learning through experience which time of the year to plant crops) can only be done with remembered representations of the action, the whole episode passing through the arena at many times normal speed.

It is interesting to consider how the association engine relates to the problem of 'parallelism'. Is the mind one big computer or lots of little ones? When many processes work in parallel on a task, their cooperation becomes a problem - they often spend more time coordinating than doing. On the other hand, a serial processor doing one thing at a time is too slow, leaving most of its hardware unused at any moment. Our system compromises by scanning all the demons in the crowd simultaneously but spinning the selected demons into a single strand - the memory access is the bit you can do in parallel without ruining your efficiency. Ideally, each cycle, we would simply 'pull a lever', and the loudest shouting demon would fall out from the crowd. This would seem an excellent application for a parallel computer, but if we point the shouting the other way, we only need consider the few demons in the arena each time. The system would now resemble a stream of witnesses being summoned to, and removed from, a courtroom, and would run well on an ordinary computer.

The system might seem rather 'random' in its operation, but this merely reflects its considerable flexibility; higher level concepts can rub shoulders with basic perceptual elements - useful for example in speech comprehension where meaning, grammar and sounds must be processed together. We only need consider the plasma of a living cell to see that complex systems can operate amidst apparent chaos.

Flexible and speedy indexing is always a problem in complex cognitive systems; considering our system it is reassuring to see that the database consists almost entirely of pointers. 'If you want the answer to something, you simply load it into the arena and hey presto, you get immediate access to ...' the next demon. It is of course the responsibility of the system or its trainer to ensure the appropriateness of the next demon! If the feeling that 'it has too much of a mind of its own' persists, it should be noted that behaviour tends to get less 'predictable' as it becomes more advanced. True intelligent behaviour has so far been restricted to free agents eg. animals of some kind. They were not made to obey orders but to 'manage' in any way they could find. To hope to design a totally directable but highly flexible and creative system would be like trying to fly a model aircraft but not wanting to let go of it.

John Bridle's recent quote as joint head of the RSRE speech research unit is interesting and encouraging. He says of self-learning machines or adaptive networks: '... that can acquire an internal structure suitable for solving a pattern recognition job simply by being exposed to sufficient examples. We're talking of interactions between lots and lots of processors, something that is massively parallel, sub-symbolic, and has some method of adapting to repeated patterns so as to be able to modify its behaviour to be appropriate for the job. The reason for calling them self-learning machines is that this adaptive behaviour results from a mechanism which is actually part of the machine, as opposed to running some algorithm on the computer. These things can be thought of as actually realisable in hardware directly. It looks as though they could be very important indeed.' The system does seem able to carry out many of the functions of a mind:

deduction - combining rules (selecting the next demon) - as well as induction - learning its own rules through repeated observation (adjusting link strengths between demons). This latter it carries out on many demons at once. It would tend to combine perception and complex actions, continuously alternating between the two and making each of them the richer for it. Its predictive abilities would enable it to model the world to some extent. It could use trial and error methods by fixing inhibitory links to demons which when summoned proved disappointing. It would learn links to 'successful' demons, but like people, it would find long periods of unproductive thought depressing.

It can be thought of as an architecture for a computer which can have programs written for it; it can be thought of as an 'experience processor', or a system that tries to match the 'right' actions to stimuli; in some ways it would resemble a goal-seeking system; it should have the ability to carry out simple tasks as a direct function of its essential architecture, but to build up more complex systems if the job requires.

If it could be shown to have, or be provided with, the abilities to compare a desired and complex state of reality with a state that it knew how to achieve, and to convert a high level theoretical representation of a plan into a series of real actions, then it might demonstrate some interesting abilities. Learning how best to interpret things and how to carry something out are not the easiest things for even a human to carry out.

But isn't all thought based on logic though? Well, if logic was the root of all thinking wouldn't we find it a lot easier? It seems to fit animals' behaviour only where it touches, so to speak. Pure logic in various forms has been trying to squeeze its foot into the glass slipper for quite a while now. It works well with bodies of knowledge that are thoroughly known. It is however often necessary to operate in an environment that is imperfectly known, or that is changing. Hopefully a 'logic program', which in some ways represents a desirable state of total mastery of a field of knowledge, could be written to run on an association engine. Ideally a system based on association would, after a diligent apprenticeship, achieve a mode of operation which at a slightly higher level would look very much like a logic system. It would moreover be largely self organising, and could start to operate much earlier. We could then convert logical statements representing nuggets of knowledge into associated demon form, and feed it to the system as high level advice.

However it is important to remember that a sequence of logical thought trails behind it a ruthlessly convincing, ready-made explanation. A primitive association engine, after a strange piece of work in an area where only one of its kind could boldly go, would be likely to explain itself with a shrug of the shoulders and the irritating phrase 'just experience'. It would of course be able to supply a list of demons it had used in order, but if it had been working on its own for a while, a non-trivial session of talking around the subject might be required in order to explain the full significance of new concepts it had used.

The association engine described above has further problems (for example, what is the best way for it to distinguish remembered stimuli from real ones?) but it has one overwhelming virtue: it is an ideal vehicle for running the evolutionary processes which we now see to be so powerful. Not only can it evolve useful thoughts itself, but it is simple enough for its own characteristics to be investigated and perfected using techniques of evolutionary design.

The above paper is also discussed in detail in FRANKLIN, Stan, "Artificial Minds", MIT Press (Bradford Book), ISBN 0-262-06178-3, pp 234-243, to which reference is directed.

The machine of the present invention is characterised in that: means is provided for testing, when the action construct (or action demon) is in its active state, whether the task performing means will be/is/has been able to perform its task; and the state controlling means is operable to set a sub-state of the action construct, when in its active state, as between an 'unreal' state and a 'real' state in dependence upon the testing means.

In one embodiment, the task performing means is actuated in response to the action construct assuming its active state; if the testing means determines that the task performing means is unable to perform the task, the state controlling means sets the state of the action construct to unreal; and if the state of the action construct is set to unreal, the task performing means is subsequently actuated again.

The machine may further include: a perception construct in the collection of constructs; means for realising a perception related to the perception construct; and means for testing whether the perception realising means will be/is/has been able to realise its perception; wherein the state controlling means is operable to set a sub-state of the perception construct, when in its active state, as between an 'unreal' state and a 'real' state in dependence upon the testing means. In this case, in one embodiment, the perception realising means is actuated in response to the perception construct assuming its active state; if the testing means determines that the perception realising means is unable to realise the perception, the state controlling means sets the state of the perception construct to unreal; and if the state of the perception construct is set to unreal, the perception realising means is subsequently actuated again.

The action construct may be one of a plurality of such action constructs; and/or the task performing means may be one of a plurality of such task performing means for performing different tasks; and/or the perception construct may be one of a plurality of such perception constructs; and/or the perception realising means may be one of a plurality of such perception realising means for realising different perceptions.

In one embodiment, the state controlling means operates cyclically and is operable in each cycle: to attempt to execute an action construct and/or realise a perception construct, if any, which has been set to its active state in that cycle; and to attempt to execute an action construct and/or realise a perception construct, if any, which has been set to its unreal state in an earlier cycle.

As in the earlier proposal, each construct may have at least one variable strength link to each of the other constructs, the state controlling means controlling the active and inactive states of the constructs at least partly in dependence upon the strengths of the links between the constructs. In this case, means is preferably provided for changing the strength(s) of the link(s) and/or the way in which the control depends on the link(s) of the or each action or perception construct upon its switching between its real and unreal states. More particularly, the changing means is preferably operable to cause the state controlling means temporarily to utilise a link from at least one of the action or perception constructs rather than a link to that construct and/or to utilise a link to at least one of the action or perception constructs rather than a link from that construct when the state of that construct is active and unreal.

Specific embodiments of the present invention will now be described, purely by way of example, with reference to the accompanying drawings, in which:

Figure 1 is a schematic drawing of a robot forming a first embodiment of the invention; Figure 2 is a conceptual drawing to assist in explaining the artificial intelligence or common-sense of the robot of Figure 1.

Referring to Figure 1, the robot 10 comprises a processor 12, memory 14 (which may include volatile and non-volatile memory), input/output circuits 16 and a rechargeable power supply 18 having a connector 19 for connection to a mains supply for recharging the power supply 18. The I/O circuits 16 are connected to a number of input devices, including a position sensor 20 for sensing the position of the robot 10 and a charge level sensor 22 for sensing the state of charge of the power supply 18. The I/O circuits 16 are also connected to a number of output devices, including drive motors 24 for moving the robot 10, a connector motor 26 for connecting the connector 19 to a mains socket, and a switch motor 28 for switching on the mains socket.

The processor 12 is programmed so that the robot 10 operates conceptually as described in Jackson, supra, and furthermore as will now be described with reference to Figure 2. The robot 10 maintains what can be thought of as a stadium 30 having an arena 32, stands 34, a sub-arena 36, and a collection of demons 40, which may be hundreds or thousands in number, depending on the complexity of the robot. Most of the demons 40 are in the stands 34 (in a "standby" state), but a predetermined number of them, for example seven, are in the arena 32 (in an "active" state). The sub-arena 36 operates cyclically, and once per cycle causes one of the demons 40 in the stands 34 to move to the arena 32, and one of the demons in the arena 32 to move to the stands 34.

The algorithms or mechanisms for doing this are generally as described in Jackson, supra. As explained there, each demon 40 has a link to each other demon 40. The sub-arena 36 monitors the well-being of the robot 10, or its ability at achieving its goals, and adjusts the strengths of the links so as to improve it. Generally, in each cycle, for each demon 40 in the stands 34, the sum of the strengths of the links from each demon 40 in the arena 32 to that demon 40 in the stands 34 is calculated, and that one of the demons in the stands 34 with the highest aggregate score is voted into the arena 32.
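The voting rule just described can be rendered almost literally in code, purely by way of example (the demon names and link strengths below are illustrative):

```python
def vote_in(arena, stands, links):
    # links[(a, b)] is the strength of the directed link a -> b;
    # each stands demon's score is the sum of link strengths from
    # every arena demon to it, and the highest scorer is promoted.
    def score(candidate):
        return sum(links.get((a, candidate), 0.0) for a in arena)
    return max(stands, key=score)

links = {("hungry", "seek-food"): 0.9, ("hungry", "sleep"): 0.2}
```

With "hungry" in the arena, vote_in(["hungry"], ["seek-food", "sleep"], links) promotes "seek-food", the stands demon with the highest aggregate score.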

In the embodiment of the invention, at least some of the demons 40, when in the arena 32, can have an unreal state and a real state. In particular, "action demons" have an unreal state when they are in the arena 32 but are unable to cause their particular action to be performed, and a real state when the action can be or has been performed. Also, "perception demons" have an unreal state when they are unable to perceive that which they are designed to perceive, and a real state when they can do so or have done. When an action or perception demon 40 enters the arena 32, the sub-arena 36 attempts to execute or realise it. If successful, the state of the demon is set to real, and if unsuccessful, the state of the demon is set to unreal. Then, with every cycle of the sub-arena 36 a check is made to see whether there are any unreal demons 40 in the arena 32, and if so the sub-arena 36 attempts to execute or realise at least one of them, for example the strongest one. If successful, the state of that demon is changed from unreal to real.
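A minimal sketch of this per-cycle real/unreal bookkeeping, with a stand-in predicate in place of the actual motors and sensors (the demon names are hypothetical):

```python
def sub_arena_cycle(arena, can_execute, state):
    # can_execute(demon) -> bool stands in for the real motors/sensors;
    # demons already flagged "real" are left alone, while others are
    # tried (or retried) and flagged according to the outcome.
    for demon in arena:
        if state.get(demon) != "real":
            state[demon] = "real" if can_execute(demon) else "unreal"
    return state

# e.g. "plug-in" cannot yet be carried out, while "move" can:
state = sub_arena_cycle(["plug-in", "move"], lambda d: d == "move", {})
```

Running the cycle again once conditions change (when can_execute starts succeeding for "plug-in") flips that demon from unreal to real, as described above.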

The well-being of the system tends to improve when certain of the demons 40 have come into the arena 32 in a particular temporal order, and accordingly the strengths of the links between those demons 40 in one direction are increased. If one of those demons 40 is an unrealisable demon and it enters the arena 32 "out of order", then the system is improved if more is done to encourage the earlier missing demon into the arena 32. Accordingly, in the embodiment of the invention, when the decision is made in each cycle as to which of the demons 40 in the stands 34 to vote in, in the case of an unrealisable demon in the arena 32, the strength of the link TO that demon 40 in the arena 32 FROM the demon 40 in the stands 34 is used, rather than the strength of the link from that demon 40 in the arena 32 to the demon 40 in the stands 34. In other words, the realisable demons 40 in the arena 32 vote for the demons 40 in the stands 34 according to the strengths of the links from the demons 40 in the arena 32 to the demons 40 in the stands 34, and the unrealisable demons 40 in the arena 32 vote for the demons 40 in the stands 34 according to the strengths of the links from the demons 40 in the stands 34 to the demons 40 in the arena 32.
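The direction flip for unrealisable demons might be sketched as a scoring function, purely by way of example (demon names and strengths are illustrative; a strong link from a "move" demon to a "recharge" demon reflects their normal temporal order):

```python
def score(candidate, arena, links, unreal):
    # candidate: a demon in the stands; arena: the active demons;
    # unreal: the subset of arena demons flagged unrealisable.
    total = 0.0
    for a in arena:
        if a in unreal:
            # unrealisable demons vote with the links pointing AT them
            total += links.get((candidate, a), 0.0)   # stands -> arena
        else:
            # realisable demons vote with their outgoing links
            total += links.get((a, candidate), 0.0)   # arena -> stands
    return total

# A strong link in the normal temporal order: move precedes recharge.
links = {("move-to-socket", "recharge"): 0.8}

# With "recharge" in the arena but unrealisable, the flip lets it
# pull its prerequisite in from the stands:
s = score("move-to-socket", ["recharge"], links, unreal={"recharge"})
```

Without the flip (unreal empty), the same candidate would score 0.0, since no link runs from "recharge" to "move-to-socket"; the flip is what draws in the earlier missing demon.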

As a specific example in the case of the robot 10, suppose that the demons 40 include the following:

a "move-to-the-mains-socket" demon; this is an action demon which when executed via the sub-arena 36 causes the drive motors 24 to be operated so that the robot 10 moves to the mains socket; an "at-the-mains-socket" demon; this is a perception demon responsive via the sub-arena 36 to the position sensor 20; a "plug-into-the-socket" demon; this is an action demon which when executed via the sub-arena 36 causes the connector motor 26 to plug the connector 19 into the mains socket; this demon is unrealisable unless the robot 10 is at the mains socket; a "switch-on-the-socket" demon; this is an action demon which when executed via the sub-arena 36 causes the switch motor 28 to switch on the mains socket; this demon is unrealisable unless the robot 10 is at the mains socket; and a "recharge" demon; this is an expiation demon and is unrealisable until the connector 19 is plugged into the mains socket and the mains socket is switched on.

When these demons come into the arena 32 in the order stated, they lead to the recharge expiation demon being satisfied, and so the strengths of the links from one demon to the next in the order stated would be relatively strong. The links in the other direction might be relatively weak.

Suppose that the charge level of the power supply 18, as sensed by the charge level sensor 22, falls, and suppose that none of the above five demons is in the arena 32.

The lower the charge level goes, the more the sub-arena 36 excites the recharge expiation demon, until it is voted into the arena 32. The recharge demon is unrealisable and is flagged as such, and so it votes for the demons in the stands 34 according to the strengths of their links to it, rather than its links to them. This is more likely to encourage the other four demons, and in particular the switch-on-the-socket demon into the arena 32.

Suppose the switch-on-the-socket demon enters the arena 32 next. It too is flagged as unrealisable. Suppose too that the plug-into-the-socket demon (unrealisable), at-the-mains-socket demon (unrealisable) and move-to-the-mains-socket demon enter the arena 32 in that order. The move-to-the-mains-socket demon is realisable, and so it is executed.

Once the robot 10 reaches the mains socket, this is perceived by the at-the-mains-socket demon, and the plug-into-the-socket and switch-on-the-socket demons are then flagged as realisable and are executed. Once executed, the recharge expiation demon is satisfied, and the sub-arena increases the strengths of the links in the direction from the move-to-the-mains-socket demon to the recharge demon.

When implementing the above concept in the robot 10, it will be appreciated that the demons 40 do not physically move between stands 34 and an arena 32. In practice, each demon 40 is represented by data in a respective portion of the memory 14, including a flag indicating whether that demon is in the stands or in the arena and a flag indicating whether that demon is realisable or unrealisable. The links between the demons may be contained in an N x N array of variables (where N is the number of demons). When a particular demon is unrealisable, the strength of the link to it from another demon may be looked up in the array, say, at the row for the particular demon and the column for the other demon, and when the particular demon is realisable, the strength of the link from it to the other demon may be looked up in the array at the column for the particular demon and the row for the other demon.
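This lookup convention can be stated directly in code; the indexing below follows the row/column description above, with W[i][j] holding the strength of the link from demon j to demon i (an assumption consistent with, but not mandated by, the text):

```python
def link_strength(W, particular, other, realisable):
    # W[i][j] holds the strength of the link from demon j to demon i.
    if realisable:
        # link FROM the particular demon TO the other demon:
        # column = particular, row = other
        return W[other][particular]
    # unrealisable: link TO the particular demon FROM the other demon:
    # row = particular, column = other
    return W[particular][other]

W = [[0.0, 0.3],   # W[0][1]: link from demon 1 to demon 0
     [0.7, 0.0]]   # W[1][0]: link from demon 0 to demon 1
```

For demon 0 against demon 1, the unrealisable lookup returns 0.3 (the link to it) and the realisable lookup returns 0.7 (the link from it), matching the direction flip described earlier.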

A further embodiment of the invention, referred to as a Pandemonium Association Engine, will now be described. An introduction is presented to the workings of the Pandemonium Association Engine: a sequential neural net and thus a (novel) form of adaptive non-linear network. It is designed as an experience processor, to implement artificial common sense and a certain degree of intelligence, both as a tool for understanding natural minds and as a robot mind or component thereof in its own right. Its bywords are Induction and Flexibility. Though only in its crudest initial form, it shows a capacity to perform instrumental learning tasks and to discover and exploit even clashing sequences of goal achieving opportunities automatically.

Origins

Two main beliefs lay behind this model, as explained in Jackson, supra: that an intelligent system would require new heights of flexibility, and that an inductive learning facility should be available to as much of the system as possible. The deductive powers of the Prolog language (see Clocksin W.R., Mellish C.S. "Programming in Prolog", Springer-Verlag, Berlin, 1981), and its abilities to carry out simple operations through its basic operating cycle but to build up more complex abilities through a structure of commands, were also valued, and these features, combined with flexibility and induction, were considered adequate bases for an interesting development. However, as the deductive abilities of a Prolog program can easily exceed those of a human, let alone an intelligent non-human, their level of performance would not be essential.

Induction - the acquisition of new basic rules - is demonstrated by natural systems, and an extensive animal learning literature is available. It was felt that the gradual strengthening of linkages between atoms in a flexible system could only be implemented through a large number of "Connectionist" links. Genetic algorithms were considered too slow and indirect, and lacked plausibility as anything like the true operation of the brain, to be used as the basic operation (though they may play a useful role in the design process).

Links would need to be formed between all mental elements - stimuli, responses, and intermediates of all kinds; the chaining of these into meaningful sequences seems to be a characteristic of minds. Such links in natural systems seem to arise through temporal adjacency. How adjacent? How many links? A leap of faith was made, and the "Short Term Memory" (see P.H. Lindsay, D.A. Norman "Human Information Processing", Academic Press, NY/London 1973) familiar to psychologists was pressed into service as the crucible or cockpit of the associative process. This provided a capacity of "about seven" (see G.A. Miller "The Magical Number Seven, Plus or Minus Two: Some limits on our capacity for processing information", Psychological Review Vol 63 pp 81-97) and the feature of gradually fading items. Links would be formed between elements "in" together, and if the strongest links were from older to newer elements, past sequences would tend to be repeated or completed. Strengthening the links in "good times" and weakening them in "bad" might be expected to cause successful behaviours, and indeed perceptions, to increase in frequency. The links often express expectations, and a limited consensus of the best of these determines the next mental act (which may also be physical).

Quite early, it was noticed that the perception side of the system bore a considerable resemblance to the first item the present inventor learned as an undergraduate: Selfridge's "Pandemonium" theory of perception (see O. Selfridge, U. Neisser "Pattern recognition by machine", Scientific American 1960 vol 203 pp 60-68), where potential identifications of raw input to the visual system simply "shout" simultaneously, like the crowd at a stadium - the loudest being accepted. The shouting elements were called demons, presumably partly because pandemonium expresses the chaotic nature of the process. (Nowadays, the related notion "random access" is associated with the desirable characteristic of minimum latency.) With the "Pandemonium Association Engine" (PAE) the perception demons are directed through the "arena" (the short term memory) along with other sorts of demons.

In this implementation the demons in the arena shout the other way, out at the demons in the crowd. They shout loudest to demons they have shared the arena with most often during "good" times (with high "gain") in the past, though the reverse can result in strong negative links in this implementation. Only A(A-1)/2 links are adjusted each cycle, where A is the arena size, but up to N (the total demon number) votes must be counted, though happily it would be hard to imagine a more parallelisable process. [In C++ on a 486 machine, these experiments with about 30 demons (Ds) executed over 2000 arena cycles (ACs) per second.] The small matter of "what is good" is decided by a component named the "sub-arena", relatively unintelligent but much more complex than the arena itself. Nice temperature? Good - turn up the gain. Hungry? Not so good. Damage sustained? Worse. Just this task is a subject in itself. Moreover, the sub-arena also encourages basic perceptual components into the arena, as well as "executing" action demons that appear in the arena, though these demons may on occasion be interpreted as triggers or parameters to well-practised behaviours.
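The election step described above - arena demons shouting out at the crowd, the loudest-supported crowd demon winning - can be sketched as below. This is an illustrative sketch, not the patent's code: the function names, data shapes, and weighting of each shout by the shouter's own (fading) arena strength are assumptions.

```python
def elect(arena, crowd, link):
    """Pick the next demon to enter the arena.

    arena: list of (demon, strength) pairs currently "in";
    crowd: candidate demon ids in the stands;
    link(a, d): strength of the link from arena demon a to crowd demon d.
    """
    votes = {d: 0.0 for d in crowd}
    for a, s in arena:
        for d in crowd:
            # each arena demon shouts at each crowd demon,
            # weighted by its own arena strength
            votes[d] += s * link(a, d)
    return max(votes, key=votes.get)   # the loudest is accepted
```

Each arena demon contributes one weighted shout per candidate, so the vote count per cycle is bounded by the total demon number, and every shout is independent - hence the remark on parallelisability.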

Features

Capacity of arena (number of demons): 7 works well
Total number of demons in system: to 40
Fade Factor (decay of arena Demons/arena cycle): 70% typical
Link Strength Increment/arena cycle: D1 strength × D2 strength × Gain (D2 more recent than D1)
Initial Link Strengths: all zero
Initial order of D presentation: for simple tasks, this can change the time taken to learn a task from 20 arena cycles to over a hundred, since the system can only learn the value of a D when it is called at an appropriate moment. Where a range or approximation of no. of cycles to learn a task is indicated, this is mainly due to differing initial order.
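The fade and link-increment features above can be sketched in one cycle step. This is a minimal sketch under the stated typical values; the helper names and the dictionary representation are illustrative, and the pair ordering (d1 older than d2) reflects the note that the second demon is the more recent.

```python
FADE = 0.70  # typical decay of arena demon strength per arena cycle

def arena_cycle(strengths, links, gain):
    """One cycle of fading and link adjustment.

    strengths: demon -> current arena strength (insertion order = age);
    links: (older, newer) pair -> link strength; gain: this cycle's gain.
    """
    for d in strengths:
        strengths[d] *= FADE                       # gradual fading of items
    ds = list(strengths)                           # oldest first
    for i, d1 in enumerate(ds):                    # A(A-1)/2 pairs adjusted
        for d2 in ds[i + 1:]:                      # d2 more recent than d1
            inc = strengths[d1] * strengths[d2] * gain
            links[d1, d2] = links.get((d1, d2), 0.0) + inc
```

Note that a negative gain weakens the links of whatever pairs share the arena in "bad" times, giving the strong negative links mentioned earlier.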

When arena and environment disagree

When an action demon enters the arena, the sub-arena tries to execute it, but a problem arises when its execution is impossible. Similarly (and with similar oversimplification, currently unavoidable) an object in the environment may be represented by a demon in the arena - but what if it has not been inserted by the sub-arena, just elected by other demons? The creature is merely imagining it.

Goals

Such demons are flagged as "Unreal", and treated as goals. Doesn't apply to humans? The creature isn't a human - and how do you know this principle doesn't apply to other animals, or wasn't an essential early stage in pre-human thought?! If it is inappropriate to wish for something, the creature simply learns not to wish for it. It is worth remembering that sportsmen are advised to imagine only what they wish to become real. One suspects it would be natural for a bull one had been chased by to be imagined when passing its field later, but in fact demons leading to something more positive would probably be more likely, at least in this system. Indeed, although it has been suggested (here and in Jackson, supra) that old sequences would be completed and repeated from memory, sensory D's, originally called through the sub-arena supplementing their "vote", are often omitted "in recall" unless a high overall gain was originally in force. The pain D is almost never elected.

Realising Unreals

Each cycle, any unreal demons in the arena are checked to see if they may be "realised", starting with the most recent. This asks a lot of the sub-arena; in future developments the arena should be made to play a large part in this process, hopping cunningly from one level to another. However the essential process allows subgoals to be mustered out of order and then sorted. It also affords an opportunity to implement a bucket brigade (see J.H. Holland "Escaping Brittleness - the possibilities of general purpose learning algorithms applied to parallel rule-based systems", Machine Learning vol 2, ed. Michalski R.S.; Morgan Kaufmann, Los Altos, CA) (passing "credit" along a line of sub-goals, encouraging performance of the earlier ones): each time an unreal D is realised, a gain pulse occurs, strengthening links to it.
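The per-cycle scan just described can be sketched as follows; the class and helper names are assumed for illustration, not taken from the patent. The key points it shows are the most-recent-first order and the gain pulse on each realisation.

```python
from dataclasses import dataclass

@dataclass
class Demon:
    name: str
    unreal: bool = False

def realise_unreals(arena, can_realise, gain_pulse):
    """Check unreal arena demons, most recent first, and realise any
    that the sub-arena reports as currently possible."""
    for demon in reversed(arena):          # most recent entry first
        if demon.unreal and can_realise(demon):
            demon.unreal = False
            gain_pulse(demon)              # credit flows to links *to* this D
```

Because the gain pulse strengthens links to the realised demon, credit is passed back along a line of sub-goals in the bucket-brigade manner cited above.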

Reverse Links

Unreal demons enjoy a further special feature. Consider a final goal - known in this system as an "Expiation" demon - for example "Eat". When executed it is preceded by other demons in the order: "Go to Kitchen" => "Food" => "Eat". When the expiation D "Eat" is executed, the resulting drop in drive boosts the gain and causes links in the direction of the arrows to be strengthened. Now, when hungry, the "Eat" demon is encouraged into the arena (with a frequency monotonic with the drive level). What should happen if "Eat" is unreal? It should pull in those demons that allowed it to be executed - in this case "Food" and "Go To Kitchen".
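The "Eat" example can be sketched directly: a real demon votes along the links from itself, while an unreal (goal) demon votes along the links pointing to it, so it pulls in its enablers. This is an illustrative sketch with assumed names; the link strengths are arbitrary example values.

```python
# Example links learned from the executed sequence
# "Go to Kitchen" => "Food" => "Eat"
links = {("Go to Kitchen", "Food"): 3.0, ("Food", "Eat"): 5.0}

def vote(demon, crowd, unreal):
    """Votes cast by `demon` for each crowd demon."""
    out = {}
    for d in crowd:
        if unreal:
            out[d] = links.get((d, demon), 0.0)   # what points *to* the goal
        else:
            out[d] = links.get((demon, d), 0.0)   # what the demon points to
    return out
```

An unreal "Eat" thus votes most strongly for "Food" (its immediate enabler), which once in the arena votes for "Go to Kitchen" in the same reversed manner.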

For this reason, it is extremely useful for unreal demons to vote not according to what they point to, but according to what points to them. Whether or however the brain might do this, it is too easy and useful to implement on the computer not to be employed. (At the moment, link strengths are only strengthened "backwards" like this if both D's in a pair are unreal; the best link update means for differing reality statuses is being sought.)

Tasks and performances

Simple learning: The Cyclical Jump task

The creature was placed in a virtual double chamber; it could move to the "other" cell by calling the "Jump" D. One room was "pleasant" (22°C), the other "unpleasant" (10°C); these characteristics were swapped every 10 arena cycles (AC's).

The system (seven D's in arena, 27 D's in total) was posed with some typical stimulus-response learning tasks. The creature has no unconditional (natural, automatic) responses, so we ought to consider the task "Instrumental" - learning to make a voluntary action in a particular circumstance, under reinforcement by reward or punishment - rather than "Classical": learning to attach a reflex response to a new stimulus. The tasks as set unfortunately sometimes required exact timing, and a delayed response, both of which considerably add to the difficulty.

The creature's temperature halved the difference between it and its environment each cycle. Gain was calculated as:

10 × (magnitude of movement towards 22°C) − 5 × (magnitude of difference from 22°C)

Performance on this task requires the use of a "Pain" demon, inserted by the sub-arena whenever the gain goes more negative than an arbitrary −70 (e.g. each time it initially finds itself in the cold). This acts as an attachment point for "pain avoidance" demons; without it, the tendency for D's to migrate away from "areas of negative gain" can result in solution sequences being destroyed as soon as they start to assemble. The creature learns to call the jump D immediately after the pain D after between about 20 and 100 AC's, i.e. experimental cycles 2-10. This is also the typical time to master the task when a warning demon is inserted immediately before each "climate swap"; the creature quickly places the jump demon immediately after the warning D, thus maintaining a constant desirable environment. With the gain function described above, the gain zeroises, so no further changes in behaviour occur. (NB. For the cyclical jump test shown here a gain supplement of 5 was added each cycle.) Under the basic system, however, it finds it hard to improve on jumping just after the pain D if the warning does not immediately precede the environment change. The run which included tasks A & B started with the sequence below, where the jump response to the two warning Ds is learned more quickly than average due to a fortuitous early appearance of the jump D.
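The temperature and gain dynamics for this task can be sketched as one step per arena cycle. The coefficients below (10 and 5, plus the per-cycle supplement of 5 noted for this test) are inferred so as to reproduce the run table - e.g. −55.00 when sitting still at 10°C, and 35.00 on the first cycle after a jump into the warm room - rather than quoted verbatim from the formula.

```python
def step(internal, external):
    """One cycle: the creature's temperature halves the difference
    between it and its environment, and gain is computed from the move."""
    new = internal + (external - internal) / 2.0       # halve the difference
    movement = abs(22.0 - internal) - abs(22.0 - new)  # positive = towards 22°C
    gain = 10.0 * movement - 5.0 * abs(22.0 - new) + 5.0  # +5 supplement/cycle
    return new, gain
```

Mental check against the table: starting at 10°C in the 10°C room gives gain −55.00; jumping so the environment is 22°C warms the creature to 16°C and gives gain 35.00.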

(Initial contents of arena: Ds 5-11, all strength 1.0.)

Cycle  Temperature:        Gain     Total Link  Next Demon
No.    External  Internal           strength
 1     10        10.00     -55.00    0           0
 2     10        10.00     -55.00    0           1
 3     10        10.00     -55.00    0           2
 4     10        10.00     -55.00    0           3
 5     22        10.00     -55.00    0           4 jump
 6     22        16.00      35.00    0          12
 7     22        19.00      20.00    0           7 warn01
 8     22        20.50      12.50    0           8 warn02
 9     22        21.25       8.75    0          13
10     10        15.63     -83.13    0          10 pain
11     10        12.81     -69.06    0          14
12     10        11.41     -62.03    0          15
13     22        10.70     -58.52    3           4 jump
14     22        16.35      33.24    9          12
15     22        19.18      19.12    0          16
16     22        20.59      12.06    0          17
17     22        21.29       8.53    0           7 warn01
18     22        21.65       6.77    0           8 warn02
19     22        21.82       5.88    0           3
20     10        15.91     -84.56    0          10 pain
21     22        12.96     -69.78   47           4 jump
22     22        17.48      27.61   36          12
23     22        19.74      16.31   14          16
24     22        20.87      10.65    1          17
25     22        21.43       7.83    0          18
26     22        21.72       6.41    0          19
27     22        21.86       5.71    0           7 warn01
28     22        21.93       5.35    0           8 warn02
29     10        21.96       5.18   32           4 jump
30     22        21.98       5.09   52          12
31     22        21.99       5.04   32          16
32     22        22.00       5.02   17          17
33     22        22.00       5.01   15          18
34     22        22.00       5.01   14          19
35     22        22.00       5.00    0          20
36     22        22.00       5.00    0           0
37     22        22.00       5.00   16           7 warn01
38     22        22.00       5.00   29           8 warn02
39     10        22.00       5.00   47           4 jump
40     22        22.00       5.00   65          12

In Task A below, where it did learn a delayed response after a single warning, preliminary training with two warning D's immediately prior to the jump cycle was given, and on learning, the second warning was removed (on cycle 68). At first, the creature finds it hard to resist calling the jump D immediately after the sole (the earliest) remaining warning D, since it is to the jump D that the warning Ds forged the strongest links; on the first non-appearance of the second warning demon, the jump D took its place, resulting in negative reinforcement. Since the system's initial stages of learning tend to resemble a k-armed bandit (see J.H. Holland "Adaptation in natural and artificial systems", Ann Arbor, MI: Univ. Michigan, 1975) (employing strategies in proportion to their apparent success), the jump D was then banished for 22 AC's. Its next few reappearances did not fit happily with the climate cycle, and it did not take up position immediately behind the pain D until AC 141. It tended to slipstream the pain D until it leapfrogged it with an interval of eight. A half-way-house interval of nine was impossible, since that spot was occupied by the pain D, which had priority. Leapfrogging the pain D meant building up strong links from the demons preceding it, including the warning D. This tended to move the jump D relatively forward by one (i.e. making a cycle of 9), placing it immediately after the warning D again. Sometimes, after leapfrogging the pain D, it hopped back behind it again by putting in a cycle of 12. Its diligence is eventually rewarded, however:

(Some inter-D link strengths prior to task A:

1st warn.D => 31.80 => jump; pain => 49.56 => jump; 2nd warn.D => 47.62 => jump)

Task A: Jump D overtakes pain D on AC: 169, 259, 289, 389, 449, 479, 499, 529, 599, 611, 639, 729, 799, 829.

Jump D reappearance intervals: 8, 9, 8, 15; 8, 12; 8, 9, 8, 15; 8, 9, 13; 8, 9, 13...; 8, 12...; 8, 9, 13...; 8, 9, 13; 8, 12...; 8, 12...; 8, 9, 8, 8, 17; 8, 12, 8, 12; 8, 9, 13...; 8, 9, 13...; 8, 12.

A reappearance interval of 8 means the Jump D has leapfrogged another D (e.g. the Pain D) which was sticking to an interval of 10.

859: 8, 9, 13
909: 8, 9, 13
949: 8, 12...
969: 8, 9, 11, 10, 10, 10... and success.

Task A looks like search, between 8, 9... and 8, 12. (Some inter-D link strengths after task A & prior to task B: 1st warn.D => 92.16 => jump; pain => 1403. => jump; 2nd warn.D (now omitted) => −106.82 => jump; ["Gap D" is a "grist D" inserted by the system between the first warning D and the jump D; it replaces the omitted 2nd warn.D.] "Gap D" => 76.55 => jump; 1st warn.D => −105.37 => "Gap D", less −ve than to other Ds.)

Task B shows the cycle lengths on a continuation of the same run, but after the 1st warning D, two cycles before the jump point, is removed, while the 2nd warning D is simultaneously replaced in its original position, immediately before the ideal jump point.

Task B:

Jump D overtakes pain D on AC:   Jump D reappearance interval:
1109:                            8, 8, 8, 8, 18
1219:                            8, 12...
1239:                            8, 8, 14...
1269:                            8, 8, 8, 16
1309:                            8, 13...
1329:                            8, 12...
1349:                            8, 10, 10, 10...

(Some inter-D link strengths after task B:

1st warn.D (now omitted) => −91.73 => jump; pain => 1956. => jump; 2nd warn.D (reinstated) => 208.86 => jump)

Other Jump Tasks

When the room temperature swapping interval varies randomly between 15 and 35, but is signalled by two D's immediately prior to it, the task is mastered by about 240 AC's, i.e. experimental cycle 24. When the environment doesn't swap, the jump D is eliminated (e.g. by 250 AC's).

More Complex Behaviour: Competing Goal-Achievement

(Seven D's in arena, 33 D's in total.) The environment consisted of four rooms; four D's each took the creature to a particular room. Two rooms were empty. The "kitchen" held a fridge, which could be opened by the "open" D so long as the "fridge" D was also in the arena. On opening, a new piece of food appeared in the kitchen. Initially the kitchen also held six pieces of food outside the fridge.

The "bar" room held a bottle which could also be opened by the open D so long as the bottle D was in the arena at the time. On opening the bottle a new item of water appeared in the bar. Initially the bar also held six items of water outside the bottle. If both fridge and bottle were present in the arena when the open D was called, the object of the command was taken to be the stronger (more recently called) of the two.

Whenever a new item was created in the environment, the sub-arena didn't force it willy-nilly into the arena, since that would detract from the system's flexibility and hence its intelligence. It was however "encouraged" into the arena by having a "Potentiation Supplement" added to its "Link Strength" (the total weighted vote of the AD's (arena demons)) of 3 × the average deviation of the winning Link Strength over the last couple of dozen AC's. Throughout the system, Pot. Supp's were always by this arbitrary amount.

The "Go to room..." D's were always valid. The "open" D was treated as "real" for each meaning if the fridge/bottle was present in the same room at the time. The "Eat" D was real if food was present in the room; similarly with "drink" and water. "Eat" and "Drink" - the "Expiation D's" - removed one item of food/water from the room, lowered the appropriate drive level to zero, and produced a positive "gain" rise for one cycle proportional to the drop in drive level. Otherwise the level of each drive went up by 1 each AC. The drive levels did not cause any pattern of "Potentiation Supplement" to be applied to the expiation D's in this experiment.

Each cycle all unreal D's, starting with the strongest, were checked for realisability. The strongest realisable D had its Potentiation Supplement applied. If re-elected, it was "realised", and produced a gain spike calculated as the average of the total drive level at the unreal D's original call and its realisation. This realisation gain gradually increases along a series of realising subgoals, the last ones often exceeding that for the Expiation D itself. (The task was not mastered unless 5 was added to total gain each cycle.)
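The realisation-gain calculation just described is simple enough to state directly; the function name below is assumed for illustration.

```python
def realisation_gain(drive_at_call, drive_at_realisation):
    """Gain spike when an unreal D is realised: the average of the total
    drive level at the D's original call and at its realisation."""
    return (drive_at_call + drive_at_realisation) / 2.0
```

Since each unsatisfied drive rises by 1 per AC, sub-goals realised later in a chain are averaged against higher drive levels and so earn larger pulses, giving the increasing series of spikes noted above.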

The drive systems clashed over rooms, and could have been badly confused by the "open" command, but by AC 350 the creature had settled into an alternating sequence of going to the kitchen, opening the fridge, eating, ... opening the bottle, drinking, repeating every 30 cycles or so. The particular interpretation of "open" was never confused, always seemingly steered by the surrounding drive-specific D's "Go to kitchen", "Food", "Eat" and "Fridge" (in the eat drive), but also three other "Grist" D's conscripted into the sequence for each drive (however, see paradoxical link strengths below). These three "intermediate" D's were almost always present (just one was replaced over nearly 3000 AC's), though they often varied in order and position. By AC 360, this particularly intriguing experiment settled into a pattern of dynamic equilibrium (unlike the "Cyclical Jump Task" of tasks A & B above, the gain was not zeroised).

Immediately after a successful "Eat", the "Drink" D was called, in unreal form (and vice versa). Typically the next few D's consisted of most of the grist D's for the drive, plus the fridge/bottle in unreal form, which was always followed by its enabling D, "Go to ...".

Link Strengths (×10) after 360 cycles:

Demons (rows: FROM; columns: TO): 11 Eat; 32 grist/Eat; 28 grist/Eat; 15 grist/Eat; 23 Fridge; 12 Drink; 27 grist/Drink; 18 grist/Drink; 13 grist/Drink; 24 Bottle; 1 Go Ktch'n; 2 Go Bar; 20 Open.

TO:  11  32  28  15  23  12  27  18  13  24   1   2  20
     -4   1   0   6   0   0  -1   1   0   1   9   7   0
      2   0   0   0  -2   0   0   1   0  -1  13   0   0
      6   0   0   0   0  -1   3   1   0  -2   0  -1   2
      0   0   0   0   0   0   6   8   5  -3  -2  -4   3
     -3  -2  -1  -3  -5   5  -2   0  -1  -1   0   2   0
      0   5  -2   2   0   0   0  -3   0   0   1   0   0
     -2  -1  -1   0   0   0   0   0   0   0   0   0   0
     -1   0   0  17  -2   0  -3   7   0  -1   0  -1  -3
     -2   3  -1  -1  -1  -3  22  -1   2  -3  -1  -2  -4

Some paradoxical entries ought to be mentioned: even after 3000 AC's, a pattern emerges of the grist D's pointing mildly against each other and even their own open-object (e.g. fridge/bottle) - more strongly against their own open-object than the other one! After 3000 AC's they point strongly to "open" and very strongly to their own expiation D. The expiation D's point approximately 0 to their own open-object but quite strongly against the other one. Only the "Go to" D's point strongly to their open-objects (mildly against the other's).

In this example the "realisation gain" approach to the bucket brigade problem seemed adequate in principle, though optimisation was not attempted. A suspicion that the reinforcement may have been too high for actions not well connected to meaningful end goals was gained while running the system before the "Drink" system was fully coded. The creature established a routine of imagining "Open Bottle" outside the bar, then moving to the bar, thus opening the bottle and enjoying gain for a double realisation, even though it was unable to drink at that stage. This behaviour was repeated over and over again, though perhaps it should be given credit for working out how to do something for its own sake, and who is to say that, for example, attending AI conferences is not a more sophisticated version of the same thing?

Concepts, Generalisation, Chunking, etc.

The key to getting any system to say "I'll solve this problem the way I solved that one" is to encode much of the processing in general terms, handling specifics as members of a class. Internal operands must therefore be classified, at least to some extent. The full use of generalisation in, and transference of, behaviour probably requires heavy use of classification within the structure; almost a tautology really.

The present inventor suspects that considerable use may be made of basic "types", from "noun" and "verb", to emotional states (which characterise situations in goal achievement and interactions with other animals) in successful natural and artificial systems, but it must surely sometimes be necessary to create new classes.

When carving complex entities and episodes out of the stream of consciousness for classification, they must first, if novel, be segmented. Characteristic segments in the stream will have their strings of demons. They will not always be quite the same string, but then, they will also have characteristic rises and falls of goal levels, link strengths (i.e. total vote for a called demon) etc. Just cutting at minima in, say, the link strength sequence would seem one likely way to provide useful segmentation.

We already suspect that "chunking" will often involve demons in unreal form. For example, when wishing or planning a long and well established sequence of actions, the D's involved will pour into the arena, often in reverse order of execution and in unreal form, and tend to overflow. That doesn't matter too much so long as the last ones in the reversed order are left available. However introspection suggests that an overview of the whole sequence is available, at least in partial detail, emphasising the "highlights". Extending the arena might help towards this, but combining some of the demons concerned must be investigated.

Do compound demons need to be converted explicitly into a single demon? If so, how would its new class be decided? How would you decide when to treat it as a single D or as its components? How should link increments be handled? How and when should compound D's be (un)bundled? These issues are being addressed.

Relation to Maes & Minsky models:

Other "multiplicity of mind" models have been proposed. How do they relate to this system? Minsky's agents are planlets, or concepts, which might correspond to a dozen or so demons in a group, perhaps demarcated by troughs in link strengths, though interacting strongly with other groups. The "Eating and Drinking" experiment showed that it may well be possible for an individual demon to participate in more than one group. Minsky's "K-lines" correspond to links stronger than other inter-group links though weaker than intra-group. Overall his model is offered as an explanation of mind rather than a robot architecture whereas PAE as modelled here is still trying to straddle both senses. Maes's system of "competence modules" seems more committed to the robot line. It expresses plans more explicitly than PAE which is an experience processor and only plans what it has already done, though it does experiment. Maes's system seems more human-programmable than the more autonomous PAE whose programming will be interesting. Maes's system operates under system-wide principles of gradual potentiation as opposed to the more independent "all or nothing" PAE demons' style, so the two systems cannot be cast in terms of each other though if they were, most competence modules would correspond to more than one demon.

It should be understood that the embodiments of the invention have been described above purely by way of example, and that many modifications and developments may be made thereto within the scope of the present invention.

Claims (9)

1. A machine having artificial intelligence or common-sense and including: a collection of primitive, freely-associatable, constructs including an action construct, each of the constructs being switchable between a 'standby' state and an 'active' state; means for controlling the states of the constructs; and means for performing a task related to the action construct; characterised in that: means is provided for testing, when the action construct is in its active state, whether the task performing means will be/is/has been able to perform its task; and the state controlling means is operable to set a sub-state of the action construct, when in its active state, as between an 'unreal' state and a 'real' state in dependence upon the testing means.
2. A machine as claimed in claim 1, wherein:
the task performing means is actuated in response to the action construct assuming its active state; if the testing means determines that the task performing means is unable to perform the task, the state controlling means sets the state of the action construct to unreal; and if the state of the action construct is set to unreal, the task performing means is subsequently actuated again.
3. A machine as claimed in claim 1 or 2, further including: a perception construct in the collection of constructs; means for realising a perception related to the perception construct; and means for testing whether the perception realising means will be/is/has been able to realise its perception; wherein the state controlling means is operable to set a sub-state of the perception construct, when in its active state, as between an 'unreal' state and a 'real' state in dependence upon the testing means.
4. A machine as claimed in claim 3, wherein: the perception realising means is actuated in response to the perception construct assuming its active state; if the testing means determines that the perception realising means is unable to realise the perception, the state controlling means sets the state of the perception construct to unreal; and if the state of the perception construct is set to unreal, the perception realising means is subsequently actuated again.
5. A machine as claimed in any preceding claim, wherein: the action construct is one of a plurality of such action constructs; and/or the task performing means is one of a plurality of such task performing means for performing different tasks; and/or the perception construct is one of a plurality of such perception constructs; and/or the perception realising means is one of a plurality of such perception realising means for realising different perceptions.
6. A machine as claimed in any preceding claim, wherein the state controlling means operates cyclically and is operable in each cycle: to attempt to execute an action construct and/or realise a perception construct, if any, which has been set to its active state in that cycle; and to attempt to execute an action construct and/or realise a perception construct, if any, which has been set to its unreal state in an earlier cycle.
7. A machine as claimed in any preceding claim, wherein: each construct has at least one variable strength link to each of the other constructs; the state controlling means controls the active and inactive states of the constructs at least partly in dependence upon the strengths of the links between the constructs; and means is provided for changing the strength(s) of the link(s) and/or the way in which the control depends on the link(s) of the or each action or perception construct upon its switching between its real and unreal states.
8. A machine as claimed in claim 7, wherein the changing means is operable to cause the state controlling means temporarily to utilise a link from at least one of the action or perception constructs rather than a link to that construct and/or to utilise a link to at least one of the action or perception constructs rather than a link from that construct when the state of that construct is active and unreal.
9. A machine having artificial intelligence or common-sense and substantially as described with reference to the drawings.
GB9812300A 1998-06-09 1998-06-09 Artificial intelligence or common-sense machines Withdrawn GB2338315A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB9812300A GB2338315A (en) 1998-06-09 1998-06-09 Artificial intelligence or common-sense machines


Publications (2)

Publication Number Publication Date
GB9812300D0 GB9812300D0 (en) 1998-08-05
GB2338315A true GB2338315A (en) 1999-12-15

Family

ID=10833389


Country Status (1)

Country Link
GB (1) GB2338315A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5390282A (en) * 1992-06-16 1995-02-14 John R. Koza Process for problem solving using spontaneously emergent self-replicating and self-improving entities




Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)