GB2530105A - Thinking Machines - Google Patents
- Publication number
- GB2530105A GB2530105A GB1416294.5A GB201416294A GB2530105A GB 2530105 A GB2530105 A GB 2530105A GB 201416294 A GB201416294 A GB 201416294A GB 2530105 A GB2530105 A GB 2530105A
- Authority
- GB
- United Kingdom
- Prior art keywords
- free
- response
- responses
- mechanism according
- stimulus
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63H—TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
- A63H33/00—Other toys
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63H—TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
- A63H3/00—Dolls
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63H—TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
- A63H2200/00—Computerized interactive toys, e.g. dolls
Landscapes
- Toys (AREA)
Abstract
A device comprises a memory 11 containing a set of responses 11a-d to a stimulus 10; a random signal generator 12 triggered by the stimulus to select a response from the set, and means 13a-d to give effect to that response. Weights may be associated with the set of responses so that responses occur with a particular probability. Responses may be added, deleted or amended, and the weights might be changed. The responses may be words output by a speaker, so that the device talks. The stimulus might be an internal stimulus such as a power supply indicating a low battery, or an external stimulus such as speech from a user. The random signal generator may be a true random number generator or a pseudorandom number generator.
Description
Thinking Machine
This invention relates to thinking machines.
While it is generally acknowledged that computers perform actions equivalent to those performed in the human brain, and that human-emulating robots can be designed to have an apparent measure of autonomy, the question, "Can machines think?" is, to date, not fully answered.
Alan Turing proposed a test, which was simply that, in an exchange between a human and a machine, could the machine's responses so adequately simulate those of a human that it would be impossible to distinguish machine from human. If so, the machine passed the test. Although machines are capable of outperforming humans in many regards, such as playing chess and other games, and in mathematical computation, the machines are programmed by humans and, even though capable of 'learning', through neural nets, and solving complex problems through heuristic programming and fuzzy logic, are deemed incapable of 'human' thought.
However, it only has to be realised that the human brain is itself a machine, albeit a highly complex one, and the best answer to the question, "Can machines think?" is, "Yes, we can." The stumbling block is free will. How can we build a 'free will' engine into an inanimate machine? The present invention is based on a novel understanding of the nature of 'free will', and provides a mechanism for providing a 'free will' functionality in a computer.
The invention comprises, in or for a computer, a free will mechanism comprising: a memory unit adapted to contain a set of responses to a stimulus; a random signal generator triggered by a stimulus to select a response randomly from the set; and effector means to give effect to the response.
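By way of illustration only, the three claimed elements — memory unit, random signal generator and effector means — can be modelled in a few lines of Python. The class and variable names below are illustrative, not taken from the specification:

```python
import random

class FreeWillMechanism:
    """Minimal model of the claimed mechanism: a memory unit holding a
    set of responses, a random selector, and an effector callback."""

    def __init__(self, responses, effector, rng=None):
        self.responses = list(responses)   # memory unit
        self.effector = effector           # effector means
        self.rng = rng or random.Random()  # random signal generator

    def stimulate(self, stimulus=None):
        # The stimulus triggers a random selection from the set ...
        response = self.rng.choice(self.responses)
        # ... and the effector gives effect to the selected response.
        return self.effector(response)

spoken = []
machine = FreeWillMechanism(
    ["Very well, thank you.", "Very well, thanks.",
     "I'm good, thanks.", "Great, how are you?"],
    effector=spoken.append,      # stand-in for a speaker or synthesiser
    rng=random.Random(0),        # seeded only for reproducibility
)
machine.stimulate("How are you?")
```

The effector here merely records the response; in the specification it would drive a speaker or speech synthesiser.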
The novel understanding is that the brain has a prepared set of responses to any given stimulus. On receipt of the stimulus, one of the responses is selected, unconsciously, at random. Once the selection has been made, the brain gives effect to the selection, in the first instance by consciously recognising which response has been selected. It is at this stage that the conscious brain has the impression that it is making a decision.
Some responses may be hard-wired in the brain, and this may be of the nature of instinct, while others will be created through learning, and, as such, will be capable of change; some may be deleted, others may be added to the set. It may well be the case that the set of responses that may be selected from in regard to any stimulus defines the 'character' of a person, a person being 'good' or 'bad' depending on the set of responses from which a selection may be made. The set may be altered depending on the effect perceived by giving effect to a particular response, and it would appear that the sets of responses for different stimuli would be in constant flux. Likewise, the set of responses in the computer's free will mechanism may be subject to change, by adding a response, deleting a response or altering a response, and this may be done by feedback from having given effect to a response.
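A mutable, weighted response set of the kind just described might be sketched as follows; the interface is hypothetical, chosen only to show the three permitted alterations (add, delete, amend) alongside weighted selection:

```python
import random

class ResponseSet:
    """A mutable, weighted set of responses: responses may be added,
    deleted or amended, and their selection weights changed."""

    def __init__(self):
        self.weights = {}  # response text -> selection weight

    def add(self, response, weight=1.0):
        self.weights[response] = weight

    def delete(self, response):
        self.weights.pop(response, None)

    def amend(self, old, new):
        # Replace the wording of a response, keeping its weight.
        self.weights[new] = self.weights.pop(old)

    def select(self, rng):
        # Weighted random selection from the current set.
        items = list(self.weights.items())
        return rng.choices([r for r, _ in items],
                           weights=[w for _, w in items], k=1)[0]

rs = ResponseSet()
rs.add("Hello.", 1.0)
rs.add("Hi there.", 3.0)       # three times as likely to be selected
rs.amend("Hello.", "Hello!")   # alter an existing response
choice = rs.select(random.Random(42))
```

Feedback from giving effect to a response would then call `add`, `delete`, `amend` or adjust the weights directly.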
A memory unit may contain multiple sets of responses, and multiple random generators, and, indeed, 'multiple' may imply very large numbers.
A stimulus may be external or internal, i.e. picked up by a sensor connected to the computer or generated by the computer. An internal stimulus might be a signal from a power supply indicating a low battery, and this might trigger, in the case of a mobile computer driving a robot, travelling to a power point. There may be several power points, and one would be selected at random, the robot appearing to exercise free will in regard to the selection. Feedback from a number of selections might be used to alter weightings for the various power points, so that, over time, those being least troublesome to reach would be more likely to be selected. Thus the robot would not only appear to have free will, but also intelligence.
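The power-point example amounts to weighted selection plus a feedback rule. A sketch, with hypothetical point names and an arbitrary update rule (halve the weight of a troublesome point, boost an easy one):

```python
import random

def choose_power_point(weights, rng):
    """Randomly pick a charging point, weighted by past experience."""
    points = list(weights)
    return rng.choices(points, weights=[weights[p] for p in points], k=1)[0]

def feedback(weights, point, trouble):
    """Lower the weight of a troublesome point, raise an easy one, so
    that over time the least troublesome points dominate selection."""
    weights[point] = max(0.1, weights[point] * (0.5 if trouble else 1.2))

weights = {"kitchen": 1.0, "hall": 1.0, "garage": 1.0}
rng = random.Random(1)
for _ in range(50):
    p = choose_power_point(weights, rng)
    feedback(weights, p, trouble=(p == "garage"))  # garage is hard to reach
```

After a number of trips, the hard-to-reach point's weight has decayed, so the robot still chooses 'freely' but rarely chooses badly.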
Two computers, identically constructed and programmed (except having different random generators), would, because of the random selection of responses and the different feedbacks from putting them into effect, develop differently, another characteristic of the way living beings behave and develop.
A free will mechanism for a computer according to the invention will now be described with reference to the accompanying drawings, in which: Figure 1 is a diagrammatic illustration of a free will mechanism embedded in a computer; Figure 2 is a flow diagram showing the operation of the mechanism of Figure 1. The drawings illustrate a free will mechanism in a computer comprising: a memory unit 11 adapted to contain a set of responses to a stimulus; a random signal generator 12 triggered by a stimulus 10 to select a response randomly from the set; and effector means 13 to give effect to the response.
The memory unit 11 is shown as having a set of four responses. These might be, for example, responses to the question, "How are you?", and might comprise, "Very well, thank you." (response 11a), "Very well, thanks." (response 11b), "I'm good, thanks." (11c) and "Great, how are you?" (11d). Clearly, in the minds of most people there will be many more than four such ready responses, and it may well be the case that responses can be compounded from individual words, so that there may be a random selection of the first word, let us say it is "Very", followed by a second random selection from all the words that can follow "Very", such as "good", "well" and so forth. Such an arrangement can be included in the free will mechanism of the computer.
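This word-by-word compounding resembles a simple Markov chain over follow-words. A sketch, with a hypothetical follower table built around the specification's own example:

```python
import random

# Hypothetical follow-word table: each word maps to the words (or
# endings) that may follow it. The entries echo the example responses.
FOLLOWERS = {
    "Very": ["well", "good"],
    "well": [", thank you.", ", thanks."],
    "good": [", thanks."],
}

def compound_response(first_word, rng):
    """Build a response by repeated random selection of a follower,
    stopping when the last selection has no listed followers."""
    parts = [first_word]
    while parts[-1] in FOLLOWERS:
        parts.append(rng.choice(FOLLOWERS[parts[-1]]))
    # Join words with spaces; punctuation fragments attach directly.
    return "".join(p if p.startswith(",") else " " + p for p in parts).strip()

sentence = compound_response("Very", random.Random(7))
```

Each selection is random, yet every compounded sentence remains well-formed because the table only lists plausible continuations.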
So, the random signal generator 12 will, on excitation by the stimulus 10, trigger one of the responses 11a, 11b etc., and whichever one it triggered will energise its associated effector means 13a, 13b etc. to give effect to the response, in our example, by causing pre-recorded words to be delivered from a speaker, or by causing words to be created by a speech synthesiser.
Figure 2 is a flow chart for a simple, single stimulus, single response procedure. The procedure starts at 21 when a stimulus is received, which prompts the selection of a response at step 22 from the possible responses 11a, 11b etc. At step 23 the response is received by the effector means 13 which, at step 24, causes a mechanism, a speaker or voice simulator in this instance, to deliver the response. The delivered response, or information about it, is then sent back at step 25 to the memory unit 11 where it might alter the weighting given to the selection of the same response to a future stimulus, for example by lowering the weighting so as to tend to avoid repetition of responses.
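The feedback loop of Figure 2 — select, deliver, then lower the selected response's weight to discourage repetition — can be sketched as follows. The damping and recovery factors are illustrative, not from the specification:

```python
import random

def select_and_damp(weights, rng, damping=0.5, recovery=1.1):
    """Select a weighted response, then lower its own weight and let
    the others recover, so immediate repetition becomes less likely."""
    responses = list(weights)
    chosen = rng.choices(responses,
                         weights=[weights[r] for r in responses], k=1)[0]
    for r in responses:
        weights[r] *= damping if r == chosen else recovery
    return chosen

weights = {"a": 1.0, "b": 1.0}
first = select_and_damp(weights, random.Random(3))
```

After one pass the chosen response's weight is halved while the other has grown, so the next selection is biased away from repeating itself.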
Having delivered the response, the free will unit 11 is immediately ready to receive another stimulus, whether it is a response to its response or an internal stimulus, triggered perhaps by its own just-delivered response. In this way, a 'conversation' is created that can proceed through multiple iterations, each successive stimulus triggering selection of a response from a different set of responses, each appropriate to the trigger stimulus.
In addition, of course, to the selection of words, there are other 'free will' selections that will be associated with the words, such, for example, as the pitch at which they are spoken, the speed and the volume at which they are delivered, and this extends the response storage requirement substantially.
Responses may be loaded into memory sets manually, or by copying in some existing response-loaded memory. However, responses may be loaded through a learning process, in which a response is associated, possibly in a 'conversation', with a stimulus, and added to a set of responses associated with that stimulus. Responses may be weighted by repetition, and weights changed according to perceived reaction. A computer equipped with facial recognition functionality may, for example, note a smile when returning one particular response, which may increase the likelihood of that response's being used again, if only by adding it as a second or third instance in the memory of that response.
While the discussion has so far been concerned primarily with words, and particularly spoken words, other forms of stimulus and response can be similarly treated. As already mentioned, a low battery (thirst) stimulus may prompt the selection of a response from a set of appropriate responses that will include visits to a charging unit, a temperature stimulus may elicit a response randomly selected from increasing or decreasing heat demand by 10% or 20%, an alarm stimulus may prompt a response to be randomly selected from focusing CPU attention on audio input, on video input, flight, in one or another of several direction options, and so forth.
By 'random' as used herein is implied unpredictability, or apparent unpredictability, rather than true randomness as may be tested mathematically, and a random number generator, such as a measurement from thermal noise in a neon tube, or quasi-random generator such as an algorithm generating sequences of apparently random numbers, will serve more or less equally well.
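Python happens to offer both flavours out of the box, which makes the distinction easy to see: `random.Random` is an algorithmic (seeded, reproducible) generator, while `random.SystemRandom` draws on operating-system entropy — a software stand-in here for a hardware noise source such as the specification's neon-tube measurement:

```python
import random

# An algorithmic ("quasi-random") generator: deterministic given its seed.
pseudo = random.Random(2024)

# A system-entropy generator (os.urandom-backed): a stand-in for a true
# hardware noise source; its selections are not reproducible.
true_ish = random.SystemRandom()

responses = ["yes", "no", "maybe"]
a = pseudo.choice(responses)
b = random.Random(2024).choice(responses)  # same seed -> same selection
c = true_ish.choice(responses)             # unpredictable selection
```

For the mechanism's purposes either source serves, exactly as the specification says: only apparent unpredictability is required.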
And while the discussion above has concerned primarily responses to stimuli mimicking human responses, so that a computer, or a computer-based entity such as a robot, can pass, or stand a better chance of passing, the Turing test, there is an equally important consideration, which is the creation of original thoughts.
In accordance with the insight referred to above, the creation of a new thought begins with a random event triggered by a stimulus that selects a response from a set of pre-stored possible responses. The selected response may then, itself, serve as a trigger for a further random selection from another set of responses, and this process may be repeated a number of times to build up a composite response.
While each selection will be random (in the sense referred to above) this does not mean that a completely random and meaningless collection of entities, usually words, will result. Each set of responses will have some relation to the stimulus that selects from it, so that complete nonsense will be avoided, or, if not, will be recognised by the conscious brain - the CPU and associated hardware and software - as such and dismissed, and further trains of that thought inhibited.
Occasionally, a new sequence of entities, or words, usually, will emerge that the CPU can search against its database, in the fashion of a web search engine, to establish its novelty.
A computer, equipped with the free will engine, and with an adequate knowledge database, can be expected, therefore, to come up with new ideas. A sentence can be constructed from random selections from limited sets of terms, and this sentence can be examined by suitable algorithms for syntax, logical consistency and other attributes, and submitted to a search engine to check for novelty.
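The composite-thought process — each selection triggering the next, with a final novelty check — can be sketched as below. The chain table and the local 'known sentences' set are hypothetical stand-ins for the knowledge database and search engine:

```python
import random

# Each selected term serves as the trigger for the next set of responses.
CHAIN = {
    "start":    ["machines", "brains"],
    "machines": ["can think", "have free will"],
    "brains":   ["are machines"],
}

KNOWN_SENTENCES = {"brains are machines"}  # stand-in 'database'

def compose_thought(rng):
    """Build a composite response by chained random selections."""
    parts, key = [], "start"
    while key in CHAIN:
        key = rng.choice(CHAIN[key])
        parts.append(key)
    return " ".join(parts)

def is_novel(sentence):
    """Stand-in for submitting the sentence to a search engine."""
    return sentence not in KNOWN_SENTENCES

thought = compose_thought(random.Random(5))
```

A real system would replace `is_novel` with a query to an actual search index, and add the syntax and consistency checks the specification mentions.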
As the free will mechanism in the computer is based on an insight into the way the human brain works, given a suitably populated database, it should work over any discipline the human brain does, including mathematics, physics, chemistry, medicine, engineering, chess moves and so forth - a mathematical equation is, after all, only shorthand for a word-based sentence.
And, if one sentence can be produced, there is no reason why two or more connected sentences, even a full-length novel, should not be produced by a suitably powerful computer, which has, perhaps, 'read' novels by human authors to pick up genre, style, plot and other characteristics for and from which it can create algorithms to use in 'writing' its own work. Or even, when it makes an invention, its own patent specification.
Claims (12)
- Claims: 1 A free will mechanism comprising: a memory unit adapted to contain a set of responses to a stimulus; a random signal generator triggered by a stimulus to select a response randomly from the set; and effector means to give effect to the response.
- 2 A free will mechanism according to claim 1, in which the memory unit can be altered in at least one of the following ways: any response may be amended; any response may be deleted; a further response may be added.
- 3 A free will mechanism according to claim 1 or claim 2, in which weights are associated with the set of responses so that some are more likely to be selected than others.
- 4 A free will mechanism according to claim 3, in which the weighting can be changed.
- 5 A free will mechanism according to claim 4, in which the weighting can be changed so that the weighting of a recently selected response is reduced in relation to other responses to reduce the likelihood of its being repeated.
- 6 A free will mechanism according to any one of claims 1 to 5, in which the responses are words or sequences of words.
- 7 A free will mechanism according to claim 6, in which the effector means comprise a speaker delivering recorded words or a voice simulator with associated software.
- 8 A free will mechanism according to any one of claims 1 to 7, in which the random signal generator generates a memory address corresponding to a response.
- 9 A free will mechanism according to claim 8, in which the random signal generator is truly random, as by generating a number from a measurement of thermal noise in a neon tube.
- 10 A free will mechanism according to claim 8, in which the random signal generator comprises software generating quasi-random numbers.
- 11 A free will mechanism according to any one of claims 1 to 10, in a computer, associated with an external stimulus receiver, such as an audio or video receiver.
- 12 A free will mechanism according to any one of claims 1 to 11, in a computer, associated with an internal stimulus generator.
- 13 A free will mechanism according to claim 12, in which the internal stimulus generator comprises a random generator sending a stimulus randomly to a plurality of free-will mechanisms.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1416294.5A GB2530105A (en) | 2014-09-15 | 2014-09-15 | Thinking Machines |
Publications (2)
Publication Number | Publication Date |
---|---|
GB201416294D0 GB201416294D0 (en) | 2014-10-29 |
GB2530105A true GB2530105A (en) | 2016-03-16 |
Family
ID=51869640
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB1416294.5A Withdrawn GB2530105A (en) | 2014-09-15 | 2014-09-15 | Thinking Machines |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2530105A (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113946604B (en) * | 2021-10-26 | 2023-01-20 | 网易有道信息技术(江苏)有限公司 | Staged go teaching method and device, electronic equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4696653A (en) * | 1986-02-07 | 1987-09-29 | Worlds Of Wonder, Inc. | Speaking toy doll |
US20010041496A1 (en) * | 2000-05-13 | 2001-11-15 | Smirnov Alexander V. | Talking toy |
US20040077265A1 (en) * | 1999-07-10 | 2004-04-22 | Ghaly Nabil N. | Interactive play device and method |
Also Published As
Publication number | Publication date |
---|---|
GB201416294D0 (en) | 2014-10-29 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |