WO2001093076A2 - Simulation of human intelligence by computer using natural-language dialogue - Google Patents
- Publication number
- WO2001093076A2 (PCT/US2001/014829)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pattern
- data
- response
- input
- patterns
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/205—Parsing
- G06F40/216—Parsing using statistical methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/42—Data-driven translation
- G06F40/45—Example-based machine translation; Alignment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/55—Rule-based translation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/55—Rule-based translation
- G06F40/56—Natural language generation
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
Definitions
- the present invention relates to the field of Artificial Intelligence (AI) and the use of Interactive Computer Systems, Computational Linguistics and Natural Language Processing. More particularly, this invention comprises methods and apparatus for modeling human-like interactions on a computer for commercial applications.
- the present theory of intelligence abandons many, if not most, of the assumptions of conventional technology made over the last fifty years by the AI community.
- the present invention includes methods and apparatus for simulating human intelligence using natural language processing.
- the invention comprises:
- a novel theory of human intelligence is developed that is concrete and practical enough to be incorporated into machines that employ intelligent, directed use of language.
- the methods and apparatus disclosed provide enabling information to implement the theory in a conventional computer.
- the cognitive model is a theoretical basis of the entire invention. It describes the way humans learn and interact in general terms.
- the mathematical model of information abstraction and synthetic dialog interaction and method of language-independent computer learning through training, interaction and document reading provide a mathematical basis for natural language learning and interaction between humans and a computer. It also provides the basis for machine translation from one language to another, the detection of patterns of speech for the purpose of identification, and provides the basis for personality simulations.
- ADAM Automated Dialog Adaptive Machine
- the cognitive model of human intelligence is referred to herein as the Associative Abstraction Sensory Model (AASM).
- AASM Associative Abstraction Sensory Model
- the description of the invention is organized into three parts: (1) a description of the theory of intelligence on which the computer algorithms are based; (2) a mathematical model; and (3) a computer implementation.
- AASM Associative Abstraction Sensory Model
- in conventional theory, cognition involves the encoding into the brain of an unknown deep representation of knowledge sometimes called "mentalese."
- Language production is seen as decoding mentalese into strings of symbols and language understanding as coding mentalese from symbols. Therefore cognition must reside in a hidden, unknown mechanism of the human brain. No such assumption is made in the AASM. The model does not require that hidden mechanisms are necessary to explain human comprehension.
- the model posits that human-like intelligent behavior comes from the language itself. That is, it is the ability of humans to use language, i.e. strings of symbols, as representations of meaning in combination with other characteristics of the brain that define human intelligent behavior. How language is combined with other sensory information is the key to describing a working model of intelligence as well as reproducing it on a computer. The description of this process lies at the heart of the AASM.
- FIG. 1 is a schematic diagram of the Cognitive Model—a model of the process of "knowing" used as a basis for the present invention.
- Figure 2 depicts a high-level block diagram of the Associative Abstraction Sensory Model.
- Figure 3 reveals a flow diagram depicting how a computer such as ADAM "reads" an input text stream.
- Figure 3 is an expansion of block 42 in Figure 2.
- Figure 3A is an expansion of the "Find Response" block 48 of the flow diagram shown in Figure 2, illustrating how eigen words and patterns in an input sentence are identified, patterns and concepts are drawn from databases, and an output sentence is generated.
- FIG. 4 depicts the three broad areas of capability of the Automated Dialog Adaptive Machine (ADAM) of this invention.
- ADAM Automated Dialog Adaptive Machine
- Figure 4A is a flow diagram showing how the data base is applied in ADAM to inputs of data in natural language in training mode.
- Figure 4B is a flow diagram showing how the data base is applied in ADAM to inputs of questions about information in the database in operational mode.
- Figure 5 is a flow diagram illustrating at a top level the Training process of ADAM.
- Figure 5 A is a flow diagram depicting the construction of Meta Maps in ADAM.
- Figure 6 is a further detailed flow diagram of the training process describing the process of storing user input in working memory.
- Figure 7 is a flow diagram showing further detail of the conversion of user input to working memory elements.
- Figure 8 is a flow diagram revealing details of the pattern seeker process. Figures 8-12 detail the steps necessary to produce patterns and the process of creating a Functor.
- Figure 13 depicts details of the Structure Pattern Match 254 process.
- Figures 14 & 15 show details of the Structure Best Match process.
- Figure 16 shows a diagram which describes the flow of interactive voice input to a speaker-independent, continuous speech recognizer, external to ADAM.
- Figure 17 shows how the Recognition Event is handled by ADAM, generating a response to the user speech.
- Figure 18 describes the process flow for creating a sentence Functor set.
- Figures 18-21 show the process flow which creates a Functor from the user's speech input.
- Figure 22 delineates the Generate Response process in a flow diagram.
- Figure 23 shows the flow within the Add Stimulus to Memory process.
- Figure 24 further describes the Response Method.
- Figure 25 displays how the Found Concept and Response are passed to a Do External Action process.
- Figure 26 is a flow diagram revealing the Context Switcher shown in Figure 22.
- Figure 26a is a flow diagram showing the flow for adding a concept to conceptual memory.
- Figure 26B describes the flow within the context switcher shown in Figure 25.
- Figures 27-30 depict the process for Reading Text Documents, building Concepts therefrom and storing Concepts as Functors.
- Figures 30 through 36 are flow diagrams which show the process whereby a Functor is created and a Pattern Match is found.
- Figures 37 and 38 depict the ADAM Database data retrieval process.
- Figure 39 shows the top-level pattern association mechanism.
- Figure 40 shows how the pattern buffer connects abstractions in the primary language to abstractions in the target language.
- Figure 41 relates the processes of training, goal processes, and reading text.
- Figure 42 is a flow chart which illustrates methods of the invention.
- Figure 43 relates working memory data and a pattern list.
- Figure 44 relates working memory data and a pattern buffer.
- Figure 45 relates working memory data and a functor list.
- Figure 46 concerns a pattern, an original element set, an action and a locality.
- Figure 47 is related to virtual map learning.
- Figure 48 relates a user input string and a memory.
- Figure 49 relates an input word list, a memory match and a working memory element.
- Figure 50 pertains to meta maps.
- Figure 51 concerns a voice input and a goal process.
- Figure 52 concerns a language model and a language model tree.
- Figure 53 relates a pattern element set and a concept.
- Figure 54 relates an element set and a functor.
- Figures 55 and 56 relate an element set and a raw argument list.
- Figure 57 relates an element set and a pattern.
- Figure 58 relates argument lists in raw set forms and in canonical form.
- Figure 59 depicts a set of steps concerning an element set and a functor.
- Figure 60 depicts a process that involves a concept and a stimulus.
- Figure 61 illustrates a series of steps concerning a concept and the creation of a map.
- Figure 62 relates contexts and concepts.
- Figure 63 pertains to a concept and a stimulus.
- Figure 64 is a flow chart which concerns a candidate abstract response and arguments from an input.
- Figure 65 relates a meta map and a response concept.
- Figure 66 concerns a partially instantiated candidate response and a concept.
- Figure 67 relates a concept and a map.
- Figure 68 relates a stimulus concept and a response concept.
- Figures 69 and 70 pertain to a scenario map and a response concept.
- Figures 71 and 72 pertain to the analysis of a concept.
- Figures 73, 74, 75 and 76 reveal processes related to the generation of a response to a concept.
- BEST MODE FOR CARRYING OUT THE INVENTION: Overview of the Invention. The invention provides methods and apparatus for constructing and operating a machine that is able to develop and store a set of language patterns for expressing knowledge, acquire specific knowledge, substitute that specific knowledge into recognized language patterns and extract meaning from these patterns.
- the invention offers efficient retrieval of information using natural language, the production of directed discourse, that is, the machine can be goal-oriented, and the efficient and accurate control of any number of machine tasks by the use of natural language.
- the invention provides a method of operating an artificial intelligence system, including the steps of abstracting from input stimulus data one or more patterns which are independent of the data, and generating a response to the input data on the basis of a response pattern linked to the or each stimulus pattern and of the input data.
- the invention may comprise an artificial intelligence system including input means operable to receive input stimulus data; processing means operable to abstract from input stimulus data one or more patterns which are independent of the data; and response generating means operable to generate a response to the input data on the basis of a response pattern linked to the or each stimulus pattern and of the input data.
- the invention may comprise language translation apparatus in which a generated response includes a response pattern associated with an input pattern and data in a different language but equivalent to input data.
- the invention may comprise text or speech analysis apparatus, in which a processing means is operable to determine repetitions of patterns in input text or speech; to determine therefrom the originator of the text or speech from previously stored repetition data or statistics and to generate a response based upon the determination of the originator.
- the invention may be used to furnish a vehicle safety system in which a response indicates safety related information or procedures associated with an input pattern and data related to vehicle operating conditions.
- the invention may comprise a method of training an artificial intelligence system operable to generate a response to a stimulus input, including the steps of inputting into the system data streams formed of patterns and data, which patterns are independent of data and are similar from one input stream to the next; extracting the patterns from the data streams; and storing the extracted patterns in memory.
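The training steps above (input similar data streams, extract the shared patterns, store them) can be sketched in Python. The slot marker `( )` and the word-by-word comparison are simplifying assumptions of this example, not the patent's actual mechanism:

```python
def extract_pattern(streams: list[list[str]]) -> list[str]:
    """Keep words that are identical across every input stream;
    replace positions whose words vary with an empty slot '( )'.

    Assumes streams of equal length, for simplicity.
    """
    pattern = []
    for i, word in enumerate(streams[0]):
        if all(s[i] == word for s in streams[1:]):
            pattern.append(word)   # data-independent: part of the pattern
        else:
            pattern.append("( )")  # data-dependent: abstracted away
    return pattern

# Two similar streams share the data-independent pattern "( ) is a color":
# extract_pattern([["red", "is", "a", "color"],
#                  ["blue", "is", "a", "color"]])
```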
- Sentence any string of words that might be used to either stimulate a thought, respond to another sentence that is used as a stimulus, or to declare information.
- the four example sentences above are in these categories, in order: stimulus, response, declarative and stimulus. Some declarative sentences can also be responses. Whether a sentence is declarative or a response depends upon when the sentence occurs in a dialog. For example, if it occurs after a question is asked, it is a response. If before, it is a declaration. A stimulus sentence can never be a response or a declarative type. The following notation is used: Ss - a stimulus sentence; Sr - a response sentence; Sd - a declarative sentence.
- St an acknowledgment of success
- Sf an acknowledgment of failure
- Dialog a sequence of sentences created alternatively by a human and a machine.
- the beginning sentence type in a sequence is always stimulus or declarative, by definition.
- a response sentence used as the beginning sentence in a sequence would be classified as a declarative.
- the reason for this careful separation of definitions is to never confuse the declaration of information with a request for information.
- a dialog is defined as consisting of a legitimate sequence of a pair of sentence types defined as: Ss : St; Ss : Sr; Sd : St; Sd : Sf; or Sd : Sd.
- (Ss : Sr) is called a stimulus-response pair and (Sd : Sd) is called a logical inference or deduction.
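As a sketch, the legitimate sentence-type pairs listed above can be checked with a simple set membership test; the tuple encoding of the pair types is an assumption of this example:

```python
# Sentence-type labels from the definitions above:
# Ss stimulus, Sr response, Sd declarative, St success ack, Sf failure ack.
LEGAL_PAIRS = {
    ("Ss", "St"), ("Ss", "Sr"),
    ("Sd", "St"), ("Sd", "Sf"), ("Sd", "Sd"),
}

def is_legal_pair(first: str, second: str) -> bool:
    """Return True if (first, second) is a legitimate dialog pair."""
    return (first, second) in LEGAL_PAIRS
```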
- Eigen words and eigen vectors An eigen word is a word that can be replaced by another word in the same sentence without changing the conceptual basis of the sentence. For example in the sentence "Red is a color,” Red could be replaced by blue giving: “Blue is a color.”
- Both the words red and blue are eigen words. Both sentences can be represented by the following notation: {Red Blue} is a color.
- {Red Blue} is called an eigen vector.
- the set of all names for colors is called the eigen vector space for color.
- "{Red Blue Green ...} is a color" is an eigen vector space representation of all of the possible sentences that can be generated about the concept of names of a color.
- a related notation to the eigen vector space representation above is the following: ( ).en is a color. This notation is a way of expressing the pattern inherent in the eigen vector space for color. Pattern: A pattern is an expression of the type: ( ).en is a color. In this expression, n is an integer, e.g., ( ).e23.
- Instantiated setVariable: An example of an "instantiated" setVariable is: (Red).en. Abstraction: The process of pattern creation. Consider the sentence: John went to school. An abstraction of this sentence is ( ).e23 went to ( ).e89. The integers 23 and 89 have been selected only as examples.
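A minimal sketch of this abstraction step replaces each eigen word with its uninstantiated setVariable. The eigen vector spaces and index numbers below are invented examples, chosen to match the ones used above:

```python
# Hypothetical eigen vector spaces, keyed by setVariable index.
EIGEN_SPACES = {
    "e23": {"john", "bob", "fred"},        # male first names (example)
    "e89": {"school", "lunch", "dinner"},  # example set from the text
}

def abstract(sentence: str) -> str:
    """Replace every eigen word with an uninstantiated setVariable '( ).en'."""
    out = []
    for word in sentence.lower().split():
        for index, space in EIGEN_SPACES.items():
            if word in space:
                out.append(f"( ).{index}")
                break
        else:
            out.append(word)  # not an eigen word: keep it as pattern text
    return " ".join(out)

# abstract("John went to school") yields "( ).e23 went to ( ).e89"
```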
- Patterns also can create a vector space.
- ( ).e23 is the setVariable of all male first names and ( ).e89 is the setVariable {school lunch dinner}.
- This sentence is an example of a vector space chosen to be called a "pattern space.”
- ( ).e45 which is the setVariable of all female first names is also present. Then, the expression
- Real Space Ordinary words can also form vector spaces in what has been named "real space.”
- the {Red Blue} eigen vector example above is a vector space that exists in real space. By definition a vector cannot exist in both pattern space and real space.
- Complete pattern All the patterns used in this section so far are examples of "complete" patterns.
- a complete pattern is one that does not rely on external information to complete its meaning.
- Incomplete patterns are ambiguous and in themselves do not contain a concept. Examples of “incomplete” patterns are: It was ( ).e56; That was a good ( ).e49; etc. Making an incomplete pattern into a complete one is a type of "disambiguity.”
- a Functor is a construct with two components: a pattern and an argument list, as in the following example:
- the argument list is any list of set variables and functions.
- a Concept is a list, tree structure that can contain any amount of complexity.
- the argument list serves as potential instantiations of a pattern.
- red is a color can be written:
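A hedged sketch of the Functor construct as a data structure; the field names and the left-to-right instantiation rule are assumptions of this example, not the patent's internal format:

```python
from dataclasses import dataclass, field

@dataclass
class Functor:
    """A pattern plus an argument list of potential instantiations."""
    pattern: str                                   # e.g. "( ).e23 is a color"
    arguments: list = field(default_factory=list)  # set variables or nested Functors

    def instantiate(self) -> str:
        """Fill the pattern's empty slots with the arguments, left to right."""
        text = self.pattern
        for arg in self.arguments:
            text = text.replace("( )", f"({arg})", 1)
        return text

f = Functor(pattern="( ).e23 is a color", arguments=["red"])
# f.instantiate() yields "(red).e23 is a color"
```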
- the AASM theory recognizes two distinct components of intelligence: language capability; and the ability to use language for directed tasks. In human beings, these are primary but it is recognized there are other components of intelligence. Thus intelligence in this model is not one thing, but rather a series of interlocking capabilities.
Language Capability
- language capability has been defined as the ability to decode a list of words into a deep-knowledge representation and back into language. Unfortunately, this definition assumes the model, which is a logical error. We need a better way of defining language capability.
- Language is composed of many combinations of simple patterns.
- Language capability in AASM is defined as the ability to "transform" sentences (symbol strings) and the ability to create and instantiate abstractions, that is, the process of creating patterns represented by concrete examples. These processes are defined in the following sections.
- a transformation relationship simply describes how one sentence can produce another sentence. As an example, consider the transformations between a stimulus and a response:
- the stimulus and response can be written as pairs:
- in Equation 7, i and j are integers and represent all possible transformations from the ith stimulus to the jth response. This means for the ith stimulus there are n possible responses, where n can be any number. There are similar expressions for all legitimate (legal) sentence transformation pairs. Only legitimate sentence transformations are allowed, but there may be any number of transformations possible.
- a sentence can be abstracted by replacing eigen words with corresponding uninstantiated set variables.
- Red is a color
- ( ).e23 is a color.
- an Abstract Association is produced.
- the ability to store abstract associations is a key part of the intelligence model.
- the abstractions are deductive, the subset of Abstract Association is termed a Deductive Association.
- the Associative Abstraction Sensory Model describes how humans learn language and are able to produce intelligent dialogs. The latter is more than just memory and association. Human beings respond differently in different situations, solve problems and even have their own agendas.
- the following discussion begins by introducing the model for cognition. That is followed by an explanation of how language is learned, and finally how this model is extended to produce directed dialogs.
- Figure 1 schematically describes the Cognitive Model 10 of learning used in the present invention.
- Human beings learn language at an early age, at first by simply mimicking the adults or older children around them. Parrots can do the same thing. Parrots can learn that when they hear a certain pattern of sounds, another pattern of sounds is expected. What differentiates humans at a very early age is the ability to detect patterns in the sequence of sounds and to associate those patterns with real-world objects and situations (although many mammals and birds apparently have some pattern-using capability).
- Sensory information 12 e.g., sight, sound, touch, etc.
- Sensory information is associated with things in the real world and stored in a person's associative memory 14. For example, if a parent points at a chair and says "this is a chair” the child associates that sentence with sensory information, in this case a mental picture of the chair, how it feels to the touch etc.
- a human being needs only this information to associate the word "red” as a word that can be used in a pattern. With only this much information one knows how to describe reality in the context of language. Deep knowledge, such as the fact that red is associated with a particular frequency in the optical spectrum, is not required for every day conversation, but can be added as a series of deductive associations. Thus human knowledge is a combination of inductive knowledge about language and deductive associations that connect language elements.
- the AAS model organizes information in bundles called the "context" that allows information on specific subjects to be efficiently processed by grouping together related associations and patterns. It might be that this is an unnecessary step and a disadvantage of the model, but experiential knowledge suggests that this is the process the brain uses to organize information.
- the processes of the present invention are not merely methods for processing information efficiently but, with extension, also serve as a way of directing comprehension and discourse.
- Every algorithm in the AAS model is also an information compression and retrieval method. If intelligence is the outcome (perhaps inevitable) of evolution finding ways of storing and retrieving large amounts of information necessary for survival in the brain, the AAS model simulates that. It may be that life develops intelligence only when situations arise that give a survival advantage to processing and storing large amounts of information.
- Figure 2 shows inputs as speech 40 to a speech recognizer 41.
- a keyboard or other reader may be used as an input device.
- Figure 2 is helpful in understanding the process by which a learned stimulus is able to produce an intelligent response from the invention.
- the speech recognizer 41 and the (text-to-speech) speech synthesizer 26 are generally known and not part of this invention. Therefore, they are not described in detail in this Specification. However, the effectiveness of these modules is increased by ADAM technology.
- a speech recognizer can understand more words if it has an expectation of what is to be said, based on the current context.
- a Speech Synthesizer can be made more understandable by subtle modifications of the words, such as emphasis, that is possible through understanding of the meaning contained in the language provided by ADAM technology.
- a Stack 24 is simply a list of Concepts, which are normally accessed from the top-down. This process is driven by (1) the text string 50 produced by the Speech Recognizer and (2) a module called the Dialog Supervisor 36.
- when a text string 50 is received from the Speech Recognizer 41, it is converted 42 to an instantiation of an abstract concept (or Concept) and handed to the Dialog Supervisor 36.
- a primary function of the Dialog Supervisor 36 is to recognize the current context of the dialog. It does this by searching the Contextual Database 32.
- the Contextual Database 32 (which can be edited by an external program called the Context Editor) contains information that the Dialog Supervisor 36 can use to actively direct the flow of the dialog. The default action is simply to respond to a stimulus 16, for example in answering a question.
- the Dialog Supervisor 36 can use its knowledge of context to load the Speech Recognizer 41 with words and grammars that are expected in the current context.
- the stimulus 16 is abstracted from the input text string 50 and the Pattern Buffer 34 is searched for an identical abstract stimulus 16. When one is found, a copy of the associated response pattern is created and instantiated with any words found in the original stimulus 16. (The Pattern Buffer 34 contains information about which words are identical in both the abstract stimulus and the abstract response patterns.) At this point, a response 18 has been created with usually only a partial instantiation.
- the Pattern Buffer 34 also contains inferences as abstractions, although there are stricter rules about what can be abstracted as an inference in the Pattern Buffer 34. If the response pattern is not completely instantiated, the inference patterns are instantiated and "fired.” Firing an inference means that the Conceptual Database 30 is searched to see if the instantiated inference is contained there. This mechanism is very useful because it can resolve ambiguities and simulate deductive reasoning.
- mapping 44 is the association of patterns of instantiation with stimulus-response pairs. For example, the question “what is 2 plus 3?" is a mapping problem, since it maps 2 and 3 into 5 by associating them with a certain pattern. The program stores a large selection of mapping algorithms and “learns” how to map by finding the best algorithm during training. Another mechanism is called an “Action.” This is a facility for associating an abstraction with an action that can be performed by a computer, such as reading a file.
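The algorithm-selection step described above can be sketched roughly as follows. This is a hypothetical illustration, not the patent's implementation; the algorithm names and the `learn_mapping` helper are invented.

```python
# Hypothetical sketch of the "mapping" mechanism: training supplies
# stimulus/response examples, and the program searches a stored library
# of candidate algorithms for one that predicts every example.

CANDIDATE_ALGORITHMS = {
    "plus": lambda a, b: a + b,
    "minus": lambda a, b: a - b,
    "times": lambda a, b: a * b,
}

def learn_mapping(examples):
    """Return the name of the first algorithm that predicts all examples."""
    for name, fn in CANDIDATE_ALGORITHMS.items():
        if all(fn(a, b) == out for (a, b), out in examples):
            return name
    return None  # no stored algorithm explains the training data

# Examples abstracted from questions like "what is 2 plus 3?" -> 5.
examples = [((2, 3), 5), ((4, 1), 5)]
learned = learn_mapping(examples)  # -> "plus"
```

Once an algorithm has been associated with a sentence pattern in this way, a later question matching the same pattern ("what is 7 plus 8?") can be answered by recalling and applying the learned algorithm.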
- Supervising the total process is a Goal Processor 22.
- the Goal Processor normally takes concepts off of the Stack 24, translates them into a text string 50 (with any instructions on emphasis and timing) and hands them to the Speech Synthesizer 26.
- the Goal Processor 22 handles timing issues involved in producing a natural dialog. It also can interrupt human-machine dialogs when external issues arise such as low battery power or meta-goals such as a need for certain kinds of information. Many applications require that ADAM have a self identity, and that has implications for goals that are handled by the Goal Processor 22.
- Figure 3 reveals a flow diagram depicting a process 51 by which a computer system, for example ADAM, "reads" an input text stream 50. In the current invention, a computer system learns by reading written language or hearing spoken language.
- a response 18 to a response link, that is, a deductive association link
- the AAS model 20 uses deductive association to make knowledge unambiguous and to make it declarative and explicit. This process eliminates the sometimes "shorthand" characteristic of language (e.g., the use of pronouns) and makes the conceptual knowledge contained in language explicitly available in memory as Concepts (recall the definition of a Concept from above).
- Language learning in the AAS model is accomplished by structured, interactive induction.
- Early training of a computer to implement the model consists of inputting sentence pairs which are analyzed to produce abstract information about language. No specific world knowledge is stored in the computer at this stage since it is later abstracted from input patterns. It is at the early training stage that eigen vector sets are produced by noticing similarities between sentences. Once eigen vector sets are created, the next step is to create and store simple patterns using eigen vectors as markers. As with humans, it is important that only simple grammatical patterns are used at this stage. The simple patterns are used as the basis of creating more complex patterns later. This structured way of learning grammar is what invalidates Gold's “proof” that inductive learning of grammar is impossible.
- the AASM automatically uses the simple patterns to build more complex patterns.
- the patterns are stored in memory as tree structures with a pattern representing the central abstract concept of the sentence, being the root of the tree.
- Complex patterns are created from simple patterns by using two simple rules: (1) No pattern can exist twice in memory and, after the first version is created, only a unique symbol for that pattern can be used; and (2) no pattern can contain a sub-pattern without explicitly containing the unique symbol for that pattern.
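The two storage rules above can be illustrated by a small interning registry. This is a hypothetical sketch; the patent does not specify a data layout, and the class and symbol names are invented.

```python
# Sketch of the two pattern-storage rules: (1) each pattern is stored
# once and afterwards referenced only by a unique symbol; (2) a complex
# pattern must reference its sub-patterns by symbol, never inline.

class PatternRegistry:
    def __init__(self):
        self._by_body = {}    # pattern body -> unique symbol
        self._by_symbol = {}  # unique symbol -> pattern body
        self._next = 1

    def intern(self, body):
        """Store a pattern once; return its unique symbol (rule 1)."""
        if body in self._by_body:
            return self._by_body[body]
        sym = f"p{self._next}"
        self._next += 1
        self._by_body[body] = sym
        self._by_symbol[sym] = body
        return sym

    def intern_complex(self, parts):
        """Build a complex pattern whose sub-patterns appear only as
        their unique symbols (rule 2), then intern the result itself."""
        return self.intern(tuple(self.intern(p) for p in parts))

reg = PatternRegistry()
p1 = reg.intern(("X", "is", "a", "Y"))        # stored once
p1_again = reg.intern(("X", "is", "a", "Y"))  # same symbol returned
```

Because every pattern body maps to exactly one symbol, rule 1 is enforced automatically, and `intern_complex` satisfies rule 2 by replacing each sub-pattern with its symbol before the complex pattern itself is stored.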
- the last learning stage is creation of a data base of world knowledge. Since grammar is now learned, inputting world knowledge is done either by human interaction or reading in text. The result is a list of Concepts called the Conceptual Database 30. The stages of learning are summarized in Table One. Note that this method does not require any pre-existing knowledge of language or grammar.
- a word can be any series of characters or a symbol.
- the theory does not depend on the particular words that are chosen, but how those words are used to convey meaning.
- the meaning is captured by abstraction and association. If the set of all possible words in any particular language is S then a particular sentence is an ordered set, or vector, in the space given by S.
- M1ij is a rotation matrix and W1j is a word vector.
- M1ij W1j is a sum over j; that is, repeated indices are summed.
- the elements of M can only have the value 1 or 0.
- M selects a word from its vector space.
- the Wkj word can be a "null word” — that is, no word—but there are restrictions on how many null words are allowed. See below.
- a sentence is defined as a set of k orthogonal rotations in k vector spaces. Some vector spaces contain only one word. In that case the associated matrix is equal to 1. It is important to note that the vector spaces are dynamically modified by extending the number of dimensions. This reflects the fact that this is a learning model.
- the first objective of this mathematical theory is to find a way to represent similar sentences.
- N sentences can be described as a series of rotations given by Equation 9.
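A minimal sketch of the selection idea behind these equations: each word slot has its own vector space of known words, and a row of the 0/1 matrix M selects one word from it (summation over the repeated index j). The function names are illustrative only.

```python
# Each "rotation" row is a 0/1 vector with exactly one 1, which selects
# a single word from its vector space (the contraction M_ij * W_j).

def select(row, space):
    """Apply a 0/1 selection row to a word vector space."""
    assert sum(row) == 1 and all(v in (0, 1) for v in row)
    return [w for v, w in zip(row, space) if v][0]

def realize(rows, spaces):
    """A sentence = one selection per vector space, in order."""
    return " ".join(select(r, s) for r, s in zip(rows, spaces))

# Single-word spaces correspond to the case where the matrix equals 1.
spaces = [["Jane", "Alice", "Paula"], ["went"], ["to"], ["the"], ["store"]]
rows = [[0, 1, 0], [1], [1], [1], [1]]
sentence = realize(rows, spaces)  # -> "Alice went to the store"
```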
- In the computer, there is a “working memory” that stores raw input as lists of similar sentences. It is from this raw material that abstract knowledge is created.
- a particular value w of the setVariable en is represented by (w)en.
- a pattern is created by replacing the eigen value by its corresponding setVariable. Recall that this process is called Abstraction.
- a Functor (f) is a way of expressing a sentence by explicitly separating eigen values and patterns.
- a "complete" sentence expressed as one or more functions is called a Concept (C).
- <f (argument list) p>* *The less-than and greater-than signs are used as delimiters, not as logic symbols.
- Element p is a pointer to a pattern.
- the argument list can be either eigen values or functions.
- the expression is called the canonical form for representing concepts.
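The canonical form <f (argument list) p> might be rendered in code along these lines. The class names and the `render` method are invented for illustration; the patent specifies only the notation, not a data layout.

```python
# Hypothetical rendering of the canonical form: a Functor couples a
# pattern pointer with an argument list whose entries are eigen values
# or nested Functors; a Concept is a list of Functors.

from dataclasses import dataclass, field

@dataclass
class Functor:
    pattern: str  # pointer/symbol of an abstract pattern, e.g. "p2.1"
    args: list    # eigen values (strings) or nested Functors

    def render(self):
        inner = " ".join(a.render() if isinstance(a, Functor) else a
                         for a in self.args)
        return f"<f ({inner}) {self.pattern}>"

@dataclass
class Concept:
    functors: list = field(default_factory=list)

inner = Functor("p3.4", ["hill"])
f = Functor("p2.1", ["jack", inner])  # arguments may nest Functors
# f.render() -> "<f (jack <f (hill) p3.4>) p2.1>"
```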
- the Conceptual Database 30 contains Concepts created by reading in text or processing speech input 40. Concepts are created by de-constructing sentences into patterns and eigen words.
- the Conceptual Database 30 is a repository of knowledge.
- the Pattern Buffer 34 though not a repository of knowledge, is as important.
- the Pattern Buffer 34 does not have specific knowledge about the real world. That information is relegated to the Conceptual Database 30.
- the Pattern Buffer 34 contains information about how to extract information from the Conceptual Database 30.
- Reading consists of the following process: dividing the text string 50 into sentences 52; converting the sentences into concepts.
- the above sentence is converted into a concept having a set of two functions, C1: (f1 f2) which is abstracted as follows (the digits identifying the eigen vector spaces and pointers being arbitrary):
- the pattern referred to by the first pointer p2.1 in the first Functor is:
- a stimulus 16 is entered into the computer, for example, by a person:
- ADAM "knows" that the pail contains water because it recognized the pattern g1.1 which embodies the concept of an object containing something and, in addition, can generate a way of verbalizing that concept as a response to the question. If this seems confusing, remember that g1.1 can also be part of an S-R pair. It is the information contained in the S-R pairs that allows the program to know how to answer questions even when information is embedded inside sub-units of the language. Compare this method with traditional parsing methods, which depend on static identification of parts of speech. In real language, the part of speech of any particular word can change by simply embedding the sentence it is contained within inside a larger unit.
- the pointer p2.1 represents the abstraction of someone doing something up the something.
- the program has abstracted the specific details of identification, motion and place. This may seem an odd way of abstracting the idea, but the particular form of the abstraction in the computer does not matter. It only matters that it has found a way to store how to use language separate from the specific details. The abstraction process is completely automated. Once the program has found a way to abstract a particular idea, it will use that method consistently.
- One of the functions of the Pattern Buffer 34 is to obviate the differences.
- the Pattern Buffer 34 stores lists of patterns, not just a single pattern. From the Pattern Buffer 34, the program knows that either sentence is the answer to the stimulus Who hit Mary? In other words, one of the functions of the Pattern Buffer 34 is to record the duality of language.
- the Goal Processor 22 is the highest level process. It takes sentences on the Stack 24 and hands them to the Speech Synthesizer 26. It is the responsibility of the Goal Processor 22 to manage real-time response to stimulus 16. The Goal Processor 22 can override verbal response if necessary.
- the Conceptual Database 30 contains real-world knowledge
- the Contextual Database 32 can direct discourse in certain circumstances
- the Pattern Buffer 34 which contains information about how to extract knowledge from the Conceptual Database 30.
- Figure 3A further expands the "Find Response Pattern" process 48 of Figures 2 and 3.
- An input sentence 110 obtained from the convert-to-sentences process 52 is handed to a routine 120, 122 which identifies all the eigen words {en...em} and patterns {( )p} from the Conceptual and Contextual Databases 30, 32.
- the result is a Functor 122 or set of functions.
- the concept is abstracted by removing the eigen words and a search 124 is performed in the Pattern Buffer 34 for a matching abstraction. When a match is made, an abstract response is generated by following the links in the Pattern Buffer 34. Once found, a search 128 is made of the Conceptual Database 30.
- the eigen argument list 130 is created for the search.
- Some of the eigens {er..es} can come from the stimulus 16 and the rest are filled in by the found Concept 132.
- a sentence is created 134 from the eigen argument list and pattern 132.
- a test 126 is made to see if the sentence 110 is complete, i.e., all set variables have been instantiated. If true, an output sentence 112 is generated.
- the output sentence 112 is placed on the stack 24, passed through the goal processor 22 to the speech synthesizer and delivered to the user by audio.
- the output sentence 112 may also be printed or reproduced by most known means.
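The "Find Response Pattern" flow just described can be sketched end to end: abstract the stimulus by removing eigen words, look up a matching abstract stimulus in the Pattern Buffer, then instantiate the linked abstract response with a Concept found in the Conceptual Database. The toy eigen list, buffer contents, and function names below are all invented for illustration.

```python
# Toy eigen vocabulary and databases; real contents are learned.
EIGENS = {"john", "mary"}

def abstract(sentence):
    """Replace eigen words with empty set-variable slots."""
    words, eigens = [], []
    for w in sentence.split():
        if w in EIGENS:
            words.append("()")
            eigens.append(w)
        else:
            words.append(w)
    return " ".join(words), eigens

PATTERN_BUFFER = {"who hit ()": "() hit ()"}  # abstract S -> abstract R
CONCEPTS = ["john hit mary"]                  # conceptual database

def respond(stimulus):
    abs_s, eigens = abstract(stimulus)
    abs_r = PATTERN_BUFFER.get(abs_s)     # search the Pattern Buffer
    if abs_r is None:
        return None
    # Search the Conceptual Database for a Concept that matches the
    # abstract response and carries the stimulus's eigens.
    for concept in CONCEPTS:
        c_abs, c_eigens = abstract(concept)
        if c_abs == abs_r and all(e in c_eigens for e in eigens):
            return concept
    return None

# respond("who hit mary") -> "john hit mary"
```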
- Figure 4 shows the modules describing three broad areas of capability and operability of the ADAM (Automated Dialog Adaptive Machine) 200, a preferred embodiment of this invention.
- the first module, Training 201, reveals how ADAM "learns" information input by a user.
- a second module, Interactive Dialog 202, describes ADAM's capability to do interactive, goal-driven dialog.
- the Read Text Documents 203 module describes machine 200 reading and comprehension.
- ADAM 200 is intended to simulate the human capability to converse and understand in a practical, efficient and useful way.
- the inventor views human intelligence as the result of several interlocking simulation processes that can be programmed on a computer.
- the inventor does not claim that the described processes can simulate all capabilities of human intelligence, only that enough of this capability can be simulated to perform useful tasks.
- ADAM 200, as does human intelligence, rests on six major pillars of information processing, which together are the major innovations of this invention. These are: (1) Abstraction of information into patterns; (2) Association of stimulus and response patterns; (3) Abstraction of logical Inference; (4) Mapping of objects and abstract concepts into concrete reality; (5) Reasoning by analogy; and (6) Learning language and knowledge through training, reading and human interaction.
- Figure 4A shows how natural language information 50, input in training mode 201 , interfaces with the data base 30, 32, 34 and is retrieved for use in ADAM 200.
- Figure 4B depicts how questions about information in the data base, input in natural language 50 in interactive dialog mode 202 interface with the data base 30, 32, 34 to produce an output of answers and data forms 206 from ADAM 200.
- Figure 5 shows that user inputs may be a stimulus 211, a response 212, an "if statement” 213 or an "else statement" 214.
- the inputs may be conditioned in time 217.
- the user selects 218 the time 217 in which the condition should be applied relative to the time of the response. Note that this is not the tense of the condition, which can be different. This selection allows a comprehension engine to make an inference that takes into account the passage of time.
- Link 220 allows creation of a data structure that defines how each of the user inputs 211, 212, 213, 214 link together.
- the user selects 216 any external action 215 associated with a response 18. This ultimately creates an association between an abstraction and an action 215. For example, if an associated response 18 is "a wrench has been selected,” the action 215 selected should be able to deal with the abstract notion of selecting something. The action 215 should deduce from a particular stimulus 16, the statement “select a wrench.” This statement is the one which would elicit the response 18 "a wrench has been selected.”
- the user selects 223 between non-local and several choices of locality 225.
- "Locality" 225 relates to identifying a concept as finite in space and time.
- a person is "local” because the concept of a specific human being is local to where that person is in space and time.
- the concept of a human being is non-local. That is, the attributes of a human being are independent of space and time. If the response was "Jane has blue eyes,” this is a local statement since it is only true for a particular point in space and time, that is, where Jane is. However, if the response was "humans have two legs,” that is a non-local statement since it is true for all humans everywhere.
- Statements that should be labeled "local” are only statements that define a particular type of locality. The most important of these is the concept of a person. Example stimulus-response pairs that define the differences between people should be input in the training process. The main purpose of this is to allow ADAM 200 to disambiguate references. It also serves the purpose of being able to separate things in time and space.
- reference tags for each locality should be established by the user, in a practical language, when training the program.
- the non-local tags are: he, she, it, they.
- the user can create new information at any time associated with each of these tags but the tag names should be used consistently. This is important since the program keeps maps for different non-local objects for the purpose of disambiguation.
- This aspect of training substitutes for the human experience of being able to map concrete objects with human senses as a part of reality.
- a baby learns to associate "mommy" with only one local object, for example, and that learning is eventually integrated into language about local objects.
- a Map that describes how to create a Map is called a Meta Map.
- a Map is created when the user indicates he or she is introducing stimulus-response pairs 16, 18 about a local object or an external action 215. The user is allowed to identify a locality with a tag such as "he", “she”, or “it” for convenience, although that is not necessary for proper functioning of the program.
- the information needed by the program is that stimulus- response pairs 16, 18 are about local objects and what kind of external action 215 is associated with the patterns being created.
- the user can "lock in” a particular locality. All examples then input while the training is “locked” are about one particular local object.
- the program uses this information to create an array of setVariable indexes called “discriminators.” For example, if the user should input several stimulus-response pairs 16, 18 about someone named John, the program sees the setVariable for a male first name is being repeated while locked on a local object. It stores this information. Later, when the program detects any male first name, it creates a unique Map for this local object. The Map collects indexes into the Pattern Buffer 34 for a local object. This information is used to generate questions about a particular locality. For example, it could generate questions about a person's wife and children.
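The "discriminator" idea above might be sketched as follows: while training is locked on one local object, the program notices which set-variable types repeat across examples and records them for later Map creation. The type tags and function name below are invented for illustration.

```python
# Count set-variable types across examples input while "locked" on one
# local object; types that repeat become discriminators for spawning
# Maps when a new instance (e.g. any male first name) is later seen.

from collections import Counter

def find_discriminators(locked_examples, min_count=2):
    """Return setVariable types that repeat across locked examples."""
    counts = Counter(t for ex in locked_examples for t in ex)
    return {t for t, n in counts.items() if n >= min_count}

# Each example is the list of set-variable types it instantiated.
examples = [["male_first_name", "verb_phrase"],
            ["male_first_name", "place_name"],
            ["male_first_name"]]
# find_discriminators(examples) -> {"male_first_name"}
```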
- the MapProcessor is called during comprehension. See the later discussion about the Read Text Documents 203 module.
- the routine can automatically resolve mathematics questions based on learned algorithms.
- the technique used to create Maps is extended to data base applications.
- the program has tested algorithms until it could predict the results of examples it was given.
- the program applied a list of algorithms stored in memory until it succeeded in predicting the result.
- the program associates the correct algorithm with the pattern associated with the input sentence. Once it has found the way to return a correct answer, the program recalls the method when a similar sentence is entered.
- the database application technique produces methods to extract data from a data base 30, 32, 34 based on examples given to ADAM 200 during training.
- Figure 4B indicates that users can take advantage of this feature to find new relationships in their data bases.
- a Locality Meta Map 226 stores data about how to differentiate between different kinds of mapping situations and how to tell the difference between different kinds of maps 228.
- Figure 5 A contains a structure diagram for Meta Maps 226.
- Locality Meta Maps 226 have two kinds of data structures.
- Type (1) structure has indexes to Pattern Buffer entries that were made in training while in the "local” mode. These are stored to allow ADAM 200 to ask questions about local objects.
- the Pattern Buffer 34 contains abstracted stimulus-response pairs which have "locality". For example, upon being accessed by a new user, the program can access its Meta Map 227 about people to ask questions about the user's marriage, family, etc. It can do this because the Meta Map 227 tells ADAM 200 that a person has all these possibilities that are "local” to him or her.
- a second type (2) is a list of patterns, (empty set variables) and eigen words.
- the locality maps are checked to see if there is a matching eigen word or setVariable. If there is a match, the Concept is added to a Map 228.
- the Maps 228 keep track of particular instances of local objects. For one thing, this provides for efficient pronoun disambiguation.
- Locality Meta Maps 226 are a specialized subclass of Meta Maps 227. Meta Maps 227 in general are used when there is a need to simulate a capability to predict. As described earlier, there are additional built-in maps 228 for mathematics and counting. For example, if a user asks "what is the square root of three?" the program may not at first know. But using the Meta Map's math algorithms, after being shown examples, the program can predict the answer. In effect, ADAM 200 can learn to do math, on its own, in any language.
- One problem Maps 228 can help solve is a counting problem. Suppose the program has been told there are three peas in a cup and now one has been taken out. How many peas are in the cup? This is a trivial problem for a person.
- a Map 228 can store the number of peas as a "local" characteristic of a cup and associate the change in this characteristic with the concept of taking something out of the cup. This process combines locality mapping with mathematical mapping.
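The pea-counting example above can be sketched as a locality Map that stores a numeric characteristic of a local object and associates a learned action concept with changing it. The class and action names are invented, not the patent's structures.

```python
# A locality Map holding a mutable, "local" characteristic (the number
# of peas in a particular cup) and the learned associations between
# action concepts and arithmetic changes to that characteristic.

class LocalityMap:
    def __init__(self, obj, count):
        self.obj = obj
        self.count = count  # local, mutable characteristic

    def apply(self, action, n=1):
        """Apply a learned action concept to the stored count."""
        if action == "take_out":
            self.count -= n
        elif action == "put_in":
            self.count += n

cup = LocalityMap("cup", 3)   # "there are three peas in a cup"
cup.apply("take_out")         # "one has been taken out"
# cup.count -> 2
```

This combines locality mapping (the count belongs to one particular cup) with mathematical mapping (the action concept selects the arithmetic to perform).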
- Her name is Jane. Jane is female.
- Table 3 tells the program that the specific name (e.g., Jane) can change, but words like name, her, female, gender and she are fixed for this locality Meta Map 227.
- ADAM 200 uses this information to decide if a reference is being made. In the sentence “her pride was hurt,” Adam would associate this sentence with a "she” Map that could contain other information about the local object— in this case a female human. If the question were asked, "whose pride was hurt,” ADAM 200 would look for a statement like
- a Meta Map 227 can spawn a Map for a particular instance.
- a Meta Map 227 is created which, among other things, establishes a list of eigen types for human names and gender. When these words are part of an input stimulus 16, a Meta Map 227 is created.
- ADAM 200 knows that two Maps refer to two different things because the specific eigen values are different. In the example above, the eigens that are not constant are used to differentiate between different objects of the same locality. Note in the example that this automatically handles names and points of view.
- Working Memory 219 causes user text strings to be tokenized and stored as lists of objects.
- the various types of text strings 50 that is, stimulus 16, response 18, "if" 213, etc., are linked together in working memory.
- Working memory is temporary and serves to store user examples of stimulus-response links 220 and associated logic and action statements 215. These examples are used to build abstractions in pattern memory 30,32 and the pattern buffer 34.
- Figure 6 discloses a flow diagram for the processing and storing into working memory of user inputs 211-214.
- the user input string 230 is matched 231 to existing working memory 237. If there is no match, the user input string 230 is added to working memory 237. If the difference 235 between the input string 230 and stored memory 237 is small, the user input string 230 is forced 236 to be equal to the closest match.
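The matching step above can be sketched with a simple string-similarity check: if the closest stored string differs only slightly from the input, the input is forced equal to it; otherwise it is stored as new raw input. The threshold value and function name are invented for illustration.

```python
# Near-duplicate detection for working memory, using difflib's
# similarity ratio as a stand-in for the patent's difference measure.

import difflib

def store(working_memory, text, threshold=0.9):
    match = difflib.get_close_matches(text, working_memory,
                                      n=1, cutoff=threshold)
    if match:
        return match[0]          # force input to the near-duplicate
    working_memory.append(text)  # otherwise add as new raw input
    return text

memory = ["jack went up the hill"]
# A one-character typo is forced to the stored form:
# store(memory, "jack wemt up the hill") -> "jack went up the hill"
```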
- Figure 7 shows the flow diagram describing the detailed process of working memory storage.
- User input 230 is compared with existing memory 237 and one data structure is created from both.
- the resulting structure replaces discrete words with rotation matrices that select individual words from a vector space. This is stated in equation form as follows:
- w(i) is a discrete word
- R(i, j) is a rotation matrix
- E(j) represents an eigen vector space
- E(j) is the vector space (Jane Alice Paula). If the program encountered the following two sentences: "Alice went to the store” and "Jane went to the store,” the program would represent both sentences as the following three by three matrix:
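A sketch of how two such training sentences might collapse into one stored structure: aligned positions that differ form an eigen word space E(j), and each sentence becomes a row of 0/1 selections over those spaces. This is a hypothetical reconstruction of the merge, not the patent's code.

```python
# Merge two equal-length sentences: positions where the words differ
# become a shared eigen space, and each sentence is recorded as a list
# of 0/1 selection rows (the rotation-matrix representation).

def merge(s1, s2):
    w1, w2 = s1.split(), s2.split()
    assert len(w1) == len(w2)
    spaces, rows1, rows2 = [], [], []
    for a, b in zip(w1, w2):
        space = [a] if a == b else [a, b]
        spaces.append(space)
        rows1.append([1 if w == a else 0 for w in space])
        rows2.append([1 if w == b else 0 for w in space])
    return spaces, rows1, rows2

spaces, r1, r2 = merge("Alice went to the store",
                       "Jane went to the store")
# spaces[0] -> ["Alice", "Jane"]; r1[0] -> [1, 0]; r2[0] -> [0, 1]
```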
- Working memory 242 is a list of raw input data created during training. In working memory 242, sentences and clauses have been converted to arrays of objects containing hashed integers representing individual words. In addition, Working Memory 242 contains a rich set of links that give information on how to associate stimulus 16, response 18 and conditional links. Working Memory 242 is the raw data from which the abstract data in the Pattern Buffer 34 is created.
- Pattern Seeker
- the Pattern Seeker 240 finds simple patterns and stores them, leaving the stimulus-response pattern in the Pattern Buffer 34.
- the Pattern Seeker 240 finds responses 18 that have the following patterns: (1) the pattern contains one or more eigen sets; (2) the pattern has one or two words bounded by an eigen set and an eigen word.
- the Pattern Buffer 34 maintains the association lists between patterns. Patterns are categorized and flagged as stimulus, response, inference and action. The following convention is used:
- a single element of a pattern buffer can be noted by:
- (a b) is a list of the subjects a and b.
- the integer indexes indicate the pattern and pattern level.
- Any number of responses and inferences can be associated with a stimulus. For example:
- stimulus 45, second level is associated with two responses. Logically, this means, in an abstract sense, that if the pattern s45.2 is detected, the "form" of the response (meaning not the actual content) can either be r23.1 or r35.2. However, if the association was actually
- Pattern Seeker 240 program would create the pattern
- the postulated pattern ( ).e56 is the vector space ({my family} {my relatives}).
- the expression in brackets is called a vector set and is represented by the same rotational matrix structure. However, instead of being a rotation in eigen space, this is a rotation in "pattern" space.
- the word "pattern” is used to mean an empty setVariable and also lists of empty set variables.
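The association-list convention described above (a stimulus key like s45.2, i.e. pattern 45 at level 2, linked to response forms r23.1 or r35.2) might be sketched as a simple lookup table. The dictionary layout and helper name are illustrative only.

```python
# Pattern Buffer association lists: a stimulus pattern/level key maps
# to the abstract response forms that may answer it. Inference and
# action entries would follow the same convention (omitted here).

PATTERN_BUFFER = {
    "s45.2": ["r23.1", "r35.2"],  # either response form may answer s45.2
}

def response_forms(stimulus_key):
    """Return the abstract response forms associated with a stimulus."""
    return PATTERN_BUFFER.get(stimulus_key, [])

# response_forms("s45.2") -> ["r23.1", "r35.2"]
```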
- Figure 9 is a further detailed flow diagram of Creating a Pattern process 244 shown in Figure 8 flow diagram.
- a pattern is abstracted (made) from Working Memory 242 data.
- the abstracted pattern is searched for a "clause".
- a clause is defined as a sub-pattern within a sentence. Clauses are detected in two ways: (1) by comparing with known simpler patterns; (2) by recognizing that many clauses start with certain words like to, from and about. If the pattern "X is a Y" is known and the sentence
- a Functor List 246 is an abstract form of a segment of Working Memory 242. It is similar to a
- a pattern is an empty setVariable and lists of empty set variables.
- ( ).e4 is full <(( ).e7) p2.3>, where ( ).e4 is an empty setVariable and <(( ).e7) p2.3> is a Functor containing an empty setVariable.
- Figure 10 is a further detailed flow diagram of the process of making a pattern 248 from a
- Figure 11 is a further detailed flow diagram of the process 250 of making a Functor 256 as shown in Figure 9.
- the element set 252 of the memory segment is processed by the Structure Pattern Match 254 to find an existing pattern that best matches the element set 252 or install a new pattern if no match is found.
- the objective of the Structure Pattern Match 254 is to create the raw form 256 of the Functor argument list
- Creating a Functor 258 comprises the process depicted in Figure 11A.
- the argument list 255 in raw set form is converted to Functors 257 and put into a canonical form of Argument List 259.
- Figure 12 presents additional detail of the process of creation of a Functor 256 from the element set 252 of the memory segment.
- Figure 13 depicts details of the Structure Pattern Match 254 process.
- the best match 260 to the element set 252 is checked for a pattern 262 already existing. If one exists, the pattern and the element set 252 are kept as part of a raw argument list 266. If no pattern exists, a pattern is created and installed 264 with the element set
- Figure 4 reveals the Interactive Dialog module 202 of ADAM 200.
- Figure 16 shows a diagram which describes the flow of interactive voice input 300 to a speaker-independent, continuous speech recognizer 302, external to ADAM 200.
- the person skilled in the art will recognize that other types of textual input devices may be used in place of verbal devices, for example, keyboard entry, scanned text, etc.
- the speech recognizer 302 returns a "Recognition Event 304.”
- the Recognition Event 304 is handled by ADAM 200 in a process that is described in Figure 17.
- the Process Recognition Event 304 generates a response 312 to the user speech input 300, 302.
- the response 312 is in the form of a Concept data structure which is then placed on the stack 24.
- In Figure 18, the process flow for creating a sentence Functor set 310 is depicted.
- the elements 313 of the user's input 300, 302 are obtained by comparison to the language model 306. From these elements 313 a Concept is created 314.
- a Functor is made 316 which describes the Concept. The Functor is appended to the Concept.
- the raw Argument List 322 is converted 326 to Functors and a canonical form of the Argument List 328 results.
- the Generate Response process 312 is further delineated in the flow diagram of Figure 22.
- the Stimulus Concept 338 is added to memory 342.
- Figure 23 shows the continuation of flow within the Add Stimulus to Memory process 340.
- a Concept 360 created from user input 300 is added to the Conceptual Memory 362 data base.
- a decision is made 364 about the "locality" of the Concept 360 and a Map is created 366.
- an interrogative statement requires a response.
- a declarative statement is information to be learned and stored. Therefore, the Stimulus Concept 338 is examined 342 to determine which case it is. If it is not a declarative statement, a Response Method is determined 344. If the Response Method 344 returns a Concept from the data bases 30, 32, the Response is found and can be processed 348.
- the Response Method is further described in Figure 24.
- a search is undertaken over all of the primary patterns in memory for a Response Set 370, a pattern like that of the Concept Stimulus 338. From the found Response Set 370, the Conceptual data base 30 is searched for a matching Concept. The Concept and Response are retained 376 for further processing 348.
- the Found Concept and Response 376 are passed to a Do External Action process 382 as displayed in Figure 25.
- Some Responses 376 held in memory may have had an external action attached to them.
- a robotic application has as a possible Response "the wrench is selected.”
- That Response has an action code associated with it that implements the external action.
- the input Concept 376 is returned and eventually placed 384 on a primary stack 386 and output as speech, text or both.
- this code returns an appropriate Concept as a Response 376.
- the Response 376 may not contain a Concept but only an inference.
- the Conceptual Database 30 is then searched to see if the instantiated inference is contained there 388. This step is called "Firing an Inference" 388. If an inference did create a Concept 376, the Concept 376 is passed through the Do External Action process 382.
- the Context Switcher 396 flow is depicted in Figure 26.
- the program searches the Contextual Data base 32 for a Concept whose context would suggest the Concept 376 in question. If such a Concept is found there, the Conceptual Database 30 is searched 402 to see if the Concept, that is, the instantiated, generated inference has been stored there. If so, that Concept is placed on the primary stack 386, preparatory to processing for output.
- Reading Text Documents 203 The third module comprising ADAM 200 is Reading Text Documents 203. As seen in Figure 27, the text file is handled 420 in a conventional way with respect to opening 421, reading 422 and closing 426.
- Figure 28 expands the textual input 422 Comprehend process 424.
- the input text string 426 is added to memory 428 and concepts are constructed 430 from the statements input.
- In Figure 29, the Building of Concepts process flow is shown.
- a set of Functors is created 430 from the element set 432 contained in the processed text string 426.
- the prior discussion of Figures 18, 19, 20, 21, 14 and 15 support the flow diagrams depicted in Figures 30 through 36, respectively.
- the flow diagrams show the process whereby a Functor is created 438, 450, 452 and a Pattern Match is found 440, 460 and
- The ability to perform database extraction is an important application of ADAM. Normally database information retrieval is a costly and time-consuming process requiring a lengthy requirements process, database programming, debug and testing procedures. ADAM can be substituted for most or all of this process.
- ADAM's database data retrieval process is shown in Figures 37 and 38.
- In Figure 37, examples are given to ADAM, including a database query (in natural language) and an example of how the program should answer.
- ADAM treats the database as a map and uses its prediction algorithms to guess how data from the database was used to formulate an answer. It does not matter if there are ambiguous ways to give any one particular answer.
- ADAM can use multiple examples to find an algorithmic solution that solves all examples simultaneously. This method is only limited by the number of algorithms that are stored internally. Once the learned algorithms are stored as associations with natural language, the algorithms can be used to extract data as shown in Figure 38.
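- The search for "an algorithmic solution that solves all examples simultaneously" can be sketched as below. This is a hypothetical illustration: the candidate library, function names, and tolerance are assumptions for the sketch; the patent's stored algorithms and indexing are not specified at this level.

```python
# Hypothetical sketch: ADAM's internally stored algorithms are modeled as a
# small fixed library; learning picks the first one that reproduces every
# (input, answer) training example.
CANDIDATE_ALGORITHMS = {
    "identity": lambda x: x,
    "double":   lambda x: 2 * x,
    "square":   lambda x: x * x,
    "sqrt":     lambda x: x ** 0.5,
}

def learn_algorithm(examples):
    """Return the name of the first stored algorithm that satisfies all
    (input, answer) examples simultaneously, or None if none fits."""
    for name, fn in CANDIDATE_ALGORITHMS.items():
        if all(abs(fn(x) - y) < 1e-9 for x, y in examples):
            return name
    return None
```

For instance, the examples (9, 3) and (16, 4) together rule out every candidate except "sqrt", illustrating how multiple examples disambiguate a single algorithm.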
- Figure 39 shows the top-level pattern association mechanism.
- the input sentence is abstracted and a pattern buffer is used to find an abstraction that represents how to express a response.
- the conceptual database is then searched for a concrete expression that matches information found in the input sentence.
- the pattern buffer connects abstractions in the primary language to abstractions in the target language.
- the target language abstraction is then matched with concepts in the target language to perform the translation. In this way, the meaning of sentences is used to perform the translation.
- the same mechanism as in Figure 39 is used except that the training necessary to create the pattern buffer comes from training in translation.
- ADAM can also be used for identification purposes. Every person has characteristic patterns they use in writing and in speech. During text and speech comprehension, ADAM matches abstract patterns to speech patterns as a uniform comprehension mechanism. By simply counting the number of each pattern used by different individuals, a characteristic signature can be developed for recognition purposes. This means that an individual "i" would have an individual signature that could be developed by analyzing the text and speech of different people. The probability of the speaker being person "i" would be:
- Prob(i) = n(1)*p(1,i) + n(2)*p(2,i) + n(3)*p(3,i) + ....
- where n(j) are the measured numbers of usages of each pattern j, and p(j,i) are the individual probabilities of each pattern based on calculated statistics of individuals.
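- The signature formula can be sketched directly. The counts and probability profiles below are illustrative numbers, not measured data; the function names are assumptions for the sketch.

```python
# Minimal sketch of the signature formula:
#   Prob(i) = n(1)*p(1,i) + n(2)*p(2,i) + ...
def speaker_score(usage_counts, pattern_probs):
    # usage_counts: n(j) for each pattern j; pattern_probs: p(j, i) for person i
    return sum(n * p for n, p in zip(usage_counts, pattern_probs))

def most_likely_speaker(usage_counts, profiles):
    # profiles maps each person i to that person's per-pattern probabilities
    return max(profiles, key=lambda person: speaker_score(usage_counts, profiles[person]))
```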
- The capability of ADAM to comprehend text and perform interactive real-time dialog can be used to provide a personal assistant and companion that has up-to-date information on topics of interest to the user. This information will be comprehended in real-time over the Internet by the ADAM-driven companion and made available as natural conversation. From the viewpoint of the user, this has the advantage of quickly accessing information of interest in the most natural of ways. It would simulate an attentive, knowledgeable person who would be available for helpful information or just companionship 24 hours a day. People living alone or in remote areas might find this a helpful psychological aid.
- ADAM can provide interactive assistance through a voice wireless link.
- the worker can describe a situation and ADAM can provide procedural help, situation analysis, and descriptions of other information that is developing from a remote area. Since ADAM has the ability to learn scenarios, ADAM can anticipate next steps, warn of possible dangers and provide reminders of plans and procedures in complex situations. This could apply to many different tasks and situations such as a mechanic repairing an automobile, an astronaut servicing a spacecraft, or even a soldier on the battlefield.
- Vehicle-related Safety: On-board vehicle safety issues can be addressed with ADAM technology by providing emergency information that goes beyond pre-canned warnings.
- Complex technology on today's airplanes, cars and military vehicles can be improved by ADAM's ability to conceptualize failures and verbalize emergency procedures in real-time.
- Graphics Visualization: ADAM's ability to create descriptive maps of objects can be used to translate descriptions of objects and scenes into drawings of those objects. This capability can be used by architects, movie makers, interior decorators, or anyone who wants to achieve an immediate visualization of anything from a verbal or written description.
- Authoring Tools: ADAM can be used to create prototypes of articles and scientific papers by interacting with the author.
- ADAM can learn the habits and styles of a particular author and produce preliminary prototype articles and papers by interacting verbally with the author. For example, by just stating a topic and type of document, ADAM can lay out the basis for the final document by organizing learned material into the proper form.
- ADAM can combine its ability to retrieve data ("mine”) from databases with its ability to reason by analogy and inference to find new relationships in data and suggest these new relationships using natural language. Those that work in creative fields, such as scientists, can use this capability to accelerate the creative process, especially when dealing with large amounts of data.
- ADAM can benefit customer relations in a number of ways. Using its database capability discussed above, ADAM can provide solutions to customer needs by extracting an optimal solution under complex situations. For example, transportation industries can use ADAM to do ticketing and other transactions that depend on optimization of many parameters.
- Financial services companies including, but not limited to, brokerage firms, banks, and insurance companies.
- Transportation companies and agencies. (6) Press agencies.
- Robots can benefit from ADAM's real-time voice interaction.
- Robots need conceptual information to perform complex tasks. For example, a robot helicopter might be used to identify and track criminals on foot or in vehicles. Voice interaction could be used to alert the helicopter to rapidly changing situations.
- ADAM can conceptualize input information and turn it into real-time actions. It can also advise operators of failures and suggest actions.
- Operating Systems: ADAM can be used as an integral part of computer operating systems, including but not limited to Windows, Windows NT, Apple Operating Systems, Linux and Unix.
- any system using ADAM consists of (1) a software executable residing on volatile or non-volatile memory, (2) some kind of memory device that stores language and world knowledge that is basic to its functionality, and (3) a memory device that stores newly learned language and world knowledge capability.
- This functionality can be embedded in any device in which interactive language capability is needed. This could include, but not be limited to, handheld computers, smart hotel locks, cash registers, fax machines, home entertainment systems, smart watches, smart phones, GPS gear, and mapping systems in cars.
- the memory type and size described in the above paragraph would be adjusted for each of these applications.
- ADAM can be used in any type of computer, including but not limited to portable computers, desktop computers, workstations, servers, mainframes and super computers.
- ADAM can be used in any communications network, including but not limited to wireless networks, copper networks, fiber optic networks, satellite networks, video networks, and free air optical networks. Also, ADAM can be used in any Customer Premise Equipment (CPE), including but not limited to wireless phones, PCS phones, GSM phones, CDMA phones, wireless phones based on any type of protocol, computer based CPEs, and satellite receivers/CPEs.
- ADAM can be used as a personality simulant for the entertainment industry, and any type of machine, robot, game or toy that can benefit from a language interface. This would include interactive personality simulants at entertainment parks, casinos and other resort and vacation centers.
- ADAM can be used to monitor or control any type of machine, including but not limited to engines, motors, motorcycles, automobiles, trucks, aircraft, spacecraft, MEMS (Microminiature Electronic Machine Systems), oscillators, and voltage monitoring devices.
- ADAM is completely language independent, learning from stimulus-response patterns that it can associate with actions and various types of data structures. This means that communication is not limited to human languages. The only requirement is that the communication be broken down into distinct, recognizable units with the proper interface. This means that ADAM can be used to communicate with dolphins, whales, primates, birds and any other organism that communicates by producing sounds, motions, chemicals, or any other form of communication that can be detected by a sensing device.
- Additional products which would use ADAM's capabilities include: (1) Corporate portals that learn via use, understand what users need, and are conversational; (2) B2B and B2C adaptive, personalized online shopping tools.
- ADAM is about simulating the human capability to converse and understand in a practical, efficient and useful way.
- the inventor views human intelligence as the result of several interlocking processes that can be simulated on a computer.
- the inventor does not claim that the described processes can simulate all capabilities of human intelligence, only that enough of this capability can be simulated to perform useful tasks.
- ADAM, like human intelligence, rests on six major pillars of information processing, which together are the major innovations of this invention. These are:
- Block 1 is concerned with training.
- Block 2 describes ADAM's capability to do interactive, goal-driven dialog and Block 3.
- ADAM operates in two modes: (1) training and (2) dialog.
- In the training mode, ADAM incrementally learns language, learning how to make inferences and ask questions and learning how to perform actions such as opening a file or telling time.
- the user is presented with several blocks of text to fill in, and options about the kinds of concepts that are being generated.
- the If and Else statements are optional.
- the list of actions only contains things that are external to the program such as opening a file. All other actions are handled by the comprehension algorithms.
- the user must end complete sentences with punctuation; otherwise, the program will assume the input is a clause.
- the response may be left blank if the stimulus requires an inference as a response. For example, a stimulus that begins with the word "why" does not usually have a response. This is handled by simply restating the stimulus as an inference and letting the inference engine find the answer.
- an event loop periodically checks to see if there is an event that needs processing. If someone is talking, that event is processed and a response is generated immediately, otherwise events that are on the goal processor stacks are processed.
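- The event loop described above can be sketched as below. This is a hypothetical single-iteration model; the queue/stack names and callback signatures are assumptions for the sketch, not the patent's goal-processor design.

```python
import queue

# Hypothetical sketch of the event loop: a pending speech event is answered
# immediately; otherwise one item from the goal-processor stack is processed.
def run_once(speech_events, goal_stack, respond, process_goal):
    try:
        event = speech_events.get_nowait()
    except queue.Empty:
        if goal_stack:
            process_goal(goal_stack.pop())   # background goal processing
        return None
    return respond(event)                    # speech is handled immediately
```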
- Working memory is a list of raw input data created during training. In working memory, sentences and clauses have been converted to arrays of objects containing hashed integers representing individual words. In addition, Working Memory contains a rich set of links that give information on how to associate stimulus, response and conditional links. Working Memory is the raw data that the abstract data in the Pattern Buffer is created from.
- The Pattern Buffer maintains the association lists between patterns. Patterns are categorized and flagged as stimulus, response, inference and action. The following convention is used:
- a single element of a pattern buffer can be noted by:
- Parenthesis indicate a list, for example, (a b) is a list of the objects a and b.
- the integer indexes indicate the pattern and pattern level. Any number of responses and inferences can be associated with a stimulus. For example, (s45.2 (r23.1 r35.2)) indicates that stimulus 45, second level, is associated with two responses. Logically this means that, in an abstract sense, if the pattern s45.2 is detected, the "form" of the response (meaning not the actual content) can be either r23.1 or r35.2. However, if the association was actually
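- The association list can be sketched with the s/r key notation used above. This is a hypothetical data-structure illustration; the function names are assumptions for the sketch.

```python
from collections import defaultdict

# Hypothetical sketch of the Pattern Buffer association list: each stimulus
# pattern key maps to the response "forms" that may follow it.
pattern_buffer = defaultdict(list)

def associate(stimulus_key, response_key):
    pattern_buffer[stimulus_key].append(response_key)

def response_forms(stimulus_key):
    # all abstract response forms associated with this stimulus pattern
    return pattern_buffer[stimulus_key]

associate("s45.2", "r23.1")
associate("s45.2", "r35.2")
```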
- the Pattern Seeker finds simple patterns and stores them, leaving the stimulus:response pattern in the pattern buffer.
- Training: Process and Store into Working Memory
- User text strings are tokenized and stored as lists of objects.
- the various types of text strings, i.e., stimulus, response, if, etc., are linked together in working memory.
- Working memory is temporary and serves to store user examples of stimulus-response links and associated logic and action statements. These examples are used to build abstractions in pattern memory and the pattern buffer.
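- The tokenize-and-link step above can be sketched as below. This is a hypothetical illustration: `zlib.crc32` is a stand-in for the patent's unspecified word hash, and the dictionary layout is an assumption for the sketch.

```python
import zlib

# Hypothetical sketch of Working Memory: words become hashed integers and
# each training example stores a linked stimulus/response pair.
working_memory = []

def tokenize(sentence):
    # each word is converted to a stable hashed integer (crc32 stand-in)
    return [zlib.crc32(w.lower().encode()) for w in sentence.split()]

def store_pair(stimulus, response):
    working_memory.append({
        "stimulus": tokenize(stimulus),
        "response": tokenize(response),
    })
```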
- Conditional Occurs: the user selects the time at which the conditional should be applied relative to the time of the response.
- a Meta Map stores data on how to differentiate between different kinds of mapping situations, and how to tell the difference between different kinds of maps.
- a Map is a data structure storing an instance of an object that is described by a Meta Map.
- Virtual Maps are a specialized subclass of Meta Maps. Meta Maps in general are used when there is a need to simulate a predictive capability. For example, if someone asks what the square root of three is, the program may not at first know. But using the Meta Map math algorithms, it can, after being shown examples, predict what the answer is. In effect, ADAM can learn to do math, on its own, in any language. Another type of mapping problem that Meta Maps and Maps are good for is counting problems.
- This association will ultimately create an association between an abstraction and an action. For example, if the associated response is "a wrench has been selected,” the action selected should be able to deal with the abstract notion of selecting something, and deduce from the particular stimulus (i.e. "Select a wrench") what is required in the specific response.
- a PC application might be concerned with file management for example.
- a robotic application might contain actions having to do with manipulating objects.
- an external database may be connected to the program. If one is selected, the program automatically uses the database to try to predict responses that are given in the training. This is the same concept as the map concept, except in this case the program connects to a pre-existing database. If during training the program can predict the response successfully (using the example given), the algorithm found is stored together with relevant database record location information that will allow the program to generalize the response, i.e., predict a correct response in all cases. This option should only be used if the application is designed to use the database in question. Training: Non-local
- "Locality" has to do with identifying whether the concept is finite in space and time.
- a person is local because the concept of a specific human being is local to where that person is in space and time.
- the concept of a human being is non-local. That is, the attributes of a human being are independent of space and time. If the response was "Jane has blue eyes," this is a local statement, since it is only true for a particular point in space and time, that is, where Jane is. However, if the response was "Humans have two legs," that is a non-local statement, since it is true for all humans everywhere.
- the program should be set up by the user in the target language to identify reference tags for each locality.
- the tags are Non-local, he, she, it, they.
- the user can create new information at any time associated with each of these tags but the tag names should be used consistently. This is important since the program keeps maps for different non-local objects for the purpose of disambiguation.
- This aspect of training substitutes for the human experience of being able to map concrete objects with human senses as a part of reality.
- a baby learns to associate "mommy" with only one local object, for example, and that learning is eventually integrated into language about local objects.
- ADAM automatically abstracts the training examples, so only representative S-R pairs need be given.
- ADAM uses this information to decide if a reference is being made. For example, in the sentence "Her pride was hurt," ADAM would associate this sentence with a "she" Map that could contain other information about the local object, in this case a female human. If the question was asked, "Whose pride was hurt," ADAM would look for a statement like "X's pride was hurt," and it would look in the most recent "she" Map for an instantiation of X.
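- The "she" Map lookup can be sketched as below. This is a hypothetical illustration; the `remember`/`resolve` names and the slot-dictionary layout are assumptions for the sketch, not the patent's Map structure.

```python
# Hypothetical sketch of locality-map reference resolution: facts are filed
# under a locality tag, and a pronoun reference is resolved against the
# most recent Map for that tag.
locality_maps = {}

def remember(tag, facts):
    locality_maps.setdefault(tag, []).append(facts)

def resolve(tag, slot):
    # look in the most recent map for this locality tag
    maps = locality_maps.get(tag, [])
    return maps[-1].get(slot) if maps else None

remember("she", {"name": "Jane", "pride": "hurt"})
```

With the "she" Map above populated, a question about "her" resolves the `name` slot to the most recently mentioned female referent.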
- a map can be associated with this pattern under two conditions: (1) the user has associated an action with this pattern, or (2) the program can associate a "virtual" map with this pattern.
- a virtual map is one in which eigen information can be used to predict other eigens, such as in a mathematical operation.
- Pattern Seeker Working Memory
- a clause is defined as a sub pattern inside of a sentence. Clauses are detected in two ways: (1) by comparing with known, simpler patterns. Thus if the pattern "X is a Y" is known and the sentence "To the left is a house" is detected, it can be assumed that "to the left" is a clause. (2) by recognizing that many clauses start with certain words like to, from and about. The program keeps a list of these words as a special type of eigen.
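- The two detection methods can be sketched as below. This is a hypothetical illustration: the starter-word list, the `anchor="is"` alignment trick, and the function names are assumptions for the sketch, not the patent's pattern-comparison machinery.

```python
# Hypothetical sketch of the two clause-detection methods.
CLAUSE_STARTERS = {"to", "from", "about"}   # kept as a special eigen type

def starts_with_clause(sentence):
    # method (2): many clauses begin with a known marker word
    words = sentence.lower().split()
    return bool(words) and words[0] in CLAUSE_STARTERS

def clause_by_pattern(sentence, anchor="is"):
    # method (1) sketch: if a known simpler pattern like "X is a Y" leaves
    # extra words before its anchor, those extra words form a clause
    words = sentence.lower().rstrip(".,").split()
    if anchor in words:
        i = words.index(anchor)
        if i > 1:
            return " ".join(words[:i])
    return None
```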
- ⁠<(Q.e7) p2.3> is a Functor containing an empty set variable.
- Pattern Seeker Create Pattern: Contains Clause
- Pattern Seeker Set Map Define Map
- a locked training session means that the trainer is producing examples about a specific item in a specified locality. For example, the training could be about a specific woman while the 'she' locality is locked.
- the program uses this information to build a list of eigen numbers for which the eigen number was fixed over the locality. The program stores these as discriminators. In the case of a 'she' map, the program would build a female first name as a discriminator of this map. This allows the program to build a map for a woman named Alice and know that "she" refers to Alice.
- Pattern Seeker Set Map: Search for a Meta Map associated with this locality: Virtual Map Learning
- a virtual map is simply an index to an algorithm that predicts the response.
- the program checks to see if the response can be predicted using the information immediately available. An example of this would be the sentence "the square root of nine is three.”
- This module will detect the presence of numbers and realize that three can be generated from nine by several algorithms. It will pick one (the most statistically likely). If the program guesses wrong, this will be corrected with other examples. All possible algorithms are pre-stored and there is a unique index for each one. Contrast this with the way instance maps are created in database applications. (See "Create new instance map.")
- Pattern Seeker Set Map: Search for a Meta Map associated with this locality: Create new instance map For most localities, such as involving a person, thing or event, this module simply creates a space in memory in which concepts will be stored relating to the locality.
- a special type of locality is the external database (shown in the diagram as optional). This is used in applications in which the user wants to access data using natural language. In this case, the module searches for an algorithm that will predict the response using the data found by accessing the external database through a special interface.
- the instance map will contain information on how to extract information for the database records and how to instantiate the concept so that it predicts the response.
- This is similar to virtual maps except there is no pre-defined algorithm index.
- the program calculates a method and stores the details in the instance map. Like the virtual map, the program can recover from error when given further examples.
- the program will attempt to find an algorithm that will satisfy all examples simultaneously for the particular stimulus-response pattern. Training: Process and Store into Working Memory: Force Output to be Equal to Closest Match
- training input is compared with existing memory, and one data structure is created from both.
- the resulting structure replaces discrete words with rotation matrices that select individual words from a vector space. This can be notated by:
- E(j) represents an eigen vector space. If E(j) were the vector space, the program would represent both sentences as:
- ( ).e34 is called a set variable.
- the "e34” says that this variable is a member of a particular eigen set, in this case (Jane Alice Paula). This pattern is stored, not in working memory, but in "pattern space.”
- "My family" is not a member of the eigen set e34. In this case, the program would create brackets. The expression in brackets is called a vector set, and is represented by the same rotational structure.
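The pattern-generalization step described here, merging training sentences that differ in one position into a single abstract pattern whose differing words form an eigen set, can be sketched as follows. The sentences, the list representation, and the set-variable spelling are illustrative assumptions; the patent stores these patterns in "pattern space" using rotation matrices.

```python
# Illustrative sketch: two sentences differing in a single position are
# merged into one pattern containing a set variable ("( ).e34"), whose
# possible values form the eigen set.

def merge_into_pattern(sent_a, sent_b):
    """Merge two equal-length sentences into a pattern plus an eigen set."""
    words_a, words_b = sent_a.split(), sent_b.split()
    assert len(words_a) == len(words_b), "sketch assumes equal-length sentences"
    pattern, eigen_set = [], []
    for wa, wb in zip(words_a, words_b):
        if wa == wb:
            pattern.append(wa)                 # fixed word: kept literally
        else:
            for w in (wa, wb):                 # differing words join the eigen set
                if w not in eigen_set:
                    eigen_set.append(w)
            pattern.append("( ).e34")          # replaced by a set variable
    return pattern, eigen_set

pattern, eigen_set = merge_into_pattern("my sister is Jane", "my sister is Alice")
print(pattern)     # ['my', 'sister', 'is', '( ).e34']
print(eigen_set)   # ['Jane', 'Alice']
```

A third sentence ending in "Paula" would extend the eigen set to (Jane Alice Paula), as in the text.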
- Pattern Seeker Create Pattern: Make Functor: Create Functor Structure Pattern Match has organized the argument list data into groups (sets) that need to be converted into functors.
- Pattern Seeker Create Pattern: Make Functor: Structure Pattern Match
- the objective of Structure Pattern Match is to
- a response can be predicted. Prediction is based on scenarios that are learned in training. For example, if someone drops something it falls to the floor. That is a type of scenario that predicts the next event. The program can use this information to predict responses. In some cases, the scenarios can be goal-driven. In this case, the predictor tries to optimize the response. Training: Locality Meta Maps
- a Meta Map stores data on how to differentiate between different kinds of mapping situations and how to tell the difference between different kinds of maps.
- a Map is a data structure storing an instance of an object that is described by a Meta Map.
- the locality maps are checked to see if there is a matching eigen word or set variable. If there is a match, the Concept is added to a Map.
- the Map keeps track of particular instances of local objects. This provides for efficient pronoun disambiguation, for example.
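The locality-map bookkeeping above, together with the discriminator mechanism described earlier (a female first name discriminating a 'she' map), can be sketched as follows. The name set, class shape, and most-recent-instance rule are illustrative assumptions.

```python
# Minimal sketch: a discriminator set identifies eigen words that create
# instance maps in a locality; a later pronoun resolves to the most
# recently created matching instance, as in the Alice example above.

FEMALE_FIRST_NAMES = {"Alice", "Jane", "Paula"}   # illustrative discriminator

class Locality:
    def __init__(self):
        self.instances = []                # instance maps, most recent last

    def observe(self, word):
        if word in FEMALE_FIRST_NAMES:
            self.instances.append({"name": word})   # create an instance map

    def resolve(self, pronoun):
        """Resolve 'she' to the most recent matching instance, if any."""
        if pronoun == "she" and self.instances:
            return self.instances[-1]["name"]
        return None

loc = Locality()
for w in "I met Alice yesterday".split():
    loc.observe(w)
print(loc.resolve("she"))   # 'Alice'
```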
- Virtual Maps are a specialized subclass of Meta Maps. Meta Maps in general are used when there is a need to simulate a predictive capability. For example, if someone asks what is the square root of three, the program may not at first know. But using the Meta Maps math algorithms, it can, after being shown examples, predict what the answer is. In effect, ADAM can learn to do math, on its own, in any language.
- the only mapping information stored is the index. No other memory is needed.
- the most common example of a concept that can be processed using the virtual map processor is a sentence that contains a mathematical algorithm. For example, if the user asked "What is the square root of nine?" this can be handled by a virtual map, since the information needed to predict the response is contained in the sentence itself. All that needs to be stored is an index to the square root algorithm, and information about where information needs to be stored in the abstract pattern.
- This part of the program simply looks up the algorithm and instantiates the eigen variables appropriately.
- a “scenario” in ADAM is any physical or mental sequence that has a natural order. For example, there is a natural order of events that occurs when using a restaurant. Another example would be a chess game which has a defined beginning, middle and end. In the case of chess, there is a goal state. If the program detects that it has entered a goal-driven scenario, it automatically uses data it has accumulated from previous instance maps of this type to predict the next response.
- the transition is applied to the pattern which generates a predicted response. For example, in the case of chess, the response would be the next move, which has the effect of optimizing a sub-goal of the goal state.
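The goal-driven scenario mechanism above, applying a learned transition to the current pattern to predict the next response, can be sketched as follows. The restaurant states, transition table, and frequency-based preference are invented illustrations of the mechanism, not data from the patent.

```python
# Hedged sketch of scenario prediction: transitions accumulated from
# previous instance maps of this scenario type are applied to the current
# state; where several continuations were learned, the most frequently
# observed one is preferred (optimizing a sub-goal, as with a chess move).

TRANSITIONS = {                      # learned from previous instances
    "seated": ["order"],
    "order": ["eat"],
    "eat": ["pay", "order"],         # more than one learned continuation
}
COUNTS = {("eat", "pay"): 9, ("eat", "order"): 3}   # observed frequencies

def predict_next(state):
    """Apply the learned transition, preferring the most frequent continuation."""
    options = TRANSITIONS.get(state, [])
    if not options:
        return None
    return max(options, key=lambda nxt: COUNTS.get((state, nxt), 0))

print(predict_next("eat"))     # 'pay'
print(predict_next("seated"))  # 'order'
```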
- This general principle is known as the "Uniform Learning Theory.”
- the implementation of that theory is unique to the present invention. This technique allows the program to learn to optimize goals by learning from examples.
- This module provides an interface to an external database and applies learned algorithms to initiate patterns. These patterns are turned into concepts which are returned. The specific details of how this is done depend on the application, but the technique is similar to other types of map processing: data is retrieved using stored location information, an algorithm is applied and the resulting information is used to instantiate a concept.
- the eigen state is the current pattern plus the value of all set variables.
- Pending is an ordered list of stimulus concepts to be tried again. This stack is used when the program can't answer a question, but has asked for more information. After more information is gathered, the Goal Processor pops the stack and tries to generate a response (but with some of the feedback comments suppressed).
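The Pending mechanism above can be sketched as below. The dictionary lookup standing in for the real response generator, and all names, are illustrative assumptions; only the stack-and-retry behavior follows the description.

```python
# Sketch of "Pending": stimulus concepts the program could not answer are
# stacked; when more information arrives, the Goal Processor pops the
# stack and retries, with feedback comments suppressed on the retry.

knowledge = {}
pending = []   # ordered list of stimulus concepts to be tried again

def try_answer(stimulus):
    """Answer if possible; otherwise push the stimulus onto Pending."""
    if stimulus in knowledge:
        return knowledge[stimulus]
    pending.append(stimulus)
    return "Can you tell me more?"          # ask for more information

def on_new_information(key, value):
    """Goal Processor: store the new fact, then pop Pending and retry."""
    knowledge[key] = value
    answered = None
    unanswered = []
    while pending:
        stimulus = pending.pop()            # pop the stack and retry
        if stimulus in knowledge:
            answered = knowledge[stimulus]  # feedback suppressed on retry
        else:
            unanswered.append(stimulus)
    pending.extend(reversed(unanswered))    # keep still-open questions pending
    return answered

print(try_answer("capital of France"))                    # asks for more information
print(on_new_information("capital of France", "Paris"))   # retried answer
```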
- Some responses may have had an external action attached to them. For example, if this is a robotic application, then a possible response would be "The wrench is selected.”
- Each action has code associated with it that implements the external action.
- the input concept is returned, which will eventually be placed on the primary stack and output as speech, text or both.
- it is the responsibility of this code to return an appropriate concept as a response.
- the present invention has been formulated in terms of the translation of strings of symbols to other strings of symbols based on learned patterns derived from examples.
- This technique has broader applications than just language.
- the basic technique of science is to observe patterns in nature, and to try to find mathematical models that can be used to predict events.
- Isaac Newton discovered that certain principles of geometry could be used to predict the motion of the planets. His theories not only enabled the position of celestial bodies to be calculated, but also provided a way of understanding the concept of gravity. It is important to realize that Newton's mathematical theory of gravity was also "wrong," since it did not accurately predict the motion of Mercury, the innermost planet in the Solar System.
- Every scientific theory that has ever been formulated is “wrong” (approximate).
- the ultimate goal of science is to have a completely consistent set of models that work in all domains, the so-called Theory of Everything, but that goal remains elusive.
- the scientific method is therefore about predicting events based on approximate mathematical models that work within a certain domain.
- Every scientific theory can be formulated as a translation between input and output. For example, observations of the motion of planets can be translated to predictions of observations in the future.
- ADAM's basic technique of using examples of symbolic stimulus to response can be generalized to examples of input and output. Eigen states can be created that characterize these translations and can be used to predict new outputs from learned examples.
- Such hazards range from potential tectonic deformation related to earthquakes, to severe storm monitoring, to the tracking and monitoring of the movement of volcanic ash in the troposphere and stratosphere after a major explosive eruption.
- plumes of fine particulate volcanic ash and sulfur dioxide can drift thousands of miles from the eruption source, presenting a severe ash-ingestion hazard to turbine engined jet aircraft.
- plumes are easily camouflaged by the presence of water vapor clouds; however, they do exhibit some spectral contrasts in characteristic bands at visible and infrared wavelengths.
- the detection and tracking of such plumes is of paramount importance to aviation worldwide, given that there are about 1000 potentially active volcanoes scattered across the land surface of the earth. Nevertheless, unambiguous discrimination between volcanic plumes and meteorological clouds remains a difficult and not fully solved scientific and operational problem.
- The capabilities of the ADAM program to learn when given identified instances of a particular phenomenon, coupled to a library of instances, would be a general approach to change detection.
- the challenge is to identify the translucent plumes at the earliest possible post- eruption stage, despite the confusion from intermingled water-vapor meteorological clouds, and a variable background albedo, which at times could include snowfields and oceans.
- ADAM's ability to assimilate numerous cases and then access and organize the most likely past cases when confronted with a new similar (but not exactly similar) case would be a powerful way to identify volcanic ash plumes. Such plumes would be discriminated from meteorological clouds under a wide variety of atmospheric, seasonal, and ground albedo conditions.
- An as yet untapped area will similarly be the downloading of full-length movies over the Internet.
- trailers are downloaded, in the range of 10-20Mbytes of data volume, with times that can range up to a half-hour (or more) over the most common modem connections.
- DVD technology employs loss-less or near-loss-less compression schemes of ~20:1 advantage, thus reducing a 200 Gbyte potential data volume (for a 2-hr. long movie at 1000x1000 8-bit pixels per frame, at 30 frames/sec) to the typical 7-8 Gbyte DVD data volume.
- the typical 7-8 Gbyte DVD data volume would be a quick download (70-80 seconds @ 100 Mbits/sec) over the state-of-the-art "T-1" data lines employed by large corporations and the government, but would be hopeless (34 hours) using the 57,600 bits/sec modem now typical of most homes, DSL notwithstanding.
- a 100:1 to 1000:1 lossless or near-lossless compression scheme that can compress full-length movies into a compressed data volume of ~10 Mbits would allow films to be downloaded and displayed easily on typical home systems. Whoever can do this will reap a fortune, and will trigger a revolution in the film and video industry comparable to that of the MP3 music revolution.
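The figures above can be checked with simple arithmetic, sketched below. Note that the raw volume of a 2-hour movie at 1000x1000 8-bit pixels and 30 frames/sec works out to 216 Gbytes, close to the 200 Gbyte figure quoted; the 34-hour modem figure, however, matches the 7-8 Gbyte DVD volume only if it is read in bits rather than bytes, so the text's units appear loose.

```python
# Back-of-the-envelope check of the download-time figures quoted above.

def download_time_seconds(size_bytes, rate_bits_per_sec):
    """Time to transfer size_bytes over a link of rate_bits_per_sec."""
    return size_bytes * 8 / rate_bits_per_sec

# 1000x1000 pixels * 1 byte/pixel * 30 frames/sec * 2 hours
raw_bytes = 1000 * 1000 * 1 * 30 * 2 * 3600
print(raw_bytes / 1e9)                              # 216.0 Gbytes, near the 200 quoted

hours = download_time_seconds(7e9, 57_600) / 3600   # 7 Gbytes over a 57,600 bit/s modem
print(round(hours))                                 # about 270 hours
```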
- ADAM is a potential component in the achievement of that goal.
- ADAM's ability to create scenarios from instances empirically could allow the storing of abstractions [suites of most probable characteristics and frame-to-frame relationships] of scenes and of individual actors as eigenvectors at great economy of data volume [>100:1], needing only periodic reference frames to validate frame-to-frame interpolations.
- Such instances could be assimilated from both scan-throughs of existing (2D) movies, and/or with the incorporation of (3D) "motion-capture” (i.e., actors instrumented with transducers to input generic range-of-motion instances).
- this conceptualized technology may also allow substantial reductions in post-production editing costs. Fewer scene takes will be needed if there exists a capability to "adjust" scenes via ADAM-generated instances and scenarios in post-production editing. That is, an original scene need only be approximate with respect to lighting, actor positioning, and dialogue, just enough to give ADAM a seed or kernel for all required instances and for reference frames, and adjustments, in principle indistinguishable from the original, could be made in editing. Ultimately, realistic movies could be constructed whole-cloth from ADAM-generated instances and scenarios; however, the aesthetic, professional, and artistic issues incumbent in such an activity would be knotty. Nevertheless, an ADAM-type assimilative and inductive algorithmic framework could be employed as sketched here to revolutionize film making.
- a system that "listens" to sound patterns emanating from devices (motors, gas turbines, shaft bearings, etc.) and detects new sounds (non-normal) coming from the system (loss of lubrication, imminent bearing failure, etc);
- a system that listens to sounds (motors, shaft bearings, etc.) and detects when the sound coming from the system moves out of its normal pattern or domain;
- a system capable of being connected to a single sensory input (artificial sight, smell, touch, etc.) and recognizing inputs as they occur;
- a system capable of being connected to a single sensory input (artificial sight, smell, touch, micrometeor impact, etc.), recognizing inputs as they occur, and comprehending what may occur as multiple different or overlapping inputs aggregate in the sensory and/or time domains;
- a system capable of being connected to multiple simultaneous sensory inputs (artificial sight, smell, touch, micrometeor impact, etc.) and recognizing inputs as they occur;
- a system capable of being connected to multiple simultaneous inputs (artificial sight, smell, touch, micrometeor impact, etc.), recognizing inputs as they occur, and comprehending what may occur as multiple different or overlapping inputs aggregate in the sensory and/or time domains;
- a "tool” for analysts capable of being used in manual, semi-automatic, or automatic modes to quickly identify various "patterns” as they may occur in single or multisensory spatial and time domains;
- a system capable of operating, recognizing, and comprehending "data” as gathered or stored in data collection or data based systems;
- a universal language translator or translation system between languages used by various human beings;
- a language translation system as might be used between human beings and animals (dolphins, orcas, cats, lions, etc.);
- a system capable of operating, recognizing, and comprehending "data” or “metadata” patterns or other recognizable attributes as gathered or stored in data or meta-data collection and data based storage systems;
- a trainable system based upon "stimulus-response", capable of discerning, recognizing and possibly comprehending data, data patterns or other information type patterns in large, historical data and information storage bases;
- a system which enables "ultra-dense data storage," in which "eigenvectors” and "eigenvalues” are stored in a novel and unique manner. Data, information, and patterns related thereto are stored in a novel and ultra-dense or compact fashion. All data, information, and perhaps some knowledge bases may have applications whereby relevant portions may be recognized, comprehended, and/or stored using novel storage techniques. This would permit slower processors, using less memory, drawing lower power, and operating longer, to function effectively in existing or new application spaces and other domains.
- the present invention is designed to provide a system for simulating human intelligence.
- the present invention will be applicable to a vast array of communications and computing uses. LIST OF REFERENCE CHARACTERS
- ADAM Automated Dialog Adaptive Machine
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biophysics (AREA)
- Evolutionary Computation (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- Probability & Statistics with Applications (AREA)
- Machine Translation (AREA)
Abstract
The invention concerns a method and system for simulating human intelligence and natural-language dialog capability, implementing a cognitive model (20) of human intelligence, a mathematical model of information abstraction, synthetic dialog interaction (202), a language-independent computer learning method that operates through training (201), interaction, and document reading (203), and an efficient computer implementation (200) of all the preceding roles. The cognitive model (20) is the theoretical basis of the entire invention. It describes how human beings learn and interact in general, provides a mathematical basis for natural-language learning and interaction (40), and establishes a basis for the detailed computer implementation of the theory.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2001274818A AU2001274818A1 (en) | 2000-05-25 | 2001-05-23 | Simulating human intelligence in computers using natural language dialog |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US57932500A | 2000-05-25 | 2000-05-25 | |
US09/579,325 | 2000-05-25 | ||
US09/634,896 | 2000-08-09 | ||
US09/634,896 US6604094B1 (en) | 2000-05-25 | 2000-08-09 | Simulating human intelligence in computers using natural language dialog |
US67671700A | 2000-09-29 | 2000-09-29 | |
US09/676,717 | 2000-09-29 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2001093076A2 true WO2001093076A2 (fr) | 2001-12-06 |
WO2001093076A8 WO2001093076A8 (fr) | 2004-11-25 |
Family
ID=27416321
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2001/014829 WO2001093076A2 (fr) | 2000-05-25 | 2001-05-23 | Simulation de l'intelligence humaine par ordinateur utilisant le dialogue en langage naturel |
Country Status (2)
Country | Link |
---|---|
AU (1) | AU2001274818A1 (fr) |
WO (1) | WO2001093076A2 (fr) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7546295B2 (en) | 2005-12-27 | 2009-06-09 | Baynote, Inc. | Method and apparatus for determining expertise based upon observed usage patterns |
US7698270B2 (en) | 2004-12-29 | 2010-04-13 | Baynote, Inc. | Method and apparatus for identifying, extracting, capturing, and leveraging expertise and knowledge |
US8095523B2 (en) | 2004-12-29 | 2012-01-10 | Baynote, Inc. | Method and apparatus for context-based content recommendation |
CN106951491A (zh) * | 2017-03-14 | 2017-07-14 | 广东工业大学 | 一种应用于机器人的智能对话控制方法及装置 |
US9836765B2 (en) | 2014-05-19 | 2017-12-05 | Kibo Software, Inc. | System and method for context-aware recommendation through user activity change detection |
US10362113B2 (en) | 2015-07-02 | 2019-07-23 | Prasenjit Bhadra | Cognitive intelligence platform for distributed M2M/ IoT systems |
CN110648652A (zh) * | 2019-11-07 | 2020-01-03 | 浙江如意实业有限公司 | 一种智能互动玩具 |
AU2019210603B2 (en) * | 2016-01-21 | 2020-10-22 | Accenture Global Solutions Limited | Processing data for use in a cognitive insights platform |
-
2001
- 2001-05-23 AU AU2001274818A patent/AU2001274818A1/en not_active Abandoned
- 2001-05-23 WO PCT/US2001/014829 patent/WO2001093076A2/fr active Search and Examination
Non-Patent Citations (1)
Title |
---|
No Search * |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8095523B2 (en) | 2004-12-29 | 2012-01-10 | Baynote, Inc. | Method and apparatus for context-based content recommendation |
US7698270B2 (en) | 2004-12-29 | 2010-04-13 | Baynote, Inc. | Method and apparatus for identifying, extracting, capturing, and leveraging expertise and knowledge |
US7702690B2 (en) | 2004-12-29 | 2010-04-20 | Baynote, Inc. | Method and apparatus for suggesting/disambiguation query terms based upon usage patterns observed |
US8601023B2 (en) | 2004-12-29 | 2013-12-03 | Baynote, Inc. | Method and apparatus for identifying, extracting, capturing, and leveraging expertise and knowledge |
US7693836B2 (en) | 2005-12-27 | 2010-04-06 | Baynote, Inc. | Method and apparatus for determining peer groups based upon observed usage patterns |
US7856446B2 (en) | 2005-12-27 | 2010-12-21 | Baynote, Inc. | Method and apparatus for determining usefulness of a digital asset |
US7580930B2 (en) | 2005-12-27 | 2009-08-25 | Baynote, Inc. | Method and apparatus for predicting destinations in a navigation context based upon observed usage patterns |
US7546295B2 (en) | 2005-12-27 | 2009-06-09 | Baynote, Inc. | Method and apparatus for determining expertise based upon observed usage patterns |
US9836765B2 (en) | 2014-05-19 | 2017-12-05 | Kibo Software, Inc. | System and method for context-aware recommendation through user activity change detection |
US10362113B2 (en) | 2015-07-02 | 2019-07-23 | Prasenjit Bhadra | Cognitive intelligence platform for distributed M2M/ IoT systems |
AU2019210603B2 (en) * | 2016-01-21 | 2020-10-22 | Accenture Global Solutions Limited | Processing data for use in a cognitive insights platform |
US11144839B2 (en) | 2016-01-21 | 2021-10-12 | Accenture Global Solutions Limited | Processing data for use in a cognitive insights platform |
CN106951491A (zh) * | 2017-03-14 | 2017-07-14 | 广东工业大学 | 一种应用于机器人的智能对话控制方法及装置 |
CN110648652A (zh) * | 2019-11-07 | 2020-01-03 | 浙江如意实业有限公司 | 一种智能互动玩具 |
CN110648652B (zh) * | 2019-11-07 | 2021-10-01 | 浙江如意实业有限公司 | 一种智能互动玩具 |
Also Published As
Publication number | Publication date |
---|---|
WO2001093076A8 (fr) | 2004-11-25 |
AU2001274818A1 (en) | 2001-12-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6604094B1 (en) | Simulating human intelligence in computers using natural language dialog | |
US11734375B2 (en) | Automatic navigation of interactive web documents | |
Alvarez-Melis et al. | A causal framework for explaining the predictions of black-box sequence-to-sequence models | |
US20030144832A1 (en) | Machine translation system | |
US7295965B2 (en) | Method and apparatus for determining a measure of similarity between natural language sentences | |
US20050005266A1 (en) | Method of and apparatus for realizing synthetic knowledge processes in devices for useful applications | |
CN114676234A (zh) | 一种模型训练方法及相关设备 | |
JP7315065B2 (ja) | 質問生成装置、質問生成方法及びプログラム | |
CN117648429A (zh) | 基于多模态自适应检索式增强大模型的问答方法及系统 | |
WO2001093076A2 (fr) | Simulation de l'intelligence humaine par ordinateur utilisant le dialogue en langage naturel | |
CN118228694A (zh) | 基于人工智能实现工业行业数智化的方法和系统 | |
Toy | Transparency in AI | |
CN112069813B (zh) | 文本处理方法、装置、设备及计算机可读存储介质 | |
CN117193582A (zh) | 交互控制方法及系统、电子设备 | |
Evans | Descriptive pattern-analysis techniques: potentialities and problems | |
Kramer et al. | Tell your robot what to do: evaluation of natural language models for robot command processing | |
Feng | Formal analysis for natural language processing: a handbook | |
Römer et al. | Behavioral control of cognitive agents using database semantics and minimalist grammars | |
CN117033649A (zh) | 文本处理模型的训练方法、装置、电子设备及存储介质 | |
CN114998041A (zh) | 理赔预测模型的训练方法和装置、电子设备及存储介质 | |
Surdeanu et al. | Deep learning for natural language processing: a gentle introduction | |
Khakhalin et al. | Integration of the Image and NL-text Analysis/Synthesis Systems | |
Litvin et al. | Development of natural language dialogue software systems | |
Mote | Natural language processing-a survey | |
Ekpenyong et al. | Agent-based framework for intelligent natural language interface |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A2 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
122 | Ep: pct application non-entry in european phase | ||
D17 | Declaration under article 17(2)a | ||
NENP | Non-entry into the national phase in: |
Ref country code: JP |