CN101128864A - Conversational user interface - Google Patents

Conversational user interface

Info

Publication number
CN101128864A
Authority
CN
China
Prior art keywords
processor
computer
memory
user interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2006800048979A
Other languages
Chinese (zh)
Inventor
Carl Edward Carpenter
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Celf Corp
Original Assignee
Celf Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Celf Corp filed Critical Celf Corp
Publication of CN101128864A

Abstract

A conversational user interface (CUI) is implemented in a computer by mimicking the organization and retrieval of linguistic memories of human conversation. To mimic the organization of linguistic memories in a human brain, the artificial memories are stored as sequences of patterns, are stored in invariant form, are organized hierarchically, and are recalled auto-associatively. To mimic the recall of linguistic memories in a human brain, the same algorithm performs the recall of the various memories. Each artificial memory is a pairing of the invariant representation and an associated responsive message. When a received utterance is determined to match the invariant representation of a memory, the memory is evoked and the associated responsive message is presented to the person.

Description

Conversational user interface
Related applications
This application claims priority from U.S. Provisional Patent Application No. 60/652,748, filed February 15, 2005, and U.S. Provisional Patent Application No. 60/740,147, filed November 29, 2005, both of which are incorporated herein by reference.
Technical field
The present invention relates to the field of computer-implemented artificial intelligence and, more particularly, to a conversational user interface.
Background of the invention
In what has been called the "information age," information and information systems are of great importance. Enormous amounts of information are now available on almost any subject. The world is entering a "symbiotic age," in which information systems are combined ever more closely with human users so that information is not merely available but can be accessed quickly and easily.
In "The Conversational User Interface (Linguistic User Interface): Our Next Great Leap Forward" (published on the World Wide Web at http://www.accelerationwatch.com/lui.html), John Smart describes this transition as follows:
Of all the computing advances we can realistically foresee over the next twenty years, the conversational user interface (CUI, or "cooey") is likely to be unmatched in its effect on the average person. When cheap and ubiquitous CUIs, high-bandwidth networks, and comparable infrastructure are realized (2015? 2020? 2030? that may be largely up to us), they will carry us out of the information age into a new era that some futurists call the symbiotic age. At that point, every human being on the planet, including today's disenfranchised, functionally illiterate, and marginalized "bottom three billion," will be able to converse with targeted, ubiquitous, semi-intelligent technological systems and use them daily, across a broad range of computing, to solve mundane but very real human problems.
Mathematician and science-fiction writer Vernor Vinge described a similar shift of technological focus away from artificial intelligence (AI) toward a "singularity" in a 1993 address at NASA's VISION-21 symposium:
When people speak of creating superhumanly intelligent beings, they usually imagine an AI project. But there are other paths to superhuman ability. Computer networks and human-computer interfaces seem more mundane than AI, yet they could lead to the singularity. I call this contrasting approach intelligence amplification (IA). IA is something that proceeds very naturally and, in most cases, is not even recognized by its developers for what it is. But every time our ability to access information and to communicate it to others is improved, in some sense we have gained an ability beyond natural intelligence. And IA is probably a much easier road to superhuman ability than pure AI. In humans, the hardest development problems have already been solved. Building up from within ourselves ought to be easier than first figuring out what we really are and then building machines that are all of that.
There is therefore a great need to significantly improve the ability to access information and to make such information accessible to others.
Summary of the invention
In accordance with the present invention, a conversational user interface is implemented in a computer by mimicking the organization and retrieval of the linguistic memories of human conversation. Programming a computer to experience the world around it and to form memories using its own logic is very difficult. However, realistic and natural dialogue between a computer and a person can be achieved by replicating the natural organization and recall of linguistic memories using manually created linguistic memories.
To mimic the organization of linguistic memories in the human brain, the artificial memories are stored as sequences of patterns, are stored in invariant form, are organized hierarchically, and are recalled auto-associatively. To mimic the recall of linguistic memories in the human brain, the same algorithm performs the recall of the various memories.
The artificial memories are stored as text strings representing sentences or phrases. The sentences or phrases are sequences of words, and the words are patterns. An artificial memory represents something the computer has heard before and may hear again. The text string representing an artificial memory does not change except through symbiotic learning and is therefore stored in invariant form. When the computer hears an utterance that matches the invariant representation of a text string, the artificial memory is recalled; the artificial memories are therefore recalled auto-associatively. The computer can "hear" an utterance by receiving a textual representation of a message from the person, or by capturing an audio signal representing the person's spoken utterance and converting that audio signal to a textual representation.
Each artificial memory is a pairing of an invariant representation with an associated responsive message. When a received utterance is determined to match the invariant representation of a memory, the memory is evoked and the associated responsive message is presented to the person.
The hierarchical organization of the artificial memories allows responsive messages to be context-sensitive. The order in which linguistic memories are searched depends on the particular location, within the hierarchical organization of the artificial memories, of the memory most recently evoked.
The computer's linguistic memories are created manually by a human mindwriter. When a new memory is needed, the memory is learned symbiotically. The computer detects that a received utterance from the person fails to evoke a memory and forwards the received utterance, together with the preceding conversation as context, to the mindwriter. The mindwriter determines and implements remedial modifications to the artificial memories so that, when the same message is subsequently received, the appropriate memory is properly recalled.
Brief description of the drawings
Fig. 1 is a block diagram showing a computer in conversation with a user in accordance with the present invention.
Fig. 2 is a state diagram illustrating an ongoing dialogue between the user and the computer in accordance with the present invention.
Fig. 3 is a block diagram of an artificial linguistic cortex that implements a conversational user interface.
Fig. 4 is a block diagram of an artificial memory of the artificial linguistic cortex of Fig. 3.
Fig. 5 is a pseudocode representation of the memory prediction loop of the artificial linguistic cortex of Fig. 3.
Fig. 6 is a block diagram illustrating the hierarchical organization of the artificial memories and subjects of the artificial linguistic cortex.
Fig. 7 is a pseudocode representation of the best-match portion of the memory prediction loop of Fig. 5.
Fig. 8 is a pseudocode representation of the string-matching portion of the memory prediction loop of Fig. 5.
Fig. 9 is a block diagram illustrating the symbiotic learning of the conversational computer of Fig. 1.
Figure 10 comprises three logic flow diagrams illustrating symbiotic learning, symbiotic teaching, and user notification.
Figure 11 is a table representation of the database storing the artificial memories of the artificial linguistic cortex of Fig. 3.
Figure 12 is a screen view of the graphical user interface of the mindwriting tool used for symbiotic teaching.
Figure 13 shows a synthetic face used in audiovisual presentation of responsive messages.
Figure 14 is a logic flow diagram illustrating the determination of a new subject when a received utterance does not evoke an artificial memory of the current subject.
Detailed description
In accordance with the present invention, a computer 102 (Fig. 1) conducts a dialogue with a human user 110 by mimicking the behavior of the human brain in recalling linguistic memories. Computer 102 includes an artificial linguistic cortex (ALC) 300 (Fig. 3), which imitates the storage and recall of linguistic memories in the human brain. This enables realistic conversational behavior, which is largely an exercise in recalling linguistic memories. The teaching of ALC 300 (i.e., the creation of its linguistic memories) is symbiotic, in that human "mindwriters" manually create and organize the linguistic memories of ALC 300. Accordingly, there is no need to develop a computer that can learn on its own before people can converse realistically with computers.
User 110 (Fig. 1) engages computer 102 in a dialogue, which is initiated by an utterance 120A from user 110 to computer 102. Utterance 120A is a message in substantially any natural-language form of communication that user 110 can use. In some embodiments, utterance 120A is a text message typed by user 110. In other embodiments, utterance 120A is an auditory message spoken by user 110 and received by computer 102, for example as an audio signal received through a microphone.
Computer 102 responds with an utterance 120B, which can be, for example, textual, auditory (e.g., synthesized or prerecorded spoken words), or audiovisual (e.g., a synthesized or prerecorded audiovisual representation of a person speaking the words). In response to utterance 120B, user 110 sends a responsive utterance 120C to computer 102, and computer 102 responds with utterance 120D.
Utterances 120A-D represent an ongoing, interactive, natural-language session (i.e., a dialogue) between user 110 and computer 102, conducted in natural language with which user 110 is familiar. Fig. 2 shows this interactive session as a state diagram 200. In state 202, user 110 sends an utterance to computer 102. In state 204, computer 102 auto-associatively recalls a responsive utterance, and in state 206 computer 102 sends that utterance.
The manner in which computer 102 auto-associatively recalls the responsive utterance of state 206 imitates recent findings described by Jeff Hawkins and Sandra Blakeslee in "On Intelligence." That work proposes that language in the naturally occurring living brain largely involves memory recall by a single cortical algorithm. In addition, four (4) characteristics of neocortical memory underlie human linguistic communication: (i) the neocortex stores linguistic memories as sequences of patterns; (ii) the patterns are stored in invariant form; (iii) the patterns are stored hierarchically; and (iv) the patterns are recalled auto-associatively.
In effect, computer 102 participates in a human conversation with user 110. In a relatively simple and highly abstract sense, human conversation can be expressed as follows: given a speaker and a listener participating in a conversation, which memory in the human listener is evoked by the memory that arises in, and is expressed linguistically by, the human speaker? In the context of Fig. 2, which memory in computer 102 is evoked by the memory that arises in user 110 and is expressed linguistically as the utterance of state 202? The answer is the utterance of computer 102 in state 206. In the subsequent act of the dialogue, which memory in user 110 is evoked by the memory of computer 102 that is expressed linguistically as the utterance of state 206? The answer is the subsequent utterance of state 202.
To perform the auto-associative recall of state 204, computer 102 includes artificial linguistic cortex 300 (Fig. 3), sometimes referred to herein as ALC 300, which is all or part of one or more computer processes executing in computer 102 and/or in one or more other computers (for example, computer 106) operatively coupled to computer 102. "Artificial" is used herein in the sense of "artificial intelligence," meaning that it occurs in a machine such as a computer rather than in a naturally occurring biological brain. ALC 300 includes an artificial neocortex region 302, which stores linguistic memories in a form that mimics the storage of linguistic memories in a true neocortex region of the human brain. A neocortex region stores the linguistic memories of a particular subject, and each memory in such a region pairs a remembered linguistic expression with a corresponding responsive linguistic expression. In Fig. 3, within artificial neocortex region (ANR) 302, linguistic memories pair remembered linguistic expressions, as invariant representations IR1, IR2, and IR3, with responsive linguistic expressions OS.
These memories satisfy the four (4) characteristics, described above, of neocortical linguistic memories that underlie human conversation. The first characteristic is storage as sequences of patterns. In ANR 302, an invariant representation expresses a remembered linguistic expression as a sequence of words, and the words are patterns of text and/or audio signals. In this illustrative embodiment, the sequences of words are represented in textual form.
The second characteristic is storage in invariant form. To facilitate appreciation and understanding of this characteristic, it is helpful to consider a person's natural, almost instinctive response to hearing "Hello" from another person. The natural, almost instinctive response is to say "Hello" in return, even if the other person is a complete stranger. The human memory of "Hello," together with the appropriate response of "Hello," is stored invariantly in the human neocortex. Thus, even if a "Hello" we hear arrives in a form we have never heard before (for example, in the voice, intonation, and pitch of an unfamiliar stranger), we still recognize the spoken word "Hello." Similarly, even though variations in vocal-cord vibration make each "Hello" we speak slightly different from every other "Hello" we have ever spoken or heard, our response is frequently the same.
In a similar manner, invariant representations IR1, IR2, and IR3, each of which represents a remembered linguistic expression, do not change during the exchange with user 110. In fact, they change only during symbiotic learning, which is described more completely below.
The third characteristic is hierarchical storage of the patterns. As shown in Fig. 3, ANR 302 stores memories such as invariant representations IR1, IR2, and IR3 in hierarchical form. In particular, subject 304 represents the particular subject within ANR 302 and organizes invariant representations IR1, IR2, and IR3 hierarchically within ANR 302. Specifically, invariant representation IR1 is a direct child of subject 304; each invariant representation IR2 is a direct child of invariant representation IR1; and each invariant representation IR3 is a direct child of an invariant representation IR2. Moreover, while a single subject 304 is shown for simplicity and to facilitate appreciation and understanding of the present invention, it should be appreciated that this illustrative embodiment includes a hierarchy of subjects organized as a tree structure, in which leaf subjects of the tree have invariant-representation children (for example, invariant representation IR1).
For example, Fig. 6 shows subjects and memories organized hierarchically. Because invariant representations IR1, IR2, and IR3 are textual in this illustrative embodiment, and are therefore represented by variables of type "string," and because the hierarchical organization is a tree structure as shown, tree 600, or any subtree thereof, is sometimes referred to as a string tree.
The fourth characteristic is auto-associative recall of the linguistic memories. As used herein, auto-associative recall means that the received utterance itself is used as the key for determining which linguistic memory to evoke. As shown in Fig. 3, each of invariant representations IR1, IR2, and IR3 is associated with a responsive utterance OS. An example is given in Fig. 4, in which invariant representation IR1 402 is associated with responsive utterance OS 404. Invariant representation IR1 402 represents a remembered input string, a string being a series of text characters. Branch links 406 are remembered input strings that are considered equivalent for the same memory, and they too are invariant representations. Responsive utterance OS 404 represents the associated output string, which is issued as the response to an input string that matches either invariant representation IR1 402 or any of branch links 406. The recalled linguistic memory is thus associated with the invariant representation of the input string itself and is accordingly recalled auto-associatively.
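For concreteness, the following sketch, written in the same classic Visual Basic used in Appendix A, shows one plausible in-memory layout for memories of the kind shown in Fig. 4: each memory pairs keywords and an invariant representation (plus any branch-link phrasings) with an output string, and records its parent so that the hierarchy of Fig. 6 can be traversed. The parallel-array layout and all names are assumptions made for illustration; they are not the actual structures of the illustrative embodiment.

'Illustrative sketch only: one possible representation of artificial memories.
'Each memory occupies one index in these parallel arrays (classic VB user-defined
'types cannot contain arrays of themselves, so the tree is kept via parent indices).
Public MemKeywords() As String   'space-separated keywords for the quick first-pass test
Public MemIR() As String         'invariant representation: the remembered input string
Public MemBranches() As String   '"|"-delimited equivalent phrasings (branch links)
Public MemOS() As String         'output string: the associated responsive utterance
Public MemParent() As Long       'index of the parent memory; -1 marks a first-level inquiry
Public MemCount As Long

'Add one memory and return its index.
Public Function AddMemory(ByVal Keywords As String, ByVal IR As String, _
                          ByVal Branches As String, ByVal OS As String, _
                          ByVal Parent As Long) As Long
    If MemCount = 0 Then
        ReDim MemKeywords(0): ReDim MemIR(0): ReDim MemBranches(0)
        ReDim MemOS(0): ReDim MemParent(0)
    Else
        ReDim Preserve MemKeywords(MemCount): ReDim Preserve MemIR(MemCount)
        ReDim Preserve MemBranches(MemCount): ReDim Preserve MemOS(MemCount)
        ReDim Preserve MemParent(MemCount)
    End If
    MemKeywords(MemCount) = Keywords
    MemIR(MemCount) = IR
    MemBranches(MemCount) = Branches
    MemOS(MemCount) = OS
    MemParent(MemCount) = Parent
    AddMemory = MemCount
    MemCount = MemCount + 1
End Function

'Example (hypothetical greeting memory at the root of a subject):
'    AddMemory "hello hi", "Hello", "Hi there|Good morning", "Hello. How can I help you?", -1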
In addition, recall from ANR 302 involves a single simulated cortical algorithm, namely memory prediction loop 320. Memory prediction loop 320 is shown in pseudocode form in Fig. 5.
The input string ("instring") is data representing an utterance from user 110 (Fig. 1) (for example, the utterance of state 202 (Fig. 2), or either of utterances 120A and 120C). In this illustrative embodiment, the input string is represented in textual form. In one embodiment, user 110 speaks the utterance of state 202, and memory prediction loop 320 (Fig. 3) includes conventional speech-to-text logic that converts the received audio signal representing the utterance of state 202 into the textual input string.
The output string ("outstring") is data representing an utterance from computer 102 (for example, the utterance of state 206 (Fig. 2), or either of utterances 120B (Fig. 1) and 120D). In this illustrative embodiment, the output string is represented in textual form. In one embodiment, memory prediction loop 320 (Fig. 3) includes conventional text-to-speech logic that converts the textual output string into synthesized speech to be played to user 110 as an audio signal, thereby creating the utterance of state 206 (Fig. 2). In addition, in a manner described more completely below, the synthesized speech is lip-synchronized with a graphical face presented visually by computer 102.
In one embodiment, an output string can include a reference to a stored program, thereby allowing computational logic to be used in formulating the responsive utterance. An example is the linguistic memory for the question "How old are you?" The responsive output string can include a reference to a stored program that measures the difference between the creation time of ALC 300 and the current time to produce a measurement of the age of ALC 300. Other examples include the linguistic memories for the questions "How many millimeters are in a mile?" and "What's the weather like in Russia today?" The former can include a reference to a stored program for converting units of measure, and the latter can include a reference to a stored program for retrieving and analyzing weather information. To the extent that stored programs can control things other than computer 102 (for example, the lights in the home of user 110), an output string can include a reference to such a stored program, thereby making ALC 300 a conversational user interface to anything that can be controlled by a computer. A stored program is all or part of an executable computer program stored in a database. Stored programs are known and are not described further herein.
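As one way of picturing such a reference, the following sketch (in the Visual Basic of Appendix A) shows an output string carrying a placeholder token that is resolved by a stored program when the responsive utterance is formulated. The "{AGE}" token syntax and the GetAlcAge routine are assumptions made for this example only; the specification does not prescribe how a stored-program reference is encoded.

'Illustrative sketch: an output string may embed a marker such as "{AGE}" that is
'replaced by the result of a stored program when the response is formulated.
'The marker syntax and GetAlcAge are assumptions made for this example.
Private Const ALC_CREATED As Date = #2/15/2005#   'assumed creation time of ALC 300

Private Function GetAlcAge() As String
    GetAlcAge = CStr(DateDiff("d", ALC_CREATED, Now)) & " days"
End Function

Public Function ResolveStoredPrograms(ByVal OutString As String) As String
    Dim result As String
    result = OutString
    If InStr(result, "{AGE}") > 0 Then
        result = Replace(result, "{AGE}", GetAlcAge())
    End If
    ResolveStoredPrograms = result
End Function

'Example: ResolveStoredPrograms("I am {AGE} old.") might return "I am 312 days old."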
The variable cmem represents the current memory within ANR 302. The type mem is shown in greater detail in the mems table 1106 (Fig. 11) and is described more completely below. The notion of a current memory corresponds to recent theory of human brain function, which suggests that a spotlight of attention processes one linguistic memory at a time. This notion of a current memory is represented in Fig. 3 as spotlight 310 on invariant representation IR1 306.
Memory prediction loop 320 (Fig. 5) is repeated for each input string received from user 110, as shown by the for/next statements at lines 504 and 532. Lines 505 and 531 define a loop that repeatedly searches the memories until the search is deemed complete. Initially, at lines 507-511, memory prediction loop 320 searches the child memories of the current memory. This is represented in Fig. 3, in which best-match logic 312 is applied to the child memories of invariant representation IR1 306, namely invariant representations IR2 308A-C.
Searching the child memories of the current memory allows the conversation to follow a logical sequence. In particular, each child of a given memory represents an anticipated subsequent utterance of user 110 in response to the most recent responsive utterance of computer 102. The memory hierarchy of ANR 302, together with memory prediction loop 320, therefore allows the dialogue to follow a natural flow in which user 110 responds to an utterance of computer 102 and computer 102 follows the logical progression of the utterances. This gives the dialogue between user 110 and computer 102 a highly interactive quality.
If any of the child memories of the current memory sufficiently matches the input string, the first sufficiently matching memory is evoked and the corresponding output string is sent at line 508. In addition, the search is deemed complete. In an alternative embodiment, if more than one child of the current memory sufficiently matches the input string, the best-matching child memory is selected rather than the first sufficiently matching memory.
At lines 509-510 (Fig. 5), memory prediction loop 320 detects when no child memory of the current memory sufficiently matches the input string and, in that case, sets the parent of the current memory as the new current memory for the subsequent iteration of the loop of lines 505-531. Accordingly, even if an utterance of user 110 responds to an earlier, rather than the most recent, utterance of computer 102, memory prediction loop 320 can still follow that course of the conversation.
When all memories of the current string tree have been traversed, all the way up to a memory that has no parent, all memories of the current subject are compared with the input string at lines 513-530 to find the best match. A memory that has no parent is sometimes referred to as a first-level inquiry, or "LIQ." A given subject can have many first-level inquiries. If the current memory of the previous iteration of the loop of lines 513-530 was a first-level inquiry, the current string tree is the subject that includes that first-level inquiry.
If a match of sufficient quality is found among the memories of the current string tree, the corresponding output string is sent as the utterance at line 515 and the search is deemed complete. Conversely, if no sufficient match is found in the current string tree, the parent string tree is made current (as shown at line 519) and another search iteration is performed.
If no sufficient match has been found and all string trees have been searched, as indicated by the current string tree having no parent (as shown at line 518), a novelty has been detected. A novelty is an input string for which there is no memory in ANR 302 or in any of the searched string trees, which together comprise all the memories of ALC 300. Since the memories of ANR 302 are created manually in a manner described more completely below, a novelty is an input string that was not anticipated by the mindwriters of ANR 302. In response to a novelty, memory prediction loop 320 initiates symbiotic learning regarding the novelty by notifying a mindwriter of ANR 302 at line 523, and notifies user 110 at line 526 that no response is available. However, memory prediction loop 320 can also attempt to notify user 110 when an answer becomes available, for example by e-mail.
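Although the pseudocode of Fig. 5 is not reproduced here, the search order just described — first the children of the current memory, then successive parents, then the memories of the whole current string tree, widening until a novelty is declared — can be sketched roughly as follows in the Visual Basic of Appendix A. The sketch builds on the parallel-array layout shown earlier and on the SufficientMatch and BestMatchIndex helpers sketched after the best-match discussion below; it is an illustration under those assumptions, not the literal content of memory prediction loop 320.

'Illustrative sketch of the search order of memory prediction loop 320.
'MemParent(), MemOS(), MemCount, SufficientMatch and BestMatchIndex come from the
'other sketches in this description; none of this is the patent's actual pseudocode.
Public Function RespondTo(ByVal InString As String, ByRef cmem As Long) As String
    Dim i As Long, probe As Long, best As Long

    'Pass 1: walk upward from the current memory, at each level testing that
    'memory's children (the anticipated follow-up utterances).
    probe = cmem
    Do While probe <> -1
        For i = 0 To MemCount - 1
            If MemParent(i) = probe Then
                If SufficientMatch(InString, i) Then
                    cmem = i                     'the evoked memory becomes the new context
                    RespondTo = MemOS(i)
                    Exit Function
                End If
            End If
        Next
        probe = MemParent(probe)                 'widen: try the parent's children next
    Loop

    'Pass 2: compare the input string with every memory of the current subject
    '(here, simply all memories) and accept the best match only if it is sufficient.
    best = BestMatchIndex(InString)
    If best <> -1 Then
        If SufficientMatch(InString, best) Then
            cmem = best
            RespondTo = MemOS(best)
            Exit Function
        End If
    End If

    'Novelty: nothing matched; symbiotic learning would be initiated here.
    RespondTo = "I do not have an answer for that yet, but I will find one."
End Function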
Best-match logic 312, invoked at lines 507 and 514, is shown in pseudocode form in Fig. 7. Best-match logic 312 uses string-matching logic that compares the input string with the invariant representation of a memory and compares the input string with the keywords of the memory. This string-matching logic is shown in pseudocode form in Fig. 8 and involves a two-pass determination of the likelihood that a particular memory is the memory user 110 intended to evoke in sending the input string. In the first pass, memory prediction loop 320 determines whether the input string contains any of the keywords of the memory. If not, the memory is quickly rejected. This efficiently and quickly excludes inapplicable memories.
If the input string includes at least one keyword of the memory, memory prediction loop 320 uses string comparison to estimate the likelihood that the invariant representation of the memory is the invariant representation of the memory user 110 intended when sending the input string. String comparison is known and is not described herein. In this illustrative embodiment, the type of string comparison used by memory prediction loop 320 is described on the World Wide Web at http://www.catalysoft.com/articles/StrikeAMatch.html, and its source code is reproduced herein as Appendix A.
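A rough rendering of that two-pass test, in the style of Appendix A, might look like the following. The 60% figure is the threshold mentioned in the text; the helper names, the space-delimited keyword convention, and the use of the parallel arrays sketched earlier are assumptions of this sketch. Utl_StrCompare is the letter-pair comparator reproduced in Appendix A.

'Illustrative sketch of the two-pass match test: a cheap keyword screen first,
'then the Appendix A letter-pair comparison against a likelihood threshold.
Private Const MATCH_THRESHOLD As Double = 0.6   'the 60% threshold mentioned in the text

'Pass 1: does the input string contain at least one of the memory's keywords?
Private Function HasKeyword(ByVal InString As String, ByVal memIndex As Long) As Boolean
    Dim kw() As String, i As Long
    kw = Split(UCase(MemKeywords(memIndex)), " ")
    For i = 0 To UBound(kw)
        If Len(kw(i)) > 0 Then
            If InStr(UCase(InString), kw(i)) > 0 Then
                HasKeyword = True
                Exit Function
            End If
        End If
    Next
End Function

'Pass 2: estimate the likelihood with Utl_StrCompare (Appendix A) and compare it
'with the predetermined threshold.
Public Function SufficientMatch(ByVal InString As String, ByVal memIndex As Long) As Boolean
    If Not HasKeyword(InString, memIndex) Then Exit Function   'quickly reject
    SufficientMatch = (Utl_StrCompare(InString, MemIR(memIndex)) >= MATCH_THRESHOLD)
End Function

'Return the index of the best-scoring memory, or -1 if none scores above zero.
Public Function BestMatchIndex(ByVal InString As String) As Long
    Dim i As Long, score As Double, bestScore As Double
    BestMatchIndex = -1
    For i = 0 To MemCount - 1
        If HasKeyword(InString, i) Then
            score = Utl_StrCompare(InString, MemIR(i))
            If score > bestScore Then
                bestScore = score
                BestMatchIndex = i
            End If
        End If
    Next
End Function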
Memory prediction loop 320 thus iteratively searches for the best match between the input string and the invariant representations of the memories of ANR 302, repeatedly widening the scope of the search as needed until a best match is found. If no sufficient match is found, symbiotic learning is initiated. If a sufficient match is found, computer 102 sends the associated output string of the best-matching memory as the response to user 110, and the memory that evoked the output string serves as the contextual starting point for the subsequent memory searches of memory prediction loop 320. In this illustrative embodiment, a sufficient match is determined by comparing the determined likelihood that the invariant representation matches the input string with a predetermined threshold likelihood (for example, 60%). It should be appreciated that branch links 406 are searched as separate memories, each of which is associated with output string OS 404. However, branch links 406 are all part of the same linguistic memory, evoking the same output string OS 404 and sharing the same parent and children within the hierarchy of ANR 302. Branch links 406 represent equivalent phrasings of the principal message of invariant representation IR1 402, and each equivalent phrasing has its own associated set of keywords.
Symbiotic learning
Much of the effort toward achieving artificial intelligence sufficient to realize a conversational user interface has focused on creating computers that can learn as humans do. However, as recent theories of the human brain indicate, language is largely a matter of memory access and recall by a single cortical algorithm. A significant advance is achieved by recognizing that a computer can implement an artificial cortical algorithm and access artificial memories to mimic human dialogue, even if those memories are not created by the computer itself. In fact, these artificial memories are created by real human brains, by people sometimes referred to herein as mindwriters.
This is illustrated in Fig. 9, in which a mindwriter 904 uses a computer 902 to manually modify the memories of ANR 302 (Fig. 3). In this illustrative embodiment, ANR 302 and memory prediction loop 320 are implemented in computer 106 (Fig. 9). In an alternative embodiment, memory prediction loop 320 is implemented in computer 102, and ANR 302 is implemented in computer 106, which sends the artificial linguistic memories of, for example, a subject to computer 102 as needed for the ongoing dialogue with user 110. User 110 communicates with computer 102 generally in the manner described above. If ANR 302 and memory prediction loop 320 are implemented in computer 106, computer 102 implements a thin client (for example, a web browser) that forwards utterances from user 110 over a network (for example, the Internet 104) to computer 106 for processing in the manner described above. The result of such processing is a responsive utterance, which computer 106 sends over the Internet 104 to computer 102 for presentation to user 110.
As described above, memory prediction loop 320 determines, at line 520, when an utterance of user 110 does not evoke an artificial memory within ANR 302. In such a case, memory prediction loop 320 initiates symbiotic learning at line 523 and notifies the user at line 526 that a suitable response is temporarily unavailable.
Symbiotic learning according to this illustrative embodiment is shown in logic flow diagrams 1000 (Figure 10), 1020, and 1040. Logic flow diagram 1000 illustrates the request by ALC 300 for symbiotic learning. Logic flow diagram 1020 illustrates the symbiotic teaching performed by mindwriter 904 using computer 902. Logic flow diagram 1040 illustrates the notification of user 110 by ALC 300 in response to successful symbiotic learning with respect to the previously unanswerable utterance.
In step 1002, ALC 300 packages the context of the novelty. The package includes all utterances of the dialogue between user 110 and computer 102, up to and including the novelty. Figure 11 shows an illustrative schema of the database representing ANR 302. Briefly, the users table 1102 represents the administrators and mindwriters of ALC 300; the mods table 1104 represents modifications made to any of the memories of ALC 300 (for example, of ANR 302) by way of symbiotic learning; the mems table 1106 represents all individual memories of ALC 300, such as the memories of ANR 302; the dialogues table 1108 links the exchanges of a conversation; and the exchanges table 1110 represents input strings received from user 110 and the associated evoked memories of ANR 302. When symbiotic learning is initiated, ALC 300 identifies the record of the dialogues table 1108 that represents the conversation in which the novelty occurred; that conversation is sometimes referred to as the subject conversation. To package the conversation, ALC 300 gathers all exchange records associated with the subject conversation and all records of the mems table 1106 representing the memories associated with those exchange records. In an alternative embodiment, ALC 300 does not package the conversation but instead gathers data identifying the subject conversation, so that a mindwriter can later identify the subject conversation and perform symbiotic teaching in the manner described below.
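The specification does not set out the column layout of these tables here, so the following record types are only a plausible picture of their roles, in the Visual Basic of Appendix A; the field names are invented for illustration, and Figure 11 defines the actual schema.

'Illustrative sketch only: plausible record shapes for the tables of Figure 11.
Private Type UserRec          'users table 1102: administrators and mindwriters
    UserId As Long
    Name As String
    Role As String            'e.g. "administrator" or "mindwriter"
End Type

Private Type ModRec           'mods table 1104: modifications made by symbiotic learning
    ModId As Long
    MemId As Long             'memory that was modified
    AuthorId As Long          'mindwriter who made the change
    MadeOn As Date
End Type

Private Type MemRec           'mems table 1106: one artificial memory
    MemId As Long
    ParentId As Long          '0 for a first-level inquiry
    Keywords As String
    InvariantRep As String
    OutString As String
    Hyperlink As String
End Type

Private Type DialogueRec      'dialogues table 1108: one conversation
    DialogueId As Long
    UserId As Long
    StartedOn As Date
End Type

Private Type ExchangeRec      'exchanges table 1110: one input string and the memory it evoked
    ExchangeId As Long
    DialogueId As Long
    InString As String
    EvokedMemId As Long       '0 when the input string was a novelty
End Type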
In step 1004 (Figure 10), ALC 300 notifies mindwriter 904 (Fig. 9) of the novelty and makes the packaged, or merely identified, subject conversation available to mindwriter 904. In one embodiment, ALC 300 sends the packaged subject conversation to mindwriter 904 as an e-mail message. In another embodiment, ALC 300 sends data identifying the subject conversation to mindwriter 904 as an e-mail message. In yet another embodiment, ALC 300 sends mindwriter 904 a message indicating that there is a novelty to be resolved. In this last embodiment, mindwriter 904 is expected to know how to access the novelty and its context within ALC 300 (for example, in an inbox designated for mindwriter 904).
In step 1022 (Figure 10), mindwriter 904 (Fig. 9) receives the notification of the novelty and its context. Mindwriter 904 determines remedial modifications to ANR 302 to resolve the novelty. The approach taken by mindwriter 904 to determine the remedial modifications can vary according to the particular techniques and abilities of the individual mindwriter. Preferably, however, mindwriter 904 first searches the mems table 1106 (Figure 11) for an invariant representation equivalent to the input string of the novelty. If an equivalent invariant representation is found, mindwriter 904 preferably determines that the remedial modification is to add another equivalent phrasing of the found invariant representation as a branch link, such as branch link 406 (Fig. 4). If no invariant representation equivalent to the input string of the novelty is found in the mems table 1106 (Figure 11), mindwriter 904 (Fig. 9) can determine that the remedial modifications to ANR 302 include adding a new memory to the mems table 1106. Mindwriter 904 preferably determines where in the hierarchy of subjects and memories the new memory is to be placed. This determination, like the determination of whether another memory has an equivalent invariant representation, is made using the full human language faculties of the human mindwriter. ALC 300 therefore learns using human learning abilities; that is, it learns symbiotically.
In step 1026 (Figure 10), mindwriter 904 submits the remedial modifications through computer 902 (Fig. 9), whereby the remedial modifications are effected in ALC 300, thereby teaching ALC 300 symbiotically. In this illustrative embodiment, mindwriter 904 submits such remedial modifications through a mindwriting tool described more completely below.
In step 1042 (Figure 10), ALC 300 receives the remedial modifications and implements them in ANR 302, thereby learning symbiotically. In step 1044, ALC 300 notifies user 110 that ALC 300 now has a response to the previously novel utterance of user 110. This notification can be made, for example, by e-mail and can include a hyperlink that can be used with a web browser executing in computer 102 to resume the previously suspended conversation between user 110 and ALC 300.
As described above, mindwriter 904 (Fig. 9) uses a mindwriting tool to specify remedial modifications to ANR 302, thereby teaching ALC 300 symbiotically. Figure 12 shows a screen view of the graphical user interface of the mindwriting tool. Frame 1202 shows a tree view of the related subjects in hierarchical form and provides a user-interface tool by which mindwriter 904 can select a subject of ANR 302. Frame 1204 shows a table view of the individual artificial memories of the subject selected in frame 1202.
In this particular example, frame 1206 shows a table view of the novelties and other alerts associated with mindwriter 904, whose authorization to use the mindwriting tool has been verified. In an alternative embodiment, all novelties and alerts are shown in frame 1206 without being associated with any individual mindwriter, and each mindwriter freely selects individual alerts and/or novelties from frame 1206 to work on.
Frames 1208 and 1210 allow mindwriter 904 to modify individual memories of ANR 302. When a memory is selected in frame 1204, the keywords of that memory, the invariant representation of the anticipated input string, the corresponding output string, and an associated hyperlink are shown in frame 1208. In this illustrative embodiment, the associated hyperlink identifies, by URL, content that is displayed concurrently with the playing of the output string as the responsive utterance. This allows a multimedia display of particular information in response to a question and/or comment of user 110. In addition, the output string, or another output string, can be associated with a synthesized voice and face in frame 1210. Graphical user interface (GUI) elements of frame 1210 allow mindwriter 904 to play, pause, stop, edit, and save the particular synthesized voice message associated with the selected memory of ANR 302. Figure 13 shows the illustrative synthesized face of frame 1210 in greater detail. It should be appreciated that, in this illustrative embodiment, the synthesized face of Figures 12 and 13 is presented to user 110 by computer 102 as the participant in the ongoing dialogue shown in Fig. 2.
To modify the selected memory, mindwriter 904 uses conventional graphical user interface techniques, involving physical manipulation of one or more user input devices by mindwriter 904, to modify the data represented in frames 1208 and 1210. To cause ALC 300 to store the specified modifications to ANR 302, mindwriter 904 actuates the GUI button labeled "memory."
Frame 1208 also includes GUI buttons by which mindwriter 904 can: (i) add a new memory; (ii) add a new child memory to the selected memory; (iii) add a branch link to the selected memory; and/or (iv) delete the selected memory. To add a new memory or a new child memory of the selected memory, mindwriter 904 actuates the appropriate GUI button in frame 1208, enters one or more keywords, the invariant representation of the remembered input string, the corresponding output string, and, optionally, an associated hyperlink, and actuates the "memory" GUI button. To add a new branch link to the selected memory, mindwriter 904 enters the new invariant representation of the equivalent input string and one or more keywords, and actuates the "memory" GUI button. Actuation of the delete GUI button deletes the memory selected in frame 1208, preferably after querying mindwriter 904 before the deletion.
The learning events shown in frame 1206 include novelty notifications, ratings, and prior modifications. When reviewing a novelty, mindwriter 904 can view the entire conversation preceding the novelty. Mindwriter 904 can therefore determine which linguistic memories are missing or inadequate in ANR 302. Once mindwriter 904 has determined what remedial modifications to the memories of ANR 302 are needed to resolve the novelty, mindwriter 904 implements those modifications through the mindwriting tool shown in Figure 12.
A rating event is an event in which user 110 has expressed a quantitative evaluation of his or her satisfaction with the ongoing conversation shown in Figs. 1 and 2. As the conversation proceeds, user 110 can move a GUI slider or other GUI element indicating a degree of satisfaction. Such ratings are sent to ALC 300 and recorded in frame 1206. If a particular memory of ANR 302 consistently receives relatively low ratings, mindwriter 904 determines remedial modifications to be applied to the memories of ANR 302 and implements those modifications in the manner described above with respect to novelties.
Output string sequences
Sometimes the answer to a question of user 110 is complex, requiring a relatively long and involved responsive output string. As described above, the associated hyperlink of a memory can cause a web page or other multimedia content to be displayed in conjunction with presentation of the output string. In the case of a relatively long and/or complex output string, however, it may be desirable to present a sequence of web pages or other multimedia content. To provide this capability, an output string can have one or more additional sequences, organized, for example, as a linked list.
A sequence is represented as a memory that has only an output string, an optional associated hyperlink, and an optional sequence link to the next sequence, if any. In presenting the output string of the memory evoked by the most recent utterance of user 110, ALC 300 presents the output string of the memory to user 110 while displaying the content identified by the associated hyperlink of the output string. Upon completing the output string, ALC 300 retrieves the output string of the immediately following sequence and displays the content identified by the associated hyperlink of that sequence. The transition from completing the initial output string to beginning the presentation of the immediately following output string therefore also effects a transition from displaying the content of the former's associated hyperlink to displaying the content of the latter's associated hyperlink.
The result is a slideshow of web pages or other multimedia content that is coordinated with, and adds contextual information to, the subject matter being spoken by ALC 300.
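Continuing the parallel-array sketch used earlier, such a chain could be walked roughly as follows; the MemHyperlink and MemSequence arrays and the display and speech stand-ins are assumptions of this sketch, not part of the illustrative embodiment.

'Illustrative sketch: presenting a chain of sequence memories, showing each memory's
'hyperlinked content while its output string is spoken. All names are assumptions.
Public MemHyperlink() As String   'optional content shown alongside the output string
Public MemSequence() As Long      'index of the next sequence memory, or -1 at the end

Public Sub PresentSequence(ByVal memIndex As Long)
    Dim cur As Long
    cur = memIndex
    Do While cur <> -1
        If Len(MemHyperlink(cur)) > 0 Then
            Debug.Print "Display: " & MemHyperlink(cur)   'stand-in for showing the web page
        End If
        Debug.Print "Speak:   " & MemOS(cur)              'stand-in for text-to-speech playback
        cur = MemSequence(cur)                            'advance to the next sequence, if any
    Loop
End Sub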
Subject segues
Sometimes user 110 (Fig. 1) will depart substantially from the current subject with an utterance related to a different, and possibly unrelated, subject. For example, user 110 may be talking with ALC 300 about mosquitoes when a responsive utterance of ALC 300 mentions Canada, to which user 110 responds with an utterance about Canada, such as "What is the capital of Ontario?"
The processing performed by memory prediction loop 320 in such a situation is described above with respect to lines 513-529 (Fig. 5). In an alternative embodiment, memory prediction loop 320 handles a failure to sufficiently match a linguistic memory within the current subject of ANR 302 in the manner shown in logic flow diagram 1400 (Figure 14).
In step 1402, memory prediction loop 320 (Fig. 3) parses the few most recently sent output strings into individual words and, in step 1404 (Figure 14), searches for any artificial linguistic memory, in any subject, having one or more keywords that match the words of those output strings. In the illustrative embodiment, the two or three most recently sent output strings are used. The underlying assumption is that something in a recently sent output string triggered the interest of user 110 in the other subject. Using the words of the few most recently sent output strings as keywords for searching the memories of ANR 302 allows memory prediction loop 320 to narrow what would otherwise be a completely new, broad search, thereby improving performance and the apparent continuity of the dialogue with user 110.
In step 1406, memory prediction loop 320 (Fig. 3) identifies the best-matching memory using the string-matching logic described above. In test step 1408 (Figure 14), memory prediction loop 320 (Fig. 3) determines whether the best-matching memory sufficiently matches the input string. In this illustrative embodiment, a sufficient match is determined by comparing the determined likelihood that the invariant representation matches the input string with a predetermined threshold likelihood (for example, 60%).
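One rough way to realize steps 1402 through 1412, continuing the earlier sketches (the recent-output bookkeeping, the helper names, and the reuse of Utl_StrCompare from Appendix A are assumptions of this sketch), is shown below.

'Illustrative sketch of the subject segue of Figure 14: the words of the last few
'output strings seed a keyword search across all subjects, and the best match is
'accepted only if it clears the same 60% threshold. All names are assumptions.
Public Function SegueToNewSubject(ByVal InString As String, _
                                  ByVal RecentOutStrings As String, _
                                  ByRef cmem As Long) As String
    Dim seedWords() As String, i As Long, m As Long
    Dim best As Long, bestScore As Double, score As Double

    seedWords = Split(UCase(RecentOutStrings), " ")   'step 1402: parse the recent output strings
    best = -1

    For m = 0 To MemCount - 1                         'step 1404: any subject, any memory
        For i = 0 To UBound(seedWords)
            If Len(seedWords(i)) > 0 Then
                If InStr(UCase(MemKeywords(m)), seedWords(i)) > 0 Then
                    score = Utl_StrCompare(InString, MemIR(m))   'step 1406: score the candidate
                    If score > bestScore Then
                        bestScore = score
                        best = m
                    End If
                    Exit For
                End If
            End If
        Next
    Next

    If best <> -1 And bestScore >= 0.6 Then           'step 1408: sufficient match?
        cmem = best                                   'step 1410: switch to the new subject
        SegueToNewSubject = MemOS(best)
    Else
        SegueToNewSubject = ""                        'step 1412: caller falls back to letting
    End If                                            'the user browse the subject tree
End Function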
If memory prediction loop 320 determines that the best match is sufficient, processing transfers to step 1410, in which memory prediction loop 320 sends the output string of the best-matching memory as the responsive utterance and sets the current subject to the subject associated with the matching memory. Conversely, if memory prediction loop 320 determines that the best match is insufficient, processing transfers to step 1412.
In step 1412 (Figure 14), memory prediction loop 320 provides, through computer 102, a graphical user interface by which user 110 can navigate and browse the hierarchical subject tree to identify the subject of interest to user 110. Once user 110 has selected the new subject, memory prediction loop 320 processes the most recently received utterance of user 110 to evoke the best-matching artificial linguistic memory within the newly selected subject. In addition, memory prediction loop 320 allows the user to make a new utterance so as to continue the dialogue within the newly selected subject in the manner described above.
Server/client distribution of ALC 300 and prediction arrays
In this illustrative embodiment, ALC 300 (Fig. 3) is implemented in computer 102 (Fig. 1), and computer 106 stores artificial linguistic memories of various subjects for use by ALC 300. In this embodiment, computer 102 (Fig. 1) can carry on the dialogue with user 110 until a new subject is needed. When a new subject is needed, computer 102 sends the few most recently sent output strings to computer 106, and computer 106 uses those output strings to select the new subject in the manner described above with respect to logic flow diagram 1400 (Figure 14).
In another embodiment, ALC 300 (Fig. 3) is implemented in computer 106 (Fig. 1), and user 110 converses with ALC 300 through a thin client, such as an ordinary web browser, executing in computer 102 and communicating with computer 106 over the Internet 104. This allows many users, each with a minimally configured client computer system, to converse with ALC 300. In this illustrative embodiment, user 110 communicates with ALC 300 in textual form through an application that is, or is similar to, a conventional instant-messaging application. Messages typed by user 110 using computer 102 are delivered over the Internet 104 to computer 106, in which ALC 300 determines the responsive messages that are returned over the Internet 104 to computer 102 for display to user 110.
In another thin-client embodiment, the thin client can be a Voice over Internet Protocol (VoIP) client through which user 110 sends and receives audible voice messages. Computer 106 receives the voice messages over the Internet 104 and converts them to text using conventional speech-to-text techniques. ALC 300 processes the converted textual input string in the manner described above to produce an output string representing the responsive message. Computer 106 uses conventional text-to-speech techniques to synthesize an audible voice message speaking the output string and sends the voice message over the Internet 104 to computer 102 to be played to user 110 as a spoken voice message. User 110 can also exchange voice signals with computer 106 using conventional telephone equipment communicating through conventional computer-telephony circuitry and the public switched telephone network.
In other embodiments, computer 102 can perform some pre-processing or post-processing for ALC 300 to minimize Internet traffic and/or to reduce or compensate for network latency. For example, in one embodiment, a client application executing in computer 102 performs speech-to-text processing of the utterances of user 110 and text-to-speech processing of the responsive utterances to be played to user 110. User 110 can therefore converse orally with computer 102, and computer 102 can translate the spoken messages into text and converse in textual form with ALC 300 in computer 106 over the Internet 104.
In another embodiment, a client application executing in computer 102 replicates part of the logic of memory prediction loop 320 to help avoid dialogue delays caused by network latency over the Internet 104. In this embodiment, ALC 300, executing in computer 106, sends a prediction array of memories, the output string of each of which represents a responsive message. In particular, ALC 300 sends, as the prediction array, all child memories of the particular memory evoked by the most recently received input string. When user 110 makes a subsequent utterance, captured as another input string, the client application sends the new input string over the Internet 104 to ALC 300 in computer 106 and independently uses the memory-prediction logic shown at lines 507-508 to determine whether a memory in the prediction array is evoked.
If a memory in the prediction array is evoked, the client application can send the output string of the evoked memory directly, without waiting for the responsive output string from ALC 300. ALC 300 meanwhile processes the input string in the same manner and sends the identical output string, produced by the identical logic, together with a new prediction array. Since the client application has already sent the responsive utterance, the client application ignores the output string from ALC 300 but stores the new prediction array received from ALC 300. The client application can therefore respond to user 110 immediately, regardless of the possibly substantial latency over the Internet 104. The client application then experiences a relatively long lull, during which user 110 composes another utterance in oral or textual form. During this idle period, the client application has ample time to receive the output string and the new prediction array.
Conversely, if no memory in the prediction array is evoked, the client application simply waits for the output string and the new prediction array from ALC 300.
In this illustrative embodiment, ALC 300 sends the output string representing the responsive utterance separately from, and before, the corresponding prediction array, to avoid delaying the receipt of the output string by the client application.
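The client side of that arrangement can be pictured along the following lines; the PredIR and PredOS arrays, the 60% test, and the absence of any real network code are assumptions of this sketch, which reuses Utl_StrCompare from Appendix A.

'Illustrative sketch of the client-side use of a prediction array: if the new input
'string evokes one of the predicted child memories, answer locally and simply ignore
'the server's identical answer when it arrives. All names are assumptions.
Public PredIR() As String    'invariant representations of the predicted child memories
Public PredOS() As String    'their associated output strings
Public PredCount As Long

'Returns the local response, or "" if the client must wait for ALC 300.
Public Function LocalRespond(ByVal InString As String) As String
    Dim i As Long
    LocalRespond = ""
    For i = 0 To PredCount - 1
        If Utl_StrCompare(InString, PredIR(i)) >= 0.6 Then   'same sufficiency test as the server
            LocalRespond = PredOS(i)                         'respond without waiting for the network
            Exit Function
        End If
    Next
    'No predicted memory was evoked: the caller waits for the output string and the
    'new prediction array from ALC 300.
End Function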
The foregoing description is illustrative only and is not limiting. The present invention is defined solely by the claims appended hereto and the full range of equivalents thereto.
Appendix A
'***********************************************************
'Function StrCompare
'Purpose: Compare two strings and return a value of how similar they are
'Returns: A value in the range of 0 (0%) to 1 (100%) (0 being not at all,
'         and 1 being exact)
'         .76235 would be 76%
'Algorithm:
'  Similarity(s1, s2) = (2 * |nIntersection|) / (|pairs(s1)| + |pairs(s2)|)
'  nIntersection is the number of pairs that both strings have in common
'  The pipes mean "number of"
'Example:
'  Compare "FRANCE" with "FRENCH"
'  First, group each word into letter pairs:
'    FRANCE: {FR, RA, AN, NC, CE}
'    FRENCH: {FR, RE, EN, NC, CH}
'  Second, determine the intersection between the two strings:
'    Intersection is {FR, NC}
'  Third, plug the variables into the formula:
'    Similarity(FRANCE, FRENCH) = (2 * |{FR, NC}|) / (|{FR, RA, AN, NC, CE}| + |{FR, RE, EN, NC, CH}|)
'    Similarity(FRANCE, FRENCH) = (2 * 2) / (5 + 5)
'    Similarity(FRANCE, FRENCH) = 4 / 10 = .4 = 40%
'***********************************************************
Public Function Utl_StrCompare(ByVal RefStr As String, ByVal Str2Compare As String) As Double
    Dim pairs1() As String, pairs2() As String, pair1 As String, pair2 As String
    Dim intersection As Long, union As Long, i As Long, j As Long
    'Init
    intersection = 0
    pairs1 = Utl_StrCompareWordLetterPairs(UCase(RefStr))
    pairs2 = Utl_StrCompareWordLetterPairs(UCase(Str2Compare))
    union = UBound(pairs1) + UBound(pairs2) + 2
    For i = 0 To UBound(pairs1)
        pair1 = pairs1(i)
        For j = 0 To UBound(pairs2)
            pair2 = pairs2(j)
            If pair1 = pair2 Then
                intersection = intersection + 1
                pairs2(j) = ""
                Exit For
            End If
        Next
    Next
    Utl_StrCompare = (2 * intersection) / union
End Function

'****************************************************
'Function StrCompareWordLetterPairs
'Purpose: Takes a string of word(s) and turns them into 2-character strings
'Returns: An array where each element is a 2-character string pair
'****************************************************
Private Function Utl_StrCompareWordLetterPairs(s As String) As String()
    Dim allPairs() As String, pairsInWord() As String, words() As String, word As String
    Dim apcntr As Long, i As Long, j As Long
    'Init
    apcntr = 0
    ReDim allPairs(0)
    allPairs(0) = ""
    'Get a list of words (separated by spaces)
    words = Split(s, " ")
    For i = 0 To UBound(words)
        'Find the pairs of characters
        pairsInWord = Utl_StrCompareLetterPairs(words(i))
        'Add the pairs to the master list
        For j = 0 To UBound(pairsInWord)
            ReDim Preserve allPairs(apcntr)
            allPairs(apcntr) = pairsInWord(j)
            apcntr = apcntr + 1
        Next
    Next
    Utl_StrCompareWordLetterPairs = allPairs
End Function

'****************************************************
'Function StrCompareLetterPairs
'Purpose: Takes a string (1 word) and pairs up adjacent letters
'Returns: An array where each element is an adjacent letter pair
'****************************************************
Private Function Utl_StrCompareLetterPairs(s As String) As String()
    Dim pairs() As String
    Dim numPairs As Long, i As Long
    numPairs = Len(s) - 2
    ReDim pairs(numPairs)
    For i = 0 To numPairs
        pairs(i) = Mid(s, i + 1, 2)
    Next
    Utl_StrCompareLetterPairs = pairs
End Function

Claims (69)

1. A computer-implemented conversational user interface method, comprising:
receiving speech data representing a verbal message from a person;
searching artificial linguistic memories for a matching one of the artificial linguistic memories;
determining whether the matching artificial linguistic memory matches the speech data to at least a predetermined degree of similarity;
upon a condition in which the matching artificial linguistic memory matches the speech data to at least the predetermined degree of similarity, presenting a responsive message to the person, wherein the responsive message is associated with the matching artificial linguistic memory; and
upon a condition in which the matching artificial linguistic memory matches the speech data to less than the predetermined degree of similarity, initiating symbiotic learning so as to acquire a new linguistic memory that matches the speech data to at least the predetermined degree of similarity.
2. The method of claim 1, wherein the speech data is a textual representation of the verbal message.
3. The method of claim 1, wherein the speech data represents an audio signal of the person speaking the verbal message.
4. The method of claim 1, wherein the artificial linguistic memories are organized hierarchically.
5. The method of claim 4, wherein at least one of the artificial linguistic memories is a parent artificial linguistic memory having one or more child artificial linguistic memories.
6. The method of claim 5, wherein each artificial linguistic memory includes an invariant representation of an expected utterance of the person and a response to the expected utterance;
and further wherein each of the child artificial linguistic memories includes an invariant representation of an expected subsequent utterance made by the person in response to the response of the parent memory.
7. The method of claim 1, wherein searching comprises:
searching the artificial linguistic memories in such a manner that artificial linguistic memories nearer to a most recently evoked one of the artificial linguistic memories are searched before artificial linguistic memories farther from the most recently evoked artificial linguistic memory.
8. The method of claim 7, wherein searching further comprises:
comparing the speech data to one or more child artificial linguistic memories of the most recently evoked artificial linguistic memory before comparing the speech data to others of the artificial linguistic memories.
9. The method of claim 1, wherein searching comprises:
comparing the speech data to each of two or more expected utterances of an artificial linguistic memory.
10. The method of claim 9, wherein at least one of the two or more expected utterances of the artificial linguistic memory is a branch link to the artificial linguistic memory.
11. The method of claim 1, wherein searching comprises:
comparing the speech data to one or more keywords associated with an artificial linguistic memory; and
upon a condition in which the speech data matches at least one keyword associated with the artificial linguistic memory, comparing the speech data to an expected utterance associated with the artificial linguistic memory.
12. The method of claim 1, wherein the responsive message is text.
13. The method of claim 1, wherein the responsive message comprises an audio signal.
14. The method of claim 1, wherein the responsive message represents a speech signal.
15. The method of claim 12, wherein the speech signal is synthesized.
16. The method of claim 1, wherein the responsive message includes a hyperlink to content to be displayed in conjunction with the responsive message.
17. The method of claim 1, wherein the responsive message includes at least two parts, each of which is presented in sequence.
18. The method of claim 1, wherein initiating symbiotic learning comprises:
sending a request for symbiotic instruction to a human intelligence author.
19. The method of claim 18, wherein the request includes data representing the speech data.
20. The method of claim 19, wherein the request includes conversation history data representing at least one previous verbal message from the user and at least one previously presented responsive message.
21. The method of claim 18, further comprising:
learning in a symbiotic manner by receiving, from the intelligence author, data representing one or more changes to the artificial linguistic memories; and
implementing the changes to the one or more artificial linguistic memories.
22. Computer-readable media for use in association with a computer that includes a processor and a memory, the computer-readable media including computer instructions configured to cause the computer to implement a conversational user interface by performing the method of claim 1.
23. Computer-readable media for use in association with a computer that includes a processor and a memory, the computer-readable media including computer instructions configured to cause the computer to implement a conversational user interface by performing the method of claim 2.
24. Computer-readable media for use in association with a computer that includes a processor and a memory, the computer-readable media including computer instructions configured to cause the computer to implement a conversational user interface by performing the method of claim 3.
25. Computer-readable media for use in association with a computer that includes a processor and a memory, the computer-readable media including computer instructions configured to cause the computer to implement a conversational user interface by performing the method of claim 4.
26. Computer-readable media for use in association with a computer that includes a processor and a memory, the computer-readable media including computer instructions configured to cause the computer to implement a conversational user interface by performing the method of claim 5.
27. Computer-readable media for use in association with a computer that includes a processor and a memory, the computer-readable media including computer instructions configured to cause the computer to implement a conversational user interface by performing the method of claim 6.
28. Computer-readable media for use in association with a computer that includes a processor and a memory, the computer-readable media including computer instructions configured to cause the computer to implement a conversational user interface by performing the method of claim 7.
29. Computer-readable media for use in association with a computer that includes a processor and a memory, the computer-readable media including computer instructions configured to cause the computer to implement a conversational user interface by performing the method of claim 8.
30. Computer-readable media for use in association with a computer that includes a processor and a memory, the computer-readable media including computer instructions configured to cause the computer to implement a conversational user interface by performing the method of claim 9.
31. Computer-readable media for use in association with a computer that includes a processor and a memory, the computer-readable media including computer instructions configured to cause the computer to implement a conversational user interface by performing the method of claim 10.
32. Computer-readable media for use in association with a computer that includes a processor and a memory, the computer-readable media including computer instructions configured to cause the computer to implement a conversational user interface by performing the method of claim 11.
33. Computer-readable media for use in association with a computer that includes a processor and a memory, the computer-readable media including computer instructions configured to cause the computer to implement a conversational user interface by performing the method of claim 12.
34. Computer-readable media for use in association with a computer that includes a processor and a memory, the computer-readable media including computer instructions configured to cause the computer to implement a conversational user interface by performing the method of claim 13.
35. Computer-readable media for use in association with a computer that includes a processor and a memory, the computer-readable media including computer instructions configured to cause the computer to implement a conversational user interface by performing the method of claim 14.
36. Computer-readable media for use in association with a computer that includes a processor and a memory, the computer-readable media including computer instructions configured to cause the computer to implement a conversational user interface by performing the method of claim 15.
37. Computer-readable media for use in association with a computer that includes a processor and a memory, the computer-readable media including computer instructions configured to cause the computer to implement a conversational user interface by performing the method of claim 16.
38. Computer-readable media for use in association with a computer that includes a processor and a memory, the computer-readable media including computer instructions configured to cause the computer to implement a conversational user interface by performing the method of claim 17.
39. Computer-readable media for use in association with a computer that includes a processor and a memory, the computer-readable media including computer instructions configured to cause the computer to implement a conversational user interface by performing the method of claim 18.
40. Computer-readable media for use in association with a computer that includes a processor and a memory, the computer-readable media including computer instructions configured to cause the computer to implement a conversational user interface by performing the method of claim 19.
41. Computer-readable media for use in association with a computer that includes a processor and a memory, the computer-readable media including computer instructions configured to cause the computer to implement a conversational user interface by performing the method of claim 20.
42. Computer-readable media for use in association with a computer that includes a processor and a memory, the computer-readable media including computer instructions configured to cause the computer to implement a conversational user interface by performing the method of claim 21.
43. A computer system comprising:
a processor;
a memory operatively coupled to the processor; and
a conversational user interface module that (i) executes from the memory in the processor and (ii), when executed by the processor, causes the computer system to perform the method of claim 1.
44. A computer system comprising:
a processor;
a memory operatively coupled to the processor; and
a conversational user interface module that (i) executes from the memory in the processor and (ii), when executed by the processor, causes the computer system to perform the method of claim 2.
45. A computer system comprising:
a processor;
a memory operatively coupled to the processor; and
a conversational user interface module that (i) executes from the memory in the processor and (ii), when executed by the processor, causes the computer system to perform the method of claim 3.
46. A computer system comprising:
a processor;
a memory operatively coupled to the processor; and
a conversational user interface module that (i) executes from the memory in the processor and (ii), when executed by the processor, causes the computer system to perform the method of claim 4.
47. A computer system comprising:
a processor;
a memory operatively coupled to the processor; and
a conversational user interface module that (i) executes from the memory in the processor and (ii), when executed by the processor, causes the computer system to perform the method of claim 5.
48. A computer system comprising:
a processor;
a memory operatively coupled to the processor; and
a conversational user interface module that (i) executes from the memory in the processor and (ii), when executed by the processor, causes the computer system to perform the method of claim 6.
49. A computer system comprising:
a processor;
a memory operatively coupled to the processor; and
a conversational user interface module that (i) executes from the memory in the processor and (ii), when executed by the processor, causes the computer system to perform the method of claim 7.
50. A computer system comprising:
a processor;
a memory operatively coupled to the processor; and
a conversational user interface module that (i) executes from the memory in the processor and (ii), when executed by the processor, causes the computer system to perform the method of claim 8.
51. A computer system comprising:
a processor;
a memory operatively coupled to the processor; and
a conversational user interface module that (i) executes from the memory in the processor and (ii), when executed by the processor, causes the computer system to perform the method of claim 9.
52. A computer system comprising:
a processor;
a memory operatively coupled to the processor; and
a conversational user interface module that (i) executes from the memory in the processor and (ii), when executed by the processor, causes the computer system to perform the method of claim 10.
53. A computer system comprising:
a processor;
a memory operatively coupled to the processor; and
a conversational user interface module that (i) executes from the memory in the processor and (ii), when executed by the processor, causes the computer system to perform the method of claim 11.
54. A computer system comprising:
a processor;
a memory operatively coupled to the processor; and
a conversational user interface module that (i) executes from the memory in the processor and (ii), when executed by the processor, causes the computer system to perform the method of claim 12.
55. A computer system comprising:
a processor;
a memory operatively coupled to the processor; and
a conversational user interface module that (i) executes from the memory in the processor and (ii), when executed by the processor, causes the computer system to perform the method of claim 13.
56. A computer system comprising:
a processor;
a memory operatively coupled to the processor; and
a conversational user interface module that (i) executes from the memory in the processor and (ii), when executed by the processor, causes the computer system to perform the method of claim 14.
57. A computer system comprising:
a processor;
a memory operatively coupled to the processor; and
a conversational user interface module that (i) executes from the memory in the processor and (ii), when executed by the processor, causes the computer system to perform the method of claim 15.
58. A computer system comprising:
a processor;
a memory operatively coupled to the processor; and
a conversational user interface module that (i) executes from the memory in the processor and (ii), when executed by the processor, causes the computer system to perform the method of claim 16.
59. A computer system comprising:
a processor;
a memory operatively coupled to the processor; and
a conversational user interface module that (i) executes from the memory in the processor and (ii), when executed by the processor, causes the computer system to perform the method of claim 17.
60. A computer system comprising:
a processor;
a memory operatively coupled to the processor; and
a conversational user interface module that (i) executes from the memory in the processor and (ii), when executed by the processor, causes the computer system to perform the method of claim 18.
61. A computer system comprising:
a processor;
a memory operatively coupled to the processor; and
a conversational user interface module that (i) executes from the memory in the processor and (ii), when executed by the processor, causes the computer system to perform the method of claim 19.
62. A computer system comprising:
a processor;
a memory operatively coupled to the processor; and
a conversational user interface module that (i) executes from the memory in the processor and (ii), when executed by the processor, causes the computer system to perform the method of claim 20.
63. A computer system comprising:
a processor;
a memory operatively coupled to the processor; and
a conversational user interface module that (i) executes from the memory in the processor and (ii), when executed by the processor, causes the computer system to perform the method of claim 21.
64. A computer system comprising:
an artificial linguistic cortex capable of symbiotic learning.
65. The computer system of claim 64, wherein the artificial linguistic cortex is also capable of carrying on an ongoing contextual conversation with a person.
66. A computer-implemented conversational user interface, comprising:
a collection of two or more artificial linguistic memories represented in and stored in a computer, wherein the artificial linguistic memories:
are each stored in the computer as a sequence of patterns;
are each represented in invariant form by an invariant representation; and
are organized hierarchically in the computer; and
linguistic memory recall logic that, in response to an utterance of a person, auto-associatively recalls a corresponding one of the artificial linguistic memories by:
determining that the invariant representation of the corresponding one of the artificial linguistic memories matches the utterance.
67. The computer-implemented conversational user interface of claim 66, wherein each of the linguistic memories includes a corresponding responsive linguistic expression; and
further wherein the linguistic memory recall logic causes the responsive linguistic expression of the corresponding artificial linguistic memory to be presented.
68. The computer-implemented conversational user interface of claim 66, wherein each of the linguistic memories includes a corresponding stored program; and
further wherein the linguistic memory recall logic causes the stored program of the corresponding artificial linguistic memory to be executed.
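For illustration only, the following Visual Basic sketch suggests one possible way to organize the method of claim 1 and the memory structure of claim 66: each artificial linguistic memory pairs an invariant representation of an expected utterance with an associated responsive message (and, optionally, child memories for the hierarchical organization), and an incoming utterance either evokes the best-matching memory or initiates symbiotic learning. The type and procedure names (LinguisticMemory, HandleUtterance, RequestSymbioticInstruction, SIMILARITY_THRESHOLD) are hypothetical, the threshold value is assumed, and Utl_StrCompare from Appendix A is reused only as an assumed similarity measure; the sketch is not a definitive implementation of the claims.
' Hypothetical memory record: an invariant representation paired with an
' associated responsive message; ChildIndices gives the hierarchical links.
Public Type LinguisticMemory
    InvariantRepresentation As String
    ResponsiveMessage As String
    ChildIndices() As Long
End Type
Public Const SIMILARITY_THRESHOLD As Double = 0.75   ' Assumed predetermined degree of similarity
' Sketch of the flow of claim 1: search, test against the threshold, then
' either present the associated responsive message or initiate symbiotic learning.
Public Function HandleUtterance(ByVal utterance As String, memories() As LinguisticMemory) As String
    Dim best As Long, bestScore As Double, score As Double, i As Long
    best = -1
    bestScore = 0
    For i = 0 To UBound(memories)
        score = Utl_StrCompare(memories(i).InvariantRepresentation, utterance)
        If score > bestScore Then
            bestScore = score
            best = i
        End If
    Next
    If best >= 0 And bestScore >= SIMILARITY_THRESHOLD Then
        ' The matching memory is evoked; its associated responsive message is presented.
        HandleUtterance = memories(best).ResponsiveMessage
    Else
        ' No memory matches closely enough; initiate symbiotic learning.
        HandleUtterance = RequestSymbioticInstruction(utterance)
    End If
End Function
' Placeholder for initiating symbiotic learning (compare claim 18): a real
' system would send a request for symbiotic instruction to a human author.
Private Function RequestSymbioticInstruction(ByVal utterance As String) As String
    RequestSymbioticInstruction = "I do not yet know how to respond to: " & utterance
End Function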
CNA2006800048979A 2005-02-15 2006-02-15 Conversational user interface Pending CN101128864A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US65274805P 2005-02-15 2005-02-15
US60/652,748 2005-02-15
US60/740,147 2005-11-29

Publications (1)

Publication Number Publication Date
CN101128864A true CN101128864A (en) 2008-02-20

Family

ID=39096070

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2006800048979A Pending CN101128864A (en) 2005-02-15 2006-02-15 Conversational user interface

Country Status (1)

Country Link
CN (1) CN101128864A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10049675B2 (en) 2010-02-25 2018-08-14 Apple Inc. User profiling for voice input processing
CN109947907A (en) * 2017-10-31 2019-06-28 上海挖数互联网科技有限公司 Construction, response method and device, storage medium, the server of chat robots

Similar Documents

Publication Publication Date Title
US20220319517A1 (en) Electronic personal interactive device
Bohn et al. The pervasive role of pragmatics in early language
US9704103B2 (en) Digital companions for human users
US11475897B2 (en) Method and apparatus for response using voice matching user category
Bock Structure in language: Creating form in talk.
CN106469212A (en) Man-machine interaction method based on artificial intelligence and device
CN107944027A (en) Create the method and system of semantic key index
Wilks et al. A prototype for a conversational companion for reminiscing about images
Tintarev et al. Personal storytelling: Using Natural Language Generation for children with complex communication needs, in the wild…
Gibson et al. Analysing your data
JP2006061632A (en) Emotion data supplying apparatus, psychology analyzer, and method for psychological analysis of telephone user
US7805309B2 (en) Conversational user interface that mimics the organization of memories in a human brain
Athari et al. Vocal imitation between mothers and infants
Needle et al. Gendered associations of English morphology
Gibson Sociophonetics of popular music: Insights from corpus analysis and speech perception experiments
Rangan et al. Thinking with an accent: Toward a new object, method, and practice
Li Divination engines: A media history of text prediction
CN101128864A (en) Conversational user interface
Mayr et al. Bilingual phonological development across generations: Segmental accuracy and error patterns in second-and third-generation British Bengali children
Lubold Producing acoustic-prosodic entrainment in a robotic learning companion to build learner rapport
US20030158733A1 (en) Character type speak system
Campbell Expressive/affective speech synthesis
Hobbs The origin and evolution of language: A plausible, strong-AI account
Moore Cognitive informatics: The future of spoken language processing
De Bruyn et al. Polyphony Beyond the Human

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1114234

Country of ref document: HK

C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20080220

REG Reference to a national code

Ref country code: HK

Ref legal event code: WD

Ref document number: 1114234

Country of ref document: HK