CN102737104B - Task driven user intents - Google Patents
- Publication number
- CN102737104B (application CN201210093414.4A)
- Authority
- CN
- China
- Prior art keywords
- phrase
- application
- word
- network
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/903—Querying
- G06F16/9032—Query formulation
- G06F16/90332—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/951—Indexing; Web crawling techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9537—Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
Abstract
An augmented conversational understanding architecture may be provided. Upon receiving a natural language phrase from a user, the phrase may be translated into a search phrase, and a search action may be performed using the search phrase.
Description
Technical field
The present application relates to task-driven user intents.
Background
Task-driven user intents may provide a mechanism for facilitating natural language understanding of user queries and conversations. In some situations, web-based and/or cloud-based network services may provide a user with large amounts of information, but a search agent may not understand the user's context in order to select which service to query. For example, the natural language phrase "let's do Italian tonight" may not be understood by a search engine, which may return results associated with translating the phrase into the Italian language rather than searching for Italian restaurants. Conventional systems thus require explicit syntax to define the search domain and are unable to identify the domain from the context of the search.
Summary of the invention
This Summary is provided to introduce, in simplified form, a selection of concepts that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Identification of user intents may be provided. A plurality of network applications may be identified, and an ontology associated with each of the plurality of applications may be defined. If a phrase received from a user is associated with at least one of the defined ontologies, an action associated with the corresponding network application may be performed.
Both the foregoing general description and the following detailed description provide examples and are explanatory only. Accordingly, the foregoing general description and the following detailed description should not be considered restrictive. Further, features or variations may be provided in addition to those set forth herein. For example, embodiments may be directed to various feature combinations and sub-combinations described in the Detailed Description.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate embodiments of the present invention. In the drawings:
Fig. 1 is a block diagram of an operating environment;
Fig. 2 is a flow chart of a method for providing an understanding of user intent; and
Fig. 3 is a block diagram of a system including a computing device.
Detailed description
The following detailed description refers to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar elements. While embodiments of the invention may be described, modifications, adaptations, and other implementations are possible. For example, substitutions, additions, or modifications may be made to the elements illustrated in the drawings, and the methods described herein may be modified by substituting, reordering, or adding stages to the disclosed methods. Accordingly, the following detailed description does not limit the invention. Instead, the proper scope of the invention is defined by the appended claims.
A spoken dialog system (SDS) enables people to interact with a computer using their voice. The primary component that drives the SDS may comprise a dialog manager: this component manages the dialog-based conversation with the user. The dialog manager may determine the user's intent through a combination of multiple input sources, such as the outputs of speech recognition and natural language understanding components, context from prior turns of the dialog, user context, and/or semantic concepts and data associated with an ontology. After determining the intent, the dialog manager may take an action, such as displaying final results to the user and/or continuing the dialog with the user to satisfy their intent.
Fig. 1 is a block diagram of an operating environment 100 that includes a server 105 comprising a spoken dialog system (SDS) 110. Server 105 may include software applications, such as a personal assistant program 112 and/or a search agent 114. SDS 110 may include a dialog manager 115 and may be operative to receive user phrases, queries, and/or action requests via a network 120. Network 120 may comprise a private network (e.g., a corporate intranet), a cellular network, and/or a public network such as the Internet. Operating environment 100 may further include a plurality of network applications 150(A)-(C). Network applications 150(A)-(C) may comprise data sources, such as a stock quote service and/or a weather data service, and/or web services such as a restaurant reservation tool.
Fig. 2 is a flow chart setting forth the general stages involved in a method 200, consistent with an embodiment of the invention, for providing an understanding of user intent. Method 200 may be implemented using a computing device 300, as described in more detail below with respect to Fig. 3. Ways to implement the stages of method 200 are described in greater detail below. Method 200 begins at starting block 205 and advances to stage 210, where computing device 300 may identify a plurality of network applications. For example, SDS 110 may parse a webpage provided by each of the plurality of network applications 150(A)-(C). These webpages may comprise publicly accessible APIs that may be called remotely, such as by search agent 114. Such APIs may comprise function definitions within the webpage that identify the parameters needed to successfully call the API. As another example, an application-specific ontology may be provided by the network application.
Method 200 may then advance to stage 215, where computing device 300 may receive and/or define an ontology for each of the identified applications. For example, SDS 110 may receive a "restaurant" ontology from a restaurant rating website or a reservation site. Similarly, a "travel" ontology may be defined for a travel booking website. Each identified application, its associated ontology, and any required parameters may then be stored in a database associated with SDS 110.
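Stages 210-215 can be pictured as a small registry: each identified network application is stored with its associated ontology and the parameters its API requires. This is a minimal sketch; every name below (applications, ontology labels, keywords, parameters) is an illustrative assumption, not an identifier from the patent.

```python
# Sketch of stages 210-215: a registry mapping each identified network
# application to its ontology and required call parameters. Every name
# here is a hypothetical stand-in, not from the patent itself.
from dataclasses import dataclass

@dataclass
class NetworkApplication:
    name: str                 # e.g. a restaurant reservation service
    ontology: str             # associated ontology/domain label
    keywords: frozenset       # semantic concepts tied to the ontology
    required_params: tuple    # parameters needed to call the app's API

REGISTRY = [
    NetworkApplication("reservation_service", "restaurant",
                       frozenset({"eat", "dinner", "meal", "restaurant"}),
                       ("time_window",)),
    NetworkApplication("booking_service", "travel",
                       frozenset({"flight", "hotel", "trip"}),
                       ("destination", "date")),
]

def lookup_by_ontology(label):
    """Return the registered applications associated with an ontology label."""
    return [app for app in REGISTRY if app.ontology == label]
```

A database table keyed by ontology label would serve the same purpose; the in-memory list simply makes the stage 215 bookkeeping concrete.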
Method 200 may then advance to stage 220, where computing device 300 may receive a phrase from a user. For example, the user may speak into a cellular phone comprising user device 130 and say "let's eat together tonight."
Method 200 may then advance to stage 225, where computing device 300 may determine whether the phrase is associated with one of the defined ontologies. For example, "eat" may comprise a keyword associated with the "restaurant" ontology. Consistent with embodiments of the invention, the ontology associated with an application may comprise a shared ontology, which may be merged with a personal ontology of the user. The received phrase may be compared to semantic concepts associated with the merged ontology (and/or multiple merged ontologies) to identify the intent of the received phrase.
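The matching described in stage 225 — merging a shared application ontology with the user's personal ontology, then comparing the phrase's words to the merged semantic concepts — might look like the following sketch. The keyword sets and the union-based merge rule are assumptions for illustration.

```python
# Sketch of stage 225: merge shared and personal ontologies, then match
# a received phrase against the merged semantic concepts. The keyword
# sets and the union-based merge are illustrative assumptions.
def merge_ontologies(shared, personal):
    """Union each domain's shared concepts with the user's personal ones."""
    merged = {domain: set(concepts) for domain, concepts in shared.items()}
    for domain, concepts in personal.items():
        merged.setdefault(domain, set()).update(concepts)
    return merged

def match_ontology(phrase, merged):
    """Return the first domain whose concepts overlap the phrase's words."""
    words = set(phrase.lower().split())
    for domain, concepts in merged.items():
        if words & concepts:
            return domain
    return None

shared = {"restaurant": {"eat", "dinner", "meal"}}
personal = {"restaurant": {"trattoria"}}   # user-specific vocabulary
merged = merge_ontologies(shared, personal)
domain = match_ontology("let's eat together tonight", merged)  # -> "restaurant"
```

The personal ontology lets a user-specific word ("trattoria") trigger the same domain that the shared ontology's generic keywords do.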
If an associated semantic concept is identified, method 200 may advance to stage 227, where computing device 300 may translate the phrase, according to the merged ontology, into an agent action associated with at least one of the plurality of network applications. For example, the received phrase "let's eat together tonight" may be translated into a search action for nearby restaurants with reservations available tonight.
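The translation step in stage 227 can be sketched as a mapping from the matched ontology to an action on the associated application, carrying phrase words forward as candidate parameters. The action and parameter names here are hypothetical.

```python
# Sketch of stage 227: translate a matched phrase into an agent action
# bound to a network application. Action/parameter names are hypothetical.
def to_agent_action(domain, words):
    """Build an agent action for the application tied to the matched domain."""
    if domain == "restaurant":
        return {
            "application": "reservation_service",
            "action": "search_restaurants",
            # carry phrase words forward as candidate parameters
            "params": {"when": "tonight"} if "tonight" in words else {},
        }
    return None  # no application registered for this domain

action = to_agent_action("restaurant", ["let's", "eat", "together", "tonight"])
```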
Method 200 may then advance to stage 230, where computing device 300 may determine whether the required parameters have been received. For example, in order to perform a search against a restaurant reservation network service, a time constraint may be required. The concept "tonight" may be translated into the time constraint required by the associated application.
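Resolving "tonight" into the concrete time constraint an application requires could be sketched as below; the 18:00-23:00 window is an assumed interpretation, not one the patent specifies.

```python
# Sketch of stage 230: resolve the semantic concept "tonight" into the
# concrete time window a reservation API might require. The 18:00-23:00
# bounds are an illustrative assumption.
from datetime import date, datetime, time

def resolve_time_concept(concept, day=None):
    """Translate a time-related concept into a (start, end) datetime pair."""
    day = day or date.today()
    if concept == "tonight":
        return (datetime.combine(day, time(18, 0)),
                datetime.combine(day, time(23, 0)))
    return None  # unresolvable -> prompt the user for more info (stage 235)

window = resolve_time_concept("tonight", date(2012, 3, 30))
```

Returning `None` for an unrecognized concept is one way to signal the missing-parameter path that leads to stage 235.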
If no ontology was matched at stage 225, or if the required parameters were not found at stage 230, method 200 may advance to stage 235, where computing device 300 may request more information. For example, personal assistant program 112 may prompt the user for the needed information via a voice prompt and/or a display on user device 130.
If the required parameters were found at stage 230, method 200 may advance to stage 240, where computing device 300 may execute the translated action on the associated application. For example, server 105 may perform a remote procedure call to network application 150(A) using the parameters derived from the user's phrase.
Method 200 may then advance to stage 245, where computing device 300 may display results to the user. For example, server 105 may receive results associated with the executed action from network application 150(A). The results may then be transmitted to user device 130 for display on a screen and/or for output as audio (e.g., via text-to-speech). Method 200 may then end at stage 250.
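Putting stages 220-250 together, a minimal end-to-end sketch of method 200 under the assumptions above: match the ontology, build the action, check required parameters, and either execute or prompt for more information. All domain, keyword, and parameter names are assumed.

```python
# End-to-end sketch of method 200 (stages 220-250): match an ontology,
# build an agent action, verify required parameters, then execute or
# prompt for more information. All domain/parameter names are assumed.
def handle_phrase(phrase, ontology_keywords, required_params):
    words = set(phrase.lower().split())
    # Stage 225: find a domain whose concepts overlap the phrase's words
    domain = next((d for d, kws in ontology_keywords.items() if words & kws),
                  None)
    if domain is None:
        return ("prompt", "Which service did you mean?")            # stage 235
    # Stage 227: derive candidate parameters from the phrase
    params = {"when": "tonight"} if "tonight" in words else {}
    # Stage 230: check that every required parameter was supplied
    missing = [p for p in required_params.get(domain, []) if p not in params]
    if missing:
        return ("prompt", "Please provide: " + ", ".join(missing))  # stage 235
    return ("execute", {"domain": domain, "params": params})        # stage 240

result = handle_phrase("let's eat together tonight",
                       {"restaurant": {"eat", "meal", "dinner"}},
                       {"restaurant": ["when"]})
```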
An embodiment consistent with the invention may comprise a system for providing identification of a user intent. The system may comprise a memory storage and a processing unit coupled to the memory storage. The processing unit may be operative to identify a plurality of applications, define a domain associated with each of the plurality of applications, receive a phrase from a user, and determine whether the phrase is associated with at least one domain associated with at least one of the applications. In response to determining that the phrase comprises a context associated with at least one domain associated with at least one application, the processing unit may be operative to perform a call to the at least one application according to the phrase. Each application may be associated with a network resource, such as the search function of a webpage. Some and/or all of the applications may comprise a group of related application programming interfaces (APIs). For example, the group of APIs may be associated with different functions available at the webpage.
The processing unit may be further operative to display a result associated with performing the call to the at least one application according to the phrase, determine whether a second phrase has been received from the user, and, if so, determine whether the second phrase is associated with the same application. In response to determining that the second phrase is associated with the same application, the processing unit may be operative to perform a second call to at least one of the group of related APIs according to the second phrase, and to display a result associated with the second call.
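The follow-up behavior described here — a second phrase refining the pending action on the same application — could be sketched like so; the parameter names and the cuisine keyword list are assumptions.

```python
# Sketch of a follow-up turn: a second phrase associated with the same
# application updates the pending agent action with a second parameter.
# Parameter names and the cuisine keyword list are assumptions.
def update_action(action, second_phrase):
    """Fold a second phrase's words into the pending action's parameters."""
    words = set(second_phrase.lower().split())
    cuisines = {"italian", "thai", "sushi"}
    matched = words & cuisines
    if matched:
        # build a new action dict with the extra parameter added
        action = dict(action, params={**action["params"],
                                      "cuisine": matched.pop()})
    return action

pending = {"action": "search_restaurants", "params": {"when": "tonight"}}
updated = update_action(pending, "make it Italian")
```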
Another embodiment consistent with the invention may comprise a system for providing identification of a user intent. The system may comprise a memory storage and a processing unit coupled to the memory storage. The processing unit may be operative to receive a phrase from a user, parse the phrase into a plurality of words, identify a domain associated with the phrase according to the plurality of words, determine whether at least one of a plurality of applications is associated with the identified domain, and, if so, prepare a call to perform an action associated with the application using at least one of the plurality of words as a parameter of the call. The processing unit may be further operative to receive a second phrase, parse the second phrase into a second plurality of words, determine whether the second phrase is associated with the domain, and, if so, update the agent action associated with the application with a second parameter comprising at least one of the second plurality of words. The second phrase may be received from the same user and/or from a second user, such as when two users are participating in a conversation. In response to determining that the second phrase is not associated with the domain, the processing unit may be operative to perform the call to at least one of the plurality of APIs, receive a response associated with the performed call to the at least one of the plurality of APIs, and display the received response to the user. Consistent with embodiments of the invention, the domain associated with the phrase may comprise, for example, a work domain, a restaurant domain, a calendar domain, a travel domain, an entertainment domain, and a maps domain.
Yet another embodiment consistent with the invention may comprise a system for providing identification of a user intent. The system may comprise a memory storage and a processing unit coupled to the memory storage. The processing unit may be operative to identify a plurality of applications, define an ontology associated with each of the plurality of applications, receive a first phrase from a user, and determine whether the phrase is associated with at least one ontology associated with at least one of the plurality of applications. Each of the plurality of applications may comprise at least one required parameter. In response to determining that the phrase is associated with the at least one ontology, the processing unit may be operative to merge the defined ontology with a second ontology associated with the user, translate the first phrase, according to the merged ontology, into an agent action associated with at least one of the plurality of network-based applications, and determine whether the phrase comprises enough information to perform the agent action (e.g., the at least one required parameter associated with at least one of the plurality of applications). If so, the processing unit may be operative to perform the agent action, such as by performing a call, including the required parameter, to the associated at least one of the network applications, and display a result associated with performing the agent action.
Fig. 3 is a block diagram of a system including computing device 300. Consistent with an embodiment of the invention, the aforementioned memory storage and processing unit may be implemented in a computing device, such as computing device 300 of Fig. 3. Any suitable combination of hardware, software, or firmware may be used to implement the memory storage and processing unit. For example, the memory storage and processing unit may be implemented with computing device 300, or with any of the other computing devices 318 in combination with computing device 300. The aforementioned systems, devices, and processors are examples, and other systems, devices, and processors may comprise the aforementioned memory storage and processing unit, consistent with embodiments of the invention. Furthermore, computing device 300 may comprise an operating environment for system 100 as described above. System 100 may operate in other environments and is not limited to computing device 300.
With reference to Fig. 3, a system consistent with an embodiment of the invention may include a computing device, such as computing device 300. In a basic configuration, computing device 300 may include at least one processing unit 302 and a system memory 304. Depending on the configuration and type of computing device, system memory 304 may comprise, but is not limited to, volatile memory (e.g., random-access memory (RAM)), non-volatile memory (e.g., read-only memory (ROM)), flash memory, or any combination thereof. System memory 304 may include an operating system 305 and one or more programming modules 306, and may include personal assistant program 112. Operating system 305, for example, may be suitable for controlling the operation of computing device 300. Furthermore, embodiments of the invention may be practiced in conjunction with a graphics library, other operating systems, or any other application program, and are not limited to any particular application or system. This basic configuration is illustrated in Fig. 3 by those components within a dashed line 308.
Computing device 300 may have additional features or functionality. For example, computing device 300 may also include additional data storage devices (removable and/or non-removable), such as magnetic disks, optical disks, or tape. Such additional storage is illustrated in Fig. 3 by a removable storage 309 and a non-removable storage 310. Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. System memory 304, removable storage 309, and non-removable storage 310 are all examples of computer storage media (i.e., memory storage). Computer storage media may include, but is not limited to, RAM, ROM, electrically erasable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store information and that can be accessed by computing device 300. Any such computer storage media may be part of device 300. Computing device 300 may also have input device(s) 312, such as a keyboard, mouse, pen, sound input device, or touch input device. Output device(s) 314, such as a display, speakers, or a printer, may also be included. The aforementioned devices are examples, and others may be used.
Computing device 300 may also contain a communication connection 316 that may allow device 300 to communicate with other computing devices 318, such as over a network in a distributed computing environment (for example, an intranet or the Internet). Communication connection 316 is one example of communication media. Communication media may typically be embodied by computer-readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media. The term "modulated data signal" may describe a signal that has one or more characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media may include wired media, such as a wired network or direct-wired connection, and wireless media, such as acoustic, radio-frequency (RF), infrared, and other wireless media. The term "computer-readable media" as used herein may include both storage media and communication media.
As stated above, a number of program modules and data files, including operating system 305, may be stored in system memory 304. While executing on processing unit 302, programming modules 306 (e.g., personal assistant program 112) may perform processes including, for example, one or more of the stages of method 200 as described above. The aforementioned process is an example, and processing unit 302 may perform other processes. Other programming modules that may be used in accordance with embodiments of the present invention may include electronic mail and contacts applications, word processing applications, spreadsheet applications, database applications, slide presentation applications, drawing or computer-aided application programs, etc.
Generally, consistent with embodiments of the invention, program modules may include routines, programs, components, data structures, and other types of structures that may perform particular tasks or that may implement particular abstract data types. Moreover, embodiments of the invention may be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like. Embodiments of the invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
Furthermore, embodiments of the invention may be practiced in an electrical circuit comprising discrete electronic elements, packaged or integrated electronic chips containing logic gates, a circuit utilizing a microprocessor, or on a single chip containing electronic elements or microprocessors. Embodiments of the invention may also be practiced using other technologies capable of performing logical operations such as, for example, AND, OR, and NOT, including but not limited to mechanical, optical, fluidic, and quantum technologies. In addition, embodiments of the invention may be practiced within a general-purpose computer or in any other circuits or systems.
Embodiments of the invention, for example, may be implemented as a computer process (method), a computing system, or as an article of manufacture, such as a computer program product or computer-readable media. The computer program product may be a computer storage media readable by a computer system and encoding a computer program of instructions for executing a computer process. The computer program product may also be a propagated signal on a carrier readable by a computing system and encoding a computer program of instructions for executing a computer process. Accordingly, the present invention may be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.). In other words, embodiments of the present invention may take the form of a computer program product on a computer-usable or computer-readable storage medium having computer-usable or computer-readable program code embodied in the medium for use by or in connection with an instruction execution system. A computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples of computer-readable media (a non-exhaustive list) may include the following: an electrical connection having one or more wires, a portable computer diskette, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, and a portable compact disc read-only memory (CD-ROM). Note that the computer-usable or computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
Embodiments of the present invention are described above with reference to block diagrams and/or operational illustrations of methods, systems, and computer program products according to embodiments of the invention. The functions/acts noted in the blocks may occur out of the order noted in any flowchart. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
While certain embodiments of the invention have been described, other embodiments may exist. Furthermore, although embodiments of the present invention have been described as being associated with data stored in memory and other storage media, data can also be stored on, or read from, other types of computer-readable media, such as secondary storage devices (e.g., hard disks, floppy disks, or a CD-ROM), a carrier wave from the Internet, or other forms of RAM or ROM. Further, the stages of the disclosed methods may be modified in any manner, including by reordering stages and/or inserting or deleting stages, without departing from the invention.
All rights, including copyrights in the code included herein, are vested in and are the property of the Applicant. The Applicant retains and reserves all rights in the code included herein, and grants permission to reproduce the material only in connection with reproduction of the granted patent and for no other purpose.
While the specification includes examples, the invention's scope is indicated by the following claims. Furthermore, while the specification has been described in language specific to structural features and/or methodological acts, the claims are not limited to the features or acts described above. Rather, the specific features and acts described above are disclosed as examples of embodiments of the invention.
Claims (9)
1. A method (200) for providing identification of a user intent, the method (200) comprising:
identifying (210) a plurality of network applications (150(A)-(C));
receiving (220) a first natural language phrase from a user;
parsing the first natural language phrase into a plurality of words;
identifying, by a computing device, from the plurality of network applications, a network application associated with the first natural language phrase, using the plurality of words and ontologies associated with the plurality of network applications;
in response to identifying the network application associated with the first natural language phrase, translating the first natural language phrase into an agent action associated with the network application, wherein the agent action comprises at least one of the plurality of words as a parameter for the network application;
receiving a second natural language phrase;
parsing the second natural language phrase into a second plurality of words;
determining, based on the second plurality of words, that the second natural language phrase is associated with the network application;
in response to determining that the second natural language phrase is associated with the network application, updating the agent action to comprise at least one of the second plurality of words as a second parameter for the network application; and
performing the agent action associated with the network application according to the first natural language phrase and the second natural language phrase.
2. The method (200) of claim 1, further comprising merging a shared ontology associated with the at least one network application (150(A)-(C)) with a personal ontology associated with the user.
3. The method (200) of claim 1, further comprising defining (215) an ontology associated with each of the plurality of applications (150(A)-(C)), including identifying at least one required parameter associated with each of the plurality of applications (150(A)-(C)).
4. The method (200) of claim 3, wherein identifying, by the computing device, from the plurality of network applications, the network application associated with the first natural language phrase using the plurality of words comprises: determining whether the first natural language phrase comprises at least one required parameter associated with at least one application (150(A)-(C)).
5. The method (200) of claim 1, wherein at least one of the plurality of applications (150(A)-(C)) is associated with a plurality of related APIs, each API being associated with a shared ontology, the method further comprising:
displaying (245) a result associated with performing the action on the at least one application (150(A)-(C));
determining (220) whether a second phrase has been received from the user; and
in response to determining (220) that a second phrase has been received from the user, determining (225) whether the second phrase is associated with the shared ontology.
6. A method (200) for providing identification of a user intent, the method (200) comprising:
receiving (220) a phrase from a user;
parsing the phrase into a plurality of words;
identifying (215) an ontology associated with the phrase using the plurality of words;
determining (225) whether at least one of a plurality of applications (150(A)-(C)) is associated with the identified ontology;
in response to determining (225) that at least one of the plurality of applications (150(A)-(C)) is associated with the identified ontology, creating (227) an agent action for the at least one of the plurality of applications (150(A)-(C)) according to the received phrase, the agent action comprising at least one of the plurality of words as a parameter for the at least one of the plurality of applications;
receiving a second phrase;
parsing the second phrase into a second plurality of words;
determining, based on the second plurality of words, that the second phrase is associated with the identified ontology; and
in response to determining that the second phrase is associated with the identified ontology, updating the agent action to comprise at least one of the second plurality of words as a second parameter for the at least one of the plurality of applications.
7. The method of claim 6, further comprising:
in response to determining (225) that the second phrase is not associated with the ontology, performing (240) the agent action in at least one of the plurality of applications (150(A)-(C)).
8. The method of claim 6, wherein the ontology associated with the phrase comprises at least one of: a work domain, a restaurant domain, a calendar domain, a travel domain, an entertainment domain, and a map domain.
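The method of claims 6-8 can be sketched as follows. This is a hypothetical illustration, not the patent's code: words are matched against per-domain keyword sets (claim 8's domain list), an agent action is created from the first phrase, and a follow-up phrase in the same ontology contributes a second parameter. The names `DOMAIN_KEYWORDS`, `identify_ontology`, and `create_agent_action` are assumptions for the sketch.

```python
# Hypothetical sketch of claims 6-8: identify the ontology (domain) of a
# parsed phrase, create an agent action carrying words as parameters, and
# update the action from a second phrase in the same domain.

DOMAIN_KEYWORDS = {
    "restaurant": {"table", "dinner", "menu", "restaurant"},
    "calendar": {"meeting", "schedule", "appointment"},
    "map": {"directions", "route", "map"},
}

def identify_ontology(words):
    """Return the first domain whose keyword set overlaps the parsed words."""
    for domain, keywords in DOMAIN_KEYWORDS.items():
        if keywords & set(words):
            return domain
    return None

def create_agent_action(domain, words):
    """An agent action comprises at least one parsed word as a parameter."""
    return {"domain": domain, "parameters": [w for w in words if len(w) > 3]}

first = "book a dinner table".split()
domain = identify_ontology(first)
action = create_agent_action(domain, first)

second = "near the restaurant downtown".split()
if identify_ontology(second) == domain:
    # second phrase shares the identified ontology: add second parameters
    action["parameters"].extend(w for w in second if len(w) > 3)
```

The length filter is a stand-in for real slot extraction; the point is only the control flow: ontology identification, action creation, then conditional update.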
9. A system for providing an identification of a user intent, the system comprising:
a memory storage (304); and
a processing unit (302) coupled to the memory storage (304), wherein the processing unit is operative to:
identify (210) a plurality of network applications (150(A)-(C)), wherein each application of the plurality of network applications is associated with a web page and an ontology;
receive (220) a first phrase from a user, wherein the first phrase comprises a first natural language phrase;
parse the first phrase into a plurality of words;
determine (225), using the plurality of words, whether the first phrase is associated with at least one ontology;
in response to determining (225) that the first phrase is associated with the at least one ontology, translate (227) the first phrase, according to the at least one ontology, into an agent action associated with a network-based application from among the plurality of network applications (150(A)-(C)), wherein the agent action comprises at least one of the plurality of words as a parameter of the network application;
determine (230) whether the translation of the first phrase into the agent action comprises enough information to perform the agent action;
in response to determining (230) that the translation of the first phrase into the agent action does not comprise enough information to perform the agent action, request (235) additional information from the user for at least one element;
receive a second phrase, wherein the second phrase comprises a second natural language phrase;
parse the second phrase into a second plurality of words;
determine, based on the second plurality of words, that the second phrase is associated with an ontology associated with the network application;
in response to determining that the second phrase is associated with the at least one ontology associated with the network application, update the agent action to comprise at least one of the second plurality of words as a second parameter of the network application;
perform (240) the agent action in the network application; and
display (245) a result associated with performing the agent action.
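The distinguishing step in claim 9 is the sufficiency check: the system decides whether the translated agent action has enough information to execute, and otherwise requests more from the user. A minimal sketch under assumed names (`REQUIRED`, `translate`, `missing_elements` are illustrative, not from the patent):

```python
# Illustrative sketch of the claim-9 loop: translate a phrase into an agent
# action with slots, check which required elements are missing, and fill
# them from a follow-up phrase (standing in for the user's reply to the
# request for additional information).

REQUIRED = {"restaurant_booking": ["date", "party_size"]}

def translate(phrase_words):
    """Toy translation of parsed words into an agent action with slots."""
    action = {"name": "restaurant_booking", "slots": {}}
    if "tomorrow" in phrase_words:
        action["slots"]["date"] = "tomorrow"
    for w in phrase_words:
        if w.isdigit():
            action["slots"]["party_size"] = int(w)
    return action

def missing_elements(action):
    """Elements still needed before the action can be performed."""
    return [p for p in REQUIRED[action["name"]] if p not in action["slots"]]

action = translate("book a table tomorrow".split())
need = missing_elements(action)  # party_size is still unknown
if need:
    # a real system would prompt the user here (request 235 in the claim);
    # the second phrase simulates the user's reply
    second = "for 4 people".split()
    action["slots"].update(translate(second)["slots"])
```

Once `missing_elements` returns an empty list, the action can be performed in the network application and its result displayed, completing the claimed flow.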
Applications Claiming Priority (14)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/077,431 US10642934B2 (en) | 2011-03-31 | 2011-03-31 | Augmented conversational understanding architecture |
US13/077,233 | 2011-03-31 | ||
US13/077,368 US9298287B2 (en) | 2011-03-31 | 2011-03-31 | Combined activation for natural user interface systems |
US13/077,455 | 2011-03-31 | ||
US13/076,862 US9760566B2 (en) | 2011-03-31 | 2011-03-31 | Augmented conversational understanding agent to identify conversation context between two humans and taking an agent action thereof |
US13/077,233 US20120253789A1 (en) | 2011-03-31 | 2011-03-31 | Conversational Dialog Learning and Correction |
US13/077,396 | 2011-03-31 | ||
US13/077,303 | 2011-03-31 | ||
US13/077,455 US9244984B2 (en) | 2011-03-31 | 2011-03-31 | Location based conversational understanding |
US13/077,431 | 2011-03-31 | ||
US13/076,862 | 2011-03-31 | ||
US13/077,303 US9858343B2 (en) | 2011-03-31 | 2011-03-31 | Personalization of queries, conversations, and searches |
US13/077,396 US9842168B2 (en) | 2011-03-31 | 2011-03-31 | Task driven user intents |
US13/077,368 | 2011-03-31 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102737104A CN102737104A (en) | 2012-10-17 |
CN102737104B true CN102737104B (en) | 2017-05-24 |
Family
ID=46931884
Family Applications (8)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210087420.9A Active CN102737096B (en) | 2011-03-31 | 2012-03-29 | Location-based conversational understanding
CN201610801496.1A Active CN106383866B (en) | 2011-03-31 | 2012-03-29 | Location-based conversational understanding
CN201210090634.1A Active CN102750311B (en) | 2011-03-31 | 2012-03-30 | Augmented conversational understanding architecture
CN201210091176.3A Active CN102737101B (en) | 2011-03-31 | 2012-03-30 | Combined activation for natural user interface systems
CN201210090349.XA Active CN102737099B (en) | 2011-03-31 | 2012-03-30 | Personalization of queries, conversations, and searches
CN201210101485.4A Expired - Fee Related CN102750271B (en) | 2011-03-31 | 2012-03-31 | Conversational dialog learning and correction
CN201210093414.4A Active CN102737104B (en) | 2011-03-31 | 2012-03-31 | Task driven user intents
CN201210092263.0A Active CN102750270B (en) | 2011-03-31 | 2012-03-31 | Augmented conversational understanding agent
Family Applications Before (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210087420.9A Active CN102737096B (en) | 2011-03-31 | 2012-03-29 | Location-based conversational understanding
CN201610801496.1A Active CN106383866B (en) | 2011-03-31 | 2012-03-29 | Location-based conversational understanding
CN201210090634.1A Active CN102750311B (en) | 2011-03-31 | 2012-03-30 | Augmented conversational understanding architecture
CN201210091176.3A Active CN102737101B (en) | 2011-03-31 | 2012-03-30 | Combined activation for natural user interface systems
CN201210090349.XA Active CN102737099B (en) | 2011-03-31 | 2012-03-30 | Personalization of queries, conversations, and searches
CN201210101485.4A Expired - Fee Related CN102750271B (en) | 2011-03-31 | 2012-03-31 | Conversational dialog learning and correction
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210092263.0A Active CN102750270B (en) | 2011-03-31 | 2012-03-31 | Augmented conversational understanding agent
Country Status (5)
Country | Link |
---|---|
EP (6) | EP2691877A4 (en) |
JP (4) | JP6087899B2 (en) |
KR (3) | KR101922744B1 (en) |
CN (8) | CN102737096B (en) |
WO (7) | WO2012135210A2 (en) |
Families Citing this family (205)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8677377B2 (en) | 2005-09-08 | 2014-03-18 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US8977255B2 (en) | 2007-04-03 | 2015-03-10 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US10002189B2 (en) | 2007-12-20 | 2018-06-19 | Apple Inc. | Method and apparatus for searching using an active ontology |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US8996376B2 (en) | 2008-04-05 | 2015-03-31 | Apple Inc. | Intelligent text-to-speech conversion |
US20100030549A1 (en) | 2008-07-31 | 2010-02-04 | Lee Michael M | Mobile device having human language translation capability with positional feedback |
US8676904B2 (en) | 2008-10-02 | 2014-03-18 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US20120311585A1 (en) | 2011-06-03 | 2012-12-06 | Apple Inc. | Organizing task items that represent tasks to perform |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US8682667B2 (en) | 2010-02-25 | 2014-03-25 | Apple Inc. | User profiling for selecting user specific voice input processing information |
US10032127B2 (en) | 2011-02-18 | 2018-07-24 | Nuance Communications, Inc. | Methods and apparatus for determining a clinician's intent to order an item |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US10642934B2 (en) | 2011-03-31 | 2020-05-05 | Microsoft Technology Licensing, Llc | Augmented conversational understanding architecture |
US9760566B2 (en) | 2011-03-31 | 2017-09-12 | Microsoft Technology Licensing, Llc | Augmented conversational understanding agent to identify conversation context between two humans and taking an agent action thereof |
US9842168B2 (en) | 2011-03-31 | 2017-12-12 | Microsoft Technology Licensing, Llc | Task driven user intents |
US9064006B2 (en) | 2012-08-23 | 2015-06-23 | Microsoft Technology Licensing, Llc | Translating natural language utterances to keyword search queries |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US10134385B2 (en) | 2012-03-02 | 2018-11-20 | Apple Inc. | Systems and methods for name pronunciation |
US10417037B2 (en) | 2012-05-15 | 2019-09-17 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US9721563B2 (en) | 2012-06-08 | 2017-08-01 | Apple Inc. | Name recognition system |
CN104704797B (en) | 2012-08-10 | 2018-08-10 | 纽昂斯通讯公司 | Virtual protocol communication for electronic equipment |
US9547647B2 (en) | 2012-09-19 | 2017-01-17 | Apple Inc. | Voice-based media searching |
US10199051B2 (en) | 2013-02-07 | 2019-02-05 | Apple Inc. | Voice trigger for a digital assistant |
EP2946322A1 (en) * | 2013-03-01 | 2015-11-25 | Nuance Communications, Inc. | Methods and apparatus for determining a clinician's intent to order an item |
US10652394B2 (en) | 2013-03-14 | 2020-05-12 | Apple Inc. | System and method for processing voicemail |
US9436287B2 (en) * | 2013-03-15 | 2016-09-06 | Qualcomm Incorporated | Systems and methods for switching processing modes using gestures |
US10748529B1 (en) | 2013-03-15 | 2020-08-18 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
WO2014197334A2 (en) | 2013-06-07 | 2014-12-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
WO2014197335A1 (en) | 2013-06-08 | 2014-12-11 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
CN110442699A (en) | 2013-06-09 | 2019-11-12 | 苹果公司 | Operate method, computer-readable medium, electronic equipment and the system of digital assistants |
US9728184B2 (en) | 2013-06-18 | 2017-08-08 | Microsoft Technology Licensing, Llc | Restructuring deep neural network acoustic models |
US9589565B2 (en) | 2013-06-21 | 2017-03-07 | Microsoft Technology Licensing, Llc | Environmentally aware dialog policies and response generation |
US9311298B2 (en) | 2013-06-21 | 2016-04-12 | Microsoft Technology Licensing, Llc | Building conversational understanding systems using a toolset |
US10296160B2 (en) | 2013-12-06 | 2019-05-21 | Apple Inc. | Method for extracting salient dialog usage from live data |
CN104714954A (en) * | 2013-12-13 | 2015-06-17 | 中国电信股份有限公司 | Information searching method and system based on context understanding |
US20150170053A1 (en) * | 2013-12-13 | 2015-06-18 | Microsoft Corporation | Personalized machine learning models |
US10534623B2 (en) | 2013-12-16 | 2020-01-14 | Nuance Communications, Inc. | Systems and methods for providing a virtual assistant |
US10015770B2 (en) | 2014-03-24 | 2018-07-03 | International Business Machines Corporation | Social proximity networks for mobile phones |
US9529794B2 (en) | 2014-03-27 | 2016-12-27 | Microsoft Technology Licensing, Llc | Flexible schema for language model customization |
US20150278370A1 (en) * | 2014-04-01 | 2015-10-01 | Microsoft Corporation | Task completion for natural language input |
US10111099B2 (en) | 2014-05-12 | 2018-10-23 | Microsoft Technology Licensing, Llc | Distributing content in managed wireless distribution networks |
US9874914B2 (en) | 2014-05-19 | 2018-01-23 | Microsoft Technology Licensing, Llc | Power management contracts for accessory devices |
US9633004B2 (en) | 2014-05-30 | 2017-04-25 | Apple Inc. | Better resolution when referencing to concepts |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US9430463B2 (en) | 2014-05-30 | 2016-08-30 | Apple Inc. | Exemplar-based natural language processing |
EP3480811A1 (en) | 2014-05-30 | 2019-05-08 | Apple Inc. | Multi-command single utterance input method |
US9355640B2 (en) * | 2014-06-04 | 2016-05-31 | Google Inc. | Invoking action responsive to co-presence determination |
US9717006B2 (en) | 2014-06-23 | 2017-07-25 | Microsoft Technology Licensing, Llc | Device quarantine in a wireless network |
JP6275569B2 (en) * | 2014-06-27 | 2018-02-07 | 株式会社東芝 | Dialog apparatus, method and program |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9811352B1 (en) | 2014-07-11 | 2017-11-07 | Google Inc. | Replaying user input actions using screen capture images |
US10146409B2 (en) * | 2014-08-29 | 2018-12-04 | Microsoft Technology Licensing, Llc | Computerized dynamic splitting of interaction across multiple content |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
KR102188268B1 (en) * | 2014-10-08 | 2020-12-08 | 엘지전자 주식회사 | Mobile terminal and method for controlling the same |
EP3210096B1 (en) * | 2014-10-21 | 2019-05-15 | Robert Bosch GmbH | Method and system for automation of response selection and composition in dialog systems |
KR102329333B1 (en) * | 2014-11-12 | 2021-11-23 | 삼성전자주식회사 | Query processing apparatus and method |
US9836452B2 (en) | 2014-12-30 | 2017-12-05 | Microsoft Technology Licensing, Llc | Discriminating ambiguous expressions to enhance user experience |
US10713005B2 (en) | 2015-01-05 | 2020-07-14 | Google Llc | Multimodal state circulation |
US10572810B2 (en) | 2015-01-07 | 2020-02-25 | Microsoft Technology Licensing, Llc | Managing user interaction for input understanding determinations |
WO2016129767A1 (en) * | 2015-02-13 | 2016-08-18 | 주식회사 팔락성 | Online site linking method |
US10152299B2 (en) | 2015-03-06 | 2018-12-11 | Apple Inc. | Reducing response latency of intelligent automated assistants |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US10460227B2 (en) | 2015-05-15 | 2019-10-29 | Apple Inc. | Virtual assistant in a communication session |
US10200824B2 (en) | 2015-05-27 | 2019-02-05 | Apple Inc. | Systems and methods for proactively identifying and surfacing relevant content on a touch-sensitive device |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US9578173B2 (en) | 2015-06-05 | 2017-02-21 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US9792281B2 (en) * | 2015-06-15 | 2017-10-17 | Microsoft Technology Licensing, Llc | Contextual language generation by leveraging language understanding |
US20160378747A1 (en) | 2015-06-29 | 2016-12-29 | Apple Inc. | Virtual assistant for media playback |
US10249297B2 (en) | 2015-07-13 | 2019-04-02 | Microsoft Technology Licensing, Llc | Propagating conversational alternatives using delayed hypothesis binding |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10331312B2 (en) | 2015-09-08 | 2019-06-25 | Apple Inc. | Intelligent automated assistant in a media environment |
US10740384B2 (en) | 2015-09-08 | 2020-08-11 | Apple Inc. | Intelligent automated assistant for media search and playback |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
KR20170033722A (en) * | 2015-09-17 | 2017-03-27 | 삼성전자주식회사 | Apparatus and method for processing user's locution, and dialog management apparatus |
US10262654B2 (en) * | 2015-09-24 | 2019-04-16 | Microsoft Technology Licensing, Llc | Detecting actionable items in a conversation among participants |
US10970646B2 (en) * | 2015-10-01 | 2021-04-06 | Google Llc | Action suggestions for user-selected content |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10956666B2 (en) | 2015-11-09 | 2021-03-23 | Apple Inc. | Unconventional virtual assistant interactions |
KR102393928B1 (en) | 2015-11-10 | 2022-05-04 | 삼성전자주식회사 | User terminal apparatus for recommanding a reply message and method thereof |
WO2017090954A1 (en) * | 2015-11-24 | 2017-06-01 | Samsung Electronics Co., Ltd. | Electronic device and operating method thereof |
KR102502569B1 (en) | 2015-12-02 | 2023-02-23 | 삼성전자주식회사 | Method and apparuts for system resource managemnet |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US9905248B2 (en) | 2016-02-29 | 2018-02-27 | International Business Machines Corporation | Inferring user intentions based on user conversation data and spatio-temporal data |
US9978396B2 (en) | 2016-03-16 | 2018-05-22 | International Business Machines Corporation | Graphical display of phone conversations |
US10587708B2 (en) | 2016-03-28 | 2020-03-10 | Microsoft Technology Licensing, Llc | Multi-modal conversational intercom |
US11487512B2 (en) | 2016-03-29 | 2022-11-01 | Microsoft Technology Licensing, Llc | Generating a services application |
US10158593B2 (en) * | 2016-04-08 | 2018-12-18 | Microsoft Technology Licensing, Llc | Proactive intelligent personal assistant |
US10945129B2 (en) * | 2016-04-29 | 2021-03-09 | Microsoft Technology Licensing, Llc | Facilitating interaction among digital personal assistants |
US10409876B2 (en) * | 2016-05-26 | 2019-09-10 | Microsoft Technology Licensing, Llc. | Intelligent capture, storage, and retrieval of information for task completion |
EP3465463A1 (en) * | 2016-06-03 | 2019-04-10 | Maluuba Inc. | Natural language generation in a spoken dialogue system |
US11227589B2 (en) | 2016-06-06 | 2022-01-18 | Apple Inc. | Intelligent list reading |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10282218B2 (en) * | 2016-06-07 | 2019-05-07 | Google Llc | Nondeterministic task initiation by a personal assistant module |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
DK179309B1 (en) | 2016-06-09 | 2018-04-23 | Apple Inc | Intelligent automated assistant in a home environment |
US10586535B2 (en) | 2016-06-10 | 2020-03-10 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
DK179343B1 (en) | 2016-06-11 | 2018-05-14 | Apple Inc | Intelligent task discovery |
DK201670540A1 (en) | 2016-06-11 | 2018-01-08 | Apple Inc | Application integration with a digital assistant |
DK179415B1 (en) | 2016-06-11 | 2018-06-14 | Apple Inc | Intelligent device arbitration and control |
US10216269B2 (en) * | 2016-06-21 | 2019-02-26 | GM Global Technology Operations LLC | Apparatus and method for determining intent of user based on gaze information |
US10509795B2 (en) * | 2016-08-23 | 2019-12-17 | Illumina, Inc. | Semantic distance systems and methods for determining related ontological data |
US10474753B2 (en) | 2016-09-07 | 2019-11-12 | Apple Inc. | Language identification using recurrent neural networks |
US10446137B2 (en) | 2016-09-07 | 2019-10-15 | Microsoft Technology Licensing, Llc | Ambiguity resolving conversational understanding system |
US10503767B2 (en) * | 2016-09-13 | 2019-12-10 | Microsoft Technology Licensing, Llc | Computerized natural language query intent dispatching |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US9940390B1 (en) * | 2016-09-27 | 2018-04-10 | Microsoft Technology Licensing, Llc | Control system using scoped search and conversational interface |
CN115858730A (en) * | 2016-09-29 | 2023-03-28 | 微软技术许可有限责任公司 | Conversational data analysis |
US10535005B1 (en) | 2016-10-26 | 2020-01-14 | Google Llc | Providing contextual actions for mobile onscreen content |
JP6697373B2 (en) | 2016-12-06 | 2020-05-20 | カシオ計算機株式会社 | Sentence generating device, sentence generating method and program |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
EP3552114A4 (en) * | 2017-02-08 | 2020-05-20 | Semantic Machines, Inc. | Natural language content generator |
US10643601B2 (en) * | 2017-02-09 | 2020-05-05 | Semantic Machines, Inc. | Detection mechanism for automated dialog systems |
WO2018156978A1 (en) | 2017-02-23 | 2018-08-30 | Semantic Machines, Inc. | Expandable dialogue system |
CN110301004B (en) * | 2017-02-23 | 2023-08-08 | 微软技术许可有限责任公司 | Extensible dialog system |
US10798027B2 (en) * | 2017-03-05 | 2020-10-06 | Microsoft Technology Licensing, Llc | Personalized communications using semantic memory |
US10237209B2 (en) * | 2017-05-08 | 2019-03-19 | Google Llc | Initializing a conversation with an automated agent via selectable graphical element |
US10417266B2 (en) | 2017-05-09 | 2019-09-17 | Apple Inc. | Context-aware ranking of intelligent response suggestions |
DK201770383A1 (en) | 2017-05-09 | 2018-12-14 | Apple Inc. | User interface for correcting recognition errors |
DK201770439A1 (en) | 2017-05-11 | 2018-12-13 | Apple Inc. | Offline personal assistant |
US10726832B2 (en) | 2017-05-11 | 2020-07-28 | Apple Inc. | Maintaining privacy of personal information |
DK180048B1 (en) | 2017-05-11 | 2020-02-04 | Apple Inc. | MAINTAINING THE DATA PROTECTION OF PERSONAL INFORMATION |
US10395654B2 (en) | 2017-05-11 | 2019-08-27 | Apple Inc. | Text normalization based on a data-driven learning network |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
DK201770428A1 (en) | 2017-05-12 | 2019-02-18 | Apple Inc. | Low-latency intelligent automated assistant |
DK179496B1 (en) | 2017-05-12 | 2019-01-15 | Apple Inc. | USER-SPECIFIC Acoustic Models |
DK179745B1 (en) | 2017-05-12 | 2019-05-01 | Apple Inc. | SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT |
DK201770432A1 (en) | 2017-05-15 | 2018-12-21 | Apple Inc. | Hierarchical belief states for digital assistants |
DK201770431A1 (en) | 2017-05-15 | 2018-12-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US20180336892A1 (en) | 2017-05-16 | 2018-11-22 | Apple Inc. | Detecting a trigger of a digital assistant |
DK179549B1 (en) | 2017-05-16 | 2019-02-12 | Apple Inc. | Far-field extension for digital assistant services |
US10311144B2 (en) | 2017-05-16 | 2019-06-04 | Apple Inc. | Emoji word sense disambiguation |
US10403278B2 (en) | 2017-05-16 | 2019-09-03 | Apple Inc. | Methods and systems for phonetic matching in digital assistant services |
US20180336275A1 (en) | 2017-05-16 | 2018-11-22 | Apple Inc. | Intelligent automated assistant for media exploration |
US10664533B2 (en) * | 2017-05-24 | 2020-05-26 | Lenovo (Singapore) Pte. Ltd. | Systems and methods to determine response cue for digital assistant based on context |
US10679192B2 (en) * | 2017-05-25 | 2020-06-09 | Microsoft Technology Licensing, Llc | Assigning tasks and monitoring task performance based on context extracted from a shared contextual graph |
US10657328B2 (en) | 2017-06-02 | 2020-05-19 | Apple Inc. | Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling |
US10742435B2 (en) * | 2017-06-29 | 2020-08-11 | Google Llc | Proactive provision of new content to group chat participants |
US11132499B2 (en) | 2017-08-28 | 2021-09-28 | Microsoft Technology Licensing, Llc | Robust expandable dialogue system |
US10445429B2 (en) | 2017-09-21 | 2019-10-15 | Apple Inc. | Natural language understanding using vocabularies with compressed serialized tries |
US10755051B2 (en) | 2017-09-29 | 2020-08-25 | Apple Inc. | Rule-based natural language processing |
US10546023B2 (en) | 2017-10-03 | 2020-01-28 | Google Llc | Providing command bundle suggestions for an automated assistant |
US10636424B2 (en) | 2017-11-30 | 2020-04-28 | Apple Inc. | Multi-turn canned dialog |
US11341422B2 (en) | 2017-12-15 | 2022-05-24 | SHANGHAI XIAOI ROBOT TECHNOLOGY CO., LTD. | Multi-round questioning and answering methods, methods for generating a multi-round questioning and answering system, and methods for modifying the system |
CN110019718B (en) * | 2017-12-15 | 2021-04-09 | 上海智臻智能网络科技股份有限公司 | Method for modifying multi-turn question-answering system, terminal equipment and storage medium |
US10733982B2 (en) | 2018-01-08 | 2020-08-04 | Apple Inc. | Multi-directional dialog |
US10839160B2 (en) * | 2018-01-19 | 2020-11-17 | International Business Machines Corporation | Ontology-based automatic bootstrapping of state-based dialog systems |
US10733375B2 (en) | 2018-01-31 | 2020-08-04 | Apple Inc. | Knowledge-based framework for improving natural language understanding |
US10789959B2 (en) | 2018-03-02 | 2020-09-29 | Apple Inc. | Training speaker recognition models for digital assistants |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
KR102635811B1 (en) * | 2018-03-19 | 2024-02-13 | 삼성전자 주식회사 | System and control method of system for processing sound data |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US10909331B2 (en) | 2018-03-30 | 2021-02-02 | Apple Inc. | Implicit identification of translation payload with neural machine translation |
US10685075B2 (en) | 2018-04-11 | 2020-06-16 | Motorola Solutions, Inc. | System and method for tailoring an electronic digital assistant query as a function of captured multi-party voice dialog and an electronically stored multi-party voice-interaction template |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
US10984780B2 (en) | 2018-05-21 | 2021-04-20 | Apple Inc. | Global semantic word embeddings using bi-directional recurrent neural networks |
DK179822B1 (en) | 2018-06-01 | 2019-07-12 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
DK180639B1 (en) | 2018-06-01 | 2021-11-04 | Apple Inc | DISABILITY OF ATTENTION-ATTENTIVE VIRTUAL ASSISTANT |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
DK201870355A1 (en) | 2018-06-01 | 2019-12-16 | Apple Inc. | Virtual assistant operation in multi-device environments |
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
US11076039B2 (en) | 2018-06-03 | 2021-07-27 | Apple Inc. | Accelerated task performance |
WO2020044990A1 (en) | 2018-08-29 | 2020-03-05 | パナソニックIpマネジメント株式会社 | Power conversion system and power storage system |
US11010561B2 (en) | 2018-09-27 | 2021-05-18 | Apple Inc. | Sentiment prediction from textual data |
US10839159B2 (en) | 2018-09-28 | 2020-11-17 | Apple Inc. | Named entity normalization in a spoken dialog system |
US11170166B2 (en) | 2018-09-28 | 2021-11-09 | Apple Inc. | Neural typographical error modeling via generative adversarial networks |
US11462215B2 (en) | 2018-09-28 | 2022-10-04 | Apple Inc. | Multi-modal inputs for voice commands |
US11475898B2 (en) | 2018-10-26 | 2022-10-18 | Apple Inc. | Low-latency multi-speaker speech recognition |
US11638059B2 (en) | 2019-01-04 | 2023-04-25 | Apple Inc. | Content playback on multiple devices |
CN111428721A (en) * | 2019-01-10 | 2020-07-17 | 北京字节跳动网络技术有限公司 | Method, device and equipment for determining word paraphrases and storage medium |
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems |
US11307752B2 (en) | 2019-05-06 | 2022-04-19 | Apple Inc. | User configurable task triggers |
US11475884B2 (en) | 2019-05-06 | 2022-10-18 | Apple Inc. | Reducing digital assistant latency when a language is incorrectly determined |
DK201970509A1 (en) | 2019-05-06 | 2021-01-15 | Apple Inc | Spoken notifications |
US11423908B2 (en) | 2019-05-06 | 2022-08-23 | Apple Inc. | Interpreting spoken requests |
US11140099B2 (en) | 2019-05-21 | 2021-10-05 | Apple Inc. | Providing message response suggestions |
DK180129B1 (en) | 2019-05-31 | 2020-06-02 | Apple Inc. | User activity shortcut suggestions |
US11496600B2 (en) | 2019-05-31 | 2022-11-08 | Apple Inc. | Remote execution of machine-learned models |
US11289073B2 (en) | 2019-05-31 | 2022-03-29 | Apple Inc. | Device text to speech |
DK201970510A1 (en) | 2019-05-31 | 2021-02-11 | Apple Inc | Voice identification in digital assistant systems |
US11468890B2 (en) | 2019-06-01 | 2022-10-11 | Apple Inc. | Methods and user interfaces for voice-based control of electronic devices |
US11360641B2 (en) | 2019-06-01 | 2022-06-14 | Apple Inc. | Increasing the relevance of new available information |
WO2021056255A1 (en) | 2019-09-25 | 2021-04-01 | Apple Inc. | Text detection using global geometry estimators |
US11043220B1 (en) | 2020-05-11 | 2021-06-22 | Apple Inc. | Digital assistant hardware abstraction |
US11061543B1 (en) | 2020-05-11 | 2021-07-13 | Apple Inc. | Providing relevant data items based on context |
US11755276B2 (en) | 2020-05-12 | 2023-09-12 | Apple Inc. | Reducing description length based on confidence |
US11490204B2 (en) | 2020-07-20 | 2022-11-01 | Apple Inc. | Multi-device audio adjustment coordination |
US11438683B2 (en) | 2020-07-21 | 2022-09-06 | Apple Inc. | User identification using headphones |
US11783827B2 (en) | 2020-11-06 | 2023-10-10 | Apple Inc. | Determining suggested subsequent user actions during digital assistant interaction |
EP4174848A1 (en) * | 2021-10-29 | 2023-05-03 | Televic Rail NV | Improved speech to text method and system |
CN116644810B (en) * | 2023-05-06 | 2024-04-05 | State Grid Jibei Electric Power Co., Ltd. Information and Telecommunication Branch | Power grid fault risk treatment method and device based on knowledge graph
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101499277A (en) * | 2008-07-25 | 2009-08-05 | Institute of Computing Technology, Chinese Academy of Sciences | Service intelligent navigation method and system
Family Cites Families (71)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5265014A (en) * | 1990-04-10 | 1993-11-23 | Hewlett-Packard Company | Multi-modal user interface |
US5748974A (en) * | 1994-12-13 | 1998-05-05 | International Business Machines Corporation | Multimodal natural language interface for cross-application tasks |
US5970446A (en) * | 1997-11-25 | 1999-10-19 | At&T Corp | Selective noise/channel/coding models and recognizers for automatic speech recognition |
CN1313972A (en) * | 1998-08-24 | 2001-09-19 | BCL Computers Inc. | Adaptive natural language interface
US6499013B1 (en) * | 1998-09-09 | 2002-12-24 | One Voice Technologies, Inc. | Interactive user interface using speech recognition and natural language processing |
US6332120B1 (en) * | 1999-04-20 | 2001-12-18 | Solana Technology Development Corporation | Broadcast speech recognition system for keyword monitoring |
JP3530109B2 (en) * | 1999-05-31 | 2004-05-24 | Nippon Telegraph and Telephone Corporation | Voice interactive information retrieval method, apparatus, and recording medium for large-scale information database
CA2375222A1 (en) * | 1999-06-01 | 2000-12-07 | Geoffrey M. Jacquez | Help system for a computer related application |
US6598039B1 (en) * | 1999-06-08 | 2003-07-22 | Albert-Inc. S.A. | Natural language interface for searching database |
JP3765202B2 (en) * | 1999-07-09 | 2006-04-12 | Nissan Motor Co., Ltd. | Interactive information search apparatus, interactive information search method using computer, and computer-readable medium recording program for interactive information search processing
JP2001125896A (en) * | 1999-10-26 | 2001-05-11 | Victor Co Of Japan Ltd | Natural language interactive system |
US7050977B1 (en) * | 1999-11-12 | 2006-05-23 | Phoenix Solutions, Inc. | Speech-enabled server for internet website and method |
JP2002024285A (en) * | 2000-06-30 | 2002-01-25 | Sanyo Electric Co Ltd | Method and device for user support |
JP2002082748A (en) * | 2000-09-06 | 2002-03-22 | Sanyo Electric Co Ltd | User support device |
US7197120B2 (en) * | 2000-12-22 | 2007-03-27 | Openwave Systems Inc. | Method and system for facilitating mediated communication |
GB2372864B (en) * | 2001-02-28 | 2005-09-07 | Vox Generation Ltd | Spoken language interface |
JP2003115951A (en) * | 2001-10-09 | 2003-04-18 | Casio Comput Co Ltd | Topic information providing system and topic information providing method |
US7224981B2 (en) * | 2002-06-20 | 2007-05-29 | Intel Corporation | Speech recognition of mobile devices |
US7693720B2 (en) * | 2002-07-15 | 2010-04-06 | Voicebox Technologies, Inc. | Mobile systems and methods for responding to natural language speech utterance |
EP1411443A1 (en) * | 2002-10-18 | 2004-04-21 | Hewlett Packard Company, a Delaware Corporation | Context filter |
JP2004212641A (en) * | 2002-12-27 | 2004-07-29 | Toshiba Corp | Voice input system and terminal device equipped with voice input system |
JP2004328181A (en) * | 2003-04-23 | 2004-11-18 | Sharp Corp | Telephone and telephone network system |
JP4441782B2 (en) * | 2003-05-14 | 2010-03-31 | Nippon Telegraph and Telephone Corporation | Information presentation method and information presentation apparatus
JP2005043461A (en) * | 2003-07-23 | 2005-02-17 | Canon Inc | Voice recognition method and voice recognition device |
KR20050032649A (en) * | 2003-10-02 | 2005-04-08 | Izmaker Co., Ltd. | Method and system for teaching artificial life
US7747601B2 (en) * | 2006-08-14 | 2010-06-29 | Inquira, Inc. | Method and apparatus for identifying and classifying query intent |
US7720674B2 (en) * | 2004-06-29 | 2010-05-18 | Sap Ag | Systems and methods for processing natural language queries |
JP4434972B2 (en) * | 2005-01-21 | 2010-03-17 | NEC Corporation | Information providing system, information providing method and program thereof
EP1686495B1 (en) * | 2005-01-31 | 2011-05-18 | Ontoprise GmbH | Mapping web services to ontologies |
GB0502259D0 (en) * | 2005-02-03 | 2005-03-09 | British Telecomm | Document searching tool and method |
CN101120341A (en) * | 2005-02-06 | 2008-02-06 | Linguit Ltd. | Method and equipment for performing mobile information access using natural language
US20060206333A1 (en) * | 2005-03-08 | 2006-09-14 | Microsoft Corporation | Speaker-dependent dialog adaptation |
US7409344B2 (en) * | 2005-03-08 | 2008-08-05 | Sap Aktiengesellschaft | XML based architecture for controlling user interfaces with contextual voice commands |
WO2006108061A2 (en) * | 2005-04-05 | 2006-10-12 | The Board Of Trustees Of Leland Stanford Junior University | Methods, software, and systems for knowledge base coordination |
US7991607B2 (en) * | 2005-06-27 | 2011-08-02 | Microsoft Corporation | Translation and capture architecture for output of conversational utterances |
US7640160B2 (en) * | 2005-08-05 | 2009-12-29 | Voicebox Technologies, Inc. | Systems and methods for responding to natural language speech utterance |
US7620549B2 (en) * | 2005-08-10 | 2009-11-17 | Voicebox Technologies, Inc. | System and method of supporting adaptive misrecognition in conversational speech |
US7627466B2 (en) * | 2005-11-09 | 2009-12-01 | Microsoft Corporation | Natural language interface for driving adaptive scenarios |
US7822699B2 (en) * | 2005-11-30 | 2010-10-26 | Microsoft Corporation | Adaptive semantic reasoning engine |
US20070136222A1 (en) * | 2005-12-09 | 2007-06-14 | Microsoft Corporation | Question and answer architecture for reasoning and clarifying intentions, goals, and needs from contextual clues and content |
US20070143410A1 (en) * | 2005-12-16 | 2007-06-21 | International Business Machines Corporation | System and method for defining and translating chat abbreviations |
CN100373313C (en) * | 2006-01-12 | 2008-03-05 | Guangdong Vtron Technologies Co., Ltd. | Intelligent recognition coding method for interactive input apparatus
US8209407B2 (en) * | 2006-02-10 | 2012-06-26 | The United States Of America, As Represented By The Secretary Of The Navy | System and method for web service discovery and access |
CA2652150A1 (en) * | 2006-06-13 | 2007-12-21 | Microsoft Corporation | Search engine dash-board |
US20080005068A1 (en) * | 2006-06-28 | 2008-01-03 | Microsoft Corporation | Context-based search, retrieval, and awareness |
US8204739B2 (en) * | 2008-04-15 | 2012-06-19 | Mobile Technologies, Llc | System and methods for maintaining speech-to-speech translation in the field |
CN1963752A (en) * | 2006-11-28 | 2007-05-16 | Li Bohang | Man-machine interactive interface technique of electronic apparatus based on natural language
EP2122542B1 (en) * | 2006-12-08 | 2017-11-01 | Medhat Moussa | Architecture, system and method for artificial neural network implementation |
US20080172359A1 (en) * | 2007-01-11 | 2008-07-17 | Motorola, Inc. | Method and apparatus for providing contextual support to a monitored communication |
US20080172659A1 (en) | 2007-01-17 | 2008-07-17 | Microsoft Corporation | Harmonizing a test file and test configuration in a revision control system |
US20080201434A1 (en) * | 2007-02-16 | 2008-08-21 | Microsoft Corporation | Context-Sensitive Searches and Functionality for Instant Messaging Applications |
US20090076917A1 (en) * | 2007-08-22 | 2009-03-19 | Victor Roditis Jablokov | Facilitating presentation of ads relating to words of a message |
US7720856B2 (en) * | 2007-04-09 | 2010-05-18 | Sap Ag | Cross-language searching |
US8762143B2 (en) * | 2007-05-29 | 2014-06-24 | At&T Intellectual Property Ii, L.P. | Method and apparatus for identifying acoustic background environments based on time and speed to enhance automatic speech recognition |
US7788276B2 (en) * | 2007-08-22 | 2010-08-31 | Yahoo! Inc. | Predictive stemming for web search with statistical machine translation models |
CA2698105C (en) * | 2007-08-31 | 2016-07-05 | Microsoft Corporation | Identification of semantic relationships within reported speech |
US8165886B1 (en) * | 2007-10-04 | 2012-04-24 | Great Northern Research LLC | Speech interface system and method for control and interaction with applications on a computing system |
US8504621B2 (en) * | 2007-10-26 | 2013-08-06 | Microsoft Corporation | Facilitating a decision-making process |
JP2009116733A (en) * | 2007-11-08 | 2009-05-28 | Nec Corp | Application retrieval system, application retrieval method, monitor terminal, retrieval server, and program |
JP5158635B2 (en) * | 2008-02-28 | 2013-03-06 | International Business Machines Corporation | Method, system, and apparatus for personal service support
US20090234655A1 (en) * | 2008-03-13 | 2009-09-17 | Jason Kwon | Mobile electronic device with active speech recognition |
US8874443B2 (en) * | 2008-08-27 | 2014-10-28 | Robert Bosch Gmbh | System and method for generating natural language phrases from user utterances in dialog systems |
JP2010128665A (en) * | 2008-11-26 | 2010-06-10 | Kyocera Corp | Information terminal and conversation assisting program |
JP2010145262A (en) * | 2008-12-19 | 2010-07-01 | Pioneer Electronic Corp | Navigation apparatus |
US8326637B2 (en) * | 2009-02-20 | 2012-12-04 | Voicebox Technologies, Inc. | System and method for processing multi-modal device interactions in a natural language voice services environment |
JP2010230918A (en) * | 2009-03-26 | 2010-10-14 | Fujitsu Ten Ltd | Retrieving device |
US8700665B2 (en) * | 2009-04-27 | 2014-04-15 | Avaya Inc. | Intelligent conference call information agents |
US20100281435A1 (en) * | 2009-04-30 | 2010-11-04 | At&T Intellectual Property I, L.P. | System and method for multimodal interaction using robust gesture processing |
KR101622111B1 (en) * | 2009-12-11 | 2016-05-18 | Samsung Electronics Co., Ltd. | Dialog system and conversational method thereof
KR101007336B1 (en) * | 2010-06-25 | 2011-01-13 | Korea Institute of Science and Technology Information | Personalizing service system and method based on ontology
US20120253789A1 (en) * | 2011-03-31 | 2012-10-04 | Microsoft Corporation | Conversational Dialog Learning and Correction |
2012
- 2012-03-27 KR KR1020137025540A patent/KR101922744B1/en active IP Right Grant
- 2012-03-27 EP EP12765896.1A patent/EP2691877A4/en not_active Withdrawn
- 2012-03-27 KR KR20137025578A patent/KR20140014200A/en not_active Application Discontinuation
- 2012-03-27 EP EP12764494.6A patent/EP2691870A4/en not_active Ceased
- 2012-03-27 JP JP2014502723A patent/JP6087899B2/en not_active Expired - Fee Related
- 2012-03-27 KR KR1020137025586A patent/KR101963915B1/en active IP Right Grant
- 2012-03-27 EP EP12763913.6A patent/EP2691885A4/en not_active Ceased
- 2012-03-27 JP JP2014502718A patent/JP6105552B2/en active Active
- 2012-03-27 WO PCT/US2012/030730 patent/WO2012135210A2/en unknown
- 2012-03-27 EP EP12763866.6A patent/EP2691949A4/en not_active Ceased
- 2012-03-27 WO PCT/US2012/030751 patent/WO2012135226A1/en unknown
- 2012-03-27 JP JP2014502721A patent/JP2014512046A/en active Pending
- 2012-03-27 WO PCT/US2012/030636 patent/WO2012135157A2/en unknown
- 2012-03-27 WO PCT/US2012/030757 patent/WO2012135229A2/en active Application Filing
- 2012-03-27 WO PCT/US2012/030740 patent/WO2012135218A2/en active Application Filing
- 2012-03-29 CN CN201210087420.9A patent/CN102737096B/en active Active
- 2012-03-29 CN CN201610801496.1A patent/CN106383866B/en active Active
- 2012-03-30 EP EP12764853.3A patent/EP2691875A4/en not_active Ceased
- 2012-03-30 CN CN201210090634.1A patent/CN102750311B/en active Active
- 2012-03-30 CN CN201210091176.3A patent/CN102737101B/en active Active
- 2012-03-30 WO PCT/US2012/031736 patent/WO2012135791A2/en unknown
- 2012-03-30 CN CN201210090349.XA patent/CN102737099B/en active Active
- 2012-03-30 WO PCT/US2012/031722 patent/WO2012135783A2/en unknown
- 2012-03-30 EP EP12765100.8A patent/EP2691876A4/en not_active Ceased
- 2012-03-31 CN CN201210101485.4A patent/CN102750271B/en not_active Expired - Fee Related
- 2012-03-31 CN CN201210093414.4A patent/CN102737104B/en active Active
- 2012-03-31 CN CN201210092263.0A patent/CN102750270B/en active Active
2017
- 2017-03-01 JP JP2017038097A patent/JP6305588B2/en active Active
Non-Patent Citations (1)
Title |
---|
Namita Mittal et al., "A hybrid approach of personalized web information retrieval," Web Intelligence and Intelligent Agent Technology (WI-IAT), 2010 IEEE/WIC/ACM International Conference on, 2010-09-03, pp. 308-313 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102737104B (en) | Task driven user intents | |
US10755713B2 (en) | Generic virtual personal assistant platform | |
US10452251B2 (en) | Transactional conversation-based computing system | |
US20180075151A1 (en) | Task driven user intents | |
US20160004707A1 (en) | Translating natural language utterances to keyword search queries | |
US8429099B1 (en) | Dynamic gazetteers for entity recognition and fact association | |
US11087090B2 (en) | System for focused conversation context management in a reasoning agent/behavior engine of an agent automation system | |
CN107924679A (en) | Delayed binding during inputting understanding processing in response selects | |
US20120253789A1 (en) | Conversational Dialog Learning and Correction | |
US9565301B2 (en) | Apparatus and method for providing call log | |
US10474439B2 (en) | Systems and methods for building conversational understanding systems | |
US20210141820A1 (en) | Omnichannel virtual assistant using artificial intelligence | |
US20210182339A1 (en) | Leveraging intent resolvers to determine multiple intents | |
KR102188564B1 (en) | Method and system for machine translation capable of style transfer | |
US11586677B2 (en) | Resolving user expression having dependent intents | |
Sateli et al. | Smarter mobile apps through integrated natural language processing services | |
WO2020226617A1 (en) | Invoking functions of agents via digital assistant applications using address templates | |
US20230153541A1 (en) | Generating and updating conversational artifacts from apis | |
US20230281396A1 (en) | Message mapping and combination for intent classification | |
US11798536B2 (en) | Annotation of media files with convenient pause points | |
US20230317069A1 (en) | Context aware speech transcription | |
US20230412475A1 (en) | Extracting corrective actions from information technology operations | |
CN104750821A (en) | Service message processing method and device | |
KR20230014680A (en) | Bit vector based content matching for 3rd party digital assistant actions |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
ASS | Succession or assignment of patent right |
Owner name: MICROSOFT TECHNOLOGY LICENSING LLC
Free format text: FORMER OWNER: MICROSOFT CORP.
Effective date: 20150728 |
|
C41 | Transfer of patent application or patent right or utility model | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20150728
Address after: Washington State
Applicant after: Microsoft Technology Licensing, LLC
Address before: Washington State
Applicant before: Microsoft Corp. |
|
GR01 | Patent grant | ||