CN110297702B - Multitask parallel processing method and device - Google Patents

Multitask parallel processing method and device

Info

Publication number
CN110297702B
CN110297702B (application CN201910446011.5A)
Authority
CN
China
Prior art keywords
task
user
interface
answer
voiceprint feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910446011.5A
Other languages
Chinese (zh)
Other versions
CN110297702A (en)
Inventor
叶午
原利鹏
张伟萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Moran Cognitive Technology Co Ltd
Original Assignee
Beijing Moran Cognitive Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Moran Cognitive Technology Co Ltd filed Critical Beijing Moran Cognitive Technology Co Ltd
Priority to CN201910446011.5A priority Critical patent/CN110297702B/en
Publication of CN110297702A publication Critical patent/CN110297702A/en
Application granted granted Critical
Publication of CN110297702B publication Critical patent/CN110297702B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Machine Translation (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a multitask parallel processing method comprising the following steps: receiving a first voice input and a second voice input; invoking a first task interface according to the first voice input, the first task interface invoking a first task model according to the first voice input; invoking a second task interface according to the second voice input, the second task interface invoking a second task model according to the second voice input; the first task model initiating a first multi-turn dialogue with the user through the first task interface, filling the slots of the first task model according to the user's answers in the dialogue, and generating and executing a first task list; and the second task model initiating a second multi-turn dialogue with the user through the second task interface, filling the slots of the second task model according to the user's answers in the dialogue, and generating and executing a second task list. By executing the two tasks in parallel, task processing efficiency and the user experience are improved.

Description

Multitask parallel processing method and device
Technical Field
Embodiments of the invention relate to the technical field of speech recognition, and in particular to a multitask parallel processing method and device.
Background
When a user is entering a task, for example ordering a meal, the user is often interrupted by a more urgent task, for example suddenly needing to book a flight mid-order. The usual approach is to suspend the ordering task, complete the flight-booking task, and then resume the ordering task. Likewise, when multiple users initiate multiple tasks at the same time, the tasks cannot be executed in parallel. That is, only one user task can be performed at a time: multiple user tasks cannot be completed simultaneously, and multiple multi-turn dialogues cannot be carried on simultaneously. As a result, an interrupted user task, such as the ordering task, is not completed in a timely manner, which harms the user experience. There is therefore a need for a multitask parallel processing method.
Disclosure of Invention
To address these problems in the prior art, the invention provides a multitask parallel processing method and device.
The invention provides a multitask parallel processing method comprising the following steps: receiving a first voice input and a second voice input; invoking a first task interface according to the first voice input, the first task interface invoking a first task model according to the first voice input; invoking a second task interface according to the second voice input, the second task interface invoking a second task model according to the second voice input; the first task model initiating a first multi-turn dialogue with the user through the first task interface, filling the slots of the first task model according to the user's answers in the dialogue, and generating and executing a first task list; the second task model initiating a second multi-turn dialogue with the user through the second task interface, filling the slots of the second task model according to the user's answers in the dialogue, and generating and executing a second task list; and, during the multi-turn dialogue step, determining whether a user answer is directed at the first multi-turn dialogue or the second multi-turn dialogue, invoking the first task interface if it is directed at the first, and invoking the second task interface if it is directed at the second.
Preferably, the first voice input is specifically voice input of a first user; the second voice input is specifically voice input of a second user.
Alternatively, the first speech input and the second speech input are both speech inputs of the first user, the first speech input being a speech input associated with the first task, and the second speech input being a speech input associated with the second task.
Preferably, the first voice input and the second voice input occur simultaneously or within the same time period.
Further, after the first voice input and the second voice input are received, a first voiceprint feature of the first voice input and a second voiceprint feature of the second voice input are extracted respectively, and semantic analysis is performed on the two voice inputs to obtain a first task instruction and a second task instruction.
Further, it is determined whether the first voiceprint feature is the same as the second voiceprint feature. If they differ, correspondences are established between the first task interface and the first voiceprint feature and between the second task interface and the second voiceprint feature. If they are the same, a first tag of the first task model and a second tag of the second task model are stored, and correspondences are established between the first task instruction/first tag and the first task interface, and between the second task instruction/second tag and the second task interface.
Further, if the first voiceprint feature differs from the second voiceprint feature, determining in the multi-turn dialogue step whether a user answer is directed at the first or the second multi-turn dialogue specifically comprises: extracting the voiceprint feature of the user answer and matching it against the first and second voiceprint features; if it matches the first voiceprint feature, the user answer is input into the first task interface; if it matches the second voiceprint feature, the user answer is input into the second task interface.
Further, if the first voiceprint feature is the same as the second voiceprint feature, determining in the multi-turn dialogue step whether a user answer is directed at the first or the second multi-turn dialogue specifically comprises: performing semantic analysis on the user answer, obtaining the relevance of the semantic analysis result to the first task instruction and first tag and its relevance to the second task instruction and second tag, and deciding which multi-turn dialogue the answer is directed at according to the relevance.
Preferably, a third voice input is received, a third task interface is invoked according to the third voice input, and the third task interface invokes a third task model according to the third voice input; the third task model initiates a third multi-turn dialogue with the user through the third task interface, fills the slots of the third task model according to the user's answers in the dialogue, and generates and executes a third task list.
An embodiment of the invention further provides a multitask parallel processing device comprising a receiving module, a task interface invoking module, a first task interface, a second task interface, a first task model, a second task model, and an output module, the first and second task interfaces each being connected to the task interface invoking module. The receiving module receives a first voice input and a second voice input. The task interface invoking module invokes the first task interface according to the first voice input and the second task interface according to the second voice input. The first task interface invokes the first task model according to the first voice input, and the second task interface invokes the second task model according to the second voice input. The first task model initiates a first multi-turn dialogue with the user through the first task interface, fills the slots of the first task model according to the user's answers in the dialogue, and generates and executes a first task list; the second task model initiates a second multi-turn dialogue with the user through the second task interface, fills the slots of the second task model according to the user's answers in the dialogue, and generates and executes a second task list. During the multi-turn dialogue step, the task interface invoking module further determines whether a user answer is directed at the first or the second multi-turn dialogue, invoking the first task interface if the former and the second task interface if the latter. The output module outputs the multi-turn dialogue questions to the user.
Preferably, the first voice input is specifically voice input of a first user; the second voice input is specifically voice input of a second user.
Alternatively, the first speech input and the second speech input are both speech inputs of the first user, the first speech input being a speech input associated with the first task, and the second speech input being a speech input associated with the second task.
Preferably, the first voice input and the second voice input occur simultaneously or within the same time period.
Further, the multitask parallel processing device also comprises a parsing module connected to the receiving module and the interface invoking module. The parsing module extracts the first voiceprint feature of the first voice input and the second voiceprint feature of the second voice input, performs semantic analysis on the two voice inputs to obtain the first task instruction and the second task instruction, and sends the voiceprint features and task instructions to the interface invoking module. During the multi-turn dialogue step, the parsing module further extracts the voiceprint feature of each user answer, performs semantic analysis on the answer, and sends the voiceprint feature and semantic analysis result to the interface invoking module.
Further, the interface invoking module determines whether the first voiceprint feature is the same as the second voiceprint feature. If they differ, it establishes correspondences between the first task interface and the first voiceprint feature and between the second task interface and the second voiceprint feature. If they are the same, it stores the first tag of the first task model and the second tag of the second task model and establishes correspondences between the first task instruction/first tag and the first task interface, and between the second task instruction/second tag and the second task interface.
Further, if the first voiceprint feature differs from the second voiceprint feature, the interface invoking module determines whether a user answer is directed at the first or the second multi-turn dialogue by receiving the voiceprint feature of the user answer extracted by the parsing module and matching it against the first and second voiceprint features; if it matches the first voiceprint feature, the user answer is input into the first task interface; if it matches the second voiceprint feature, the user answer is input into the second task interface.
Further, if the first voiceprint feature is the same as the second voiceprint feature, the interface invoking module determines in the multi-turn dialogue step whether a user answer is directed at the first or the second multi-turn dialogue by receiving the semantic analysis result of the user answer from the parsing module, obtaining the relevance of that result to the first task instruction and first tag and its relevance to the second task instruction and second tag, and deciding which dialogue the answer is directed at according to the relevance.
Preferably, the multitask parallel processing device further comprises a third task interface and a third task model, the receiving module further receives a third voice input, the interface calling module calls the third task interface according to the third voice input, and the third task interface calls the third task model according to the third voice input; and the third task model initiates a third multi-turn dialogue to the user through a third task interface, fills a slot position of the third task model according to the answer of the user in the multi-turn dialogue, and generates and executes a third task list.
An embodiment of the present invention further provides a multitasking parallel processing device, where the device includes a processor and a memory, where the memory stores a computer program that can be executed on the processor, and the computer program, when executed by the processor, implements the method as described above.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program executable on a processor, the computer program implementing the method described above when executed.
With the multitask parallel processing method and device, multiple task interfaces are established and different (or the same) task models are invoked to execute the tasks of several users, or several tasks of one user, in parallel, while the task interface invoking module switches among the task interfaces. This improves the efficiency and flexibility of task execution and the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a flow diagram of a method for multitasking parallel processing in one embodiment of the invention.
Fig. 2 is a block diagram of a multitask parallel processing device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments and their specific features are detailed illustrations of the technical solutions of the invention, not limitations of it, and the embodiments and their technical features may be combined with one another where no conflict arises.
The invention provides a multitask parallel processing method comprising the following steps (see Fig. 1): receiving a first voice input and a second voice input; invoking a first task interface according to the first voice input, the first task interface invoking a first task model according to the first voice input; invoking a second task interface according to the second voice input, the second task interface invoking a second task model according to the second voice input; the first task model initiating a first multi-turn dialogue with the user through the first task interface, filling the slots of the first task model according to the user's answers in the dialogue, and generating and executing a first task list; the second task model initiating a second multi-turn dialogue with the user through the second task interface, filling the slots of the second task model according to the user's answers in the dialogue, and generating and executing a second task list; and, during the multi-turn dialogue step, determining whether a user answer is directed at the first or the second multi-turn dialogue, invoking the first task interface if the former and the second task interface if the latter.
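The flow above can be sketched in Python. This is an illustrative reconstruction, not code from the patent: `TaskModel`, `next_question`, `fill`, and `task_list` are all hypothetical names, and answer routing (handled by the voiceprint and relevance embodiments below) is elided.

```python
class TaskModel:
    """Illustrative slot-filling task model (hypothetical name and API)."""

    def __init__(self, name, slots):
        self.name = name
        self.slots = dict.fromkeys(slots)  # slot -> None until filled

    def next_question(self):
        """Next question of the multi-turn dialogue, or None when all slots are filled."""
        for slot, value in self.slots.items():
            if value is None:
                return f"[{self.name}] please tell me the {slot}"
        return None

    def fill(self, answer):
        """Fill the first empty slot with the user's answer."""
        for slot, value in self.slots.items():
            if value is None:
                self.slots[slot] = answer
                return

    def task_list(self):
        """The 'task list' generated once every slot is filled."""
        if all(v is not None for v in self.slots.values()):
            return {"task": self.name, **self.slots}
        return None


# Two task interfaces created from two (near-)simultaneous voice inputs,
# each driving its own task model; both dialogues stay open at once.
navigation = TaskModel("navigation", ["destination", "route preference"])
ordering = TaskModel("order", ["restaurant", "delivery or dine-in", "address"])

navigation.fill("Guanjing Building")   # answer routed to the first dialogue
ordering.fill("Tangcheng Xiaochu")     # answer routed to the second dialogue
navigation.fill("shortest distance")
print(navigation.task_list())
# {'task': 'navigation', 'destination': 'Guanjing Building', 'route preference': 'shortest distance'}
```

Note that the incomplete "order" dialogue keeps asking for its next empty slot while the "navigation" task list has already been generated, which is exactly the parallelism the claim describes.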
Preferably, the first voice input is a voice input of a first user and the second voice input is a voice input of a second user.
For example, during a trip, driver A says "Turn on navigation" while passenger B says "Order lunch for us." The car machine receives the first voice input "turn on navigation" from first user A and the second voice input "order lunch for us" from second user B almost simultaneously. The voice inputs may be truly simultaneous, for example driver A and passenger B issuing commands at the same moment, or may be issued one after the other within a specific time period, generally a short one, for example 10 seconds.
After receiving the first and second voice inputs, the car machine extracts voiceprint features, obtaining a first voiceprint feature of the first voice input and a second voiceprint feature of the second voice input, and performs semantic analysis on the voice inputs to obtain the first task instruction "turn on navigation" and the second task instruction "order lunch for us". The two voice inputs can be distinguished using existing speech recognition and semantic analysis techniques, which are not described further here. The car machine then invokes a task interface for each of the two task instructions: the first task interface for the first task instruction and the second task interface for the second task instruction.
Preferably, a task interface is a functional module pre-established and stored in the car machine that can be invoked directly. The task interfaces execute their tasks independently of one another.
The car machine establishes and stores the correspondence between voiceprint features and task interfaces, i.e. the correspondence first voiceprint feature/first task interface and the correspondence second voiceprint feature/second task interface. The first task interface invokes a "navigation" task model according to the first task instruction, and the second task interface invokes an "order" task model according to the second task instruction. A task model is a task module stored in the car machine that can execute a specific type of task and fulfils the user's intent through a multi-turn dialogue with the user. The first and second task models may be the same task model or different ones.
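The correspondence records described here, and in the summary above, can be sketched as follows. `build_correspondences` and all the field names are hypothetical, and real voiceprint features would be embedding vectors rather than the string stand-ins used here.

```python
def build_correspondences(vp1, vp2, instr1, instr2, tag1, tag2):
    """Record how later user answers will be routed to the two task interfaces."""
    if vp1 != vp2:
        # Different speakers: later answers are routed by voiceprint matching.
        return {"mode": "voiceprint", vp1: "interface_1", vp2: "interface_2"}
    # Same speaker: store the task-model tags, and route later answers by
    # semantic relevance to each interface's instruction and tag.
    return {
        "mode": "semantic",
        "interface_1": {"instruction": instr1, "tag": tag1},
        "interface_2": {"instruction": instr2, "tag": tag2},
    }

two_users = build_correspondences("vpA", "vpB", "turn on navigation",
                                  "order lunch for us", "navigation", "restaurant")
same_user = build_correspondences("vpA", "vpA", "turn on navigation",
                                  "order me lunch", "navigation", "restaurant")
print(two_users["mode"], same_user["mode"])  # voiceprint semantic
```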
In this embodiment, the first task interface invokes the first task model, i.e. the "navigation" task model, to execute driver A's navigation instruction. The "navigation" task model then interacts with the user through the first task interface and the car machine's other functions; this interaction can be a multi-turn dialogue. For example, the "navigation" task model asks: "Where would you like to go?" The user answers "Guanjing Building". The model then asks: "Shortest time or shortest distance?" The user answers: "Shortest distance." Through the user's answers the "navigation" task model obtains the key information that fills its slots, executes the navigation task, generates the shortest-distance route from the current position to Guanjing Building, and plays the navigation information through the car machine.
The second task interface invokes the second task model, i.e. the "order" task model, to execute passenger B's ordering instruction. Similarly, the "order" task model interacts with the user through the second task interface and the car machine's other functions, again possibly as a multi-turn dialogue. For example, the "order" task model asks: "Which restaurant would you like to order from?" The user answers "Tangcheng Xiaochu". The model then asks: "Delivery or dine-in?" The user answers "Delivery". The model then asks: "What are the delivery address and contact details?" The user answers "Room 1101, Tower A, Guanjing Building, 13710001234". The "order" task model fills its slots with the user's answers from the multi-turn dialogue, generates a delivery order, and completes payment.
In these two multi-turn dialogues, two interactions proceed interleaved: the interaction between the first task interface and driver A, and the interaction between the second task interface and passenger B. How do the four parties involved distinguish and confirm their communication peers? This is the core problem the multitask parallel processing method must solve. In this embodiment, the communication peer is preferably confirmed by voiceprint recognition. As described above, after receiving the two voice inputs the car machine extracts a first voiceprint feature from the first and a second voiceprint feature from the second, and stores the correspondences first voiceprint feature/first task interface and second voiceprint feature/second task interface. During the multi-turn dialogues, the car machine also extracts the voiceprint of each user answer: when the extracted voiceprint matches the first voiceprint feature, the answer is recognized as belonging to the first multi-turn dialogue and sent to the first task interface; when it matches the second voiceprint feature, the answer is recognized as belonging to the second multi-turn dialogue and sent to the second task interface. Driver A and passenger B, for their part, can tell from the type of question asked whether it is directed at their own intent, and answer accordingly.
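A minimal sketch of this voiceprint-based routing follows. `voiceprint_similarity` is a stand-in stub for a real speaker-verification score in [0, 1], and the 0.8 threshold is an arbitrary illustration, not a value from the patent.

```python
def voiceprint_similarity(a, b):
    """Stand-in for a real speaker-verification score in [0, 1]."""
    return 1.0 if a == b else 0.0

def route_answer(answer_vp, first_vp, second_vp, threshold=0.8):
    """Send a user answer to the task interface whose stored voiceprint it matches."""
    if voiceprint_similarity(answer_vp, first_vp) >= threshold:
        return "first_task_interface"
    if voiceprint_similarity(answer_vp, second_vp) >= threshold:
        return "second_task_interface"
    return None  # unrecognized speaker: neither dialogue claims the answer

print(route_answer("vpA", "vpA", "vpB"))  # first_task_interface
print(route_answer("vpB", "vpA", "vpB"))  # second_task_interface
```

With real embeddings the similarity would be a cosine distance between speaker vectors, but the routing decision itself stays this simple.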
Preferably, the car machine can render the questions of the first multi-turn dialogue, initiated by the first task model, with a third voiceprint feature, and the questions of the second multi-turn dialogue, initiated by the second task model, with a fourth voiceprint feature, playing each question to the user in the corresponding voice. This makes it easier for the user to tell the two dialogues apart.
According to another embodiment of the invention, the first speech input and the second speech input may both be speech inputs of the first user, the first speech input being a speech input associated with the first task and the second speech input being a speech input associated with the second task.
For example, while driving, driver A turns on the car machine and issues the instruction: "Turn on navigation and order me lunch." The car machine receives the first voice input "turn on navigation" and the second voice input "order me lunch" from first user A almost simultaneously; that is, driver A issues the two commands within, say, 10 seconds. After receiving the voice input, the car machine separates the first and second task instructions, for example by using pauses in the user's speech to distinguish the first voice input from the second; other existing methods can also be used and are not described further here. Semantic analysis is then performed on the two voice inputs to obtain the first and second task instructions, which may also be obtained by other methods known in the art.
Through semantic analysis the car machine determines that the first and second task instructions are two different task instructions and establishes a task interface for each: the first task interface for the first task instruction and the second task interface for the second. The two task interfaces execute their tasks independently of each other. The first task interface invokes the "navigation" task model according to the first task instruction, and the second task interface invokes the "order" task model according to the second.
However, since both task commands are voice inputs from driver A, they share the same voiceprint feature, and subsequent input cannot be distinguished by voiceprint. Whether a user answer belongs to the first multi-turn dialogue of the first task model or the second multi-turn dialogue of the second task model can instead be distinguished by semantic analysis. Specifically, the relevance of the semantic analysis result of the user answer to the tag of each task model is computed, and the first or second task interface is invoked according to that relevance. For example, suppose the first task model is the navigation model with tag "navigation" and the second is the "order" model with tag "restaurant". If the user answers "Guanjing Building", and the relevance of "Guanjing Building" to "navigation" is 90 while its relevance to "restaurant" is 20, the task model with the higher relevance is selected as the one the answer is directed at. Furthermore, the relevance of the semantic analysis result to the first and second task instructions can also be computed and combined with the tag relevance by weighted averaging to increase reliability. In short, the relevance of the user answer's semantic analysis result to the task instruction and/or the task model tag is computed, and the task model with the higher relevance is selected as the peer of the user's answer. The relevance may be computed with existing relevance algorithms, which are not limited here.
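This relevance-based routing can be sketched as follows. The patent leaves the relevance algorithm open, so `relevance` here is a stand-in lookup table seeded with the example scores above (90 and 20), and the 0.6 tag weight in the weighted average is an arbitrary illustration.

```python
def relevance(answer, text, score_table):
    """Stand-in relevance function: look the (answer, text) pair up in a table."""
    return score_table.get((answer, text), 0)

def pick_interface(answer, tasks, score_table, tag_weight=0.6):
    """Pick the task interface whose tag/instruction is most relevant to the answer.

    tasks: {interface_name: {"tag": ..., "instruction": ...}}
    """
    best, best_score = None, -1.0
    for interface, info in tasks.items():
        # Weighted average of tag relevance and instruction relevance.
        score = (tag_weight * relevance(answer, info["tag"], score_table)
                 + (1 - tag_weight) * relevance(answer, info["instruction"], score_table))
        if score > best_score:
            best, best_score = interface, score
    return best

# The patent's example: "Guanjing Building" scores 90 against "navigation"
# and 20 against "restaurant".
scores = {("Guanjing Building", "navigation"): 90,
          ("Guanjing Building", "restaurant"): 20}
tasks = {"first_task_interface": {"tag": "navigation", "instruction": "turn on navigation"},
         "second_task_interface": {"tag": "restaurant", "instruction": "order me lunch"}}
print(pick_interface("Guanjing Building", tasks, scores))  # first_task_interface
```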
Preferably, to avoid voice conflicts between the multiple multi-turn dialogues, the multitask parallel processing method further includes an output buffering step: questions generated by the first and second multi-turn dialogues are first placed into a buffer queue of the car machine, the car machine retrieves questions from the buffer queue in first-in-first-out order and generates voice information to play to the user, and a certain interval can be set between two consecutive question playbacks so that the user has enough time to answer each question.
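The output buffering step can be sketched as a plain FIFO queue with a playback interval. The `OutputBuffer` class, the `play_tts` callback, and the interval value are assumptions for illustration only.

```python
# Minimal sketch of the output-buffer step: questions from both dialogues
# enter one FIFO queue and are played one at a time, with an interval
# between playbacks so the user has time to answer.
from collections import deque
import time

class OutputBuffer:
    def __init__(self, interval_s=0.0):
        self.queue = deque()
        self.interval_s = interval_s

    def push(self, question):
        self.queue.append(question)        # first in ...

    def drain(self, play_tts):
        played = []
        while self.queue:
            q = self.queue.popleft()       # ... first out
            play_tts(q)                    # hand the question to TTS playback
            played.append(q)
            time.sleep(self.interval_s)    # leave time for the user to answer
        return played

buf = OutputBuffer()
buf.push("Q1 (navigation): which building?")
buf.push("Q2 (order): which restaurant?")
buf.drain(lambda q: None)  # questions leave in the order they arrived
```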
Furthermore, the user can invoke the two task interfaces at different times, with only one task interface invoked at a time. When the first task interface is invoked, its associated task model can complete its task directly through a multi-turn dialogue with the user; the second task interface, not being invoked, is in a paused state and neither receives nor responds to the user's voice input or answers. When the invocation of the first task interface ends, for example because the first task has finished executing or the user has paused the first task interface, the system switches to the second task interface according to the user's command and activates the interaction between the second task model and the user. The user switches among multiple task interfaces through a task-interface switch command. The switch command may be, for example, "switch to" plus a task name, such as "switch to ordering" or "switch to navigation"; it may also simply be "switch", which cycles through the task interfaces in a default order. Preferably, when one task finishes executing, the system automatically switches to the next task interface.
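The single-active-interface switching behavior can be sketched as a small state machine. The class name, command strings, and interface names below are illustrative assumptions.

```python
# Sketch: only one task interface is active at a time. "switch to <name>"
# activates a named interface; a bare "switch" cycles through the
# interfaces in their default order. All other input goes to the active
# interface; the inactive one stays paused and receives nothing.
class InterfaceSwitcher:
    def __init__(self, interfaces):
        self.interfaces = list(interfaces)  # default cycling order
        self.active = 0

    def handle(self, utterance):
        if utterance.startswith("switch to "):
            name = utterance[len("switch to "):]
            self.active = self.interfaces.index(name)
        elif utterance == "switch":
            self.active = (self.active + 1) % len(self.interfaces)
        return self.interfaces[self.active]  # interface that receives input

sw = InterfaceSwitcher(["navigation", "order"])
print(sw.handle("switch"))                # prints: order
print(sw.handle("switch to navigation"))  # prints: navigation
```

Automatic switching on task completion would simply call the same cycling step when a task model reports that its task list has executed.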
The above embodiment concerns two tasks processed in parallel. Another embodiment of the invention provides for more tasks to be processed in parallel, for example three. In that case, the car machine also receives a third voice input, which may be an input from a third user within the same time period, or a voice input initiated by the first user for a third task within the same time period. A third task interface is called according to the third voice input, and the third task interface calls a third task model according to a third task instruction contained in the third voice input; the third task model initiates a third multi-turn dialogue to the user through the third task interface, fills the slots of the third task model according to the user's answers in the dialogue, and generates and executes a third task list.
The method for distinguishing the third multi-turn dialogue from the first and second ones is similar to the method for distinguishing between the first and second multi-turn dialogues, and is not described again here.
According to another embodiment of the present invention, there is further provided a multitask parallel processing device, referring to fig. 2, comprising: a receiving module, a task interface calling module, a first task interface, a second task interface, a first task model, a second task model, and an output module, the first and second task interfaces each being connected to the task interface calling module. The receiving module is used for receiving a first voice input and a second voice input. The task interface calling module is used for calling the first task interface according to the first voice input and calling the second task interface according to the second voice input. The first task interface is used for calling the first task model according to the first voice input, and the second task interface is used for calling the second task model according to the second voice input. The first task model is used for initiating a first multi-turn dialogue to the user through the first task interface, filling the slots of the first task model according to the user's answers in the dialogue, and generating and executing a first task list; the second task model is used for initiating a second multi-turn dialogue to the user through the second task interface, filling the slots of the second task model according to the user's answers in the dialogue, and generating and executing a second task list. The task interface calling module is also used for judging, in the multi-turn dialogue step, whether a user answer is directed to the first multi-turn dialogue or the second multi-turn dialogue, calling the first task interface if the answer is directed to the first multi-turn dialogue, and calling the second task interface if it is directed to the second multi-turn dialogue. The output module is used for outputting multi-turn dialogue questions to the user.
according to one embodiment of the present invention, the first voice input is specifically a voice input of a first user; the second voice input is specifically voice input of a second user.
According to another embodiment of the invention, the first speech input and the second speech input are both speech inputs of the first user, the first speech input being a speech input associated with the first task and the second speech input being a speech input associated with the second task.
Preferably, the first voice input and the second voice input are input simultaneously or input occurring in the same time period.
Furthermore, the multitask parallel processing device also comprises an analysis module, wherein the analysis module is respectively connected with the receiving module and the interface calling module; the analysis module is used for extracting a first voiceprint feature of a first voice input and a second voiceprint feature of a second voice input, performing semantic analysis on the first voice input and the second voice input to obtain a first task instruction and a second task instruction, and sending the first voiceprint feature, the second voiceprint feature, the first task instruction and the second task instruction to the interface calling module; in the multi-turn dialogue step, the analysis module is further used for extracting the voiceprint features of the user answers, carrying out semantic analysis on the user answers, and sending the voiceprint features and semantic analysis results to the interface calling module.
Further, the interface calling module determines whether the first voiceprint feature is the same as the second voiceprint feature. If they differ, it establishes correspondences between the first task interface and the first voiceprint feature and between the second task interface and the second voiceprint feature; if they are the same, it stores the first label of the first task model and the second label of the second task model, and establishes correspondences between the first task interface and the first task instruction/first label and between the second task interface and the second task instruction/second label.
Further, if the first voiceprint feature differs from the second voiceprint feature, the interface calling module determines whether a user answer is directed to the first or the second multi-turn dialogue as follows: it receives the voiceprint feature of the user answer extracted by the parsing module and checks whether it matches the first or the second voiceprint feature; if it matches the first voiceprint feature, the user answer is input into the first task interface; if it matches the second voiceprint feature, the user answer is input into the second task interface.
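The voiceprint-matching branch can be sketched with a cosine-similarity comparison against the two stored features. Real voiceprint extraction (e.g. a speaker-embedding model) is out of scope here; the embedding vectors, function names, and threshold-free nearest-match rule below are illustrative assumptions.

```python
# Sketch: route a user answer by comparing its voiceprint embedding with
# the stored first/second voiceprint features using cosine similarity.
# The vectors are made-up stand-ins for real speaker embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def route_by_voiceprint(answer_vec, first_vec, second_vec):
    """Return which task interface should receive the answer."""
    return ("first_task_interface"
            if cosine(answer_vec, first_vec) >= cosine(answer_vec, second_vec)
            else "second_task_interface")

driver = [0.9, 0.1, 0.2]     # stored first voiceprint feature
passenger = [0.1, 0.8, 0.3]  # stored second voiceprint feature
print(route_by_voiceprint([0.85, 0.15, 0.25], driver, passenger))
# prints: first_task_interface
```

A production system would use a trained speaker-verification model and a decision threshold rather than a bare nearest match.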
Further, if the first voiceprint feature is the same as the second voiceprint feature, the interface calling module determines, in the multi-turn dialogue step, whether a user answer is directed to the first or the second multi-turn dialogue as follows: it receives the semantic analysis result of the user answer from the parsing module, obtains the association degree of the result with the first task instruction and first label and with the second task instruction and second label, respectively, and judges from these association degrees which multi-turn dialogue the answer is directed to.
For example, user A and user B travel together and book attraction tickets and return air tickets through a multitask parallel processing device placed in their hotel room. User A issues the instruction "order two attraction tickets" to the device, and user B issues "order two air tickets to Beijing at 20:00". The receiving module of the device receives the two voice inputs, the first voice input "order two attraction tickets" and the second voice input "order two air tickets to Beijing at 20:00", and sends them to the parsing module. The parsing module first extracts a first voiceprint feature from the first voice input and a second voiceprint feature from the second voice input, then performs semantic analysis on the voice inputs to obtain the first task instruction "order two attraction tickets" and the second task instruction "order two air tickets to Beijing at 20:00". The parsing module sends the extracted voiceprint features and the parsed task instructions to the task interface calling module, which calls a first task model, namely a "ticket booking" task model, according to the first task instruction and establishes the correspondence between the first task interface and the first voiceprint feature; at the same time it calls a second task model, also a "ticket booking" task model, according to the second task instruction and establishes the correspondence between the second task interface and the second voiceprint feature.
In this embodiment, the task interface calling module calls two different task interfaces according to the two task instructions, but the two interfaces call the same task model and can simultaneously execute two different tasks of the same task type, that is, two different booking tasks, one for attraction tickets and one for air tickets, thereby improving the processing efficiency of user tasks.
The multitask parallel processing device includes an output module. The first task model initiates a first multi-turn dialogue to the user through the first task interface, the parsing module, and the output module; the second task model initiates a second multi-turn dialogue to the user through the second task interface, the parsing module, and the output module. Further, to let user A and user B easily distinguish the first multi-turn dialogue from the second, the output module also includes a voiceprint processing unit, which processes output information from different multi-turn dialogues with different stored voiceprint features and plays the multi-turn dialogue information to the user with those different voiceprint features, so that the dialogues can be told apart. For example, the voiceprint processing unit processes the questions of the first multi-turn dialogue with a third voiceprint feature and the questions of the second multi-turn dialogue with a fourth voiceprint feature; the users then hear different voices from the device, so user A and user B can accurately tell which questions they should answer. Further, to avoid the conflict that would arise if two questions were played simultaneously, making them hard for the users to distinguish, the output module may further include an output buffer for buffering output information from the different multi-turn dialogues.
After the questions generated by the first and second multi-turn dialogues pass through the parsing module, they first enter the output buffer queue of the output module; the output module retrieves questions from the queue in first-in-first-out order and generates voice information to play to the user.
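The voiceprint-processing unit's behavior, assigning each multi-turn dialogue its own playback voice, can be sketched very simply. The voice names and the bracketed placeholder for real TTS synthesis are assumptions.

```python
# Sketch: each dialogue is mapped to a distinct TTS voice so users can
# tell the questions apart by ear. Voice names are illustrative.
VOICE_FOR_DIALOGUE = {"first": "voice_female_a", "second": "voice_male_b"}

def synthesize(dialogue_id, question):
    """Tag the question with its dialogue's voice (stand-in for real TTS)."""
    voice = VOICE_FOR_DIALOGUE[dialogue_id]
    return f"[{voice}] {question}"

print(synthesize("first", "Which attraction?"))
# prints: [voice_female_a] Which attraction?
```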
The interaction process of the first and second multi-turn dialogues is described below. The first task model extracts key information from the first task instruction to fill its slots: the instruction "order two attraction tickets" contains the quantity information "2", the ticket type information "attraction ticket", and the attraction information, while slots such as the ticket collection method are still empty, so the first task model generates a first question asking the user how the tickets should be collected. The user answers "two-dimensional code ticket". The user's answer is first received by the receiving module of the multitask parallel processing device, the parsing module then extracts the first voiceprint feature and performs semantic analysis on the answer, and the result is sent to the task interface calling module. According to the stored correspondence between the first voiceprint feature and the first task interface, the task interface calling module sends the semantic analysis result to the first task model, which fills the ticket collection method slot with "two-dimensional code ticket" and then generates a second question for the user: "Which mobile phone should the two-dimensional code be sent to?" The phone number the user answers is used to fill the receiving-phone slot. The first task model then continues by asking about payment information and fills the corresponding slot after receiving the user's answer; once all slot information is filled, it generates a task execution list and executes the task.
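The slot-filling loop just walked through can be sketched as follows. The slot names, question texts, and answers are illustrative assumptions matching the ticket-booking example, not the patent's actual data model.

```python
# Toy slot-filling loop: slots pre-filled from the task instruction stay
# filled; for each remaining empty slot a question is asked and the
# user's answer fills it. When all slots are filled, a task list would
# be generated and executed.
def run_dialogue(prefilled, questions, answers):
    """Fill slots from the instruction, then ask for the missing ones."""
    slots = dict(prefilled)
    asked = []
    for slot, question in questions.items():
        if slots.get(slot) is None:
            asked.append(question)          # question goes to the user
            slots[slot] = answers[slot]     # user's reply fills the slot
    return slots, asked

prefilled = {"quantity": "2", "ticket_type": "attraction ticket",
             "collection": None, "phone": None, "payment": None}
questions = {"collection": "How would you like to collect the tickets?",
             "phone": "Which phone should the QR code be sent to?",
             "payment": "How would you like to pay?"}
answers = {"collection": "QR-code ticket", "phone": "138xxxx0000",
           "payment": "mobile payment"}

slots, asked = run_dialogue(prefilled, questions, answers)
print(len(asked))  # prints: 3  (three empty slots, three questions)
```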
Meanwhile, the second task model initiates the second multi-turn dialogue in the same way, fills the slot information of the second task, and generates and executes a second task list.
The questions of the two multi-turn dialogues are usually interleaved: a question of the second dialogue is played during the interval after a question of the first dialogue, which improves the efficiency of task execution.
Preferably, the interface calling module further provides an interface switching function. The interface calling module invokes one task interface at a time: when the first task interface is invoked, its associated task model can complete its task directly through a multi-turn dialogue with the user, while the second task interface, not being invoked, is in a paused state and neither receives nor responds to the user's voice input or answers. When the invocation of the first task interface ends, for example because the first task has finished executing or the user has paused the first task interface, the interface calling module switches to the second task interface according to the user's command and activates the interaction between the second task model and the user. The user switches among multiple task interfaces through a task-interface switch command, which may be, for example, "switch to" plus a task name, such as "switch to ordering" or "switch to navigation"; it may also simply be "switch", which cycles through the task interfaces in a default order. Preferably, when one task finishes executing, the interface calling module automatically switches to the next task interface.
Preferably, the multitask parallel processing device may be an independent hardware device, for example one placed in a hotel room; a hardware module installed in an intelligent terminal such as a car machine, a smart household appliance, a mobile phone, a PAD, or a notebook computer; or a software module installed in such intelligent hardware. The software module may be hosted on a remote server, from which the intelligent hardware can download, install, and update it, or perform operations such as deletion and cache clearing.
The invention also provides a multitasking parallel processing device comprising a processor and a memory, said memory having stored therein a computer program being executable on said processor, said computer program implementing the method as described above when being executed by said processor.
The invention also provides a computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program is executable on a processor, and when executed implements the method as described above.
Any combination of one or more computer-readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. The computer-readable storage medium may include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), a flash memory, an erasable programmable read-only memory (EPROM), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Computer program code for carrying out operations of the present invention may be written in one or more programming languages, or a combination thereof.
The above description is only an example for the convenience of understanding the present invention, and is not intended to limit the scope of the present invention. In the specific implementation, a person skilled in the art may change, add, or reduce the components of the apparatus according to the actual situation, and may change, add, reduce, or change the order of the steps of the method according to the actual situation without affecting the functions implemented by the method.
While embodiments of the invention have been shown and described, it will be understood by those skilled in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents, and all changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims (20)

1. A method of multitasking parallel processing, the method comprising: receiving a first voice input and a second voice input; calling a first task interface according to the first voice input, and calling a first task model by the first task interface according to the first voice input; calling a second task interface according to the second voice input, and calling a second task model by the second task interface according to the second voice input; the first task model initiates a first multi-turn dialogue to the user through a first task interface, fills a slot position of the first task model according to the answer of the user in the multi-turn dialogue, and generates and executes a first task list; the second task model initiates a second multi-turn dialogue to the user through a second task interface, fills a slot position of the second task model according to the answer of the user in the multi-turn dialogue, and generates and executes a second task list; in the multi-turn dialogue step, judging whether the answer of the user aims at a first multi-turn dialogue or a second multi-turn dialogue, and calling a first task interface if the answer of the user aims at the first multi-turn dialogue; invoking a second task interface if the user answer is for a second plurality of conversations; in the first and second multi-turn dialogs, there are two dialogs that cross each other.
2. The multitask parallel processing method according to claim 1, wherein the first speech input is specifically a speech input of a first user; the second voice input is specifically voice input of a second user.
3. The method of claim 1, wherein the first speech input and the second speech input are both speech inputs of a first user, the first speech input being a speech input associated with a first task, and the second speech input being a speech input associated with a second task.
4. A method for multitasking and parallel processing according to any one of claims 1-3, characterized in that the first speech input and the second speech input are input simultaneously or are input occurring within the same time period.
5. The multitask parallel processing method according to claim 4, wherein after receiving the first speech input and the second speech input, extracting a first voiceprint feature of the first speech input and a second voiceprint feature of the second speech input, respectively, and performing semantic analysis on the first speech input and the second speech input to obtain a first task instruction and a second task instruction.
6. The multitask parallel processing method according to claim 5, wherein it is determined whether the first voiceprint feature and the second voiceprint feature are the same, and if not, a corresponding relationship between the first task interface and the first voiceprint feature, and between the second task interface and the second voiceprint feature is established; if the first task instruction and the second task instruction are the same as each other, the first label of the first task model and the second label of the second task model are saved, and the corresponding relation between the first task instruction and the first label as well as the corresponding relation between the second task instruction and the second label are established.
7. The multitask parallel processing method according to claim 6, wherein if the first voiceprint feature and the second voiceprint feature are not the same, said determining, in the multi-turn dialog step, whether the answer of the user is for the first multi-turn dialog or the second multi-turn dialog is specifically that a voiceprint feature of the answer of the user is extracted, determining whether the voiceprint feature of the answer of the user matches the first voiceprint feature and the second voiceprint feature, and if the answer of the user matches the first voiceprint feature, inputting the answer of the user into the first task interface; if it matches the second voiceprint feature, the answer of the user is input into the second task interface.
8. The multitask parallel processing method according to claim 6, wherein if the first voiceprint feature is the same as the second voiceprint feature, in the multi-turn dialogue step, it is determined whether the answer of the user is for a first multi-turn dialogue or a second multi-turn dialogue, specifically, the answer of the user is subjected to semantic analysis, association degrees between the semantic analysis result and the first task instruction and the first tag and between the semantic analysis result and the second task instruction and between the semantic analysis result and the second tag are obtained, and it is determined whether the answer of the user is for the first multi-turn dialogue or the second multi-turn dialogue according to the association degrees.
9. The multitask parallel processing method according to claim 1, further receiving a third speech input, establishing a third task interface based on the third speech input, the third task interface invoking a third task model based on the third speech input; and the third task model initiates a third multi-turn dialogue to the user through a third task interface, fills a slot position of the third task model according to the answer of the user in the multi-turn dialogue, and generates and executes a third task list.
10. A multitask parallel processing device is characterized by comprising a receiving module, a task interface calling module, a first task interface, a second task interface, a first task model, a second task model and an output module, wherein the first task interface and the second task interface are respectively connected with the task interface calling module; the receiving module is used for receiving a first voice input and a second voice input; the task interface calling module is used for calling a first task interface according to the first voice input and calling a second task interface according to the second voice input; the first task interface is used for calling a first task model according to a first voice input, and the second task interface is used for calling a second task model according to a second voice input; the first task model is used for initiating a first multi-turn dialog to the user through the first task interface, filling a slot position of the first task model according to answers of the user in the multi-turn dialog, and generating and executing a first task list; the second task model is used for initiating a second multi-turn dialog to the user through a second task interface, filling a slot position of the second task model according to the answer of the user in the multi-turn dialog, and generating and executing a second task list; the task interface calling module is also used for judging whether the answer of the user is directed to the first multi-turn conversation or the second multi-turn conversation in the multi-turn conversation step, and calling the first task interface if the answer of the user is directed to the first multi-turn conversation; invoking a second task interface if the user answer is for a second plurality of conversations; the output module is used for outputting a plurality of turns of dialogue questions to a user; in the first and second multi-turn dialogs, there are two dialogs that cross each other.
11. The multitasking parallel processing device according to claim 10, wherein the first speech input is specifically a speech input of a first user; the second voice input is specifically voice input of a second user.
12. The multitasking parallel processing device according to claim 10, wherein the first speech input and the second speech input are speech inputs of a first user, the first speech input is a speech input associated with a first task, and the second speech input is a speech input associated with a second task.
13. A multitasking parallel processing device according to any of claims 10-12, characterized in that the first speech input and the second speech input are input simultaneously or are input occurring within the same time period.
14. The multitask parallel processing device according to claim 13, further comprising a parsing module, said parsing module being connected to said receiving module and said interface calling module, respectively; the analysis module is used for extracting a first voiceprint feature of the first voice input and a second voiceprint feature of the second voice input, performing semantic analysis on the first voice input and the second voice input to obtain a first task instruction and a second task instruction, and sending the first voiceprint feature, the second voiceprint feature, the first task instruction and the second task instruction to the interface calling module; in the multi-turn dialogue step, the analysis module is further used for extracting the voiceprint features of the user answers, carrying out semantic analysis on the user answers, and sending the voiceprint features and semantic analysis results to the interface calling module.
15. The apparatus according to claim 14, wherein the interface calling module further determines whether the first voiceprint feature is the same as the second voiceprint feature, and if not, establishes a correspondence between the first task interface and the first voiceprint feature, and between the second task interface and the second voiceprint feature; and if the first task instruction/the first label is the same as the second task instruction/the second label, storing the first label of the first task model and the second label of the second task model, and establishing a corresponding relation between the first task instruction/the first label and the second task instruction/the second label.
16. The apparatus according to claim 15, wherein if the first voiceprint feature is different from the second voiceprint feature, the interface calling module determines, in the multi-session step, whether the user's answer is for the first multi-session or the second multi-session, specifically, receives the voiceprint feature of the user's answer extracted by the parsing module, determines whether the voiceprint feature of the user's answer matches the first voiceprint feature and the second voiceprint feature, and if the voiceprint feature matches the first voiceprint feature, inputs the user's answer into the first task interface; if it matches the second voiceprint feature, the user's answer is input into the second task interface.
17. The apparatus according to claim 15, wherein if the first voiceprint feature is the same as the second voiceprint feature, the interface invoking module determines, in the multi-turn dialog step, whether the answer of the user is for the first multi-turn dialog or the second multi-turn dialog, specifically, receives a semantic analysis result of the answer of the parsing module to the user, obtains a degree of association between the semantic analysis result and the first task instruction and the first tag, and a degree of association between the semantic analysis result and the second task instruction and the second tag, respectively, and determines whether the answer of the user is for the first multi-turn dialog or the second multi-turn dialog according to the degree of association.
18. The multitask parallel processing device according to claim 10, wherein the device further comprises a third task interface and a third task model, the receiving module further receives a third speech input, the interface calling module calls the third task interface according to the third speech input, and the third task interface calls the third task model according to the third speech input; and the third task model initiates a third multi-turn dialogue to the user through a third task interface, fills a slot position of the third task model according to the answer of the user in the multi-turn dialogue, and generates and executes a third task list.
19. A multitask parallel processing device, characterized in that the device comprises a processor and a memory, the memory storing a computer program executable on the processor, wherein the computer program, when executed by the processor, implements the method according to any one of claims 1 to 9.
20. A computer-readable storage medium storing a computer program executable on a processor, wherein the computer program, when executed, implements the method according to any one of claims 1 to 9.
CN201910446011.5A 2019-05-27 2019-05-27 Multitask parallel processing method and device Active CN110297702B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910446011.5A CN110297702B (en) 2019-05-27 2019-05-27 Multitask parallel processing method and device

Publications (2)

Publication Number Publication Date
CN110297702A CN110297702A (en) 2019-10-01
CN110297702B true CN110297702B (en) 2021-06-18

Family

ID=68027292

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910446011.5A Active CN110297702B (en) 2019-05-27 2019-05-27 Multitask parallel processing method and device

Country Status (1)

Country Link
CN (1) CN110297702B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111124866B (en) * 2019-12-26 2023-12-08 光禹莱特数字科技(上海)有限公司 Voice interaction method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103871400A (en) * 2012-11-13 2014-06-18 GM Global Technology Operations LLC Methods and systems for speech systems
CN104347074A (en) * 2013-07-31 2015-02-11 GM Global Technology Operations LLC Systems and methods for managing dialog context in speech systems
CN104813311A (en) * 2012-12-11 2015-07-29 Nuance Communications, Inc. System and methods for virtual agent recommendation for multiple persons
EP3264266A1 (en) * 2015-05-20 2018-01-03 Huawei Technologies Co., Ltd. Method for positioning sounding location, and terminal device
CN108986825A (en) * 2018-07-02 2018-12-11 Beijing Baidu Netcom Science and Technology Co., Ltd. Context acquisition method and device based on voice interaction
CN109446306A (en) * 2018-10-16 2019-03-08 Inspur Software Co., Ltd. Intelligent question-answering method for task-driven multi-turn dialog
CN109582767A (en) * 2018-11-21 2019-04-05 Beijing Jingdong Shangke Information Technology Co., Ltd. Dialog system processing method, apparatus, device, and readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4030295A1 (en) * 2016-04-18 2022-07-20 Google LLC Automated assistant invocation of appropriate agent

Also Published As

Publication number Publication date
CN110297702A (en) 2019-10-01

Similar Documents

Publication Publication Date Title
US11915707B1 (en) Outcome-oriented dialogs on a speech recognition platform
US11922925B1 (en) Managing dialogs on a speech recognition platform
CN110442701B (en) Voice conversation processing method and device
CN107895578B (en) Voice interaction method and device
KR102418511B1 (en) Creating and sending call requests to use third-party agents
EP3444813B1 (en) User-guided arbitration of speech processing results
US10148600B1 (en) Intelligent conversational systems
US20120253823A1 (en) Hybrid Dialog Speech Recognition for In-Vehicle Automated Interaction and In-Vehicle Interfaces Requiring Minimal Driver Processing
US20050033582A1 (en) Spoken language interface
KR20200054338A (en) Parameter collection and automatic dialog generation in dialog systems
CA2756140A1 (en) Service oriented speech recognition for in-vehicle automated interaction
US7555533B2 (en) System for communicating information from a server via a mobile communication device
KR102170088B1 (en) Method and system for auto response based on artificial intelligence
CN110442438B (en) Task cooperation method, device and system among multiple devices
KR20110127180A (en) Systems and methods for interactively accessing hosted services using voice communications
CN113362828B (en) Method and apparatus for recognizing speech
CN108924218A (en) Method and apparatus for pushing information
CN110297702B (en) Multitask parallel processing method and device
CN110675875B (en) Intelligent voice conversation technology telephone experience method and device
US10964318B2 (en) Dialogue management
KR20200024511A (en) Operation method of dialog agent and apparatus thereof
US20220013108A1 (en) Specifying trip destinations from spoken dialogs
CN112700767B (en) Man-machine conversation interruption method and device
CN112069830A (en) Intelligent conversation method and device
CA2839285A1 (en) Hybrid dialog speech recognition for in-vehicle automated interaction and in-vehicle user interfaces requiring minimal cognitive driver processing for same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant