CN113808621A - Method and apparatus, device, and medium for labeling a voice dialog in human-computer interaction
- Publication number: CN113808621A
- Application number: CN202111069995.3A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- G10L25/63 — Speech or voice analysis techniques, not restricted to a single one of groups G10L15/00–G10L21/00, specially adapted for estimating an emotional state
- G10L15/26 — Speech recognition: speech-to-text systems
Abstract
Embodiments of the present disclosure disclose a method, an apparatus, a device, and a medium for labeling a voice dialog in human-computer interaction. A machine voice reply made by a human-computer interaction system for the previous voice of a user is determined; an emotional characteristic of the user in the current voice made in response to that machine voice reply is determined; and a first satisfaction degree of the user with respect to the machine voice reply is determined based on the emotional characteristic. If the current voice is the ending voice in multiple rounds of dialog, at least one second satisfaction degree of the user with respect to the machine voice replies output by the human-computer interaction system in the historical rounds of dialog is determined, and the multiple rounds of dialog are labeled based on the first satisfaction degree and the at least one second satisfaction degree, so that automatic labeling of human-machine dialogs can be realized.
Description
Technical Field
The present disclosure relates to natural language processing technologies, and in particular, to a method and apparatus, a device, and a medium for labeling a voice dialog in human-computer interaction.
Background
Human-computer interaction refers to the process in which a person and a computer exchange information, in a certain interaction mode and using a certain dialog language, to complete a given task. Conventional human-computer interaction is implemented mainly through input and output devices such as a keyboard, a mouse, and a display; with the development of technologies such as speech recognition and Natural Language Processing (NLP), humans and machines can now interact in a manner close to natural language.
With the gradual popularization of the smart-living concept and the continuous advancement of human-computer interaction technology, higher requirements are placed on NLP technology. For example, when a user initiates a dialog such as a voice dialog, expecting the machine to give a corresponding reply or perform a related task, the dialog content is converted into text through signal processing, speech recognition, and the like, and serves as the input of the NLP system; the NLP system understands the meaning of the user's dialog and, on that basis, gives a corresponding reply or performs the related task.
Therefore, the accuracy with which the NLP system understands the meaning of the user's dialog directly affects the efficiency and accuracy of the NLP system's replies to the user's dialog, or of its task execution, and thus affects the human-computer interaction effect.
Disclosure of Invention
The present disclosure is proposed to solve the above technical problems. Embodiments of the present disclosure provide a method and an apparatus for labeling a voice dialog in human-computer interaction, an electronic device, and a medium.
According to an aspect of the embodiments of the present disclosure, there is provided a method for labeling a voice dialog in a human-computer interaction, including:
determining a machine voice reply made by the human-computer interaction system for the previous voice of the user;
determining an emotional characteristic of the user in the current voice made in response to the machine voice reply;
determining a first satisfaction degree of the user with respect to the machine voice reply based on the emotional characteristic;
if the current voice is the ending voice in multiple rounds of dialog, determining at least one second satisfaction degree of the user with respect to the machine voice replies output by the human-computer interaction system in the historical rounds of dialog before the current round of dialog to which the current voice belongs, where one machine voice reply corresponds to one voice of the user; and
labeling the multiple rounds of dialog based on the first satisfaction degree and the at least one second satisfaction degree (a minimal sketch of this flow is given after these steps).
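The following is a minimal, non-authoritative Python sketch of the labeling flow defined by these steps. The class and function names (`DialogRound`, `estimate_satisfaction`) and the use of a plain average for the final label are illustrative assumptions, not part of the claims.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DialogRound:
    user_voice: str            # one voice of the user
    machine_reply: str         # the machine voice reply made for that voice
    satisfaction: float = 0.0  # per-round satisfaction (a "second satisfaction" for past rounds)

@dataclass
class MultiTurnDialog:
    rounds: List[DialogRound] = field(default_factory=list)
    overall_satisfaction: float = 0.0

def estimate_satisfaction(emotional_feature) -> float:
    """Placeholder: map an emotional characteristic to a satisfaction degree (e.g. via a trained model)."""
    raise NotImplementedError

def label_dialog_on_end(dialog: MultiTurnDialog, current_emotional_feature) -> MultiTurnDialog:
    # First satisfaction: for the machine reply of the current (last) round, derived from
    # the user's emotional characteristic in the current (ending) voice.
    first = estimate_satisfaction(current_emotional_feature)
    dialog.rounds[-1].satisfaction = first
    # Second satisfactions: the stored per-round satisfactions of the historical rounds.
    seconds = [r.satisfaction for r in dialog.rounds[:-1]]
    # Label the multi-turn dialog based on the first and the at least one second satisfaction;
    # here a plain average is used as one possible aggregation.
    dialog.overall_satisfaction = sum([first] + seconds) / len(dialog.rounds)
    return dialog
```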
According to an aspect of the embodiments of the present disclosure, there is provided an apparatus for labeling a voice dialog in a human-computer interaction, including:
a first determining module, configured to determine a machine voice reply made by the human-computer interaction system for the previous voice of the user;
a second determining module, configured to determine an emotional characteristic of the user in the current voice made in response to the machine voice reply;
a third determining module, configured to determine a first satisfaction degree of the user with respect to the machine voice reply based on the emotional characteristic;
a fourth determining module, configured to determine, if the current voice is the ending voice in multiple rounds of dialog, at least one second satisfaction degree of the user with respect to the machine voice replies output by the human-computer interaction system in the historical rounds of dialog before the current round of dialog to which the current voice belongs, where one machine voice reply corresponds to one voice of the user; and
a labeling module, configured to label the multiple rounds of dialog based on the first satisfaction degree and the at least one second satisfaction degree.
According to yet another aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, where the storage medium stores a computer program for executing the method for labeling a voice dialog in human-computer interaction according to any of the above embodiments of the present disclosure.
According to still another aspect of an embodiment of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the method for labeling a voice dialog in human-computer interaction according to any one of the above embodiments of the present disclosure.
Based on the method and apparatus for labeling a voice dialog in human-computer interaction, the electronic device, and the medium provided by the embodiments of the present disclosure, a machine voice reply made by the human-computer interaction system for the previous voice of the user is determined, and the emotional characteristic of the user in the current voice made in response to that machine voice reply is determined; a first satisfaction degree of the user with respect to the machine voice reply is then determined based on the emotional characteristic. If the current voice is the ending voice in multiple rounds of dialog, at least one second satisfaction degree of the user with respect to the machine voice replies output by the human-computer interaction system in the historical rounds of dialog before the current round to which the current voice belongs is determined, and the multiple rounds of dialog are then labeled based on the first satisfaction degree and the at least one second satisfaction degree. In this way, the embodiments of the present disclosure determine the user's satisfaction with a machine voice reply from the user's emotional characteristics when responding to that reply, and determine the semantic-understanding accuracy of the human-computer interaction system from the user's satisfaction with each machine voice reply in the multiple rounds of dialog between the user and the system. Automatic labeling of the multiple rounds of dialog is thereby realized, which improves the accuracy and efficiency of corpus labeling for the human-computer interaction system, improves the semantic-understanding accuracy of the system, improves the efficiency and accuracy of the system's replies to user dialogs or of its task execution, and thus improves the human-computer interaction effect.
The technical solution of the present disclosure is further described in detail by the accompanying drawings and examples.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a scene diagram to which the present disclosure is applicable.
Fig. 2 is a flowchart illustrating a method for labeling a voice dialog in human-computer interaction according to an exemplary embodiment of the present disclosure.
Fig. 3 is a flowchart illustrating a method for labeling a voice dialog in human-computer interaction according to another exemplary embodiment of the present disclosure.
Fig. 4 is a flowchart illustrating a method for labeling a voice dialog in human-computer interaction according to still another exemplary embodiment of the present disclosure.
Fig. 5 is a flowchart illustrating a method for labeling a voice dialog in human-computer interaction according to still another exemplary embodiment of the present disclosure.
Fig. 6 is a flow chart diagram of an exemplary application embodiment of the present disclosure.
Fig. 7 is a schematic structural diagram of an apparatus for labeling a voice conversation in human-computer interaction according to an exemplary embodiment of the present disclosure.
Fig. 8 is a schematic structural diagram of an apparatus for labeling a voice conversation in human-computer interaction according to another exemplary embodiment of the present disclosure.
Fig. 9 is a block diagram of an electronic device provided in an exemplary embodiment of the present disclosure.
Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the present disclosure and not all embodiments of the present disclosure, with the understanding that the present disclosure is not limited to the example embodiments described herein.
It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be understood by those skilled in the art that the terms "first", "second", and the like in the embodiments of the present disclosure are used merely to distinguish one element from another, and neither imply any particular technical meaning nor indicate a necessary logical order between them.
It is also understood that in embodiments of the present disclosure, "a plurality" may refer to two or more and "at least one" may refer to one, two or more.
It is also to be understood that any reference to any component, data, or structure in the embodiments of the disclosure, may be generally understood as one or more, unless explicitly defined otherwise or stated otherwise.
In addition, the term "and/or" in the present disclosure merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, the character "/" in the present disclosure generally indicates that the former and latter associated objects are in an "or" relationship.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
The disclosed embodiments may be applied to electronic devices such as terminal devices, computer systems, servers, etc., which are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with electronic devices, such as terminal devices, computer systems, servers, and the like, include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Summary of the application
In NLP-based human-computer interaction, whether the human's real feedback on the machine's reply is accurate, wrong, or merely average is not taken into account. As a result, a large amount of manpower has to be invested in manually labeling the learning corpora used to train the NLP system, which consumes high labor cost and a long time; moreover, the learning corpora cannot be collected and labeled in real time in a specific application, so real-time updating of the learning corpora cannot be achieved.
In view of this, the embodiments of the present disclosure provide a method and an apparatus for labeling a voice dialog in human-computer interaction, an electronic device, and a medium. The satisfaction of the user with a machine voice reply is determined from the emotional characteristics of the user in the current voice made in response to that reply, and the semantic-understanding accuracy of the human-computer interaction system is determined based on the user's satisfaction with each machine voice reply in the multiple rounds of dialog between the user and the system, so that automatic labeling of the multiple rounds of dialog is realized and the accuracy and efficiency of corpus labeling for the human-computer interaction system are improved.
Exemplary System
The embodiments of the present disclosure can be applied to various scenarios with voice interaction, such as an in-vehicle head unit, a user terminal, an application (APP), and the like.
Fig. 1 is a diagram of a scenario to which the present disclosure is applicable. As shown in fig. 1, the system of the embodiment of the present disclosure includes: an audio acquisition module 101, a front-end signal processing module 102, a voice recognition module 103, a video sensor 104, a human-computer interaction system 105, an Emotion Perception System (EPS) 106, a memory 107, and a speaker 108. The EPS 106 may include a voice parameter collection module 1061, an expression recognition module 1062, an emotion determination module 1063, and a satisfaction determination module 1064.
When the embodiment of the present disclosure is applied to a voice interaction scenario, the audio signal of a voice initiated by the user in the current voice interaction scenario is acquired by the audio acquisition module (e.g., a microphone array) 101, processed by the front-end signal processing module 102, and then recognized by the voice recognition module 103 to obtain text information, which is input into the human-computer interaction system 105. The human-computer interaction system 105 understands the meaning of the user's dialog, outputs a corresponding reply on that basis, and converts the reply into voice to obtain a machine voice reply, which is played by the speaker 108.
Then, the audio acquisition module 101 acquires the current voice made by the user in response to the machine voice reply output by the human-computer interaction system 105; the processing of the front-end signal processing module 102, the voice recognition module 103, and the human-computer interaction system 105 is executed again, and the current voice is also input into the EPS 106. In addition, when the audio acquisition module 101 acquires the current voice made by the user for the machine voice reply output by the human-computer interaction system 105, the video sensor (e.g., a camera) 104 acquires a face image of the user at that moment and inputs the face image into the EPS 106. The voice parameter collection module 1061 in the EPS 106 acquires the voice parameters of the current voice collected by the audio acquisition module 101, and the expression recognition module 1062 recognizes the facial expression in the face image. The emotion determination module 1063 then determines the emotional characteristics of the user in the current voice based on the voice parameters and the facial expression, and the satisfaction determination module 1064 determines the satisfaction of the user with respect to the machine voice reply based on the emotional characteristics and stores the previous round of dialog (including the previous voice of the user and the machine voice reply made by the human-computer interaction system 105 for that voice) together with the corresponding satisfaction in the memory 107. The above process is repeated, and the satisfaction of the user with respect to each machine voice reply is determined, until the current voice interaction scenario ends; the multiple rounds of dialog in the current voice interaction scenario are then labeled based on the satisfaction of the user with respect to each machine voice reply obtained by the satisfaction determination module 1064, and the multiple rounds of dialog and the corresponding satisfaction are stored in the memory 107. One round of dialog refers to one voice of the user and the one machine voice reply made by the human-computer interaction system for that voice. A minimal sketch of this per-round pipeline is given below.
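The following Python sketch illustrates the per-round pipeline described above under simplifying assumptions; the function names (e.g. `extract_voice_parameters`, `recognize_expression`) are placeholders standing in for the numbered modules of fig. 1 and are not part of the disclosure.

```python
from typing import Any, Dict, List

memory: List[Dict[str, Any]] = []  # stands in for memory 107

def process_round(previous_voice_audio: bytes,
                  machine_reply_text: str,
                  current_voice_audio: bytes,
                  face_image: Any) -> float:
    """One round of the emotion perception pipeline (EPS 106)."""
    # Module 1061: voice parameters of the current voice (e.g. pitch, volume).
    voice_params = extract_voice_parameters(current_voice_audio)
    # Module 1062: facial expression recognized from the face image.
    expression = recognize_expression(face_image)
    # Module 1063: emotional characteristic from voice parameters + facial expression.
    emotional_feature = fuse_emotion(voice_params, expression)
    # Module 1064: satisfaction of the user with respect to the machine voice reply.
    satisfaction = estimate_satisfaction(emotional_feature)
    # Store the previous round of dialog together with its satisfaction (memory 107).
    memory.append({"user_voice": previous_voice_audio,
                   "machine_reply": machine_reply_text,
                   "satisfaction": satisfaction})
    return satisfaction

# Placeholder interfaces for the modules above (assumed, to keep the sketch self-contained).
def extract_voice_parameters(audio: bytes) -> Dict[str, float]: raise NotImplementedError
def recognize_expression(face_image: Any) -> str: raise NotImplementedError
def fuse_emotion(voice_params: Dict[str, float], expression: str) -> Any: raise NotImplementedError
def estimate_satisfaction(emotional_feature: Any) -> float: raise NotImplementedError
```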
Exemplary method
Fig. 2 is a flowchart illustrating a method for labeling a voice dialog in human-computer interaction according to an exemplary embodiment of the present disclosure. This embodiment can be applied to an electronic device such as an in-vehicle head unit or a user terminal. As shown in fig. 2, the method for labeling a voice dialog in human-computer interaction in this embodiment includes the following steps.
In step 201, a machine voice reply made by the human-computer interaction system for the previous voice of the user is determined. One machine voice reply corresponds to one voice of the user; that is, each machine reply is a reply of the human-computer interaction system to one voice output by the user.
In a particular application, a user's one voice (e.g., please go to ABC mall) and one machine voice reply made by the human-computer interaction system to the one voice (e.g., which ABC mall) may be referred to as a round of dialog. Multiple rounds of dialog may be triggered when a user gives a conversation, such as a voice conversation, in anticipation of the machine giving a corresponding reply or performing a related task.
Optionally, in some embodiments, the voice of the user may be collected by an audio collection device (e.g., a microphone or a microphone array); after front-end signal processing, speech recognition is performed to obtain text information, which is input into the human-computer interaction system. The human-computer interaction system understands the meaning of the user's previous voice and outputs a machine voice reply on that basis, so that in step 201 the machine voice reply output by the human-computer interaction system can be obtained.
In step 202, the emotional characteristic of the user in the current voice made for the machine voice reply is determined. The emotional characteristic in the embodiments of the present disclosure is a feature used to represent the emotion of the user.
In step 203, a first satisfaction degree of the user with respect to the machine voice reply is determined based on the emotional characteristic. The first satisfaction degree indicates how satisfied the user is with the machine voice reply, and can also be regarded as the satisfaction of the user with the previous round of dialog (including the previous voice of the user and the machine voice reply made by the human-computer interaction system for that voice).
Optionally, in some embodiments, the satisfaction level in the embodiments of the present disclosure may be expressed as a specific score, and a higher score may be set to indicate that the user has a higher satisfaction level with respect to the machine voice reply.
Alternatively, in other implementations, the satisfaction in the embodiments of the present disclosure may be expressed as a satisfaction level. In a specific application, the satisfaction of the user may be divided into a plurality of (e.g., 5) levels according to actual requirements, where the levels transition gradually from satisfied to dissatisfied or from dissatisfied to satisfied; for example, when the satisfaction of the user is divided into 5 levels, the levels may range from dissatisfied to satisfied (e.g., dissatisfied, somewhat dissatisfied, neutral, somewhat satisfied, and satisfied). The embodiments of the present disclosure do not limit the specific number of satisfaction levels or their correspondence to user satisfaction.
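As an illustration only of the score and level representations just described, the sketch below buckets a numeric satisfaction score into one of five levels; the thresholds and level names are assumptions, not values from the disclosure.

```python
# Assumed convention: a satisfaction score in [0, 1], split into five equal-width levels.
LEVELS = ["dissatisfied", "somewhat dissatisfied", "neutral", "somewhat satisfied", "satisfied"]

def score_to_level(score: float) -> str:
    score = min(max(score, 0.0), 1.0)                       # clamp into [0, 1]
    index = min(int(score * len(LEVELS)), len(LEVELS) - 1)  # 1.0 falls into the top level
    return LEVELS[index]

print(score_to_level(0.82))  # -> "satisfied"
```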
In step 204, if the current voice is the ending voice in the multiple rounds of dialog between the user and the human-computer interaction system, at least one second satisfaction degree of the user with respect to the machine voice replies output by the human-computer interaction system in the historical rounds of dialog before the round of dialog to which the current voice belongs is determined.
In the embodiments of the present disclosure, one satisfaction degree is generated for each round of dialog. Each round of dialog before the current round may be referred to as a historical round of dialog, and its satisfaction degree may be referred to as a second satisfaction degree; depending on how many historical rounds precede the current round, there is at least one second satisfaction degree.
In step 205, the multiple rounds of dialog are labeled based on the first satisfaction degree and the at least one second satisfaction degree.
Based on this embodiment, a machine voice reply made by the human-computer interaction system for the previous voice of the user is determined, and the emotional characteristic of the user in the current voice made in response to that machine voice reply is determined; a first satisfaction degree of the user with respect to the machine voice reply is then determined based on the emotional characteristic. If the current voice is the ending voice in multiple rounds of dialog, at least one second satisfaction degree of the user with respect to the machine voice replies output by the human-computer interaction system in the historical rounds of dialog before the current round to which the current voice belongs is determined, and the multiple rounds of dialog are then labeled based on the first satisfaction degree and the at least one second satisfaction degree. In this way, the user's satisfaction with a machine voice reply is determined from the user's emotional characteristics when responding to that reply, and the semantic-understanding accuracy of the human-computer interaction system is determined from the user's satisfaction with each machine voice reply in the multiple rounds of dialog between the user and the system. Automatic labeling of the multiple rounds of dialog is thereby realized, which improves the accuracy and efficiency of corpus labeling for the human-computer interaction system, improves the semantic-understanding accuracy of the system, improves the efficiency and accuracy of the system's replies to user dialogs or of its task execution, and thus improves the human-computer interaction effect.
Fig. 3 is a flowchart illustrating a method for labeling a voice dialog in human-computer interaction according to another exemplary embodiment of the present disclosure. As shown in fig. 3, on the basis of the embodiment shown in fig. 2, step 202 may include the following steps: determining the voice parameters of the user in the current voice made for the machine voice reply, determining the facial expression of the user when making the current voice, and determining the emotional characteristic based on the voice parameters and the facial expression, as described below.
Optionally, in some embodiments, the speech parameters may include, but are not limited to, any one or more of the following: pitch, volume (also called loudness), etc., and the embodiments of the present disclosure do not limit the specific parameters of the speech parameters.
Pitch indicates how high or low a sound is; it is determined by the frequency of the vibration that produces the sound, and the faster the vibration, the higher the pitch. Volume indicates how loud a sound is; it is determined by the amplitude of the vibration that produces the sound, and the greater the amplitude, the greater the loudness. Colloquially, pitch refers to the sharpness of a sound, while volume refers to its loudness; for example, a child's whisper has a high pitch but a low volume, whereas an adult's scolding voice has a low pitch but a large volume.
Optionally, in some embodiments, the facial expression may include, but is not limited to, any one or more of the following: satisfaction, dissatisfaction, happiness, neutrality, anger, fidget, etc., and the embodiments of the present disclosure do not limit the specific types of facial expressions.
Optionally, in some of these embodiments, speech parameters and facial expressions may be used as emotional features; or, feature extraction may be performed on the voice parameters and the facial expressions respectively, and the extracted features are fused to obtain emotional features.
Based on the embodiment of the disclosure, the emotion characteristics of the user in response to the machine voice are determined through the voice parameters and the facial expressions of the user in the current voice, so that the emotion of the user can be objectively and truly determined, and the satisfaction degree of the user in response to the machine voice is determined.
Optionally, in some embodiments, starting from the starting time point of the user's current voice made for the machine voice reply, a voice parameter component corresponding to each syllable of the current voice is determined, taking a syllable as the unit; that is, each syllable corresponds to one voice parameter component (which may also be referred to as a unit voice parameter). The voice parameters of the current voice made for the machine voice reply are then determined based on the voice parameter components obtained during the duration of the current voice, for example by accumulating or averaging them. The embodiments of the present disclosure do not limit the specific manner in which the voice parameters of the current voice are determined from the voice parameter components over its duration.
Based on this embodiment, the voice parameter component of each syllable within the duration of the current voice is determined in units of syllables, and the voice parameters of the whole voice are determined from these components. This makes the determination of the voice parameters more objective and the voice parameters of the whole voice more accurate, so that the emotional characteristic of the user when making the current voice can be determined accurately.
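As a hedged illustration of the syllable-based computation described above, the sketch below averages per-syllable pitch and volume components over the duration of one utterance; the syllable segmentation and component values are assumed to come from upstream signal processing that is not shown.

```python
from statistics import mean
from typing import List, NamedTuple

class SyllableComponent(NamedTuple):
    pitch: float   # unit voice parameter: pitch of one syllable
    volume: float  # unit voice parameter: volume (loudness) of one syllable

def utterance_voice_parameters(components: List[SyllableComponent]) -> dict:
    """Aggregate per-syllable components over the duration of the utterance by averaging;
    accumulation (summing) would be the other option mentioned above."""
    if not components:
        raise ValueError("no syllable components were detected for this utterance")
    return {
        "pitch": mean(c.pitch for c in components),
        "volume": mean(c.volume for c in components),
    }
```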
Fig. 4 is a flowchart illustrating a method for labeling a voice dialog in human-computer interaction according to still another exemplary embodiment of the present disclosure. As shown in fig. 4, on the basis of the embodiment shown in fig. 3, step 2022 may include the following steps.
Step 20221, acquiring a face image of the user when the current voice is made for the machine voice reply.
Step 20222, inputting the face image into a first neural network obtained by pre-training, and outputting the facial expression corresponding to the face image through the first neural network.
In some embodiments, when the user makes the current voice in response to the machine voice reply, a face image of the user may be collected by a visual sensor (camera) and input into the pre-trained first neural network, and the facial expression corresponding to the face image is output by the first neural network. For example, when an audio collection device (e.g., a microphone or a microphone array) detects that the user is making the current voice for the machine voice reply, the camera is triggered to collect a face image of the user at that moment and input it into the first neural network, which then identifies the facial expression in the face image and outputs it.
In the embodiments of the present disclosure, the first neural network can be trained in advance on face image samples with facial expression labeling information; after training is completed, the network can identify the facial expression corresponding to an input face image.
Based on the embodiment, the facial expression corresponding to the facial image can be rapidly and accurately identified through the neural network, and the identification efficiency and accuracy of the facial expression are improved, so that the emotion characteristics of the user when the user makes the voice are accurately determined.
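As a hedged illustration only, the following is a minimal PyTorch-style sketch of such a facial-expression classifier; the architecture, class count, and names are assumptions, since the disclosure does not specify the structure of the first neural network.

```python
import torch
import torch.nn as nn

EXPRESSIONS = ["satisfied", "dissatisfied", "happy", "neutral", "angry", "fidgety"]

class ExpressionNet(nn.Module):
    """First neural network: maps a face image to a facial-expression class."""
    def __init__(self, num_classes: int = len(EXPRESSIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes),
        )

    def forward(self, face_image: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(face_image))

def predict_expression(model: ExpressionNet, face_image: torch.Tensor) -> str:
    # face_image: a (1, 3, H, W) tensor; returns the predicted expression label.
    with torch.no_grad():
        logits = model(face_image)
    return EXPRESSIONS[int(logits.argmax(dim=1))]
```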
Optionally, in some implementations, in step 203 of any of the above embodiments, the emotional feature may be input into a second neural network obtained by pre-training, and the first satisfaction degree is output by the second neural network.
In the embodiments of the present disclosure, the second neural network can be trained in advance on emotional-feature samples with satisfaction labeling information; after training is completed, the network can identify the satisfaction degree (i.e., the first satisfaction degree) corresponding to an input emotional feature.
Based on the embodiment, the satisfaction corresponding to the emotion characteristics can be quickly and accurately determined through the neural network, so that the satisfaction of the user for machine voice reply can be quickly, accurately and objectively determined.
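For the alternative in which a second neural network maps the emotional feature directly to the first satisfaction degree, a minimal sketch under assumed feature layout and dimensions might look as follows; none of these choices are prescribed by the disclosure.

```python
import torch
import torch.nn as nn

class SatisfactionNet(nn.Module):
    """Second neural network: maps an emotional-feature vector to a satisfaction score."""
    def __init__(self, feature_dim: int = 8):  # e.g. [pitch, volume] + expression one-hot (assumed)
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feature_dim, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),  # satisfaction score in (0, 1)
        )

    def forward(self, emotional_feature: torch.Tensor) -> torch.Tensor:
        return self.mlp(emotional_feature)
```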
Optionally, in another implementation manner, in step 203 of any one of the above embodiments, a first emotion score corresponding to the speech parameter is determined, a second emotion score corresponding to the facial expression is determined, and then the first emotion score and the second emotion score are weighted and summed according to a preset manner to obtain the first satisfaction.
For example, a first emotion score corresponding to the voice parameters can be determined through a third neural network obtained by pre-training, and a second emotion score corresponding to the facial expression can be determined through a fourth neural network obtained by pre-training; then, the first emotion score and the second emotion score are weighted and summed as a·P + b·Q = S to obtain the first satisfaction degree. The values of a and b can be preset and updated according to actual requirements; P and Q respectively denote the first emotion score and the second emotion score, each greater than 0; S denotes the first satisfaction degree.
Based on the embodiment, a first emotion score corresponding to the voice parameter and a second emotion score corresponding to the facial expression can be respectively determined, the weight values of the first emotion score and the second emotion score are reasonably determined according to requirements, and the first satisfaction degree is obtained by means of weighting and summing the first emotion score and the second emotion score, so that the satisfaction degree is determined to be more in line with actual requirements.
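As a hedged numerical illustration of the weighted sum above, the weights and scores below are arbitrary example values, not values from the disclosure.

```python
def weighted_satisfaction(p: float, q: float, a: float = 0.4, b: float = 0.6) -> float:
    """Compute S = a*P + b*Q, where P is the voice-parameter emotion score and
    Q is the facial-expression emotion score (both assumed > 0)."""
    assert p > 0 and q > 0, "emotion scores are expected to be positive"
    return a * p + b * q

# Example: P = 0.7, Q = 0.9 with weights a = 0.4, b = 0.6 gives S = 0.82.
print(weighted_satisfaction(0.7, 0.9))
```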
Optionally, in some implementations, in step 205 of any of the above embodiments, a comprehensive satisfaction degree of the multiple rounds of dialog may be determined based on the first satisfaction degree and the at least one second satisfaction degree, and the multiple rounds of dialog may then be labeled with the comprehensive satisfaction degree.
Based on this embodiment, the satisfaction of each round of dialog in the current service scenario can be comprehensively considered to determine the comprehensive satisfaction of the user with the machine voice replies in the whole current service scenario, so that the semantic-understanding accuracy of the human-computer interaction system in the current service scenario is determined as a whole, automatic labeling of the multiple rounds of dialog is realized, and the accuracy and efficiency of corpus labeling for the human-computer interaction system are improved.
Optionally, in some implementations, after step 205 in any of the above embodiments, the current round of dialog may further be labeled with the first satisfaction degree, so that the satisfaction degree of each round of dialog is recorded. This makes it possible, when the human-machine dialog in the current service scenario ends, to label the rounds of dialog in that scenario based on the satisfaction degree of each round.
Fig. 5 is a flowchart illustrating a method for labeling a voice dialog in human-computer interaction according to still another exemplary embodiment of the present disclosure. As shown in fig. 5, based on the embodiment shown in fig. 2, step 201 may include the following steps.
In step 2011, speech recognition is performed on the previous voice to obtain a first text recognition result.
In step 2012, semantic analysis is performed on the first text recognition result based on the historical rounds of dialog before the current round of dialog, to obtain a first semantic analysis result.
In step 2013, reply content is obtained according to the first semantic analysis result.
In step 2014, the reply content is converted into voice to obtain the machine voice reply.
Based on this embodiment, speech recognition is performed on the previous voice made by the user to obtain the first text recognition result; semantic analysis is performed on the first text recognition result based on the historical rounds of dialog before the current round to obtain the first semantic analysis result; reply content is then obtained according to the first semantic analysis result and converted into voice to obtain the machine voice reply. In this way, the machine voice reply is obtained by performing semantic analysis on the text recognition result of the user's previous voice in combination with the historical rounds of dialog.
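A hedged sketch of this reply pipeline (speech recognition, history-aware semantic analysis, reply generation, and text-to-speech) is given below; every function here is a placeholder standing in for the corresponding module, not an API defined by the disclosure.

```python
from typing import List, Tuple

def machine_voice_reply(previous_voice: bytes,
                        history_rounds: List[Tuple[str, str]]) -> bytes:
    """history_rounds: (user_text, machine_reply_text) pairs of earlier rounds."""
    text = speech_to_text(previous_voice)                # step 2011: speech recognition
    semantics = analyze_semantics(text, history_rounds)  # step 2012: history-aware semantic analysis
    reply_text = generate_reply(semantics)               # step 2013: obtain reply content
    return text_to_speech(reply_text)                    # step 2014: convert reply content into voice

# Placeholder module interfaces (assumed, to keep the sketch self-contained).
def speech_to_text(audio: bytes) -> str: raise NotImplementedError
def analyze_semantics(text: str, history: List[Tuple[str, str]]) -> dict: raise NotImplementedError
def generate_reply(semantics: dict) -> str: raise NotImplementedError
def text_to_speech(text: str) -> bytes: raise NotImplementedError
```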
Fig. 6 is a flow chart diagram of an exemplary application embodiment of the present disclosure. As shown in fig. 6, the application embodiment takes an application scenario in the navigation APP as an example, and explains an application of the embodiment of the present disclosure. The application embodiment comprises the following steps:
The first voice ("ABC mall") and the first machine voice reply ("Which ABC mall?") constitute one round of dialog, which may be referred to as the first round of dialog.
During the process of the user uttering the second voice "ABC mall at X", steps 305 and 306 are performed simultaneously.
Thereafter, step 310 is performed.
In step 306, the camera collects a face image of the user and inputs the face image into the EPS.
In step 308, the EPS determines the emotional characteristics of the user when uttering the second voice, based on the voice parameters and the facial expression.
In step 309, the EPS determines a first satisfaction of the user with respect to the first machine voice reply based on the emotional characteristic, where the first satisfaction corresponds to a satisfaction of the first round of conversation.
In step 310, the human-computer interaction system understands the meaning of the user's dialog and outputs a corresponding second machine voice reply: "Is it the first ABC mall at X?"
The second voice ("the ABC mall at X") and the second machine voice reply ("Is it the first ABC mall at X?") constitute one round of dialog, which may be referred to as the second round of dialog.
In step 311, the user utters a third voice ("Ben!") in response to the second machine voice reply.
While the user utters the third voice, steps 312 and 313 are performed simultaneously.
Thereafter, step 317 is performed.
In step 313, the camera collects a face image of the user and inputs the face image into the EPS.
In step 315, the EPS determines the emotional characteristics of the user when uttering the third voice, based on the voice parameters and the facial expression.
In step 316, the EPS determines a first satisfaction of the user with respect to the second machine voice reply based on the emotional characteristic, where the first satisfaction corresponds to a satisfaction of the second round of conversation.
At the moment, the second round of dialogue is the current round of dialogue, the first round of dialogue becomes the historical round of dialogue before the current round of dialogue, and the satisfaction degree of the first round of dialogue becomes the second satisfaction degree.
In step 317, the human-computer interaction system understands the meaning of the user's dialog and outputs a corresponding third machine voice reply: "Is it the first ABC mall at X?"
The third voice and the third machine voice reply constitute one round of dialog, which may be referred to as the third round of dialog.
In step 318, the user utters a fourth voice ("Right") in response to the third machine voice reply.
Then, for the fourth voice, the operations of steps 305 to 309 (or steps 312 to 316) are performed to obtain the first satisfaction degree of the user with respect to the third machine voice reply.
In step 320, if the microphone array does not collect any voice from the user within a preset time, that is, the EPS receives no new audio signal or face image within the preset time, it is confirmed that the fourth voice is the ending voice in the multiple rounds of dialog between the user and the human-computer interaction system, and the three second satisfaction degrees corresponding to the first to third rounds of dialog are determined.
In step 321, the four rounds of dialog are labeled based on the first satisfaction degree corresponding to the fourth voice and the three second satisfaction degrees corresponding to the first to third rounds of dialog.
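Purely as an illustration of what the annotation produced at step 321 might look like, the sketch below stores per-round and comprehensive satisfaction values; all numbers are made up for the example, and the disclosure does not prescribe any storage format.

```python
# Hypothetical per-round satisfactions (the first satisfaction for round 4 and the
# second satisfactions for rounds 1-3); the values are illustrative only.
round_satisfactions = {"round_1": 0.8, "round_2": 0.4, "round_3": 0.2, "round_4": 0.9}

labeled_multi_turn_dialog = {
    "rounds": round_satisfactions,
    # One possible comprehensive label: the mean of all per-round satisfactions.
    "overall_satisfaction": sum(round_satisfactions.values()) / len(round_satisfactions),
}
print(labeled_multi_turn_dialog["overall_satisfaction"])  # 0.575
```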
Any of the methods for labeling a voice dialog in human-computer interaction provided by the embodiments of the present disclosure may be performed by any suitable device having data-processing capabilities, including but not limited to a terminal device, a server, and the like. Alternatively, any such method may be executed by a processor; for example, the processor may execute any method for labeling a voice dialog in human-computer interaction mentioned in the embodiments of the present disclosure by calling corresponding instructions stored in a memory. This will not be described again below.
Exemplary devices
Fig. 7 is a schematic structural diagram of an apparatus for labeling a voice dialog in human-computer interaction according to an exemplary embodiment of the present disclosure. The apparatus can be provided in an electronic device such as an in-vehicle head unit or a user terminal, and performs the method for labeling a voice dialog in human-computer interaction according to any of the above embodiments of the present disclosure. As shown in fig. 7, the apparatus for labeling a voice dialog in human-computer interaction of this embodiment includes: a first determining module 401, a second determining module 402, a third determining module 403, a fourth determining module 404, and a labeling module 405. Wherein:
a first determining module 401, configured to determine a machine voice reply made by the human-computer interaction system for a previous voice of the user.
A second determining module 402, configured to determine an emotional characteristic of the user in the current voice made for the machine voice reply.
A third determining module 403, configured to determine a first satisfaction degree of the user with respect to the machine voice reply based on the emotional characteristic.
A fourth determining module 404, configured to determine, if the current voice is the ending voice in multiple rounds of dialog, at least one second satisfaction degree of the user with respect to the machine voice replies output by the human-computer interaction system in the historical rounds of dialog before the current round of dialog to which the current voice belongs, where one machine voice reply corresponds to one voice of the user.
And the labeling module 405 is used for labeling the multiple turns of the dialog based on the first satisfaction degree and the at least one second satisfaction degree.
Based on this embodiment, a machine voice reply made by the human-computer interaction system for the previous voice of the user is determined, and the emotional characteristic of the user in the current voice made in response to that machine voice reply is determined; a first satisfaction degree of the user with respect to the machine voice reply is then determined based on the emotional characteristic. If the current voice is the ending voice in multiple rounds of dialog, at least one second satisfaction degree of the user with respect to the machine voice replies output by the human-computer interaction system in the historical rounds of dialog before the current round to which the current voice belongs is determined, and the multiple rounds of dialog are then labeled based on the first satisfaction degree and the at least one second satisfaction degree. In this way, the user's satisfaction with a machine voice reply is determined from the user's emotional characteristics when responding to that reply, and the semantic-understanding accuracy of the human-computer interaction system is determined from the user's satisfaction with each machine voice reply in the multiple rounds of dialog between the user and the system. Automatic labeling of the multiple rounds of dialog is thereby realized, which improves the accuracy and efficiency of corpus labeling for the human-computer interaction system, improves the semantic-understanding accuracy of the system, improves the efficiency and accuracy of the system's replies to user dialogs or of its task execution, and thus improves the human-computer interaction effect.
Fig. 8 is a schematic structural diagram of an apparatus for labeling a voice dialog in human-computer interaction according to another exemplary embodiment of the present disclosure. As shown in fig. 8, on the basis of the embodiment shown in fig. 7, in this embodiment, the second determining module 402 may include: a first determining unit 4021, configured to determine the voice parameters of the user in the current voice made for the machine voice reply; a second determining unit 4022, configured to determine the facial expression of the user when the current voice is made for the machine voice reply; and a third determining unit 4023, configured to determine, based on the voice parameters and the facial expression, the emotional characteristic of the user in the current voice made for the machine voice reply.
Optionally, in some embodiments, the first determining unit 4021 is specifically configured to: determine, starting from the detected starting time point of the user's current voice made for the machine voice reply and taking a syllable as the unit, the voice parameter component corresponding to each syllable of the current voice; and determine the voice parameters of the current voice made for the machine voice reply based on the voice parameter components obtained during the duration of the current voice.
Optionally, referring back to fig. 8, in a further exemplary embodiment, the second determining module 402 may further include: the first obtaining unit 4024 is configured to obtain a face image when the user makes the current voice for the machine voice reply. Correspondingly, in this embodiment, the second determining unit 4022 is specifically configured to: and inputting the face image into a first neural network obtained by pre-training, and outputting the face expression corresponding to the face image through the first neural network.
Optionally, in some embodiments, the third determining module 403 is specifically configured to: and inputting the emotional features into a second neural network obtained by pre-training, and outputting the first satisfaction degree through the second neural network.
Optionally, referring back to fig. 8, in some embodiments, the third determining module 403 may include: a third determining unit 4031, configured to determine a first emotion score corresponding to the voice parameter; a fourth determining unit 4032, configured to determine a second emotion score corresponding to the facial expression; and the weighting processing unit 4033 is configured to perform weighted summation on the first emotion score and the second emotion score according to a preset manner, so as to obtain a first satisfaction.
Optionally, referring back to fig. 8, in some embodiments, the labeling module 405 may include: a fifth determining unit 4051, configured to determine a comprehensive satisfaction degree of the multiple rounds of conversations based on the first satisfaction degree and the at least one second satisfaction degree; and the labeling unit 4052 is used for labeling the comprehensive satisfaction degrees of the multiple rounds of conversations.
Optionally, referring back to fig. 8, in some embodiments, the labeling module 405 may further be configured to: and marking the first satisfaction degree for the current round of conversation.
Optionally, referring back to fig. 8, in some embodiments, the first determining module 401 may include: a voice recognition unit 4011, configured to perform speech recognition on the previous voice to obtain a first text recognition result; a semantic analysis unit 4012, configured to perform semantic analysis on the first text recognition result based on the historical rounds of dialog to obtain a first semantic analysis result; a second obtaining unit 4013, configured to obtain reply content according to the first semantic analysis result; and a conversion unit 4014, configured to convert the reply content into voice to obtain the machine voice reply.
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present disclosure is described with reference to fig. 9. The electronic device may be either or both of the first device and the second device, or a stand-alone device separate from them, which stand-alone device may communicate with the first device and the second device to receive the acquired input signals therefrom.
FIG. 9 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure. As shown in fig. 9, the electronic device includes one or more processors 11 and a memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
In one example, the electronic device may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, when the electronic device is the first device 100 or the second device 200, the input device 13 may be a microphone or a microphone array as described above for capturing an input signal of a sound source. When the electronic device is a stand-alone device, the input means 13 may be a communication network connector for receiving the acquired input signals from the first device and the second device.
The input device 13 may also include, for example, a keyboard, a mouse, and the like.
The output device 14 may output various information including the determined distance information, direction information, and the like to the outside. The output devices 14 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.
Of course, for simplicity, only some of the components of the electronic device relevant to the present disclosure are shown in fig. 9, omitting components such as buses, input/output interfaces, and the like. In addition, the electronic device may include any other suitable components, depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the method of labeling voice dialogs in human-computer interaction according to various embodiments of the present disclosure described in the "exemplary methods" section above in this specification.
The computer program product may write program code for carrying out operations of embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the method of labeling voice dialogs in human-computer interaction according to various embodiments of the present disclosure described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may be any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments; however, it should be noted that the advantages, effects, and the like mentioned in the present disclosure are merely examples and not limitations, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description only and is not intended to be limiting, since the present disclosure is not limited to the specific details described above.
In the present specification, the embodiments are described in a progressive manner, and each embodiment focuses on its differences from the other embodiments; for the same or similar parts among the embodiments, reference may be made to one another. Because the system embodiments basically correspond to the method embodiments, their description is relatively brief, and reference may be made to the corresponding parts of the description of the method embodiments for relevant details.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the term "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as, but not limited to."
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.
Claims (10)
1. A method of annotating a voice dialog in a human-computer interaction, comprising:
determining a machine voice reply made by a human-computer interaction system in response to a previous voice of a user;
determining an emotional feature of the user in a current voice made in response to the machine voice reply;
determining a first satisfaction degree of the user with respect to the machine voice reply based on the emotional feature;
if the current voice is an ending voice of a multi-turn dialog, determining at least one second satisfaction degree of the user with respect to machine voice replies output by the human-computer interaction system in history rounds of dialog preceding the current round of dialog to which the current voice belongs, wherein each machine voice reply corresponds to one voice of the user; and
annotating the multi-turn dialog based on the first satisfaction degree and the at least one second satisfaction degree.
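As an aid to reading claim 1 (and the labeling steps of claims 6 and 7 below), the following Python sketch shows one way the recited flow could be organized. It is illustrative only and not part of the claims; the data classes, the numeric satisfaction values, and the averaging combiner are all assumptions.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional, Sequence

@dataclass
class DialogRound:
    """One user voice / machine voice reply pair and the satisfaction determined for it."""
    user_voice: str
    machine_reply: str
    satisfaction: Optional[float] = None

@dataclass
class MultiTurnDialog:
    rounds: List[DialogRound] = field(default_factory=list)
    label: Optional[float] = None       # annotation for the whole multi-turn dialog

def annotate_if_ended(dialog: MultiTurnDialog,
                      first_satisfaction: float,
                      is_end_voice: bool,
                      combine: Callable[[float, Sequence[float]], float]) -> None:
    """Attach the first satisfaction to the current round; if the current voice ends the
    dialog, collect the second satisfactions of the history rounds and label the dialog."""
    current_round = dialog.rounds[-1]
    current_round.satisfaction = first_satisfaction
    if not is_end_voice:
        return
    seconds = [r.satisfaction for r in dialog.rounds[:-1] if r.satisfaction is not None]
    dialog.label = combine(first_satisfaction, seconds)

# Example with assumed scores and a simple averaging combiner.
dialog = MultiTurnDialog(rounds=[
    DialogRound("u1", "m1", satisfaction=0.45),
    DialogRound("u2", "m2", satisfaction=0.58),
    DialogRound("u3 (end voice)", "m3"),
])
annotate_if_ended(dialog, first_satisfaction=0.62, is_end_voice=True,
                  combine=lambda first, seconds: (first + sum(seconds)) / (1 + len(seconds)))
print(dialog.label)   # ≈ 0.55
```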
2. The method of claim 1, wherein the determining the emotional feature of the user in the current voice made in response to the machine voice reply comprises:
determining a voice parameter of the user in the current voice made in response to the machine voice reply;
determining a facial expression of the user in the current voice made in response to the machine voice reply; and
determining the emotional feature of the user in the current voice made in response to the machine voice reply based on the voice parameter and the facial expression.
3. The method of claim 2, wherein the determining the voice parameter of the user in the current voice made in response to the machine voice reply comprises:
determining, on a syllable-by-syllable basis starting from a detected starting time point of the current voice made by the user in response to the machine voice reply, a voice parameter component corresponding to each syllable of the current voice; and
determining the voice parameter of the user for the current voice made in response to the machine voice reply based on the voice parameter components obtained over the duration of the current voice.
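A minimal sketch of the per-syllable computation in claim 3, assuming pitch and energy as example voice parameter components and averaging as the aggregation; the concrete syllable values below are made up for illustration.

```python
from statistics import mean
from typing import Dict, List

def aggregate_voice_parameters(syllable_components: List[Dict[str, float]]) -> Dict[str, float]:
    """Combine the per-syllable components collected from the detected starting time
    point onward into one voice parameter for the whole current voice (here: a mean)."""
    if not syllable_components:
        return {}
    keys = syllable_components[0].keys()
    return {key: mean(component[key] for component in syllable_components) for key in keys}

# Three syllables with assumed pitch (Hz) and energy values.
components = [
    {"pitch": 220.0, "energy": 0.62},
    {"pitch": 245.0, "energy": 0.71},
    {"pitch": 260.0, "energy": 0.80},
]
voice_parameter = aggregate_voice_parameters(components)   # {'pitch': 241.66..., 'energy': ≈0.71}
```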
4. The method of claim 2, wherein the determining the facial expression of the user in the current voice made in response to the machine voice reply comprises:
acquiring a face image of the user when the user makes the current voice in response to the machine voice reply; and
inputting the face image into a pre-trained first neural network, and outputting, through the first neural network, the facial expression corresponding to the face image.
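The "first neural network" of claim 4 is not specified further. The sketch below assumes a small PyTorch image classifier with a made-up three-class expression label set, purely to show the inference path from a face image to a facial expression; both the network architecture and the EXPRESSIONS labels are assumptions.

```python
import torch
import torch.nn as nn

EXPRESSIONS = ["satisfied", "neutral", "dissatisfied"]   # assumed label set

class ExpressionNet(nn.Module):
    """Stand-in for the pre-trained 'first neural network' of claim 4."""
    def __init__(self, num_classes: int = len(EXPRESSIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

def classify_expression(model: ExpressionNet, face_image: torch.Tensor) -> str:
    """Feed the face image captured while the user speaks and read out the expression label."""
    with torch.no_grad():
        logits = model(face_image.unsqueeze(0))          # add a batch dimension
    return EXPRESSIONS[int(logits.argmax(dim=1))]

# Usage with a random stand-in image (3 x 224 x 224); a real system would load trained
# weights and a cropped face image instead.
expression = classify_expression(ExpressionNet().eval(), torch.rand(3, 224, 224))
```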
5. The method of claim 2, wherein the determining the first satisfaction degree of the user with respect to the machine voice reply based on the emotional feature comprises:
determining a first emotion score corresponding to the voice parameter;
determining a second emotion score corresponding to the facial expression; and
performing, in a preset manner, a weighted summation of the first emotion score and the second emotion score to obtain the first satisfaction degree.
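A one-function sketch of the weighted summation in claim 5. The 0.6/0.4 weights and the example scores are assumptions; the claim only requires that the weighting follow a preset manner.

```python
def first_satisfaction_degree(first_emotion_score: float, second_emotion_score: float,
                              voice_weight: float = 0.6, expression_weight: float = 0.4) -> float:
    """Weighted summation of the emotion score from the voice parameter and the
    emotion score from the facial expression, using preset weights."""
    return voice_weight * first_emotion_score + expression_weight * second_emotion_score

# Example: a mildly positive voice and a neutral expression.
satisfaction = first_satisfaction_degree(first_emotion_score=0.7, second_emotion_score=0.5)  # ≈ 0.62
```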
6. The method of any one of claims 2-5, wherein the annotating the multi-turn dialog based on the first satisfaction degree and the at least one second satisfaction degree comprises:
determining a comprehensive satisfaction degree of the multi-turn dialog based on the first satisfaction degree and the at least one second satisfaction degree; and
labeling the multi-turn dialog with the comprehensive satisfaction degree.
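One possible realization of the comprehensive satisfaction degree of claim 6, sketched with a recency-weighted average; the weighting scheme and the score values are assumptions, and a plain mean would satisfy the claim just as well.

```python
from typing import Sequence

def comprehensive_satisfaction(first: float, seconds: Sequence[float]) -> float:
    """Fold the second satisfaction degrees of the history rounds and the first
    satisfaction degree of the final round into one label, weighting later rounds
    more heavily (weights 1, 2, 3, ...)."""
    scores = [*seconds, first]                      # chronological order, final round last
    weights = range(1, len(scores) + 1)
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

dialog_label = comprehensive_satisfaction(first=0.62, seconds=[0.45, 0.58])
# (1*0.45 + 2*0.58 + 3*0.62) / 6 ≈ 0.578
```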
7. The method of any one of claims 1-6, further comprising, after the determining the first satisfaction degree of the user with respect to the machine voice reply based on the emotional feature:
labeling the current round of dialog with the first satisfaction degree.
8. An apparatus for annotating a voice dialog in a human-computer interaction, comprising:
a first determining module, configured to determine a machine voice reply made by a human-computer interaction system in response to a previous voice of a user;
a second determining module, configured to determine an emotional feature of the user in a current voice made in response to the machine voice reply;
a third determining module, configured to determine a first satisfaction degree of the user with respect to the machine voice reply based on the emotional feature;
a fourth determining module, configured to determine, if the current voice is an ending voice of a multi-turn dialog, at least one second satisfaction degree of the user with respect to machine voice replies output by the human-computer interaction system in history rounds of dialog preceding the current round of dialog to which the current voice belongs, wherein each machine voice reply corresponds to one voice of the user; and
a labeling module, configured to annotate the multi-turn dialog based on the first satisfaction degree and the at least one second satisfaction degree.
9. A computer-readable storage medium storing a computer program, the computer program being configured to perform the method of annotating a voice dialog in a human-computer interaction according to any one of claims 1-7.
10. An electronic device, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to read the executable instructions from the memory and execute the instructions to implement the method of annotating a voice dialog in a human-computer interaction according to any one of claims 1-7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111069995.3A CN113808621A (en) | 2021-09-13 | 2021-09-13 | Method and device for marking voice conversation in man-machine interaction, equipment and medium |
PCT/CN2022/112490 WO2023035870A1 (en) | 2021-09-13 | 2022-08-15 | Method and apparatus for labeling speech dialogue during human-computer interaction, and device and medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111069995.3A CN113808621A (en) | 2021-09-13 | 2021-09-13 | Method and device for marking voice conversation in man-machine interaction, equipment and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113808621A true CN113808621A (en) | 2021-12-17 |
Family
ID=78941020
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111069995.3A Pending CN113808621A (en) | 2021-09-13 | 2021-09-13 | Method and device for marking voice conversation in man-machine interaction, equipment and medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113808621A (en) |
WO (1) | WO2023035870A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023035870A1 (en) * | 2021-09-13 | 2023-03-16 | 地平线(上海)人工智能技术有限公司 | Method and apparatus for labeling speech dialogue during human-computer interaction, and device and medium |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116501592B (en) * | 2023-06-19 | 2023-09-19 | 阿里巴巴(中国)有限公司 | Man-machine interaction data processing method and server |
CN116775850A (en) * | 2023-08-24 | 2023-09-19 | 北京珊瑚礁科技有限公司 | Chat model training method, device, equipment and medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2375589A1 (en) * | 2002-03-08 | 2003-09-08 | Diaphonics, Inc. | Method and apparatus for determining user satisfaction with automated speech recognition (asr) system and quality control of the asr system |
JP2011210133A (en) * | 2010-03-30 | 2011-10-20 | Seiko Epson Corp | Satisfaction degree calculation method, satisfaction degree calculation device and program |
CN105654250A (en) * | 2016-02-01 | 2016-06-08 | 百度在线网络技术(北京)有限公司 | Method and device for automatically assessing satisfaction degree |
CN108388926A (en) * | 2018-03-15 | 2018-08-10 | 百度在线网络技术(北京)有限公司 | The determination method and apparatus of interactive voice satisfaction |
CN109036405A (en) * | 2018-07-27 | 2018-12-18 | 百度在线网络技术(北京)有限公司 | Voice interactive method, device, equipment and storage medium |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10896428B1 (en) * | 2017-12-14 | 2021-01-19 | Amazon Technologies, Inc. | Dynamic speech to text analysis and contact processing using agent and customer sentiments |
CN108255307A (en) * | 2018-02-08 | 2018-07-06 | 竹间智能科技(上海)有限公司 | Man-machine interaction method, system based on multi-modal mood and face's Attribute Recognition |
CN109308466A (en) * | 2018-09-18 | 2019-02-05 | 宁波众鑫网络科技股份有限公司 | The method that a kind of pair of interactive language carries out Emotion identification |
CN111883127A (en) * | 2020-07-29 | 2020-11-03 | 百度在线网络技术(北京)有限公司 | Method and apparatus for processing speech |
CN112562641B (en) * | 2020-12-02 | 2023-09-29 | 北京百度网讯科技有限公司 | Voice interaction satisfaction evaluation method, device, equipment and storage medium |
CN113434647B (en) * | 2021-06-18 | 2024-01-12 | 竹间智能科技(上海)有限公司 | Man-machine interaction method, system and storage medium |
CN113808621A (en) * | 2021-09-13 | 2021-12-17 | 地平线(上海)人工智能技术有限公司 | Method and device for marking voice conversation in man-machine interaction, equipment and medium |
- 2021-09-13: CN application CN202111069995.3A filed — published as CN113808621A (status: Pending)
- 2022-08-15: PCT application PCT/CN2022/112490 filed — published as WO2023035870A1 (status: Application Filing)
Also Published As
Publication number | Publication date |
---|---|
WO2023035870A1 (en) | 2023-03-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107657017B (en) | Method and apparatus for providing voice service | |
CN107818798B (en) | Customer service quality evaluation method, device, equipment and storage medium | |
JP6903129B2 (en) | Whispering conversion methods, devices, devices and readable storage media | |
CN113808621A (en) | Method and device for marking voice conversation in man-machine interaction, equipment and medium | |
TWI425500B (en) | Indexing digitized speech with words represented in the digitized speech | |
CN109686383B (en) | Voice analysis method, device and storage medium | |
JP2021533397A (en) | Speaker dialification using speaker embedding and a trained generative model | |
US11164584B2 (en) | System and method for uninterrupted application awakening and speech recognition | |
CN108039181B (en) | Method and device for analyzing emotion information of sound signal | |
CN110570853A (en) | Intention recognition method and device based on voice data | |
CN114038457B (en) | Method, electronic device, storage medium, and program for voice wakeup | |
CN113362828A (en) | Method and apparatus for recognizing speech | |
JP2019020684A (en) | Emotion interaction model learning device, emotion recognition device, emotion interaction model learning method, emotion recognition method, and program | |
CN112017633B (en) | Speech recognition method, device, storage medium and electronic equipment | |
US11615787B2 (en) | Dialogue system and method of controlling the same | |
CN114420169B (en) | Emotion recognition method and device and robot | |
KR20150065523A (en) | Method and apparatus for providing counseling dialogue using counseling information | |
CN112071310A (en) | Speech recognition method and apparatus, electronic device, and storage medium | |
CN113611316A (en) | Man-machine interaction method, device, equipment and storage medium | |
CN111400463B (en) | Dialogue response method, device, equipment and medium | |
CN111949778A (en) | Intelligent voice conversation method and device based on user emotion and electronic equipment | |
CN109065019B (en) | Intelligent robot-oriented story data processing method and system | |
CN108962226B (en) | Method and apparatus for detecting end point of voice | |
CN108538292B (en) | Voice recognition method, device, equipment and readable storage medium | |
CN113889091A (en) | Voice recognition method and device, computer readable storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||