CN112992150A - Method and device for evaluating using effect of dialect template - Google Patents


Info

Publication number: CN112992150A (application CN202110258660.XA); granted publication CN112992150B
Authority: CN (China)
Legal status: Granted; Active
Other languages: Chinese (zh)
Prior art keywords: template, input data, client, emotion, speech
Inventors: 陈堃, 梁侃, 王亚新, 金潇泽
Assignee (original and current): Industrial and Commercial Bank of China Ltd (ICBC)
Priority: CN202110258660.XA

Classifications

    • G10L15/26 — Speech recognition; speech-to-text systems
    • G10L25/30 — Speech or voice analysis techniques characterised by the analysis technique, using neural networks
    • G10L25/63 — Speech or voice analysis specially adapted for estimating an emotional state
    • H04M3/527 — Centralised call answering arrangements not requiring operator intervention

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Child & Adolescent Psychology (AREA)
  • General Health & Medical Sciences (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses a method and a device for evaluating the usage effect of a customer-service script template, applicable to the financial field and other technical fields. The method comprises: acquiring the customer input data before and after a script template, which comprises the customer input data before the script template and the customer input data after the script template; identifying, according to a preset emotion analysis model, the emotion classification of the customer input data before the script template and of the customer input data after the script template respectively; and determining the usage effect of the script template identified by the script template information according to those emotion classifications. The invention achieves the technical effect of evaluating the usage effect of a script template efficiently and accurately, which helps improve the quality of customer service.

Description

Method and device for evaluating the usage effect of a script template
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a method and a device for evaluating the usage effect of a customer-service script template.
Background
Artificial intelligence technology has greatly facilitated everyday life and provided solutions for technical development in many fields. In the traditional voice customer-service field, service quality bears on a company's word of mouth and reputation and, to a great extent, represents its public image. When communicating with customers, service agents often interact through pre-formulated script templates in order to keep the service professional and standardized, so evaluating how well a script template works is a key problem in the field. At present, an enterprise can confirm service quality only through customer feedback after the fact, and even when quality is low the cause is difficult to locate. The prior art lacks an efficient and accurate method for evaluating the usage effect of script templates.
Disclosure of Invention
To solve the technical problems described in the background, the invention provides a method and a device for evaluating the usage effect of a script template.
To achieve the above object, according to one aspect of the invention, a method for evaluating the usage effect of a script template is provided, the method comprising:
acquiring the customer input data before and after a script template, which comprises the customer input data before the script template and the customer input data after the script template;
identifying, according to a preset emotion analysis model, the emotion classification of the customer input data before the script template and of the customer input data after the script template respectively, where the emotion classifications comprise positive emotion and negative emotion;
and determining the usage effect of the script template identified by the script template information according to the emotion classifications of the customer input data before and after the script template.
Optionally, the method for evaluating the usage effect of a script template further comprises:
acquiring audio data of a conversation between a customer and customer service;
converting the audio data into text data;
matching each piece of customer-service input data in the text data against the preset script templates, to obtain the customer-service input data in the text data that contains a script template;
and generating the customer input data before and after the script template from the two pieces of customer input data adjacent to that customer-service input data.
Optionally, determining the usage effect of the script template according to the emotion classifications of the customer input data before and after the script template specifically comprises:
if the emotion of the customer input data before the script template is classified as positive and the emotion of the customer input data after the script template is classified as negative, determining the usage effect to be a first result;
if the emotion before the script template is classified as negative and the emotion after the script template is classified as positive, determining the usage effect to be a second result;
and if the emotions before and after the script template are both positive or both negative, determining the usage effect to be a third result.
Optionally, the method for evaluating the usage effect of a script template further comprises:
acquiring a plurality of usage effects corresponding to the script template;
and if the proportion of the first result among those usage effects exceeds a preset threshold, sending a prompt message to a customer-service manager, prompting the manager to adjust the script template.
Optionally, the customer service comprises: human agents and robot agents.
To achieve the above object, according to another aspect of the invention, a device for evaluating the usage effect of a script template is provided, comprising:
a data acquisition module, configured to acquire the customer input data before and after a script template, which comprises the customer input data before the script template and the customer input data after the script template;
an emotion analysis module, configured to identify, according to a preset emotion analysis model, the emotion classification of the customer input data before and after the script template respectively, where the emotion classifications comprise positive emotion and negative emotion;
and a usage-effect evaluation module, configured to determine the usage effect of the script template identified by the script template information according to those emotion classifications.
Optionally, the device for evaluating the usage effect of a script template further comprises:
an audio data acquisition module, configured to acquire audio data of a conversation between a customer and customer service;
a text conversion module, configured to convert the audio data into text data;
a script template matching module, configured to match each piece of customer-service input data in the text data against the preset script templates, to obtain the customer-service input data that contains a script template;
and a data generation module, configured to generate the customer input data before and after the script template from the two pieces of customer input data adjacent to that customer-service input data.
Optionally, the usage-effect evaluation module specifically comprises:
a first determining unit, configured to determine the usage effect to be a first result if the emotion before the script template is classified as positive and the emotion after the script template is classified as negative;
a second determining unit, configured to determine the usage effect to be a second result if the emotion before the script template is classified as negative and the emotion after the script template is classified as positive;
and a third determining unit, configured to determine the usage effect to be a third result if the emotions before and after the script template are both positive or both negative.
Optionally, the device for evaluating the usage effect of a script template further comprises:
a usage-effect data acquisition module, configured to acquire a plurality of usage effects corresponding to the script template;
and a sending module, configured to send a prompt message to a customer-service manager if the proportion of the first result among those usage effects exceeds a preset threshold, prompting the manager to adjust the script template.
To achieve the above object, according to another aspect of the invention, a computer device is also provided, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the above method for evaluating the usage effect of a script template when executing the program.
To achieve the above object, according to another aspect of the invention, a computer-readable storage medium is also provided, storing a computer program which, when executed by a processor, implements the steps of the above method for evaluating the usage effect of a script template.
The beneficial effects of the invention are as follows: by performing emotion analysis on the customer input data before and after a script template and determining the template's usage effect from the change in the customer's emotion classification, the invention evaluates the usage effect of a script template efficiently and accurately, which helps improve the quality of customer service.
Drawings
To illustrate the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings described here are obviously only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort. In the drawings:
FIG. 1 is a first flowchart of a method for evaluating the usage effect of a script template according to an embodiment of the invention;
FIG. 2 is a second flowchart of the method according to an embodiment of the invention;
FIG. 3 is a third flowchart of the method according to an embodiment of the invention;
FIG. 4 is a first block diagram of a device for evaluating the usage effect of a script template according to an embodiment of the invention;
FIG. 5 is a second block diagram of the device according to an embodiment of the invention;
FIG. 6 is a schematic diagram of a computer device according to an embodiment of the present invention.
Detailed Description
To make the technical solutions of the invention better understood, they are described below clearly and completely with reference to the drawings of the embodiments. The described embodiments are obviously only a part, not all, of the embodiments of the invention; all other embodiments derived from them by those skilled in the art without creative effort fall within the protection scope of the invention.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
It should be noted that the terms "comprises" and "comprising," and any variations thereof, in the description and claims of the present invention and the above-described drawings, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
The method and device for evaluating the usage effect of a script template of the invention can be applied in the financial field as well as in other technical fields.
The method can capture customer feedback on a script template in time, give managers a basis for improving the template, and thereby help raise the quality of voice customer-service scripts.
Fig. 1 is a first flowchart of the method for evaluating the usage effect of a script template according to an embodiment of the invention. As shown in Fig. 1, the method of this embodiment comprises steps S101 to S103.
Step S101: obtain the customer input data before and after a script template, which comprises the script template information, the customer input data before the script template and the customer input data after the script template.
In one embodiment, the customer input data before and after the script template is extracted from a conversation between the customer and customer service, where the customer service comprises human agents and robot agents. The customer and the agent may converse by voice or by text, and the agent responds to the customer's conversation content; if that content meets the matching condition of a preset script template, the agent replies with that template.
In one embodiment, the script template information may be identification information of the template, such as a template number.
In one embodiment, the "customer input data before and after the script template" of this step may be generated through steps S201 to S204 described below.
Step S102: according to a preset emotion analysis model, identify the emotion classification of the customer input data before the script template and of the customer input data after the script template respectively, where the emotion classifications comprise positive emotion and negative emotion.
In one embodiment, the customer input data before and after the script template are both voice data. The emotion analysis model is trained with a machine learning algorithm to recognize the emotion classification of voice data; feeding the customer input data before and after the template into the trained model yields the corresponding emotion classifications.
In another embodiment, the customer input data before and after the script template are both text data, which may be converted from voice data input by the customer. The emotion analysis model may be trained on an LSTM architecture; the text data is then fed into the trained model to obtain its emotion classification.
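The patent names an LSTM-based emotion model but gives no implementation. The following is a minimal sketch of such a two-class (positive/negative) text classifier in PyTorch; the framework choice, all hyperparameters, and the convention that class index 1 means "positive" are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class EmotionLSTM(nn.Module):
    """Sketch of an LSTM emotion classifier: token ids -> positive/negative logits."""

    def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)      # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)      # hidden: (1, batch, hidden_dim)
        return self.fc(hidden[-1])                # (batch, num_classes)

def classify_emotion(model, token_ids):
    """Return the predicted emotion label; index 1 = positive is an assumed convention."""
    with torch.no_grad():
        logits = model(token_ids)
        return ["negative", "positive"][logits.argmax(dim=-1).item()]
```

In practice the model would be trained on labelled customer utterances before being used in step S102.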
Step S103: determine the usage effect of the script template identified by the script template information according to the emotion classifications of the customer input data before and after the script template.
That is, the usage effect of the template is derived from how the customer's emotion classification changes across its use.
In one embodiment, this determination specifically comprises:
if the emotion before the script template is classified as positive and the emotion after it is classified as negative, the usage effect is determined to be a first result, which in one embodiment may be "poor";
if the emotion before the script template is classified as negative and the emotion after it is classified as positive, the usage effect is determined to be a second result, which in one embodiment may be "excellent";
and if the emotions before and after the script template are both positive or both negative, the usage effect is determined to be a third result, which in one embodiment may be "good".
In one embodiment, semantic emotion analysis is performed on the customer input before and after the script template, with positive emotion recorded as 1 and negative emotion as 0. A template use therefore yields one of four (before, after) pairs: [0,0], [0,1], [1,0] or [1,1]. The pair [0,1] is rated "excellent", [0,0] and [1,1] "good", and [1,0] "poor", and the result is finally stored in the format {"template number": "usage effect"}.
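The four-case rating scheme above is simple enough to state directly in code. This sketch follows the patent's encoding (1 = positive, 0 = negative) and its {"template number": "usage effect"} output format; function names are illustrative.

```python
def rate_template_usage(before_emotion: int, after_emotion: int) -> str:
    """Map the (before, after) emotion pair to a usage rating.

    Per the scheme in the text: [0,1] -> "excellent", [1,0] -> "poor",
    [0,0] and [1,1] -> "good".
    """
    if (before_emotion, after_emotion) == (0, 1):
        return "excellent"
    if (before_emotion, after_emotion) == (1, 0):
        return "poor"
    return "good"

def record_result(template_id: str, before: int, after: int) -> dict:
    # Store in the {"template number": "usage effect"} format described above.
    return {template_id: rate_template_usage(before, after)}
```

For example, a customer who was negative before the template and positive after it yields `record_result("T001", 0, 1)`, i.e. the template is rated "excellent".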
In other words, the usage effect follows the customer's emotional tendency across the template's use: if the customer's emotion is positive before the template and negative after it, the template has a negative effect on the customer and works poorly; conversely, if the emotion goes from negative to positive, the template has a positive effect and works well. The invention can therefore determine the usage effect of a script template accurately and quickly from the customer's emotional tendency before and after its use.
Fig. 2 is a second flowchart of the method according to an embodiment of the invention. As shown in Fig. 2, in one embodiment the customer input data before and after the script template in step S101 is obtained through steps S201 to S204.
Step S201: obtain audio data of the conversation between the customer and customer service.
In one embodiment, the audio data can be acquired in two ways. The first is direct acquisition: newly generated voice files are fetched in real time from a connected customer-service voice file storage system and automatically brought into the analysis. The second provides a network interface: the party requesting inspection uploads voice files by interface call, and the script-template quality inspection runs in the corresponding partition of the system.
In one embodiment, the audio data of this step may be collected by an audio input module comprising an audio receiving unit, a redundancy interception unit, a PCM conversion unit and an audio denoising unit. Specifically: the audio receiving unit exposes an audio transmission interface and records the service parameters of each audio file, including information such as the service type and the script template library it belongs to; the redundancy interception unit cuts redundant parts out of the audio by setting an amplitude threshold, judging a passage redundant when its amplitude falls below the threshold and discarding it, which reduces the workload of the subsequent processing logic; and the PCM conversion unit converts the trimmed audio into an uncompressed pure waveform file, which facilitates the subsequent speech-to-text processing.
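The redundancy interception unit can be sketched as amplitude-threshold trimming over fixed-size frames. The frame length and the use of per-frame peak amplitude are assumptions; the patent only specifies "amplitude below a threshold is redundant".

```python
import numpy as np

def trim_redundant_audio(samples: np.ndarray, threshold: float,
                         frame_len: int = 400) -> np.ndarray:
    """Drop frames whose peak amplitude falls below the threshold.

    Illustrative sketch of the redundancy-interception unit: frames judged
    redundant (peak |amplitude| < threshold) are discarded, the rest kept.
    """
    kept = []
    for start in range(0, len(samples), frame_len):
        frame = samples[start:start + frame_len]
        if np.max(np.abs(frame)) >= threshold:
            kept.append(frame)
    if not kept:
        return np.array([], dtype=samples.dtype)
    return np.concatenate(kept)
```

The trimmed waveform would then be handed to the PCM conversion unit and, from there, to speech recognition.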
Step S202: convert the audio data into text data.
In an embodiment, a preset speech recognition algorithm may be used in this step to convert the audio into the corresponding text; any speech recognition algorithm of the prior art may be employed.
In one embodiment, this conversion may be realized by a preset speech recognition module comprising an audio framing unit, a feature extraction unit and a model matching unit. Specifically: the audio framing unit decomposes a long audio passage into 25-millisecond frames with a moving window function; the feature extraction unit receives the time-domain audio signal from the framing unit, applies the MFCC waveform transform, and extracts audio features; and the model matching unit matches the audio features with an HMM and obtains the recognized text through model computation.
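Of the three units above, the framing step is concrete enough to sketch. This shows 25 ms moving-window framing as the audio framing unit describes; the 10 ms hop size, the 16 kHz sample rate, and the Hamming window are assumptions (the MFCC and HMM stages are omitted).

```python
import numpy as np

def frame_audio(samples: np.ndarray, sample_rate: int = 16000,
                frame_ms: int = 25, hop_ms: int = 10) -> np.ndarray:
    """Split audio into overlapping 25 ms frames with a moving window function.

    Returns an array of shape (num_frames, frame_len); each frame is
    multiplied by a Hamming window before feature extraction.
    """
    frame_len = sample_rate * frame_ms // 1000   # 400 samples at 16 kHz
    hop_len = sample_rate * hop_ms // 1000       # 160 samples at 16 kHz
    window = np.hamming(frame_len)
    frames = [samples[i:i + frame_len] * window
              for i in range(0, len(samples) - frame_len + 1, hop_len)]
    return np.stack(frames) if frames else np.empty((0, frame_len))
```

Each windowed frame would then pass through MFCC feature extraction and HMM matching to produce the recognized text.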
Step S203: match each piece of customer-service input data in the text data against the preset script templates, to obtain the customer-service input data that contains a script template.
In one embodiment, this step loops over all the script templates for each piece of customer-service input data in the text data and marks every template occurrence in the text; the marked utterances are the customer-service input data containing a script template.
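Steps S203 and S204 together amount to scanning the service turns for template occurrences and collecting the adjacent customer turns. This sketch uses a simplified substring match; the dialogue/turn data structure and all names are illustrative assumptions.

```python
def find_template_usages(dialogue, templates):
    """Scan service turns for script templates and collect adjacent customer turns.

    dialogue:  ordered list of (speaker, text) tuples, speaker in
               {"customer", "service"}.
    templates: dict mapping template id -> template text.
    Returns a list of records with the template id and the customer inputs
    before and after the template (None if no such turn exists).
    """
    usages = []
    for i, (speaker, text) in enumerate(dialogue):
        if speaker != "service":
            continue
        for template_id, template_text in templates.items():
            if template_text in text:  # simplified match; the patent loops all templates
                before = next((t for s, t in reversed(dialogue[:i]) if s == "customer"), None)
                after = next((t for s, t in dialogue[i + 1:] if s == "customer"), None)
                usages.append({"template": template_id, "before": before, "after": after})
    return usages
```

Each returned record corresponds to one unit of "customer input data before and after the script template" fed into the emotion analysis of step S102.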
In one embodiment, the invention further provides a template marking module comprising a script template updating unit, a script template identification unit and a script template marking unit. Specifically:
the script template updating unit is the operable interface the system exposes for adding, modifying and deleting the script templates to be inspected;
the script template identification unit receives the text converted by the speech recognition module, loops over the script templates, and identifies the templates contained in the text;
and the script template marking unit receives the identification information generated by the identification unit, matches each template occurrence position with its template type, and outputs the result.
Step S204: generate the customer input data before and after the script template from the two pieces of customer input data adjacent to the customer-service input data containing the script template.
In this step, the customer input data before and after each script template used in the text is extracted, and a corresponding record of "customer input data before and after the script template" is generated for each template occurrence. That record specifically comprises the script template information, the customer input data before the template and the customer input data after the template.
Fig. 3 is a third flowchart of the method for evaluating the usage effect of a speech template according to an embodiment of the present invention. As shown in Fig. 3, in an embodiment of the present invention, the method further includes step S301 and step S302.
Step S301, acquiring a plurality of usage effects corresponding to the speech template.
In one embodiment of the invention, the method can summarize and collate the large number of usage effects recorded for each speech template, and output statistical reports by dimensions such as time period and excellence rate, thereby providing the voice customer service manager with a basis for improving speech quality. The manager can then revise low-quality speech templates according to the template quality shown in the reports.
In an embodiment of the present invention, the present invention provides a data analysis module, which specifically includes: an emotion statistics unit, a data persistence unit, a data report unit and a data retrieval unit. Specifically:
An emotion statistics unit: used for obtaining the usage effects of the speech templates.
A data persistence unit: matches each speech template usage effect obtained by the emotion statistics unit with the corresponding template identifier, and stores the data persistently.
A data report unit: generates reports from the speech template analysis results by dimensions such as time and template type, using means such as an ECharts component or an Excel document.
A data retrieval unit: exposes the data persisted by the data persistence unit to customer service managers through a retrieval interface, so that the usage of the speech templates can be analyzed in real time and the templates adjusted accordingly.
Step S302, if the proportion of the first result among the plurality of usage effects is greater than a preset threshold, sending prompt information to the customer service manager to prompt the customer service manager to adjust the speech template.
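A minimal sketch of the threshold check in step S302; the function name, the example threshold value, and the string labels for the results are all illustrative, since the text leaves the concrete threshold and alerting channel unspecified:

```python
def should_prompt_admin(effects, threshold=0.3):
    """Return True when the share of 'first' results (client mood worsened
    after the template) among all recorded usage effects exceeds the
    preset threshold."""
    if not effects:
        return False
    return effects.count("first") / len(effects) > threshold
```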
The above embodiment shows that the invention provides an efficient speech template quality evaluation system. Compared with traditional speech template improvement methods, which cannot evaluate template quality objectively, the system allows customer service managers to conveniently obtain speech template effect evaluations, facilitates iteration of speech template prototypes, effectively assists in improving customer service quality, reduces management cost and improves maintenance efficiency.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
Based on the same inventive concept, the embodiment of the present invention further provides a device for evaluating the usage effect of a speech template, which can be used to implement the method for evaluating the usage effect of a speech template described in the above embodiment, as described in the following embodiments. Because the principle by which the speech template usage effect evaluation device solves the problem is similar to that of the speech template usage effect evaluation method, reference may be made to the method embodiment for details of the device embodiment, and repeated parts are not described again. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 4 is a first block diagram of a speech template use effect evaluation device according to an embodiment of the present invention, and as shown in fig. 4, the speech template use effect evaluation device according to the present invention includes:
a pre- and post-template client input data acquisition module 1, configured to acquire client input data before and after the speech template, wherein the client input data before and after the speech template comprises: the input data of the client before the speech template and the input data of the client after the speech template;
the emotion analysis module 2 is configured to identify, according to a preset emotion analysis model, emotion classifications corresponding to input data of the client before the utterance template and input data of the client after the utterance template, respectively, where the emotion classifications include: positive and negative emotions;
and the usage effect evaluation module 3, configured to determine the usage effect of the speech template corresponding to the speech template information according to the emotion classifications corresponding to the input data of the client before the speech template and the input data of the client after the speech template.
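The "preset emotion analysis model" used by the emotion analysis module 2 is not specified here. As a stand-in only, the trivial lexicon-based classifier below conveys the interface such a model would expose (client text in, "positive" or "negative" out); the word lists are invented for illustration and a real deployment would substitute the trained model:

```python
# Invented sentiment lexicons; placeholders for the preset emotion model.
POSITIVE_WORDS = {"thanks", "great", "perfect", "helpful", "resolved"}
NEGATIVE_WORDS = {"angry", "useless", "terrible", "waiting", "complaint"}

def classify_emotion(text):
    """Return 'positive' or 'negative' for a client utterance.
    Ties and neutral text default to 'positive' in this toy version."""
    words = set(text.lower().split())
    score = len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)
    return "positive" if score >= 0 else "negative"
```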
Fig. 5 is a second configuration block diagram of the speech template use effect evaluation device according to the embodiment of the present invention, and as shown in fig. 5, the speech template use effect evaluation device according to the present invention further includes:
the audio data acquisition module 4 is used for acquiring audio data of a customer and customer service conversation;
the text conversion module 5 is used for converting the audio data into text data;
a speech template matching module 6, configured to match each piece of customer service input data in the text data with a preset speech template to obtain the customer service input data containing the speech template in the text data;
and a pre- and post-template client input data generation module 7, configured to generate the client input data before and after the speech template according to two pieces of client input data adjacent, in the text data, to the customer service input data containing the speech template.
In an embodiment of the present invention, the usage effect evaluation module 3 specifically includes:
a first determining unit, configured to determine the usage effect as a first result if the emotion corresponding to the input data of the client before the utterance template is classified as a positive emotion and the emotion corresponding to the input data of the client after the utterance template is classified as a negative emotion;
a second determining unit, configured to determine the usage effect as a second result if the emotion corresponding to the input data of the client before the utterance template is classified as a negative emotion and the emotion corresponding to the input data of the client after the utterance template is classified as a positive emotion;
and a third determining unit, configured to determine that the usage effect is a third result if the emotion classifications corresponding to the input data of the client before the utterance template and the input data of the client after the utterance template are both positive emotions or both negative emotions.
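The three determination rules can be written directly as a mapping; the string labels below simply mirror the first, second and third results named above, and the function name is illustrative:

```python
def usage_effect(before_emotion, after_emotion):
    """Map the (before-template, after-template) emotion pair to the
    first, second or third result."""
    if before_emotion == "positive" and after_emotion == "negative":
        return "first"   # client mood worsened after the template
    if before_emotion == "negative" and after_emotion == "positive":
        return "second"  # client mood improved after the template
    return "third"       # polarity unchanged (both positive or both negative)
```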
In one embodiment of the present invention, the speech template usage effect evaluation apparatus of the present invention further includes:
a usage effect data acquisition module, configured to acquire a plurality of usage effects corresponding to the speech template;
and a sending module, configured to send prompt information to a customer service manager if the proportion of the first result among the plurality of usage effects is greater than a preset threshold, so as to prompt the customer service manager to adjust the speech template.
To achieve the above object, according to another aspect of the present application, there is also provided a computer apparatus. As shown in fig. 6, the computer device comprises a memory, a processor, a communication interface and a communication bus, wherein a computer program that can be run on the processor is stored in the memory, and the steps of the method of the above embodiment are realized when the processor executes the computer program.
The processor may be a central processing unit (CPU). The processor may also be another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof.
The memory, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and units, such as the corresponding program units in the above-described method embodiments of the present invention. The processor executes various functional applications of the processor and the processing of the work data by executing the non-transitory software programs, instructions and modules stored in the memory, that is, the method in the above method embodiment is realized.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor, and the like. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and such remote memory may be coupled to the processor via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more units are stored in the memory and when executed by the processor perform the method of the above embodiments.
The specific details of the computer device may be understood by referring to the corresponding related descriptions and effects in the above embodiments, and are not described herein again.
In order to achieve the above object, according to another aspect of the present application, there is also provided a computer-readable storage medium storing a computer program which, when executed by a computer processor, implements the steps of the above-described method for evaluating the usage effect of a speech template. It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD) or a solid state drive (SSD), etc.; the storage medium may also comprise a combination of memories of the kinds described above.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and they may alternatively be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, or fabricated separately as individual integrated circuit modules, or fabricated as a single integrated circuit module from multiple modules or steps. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (11)

1. A method for evaluating the usage effect of a speech template, characterized by comprising the following steps:
acquiring client input data before and after a speech template, wherein the client input data before and after the speech template comprises: the input data of the client before the speech template and the input data of the client after the speech template;
according to a preset emotion analysis model, respectively identifying emotion classifications corresponding to the input data of the client before the speech template and the input data of the client after the speech template, wherein the emotion classifications include: positive emotion and negative emotion;
and determining the use effect of the speech template corresponding to the speech template information according to the emotion classification corresponding to the input data of the client before the speech template and the input data of the client after the speech template.
2. The method for evaluating the usage effect of a speech template according to claim 1, further comprising:
acquiring audio data of a customer and customer service conversation;
converting the audio data into text data;
matching each piece of customer service input data in the text data with a preset speech template to obtain the customer service input data containing the speech template in the text data;
and generating customer input data before and after the speech template according to two pieces of customer input data which are adjacent to the customer service input data containing the speech template in the text data.
3. The method for evaluating the usage effect of a speech template according to claim 1, wherein the determining the usage effect of the speech template corresponding to the speech template information according to the emotion classifications corresponding to the input data of the client before the speech template and the input data of the client after the speech template specifically comprises:
if the emotion corresponding to the input data of the client before the speech template is classified as positive emotion, and the emotion corresponding to the input data of the client after the speech template is classified as negative emotion, determining the usage effect as a first result;
if the emotion corresponding to the input data of the client before the speech template is classified as negative emotion, and the emotion corresponding to the input data of the client after the speech template is classified as positive emotion, determining the usage effect as a second result;
and if the emotion classifications corresponding to the input data of the client before the speech template and the input data of the client after the speech template are both positive emotions or both negative emotions, determining the usage effect as a third result.
4. The method for evaluating the usage effect of a speech template according to claim 3, further comprising:
acquiring a plurality of usage effects corresponding to the speech template;
and if the proportion of the first result among the plurality of usage effects is greater than a preset threshold, sending prompt information to a customer service manager to prompt the customer service manager to adjust the speech template.
5. The method for evaluating the usage effect of a speech template according to claim 2, wherein the customer service comprises: manual customer service and robot customer service.
6. An apparatus for evaluating the usage effect of a speech template, characterized by comprising:
a pre- and post-template client input data acquisition module, configured to acquire client input data before and after the speech template, wherein the client input data before and after the speech template comprises: the input data of the client before the speech template and the input data of the client after the speech template;
the emotion analysis module is used for respectively identifying emotion classifications corresponding to input data of the client before the conversational template and input data of the client after the conversational template according to a preset emotion analysis model, wherein the emotion classifications include: positive and negative emotions;
and the use effect evaluation module is used for determining the use effect of the language template corresponding to the language template information according to the emotion classification corresponding to the input data of the client before the language template and the input data of the client after the language template.
7. The apparatus for evaluating the usage effect of a speech template according to claim 6, further comprising:
the audio data acquisition module is used for acquiring audio data of a customer and customer service conversation;
the text conversion module is used for converting the audio data into text data;
a speech template matching module, configured to match each piece of customer service input data in the text data with a preset speech template to obtain the customer service input data containing the speech template in the text data;
and the client input data generation module before and after the speech template is used for generating client input data before and after the speech template according to two pieces of client input data which are adjacent to the client service input data containing the speech template in the text data.
8. The apparatus for evaluating the usage effect of a speech template according to claim 6, wherein the usage effect evaluation module specifically comprises:
a first determining unit, configured to determine the usage effect as a first result if the emotion corresponding to the input data of the client before the utterance template is classified as a positive emotion and the emotion corresponding to the input data of the client after the utterance template is classified as a negative emotion;
a second determining unit, configured to determine the usage effect as a second result if the emotion corresponding to the input data of the client before the utterance template is classified as a negative emotion and the emotion corresponding to the input data of the client after the utterance template is classified as a positive emotion;
and a third determining unit, configured to determine that the usage effect is a third result if the emotion classifications corresponding to the input data of the client before the utterance template and the input data of the client after the utterance template are both positive emotions or both negative emotions.
9. The apparatus for evaluating the usage effect of a speech template according to claim 8, further comprising:
a usage effect data acquisition module, configured to acquire a plurality of usage effects corresponding to the speech template;
and a sending module, configured to send prompt information to a customer service manager if the proportion of the first result among the plurality of usage effects is greater than a preset threshold, so as to prompt the customer service manager to adjust the speech template.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of claims 1 to 5 when executing the computer program.
11. A computer-readable storage medium, in which a computer program is stored which, when executed in a computer processor, implements the method of any one of claims 1 to 5.
CN202110258660.XA 2021-03-10 2021-03-10 Method and device for evaluating using effect of dialect template Active CN112992150B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110258660.XA CN112992150B (en) 2021-03-10 2021-03-10 Method and device for evaluating using effect of dialect template

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110258660.XA CN112992150B (en) 2021-03-10 2021-03-10 Method and device for evaluating using effect of dialect template

Publications (2)

Publication Number Publication Date
CN112992150A true CN112992150A (en) 2021-06-18
CN112992150B CN112992150B (en) 2022-06-21

Family

ID=76336247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110258660.XA Active CN112992150B (en) 2021-03-10 2021-03-10 Method and device for evaluating using effect of dialect template

Country Status (1)

Country Link
CN (1) CN112992150B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060218032A1 (en) * 2005-03-25 2006-09-28 Edward Patrick Real-time customer service assistance using collected customer life cycle data
CN107885726A (en) * 2017-11-06 2018-04-06 广州杰赛科技股份有限公司 Customer service quality evaluating method and device
CN111031183A (en) * 2019-11-18 2020-04-17 咪咕文化科技有限公司 Color ring back tone playing method, electronic equipment and storage medium
CN111324865A (en) * 2020-02-24 2020-06-23 浪潮天元通信信息系统有限公司 Storefront satisfaction intelligent analysis method and system based on Internet of things
CN111598485A (en) * 2020-05-28 2020-08-28 成都晓多科技有限公司 Multi-dimensional intelligent quality inspection method, device, terminal equipment and medium
CN111696556A (en) * 2020-07-13 2020-09-22 上海茂声智能科技有限公司 Method, system, equipment and storage medium for analyzing user conversation emotion
CN111949778A (en) * 2020-07-24 2020-11-17 北京奇保信安科技有限公司 Intelligent voice conversation method and device based on user emotion and electronic equipment
CN112468659A (en) * 2020-11-20 2021-03-09 平安普惠企业管理有限公司 Quality evaluation method, device, equipment and storage medium applied to telephone customer service


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG CHAO ET AL.: "Analysis of emotion management schemes based on the nurse-patient relationship", JOURNAL OF TRADITIONAL CHINESE MEDICINE MANAGEMENT *

Also Published As

Publication number Publication date
CN112992150B (en) 2022-06-21

Similar Documents

Publication Publication Date Title
CN112804400B (en) Customer service call voice quality inspection method and device, electronic equipment and storage medium
JP6671020B2 (en) Dialogue act estimation method, dialogue act estimation device and program
CN109256150B (en) Speech emotion recognition system and method based on machine learning
WO2021128741A1 (en) Voice emotion fluctuation analysis method and apparatus, and computer device and storage medium
CN108682420B (en) Audio and video call dialect recognition method and terminal equipment
CN111182162B (en) Telephone quality inspection method, device, equipment and storage medium based on artificial intelligence
US10965812B1 (en) Analysis and classification of unstructured computer text for generation of a recommended conversation topic flow
CN110910283A (en) Method, device, equipment and storage medium for generating legal document
CN111785275A (en) Voice recognition method and device
US11553085B2 (en) Method and apparatus for predicting customer satisfaction from a conversation
CN108257594A (en) A kind of conference system and its information processing method
CN111177350A (en) Method, device and system for forming dialect of intelligent voice robot
CN105810205A (en) Speech processing method and device
CN113239147A (en) Intelligent conversation method, system and medium based on graph neural network
CN113436634A (en) Voice classification method and device based on voiceprint recognition and related equipment
CN105957517A (en) Voice data structural transformation method based on open source API and system thereof
CN117441165A (en) Reducing bias in generating language models
CN113903361A (en) Speech quality detection method, device, equipment and storage medium based on artificial intelligence
JP6910002B2 (en) Dialogue estimation method, dialogue activity estimation device and program
CN112992150B (en) Method and device for evaluating using effect of dialect template
US20230130777A1 (en) Method and system for generating voice in an ongoing call session based on artificial intelligent techniques
CN111508530A (en) Speech emotion recognition method, device and storage medium
CN115831125A (en) Speech recognition method, device, equipment, storage medium and product
CN114579751A (en) Emotion analysis method and device, electronic equipment and storage medium
CN111246026A (en) Recording processing method based on convolutional neural network and connectivity time sequence classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant