CN111883115A - Voice flow quality inspection method and device - Google Patents

Voice flow quality inspection method and device

Info

Publication number
CN111883115A
CN111883115A
Authority
CN
China
Prior art keywords
user
conversation
sentence
context
intention
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010552865.4A
Other languages
Chinese (zh)
Other versions
CN111883115B (en)
Inventor
曹磊
杜冰竹
白安琪
赵立军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mashang Xiaofei Finance Co Ltd
Mashang Consumer Finance Co Ltd
Original Assignee
Mashang Xiaofei Finance Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mashang Xiaofei Finance Co Ltd filed Critical Mashang Xiaofei Finance Co Ltd
Priority to CN202010552865.4A priority Critical patent/CN111883115B/en
Publication of CN111883115A publication Critical patent/CN111883115A/en
Application granted granted Critical
Publication of CN111883115B publication Critical patent/CN111883115B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G06F16/3335: Syntactic pre-processing, e.g. stopword elimination, stemming
    • G06F16/3344: Query execution using natural language analysis
    • G06F16/35: Clustering; Classification (information retrieval of unstructured textual data)
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415: Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F40/289: Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/30: Semantic analysis
    • G06N3/045: Combinations of networks
    • G06N3/049: Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08: Learning methods
    • H04M3/5166: Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing, in combination with interactive voice response systems or voice portals, e.g. as front-ends
    • H04M3/5175: Call or contact centers supervision arrangements

Abstract

The embodiment of the invention provides a voice flow quality inspection method and device. The method comprises: obtaining an original voice file, where the original voice file comprises a voice conversation between a user and an agent; recognizing an intention list of the user from the original voice file, where the intention list comprises one or more intentions of the user and each intention corresponds to a process node identifier; acquiring, corresponding to the original voice file, the standard path for the actual business transacted by the user, where the standard path comprises one or more process nodes and each node corresponds to a process node identifier; and analyzing the intention list against the standard path to obtain a quality inspection result for the original voice file. In the embodiment of the invention, fully automatic detection can be realized, greatly improving quality inspection efficiency and quality inspection coverage, while also freeing up a large amount of manpower and reducing the company's customer service cost.

Description

Voice flow quality inspection method and device
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a method and a device for voice flow quality inspection.
Background
As internet information technology is applied ever more deeply in the financial field, enterprises keep strengthening innovation and market competition becomes increasingly intense. In this competition, user service has become an increasingly important way to differentiate from competitors, improve a company's image, and increase user satisfaction, so controlling the service quality of the customer service system has become a key daily task for enterprise operation managers, and intelligent quality inspection is a main component of this work. The customer service system generates a large amount of voice data every day. If this data is used well, performing intelligent quality inspection against the standard requirements and detecting non-compliant points in customer service calls can greatly improve customer service quality and user satisfaction, reduce manual work, and at the same time support the evaluation of customer service personnel and improve their work evaluation system.
Voice flow quality inspection is an important component of intelligent quality inspection. Most detection points of current intelligent quality inspection systems focus on aspects such as speaking-speed detection, silence detection, and emotion detection, and there is little research on the detection of flow nodes; in practice, however, whether customer service handles calls according to the standards and specifications is often the key concern of enterprises. At present this is mostly checked by manual spot-checking of recordings, a method that is simple to implement, can be organized and executed by a company's own personnel, and does not require many professional technicians; however, its coverage is extremely low (around 1%), its miss rate is high, its cost is high, it is highly subjective with a high misjudgment rate, and it consumes a great deal of manpower, material, and financial resources.
Disclosure of Invention
An object of the embodiments of the present invention is to provide a method and an apparatus for quality inspection of a voice flow, which solve the problem of low efficiency of manual voice quality inspection.
In a first aspect, an embodiment of the present invention provides a method for quality inspection of a voice flow, including:
obtaining an original voice file, wherein the original voice file comprises: voice conversation between the user and the agent;
identifying and obtaining an intention list of a user according to the original voice file, wherein the intention list comprises one or more intentions of the user, and each intention corresponds to a process node identifier;
acquiring a standard path corresponding to the original voice file and used for transacting actual business by the user, wherein the standard path comprises one or more process nodes, and each node corresponds to a process node identifier;
and analyzing to obtain a quality inspection result of the original voice file according to the intention list and the standard path.
Optionally, the recognizing, according to the original voice file, to obtain the user's intention list includes:
converting the original voice file into a text file;
obtaining the conversation content of the user and/or the conversation content of the agent according to the role label in the text file;
and identifying and obtaining the intention list of the user according to the conversation content of the user and/or the conversation content of the agent.
Optionally, the identifying and obtaining the intention list of the user according to the dialog content of the user and/or the dialog content of the agent includes:
acquiring each sentence of the user's conversation and the context of the conversation according to the conversation content of the user;
identifying and obtaining the intention list of the user according to each sentence of the user's dialog and the context of the dialog;
or,
acquiring each sentence of the agent's conversation and the context of the conversation according to the conversation content of the agent;
identifying and obtaining the intention list of the user according to each sentence of the agent's dialog and the context of the dialog;
or,
acquiring each sentence of the user's conversation and the context of the conversation according to the conversation content of the user;
acquiring each sentence of the agent's conversation and the context of the conversation according to the conversation content of the agent;
and identifying and obtaining the intention list of the user according to each sentence of the user's conversation and its context and each sentence of the agent's conversation and its context.
Optionally, the recognizing, according to the original voice file, to obtain the user's intention list includes:
inputting each sentence of dialogue and the context of the dialogue into a bidirectional recurrent neural network of a model, and coding through the bidirectional recurrent neural network to respectively obtain high-dimensional feature vectors of the current sentence and the context;
inputting the high-dimensional characteristics of the current sentence and the context into an attention layer in the model, and outputting a joint representation of the current sentence and the context;
and classifying the joint representation to obtain an intention label of each sentence, and outputting an intention list of the user in the original voice file.
Optionally, the classifying the joint representation to obtain an intention label of each sentence includes:
and inputting the joint representation of the current sentence and the context into a fully-connected neural network, and outputting the probability of the current sentence under a preset label to obtain an intention label of each sentence.
Optionally, the analyzing, according to the intention list and the standard path, to obtain a quality inspection result of the original voice file includes:
obtaining the user's path to be quality-inspected according to the intention list;
matching the user's path to be quality-inspected with the standard path by means of character string matching, and calculating the compliance of the user's path to be quality-inspected;
and obtaining a quality inspection result of the original voice file according to the compliance of the user's path to be quality-inspected.
Optionally, the method further comprises:
locating the process nodes missing from the path to be quality-inspected according to the quality inspection result of the original voice file;
and determining the reason the agent violated the rules in the original voice file according to the missing process nodes.
In a second aspect, an embodiment of the present invention provides an apparatus for quality inspection of a voice process, including:
a first obtaining module, configured to obtain an original voice file, where the original voice file includes: voice conversation between the user and the agent;
the recognition module is used for recognizing and obtaining an intention list of a user according to the original voice file, wherein the intention list comprises one or more intentions of the user, and each intention corresponds to a process node identifier;
a second obtaining module, configured to obtain, corresponding to the original voice file, a standard path for the user to handle an actual service, where the standard path includes one or more process nodes, and each node corresponds to a process node identifier;
and the quality inspection module is used for analyzing and obtaining a quality inspection result of the original voice file according to the intention list and the standard path.
Optionally, the identification module comprises:
the conversion unit is used for converting the original voice file into a text file;
the first processing unit is used for obtaining the conversation content of the user and/or the conversation content of the agent according to the role label in the text file;
and the identification unit is used for identifying and obtaining the intention list of the user according to the conversation content of the user and/or the conversation content of the agent.
Optionally, the identification unit is further configured to:
acquiring each sentence of the user's conversation and the context of the conversation according to the conversation content of the user;
identifying and obtaining the intention list of the user according to each sentence of the user's dialog and the context of the dialog;
or,
acquiring each sentence of the agent's conversation and the context of the conversation according to the conversation content of the agent;
identifying and obtaining the intention list of the user according to each sentence of the agent's dialog and the context of the dialog;
or,
acquiring each sentence of the user's conversation and the context of the conversation according to the conversation content of the user;
acquiring each sentence of the agent's conversation and the context of the conversation according to the conversation content of the agent;
and identifying and obtaining the intention list of the user according to each sentence of the user's conversation and its context and each sentence of the agent's conversation and its context.
Optionally, the quality inspection module comprises:
the second processing unit is used for obtaining the user's path to be quality-inspected according to the intention list;
the matching unit is used for matching the user's path to be quality-inspected with the standard path by means of character string matching, and calculating the compliance of the user's path to be quality-inspected;
and the third processing unit is used for obtaining a quality inspection result of the original voice file according to the compliance of the user's path to be quality-inspected.
In a third aspect, an embodiment of the present invention provides a server, including: a processor, a memory and a program stored on the memory and executable on the processor, the program, when executed by the processor, implementing the steps of the method of voice flow quality inspection according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a readable storage medium, where a program is stored, and when the program is executed by a processor, the steps of the method of voice flow quality inspection according to the first aspect are implemented.
In the embodiment of the invention, the intention list of the user is obtained by recognition from the voice conversation between the user and the agent, and the quality inspection result of the original voice file of that conversation is then obtained by comparing and analyzing the intention list against the standard path for the actual business transacted by the user; thus fully automatic detection of voice files can be realized, greatly improving quality inspection efficiency and quality inspection coverage.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flowchart of a voice flow quality inspection method according to an embodiment of the present invention;
FIG. 2 is a second flowchart of a voice flow quality inspection method according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a voice process quality inspection according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a text classification model according to an embodiment of the present invention;
FIG. 5 is a text context interaction diagram of an embodiment of the present invention;
FIG. 6 is a schematic diagram of a voice flow quality inspection apparatus according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of a server according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "comprises," "comprising," or any other variation thereof, in the description and claims of this application, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Furthermore, the use of "and/or" in the specification and claims means at least one of the connected objects; for example, A and/or B covers three cases: A alone, B alone, and both A and B.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to mean serving as an example, illustration, or description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
Referring to fig. 1, an embodiment of the present invention provides a method for quality inspection of a voice flow, which includes: step 101 to step 104.
Step 101: obtaining an original voice file, wherein the original voice file comprises: voice conversation between the user and the agent;
step 102: identifying and obtaining an intention list of a user according to the original voice file, wherein the intention list comprises one or more intentions of the user, and each intention corresponds to a process node identifier;
it is understood that role intention recognition may adopt a model based on a bidirectional recurrent neural network (for example, Long Short-Term Memory (LSTM), a type of recurrent neural network for sequential data). The input of the model is a sentence in the text; the high-dimensional feature vector of the current sentence is obtained through the model's encoding, and an intention label is then obtained from this feature vector through a classification algorithm (for example, a softmax classifier).
It should be noted that, in the embodiment of the present invention, the intention of the user may also be identified by using a Transformer, BERT, or other model; that is, the embodiment of the present invention does not specifically limit the model used for intention recognition.
Optionally, the corresponding relationship between the intention and the flow node identifier is configured in advance. Wherein, the intention refers to analyzing which flow the user wants to select based on the conversation content.
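To make this pre-configured correspondence concrete, the following sketch maps recognized intention labels to process node identifiers and converts an intention list into a node-identifier path. The label names and numeric identifiers here are hypothetical illustrations, not values given by the invention.

# Hypothetical mapping from intention labels to process node identifiers;
# the label names and numbers are illustrative only.
INTENT_TO_NODE = {
    "greeting": 8,
    "identity_confirmation": 1,
    "expiry_notice": 2,
    "deferral_inquiry": 3,
    "condition_explanation": 5,
    "handling_offer": 9,
    "id_verification": 4,
    "closing": 6,
}

def intents_to_path(intent_list):
    """Convert a recognized intention list into a '-'-joined node-identifier path."""
    return "-".join(str(INTENT_TO_NODE[i]) for i in intent_list if i in INTENT_TO_NODE)

# Example: ["greeting", "identity_confirmation", "closing"] -> "8-1-6"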
Step 103: acquiring a standard path corresponding to the original voice file and used for transacting actual business by the user, wherein the standard path comprises one or more process nodes, and each node corresponds to a process node identifier;
step 104: and analyzing to obtain a quality inspection result of the original voice file according to the intention list and the standard path.
To facilitate understanding of steps 101-104, an illustrative description of an original speech file is provided below.
Agent: Hello, I am an agent of company xx.
User: Thank you, hello.
Agent: May I ask, are you Mr. Li xx?
User: Yes, I am Li xx.
Agent: Hello Mr. Li. The business you transacted with our company is about to expire; would you like to apply for a deferral?
User: What conditions are needed for a deferral?
Agent: Mr. Li, if you want to defer, you need to apply on the app, and it requires 200 yuan.
User: That is so troublesome; I do not want to do it.
Agent: Mr. Li, we can help you handle the deferral.
User: Then please help me handle it.
Agent: Mr. Li, we need to verify your identity; may I ask, what are the last four digits of your ID card?
User: Uh, what did you say?
Agent: What are the last four digits of your ID card?
User: One two three four.
Agent: OK, Mr. Li, we will handle it for you shortly; after it is done you will receive an SMS reminder from us.
User: Fine powder of red sage root.
Agent: Mr. Li, is there anything else we can help you with?
User: No, that is all.
Agent: OK, thank you Mr. Li; thank you for your time, wishing you a pleasant life, goodbye.
Based on the above speech content, the intention recognition result: 8-1-2-3-5-9-1-4-8-2-4-9-3-6, wherein each number represents a process node identifier.
And the standard path for the user to transact business includes: 8-3-5-7-1-4-2-9-6.
Path matching: according to the intention recognition result and the standard path, the flow compliance of the conversation can be analyzed, that is, whether the agent communicated with the user according to the standard requirements. If the intention recognition result completely matches the standard path and the number of node identifiers in the intention recognition result is the same as the total number of node identifiers in the standard path, the agent is considered to have communicated with the user completely according to the standard requirements, and the quality inspection result of the original voice file is compliant. If the intention recognition result completely matches the standard path but the number of node identifiers in the intention recognition result is larger than the total number of node identifiers in the standard path, the agent is considered not to have communicated with the user strictly according to the standard requirements, but the quality inspection result of the original voice file may still be regarded as compliant. If the intention recognition result does not match the standard path, the agent is considered not to have communicated with the user according to the standard requirements, and the quality inspection result of the original voice file is regarded as non-compliant.
Further, according to the intention recognition result and the standard path, it can also be determined at which node a problem occurred when the agent communicated with the customer.
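The exact matching formula is not spelled out above; the sketch below is one plausible string/sequence-matching reading of these rules. It checks how much of the standard path appears, in order, inside the recognized path, reports the standard nodes that are missing, and derives a simple compliance ratio; the function name and the ratio definition are assumptions for illustration.

def check_compliance(recognized_path, standard_path):
    """
    recognized_path, standard_path: strings such as "8-1-2-3-5-9-1-4-8-2-4-9-3-6".
    Returns (compliance, missing_nodes): the fraction of standard nodes matched
    in order, and the standard nodes that never appear in order.
    This is an illustrative sequence-matching reading of the description above,
    not the exact algorithm of the invention.
    """
    recognized = recognized_path.split("-")
    standard = standard_path.split("-")

    matched, missing = 0, []
    pos = 0
    for node in standard:
        try:
            pos = recognized.index(node, pos) + 1  # match standard nodes in order
            matched += 1
        except ValueError:
            missing.append(node)

    compliance = matched / len(standard) if standard else 1.0
    return compliance, missing

# Example with the paths from the description above:
compliance, missing = check_compliance("8-1-2-3-5-9-1-4-8-2-4-9-3-6", "8-3-5-7-1-4-2-9-6")
print(compliance, missing)  # about 0.889 and ['7'] under this matching strategy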
The embodiment of the invention decomposes recording flow quality inspection into intention recognition and flow compliance detection. Intention recognition can adopt a Transformer model, a BERT model, and the like; flow compliance detection can adopt character string matching, or path search over a dictionary tree (trie). This approach has high detection efficiency and can accurately locate the process nodes missing from a path, so that quality inspection personnel can accurately and quickly locate the reason a recording violates the rules.
The voice flow quality inspection method provided by the embodiment of the invention can realize fully automatic detection with 100% coverage and zero missed detections, greatly improving quality inspection efficiency and quality inspection coverage (an improvement of at least 50 times), while greatly freeing up manpower and reducing the company's customer service cost.
Referring to fig. 2 and 3, the method for quality inspection of voice flow will be described, and the specific steps in fig. 3 include: step 301 to step 307.
Step 301: obtaining an original voice file, wherein the original voice file comprises: voice conversation between the user and the agent;
step 302: converting the original voice file into a text file;
referring to fig. 2, an original speech file is converted into a text file, for example, by using existing speech recognition technology, and then the text file is preprocessed, which generally includes but is not limited to: word segmentation, word stem extraction, stop word removal, word vector, data equalization processing and the like.
Optionally, the preprocessing mainly comprises:
(1) merging adjacent sentences and labels of the same role, so that enough information is obtained within a fixed context window, the phenomenon of the same keyword appearing in different sentences because of pauses is reduced, and semantic consistency is ensured;
(2) performing operations such as word segmentation, part-of-speech extraction, and word vector representation on the text obtained in step 302;
(3) sample balancing: to address the weakened generalization ability of the model caused by imbalanced sample distribution, the samples of the corresponding intentions are down-sampled or over-sampled, which effectively improves the generalization ability of the model.
For example, after preprocessing, each word in the text and its contextual representation are obtained; the current sentence and its context are then each encoded by a bidirectional recurrent neural network (BiLSTM encoding) to obtain high-dimensional feature representations; a joint representation of the current sentence and the context is obtained through an attention mechanism (Attention); finally, classification is performed by softmax (classification layer) to obtain the intention label of each sentence in the text, and the intention list is output.
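A minimal sketch of the role-related preprocessing described above, assuming the transcribed text is available as a list of (role, sentence) pairs; the role tag values and window sizes are illustrative assumptions. It merges adjacent sentences of the same role (preprocessing step (1)) and then pairs each utterance of a given role with its context window.

def merge_adjacent_by_role(turns):
    """
    turns: list of (role, sentence) pairs, e.g. [("agent", "..."), ("user", "...")].
    Adjacent sentences spoken by the same role are merged into one utterance.
    """
    merged = []
    for role, sentence in turns:
        if merged and merged[-1][0] == role:
            merged[-1] = (role, merged[-1][1] + " " + sentence)
        else:
            merged.append((role, sentence))
    return merged

def utterances_with_context(turns, role, m=2, n=2):
    """For each utterance of the given role, return it together with its
    m preceding and n following utterances as the context window."""
    samples = []
    for i, (r, sent) in enumerate(turns):
        if r == role:
            context = turns[max(0, i - m):i] + turns[i + 1:i + 1 + n]
            samples.append((sent, context))
    return samples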
Step 303: obtaining the conversation content of the user and/or the conversation content of the agent according to the role label in the text file;
step 304: identifying and obtaining the intention list of the user according to the conversation content of the user and/or the conversation content of the agent;
mode 1: acquiring each sentence of conversation of the user and the context of the conversation according to the conversation content of the user; identifying and obtaining an intention list of the user according to each sentence of the dialog of the user and the context of the dialog;
mode 2: acquiring each sentence of the agent's conversation and the context of the conversation according to the conversation content of the agent; identifying and obtaining the intention list of the user according to each sentence of the agent's dialog and the context of the dialog;
mode 3: acquiring each sentence of the user's conversation and its context according to the conversation content of the user; acquiring each sentence of the agent's conversation and its context according to the conversation content of the agent; and identifying and obtaining the intention list of the user according to each sentence of the user's conversation and its context and each sentence of the agent's conversation and its context.
With reference to fig. 2, the coding layer adopts a BiLSTM model, which makes good use of the contextual word information within a sentence; the interaction between context sentences adopts an attention mechanism, so the information between context sentences is also well used, and the semantic information of the text can be effectively extracted. Furthermore, intentions are recognized per role, and the intention of each role can be recognized with different methods and strategies. On the one hand, this improves the accuracy of intention recognition; on the other hand, it effectively alleviates the problem caused by the inconsistent transcription accuracy of the agent and the user (agent personnel are professionally trained, so their pronunciation and speaking speed are relatively standard and their speech recognition accuracy is higher, whereas users are distributed all over the country, with different accents and different age groups, so their speech recognition accuracy is relatively low).
That is to say, the text classification model adopted for role intention recognition may be a bidirectional recurrent neural network with an attention mechanism (BiLSTM-Attention); see fig. 4. The input of the model is a sentence of the text together with the context of that sentence. High-dimensional feature vectors of the current sentence and the context are obtained by encoding with a bidirectional BiLSTM; an attention layer then calculates the correlation between the current sentence and each context sentence, and the current sentence is combined with the weighted sum vector of the context to obtain a joint vector, where the weights are the correlation scores calculated by the attention layer; finally, the joint vector is classified by softmax to obtain the intention label.
The BiLSTM-Attention model used in the embodiment of the invention mainly comprises four parts: feature representation, semantic coding, context interaction, and classification.
(1) Feature representation
The feature representation is mainly embodied in the input layer, that is, which features are selected to represent the text; optional features include word vectors, parts of speech, and the like. The invention mainly adopts the following features: a. word vectors: pre-trained word vectors are used to represent the language units after word segmentation; b. character vectors: these alleviate the noise caused by word segmentation errors and out-of-vocabulary words; c. part-of-speech tags: these introduce lexical prior knowledge; d. role labels: these are used in the context interaction layer to distinguish the role information of the context.
(2) Semantic coding
The invention adopts a bidirectional recurrent neural network (BRNN): the embedding of the text is first input to a bidirectional BiLSTM layer, and the outputs in the forward and backward directions are then combined to obtain the encoding of the text (see the LSTM Layer in fig. 5).
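A minimal PyTorch sketch of this semantic-coding step, assuming word-index inputs: a sentence is embedded, passed through a bidirectional LSTM, and the final forward and backward hidden states are concatenated as the sentence encoding. The hyperparameters and the pooling choice are illustrative assumptions, not values given by the invention.

import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    """Embedding + bidirectional LSTM; returns one vector per sentence."""
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, token_ids):             # token_ids: (batch, seq_len)
        emb = self.embedding(token_ids)       # (batch, seq_len, emb_dim)
        outputs, (h_n, _) = self.bilstm(emb)  # h_n: (2, batch, hidden_dim)
        # Concatenate the final forward and backward hidden states.
        return torch.cat([h_n[0], h_n[1]], dim=-1)  # (batch, 2 * hidden_dim)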
(3) Context interaction
In ordinary dialogue there is continuity between sentences, and the intention of the current sentence is strongly related to the preceding and following sentences, so the invention combines several preceding and following sentences to predict the intention of the current sentence. The association between sentences adopts an attention mechanism: the correlation between the current sentence and each context sentence is calculated, a new context representation is obtained by weighted summation, and this representation is then concatenated with the features of the current sentence as a state representation carrying the context information (see the Attention Layer in fig. 5).
First, the encoding h_i of the current sentence is correlated with the encoding of each of its context sentences, see equation (1):

e_ij = score(h_i, h_j)    (1)

where i denotes the index of the current sentence, h_i denotes the encoding of the current sentence, h_j denotes the encoding of a sentence before or after the current sentence, with j in the range [i-M, i+N] and i ≠ j, and M and N denote the numbers of preceding and following sentences participating in the context interaction.

Then, the weight coefficients of the preceding and following sentences are calculated, and the sentence encodings are weighted and summed using equation (2) and equation (3), finally obtaining the context vector:

α_ij = exp(e_ij) / Σ_{k=1..T_x} exp(e_ik)    (2)

c_i = Σ_{j=1..T_x} α_ij · h_j    (3)

where α_ij denotes the weight coefficient between the current sentence and a context sentence, c_i denotes the weighted text vector containing the context information, and T_x denotes the number of texts in the context.
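Equations (1) to (3) can be sketched as follows, using a dot-product score function as an assumption (the score function is not specified above); h_current is the encoding of the current sentence and h_context stacks the encodings of its context sentences.

import torch

def context_attention(h_current, h_context):
    """
    h_current: (d,) encoding of the current sentence.
    h_context: (T_x, d) encodings of the context sentences.
    Returns the context vector c_i of equation (3).
    Uses a dot-product score for equation (1); the actual score function
    is an assumption here.
    """
    e = h_context @ h_current            # equation (1): e_ij = score(h_i, h_j)
    alpha = torch.softmax(e, dim=0)      # equation (2): attention weights
    c = (alpha.unsqueeze(1) * h_context).sum(dim=0)  # equation (3): weighted sum
    return c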
(4) Classification layer
Based on the text representation result, a layer of fully-connected neural network is applied and softmax is used to calculate the probability of the sample under each label, obtaining the intention result of the text (equivalent to the intention list).
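Completing the sketch, the classification layer can be written as below: the current-sentence encoding is concatenated with its context vector and passed through one fully-connected layer with softmax to obtain the probability of each intention label. In training one would typically keep the logits and use a cross-entropy loss; the explicit softmax here simply mirrors the description above.

import torch
import torch.nn as nn

class IntentClassifier(nn.Module):
    """Joint representation (current sentence + context vector) -> label probabilities."""
    def __init__(self, encoding_dim, num_labels):
        super().__init__()
        self.fc = nn.Linear(2 * encoding_dim, num_labels)

    def forward(self, h_current, c_context):
        joint = torch.cat([h_current, c_context], dim=-1)  # joint representation
        return torch.softmax(self.fc(joint), dim=-1)       # probability per intention label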
Step 305: obtaining the user's path to be quality-inspected according to the intention list;
step 306: matching the user's path to be quality-inspected with the standard path, and calculating the compliance of the user's path to be quality-inspected;
The standard path is the standard path, corresponding to the original voice file, for the actual business transacted by the user.
For example, the user's path to be quality-inspected is matched with the standard path by means of character string matching, and its compliance is calculated;
step 307: obtaining the quality inspection result of the original voice file according to the compliance of the user's path to be quality-inspected.
Further, the method further comprises: locating the process nodes missing from the path to be quality-inspected according to the quality inspection result of the original voice file; and determining the reason the agent violated the rules in the original voice file according to the missing process nodes.
In the embodiment of the invention, the voice flow quality inspection process is decomposed into two parts: role intention recognition and path compliance detection. Role intention recognition adopts a model based on a bidirectional recurrent neural network with an attention mechanism (BiLSTM-Attention): the input of the model is a sentence of the text together with the context of that sentence; high-dimensional feature vectors of the current sentence and the context are obtained by encoding with a bidirectional LSTM; an attention layer then calculates the correlation between the current sentence and the context sentences, and the current sentence is spliced with the weighted sum vector of the context to obtain a joint vector, where the weights are the correlation scores calculated by the attention layer; finally, the joint vector is classified by softmax to obtain the intention label. On the one hand, the information of the context words within a sentence is effectively used; on the other hand, the information of the different context sentences is also used, so the accuracy and generalization ability of the model can be effectively improved. Path compliance detection adopts a character string matching method, which has high detection efficiency and can accurately locate the process nodes missing from a path, so quality inspection personnel can accurately and quickly locate the reason a recording violates the rules, and the scheme is highly interpretable. Compared with the traditional manual spot-check method, this scheme can realize fully automatic detection with 100% coverage and zero missed detections, greatly improving quality inspection efficiency and quality inspection coverage (an improvement of at least 50 times), while greatly freeing up manpower and reducing the company's customer service cost; compared with rule-based quality inspection methods, this scheme has higher accuracy, low maintenance cost, and strong generalization ability.
Referring to fig. 6, an embodiment of the present invention provides an apparatus for quality inspection of a voice process, where the apparatus 600 includes:
a first obtaining module 601, configured to obtain an original voice file, where the original voice file includes: voice conversation between the user and the agent;
a recognition module 602, configured to recognize, according to the original voice file, an intention list of a user, where the intention list includes intentions of one or more users, and each intention corresponds to a process node identifier;
a second obtaining module 603, configured to obtain a standard path, corresponding to the original voice file, where the standard path is used for the user to handle an actual service, where the standard path includes one or more process nodes, and each node corresponds to a process node identifier;
and a quality inspection module 604, configured to analyze the intention list and the standard path to obtain a quality inspection result of the original voice file.
In some implementations, the identification module 602 includes:
the conversion unit is used for converting the original voice file into a text file;
the first processing unit is used for obtaining the conversation content of the user and/or the conversation content of the agent according to the role label in the text file;
and the identification unit is used for identifying and obtaining the intention list of the user according to the conversation content of the user and/or the conversation content of the agent.
In some embodiments, the identification unit is further configured to:
acquiring each sentence of the user's conversation and the context of the conversation according to the conversation content of the user;
identifying and obtaining the intention list of the user according to each sentence of the user's dialog and the context of the dialog;
or,
acquiring each sentence of the agent's conversation and the context of the conversation according to the conversation content of the agent;
identifying and obtaining the intention list of the user according to each sentence of the agent's dialog and the context of the dialog;
or,
acquiring each sentence of the user's conversation and the context of the conversation according to the conversation content of the user;
acquiring each sentence of the agent's conversation and the context of the conversation according to the conversation content of the agent;
and identifying and obtaining the intention list of the user according to each sentence of the user's conversation and its context and each sentence of the agent's conversation and its context.
In some embodiments, the identification unit is further configured to: take each sentence of dialogue and the context of the dialogue as the input of a model of a bidirectional recurrent neural network with an attention mechanism; encode them through the bidirectional recurrent neural network in the model to obtain the high-dimensional feature vectors of the current sentence and the context respectively; input the high-dimensional features of the current sentence and the context into the attention layer in the model and output a joint representation of the current sentence and the context; and classify the joint representation to obtain an intention label of each sentence, and output the intention list of the user in the original voice file.
In some embodiments, the identification unit is further configured to: according to the joint representation of the current sentence and the context, calculate the probability of the current sentence under each preset label through a layer of fully-connected neural network and a classifier, to obtain the intention label of each sentence.
In some embodiments, the quality inspection module 604 includes:
the second processing unit is used for obtaining the user's path to be quality-inspected according to the intention list;
the matching unit is used for matching the user's path to be quality-inspected with the standard path and calculating the compliance of the user's path to be quality-inspected;
and the third processing unit is used for obtaining a quality inspection result of the original voice file according to the compliance of the user's path to be quality-inspected.
In some embodiments, the quality inspection module 604 further comprises:
the positioning unit is used for locating the process nodes missing from the path to be quality-inspected according to the quality inspection result of the original voice file;
and the fourth processing unit is used for determining the reason the agent violated the rules in the original voice file according to the missing process nodes.
The apparatus provided in the embodiment of the present invention may execute the method embodiment shown in fig. 1 or fig. 2, which has similar implementation principles and technical effects, and this embodiment is not described herein again.
Referring to fig. 7, fig. 7 is a block diagram of a server according to an embodiment of the present invention. As shown in fig. 7, the server 700 includes: a processor 701, a transceiver 702, a memory 703 and a bus interface, wherein:
In an embodiment of the present invention, the server 700 further includes: a program stored on the memory 703 and executable on the processor 701, and the program, when executed by the processor 701, implements the steps shown in fig. 1 or fig. 2.
In fig. 7, the bus architecture may include any number of interconnected buses and bridges, with one or more processors, represented by processor 701, and various circuits, represented by memory 703, being linked together. The bus architecture may also link together various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. The bus interface provides an interface. The transceiver 702 may be a number of elements including a transmitter and a receiver that provide a means for communicating with various other apparatus over a transmission medium.
The processor 701 is responsible for managing the bus architecture and general processing, and the memory 703 may store data used by the processor 701 in performing operations.
The server provided by the embodiment of the present invention may execute the above method embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
The embodiment of the present invention further provides a readable storage medium, where a program is stored on the readable storage medium, and when the program is executed by a processor, the program implements each process of the embodiment of the voice process quality inspection method, and can achieve the same technical effect, and in order to avoid repetition, the detailed description is omitted here. The readable storage medium may be a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied in hardware or in software instructions executed by a processor. The software instructions may consist of corresponding software modules that may be stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable hard disk, a compact disk, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in a core network interface device. Of course, the processor and the storage medium may reside as discrete components in a core network interface device.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in this invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The above-mentioned embodiments, objects, technical solutions and advantages of the present invention are further described in detail, it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the present invention should be included in the scope of the present invention.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the embodiments of the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to encompass such modifications and variations.

Claims (10)

1. A method for voice flow quality inspection is characterized by comprising the following steps:
obtaining an original voice file, wherein the original voice file comprises: voice conversation between the user and the agent;
identifying and obtaining an intention list of a user according to the original voice file, wherein the intention list comprises one or more intentions of the user, and each intention corresponds to a process node identifier;
acquiring a standard path corresponding to the original voice file and used for transacting actual business by the user, wherein the standard path comprises one or more process nodes, and each node corresponds to a process node identifier;
and analyzing to obtain a quality inspection result of the original voice file according to the intention list and the standard path.
2. The method of claim 1, wherein the recognizing the list of intentions of the user from the original speech file comprises:
converting the original voice file into a text file;
obtaining the conversation content of the user and/or the conversation content of the agent according to the role label in the text file;
and identifying and obtaining the intention list of the user according to the conversation content of the user and/or the conversation content of the agent.
3. The method according to claim 2, wherein the identifying the intention list of the user according to the dialog content of the user and/or the dialog content of the agent comprises:
acquiring each sentence of the user's conversation and the context of the conversation according to the conversation content of the user;
identifying and obtaining the intention list of the user according to each sentence of the user's dialog and the context of the dialog;
or,
acquiring each sentence of the agent's conversation and the context of the conversation according to the conversation content of the agent;
identifying and obtaining the intention list of the user according to each sentence of the agent's dialog and the context of the dialog;
or,
acquiring each sentence of the user's conversation and the context of the conversation according to the conversation content of the user;
acquiring each sentence of the agent's conversation and the context of the conversation according to the conversation content of the agent;
and identifying and obtaining the intention list of the user according to each sentence of the user's conversation and its context and each sentence of the agent's conversation and its context.
4. The method of claim 3, wherein the identifying the list of intentions of the user from the original speech file comprises:
inputting each sentence of dialogue and the context of the dialogue into a bidirectional recurrent neural network of a model, and coding through the bidirectional recurrent neural network to respectively obtain high-dimensional feature vectors of the current sentence and the context;
inputting the high-dimensional characteristics of the current sentence and the context into an attention layer in the model, and outputting a joint representation of the current sentence and the context;
and classifying the joint representation to obtain an intention label of each sentence, and outputting an intention list of the user in the original voice file.
5. The method of claim 4, wherein the classifying the joint representation resulting in an intent tag for each sentence, comprises:
and inputting the joint representation of the current sentence and the context into a fully-connected neural network, and outputting the probability of the current sentence under a preset label to obtain an intention label of each sentence.
6. The method according to claim 1, wherein the analyzing the quality inspection result of the original voice file according to the intention list and the standard path comprises:
obtaining the user's path to be quality-inspected according to the intention list;
matching the user's path to be quality-inspected with the standard path by means of character string matching, and calculating the compliance of the user's path to be quality-inspected;
and obtaining a quality inspection result of the original voice file according to the compliance of the user's path to be quality-inspected.
7. The method of claim 6, further comprising:
locating the process nodes missing from the path to be quality-inspected according to the quality inspection result of the original voice file;
and determining the violation reason of the agent in the original voice file according to the missing process nodes.
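Illustrative sketch (not part of the claims): claims 6 and 7 match the path to be quality-inspected against the standard path by character string matching, compute a compliance value, and locate the missing process nodes. The claims do not define the compliance formula, so the sketch below assumes one plausible reading: compliance is the fraction of standard-path nodes matched in order (a longest common subsequence), and every unmatched standard-path node is reported as missing. All node identifiers are hypothetical.

```python
# Assumed compliance metric: in-order coverage of the standard path by the inspected path.
from typing import List, Tuple

def lcs(a: List[str], b: List[str]) -> List[str]:
    """Longest common subsequence of two node-identifier sequences."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] else max(dp[i][j + 1], dp[i + 1][j])
    # Backtrack to recover the matched node identifiers.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

def inspect_path(inspected: List[str], standard: List[str]) -> Tuple[float, List[str]]:
    """Return (compliance, missing process nodes) for one call."""
    common = lcs(inspected, standard)
    compliance = len(common) / len(standard) if standard else 1.0
    matched = set(common)
    missing = [node for node in standard if node not in matched]
    return compliance, missing

# Example with made-up node identifiers:
# inspect_path(["greet", "verify_identity", "close"],
#              ["greet", "verify_identity", "state_terms", "close"])
# -> (0.75, ["state_terms"])
```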
8. An apparatus for voice flow quality inspection, comprising:
a first obtaining module, configured to obtain an original voice file, wherein the original voice file comprises a voice conversation between a user and an agent;
a recognition module, configured to recognize an intention list of the user according to the original voice file, wherein the intention list comprises one or more intentions of the user, and each intention corresponds to a process node identifier;
a second obtaining module, configured to obtain a standard path corresponding to the original voice file and used by the user to handle the actual business, wherein the standard path comprises one or more process nodes, and each node corresponds to a process node identifier;
and a quality inspection module, configured to analyze the intention list against the standard path to obtain a quality inspection result of the original voice file.
9. A server, comprising: a processor, a memory and a program stored on the memory and executable on the processor, the program, when executed by the processor, implementing the steps of the method of voice flow quality inspection according to any one of claims 1 to 7.
10. A readable storage medium, having a program stored thereon, wherein the program, when executed by a processor, implements the steps of the voice flow quality inspection method according to any one of claims 1 to 7.
CN202010552865.4A 2020-06-17 2020-06-17 Voice flow quality inspection method and device Active CN111883115B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010552865.4A CN111883115B (en) 2020-06-17 2020-06-17 Voice flow quality inspection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010552865.4A CN111883115B (en) 2020-06-17 2020-06-17 Voice flow quality inspection method and device

Publications (2)

Publication Number Publication Date
CN111883115A true CN111883115A (en) 2020-11-03
CN111883115B CN111883115B (en) 2022-01-28

Family

ID=73157912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010552865.4A Active CN111883115B (en) 2020-06-17 2020-06-17 Voice flow quality inspection method and device

Country Status (1)

Country Link
CN (1) CN111883115B (en)

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101662780A (en) * 2008-08-27 2010-03-03 中国移动通信集团湖北有限公司 Method and system for automatically detecting customer service voices
CN104123590A (en) * 2014-06-27 2014-10-29 国家电网公司 95598 customer service center operation monitoring system and method
CN106776806A (en) * 2016-11-22 2017-05-31 广东电网有限责任公司佛山供电局 The methods of marking and system of call center's quality inspection voice
CN107766560A (en) * 2017-11-03 2018-03-06 广州杰赛科技股份有限公司 The evaluation method and system of customer service flow
CN107833059A (en) * 2017-11-03 2018-03-23 广州杰赛科技股份有限公司 The QoS evaluating method and system of customer service
CN107886231A (en) * 2017-11-03 2018-04-06 广州杰赛科技股份有限公司 The QoS evaluating method and system of customer service
CN107886232A (en) * 2017-11-03 2018-04-06 广州杰赛科技股份有限公司 The QoS evaluating method and system of customer service
CN107886233A (en) * 2017-11-03 2018-04-06 广州杰赛科技股份有限公司 The QoS evaluating method and system of customer service
US20190147855A1 (en) * 2017-11-13 2019-05-16 GM Global Technology Operations LLC Neural network for use in speech recognition arbitration
CN108737667A (en) * 2018-05-03 2018-11-02 平安科技(深圳)有限公司 Voice quality detecting method, device, computer equipment and storage medium
CN109062951A (en) * 2018-06-22 2018-12-21 厦门快商通信息技术有限公司 Based on conversation process abstracting method, equipment and the storage medium for being intended to analysis and dialogue cluster
CN109327632A (en) * 2018-11-23 2019-02-12 深圳前海微众银行股份有限公司 Intelligent quality inspection system, method and the computer readable storage medium of customer service recording
CN109618064A (en) * 2018-12-26 2019-04-12 合肥凯捷技术有限公司 A kind of artificial customer service voices quality inspection system
CN109740155A (en) * 2018-12-27 2019-05-10 广州云趣信息科技有限公司 A kind of customer service system artificial intelligence quality inspection rule self concludes the method and system of model
CN109902175A (en) * 2019-02-20 2019-06-18 上海方立数码科技有限公司 A kind of file classification method and categorizing system based on neural network structure model
CN110083689A (en) * 2019-03-20 2019-08-02 上海拍拍贷金融信息服务有限公司 Customer service quality determining method and device, readable storage medium storing program for executing
CN110472224A (en) * 2019-06-24 2019-11-19 深圳追一科技有限公司 Detection method, device, computer equipment and the storage medium of service quality
CN110570853A (en) * 2019-08-12 2019-12-13 阿里巴巴集团控股有限公司 Intention recognition method and device based on voice data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FAN JUN: "Research on E-Service Quality, Customer Relational Benefits and Customer Satisfaction", 《2009 IEEE》 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022126969A1 (en) * 2020-12-15 2022-06-23 平安科技(深圳)有限公司 Service voice quality inspection method, apparatus and device, and storage medium
WO2022134591A1 (en) * 2020-12-23 2022-06-30 深圳壹账通智能科技有限公司 Stage-based quality inspection data classification method, apparatus, and device, and storage medium
CN113515594A (en) * 2021-04-28 2021-10-19 京东数字科技控股股份有限公司 Intention recognition method, intention recognition model training method, device and equipment
CN113411454A (en) * 2021-06-17 2021-09-17 商客通尚景科技(上海)股份有限公司 Intelligent quality inspection method for real-time call voice analysis
CN113393844A (en) * 2021-06-24 2021-09-14 大唐融合通信股份有限公司 Voice quality inspection method, device and network equipment
CN113393844B (en) * 2021-06-24 2022-12-06 大唐融合通信股份有限公司 Voice quality inspection method, device and network equipment
CN113270114A (en) * 2021-07-19 2021-08-17 北京明略软件系统有限公司 Voice quality inspection method and system
CN113674765A (en) * 2021-08-18 2021-11-19 中国联合网络通信集团有限公司 Voice customer service quality inspection method, device, equipment and storage medium
CN113761204A (en) * 2021-09-06 2021-12-07 南京大学 Emoji text emotion analysis method and system based on deep learning
CN113761204B (en) * 2021-09-06 2023-07-28 南京大学 Emoji text emotion analysis method and system based on deep learning
CN115660458A (en) * 2022-09-26 2023-01-31 广州云趣信息科技有限公司 Call quality inspection method and device based on context reasoning and electronic equipment
CN115660458B (en) * 2022-09-26 2023-10-20 广州云趣信息科技有限公司 Conversation quality inspection method and device based on context reasoning and electronic equipment

Also Published As

Publication number Publication date
CN111883115B (en) 2022-01-28

Similar Documents

Publication Publication Date Title
CN111883115B (en) Voice flow quality inspection method and device
CN110377911B (en) Method and device for identifying intention under dialog framework
CN113032545B (en) Method and system for conversation understanding and answer configuration based on unsupervised conversation pre-training
CN111104498A (en) Semantic understanding method in task type dialogue system
CN113268610B (en) Intent jump method, device, equipment and storage medium based on knowledge graph
CN110399472B (en) Interview question prompting method and device, computer equipment and storage medium
CN111897935B (en) Knowledge graph-based conversational path selection method and device and computer equipment
CN111625634A (en) Word slot recognition method and device, computer-readable storage medium and electronic device
CN114416989A (en) Text classification model optimization method and device
CN111984780A (en) Multi-intention recognition model training method, multi-intention recognition method and related device
CN115455982A (en) Dialogue processing method, dialogue processing device, electronic equipment and storage medium
CN113486174B (en) Model training, reading understanding method and device, electronic equipment and storage medium
CN113239694B (en) Argument role identification method based on argument phrase
CN114003700A (en) Method and system for processing session information, electronic device and storage medium
CN114548119A (en) Test set generation method, test method, device, equipment and medium
CN112036122B (en) Text recognition method, electronic device and computer readable medium
CN110795531B (en) Intention identification method, device and storage medium
CN112784580A (en) Financial data analysis method and device based on event extraction
CN116702765A (en) Event extraction method and device and electronic equipment
CN116127011A (en) Intention recognition method, device, electronic equipment and storage medium
CN113408287B (en) Entity identification method and device, electronic equipment and storage medium
CN115410560A (en) Voice recognition method, device, storage medium and equipment
CN112883183B (en) Method for constructing multi-classification model, intelligent customer service method, and related device and system
CN115357711A (en) Aspect level emotion analysis method and device, electronic equipment and storage medium
CN114239548A (en) Triple extraction method for merging dependency syntax and pointer generation network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant