CN112308591A - User evaluation method, device, equipment and computer readable storage medium - Google Patents

User evaluation method, device, equipment and computer readable storage medium

Info

Publication number
CN112308591A
Authority
CN
China
Prior art keywords
service
user
audio
service link
link
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910712760.8A
Other languages
Chinese (zh)
Inventor
程印超
刘阳
颜红燕
刘晓宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Communications Ltd Research Institute
Priority to CN201910712760.8A
Publication of CN112308591A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 - Commerce
    • G06Q30/02 - Marketing; Price estimation or determination; Fundraising
    • G06Q30/0282 - Rating or review of business operators or products
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Development Economics (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Accounting & Taxation (AREA)
  • Evolutionary Biology (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention provides a user evaluation method, a device, equipment and a computer readable storage medium, wherein the user evaluation method comprises the following steps: acquiring audio and video information of a user during a target service; according to the service links included in the target service, sequentially performing feature extraction on the audio/video segment corresponding to each service link in the audio/video information to obtain the feature information of each service link; and analyzing the feature information of each service link in sequence to obtain the user's evaluation result for each service link. According to the embodiment of the invention, the user's local evaluation of each individual link of the service can be obtained in a fine-grained manner, so that the user's evaluation of the service content can be fully understood, the accuracy of the evaluation result is improved, and the optimization and improvement of service quality are facilitated.

Description

User evaluation method, device, equipment and computer readable storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a user evaluation method, apparatus, device, and computer-readable storage medium.
Background
Currently, user evaluation is usually implemented by asking the user to provide an evaluation through a short message, an electronic questionnaire, or the like after the user has received a service. For example, a communication business hall generally sends a short message to the user after providing a service, so that the user can evaluate the service. However, this kind of user evaluation usually only asks the user for an overall evaluation of the service; it is not fine-grained enough, and the user's evaluation of the specific service content cannot be fully understood.
Disclosure of Invention
The embodiment of the invention provides a user evaluation method, a user evaluation device, user evaluation equipment and a computer-readable storage medium, and aims to solve the problem that the existing user evaluation method cannot fully reflect the user's evaluation of the service content.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a user evaluation method, applied to a user evaluation device, including:
acquiring audio and video information of a user in a target service process;
according to the service links included by the target service, sequentially extracting the characteristics of the audio and video fragments corresponding to each service link in the audio and video information to obtain the characteristic information of each service link;
and analyzing the characteristic information of each service link in sequence to obtain the evaluation result of the user on each service link.
Optionally, the analyzing the feature information of each service link in sequence to obtain an evaluation result of the user on each service link includes:
and sequentially inputting the characteristic information of each service link into a pre-trained evaluation model of each service link to obtain the evaluation result of each service link by the user.
Optionally, before the obtaining of the audio/video information of the user in the target service process, the method further includes:
acquiring a service training set;
classifying the services in the service training set to obtain each class of service;
according to a preset rule, dividing each class of service into N service links, wherein N is an integer greater than or equal to 1;
for each service link of each type of service, respectively executing the following processes:
respectively acquiring a plurality of audio and video clips of a plurality of users corresponding to the service link and a plurality of evaluation results of the plurality of users on the service link;
extracting the characteristics of the audio and video clips to obtain a plurality of characteristic information of the service link;
and training to obtain an evaluation model of the service link by using the plurality of characteristic information of the service link and the plurality of evaluation results.
Optionally, the performing, according to the service links included in the target service, feature extraction on the audio/video fragments corresponding to each service link in the audio/video information in sequence to obtain feature information of each service link includes:
analyzing the audio and video information, and sequentially determining each service link of the target service according to an analysis result and the service links included by the target service;
and sequentially intercepting the audio and video fragments corresponding to each service link from the audio and video information, and extracting the characteristics of the audio and video fragments corresponding to each service link to obtain the characteristic information of each service link.
Optionally, after obtaining the evaluation result of the user on each service link, the method further includes:
and determining the comprehensive evaluation result of the user on the target service according to the evaluation result of the user on each service link.
Optionally, after obtaining the evaluation result of the user on each service link, the method further includes:
and storing the audio and video clips corresponding to each service link and the evaluation result of the user on each service link into a database so as to serve as training samples to update the evaluation model of each service link.
Optionally, the feature information includes at least one of:
facial expression feature information, limb action feature information and voice semantic feature information.
In a second aspect, an embodiment of the present invention provides a user evaluation apparatus, which is applied to a user evaluation device, and includes:
the first acquisition module is used for acquiring audio and video information of a user in a target service process;
the characteristic extraction module is used for sequentially extracting the characteristics of the audio and video fragments corresponding to each service link in the audio and video information according to the service links included by the target service to obtain the characteristic information of each service link;
and the analysis module is used for analyzing the characteristic information of each service link in sequence to obtain the evaluation result of each service link by the user.
Optionally, the analysis module is specifically configured to:
and sequentially inputting the characteristic information of each service link into a pre-trained evaluation model of each service link to obtain the evaluation result of each service link by the user.
Optionally, the user evaluation device may further include:
the second acquisition module is used for acquiring a service training set;
the classification processing module is used for classifying the services in the service training set to obtain each class of service;
the dividing module is used for dividing each type of service into N service links according to a preset rule, wherein N is an integer greater than or equal to 1;
an execution module, configured to execute the following processes for each service link of each type of service respectively:
respectively acquiring a plurality of audio and video clips of a plurality of users corresponding to the service link and a plurality of evaluation results of the plurality of users on the service link;
extracting the characteristics of the audio and video clips to obtain a plurality of characteristic information of the service link;
and training to obtain an evaluation model of the service link by using the plurality of characteristic information of the service link and the plurality of evaluation results.
Optionally, the feature extraction module may include:
the analysis unit is used for analyzing the audio and video information and sequentially determining each service link of the target service according to an analysis result and the service links included by the target service;
and the characteristic extraction unit is used for sequentially intercepting the audio and video fragments corresponding to each service link from the audio and video information and extracting the characteristics of the audio and video fragments corresponding to each service link to obtain the characteristic information of each service link.
Optionally, the user evaluation device may further include:
and the determining module is used for determining the comprehensive evaluation result of the user on the target service according to the evaluation result of the user on each service link.
Optionally, the user evaluation device may further include:
and the storage module is used for storing the audio and video fragments corresponding to each service link and the evaluation result of the user on each service link into a database so as to serve as training samples to update the evaluation model of each service link.
Optionally, the characteristic information may include, but is not limited to, at least one of:
facial expression feature information, limb action feature information and voice semantic feature information.
In a third aspect, an embodiment of the present invention provides a user evaluation device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, may implement the steps of the user evaluation method described above.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, can implement the steps of the user evaluation method described above.
In the embodiment of the invention, the audio and video information of the user in the target service process is obtained; according to the service links included in the target service, feature extraction is sequentially performed on the audio/video segment corresponding to each service link in the audio/video information to obtain the feature information of each service link; and the feature information of each service link is analyzed in sequence, so that the user's evaluation result for each service link can be obtained. Compared with the prior art, the embodiment of the invention can obtain, in a fine-grained manner, the user's local evaluation of each individual link of the service, so that the user's evaluation of the service content can be fully understood, the accuracy of the evaluation result is improved, and the optimization and improvement of service quality are facilitated.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the embodiments of the present invention are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
FIG. 1 is a flow chart of a user evaluation method according to an embodiment of the present invention;
FIG. 2 is a flow diagram of a user evaluation process according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a user evaluation system according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a user evaluation device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a user evaluation device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a user evaluation method according to an embodiment of the present invention, where the method is applied to a user evaluation device, and the user evaluation device may be a user terminal or a server. As shown in fig. 1, the method may include the steps of:
step 101: and acquiring audio and video information of a user in a target service process.
In this embodiment, the target service may be any service provided for the user, such as the machine purchasing service, card changing service, package changing service or mobile phone maintenance service provided for the user by a communication operator. The audio/video information may be obtained by an audio/video acquisition device (for example, including a camera and an audio collector) that performs on-site audio/video acquisition of the user in real time while the user is receiving the target service.
In one embodiment, when it is monitored that a preset trigger condition of a target service is triggered (for example, the target service is started), an audio/video acquisition device may be started to perform service field audio/video acquisition on a user, so as to obtain audio/video information of the user in a target service process.
Step 102: and according to the service links included by the target service, sequentially extracting the characteristics of the audio and video fragments corresponding to each service link in the audio and video information to obtain the characteristic information of each service link.
It is understood that, for the service links included in the target service, the division may be performed based on the service content of the target service, and for example, N may be included, where N is an integer greater than or equal to 1. Taking the target service as the "machine purchasing service" provided by the communication operator as an example, the service links included in the "machine purchasing service" may be: experience model machine link, purchase machine work order filling link, payment link, signature link and the like.
In step 102, when features are extracted from an audio/video clip, the frame images in the video portion of the clip may be recognized by means of an existing image recognition algorithm (such as an image recognition model) to obtain corresponding feature information, and the audio portion of the clip may be recognized by means of an existing audio recognition algorithm (such as an audio recognition model) to obtain corresponding feature information.
Optionally, the feature information of a service link may include, but is not limited to, at least one of the following: facial expression feature information, limb action feature information, voice semantic feature information, and the like. Further, the facial expression feature information may include, for example, frowning, glaring, anger and smiling; the limb action feature information may include shrugging, standing with hands on hips, folding the arms, shaking the head, nodding, and the like; the voice semantic feature information may include, for example, slow speech speed, fast speech speed, trouble (a semantic feature), and the like.
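As an illustration of this feature-extraction step, the following Python sketch shows how the facial-expression, limb-action and voice-semantic features of one service-link clip might be collected by running existing image and audio recognizers over the clip. The data types, the recognizer callables and the "expr:"/"act:" label convention are assumptions made for the example, not part of the patent.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class AVClip:
    frames: List[object]   # decoded video frames of one service-link clip
    audio: object          # audio track of the same clip


@dataclass
class LinkFeatures:
    facial_expressions: List[str] = field(default_factory=list)  # e.g. "expr:smile"
    body_actions: List[str] = field(default_factory=list)        # e.g. "act:nod"
    speech_semantics: List[str] = field(default_factory=list)    # e.g. "fast speech"


def extract_link_features(
    clip: AVClip,
    image_recognizer: Callable[[object], List[str]],
    audio_recognizer: Callable[[object], List[str]],
) -> LinkFeatures:
    """Run existing image/audio recognition over one service-link clip."""
    features = LinkFeatures()
    for frame in clip.frames:
        labels = image_recognizer(frame)
        # Split image labels into expression vs. action features
        # (the "expr:"/"act:" tag prefixes are an assumed convention).
        features.facial_expressions += [l for l in labels if l.startswith("expr:")]
        features.body_actions += [l for l in labels if l.startswith("act:")]
    features.speech_semantics = audio_recognizer(clip.audio)
    return features
```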
Step 103: and analyzing the characteristic information of each service link in sequence to obtain the evaluation result of the user on each service link.
In this embodiment, when analyzing the feature information of each service link, the analysis may be performed according to a preset rule (for example, a preset correspondence between the feature information and the evaluation result), or may be performed through a pre-trained evaluation model.
Optionally, the step 103 may include:
and sequentially inputting the characteristic information of each service link into a pre-trained evaluation model of each service link to obtain the evaluation result of each service link by the user.
After the evaluation result of the user on each service link is obtained by means of the evaluation model, the audio and video segment corresponding to each service link and the evaluation result of the user on each service link can be stored in the database to serve as a training sample to update the evaluation model of each service link, so that the analysis accuracy of the evaluation model is ensured.
It should be noted that the pre-trained evaluation models correspond to the service links, that is, each service link corresponds to one pre-trained evaluation model, and different service links correspond to different pre-trained evaluation models.
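A minimal sketch of this per-link analysis is given below, assuming each service link's pre-trained evaluation model is available as a callable that maps feature information to a score; the registry structure and model interface are illustrative assumptions, not the patent's API.

```python
from typing import Callable, Dict

EvalModel = Callable[[object], float]  # feature information in, evaluation score out


def evaluate_links(
    link_features: Dict[str, object],   # service link name -> its feature information
    link_models: Dict[str, EvalModel],  # service link name -> its own pre-trained model
) -> Dict[str, float]:
    """Feed each link's feature information into that link's dedicated model."""
    results: Dict[str, float] = {}
    for link_name, features in link_features.items():
        model = link_models[link_name]  # one evaluation model per service link
        results[link_name] = model(features)
    return results
```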
According to the user evaluation method provided by the embodiment of the invention, the audio and video information of the user in the target service process is obtained; according to the service links included in the target service, feature extraction is sequentially performed on the audio/video segment corresponding to each service link in the audio/video information to obtain the feature information of each service link; and the feature information of each service link is analyzed in sequence, so that the user's evaluation result for each service link can be obtained. Compared with the prior art, the embodiment of the invention can obtain, in a fine-grained manner, the user's local evaluation of each individual link of the service, so that the user's evaluation of the service content can be fully understood, the accuracy of the evaluation result is improved, and the optimization and improvement of service quality are facilitated.
In order to obtain the user's evaluation of each service link objectively and in a timely manner, the audio/video information in the embodiment of the invention may be acquired in real time during the target service. By monitoring the service links of the target service, the user's evaluation result for the current service link is obtained in real time from the audio/video segment corresponding to the current service link; when the next service link is entered, the user's evaluation result for that link is obtained from its corresponding audio/video segment, and so on until the service process ends.
The service links of the target service may be monitored by analyzing the audio/video information of the user, or by analyzing the operations performed by the service personnel on the corresponding service system. It can be understood that, since the service contents of different service links generally differ (for example, in the "machine purchasing service", the purchase work order filling link requires the user to fill in a purchase work order, and the payment link requires the user to pay), the service links can be monitored by analyzing the user's audio/video information during the service process.
Optionally, the step 102 may include:
analyzing the audio and video information, and sequentially determining each service link of the target service according to an analysis result and the service links included by the target service;
and sequentially intercepting the audio and video fragments corresponding to each service link from the audio and video information, and extracting the characteristics of the audio and video fragments corresponding to each service link to obtain the characteristic information of each service link.
Therefore, the service link where the user is located can be determined in real time, and the evaluation result of the user on the service link where the user is located can be obtained in real time, objectively and timely by means of feature extraction of the corresponding audio and video clips.
Further, after obtaining the evaluation result of the user on the service link, the evaluation result can be fed back to the current service staff, so that the current service staff can perform self-check and adjust the service on the user in real time according to the evaluation result.
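The following sketch illustrates this optional refinement of step 102: determining the service-link boundaries from the analyzed stream and cutting out one audio/video segment per link. The boundary-detection callable is a stand-in for whatever analysis (of the user's audio/video or of the service system's operations) an implementation actually uses; it is an assumption, not the patent's interface.

```python
from typing import Callable, Dict, List


def segment_by_link(
    av_stream: List[object],                    # time-ordered audio/video samples
    expected_links: List[str],                  # service links of the target service, in order
    detect_boundary: Callable[[object], bool],  # True when a sample marks the start of the next link
) -> Dict[str, List[object]]:
    """Slice the recorded stream into one audio/video segment per service link."""
    segments: Dict[str, List[object]] = {name: [] for name in expected_links}
    link_index = 0
    for sample in av_stream:
        # Advance to the next expected link whenever a boundary is detected.
        if detect_boundary(sample) and link_index + 1 < len(expected_links):
            link_index += 1
        segments[expected_links[link_index]].append(sample)
    return segments
```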
In order to obtain the user's evaluation result for the corresponding service link by using a pre-trained evaluation model, in the embodiment of the invention the evaluation model may be trained in advance, before user evaluation is actually performed. Optionally, the model training process in the embodiment of the present invention may include:
acquiring a service training set;
classifying the services in the service training set to obtain each class of service;
according to a preset rule, dividing each class of service into N service links, wherein N is an integer greater than or equal to 1;
for each service link of each type of service, respectively executing the following processes:
respectively acquiring a plurality of audio and video clips of a plurality of users corresponding to the service link and a plurality of evaluation results of the plurality of users on the service link;
extracting the characteristics of the audio and video clips to obtain a plurality of characteristic information of the service link;
and training to obtain an evaluation model of the service link by using the plurality of characteristic information of the service link and the plurality of evaluation results.
The service training set may include a plurality of services for the user. For example, taking the business hall service of a communication operator as an example, the corresponding service training set can be shown in table 1 below:
TABLE 1
Service training set    Specific service
Service 1               Machine purchasing service
Service 2               Card changing service
Service 3               Package changing service
……                      ……
Service M               Mobile phone maintenance service
Optionally, when classifying the services in the service training set, services of the same type may be grouped into one class, so as to obtain each class of service. For example, in the service training set shown in table 1, the service classes obtained by the classification processing may be the machine purchasing service, the card changing service, the package changing service, and the mobile phone maintenance service.
When each type of service is divided into service links, the service links can be divided according to preset rules and specific service contents. For example, taking the "machine purchasing service" in table 1 as an example, the service links obtained by dividing the "machine purchasing service" can be shown in table 2 below:
TABLE 2
Service link sequence number    Service link content
Link 1                          Experience model machine link
Link 2                          Purchasing machine work order filling link
Link 3                          Payment link
……                              ……
Link N                          Signing link
It should be noted that each service link in this embodiment corresponds to its own pre-trained evaluation model. Therefore, model training is performed per service link: for a given service link, training samples of that link (at least including a plurality of audio/video clips and the corresponding evaluation results) are first obtained, and the evaluation model of that link is then trained with these samples. The feature information obtained during model training may also include facial expression feature information, limb action feature information and/or voice semantic feature information. The specific training algorithm may be an existing algorithm, such as a machine learning algorithm or a neural network deep learning algorithm, which is not limited in the embodiments of the present invention.
For example, taking the "payment link" in table 2 as an example, the obtained multiple audio/video clips of multiple users may be as shown in table 3 below:
TABLE 3
Payment link    Audio/video clip
User 1          Audio/video clip 1
User 2          Audio/video clip 2
User 3          Audio/video clip 3
……              ……
User M          Audio/video clip M
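By way of illustration only, the sketch below trains an evaluation model for a single service link, such as the "payment link" of table 3, from per-clip feature information and the corresponding user evaluation results. The description leaves the learning algorithm open (machine learning, neural network deep learning, etc.); a scikit-learn random forest over bag-of-feature counts is used here purely as one assumed choice, and the data shapes are illustrative.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer


def train_link_model(feature_dicts, evaluation_results):
    """feature_dicts: one dict of feature counts per user clip for this link,
    e.g. {"expr:smile": 3, "act:nod": 1, "speech:fast": 2};
    evaluation_results: the corresponding evaluation labels given by those users."""
    vectorizer = DictVectorizer(sparse=False)
    X = vectorizer.fit_transform(feature_dicts)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, evaluation_results)
    return vectorizer, model


# Hypothetical usage for the "payment link" samples of table 3:
# samples = [{"expr:smile": 2, "speech:fast": 1}, {"expr:frown": 3, "act:shrug": 1}]
# labels = ["satisfied", "dissatisfied"]
# vectorizer, payment_model = train_link_model(samples, labels)
```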
In at least one embodiment of the present invention, after the step 103, the method may further include:
and determining the comprehensive evaluation result of the user on the target service according to the evaluation result of the user on each service link.
The evaluation result may be in the form of a score (for example, on a 10-point scale). When determining the comprehensive evaluation result of the corresponding service from the evaluation results of its service links, either an arithmetic mean or a weighted mean may be used. In this way, the user's overall evaluation of the corresponding service can be understood.
For example, taking the "machine purchasing service" in table 1 as an example, the evaluation result of each service link and the corresponding comprehensive evaluation result can be shown in table 4 as follows:
TABLE 4
[Table 4 is provided as an image in the original publication; it lists the user's evaluation result for each service link of the machine purchasing service and the corresponding comprehensive evaluation result.]
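A minimal sketch of this comprehensive-evaluation step is given below: per-link scores (for example on a 10-point scale) are combined into an overall score by either an arithmetic or a weighted mean, as the description allows. The link names and weights are illustrative assumptions.

```python
from typing import Dict, Optional


def overall_score(link_scores: Dict[str, float],
                  weights: Optional[Dict[str, float]] = None) -> float:
    """Arithmetic mean if no weights are given, otherwise a weighted mean."""
    if not weights:
        return sum(link_scores.values()) / len(link_scores)
    total_weight = sum(weights[name] for name in link_scores)
    return sum(score * weights[name] for name, score in link_scores.items()) / total_weight


# Example with illustrative per-link scores on a 10-point scale:
# overall_score({"experience": 9, "work order": 8, "payment": 7, "signature": 9})  # -> 8.25
```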
The user evaluation process according to the embodiment of the present invention is described in detail below with reference to fig. 2.
In the embodiment of the present invention, taking the service a received by the user as an example, as shown in fig. 2, the corresponding user evaluation process may include the following steps:
step 201: monitoring whether the service A is triggered;
step 202: when monitoring that the service A is triggered, starting an audio and video acquisition device, and acquiring audio and video information of a user in the service A process in real time;
step 203: analyzing the collected audio and video information, sequentially determining a current service link of the service A, extracting the characteristics of audio and video fragments corresponding to the current service link, analyzing corresponding characteristic information by using an evaluation model of the current service link to obtain an evaluation result of a user, and storing the obtained evaluation result;
step 204: monitoring whether the current service node of the service A is the last service node or not; if yes, go to step 205, otherwise go to step 203;
step 205: analyzing the corresponding characteristic information by using an evaluation model of the last service link to obtain an evaluation result of the user;
step 206: and when the service A is finished, closing the audio and video acquisition device and stopping audio and video acquisition.
Step 207: and obtaining a comprehensive evaluation result of the user on the service A according to the evaluation result of the user on each link of the service A.
It should be noted that, in this specific example of the present invention, the user evaluation device may be a server (e.g., a backend server), and the corresponding user evaluation system may be as shown in fig. 3, including a user terminal 31 and a backend server 32.
The user terminal 31 mainly includes an audio/video acquisition device 311, configured to acquire audio/video information of the user during the service process. The audio/video acquisition device 311 includes, for example, a camera and an audio collector.
The backend server 32 may include a service classification management module 321, a video processing and feature extraction module 322, a video feature training and modeling module 323, a video acquisition and control module 324, a video library module 325, a real-time evaluation module 326, a service category and service link determination module 327, an overall evaluation module 328, and other modules 329. The service classification management module 321 is mainly used for the classification processing and storage of service categories and service links. The video processing and feature extraction module 322 is mainly used for preprocessing the audio/video information and extracting feature information such as expressions, actions and semantics. The video feature training and modeling module 323 is mainly used for performing feature training and modeling on audio/video samples according to preset rules and the like. The video acquisition and control module 324 is mainly used for acquiring the user's audio/video information during the service process and for controlling the audio/video acquisition device 311. The video library module 325 is mainly used for archiving and storing the user's audio/video information and evaluation results. The real-time evaluation module 326 is mainly used for performing real-time evaluation according to the feature information of the audio/video clip of the service link where the user is currently located. The service category and service link determination module 327 is mainly used for detecting and determining the service category and service link of the current user. The overall evaluation module 328 is mainly used for performing an overall evaluation of the current service according to preset rules. The other modules 329 may optionally include a customer service personnel operation module, a real-time evaluation display module, and the like.
The application scenarios of the embodiment of the present invention include, but are not limited to, traditional fields such as communications, finance and catering, as well as emerging fields such as self-service vending and unmanned supermarkets, so that automatic service evaluation can be provided for different fields and service quality in those fields can be improved.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a user evaluation apparatus according to an embodiment of the present invention, where the user evaluation apparatus is applied to a user evaluation device, and the user evaluation device may be a user terminal or a server. As shown in fig. 4, the user evaluation device 40 includes:
the first obtaining module 41 is configured to obtain audio and video information of a user in a target service process;
the feature extraction module 42 is configured to sequentially perform feature extraction on audio/video segments corresponding to each service link in the audio/video information according to the service links included in the target service, so as to obtain feature information of each service link;
and the analysis module 43 is configured to analyze the feature information of each service link in sequence to obtain an evaluation result of each service link by the user.
According to the user evaluation device provided by the embodiment of the invention, the audio and video information of the user in the target service process is obtained; according to the service links included in the target service, feature extraction is sequentially performed on the audio/video segment corresponding to each service link in the audio/video information to obtain the feature information of each service link; and the feature information of each service link is analyzed in sequence, so that the user's evaluation result for each service link can be obtained. Compared with the prior art, the embodiment of the invention can obtain, in a fine-grained manner, the user's local evaluation of each individual link of the service, so that the user's evaluation of the service content can be fully understood and the accuracy of the evaluation result is improved.
Optionally, the analysis module 43 is specifically configured to:
and sequentially inputting the characteristic information of each service link into a pre-trained evaluation model of each service link to obtain the evaluation result of each service link by the user.
Optionally, the user evaluation device 40 may further include:
the second acquisition module is used for acquiring a service training set;
the classification processing module is used for classifying the services in the service training set to obtain each class of service;
the dividing module is used for dividing each type of service into N service links according to a preset rule, wherein N is an integer greater than or equal to 1;
an execution module, configured to execute the following processes for each service link of each type of service respectively:
respectively acquiring a plurality of audio and video clips of a plurality of users corresponding to the service link and a plurality of evaluation results of the plurality of users on the service link;
extracting the characteristics of the audio and video clips to obtain a plurality of characteristic information of the service link;
and training to obtain an evaluation model of the service link by using the plurality of characteristic information of the service link and the plurality of evaluation results.
Optionally, the feature extraction module 42 may include:
the analysis unit is used for analyzing the audio and video information and sequentially determining each service link of the target service according to an analysis result and the service links included by the target service;
and the characteristic extraction unit is used for sequentially intercepting the audio and video fragments corresponding to each service link from the audio and video information and extracting the characteristics of the audio and video fragments corresponding to each service link to obtain the characteristic information of each service link.
Optionally, the user evaluation device 40 may further include:
and the determining module is used for determining the comprehensive evaluation result of the user on the target service according to the evaluation result of the user on each service link.
Optionally, the user evaluation device 40 may further include:
and the storage module is used for storing the audio and video fragments corresponding to each service link and the evaluation result of the user on each service link into a database so as to serve as training samples to update the evaluation model of each service link.
Optionally, the characteristic information may include, but is not limited to, at least one of:
facial expression feature information, limb action feature information and voice semantic feature information.
In addition, an embodiment of the present invention further provides a user evaluation device, which includes a memory, a processor, and a computer program that is stored in the memory and is executable on the processor, where the computer program, when executed by the processor, can implement each process of the user evaluation method embodiment, and can achieve the same technical effect, and is not described herein again to avoid repetition.
Specifically, referring to fig. 5, an embodiment of the present invention further provides a user evaluation device, which includes a bus 51, a transceiver 52, an antenna 53, a bus interface 54, a processor 55, and a memory 56.
In this embodiment of the present invention, the user evaluation device further includes: a computer program stored on the memory 56 and executable on the processor 55.
Optionally, the computer program may be adapted to perform the following steps when executed by the processor 55:
acquiring audio and video information of a user in a target service process;
according to the service links included by the target service, sequentially extracting the characteristics of the audio and video fragments corresponding to each service link in the audio and video information to obtain the characteristic information of each service link;
and analyzing the characteristic information of each service link in sequence to obtain the evaluation result of the user on each service link.
It can be understood that, in the embodiment of the present invention, when being executed by the processor 55, the computer program can implement the processes of the user evaluation method embodiment shown in fig. 1, and can achieve the same technical effects, and details are not repeated here to avoid repetition.
In fig. 5, the bus architecture (represented by bus 51) may include any number of interconnected buses and bridges, with bus 51 linking together various circuits including one or more processors represented by processor 55 and memory represented by memory 56. The bus 51 may also link together various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore will not be described further herein. A bus interface 54 provides an interface between the bus 51 and the transceiver 52. The transceiver 52 may be a single element or multiple elements, such as multiple receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. Data processed by the processor 55 is transmitted over a wireless medium via the antenna 53; the antenna 53 also receives data and transmits it to the processor 55.
The processor 55 is responsible for managing the bus 51 and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And memory 56 may be used to store data used by processor 55 in performing operations.
Alternatively, the processor 55 may be a CPU, ASIC, FPGA or CPLD.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program can implement each process of the user evaluation method embodiment shown in fig. 1, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and variations can be made without departing from the principle of the present invention, and these modifications and variations should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A user evaluation method is applied to user evaluation equipment and is characterized by comprising the following steps:
acquiring audio and video information of a user in a target service process;
according to the service links included by the target service, sequentially extracting the characteristics of the audio and video fragments corresponding to each service link in the audio and video information to obtain the characteristic information of each service link;
and analyzing the characteristic information of each service link in sequence to obtain the evaluation result of the user on each service link.
2. The method according to claim 1, wherein the analyzing the feature information of each service link in turn to obtain the evaluation result of the user on each service link comprises:
and sequentially inputting the characteristic information of each service link into a pre-trained evaluation model of each service link to obtain the evaluation result of each service link by the user.
3. The method of claim 2, wherein before the obtaining of the audio/video information of the user in the target service process, the method further comprises:
acquiring a service training set;
classifying the services in the service training set to obtain each class of service;
according to a preset rule, dividing each class of service into N service links, wherein N is an integer greater than or equal to 1;
for each service link of each type of service, respectively executing the following processes:
respectively acquiring a plurality of audio and video clips of a plurality of users corresponding to the service link and a plurality of evaluation results of the plurality of users on the service link;
extracting the characteristics of the audio and video clips to obtain a plurality of characteristic information of the service link;
and training to obtain an evaluation model of the service link by using the plurality of characteristic information of the service link and the plurality of evaluation results.
4. The method according to claim 1, wherein the sequentially performing feature extraction on the audio/video segments corresponding to each service link in the audio/video information according to the service links included in the target service to obtain the feature information of each service link comprises:
analyzing the audio and video information, and sequentially determining each service link of the target service according to an analysis result and the service links included by the target service;
and sequentially intercepting the audio and video fragments corresponding to each service link from the audio and video information, and extracting the characteristics of the audio and video fragments corresponding to each service link to obtain the characteristic information of each service link.
5. The method of claim 1, wherein after obtaining the evaluation result of the user on each service link, the method further comprises:
and determining the comprehensive evaluation result of the user on the target service according to the evaluation result of the user on each service link.
6. The method of claim 2, wherein after obtaining the evaluation result of the user on each service link, the method further comprises:
and storing the audio and video clips corresponding to each service link and the evaluation result of the user on each service link into a database so as to serve as training samples to update the evaluation model of each service link.
7. The method according to any one of claims 1 to 6, characterized in that the feature information comprises at least one of:
facial expression feature information, limb action feature information and voice semantic feature information.
8. A user evaluation device applied to a user evaluation device is characterized by comprising:
the first acquisition module is used for acquiring audio and video information of a user in a target service process;
the characteristic extraction module is used for sequentially extracting the characteristics of the audio and video fragments corresponding to each service link in the audio and video information according to the service links included by the target service to obtain the characteristic information of each service link;
and the analysis module is used for analyzing the characteristic information of each service link in sequence to obtain the evaluation result of each service link by the user.
9. A user rating device comprising a memory, a processor and a computer program stored on said memory and executable on said processor, wherein said computer program, when executed by said processor, performs the steps of the user rating method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the user rating method of any one of claims 1 to 7.
CN201910712760.8A 2019-08-02 2019-08-02 User evaluation method, device, equipment and computer readable storage medium Pending CN112308591A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910712760.8A CN112308591A (en) 2019-08-02 2019-08-02 User evaluation method, device, equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN112308591A 2021-02-02

Family

ID=74486011

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910712760.8A Pending CN112308591A (en) 2019-08-02 2019-08-02 User evaluation method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112308591A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109726655A (en) * 2018-12-19 2019-05-07 平安普惠企业管理有限公司 Customer service evaluation method, device, medium and equipment based on Emotion identification
KR20190052584A (en) * 2017-11-08 2019-05-16 주식회사 하이퍼커넥트 Terminal and server providing a video call service
CN109801105A (en) * 2019-01-17 2019-05-24 深圳壹账通智能科技有限公司 Service methods of marking, device, equipment and storage medium based on artificial intelligence
CN109858410A (en) * 2019-01-18 2019-06-07 深圳壹账通智能科技有限公司 Service evaluation method, apparatus, equipment and storage medium based on Expression analysis


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination