CN109766770A - Service quality evaluation method, device, computer equipment and storage medium
- Publication number: CN109766770A
- Application number: CN201811547937.5A
- Authority: CN (China)
- Prior art keywords: mood, emotion, assessed, score, image
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Image Analysis (AREA)
Abstract
The present application relates to biometric recognition in artificial intelligence, in particular to emotion recognition based on micro-expression recognition, and provides a service quality evaluation method, device, computer equipment and storage medium. The method includes: obtaining a video to be assessed, and extracting image frames from the video to be assessed as images to be assessed; receiving an identity identifier, and selecting a facial image to be analyzed from the images to be assessed according to the identity identifier; identifying current emotion information of the facial image to be analyzed; counting the number of frames of the images to be assessed that correspond to the current emotion information; obtaining target emotion information according to the current emotion information and the frame count; and querying a target emotion score corresponding to the target emotion information and calculating a service quality score according to the target emotion score. Using this method, the efficiency of service evaluation can be improved.
Description
Technical field
The present application relates to the field of computer technology, and in particular to a service quality evaluation method, device, computer equipment and storage medium.
Background art
With the development of the service industry, more and more enterprises need to evaluate the quality of the service that their personnel provide to clients; for example, in the sale of insurance products, the performance of the salesperson needs to be evaluated.
Traditionally, the salesperson is evaluated by manually analyzing video recordings of the service process, so the service provided by the salesperson is evaluated by hand, which makes the evaluation inefficient.
Summary of the invention
In view of the above technical problem, it is necessary to provide a service quality evaluation method, device, computer equipment and storage medium that can improve evaluation efficiency.
A service quality evaluation method, the method comprising:
obtaining a video to be assessed, and extracting image frames from the video to be assessed as images to be assessed;
receiving an identity identifier, and selecting a facial image to be analyzed from the images to be assessed according to the identity identifier;
identifying current emotion information of the facial image to be analyzed;
counting the number of frames of the images to be assessed that correspond to the current emotion information;
obtaining target emotion information according to the current emotion information and the frame count;
querying a target emotion score corresponding to the target emotion information, and calculating a service quality score according to the target emotion score.
In one embodiment, identifying the current emotion information of the facial image to be analyzed comprises:
receiving, for the facial image to be analyzed, the emotion probability that its current emotion is each standard emotion;
sorting the obtained emotion probabilities, and extracting, according to the sorted emotion probabilities, a number of standard emotions corresponding to a preset quantity;
judging whether the emotion types corresponding to the extracted standard emotions are the same;
when the emotion types corresponding to the extracted standard emotions are different, obtaining the emotion type corresponding to the standard emotion with the largest emotion probability as the current emotion information.
In one embodiment, after judging whether the emotion types corresponding to the extracted standard emotions are the same, the method comprises:
when the extracted standard emotions correspond to the same emotion type, querying the standard emotions with the same emotion type;
calculating a target probability from the emotion probabilities corresponding to the standard emotions with the same emotion type;
obtaining the maximum value among the target probability and the emotion probabilities of the standard emotions with different emotion types;
taking the emotion type corresponding to the maximum value as the current emotion information.
In one embodiment, calculating the service quality score according to the target emotion score comprises:
querying the scoring weight of the facial image to be analyzed corresponding to the target emotion score;
calculating the service quality score according to the target emotion score and the scoring weight.
In one embodiment, after querying the target emotion score corresponding to the target emotion information and calculating the service quality score according to the target emotion score, the method comprises:
obtaining score ranges, and querying the score range corresponding to the service quality score;
querying the service grade associated with the score range corresponding to the service quality score;
generating a service quality report according to the service grade.
In one embodiment, the method further comprises:
extracting, from the video to be assessed, the voice information corresponding to the facial image to be analyzed;
obtaining an identification keyword, detecting whether the voice information contains the identification keyword, and obtaining a detection result;
adding the detection result to the service quality report.
A service quality evaluation device, the device comprising:
a first obtaining module, configured to obtain a video to be assessed, and extract image frames from the video to be assessed as images to be assessed;
a receiving module, configured to receive an identity identifier, and select a facial image to be analyzed from the images to be assessed according to the identity identifier;
an identification module, configured to identify the current emotion information of the facial image to be analyzed;
a statistics module, configured to count the number of frames of the images to be assessed that correspond to the current emotion information;
a second obtaining module, configured to obtain target emotion information according to the current emotion information and the frame count;
a calculation module, configured to query a target emotion score corresponding to the target emotion information, and calculate a service quality score according to the target emotion score.
In one embodiment, the identification module comprises:
a receiving unit, configured to receive, for the facial image to be analyzed, the emotion probability that its current emotion is each standard emotion;
a sorting unit, configured to sort the obtained emotion probabilities, and extract, according to the sorted emotion probabilities, a number of standard emotions corresponding to a preset quantity;
a judging unit, configured to judge whether the emotion types corresponding to the extracted standard emotions are the same;
a first obtaining unit, configured to, when the emotion types corresponding to the extracted standard emotions are different, obtain the emotion type corresponding to the standard emotion with the largest emotion probability as the current emotion information.
A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the above method when executing the computer program.
A computer-readable storage medium on which a computer program is stored, wherein the computer program implements the steps of the above method when executed by a processor.
With the above service quality evaluation method, device, computer equipment and storage medium, the recorded service process does not need to be analyzed manually to obtain a service quality evaluation. Instead, the video to be assessed is obtained, image frames are extracted from the video to be assessed as images to be assessed, an identity identifier is received, a facial image to be analyzed is selected from the images to be assessed according to the identity identifier, the current emotion information of the facial image to be analyzed is identified, the number of frames of images to be assessed corresponding to the current emotion information is counted, target emotion information is obtained according to the current emotion information and the frame count, the target emotion score corresponding to the target emotion information is queried, and the service quality score is calculated according to the target emotion score, so that the efficiency of service evaluation can be improved.
Brief description of the drawings
Fig. 1 is an application scenario diagram of the service quality evaluation method in one embodiment;
Fig. 2 is a schematic flowchart of the service quality evaluation method in one embodiment;
Fig. 3 is a schematic flowchart of the current emotion information identification step in one embodiment;
Fig. 4 is a structural block diagram of the service quality evaluation device in one embodiment;
Fig. 5 is an internal structure diagram of the computer equipment in one embodiment.
Detailed description of the embodiments
In order to make the objects, technical solutions and advantages of the present application clearer, the application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the application and are not intended to limit it.
The service quality evaluation method provided by the present application can be applied in the application environment shown in Fig. 1, in which a terminal 102 communicates with a server 104 over a network. The server 104 obtains the video to be assessed captured by the terminal 102 and extracts image frames from the video to be assessed as images to be assessed; the server 104 receives an identity identifier, selects a facial image to be analyzed from the images to be assessed according to the identity identifier, and identifies the current emotion information of the facial image to be analyzed; the server 104 then counts the number of frames of images to be assessed corresponding to the current emotion information and obtains target emotion information according to the current emotion information and the frame count; finally, the server 104 queries the target emotion score corresponding to the target emotion information and calculates a service quality score according to the target emotion score. The terminal 102 may be, but is not limited to, a personal computer, a laptop, a smartphone, a tablet computer or a portable wearable device, and the server 104 may be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in Fig. 2, a service quality evaluation method is provided. The method is described below as applied to the server in Fig. 1, and comprises the following steps:
S202: obtain a video to be assessed, and extract image frames from the video to be assessed as images to be assessed.
Specifically, the video to be assessed is the video of the whole service process collected by the terminal during the service, and it contains both the service personnel and the client. The images to be assessed are the different image frames contained in the video to be assessed. During the service, the terminal continuously captures images of the service process to obtain a service video, which is taken as the video to be assessed; the terminal then sends the video to be assessed to the server, and the server receives it and extracts the images to be assessed from it. The server may extract every frame, or it may extract frames at preset intervals; for example, every 5 minutes it may extract all image frames within a 3-minute window as images to be assessed, i.e., all images from the start of the video to the third minute are extracted, then after a 5-minute interval all images from the 8th minute to the 11th minute are extracted, and so on. The above interval and extraction window can be configured for different scenarios.
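As an illustration of the interval-based extraction described above, the following is a minimal sketch in Python; the helper name `extract_frames` and the use of OpenCV for decoding are assumptions made for illustration and are not part of the claimed method.

```python
import cv2  # assumed dependency for reading the service video

def extract_frames(video_path, window_s=180, gap_s=300):
    """Keep `window_s` seconds of frames, skip `gap_s` seconds, and repeat
    until the video ends (the 3-minute window / 5-minute gap of the example)."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25   # fall back to 25 fps if metadata is missing
    period = window_s + gap_s
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        t = index / fps                      # timestamp of this frame in seconds
        if (t % period) < window_s:          # inside the current extraction window
            frames.append(frame)
        index += 1
    cap.release()
    return frames                            # the "images to be assessed"
```

Both the window and the gap can be configured per scenario, exactly as stated above.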
S204: receive an identity identifier, and select a facial image to be analyzed from the images to be assessed according to the identity identifier.
Specifically, identity identifiers are the identifiers of the different persons appearing in the images to be assessed, and an identity identifier may be a name, an ID card number, or the like. When the server has extracted the images to be assessed, it receives the input identity identifier and finds the facial image to be analyzed corresponding to that identifier. For example, an input box may be displayed on the display interface corresponding to the server; the user enters, through the input box, the identity identifiers of the different persons appearing in the images to be assessed, and the server receives the different input identifiers and compares each of them one by one with pre-stored identity identifiers. When a comparison succeeds, the server obtains the person image associated with the matched identity identifier and compares that person image with the different faces contained in the images to be assessed; a face that matches successfully is taken as the facial image to be analyzed corresponding to that identity identifier. For example, when two persons appear in the images to be assessed, the user enters a first ID card number and a second ID card number. The first ID card number is compared one by one with the pre-stored ID card numbers, and when the comparison succeeds the facial image associated with the matched ID card number is obtained. The server then identifies the different facial images contained in the images to be assessed, for example a first facial image and a second facial image, and partitions the associated facial image, the first facial image and the second facial image into facial regions according to a preset face-region division rule. The server obtains the features to be compared of the corresponding regions of the partitioned associated facial image, the partitioned first facial image and the partitioned second facial image, compares the features of the associated facial image with the features of the first facial image to obtain a first comparison result, and compares the features of the associated facial image with the features of the second facial image to obtain a second comparison result. A first face similarity is obtained from the first comparison result and a second face similarity is obtained from the second comparison result, and the facial image whose similarity exceeds a threshold is taken as the successfully matched facial image, i.e., the face in the images to be assessed that belongs to the person associated with the first ID card number. When only two persons appear in the video to be assessed, the other person is then the person associated with the second ID card number; alternatively, the person corresponding to the second ID card number can be identified in the same way. In this manner the persons with different identities in the video to be assessed can be distinguished; that is, when the video to be assessed is a service video, the roles of the different persons appearing in it, namely the salesperson and the client, are distinguished.
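The region-wise comparison can be sketched as follows; the feature extractor is assumed to run upstream, and the cosine-similarity measure and the 0.8 threshold are illustrative assumptions, since the embodiment only requires that a face similarity be computed per region and compared against a threshold.

```python
import numpy as np

def face_similarity(ref_regions, cand_regions):
    """Average cosine similarity over corresponding face regions.
    Each argument is a list of feature vectors, one per region of the
    preset face-region division rule, in the same order."""
    sims = []
    for ref, cand in zip(ref_regions, cand_regions):
        ref, cand = np.asarray(ref, float), np.asarray(cand, float)
        sims.append(ref @ cand / (np.linalg.norm(ref) * np.linalg.norm(cand)))
    return float(np.mean(sims))

def match_identity(associated_regions, candidate_faces, threshold=0.8):
    """Return the index of the candidate face whose similarity to the
    associated (pre-stored) face exceeds the threshold, or None."""
    best_idx, best_sim = None, threshold
    for idx, cand_regions in enumerate(candidate_faces):
        sim = face_similarity(associated_regions, cand_regions)
        if sim > best_sim:
            best_idx, best_sim = idx, sim
    return best_idx
```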
S206: identify the current emotion information of the facial image to be analyzed.
Specifically, the current emotion information is the current emotion of each of the different persons appearing in each frame of the images to be assessed. When the server recognizes the facial images to be analyzed in an image to be assessed, it obtains the micro-expression information corresponding to each of the different facial images to be analyzed, queries the preset emotion information corresponding to the micro-expression information, and takes the preset emotion information as the current emotion information. For example, when the server recognizes the different facial images to be analyzed in the images to be assessed, such as the salesperson's facial image and the client's facial image, it first obtains the micro-expression information corresponding to the salesperson's facial image in each frame to be assessed, obtains the corresponding preset emotion information according to the micro-expression information of each frame, and takes that preset emotion information as the current emotion information of that frame; the client's facial image is analyzed for current emotion information in the same way.
S208: count the number of frames of the images to be assessed that correspond to the current emotion information.
Specifically, when the server has obtained the current emotion information corresponding to each frame of the images to be assessed, it counts the number of frames corresponding to each piece of current emotion information. Taking the salesperson's facial image as an example, suppose the server obtains 50 frames of images to be assessed in total, and the current emotion information obtained for the salesperson's facial image includes happy, disappointed and bored; the server then separately counts the number of the 50 frames in which a happy image appears, the number in which a disappointed image appears, and the number in which a bored image appears.
S210: obtain target emotion information according to the current emotion information and the frame count.
Specifically, the target emotion information is the overall emotion information of a facial image to be analyzed across the images to be assessed. When the server has obtained the current emotion information corresponding to each frame of the images to be assessed, that is, the current emotion information of the facial image to be analyzed in each frame, together with the frame count corresponding to each piece of current emotion information, it obtains the corresponding target emotion information. It may query the frame counts corresponding to each piece of current emotion information and select the current emotion information with the largest frame count as the target emotion information; alternatively, the server may obtain the total number of frames of the images to be assessed, query the frame count corresponding to each piece of current emotion information, calculate the ratio of that frame count to the total number of frames, and take the current emotion information whose ratio exceeds a threshold as the target emotion information. For example, following the above steps, 50 frames of images to be assessed contain the salesperson's facial image, and the current emotion information obtained for the salesperson's facial image includes happy, disappointed and bored; counting yields 30 frames for happy, 5 frames for disappointed and 15 frames for bored. Since happy occupies the most frames, the overall emotion information of the salesperson's facial image in the 50 frames, i.e., the target emotion information, is happy. Alternatively, the ratios of the frame counts to the total number of frames may be computed: 60% for happy, 10% for disappointed and 30% for bored; the server obtains a threshold of 50%, and only the ratio for happy exceeds the threshold, so happy is taken as the overall emotion information of the images to be assessed, i.e., the target emotion information. The target emotion information of the other facial images to be analyzed contained in the images to be assessed, such as the target emotion information corresponding to the client's facial image, can be calculated in the same way.
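Both selection rules described above (largest frame count, or ratio above a threshold) can be sketched as follows; the function names and the 0.5 default threshold are illustrative assumptions consistent with the example.

```python
from collections import Counter

def target_emotion_by_count(per_frame_emotions):
    """Pick the current emotion that occupies the most frames."""
    counts = Counter(per_frame_emotions)   # e.g. {"happy": 30, "bored": 15, "disappointed": 5}
    emotion, _ = counts.most_common(1)[0]
    return emotion

def target_emotion_by_ratio(per_frame_emotions, threshold=0.5):
    """Pick the current emotion whose share of the total frames exceeds the threshold."""
    counts = Counter(per_frame_emotions)
    total = len(per_frame_emotions)
    for emotion, n in counts.items():
        if n / total > threshold:
            return emotion
    return None  # no single emotion dominates
```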
S212: query the target emotion score corresponding to the target emotion information, and calculate a service quality score according to the target emotion score.
Specifically, the target emotion score is the score corresponding to each preset emotion; for example, the more positive the emotion, the higher the score, and the more negative the emotion, the lower the score. The service quality score is the score of the service quality obtained from the target emotion scores of the different facial images to be analyzed, and can be used to evaluate how good or poor the service quality is. When the server has obtained the target emotion information corresponding to the different facial images to be analyzed, it queries the target emotion score according to the target emotion information and takes the queried target emotion score as the target emotion score corresponding to that facial image to be analyzed; it can then obtain the corresponding service quality score calculation rule for the different target emotion scores and calculate the service quality score. The calculation rule may, for example, add the different target emotion scores together, or add them together with different weights. For example, the images to be assessed contain different facial images to be analyzed, such as the salesperson's facial image and the client's facial image; using the above method, the target emotion information obtained for the salesperson's facial image is happy and the target emotion information corresponding to the client is disappointed. The target emotion score corresponding to happy is queried as 10 points and the target emotion score corresponding to disappointed as 5 points; if the calculation rule obtained is to add the two, the service quality score is 15 points.
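A minimal sketch of the score lookup and the simple additive rule from the example above; the happy and disappointed values come from the example, and the function and table names are assumptions.

```python
# Score table: the happy and disappointed values come from the example above;
# other emotions would be assigned scores in the same spirit (more positive -> higher).
TARGET_EMOTION_SCORES = {"happy": 10, "disappointed": 5}

def service_quality_score(target_emotions):
    """Simple additive rule: sum the target emotion scores of all analyzed faces,
    e.g. {"salesperson": "happy", "client": "disappointed"} -> 10 + 5 = 15."""
    return sum(TARGET_EMOTION_SCORES[emotion] for emotion in target_emotions.values())
```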
In this embodiment, the server obtains the video to be assessed and extracts the image frames from the video to be assessed as images to be assessed, receives an identity identifier, selects the facial image to be analyzed from the images to be assessed according to the identity identifier, identifies the current emotion information of the facial image to be analyzed, counts the number of frames corresponding to the current emotion information, obtains target emotion information according to the current emotion information and the frame count, queries the target emotion score corresponding to the target emotion information, and calculates the service quality score according to the target emotion score. The corresponding service videos do not need to be reviewed and evaluated one by one by hand, so the efficiency of service evaluation can be improved; moreover, inconsistent evaluations caused by the subjectivity of manual evaluation are avoided, which improves the accuracy of the evaluation.
In one embodiment, referring to Fig. 3, a schematic flowchart of the current emotion information identification step is provided. The current emotion information identification step, i.e., identifying the current emotion information of the facial image to be analyzed, comprises: receiving, for the facial image to be analyzed, the emotion probability that its current emotion is each standard emotion; sorting the obtained emotion probabilities, and extracting, according to the sorted emotion probabilities, a number of standard emotions corresponding to a preset quantity; judging whether the emotion types corresponding to the extracted standard emotions are the same; and when the emotion types corresponding to the extracted standard emotions are different, obtaining the emotion type corresponding to the standard emotion with the largest emotion probability as the current emotion information.
Specifically, a standard emotion is an emotion corresponding to a preset micro-expression; the preset micro-expressions can be micro-expressions of different categories, for example 54 kinds of micro-expressions. The emotion probability is the probability of each preset micro-expression obtained from a trained micro-expression recognition model, and the larger the emotion probability, the more likely the face shows that micro-expression. An emotion type is a partition into which the emotions corresponding to the different micro-expressions are classified; similar emotions can be grouped into the same emotion type, that is, similar emotions among the emotions corresponding to the 54 preset micro-expressions are treated as the same emotion type. For example, for the salesperson's facial image the emotion types may include angry, bored, negative, neutral, smiling and happy, each emotion type containing similar emotions, while for the client's facial image the emotion types may include resistant, weary, bored, indifferent, interested and satisfied.
Specifically, the server receives, for each frame of the images to be assessed corresponding to the facial image to be analyzed, the emotion probability of the current emotion being each standard emotion, and sorts the received emotion probabilities, for example in descending order. The server then obtains a preset quantity and extracts, from the sorted emotion probabilities, the standard emotions whose number corresponds to the preset quantity, queries the emotion types corresponding to the extracted standard emotions, and judges whether those emotion types are the same. When the emotion types corresponding to the extracted standard emotions are different, the server queries the emotion type corresponding to the standard emotion with the largest emotion probability and takes that emotion type as the current emotion information. For example, when the facial image to be analyzed received by the server is the salesperson's facial image, the server receives, for each frame of the images to be assessed, the probability of the current emotion being each standard emotion; that is, it may first query the probabilities of the various standard emotions corresponding to the first frame to be assessed, namely the emotion probabilities of the 54 standard emotions for the first frame, sort the obtained emotion probabilities, for example in descending order, obtain a preset quantity, say 3, and extract the standard emotions whose sorted emotion probabilities rank in the top three. It then queries the emotion types corresponding to the top-three standard emotions, for example neutral, smiling and happy, which are different emotion types, and so queries the emotion type corresponding to the standard emotion with the largest emotion probability, for example happy; happy is then the current emotion information of the salesperson's facial image in the first frame to be assessed. In the same way, the current emotion information of the salesperson's facial image in the other frames to be assessed can be identified, and the current emotion information corresponding to the client's facial image in all the frames to be assessed can also be identified. It should be noted that, in this embodiment, the emotion probabilities are obtained by a corresponding emotion recognition server, which acquires the corresponding facial image, collects the preset expression features on the facial image, inputs the expression features into the trained micro-expression recognition model for recognition, and obtains the probability that the current expression is each micro-expression, i.e., the emotion probabilities.
In this embodiment, when analyzing the current emotion information of the facial image to be analyzed, the emotion probability of the current emotion being each standard emotion can be queried, the emotion probabilities can be sorted directly, the standard emotions whose number corresponds to the preset quantity can be extracted according to the sorted emotion probabilities, and whether the emotion types corresponding to the extracted standard emotions are the same can be judged; when the emotion types are different, the emotion type corresponding to the standard emotion with the largest emotion probability is obtained as the current emotion information. No manual analysis is required, which improves the efficiency of obtaining the current emotion information, and the inaccuracy caused by subjective analysis is avoided, which improves the accuracy of the emotion analysis.
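A minimal sketch of the top-3 selection when the emotion types differ; the micro-expression names and the emotion-type mapping are illustrative assumptions, since the actual 54 standard emotions and their types are not listed in this embodiment.

```python
# Illustrative mapping from standard emotions (micro-expressions) to emotion types.
EMOTION_TYPE = {"grin": "happy", "laugh": "happy", "slight_smile": "smiling", "flat": "neutral"}

def current_emotion(probabilities, k=3):
    """`probabilities`: dict of standard emotion -> emotion probability for one frame.
    Take the top-k standard emotions; if their emotion types all differ, return the
    type of the most probable one (the same-type case is handled in the next sketch)."""
    top = sorted(probabilities.items(), key=lambda kv: kv[1], reverse=True)[:k]
    types = [EMOTION_TYPE[name] for name, _ in top]
    if len(set(types)) == len(types):      # all emotion types are different
        return types[0]                    # type of the standard emotion with the largest probability
    return None                            # same-type case: see the aggregation sketch below
```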
In one embodiment, still referring to Fig. 3, after judging whether the emotion types corresponding to the extracted standard emotions are the same, the method comprises: when the extracted standard emotions correspond to the same emotion type, querying the standard emotions with the same emotion type; calculating a target probability from the emotion probabilities corresponding to the standard emotions with the same emotion type; obtaining the maximum value among the target probability and the emotion probabilities of the standard emotions with different emotion types; and taking the emotion type corresponding to the maximum value as the current emotion information.
Specifically, the target probability is the probability associated with the standard emotions that correspond to the same emotion type; it can be calculated from the emotion probabilities of the different standard emotions within the same emotion type, for example by adding up the emotion probabilities of the different standard emotions that belong to that emotion type. When the server has extracted the standard emotions and queried their corresponding emotion types, and some standard emotions correspond to the same emotion type, then, for the sake of accuracy, it avoids directly choosing the emotion type of the standard emotion with the largest emotion probability as the current emotion information. Instead, it queries the emotion probabilities of the various standard emotions that correspond to the same emotion type, sums those emotion probabilities to obtain the target probability, compares the target probability with the emotion probabilities of the standard emotions that correspond to different emotion types, and takes the emotion type corresponding to the larger value as the current emotion information. That is, as in the above steps, when the facial image to be analyzed received by the server is the salesperson's facial image, the server receives, for each frame to be assessed, the probability of the current emotion being each standard emotion; it may first query the emotion probabilities of the 54 standard emotions for the first frame to be assessed, sort the obtained probabilities, for example in descending order, obtain a preset quantity, say 3, and extract the standard emotions ranking in the top three. It then finds that some of the emotion types corresponding to the top-three standard emotions are the same: for example, the first-ranked standard emotion corresponds to happy, while the second- and third-ranked standard emotions correspond to the same emotion type, namely neutral. The emotion probabilities of the standard emotions belonging to the neutral emotion type are then combined into the target probability, the target probability is compared with the emotion probability of the standard emotion belonging to the happy emotion type, and the emotion type corresponding to the larger of the two is selected as the current emotion information.
In this embodiment, when the extracted standard emotions correspond to the same emotion type, directly taking the emotion type of the standard emotion with the largest emotion probability as the current emotion information, which would be inaccurate, is avoided; instead, the standard emotions with the same emotion type are queried, the target probability is calculated from the emotion probabilities corresponding to the standard emotions with the same emotion type, the maximum value among the target probability and the emotion probabilities of the standard emotions with different emotion types is obtained, and the emotion type corresponding to that maximum value is taken as the current emotion information, which ensures that the current emotion information obtained is accurate.
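Following the previous sketch, the same-type case can be handled by summing the probabilities per emotion type before comparing; the names are again assumptions for illustration.

```python
from collections import defaultdict

def current_emotion_same_type(top, emotion_type):
    """`top`: list of (standard_emotion, probability) pairs already ranked and extracted;
    `emotion_type`: mapping from standard emotion to emotion type.
    Sum the probabilities of standard emotions sharing an emotion type (the target
    probability) and return the emotion type with the largest resulting value."""
    totals = defaultdict(float)
    for name, prob in top:
        totals[emotion_type[name]] += prob   # same-type probabilities accumulate
    return max(totals, key=totals.get)
```

For instance, with top = [("grin", 0.40), ("flat", 0.25), ("bland", 0.22)] where the last two both map to neutral, the neutral total of 0.47 exceeds 0.40, so neutral is returned rather than the single most probable standard emotion, matching the comparison described in this embodiment.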
In one embodiment, calculating the service quality score according to the target emotion score comprises: querying the scoring weight of the facial image to be analyzed corresponding to the target emotion score; and calculating the service quality score according to the target emotion score and the scoring weight.
Specifically, when the server has queried the target emotion score, it can calculate the corresponding service quality score according to the target emotion score. When calculating the service quality score, since different facial images to be analyzed, i.e., different roles, are involved in the video to be assessed, the emotions corresponding to different roles carry different proportions in the service quality score; for example, when the role is the client, its share in the service quality score may be larger. Specifically, when the server obtains the target emotion scores, it queries the scoring weight of the facial image to be analyzed corresponding to each target emotion score, calculates the product of each scoring weight and the corresponding target emotion score, and adds the different products to obtain the service quality score. For example, when the server has obtained the target emotion scores and the facial images to be analyzed corresponding to the target emotion scores are the salesperson's facial image and the client's facial image, it obtains the scoring weight corresponding to the salesperson's facial image and the scoring weight corresponding to the client's facial image respectively, calculates a first product of the salesperson's scoring weight and the target emotion score corresponding to the salesperson's facial image, calculates a second product of the client's scoring weight and the target emotion score corresponding to the client's facial image, and sums the first product and the second product to obtain the service quality score.
In this embodiment, the service quality score can be calculated with different scoring weights, which makes the calculated service quality score, and therefore the service quality evaluation, more accurate.
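A minimal sketch of the weighted variant; the example weights, with the client weighted more heavily than the salesperson, are illustrative assumptions consistent with the embodiment.

```python
def weighted_service_quality_score(target_scores, weights):
    """Weighted sum of the target emotion scores of the analyzed faces.
    `target_scores`: e.g. {"salesperson": 10, "client": 5}
    `weights`:       e.g. {"salesperson": 0.4, "client": 0.6}  # client counts more
    """
    return sum(weights[role] * score for role, score in target_scores.items())

# 0.4 * 10 + 0.6 * 5 = 7.0
print(weighted_service_quality_score({"salesperson": 10, "client": 5},
                                     {"salesperson": 0.4, "client": 0.6}))
```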
In one embodiment, after querying the target emotion score corresponding to the target emotion information and calculating the service quality score according to the target emotion score, the method comprises: obtaining score ranges and querying the score range corresponding to the service quality score; querying the service grade associated with the score range corresponding to the service quality score; and generating a service quality report according to the service grade.
Specifically, a score range is a range used to evaluate the service quality; that is, different ranges into which the service quality score falls correspond to different service quality ratings. The service grade is a grade used to evaluate different levels of service quality, and may be such that the higher the service quality, the higher the service grade. The service quality report is a specific report containing the service quality evaluation, and may include basic information related to the service as well as the service grade information corresponding to the service. Specifically, the server obtains the different score ranges; when it has calculated the service quality score, it queries the score range corresponding to the service quality score, and when the corresponding score range is obtained, it queries the service grade associated with that score range and generates the corresponding service quality report according to the service grade. For example, the score ranges obtained are 0-1 points, 1-3 points, 3-6 points, 6-9 points and 9-10 points, and the associated service grades are: 0-1 points, very poor service; 1-3 points, poor service; 3-6 points, average service; 6-9 points, good service; and 9-10 points, excellent service. The server queries the score range corresponding to the above service quality score, obtains the service grade, and generates the service quality evaluation report according to the obtained service grade; the service quality evaluation report may include the service item, the service time, the service quality and so on.
In this embodiment, the server can obtain the corresponding score ranges, query the score range corresponding to the service quality score, obtain the corresponding service grade, and generate the service quality report according to the service grade, so that the service quality evaluation is more intuitive.
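The range-to-grade lookup from the example above can be sketched as follows; the report field names are illustrative assumptions.

```python
# (upper bound, grade) pairs taken from the example ranges above.
GRADE_RANGES = [(1, "very poor"), (3, "poor"), (6, "average"), (9, "good"), (10, "excellent")]

def service_grade(score):
    """Map a service quality score (0-10) to the service grade of its score range."""
    for upper, grade in GRADE_RANGES:
        if score <= upper:
            return grade
    return "excellent"

def service_quality_report(score, item, time):
    """Assemble a simple service quality report; the field names are assumptions."""
    return {"service item": item, "service time": time,
            "service quality score": score, "service grade": service_grade(score)}
```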
In one of the embodiments, the service quality evaluation method further comprises: extracting, from the video to be assessed, the voice information corresponding to the facial image to be analyzed; obtaining an identification keyword, detecting whether the voice information contains the identification keyword, and obtaining a detection result; and adding the detection result to the service quality report.
Specifically, an identification keyword is a keyword, related to the service regulations, that should appear during the service process contained in the video to be assessed. When the server obtains the video to be assessed, it can also evaluate the service quality according to whether the prescribed service phrases appear in the speech. That is, when the server obtains the video to be assessed, it can obtain the voice information corresponding to the facial image to be analyzed, for example by extracting it through voiceprint recognition. When the voice information is obtained, it is segmented into different word-segmentation fields; during segmentation, several preset segmentation logics can be obtained, the voice information is split into different segmentation sequences according to the preset segmentation logics, the splitting accuracy of each segmentation sequence is calculated, and the segmentation sequence with the highest accuracy is taken as the word-segmentation fields. The server then obtains the different identification keywords, i.e., the keywords that must appear during the service process, queries whether the voice information contains the identification keywords, and generates the corresponding detection result: it checks whether the word-segmentation fields match the identification keywords, and when the match succeeds the voice information contains the identification keywords; the detection result is then added to the service quality evaluation report. For example, the server detects whether the prescribed keywords appear while the salesperson serves the client: according to voiceprint recognition, the voice information of the salesperson in the video to be assessed is extracted and then split into word-segmentation fields; the server obtains the corresponding service keywords, i.e., the identification keywords, such as "hello", "employee number" and "thank you", compares the word-segmentation fields with the identification keywords, and when the comparison succeeds the voice information contains the identification keywords, so the detection result "the service speech meets the requirements" is generated. Conversely, when the comparison of the word-segmentation fields with the identification keywords fails, the voice information does not contain the identification keywords, and the detection result "the service speech does not meet the requirements" is generated.
In this embodiment, the server can detect whether the voice information in the video to be assessed contains the identification keywords, so that the service quality can also be evaluated from the dimension of the voice information, i.e., the service quality is evaluated from different dimensions, which ensures the accuracy of the service quality evaluation.
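A minimal sketch of the keyword check on the segmented transcript; the word segmentation itself is assumed to be done upstream, and the example keywords follow the embodiment above.

```python
def detect_keywords(segmented_fields, keywords=("hello", "employee number", "thank you")):
    """Check the word-segmentation fields of the salesperson's voice information
    against the identification keywords and return a detection result.
    Requiring every keyword to appear is an assumption; the embodiment only
    states that a successful match yields a positive result."""
    missing = [k for k in keywords if k not in set(segmented_fields)]
    if not missing:
        return "the service speech meets the requirements"
    return "the service speech does not meet the requirements (missing: %s)" % ", ".join(missing)
```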
It should be understood that although the steps in the flowcharts of Figs. 2-3 are shown in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in Figs. 2-3 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times, and the execution order of these sub-steps or stages is not necessarily sequential; they may be executed in turn or alternately with at least part of other steps or of the sub-steps or stages of other steps.
In one embodiment, as shown in Fig. 4, a service quality evaluation device 400 is provided, comprising: a first obtaining module 410, a receiving module 420, an identification module 430, a statistics module 440, a second obtaining module 450 and a calculation module 460, wherein:
the first obtaining module 410 is configured to obtain a video to be assessed, and extract image frames from the video to be assessed as images to be assessed;
the receiving module 420 is configured to receive an identity identifier, and select a facial image to be analyzed from the images to be assessed according to the identity identifier;
the identification module 430 is configured to identify the current emotion information of the facial image to be analyzed;
the statistics module 440 is configured to count the number of frames of the images to be assessed that correspond to the current emotion information;
the second obtaining module 450 is configured to obtain target emotion information according to the current emotion information and the frame count;
the calculation module 460 is configured to query the target emotion score corresponding to the target emotion information, and calculate a service quality score according to the target emotion score.
In one embodiment, the identification module 430 comprises:
a receiving unit, configured to receive, for the facial image to be analyzed, the emotion probability that its current emotion is each standard emotion;
a sorting unit, configured to sort the obtained emotion probabilities, and extract, according to the sorted emotion probabilities, a number of standard emotions corresponding to a preset quantity;
a judging unit, configured to judge whether the emotion types corresponding to the extracted standard emotions are the same;
a first obtaining unit, configured to, when the emotion types corresponding to the extracted standard emotions are different, obtain the emotion type corresponding to the standard emotion with the largest emotion probability as the current emotion information.
In one embodiment, the identification module 430 comprises:
a query unit, configured to, when the extracted standard emotions correspond to the same emotion type, query the standard emotions with the same emotion type;
a calculation unit, configured to calculate a target probability from the emotion probabilities corresponding to the standard emotions with the same emotion type;
a maximum value obtaining unit, configured to obtain the maximum value among the target probability and the emotion probabilities of the standard emotions with different emotion types;
a second obtaining unit, configured to take the emotion type corresponding to the maximum value as the current emotion information.
In one embodiment, the calculation module 460 comprises:
a scoring weight query unit, configured to query the scoring weight of the facial image to be analyzed corresponding to the target emotion score;
a service quality score calculation unit, configured to calculate the service quality score according to the target emotion score and the scoring weight.
In one embodiment, the service quality evaluation device 400 comprises:
a score range obtaining module, configured to obtain score ranges and query the score range corresponding to the service quality score;
a service grade query module, configured to query the service grade associated with the score range corresponding to the service quality score;
a generation module, configured to generate a service quality report according to the service grade.
In one embodiment, the service quality evaluation device 400 comprises:
a voice information extraction module, configured to extract, from the video to be assessed, the voice information corresponding to the facial image to be analyzed;
a detection module, configured to obtain an identification keyword, detect whether the voice information contains the identification keyword, and obtain a detection result;
an adding module, configured to add the detection result to the service quality report.
For specific limitations of the service quality evaluation device, reference may be made to the limitations of the service quality evaluation method above, which are not repeated here. Each module in the above service quality evaluation device can be implemented in whole or in part by software, hardware or a combination thereof. The above modules can be embedded in or independent of the processor in the computer equipment in the form of hardware, or stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in Fig. 5. The computer device includes a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer device is used to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store service quality evaluation data. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a service quality evaluation method.
Those skilled in the art can understand that the structure shown in Fig. 5 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution is applied; a specific computer device may include more or fewer components than those shown in the figure, or combine certain components, or have a different component arrangement.
In one embodiment, a computer device is provided, comprising a memory and a processor. The memory stores a computer program, and the processor implements the following steps when executing the computer program: obtaining a video to be assessed, and extracting image frames from the video to be assessed as images to be assessed; receiving an identity identifier, and selecting a facial image to be analyzed from the images to be assessed according to the identity identifier; identifying current emotion information of the facial image to be analyzed; counting the number of frames of the images to be assessed that correspond to the current emotion information; obtaining target emotion information according to the current emotion information and the frame count; querying the target emotion score corresponding to the target emotion information, and calculating a service quality score according to the target emotion score.
In one embodiment, when the processor executes the computer program, identifying the current emotion information of the facial image to be analyzed comprises: receiving, for the facial image to be analyzed, the emotion probability that its current emotion is each standard emotion; sorting the obtained emotion probabilities, and extracting, according to the sorted emotion probabilities, a number of standard emotions corresponding to a preset quantity; judging whether the emotion types corresponding to the extracted standard emotions are the same; and when the emotion types corresponding to the extracted standard emotions are different, obtaining the emotion type corresponding to the standard emotion with the largest emotion probability as the current emotion information.
In one embodiment, when the processor executes the computer program, after judging whether the emotion types corresponding to the extracted standard emotions are the same, the following is implemented: when the extracted standard emotions correspond to the same emotion type, querying the standard emotions with the same emotion type; calculating a target probability from the emotion probabilities corresponding to the standard emotions with the same emotion type; obtaining the maximum value among the target probability and the emotion probabilities of the standard emotions with different emotion types; and taking the emotion type corresponding to the maximum value as the current emotion information.
In one embodiment, when the processor executes the computer program, calculating the service quality score according to the target emotion score comprises: querying the scoring weight of the facial image to be analyzed corresponding to the target emotion score; and calculating the service quality score according to the target emotion score and the scoring weight.
In one embodiment, when the processor executes the computer program, after querying the target emotion score corresponding to the target emotion information and calculating the service quality score according to the target emotion score, the following is implemented: obtaining score ranges and querying the score range corresponding to the service quality score; querying the service grade associated with the score range corresponding to the service quality score; and generating a service quality report according to the service grade.
In one embodiment, when the processor executes the computer program, the following steps are also implemented: extracting, from the video to be assessed, the voice information corresponding to the facial image to be analyzed; obtaining an identification keyword, detecting whether the voice information contains the identification keyword, and obtaining a detection result; and adding the detection result to the service quality report.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When the computer program is executed by a processor, the following steps are implemented: obtaining a video to be assessed, and extracting image frames from the video to be assessed as images to be assessed; receiving an identity identifier, and selecting a facial image to be analyzed from the images to be assessed according to the identity identifier; identifying current emotion information of the facial image to be analyzed; counting the number of frames of the images to be assessed that correspond to the current emotion information; obtaining target emotion information according to the current emotion information and the frame count; querying the target emotion score corresponding to the target emotion information, and calculating a service quality score according to the target emotion score.
In one embodiment, when the computer program is executed by the processor, identifying the current emotion information of the facial image to be analyzed comprises: receiving, for the facial image to be analyzed, the emotion probability that its current emotion is each standard emotion; sorting the obtained emotion probabilities, and extracting, according to the sorted emotion probabilities, a number of standard emotions corresponding to a preset quantity; judging whether the emotion types corresponding to the extracted standard emotions are the same; and when the emotion types corresponding to the extracted standard emotions are different, obtaining the emotion type corresponding to the standard emotion with the largest emotion probability as the current emotion information.
In one embodiment, when the computer program is executed by the processor, after judging whether the emotion types corresponding to the extracted standard emotions are the same, the following is implemented: when the extracted standard emotions correspond to the same emotion type, querying the standard emotions with the same emotion type; calculating a target probability from the emotion probabilities corresponding to the standard emotions with the same emotion type; obtaining the maximum value among the target probability and the emotion probabilities of the standard emotions with different emotion types; and taking the emotion type corresponding to the maximum value as the current emotion information.
In one embodiment, when the computer program is executed by the processor, calculating the service quality score according to the target emotion score comprises: querying the scoring weight of the facial image to be analyzed corresponding to the target emotion score; and calculating the service quality score according to the target emotion score and the scoring weight.
In one embodiment, when the computer program is executed by the processor, after querying the target emotion score corresponding to the target emotion information and calculating the service quality score according to the target emotion score, the following is also realized: obtaining value ranges, querying the value range corresponding to the service quality score, querying the grade of service associated with that value range, and generating a service quality report according to the grade of service.
In one embodiment, when the computer program is executed by the processor, the following steps are also performed: extracting, from the video to be assessed, the voice information corresponding to the facial image to be analyzed; obtaining identification keywords; detecting whether the voice information contains the identification keywords and obtaining a detection result; and adding the detection result to the service quality report.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments can be implemented by instructing the relevant hardware through a computer program. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to be within the scope of this specification.
The above embodiments express only several implementations of the present application, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent application shall be subject to the appended claims.
Claims (10)
1. A service quality evaluation method, the method comprising:
obtaining a video to be assessed, and extracting image frames from the video to be assessed as images to be assessed;
receiving an identity identifier, and selecting a facial image to be analyzed from the images to be assessed according to the identity identifier;
identifying current emotion information of the facial image to be analyzed;
counting the number of frames of the images to be assessed corresponding to the current emotion information;
obtaining target emotion information according to the current emotion information and the frame count; and
querying a target emotion score corresponding to the target emotion information, and calculating a service quality score according to the target emotion score.
2. The method according to claim 1, wherein identifying the current emotion information of the facial image to be analyzed comprises:
receiving emotion probabilities that the current emotion corresponding to the facial image to be analyzed is each standard emotion;
sorting the obtained emotion probabilities, and extracting a preset number of standard emotions according to the sorted emotion probabilities;
judging whether the emotion types corresponding to the extracted standard emotions are identical; and
when the emotion types corresponding to the extracted standard emotions differ, taking the emotion type corresponding to the standard emotion with the highest emotion probability as the current emotion information.
3. The method according to claim 2, wherein after judging whether the emotion types corresponding to the extracted standard emotions are identical, the method comprises:
when extracted standard emotions correspond to the same emotion type, querying the standard emotions of the identical emotion type;
calculating a target probability from the emotion probabilities corresponding to the standard emotions of the identical emotion type;
obtaining the maximum value among the target probability and the emotion probabilities of the standard emotions of differing emotion types; and
taking the emotion type corresponding to the maximum value as the current emotion information.
4. The method according to claim 1, wherein calculating the service quality score according to the target emotion score comprises:
querying the scoring weight of the facial image to be analyzed corresponding to the target emotion score; and
calculating the service quality score according to the target emotion score and the scoring weight.
5. The method according to claim 1, wherein after querying the target emotion score corresponding to the target emotion information and calculating the service quality score according to the target emotion score, the method comprises:
obtaining value ranges, and querying the value range corresponding to the service quality score;
querying the grade of service associated with the value range corresponding to the service quality score; and
generating a service quality report according to the grade of service.
6. The method according to claim 5, wherein the method further comprises:
extracting, from the video to be assessed, the voice information corresponding to the facial image to be analyzed;
obtaining identification keywords, detecting whether the voice information contains the identification keywords, and obtaining a detection result; and
adding the detection result to the service quality report.
7. A service quality evaluation device, the device comprising:
a first obtaining module, configured to obtain a video to be assessed and extract image frames from the video to be assessed as images to be assessed;
a receiving module, configured to receive an identity identifier and select a facial image to be analyzed from the images to be assessed according to the identity identifier;
an identification module, configured to identify current emotion information of the facial image to be analyzed;
a statistics module, configured to count the number of frames of the images to be assessed corresponding to the current emotion information;
a second obtaining module, configured to obtain target emotion information according to the current emotion information and the frame count; and
a calculation module, configured to query a target emotion score corresponding to the target emotion information and calculate a service quality score according to the target emotion score.
8. The device according to claim 7, wherein the identification module comprises:
a receiving unit, configured to receive emotion probabilities that the current emotion corresponding to the facial image to be analyzed is each standard emotion;
a sorting unit, configured to sort the obtained emotion probabilities and extract a preset number of standard emotions according to the sorted emotion probabilities;
a judging unit, configured to judge whether the emotion types corresponding to the extracted standard emotions are identical; and
a first obtaining unit, configured to, when the emotion types corresponding to the extracted standard emotions differ, take the emotion type corresponding to the standard emotion with the highest emotion probability as the current emotion information.
9. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 6.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811547937.5A CN109766770A (en) | 2018-12-18 | 2018-12-18 | QoS evaluating method, device, computer equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811547937.5A CN109766770A (en) | 2018-12-18 | 2018-12-18 | QoS evaluating method, device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109766770A (en) | 2019-05-17 |
Family
ID=66450647
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811547937.5A Pending CN109766770A (en) | 2018-12-18 | 2018-12-18 | QoS evaluating method, device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109766770A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014178835A (en) * | 2013-03-14 | 2014-09-25 | Nissha Printing Co Ltd | Evaluation system and evaluation method |
CN105049249A (en) * | 2015-07-09 | 2015-11-11 | 中山大学 | Scoring method and system of remote visual conversation services |
CN107194316A (en) * | 2017-04-20 | 2017-09-22 | 广东数相智能科技有限公司 | A kind of evaluation method of mood satisfaction, apparatus and system |
KR20180125756A (en) * | 2017-05-16 | 2018-11-26 | 전주대학교 산학협력단 | Emotion recognition interface apparatus |
CN107452405A (en) * | 2017-08-16 | 2017-12-08 | 北京易真学思教育科技有限公司 | A kind of method and device that data evaluation is carried out according to voice content |
CN107862598A (en) * | 2017-09-30 | 2018-03-30 | 平安普惠企业管理有限公司 | Long-range the interview measures and procedures for the examination and approval, server and readable storage medium storing program for executing |
Non-Patent Citations (2)
Title |
---|
LINH TUAN DANG et al.: "Development of facial expression recognition for training video customer service representatives", IEEE *
YANG Xiaoyi; XIE Junwu; ZHANG Feng: "Research on the application of call center agent service quality monitoring based on facial expression recognition", Electronic Design Engineering, no. 14, pages 127 - 130 *
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110310169A (en) * | 2019-05-22 | 2019-10-08 | 深圳壹账通智能科技有限公司 | Information-pushing method, device, equipment and medium based on interest value |
CN110458008A (en) * | 2019-07-04 | 2019-11-15 | 深圳壹账通智能科技有限公司 | Method for processing video frequency, device, computer equipment and storage medium |
CN110415108A (en) * | 2019-07-29 | 2019-11-05 | 中国工商银行股份有限公司 | Method for processing business and device, electronic equipment and computer readable storage medium |
CN112633037A (en) * | 2019-09-24 | 2021-04-09 | 北京国双科技有限公司 | Object monitoring method and device, storage medium and electronic equipment |
CN110718293A (en) * | 2019-10-23 | 2020-01-21 | 合肥盛东信息科技有限公司 | Nursing staff service quality monitoring and evaluating system |
CN111383138A (en) * | 2020-03-06 | 2020-07-07 | 腾讯科技(深圳)有限公司 | Catering data processing method and device, computer equipment and storage medium |
CN111401198B (en) * | 2020-03-10 | 2024-04-23 | 广东九联科技股份有限公司 | Audience emotion recognition method, device and system |
CN111401198A (en) * | 2020-03-10 | 2020-07-10 | 广东九联科技股份有限公司 | Audience emotion recognition method, device and system |
CN111539339A (en) * | 2020-04-26 | 2020-08-14 | 北京市商汤科技开发有限公司 | Data processing method and device, electronic equipment and storage medium |
CN111784163A (en) * | 2020-07-01 | 2020-10-16 | 深圳前海微众银行股份有限公司 | Data evaluation method, device, equipment and storage medium |
CN111914810A (en) * | 2020-08-19 | 2020-11-10 | 浙江养生堂天然药物研究所有限公司 | Food inspection method, apparatus and non-volatile computer-readable storage medium |
CN113269406A (en) * | 2021-05-06 | 2021-08-17 | 京东数字科技控股股份有限公司 | Method and device for evaluating online service, computer equipment and storage medium |
CN113434630B (en) * | 2021-06-25 | 2023-07-25 | 平安科技(深圳)有限公司 | Customer service evaluation method, customer service evaluation device, terminal equipment and medium |
CN113434630A (en) * | 2021-06-25 | 2021-09-24 | 平安科技(深圳)有限公司 | Customer service evaluation method, customer service evaluation device, terminal equipment and medium |
CN113642503A (en) * | 2021-08-23 | 2021-11-12 | 国网山东省电力公司金乡县供电公司 | Window service scoring method and system based on image and voice recognition |
CN113642503B (en) * | 2021-08-23 | 2024-03-15 | 国网山东省电力公司金乡县供电公司 | Window service scoring method and system based on image and voice recognition |
CN113723299A (en) * | 2021-08-31 | 2021-11-30 | 上海明略人工智能(集团)有限公司 | Conference quality scoring method, system and computer readable storage medium |
CN114048348A (en) * | 2021-10-14 | 2022-02-15 | 盐城金堤科技有限公司 | Video quality scoring method and device, storage medium and electronic equipment |
CN114048348B (en) * | 2021-10-14 | 2024-08-16 | 盐城天眼察微科技有限公司 | Video quality scoring method and device, storage medium and electronic equipment |
CN114565814A (en) * | 2022-02-25 | 2022-05-31 | 平安国际智慧城市科技股份有限公司 | Feature detection method and device and terminal equipment |
CN114565814B (en) * | 2022-02-25 | 2024-07-09 | 深圳平安智慧医健科技有限公司 | Feature detection method and device and terminal equipment |
CN117131099A (en) * | 2022-12-14 | 2023-11-28 | 广州数化智甄科技有限公司 | Emotion data analysis method and device in product evaluation and product evaluation method |
CN118094173A (en) * | 2023-11-17 | 2024-05-28 | 北京理工大学 | Method for autonomously identifying and analyzing reconnaissance load target and evaluating reconnaissance load target through algorithm |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109766770A (en) | QoS evaluating method, device, computer equipment and storage medium | |
Abd El Meguid et al. | Fully automated recognition of spontaneous facial expressions in videos using random forest classifiers | |
CN109543925B (en) | Risk prediction method and device based on machine learning, computer equipment and storage medium | |
CN109376237B (en) | Client stability prediction method, device, computer equipment and storage medium | |
CN109670437B (en) | Age estimation model training method, facial image recognition method and device | |
CN109345302A (en) | Machine learning model training method, device, storage medium and computer equipment | |
CN109729383A (en) | Double record video quality detection methods, device, computer equipment and storage medium | |
CN111028305A (en) | Expression generation method, device, equipment and storage medium | |
CN109767261A (en) | Products Show method, apparatus, computer equipment and storage medium | |
CN110751533B (en) | Product portrait generation method and device, computer equipment and storage medium | |
CN109002988A (en) | Risk passenger method for predicting, device, computer equipment and storage medium | |
CN106960248B (en) | Method and device for predicting user problems based on data driving | |
CN110458008A (en) | Method for processing video frequency, device, computer equipment and storage medium | |
CN109766474A (en) | Inquest signal auditing method, device, computer equipment and storage medium | |
CN111160275B (en) | Pedestrian re-recognition model training method, device, computer equipment and storage medium | |
CN109815851A (en) | Kitchen hygiene detection method, device, computer equipment and storage medium | |
CN110175298A (en) | User matching method | |
CN109684978A (en) | Employees'Emotions monitoring method, device, computer equipment and storage medium | |
CN109461043A (en) | Product method for pushing, device, computer equipment and storage medium | |
Bai et al. | Automatic long-term deception detection in group interaction videos | |
US20180276696A1 (en) | Association method, and non-transitory computer-readable storage medium | |
Xia et al. | Cross-database micro-expression recognition with deep convolutional networks | |
CN109766773A (en) | Match monitoring method, device, computer equipment and storage medium | |
CN110377821A (en) | Generate method, apparatus, computer equipment and the storage medium of interest tags | |
CN109241864A (en) | Emotion prediction technique, device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20190517 |