CN111415176A - Satisfaction evaluation method and device and electronic equipment

Satisfaction evaluation method and device and electronic equipment

Info

Publication number: CN111415176A (granted publication: CN111415176B)
Application number: CN201811555021.4A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: comment data, satisfaction, machine learning, word, piece
Inventor: 李国琪
Original and current assignee: Hangzhou Hikvision Digital Technology Co Ltd
Legal status: Granted; currently active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00: Commerce
    • G06Q30/02: Marketing; Price estimation or determination; Fundraising
    • G06Q30/0282: Rating or review of business operators or products
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Abstract

The application provides a satisfaction evaluation method, a satisfaction evaluation device and an electronic device, wherein the method comprises the following steps: performing word segmentation on each piece of obtained comment data of an evaluation object to obtain the words in each piece of comment data; extracting a feature vector from each piece of comment data based on the obtained words of that comment data, and stacking the feature vectors of all pieces of comment data to obtain a feature matrix; inputting the feature matrix into a trained machine learning model, which calculates the feature matrix according to a regression algorithm and outputs a plurality of satisfaction degree parameters; and discretizing the plurality of satisfaction degree parameters into a specified number of binning intervals, and taking the rank order, among all the binning intervals, of the binning interval containing the most satisfaction degree parameters as the satisfaction degree evaluation grade. According to the method and the device, the user's experience comments on the evaluation object are obtained more directly from the comment data, so that an accurate satisfaction evaluation result is obtained.

Description

Satisfaction evaluation method and device and electronic equipment
Technical Field
The present application relates to the field of natural language processing, and in particular, to a satisfaction evaluation method and apparatus, and an electronic device.
Background
Satisfaction evaluation refers to evaluating the service experience in restaurants, scenic spots, cinemas and other places, and is a form of social media sentiment analysis. In the related art, indicators such as passenger flow and consumption amount are mainly analyzed with the analytic hierarchy process to obtain the customer's satisfaction with the service experience.
However, the analytic hierarchy process processes each indicator according to preset weights, different weights may lead to different evaluation results, and the few indicators used as the evaluation basis are in fact too coarse, so the evaluation results obtained by the analytic hierarchy process are not accurate.
Disclosure of Invention
In view of this, the application provides a satisfaction evaluation method, a satisfaction evaluation device and an electronic device, which are used for directly obtaining experience comments of a user on an evaluation object from comment data and accurately achieving satisfaction evaluation.
Specifically, the method is realized through the following technical scheme:
a satisfaction evaluation method comprising:
performing word segmentation on each piece of comment data of the obtained evaluation object to obtain words in each piece of comment data;
extracting a feature vector from each piece of comment data based on the obtained vocabulary of each piece of comment data, and overlapping the feature vectors of each piece of comment data to obtain a feature matrix;
inputting the feature matrix into a trained machine learning model, calculating the feature matrix by the machine learning model according to a regression algorithm, and outputting a plurality of satisfaction degree parameters;
discretizing the plurality of satisfaction degree parameters into a specified number of binning intervals, and taking the rank order, among all the binning intervals, of the binning interval containing the most satisfaction degree parameters as the satisfaction degree evaluation grade.
In the satisfaction evaluating method, before the comment data is subjected to the word segmentation process, the method further includes:
preprocessing the comment data to obtain comment data meeting preset word segmentation conditions; wherein the word segmentation condition at least comprises: the format of the comment data conforms to the specified encoding format, and invalid data does not exist in the comment data.
In the satisfaction evaluation method, the extracting a feature vector from each piece of comment data based on the obtained vocabulary of each piece of comment data includes:
calculating the word frequency-inverse text frequency of each vocabulary in each comment data according to each vocabulary of each comment data, and generating a word frequency-inverse text frequency vector of each comment data according to the word frequency-inverse text frequency of each vocabulary;
converting each vocabulary in each piece of comment data into a word vector according to a preset word vector conversion algorithm and a word vector corpus; the corpus comprises a mapping relation between words and word vectors;
determining an attribute vector of each piece of comment data according to a preset multi-dimensional attribute screening strategy;
and combining the word frequency-inverse text frequency vector, the word vector and the attribute vector of each piece of comment data to obtain a feature vector of the comment data.
In the satisfaction evaluation method, the inputting the feature matrix to a trained machine learning model includes:
and performing dimension reduction processing on the feature matrix, and inputting the feature matrix subjected to the dimension reduction processing into the machine learning model.
In the satisfaction evaluating method, the method further includes:
selecting the words with the highest word frequency from a certain amount of comment data and presenting them visually; or,
and presenting the satisfaction evaluation result of the evaluation object on the position of the evaluation object in the map page.
In the satisfaction evaluation method, the machine learning model comprises at least two machine learning submodels, and each machine learning submodel calculates the characteristic matrix according to the regression algorithm of the submodel and outputs a plurality of satisfaction parameters;
the discretizing the plurality of satisfaction degree parameters into a specified number of binning intervals and taking the rank order, among all the binning intervals, of the binning interval containing the most satisfaction degree parameters as the satisfaction degree evaluation grade comprises:
discretizing the plurality of satisfaction degree parameters output by each machine learning submodel into the specified number of binning intervals, and taking the rank order, among all the binning intervals, of the binning interval containing the most satisfaction degree parameters as the reference satisfaction degree evaluation grade obtained by that machine learning submodel; and fusing the plurality of reference satisfaction degree evaluation grades to obtain the satisfaction degree evaluation grade; or,
fusing the satisfaction degree parameters output by all the machine learning submodels, discretizing the fused satisfaction degree parameters into the specified number of binning intervals, and taking the rank order, among all the binning intervals, of the binning interval containing the most fused satisfaction degree parameters as the satisfaction degree evaluation grade.
A satisfaction evaluating apparatus comprising:
the word segmentation unit is used for carrying out word segmentation on each piece of comment data of the obtained evaluation object to obtain words in each piece of comment data;
the extracting unit is used for extracting a characteristic vector from each piece of comment data based on the obtained vocabulary of each piece of comment data, and superposing the characteristic vectors of each piece of comment data to obtain a characteristic matrix;
the calculation unit is used for inputting the characteristic matrix into a trained machine learning model, calculating the characteristic matrix by the machine learning model according to a regression algorithm and outputting a plurality of satisfaction degree parameters;
and the determining unit is used for discretizing the plurality of satisfaction degree parameters into a specified number of binning intervals, and taking the rank order, among all the binning intervals, of the binning interval containing the most satisfaction degree parameters as the satisfaction degree evaluation grade.
In the satisfaction evaluating device, the device further includes:
the preprocessing unit is used for preprocessing the comment data to obtain comment data meeting the preset word segmentation conditions; wherein the word segmentation condition at least comprises: the format of the comment data conforms to the specified encoding format, and invalid data does not exist in the comment data.
In the satisfaction evaluation device, the extraction unit is further configured to:
calculating the word frequency-inverse text frequency of each vocabulary in each comment data according to each vocabulary of each comment data, and generating a word frequency-inverse text frequency vector of each comment data according to the word frequency-inverse text frequency of each vocabulary;
converting each vocabulary in each piece of comment data into a word vector according to a preset word vector conversion algorithm and a word vector corpus; the corpus comprises a mapping relation between words and word vectors;
determining an attribute vector of each piece of comment data according to a preset multi-dimensional attribute screening strategy;
and combining the word frequency-inverse text frequency vector, the word vector and the attribute vector of each piece of comment data to obtain a feature vector of the comment data.
In the satisfaction evaluation device, the calculation unit is further configured to:
and performing dimension reduction processing on the feature matrix, and inputting the feature matrix subjected to the dimension reduction processing into the machine learning model.
In the satisfaction evaluating apparatus, the apparatus further includes a presenting unit operable to:
selecting the words with the highest word frequency from a certain amount of comment data and presenting them visually; or,
and presenting the satisfaction evaluation result of the evaluation object on the position of the evaluation object in the map page.
In the satisfaction evaluation device, the machine learning model comprises at least two machine learning submodels, and each machine learning submodel calculates the characteristic matrix according to the regression algorithm of the submodel and outputs a plurality of satisfaction parameters;
the determining unit is further configured to:
discretizing the plurality of satisfaction degree parameters output by each machine learning submodel into the specified number of binning intervals, and taking the rank order, among all the binning intervals, of the binning interval containing the most satisfaction degree parameters as the reference satisfaction degree evaluation grade obtained by that machine learning submodel; and fusing the plurality of reference satisfaction degree evaluation grades to obtain the satisfaction degree evaluation grade; or,
fusing the satisfaction degree parameters output by all the machine learning submodels, discretizing the fused satisfaction degree parameters into the specified number of binning intervals, and taking the rank order, among all the binning intervals, of the binning interval containing the most fused satisfaction degree parameters as the satisfaction degree evaluation grade.
An electronic device comprising a memory, a processor, and machine-executable instructions stored on the memory and executable on the processor, wherein the machine-executable instructions, when executed by the processor, implement the satisfaction evaluation methods described herein.
In the embodiments of the application, the electronic device can perform word segmentation on the comment data and extract features based on the resulting words, so that the comment data of an evaluation object are analyzed effectively. After the feature matrix is extracted from the comment data, it is calculated by the trained machine learning model, and the satisfaction evaluation grade is determined from the calculated satisfaction parameters. In this way, the user's experience comments on the evaluation object are obtained more directly, and an accurate satisfaction evaluation result is obtained.
Drawings
FIG. 1 is a flow chart of a satisfaction evaluation method illustrated herein;
FIG. 2 is a schematic flow chart diagram of a satisfaction evaluation method presented herein;
FIG. 3 is a block diagram of an embodiment of a satisfaction rating apparatus shown in the present application;
fig. 4 is a hardware configuration diagram of an electronic device shown in the present application.
Detailed Description
In order to make the technical solutions in the embodiments of the present invention better understood and the above objects, features and advantages more comprehensible, the technical solutions in the embodiments of the present invention are described below with reference to the accompanying drawings.
Referring to fig. 1, there is shown a flow chart of a satisfaction evaluation method of the present application, as shown in fig. 1, the method comprising the steps of:
step 101: and performing word segmentation on each piece of comment data of the obtained evaluation object to obtain words in each piece of comment data.
The method is applied to an electronic device, which may be a stand-alone server or a server in a server cluster dedicated to satisfaction evaluation. The scheme is described below with the electronic device as the execution subject.
The evaluation object can be any of various service venues (such as restaurants, hotels, cinemas, amusement parks and the like). The comment data of the evaluation object are the data published by users on websites with a comment function (such as Dianping, Baidu Maps, Amap and the like), and the electronic device can acquire the comment data from these websites.
In order to better sense the experience comment of the user on the evaluation object, the electronic equipment can perform word segmentation on the obtained comment data of the evaluation object, so that the comment data can be analyzed subsequently based on words in the comment data.
As an embodiment, the electronic device may extract a specified number of pieces of comment data (for example, 10,000 pieces), and then match each piece of comment data word by word against a preset first corpus, thereby implementing the word segmentation processing on the comment data. After the word segmentation processing is completed, the sentences in each piece of comment data are decomposed into word sequences. For example: "the dish is very delicious" becomes "dish" + "very" + "delicious" after word segmentation.
It should be noted that, during the word segmentation process, words that are not present in the first corpus may be found. In this case, the number of occurrences of the vocabulary may be counted, and if the number of occurrences of the vocabulary reaches a preset threshold, the vocabulary may be considered to be a commonly used vocabulary, and the vocabulary may be added to the first corpus, so as to perform the word segmentation process based on the first corpus more efficiently in the following.
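As a rough illustration of this word-segmentation step, the sketch below uses the off-the-shelf jieba segmenter in place of the first-corpus matching described above, and counts out-of-corpus words so that frequently occurring ones can be added to the first corpus; jieba, the sample comments, the corpus contents and the occurrence threshold are illustrative assumptions rather than part of the patent.

```python
# Minimal sketch of step 101: segment comments and grow the first corpus.
# jieba stands in for the corpus-matching segmentation described above.
from collections import Counter

import jieba

first_corpus = {"菜", "很", "好吃", "服务", "态度"}   # hypothetical first corpus
ADD_THRESHOLD = 50                                     # hypothetical occurrence threshold

comments = ["菜很好吃", "服务态度很好"]                # hypothetical comment data

unseen_counter = Counter()
segmented = []
for comment in comments:
    words = jieba.lcut(comment)                        # e.g. "菜很好吃" -> ["菜", "很", "好吃"]
    segmented.append(words)
    unseen_counter.update(w for w in words if w not in first_corpus)

# Words that occur often enough are treated as common vocabulary and added to
# the corpus, as described in the text above.
for word, count in unseen_counter.items():
    if count >= ADD_THRESHOLD:
        first_corpus.add(word)
```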
In an embodiment shown, since a large amount of invalid data may exist in comment data acquired by an electronic device from a website, and the comment data are not in a uniform coding format, for convenience of subsequent analysis processing, the electronic device may first pre-process the comment data to obtain comment data meeting a preset word segmentation condition. Wherein, the word segmentation conditions at least comprise: the format of the comment data conforms to the specified encoding format, and invalid data does not exist in the comment data; the invalid data may include data that does not contribute to the satisfaction analysis, such as a special symbol, stop word, or the like.
Specifically, the electronic device may purge the acquired comment data to delete invalid data in the comment data. As an example, the electronic device may match the review data with various invalid data in the second corpus containing a large amount of invalid data, and delete the invalid data in the review data when any invalid data is matched.
Furthermore, the electronic device can unify the encoding formats of the cleaned comment data so that all comment data are in the specified encoding format. For example, the encoding format of the comment data can be unified as the 8-bit Unicode Transformation Format (UTF-8).
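A minimal preprocessing sketch along these lines is shown below; the invalid-data set, the regular expression, the source encoding and the sample comment are assumptions made only for illustration.

```python
# Preprocessing sketch: strip special symbols and invalid words, and hand
# downstream steps UTF-8 text.
import re

invalid_words = {"的", "了", "啊"}                      # hypothetical "second corpus" of invalid data
symbol_pattern = re.compile(r"[^\w\u4e00-\u9fff]+")     # special symbols and other non-word characters

def preprocess(raw: bytes, source_encoding: str = "gbk") -> str:
    text = raw.decode(source_encoding, errors="ignore")   # decode from whatever encoding the site used
    text = symbol_pattern.sub("", text)                    # delete special symbols
    for word in invalid_words:
        text = text.replace(word, "")                      # delete invalid data such as stop words
    return text.encode("utf-8").decode("utf-8")            # unified UTF-8 output

cleaned = preprocess("菜很好吃，服务态度也不错！".encode("gbk"))
```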
Step 102: and extracting a feature vector from each piece of comment data based on the obtained vocabulary of each piece of comment data, and superposing the feature vectors of each piece of comment data to obtain a feature matrix.
After the vocabulary of the comment data is obtained, in order to process a large number of vocabularies based on the deep learning method, firstly, the characteristics of the comment data are extracted from the vocabulary of the comment data, and the characteristics are processed by a deep learning model. It is noted that in the field of machine learning, the extracted features are usually in the form of feature vectors or feature matrices.
In one embodiment shown, the electronic device may first calculate, for each vocabulary of each piece of comment data, a Term Frequency-Inverse text Frequency (TF-IDF) of the vocabulary in the comment data, and generate a Term Frequency-Inverse text Frequency vector of the comment data according to the Term Frequency-Inverse text Frequency of each vocabulary.
The calculation mode of the word frequency-inverse text frequency of any vocabulary in any piece of comment data can be represented by the following formula (1):
TF-IDF = (a / b) × log(c / d)    (1)
wherein a represents the number of times the vocabulary appears in the comment data, b represents the total amount of vocabulary of the comment data, c represents the total number of comment data, and d represents the number of pieces of comment data in which the word appears.
After the word frequency-inverse text frequency of each vocabulary in one piece of comment data is obtained through calculation, a word frequency-inverse text frequency vector of the comment data can be generated.
As an example, the electronic device may determine the total number n of words present in all the comment data it has obtained, and then generate a word frequency-inverse text frequency vector of size 1 × n for each piece of comment data, each position of the word frequency-inverse text frequency vector corresponding to n different words, respectively.
After the word frequency-inverse text frequency of each vocabulary of any comment data is obtained through calculation by the electronic equipment, the word frequency-inverse text frequency of each vocabulary is filled into a position corresponding to the vocabulary in the word frequency-inverse text frequency vector of the comment data, and the word frequency-inverse text frequency vector of the comment data is obtained.
For example, if the total number of words existing in all comment data is 1000, then the size of the word frequency-inverse text frequency vector is 1 × 1000, and each position of the vector corresponds to one of the 1000 different words. If the word frequency-inverse text frequencies of the 4 words in a piece of comment data containing 4 words are calculated to be 0.12, 0.04, 0.009 and 0.12, these 4 values are filled into the positions corresponding to those 4 words, and the remaining positions of the comment data's word frequency-inverse text frequency vector may be filled with 0.
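The sketch below computes this feature directly from formula (1); the segmented sample comments are assumptions, and a full implementation would build the 1 × n vector over the vocabulary of all obtained comment data.

```python
# Word frequency-inverse text frequency per formula (1): (a / b) * log(c / d),
# filled into a 1 x n vector over the whole vocabulary.
import math

segmented = [["菜", "很", "好吃"], ["服务", "态度", "很", "好"]]   # hypothetical segmented comments

vocabulary = sorted({w for words in segmented for w in words})      # the n distinct words
index = {w: i for i, w in enumerate(vocabulary)}
c = len(segmented)                                                   # total number of comments

def tfidf_vector(words):
    vec = [0.0] * len(vocabulary)                                    # 1 x n, unused positions stay 0
    b = len(words)                                                   # total words in this comment
    for w in set(words):
        a = words.count(w)                                           # occurrences in this comment
        d = sum(1 for other in segmented if w in other)              # comments containing w
        vec[index[w]] = (a / b) * math.log(c / d)
    return vec

tfidf_vectors = [tfidf_vector(words) for words in segmented]
```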
In addition, the electronic device can convert each vocabulary in each piece of comment data into a word vector according to a preset word vector conversion algorithm and a word vector corpus. The word vector corpus comprises a mapping relation between words and word vectors. The Word vector transformation algorithm may include a Fasttext algorithm, a Word2Vec algorithm, and the like, and may be trained in advance with a common vocabulary in the field where the evaluation object is located. The training method can refer to the related art, and is not described herein.
It should be noted that, in the process of converting words into word vectors, uncommon words that have no word vector in the word vector corpus may be found. In this case, the electronic device may generate a word vector for the uncommon word based on the word vector conversion algorithm, and then add the mapping relation between the newly generated word vector and the uncommon word to the word vector corpus. In addition, the electronic device may also need to adjust the word vectors of words whose meanings are similar to the uncommon word and of words whose meanings are opposite to it. After the word vector corpus is updated in this way, the electronic device can subsequently convert the words in comment data into word vectors more efficiently, and the accuracy of the satisfaction evaluation can also be improved.
After each word is converted into a word vector of size 1 × m, each piece of comment data corresponds to at least one word vector.
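One possible realization of this step is sketched below with gensim's FastText, whose character n-grams can also produce vectors for uncommon words absent from the corpus; training on the comments themselves, the parameters and the example words are assumptions, whereas the text above envisages a model pre-trained on domain vocabulary.

```python
# One possible word-vector step using gensim FastText.
from gensim.models import FastText

segmented = [["菜", "很", "好吃"], ["服务", "态度", "很", "好"]]

ft = FastText(sentences=segmented, vector_size=50, window=3, min_count=1, epochs=10)

# Each word becomes a 1 x m word vector (m = 50 here).
word_vectors = {w: ft.wv[w] for words in segmented for w in words}

# An uncommon word unseen in training still gets a vector, which could then be
# added to the word vector corpus as described above.
rare_vector = ft.wv["好喝"]
```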
In addition, the electronic equipment can also determine the attribute vector of each piece of comment data according to a preset multi-dimensional attribute screening strategy. Wherein each element in the attribute vector of each comment data characterizes an attribute of the comment data.
As an embodiment, the multidimensional attribute screening policy may be a series of judgment logics, and each comment data is judged based on the multidimensional attribute screening policy, so as to determine the attribute of the comment data.
Such as: the multidimensional attribute screening strategy can judge whether the comment data comprise English words or not, whether the comment data comprise any preset key words or not (the key words can be terms, jargon and the like in the field where the evaluation object is located), whether the vocabulary amount of the comment data reaches a preset long sentence vocabulary amount value or not and the like.
When the electronic device determines the comment data by the multidimensional attribute screening policy, if any determination result is positive, the attribute corresponding to the determination result may be recorded as 1, and if any determination result is negative, the attribute corresponding to the determination result may be recorded as 0.
Of course, the multi-dimensional attribute screening strategy may also include more complex judgment logic (for example, judgment logic combining AND, OR and NOT operations) to aggregate the judgment results of multiple attributes into one dimension of the attribute vector. In that case, the judgment result of that dimension can take one of a plurality of numerical values.
In summary, for a multidimensional attribute screening strategy containing k judgment logics, the electronic equipment can obtain an attribute vector with the size of 1 × k from each piece of comment data.
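A minimal sketch of such a screening strategy is given below, producing a 1 × k attribute vector (k = 3 here); the keyword list and the long-sentence threshold are illustrative assumptions.

```python
# Each piece of judgment logic contributes one element of the attribute vector.
import re

keywords = {"回锅肉", "服务员", "排队"}       # hypothetical domain keywords
LONG_SENTENCE_WORDS = 20                      # hypothetical long-sentence threshold

def attribute_vector(text: str, words: list) -> list:
    return [
        1 if re.search(r"[A-Za-z]", text) else 0,           # contains English words?
        1 if any(k in text for k in keywords) else 0,        # contains a preset keyword?
        1 if len(words) >= LONG_SENTENCE_WORDS else 0,       # long sentence?
    ]                                                        # k = 3 in this sketch

vec = attribute_vector("菜很好吃", ["菜", "很", "好吃"])
```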
After the electronic equipment determines the word frequency-inverse text frequency vector, the word vector and the attribute vector of each piece of comment data, the feature vectors of the comment data can be obtained through combination.
Specifically, if the size of the word frequency-inverse text frequency vector of the comment data is 1 × n, the size of each word vector is 1 × m, and the size of the attribute vector is 1 × k.
The individual word vectors may be merged first.
As an example, the average value of the elements at the same position in each word vector may be calculated and then the average value may be used as the element at that position, thereby obtaining a merged word vector with a size of 1 × m.
As another example, the p word vectors of a piece of comment data may be concatenated to obtain a merged word vector of size 1 × (m × p). To ensure that the merged word vectors of all pieces of comment data have a uniform size, a maximum word count z per comment may be set empirically, and the size of the merged word vector is then fixed at 1 × (m × z), where z is greater than or equal to p.
After the merged word vectors are obtained, the word frequency-inverse text frequency vector, the merged word vector and the attribute vector of each piece of comment data may be combined into a feature vector. The size of the feature vector is 1 × (n + m + k) if the word vectors are merged in the first manner, and 1 × (n + m × z + k) if they are merged in the second manner.
Further, the electronic device can stack the feature vectors of all pieces of comment data to obtain the feature matrix. The size of the feature matrix is c × (n + m + k) if the feature vectors are of size 1 × (n + m + k), and c × (n + m × z + k) if they are of size 1 × (n + m × z + k).
So far, the electronic device completes feature extraction on the comment data.
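Putting the pieces together, the sketch below assembles a feature vector under the first merging manner (averaged word vectors) and stacks feature vectors into the feature matrix; all shapes and values are toy assumptions.

```python
# Assemble a 1 x (n + m + k) feature vector and stack c of them into a matrix.
import numpy as np

tfidf_vec = np.array([0.12, 0.0, 0.04, 0.009])          # 1 x n (n = 4), hypothetical
word_vecs = np.random.rand(3, 50)                        # p = 3 word vectors of size 1 x m (m = 50)
attr_vec = np.array([0, 1, 0])                           # 1 x k (k = 3), hypothetical

merged_word_vec = word_vecs.mean(axis=0)                                 # first merging manner: 1 x m
feature_vec = np.concatenate([tfidf_vec, merged_word_vec, attr_vec])     # 1 x (n + m + k)

# Stacking the feature vectors of all c comments yields a c x (n + m + k) matrix.
feature_matrix = np.vstack([feature_vec, feature_vec])                   # c = 2 in this toy example
```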
Step 103: and inputting the characteristic matrix into a trained machine learning model, calculating the characteristic matrix by the machine learning model according to a regression algorithm, and outputting a plurality of satisfaction degree parameters.
The regression algorithm may include linear regression, multivariate polynomial regression, gradient boosting regression, random forest regression, a long short-term memory (LSTM) network, and the like. It should be noted that when the machine learning model is built on a long short-term memory network, the machine learning model is a deep learning model.
Before the machine learning model is applied, the machine learning model is firstly constructed according to a regression algorithm.
Further, training samples are obtained. The training samples comprise a large number of feature matrices of the same size, which can be extracted from comment data by the method described above. It should be noted that the feature vector in each row of such a feature matrix is labeled with a satisfaction parameter label, and the satisfaction parameter label characterizes the corresponding satisfaction evaluation grade. For example: the satisfaction parameter labels comprise 1, 2, 3, 4 and 5, and each satisfaction parameter label corresponds to one satisfaction evaluation grade. In the process of manually calibrating the training samples, the satisfaction evaluation grade represented by any piece of comment data is determined by manual judgment, and the satisfaction parameter label corresponding to that satisfaction evaluation grade is then calibrated onto the feature vector of that comment data in the feature matrix.
The electronic equipment processes the characteristic matrix in the training sample by using the machine learning model so as to obtain a plurality of satisfaction degree parameters, and trains the network parameters of the machine learning model according to the difference between the satisfaction degree parameters corresponding to each characteristic vector and the satisfaction degree parameter labels calibrated on the characteristic vectors.
After the machine learning model is trained through a certain number of training samples, the machine learning model capable of processing the characteristic matrix is obtained.
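The training step can be illustrated as below with one of the regression algorithms named above (gradient boosting regression, via scikit-learn); the matrix shapes and the randomly generated data stand in for real labeled feature matrices and are purely assumptions.

```python
# Training sketch: each row of a training feature matrix carries a manually
# calibrated satisfaction parameter label in {1, ..., 5}.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
train_matrix = rng.random((1000, 57))            # c x (n + m + k) training feature matrix
train_labels = rng.integers(1, 6, size=1000)     # satisfaction parameter labels 1..5

model = GradientBoostingRegressor()
model.fit(train_matrix, train_labels)

# At inference time the model outputs one satisfaction parameter per feature vector.
satisfaction_params = model.predict(rng.random((200, 57)))
```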
After obtaining the machine learning model, in the process of implementing satisfaction evaluation based on the machine learning model, the electronic device may input the feature matrix obtained in step 102 to the machine learning model, so that the machine learning model calculates the feature matrix according to its own regression algorithm, thereby obtaining a plurality of satisfaction parameters corresponding to each feature vector in the feature matrix.
In one embodiment, in order to reduce the amount of computation on the electronic device, the extracted feature matrix may first be subjected to dimension reduction, and the reduced feature matrix is then input to the machine learning model.
As an example, the electronic device may perform dimension reduction processing on the feature matrix by means of Singular Value Decomposition (SVD).
The electronic device can disassemble the feature matrix into two smaller matrices, and then select one of the matrices as the feature matrix to be processed by the machine learning model.
As another embodiment, the electronic device may perform the dimension reduction processing on the feature matrix through a document topic generation model (Latent Dirichlet Allocation, LDA).
Of course, other dimension reduction means can be provided, and specific reference can be made to related technologies, which are not described herein again.
It should be noted that the number of rows of the feature matrix after the dimension reduction processing is the same as the number of rows of the feature matrix before the dimension reduction processing, in other words, the number of feature vectors included in the feature matrix is not changed by the dimension reduction processing.
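A dimension-reduction sketch in the SVD spirit is shown below using scikit-learn's TruncatedSVD; the matrix shape and the number of retained components are assumptions, and, as noted above, the number of rows is unchanged.

```python
# Truncated SVD keeps the c rows (feature vectors) and shrinks only the columns.
import numpy as np
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(0)
feature_matrix = rng.random((200, 1057))                 # c x (n + m + k)

svd = TruncatedSVD(n_components=100)
reduced_matrix = svd.fit_transform(feature_matrix)       # shape (200, 100): same c, fewer columns
```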
In such an embodiment, the electronic device may perform parameter tuning on the machine learning model in order to adapt the machine learning model to the reduced-dimension feature matrix.
Specifically, the electronic device needs to adjust network parameters related to the dimension of the feature matrix in the machine learning model, so that the machine learning model can correctly process the feature matrix after dimension reduction.
In addition, since only adjusting the network parameters related to the dimension of the feature matrix may reduce the generalization ability of the machine learning model, the machine learning model may be retrained using the feature matrix after the dimension reduction.
The electronic equipment can check the influence of the network parameters on the satisfaction degree parameters according to a plurality of preset evaluation indexes (such as accuracy, recall rate, F1-Measure and the like), then calculate the feature matrix after dimension reduction through different network parameters, and further determine the optimal network parameters according to the evaluation indexes.
In the method, as an embodiment, a root mean square error is calculated according to the satisfaction degree parameters of each feature vector in the feature matrix after the dimension reduction processing and the satisfaction degree parameter labels calibrated on the feature vectors, and then the root mean square error is used as an evaluation index, so that the optimal network parameters are selected.
In one embodiment shown, the feature matrix may be processed based on a machine learning model including at least two machine learning submodels in order to more accurately achieve satisfaction evaluation by the machine learning model. Different machine learning submodels apply different regression algorithms, so that different satisfaction parameters can be obtained. For each machine learning submodel, the training process is as described above and will not be described herein.
In this embodiment, the feature matrix extracted in step 102 or the feature matrix after dimension reduction processing may be input to at least two machine learning submodels that have been trained, respectively. And each machine learning submodel can calculate the characteristic matrix according to the regression algorithm of the submodel and then output a plurality of satisfaction degree parameters.
After the characteristic matrix is processed by utilizing at least two machine learning submodels and the satisfaction degree parameter is obtained, more accurate satisfaction degree evaluation can be realized subsequently based on the satisfaction degree parameter.
Step 104: discretizing the plurality of satisfaction parameters into a specified number of binning intervals, and taking the rank order, among all the binning intervals, of the binning interval containing the most satisfaction parameters as the satisfaction evaluation grade.
After obtaining a plurality of satisfaction parameters through the machine learning model, the electronic equipment can evaluate the evaluation object according to the satisfaction parameters.
In this technical solution, the evaluation result of the evaluation object is a satisfaction evaluation grade, and a specified number of satisfaction evaluation grades can be predefined to represent how satisfied customers are with the evaluation object. For example, the satisfaction evaluation grades can be divided into 5 grades: "very satisfied", "satisfied", "moderately satisfied", "dissatisfied" and "very dissatisfied". The specified number is also the number of distinct satisfaction parameter labels calibrated on the training samples when the machine learning model is trained.
The satisfaction parameters output by the machine learning model are a plurality of numerical values, and these values can be discretized into the specified number of binning intervals.
In practice, the values of the satisfaction parameter labels calibrated on the feature matrices used as training samples determine the intervals, and each such interval is a binning interval used later when the satisfaction parameters are binned according to the binning method. Taking the calibrated satisfaction parameter labels 1, 2, 3, 4 and 5 as an example, the binning intervals are: 0 to 1, 1 to 2, 2 to 3, 3 to 4, and 4 to 5. The rank order of a binning interval among all binning intervals can be used as the satisfaction evaluation grade indicated by that binning interval.
For example: the binning interval 0 to 1 ranks 1st and indicates the grade "very dissatisfied"; the binning interval 1 to 2 ranks 2nd and indicates "dissatisfied"; the binning interval 2 to 3 ranks 3rd and indicates "moderately satisfied"; the binning interval 3 to 4 ranks 4th and indicates "satisfied"; and the binning interval 4 to 5 ranks 5th and indicates "very satisfied".
After the machine learning model is trained, the satisfaction parameter obtained for each feature vector in the feature matrix is close to, or even equal to, the satisfaction parameter label corresponding to the satisfaction evaluation grade that the feature vector represents. For example: if the satisfaction evaluation grade represented by a piece of comment data is "dissatisfied" and the satisfaction parameter label corresponding to that grade is 2, the satisfaction parameter calculated by the machine learning model for the feature vector of that comment data is close to or equal to 2.
Therefore, the satisfaction parameters obtained by the electronic device by processing the feature matrix with the machine learning model cluster around the specified number of satisfaction parameter labels, and can accordingly be discretized into the specified number of binning intervals.
It should be noted that, because the feature matrix input when the machine learning model is applied is different from the feature matrix in the training sample adopted in the training stage, and the generalization capability of the machine learning model is often limited, the distribution states of the satisfaction degree parameters obtained by the electronic device processing different feature matrices by using the machine learning model are different. In other words, when the satisfaction parameters obtained from different feature matrices are processed according to the binning method, the interval endpoints used may be different.
The electronic device can count the number of satisfaction parameters falling into each binning interval, and then take the rank order, among all the binning intervals, of the binning interval containing the most satisfaction parameters as the satisfaction evaluation grade.
For example: if the 3rd binning interval contains the most satisfaction parameters, the satisfaction evaluation grade is "moderately satisfied".
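The binning step can be illustrated as below, using the interval edges from the example above; the satisfaction parameter values are assumptions.

```python
# Count satisfaction parameters per binning interval and take the rank order
# of the fullest interval as the satisfaction evaluation grade.
import numpy as np

grades = ["very dissatisfied", "dissatisfied", "moderately satisfied",
          "satisfied", "very satisfied"]

satisfaction_params = np.array([4.2, 3.8, 4.6, 2.9, 4.1, 3.3, 4.4])

counts, _ = np.histogram(satisfaction_params, bins=[0, 1, 2, 3, 4, 5])
rank = int(np.argmax(counts))                    # index of the fullest binning interval
evaluation_grade = grades[rank]                  # "very satisfied" for these values
```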
In one embodiment shown, if the electronic device obtains the satisfaction parameters from the feature matrix through a machine learning model comprising at least two machine learning submodels, the satisfaction parameters of the different machine learning submodels can be fused into the satisfaction evaluation grade in either of two manners.
The first manner: first, the electronic device can discretize the plurality of satisfaction parameters output by each machine learning submodel into the specified number of binning intervals according to the binning method, and take the rank order, among all the binning intervals, of the binning interval containing the most satisfaction parameters as the reference satisfaction evaluation grade obtained based on that machine learning submodel.
Further, the electronic device can fuse the plurality of reference satisfaction evaluation grades to obtain the satisfaction evaluation grade.
As an embodiment, the electronic device may perform a weighted calculation on the plurality of reference satisfaction evaluation grades based on preset weights to obtain the satisfaction evaluation grade. For example: after the electronic device calculates the feature matrix with 5 machine learning submodels, the reference satisfaction evaluation grades determined from the satisfaction parameters of the 5 submodels are "very satisfied", "satisfied", "moderately satisfied", "satisfied" and "dissatisfied" respectively, and the weights corresponding to the 5 submodels are 0.1, 0.2, 0.4, 0.1 and 0.2 respectively; the satisfaction evaluation grade obtained by the weighted calculation is "moderately satisfied".
As another embodiment, the electronic device may process the plurality of reference satisfaction evaluation grades with a preset voting algorithm to obtain the satisfaction evaluation grade. For example: after the electronic device calculates the feature matrix with 5 machine learning submodels, the reference satisfaction evaluation grades determined from the satisfaction parameters of the 5 submodels are "satisfied", "moderately satisfied", "satisfied", "satisfied" and "very satisfied" respectively; the satisfaction evaluation grade can then be determined as "satisfied".
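A small sketch of the first manner is given below, using simple majority voting over the reference grades from the example above (a weighted variant would weight each submodel's vote).

```python
# Fuse reference satisfaction evaluation grades from several submodels by vote.
from collections import Counter

reference_grades = ["satisfied", "moderately satisfied", "satisfied",
                    "satisfied", "very satisfied"]

evaluation_grade = Counter(reference_grades).most_common(1)[0][0]   # "satisfied"
```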
The second manner: first, the electronic device fuses the satisfaction parameters output by all the machine learning submodels, and then discretizes the fused satisfaction parameters into the specified number of binning intervals.
The number of satisfaction parameters output by each machine learning submodel is the same as the number of feature vectors in the feature matrix. When the electronic device fuses the satisfaction parameters, it actually fuses the satisfaction parameters that the different machine learning submodels produced for the same feature vector into one satisfaction parameter, so after the fusion there is still one satisfaction parameter for each feature vector in the feature matrix. For example: machine learning submodel 1 outputs satisfaction parameters A1, A2, ..., Ac-1, Ac, and machine learning submodel 2 outputs satisfaction parameters B1, B2, ..., Bc-1, Bc; the satisfaction parameters of each machine learning submodel correspond to the c feature vectors, and the fused satisfaction parameters Q1, Q2, ..., Qc-1, Qc again correspond to the c feature vectors.
As an embodiment, the electronic device may perform weighted calculation on multiple satisfaction parameters corresponding to any feature vector based on a preset weight, so as to obtain a fused satisfaction parameter corresponding to the feature vector. Such as: if the satisfaction degree parameters calculated for any feature vector according to the 5 machine learning submodels are 5, 4, 3, 4 and 2 respectively, and the weights corresponding to the 5 machine learning submodels are 0.1, 0.2, 0.4, 0.1 and 0.2 respectively, the fused satisfaction degree parameter of the feature vector is 3.3.
Further, the electronic device may take, as the satisfaction evaluation grade, the rank order, among all the binning intervals, of the binning interval containing the most fused satisfaction parameters.
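The second manner can be sketched as below: per feature vector, the submodels' satisfaction parameters are combined with the preset weights, and the fused parameters are then binned exactly as in the single-model case; the first column reuses the example above (5, 4, 3, 4, 2 with weights 0.1, 0.2, 0.4, 0.1, 0.2, giving 3.3), and the remaining values are assumptions.

```python
# Weighted fusion of submodel satisfaction parameters, then binning.
import numpy as np

submodel_params = np.array([[5.0, 4.1, 3.9],     # rows: submodels 1..5
                            [4.0, 3.8, 4.2],     # columns: c = 3 feature vectors
                            [3.0, 4.4, 4.0],
                            [4.0, 4.2, 3.7],
                            [2.0, 3.9, 4.1]])
weights = np.array([0.1, 0.2, 0.4, 0.1, 0.2])

fused = weights @ submodel_params                # one fused satisfaction parameter per feature vector

counts, _ = np.histogram(fused, bins=[0, 1, 2, 3, 4, 5])
rank = int(np.argmax(counts))                    # rank order of the fullest binning interval
```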
In the embodiment, the satisfaction evaluation grade is determined through the satisfaction parameters of the multiple machine learning submodels, so that the calculation error of a single machine learning submodel is avoided, and the accuracy of the evaluation result is improved.
In the embodiment of the application, in order to show the satisfaction degree of the evaluation object in the customer mind in a richer form, the electronic equipment can visually present the satisfaction degree evaluation result. Wherein, the satisfaction evaluation result comprises the satisfaction evaluation grade and the key words in the comment data.
As an embodiment, the electronic device can select the several words with the highest word frequency from the word segmentation results of all comment data, and then display them in the form of a word cloud.
As another embodiment, the electronic device may present the satisfaction evaluation level of the evaluation object at the position of the evaluation object in the map page, so that the customer may intuitively know the quality of each evaluation object from the map page.
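One possible way to render such a presentation is sketched below with the third-party wordcloud package; the font path (needed for Chinese glyphs), the word frequencies and the output file name are assumptions.

```python
# Render the highest-frequency words as a word cloud image.
from collections import Counter

from wordcloud import WordCloud

word_counts = Counter({"好吃": 120, "服务": 95, "排队": 60, "环境": 40})

cloud = WordCloud(font_path="SimHei.ttf", width=600, height=400,
                  background_color="white").generate_from_frequencies(word_counts)
cloud.to_file("satisfaction_wordcloud.png")
```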
To more clearly illustrate the technical solution of the present application, refer to fig. 2, which is a schematic flow chart of a satisfaction evaluation method shown in the present application.
As shown in fig. 2, after the electronic device obtains the comment data, it first performs data exception cleaning processing on the comment data, that is, deletes invalid data in the comment data, and unifies the encoding format of the comment data.
And then, the electronic equipment carries out word segmentation on the comment data which is cleared of invalid data and has a uniform coding format, and words in the comment data are obtained. In this process, the corpus to which the word segmentation is applied may be adjusted, i.e., the first corpus may be updated.
Furthermore, the electronic equipment performs feature extraction on the vocabulary of the comment data to obtain a word frequency-inverse text frequency vector, a word vector and an attribute vector. In this process, the corpus to which feature extraction is applied may be adjusted, i.e., the word vector corpus is updated.
Then, the electronic device can perform dimension reduction processing on the extracted feature matrix, further process the feature matrix based on the trained machine learning model, and determine a satisfaction evaluation result according to the satisfaction parameters obtained through processing.
In summary, in the embodiment of the application, the electronic device can effectively analyze the comment data of the evaluation object, extract the feature matrix from the comment data, and calculate the feature matrix according to the trained machine learning model, so that the experience comment of the user on the evaluation object is more directly obtained, and the satisfaction evaluation result is accurately obtained; when a plurality of machine learning models are applied to process the characteristic matrix, errors generated when a single machine learning model processes the characteristic matrix can be reduced, so that more accurate satisfaction degree parameters are obtained, and more accurate satisfaction degree evaluation is realized;
in addition, the electronic equipment can also present the satisfaction evaluation result of the evaluation object in a visual mode, and the user experience is improved.
In correspondence with embodiments of the aforementioned satisfaction evaluation methods, the present application also provides embodiments of a satisfaction evaluation apparatus.
Referring to fig. 3, a block diagram of an embodiment of a satisfaction evaluating apparatus shown in the present application is shown:
as shown in fig. 3, the satisfaction evaluating device 30 includes:
and the word segmentation unit 310 is configured to perform word segmentation on each piece of obtained comment data of the evaluation object to obtain a word in each piece of comment data.
And the extracting unit 320 is configured to extract a feature vector from each piece of comment data based on the obtained vocabulary of each piece of comment data, and superimpose the feature vectors of each piece of comment data to obtain a feature matrix.
The calculating unit 330 is configured to input the feature matrix into the trained machine learning model, so that the machine learning model calculates the feature matrix according to a regression algorithm and outputs a plurality of satisfaction parameters.
A determining unit 340, configured to discretize the plurality of satisfaction parameters into a specified number of binning intervals, and take the rank order, among all the binning intervals, of the binning interval containing the most satisfaction parameters as the satisfaction evaluation level.
In this example, the apparatus further comprises:
a preprocessing unit 350 (not shown in the figure) for preprocessing the comment data to obtain comment data meeting the preset word segmentation conditions; wherein the word segmentation condition at least comprises: the format of the comment data conforms to the specified encoding format, and invalid data does not exist in the comment data.
In this example, the extracting unit 320 is further configured to:
calculating the word frequency-inverse text frequency of each vocabulary in each comment data according to each vocabulary of each comment data, and generating a word frequency-inverse text frequency vector of each comment data according to the word frequency-inverse text frequency of each vocabulary;
converting each vocabulary in each piece of comment data into a word vector according to a preset word vector conversion algorithm and a word vector corpus; the corpus comprises a mapping relation between words and word vectors;
determining an attribute vector of each piece of comment data according to a preset multi-dimensional attribute screening strategy;
and combining the word frequency-inverse text frequency vector, the word vector and the attribute vector of each piece of comment data to obtain a feature vector of the comment data.
In this example, the calculating unit 330 is further configured to:
and performing dimension reduction processing on the feature matrix, and inputting the feature matrix subjected to the dimension reduction processing into the machine learning model.
In this example, the apparatus further comprises a presentation unit 360 (not shown in the figures) for:
selecting the words with the highest word frequency from a certain amount of comment data and presenting them visually; or,
and presenting the satisfaction evaluation result of the evaluation object on the position of the evaluation object in the map page.
In this example, the machine learning model includes at least two machine learning submodels, each machine learning submodel calculates the feature matrix according to its own regression algorithm and outputs a plurality of satisfaction degree parameters;
the determining unit 340 is further configured to:
discretizing the plurality of satisfaction degree parameters output by each machine learning submodel into the specified number of binning intervals, and taking the rank order, among all the binning intervals, of the binning interval containing the most satisfaction degree parameters as the reference satisfaction degree evaluation grade obtained by that machine learning submodel; and fusing the plurality of reference satisfaction degree evaluation grades to obtain the satisfaction degree evaluation grade; or,
fusing the satisfaction degree parameters output by all the machine learning submodels, discretizing the fused satisfaction degree parameters into the specified number of binning intervals, and taking the rank order, among all the binning intervals, of the binning interval containing the most fused satisfaction degree parameters as the satisfaction degree evaluation grade.
The embodiment of the satisfaction evaluation device can be applied to electronic equipment. The device embodiments may be implemented by software, or by hardware, or by a combination of hardware and software. Taking a software implementation as an example, as a logical device, the device is formed by reading, by a processor of the electronic device where the device is located, a corresponding computer program instruction in the nonvolatile memory into the memory for operation.
From the hardware level, as shown in fig. 4, a hardware structure diagram of an electronic device where the satisfaction evaluating apparatus of the present application is located is shown,
the electronic device may include a processor 401, a machine-readable storage medium 402 having machine-executable instructions stored thereon. The processor 401 and the machine-readable storage medium 402 may communicate via a system bus 403. The processor 401 may be capable of performing the above-described satisfaction evaluation by loading and executing machine-executable instructions stored by the machine-readable storage medium 402.
The machine-readable storage medium 402 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disc (e.g., an optical disc, a DVD, etc.), a similar storage medium, or a combination thereof.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (13)

1. A satisfaction evaluation method characterized by comprising:
performing word segmentation on each piece of comment data of the obtained evaluation object to obtain words in each piece of comment data;
extracting a feature vector from each piece of comment data based on the obtained vocabulary of each piece of comment data, and overlapping the feature vectors of each piece of comment data to obtain a feature matrix;
inputting the feature matrix into a trained machine learning model, calculating the feature matrix by the machine learning model according to a regression algorithm, and outputting a plurality of satisfaction degree parameters;
discretizing the plurality of satisfaction degree parameters into a specified number of binning intervals, and taking the rank order, among all the binning intervals, of the binning interval containing the most satisfaction degree parameters as the satisfaction degree evaluation grade.
2. The method of claim 1, wherein prior to tokenizing the review data, the method further comprises:
preprocessing the comment data to obtain comment data meeting preset word segmentation conditions; wherein the word segmentation condition at least comprises: the format of the comment data conforms to the specified encoding format, and invalid data does not exist in the comment data.
3. The method of claim 1, wherein the extracting a feature vector from each piece of comment data based on the vocabulary of each piece of comment data obtained comprises:
calculating a term frequency-inverse document frequency (TF-IDF) value for each word in each piece of comment data based on the words of that piece of comment data, and generating a TF-IDF vector for each piece of comment data from the TF-IDF values of its words;
converting each word in each piece of comment data into a word vector according to a preset word vector conversion algorithm and a word vector corpus, the corpus comprising a mapping relation between words and word vectors;
determining an attribute vector of each piece of comment data according to a preset multi-dimensional attribute screening strategy;
and combining the TF-IDF vector, the word vector, and the attribute vector of each piece of comment data to obtain the feature vector of that piece of comment data.
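A sketch of the feature construction in claim 3. The word vector corpus is modelled as a plain dict mapping words to numpy arrays, the word vectors are fused by simple averaging, and the multi-dimensional attribute screening strategy is replaced by two illustrative attributes (word count and exclamation-mark count); these concrete choices are assumptions rather than the application's specifics, and the TF-IDF vector is assumed to already be a dense 1-D array.

import numpy as np

def build_feature_vector(words, tfidf_vector, word_vectors, dim=100):
    # Word-vector component: average of the word vectors found in the corpus.
    hits = [word_vectors[w] for w in words if w in word_vectors]
    wv = np.mean(hits, axis=0) if hits else np.zeros(dim)
    # Attribute component: hand-picked attributes of the comment (illustrative only).
    attr = np.array([len(words), sum(w == "！" for w in words)], dtype=float)
    # Combine TF-IDF vector, word vector, and attribute vector into one feature vector.
    return np.concatenate([tfidf_vector, wv, attr])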
4. The method of claim 1, wherein inputting the feature matrix to a trained machine learning model comprises:
performing dimension reduction on the feature matrix, and inputting the dimension-reduced feature matrix into the machine learning model.
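Claim 4 does not name a dimension-reduction method; the sketch below uses scikit-learn's TruncatedSVD purely as one plausible choice that works on sparse TF-IDF-style matrices. In practice the reducer would be fitted once on training data and reused at prediction time; fit_transform is used here only for brevity.

from sklearn.decomposition import TruncatedSVD

def reduce_and_predict(feature_matrix, model, n_components=128):
    # Reduce the high-dimensional feature matrix before it enters the regression model.
    svd = TruncatedSVD(n_components=n_components, random_state=0)
    reduced = svd.fit_transform(feature_matrix)
    # The model is assumed to have been trained on features reduced in the same way.
    return model.predict(reduced)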
5. The method of claim 3, further comprising:
selecting the words with the highest word frequency in a certain amount of comment data and presenting them visually; or,
presenting the satisfaction evaluation result of the evaluation object at the position of the evaluation object on a map page.
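For the first branch of claim 5, a small sketch of picking the highest-frequency words from a batch of segmented comments; the actual visual presentation (a word cloud, or an overlay at the evaluation object's position on a map page) is left to whatever front end is in use, and the helper name top_words is an assumption.

from collections import Counter

def top_words(segmented_comments, k=20):
    # segmented_comments: an iterable of word lists, one list per piece of comment data.
    counter = Counter(w for words in segmented_comments for w in words)
    return counter.most_common(k)   # [(word, frequency), ...], ready for plotting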
6. The method of claim 1, wherein the machine learning model comprises at least two machine learning submodels, each machine learning submodel performing calculation on the feature matrix according to its own regression algorithm and outputting a plurality of satisfaction parameters;
the discretizing the plurality of satisfaction parameters into a specified number of bins and taking the rank order, among all the bins, of the bin containing the most satisfaction parameters as the satisfaction evaluation grade comprises:
discretizing the plurality of satisfaction parameters output by each machine learning submodel into the specified number of bins, taking the rank order, among all the bins, of the bin containing the most satisfaction parameters as a reference satisfaction evaluation grade obtained by that machine learning submodel, and fusing the plurality of reference satisfaction evaluation grades to obtain the satisfaction evaluation grade; or,
fusing the satisfaction parameters output by all the machine learning submodels, discretizing the fused satisfaction parameters into the specified number of bins, and taking the rank order, among all the bins, of the bin containing the most fused satisfaction parameters as the satisfaction evaluation grade.
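A sketch of the two fusion branches in claim 6, assuming each submodel's output is a 1-D numpy array of satisfaction parameters and assuming simple averaging as the fusion rule (the claim leaves the fusion method open).

import numpy as np

def grade_from_params(params, num_bins=5):
    # Bin the parameters and return the rank order of the most populated bin.
    counts, _ = np.histogram(params, bins=num_bins)
    return int(np.argmax(counts)) + 1

def fuse_grades(per_model_params, num_bins=5):
    # Branch 1: bin each submodel's parameters separately, then fuse the reference grades.
    grades = [grade_from_params(p, num_bins) for p in per_model_params]
    return int(round(np.mean(grades)))

def fuse_params(per_model_params, num_bins=5):
    # Branch 2: fuse the submodels' parameters first (element-wise mean), then bin once.
    fused = np.mean(np.vstack(per_model_params), axis=0)
    return grade_from_params(fused, num_bins)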
7. A satisfaction evaluating apparatus, characterized by comprising:
the word segmentation unit is used for carrying out word segmentation on each piece of comment data of the obtained evaluation object to obtain words in each piece of comment data;
the extracting unit is used for extracting a feature vector from each piece of comment data based on the obtained vocabulary of each piece of comment data, and stacking the feature vectors of the pieces of comment data to obtain a feature matrix;
the calculation unit is used for inputting the feature matrix into a trained machine learning model, the machine learning model performing calculation on the feature matrix according to a regression algorithm and outputting a plurality of satisfaction parameters;
and the determining unit is used for discretizing the plurality of satisfaction parameters into a specified number of bins, and taking the rank order, among all the bins, of the bin containing the most satisfaction parameters as the satisfaction evaluation grade.
8. The apparatus of claim 7, further comprising:
the preprocessing unit is used for preprocessing the comment data to obtain comment data that meets a preset word segmentation condition; wherein the word segmentation condition at least comprises that the format of the comment data conforms to a specified encoding format and that no invalid data exists in the comment data.
9. The apparatus of claim 7, wherein the extraction unit is further configured to:
calculating a term frequency-inverse document frequency (TF-IDF) value for each word in each piece of comment data based on the words of that piece of comment data, and generating a TF-IDF vector for each piece of comment data from the TF-IDF values of its words;
converting each word in each piece of comment data into a word vector according to a preset word vector conversion algorithm and a word vector corpus, the corpus comprising a mapping relation between words and word vectors;
determining an attribute vector of each piece of comment data according to a preset multi-dimensional attribute screening strategy;
and combining the TF-IDF vector, the word vector, and the attribute vector of each piece of comment data to obtain the feature vector of that piece of comment data.
10. The apparatus of claim 7, wherein the computing unit is further configured to:
performing dimension reduction on the feature matrix, and inputting the dimension-reduced feature matrix into the machine learning model.
11. The apparatus according to claim 9, further comprising a presentation unit configured to:
selecting the words with the highest word frequency in a certain amount of comment data and presenting them visually; or,
presenting the satisfaction evaluation result of the evaluation object at the position of the evaluation object on a map page.
12. The apparatus of claim 7, wherein the machine learning model comprises at least two machine learning submodels, each machine learning submodel respectively calculates the feature matrix according to its own regression algorithm and outputs a plurality of satisfaction parameters;
the determining unit is further configured to:
discretizing the plurality of satisfaction parameters output by each machine learning submodel into the specified number of bins, taking the rank order, among all the bins, of the bin containing the most satisfaction parameters as a reference satisfaction evaluation grade obtained by that machine learning submodel, and fusing the plurality of reference satisfaction evaluation grades to obtain the satisfaction evaluation grade; or,
fusing the satisfaction parameters output by all the machine learning submodels, discretizing the fused satisfaction parameters into the specified number of bins, and taking the rank order, among all the bins, of the bin containing the most fused satisfaction parameters as the satisfaction evaluation grade.
13. An electronic device comprising a memory, a processor, and machine-executable instructions stored on the memory and executable on the processor, wherein the machine-executable instructions, when executed by the processor, implement the satisfaction evaluation method of any of claims 1-6.
CN201811555021.4A 2018-12-19 2018-12-19 Satisfaction evaluation method and device and electronic equipment Active CN111415176B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811555021.4A CN111415176B (en) 2018-12-19 2018-12-19 Satisfaction evaluation method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811555021.4A CN111415176B (en) 2018-12-19 2018-12-19 Satisfaction evaluation method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111415176A true CN111415176A (en) 2020-07-14
CN111415176B CN111415176B (en) 2023-06-30

Family

ID=71490687

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811555021.4A Active CN111415176B (en) 2018-12-19 2018-12-19 Satisfaction evaluation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111415176B (en)


Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1276061A1 (en) * 2001-07-09 2003-01-15 Accenture Computer based system and method of determining a satisfaction index of a text
CN101174273A (en) * 2007-12-04 2008-05-07 清华大学 News event detecting method based on metadata analysis
AU2009260033A1 (en) * 2008-06-19 2009-12-23 Wize Technologies, Inc. System and method for aggregating and summarizing product/topic sentiment
US10037491B1 (en) * 2014-07-18 2018-07-31 Medallia, Inc. Context-based sentiment analysis
CN105654250A (en) * 2016-02-01 2016-06-08 百度在线网络技术(北京)有限公司 Method and device for automatically assessing satisfaction degree
WO2017133165A1 (en) * 2016-02-01 2017-08-10 百度在线网络技术(北京)有限公司 Method, apparatus and device for automatic evaluation of satisfaction and computer storage medium
CN105930503A (en) * 2016-05-09 2016-09-07 清华大学 Combination feature vector and deep learning based sentiment classification method and device
CN106156004A (en) * 2016-07-04 2016-11-23 中国传媒大学 The sentiment analysis system and method for film comment information based on term vector
CN106202481A (en) * 2016-07-18 2016-12-07 量子云未来(北京)信息科技有限公司 The evaluation methodology of a kind of perception data and system
JP2018097610A (en) * 2016-12-13 2018-06-21 パナソニックIpマネジメント株式会社 Satisfaction degree evaluation device, satisfaction degree evaluation method, and satisfaction degree evaluation program
WO2018161880A1 (en) * 2017-03-08 2018-09-13 腾讯科技(深圳)有限公司 Media search keyword pushing method, device and data storage media
CN107527231A (en) * 2017-07-27 2017-12-29 温州市鹿城区中津先进科技研究院 Electric business customer satisfaction evaluation method based on natural language analysis
CN107679754A (en) * 2017-09-30 2018-02-09 国网安徽省电力公司经济技术研究院 Power consumer satisfaction evaluation method based on advanced AHP and fuzzy theory
CN108038725A (en) * 2017-12-04 2018-05-15 中国计量大学 A kind of electric business Customer Satisfaction for Product analysis method based on machine learning
CN108763477A (en) * 2018-05-29 2018-11-06 厦门快商通信息技术有限公司 A kind of short text classification method and system

Also Published As

Publication number Publication date
CN111415176B (en) 2023-06-30

Similar Documents

Publication Title
CN110084271B (en) Method and device for identifying picture category
US20190333118A1 (en) Cognitive product and service rating generation via passive collection of user feedback
CN107194430B (en) Sample screening method and device and electronic equipment
CN109190109B (en) Method and device for generating comment abstract by fusing user information
WO2021254457A1 (en) Method and device for constructing knowledge graph, computer device, and storage medium
US20200057948A1 (en) Automatic prediction system, automatic prediction method and automatic prediction program
CN112288455A (en) Label generation method and device, computer readable storage medium and electronic equipment
CN110796171A (en) Unclassified sample processing method and device of machine learning model and electronic equipment
CN110458600A (en) Portrait model training method, device, computer equipment and storage medium
CN111079937A (en) Rapid modeling method
CN108885628A (en) Data analysing method candidate's determination device
CN110968664A (en) Document retrieval method, device, equipment and medium
CN111309819A (en) Training table index extraction model, and method and system for extracting table indexes
CN110472659B (en) Data processing method, device, computer readable storage medium and computer equipment
CN109460474B (en) User preference trend mining method
CN112434862A (en) Financial predicament method and device for enterprise on market
CN111415176A (en) Satisfaction evaluation method and device and electronic equipment
CN116777281A (en) ARIMA model-based power equipment quality trend prediction method and device
KR20210091591A (en) An electronic device including evaluation operation of originated technology
CN115114073A (en) Alarm information processing method and device, storage medium and electronic equipment
CN114023407A (en) Health record missing value completion method, system and storage medium
CN114897607A (en) Data processing method and device for product resources, electronic equipment and storage medium
CN110162629B (en) Text classification method based on multi-base model framework
CN113962216A (en) Text processing method and device, electronic equipment and readable storage medium
CN112395855A (en) Comment-based evaluation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant