CN112215014A - Portrait generation method, apparatus, medium and device based on user comment - Google Patents

Portrait generation method, apparatus, medium and device based on user comment

Info

Publication number
CN112215014A
CN112215014A (application number CN202011092617.2A)
Authority
CN
China
Prior art keywords
data
user comment
target
comment data
target user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011092617.2A
Other languages
Chinese (zh)
Inventor
黄乐树
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An International Smart City Technology Co Ltd
Original Assignee
Ping An International Smart City Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An International Smart City Technology Co Ltd
Priority to CN202011092617.2A
Publication of CN112215014A
Legal status: Pending

Classifications

    • G: Physics
    • G06: Computing; calculating or counting
    • G06F: Electric digital data processing
    • G06F40/00: Handling natural language data
    • G06F40/30: Semantic analysis
    • G06F40/35: Discourse or dialogue representation
    • G: Physics
    • G06: Computing; calculating or counting
    • G06F: Electric digital data processing
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G: Physics
    • G06: Computing; calculating or counting
    • G06F: Electric digital data processing
    • G06F40/00: Handling natural language data
    • G06F40/20: Natural language analysis
    • G06F40/279: Recognition of textual entities
    • G06F40/289: Phrasal analysis, e.g. finite state techniques or chunking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Document Processing Apparatus (AREA)

Abstract

The application relates to the technical field of artificial intelligence and discloses a portrait generation method, apparatus, medium and device based on user comments. The portrait generation method comprises the following steps: obtaining target user comment data, wherein the target user comment data are data obtained based on user comments of the same commented object and are stored in a blockchain; performing semantic analysis on the target user comment data to obtain a semantic analysis result; performing sentiment analysis on the target user comment data to obtain a sentiment analysis result; and generating a target portrait according to the semantic analysis result and the sentiment analysis result, wherein the target portrait is stored in a blockchain. The method and the device make full use of user comments and realize multi-dimensional evaluation of the commented object. The application also relates to blockchain technology and is applicable to fields such as smart government affairs, smart healthcare, and fintech.

Description

Portrait generation method, apparatus, medium and device based on user comment
Technical Field
The application relates to the field of artificial intelligence, and in particular to a portrait generation method, apparatus, medium and device based on user comments.
Background
In most internet products, user comments are an indispensable function, and a large amount of comment data from users on merchants, customer service staff, operators, logistics staff, commodities and the like is collected. Prior-art systems, however, still apply user comments only at an initial level, namely an overall score, such as the common five-star or satisfaction rating. This single application dimension cannot fully exploit the value of user comments. In addition, the true feelings of the commenting party are ignored, evaluation of the commented party remains superficial, and user comments easily become a mere formality.
Disclosure of Invention
The application mainly aims to provide a portrait generation method, apparatus, medium and device based on user comments, so as to solve the technical problem in the prior art that the application dimension of user comments is single.
In order to achieve the above object, the present application provides a portrait generation method based on user comments, the method including:
obtaining target user comment data, wherein the target user comment data are data obtained based on user comments of the same commented object, and the target user comment data are stored in a blockchain;
performing semantic analysis on the target user comment data to obtain a semantic analysis result;
performing sentiment analysis on the target user comment data to obtain a sentiment analysis result;
and generating a target portrait according to the semantic analysis result and the sentiment analysis result, wherein the target portrait is stored in a blockchain.
Further, the step of obtaining target user comment data, wherein the target user comment data are data obtained based on user comments of the same commented object and are stored in the blockchain, includes:
obtaining user comment data to be analyzed, wherein the user comment data to be analyzed are user comments of the same commented object;
and preprocessing the user comment data to be analyzed to obtain the target user comment data.
Further, the step of preprocessing the user comment data to be analyzed to obtain the target user comment data includes:
carrying out invalid data identification and deletion processing on the user comment data to be analyzed to obtain the user comment from which the invalid data is removed;
carrying out special meaning data identification and conversion on the user comment from which the invalid data is removed to obtain the user comment from which the special meaning is converted;
carrying out redundancy and missing-data processing on the user comments after the special meaning conversion to obtain cleaned user comments;
and performing text error correction on the cleaned user comment to obtain the target user comment data.
Further, the step of performing semantic analysis on the target user comment data to obtain a semantic analysis result includes:
segmenting words of the target user comment data to obtain segmented word data;
and carrying out semantic analysis on the word segmentation data to obtain a semantic analysis result.
Further, the step of performing word segmentation on the target user comment data to obtain word segmentation data includes:
taking each sentence of the target user comment data as data to be split;
splitting the data to be split to obtain a splitting result;
searching the splitting result in a dictionary library, taking the splitting result as a word segmentation result when the splitting result is in the dictionary library, and taking the splitting result as the data to be split if the splitting result is not in the dictionary library, and executing the step of splitting the data to be split to obtain the splitting result;
and taking all the word segmentation results as the word segmentation data.
Further, the step of generating a target portrait according to the semantic analysis result and the emotion analysis result, wherein the target portrait is stored in a blockchain, includes:
performing feature selection on the semantic analysis result to obtain first feature data;
performing feature selection on the emotion analysis result to obtain second feature data;
performing classification labeling according to the first feature data and the second feature data to obtain a labeling result;
and inputting the labeling result into a portrait model for portrait generation to obtain the target portrait, wherein the portrait model is obtained based on pointer-generator network training.
Further, the step of performing sentiment analysis on the target user comment data to obtain a sentiment analysis result includes:
extracting emotion words from the target user comment data to obtain an emotion word set;
and extracting emotional tendency according to the emotional word set to obtain the emotional analysis result.
The application also provides a portrait generation apparatus based on user comments, and the apparatus includes:
the system comprises a user comment acquisition module, a block chain module and a comment processing module, wherein the user comment acquisition module is used for acquiring target user comment data, and the target user comment data are data obtained based on user comments of the same commented object and are stored in the block chain;
the semantic analysis module is used for carrying out semantic analysis on the target user comment data to obtain a semantic analysis result;
the emotion analysis module is used for carrying out emotion analysis on the target user comment data to obtain an emotion analysis result;
and a portrait module, configured to generate a target portrait according to the semantic analysis result and the emotion analysis result, wherein the target portrait is stored in a blockchain.
The present application further proposes a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the steps of any of the above methods when executing the computer program.
The present application also proposes a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method of any of the above.
With the portrait generation method, apparatus, medium and device based on user comments of the present application, the target user comment data are data obtained based on user comments of the same commented object and are stored in a blockchain; semantic analysis is performed on the target user comment data to obtain a semantic analysis result; sentiment analysis is performed on the target user comment data to obtain a sentiment analysis result; and a target portrait is generated according to the semantic analysis result and the sentiment analysis result, wherein the target portrait is stored in a blockchain. Since the target portrait is obtained from the target user comment data, user comments are fully utilized, and multi-dimensional evaluation of the commented object is realized through the target portrait. The application also relates to blockchain technology and is applicable to fields such as smart government affairs, smart healthcare, and fintech.
Drawings
FIG. 1 is a schematic flow chart illustrating a portrait generation method based on user comments according to an embodiment of the present application;
FIG. 2 is a block diagram schematically illustrating a structure of a portrait generation apparatus based on user comments according to an embodiment of the present application;
FIG. 3 is a block diagram illustrating a structure of a computer device according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
With reference to fig. 1, in an embodiment of the present application, a portrait generation method based on user comments is provided, where the method includes:
S1: obtaining target user comment data, wherein the target user comment data are data obtained based on user comments of the same commented object, and the target user comment data are stored in a blockchain;
S2: performing semantic analysis on the target user comment data to obtain a semantic analysis result;
S3: performing sentiment analysis on the target user comment data to obtain a sentiment analysis result;
S4: generating a target portrait according to the semantic analysis result and the sentiment analysis result, wherein the target portrait is stored in a blockchain.
In this embodiment, the target user comment data are data obtained based on user comments of the same commented object and are stored in a blockchain. Semantic analysis is performed on the target user comment data to obtain a semantic analysis result; sentiment analysis is performed on the target user comment data to obtain a sentiment analysis result; and a target portrait is generated according to the semantic analysis result and the sentiment analysis result and stored in a blockchain. Since the target portrait is obtained from the target user comment data, user comments are fully utilized, and multi-dimensional evaluation of the commented object is realized through the target portrait.
For step S1, target user comment data is acquired from the database. The target user comment data may be data obtained based on all user comments of the same commented object, or data obtained based on part of the user comments of the same commented object. That is, the target user comment data contains at least one user comment on the commented object.
Preferably, the target user comment data may be data obtained based on user comments of the same commented object within a preset time period, so that the target user comment data can objectively evaluate the state of the commented object within the preset time period, and the accuracy of the target portrait is improved.
The preset time period comprises a start time and an end time.
The user comment comprises at least one of characters, symbols, numbers and icons.
The commented objects include, but are not limited to, businesses, merchants, customer service personnel, operators, logistics personnel, merchandise, and the like.
It should be emphasized that, in order to further ensure the privacy and security of the target user comment data, the target user comment data may also be stored in a node of a blockchain.
For step S2, word segmentation is performed on the target user comment data, semantic analysis is performed on the word segmentation result, and semantic recognition is realized accordingly, thereby obtaining the semantic analysis result.
Word segmentation for Chinese includes dictionary-based segmentation and statistics-based segmentation. In dictionary-based segmentation, the target user comment data is first split into several parts, and each part is then looked up in the dictionary library for a match; when a match succeeds, the part is taken as a word segmentation result, otherwise the part is split and matched again until matching succeeds. In statistics-based segmentation, candidate words formed by adjacent characters in the target user comment data are first collected, the frequency of each candidate word is counted in a corpus to obtain frequency statistics, the maximum value is found among the statistics, and the candidate word corresponding to that maximum frequency is taken as a word segmentation result.
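As an illustrative sketch of the statistics-based approach: candidate words formed from adjacent tokens are scored by corpus frequency, and the highest-frequency candidate at each position is kept. The frequency table and the greedy merging strategy below are assumptions for the example (English tokens stand in for Chinese characters), not the patent's implementation:

```python
# Hypothetical corpus frequency table (candidate word -> count);
# a real system would derive this from a large corpus.
CORPUS_FREQ = {
    "quality": 120, "very": 300, "good": 250, "very good": 400,
    "service": 180, "fast": 75,
}

def freq_segment(tokens, max_len=2):
    """Greedy statistics-based segmentation: at each position, keep the
    candidate (up to max_len adjacent tokens) with the highest corpus
    frequency, then advance past it."""
    result, i = [], 0
    while i < len(tokens):
        best, best_freq = tokens[i], CORPUS_FREQ.get(tokens[i], 0)
        for n in range(2, max_len + 1):
            cand = " ".join(tokens[i:i + n])
            if CORPUS_FREQ.get(cand, 0) > best_freq:
                best, best_freq = cand, CORPUS_FREQ[cand]
        result.append(best)
        i += len(best.split())  # skip the tokens consumed by the best word
    return result
```

Here "very good" (frequency 400) outscores "very" alone (300), so the two tokens are merged into one segmentation result.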
A corpus stores language material that has actually appeared in real language use; it is a basic resource, carried on computers, that embodies language knowledge. Raw corpora need to be processed (analyzed and annotated) before they become a useful resource.
For step S3, extracting emotion words from the comment data of the target user to obtain an emotion word set; and extracting emotional tendency according to the emotional word set to obtain the emotional analysis result.
Preferably, matching is carried out in an emotion dictionary according to the emotion word set to obtain the emotion analysis result.
Preferably, the emotion word set is input into an emotion analysis model for emotion tendency extraction, and the emotion analysis result output by the emotion analysis model is obtained. The emotion analysis model can be a model obtained based on neural network training.
For step S4, classification labeling and portrait generation are performed according to the semantic analysis result and the emotion analysis result to obtain the target portrait. Here, portrait generation is the process of determining the values of preset comment features. The target portrait includes the value of at least one preset comment feature, and the value of a preset comment feature can be a score or a word.
The preset comment characteristics can be determined according to the purpose of the target portrait and the object to be commented. For example, when the object to be commented on is a commodity, the preset comment features include, but are not limited to: commodity quality characteristics, cost performance characteristics, and operability characteristics. For another example, when the object to be commented on is a logistics person, the preset comment features include, but are not limited to: service attitude characteristics, service specification characteristics.
Since the target user comment data is data obtained based on user comments of the same commented object, the target portrait obtained from the target user comment data is a portrait of that same commented object.
It is emphasized that, in order to further ensure the privacy and security of the target portrait, the target portrait may also be stored in a node of a blockchain.
In an embodiment, the step of obtaining target user comment data, wherein the target user comment data are data obtained based on user comments of the same commented object and are stored in the blockchain, includes:
S11: obtaining user comment data to be analyzed, wherein the user comment data to be analyzed are user comments of the same commented object;
S12: preprocessing the user comment data to be analyzed to obtain the target user comment data.
In this embodiment, the target user comment data are obtained by preprocessing the user comment data to be analyzed; the preprocessing reduces noise in the target user comment data and improves the accuracy of the target portrait.
For step S11, the user comment data to be analyzed may be all user comments of the same commented object, or may be part of the user comments of the same commented object.
Preferably, the user comment data to be analyzed may be based on user comments of the same commented object within a preset time period, so that the user comment data to be analyzed can objectively evaluate the state of the commented object within that period.
For step S12, the invalid data, special meaning data, redundant data, missing data, and erroneous text data in the user comment data to be analyzed are processed to obtain the target user comment data.
In an embodiment, the step of preprocessing the user comment data to be analyzed to obtain the target user comment data includes:
S121: carrying out invalid data identification and deletion processing on the user comment data to be analyzed to obtain the user comment from which the invalid data is removed;
S122: carrying out special meaning data identification and conversion on the user comment from which the invalid data is removed to obtain the user comment after special meaning conversion;
S123: carrying out redundancy and missing-data processing on the user comment after special meaning conversion to obtain the cleaned user comment;
S124: performing text error correction on the cleaned user comment to obtain the target user comment data.
This embodiment realizes the processing of invalid data, special meaning data, redundant data, missing data and erroneous text data in the user comment data to be analyzed, thereby removing noise from the user comment data to be analyzed, which is beneficial to improving the accuracy of the target portrait.
For step S121, inputting the user comment data to be analyzed into an invalid data cleaning model for cleaning, to obtain the user comment from which the invalid data is removed; the invalid data cleaning model can be obtained by adopting unsupervised learning network training.
The invalid data includes: non-text data. Non-textual data includes, but is not limited to, html tags, URL addresses.
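Removal of such non-text data can be sketched with simple patterns; the regular expressions below are illustrative of deleting HTML tags and URL addresses, whereas the patent's cleaner is a trained model as described above:

```python
import re

HTML_TAG = re.compile(r"<[^>]+>")     # e.g. <b>, </div>
URL = re.compile(r"https?://\S+")     # e.g. https://example.com/x

def strip_invalid(comment):
    """Remove non-text data (HTML tags and URL addresses) from a comment,
    then collapse the leftover whitespace."""
    comment = HTML_TAG.sub("", comment)
    comment = URL.sub("", comment)
    return " ".join(comment.split())
```

For example, `"<b>great</b> see https://x.com now"` becomes `"great see now"`.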
For step S122, the special meaning data includes, but is not limited to, emoticons, letter strings, and numeric strings. The emoticons are divided into emoticons and emoticons without emoticons. The letter string is comprised of a plurality of letters. The string of numbers includes a plurality of numbers.
The steps of the special meaning data identification and conversion specifically comprise:
S1221: carrying out icon recognition on the user comment from which the invalid data is removed to obtain an emoticon set; determining a non-emotion icon subset and an emotion icon subset from the emoticon set; deleting the non-emotion icons from the user comment according to the non-emotion icon subset to obtain a user comment with non-emotion icons removed; performing emotion label conversion according to the emotion icon subset to obtain an emotion label conversion result; replacing icons in the user comment with non-emotion icons removed according to the emotion label conversion result to obtain the user comment after icon conversion;
S1222: performing letter-string recognition and conversion on the user comment after icon conversion to obtain the user comment after letter-string conversion;
S1223: performing numeric-string recognition and conversion on the user comment after letter-string conversion to obtain the user comment after special meaning conversion.
For step S1221, all icons in the non-emotion icon subset are icons that carry no emotion, and all icons in the emotion icon subset are icons that carry emotion.
Replacing according to the emotion label conversion result means that each emotion icon in the user comment with non-emotion icons removed is replaced by its corresponding emotion label, and the user comment after this replacement is taken as the user comment after icon conversion.
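A minimal sketch of step S1221's icon handling; the icon tables and label names below are hypothetical (a real system would use a curated mapping of emoticons to emotion labels):

```python
# Hypothetical tables: emotion icons map to emotion labels,
# non-emotion icons are deleted outright.
EMOTION_ICONS = {":)": "[positive]", ":(": "[negative]"}
NON_EMOTION_ICONS = {"[image]", "*"}

def convert_icons(tokens):
    """S1221: delete non-emotion icons, replace emotion icons by their
    emotion labels, and keep ordinary tokens unchanged."""
    out = []
    for t in tokens:
        if t in NON_EMOTION_ICONS:
            continue  # deletion step for icons that carry no emotion
        out.append(EMOTION_ICONS.get(t, t))  # replacement step
    return out
```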
For step S1222, letter-string recognition is performed on the user comment after icon conversion to obtain the letter strings to be processed; meaning recognition is performed on each letter string to obtain a letter-string meaning recognition result. When the result is meaningful, the letter string is replaced in the comment according to the recognition result; otherwise the letter string is deleted. The output is the user comment after letter-string conversion.
For step S1223, numeric-string recognition is performed on the user comment after letter-string conversion to obtain the numeric strings to be processed; meaning recognition is performed on each numeric string to obtain a numeric-string meaning recognition result. When the result is meaningful, the numeric string is replaced in the comment according to the recognition result; otherwise the numeric string is deleted. The output is the user comment after special meaning conversion.
For step S123, the user comment after special meaning conversion is input into a redundancy elimination model for redundancy recognition and elimination to obtain a redundancy-removed user comment; the redundancy-removed user comment is then input into a missing-data completion model for missing-text recognition and completion to obtain the cleaned user comment.
The redundancy elimination model can be a model obtained by training a neural network.
The missing completion model can be a model obtained by training a neural network.
For step S124, the cleaned user comment is input into a text error correction model for text correction to obtain the target user comment data.
The text error correction model may employ an N-gram model (a language model commonly used in large vocabulary continuous speech recognition).
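A toy sketch of N-gram-style correction: a word is replaced by a confusion-set candidate when the candidate, unlike the original word, forms a known bigram with the preceding word. The bigram counts and confusion set below are invented for the example and stand in for statistics learned from a large corpus:

```python
# Hypothetical bigram table (word pair -> corpus count) and confusion set.
BIGRAMS = {("very", "good"): 50, ("arrived", "late"): 20}
CONFUSION = {"goof": ["good", "go"]}

def correct(tokens):
    """Replace a token by the first confusion-set candidate that forms a
    known bigram with its left neighbor; leave known bigrams untouched."""
    out = list(tokens)
    for i in range(1, len(out)):
        if (out[i - 1], out[i]) in BIGRAMS:
            continue  # the existing bigram is already plausible
        for cand in CONFUSION.get(out[i], []):
            if (out[i - 1], cand) in BIGRAMS:
                out[i] = cand  # candidate restores a known bigram
                break
    return out
```

For example, `["very", "goof"]` is corrected to `["very", "good"]` because `("very", "good")` is a known bigram.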
In an embodiment, the step of performing semantic analysis on the target user comment data to obtain a semantic analysis result includes:
S21: segmenting words of the target user comment data to obtain segmented word data;
S22: carrying out semantic analysis on the word segmentation data to obtain the semantic analysis result.
For step S21, word segmentation is a process of recombining continuous word sequences into word sequences according to a certain specification.
With respect to step S22, semantic analysis is performed on the segmented word data based on NLP (natural language processing) and machine learning techniques.
Semantic analysis uses various machine learning methods to mine and learn deep-level concepts in text and pictures. Semantic analysis includes lexical-level semantic analysis and sentence-level semantic analysis. Lexical-level semantic analysis mainly divides into two parts: word sense disambiguation and word similarity. Word sense disambiguation includes semantic disambiguation based on background knowledge, supervised semantic disambiguation, semi-supervised learning methods, and unsupervised learning methods; the background-knowledge-based methods are rule-based, while the others are machine learning methods. Sentence-level semantic analysis divides into shallow semantic analysis and deep semantic analysis.
In an embodiment, the step of performing word segmentation on the target user comment data to obtain word segmentation data includes:
S211: taking each sentence of the target user comment data as data to be split;
S212: splitting the data to be split to obtain a splitting result;
S213: searching for the splitting result in a dictionary library; when the splitting result is in the dictionary library, taking the splitting result as a word segmentation result; when it is not, taking the splitting result as the data to be split and returning to the step of splitting the data to be split to obtain a splitting result;
S214: taking all the word segmentation results as the word segmentation data.
The embodiment realizes word segmentation based on the dictionary database.
From step S211 to step S214, it can be known that all words in the segmentation data can be found in the dictionary database, thereby being beneficial to improving the accuracy of semantic analysis.
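The split-and-look-up loop of steps S211 to S214 can be realized, for example, as forward maximum matching over a dictionary library: repeatedly take the longest prefix of the remaining text that appears in the dictionary, falling back to a single character when nothing matches. The toy dictionary below is an assumption for the sketch, with English letters standing in for Chinese characters:

```python
# Hypothetical dictionary library for the example.
DICTIONARY = {"the", "delivery", "was", "fast"}

def segment(sentence, dictionary=DICTIONARY):
    """Forward maximum matching: consume the longest dictionary-listed
    prefix at each step; single characters stop further splitting."""
    words, rest = [], sentence
    while rest:
        for end in range(len(rest), 0, -1):
            if rest[:end] in dictionary or end == 1:
                words.append(rest[:end])
                rest = rest[end:]
                break
    return words
```

Every emitted word is either in the dictionary or a single character, matching the property noted above that all words in the segmentation data can be found in the dictionary library.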
In one embodiment, the step of generating a target portrait based on the semantic analysis result and the emotion analysis result, where the target portrait is stored in a blockchain, includes:
S41: carrying out classification labeling according to the semantic analysis result and the emotion analysis result to obtain a labeling result;
S42: inputting the labeling result into a portrait model for portrait generation to obtain the target portrait, wherein the portrait model is obtained based on pointer-generator network training.
For step S41, classification is performed according to the semantic analysis result and the emotion analysis result respectively to obtain a target classification result; labeling is then performed according to the target classification result to obtain the labeling result. Labeling means tagging; that is, the labeling result includes a label name and a label value.
The target classification result divides the features into several feature subsets, where the features in each subset belong to the same category. Each feature subset is labeled to obtain the labeling result. That is, features in the same feature subset share the same label name, while their label values may be the same or different.
For example, when the commented object is a commodity, label names include, but are not limited to: commodity quality, commodity cost performance, and commodity operability. Label values for commodity quality include, but are not limited to: excellent, good, medium, and poor; label values for commodity cost performance include, but are not limited to: high and average; label values for commodity operability include, but are not limited to: easy to operate, moderately difficult to operate, and inconvenient to operate. This example is not specifically limiting.
For step S42, preset comment features are extracted according to the labeling result to obtain the target portrait. The target portrait includes portrait labels and portrait scores, and the portrait labels correspond one-to-one to the portrait scores.
The target portrait not only needs to summarize user comments, but also needs the ability to generate new words. The pointer-generator network copies words from the source text via a pointer, which helps reproduce information accurately, while retaining the generator's ability to produce new words; it also uses a coverage mechanism to track what has already been summarized and prevent repetition (seq2seq, the sequence-to-sequence model, otherwise tends to repeat fragments in generated sentences).
The pointer-generator network is built on a sequence-to-sequence model, in which the original text is encoded by an Encoder into hidden states of an intermediate layer, and a Decoder then decodes those hidden states into another text. The Encoder is a bidirectional LSTM (long short-term memory network), which captures long-distance dependencies and position information of the original text; word embeddings are fed into it to obtain the encoding states. The Decoder is a unidirectional LSTM; during training the reference summary words are input in sequence (during testing, the word generated at the previous step is input), yielding a decoding state at each time step. The pointer-generator network adds a generation probability, computed from the encoding state, the decoding state, and the decoder input; the source words are merged with the fixed vocabulary to form an extended vocabulary, and the output probability of a word is a mixture, weighted by this generation probability, of generating it from the vocabulary and copying it from the source text via the attention weights. By adding together the attention weights of the previous time steps, a coverage vector is obtained; the current attention decision is influenced by previous attention decisions, which avoids repeated attention to the same positions and thus avoids repeated generation of text.
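The output mixture and coverage vector described above can be sketched numerically; this is a pure-Python toy of the arithmetic only, not the trained portrait model, and the variable names and shapes are assumptions:

```python
def final_distribution(p_gen, p_vocab, attention, src_ids):
    """Pointer-generator output distribution over the extended vocabulary:
    P(w) = p_gen * P_vocab(w) + (1 - p_gen) * (attention mass on w).
    `src_ids` maps each source position to that word's index in the
    extended vocabulary; `p_vocab` is already padded to that size."""
    dist = [p_gen * p for p in p_vocab]
    for pos, word_id in enumerate(src_ids):
        dist[word_id] += (1.0 - p_gen) * attention[pos]  # copy probability
    return dist

def coverage_vector(attention_history):
    """Coverage: the sum of attention distributions over all previous
    decoder steps, used to discourage attending to the same position twice."""
    length = len(attention_history[0])
    return [sum(step[i] for step in attention_history) for i in range(length)]
```

Because `p_vocab` and `attention` are each probability distributions, the mixture sums to 1; an out-of-vocabulary source word (zero probability under `p_vocab`) can still receive mass through the copy term.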
Preferably, the type of the commented object corresponding to the training samples used to train the pointer generation network is the same as the type of the commented object corresponding to the target user comment data. "Same type" means belonging to the same category. For example, the commented objects corresponding to the training samples and the commented objects corresponding to the target user comment data are all mobile phones, or, more narrowly, all mobile phones of the same model.
In an embodiment, the step of performing classification and labeling according to the semantic analysis result and the emotion analysis result to obtain a labeling result includes:
S411: performing feature selection on the semantic analysis result to obtain first feature data;
S412: performing feature selection on the emotion analysis result to obtain second feature data;
S413: performing classification labeling according to the first feature data and the second feature data to obtain a labeling result.
For step S411, the method for performing feature selection on the semantic analysis result includes, but is not limited to, the chi-square test and information gain.
The purpose of feature selection is to select the features required for portrait generation, which helps improve the efficiency of the subsequent portrait step. That is, the features in the first feature data are the features required for generating the portrait.
The chi-square test measures the degree of deviation between the actual observed values and the theoretically inferred values of a sample. This deviation determines the size of the chi-square value: the larger the chi-square value, the greater the deviation between the observed and theoretical values; the smaller the chi-square value, the smaller the deviation; and if the two are exactly equal, the chi-square value is 0, indicating that the observations fully match the theoretical values.
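As a hedged illustration of the chi-square test for feature selection (the review counts below are invented for demonstration), the statistic for a 2x2 feature/class contingency table can be computed as:

```python
def chi_square(n11, n10, n01, n00):
    """Chi-square statistic for a 2x2 feature/class contingency table.
    n11: docs of the class containing the feature; n10: docs of other
    classes containing it; n01: docs of the class without it; n00: docs
    of other classes without it."""
    n = n11 + n10 + n01 + n00
    numerator = n * (n11 * n00 - n10 * n01) ** 2
    denominator = (n11 + n01) * (n11 + n10) * (n10 + n00) * (n01 + n00)
    return numerator / denominator if denominator else 0.0

# Hypothetical counts: the word "battery" appears in 40 of 50 negative
# reviews but only 10 of 50 positive ones, so it deviates strongly from
# the independence assumption and scores high.
score = chi_square(40, 10, 10, 40)
baseline = chi_square(25, 25, 25, 25)  # perfectly independent: chi-square is 0
```

Features whose chi-square score exceeds a chosen threshold (or the top-k scorers) would be kept as the first or second feature data.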
For step S412, the method for performing feature selection on the emotion analysis result likewise includes, but is not limited to, the chi-square test and information gain. The features in the second feature data are the features required for generating the portrait.
For step S413, performing classification labeling according to the first feature data to obtain a first label; performing classification labeling according to the second characteristic data to obtain a second label; and taking the first label and the second label as the labeling result.
It is understood that the classification labeling is performed by classifying first and labeling the result of the classification.
In one embodiment, after the step of generating a target portrait according to the semantic analysis result and the emotion analysis result, where the target portrait is stored in a block chain, the method further includes:
S51: obtaining a scoring result for the commented object according to the portrait labels and portrait scores of all target portraits of the same commented object;
S52: obtaining an advantage analysis result and a disadvantage analysis result according to the scoring result of the commented object.
The advantage analysis result is used to guide the commented object in maintaining its advantages.
The disadvantage analysis result is used to guide the commented object in improving its disadvantages.
This embodiment enables user comments to be genuinely used for maintaining the advantages and improving the disadvantages of the commented object, thereby stimulating reviewers' enthusiasm for commenting and helping to improve the competitiveness of the commented object.
With reference to fig. 2, the present application further proposes a portrait generation apparatus based on user comments, where the apparatus includes:
the user comment acquisition module 100 is configured to obtain target user comment data, where the target user comment data is data obtained based on user comments of the same commented object, and the target user comment data is stored in a block chain;
the semantic analysis module 200 is configured to perform semantic analysis on the target user comment data to obtain a semantic analysis result;
the emotion analysis module 300 is used for performing emotion analysis on the target user comment data to obtain an emotion analysis result;
the portrait module 400 is configured to generate a target portrait according to the semantic analysis result and the emotion analysis result, where the target portrait is stored in a block chain.
In this embodiment, the target user comment data is data obtained based on user comments of the same commented object and is stored in a block chain. Semantic analysis is performed on the target user comment data to obtain a semantic analysis result; emotion analysis is performed on the target user comment data to obtain an emotion analysis result; and a target portrait, also stored in a block chain, is generated according to the semantic analysis result and the emotion analysis result. Because the target portrait is obtained from the target user comment data, the user comments are fully utilized, and a multi-dimensional evaluation of the commented object is realized through the target portrait.
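For the emotion analysis module 300, one common lexicon-based approach (an assumed illustration; the patent does not specify the analysis algorithm, and the word lists here are toy examples) counts hits against positive and negative emotion-word sets:

```python
# Toy lexicons; a real system would use a curated sentiment dictionary.
POSITIVE_WORDS = {"great", "fast", "excellent", "durable"}
NEGATIVE_WORDS = {"slow", "broken", "poor", "disappointing"}

def sentiment_tendency(tokens):
    """Count hits against the positive and negative lexicons and
    return the dominant emotional tendency for one segmented comment."""
    pos = sum(token in POSITIVE_WORDS for token in tokens)
    neg = sum(token in NEGATIVE_WORDS for token in tokens)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

tendency = sentiment_tendency(["screen", "is", "great", "and", "fast"])
```

This corresponds to the two-stage emotion analysis in claim 7: extracting emotion words from the comment, then deriving an emotional tendency from the extracted set.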
In one embodiment, the user comment acquisition module includes: the data to be analyzed acquisition sub-module and the user comment data preprocessing sub-module:
the data to be analyzed acquisition submodule is used for obtaining user comment data to be analyzed, where the user comment data to be analyzed are user comments of the same commented object;
the user comment data preprocessing submodule is used for preprocessing the user comment data to be analyzed to obtain the target user comment data.
In one embodiment, the user comment data preprocessing sub-module includes: an invalid data processing unit, a special meaning data processing unit, a redundancy and deletion processing unit, and a text error correction unit;
the invalid data processing unit is used for carrying out invalid data identification and deletion processing on the user comment data to be analyzed to obtain the user comment from which the invalid data is removed;
the special meaning data processing unit is used for carrying out special meaning data identification and conversion on the user comment from which the invalid data is removed to obtain the user comment from which the special meaning is converted;
the redundancy and deletion processing unit is used for performing redundancy and deletion processing on the user comment subjected to the special meaning conversion to obtain a cleaned user comment;
and the text error correction unit is used for performing text error correction on the cleaned user comment to obtain the target user comment data.
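The four preprocessing units above can be sketched as one pipeline (a toy illustration; the invalid-data markers, emoticon table, and typo map are assumptions standing in for real cleaning rules and a real error-correction model):

```python
def preprocess(comments):
    """Four stages matching the units above: drop invalid data,
    convert special-meaning tokens, remove redundant duplicates,
    then correct text errors."""
    INVALID = {"", "n/a", "null"}                 # assumed invalid-data markers
    EMOTICONS = {":)": "good", ":(": "bad"}       # assumed special-meaning table
    TYPO_MAP = {"recieve": "receive"}             # assumed correction table

    cleaned, seen = [], set()
    for text in comments:
        text = text.strip()
        if text.lower() in INVALID:                          # invalid data removal
            continue
        for emoticon, word in EMOTICONS.items():             # special meaning conversion
            text = text.replace(emoticon, word)
        if text in seen:                                     # redundancy (deduplication)
            continue
        seen.add(text)
        words = [TYPO_MAP.get(w, w) for w in text.split()]   # text error correction
        cleaned.append(" ".join(words))
    return cleaned

target_data = preprocess(["good phone :)", "good phone :)", "null", "did not recieve it"])
```

The output list plays the role of the target user comment data handed to the semantic and emotion analysis modules.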
In one embodiment, the semantic analysis module comprises: the word segmentation submodule and the semantic analysis submodule are as follows:
the word segmentation sub-module is used for segmenting words of the target user comment data to obtain word segmentation data;
and the semantic analysis submodule is used for performing semantic analysis on the word segmentation data to obtain the semantic analysis result.
In one embodiment, the word segmentation sub-module is further configured to:
taking each sentence of the target user comment data as data to be split;
splitting the data to be split to obtain a splitting result;
searching for the splitting result in a dictionary library; when the splitting result is in the dictionary library, taking the splitting result as a word segmentation result; when the splitting result is not in the dictionary library, taking the splitting result as the data to be split and re-executing the step of splitting the data to be split to obtain a splitting result;
and taking all the word segmentation results as the word segmentation data.
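The dictionary-lookup segmentation described above can be approximated by forward maximum matching (an assumed variant for illustration: it tries the longest candidate substring first and falls back to single characters for out-of-dictionary text, rather than recursively re-splitting):

```python
def segment(sentence, dictionary, max_len=4):
    """Forward maximum matching: at each position, try the longest
    candidate substring first and shrink until it is found in the
    dictionary; a single character is always accepted as a fallback."""
    words, i = [], 0
    while i < len(sentence):
        for j in range(min(len(sentence), i + max_len), i, -1):
            candidate = sentence[i:j]
            if candidate in dictionary or j == i + 1:
                words.append(candidate)
                i = j
                break
    return words

dictionary = {"用户", "评论", "画像", "生成"}
word_data = segment("用户评论画像生成", dictionary)
```

Each sentence of the target user comment data would be passed through `segment`, and the concatenated results form the word segmentation data.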
In one embodiment, the representation module comprises: labeling submodule and portrait submodule:
the labeling submodule is used for carrying out classification labeling according to the semantic analysis result and the emotion analysis result to obtain a labeling result;
and the portrait submodule is used for inputting the labeling result into a portrait model to obtain the target portrait, where the portrait model is obtained based on pointer generation network training.
In one embodiment, the labeling submodule includes: a first feature selection unit, a second feature selection unit, and a labeling unit:
the first feature selection unit is used for performing feature selection on the semantic analysis result to obtain first feature data;
the second feature selection unit is used for performing feature selection on the emotion analysis result to obtain second feature data;
and the labeling unit is used for carrying out classification labeling according to the first characteristic data and the second characteristic data to obtain the labeling result.
Referring to fig. 3, an embodiment of the present application also provides a computer device, which may be a server and whose internal structure may be as shown in fig. 3. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device is used to provide computation and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used for storing data involved in the portrait generation method based on user comments. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by the processor to implement a portrait generation method based on user comments, the method comprising the following steps: obtaining target user comment data, where the target user comment data is data obtained based on user comments of the same commented object, and the target user comment data is stored in a block chain; performing semantic analysis on the target user comment data to obtain a semantic analysis result; performing emotion analysis on the target user comment data to obtain an emotion analysis result; and generating a target portrait according to the semantic analysis result and the emotion analysis result, where the target portrait is stored in a block chain.
In the portrait generation method based on user comments implemented by this computer device, the target user comment data is data obtained based on user comments of the same commented object and is stored in a block chain; semantic analysis is performed on the target user comment data to obtain a semantic analysis result; emotion analysis is performed on the target user comment data to obtain an emotion analysis result; and a target portrait, also stored in a block chain, is generated according to the two analysis results. Because the target portrait is obtained from the target user comment data, the user comments are fully utilized, and a multi-dimensional evaluation of the commented object is realized through the target portrait.
An embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements a portrait generation method based on user comments, including the steps of: obtaining target user comment data, wherein the target user comment data are data obtained based on user comments of the same commented object, and the target user comment data are stored in a block chain; performing semantic analysis on the target user comment data to obtain a semantic analysis result; performing sentiment analysis on the target user comment data to obtain a sentiment analysis result; and generating a target portrait according to the semantic analysis result and the emotion analysis result, wherein the target portrait is stored in a block chain.
In this portrait generation method based on user comments, the target user comment data is data obtained based on user comments of the same commented object and is stored in a block chain; semantic analysis is performed on the target user comment data to obtain a semantic analysis result; emotion analysis is performed on the target user comment data to obtain an emotion analysis result; and a target portrait, also stored in a block chain, is generated according to the two analysis results. Because the target portrait is obtained from the target user comment data, the user comments are fully utilized, and a multi-dimensional evaluation of the commented object is realized through the target portrait.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium provided herein and used in the examples may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The block chain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A block chain (blockchain) is essentially a decentralized database: a series of data blocks linked by cryptographic methods, where each data block contains information about a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. The block chain may include a block chain underlying platform, a platform product service layer, an application service layer, and the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, apparatus, article, or method that includes the element.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A portrait generation method based on user comments, the method comprising:
obtaining target user comment data, wherein the target user comment data are data obtained based on user comments of the same commented object, and the target user comment data are stored in a block chain;
performing semantic analysis on the target user comment data to obtain a semantic analysis result;
performing sentiment analysis on the target user comment data to obtain a sentiment analysis result;
and generating a target portrait according to the semantic analysis result and the emotion analysis result, wherein the target portrait is stored in a block chain.
2. The portrait generation method based on user comments according to claim 1, wherein the step of obtaining target user comment data, the target user comment data being data obtained based on user comments of the same commented object and being stored in a block chain, includes:
obtaining user comment data to be analyzed, wherein the user comment data to be analyzed are user comments of the same comment object;
and preprocessing the user comment data to be analyzed to obtain the target user comment data.
3. The portrait generation method based on user comments as claimed in claim 2, wherein the step of preprocessing the user comment data to be analyzed to obtain the target user comment data includes:
carrying out invalid data identification and deletion processing on the user comment data to be analyzed to obtain the user comment from which the invalid data is removed;
carrying out special meaning data identification and conversion on the user comment from which the invalid data is removed to obtain the user comment from which the special meaning is converted;
carrying out redundancy and deletion processing on the user comments after the special meaning conversion to obtain cleaned user comments;
and performing text error correction on the cleaned user comment to obtain the target user comment data.
4. The portrait generation method based on user comments as claimed in claim 1, wherein the step of performing semantic analysis on the target user comment data to obtain a semantic analysis result includes:
segmenting words of the target user comment data to obtain segmented word data;
and carrying out semantic analysis on the word segmentation data to obtain a semantic analysis result.
5. The portrait generation method based on user comments as claimed in claim 4, wherein the step of segmenting the target user comment data to obtain the word segmentation data includes:
taking each sentence of the target user comment data as data to be split;
splitting the data to be split to obtain a splitting result;
searching for the splitting result in a dictionary library; when the splitting result is in the dictionary library, taking the splitting result as a word segmentation result; when the splitting result is not in the dictionary library, taking the splitting result as the data to be split and re-executing the step of splitting the data to be split to obtain a splitting result;
and taking all the word segmentation results as the word segmentation data.
6. The portrait generation method based on user comments as claimed in claim 1, wherein the step of generating a target portrait according to the semantic analysis result and the emotion analysis result, the target portrait being stored in a block chain, includes:
performing feature selection on the semantic analysis result to obtain first feature data;
performing feature selection on the emotion analysis result to obtain second feature data;
performing classification labeling according to the first characteristic data and the second characteristic data to obtain a labeling result;
and inputting the labeling result into a portrait model to obtain the target portrait, wherein the portrait model is obtained based on pointer generation network training.
7. The method of claim 1, wherein the step of performing sentiment analysis on the target user comment data to obtain a sentiment analysis result comprises:
extracting emotion words from the target user comment data to obtain an emotion word set;
and extracting emotional tendency according to the emotional word set to obtain the emotional analysis result.
8. A portrait generation apparatus based on user comments, the apparatus comprising:
the system comprises a user comment acquisition module, a block chain module and a comment processing module, wherein the user comment acquisition module is used for acquiring target user comment data, and the target user comment data are data obtained based on user comments of the same commented object and are stored in the block chain;
the semantic analysis module is used for carrying out semantic analysis on the target user comment data to obtain a semantic analysis result;
the emotion analysis module is used for carrying out emotion analysis on the target user comment data to obtain an emotion analysis result;
and the portrait module is used for generating a target portrait according to the semantic analysis result and the emotion analysis result, wherein the target portrait is stored in a block chain.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202011092617.2A 2020-10-13 2020-10-13 Portrait generation method, apparatus, medium and device based on user comment Pending CN112215014A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011092617.2A CN112215014A (en) 2020-10-13 2020-10-13 Portrait generation method, apparatus, medium and device based on user comment


Publications (1)

Publication Number Publication Date
CN112215014A true CN112215014A (en) 2021-01-12

Family

ID=74053968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011092617.2A Pending CN112215014A (en) 2020-10-13 2020-10-13 Portrait generation method, apparatus, medium and device based on user comment

Country Status (1)

Country Link
CN (1) CN112215014A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105824898A (en) * 2016-03-14 2016-08-03 苏州大学 Label extracting method and device for network comments
KR102020756B1 (en) * 2018-10-23 2019-11-04 주식회사 리나소프트 Method for Analyzing Reviews Using Machine Leaning
CN110866398A (en) * 2020-01-07 2020-03-06 腾讯科技(深圳)有限公司 Comment text processing method and device, storage medium and computer equipment
CN111552734A (en) * 2020-03-30 2020-08-18 平安医疗健康管理股份有限公司 User portrait generation method and device, computer equipment and storage medium


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113051928A (en) * 2021-03-17 2021-06-29 卓尔智联(武汉)研究院有限公司 Detection comment method and device based on block chain and electronic equipment
CN113051928B (en) * 2021-03-17 2023-08-01 卓尔智联(武汉)研究院有限公司 Block chain-based comment detection method and device and electronic equipment
CN113269249A (en) * 2021-05-25 2021-08-17 广东技术师范大学 Multi-data-source portrait construction method based on deep learning
CN113157899A (en) * 2021-05-27 2021-07-23 东莞心启航联贸网络科技有限公司 Big data portrait analysis method, server and readable storage medium
CN113157899B (en) * 2021-05-27 2022-01-14 叉烧(上海)新材料科技有限公司 Big data portrait analysis method, server and readable storage medium

Similar Documents

Publication Publication Date Title
CN109522557B (en) Training method and device of text relation extraction model and readable storage medium
CN109493977B (en) Text data processing method and device, electronic equipment and computer readable medium
CN110909137A (en) Information pushing method and device based on man-machine interaction and computer equipment
CN112215014A (en) Portrait generation method, apparatus, medium and device based on user comment
CN111027327A (en) Machine reading understanding method, device, storage medium and device
CN108319668A (en) Generate the method and apparatus of text snippet
CN110852110B (en) Target sentence extraction method, question generation method, and information processing apparatus
CN112818093B (en) Evidence document retrieval method, system and storage medium based on semantic matching
CN111783471B (en) Semantic recognition method, device, equipment and storage medium for natural language
KR102220894B1 (en) a communication typed question and answer system with data supplying in statistic database
CN112016314A (en) Medical text understanding method and system based on BERT model
KR101897060B1 (en) Named Entity Recognition Model Generation Device and Method
CN113268576B (en) Deep learning-based department semantic information extraction method and device
KR20200087977A (en) Multimodal ducument summary system and method
Theophilo et al. Authorship attribution of social media messages
CN114298035A (en) Text recognition desensitization method and system thereof
CN110263123B (en) Method and device for predicting organization name abbreviation and computer equipment
Alkhazi et al. Classifying and segmenting classical and modern standard Arabic using minimum cross-entropy
Defersha et al. Tuning hyperparameters of machine learning methods for afan oromo hate speech text detection for social media
CN114492437B (en) Keyword recognition method and device, electronic equipment and storage medium
Imani et al. Where did the political news event happen? primary focus location extraction in different languages
CN111523312A (en) Paraphrase disambiguation-based query display method and device and computing equipment
CN113553052B (en) Method for automatically recognizing security-related code submissions using an Attention-coded representation
Singha et al. Bengali Text Summarization with Attention-Based Deep Learning
Pham et al. VQ-based written language identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination