CN111506733B - Object portrait generation method and device, computer equipment and storage medium

Info

Publication number
CN111506733B
CN111506733B
Authority
CN
China
Prior art keywords
characteristic value
text information
recognized
evaluation text
emotion
Prior art date
Legal status
Active
Application number
CN202010479442.4A
Other languages
Chinese (zh)
Other versions
CN111506733A
Inventor
林秋强 (Lin Qiuqiang)
Current Assignee
Guangdong Pacific Internet Information Service Co., Ltd.
Original Assignee
Guangdong Pacific Internet Information Service Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Guangdong Pacific Internet Information Service Co., Ltd.
Priority: CN202010479442.4A
Publication of CN111506733A
Application granted
Publication of CN111506733B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 - Information retrieval of unstructured textual data
    • G06F 16/35 - Clustering; Classification
    • G06F 16/36 - Creation of semantic tools, e.g. ontology or thesauri
    • G06F 16/367 - Ontology
    • G06F 40/00 - Handling natural language data
    • G06F 40/20 - Natural language analysis
    • G06F 40/279 - Recognition of textual entities
    • G06F 40/30 - Semantic analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Machine Translation (AREA)

Abstract

The application relates to a method and an apparatus for generating an object portrait, a computer device, and a storage medium. The method comprises the following steps: obtaining evaluation text information of an object to be recognized; determining the emotion category of the evaluation text information of the object to be recognized; acquiring the freshness of the object to be recognized, and determining the emotion characteristic value of the object to be recognized according to the freshness of the object to be recognized and the emotion category of the evaluation text information; acquiring the data volume of the evaluation text information, and determining the sound volume characteristic value of the object to be recognized according to the data volume of the evaluation text information and the emotion category; and performing weighting processing on the emotion characteristic value and the sound volume characteristic value to obtain a target characteristic value of the object to be recognized, and generating an object portrait of the object to be recognized according to the emotion characteristic value, the sound volume characteristic value, and the target characteristic value. The method improves the accuracy of the generated object portrait while reducing labor cost.

Description

Object portrait generation method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method and an apparatus for generating an object portrait, a computer device, and a storage medium.
Background
With the development of internet technology, objects (such as products and brands) recommended on the internet are generally displayed through object portraits generated for them, and a user selects a desired object based on the displayed portraits.
However, in current object portrait generation methods, the basic information of an object is generally collected manually from massive amounts of information, and the portrait is constructed from that basic information. A portrait generated in this way contains only the object's basic information and cannot capture deeper-level information about the object, so the accuracy of the generated portrait is low.
Disclosure of Invention
In view of the above, it is necessary to provide an object portrait generation method, apparatus, computer device, and storage medium that can improve the accuracy of the generated object portrait.
A method of object portrait generation, the method comprising:
obtaining evaluation text information of an object to be recognized;
determining the emotion category of the evaluation text information of the object to be recognized;
acquiring the freshness of the object to be recognized, and determining the emotion characteristic value of the object to be recognized according to the freshness of the object to be recognized and the emotion category of the evaluation text information;
acquiring the data volume of the evaluation text information, and determining the sound volume characteristic value of the object to be recognized according to the data volume of the evaluation text information and the emotion category;
and performing weighting processing on the emotion characteristic value and the sound volume characteristic value to obtain a target characteristic value of the object to be recognized, and generating an object portrait of the object to be recognized according to the emotion characteristic value, the sound volume characteristic value, and the target characteristic value.
In one embodiment, the obtaining evaluation text information of the object to be recognized includes:
acquiring object entity information of the object to be recognized from a preset object entity library;
and acquiring, from evaluation text information on the network, the evaluation text information matched with the object entity information as the evaluation text information of the object to be recognized.
In one embodiment, the determining the emotion category of the evaluation text information of the object to be recognized includes:
extracting feature information from the evaluation text information of the object to be recognized;
and inputting the feature information into a pre-trained text emotion classification model to obtain the emotion category of the evaluation text information of the object to be recognized.
In one embodiment, the determining the emotion characteristic value of the object to be recognized according to the freshness of the object to be recognized and the emotion category of the evaluation text information includes:
determining, from the evaluation text information and according to its emotion categories, a first evaluation text information set belonging to the positive emotion category and a second evaluation text information set belonging to the negative emotion category;
acquiring the quantity difference between the evaluation text information in the first evaluation text information set and that in the second evaluation text information set to obtain the emotion difference value of the object to be recognized;
determining the target emotion category and the certainty degree of the object to be recognized according to the emotion difference value of the object to be recognized;
and obtaining the emotion score of the object to be recognized according to the target emotion category, the certainty degree, and the freshness of the object to be recognized, the emotion score serving as the emotion characteristic value of the object to be recognized.
In one embodiment, the determining the sound volume characteristic value of the object to be recognized according to the data volume of the evaluation text information and the emotion category includes:
acquiring the sum of the scores corresponding to the data volume of each piece of evaluation text information in the first evaluation text information set as the positive sound volume score of the object to be recognized;
acquiring the sum of the scores corresponding to the data volume of each piece of evaluation text information in the second evaluation text information set as the negative sound volume score of the object to be recognized;
and acquiring the sum of the positive sound volume score and the negative sound volume score to obtain the sound volume score of the object to be recognized, the sound volume score serving as the sound volume characteristic value of the object to be recognized.
In one embodiment, before the generating the object portrait of the object to be recognized according to the emotion characteristic value, the sound volume characteristic value, and the target characteristic value, the method further includes:
acquiring tag evaluation text information of the tag information of the object to be recognized;
determining the emotion category of the tag evaluation text information;
determining the emotion characteristic value of the tag information of the object to be recognized according to the freshness of the object to be recognized and the emotion category of the tag evaluation text information;
and acquiring the data volume of the tag evaluation text information, and determining the sound volume characteristic value of the tag information of the object to be recognized according to the data volume of the tag evaluation text information and the emotion category.
The generating the object portrait of the object to be recognized according to the emotion characteristic value, the sound volume characteristic value, and the target characteristic value includes:
generating the object portrait of the object to be recognized according to the emotion characteristic value, the sound volume characteristic value, and the target characteristic value of the object to be recognized, and the emotion characteristic value and the sound volume characteristic value of the tag information of the object to be recognized.
In one embodiment, after the generating the object portrait of the object to be recognized according to the emotion characteristic value, the sound volume characteristic value, and the target characteristic value, the method further includes:
receiving a terminal's acquisition request for the object portrait of the object to be recognized;
and sending the object portrait of the object to be recognized to the terminal for display according to the acquisition request.
An apparatus for generating an object portrait, the apparatus comprising:
a text information acquisition module, configured to acquire evaluation text information of an object to be recognized;
an emotion category determination module, configured to determine the emotion category of the evaluation text information of the object to be recognized;
an emotion characteristic value determination module, configured to acquire the freshness of the object to be recognized and determine the emotion characteristic value of the object to be recognized according to the freshness of the object to be recognized and the emotion category of the evaluation text information;
a sound volume characteristic value determination module, configured to acquire the data volume of the evaluation text information and determine the sound volume characteristic value of the object to be recognized according to the data volume of the evaluation text information and the emotion category;
and an object portrait generation module, configured to weight the emotion characteristic value and the sound volume characteristic value to obtain a target characteristic value of the object to be recognized, and generate an object portrait of the object to be recognized according to the emotion characteristic value, the sound volume characteristic value, and the target characteristic value.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
obtaining evaluation text information of an object to be recognized;
determining the emotion category of the evaluation text information of the object to be recognized;
acquiring the freshness of the object to be recognized, and determining the emotion characteristic value of the object to be recognized according to the freshness of the object to be recognized and the emotion category of the evaluation text information;
acquiring the data volume of the evaluation text information, and determining the sound volume characteristic value of the object to be recognized according to the data volume of the evaluation text information and the emotion category;
and performing weighting processing on the emotion characteristic value and the sound volume characteristic value to obtain a target characteristic value of the object to be recognized, and generating an object portrait of the object to be recognized according to the emotion characteristic value, the sound volume characteristic value, and the target characteristic value.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the steps of:
obtaining evaluation text information of an object to be recognized;
determining the emotion category of the evaluation text information of the object to be recognized;
acquiring the freshness of the object to be recognized, and determining the emotion characteristic value of the object to be recognized according to the freshness of the object to be recognized and the emotion category of the evaluation text information;
acquiring the data volume of the evaluation text information, and determining the sound volume characteristic value of the object to be recognized according to the data volume of the evaluation text information and the emotion category;
and performing weighting processing on the emotion characteristic value and the sound volume characteristic value to obtain a target characteristic value of the object to be recognized, and generating an object portrait of the object to be recognized according to the emotion characteristic value, the sound volume characteristic value, and the target characteristic value.
The object portrait generation method and apparatus, computer device, and storage medium obtain the evaluation text information of an object to be recognized; determine the emotion category of the evaluation text information; acquire the freshness of the object to be recognized and determine the emotion characteristic value of the object according to the freshness and the emotion category of the evaluation text information; acquire the data volume of the evaluation text information and determine the sound volume characteristic value of the object according to the data volume and the emotion category; and weight the emotion characteristic value and the sound volume characteristic value to obtain the target characteristic value of the object, generating the object portrait of the object according to the emotion characteristic value, the sound volume characteristic value, and the target characteristic value. In this way, the emotion characteristic value, the sound volume characteristic value, and the target characteristic value of the object to be recognized are determined from its evaluation text information, and the object portrait is generated from them. Because these three values carry deeper-level information about the object to be recognized, the feature information of the object can be located accurately, avoiding the low accuracy of a portrait generated only from the object's basic information and thereby improving the accuracy of the generated object portrait. At the same time, generating the object portrait from the emotion characteristic value, the sound volume characteristic value, and the target characteristic value reflects the feature information of the object from multiple dimensions, further improving the accuracy of the generated portrait.
Drawings
FIG. 1 is a diagram of an exemplary application environment for an object portrait generation method;
FIG. 2 is a flow diagram illustrating an object portrait generation method in accordance with one embodiment;
FIG. 3 is a schematic illustration of an interface for an object portrait in one embodiment;
FIG. 4 is a schematic illustration of an interface for an object portrait in another embodiment;
FIG. 5 is a schematic flow chart diagram illustrating an object portrait generation method in accordance with another embodiment;
FIG. 6 is a schematic flow chart diagram illustrating an object portrait generation method in accordance with yet another embodiment;
FIG. 7 is a schematic flow chart diagram illustrating an object portrait generation method in accordance with yet another embodiment;
FIG. 8 is a block diagram of an apparatus for generating an object portrait in accordance with an embodiment;
FIG. 9 is a diagram illustrating the internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The object portrait generation method provided in the present application can be applied to the application environment shown in FIG. 1, in which the terminal 110 communicates with the server 120 through a network. Referring to FIG. 1, the server 120 acquires the evaluation text information of an object to be recognized; determines the emotion category of the evaluation text information; acquires the freshness of the object to be recognized and determines the emotion characteristic value of the object according to the freshness and the emotion category of the evaluation text information; acquires the data volume of the evaluation text information and determines the sound volume characteristic value of the object according to the data volume and the emotion category; weights the emotion characteristic value and the sound volume characteristic value to obtain the target characteristic value of the object, and generates the object portrait of the object according to the emotion characteristic value, the sound volume characteristic value, and the target characteristic value; and sends the object portrait of the object to be recognized to the corresponding terminal 110 for display. The terminal 110 may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, or a portable wearable device, and the server 120 may be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in FIG. 2, an object portrait generation method is provided. The method is described below as applied to the server in FIG. 1 and includes the following steps:
step S201, obtaining evaluation text information of the object to be recognized.
The object to be recognized refers to an object for which a portrait needs to be generated, such as a product or a brand; the evaluation text information refers to comment-related information about the object to be recognized, such as information articles, media articles, and comment information.
Specifically, the server collects the entity information of the object to be recognized and evaluation text information from across the network, and performs entity recognition on the network-wide evaluation text information according to the entity information of the object to be recognized, obtaining the evaluation text information of the object to be recognized; this facilitates the subsequent determination of the emotion category of that evaluation text information.
Step S202, determining the emotion category of the evaluation text information of the object to be recognized.
The emotion category is used to identify the emotion label corresponding to the evaluation text information: if the evaluation text information is positive (for example, a favorable review), the corresponding emotion category is the positive emotion category; if it is negative (for example, an unfavorable review), the corresponding emotion category is the negative emotion category.
Specifically, the server performs emotion analysis on the evaluation text information of the object to be recognized to obtain the corresponding emotion category; for example, the server inputs the evaluation text information of the object to be recognized into a pre-trained text emotion classification model to obtain its emotion category. The pre-trained text emotion classification model is a model capable of recognizing the emotion category of evaluation text information, such as a naive Bayes model.
In this step, the emotion category of the evaluation text information of the object to be recognized is determined, so that the emotion characteristic value and the sound volume characteristic value of the object to be recognized can be obtained subsequently.
Step S203, acquiring the freshness of the object to be recognized, and determining the emotion characteristic value of the object to be recognized according to the freshness of the object to be recognized and the emotion category of the evaluation text information.
The freshness measures how new the object to be recognized is, specifically the time span from a default starting time to the object's release: for example, if the production time of product A is A and the default starting time is B, the freshness of product A is A - B (in seconds). The emotion characteristic value is the characteristic value obtained from the emotion-dimension analysis, specifically the emotion score.
Specifically, the server determines the target emotion category and the certainty degree of the object to be recognized according to the emotion categories of the evaluation text information; it then acquires the freshness of the object to be recognized and calculates the emotion score of the object from the freshness, the target emotion category, and the certainty degree, the score serving as the emotion characteristic value of the object to be recognized.
Step S204, acquiring the data volume of the evaluation text information, and determining the sound volume characteristic value of the object to be recognized according to the data volume and the emotion category of the evaluation text information.
The data volume refers to the length of the evaluation text information, specifically the number of words it contains; for example, if a piece of evaluation text information contains 50 words, its data volume is 50. The sound volume characteristic value is the characteristic value obtained from the sound-volume-dimension analysis, specifically the sound volume score; if the evaluation text information is positive, the corresponding sound volume score is a positive sound volume score, and if it is negative, the corresponding score is a negative sound volume score.
Specifically, the server acquires a preset data volume recognition instruction and, according to it, recognizes the data volume of each piece of evaluation text information; it obtains the sound volume score of each piece of evaluation text information from its data volume and emotion category, and adds these scores together to obtain the sound volume score of the object to be recognized as its sound volume characteristic value.
Step S205, weighting the emotion characteristic value and the sound volume characteristic value to obtain the target characteristic value of the object to be recognized, and generating the object portrait of the object to be recognized according to the emotion characteristic value, the sound volume characteristic value, and the target characteristic value.
The target characteristic value comprehensively measures the object to be recognized; the object portrait represents multiple pieces of deep-level feature information of the object, such as a product portrait or a brand portrait.
Specifically, the server acquires the preset weights corresponding to the emotion characteristic value and the sound volume characteristic value and weights the two values accordingly, obtaining the target characteristic value of the object to be recognized. It then acquires an object portrait template, determines the position labels of the emotion characteristic value, the sound volume characteristic value, and the target characteristic value in the template, and imports each value into its corresponding position according to the position labels, generating the object portrait of the object to be recognized. The portrait is thus generated automatically from the three values, without manual information collection, which improves generation efficiency; at the same time, the errors that easily arise when portraits are produced manually are avoided, further improving the accuracy of the generated portrait.
For example, if the emotion characteristic value is M with a preset weight of 0.6 and the sound volume characteristic value is N with a preset weight of 0.4, the target characteristic value of the object to be recognized is 0.6M + 0.4N; the emotion characteristic value, the sound volume characteristic value, and the target characteristic value are then imported into the corresponding positions in the object portrait template to obtain the object portrait of the object to be recognized, as shown in FIG. 3.
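A minimal sketch of this weighting step in Python follows, using the example weights 0.6 and 0.4; the function names and the dict-based portrait template are illustrative assumptions, not details given in the patent.

```python
# Sketch of step S205's weighting; the 0.6/0.4 weights come from the
# example above, everything else is an illustrative assumption.
def target_value(emotion_value, volume_value, w_emotion=0.6, w_volume=0.4):
    """Weighted target characteristic value: 0.6M + 0.4N in the example."""
    return w_emotion * emotion_value + w_volume * volume_value

def fill_portrait_template(template, values):
    """Import each value into the position labeled for it in the template."""
    return {position_label: values[position_label]
            for position_label in template
            if position_label in values}
```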
Furthermore, the server may sort the objects to be recognized in descending order of their target characteristic values and send the sorted objects to the corresponding terminal for display; alternatively, the server may treat an object whose target characteristic value is greater than a preset characteristic value as a target object and recommend it to the corresponding terminal, achieving accurate pushing.
In the object portrait generation method above, the evaluation text information of the object to be recognized is obtained; the emotion category of the evaluation text information is determined; the freshness of the object to be recognized is acquired, and the emotion characteristic value of the object is determined according to the freshness and the emotion category of the evaluation text information; the data volume of the evaluation text information is acquired, and the sound volume characteristic value of the object is determined according to the data volume and the emotion category; and the emotion characteristic value and the sound volume characteristic value are weighted to obtain the target characteristic value, from which, together with the emotion characteristic value and the sound volume characteristic value, the object portrait of the object is generated. The emotion characteristic value, the sound volume characteristic value, and the target characteristic value of the object to be recognized are thus determined from its evaluation text information, and the object portrait is generated from them. Because these three values carry deeper-level information about the object, its feature information can be located accurately, avoiding the low accuracy of portraits generated only from the object's basic information and improving the accuracy of the generated portrait; at the same time, generating the portrait from all three values reflects the feature information of the object from multiple dimensions, further improving that accuracy.
In one embodiment, in step S201, the obtaining evaluation text information of the object to be recognized includes: acquiring object entity information of the object to be recognized from a preset object entity library; and acquiring, from evaluation text information on the network, the evaluation text information matched with the object entity information as the evaluation text information of the object to be recognized.
The object entity library is a database containing the entity information of multiple objects; the object entity information is entity information describing an object.
For example, the server obtains articles from the network, performs word segmentation and stop-word removal on the key content of each article (such as the title and abstract) to obtain processed articles, and matches the processed articles against the object entity information; the articles that match are taken as articles about the object to be recognized.
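This matching step can be sketched as follows in Python; the jieba segmentation library, the stop-word list, and the article fields (title, abstract) are assumptions for illustration, not details given in the patent.

```python
# Sketch of matching network articles to object entity information,
# assuming the jieba Chinese word segmentation library.
import jieba

STOPWORDS = {"的", "了", "是", "在"}  # hypothetical stop-word list

def articles_for_entity(articles, entity_names):
    """Return articles whose segmented key content mentions an entity name."""
    entity_set = set(entity_names)
    matched = []
    for article in articles:
        key_content = article["title"] + " " + article["abstract"]
        tokens = {tok for tok in jieba.cut(key_content) if tok not in STOPWORDS}
        if tokens & entity_set:
            matched.append(article)
    return matched
```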
In this embodiment, acquiring the evaluation text information of the object to be recognized facilitates the subsequent emotion analysis of that information and thus the determination of its emotion category.
In one embodiment, step S202, the determining the emotion category of the evaluation text information of the object to be recognized, includes: extracting feature information from the evaluation text information of the object to be recognized; and inputting the feature information into a pre-trained text emotion classification model to obtain the emotion category of the evaluation text information of the object to be recognized.
The feature information of the evaluation text information is the word segmentation information obtained by segmenting the evaluation text information.
Specifically, the server performs word segmentation on the evaluation text information of the object to be recognized to obtain its word segmentation information, which serves as the feature information of the evaluation text information; the pre-trained text emotion classification model then analyzes this feature information to obtain the probability that the evaluation text information belongs to the positive emotion category and the probability that it belongs to the negative emotion category, from which the emotion category of the evaluation text information is determined.
For example, the server may calculate the emotion category of the evaluation text information of the object to be recognized by the following formulas.

The probability (i.e., posterior probability) that the evaluation text information of the object to be recognized belongs to the positive emotion category $c_1$:

$$P(c_1 \mid w_1, \cdots, w_n) = \frac{P(w_1, \cdots, w_n \mid c_1)\, P(c_1)}{P(w_1, \cdots, w_n)}$$

The probability (i.e., posterior probability) that the evaluation text information of the object to be recognized belongs to the negative emotion category $c_2$:

$$P(c_2 \mid w_1, \cdots, w_n) = \frac{P(w_1, \cdots, w_n \mid c_2)\, P(c_2)}{P(w_1, \cdots, w_n)}$$

where $w_1, \cdots, w_n$ denote the feature information in the evaluation text information; $P(w_1, \cdots, w_n \mid c_1)$ is the probability that the evaluation text information has the feature information $w_1, \cdots, w_n$ given that it belongs to the positive emotion category $c_1$; $P(w_1, \cdots, w_n \mid c_2)$ is the probability that it has the feature information $w_1, \cdots, w_n$ given that it belongs to the negative emotion category $c_2$; and $P(c_1)$ and $P(c_2)$ are the probabilities that the evaluation text information belongs to the positive emotion category $c_1$ and the negative emotion category $c_2$, respectively.

Next, the server compares $P(c_1 \mid w_1, \cdots, w_n)$ with $P(c_2 \mid w_1, \cdots, w_n)$: if $P(c_1 \mid w_1, \cdots, w_n)$ is greater than $P(c_2 \mid w_1, \cdots, w_n)$, the emotion category of the evaluation text information of the object to be recognized is determined to be the positive emotion category; if it is less, the emotion category is determined to be the negative emotion category.
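A sketch of this comparison in Python follows; the word-given-class log-probabilities and class priors are assumed to come from the pre-trained text emotion classification model, and the smoothing constant is an illustrative choice.

```python
# Naive Bayes emotion category decision. Because P(w1..wn) is common to
# both posteriors, comparing P(c1|w) with P(c2|w) reduces to comparing
# log P(w1..wn|c) + log P(c); log-space also avoids numeric underflow.
import math

UNSEEN = math.log(1e-6)  # illustrative smoothing for unseen words

def emotion_category(features, log_likelihood, log_prior):
    """features: word segments w1..wn; log_likelihood[c][w] ~ log P(w|c)."""
    scores = {
        c: log_prior[c] + sum(log_likelihood[c].get(w, UNSEEN) for w in features)
        for c in ("positive", "negative")
    }
    return "positive" if scores["positive"] > scores["negative"] else "negative"
```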
In this embodiment, the emotion category of the evaluation text information of the object to be recognized is determined by a pre-trained text emotion classification model, which facilitates the subsequent computation of the emotion characteristic value and the sound volume characteristic value of the object to be recognized.
In one embodiment, step S203, the determining the emotion characteristic value of the object to be recognized according to the freshness of the object to be recognized and the emotion category of the evaluation text information, includes: determining, from the evaluation text information and according to its emotion categories, a first evaluation text information set belonging to the positive emotion category and a second evaluation text information set belonging to the negative emotion category; acquiring the quantity difference between the evaluation text information in the first set and that in the second set to obtain the emotion difference value of the object to be recognized; determining the target emotion category and the certainty degree of the object to be recognized according to the emotion difference value; and obtaining the emotion score of the object to be recognized according to the target emotion category, the certainty degree, and the freshness of the object to be recognized, the emotion score serving as the emotion characteristic value of the object to be recognized.
The target emotion category is the overall emotion direction of the object to be recognized and expresses the comprehensive opinion about it; the certainty degree indicates how definitely the object to be recognized is received, and the higher the degree, the more popular the object.
For example, assume that the first evaluation text information set U of product P contains 100000 pieces of positive evaluation text information and the second evaluation text information set D contains 20000 pieces of negative evaluation text information, i.e., U = 100000 and D = 20000; the emotion difference value of product P is then x = U - D = 80000.

The target emotion category y of product P is obtained from the following formula, giving y = 1:

$$y = \begin{cases} +1, & x > 0 \\ 0, & x = 0 \\ -1, & x < 0 \end{cases}$$

That is, if the quantity of positive evaluation text information is greater than that of negative evaluation text information, y = +1; if the two quantities are equal, y = 0; and if it is less, y = -1.

The certainty degree z of product P is obtained from the following formula, giving z = 80000:

$$z = \begin{cases} |x|, & x \neq 0 \\ 1, & x = 0 \end{cases}$$

That is, if the quantity of positive evaluation text information is greater than or less than that of negative evaluation text information (the emotion difference value of product P is nonzero), z equals the absolute value of the emotion difference value; if the two quantities are equal (the emotion difference value is 0), z = 1.
Then, if the release time of product P is A = 2018-09-21 and the default starting time is B = 2005-12-08 07:46:43, the freshness of product P is t = 403459997 according to the formula

$$t = A - B$$

where t is in seconds (s). Once product P is on the market, t is a fixed value that does not change over time; the newer the product, the greater the value of t.

Then, from the target emotion category y = 1, the certainty degree z = 80000, and the freshness t = 403459997 of product P, the emotion score of product P is obtained as f(t, y, z) = 8971 by the following Reddit-style ranking formula:

$$f(t, y, z) = \log_{10} z + \frac{y \cdot t}{45000}$$
Finally, the emotion score of product P is normalized into the interval [0, 100], and the normalized score is taken as the emotion characteristic value of product P.
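The whole computation can be sketched in Python as follows; the default starting time and the example counts come from the passage above, while the function name is an assumption, and the final [0, 100] normalization (which needs minimum and maximum scores across the compared objects, not specified here) is omitted.

```python
# Emotion score of product P: emotion difference x, target emotion
# category y, certainty degree z, freshness t, Reddit-style ranking f.
import math
from datetime import datetime

DEFAULT_START = datetime(2005, 12, 8, 7, 46, 43)  # default starting time B

def emotion_score(num_positive, num_negative, release_time):
    x = num_positive - num_negative              # emotion difference value
    y = (x > 0) - (x < 0)                        # target emotion category: +1/0/-1
    z = abs(x) if x != 0 else 1                  # certainty degree
    t = (release_time - DEFAULT_START).total_seconds()  # freshness, in seconds
    return math.log10(z) + y * t / 45000

score = emotion_score(100000, 20000, datetime(2018, 9, 21))
print(round(score))  # 8971, matching the product P example
```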
In this embodiment, obtaining the emotion characteristic value of the object to be recognized makes it possible to subsequently combine it with the sound volume characteristic value to obtain the target characteristic value of the object to be recognized.
In one embodiment, in step S204, the determining the sound volume characteristic value of the object to be recognized according to the data volume of the evaluation text information and the emotion category includes: acquiring the sum of the scores corresponding to the data volume of each piece of evaluation text information in the first evaluation text information set as the positive sound volume score of the object to be recognized; acquiring the sum of the scores corresponding to the data volume of each piece of evaluation text information in the second evaluation text information set as the negative sound volume score of the object to be recognized; and acquiring the sum of the positive sound volume score and the negative sound volume score to obtain the sound volume score of the object to be recognized as its sound volume characteristic value.
Positive and negative evaluation text information with the same data volume have different sound volume scores: for positive evaluation text information with a data volume of 100, the corresponding positive sound volume score is 20; for negative evaluation text information with a data volume of 100, the corresponding negative sound volume score is -20.
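A sketch of the sound volume characteristic value follows; the linear mapping from data volume to score (100 words to 20 points, i.e. a factor of 0.2) is inferred from the example above and is an assumption, since the patent fixes only that one correspondence.

```python
# Sound volume characteristic value: positive items add, negative items
# subtract. The 0.2 words-to-score factor is assumed from the 100 -> 20
# example; the real mapping could differ.
def item_volume_score(word_count, positive, factor=0.2):
    score = word_count * factor
    return score if positive else -score

def sound_volume_value(positive_word_counts, negative_word_counts):
    positive_score = sum(item_volume_score(n, True) for n in positive_word_counts)
    negative_score = sum(item_volume_score(n, False) for n in negative_word_counts)
    return positive_score + negative_score
```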
In this embodiment, acquiring the sound volume characteristic value of the object to be recognized enables the target characteristic value to be obtained subsequently from the emotion characteristic value and the sound volume characteristic value together, capturing deeper-level information about the object; this overcomes the low accuracy of portraits generated only from basic information and improves the accuracy of the generated object portrait.
In one embodiment, before the generating the object portrait of the object to be recognized according to the emotion characteristic value, the sound volume characteristic value, and the target characteristic value, step S205 further includes: acquiring tag evaluation text information of the tag information of the object to be recognized; determining the emotion category of the tag evaluation text information; determining the emotion characteristic value of the tag information of the object to be recognized according to the freshness of the object to be recognized and the emotion category of the tag evaluation text information; and acquiring the data volume of the tag evaluation text information, and determining the sound volume characteristic value of the tag information of the object to be recognized according to the data volume and the emotion category.
The tag information of the object to be recognized identifies the individual attributes of the object; for example, if the object to be recognized is a mobile phone, the corresponding tag information may cover the appearance design, body material, color options, screen size, screen type, screen material, camera, battery capacity, storage configurations, and so on.
Specifically, the server acquires the tag entity information of the tag information of the object to be recognized from the preset object entity library, and acquires, from evaluation text information on the network, the evaluation text information matched with the tag entity information as the tag evaluation text information of the tag information. It extracts the feature information from the tag evaluation text information and inputs it into the pre-trained text emotion classification model to obtain the emotion category of the tag evaluation text information. According to these emotion categories, a first tag evaluation text information set belonging to the positive emotion category and a second tag evaluation text information set belonging to the negative emotion category are determined; the quantity difference between the two sets gives the emotion difference value of the tag information, from which the target emotion category and the certainty degree of the tag information are determined. The freshness of the object to be recognized is taken as the freshness of the tag information, and the emotion score of the tag information is obtained from its target emotion category, certainty degree, and freshness as the emotion characteristic value of the tag information; by the same method, the emotion characteristic value of every piece of tag information of the object can be obtained. The server then acquires the sum of the scores corresponding to the data volume of each piece of tag evaluation text information in the first set as the positive sound volume score of the tag information, and the corresponding sum over the second set as the negative sound volume score; the sum of the positive and negative sound volume scores gives the sound volume score of the tag information, which serves as its sound volume characteristic value. By the same method, the sound volume characteristic value of every piece of tag information of the object can be obtained.
It should be noted that the specific calculation of the emotion characteristic value of the tag information follows the calculation of the emotion characteristic value of the object to be recognized described above, and is not repeated here.
Furthermore, the server may weight the emotion characteristic value and the sound volume characteristic value of each piece of tag information of the object to be recognized according to preset weights to obtain the target characteristic value of the object. For example, suppose the object to be recognized has 3 pieces of tag information: tag information a, tag information b, and tag information c, whose emotion characteristic values and sound volume characteristic values are a1, a2, b1, b2, c1, and c2, respectively. If the preset weights corresponding to these 6 characteristic values are 0.1, 0.1, 0.15, 0.15, 0.25, and 0.25, the target characteristic value of the object to be recognized is 0.1 × a1 + 0.1 × a2 + 0.15 × b1 + 0.15 × b2 + 0.25 × c1 + 0.25 × c2.
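A sketch of this multi-tag weighting follows, using the 3-tag example; treating the weights as summing to 1 is an assumption suggested by the example values.

```python
# Weighted target characteristic value over per-tag emotion and sound
# volume characteristic values; the weights below are the example's.
def tag_weighted_target(feature_values, weights):
    assert len(feature_values) == len(weights)
    return sum(w * v for w, v in zip(weights, feature_values))

# Tags a, b, c with values a1, a2 (emotion, volume of a), b1, b2, c1, c2:
# tag_weighted_target([a1, a2, b1, b2, c1, c2],
#                     [0.1, 0.1, 0.15, 0.15, 0.25, 0.25])
```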
In this embodiment, acquiring the emotion characteristic value and the sound volume characteristic value of each piece of tag information of the object to be recognized makes it possible to reflect the feature information of the object from multiple dimensions, further improving the accuracy of the generated object portrait.
In one embodiment, step S205, the generating the object portrait of the object to be recognized according to the emotion characteristic value, the sound volume characteristic value, and the target characteristic value, includes: generating the object portrait of the object to be recognized according to the emotion characteristic value, the sound volume characteristic value, and the target characteristic value of the object to be recognized, and the emotion characteristic value and the sound volume characteristic value of the tag information of the object to be recognized.
For example, the server obtains a preset object portrait template, determines the position labels in the template of the target characteristic value, the emotion characteristic value, the sound volume characteristic value, the positive sound volume score, and the negative sound volume score of product P, and of the emotion characteristic value, the sound volume characteristic value, the positive sound volume score, and the negative sound volume score of its tag information; it then imports each of these values into the corresponding position in the template according to the position labels, generating the object portrait of product P, as shown in FIG. 4.
Further, the server may perform tag semantic analysis on the evaluation text information of the object to be recognized using a BERT model to obtain a set of evaluation text information similar to that of the object; this facilitates content tracing and analytical search for the object to be recognized and lays a data foundation for its object portrait.
In this embodiment, the object portrait of the object to be recognized is generated from the emotion characteristic value, the sound volume characteristic value, and the target characteristic value of the object together with the emotion characteristic value and the sound volume characteristic value of its tag information, which reflects the feature information of the object from multiple dimensions and further improves the accuracy of the generated portrait.
In one embodiment, after the generating the object portrait of the object to be recognized in step S205, the method further includes: receiving a terminal's acquisition request for the object portrait of the object to be recognized; and sending the object portrait of the object to be recognized to the terminal for display according to the acquisition request.
For example, a terminal (such as a smartphone) responds to a user's input on its object portrait query interface to obtain the object to be recognized entered by the user, generates an acquisition request for the object portrait of that object, and sends the request to the corresponding server; the server parses the acquisition request to obtain the object portrait of the object to be recognized and sends it to the terminal, which displays the portrait on its interface so that the user can quickly make a consumption decision.
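A minimal sketch of serving the generated portrait to a terminal over HTTP, assuming a Flask server; the route, the parameter name, and the in-memory store are illustrative, not details of the patent.

```python
# Hypothetical HTTP endpoint: the terminal requests a portrait by object
# identifier, and the server returns the stored portrait for display.
from flask import Flask, jsonify

app = Flask(__name__)
PORTRAITS = {}  # object identifier -> generated object portrait (dict)

@app.route("/portrait/<object_id>")
def get_portrait(object_id):
    portrait = PORTRAITS.get(object_id)
    if portrait is None:
        return jsonify({"error": "unknown object"}), 404
    return jsonify(portrait)
```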
In another embodiment, as shown in FIG. 5, another object portrait generation method is provided. The method is described below as applied to the server in FIG. 1 and includes the following steps:
Step S501, obtaining the evaluation text information of the object to be recognized and the tag evaluation text information of the tag information of the object to be recognized.
Step S502, determining the emotion category of the evaluation text information of the object to be recognized.
Step S503, acquiring the freshness of the object to be recognized, and determining the emotion characteristic value of the object to be recognized according to the freshness of the object to be recognized and the emotion category of the evaluation text information.
Step S504, acquiring the data volume of the evaluation text information, and determining the sound volume characteristic value of the object to be recognized according to the data volume and the emotion category of the evaluation text information.
Step S505, determining the emotion category of the tag evaluation text information.
Step S506, determining the emotion characteristic value of the tag information of the object to be recognized according to the freshness of the object to be recognized and the emotion category of the tag evaluation text information.
Step S507, acquiring the data volume of the tag evaluation text information, and determining the sound volume characteristic value of the tag information of the object to be recognized according to the data volume and the emotion category of the tag evaluation text information.
Step S508, weighting the emotion characteristic value and the sound volume characteristic value to obtain the target characteristic value of the object to be recognized, and generating the object portrait of the object to be recognized according to the emotion characteristic value, the sound volume characteristic value, and the target characteristic value of the object to be recognized and the emotion characteristic value and the sound volume characteristic value of the tag information of the object to be recognized.
Step S509, sending the object portrait of the object to be recognized to the corresponding terminal for display.
With this object portrait generation method, the deeper-level information carried by the emotion characteristic value, the sound volume characteristic value, and the target characteristic value of the object to be recognized makes it possible to locate the feature information of the object accurately, overcoming the low accuracy of portraits generated only from basic information and thereby improving the accuracy of the generated object portrait.
In one embodiment, as shown in FIG. 6, the present application further provides an application scenario applying the above object portrait generation method. Specifically, the method is applied in this scenario as follows:
The server acquires massive information data and performs AI entity recognition on it according to the entity data of brands, products, and tags, obtaining the information data corresponding to that entity data. It performs AI emotion analysis on the information data corresponding to each brand and its tag entity data to obtain the brand's network-wide comprehensive score, network-wide emotion score, total sound volume score, positive sound volume score, negative sound volume score, and, per dimension tag, the tag score with its total, positive, and negative sound volume scores; it performs AI tag semantic analysis on the same information data to obtain the semantic traceability of the evaluated tag content; and it constructs the brand portrait from all of these. Similarly, the server performs AI emotion analysis on the information data corresponding to each product and its tag entity data to obtain the product's network-wide comprehensive score, network-wide emotion score, total, positive, and negative sound volume scores, and dimension tag scores with their sound volume scores; performs AI tag semantic analysis to obtain the semantic traceability of the evaluated tag content; and constructs the product portrait accordingly. In this way, a user can match the portrait features of a brand portrait or product portrait against their own consumption demands and quickly and accurately make a purchasing decision.
In one embodiment, as shown in fig. 7, the present application further provides another application scenario for the above object portrait generation method. Specifically, the method is applied in this scenario as follows:
In the product portrait generation flow, the server acquires the products under a category and their product attribute labels, and performs AI entity recognition on whole-network information articles and comments accordingly, obtaining the whole-network information articles and comments for each product and product attribute label under the category. These are then processed by a sentiment analysis algorithm model, such as a trained Bayesian machine learning model, to obtain the sentiment scores and whole-network comprehensive scores of the products and product attribute labels, together with their positive, negative and total evaluation sound volumes, from which the product portrait is generated. The brand portrait generation flow is analogous: the server acquires the brands under the category and the common attribute labels of their brand products, performs AI entity recognition on the whole-network information articles and comments accordingly, and applies the same kind of sentiment analysis model to obtain the sentiment scores, whole-network comprehensive scores, and positive, negative and total evaluation sound volumes of each brand and each common attribute label, from which the brand portrait is generated. Finally, the server exposes the portrait characteristic big data, such as the whole-network comprehensive scores of brands and products, as API services for application business ends to use and display.
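One conventional way to realize the Bayesian sentiment analysis model mentioned above is sketched with scikit-learn; the toy corpus, the TF-IDF features and the naive Bayes variant are illustrative assumptions, not the patented training procedure:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training corpus; a production model would be trained on labelled whole-network comments.
train_texts = ["battery life is excellent", "screen broke after a week"]
train_labels = ["positive", "negative"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

print(model.predict(["the camera is excellent"]))  # expected: ['positive']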
It should be understood that although the steps in the flowcharts of figs. 2 and 5 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited to the illustrated order, and they may be performed in other orders. Moreover, at least some of the steps in figs. 2 and 5 may comprise multiple sub-steps or stages, which need not be completed at the same moment but may be performed at different times, and which need not be executed sequentially but may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 8, an apparatus for generating an object portrait is provided, including: a text information acquisition module 810, an emotion category determination module 820, an emotion characteristic value determination module 830, a sound volume characteristic value determination module 840, and an object portrait generation module 850, wherein:
The text information acquisition module 810 is configured to acquire evaluation text information of the object to be recognized.
The emotion category determination module 820 is configured to determine the emotion category of the evaluation text information of the object to be recognized.
The emotion characteristic value determination module 830 is configured to acquire the recency of the object to be recognized and to determine the emotion characteristic value of the object to be recognized according to the recency and the emotion category of the evaluation text information.
The sound volume characteristic value determination module 840 is configured to acquire the data volume of the evaluation text information and to determine the sound volume characteristic value of the object to be recognized according to the data volume and the emotion category of the evaluation text information.
The object portrait generation module 850 is configured to weight the emotion characteristic value and the sound volume characteristic value to obtain a target characteristic value of the object to be recognized, and to generate an object portrait of the object to be recognized according to the emotion characteristic value, the sound volume characteristic value and the target characteristic value.
In one embodiment, the text information acquisition module 810 is further configured to acquire object entity information of the object to be recognized from a preset object entity library, and to take the evaluation text information on the network that matches the object entity information as the evaluation text information of the object to be recognized.
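A minimal sketch of this matching step is shown below, assuming the preset object entity library maps each object to a list of aliases and that matching is simple case-insensitive substring containment:

# The library contents and the matching rule are illustrative assumptions.
OBJECT_ENTITY_LIBRARY = {
    "PhoneX": ["PhoneX", "Phone X"],
    "LaptopY": ["LaptopY"],
}

def match_evaluations(object_id, evaluations):
    """Keep only the network evaluation texts that mention the object's entity information."""
    aliases = OBJECT_ENTITY_LIBRARY.get(object_id, [])
    return [text for text in evaluations
            if any(alias.lower() in text.lower() for alias in aliases)]

reviews = ["PhoneX battery dies fast", "great weather today"]
print(match_evaluations("PhoneX", reviews))  # keeps only the first review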
In one embodiment, the emotion category determination module 820 is further configured to extract the characteristic information in the evaluation text information of the object to be recognized, and to input that characteristic information into a pre-trained text emotion classification model to obtain the emotion category of the evaluation text information of the object to be recognized.
In one embodiment, the emotion characteristic value determination module 830 is further configured to determine, from the evaluation text information according to its emotion categories, a first evaluation text information set belonging to the positive emotion category and a second evaluation text information set belonging to the negative emotion category; to take the difference between the numbers of evaluation texts in the first and second sets as the emotion difference value of the object to be recognized; to determine the target emotion category and the certainty degree of the object to be recognized from the emotion difference value; and to obtain the emotion score of the object to be recognized from the target emotion category, the certainty degree and the recency, as the emotion characteristic value of the object to be recognized.
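The sketch below shows one possible reading of this computation; the exact mapping from the emotion difference value, certainty degree and recency to the emotion score is an assumed formula, since the embodiment does not fix one:

def emotion_characteristic_value(categories, recency):
    """categories: the emotion category of each evaluation text; recency in [0, 1]."""
    pos = sum(1 for c in categories if c == "positive")  # size of the first set
    neg = sum(1 for c in categories if c == "negative")  # size of the second set
    diff = pos - neg                                     # emotion difference value
    target = "positive" if diff >= 0 else "negative"     # target emotion category
    certainty = abs(diff) / max(pos + neg, 1)            # certainty degree
    sign = 1 if target == "positive" else -1
    return sign * certainty * recency                    # emotion score

print(emotion_characteristic_value(["positive", "positive", "negative"], recency=0.8))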
In one embodiment, the sound volume characteristic value determination module 840 is further configured to take the sum of the scores corresponding to the data volumes of the evaluation texts in the first evaluation text information set as the positive sound volume score of the object to be recognized; to take the sum of the scores corresponding to the data volumes of the evaluation texts in the second evaluation text information set as the negative sound volume score of the object to be recognized; and to take the sum of the positive and negative sound volume scores as the sound volume score, i.e. the sound volume characteristic value, of the object to be recognized.
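A corresponding sketch for the sound volume characteristic value, assuming the score corresponding to a text's data volume is simply its character length:

def volume_characteristic_value(evaluations):
    """evaluations: list of (text, sentiment) pairs; returns (total, positive, negative)."""
    pos_score = sum(len(t) for t, s in evaluations if s == "positive")  # positive sound volume score
    neg_score = sum(len(t) for t, s in evaluations if s == "negative")  # negative sound volume score
    return pos_score + neg_score, pos_score, neg_score

total, pos, neg = volume_characteristic_value([("great phone", "positive"), ("bad battery", "negative")])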
In one embodiment, the apparatus for generating an object portrait further includes a label information processing module, configured to acquire label evaluation text information of the label information of the object to be recognized; determine the emotion category of the label evaluation text information; determine the emotion characteristic value of the label information of the object to be recognized according to the recency of the object to be recognized and the emotion category of the label evaluation text information; and acquire the data volume of the label evaluation text information and determine the sound volume characteristic value of the label information of the object to be recognized according to that data volume and emotion category.
In one embodiment, the object portrait generation module 850 is further configured to generate the object portrait of the object to be recognized according to the emotion characteristic value, the sound volume characteristic value and the target characteristic value of the object to be recognized, together with the emotion characteristic value and the sound volume characteristic value of the label information of the object to be recognized.
In one embodiment, the apparatus further includes an object portrait sending module, configured to receive a terminal's request to acquire the object portrait of the object to be recognized, and to send the object portrait to the terminal for display according to the request.
For specific limitations of the object portrait generation apparatus, reference may be made to the limitations of the object portrait generation method above, which are not repeated here. Each module in the apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware form in, or independent of, a processor in the computer device, or stored in software form in a memory in the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server whose internal structure is shown in fig. 9. The computer device includes a processor, a memory and a network interface connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for running the operating system and the computer program in the nonvolatile storage medium. The database of the computer device stores data such as emotion characteristic values, sound volume characteristic values and object portraits. The network interface of the computer device communicates with external terminals over a network connection. The computer program, when executed by the processor, implements a method of generating an object portrait.
Those skilled in the art will appreciate that the structure shown in fig. 9 is merely a block diagram of part of the structure related to the present application and does not limit the computer devices to which the present application may be applied; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, a computer device is further provided, including a memory and a processor, the memory storing a computer program; the processor, when executing the computer program, implements the steps of the above method embodiments.
In an embodiment, a computer-readable storage medium is provided, in which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a nonvolatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of nonvolatile and volatile memory. Nonvolatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, or optical memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
For the sake of brevity, not all possible combinations of the technical features in the above embodiments are described; nevertheless, any combination of these technical features should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the patent application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (10)

1. A method of generating an object portrait, the method comprising:
obtaining evaluation text information of an object to be recognized;
determining the emotion category of the evaluation text information of the object to be recognized;
acquiring the recency of the object to be recognized, and determining the emotion characteristic value of the object to be recognized according to the recency of the object to be recognized and the emotion category of the evaluation text information, wherein the recency represents the update time length of the object to be recognized;
acquiring the data volume of the evaluation text information, and determining the sound volume characteristic value of the object to be recognized according to the data volume of the evaluation text information and the emotion category, further comprising: obtaining a sound volume score of each piece of evaluation text information according to its data volume and emotion category, and adding the sound volume scores of the pieces of evaluation text information to obtain the sound volume characteristic value of the object to be recognized, wherein the data volume represents the length of the evaluation text information and the sound volume characteristic value represents a score obtained from analysis along the sound volume dimension;
and weighting the emotion characteristic value and the sound volume characteristic value to obtain a target characteristic value of the object to be recognized, and generating an object portrait of the object to be recognized according to the emotion characteristic value, the sound volume characteristic value and the target characteristic value.
2. The method according to claim 1, wherein the obtaining of the evaluation text information of the object to be recognized comprises:
acquiring object entity information of the object to be recognized from a preset object entity library;
and acquiring, from the evaluation text information on the network, the evaluation text information matched with the object entity information as the evaluation text information of the object to be recognized.
3. The method according to claim 1, wherein the determining of the emotion category of the evaluation text information of the object to be recognized comprises:
extracting the characteristic information in the evaluation text information of the object to be recognized;
and inputting the characteristic information in the evaluation text information of the object to be recognized into a pre-trained text emotion classification model to obtain the emotion category of the evaluation text information of the object to be recognized.
4. The method according to claim 1, wherein the determining of the emotion characteristic value of the object to be recognized according to the recency of the object to be recognized and the emotion category of the evaluation text information comprises:
determining, from the evaluation text information according to its emotion categories, a first evaluation text information set belonging to a positive emotion category and a second evaluation text information set belonging to a negative emotion category;
acquiring the difference between the number of evaluation texts in the first evaluation text information set and the number in the second evaluation text information set to obtain the emotion difference value of the object to be recognized;
determining the target emotion category and the certainty degree of the object to be recognized according to the emotion difference value of the object to be recognized;
and obtaining the emotion score of the object to be recognized according to the target emotion category, the certainty degree and the recency of the object to be recognized, as the emotion characteristic value of the object to be recognized.
5. The method according to claim 4, wherein the determining of the sound volume characteristic value of the object to be recognized according to the data volume of the evaluation text information and the emotion category comprises:
acquiring the sum of the scores corresponding to the data volumes of the evaluation texts in the first evaluation text information set as the positive sound volume score of the object to be recognized;
acquiring the sum of the scores corresponding to the data volumes of the evaluation texts in the second evaluation text information set as the negative sound volume score of the object to be recognized;
and acquiring the sum of the positive sound volume score and the negative sound volume score to obtain the sound volume score of the object to be recognized, as the sound volume characteristic value of the object to be recognized.
6. The method according to any one of claims 1 to 5, further comprising, before generating the object portrait of the object to be recognized according to the emotion characteristic value, the sound volume characteristic value and the target characteristic value:
acquiring label evaluation text information of the label information of the object to be recognized;
determining the emotion category of the label evaluation text information;
determining the emotion characteristic value of the label information of the object to be recognized according to the recency of the object to be recognized and the emotion category of the label evaluation text information;
acquiring the data volume of the label evaluation text information, and determining the sound volume characteristic value of the label information of the object to be recognized according to the data volume of the label evaluation text information and the emotion category;
wherein the generating of the object portrait of the object to be recognized according to the emotion characteristic value, the sound volume characteristic value and the target characteristic value comprises:
generating the object portrait of the object to be recognized according to the emotion characteristic value, the sound volume characteristic value and the target characteristic value of the object to be recognized, and the emotion characteristic value and the sound volume characteristic value of the label information of the object to be recognized.
7. The method according to claim 6, further comprising, after generating the object portrait of the object to be recognized according to the emotion characteristic value, the sound volume characteristic value and the target characteristic value:
receiving a request from a terminal to acquire the object portrait of the object to be recognized;
and sending the object portrait of the object to be recognized to the terminal for display according to the acquisition request.
8. An apparatus for generating an object portrait, the apparatus comprising:
a text information acquisition module, configured to acquire evaluation text information of an object to be recognized;
an emotion category determination module, configured to determine the emotion category of the evaluation text information of the object to be recognized;
an emotion characteristic value determination module, configured to acquire the recency of the object to be recognized and determine the emotion characteristic value of the object to be recognized according to the recency of the object to be recognized and the emotion category of the evaluation text information, wherein the recency represents the update time length of the object to be recognized;
a sound volume characteristic value determination module, configured to acquire the data volume of the evaluation text information and determine the sound volume characteristic value of the object to be recognized according to the data volume of the evaluation text information and the emotion category, wherein the data volume represents the length of the evaluation text information and the sound volume characteristic value represents a score obtained from analysis along the sound volume dimension;
the sound volume characteristic value determination module being further configured to obtain a sound volume score of each piece of evaluation text information according to its data volume and emotion category, and to add the sound volume scores of the pieces of evaluation text information to obtain the sound volume characteristic value of the object to be recognized;
and an object portrait generation module, configured to weight the emotion characteristic value and the sound volume characteristic value to obtain a target characteristic value of the object to be recognized, and to generate an object portrait of the object to be recognized according to the emotion characteristic value, the sound volume characteristic value and the target characteristic value.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202010479442.4A 2020-05-29 2020-05-29 Object portrait generation method and device, computer equipment and storage medium Active CN111506733B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010479442.4A CN111506733B (en) 2020-05-29 2020-05-29 Object portrait generation method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111506733A CN111506733A (en) 2020-08-07
CN111506733B (en) 2022-06-28

Family

ID=71870324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010479442.4A Active CN111506733B (en) 2020-05-29 2020-05-29 Object portrait generation method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111506733B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114201516B (en) * 2020-09-03 2024-06-11 腾讯科技(深圳)有限公司 User portrait construction method, information recommendation method and related devices
CN114626356A (en) * 2020-12-08 2022-06-14 腾讯科技(深圳)有限公司 Article feature generation method, device, equipment and storage medium
CN113139838A (en) * 2021-05-10 2021-07-20 上海华客信息科技有限公司 Hotel service evaluation method, system, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107767195A (en) * 2016-08-16 2018-03-06 阿里巴巴集团控股有限公司 The display systems and displaying of description information, generation method and electronic equipment
WO2018045910A1 (en) * 2016-09-09 2018-03-15 阿里巴巴集团控股有限公司 Sentiment orientation recognition method, object classification method and data processing system
CN109522412A (en) * 2018-11-14 2019-03-26 北京神州泰岳软件股份有限公司 Text emotion analysis method, device and medium
CN110751533A (en) * 2019-09-09 2020-02-04 上海陆家嘴国际金融资产交易市场股份有限公司 Product portrait generation method and device, computer equipment and storage medium
CN110795554A (en) * 2019-10-29 2020-02-14 北京字节跳动网络技术有限公司 Target information analysis method, device, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107666649A (en) * 2016-12-29 2018-02-06 平安科技(深圳)有限公司 Personal property state evaluating method and device


Similar Documents

Publication Publication Date Title
CN111506733B (en) Object portrait generation method and device, computer equipment and storage medium
US20180336193A1 (en) Artificial Intelligence Based Method and Apparatus for Generating Article
US20150278350A1 (en) Recommendation System With Dual Collaborative Filter Usage Matrix
CN107145536B (en) User portrait construction method and device and recommendation method and device
US20160203191A1 (en) Recommendation system with metric transformation
WO2020019564A1 (en) Search ranking method and apparatus, electronic device and storage medium
CN109800325A (en) Video recommendation method, device and computer readable storage medium
CN111782947B (en) Search content display method and device, electronic equipment and storage medium
CN110008397B (en) Recommendation model training method and device
CN110163376B (en) Sample detection method, media object identification method, device, terminal and medium
WO2021114936A1 (en) Information recommendation method and apparatus, electronic device and computer readable storage medium
CN112364204A (en) Video searching method and device, computer equipment and storage medium
CN111507285A (en) Face attribute recognition method and device, computer equipment and storage medium
CN110598084A (en) Object sorting method, commodity sorting device and electronic equipment
CN109308332B (en) Target user acquisition method and device and server
US20150278907A1 (en) User Inactivity Aware Recommendation System
WO2015153240A1 (en) Directed recommendations
CN113657087A (en) Information matching method and device
CN116225956A (en) Automated testing method, apparatus, computer device and storage medium
CN115758271A (en) Data processing method, data processing device, computer equipment and storage medium
US20210056149A1 (en) Search system, search method, and program
CN114677176A (en) Method and device for recommending interest content, electronic equipment and storage medium
CN113158037A (en) Object-oriented information recommendation method and device
CN114139031B (en) Data classification method, device, electronic equipment and storage medium
CN116881544A (en) Financial product information pushing method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant