CN110766502A - Commodity evaluation method and system - Google Patents


Info

Publication number
CN110766502A
CN110766502A
Authority
CN
China
Prior art keywords
expression
user
image
expression vector
commodity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810847472.9A
Other languages
Chinese (zh)
Inventor
黄月红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201810847472.9A
Publication of CN110766502A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0623Item investigation
    • G06Q30/0625Directed, with specific intent or strategy

Abstract

The disclosure provides a commodity evaluation method and system, relating to the field of the internet. The method comprises the following steps: acquiring a facial image of a user; determining a corresponding user expression vector group based on the facial image; and outputting a corresponding commodity evaluation based on the user expression vector group. The method and system can reduce the complexity of commodity evaluation for the user and thereby increase the user's enthusiasm for evaluating commodities.

Description

Commodity evaluation method and system
Technical Field
The disclosure relates to the field of internet, in particular to a commodity evaluation method and system.
Background
After a user purchases a commodity, the typical flow of an online shopping platform first asks the user to rate individual items, for example whether the commodity matches its description, the logistics service, and the service attitude, each on a scale of one star to five stars. After rating, the user can fill in a written description and then attach pictures of the commodity taken by the user.
Although many mechanisms exist to encourage reviews, a large number of users still post no text or picture reviews after purchasing a commodity. There may be many reasons for this. First, commenting is a time-consuming and effortful matter for some users, and many people do not consider themselves good writers. Second, the incentive mechanisms are not attractive enough: many people care little about rewards and far more about their own time and private space. Third, some users simply dislike commenting.
Disclosure of Invention
The technical problem to be solved by the present disclosure is to provide a commodity evaluation method and system that can reduce the complexity of evaluating a commodity for the user and thereby increase the user's enthusiasm for evaluating commodities.
According to an aspect of the present disclosure, a commodity evaluation method is provided, including: acquiring a face image of a user; determining a corresponding set of user expression vectors based on the facial image; and outputting corresponding commodity evaluation based on the user expression vector group.
Optionally, determining the corresponding set of user expression vectors based on the facial image comprises: and determining a user expression vector group corresponding to the facial image based on the expression vector neural network model.
Optionally, determining the user expression vector group corresponding to the facial image based on the expression vector neural network model includes: inputting the facial image into the expression vector neural network model to obtain expression vectors corresponding to N × N unit images and the probability that each unit image contains a face center, wherein the N × N unit images form the facial image and N is a natural number; and taking the combination of the expression vectors corresponding to the unit images whose probability of containing a face center is greater than a probability threshold as the user expression vector group corresponding to the facial image.
Optionally, the method further comprises: acquiring a sample face image; labeling the expression vector corresponding to the sample facial image to generate an expression labeling file; and training the expression vector neural network model based on the sample facial image and the expression annotation file.
Optionally, outputting the corresponding item rating based on the user expression vector group includes: and outputting the commodity evaluation corresponding to the user expression vector group based on a commodity evaluation neural network model, wherein the commodity evaluation comprises character evaluation and comprehensive evaluation.
Optionally, the method further comprises: obtaining sample expression vectors; labeling the text comments and comprehensive scores corresponding to the sample expression vectors to generate an evaluation annotation file; and training the commodity evaluation neural network model based on the sample expression vectors and the evaluation annotation file.
Optionally, the method further comprises: and identifying the number of users in the facial image, wherein the number of users influences the text comment and comprehensive score corresponding to the commodity.
Optionally, the method further comprises: in response to the image style selected by the user, a corresponding expression score map is output based on the facial image of the user.
Optionally, the image style includes at least one of an original image style, a filter style, and an expression fitting style; if the image style is the original image style, the facial image is output as the expression score map; if the image style is the filter style, the facial image is processed by a filter and then output as the expression score map; and if the image style is the expression fitting style, the expression vector group corresponding to the facial image is fitted into a preset image, and the image with the fitted expression is output as the expression score map.
Optionally, the method further comprises: creating at least one item of a user expression album of each user and a commodity expression album of each commodity based on the expression score chart; the user expression album is a set of expression score graphs of each user; the commodity expression album is a set of expression score graphs corresponding to each commodity.
Optionally, the method further comprises: and pushing the expression score graph to the user at a preset time.
According to another aspect of the present disclosure, there is also provided a commodity evaluation system including: a face image acquisition unit for acquiring a face image of a user; an expression vector determination unit for determining a corresponding user expression vector group based on the facial image; and the commodity evaluation determining unit is used for outputting corresponding commodity evaluation based on the user expression vector group.
Optionally, the expression vector determining unit is configured to determine a user expression vector group corresponding to the facial image based on the expression vector neural network model.
Optionally, the expression vector determining unit is configured to input the facial image to an expression vector neural network model, and obtain expression vectors corresponding to N × N unit images and a probability that each unit image includes a face center, where the N × N unit images form a facial image, and N is a natural number; and taking the combination of the expression vectors corresponding to the unit images containing the facial centers with the probability greater than the probability threshold value as a user expression vector group corresponding to the facial image.
Optionally, the commodity evaluation determining unit is configured to output a commodity evaluation corresponding to the user expression vector group based on a commodity evaluation neural network model, where the commodity evaluation includes a text evaluation and a comprehensive evaluation.
Optionally, the system further comprises: and the user number identification unit is used for identifying the number of users in the facial image, wherein the number of users influences the text comment and comprehensive score corresponding to the commodity.
Optionally, the system further comprises: and the expression score graph output unit is used for responding to the image style selected by the user and outputting a corresponding expression score graph based on the facial image of the user.
Optionally, the image style includes at least one of an original image style, a filter style and an expression fitting style; the expression score graph output unit is used for outputting the facial image as an expression score graph if the image style is the original image style; if the image style is a filter style, outputting the facial image as an expression score chart after being processed by a filter; and if the image style is an expression fitting style, fitting an expression vector group corresponding to the facial image in a preset image, and outputting the image with the expression fitted as an expression score map.
According to another aspect of the present disclosure, there is also provided a commodity evaluation system including: a memory; and a processor coupled to the memory, the processor configured to perform the merchandise evaluation method as described above based on the instructions stored in the memory.
According to another aspect of the present disclosure, a computer-readable storage medium is also provided, on which computer program instructions are stored; when executed by a processor, the instructions implement the steps of the commodity evaluation method described above.
Compared with the prior art, the embodiments of the present disclosure determine a corresponding user expression vector group from the user's facial image and then output a corresponding commodity evaluation based on that vector group. This reduces the time the user spends evaluating commodities, increases the user's enthusiasm for evaluating them, and makes commodity evaluation more enjoyable.
Other features of the present disclosure and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The present disclosure may be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:
fig. 1 is a schematic flow chart of an embodiment of the disclosed commodity evaluation method.
Fig. 2 is a schematic flow chart of another embodiment of the disclosed commodity evaluation method.
Fig. 3 is a schematic structural diagram of an embodiment of the commodity evaluation system of the present disclosure.
Fig. 4 is a schematic structural diagram of another embodiment of the commodity evaluation system of the present disclosure.
Fig. 5 is a schematic structural diagram of still another embodiment of the merchandise evaluation system according to the present disclosure.
Fig. 6 is a schematic structural diagram of yet another embodiment of the merchandise evaluation system of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
For the purpose of promoting a better understanding of the objects, aspects and advantages of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings.
Fig. 1 is a schematic flow chart of an embodiment of the disclosed commodity evaluation method.
At step 110, a facial image of the user is acquired. After receiving the commodity, the user can take a self-portrait photograph and upload it to the system. The self-portrait can include various expressions such as a smile, a crying face, or a ghost face, where a smile can represent satisfaction, a crying face dissatisfaction, and a ghost face anger or similar emotions.
At step 120, a corresponding user expression vector group is determined based on the facial image. For example, after the user takes a self-portrait, the photograph is input into the expression vector neural network model, which outputs the corresponding expression vector group. The group can contain one or more expression vectors: if only one user appears in the self-portrait, one expression vector is output; if several users appear, several expression vectors are output.
In one embodiment, the facial image is input into the expression vector neural network model to obtain expression vectors corresponding to N × N unit images and the probability that each unit image contains a face center, wherein the N × N unit images form the facial image and N is a natural number. The combination of the expression vectors corresponding to the unit images whose probability of containing a face center is greater than a probability threshold is taken as the user expression vector group corresponding to the facial image. In other words, the model divides the self-portrait into N × N units and, for each unit, outputs the probability that the unit contains a face center together with the expression vector of that face.
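The thresholding step described above can be sketched in a few lines. This is an illustrative sketch only: the grid size, vector dimension, and threshold value are assumptions, and in practice the probabilities and vectors would come from the trained expression vector neural network rather than hand-written lists.

```python
def select_expression_vectors(face_prob, expr_vectors, threshold=0.5):
    """face_prob: N x N nested list of face-center probabilities.
    expr_vectors: N x N nested list, one expression vector (a list) per cell.
    Returns the user expression vector group: the vectors from cells whose
    probability of containing a face center exceeds the threshold."""
    group = []
    for i, row in enumerate(face_prob):
        for j, p in enumerate(row):
            if p > threshold:
                group.append(expr_vectors[i][j])
    return group

# Toy example: a 3x3 grid in which two cells confidently contain a face center.
probs = [[0.1, 0.9, 0.2],
         [0.0, 0.1, 0.8],
         [0.3, 0.2, 0.1]]
vecs = [[[i, j] for j in range(3)] for i in range(3)]  # dummy 2-dim vectors
group = select_expression_vectors(probs, vecs)
print(group)  # [[0, 1], [1, 2]]: one expression vector per detected face
```

With two faces in the self-portrait, the group contains two vectors, matching the one-vector-per-user behavior described in step 120.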
In one embodiment, other machine learning models and algorithms may also be utilized to determine the user expression vector set.
In step 130, the corresponding product rating is output based on the user expression vector set. The commodity evaluation comprises commodity text comments, comprehensive evaluation and the like.
In one embodiment, the commodity evaluation neural network model can output the text comment and comprehensive score of the commodity corresponding to the user's expression vector group.
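The patent does not disclose the internals of the commodity evaluation neural network model, so the following sketch substitutes a simple nearest-prototype lookup to illustrate the mapping from an expression vector group to a text comment and comprehensive score; the prototype vectors, comment strings, and scores are all hypothetical.

```python
import math

# Hypothetical expression prototypes and the review each one maps to. A real
# system would learn this mapping; nearest-prototype matching merely stands in
# for the trained commodity evaluation neural network.
PROTOTYPES = {
    "smile":   ([1.0, 0.0, 0.0], "Very satisfied with this product!", 5),
    "cry":     ([0.0, 1.0, 0.0], "Disappointed with this purchase.", 1),
    "grimace": ([0.0, 0.0, 1.0], "Not at all what I expected.", 2),
}

def evaluate(expression_group):
    """Map each expression vector to its nearest prototype, collect the text
    comments, and average the individual scores into a comprehensive score."""
    comments, scores = [], []
    for vec in expression_group:
        label = min(PROTOTYPES, key=lambda k: math.dist(vec, PROTOTYPES[k][0]))
        _, text, score = PROTOTYPES[label]
        comments.append(text)
        scores.append(score)
    return comments, sum(scores) / len(scores)

comments, composite = evaluate([[0.9, 0.1, 0.0]])
print(composite)  # 5.0: the vector is closest to the "smile" prototype
```

A vector group with several faces would yield several comments and an averaged comprehensive score, consistent with the multi-user behavior described later.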
In this embodiment, the corresponding user expression vector group is determined from the user's facial image, and the corresponding commodity evaluation is then output based on that group. The user no longer needs to write comments, assign scores, and upload pictures one by one, which shortens the time spent on commodity evaluation, makes evaluating a commodity simpler and more direct, and increases the user's enthusiasm for evaluating commodities.
In one embodiment, the expression vector neural network model may be trained in advance. For example, a sample face image is acquired; labeling the expression vector corresponding to the sample facial image to generate an expression labeling file; and then, training the expression vector neural network model based on the sample facial image and the expression annotation file. And after a subsequent user takes a self-portrait, inputting the self-portrait into the trained expression vector neural network model so as to output a corresponding expression vector group.
In one embodiment, the commodity evaluation neural network model may be trained in advance. For example, sample expression vectors are obtained; the text comments and comprehensive scores corresponding to the sample expression vectors are labeled to generate an evaluation annotation file; and the commodity evaluation neural network model is trained based on the sample expression vectors and the evaluation annotation file. The user expression vector group is subsequently input into the trained commodity evaluation neural network model to output the corresponding commodity text comment and comprehensive score. The commodity evaluation neural network model may use NLP (Natural Language Processing) techniques.
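A possible shape for the evaluation annotation file can be illustrated as follows. The JSON layout, field names, and sample values are assumptions made for illustration; the patent does not specify a file format.

```python
import json

# Hypothetical layout for the evaluation annotation file: each record pairs a
# sample expression vector with its labeled text comment and comprehensive
# score, which together form one training example.
annotation_json = """
[
  {"expression_vector": [1.0, 0.0, 0.0],
   "text_comment": "Great quality, very happy.",
   "composite_score": 5},
  {"expression_vector": [0.0, 1.0, 0.0],
   "text_comment": "Item arrived damaged.",
   "composite_score": 1}
]
"""

records = json.loads(annotation_json)
# Training would iterate over (input vector, label) pairs such as these.
pairs = [(r["expression_vector"], (r["text_comment"], r["composite_score"]))
         for r in records]
print(len(pairs))  # 2 labeled samples
```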
Fig. 2 is a schematic flow chart of another embodiment of the disclosed commodity evaluation method.
At step 210, an image of the user's face is acquired.
In step 220, a user expression vector group corresponding to the facial image is determined based on the expression vector neural network model.
In step 230, the commodity literal comment and the comprehensive score corresponding to the user expression vector group are output based on the commodity evaluation neural network model.
In step 240, in response to the image style selected by the user, the corresponding expression score map is output.
If the user selects the original image style, the user's facial image is output directly as the expression score map. This offers the lowest degree of privacy protection but the best realism, and an expression album built from such images best evokes the user's memories.
If the image style is the filter style, the user's facial image is processed by a filter and then output as the expression score map. For example, the facial image is rendered in a sketch, cartoon, or other style; the degree of privacy protection is low, the realism is high, and the resulting expression album easily evokes the user's memories.
If the image style is the expression fitting style, the expression vectors corresponding to the user's facial features are fitted into a preset image, and the image with the fitted expression is output as the expression score map. For example, the user's expression vectors are fitted onto a cartoon character or another image that has a face. Since the user's facial features are essentially not output, the degree of privacy protection is highest but the realism is lowest; the resulting expression album does not easily evoke the user's memories, and looking back the user may not remember who appeared in the photo at the time.
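The three-way style selection can be sketched as a simple dispatch. The function and helper names here are illustrative; real filter processing and expression fitting would rely on an image-processing library and a face model, which are stubbed out so the sketch runs end to end.

```python
def render_score_map(face_image, style, template_image=None, expr_group=None):
    """Dispatch on the user-selected image style (names are hypothetical)."""
    if style == "original":
        return face_image                    # self-portrait output as-is
    if style == "filter":
        return apply_filter(face_image)      # e.g. sketch or cartoon filter
    if style == "fitting":
        # Fit the expression vector group onto a preset (e.g. cartoon) image.
        return fit_expressions(template_image, expr_group)
    raise ValueError(f"unknown style: {style}")

# Placeholder implementations standing in for real image operations.
def apply_filter(img):
    return {"base": img, "effect": "sketch"}

def fit_expressions(template, group):
    return {"base": template, "expressions": group}

print(render_score_map("selfie.jpg", "original"))  # selfie.jpg
```

The dispatch mirrors the privacy trade-off above: the original branch returns the raw image, while the fitting branch never exposes the face image at all.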
The method may further include step 250 of creating a user expression album of each user or a commodity expression album of each commodity based on the expression score map.
The user expression album is the set of a user's expression score maps; the user can choose to save self-portrait photographs with expressions to build up this album.
The commodity expression album is the set of expression score maps corresponding to a commodity. Before purchasing the commodity, a user can preview the commodity's expression album and see at a glance how the commodity has been received and its approximate quality, which increases the interest and efficiency of shopping.
The method may further include step 260 of pushing the expression score map to the user at a predetermined time. For example, on a festival or the user's birthday, the system automatically arranges the user's expression self-portraits and the processed expression score maps the user selected at the time into a publication and pushes it to the user, strengthening the emotional link between the user and the platform.
In this embodiment, when evaluating a commodity, the user does not need to perform complex operations such as assigning stars and writing comments; instead, the user simply takes a self-portrait with an expression and uploads it to the system, which automatically outputs the corresponding commodity evaluation. In this way, people who dislike commenting can be encouraged to evaluate commodities by sending an expression, which makes reviewing more enjoyable, increases the activity and approachability of the software, and encourages more users to participate in commodity reviews. In addition, expression score map output mechanisms in different styles, such as sketch and cartoon, can satisfy users' aesthetic and entertainment needs while appropriately protecting their privacy. Finally, pushing the expression score map to the user at a suitable time improves the user's enjoyment and loyalty and strengthens the emotional bond between the platform and the user.
In one embodiment, the number of users in the facial image may also be identified; the number of users affects the comprehensive score and text comment corresponding to the commodity. For example, if the user feels a commodity is particularly poor, the user can take a ghost-face photograph together with family and friends, and the evaluation score generated by the system will be particularly low. Likewise, if the user feels a commodity is good and wants to recommend it to more people, the user can take a smiling group photograph with family and friends.
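One plausible way to model the influence of the number of users is to amplify the deviation of the base score from a neutral rating as more faces appear in the photo. The formula below is purely illustrative; the patent states only that the number of users affects the score, not how.

```python
def group_adjusted_score(base_score, num_users, max_shift=1.0):
    """Hypothetical weighting: more faces push the score further in its own
    direction (good reviews become better, bad ones worse), capped so the
    result stays within the 1-5 star range."""
    neutral = 3.0
    # Each extra participant amplifies the deviation from a neutral rating.
    boost = min(max_shift, 0.25 * (num_users - 1))
    shift = boost if base_score >= neutral else -boost
    return max(1.0, min(5.0, base_score + shift))

print(group_adjusted_score(4.0, 1))  # 4.0: single user, no amplification
print(group_adjusted_score(4.0, 5))  # 5.0: a group smile strengthens the rating
print(group_adjusted_score(2.0, 5))  # 1.0: a group ghost face lowers it further
```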
In this embodiment, the more people participate in an expression review, the stronger its effect. To a certain extent this encourages users to invite relatives and friends to take self-portraits together, which enhances users' enjoyment, increases the number of users on the platform, improves the platform's brand image, and strengthens users' attachment and loyalty to the platform.
In practical applications, shops can launch smiling-face and ghost-face campaigns so that more users participate, promoting commodity sales.
Fig. 3 is a schematic structural diagram of an embodiment of the commodity evaluation system of the present disclosure. The product evaluation system includes a face image acquisition unit 310, an expression vector determination unit 320, and a product evaluation determination unit 330.
The face image acquisition unit 310 is used to acquire a facial image of the user. After receiving the commodity, the user can take a self-portrait photograph and upload it to the system. The self-portrait can include various expressions such as a smile, a crying face, or a ghost face, where a smile can represent satisfaction, a crying face dissatisfaction, and a ghost face anger or similar emotions.
The expression vector determination unit 320 is configured to determine a corresponding set of user expression vectors based on the facial image. For example, a user expression vector group corresponding to the facial image is determined based on the expression vector neural network model. For example, inputting a facial image into an expression vector neural network model, and obtaining expression vectors corresponding to N × N unit images and a probability that each unit image contains a face center, wherein the N × N unit images form a facial image, and N is a natural number; and taking the combination of the expression vectors corresponding to the unit images containing the facial centers with the probability greater than the probability threshold value as a user expression vector group corresponding to the facial image.
Wherein, the expression vector neural network model can be trained in advance. For example, a sample face image is acquired; labeling the expression vector corresponding to the sample facial image to generate an expression labeling file; and then, training the expression vector neural network model based on the sample facial image and the expression annotation file. And after a subsequent user takes a self-portrait, inputting the self-portrait into the trained expression vector neural network model so as to output a corresponding expression vector.
The commodity evaluation determination unit 330 is configured to output a corresponding commodity evaluation based on the user expression vector group. For example, the text comment and the comprehensive score of the commodity corresponding to the user expression vector group are output based on the commodity evaluation neural network model.
The commodity evaluation neural network model may be trained in advance. For example, sample expression vectors are obtained; the text comments and comprehensive scores corresponding to the sample expression vectors are labeled to generate an evaluation annotation file; and the commodity evaluation neural network model is trained based on the sample expression vectors and the evaluation annotation file. The user expression vector group is subsequently input into the trained commodity evaluation neural network model to output the corresponding commodity text comment and comprehensive score.
In the embodiment, the corresponding user expression vector group is determined by using the facial image of the user, and then the corresponding commodity evaluation is output based on the user expression vector group, so that the user does not need to write comments, score, upload pictures and other operations one by one, the commodity evaluation complexity of the user is reduced, and the commodity evaluation by the user is facilitated to be carried out simply and directly.
In another embodiment of the present disclosure, as shown in fig. 4, the merchandise evaluation system further includes an expression score map output unit 410 for outputting a corresponding expression score map based on a facial image of the user in response to the image style selected by the user.
If the user selects the original image style, the user's facial image is output directly as the expression score map. This offers the lowest degree of privacy protection but the best realism, and an expression album built from such images best evokes the user's memories.
If the image style is the filter style, the user's facial image is processed by a filter and then output as the expression score map. For example, the facial image is rendered in a sketch, cartoon, or other style; the degree of privacy protection is low, the realism is high, and the resulting expression album easily evokes the user's memories.
If the image style is the expression fitting style, the expression vectors corresponding to the user's facial features are fitted into a preset image, and the image with the fitted expression is output as the expression score map. Since the user's facial features are essentially not output, the degree of privacy protection is highest but the realism is lowest; the resulting expression album does not easily evoke the user's memories, and looking back the user may not remember who appeared in the photo at the time.
In the embodiment, the aesthetic entertainment requirements of people can be met through expression score graph output mechanisms of different styles such as sketch and cartoon, and the privacy of users can be properly protected.
In another embodiment of the present disclosure, the merchandise evaluation system further includes a user number recognition unit 420 for recognizing the number of users in the facial image, where the number of users affects the comprehensive score corresponding to the commodity. That is, the more users appear in the self-portrait, the more strongly the evaluation of the commodity is affected. For example, if the user feels a commodity is particularly poor, the user can take a ghost-face photograph together with family and friends, and the evaluation score generated by the system will be particularly low. Likewise, if the user feels a commodity is good and wants to recommend it to more people, the user can take a smiling group photograph with family and friends.
In this embodiment, the more people participate in an expression review, the stronger its effect. To a certain extent this encourages users to invite relatives and friends to take self-portraits together, which enhances users' enjoyment, increases the number of users on the platform, improves the platform's brand image, and strengthens users' attachment and loyalty to the platform.
Fig. 5 is a schematic structural diagram of still another embodiment of the merchandise evaluation system according to the present disclosure. The merchandise evaluation system includes a memory 510 and a processor 520, wherein:
the memory 510 may be a magnetic disk, flash memory, or any other non-volatile storage medium. The memory is used for storing instructions in the embodiments corresponding to fig. 1 and 2. Processor 520 is coupled to memory 510 and may be implemented as one or more integrated circuits, such as a microprocessor or microcontroller. The processor 520 is configured to execute instructions stored in memory.
In one embodiment, as also shown in FIG. 6, the merchandise evaluation system 600 includes a memory 610 and a processor 620. Processor 620 is coupled to memory 610 through a BUS 630. The merchandise evaluation system 600 may be further connected to an external storage device 650 through the storage interface 640 to access external data, and may be further connected to a network or another computer system (not shown) through the network interface 660, which will not be described in detail herein.
In this embodiment, storing the data instructions in the memory and processing them with the processor reduces the complexity of evaluating a product for the user.
In another embodiment, a computer-readable storage medium has stored thereon computer program instructions which, when executed by a processor, implement the steps of the method in the corresponding embodiments of fig. 1, 2. As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, apparatus, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Thus far, the present disclosure has been described in detail. Some details that are well known in the art have not been described in order to avoid obscuring the concepts of the present disclosure. It will be fully apparent to those skilled in the art from the foregoing description how to practice the presently disclosed embodiments.
Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the foregoing examples are for purposes of illustration only and are not intended to limit the scope of the present disclosure. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the present disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (20)

1. A merchandise evaluation method, comprising:
acquiring a face image of a user;
determining a corresponding set of user expression vectors based on the facial image;
and outputting corresponding commodity evaluation based on the user expression vector group.
2. The merchandise evaluation method of claim 1, wherein determining a corresponding set of user expression vectors based on the facial image comprises:
and determining a user expression vector group corresponding to the facial image based on an expression vector neural network model.
3. The merchandise evaluation method of claim 2, wherein determining the user expression vector group to which the facial image corresponds based on an expression vector neural network model comprises:
inputting the facial image into the expression vector neural network model, and obtaining expression vectors corresponding to N unit images and the probability that each unit image contains a face center, wherein the N unit images form the facial image, and N is a natural number;
and taking the combination of the expression vectors corresponding to the unit images containing the facial centers with the probability greater than the probability threshold value as a user expression vector group corresponding to the facial image.
4. The merchandise evaluation method according to claim 2, further comprising:
acquiring a sample face image;
labeling the expression vector corresponding to the sample facial image to generate an expression labeling file;
and training the expression vector neural network model based on the sample facial image and the expression labeling file.
5. The merchandise evaluation method of claim 1, wherein outputting the corresponding merchandise evaluation based on the user expression vector group comprises:
and outputting the commodity evaluation corresponding to the user expression vector group based on a commodity evaluation neural network model, wherein the commodity evaluation comprises a text comment and a comprehensive score.
6. The merchandise evaluation method according to claim 5, further comprising:
obtaining a sample expression vector;
marking the text comment and the comprehensive score corresponding to the sample expression vector to generate an evaluation marking file;
and training the commodity evaluation neural network model based on the sample expression vector and the evaluation annotation file.
7. The merchandise evaluation method according to claim 5, further comprising:
and identifying the number of users in the facial image, wherein the number of users affects the text comment and comprehensive score corresponding to the commodity.
8. The merchandise evaluation method according to any one of claims 1 to 7, further comprising:
and responding to the image style selected by the user, and outputting a corresponding expression score chart based on the facial image of the user.
9. The merchandise evaluation method of claim 8, wherein the image style comprises at least one of an original image style, a filter style, and an expression fitting style;
if the image style is the original image style, outputting the facial image as an expression score map;
if the image style is a filter style, outputting the facial image as an expression score chart after being processed by a filter;
and if the image style is an expression fitting style, fitting an expression vector group corresponding to the facial image into a preset image, and outputting the image with the expression fitted as an expression score map.
10. The merchandise evaluation method according to claim 8, further comprising:
creating at least one item of a user expression album of each user and a commodity expression album of each commodity based on the expression score map;
the user expression album is a set of expression score graphs of each user;
the commodity expression album is a set of expression score graphs corresponding to each commodity.
11. The merchandise evaluation method according to claim 8, further comprising:
and pushing the expression score graph to the user at preset time.
12. A merchandise evaluation system comprising:
a face image acquisition unit for acquiring a face image of a user;
an expression vector determination unit for determining a corresponding user expression vector group based on the facial image;
and the commodity evaluation determining unit is used for outputting corresponding commodity evaluation based on the user expression vector group.
13. The merchandise evaluation system of claim 12,
the expression vector determining unit is used for determining a user expression vector group corresponding to the facial image based on an expression vector neural network model.
14. The merchandise evaluation system of claim 13,
the expression vector determining unit is used for inputting the facial image into the expression vector neural network model, and obtaining expression vectors corresponding to N unit images and the probability that each unit image contains a face center, wherein the N unit images form the facial image, and N is a natural number; and taking the combination of the expression vectors corresponding to the unit images containing the facial centers with the probability greater than the probability threshold value as a user expression vector group corresponding to the facial image.
15. The merchandise evaluation system of claim 12,
the commodity evaluation determining unit is used for outputting the commodity evaluation corresponding to the user expression vector group based on a commodity evaluation neural network model, wherein the commodity evaluation comprises a text comment and a comprehensive score.
16. The merchandise evaluation system of claim 15, further comprising:
and the user number identification unit is used for identifying the number of users in the facial image, wherein the number of users influences the text comment and comprehensive score corresponding to the commodity.
17. The merchandise evaluation system of any of claims 12-16, further comprising:
and the expression score graph output unit is used for responding to the image style selected by the user and outputting a corresponding expression score graph based on the facial image of the user.
18. The merchandise evaluation system of claim 17, wherein the image style comprises at least one of an original image style, a filter style, and an expression fitting style;
the expression score graph output unit is used for outputting the facial image as an expression score graph if the image style is an original image style; if the image style is a filter style, outputting the facial image as an expression score chart after being processed by a filter; and if the image style is an expression fitting style, fitting an expression vector group corresponding to the facial image in a preset image, and outputting the image with the expression fitted as an expression score map.
19. A merchandise evaluation system comprising:
a memory; and
a processor coupled to the memory, the processor configured to perform the merchandise evaluation method of any of claims 1-11 based on instructions stored in the memory.
20. A computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, carry out the steps of the merchandise evaluation method of any one of claims 1 to 11.
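The grid-based detection recited in claims 3 and 14 — splitting the facial image into N unit images, predicting an expression vector and a face-center probability for each unit image, and keeping only the expression vectors of cells whose probability exceeds a threshold — can be sketched as follows. The function name, the 0.5 threshold, and the toy model outputs are illustrative assumptions; in practice both arrays would come from the expression vector neural network model:

```python
import numpy as np

def select_expression_vectors(expr_vectors, center_probs, threshold=0.5):
    """Keep the expression vector of every unit image whose predicted
    probability of containing a face center exceeds the threshold.

    expr_vectors : (N, D) array, one D-dim expression vector per unit image
    center_probs : (N,) array, face-center probability per unit image
    Returns the user expression vector group, shape (M, D) with M <= N.
    """
    expr_vectors = np.asarray(expr_vectors)
    center_probs = np.asarray(center_probs)
    keep = center_probs > threshold  # boolean mask over the N unit images
    return expr_vectors[keep]

# Toy outputs for a 3x3 grid (N = 9) of unit images with 4-dim vectors
rng = np.random.default_rng(0)
vectors = rng.random((9, 4))
probs = np.array([0.1, 0.9, 0.2, 0.05, 0.7, 0.1, 0.0, 0.3, 0.1])
group = select_expression_vectors(vectors, probs)  # two cells pass 0.5
```

The resulting group — one expression vector per detected face center — is what the method then feeds to the commodity evaluation neural network model to produce the text comment and comprehensive score.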
CN201810847472.9A 2018-07-27 2018-07-27 Commodity evaluation method and system Pending CN110766502A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810847472.9A CN110766502A (en) 2018-07-27 2018-07-27 Commodity evaluation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810847472.9A CN110766502A (en) 2018-07-27 2018-07-27 Commodity evaluation method and system

Publications (1)

Publication Number Publication Date
CN110766502A true CN110766502A (en) 2020-02-07

Family

ID=69328335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810847472.9A Pending CN110766502A (en) 2018-07-27 2018-07-27 Commodity evaluation method and system

Country Status (1)

Country Link
CN (1) CN110766502A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667337A (en) * 2020-04-28 2020-09-15 苏宁云计算有限公司 Commodity evaluation ordering method and system

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007122533A (en) * 2005-10-31 2007-05-17 Seiko Epson Corp Comment layout for image
CN105049249A (en) * 2015-07-09 2015-11-11 中山大学 Scoring method and system of remote visual conversation services
CN105608447A (en) * 2016-02-17 2016-05-25 陕西师范大学 Method for detecting human face smile expression depth convolution nerve network
US20160275341A1 (en) * 2015-03-18 2016-09-22 Adobe Systems Incorporated Facial Expression Capture for Character Animation
CN107341434A (en) * 2016-08-19 2017-11-10 北京市商汤科技开发有限公司 Processing method, device and the terminal device of video image
CN107423694A (en) * 2017-07-05 2017-12-01 清远初曲智能科技有限公司 A kind of artificial intelligence user image management method and system based on machine vision
CN107563362A (en) * 2017-10-01 2018-01-09 上海量科电子科技有限公司 Evaluate method, client and the system of operation
CN108197595A (en) * 2018-01-23 2018-06-22 京东方科技集团股份有限公司 A kind of method, apparatus, storage medium and computer for obtaining evaluation information
JP2018106419A (en) * 2016-12-26 2018-07-05 大日本印刷株式会社 Marketing apparatus
CN108269169A (en) * 2017-12-29 2018-07-10 武汉璞华大数据技术有限公司 A kind of shopping guide method and system



Similar Documents

Publication Publication Date Title
CN109816441B (en) Policy pushing method, system and related device
US20190051032A1 (en) Personal life story simulation system
CN110378731A (en) Obtain method, apparatus, server and the storage medium of user's portrait
US20150143209A1 (en) System and method for personalizing digital content
CN115735229A (en) Updating avatar garments in messaging systems
CN110688874A (en) Facial expression recognition method and device, readable storage medium and electronic equipment
KR101905501B1 (en) Method and apparatus of recommending contents
CN111339420A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111639979A (en) Entertainment item recommendation method and device
CN115462089A (en) Displaying augmented reality content in messaging applications
JPWO2017125975A1 (en) Makeup trend analyzer, makeup trend analysis method, and makeup trend analysis program
CN109544262A (en) Item recommendation method, device, electronic equipment, system and readable storage medium storing program for executing
CN111625100A (en) Method and device for presenting picture content, computer equipment and storage medium
CN106791091B (en) Image generation method and device and mobile terminal
CN111639613A (en) Augmented reality AR special effect generation method and device and electronic equipment
CN113269895A (en) Image processing method and device and electronic equipment
CN110766502A (en) Commodity evaluation method and system
CN112104914B (en) Video recommendation method and device
JP2009163465A (en) Portrait illustration data providing system
KR20230073153A (en) Method of creating video making platform for users
CN113099267B (en) Video generation method and device, electronic equipment and storage medium
KR101786823B1 (en) Method for providing photo in sns
CN111242714A (en) Product recommendation method and device
KR20010091743A (en) A formation method of an automatic caricature
CN107563465A (en) A kind of system and method for obtaining gift information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination