CN110766502B - Commodity evaluation method and system - Google Patents
Commodity evaluation method and system
- Publication number
- CN110766502B (application CN201810847472.9A)
- Authority
- CN
- China
- Prior art keywords
- expression
- user
- image
- commodity
- style
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0623—Item investigation
- G06Q30/0625—Directed, with specific intent or strategy
Abstract
The disclosure provides a commodity evaluation method and system, relating to the field of the Internet. The method comprises the following steps: acquiring a facial image of a user; determining a corresponding user expression vector group based on the facial image; and outputting a corresponding commodity evaluation based on the user expression vector group. The method and system reduce the effort a user must spend to evaluate a commodity and thereby improve the user's enthusiasm for evaluating commodities.
Description
Technical Field
The disclosure relates to the field of the Internet, and in particular to a commodity evaluation method and system.
Background
After a user purchases a commodity, the typical flow in an online mall first asks for itemized star ratings, for example rating whether the commodity matches its description, the logistics service, and the service attitude, each on a scale of one to five stars. After rating, the user can fill in a text description and then upload photographs of the commodity.
Although many mechanisms exist to encourage reviews, a large number of users still do not post any text or picture reviews after purchasing merchandise. The reasons vary: first, writing a review is time-consuming and effortful for some users, and many people are not comfortable writing; second, the incentive mechanisms are not attractive enough, since many people care less about rewards than about their own time and personal space; third, some users simply dislike commenting.
Disclosure of Invention
The technical problem to be solved by the disclosure is to provide a commodity evaluation method and system that reduce the complexity of evaluating a commodity for the user and thereby improve the user's enthusiasm for evaluating commodities.
According to an aspect of the present disclosure, there is provided a commodity evaluation method including: acquiring a facial image of a user; determining a corresponding user expression vector group based on the facial image; and outputting a corresponding commodity evaluation based on the user expression vector group.
Optionally, determining the corresponding user expression vector group based on the facial image includes: determining the user expression vector group corresponding to the facial image based on an expression vector neural network model.
Optionally, determining the user expression vector group corresponding to the facial image based on the expression vector neural network model includes: inputting the facial image into the expression vector neural network model to obtain expression vectors corresponding to N×N unit images and the probability that each unit image contains a face center, wherein the N×N unit images form the facial image and N is a natural number; and taking the combination of the expression vectors corresponding to those unit images whose probability of containing a face center is greater than a probability threshold as the user expression vector group corresponding to the facial image.
Optionally, the method further comprises: acquiring a sample facial image; labeling the expression vector corresponding to the sample facial image to generate an expression annotation file; and training the expression vector neural network model based on the sample facial image and the expression annotation file.
Optionally, outputting the corresponding commodity evaluation based on the user expression vector group includes: outputting the commodity evaluation corresponding to the user expression vector group based on a commodity evaluation neural network model, wherein the commodity evaluation comprises a text comment and a comprehensive score.
Optionally, the method further comprises: obtaining a sample expression vector; labeling the text comment and the comprehensive score corresponding to the sample expression vector to generate an evaluation annotation file; and training the commodity evaluation neural network model based on the sample expression vector and the evaluation annotation file.
Optionally, the method further comprises: identifying the number of users in the facial image, wherein the number of users influences the text comment and the comprehensive score corresponding to the commodity.
Optionally, the method further comprises: in response to an image style selected by the user, outputting a corresponding expression score map based on the facial image of the user.
Optionally, the image style includes at least one of an original image style, a filter style, and an expression fitting style. If the image style is the original image style, the facial image is output as the expression score map; if the image style is the filter style, the facial image is processed by a filter and then output as the expression score map; if the image style is the expression fitting style, the expression vector group corresponding to the facial image is fitted onto a preset image, and the image with the fitted expression is output as the expression score map.
Optionally, the method further comprises: creating at least one of a user expression album for each user and a commodity expression album for each commodity based on the expression score maps; the user expression album is the collection of a user's expression score maps, and the commodity expression album is the collection of the expression score maps corresponding to a commodity.
Optionally, the method further comprises: pushing the expression score map to the user at a predetermined time.
According to another aspect of the present disclosure, there is also provided a commodity evaluation system including: a facial image acquisition unit configured to acquire a facial image of a user; an expression vector determining unit configured to determine a corresponding user expression vector group based on the facial image; and a commodity evaluation determining unit configured to output a corresponding commodity evaluation based on the user expression vector group.
Optionally, the expression vector determining unit is configured to determine a user expression vector group corresponding to the facial image based on the expression vector neural network model.
Optionally, the expression vector determining unit is configured to input the facial image into the expression vector neural network model to obtain expression vectors corresponding to N×N unit images and the probability that each unit image contains a face center, where the N×N unit images form the facial image and N is a natural number; and to take the combination of the expression vectors corresponding to those unit images whose probability of containing a face center is greater than the probability threshold as the user expression vector group corresponding to the facial image.
Optionally, the commodity evaluation determining unit is configured to output the commodity evaluation corresponding to the user expression vector group based on the commodity evaluation neural network model, wherein the commodity evaluation comprises a text comment and a comprehensive score.
Optionally, the system further comprises: a user number identification unit configured to identify the number of users in the facial image, wherein the number of users influences the text comment and the comprehensive score corresponding to the commodity.
Optionally, the system further comprises: an expression score map output unit configured to output, in response to the image style selected by the user, a corresponding expression score map based on the facial image of the user.
Optionally, the image style includes at least one of an original image style, a filter style, and an expression fitting style. The expression score map output unit is configured to output the facial image as the expression score map if the image style is the original image style; to process the facial image with a filter and output it as the expression score map if the image style is the filter style; and, if the image style is the expression fitting style, to fit the expression vector group corresponding to the facial image onto a preset image and output the image with the fitted expression as the expression score map.
According to another aspect of the present disclosure, there is also provided a commodity evaluation system including: a memory; and a processor coupled to the memory, the processor configured to perform the commodity evaluation method as described above based on instructions stored in the memory.
According to another aspect of the present disclosure, there is also provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the commodity evaluation method described above.
Compared with the prior art, embodiments of the disclosure determine a corresponding user expression vector group from the user's facial image and then output a corresponding commodity evaluation based on that vector group. This reduces the time a user spends evaluating a commodity, improves the user's enthusiasm for evaluating commodities, and makes evaluating commodities more interesting.
Other features of the present disclosure and its advantages will become apparent from the following detailed description of exemplary embodiments of the disclosure, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The disclosure may be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a flow diagram of one embodiment of a commodity evaluation method of the present disclosure.
FIG. 2 is a flow chart of another embodiment of the commodity evaluation method according to the present disclosure.
FIG. 3 is a schematic diagram of one embodiment of a commodity evaluation system according to the present disclosure.
FIG. 4 is a schematic diagram of another embodiment of a commodity evaluation system according to the present disclosure.
Fig. 5 is a schematic diagram of a configuration of yet another embodiment of the commodity evaluation system according to the present disclosure.
Fig. 6 is a schematic diagram of a further embodiment of the commodity evaluation system according to the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless it is specifically stated otherwise.
Meanwhile, it should be understood that, for convenience of description, the sizes of the parts shown in the drawings are not drawn to scale.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but should be considered part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
For the purposes of promoting an understanding of the principles and advantages of the disclosure, reference will now be made to the embodiments illustrated in the drawings and specific language will be used to describe the same.
FIG. 1 is a flow diagram of one embodiment of a commodity evaluation method of the present disclosure.
In step 110, a facial image of a user is acquired. After receiving the commodity, the user can take a selfie and upload it to the system. The selfie may show, for example, a smiling face, a crying face, or a grimace: a smile may indicate satisfaction, a crying face dissatisfaction, and a grimace anger and strong dissatisfaction.
At step 120, a corresponding user expression vector group is determined based on the facial image. For example, after the user takes a selfie, the selfie is input into the expression vector neural network model, which outputs the corresponding expression vector group. The group may contain one or more expression vectors: if only one user is in the selfie, one expression vector is output; if several users are in the selfie, several expression vectors are output.
In one embodiment, the facial image is input into the expression vector neural network model to obtain expression vectors corresponding to N×N unit images and the probability that each unit image contains a face center, where the N×N unit images form the facial image and N is a natural number; the combination of the expression vectors corresponding to those unit images whose probability of containing a face center exceeds the probability threshold is then taken as the user expression vector group corresponding to the facial image. In other words, the model divides the selfie into N×N units, each unit outputs the probability that it is a face center together with the expression vector of that face, and the resulting expression vector group contains one vector per person.
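As a concrete illustration, the following Python sketch shows this decoding step under assumed shapes and names, none of which come from the patent: the model is taken to emit an (N, N) map of face-center probabilities and an (N, N, D) map of expression vectors, and each cell above the threshold contributes one vector to the user expression vector group.

```python
import numpy as np

def decode_expression_vector_group(center_prob, expr_vectors, threshold=0.5):
    """Hypothetical post-processing of the expression vector network's output.

    center_prob:  (N, N) array, probability that each unit image contains a face center
    expr_vectors: (N, N, D) array, expression vector predicted by each unit image
    Returns an array of shape (num_faces, D): the user expression vector group.
    """
    mask = center_prob > threshold   # unit images likely to contain a face center
    return expr_vectors[mask]        # one expression vector per detected face

# Toy example: a 7x7 grid with 16-dimensional expression vectors.
rng = np.random.default_rng(0)
probs = rng.random((7, 7)) * 0.4     # background cells stay below the threshold
probs[2, 3], probs[5, 1] = 0.9, 0.8  # two cells contain face centers
vectors = rng.standard_normal((7, 7, 16))
group = decode_expression_vector_group(probs, vectors)
print(group.shape)                   # (2, 16): two users, one vector each
```

Note that the grid size N, the vector dimension, and the threshold value are all illustrative; the patent fixes none of them.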
In one embodiment, other machine learning models and algorithms may also be utilized to determine the set of user expression vectors.
At step 130, the corresponding commodity evaluation is output based on the user expression vector group. The commodity evaluation includes a text comment on the commodity, a comprehensive score, and the like.
In one embodiment, the commodity evaluation neural network model outputs the text comment, comprehensive score, and the like corresponding to the user expression vector group.
In this embodiment, the corresponding user expression vector group is determined from the user's facial image, and the corresponding commodity evaluation is then output based on that group, so the user no longer needs to write a comment, assign a score, and upload pictures one by one. This reduces the time needed to evaluate a commodity, lets the user evaluate commodities more simply and directly, and improves the user's enthusiasm for evaluating commodities.
In one embodiment, the expression vector neural network model may be trained in advance. For example, a sample facial image is first acquired; the expression vector corresponding to the sample facial image is labeled to generate an expression annotation file; the expression vector neural network model is then trained based on the sample facial image and the expression annotation file. After a user subsequently takes a selfie, the selfie is input into the trained expression vector neural network model, which outputs the corresponding expression vector group.
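A minimal PyTorch training sketch is given below. The architecture, grid size, vector dimension, and annotation format are all assumptions made for illustration, since the patent specifies none of them; the network predicts, for each of the N×N unit images, a face-center probability and an expression vector, and is fitted to the targets from the expression annotation file.

```python
import torch
import torch.nn as nn

N, D = 7, 16  # grid size and expression vector dimension (assumed values)

class ExpressionVectorNet(nn.Module):
    """Assumed architecture: a small CNN whose head predicts, per unit image,
    one face-center probability plus a D-dimensional expression vector."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((N, N)),
        )
        self.head = nn.Conv2d(64, 1 + D, kernel_size=1)

    def forward(self, x):
        out = self.head(self.backbone(x))   # (B, 1+D, N, N)
        prob = torch.sigmoid(out[:, :1])    # face-center probability per cell
        return prob, out[:, 1:]             # expression vectors per cell

model = ExpressionVectorNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on random tensors standing in for a labeled sample and its
# expression annotation file. A real setup would mask the vector loss to cells
# that actually contain a face center.
imgs = torch.randn(4, 3, 224, 224)
tgt_prob = torch.rand(4, 1, N, N).round()
tgt_vec = torch.randn(4, D, N, N)
prob, vec = model(imgs)
loss = nn.BCELoss()(prob, tgt_prob) + nn.MSELoss()(vec, tgt_vec)
opt.zero_grad(); loss.backward(); opt.step()
```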
In one embodiment, the commodity evaluation neural network model may likewise be pre-trained. For example, a sample expression vector is acquired; the text comment and the comprehensive score corresponding to the sample expression vector are labeled to generate an evaluation annotation file; the commodity evaluation neural network model is then trained based on the sample expression vector and the evaluation annotation file. The user expression vector group can afterwards be input into the trained commodity evaluation neural network model to output the corresponding text comment and comprehensive score. The commodity evaluation neural network model can adopt NLP (Natural Language Processing) techniques.
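The sketch below shows one way such an evaluation model could be wired up. It is an assumption-laden simplification, reducing the NLP generation step to choosing among a few template comments, which is not necessarily what the patent intends; the point is only the input/output contract from expression vector group to (comprehensive score, text comment).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

D, NUM_TEMPLATES = 16, 5  # assumed vector dimension and comment-template count

class CommodityEvaluationNet(nn.Module):
    """Toy evaluation model: pools the user expression vector group, then
    predicts a comprehensive score (regression head) and a text comment
    (classification over fixed templates, standing in for NLP generation)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(D, 64), nn.ReLU())
        self.score_head = nn.Linear(64, 1)
        self.comment_head = nn.Linear(64, NUM_TEMPLATES)

    def forward(self, group):                 # group: (num_faces, D)
        h = self.encoder(group).mean(dim=0)   # pool over all faces in the image
        return self.score_head(h), self.comment_head(h)

model = CommodityEvaluationNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a dummy labeled sample from the evaluation annotation
# file: an expression vector group paired with (comprehensive score, comment id).
group = torch.randn(2, D)
tgt_score, tgt_comment = torch.tensor([4.0]), torch.tensor([3])
score, logits = model(group)
loss = F.mse_loss(score, tgt_score) + F.cross_entropy(logits.unsqueeze(0), tgt_comment)
opt.zero_grad(); loss.backward(); opt.step()
```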
FIG. 2 is a flow chart of another embodiment of the commodity evaluation method according to the present disclosure.
In step 210, a facial image of a user is acquired.
In step 220, a set of user expression vectors corresponding to the facial image is determined based on the expression vector neural network model.
In step 230, the commodity text comment and the comprehensive score corresponding to the user expression vector group are output based on the commodity evaluation neural network model.
In step 240, in response to the image style selected by the user, a corresponding expression score map is output.
If the image style selected by the user is the original image style, the user's facial image is output directly as the expression score map. Outputting the selfie directly offers the lowest degree of privacy protection but the best authenticity, and the expression album built from it later is the most evocative for the user.

If the image style is the filter style, the user's facial image is processed by a filter and then output as the expression score map; for example, the facial image can be rendered in different styles such as sketch or cartoon. The degree of privacy protection is low and the authenticity is high, and the resulting expression album still readily evokes the user's memories.

If the image style is the expression fitting style, the expression vector group corresponding to the user's facial image is fitted onto a preset image, and the image with the fitted expression is output as the expression score map; for example, the user's expression vectors are fitted onto a cartoon character or another face. Since the user's facial features are essentially not output when the expression is fitted onto a cartoon character, this style offers the highest degree of privacy protection but the lowest authenticity, and the resulting expression album is the least evocative: looking at the photo later, the user may no longer remember which people were in the group.
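A sketch of the three-way style dispatch is shown below, using Pillow. The filter choice and the expression-fitting helper are placeholders, since the patent leaves both open; `fit_expression` in particular is a hypothetical stub standing in for whatever fitting routine a real system would use.

```python
from PIL import Image, ImageFilter

def fit_expression(preset_img: Image.Image, expression_group) -> Image.Image:
    """Hypothetical helper: fit the user expression vector group onto a preset
    image (e.g. a cartoon face). A real system might drive blendshapes or a
    generative model here; this stub just returns the preset unchanged."""
    return preset_img

def render_expression_score_map(face_img: Image.Image, style: str,
                                expression_group=None,
                                preset_img: Image.Image = None) -> Image.Image:
    if style == "original":
        return face_img                              # best authenticity, least privacy
    if style == "filter":
        return face_img.filter(ImageFilter.CONTOUR)  # e.g. a sketch-like effect
    if style == "fitting":
        return fit_expression(preset_img, expression_group)  # most privacy
    raise ValueError(f"unknown image style: {style}")
```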
The method may further include a step 250 of creating a user expression album for each user or a commodity expression album for each commodity based on the expression score maps.
The user expression album is the collection of a user's expression score maps; the user can choose to save his or her selfie expressions to form the user expression album.
The commodity expression album is the collection of the expression score maps corresponding to a commodity. Before purchasing a commodity, a user can preview the commodity's expression album to get a clear sense of how many people have bought it and of its approximate quality, which makes shopping more interesting and efficient.
The method may further include a step 260 of pushing the expression score map to the user at a predetermined time. For example, on a holiday or the user's birthday, the system automatically collects the user's expression selfies and pushes the expression score maps that the user chose to publish back to the user, strengthening the emotional tie between the user and the platform.
In this embodiment, when evaluating a commodity, the user does not need to perform tedious operations such as assigning star ratings and writing comments; instead, the user simply takes an expressive selfie and uploads it to the system, which automatically outputs the corresponding commodity evaluation. This can encourage people who dislike writing comments to evaluate commodities through their expressions, makes commenting more fun, increases the liveliness and appeal of the software, and draws more users into commodity reviews. In addition, the expression score map output mechanisms in different styles, such as sketch and cartoon, can satisfy users' aesthetic and entertainment needs while appropriately protecting their privacy. Furthermore, pushing the expression score map to the user at suitable times improves user happiness and loyalty and strengthens the emotional tie between the user and the platform.
In one embodiment, the number of users in the facial image may also be identified, where the number of users affects the comprehensive score and text comment corresponding to the commodity. That is, headcount amplifies the evaluation: if a user feels a commodity is particularly bad, the user can gather family and friends to grimace together, and the evaluation score generated by the system will be especially low; likewise, if a user feels a commodity is particularly good and wants to recommend it to more people, the user can gather family and friends to smile together.
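The patent states this headcount effect but gives no formula; the sketch below is one plausible, purely illustrative way the number of detected faces could amplify the comprehensive score in either direction.

```python
import numpy as np

def headcount_adjusted_score(per_face_sentiment, base=3.0):
    """per_face_sentiment: one value in [-1, 1] per detected face, e.g. +1 for
    a smile and -1 for a grimace. More faces push the score further from the
    neutral base, so a group grimacing together scores especially low."""
    n = len(per_face_sentiment)
    amplify = min(2.0, 1.0 + 0.25 * (n - 1))   # grows with headcount, capped
    shift = float(np.mean(per_face_sentiment)) * amplify
    return float(np.clip(base + shift, 1.0, 5.0))

print(headcount_adjusted_score([1.0]))        # one smile     -> 4.0
print(headcount_adjusted_score([-1.0] * 5))   # five grimaces -> 1.0
```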
In this embodiment, the more people participate in an expression comment, the stronger its effect. This encourages mobile phone users to invite friends and family to take selfies together more often, which improves user happiness, grows the platform's user base, enhances the platform's brand image, and strengthens users' reliance on and loyalty to the platform.
In practical application, a shop can launch smiling-face-collection or grimace-collection activities, drawing more users into participation and promoting commodity sales.
FIG. 3 is a schematic diagram of one embodiment of a commodity evaluation system according to the present disclosure. The commodity evaluation system includes a facial image acquisition unit 310, an expression vector determining unit 320, and a commodity evaluation determining unit 330.
The facial image acquisition unit 310 is configured to acquire a facial image of the user. After receiving the commodity, the user can take a selfie and upload it to the system. The selfie may show, for example, a smiling face, a crying face, or a grimace: a smile may indicate satisfaction, a crying face dissatisfaction, and a grimace anger and strong dissatisfaction.
The expression vector determining unit 320 is configured to determine a corresponding user expression vector group based on the facial image, for example based on an expression vector neural network model: the facial image is input into the expression vector neural network model to obtain expression vectors corresponding to N×N unit images and the probability that each unit image contains a face center, where the N×N unit images form the facial image and N is a natural number; the combination of the expression vectors corresponding to those unit images whose probability of containing a face center exceeds the probability threshold is taken as the user expression vector group corresponding to the facial image.
The expression vector neural network model may be trained in advance. For example, a sample facial image is first acquired; the expression vector corresponding to the sample facial image is labeled to generate an expression annotation file; the expression vector neural network model is then trained based on the sample facial image and the expression annotation file. After a user subsequently takes a selfie, the selfie is input into the trained expression vector neural network model, which outputs the corresponding expression vectors.
The commodity evaluation determining unit 330 is configured to output the corresponding commodity evaluation based on the user expression vector group; for example, the text comment and the comprehensive score of the commodity corresponding to the user expression vector group are output based on the commodity evaluation neural network model.
The commodity evaluation neural network model may likewise be trained in advance. For example, a sample expression vector is acquired; the text comment and the comprehensive score corresponding to the sample expression vector are labeled to generate an evaluation annotation file; and the commodity evaluation neural network model is trained based on the sample expression vector and the evaluation annotation file. The user expression vector group is then input into the trained commodity evaluation neural network model to output the corresponding text comment and comprehensive score.
In this embodiment, the corresponding user expression vector group is determined from the user's facial image, and the corresponding commodity evaluation is then output based on that group, so the user no longer needs to write a comment, assign a score, and upload pictures one by one. This reduces the complexity of evaluating a commodity and lets the user evaluate commodities more simply and directly.
In another embodiment of the present disclosure, as shown in FIG. 4, the commodity evaluation system further includes an expression score map output unit 410 configured to output, in response to the image style selected by the user, a corresponding expression score map based on the user's facial image.
If the image style selected by the user is the original image style, the user's facial image is output directly as the expression score map. Outputting the selfie directly offers the lowest degree of privacy protection but the best authenticity, and the expression album built from it later is the most evocative for the user.

If the image style is the filter style, the user's facial image is processed by a filter and then output as the expression score map; for example, the facial image can be rendered in different styles such as sketch or cartoon. The degree of privacy protection is low and the authenticity is high, and the resulting expression album still readily evokes the user's memories.

If the image style is the expression fitting style, the expression vector group corresponding to the user's facial image is fitted onto a preset image, and the image with the fitted expression is output as the expression score map. Since the user's facial features are essentially not output when the expression is fitted onto a cartoon character, this style offers the highest degree of privacy protection but the lowest authenticity, and the resulting expression album is the least evocative: looking at the photo later, the user may no longer remember which people were in the group.
In this embodiment, the expression score map output mechanisms in different styles, such as sketch and cartoon, can satisfy users' aesthetic and entertainment needs while appropriately protecting their privacy.
In another embodiment of the present disclosure, the commodity evaluation system further includes a user number identification unit 420 configured to identify the number of users in the facial image, where the number of users affects the comprehensive score corresponding to the commodity. That is, the more users appear in the selfie, the more the commodity's score and comment are affected: for example, if a user feels a commodity is particularly bad, the user can gather family and friends to grimace together, and the evaluation score generated by the system will be especially low; likewise, if a user feels a commodity is particularly good and wants to recommend it to more people, the user can gather family and friends to smile together.
In this embodiment, the more people participate in an expression comment, the stronger its effect. This encourages mobile phone users to invite friends and family to take selfies together more often, which improves user happiness, grows the platform's user base, enhances the platform's brand image, and strengthens users' reliance on and loyalty to the platform.
Fig. 5 is a schematic diagram of a configuration of yet another embodiment of the commodity evaluation system according to the present disclosure. The commodity evaluation system includes a memory 510 and a processor 520, wherein:
Memory 510 may be a magnetic disk, flash memory, or any other non-volatile storage medium, and is used to store instructions for the embodiments corresponding to FIGS. 1 and 2. Processor 520 is coupled to memory 510 and may be implemented as one or more integrated circuits, such as a microprocessor or microcontroller; it is configured to execute the instructions stored in the memory.
In one embodiment, as also shown in FIG. 6, the commodity evaluation system 600 includes a memory 610 and a processor 620. Processor 620 is coupled to memory 610 through BUS 630. The commodity evaluation system 600 may also be connected to external storage 650 via storage interface 640 for invoking external data, and may also be connected to a network or another computer system (not shown) via network interface 660, not described in detail herein.
In this embodiment, storing data and instructions in the memory and having the processor execute those instructions reduces the complexity of evaluating commodities for the user.
In another embodiment, a computer-readable storage medium has stored thereon computer program instructions which, when executed by a processor, implement the steps of the methods in the embodiments corresponding to FIGS. 1 and 2. As will be appreciated by those skilled in the art, embodiments of the present disclosure may be provided as a method, apparatus, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Thus far, the present disclosure has been described in detail. In order to avoid obscuring the concepts of the present disclosure, some details known in the art are not described. How to implement the solutions disclosed herein will be fully apparent to those skilled in the art from the above description.
Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the above examples are for illustration only and are not intended to limit the scope of the present disclosure. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the disclosure. The scope of the present disclosure is defined by the appended claims.
Claims (14)
1. A commodity evaluation method comprising:
acquiring a facial image of a user;
inputting the facial image into an expression vector neural network model to obtain expression vectors corresponding to N×N unit images and the probability that each unit image contains a face center, wherein the N×N unit images form the facial image, and N is a natural number;
taking a combination of the expression vectors corresponding to the unit images whose probability of containing the face center is greater than a probability threshold as a user expression vector group corresponding to the facial image; and
outputting a commodity evaluation corresponding to the user expression vector group based on a commodity evaluation neural network model, wherein the commodity evaluation comprises a text comment and a comprehensive score.
2. The commodity evaluation method according to claim 1, further comprising:
acquiring a sample facial image;
labeling the expression vector corresponding to the sample facial image to generate an expression annotation file; and
training the expression vector neural network model based on the sample facial image and the expression annotation file.
3. The commodity evaluation method according to claim 1, further comprising:
obtaining a sample expression vector;
labeling the text comment and the comprehensive score corresponding to the sample expression vector to generate an evaluation annotation file; and
training the commodity evaluation neural network model based on the sample expression vector and the evaluation annotation file.
4. The commodity evaluation method according to claim 1, further comprising:
identifying the number of users in the facial image, wherein the number of users influences the text comment and the comprehensive score corresponding to the commodity.
5. The commodity evaluation method according to any one of claims 1 to 4, further comprising:
outputting, in response to an image style selected by the user, a corresponding expression score map based on the facial image of the user.
6. The commodity evaluation method according to claim 5, wherein the image style includes at least one of an original image style, a filter style, and an expression fitting style;
if the image style is the original image style, outputting the facial image as the expression score map;
if the image style is the filter style, processing the facial image with a filter and outputting it as the expression score map; and
if the image style is the expression fitting style, fitting the expression vector group corresponding to the facial image onto a preset image, and outputting the image with the fitted expression as the expression score map.
7. The commodity evaluation method according to claim 5, further comprising:
creating at least one of a user expression album of each user and a commodity expression album of each commodity based on the expression score maps;
wherein the user expression album is a collection of a user's expression score maps; and
the commodity expression album is a collection of the expression score maps corresponding to a commodity.
8. The commodity evaluation method according to claim 5, further comprising:
pushing the expression score map to the user at a predetermined time.
9. A commodity evaluation system comprising:
a facial image acquisition unit configured to acquire a facial image of a user;
an expression vector determining unit configured to input the facial image into an expression vector neural network model to obtain expression vectors corresponding to N×N unit images and the probability that each unit image contains a face center, wherein the N×N unit images form the facial image and N is a natural number, and to take the combination of the expression vectors corresponding to the unit images whose probability of containing the face center is greater than a probability threshold as a user expression vector group corresponding to the facial image; and
a commodity evaluation determining unit configured to output a commodity evaluation corresponding to the user expression vector group based on a commodity evaluation neural network model, wherein the commodity evaluation comprises a text comment and a comprehensive score.
10. The commodity evaluation system according to claim 9, further comprising:
a user number identification unit configured to identify the number of users in the facial image, wherein the number of users influences the text comment and the comprehensive score corresponding to the commodity.
11. The commodity evaluation system according to claim 9 or 10, further comprising:
an expression score map output unit configured to output, in response to the image style selected by the user, a corresponding expression score map based on the facial image of the user.
12. The commodity evaluation system according to claim 11, wherein the image style includes at least one of an original image style, a filter style, and an expression fitting style;
the expression score map output unit is configured to output the facial image as the expression score map if the image style is the original image style; to process the facial image with a filter and output it as the expression score map if the image style is the filter style; and, if the image style is the expression fitting style, to fit the expression vector group corresponding to the facial image onto a preset image and output the image with the fitted expression as the expression score map.
13. A commodity evaluation system comprising:
A memory; and
A processor coupled to the memory, the processor configured to perform the commodity evaluation method according to any one of claims 1 to 8 based on instructions stored in the memory.
14. A computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the commodity evaluation method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810847472.9A | 2018-07-27 | 2018-07-27 | Commodity evaluation method and system
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810847472.9A | 2018-07-27 | 2018-07-27 | Commodity evaluation method and system
Publications (2)
Publication Number | Publication Date |
---|---|
CN110766502A CN110766502A (en) | 2020-02-07 |
CN110766502B (en) | 2024-06-18
Family
ID=69328335
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810847472.9A | Commodity evaluation method and system | 2018-07-27 | 2018-07-27
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110766502B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111667337A (en) * | 2020-04-28 | 2020-09-15 | 苏宁云计算有限公司 | Commodity evaluation ordering method and system |
CN114489442A (en) * | 2022-01-24 | 2022-05-13 | 珠海格力电器股份有限公司 | Product information display method, electronic equipment and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105608447A (en) * | 2016-02-17 | 2016-05-25 | 陕西师范大学 | Method for detecting human face smile expression depth convolution nerve network |
CN107563362A (en) * | 2017-10-01 | 2018-01-09 | 上海量科电子科技有限公司 | Evaluate method, client and the system of operation |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4710550B2 (en) * | 2005-10-31 | 2011-06-29 | セイコーエプソン株式会社 | Comment layout in images |
US9552510B2 (en) * | 2015-03-18 | 2017-01-24 | Adobe Systems Incorporated | Facial expression capture for character animation |
CN105049249A (en) * | 2015-07-09 | 2015-11-11 | 中山大学 | Scoring method and system of remote visual conversation services |
CN107341434A (en) * | 2016-08-19 | 2017-11-10 | 北京市商汤科技开发有限公司 | Processing method, device and the terminal device of video image |
JP6825357B2 (en) * | 2016-12-26 | 2021-02-03 | 大日本印刷株式会社 | Marketing equipment |
CN107423694A (en) * | 2017-07-05 | 2017-12-01 | 清远初曲智能科技有限公司 | A kind of artificial intelligence user image management method and system based on machine vision |
CN108269169A (en) * | 2017-12-29 | 2018-07-10 | 武汉璞华大数据技术有限公司 | A kind of shopping guide method and system |
CN108197595A (en) * | 2018-01-23 | 2018-06-22 | 京东方科技集团股份有限公司 | A kind of method, apparatus, storage medium and computer for obtaining evaluation information |
- 2018-07-27: application CN201810847472.9A filed in China; granted as CN110766502B (Active)
Also Published As
Publication number | Publication date |
---|---|
CN110766502A (en) | 2020-02-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102476294B1 (en) | Determining the Suitability of Digital Images for Creating AR/VR Digital Content | |
CN115735229A (en) | Updating avatar garments in messaging systems | |
CN110378731A (en) | Obtain method, apparatus, server and the storage medium of user's portrait | |
US20240289845A1 (en) | Sentiments based transaction systems and methods | |
CN112889065B (en) | Systems and methods for providing personalized product recommendations using deep learning | |
CN109492607B (en) | Information pushing method, information pushing device and terminal equipment | |
CN115462089A (en) | Displaying augmented reality content in messaging applications | |
CN111339420A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN110766502B (en) | Commodity evaluation method and system | |
CN115885318A (en) | Artificial intelligence system and method for modifying images based on relationships between objects | |
CN113269895A (en) | Image processing method and device and electronic equipment | |
CN106791091B (en) | Image generation method and device and mobile terminal | |
CN107609487B (en) | User head portrait generation method and device | |
CN107506479B (en) | A kind of object recommendation method and apparatus | |
CN113705792A (en) | Personalized recommendation method, device, equipment and medium based on deep learning model | |
CN112084862A (en) | Gait analysis method and device, storage medium and electronic equipment | |
CN112104914B (en) | Video recommendation method and device | |
CN106446969B (en) | User identification method and device | |
CN117252947A (en) | Image processing method, image processing apparatus, computer, storage medium, and program product | |
CN113657273A (en) | Method, device, electronic equipment and medium for determining commodity information | |
CN115129829A (en) | Question-answer calculation method, server and storage medium | |
CN109242031B (en) | Training method, using method, device and processing equipment of posture optimization model | |
CN113099267B (en) | Video generation method and device, electronic equipment and storage medium | |
KR101786823B1 (en) | Method for providing photo in sns | |
CN116685981A (en) | Compressing images to image models |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||