CN109255674B - Trial makeup data processing system and method - Google Patents

Trial makeup data processing system and method Download PDF

Info

Publication number
CN109255674B
CN109255674B (application CN201810897344.5A; also published as CN109255674A)
Authority
CN
China
Prior art keywords
controller
user
cosmetics
image
preset
Prior art date
Legal status
Active
Application number
CN201810897344.5A
Other languages
Chinese (zh)
Other versions
CN109255674A (en)
Inventor
贾润芝
Current Assignee
Youngzone Shanghai Intelligence Technology Co ltd
Original Assignee
Youngzone Shanghai Intelligence Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Youngzone Shanghai Intelligence Technology Co ltd
Priority to CN201810897344.5A
Publication of CN109255674A
Application granted
Publication of CN109255674B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions › G06Q30/0601 Electronic shopping [e-shopping] › G06Q30/0641 Shopping interfaces › G06Q30/0643 Graphical representation of items or shoppers
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising › G06Q30/0281 Customer communication at a business location, e.g. providing product or service information, consulting
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising › G06Q30/0282 Rating or review of business operators or products

Abstract

The invention discloses a makeup-trial data processing system and method. When the controller senses, through a cosmetic off-cabinet sensing device, that a cosmetic has left the shelf, it displays the makeup effect of that cosmetic through a touch intelligent mirror and assigns a score; if the score is higher than a preset score, the type of that cosmetic is determined to be a type suitable for the user. No manual operation is needed, which saves human resources and improves the efficiency of determining which types of cosmetics suit a user.

Description

Trial makeup data processing system and method
Technical Field
The invention relates to the technical field of computers, in particular to a makeup trial data processing system and a makeup trial data processing method.
Background
In practice, different users suit different types of cosmetics, and store staff need to judge which types of cosmetics a user suits according to the user's skin color, skin type, facial features, and the like.
However, the prior-art approach to determining which cosmetics suit a user relies on manual judgment, which is time-consuming and inefficient.
Disclosure of Invention
The embodiments of the invention aim to provide a makeup-trial data processing system and method that solve the inefficiency of the prior-art approach to determining which type of cosmetics suits a user.
To this end, the technical solution of the embodiments of the invention is as follows:
the embodiment of the invention provides a makeup trial data processing system, which comprises a controller, a goods shelf, a touch intelligent mirror, a camera and at least one cosmetic off-cabinet sensing device, wherein:
the cosmetic off-cabinet sensing device is arranged on the storage rack and used for placing at least one cosmetic;
the camera is arranged on the touch intelligent mirror;
the touch intelligent mirror, the camera and the at least one cosmetic off-cabinet sensing device are respectively and electrically connected with the controller;
the controller is used for judging, through each cosmetic off-cabinet sensing device, whether the cosmetic placed on it has left the shelf, and for determining the target face part and target color corresponding to that cosmetic;
the controller is further used for acquiring, in real time through the camera, a first face image of the user currently in front of the touch intelligent mirror;
the controller is further used for overlaying the target color on the target face part in the currently acquired first face image to obtain a made-up image;
the controller is further used for displaying the currently obtained made-up image through the touch intelligent mirror;
the controller is further used for determining the score of the currently obtained made-up image through a preset scoring model;
if the score is not less than the preset score, the controller is further used for determining the target type of the cosmetic;
the controller is further used for obtaining a user portrait corresponding to the currently acquired first face image, where the user portrait includes at least one of the user's age, gender, race, skin type, skin color, face shape, facial features, favorite colors, and purchasing habits;
the controller is further used for establishing a first correspondence between the user portrait and the target type, and determining the target type as a type of cosmetic suitable for the user.
Further, the user representation is information encrypted by an encryption algorithm.
Further, the at least one cosmetic product off-cabinet sensing device is at least one photoelectric sensing sensor and/or at least one gravity sensing sensor.
Further, the system further comprises: panorama camera, proximity sensor, light intensity sensor, sound intensity sensor, wherein:
the panoramic camera, the proximity sensor, the light intensity sensor and the sound intensity sensor are respectively arranged on the goods shelf and are respectively electrically connected with the controller;
the controller is further used for acquiring a first image and the current time through the panoramic camera at a preset time interval, and for determining the number of first users in the first image through a preset people-counting model;
the controller is further used for acquiring first light intensity through the light intensity sensor according to the preset time interval;
the controller is further configured to obtain a first sound intensity through the sound intensity sensor according to the preset time interval;
the controller is further used for judging whether a user is in front of the goods shelf or not through the proximity sensor according to the preset time interval;
the controller is further configured to establish and store a second corresponding relationship among the current time, the number of the first users, the first light intensity, and the first sound intensity;
the controller is further configured to determine the second correspondence that contains the maximum number of first users.
Further, the controller is specifically configured to:
acquiring a preset correspondence between face images and user portraits, and obtaining the user portrait corresponding to the first face image from that correspondence according to the first face image.
The embodiment of the invention also provides a makeup-trial data processing method, applied to the system of any of the above implementations. The method comprises the following steps:
the controller judges, through each cosmetic off-cabinet sensing device, whether the cosmetic placed on it has left the shelf, and determines the target face part and target color corresponding to that cosmetic;
the controller acquires, in real time through the camera, a first face image of the user currently in front of the touch intelligent mirror;
the controller overlays the target color on the target face part in the currently acquired first face image to obtain a made-up image;
the controller displays the currently obtained made-up image through the touch intelligent mirror;
the controller determines the score of the currently obtained made-up image through a preset scoring model;
if the score is not less than the preset score, the controller determines the target type of the cosmetic;
the controller obtains a user portrait corresponding to the currently acquired first face image, where the user portrait includes at least one of the user's age, gender, race, skin type, skin color, face shape, facial features, favorite colors, and purchasing habits;
the controller establishes a first correspondence between the user portrait and the target type, and determines the target type as a type of cosmetic suitable for the user.
Further, the user representation is information encrypted by an encryption algorithm.
Further, the at least one cosmetic product off-cabinet sensing device is at least one photoelectric sensing sensor and/or at least one gravity sensing sensor.
Further, the method further comprises:
the controller acquires a first image and the current time through the panoramic camera at a preset time interval, and determines the number of first users in the first image through a preset people-counting model;
the controller acquires first light intensity through the light intensity sensor according to the preset time interval;
the controller acquires a first sound intensity through the sound intensity sensor according to the preset time interval;
the controller judges whether a user is in front of the goods shelf or not through the proximity sensor according to the preset time interval;
the controller establishes and stores a second correspondence among the current time, the number of first users, the first light intensity, and the first sound intensity;
the controller determines the second correspondence that contains the maximum number of first users.
Further, the step in which the controller obtains the user portrait corresponding to the currently acquired first face image specifically includes:
acquiring a preset correspondence between face images and user portraits, and obtaining the user portrait corresponding to the first face image from that correspondence according to the first face image.
When the similarity between the first face image and any face image in the preset correspondence between face images and user portraits is greater than a preset threshold, the user portrait corresponding to that face image is determined to be the user portrait corresponding to the first face image.
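The threshold-matching rule above can be sketched as follows. The patent does not specify how similarity is computed, so the embedding representation, the cosine similarity measure, and the 0.8 threshold are illustrative assumptions:

```python
# Hypothetical sketch of the portrait-lookup step: stored face embeddings
# are matched against the first face image by cosine similarity, and the
# portrait of the best match above a preset threshold is returned.

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def lookup_portrait(first_face, stored, threshold=0.8):
    """Return the portrait of the most similar stored face above threshold."""
    best = None
    best_sim = threshold
    for face, portrait in stored:
        sim = cosine_similarity(first_face, face)
        if sim > best_sim:
            best_sim, best = sim, portrait
    return best  # None if no stored face clears the threshold

stored = [([1.0, 0.0], {"age": 25}), ([0.0, 1.0], {"age": 40})]
print(lookup_portrait([0.9, 0.1], stored))  # matches the first stored face
```

If no stored face clears the threshold, the lookup yields nothing, in which case a real system would presumably build a fresh portrait for the user.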
The embodiment of the invention has the following advantages:
in the embodiment of the invention, when the controller senses through a cosmetic off-cabinet sensing device that a cosmetic has left the shelf, it displays the makeup effect of that cosmetic through the touch intelligent mirror and assigns a score; if the score is higher than the preset score, the type of that cosmetic is determined to be a type suitable for the user. No manual operation is needed, which saves human resources and improves the efficiency of determining which types of cosmetics suit a user.
Drawings
Fig. 1 is a schematic structural diagram of a makeup-trial data processing system according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a makeup trial data processing method according to an embodiment of the present invention.
Detailed Description
The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
Example 1
Embodiment 1 of the present invention provides a makeup-trial data processing system, whose structural diagram can be seen in fig. 1. The system includes a controller 101, a shelf 102, a touch intelligent mirror 109, a camera 103, and at least one cosmetic off-cabinet sensing device 104, wherein:
the at least one cosmetic off-cabinet sensing device 104 is disposed on the shelf 102 and used for placing at least one cosmetic;
the camera 103 is arranged on the touch intelligent mirror 109;
the touch intelligent mirror 109, the camera 103 and the at least one cosmetic off-cabinet sensing device 104 are respectively electrically connected with the controller 101;
the controller 101 is configured to judge, through each cosmetic off-cabinet sensing device 104, whether the cosmetic placed on it has left the shelf, and to determine the target face part and target color corresponding to that cosmetic;
the controller 101 is further configured to acquire, in real time through the camera 103, a first face image of the user currently in front of the touch intelligent mirror 109;
the controller 101 is further configured to overlay the target color on the target face part in the currently acquired first face image to obtain a made-up image;
the controller 101 is further configured to display the currently obtained made-up image through the touch intelligent mirror 109;
the controller 101 is further configured to determine a score for the currently obtained made-up image through a preset scoring model;
if the score is not less than the preset score, the controller 101 is further configured to determine a target type of the cosmetic;
the controller 101 is further configured to obtain a user portrait corresponding to the currently acquired first face image, where the user portrait includes at least one of the user's age, gender, race, skin type, skin color, face shape, facial features, favorite colors, and purchasing habits;
the controller 101 is further configured to establish a first correspondence between the user portrait and the target type, and to determine the target type as a type of cosmetic suitable for the user.
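The overlay step in the pipeline above can be sketched as a simple alpha blend. The rectangular region and the blending factor below are illustrative assumptions; a real system would locate the target face part (e.g. the lips) with face landmarks rather than fixed coordinates:

```python
# A minimal sketch of overlaying the target color on a target face part:
# each pixel in the region is alpha-blended with the cosmetic's color.

def overlay_color(image, region, color, alpha=0.5):
    """Blend `color` over the pixels of `region` in `image`, in place.

    image: 2-D list of (r, g, b) tuples; region: (top, left, bottom, right).
    """
    top, left, bottom, right = region
    for y in range(top, bottom):
        for x in range(left, right):
            r, g, b = image[y][x]
            image[y][x] = (
                round((1 - alpha) * r + alpha * color[0]),
                round((1 - alpha) * g + alpha * color[1]),
                round((1 - alpha) * b + alpha * color[2]),
            )
    return image

# A 4x4 light-gray "face" with a red overlay on a 2x2 patch.
img = [[(200, 200, 200)] * 4 for _ in range(4)]
overlay_color(img, (1, 1, 3, 3), (255, 0, 0))
print(img[1][1])  # blended pixel inside the region
```

Pixels outside the region are untouched, so the rest of the first face image passes through to the made-up image unchanged.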
The controller 101 may be any type of device, such as a chip, a central processing unit, a mobile phone, a tablet computer, or a personal computer, as long as it can achieve the above functions. The preset time interval can be any duration and can be set according to actual conditions.
Cosmetics may be classified by function and/or by site of application, for example concealer or whitener (by function), and lipstick or blush (by site).
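The preset scoring model is not specified in the patent, so the sketch below is a purely hypothetical stand-in: it scores the made-up region by how closely its average color matches a reference color, mapped onto a 0-100 scale. Any real deployment would presumably use a trained aesthetic model instead:

```python
# Hypothetical stand-in for the preset scoring model: score a made-up
# region by the distance between its average color and a reference color.

def score_made_up_region(pixels, reference, max_dist=441.7):
    """Score a list of (r, g, b) pixels against a reference color, 0-100."""
    n = len(pixels)
    avg = tuple(sum(p[i] for p in pixels) / n for i in range(3))
    dist = sum((a - b) ** 2 for a, b in zip(avg, reference)) ** 0.5
    # 441.7 slightly exceeds the maximum RGB distance sqrt(3 * 255**2).
    return max(0.0, 100.0 * (1 - dist / max_dist))

pixels = [(228, 100, 100), (230, 98, 102)]
score = score_made_up_region(pixels, reference=(229, 99, 101))
print(round(score, 1))  # a near-perfect match scores close to 100
```

Whatever model is used, the controller only needs its scalar output to compare against the preset score.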
In an implementation scenario, the controller 101 is specifically configured to:
acquiring a preset correspondence between face images and user portraits, and obtaining the user portrait corresponding to the first face image from that correspondence according to the first face image.
In one embodiment, the user portrait may be encrypted by an encryption algorithm, making the information tamper-resistant and highly secure.
In an implementation scenario, identification information may be set in advance for each cosmetic off-cabinet sensing device 104, together with a correspondence between each device's identification information and the type of cosmetic placed on it. The controller 101 may then determine, from the identification information of whichever cosmetic off-cabinet sensing device 104 senses that a cosmetic has left the shelf, the target face part, target color, and target type corresponding to that cosmetic.
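That identification-information lookup amounts to a preconfigured table keyed by sensor ID. The IDs, face parts, colors, and types below are illustrative assumptions, not values from the patent:

```python
# Hypothetical preconfigured correspondence between each off-cabinet
# sensing device's identification information and the cosmetic on it.

SENSOR_CONFIG = {
    "sensor-01": {"face_part": "lips", "color": (200, 30, 60), "type": "lipstick"},
    "sensor-02": {"face_part": "cheeks", "color": (240, 120, 130), "type": "blush"},
}

def cosmetic_for(sensor_id):
    """Resolve target face part, color, and type from a triggered sensor."""
    return SENSOR_CONFIG[sensor_id]

info = cosmetic_for("sensor-01")
print(info["face_part"], info["type"])  # lips lipstick
```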
In one implementation scenario, the at least one cosmetic-leaving cabinet sensing device 104 may be at least one photoelectric sensor (not shown) and/or at least one gravity sensor (not shown).
If the controller determines through a photoelectric sensing sensor whether the cosmetic placed on the cosmetic off-cabinet sensing device 104 has left the shelf 102, then specifically the controller 101 judges whether the photoelectric sensing sensor senses an increase in light intensity; if so, it determines that the cosmetic placed on that photoelectric sensing sensor has left the shelf 102.
If the controller determines through a gravity sensing sensor whether the cosmetic placed on the cosmetic off-cabinet sensing device 104 has left the shelf 102, then specifically the controller 101 judges whether the gravity sensing sensor senses a decrease in weight; if so, it determines that the cosmetic placed on that gravity sensing sensor has left the shelf 102.
It should be noted that if the system includes at least one gravity sensing sensor, it may also include at least one basket (not shown in the figure) disposed on the shelf 102, with the gravity sensing sensors disposed under the baskets. One gravity sensing sensor may be disposed under a basket, or at least two may be.
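The two trigger conditions described above (light intensity rising when an item is lifted, weight falling when an item is removed) can be sketched as simple threshold comparisons. The numeric thresholds are illustrative assumptions:

```python
# A sketch of the two off-shelf trigger conditions: a photoelectric
# sensor reports an increase in light intensity when the cosmetic is
# lifted; a gravity sensor reports a decrease in weight.

def photo_triggered(prev_lux, curr_lux, min_delta=5.0):
    """Cosmetic considered removed if light intensity rose noticeably."""
    return curr_lux - prev_lux > min_delta

def gravity_triggered(prev_grams, curr_grams, min_delta=1.0):
    """Cosmetic considered removed if measured weight fell noticeably."""
    return prev_grams - curr_grams > min_delta

print(photo_triggered(100.0, 180.0))    # light rose: cosmetic left the shelf
print(gravity_triggered(350.0, 350.0))  # weight unchanged: still on the shelf
```

The small deltas guard against sensor noise; in practice they would be calibrated per shelf position.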
In the embodiment of the invention, when the controller senses through a cosmetic off-cabinet sensing device that a cosmetic has left the shelf, it displays the makeup effect of that cosmetic through the touch intelligent mirror 109 and assigns a score; if the score is higher than the preset score, the type of that cosmetic is determined to be a type suitable for the user. No manual operation is needed, which saves human resources and improves the efficiency of determining which types of cosmetics suit a user.
In one implementation scenario, the system may further include: panoramic camera 105, proximity sensor 106, light intensity sensor 107, sound intensity sensor 108, wherein:
the panoramic camera 105, the proximity sensor 106, the light intensity sensor 107 and the sound intensity sensor 108 are respectively arranged on the shelf 102 and are respectively electrically connected with the controller 101;
the controller 101 is further configured to acquire a first image and the current time through the panoramic camera 105 at a preset time interval, and to determine the number of first users in the first image through a preset people-counting model;
the controller 101 is further configured to obtain a first light intensity through the light intensity sensor 107 according to a preset time interval;
the controller 101 is further configured to obtain a first sound intensity through the sound intensity sensor 108 according to a preset time interval;
the controller 101 is further configured to determine whether a user is in front of the shelf 102 through the proximity sensor 106 at preset time intervals;
the controller 101 is further configured to establish and store a second corresponding relationship among the current time, the number of the first users, the first light intensity, and the first sound intensity;
the controller 101 is further configured to determine the second correspondence that contains the maximum number of first users.
With these components, the system can determine at what time, under what light intensity, and at what sound intensity the foot traffic is largest.
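The bookkeeping described above can be sketched as periodic records of (time, user count, light intensity, sound intensity), from which the record with the largest user count is selected. The field names and sample values are illustrative assumptions:

```python
# A sketch of the "second correspondence" records and the selection of
# the record with the maximum number of first users (peak foot traffic).

records = [
    {"time": "10:00", "users": 3, "light": 420.0, "sound": 55.0},
    {"time": "12:00", "users": 9, "light": 510.0, "sound": 62.0},
    {"time": "15:00", "users": 6, "light": 480.0, "sound": 58.0},
]

def busiest(records):
    """Return the stored correspondence with the maximum user count."""
    return max(records, key=lambda r: r["users"])

peak = busiest(records)
print(peak["time"], peak["light"], peak["sound"])  # peak-traffic conditions
```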
In an implementation scenario, after the made-up image is displayed through the touch intelligent mirror 109, the controller 101 may further obtain at least one piece of color-number information corresponding to the cosmetic, each piece including a color-number name and a color. The controller displays the at least one piece of color-number information on the touch intelligent mirror 109, receives and responds to a user's selection instruction by selecting the corresponding color-number information, acquires the user's image in real time through the camera 103, overlays the color of the selected color number onto the target face part in the currently acquired image to obtain a made-up image, and displays that image through the touch intelligent mirror 109. Alternatively, the system further comprises a microphone (not shown in the figure) electrically connected to the controller 101; then
the controller 101 is further configured to: after displaying the currently obtained made-up image through the touch intelligent mirror 109, obtain at least one piece of color-number information corresponding to the cosmetic and display it on the touch intelligent mirror 109; receive the user's makeup-trial voice information through the microphone, where that voice information includes a color-number name; in response, select the color corresponding to the spoken color-number name; acquire the user's image in real time through the camera 103; overlay the selected color onto the target face part in the currently acquired image to obtain a made-up image; and display the currently obtained made-up image through the touch intelligent mirror 109.
The user can double-click or long-press one of the pieces of color-number information displayed on the touch intelligent mirror 109 to trigger the mirror to generate a selection instruction and send it to the controller 101, which executes the subsequent operations after receiving it. The controller 101 may recognize the user's makeup-trial voice information through a speech recognition engine (not shown in the figure) connected to the controller 101; the engine's response time is less than 0.4 s, its accuracy rate is above 97%, and its denoising rate is above 61.5%.
In addition, the user can control the makeup trial by voice: the controller 101 collects the user's makeup-trial voice information through the microphone, determines from it the color-number information the user wants, and then executes the subsequent operations.
The user thus needs no click operations at all and can try on makeup purely by voice control, which improves both the efficiency and the convenience of the makeup trial.
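The color-number selection step above can be sketched as a lookup of the spoken name against the color numbers offered for the cosmetic. The color-number names and RGB values are illustrative assumptions:

```python
# Hypothetical sketch of resolving the color-number name heard in the
# user's makeup-trial voice information to the color to overlay.

COLOR_NUMBERS = {
    "401 brick red": (178, 34, 34),
    "402 coral": (255, 127, 80),
    "403 rose": (199, 21, 133),
}

def select_color(spoken_name, offered=COLOR_NUMBERS):
    """Return the color for a spoken color-number name, or None if unknown."""
    return offered.get(spoken_name.strip().lower())

print(select_color("402 Coral "))  # tolerant of casing and whitespace
```

An unknown name yields None, in which case the mirror could prompt the user to repeat the color number.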
Example 2
Embodiment 2 of the present invention provides a makeup-trial data processing method, applied to the system of any of the above implementations; a flow diagram of the method can be seen in fig. 2. The method includes the following steps:
Step 201: the controller judges, through each cosmetic off-cabinet sensing device, whether the cosmetic placed on it has left the shelf, and determines the target face part and target color corresponding to that cosmetic.
Step 202: the controller acquires, in real time through the camera, a first face image of the user currently in front of the touch intelligent mirror.
Step 203: the controller overlays the target color on the target face part in the currently acquired first face image to obtain a made-up image.
Step 204: the controller displays the currently obtained made-up image through the touch intelligent mirror.
Step 205: the controller determines the score of the currently obtained made-up image through a preset scoring model.
Step 206: if the score is not less than the preset score, the controller determines the target type of the cosmetic.
Step 207: the controller obtains a user portrait corresponding to the currently acquired first face image, where the user portrait includes at least one of the user's age, gender, race, skin type, skin color, face shape, facial features, favorite colors, and purchasing habits.
Step 208: the controller establishes a first correspondence between the user portrait and the target type and determines the target type as a type of cosmetic suitable for the user.
Further, the user representation is information encrypted by an encryption algorithm.
Furthermore, the at least one cosmetic product off-cabinet sensing device is at least one photoelectric sensing sensor and/or at least one gravity sensing sensor.
Further, the method further comprises:
the controller acquires a first image and the current time through the panoramic camera at a preset time interval, and determines the number of first users in the first image through a preset people-counting model;
the controller acquires first light intensity through the light intensity sensor according to a preset time interval;
the controller acquires first sound intensity through the sound intensity sensor according to a preset time interval;
the controller judges whether a user is in front of the goods shelf or not through the proximity sensor according to a preset time interval;
the controller establishes and stores a second correspondence among the current time, the number of first users, the first light intensity, and the first sound intensity;
the controller determines the second correspondence that contains the maximum number of first users.
Further, the step in which the controller obtains the user portrait corresponding to the currently acquired first face image specifically includes:
acquiring a preset correspondence between face images and user portraits, and obtaining the user portrait corresponding to the first face image from that correspondence according to the first face image.
The technical features of embodiments 1 and 2 may be freely combined; the invention imposes no limitation in this respect.
Although the invention has been described in detail above with reference to a general description and specific examples, it will be apparent to those skilled in the art that modifications or improvements may be made on the basis of the invention. Accordingly, such modifications and improvements are intended to fall within the scope of the claimed invention.

Claims (8)

1. A makeup-trial data processing system, characterized in that the system comprises a controller, a shelf, a touch intelligent mirror, a camera, and at least one cosmetic off-cabinet sensing device, wherein:
the cosmetic off-cabinet sensing device is arranged on the storage rack and used for placing at least one cosmetic;
the camera is arranged on the touch intelligent mirror;
the touch intelligent mirror, the camera and the at least one cosmetic off-cabinet sensing device are respectively and electrically connected with the controller;
the controller is used for judging, through each cosmetic off-cabinet sensing device, whether the cosmetic placed on it has left the shelf, and for determining the target face part and target color corresponding to that cosmetic;
the controller is further used for acquiring, in real time through the camera, a first face image of the user currently in front of the touch intelligent mirror;
the controller is further used for overlaying the target color on the target face part in the currently acquired first face image to obtain a made-up image;
the controller is further used for displaying the currently obtained made-up image through the touch intelligent mirror;
the controller is further used for determining the score of the currently obtained made-up image through a preset scoring model;
if the score is not less than the preset score, the controller is further used for determining the target type of the cosmetic;
the controller is further used for obtaining a user portrait corresponding to the currently acquired first face image, where the user portrait includes at least one of the user's age, gender, race, skin type, skin color, face shape, facial features, favorite colors, and purchasing habits;
the controller is further used for establishing a first correspondence between the user portrait and the target type, and determining the target type as a type of cosmetic suitable for the user;
the system further comprises: panorama camera, proximity sensor, light intensity sensor, sound intensity sensor, wherein:
the panoramic camera, the proximity sensor, the light intensity sensor and the sound intensity sensor are respectively arranged on the goods shelf and are respectively electrically connected with the controller;
the controller is further used for acquiring a first image and the current time through the panoramic camera at a preset time interval, and for determining the number of first users in the first image through a preset people-counting model;
the controller is further used for acquiring first light intensity through the light intensity sensor according to the preset time interval;
the controller is further configured to obtain a first sound intensity through the sound intensity sensor according to the preset time interval;
the controller is further used for judging whether a user is in front of the goods shelf or not through the proximity sensor according to the preset time interval;
the controller is further configured to establish and store a second corresponding relationship among the current time, the number of the first users, the first light intensity, and the first sound intensity;
the controller is further configured to determine the second correspondence that contains the maximum number of first users.
2. The system of claim 1, wherein the user representation is information encrypted by an encryption algorithm.
3. The system of claim 1, wherein the at least one cosmetic exit bin sensing device is at least one photoelectric sensing sensor and/or at least one gravity sensing sensor.
4. The system of claim 1, wherein the controller is specifically configured to:
acquiring a preset correspondence between face images and user portraits, and obtaining the user portrait corresponding to the first face image from that correspondence according to the first face image.
5. A trial makeup data processing method applied to the system of any one of claims 1 to 4, the method comprising:
the controller determines, through each cosmetics off-shelf sensing device, whether the cosmetics placed on that sensing device have left the shelf, and determines the target face part and the target color corresponding to the cosmetics;
the controller acquires, in real time through the camera, a first face image of the user currently in front of the intelligent touch mirror;
the controller overlays the target color on the target face part in the currently acquired first face image to obtain a made-up image;
the controller displays the currently obtained made-up image through the intelligent touch mirror;
the controller determines a score for the currently obtained made-up image through a preset scoring model;
if the score is not less than a preset score, the controller determines the target type of the cosmetics;
the controller acquires a user portrait corresponding to the currently acquired first face image, wherein the user portrait comprises at least one of the user's age, gender, race, skin color, face shape, facial features, favorite colors, and purchasing habits;
the controller establishes a first correspondence between the user portrait and the target type, and determines the target type as a type of cosmetics suitable for the user;
the method further comprises:
the controller acquires, at a preset time interval, a first image and the current time through the panoramic camera, and determines, according to a preset people-counting model, the number of first users in the first image;
the controller acquires a first light intensity through the light intensity sensor at the preset time interval;
the controller acquires a first sound intensity through the sound intensity sensor at the preset time interval;
the controller determines, through the proximity sensor at the preset time interval, whether a user is in front of the shelf;
the controller establishes and stores a second correspondence among the current time, the number of first users, the first light intensity, and the first sound intensity;
the controller determines the second correspondence containing the maximum number of first users.
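As a toy illustration of the core flow of the method of claim 5 (overlay the target color on the target face part, score the made-up image, and, when the score meets the preset score, record the first correspondence), the following sketch stands in for real image processing with a dict of face parts; the scoring model, threshold, and all names are hypothetical:

```python
def apply_makeup(face_image, target_part, target_color):
    """Overlay the target color on the target face part of the image."""
    made_up = dict(face_image)
    made_up[target_part] = target_color
    return made_up

def score_model(made_up_image):
    """Hypothetical preset scoring model: reward lip colors in a small palette."""
    return 90 if made_up_image.get("lips") in {"coral", "ruby"} else 60

PRESET_SCORE = 80  # assumed preset score threshold

def process(face_image, target_part, target_color, cosmetic_type, portraits, user):
    made_up = apply_makeup(face_image, target_part, target_color)
    score = score_model(made_up)
    if score >= PRESET_SCORE:
        # First correspondence: user portrait -> suitable cosmetic type.
        portraits.setdefault(user, set()).add(cosmetic_type)
    return score

portraits = {}
s = process({"lips": "bare"}, "lips", "coral", "lipstick", portraits, "user-1")
assert s == 90 and "lipstick" in portraits["user-1"]
```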
6. The method of claim 5, wherein the user portrait is information encrypted by an encryption algorithm.
7. The method of claim 5, wherein the at least one cosmetics off-shelf sensing device is at least one photoelectric sensor and/or at least one gravity sensor.
8. The method of claim 5, wherein the step of the controller acquiring the user portrait corresponding to the currently acquired first face image comprises:
acquiring a preset correspondence between face images and user portraits, and acquiring, according to the first face image, the user portrait corresponding to the first face image.
CN201810897344.5A 2018-08-08 2018-08-08 Trial makeup data processing system and method Active CN109255674B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810897344.5A CN109255674B (en) 2018-08-08 2018-08-08 Trial makeup data processing system and method


Publications (2)

Publication Number Publication Date
CN109255674A CN109255674A (en) 2019-01-22
CN109255674B true CN109255674B (en) 2022-03-04

Family

ID=65050088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810897344.5A Active CN109255674B (en) 2018-08-08 2018-08-08 Trial makeup data processing system and method

Country Status (1)

Country Link
CN (1) CN109255674B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819767A (en) * 2021-01-26 2021-05-18 北京百度网讯科技有限公司 Image processing method, apparatus, device, storage medium, and program product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101371272A (en) * 2006-01-17 2009-02-18 株式会社资生堂 Makeup simulation system, makeup simulation device, makeup simulation method and makeup simulation program
CN103093357A (en) * 2012-12-07 2013-05-08 江苏乐买到网络科技有限公司 Cosmetic makeup trying system of online shopping
CN106942878A (en) * 2017-03-17 2017-07-14 合肥龙图腾信息技术有限公司 Partial enlargement make up system, apparatus and method
CN108053365A (en) * 2017-12-29 2018-05-18 百度在线网络技术(北京)有限公司 For generating the method and apparatus of information
CN108198052A (en) * 2018-03-02 2018-06-22 北京京东尚科信息技术有限公司 User's free choice of goods recognition methods, device and intelligent commodity shelf system



Similar Documents

Publication Publication Date Title
CN108229415B (en) Information recommendation method and device, electronic equipment and computer-readable storage medium
JP6272342B2 (en) Image processing method, image processing device, terminal device, program, and recording medium
EP3321787B1 (en) Method for providing application, and electronic device therefor
EP2708981B1 (en) Gesture recognition apparatus, control method thereof, display instrument, and computer readable medium
US10559102B2 (en) Makeup simulation assistance apparatus, makeup simulation assistance method, and non-transitory computer-readable recording medium storing makeup simulation assistance program
CN104598869A (en) Intelligent advertisement pushing method based on human face recognition device
CN108712603B (en) Image processing method and mobile terminal
WO2019105411A1 (en) Information recommending method, intelligent mirror, and computer readable storage medium
CN105204351B (en) The control method and device of air-conditioner set
WO2014190509A1 (en) An apparatus and associated methods
CN111698564B (en) Information recommendation method, device, equipment and storage medium
CN104575339A (en) Media information pushing method based on face detection interface
CN107067290A (en) Data processing method and device
CN104103024A (en) User evaluation information acquisition method and device
WO2018214115A1 (en) Face makeup evaluation method and device
CN105549892B (en) Augmented reality information display method and device
CN110415062A (en) The information processing method and device tried on based on dress ornament
CN111488057A (en) Page content processing method and electronic equipment
CN107705245A (en) Image processing method and device
CN103886284A (en) Character attribute information identification method and device and electronic device
CN107909011B (en) Face recognition method and related product
CN108784651A (en) The commending system of U.S. industry product and service based on artificial intelligence
CN109255674B (en) Trial makeup data processing system and method
CN108681398A (en) Visual interactive method and system based on visual human
CN109683711B (en) Product display method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant