CN115481284A - Cosmetic method and device based on cosmetic box, storage medium and electronic device


Info

Publication number
CN115481284A
CN115481284A (Application CN202211048869.4A)
Authority
CN
China
Prior art keywords
makeup
user
cosmetic
video
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211048869.4A
Other languages
Chinese (zh)
Inventor
耿丽娜
连鹏飞
王小惠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Haier Technology Co Ltd
Haier Smart Home Co Ltd
Haier Uplus Intelligent Technology Beijing Co Ltd
Original Assignee
Qingdao Haier Technology Co Ltd
Haier Smart Home Co Ltd
Haier Uplus Intelligent Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Haier Technology Co Ltd, Haier Smart Home Co Ltd, Haier Uplus Intelligent Technology Beijing Co Ltd filed Critical Qingdao Haier Technology Co Ltd
Priority to CN202211048869.4A priority Critical patent/CN115481284A/en
Publication of CN115481284A publication Critical patent/CN115481284A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • G06F16/784Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content the detected or recognised objects being people
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/732Query formulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/738Presentation of query results
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Library & Information Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a makeup method and device, a storage medium, and an electronic device based on a cosmetic case, relating to the technical field of smart homes. The method comprises the following steps: when the working mode is the intelligent recommendation mode, retrieving at least one makeup video to be recommended according to voice information input by the user; when a determination instruction from the user for a makeup video is received, displaying the target makeup video corresponding to the instruction in a first display area and displaying a face image of the user in a second display area; and determining the makeup step matching the user based on the makeup operation in the face image, displaying the video frames in the target makeup video that match that step, and controlling the clamping device to take the target cosmetic corresponding to the step out of its clamping groove.

Description

Cosmetic method and device based on cosmetic box, storage medium and electronic device
Technical Field
The application relates to the technical field of smart homes, and in particular to a cosmetic method and device based on a cosmetic case, a storage medium, and an electronic device.
Background
In the prior art, the functions of the cosmetic case and the cosmetic mirror are limited: the cosmetic case usually only has the function of accommodating articles such as cosmetics, and the cosmetic mirror only has the function of displaying an image.
However, cosmetics today are numerous and varied. For a makeup novice, the makeup process requires constantly switching attention between a mobile phone or other terminal and the mirror; moreover, when the space of the cosmetic case is cramped, selecting, taking out, and putting back cosmetics is also troublesome. As a result, the user's makeup process is cumbersome and time-consuming.
Disclosure of Invention
The application provides a makeup method and device based on a cosmetic case, a storage medium, and an electronic device, to solve the technical problem in the prior art that cosmetic cases and cosmetic mirrors are not highly intelligent.
The application provides a makeup method based on a cosmetic case, where the cosmetic case includes a cosmetic mirror and a cosmetic box fixedly connected to the cosmetic mirror, the display screen of the cosmetic mirror includes a first display area and a second display area, and the cosmetic box includes a plurality of clamping grooves for storing cosmetics and a clamping device for taking cosmetics out of the clamping grooves. The method includes:
when a wake-up instruction input by a user is received, determining the working mode corresponding to the wake-up instruction;
when the working mode is the intelligent recommendation mode, retrieving at least one makeup video to be recommended according to voice information input by the user;
when a determination instruction from the user for a makeup video is received, displaying the target makeup video corresponding to the determination instruction in the first display area of the cosmetic mirror, and displaying a face image of the user in the second display area of the cosmetic mirror;
determining the makeup step matching the user based on the user's makeup operation in the face image, displaying the video frames in the target makeup video that match the makeup step in the first display area, and controlling the clamping device to take the target cosmetic corresponding to the makeup step out of its clamping groove.
According to the makeup method based on the cosmetic case provided by the application, determining the makeup step matching the user based on the user's makeup operation in the face image includes:
determining the makeup area corresponding to each makeup step in the face image, according to the facial contour features in the face image and the reference makeup area of each reference makeup step in the target makeup video;
and, when a makeup operation by the user is recognized in the face image, determining the target makeup area corresponding to the makeup operation from among the makeup areas of the steps, and determining the makeup step matching the target makeup area.
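As an illustrative sketch only (the application provides no source code), mapping reference makeup areas from the tutorial video onto the user's face and matching a detected makeup operation to a step could proceed by rescaling each reference region from the tutorial face bounding box into the user's face bounding box, then testing which mapped region contains the point where the operation was detected. All function names, coordinate conventions, and region definitions below are assumptions.

```python
# Hypothetical sketch: regions and faces are (x, y, w, h) boxes, an
# operation is a single (x, y) point. Not the patented implementation.

def map_region(ref_region, ref_face, user_face):
    """Scale a reference makeup region from the tutorial face bounding
    box into the user's face bounding box."""
    rx, ry, rw, rh = ref_region
    fx, fy, fw, fh = ref_face
    ux, uy, uw, uh = user_face
    sx, sy = uw / fw, uh / fh
    return (ux + (rx - fx) * sx, uy + (ry - fy) * sy, rw * sx, rh * sy)

def match_step(op_point, steps, ref_face, user_face):
    """Return the name of the makeup step whose mapped region contains
    the point where the user's makeup operation was detected, or None."""
    px, py = op_point
    for step_name, ref_region in steps:
        x, y, w, h = map_region(ref_region, ref_face, user_face)
        if x <= px <= x + w and y <= py <= y + h:
            return step_name
    return None
```

A real system would derive the face boxes and operation location from the contour-feature and gesture-recognition models described elsewhere in the text; here they are given directly.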
According to the makeup method based on the cosmetic case provided by the application, after controlling the clamping device to take the target cosmetic corresponding to the makeup step out of its clamping groove, the method further includes:
when it is detected that the user has put the target cosmetic back into a clamping groove, identifying whether that clamping groove is correct;
and, when the clamping groove is not correct, controlling the clamping device to automatically return the target cosmetic to the clamping groove in which it was stored before being taken out, i.e., the correct clamping groove.
According to the makeup method based on the cosmetic case provided by the application, retrieving at least one makeup video to be recommended according to the voice information input by the user includes:
determining the semantic information corresponding to the voice information;
if a makeup effect is parsed from the semantic information, retrieving at least one makeup video to be recommended that matches the makeup effect;
and, if an activity scene is parsed from the semantic information, retrieving at least one makeup video to be recommended that matches the activity scene.
According to the makeup method based on the cosmetic case provided by the application, after determining the working mode corresponding to the wake-up instruction, the method further includes:
when the working mode is the intelligent delivery mode, controlling the clamping device to take cosmetics out of their clamping grooves in turn, according to a preset cosmetic use sequence.
According to the makeup method based on the cosmetic case provided by the application, after determining the working mode corresponding to the wake-up instruction, the method further includes:
when the working mode is the skin care mode, determining the user's skin state from an analysis of the user's facial image acquired by the camera of the cosmetic box;
and, when the skin state does not match the set skin state, controlling the clamping device to take the skin care product matching the skin state out of its clamping groove, and outputting a skin care reminder message.
According to the makeup method based on the cosmetic case provided by the application, the wake-up instruction is triggered by gesture or by voice.
The application also provides a makeup device based on a cosmetic case, including:
a wake-up unit, configured to determine the working mode corresponding to a wake-up instruction when the wake-up instruction input by a user is received;
a recommendation unit, configured to retrieve at least one makeup video to be recommended according to voice information input by the user when the working mode is the intelligent recommendation mode;
a display unit, configured to display the target makeup video corresponding to a determination instruction in the first display area of the cosmetic mirror and display a face image of the user in the second display area when the determination instruction from the user for a makeup video is received;
and a control unit, configured to determine the makeup step matching the user based on the user's makeup operation in the face image, display the video frames in the target makeup video that match the makeup step in the first display area, and control the clamping device to take the target cosmetic corresponding to the makeup step out of its clamping groove.
The present application also provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the cosmetic case-based makeup method is implemented.
The present application also provides an electronic device including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, it implements the cosmetic case-based makeup method.
According to the makeup method based on the cosmetic case provided by the application, when a wake-up instruction input by a user is received, the working mode corresponding to the instruction is determined; when the working mode is the intelligent recommendation mode, at least one makeup video to be recommended is retrieved according to voice information input by the user; when the user's determination instruction for a makeup video is received, the target makeup video corresponding to the instruction is displayed in the first display area of the cosmetic mirror and a face image of the user is displayed in the second display area of the cosmetic mirror; and the makeup step matching the user is determined based on the makeup operation in the face image, the video frames in the target makeup video that match the makeup step are displayed, and the clamping device is controlled to take the target cosmetic corresponding to the makeup step out of its clamping groove.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
To illustrate the technical solutions of the present application or the prior art more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described here show only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a hardware environment for a cosmetic case-based makeup method provided herein;
FIG. 2 is a schematic flow chart of a cosmetic case-based cosmetic method provided herein;
FIG. 3 is a schematic view of a construction of a cosmetic case-based cosmetic device provided herein;
fig. 4 is a schematic structural diagram of an electronic device provided in the present application.
Reference numerals:
101: terminal device; 102: server.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in this application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, apparatus, article, or device that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or device.
According to one aspect of the embodiments of the present application, a cosmetic method based on a cosmetic case is provided. The method is widely applicable to whole-house intelligent control scenarios such as the smart home and the smart-home device ecosystem. In this embodiment, fig. 1 is a schematic diagram of the hardware environment of the cosmetic case-based makeup method provided in this application; the method can be applied to the hardware environment formed by the terminal device 101 and the server 102 shown in fig. 1. The server 102 is connected to the terminal device 101 through a network and may be configured to provide services (such as application services) for the terminal or for a client installed on the terminal. A database may be provided on the server, or independently of it, to provide data storage services for the server 102, and cloud computing and/or edge computing services may be configured on the server, or independently of it, to provide data computation services for the server 102.
The network may include, but is not limited to, at least one of a wired network and a wireless network. The wired network may include, but is not limited to, at least one of: a wide area network, a metropolitan area network, or a local area network; the wireless network may include, but is not limited to, at least one of WiFi (Wireless Fidelity) and Bluetooth. The terminal device 101 may be, but is not limited to, a PC, a mobile phone, a tablet computer, a smart air conditioner, a smart range hood, a smart refrigerator, a smart oven, a smart cooktop, a smart washing machine, a smart water heater, a smart washing device, a smart dishwasher, a smart projection device, a smart TV, a smart clothes hanger, a smart curtain, smart audio-video equipment, a smart socket, a smart sound system, a smart speaker, a smart fresh-air system, smart kitchen-and-bathroom equipment, a smart bathroom device, a smart sweeping robot, a smart window-cleaning robot, a smart mopping robot, a smart air purifier, a smart steamer, a smart microwave oven, a smart kitchen water heater, a smart water purifier, a smart water dispenser, a smart door lock, and the like.
It should be noted that the execution body of the method is a control unit in the cosmetic case; the cosmetic case comprises a cosmetic mirror and a cosmetic box fixedly connected to the cosmetic mirror. The control unit may be an electronic device, a component in an electronic device, an integrated circuit, or a chip. The electronic device may be a mobile or non-mobile electronic device; the application is not limited in this respect.
In this embodiment, the cosmetic box is internally provided with clamping grooves for storing articles; various cosmetics, skin care products, and similar articles can be placed in the clamping grooves. The cosmetic box is also equipped with image acquisition devices, which are preferably arranged inside each clamping groove.
When the user organizes articles, the cosmetic box can be triggered to enter an organization mode, in which the image acquisition devices enter the working state. As the user places articles into the clamping grooves, the image acquisition device in each clamping groove captures an image of the article in that groove, and the image is analyzed to obtain information about the article stored there.
After the user finishes placing articles, the cosmetic box transmits the collected article information for each clamping groove to the cosmetic mirror, which displays it visually as an article summary list with a clamping-groove number column and an article information column. The user can edit the information in the list; upon receiving the user's confirmation instruction, the device judges that organization is complete and automatically exits the organization mode.
Furthermore, to improve intelligence, the cosmetic case in this embodiment is also provided with a clamping device, through which articles can be taken out of, or returned to, the clamping grooves of the cosmetic box.
Furthermore, the clamping device is also used to actively take out an article the user has just stored when the image acquisition device of the cosmetic box identifies that the user returned the article to the wrong clamping groove, so as to prompt the user that the article was misplaced.
In this embodiment, the cosmetic mirror is also provided with an image acquisition device, preferably arranged at the center of the frame directly above the mirror, and the display screen of the cosmetic mirror is divided into two parts: a first display area and a second display area.
In this embodiment, the first display area is used to display third-party data visually and to respond to intelligent operations such as the user's touch operations; that is, playback of makeup videos and editing of the article information in the article summary list are implemented in the first display area.
The second display area displays the user image acquired by the image acquisition device, and image processing can be performed on that image.
In this embodiment, the cosmetic mirror and the cosmetic box are connected so that the mirror can rotate relative to the box, preferably through a rotation angle of 0° to 120°. At 0°, the cosmetic mirror and the cosmetic box lie flat against each other and both are in the sleep state; at 60° to 90°, the cosmetic box and the second display area of the cosmetic mirror enter the working state; and at 90° to 120°, the first display area of the cosmetic mirror also enters the working state. The user can therefore control whether the cosmetic mirror and the cosmetic box enter the working state simply by adjusting the rotation angle between them.
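As an illustrative sketch only, the angle-based wake-up logic could be expressed as a simple threshold mapping. The thresholds follow the text (0°, 60°–90°, 90°–120°); the function and state names are assumptions, and the text does not specify behavior between 0° and 60°, so this sketch keeps everything asleep there.

```python
# Hypothetical sketch of mapping the mirror-to-case rotation angle to
# component states. Thresholds follow the description; behavior in the
# unspecified 0-60 degree range is an assumption (everything asleep).

def component_states(angle_deg):
    if angle_deg < 60:
        # 0 degrees (closed) up to 60 degrees: all components sleep
        return {"case": "sleep", "second_area": "sleep", "first_area": "sleep"}
    if angle_deg < 90:
        # 60-90 degrees: case and second display area wake up
        return {"case": "working", "second_area": "working", "first_area": "sleep"}
    # 90-120 degrees: first display area also wakes up
    return {"case": "working", "second_area": "working", "first_area": "working"}
```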
Fig. 2 is a schematic flowchart of a cosmetic case-based cosmetic method provided in the present application, and as shown in fig. 2, the method includes step 201, step 202, step 203, and step 204.
Step 201, determining a working mode corresponding to a wake-up instruction under the condition of receiving the wake-up instruction input by a user;
in this embodiment, the cosmetic case has more than one working mode in the working state, and the wake-up instructions corresponding to the various working modes are different, where the wake-up instruction may be a fixed instruction set by a factory or a user-defined instruction, and is not limited to this.
Specifically, the wake-up instruction may be triggered by, but not limited to, gesture wake-up, voice wake-up, or touch wake-up. For example, the user may speak relevant voice information to the voice device, perform a relevant gesture in front of the image acquisition device, or wake the device with different touch frequencies.
In this way, the user can input different wake-up instructions to make the cosmetic mirror and the cosmetic box of the cosmetic case enter different working modes.
Preferably, the frame of the cosmetic mirror is also provided with a prompt icon for each working mode, which indicates whether the device is currently in the corresponding mode. For example, the prompt icon of the intelligent recommendation mode may be a cloud; when that mode is entered, a green indicator lamp lights the cloud icon to prompt the user that the cosmetic case has entered the intelligent recommendation mode.
Alternatively, only one prompt icon may be provided, with the current working mode indicated by the icon's flashing frequency.
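As an illustrative sketch only, resolving a wake-up instruction to a working mode could be a table lookup. The instruction phrases and mode names below are invented for illustration; the application allows factory-set or user-defined instructions delivered by voice, gesture, or touch.

```python
# Hypothetical sketch of wake-up instruction dispatch. The phrases and
# mode names are illustrative assumptions, not from the application.

WAKE_TABLE = {
    "recommend a look": "smart_recommendation",
    "hand me my makeup": "smart_delivery",
    "check my skin": "skin_care",
}

def resolve_mode(instruction):
    """Return the working mode for a wake-up instruction, or None if
    the instruction is not recognized."""
    return WAKE_TABLE.get(instruction.strip().lower())
```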
Step 202, under the condition that the working mode is an intelligent recommendation mode, at least one makeup video to be recommended is retrieved according to voice information input by a user;
specifically, the makeup video to be recommended refers to a video, in the third-party makeup video database, of which the similarity with the voice information of the user meets a preset expected value.
In this step, natural language processing (NLP) can be applied to the voice information to extract the user's semantic information; NLP is then applied to the title, tags, and other metadata of each makeup video in the third-party database to extract each video's semantic information. Finally, the similarity between the user's semantic information and each video's semantic information is computed, and at least one makeup video to be recommended is retrieved.
Specifically, in one embodiment, the semantic information corresponding to the voice information is determined. If a makeup effect is parsed from the semantic information, at least one makeup video matching that effect is retrieved; if an activity scene is parsed, at least one makeup video matching that scene is retrieved. In this way, the cosmetic case can search videos according to the user's voice information and find the makeup videos that meet the similarity threshold for the user's expected effect.
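The retrieval step above can be sketched with a deliberately simple stand-in: a bag-of-words cosine similarity between the parsed request and each video's title and tags, filtered by a threshold. A real system would use a proper NLP model; every name, the threshold value, and the similarity measure here are illustrative assumptions.

```python
# Hypothetical sketch of similarity-based video retrieval. Bag-of-words
# cosine similarity stands in for the unspecified NLP pipeline.
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two whitespace-tokenized strings."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_videos(query, videos, threshold=0.3):
    """Return (title, score) pairs whose title+tags similarity to the
    query meets the expected threshold, best match first."""
    scored = [(v["title"], cosine(query, v["title"] + " " + v["tags"]))
              for v in videos]
    return sorted([s for s in scored if s[1] >= threshold],
                  key=lambda s: -s[1])
```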
Step 203, in the case of receiving a determination instruction of the user for the makeup video, displaying a target makeup video corresponding to the determination instruction in the first display area, and displaying a face image of the user in the second display area;
in the step, each makeup video to be recommended is played in a rolling mode in the first display area, a user can play each makeup video in a sliding mode through touch operation, and under the condition that a determination instruction of the user is received, a target makeup video corresponding to the determination instruction is fixedly played in the first display area.
The determination instruction may be triggered by, but not limited to, voice or touch. For example, while a makeup video is playing in the first display area, the user may double-tap the area; when this touch operation is detected, the cosmetic mirror determines that this video is the target makeup video selected by the user.
Further, if a makeup video to be recommended does not meet the user's expectations, the user can delete it by sliding it up or down.
In this embodiment, the face image displayed in the second display area is the facial region of the user image acquired in real time by the image acquisition device of the cosmetic mirror. Preferably, the face image is displayed at the same size as the second display area: after the user image is acquired, the facial region is extracted and scaled to the size of the second display area, so that the user can see the makeup effect more clearly during makeup.
In practice, when the sharpness of the face image displayed in the second display area is detected to be below a set minimum, a text or icon prompt can be output in the second display area to inform the user that the cosmetic mirror cannot currently acquire a clear face image, prompting the user to adjust their posture until the sharpness exceeds the minimum.
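As an illustrative sketch only, the clarity check could estimate sharpness as the mean absolute gradient of the grayscale face image and compare it to the set minimum. A real device would use a proper focus measure (e.g., variance of Laplacian) on camera frames; the threshold value, function names, and the plain-list image representation are assumptions.

```python
# Hypothetical sketch of the face-image clarity check. The image is a
# list of rows of grayscale values; sharpness is the mean absolute
# gradient between neighboring pixels.

def sharpness(gray):
    """Mean absolute horizontal/vertical gradient of a 2-D grayscale image."""
    total, count = 0, 0
    for y, row in enumerate(gray):
        for x, v in enumerate(row):
            if x + 1 < len(row):
                total += abs(row[x + 1] - v); count += 1
            if y + 1 < len(gray):
                total += abs(gray[y + 1][x] - v); count += 1
    return total / count if count else 0.0

def clarity_prompt(gray, min_sharpness=10.0):
    """Return a prompt message when the face image is too blurry,
    otherwise None."""
    if sharpness(gray) < min_sharpness:
        return "Face image unclear - please adjust your posture"
    return None
```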
Step 204, determining a makeup step matched with the user based on the makeup operation of the user in the face image, displaying a video frame matched with the makeup step in the target makeup video in the first display area, and controlling the clamping device to take out the target cosmetics corresponding to the makeup step from the clamping groove.
Specifically, a makeup operation is a gesture the user performs while applying makeup in the face image region. In this embodiment, an initial recognition model is trained on a large number of images labeled with makeup-operation labels, yielding a makeup operation recognition model that can recognize makeup operations.
In this step, the user's current makeup step can be judged by recognizing the user's makeup operation, and playback of the corresponding video frames in the target makeup video is synchronized with the recognition result, so that the content played in the first display area stays consistent with the user's makeup steps, thereby assisting the user in applying makeup.
In this embodiment, the video frames of the makeup step corresponding to the recognized makeup operation may be played repeatedly in the first display area, or the first display area may pause on the last video frame of that step; this is not limited here.
In this step, after controlling the clamping device to take the target cosmetic corresponding to the makeup step out of its clamping groove, the method further includes:
when it is detected that the user has put the target cosmetic back into a clamping groove, identifying whether that clamping groove is correct; and, when it is not, controlling the clamping device to automatically return the target cosmetic to the clamping groove in which it was stored before being taken out, i.e., the correct clamping groove, thereby prompting the user that the cosmetic was stored in the wrong place.
Thus, in this embodiment, when the user is detected storing a cosmetic in the wrong position, the clamping device can promptly return it to the clamping groove in which it was originally stored before being taken out. This not only reminds the user that the cosmetic was misplaced, but also prevents the cosmetics from gradually ending up in disorder as positions shift, saving the user time tidying them up later.
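The slot-checking logic can be sketched as below. This is illustrative only: the slot identifiers and the `move_fn` callback standing in for the clamping device are assumptions, not part of the disclosed hardware interface.

```python
def check_and_restore(home_slots, cosmetic_id, observed_slot, move_fn):
    """Verify the slot a cosmetic was stored in; restore and warn if wrong.

    home_slots maps cosmetic id -> its original slot; move_fn(cosmetic, slot)
    stands in for the clamping device. Returns True when storage was correct,
    False when the cosmetic had to be moved back (caller can prompt the user).
    """
    correct_slot = home_slots[cosmetic_id]
    if observed_slot == correct_slot:
        return True
    move_fn(cosmetic_id, correct_slot)  # clamp puts it back automatically
    return False
```

A `False` return would trigger the misplacement prompt described in the text.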
Further, in this embodiment, in order to improve the intelligence of the cosmetic box, after the target makeup video is determined, it is determined whether there is a historical user makeup operation matching the makeup operations in the target makeup video. If there is, the cosmetic information corresponding to that historical makeup operation is acquired, and the cosmetic box is controlled to take out the target cosmetic for each makeup step according to this cosmetic information.
Specifically, the cosmetic box in this embodiment has an intelligent learning mode. After entering this mode, the order in which the user uses cosmetics daily is labeled and learned; after learning for a period of time, the mode is closed and the cosmetic use order learned during that period is stored, so that the cosmetics required in each makeup step can subsequently be taken out and put back automatically, in order.
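The learning mode can be sketched as a simple recorder. This is a hedged illustration, not the disclosed algorithm: the class name, the consecutive-use de-duplication, and the single-sequence storage rule (the text later states only one use order is kept) are modeling assumptions.

```python
class UsageSequenceLearner:
    """Records the order of cosmetics used while the learning mode is on."""

    def __init__(self):
        self._recording = False
        self._current = []
        self.stored_sequence = []  # the single stored use order

    def start(self):
        # Entering the intelligent learning mode begins a fresh recording.
        self._recording = True
        self._current = []

    def observe(self, cosmetic_id):
        # De-duplicate consecutive uses of the same cosmetic in a session.
        if self._recording and (not self._current or self._current[-1] != cosmetic_id):
            self._current.append(cosmetic_id)

    def stop(self):
        # Closing the mode stores the learned order, replacing any old one.
        self._recording = False
        self.stored_sequence = list(self._current)
```

The stored sequence would then drive the intelligent delivery mode described later, taking cosmetics out in order.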
According to the makeup method provided by the embodiment, the working mode corresponding to the awakening instruction is determined under the condition that the awakening instruction input by the user is received; under the condition that the working mode is an intelligent recommendation mode, at least one makeup video to be recommended is retrieved according to voice information input by a user; under the condition that a determination instruction of a user for the makeup video is received, displaying a target makeup video corresponding to the determination instruction in a first display area of the makeup mirror, and displaying a face image of the user in a second display area of the makeup mirror; the makeup step matched with the user is determined based on the makeup operation in the face image, the video frame matched with the makeup step in the target makeup video is displayed, and the clamping device is controlled to take out the target cosmetics corresponding to the makeup step from the clamping groove.
Based on the above embodiment, after displaying the face image of the user in the operation screen of the cosmetic mirror, the method further includes:
displaying a first video frame corresponding to the first step of makeup in the target makeup video;
determining a first makeup area corresponding to the first step of makeup in the face image according to the facial contour features in the face image and a first reference makeup operation in the first video frame;
displaying a second video frame corresponding to a second step of the target makeup video in the case of recognizing that there is a makeup operation in the first makeup area that is consistent with the first reference makeup operation;
taking the second video frame as the first video frame, and returning to the step of determining a first makeup area corresponding to the first step of makeup in the face image according to the face contour feature in the face image and the first reference makeup operation in the first video frame;
and continuing to execute the step of displaying a second video frame corresponding to the second step in the target makeup video in the case of recognizing that the makeup operation consistent with the first reference makeup operation exists in the first makeup area until the target makeup video is displayed.
The first video frame refers to the multiple video frames corresponding to the first makeup step, beginning with the frame at the start time point of the first makeup step and ending with the frame at its end time point.
Facial contour features include, but are not limited to, nasal bone features, chin features, face width length features, and mandible features.
In this step, the reference makeup area corresponding to the first reference makeup operation in the first video frame and the reference facial contour features of the reference user in that frame are determined; the first makeup area corresponding to the user's facial contour features is then calculated from the positional relation between the reference makeup area and the reference facial contour features in the first video frame.
After the first makeup area is determined, edge feature points in the first makeup area are constructed according to the first reference makeup operation; when the user's makeup operation is recognized as passing through these edge feature points, the operation is determined to be consistent with the first reference makeup operation, and display of the second video frame starts automatically.
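The transfer of a reference makeup area onto the user's face can be sketched with a simple landmark-anchored mapping. This is illustrative only: a two-landmark scale-and-translate transform is assumed for brevity, whereas a real system would fit a similarity or affine transform over many facial landmarks.

```python
def map_point(ref_landmarks, user_landmarks, point):
    """Map a point from reference-face coordinates to user-face coordinates.

    ref_landmarks / user_landmarks: ((x1, y1), (x2, y2)) pairs of matching
    landmarks (e.g. the two eye corners). Axis-aligned scale and translation
    only; rotation is ignored in this sketch.
    """
    (rx1, ry1), (rx2, ry2) = ref_landmarks
    (ux1, uy1), (ux2, uy2) = user_landmarks
    sx = (ux2 - ux1) / (rx2 - rx1)  # horizontal scale between the two faces
    sy = (uy2 - uy1) / (ry2 - ry1)  # vertical scale between the two faces
    px, py = point
    return (ux1 + (px - rx1) * sx, uy1 + (py - ry1) * sy)
```

Mapping every vertex of the reference makeup area this way yields the first makeup area on the user's face; its boundary points can then serve as the edge feature points checked against the user's gesture.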
Further, after determining the first makeup area corresponding to the first makeup step in the face image according to the facial contour features in the face image and the first reference makeup operation in the first video frame, the method further includes:
visually displaying the first makeup area in the operation screen to prompt the user to make up at the first makeup area.
In this embodiment, the manner of visual display includes, but is not limited to, feature points or feature lines.
In practical application, after a first makeup area in a face image is determined, an edge part of the first makeup area is determined, and at least one edge feature point of the position of the edge part is displayed in the face image so as to prompt a user to carry out makeup along the edge feature point.
Based on the above embodiment, the determining the user-matched makeup operation based on the makeup operation of the user in the face image includes:
determining a makeup area corresponding to each makeup step in the face image according to the facial contour features in the face image and the reference makeup area of each reference makeup step in the target makeup video;
and under the condition that the makeup operation of the user in the face image is identified, determining a target makeup area corresponding to the makeup operation from the makeup areas corresponding to the makeup steps, and determining a makeup step matched with the target makeup area.
The facial contour features include, but are not limited to, nasal bone features, chin features, face width length features, and mandible features.
In this step, the reference makeup area corresponding to each reference makeup step in the target makeup video and the reference facial contour features of the reference user in the video are determined; then, according to the positional relation between each reference makeup area and the reference facial contour features, the makeup area corresponding to each makeup step is calculated for the user's facial contour features, and an operation point interval is generated within each makeup area.
When the user's makeup operation in the face image is recognized by the makeup operation recognition model, the operation point corresponding to the operation is located, the operation point interval to which it belongs is determined, and the corresponding makeup area is identified, thereby accurately identifying the makeup step matched with the user.
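The operation-point lookup can be sketched as a point-in-region test. Illustrative only: the step names, the axis-aligned rectangles standing in for operation point intervals, and the coordinates are all invented for the example.

```python
# Each makeup step owns an operation-point interval in face-image
# coordinates, here simplified to (x_min, y_min, x_max, y_max) rectangles.
STEP_REGIONS = {
    "eyebrow": (30, 20, 70, 35),
    "lipstick": (40, 70, 60, 85),
}

def step_for_point(point, regions=STEP_REGIONS):
    """Return the makeup step whose region contains the operation point."""
    x, y = point
    for step, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return step
    return None  # operation outside all known makeup areas
```

A `None` result would mean the recognized gesture falls outside every makeup area, so no step switch is triggered.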
Based on the above embodiment, after determining the target makeup area corresponding to the makeup operation from the makeup areas corresponding to the makeup steps, the method further includes: visually displaying the makeup area in the second display area to prompt the user to perform makeup operation at the makeup area.
In this embodiment, the manner of visual display includes, but is not limited to, feature points or feature lines.
In practical application, after a makeup area in a face image is determined, an edge part of the makeup area is determined, and at least one edge feature point at the position of the edge part is displayed in the face image so as to prompt a user to perform makeup along the edge feature point.
In this embodiment, the cosmetic box has an auxiliary makeup function and can control the playing progress of the displayed target makeup video according to the user's makeup operations, so that the user no longer needs to switch back and forth between a mobile phone or other terminal and a mirror, saving time and improving the user experience.
Based on the above embodiment, after determining the working mode corresponding to the wake-up instruction, the method further includes:
and, when the working mode is an intelligent delivery mode, controlling the clamping device to take the cosmetics out of the clamping grooves in sequence according to a preset cosmetic use order.
In this embodiment, the intelligent delivery mode refers to a mode in which the cosmetic case automatically takes out and retracts cosmetics required in the makeup operation process in order.
The preset cosmetic use order refers to the user's cosmetic use order obtained after the cosmetic box, in the intelligent learning mode, performs label learning on the cosmetics the user uses daily.
It should be noted that, in practical applications, when a user needs to store a new cosmetic use sequence, a wake-up instruction corresponding to the intelligent learning mode may be sent to wake up the cosmetic box to enter the intelligent learning mode, and the cosmetic box deletes the currently stored cosmetic use sequence, and re-identifies and stores the new cosmetic use sequence. In other words, in this embodiment, only one cosmetic usage order can be stored in the cosmetic container, and the user can update the cosmetic usage order by starting the intelligent learning mode of the cosmetic container.
Based on the above embodiment, after determining the working mode corresponding to the wake-up instruction, the method further includes:
under the condition that the working mode is a skin care mode, determining the skin state of a user through the analysis result of the facial image of the user acquired by the camera of the cosmetic box;
and under the condition that the skin state does not accord with the set skin state, controlling the clamping device to take out the skin care product matched with the skin state from the clamping groove, and outputting a skin care reminding message.
In this step, a camera in the image acquisition device of the cosmetic mirror acquires a face image of the user, and the skin state of the user is determined from the analysis result of the acquired face image.
In this embodiment, the face image may be sent to a third-party skin detection terminal and the analysis result fed back by that terminal received; alternatively, the image processing device of the cosmetic box may perform image processing such as graying on the face image to obtain the analysis result, and the user's skin state may then be analyzed in combination with a stored skin state database.
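The on-device analysis path can be sketched as below. This is a minimal illustration only: the brightness statistic, the "ok"/"dry" labels, and the threshold are invented; a real system would use a proper skin-analysis model or the third-party detection terminal mentioned above.

```python
def to_gray(pixel):
    """ITU-R BT.601 luma approximation for an (R, G, B) pixel."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def skin_state(image, dry_threshold=90.0):
    """Classify a tiny RGB image (list of rows of (R, G, B)) as 'ok' or 'dry'.

    Grays the image and compares mean luma against a hypothetical threshold,
    standing in for the comparison against the stored skin state database.
    """
    pixels = [p for row in image for p in row]
    mean_luma = sum(to_gray(p) for p in pixels) / len(pixels)
    return "ok" if mean_luma >= dry_threshold else "dry"
```

A "dry" result would then trigger the skin care reminder and product retrieval described below.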
In this step, the skin care reminding message includes, but is not limited to, skin care time, skin care times, skin care mode, and recommended skin care product.
For example, in one application scenario, when no skin care product adapted to the skin state exists in the cosmetic case, or the remaining amount of such a product is less than the minimum amount required for that skin state, a suitable skin care product can be recommended to the user based on the skin state and products currently on the market, and a reminder can be given.
Further, in another embodiment, after the skin state of the user is determined, the skin state is visually displayed in the first display area of the cosmetic mirror, for example as a text report or as text annotations on the user's image; this is not limited here.
In this embodiment, the cosmetic case has a skin care reminder function: it can monitor the user's skin state and give skin care suggestions when the skin state is poor, which improves the intelligence of the cosmetic case and further improves the user experience.
Based on any one of the above embodiments, the makeup apparatus provided by the present application is described below, and the makeup apparatus described below and the makeup method based on the cosmetic case described above may be referred to with each other.
Fig. 3 is a schematic structural view of a cosmetic device based on a cosmetic case provided in the present application. As shown in fig. 3, the device includes: a wake-up unit 310, configured to determine the working mode corresponding to a wake-up instruction when the wake-up instruction input by a user is received; a recommending unit 320, configured to retrieve at least one makeup video to be recommended according to voice information input by the user when the working mode is the intelligent recommendation mode; a display unit 330, configured to, when receiving a determination instruction of the user for the makeup video, display the target makeup video corresponding to the determination instruction in a first display area of the makeup mirror and display the face image of the user in a second display area; and a control unit 340, configured to determine a makeup step matched with the user based on the user's makeup operation in the face image, display the video frames matched with that makeup step in the target makeup video in the first display area, and control the clamping device to take out the target cosmetic corresponding to the makeup step from the clamping groove.
The specific implementation mode of the cosmetic device based on the cosmetic box provided by the embodiment of the application is consistent with the implementation mode of the method, the same beneficial effects can be achieved, and the detailed description is omitted here.
According to the cosmetic device based on the cosmetic box, the working mode corresponding to the awakening instruction is determined under the condition that the awakening instruction input by a user is received; under the condition that the working mode is the intelligent recommendation mode, at least one makeup video to be recommended is retrieved according to the voice information input by the user; under the condition that a determination instruction of a user for the makeup video is received, displaying a target makeup video corresponding to the determination instruction in a first display area of the makeup mirror, and displaying a face image of the user in a second display area of the makeup mirror; the makeup step matched with the user is determined based on the makeup operation in the face image, the video frame matched with the makeup step in the target makeup video is displayed, and the clamping device is controlled to take out the target cosmetics corresponding to the makeup step from the clamping groove.
Based on any one of the above embodiments, fig. 4 is a schematic structural diagram of an electronic device provided in the present application. As shown in fig. 4, the electronic device may include: a processor 410, a communication interface 420, a memory 430 and a communication bus 440, where the processor 410, the communication interface 420 and the memory 430 communicate with each other via the communication bus 440. The processor 410 may call logic instructions in the memory 430 to perform the following method: when a wake-up instruction input by a user is received, determining the working mode corresponding to the wake-up instruction; when the working mode is the intelligent recommendation mode, retrieving at least one makeup video to be recommended according to voice information input by the user; when a determination instruction of the user for the makeup video is received, displaying the target makeup video corresponding to the determination instruction in the first display area and displaying the face image of the user in the second display area; and determining a makeup step matched with the user based on the user's makeup operation in the face image, displaying the video frames matched with that makeup step in the target makeup video in the first display area, and controlling the clamping device to take out the target cosmetic corresponding to the makeup step from the clamping groove.
In addition, the logic instructions in the memory 430 may be implemented in the form of software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, or the portions thereof contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The processor in the electronic device provided in the embodiments of the present application may call logic instructions in the memory to implement the above method; the specific implementation is consistent with the method embodiments, can achieve the same beneficial effects, and is not described again here.
The embodiments of the present application also provide a computer-readable storage medium, on which a computer program is stored, where the computer program is implemented by a processor to execute the methods provided by the above embodiments.
The specific implementation manner is the same as the implementation manner of the method, and the same beneficial effects can be achieved, which is not described herein again.
Embodiments of the present application provide a computer program product, which includes a computer program, and when the computer program is executed by a processor, the method is implemented as described above.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment may be implemented by software plus a necessary general hardware platform, or by hardware. Based on this understanding, the above technical solutions, or the portions thereof contributing to the prior art, may be embodied in the form of a software product stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disk, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute the methods described in the various embodiments or in some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present application.

Claims (10)

1. A makeup method based on a cosmetic case, the cosmetic case comprising a cosmetic mirror and a cosmetic box fixedly connected with the cosmetic mirror, a display screen of the cosmetic mirror comprising a first display area and a second display area, and the cosmetic box comprising a plurality of clamping grooves for storing cosmetics and a clamping device for taking the cosmetics out of the clamping grooves, characterized in that the method comprises:
under the condition of receiving a wake-up instruction input by a user, determining a working mode corresponding to the wake-up instruction;
under the condition that the working mode is an intelligent recommendation mode, at least one makeup video to be recommended is retrieved according to voice information input by a user;
under the condition that a determination instruction of the user for the makeup video is received, displaying a target makeup video corresponding to the determination instruction in the first display area, and displaying a face image of the user in the second display area;
determining a makeup step matched with the user based on the makeup operation of the user in the face image, displaying the video frame matched with the makeup step in the target makeup video in the first display area, and controlling the clamping device to take out the target cosmetic corresponding to the makeup step from the clamping groove.
2. The cosmetic case-based makeup method according to claim 1, wherein the determining of the makeup step matched with the user based on the makeup operation of the user in the face image comprises:
determining a makeup area corresponding to each step of makeup steps in the face image according to the face contour features in the face image and the reference makeup area of each step of reference makeup steps in the target makeup video;
and under the condition that the makeup operation of the user in the face image is identified, determining a target makeup area corresponding to the makeup operation from the makeup areas corresponding to the makeup steps, and determining a makeup step matched with the target makeup area.
3. The cosmetic case-based makeup method according to claim 1, further comprising, after said controlling said clamping device to take out the target cosmetic corresponding to said makeup step from said clamping groove:
under the condition that the user is detected to store the target cosmetics into the clamping groove, identifying whether the clamping groove for storing the target cosmetics is correct or not;
and under the condition that the target cosmetic storage clamping groove is not correct, controlling the clamping device to automatically place the target cosmetic back into the clamping groove stored before the target cosmetic is taken out, and automatically placing the target cosmetic back into the correct clamping groove.
4. The cosmetic case-based makeup method according to claim 1, wherein said retrieving at least one makeup video to be recommended according to voice information inputted by a user comprises:
determining semantic information corresponding to the voice information;
if the makeup effect is analyzed from the semantic information, at least one makeup video to be recommended, which is matched with the makeup effect, is retrieved;
and if the activity scene is analyzed from the semantic information, at least one makeup video to be recommended, which is matched with the activity scene, is retrieved.
5. The cosmetic case-based makeup method according to claim 1, further comprising, after said determining the working mode corresponding to said wake-up instruction:
and, when the working mode is an intelligent delivery mode, controlling the clamping device to take the cosmetics out of the clamping grooves in sequence according to a preset cosmetic use order.
6. The cosmetic case-based makeup method according to claim 1, further comprising, after said determining the working mode corresponding to said wake-up instruction:
under the condition that the working mode is a skin care mode, determining the skin state of a user through the analysis result of the facial image of the user acquired by the camera of the cosmetic box;
and under the condition that the skin state does not accord with the set skin state, controlling the clamping device to take out the skin care product matched with the skin state from the clamping groove, and outputting a skin care reminding message.
7. The cosmetic case-based makeup method according to any one of claims 1 to 6, characterized in that said wake-up instruction is triggered by a gesture wake-up or a voice wake-up.
8. A cosmetic device based on a cosmetic case, comprising:
the wake-up unit is used for determining a working mode corresponding to a wake-up instruction under the condition of receiving the wake-up instruction input by a user;
the recommending unit is used for retrieving at least one makeup video to be recommended according to voice information input by a user under the condition that the working mode is an intelligent recommending mode;
a display unit, configured to, when a determination instruction of the user for the makeup video is received, display a target makeup video corresponding to the determination instruction in the first display area, and display a face image of the user in the second display area;
the control unit is used for determining a makeup step matched with the user based on the makeup operation of the user in the face image, displaying a video frame matched with the makeup step in the target makeup video in the first display area, and controlling the clamping device to take out the target cosmetics corresponding to the makeup step from the clamping groove.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a cosmetic method based on a cosmetic kit according to any one of claims 1 to 7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements a kit-based makeup method according to any one of claims 1 to 7.
CN202211048869.4A 2022-08-30 2022-08-30 Cosmetic method and device based on cosmetic box, storage medium and electronic device Pending CN115481284A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211048869.4A CN115481284A (en) 2022-08-30 2022-08-30 Cosmetic method and device based on cosmetic box, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211048869.4A CN115481284A (en) 2022-08-30 2022-08-30 Cosmetic method and device based on cosmetic box, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN115481284A true CN115481284A (en) 2022-12-16

Family

ID=84422004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211048869.4A Pending CN115481284A (en) 2022-08-30 2022-08-30 Cosmetic method and device based on cosmetic box, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN115481284A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117596741A (en) * 2023-12-08 2024-02-23 东莞莱姆森科技建材有限公司 Intelligent mirror control method and system capable of automatically adjusting light rays
CN117596741B (en) * 2023-12-08 2024-05-14 东莞莱姆森科技建材有限公司 Intelligent mirror control method and system capable of automatically adjusting light rays

Similar Documents

Publication Publication Date Title
CN109976506B (en) Awakening method of electronic equipment, storage medium and robot
CN105700363A (en) Method and system for waking up smart home equipment voice control device
CN112346353B (en) Intelligent equipment control method and device
CN109637518A (en) Virtual newscaster's implementation method and device
CN103295028B (en) gesture operation control method, device and intelligent display terminal
CN109951595A (en) Intelligence adjusts method, apparatus, storage medium and the mobile terminal of screen intensity
CN105204351B (en) control method and device of air conditioning unit
CN105045240A (en) Household appliance control method and device
CN105163180A (en) Play control method, play control device and terminal
CN110426962A (en) A kind of control method and system of smart home device
WO2020135334A1 (en) Television application theme switching method, television, readable storage medium, and device
CN107862313B (en) Dish washing machine and control method and device thereof
CN108762512A (en) Human-computer interaction device, method and system
CN115481284A (en) Cosmetic method and device based on cosmetic box, storage medium and electronic device
CN111692418A (en) Water outlet device and control method thereof
CN113206774A (en) Control method and device of intelligent household equipment based on indoor positioning information
CN112243065B (en) Video recording method and device
CN113448427B (en) Equipment control method, device and system
CN110880994A (en) Control method and control equipment of household appliance
CN113160475A (en) Access control method, device, equipment and computer readable storage medium
CN108415572B (en) Module control method and device applied to mobile terminal and storage medium
CN112333439A (en) Face cleaning equipment control method and device and electronic equipment
CN111385595B (en) Network live broadcast method, live broadcast replenishment processing method and device, live broadcast server and terminal equipment
CN108388342B (en) Electronic device, equipment control method and related product
CN111064766A (en) Information pushing method and device based on Internet of things operating system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination