CN117315165B - Intelligent auxiliary cosmetic display method based on display interface - Google Patents
- Publication number
- CN117315165B (granted publication) · CN202311595590.2A (application)
- Authority
- CN
- China
- Prior art keywords
- makeup
- face
- display interface
- user
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS; G06—COMPUTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G—PHYSICS; G06—COMPUTING; G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
- G06V40/168—Human faces: feature extraction; face representation
- G06V40/172—Human faces: classification, e.g. identification
Abstract
The invention discloses an intelligent auxiliary cosmetic display method based on a display interface. The method proceeds as follows: first, the cosmetic system receives a wake-up instruction and starts the auxiliary makeup process; the camera assembly then acquires facial feature data of the user, from which the cosmetic system generates a face model; the generated face model is displayed on the display interface, and after the user selects a makeup type, the cosmetic system adapts the corresponding makeup template to the face model and displays the resulting made-up model; once the user confirms entry into the makeup flow, the cosmetic system presents step-by-step guidance on the display interface, and the user applies the makeup to the face accordingly. The invention allows a user to preview a made-up virtual avatar on the interface, select a preferred look, and follow the prompted workflow, so that the fit between a makeup scheme and the user's own face is known in advance.
Description
Technical Field
The invention belongs to the technical field of artificial-intelligence algorithm assistance, and in particular relates to an intelligent auxiliary cosmetic display method based on a display interface.
Background
Makeup uses cosmetics and tools, following established steps and techniques, to render, draw and groom the face, facial features and other parts of the body: enhancing three-dimensionality, adjusting shape and color, masking flaws and expressing character, thereby beautifying the visual impression. Makeup can bring out a person's natural beauty, improve their original shape, color and texture, and add aesthetic appeal and charm.
Makeup is commonly divided into base makeup and color makeup, applied in that order. Base makeup covers the skin to form a layer of uniform color and brightness on the face, providing shading and brightening; by applying products with different functions it can also serve as a primer, moisturizer, sunscreen and the like. Such functional cosmetics are intended less to change appearance than to protect the covered skin. Color makeup is applied over the base makeup and serves purely aesthetic purposes.
Different people hold different aesthetic ideas, and each person's face shape and facial features entail differences in which makeup suits them, as well as differences in suitable color values and brightness. As aesthetic expectations have grown over time, so has the range of makeup schemes to choose from. With the spread of internet technology, many kinds of makeup tutorial videos have appeared; these tutorials generally use a real face as a template, and a user picks a preferred makeup template and learns to reproduce it by following the video.
However, because facial features differ, a makeup look that is attractive on the template changes visibly when transferred to the user's face, so many people are dissatisfied after copying a look. To solve this, the prior art offers various virtual makeup-assistance programs that adapt a makeup scheme to the user's face shape, so the user can roughly see how a scheme will look on their own face before applying it and can quickly choose a satisfactory scheme. However, such methods use only the user's facial features as a planning reference: after selecting a scheme, the user does not obtain the corresponding cosmetic products and steps, so even with this assisted matching the virtual effect cannot be accurately reproduced. Moreover, although virtual makeup matching and guidance devices and methods exist in the prior art, most cannot adapt the makeup to the user's actual face, so the simulated made-up avatar is distorted and cannot serve as a reliable reference.
Disclosure of Invention
To solve the above problems in the prior art, the invention provides an intelligent auxiliary cosmetic display method based on a display interface. Through optimized recognition, generation and guidance, the cosmetic system presents a more realistic preview of the makeup effect, reduces the gap between the pre-makeup model and the actual post-makeup appearance, and monitors the makeup in real time for feedback during application, thereby improving the success rate of makeup.
The technical scheme adopted by the invention is as follows:
In a first aspect, the invention discloses an intelligent auxiliary cosmetic display method based on a display interface, which provides makeup feedback and guidance to a user through a cosmetic system equipped with a display interface and a camera assembly. The cosmetic system contains a makeup database holding a plurality of makeup templates, each template defined on a standard face shape. The method comprises the following steps:
S100, the cosmetic system receives a wake-up instruction from the user and starts the auxiliary makeup process;
S200, the camera assembly acquires facial feature data of the user, and the cosmetic system generates a face model from the facial feature data;
S300, the generated face model is displayed on the display interface together with a label for each makeup template; after the user selects a label, the cosmetic system adapts the corresponding makeup template to the face model to generate a made-up model and displays it on the display interface;
S400, after the user confirms the made-up model, the cosmetic system indicates the confirmed makeup type on the display interface according to the makeup steps preset in the makeup template, and asks the user whether to enter the makeup flow;
S500, once the user confirms entry into the makeup flow, the display interface presents guidance information generated from the makeup flow data of the selected template, and the user applies the makeup to the face step by step following this guidance.
With reference to the first aspect, in a first implementation thereof, step S100 comprises:
S101, a detection area and an acquisition area are defined within the light-receiving range of the camera assembly, and the cosmetic system is provided with a detection sensor whose signal covers the detection area;
S102, when the detection sensor senses that an object has entered the detection area and stayed longer than a set time, a passive start signal is asserted; on receiving it, the cosmetic system powers on and prompts the user via the display interface to position their face in the acquisition area;
S103, after start-up the camera assembly acquires image information from the acquisition area in real time, and once it confirms that the user's face is in the acquisition area, the display interface prompts the face-acquisition flow.
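The S101–S103 dwell-then-wake logic can be sketched as a small state machine. This is a minimal illustration, not the patented implementation: the class name `DwellDetector` and the sampling scheme are assumptions, and the actual sensor driver that reports object presence is not shown.

```python
class DwellDetector:
    """Asserts a passive start signal once an object has stayed in the
    detection area for at least `dwell` seconds (cf. step S102)."""

    def __init__(self, dwell=2.0):
        self.dwell = dwell
        self.entered_at = None  # timestamp when the object entered, or None

    def update(self, object_present, now):
        """Feed one sensor sample; return True when the start signal fires."""
        if not object_present:
            self.entered_at = None      # object left: reset the dwell timer
            return False
        if self.entered_at is None:
            self.entered_at = now       # object just entered the area
        return (now - self.entered_at) >= self.dwell
```

The caller would poll the detection sensor periodically and, when `update` returns `True`, boot the cosmetic system and switch to the acquisition-area prompt.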
With reference to the first aspect, in a second implementation thereof, step S200 comprises:
S201, the steps of the face-acquisition flow are shown on the display interface, each step presented as text combined with images and animation;
S202, following the flow, the user moves their face into the detection area and holds it steady toward the camera assembly for the prompted time, whereupon the camera assembly captures a first planar face image;
S203, the cosmetic system displays the first planar face image on the display interface, and the user, following the interface prompts, moves the whole face within the detection area in the indicated directions;
S204, the cosmetic system processes the face image data acquired in real time, builds a model by fitting it to a stored three-dimensional model, displays the resulting three-dimensional face model on the display interface, and proceeds to the next step after user confirmation.
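One simple way to realize the S204 fitting, deforming a stored template mesh toward landmarks recovered from the user's multi-frame capture, is distance-weighted landmark warping. This is a sketch under stated assumptions: the patent does not specify the fitting algorithm, and `fit_face_model` with inverse-square weighting is an illustrative choice, not the claimed method.

```python
import numpy as np

def fit_face_model(template_vertices, template_landmarks3d, observed_landmarks3d):
    """Warp a stored template mesh toward the user's 3D landmarks.
    Each vertex moves by a distance-weighted blend of the landmark
    displacements (inverse-square weights, an assumed scheme)."""
    deltas = observed_landmarks3d - template_landmarks3d      # (L, 3) offsets
    warped = template_vertices.copy()
    for i, v in enumerate(template_vertices):
        d = np.linalg.norm(template_landmarks3d - v, axis=1)  # distance to landmarks
        w = 1.0 / (d + 1e-6) ** 2                             # nearer landmarks dominate
        w /= w.sum()
        warped[i] = v + w @ deltas                            # weighted displacement
    return warped
```

A production system would instead fit a statistical 3D face model to the multi-frame landmarks, but the warping above conveys the template-plus-measured-offsets idea.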
With reference to the first aspect, in a third implementation thereof, in step S300 the cosmetic system determines the makeup types suited to the obtained face model using a preset face-analysis algorithm, and extracts from the makeup database the display data of each matching makeup template;
the extracted display data are adapted to the face model in advance to form makeup-layer data, which are cached in the memory of the cosmetic system; any set of makeup-layer data can then be composited with the face model to form a made-up model shown on the display interface;
the display interface provides switching keys labeled with makeup-type descriptions; when the user switches to a makeup type, the cosmetic system fetches the corresponding makeup-layer data from memory and composites it with the face model to display a new made-up model.
With reference to the third implementation of the first aspect, in a fourth implementation thereof, adapting the extracted display data of a makeup template to the face model proceeds as follows:
first, facial feature points and boundaries are determined on the three-dimensional face model; the three-dimensional coordinates of each pixel within the facial boundary are unfolded into smooth planar coordinates, the pixels corresponding to these planar coordinates form a makeup reference plane, and the reference plane is divided into grid regions according to the "three courts, five eyes" facial-proportion rule;
in the display data, a number of color blocks adjoining the boundary are delimited using the facial feature points of the standard face as positioning reference points; each color block radiates a set of sector regions from its feature points to the boundary pixels of the color region, the sectors between adjacent feature points of the same color region being bounded by the midline of the segment connecting the two points; the display data comprise the color value at each facial feature point and the color change rate within each sector;
the display data further record the number of unit grid cells occupied by each color block. During adaptation, each color block is first coarsely positioned on the user's makeup reference plane using the facial feature points as positioning references; the boundary of each block on the user's reference plane is then determined from its unit grid count; finally, color values are assigned to all grid pixels inside the block, propagating outward from each reference point's color value according to the change rates of the sectors.
With reference to the fourth implementation of the first aspect, in a fifth implementation thereof, during initial positioning, if adjacent color regions overlap in some grid cells, the boundaries of the adjacent regions are eroded in equal proportion until no overlapping cells remain;
and if a blank grid region lies between adjacent color regions, the boundaries of the adjacent regions are expanded in equal proportion until no blank cells remain between them.
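The overlap-erasing case of the fifth implementation can be illustrated on boolean grid masks. This sketch splits the contested cells evenly between the two blocks, a simplified stand-in for the equal-proportion boundary erosion the patent describes; `reconcile_overlap` and the alternating split are assumptions.

```python
import numpy as np

def reconcile_overlap(a, b):
    """Remove overlapping grid cells so two adjacent color blocks no
    longer intersect, handing alternate contested cells to each block
    so both lose in roughly equal proportion."""
    overlap = np.flatnonzero(a & b)   # indices of cells claimed by both
    a, b = a.copy(), b.copy()
    a[overlap[::2]] = False           # these cells go to block b
    b[overlap[1::2]] = False          # and these to block a
    return a, b
```

The blank-gap case would run the dual operation, growing both boundaries into the empty cells until the gap closes.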
With reference to the fourth implementation of the first aspect, in a sixth implementation thereof, the boundary of a color region is defined by the condition that a pixel's color value equals the color value of the same pixel on the standard face without color cosmetics applied.
With reference to the foregoing implementations of the first aspect, in a seventh implementation thereof, step S400 comprises:
S401, any candidate made-up model is displayed on the display interface, and the user settles on a preselected made-up model by switching makeup types;
S402, the preselected made-up model is displayed as a three-dimensional image on which the color regions are outlined with dashed boundaries;
S403, operation points are placed at equal intervals along the dashed boundary of each color region; the user adjusts a region's extent by dragging these points on the display interface, and during adjustment the color values of the region's pixels vary smoothly according to the sector change rates defined in the display data;
S404, after the user finishes adjusting and confirms the made-up model, the cosmetic system renders the model as a whole, asks on the display interface whether to enter the makeup flow, and adjusts the makeup flow data of the template to match the confirmed model's display data.
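Placing the S403 operation points at equal intervals along a closed dashed boundary amounts to equal arc-length sampling of a polygon. A minimal sketch, with `operation_points` as an assumed helper name and the boundary given as an ordered vertex list:

```python
import numpy as np

def operation_points(boundary, n):
    """Place n draggable operation points at equal arc-length intervals
    along a closed boundary polyline of (x, y) vertices."""
    boundary = np.asarray(boundary, dtype=float)
    seg = np.roll(boundary, -1, axis=0) - boundary           # edge vectors (closed loop)
    seglen = np.linalg.norm(seg, axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seglen)])         # arc length at each vertex
    targets = np.linspace(0.0, cum[-1], n, endpoint=False)   # equally spaced positions
    pts = []
    for t in targets:
        i = np.searchsorted(cum, t, side="right") - 1        # edge containing position t
        frac = (t - cum[i]) / seglen[i]
        pts.append(boundary[i] + frac * seg[i])
    return np.array(pts)
```

When the user drags one of these points, the boundary polyline is updated and the block's pixels are reshaded with the sector change rates, which keeps the color transition smooth as the region grows or shrinks.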
With reference to the seventh implementation of the first aspect, in an eighth implementation thereof, step S500 comprises:
S501, once the user confirms entry into the makeup flow, the cosmetic system shows at least two virtual heads on the display interface: one is the confirmed made-up model with complete makeup, the other is the user's bare face model, displayed in front view;
S502, the display interface lists the full sequence of flow steps in text or images; on entering the first step, the region to be made up in that step is marked with a dashed outline on the bare face model, and the interface indicates the type and amount of cosmetics the step requires;
S503, while the user performs the step under guidance, the camera assembly captures the user's face at a set interval; after each capture and processing pass, the changed region is updated onto the front-view face model. If the detected changed region does not correspond to the dashed region of the current step, the display interface or cosmetic system gives corrective feedback; once the changed region satisfies the completion condition for the dashed region, the interface or system prompts the next flow step;
S504, on entering the next step, the changed region from the completed step is carried over onto the front-view face model, the makeup region of the new step is marked with a dashed outline, and the guidance of the previous step is repeated;
S505, after all steps are completed, the cosmetic system reacquires the user's facial feature data through the camera assembly, generates an actual makeup model, and displays it alongside the virtual made-up model on the display interface.
The beneficial effects of the invention are as follows:
(1) The invention first obtains the user's face image through the camera assembly and then synthesizes a three-dimensional face model from it for display, so makeup templates can be adapted quickly; the user sees directly on the display interface how a chosen look fits their own face, avoiding the rework of removing makeup that turns out poorly once applied;
(2) Using the pre-stored flow data of each makeup template, once the user selects a look the display interface decomposes the makeup step by step; each step shows detailed instructions, so the user can pick the indicated cosmetics, apply them in the dashed-outline region with the prompted amount, and see their progress fed back in real-time images, with timely prompts when the applied area or color does not match, improving the success rate of makeup and the correspondence between the final result and the preview;
(3) Through the detection mechanism, the device stands by in a low-power mode when unused and wakes quickly when the user enters the makeup area;
(4) By optimizing the template-matching method, the makeup is positioned and spread using two indices, facial feature points and grid regions, so makeup data stored on a standard face adapt well to users with different face shapes; in particular, for users whose features differ markedly from the standard face, this avoids the positioning errors and undersized or oversized makeup regions that would distort the preview and fail to reflect the actual corrective effect.
Drawings
Fig. 1 is a flow chart of the present invention.
Detailed Description
The invention is further illustrated by the following description of specific embodiments in conjunction with the accompanying drawings.
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the drawings. The described embodiments are some, but not all, embodiments of the present application. The components of the embodiments, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of configurations.
Thus, the following detailed description of the embodiments, as provided in the accompanying drawings, is not intended to limit the scope of the application as claimed, but is merely representative of selected embodiments. All other embodiments obtainable by one of ordinary skill in the art without inventive effort fall within the scope of the present disclosure.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
In the description of the present application, it should be noted that terms such as "center," "upper," "lower," "left," "right," "vertical," "horizontal," "inner," and "outer," where they indicate an orientation or positional relationship, are based on the orientation shown in the drawings or the orientation in which the product is conventionally used; they serve only to simplify the description and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and thus should not be construed as limiting the application. Furthermore, terms such as "first" and "second," if used, serve only to distinguish descriptions and do not indicate or imply relative importance.
Likewise, terms such as "horizontal" and "vertical" in this description do not require a component to be absolutely horizontal or vertical; it may be slightly inclined. "Horizontal" merely means more nearly horizontal than "vertical," not perfectly level.
Example 1:
This embodiment discloses an intelligent auxiliary cosmetic display method based on a display interface, realized on intelligent cosmetic system equipment.
The cosmetic system of this embodiment admits a variety of physical implementations, all having at least two features: a display interface and a camera assembly.
The system comprises circuitry with a control module as the main computing module, and software running on that circuitry; the display interface is a virtual image rendered on display hardware, with stored data processed by a processing module and composed by the software into a dynamic interface for display.
The camera assembly may be connected to the cosmetic system as a separate peripheral or integrated with it into a single device; any camera or other optical sensor capable of acquiring image data may serve as the camera assembly, and this embodiment does not limit its hardware structure or parameters. The data flow is as follows: the camera assembly collects external image data, multi-frame image stream data (which may include audio information), in real time, transmits it to the cosmetic system, and the control module processes it.
The cosmetic system stores a makeup database containing the frameworks of a plurality of makeup templates, organized according to existing makeup types. Each template framework contains a makeup scheme referenced to a standard face shape, together with display data and makeup flow data; the flow data specify the types, amounts, order of use and application areas of the required cosmetics, and this information can be presented on the display interface as text and image introductions.
The framework of each makeup template is not a fixed flow with fixed cosmetic data; rather, it has a defined flow order with adjustable flow nodes, each flow comprising required and optional steps. Similarly, the cosmetic data are not limited to particular products but are parameterized by color values.
The standard face shape referred to below is data synthesized and optimized from one or more face shapes, taken as the reference and set as the standard face-shape data, containing all facial feature points.
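One plausible way to synthesize standard face-shape data from several reference faces is to average their aligned feature points. The patent does not state the synthesis method, so the mean used by this hypothetical `standard_face` helper is an assumption:

```python
import numpy as np

def standard_face(landmark_sets):
    """Synthesize standard face-shape data as the per-point mean of one
    or more aligned faces' feature points (averaging is an assumed
    choice; the faces must already share a common alignment)."""
    return np.mean(np.asarray(landmark_sets, dtype=float), axis=0)
```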
Specifically, referring to fig. 1, the auxiliary display method in this embodiment is as follows:
first, the cosmetic system receives the user's wake-up instruction and starts the auxiliary makeup process;
then the camera assembly acquires the user's facial feature data, and the cosmetic system generates a face model from them;
the generated face model is displayed on the display interface together with the label of each makeup template; after the user selects a label, the cosmetic system adapts the corresponding template to the face model and displays the made-up model on the display interface;
after the user confirms the made-up model, the cosmetic system indicates the confirmed makeup type on the display interface according to the template's preset makeup steps, and asks whether to enter the makeup flow;
once the user confirms entry into the makeup flow, the display interface shows guidance generated from the template's makeup flow data, and the user completes the makeup on the face step by step following this guidance.
Further, to obtain more accurate acquisition data, the steps are refined as follows.
Wherein, the detection area and the collection area are delimited in the light receiving range of the camera component of the makeup system, and the makeup system is provided with a detection sensor for covering signals of the detection area. The detection sensor is generally based on an ultrasonic detection or infrared detection principle, and has a feedback signal for an object entering a detection area, and when the feedback signal is obtained and no additional feedback signal is received after a certain time, the object is indicated to stay in the area for a certain time, so that the cosmetic system is started to collect data.
After receiving this passive start signal, the makeup system starts up and prompts the user through the display interface to position the face within the collection area; once started, the camera assembly acquires image information from the collection area in real time, and after confirming that the user's face is within the collection area, the system prompts the face-collection flow through the display interface.
It should be noted that the collection area is a spatial region, namely a conical space in front of the camera assembly; the user's face should face the camera assembly, and a prompt is shown on the display interface when the face is excessively deflected.
During the face-collection flow, its steps are shown on the display interface, each presented as text combined with images and animation. In this embodiment the flow is guided by text in flowchart form, optionally combined with voice prompts.
The user then first moves the face into the detection area according to the face-collection flow and holds it steady toward the camera assembly for the prompted time; the camera assembly acquires a first planar face image, which the makeup system processes and displays on the display interface, and the user then moves the whole face within the detection area in the directions requested by the prompts on the display interface.
The planar face image is a single static two-dimensional frame and cannot be rotated or moved; since the user's face is not held at a fixed angle to the camera assembly during acquisition, the first planar face image serves only for illustration, and multi-frame synthesis is performed on the subsequent face images.
The cosmetic system processes the face image data acquired in real time, models it in combination with the stored three-dimensional model, displays the resulting three-dimensional image on the display interface once the face model is generated, and proceeds to the next step after the user confirms it.
Further, to obtain a better makeup-matching effect, this embodiment optimizes the matching step of the makeup system.
First, the makeup system determines the adapted makeup types for the acquired face model according to a preset face-analysis algorithm and extracts, from the makeup database, the presentation data of the makeup templates corresponding to those types. The extracted presentation data are adapted to the face model in advance to form makeup layer data, which are cached in the makeup system's memory; any set of makeup layer data can then be adapted to the face model to form a makeup-carrying model shown on the display interface. The display interface provides a switching key labeled with each makeup type; when a makeup type is selected via this key, the makeup system retrieves the corresponding makeup layer data from memory and matches them to the face model to form a new makeup-carrying model for display.
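The caching scheme above, pre-adapting every matched template once so that switching makeup types becomes a memory lookup rather than a recomputation, might look like the following. Class and function names are illustrative assumptions, not part of the embodiment.

```python
class MakeupLayerCache:
    """Cache of makeup layer data, pre-adapted to one face model."""

    def __init__(self, adapt_fn):
        self._adapt = adapt_fn   # adapts a template's display data to a face model
        self._layers = {}        # makeup type -> cached makeup layer data

    def preload(self, face_model, templates):
        # Adapt every matched template once up front and keep the result
        # in memory, mirroring the caching step in the description.
        for makeup_type, display_data in templates.items():
            self._layers[makeup_type] = self._adapt(face_model, display_data)

    def select(self, makeup_type):
        # Switching via the display interface's key is now just a lookup.
        return self._layers[makeup_type]
```

A trivial usage with a stand-in adapt function:

```python
cache = MakeupLayerCache(lambda face, data: (face, data))
cache.preload("user_face", {"natural": "layer_a", "smoky": "layer_b"})
layer = cache.select("smoky")
```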
When the extracted presentation data of a makeup template are adapted to the face model, facial feature points and boundaries are first determined on the three-dimensional face model; the three-dimensional coordinates of each pixel of the face area within the boundaries are unfolded into smooth planar coordinates, the pixels corresponding to these planar coordinates form a makeup reference plane, and grid areas are divided on that plane according to the "three sections, five eyes" rule.
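The unfolding step can be illustrated with a cylindrical unwrap, one common way to flatten a face surface into smooth planar coordinates. The embodiment does not fix a particular projection, so this is an assumed sketch.

```python
import math

def unwrap_to_plane(points3d):
    """Project 3-D face points (x, y, z) onto a 2-D makeup reference plane.

    A cylindrical unwrap: the horizontal position becomes the angle around
    the vertical axis (so the curved cheek surface flattens smoothly), and
    the vertical position is kept as-is.
    """
    plane = []
    for x, y, z in points3d:
        u = math.atan2(x, z)   # angle around the vertical (head) axis
        v = y                  # height is preserved
        plane.append((u, v))
    return plane
```

Symmetric points on the left and right cheeks map to opposite horizontal coordinates, which keeps the plane centered on the nose bridge.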
"Three sections, five eyes" divides the face into three equal regions along its length and five along its width: the width of an eye defines two of the five regions, arranged symmetrically about the center line of the nose bridge, and the remaining three regions are derived from these, their widths likewise symmetric about that center line.
In practice the division is not limited to exactly three sections and five eyes: the length direction may be equally divided into any multiple of three, and the width direction, taking the original five regions as boundaries, may be further subdivided into any multiple of five; the finer the grid, the better.
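A sketch of the grid construction under these rules follows; the coordinate bounds and multiples are assumptions for illustration only.

```python
def grid_lines(face_top, face_bottom, face_left, face_right,
               rows_mult=1, cols_mult=1):
    """Grid boundaries on the makeup reference plane.

    The face length is split into 3 * rows_mult equal sections and the
    width into 5 * cols_mult equal "eye widths"; larger multiples give
    the finer grid the embodiment prefers.
    """
    rows = 3 * rows_mult
    cols = 5 * cols_mult
    h = (face_bottom - face_top) / rows
    w = (face_right - face_left) / cols
    row_lines = [face_top + i * h for i in range(rows + 1)]
    col_lines = [face_left + j * w for j in range(cols + 1)]
    return row_lines, col_lines
```

With a 30-unit-long, 50-unit-wide face and the default multiples, this yields 4 horizontal and 6 vertical grid lines, i.e. the basic 3x5 division.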
In the presentation data, a number of color blocks attached to the boundary are defined with the facial feature points of the standard face as positioning reference points; each color block extends, from its facial feature point, a set of sector regions toward the boundary pixels of the color region, the boundary between the sectors of adjacent feature points in the same color region being the center line of the line connecting the two points; the presentation data comprise the color values at the facial feature points and the color change rate within each sector region.
The presentation data further comprise the number of unit grid cells each color block occupies in the grid area. When the color blocks in the presentation data are adapted, they are first roughly positioned on the user's makeup reference plane using the facial feature points as positioning reference points; the unit grid values of each color block then determine its boundary on the user's makeup reference plane; finally, starting from the color value at each positioning reference point, color values are assigned to the pixels of all grid cells within the color block according to the change rate of each sector region.
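The per-sector color assignment can be sketched as follows, assuming for illustration a linear change rate per unit distance from the feature point; the embodiment does not specify the exact falloff law.

```python
import math

def shade_pixel(feature_pt, feature_color, rate, pixel):
    """Color one pixel inside a sector region of a color block.

    The pixel's color starts from the feature point's color value and
    changes at `rate` per unit of distance from the feature point
    (assumed linear here).  Channels are clamped to the 0-255 range.
    """
    d = math.dist(feature_pt, pixel)
    return tuple(max(0, min(255, round(c - rate * d))) for c in feature_color)
```

For instance, a pixel 5 units from a feature point colored (200, 100, 50), in a sector with rate 10, receives (150, 50, 0).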
Further, during the rough positioning, if overlapping grid cells exist between adjacent color regions, their boundaries are eroded in equal proportion until no overlapping cells remain; if blank grid cells exist between adjacent color regions, their boundaries are extended in equal proportion until no blank cells remain. The boundary of a color region is defined by the condition that a pixel's color value equals that of the corresponding pixel of the standard face without makeup applied.
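A one-dimensional sketch of this equal-proportion reconciliation between two adjacent color regions follows; the real operation works on 2-D grid cells, so the interval form is a simplification.

```python
def reconcile(region_a, region_b):
    """Resolve the shared boundary of two adjacent color regions (1-D sketch).

    Regions are (start, end) grid intervals with region_a to the left.
    An overlap is erased equally from both sides and a blank gap is
    covered equally by both sides, so neither region is favored.
    """
    a0, a1 = region_a
    b0, b1 = region_b
    mid = (a1 + b0) / 2        # split the overlap or the gap evenly
    return (a0, mid), (mid, b1)
```

An overlap (0, 6) / (4, 10) and a gap (0, 4) / (6, 10) both settle to a shared boundary at 5.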
Further, to obtain better makeup presentation and guidance effects, the presentation process of the makeup-carrying model is optimized.
Any candidate makeup-carrying model is displayed on the display interface, and the user determines a preselected model by switching makeup types; the preselected makeup-carrying model is displayed as a three-dimensional image on the display interface, on which a number of color regions are shown with dashed boundary lines.
Equally spaced operation points are placed on the dashed boundary line of each color region; the user adjusts the extent of a color region by dragging these points on the display interface, and during the adjustment the color values of the region's pixels change smoothly according to the per-sector color change rates determined in the presentation data. After the user adjusts and confirms the makeup-carrying model, the makeup system presents it as a whole, asks on the display interface whether to enter the makeup flow, and adjusts the makeup flow data of the template according to the presentation data of the confirmed model.
After the user confirms entry into the makeup flow, the makeup system displays at least two virtual heads on the display interface: one is the confirmed makeup-carrying model with complete makeup, the other is the user's face model without makeup, shown in front view. The display interface then indicates, in text or images, all the steps of the makeup flow and enters the first step; the area to be made up in that step is marked with a dashed line on the bare face model, and the type and amount of cosmetics required for the step are indicated on the display interface in text or images.
While the user performs the first step under guidance, the camera assembly acquires face images at a set interval; after each acquisition and processing pass, the changed area is updated on the front-view face model. If the acquired changed area does not correspond to the dashed-line area of the current step, a feedback prompt is given through the display interface or the makeup system; if the changed area satisfies the completion condition for the dashed-line area, the next step is prompted through the display interface or the makeup system. When the next step begins, the changed area in the face image at the end of the previous step is updated onto the front-view face model, the makeup area of the new step is marked with a dashed line, and the guidance of the previous step is repeated. After all steps are completed, the makeup system reacquires the user's facial feature data through the camera assembly, generates an actual makeup-carrying model, and displays it on the display interface alongside the virtual one.
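The per-step feedback decision can be sketched over sets of grid cells. The 90% completion ratio is an assumed threshold; the embodiment only states that a completion condition must be reached.

```python
def step_feedback(changed_cells, marked_cells, done_ratio=0.9):
    """Classify one acquisition frame during a guided makeup step.

    changed_cells / marked_cells are sets of grid cells.  Returns
    'off_target' when a change falls outside the dashed-line area,
    'next_step' once enough of the marked area has changed, and
    'in_progress' otherwise.
    """
    if changed_cells - marked_cells:
        return 'off_target'   # triggers the feedback prompt
    if len(changed_cells & marked_cells) >= done_ratio * len(marked_cells):
        return 'next_step'    # completion condition reached
    return 'in_progress'
```

The makeup system would call this after each processed frame and route the result to the display interface prompt logic.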
The circuitry described above has a memory whose logic instructions may be implemented in the form of software functional units and stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The control module provided in the embodiment of the present application may call the logic instruction in the memory to implement the above method, and the specific implementation manner of the control module is consistent with the implementation manner of the foregoing method, and may achieve the same beneficial effects, which are not described herein again.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform the methods provided by the above embodiments.
Embodiments of the present application provide a computer program product comprising a computer program which, when executed by a processor, implements a method as described above.
The apparatus embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of this embodiment's solution. Those of ordinary skill in the art can understand and implement it without undue effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
The invention is not limited to the alternative embodiments described above; anyone may derive various other products in light of the present invention. The above detailed description should not be construed as limiting the scope of the invention, which is defined by the claims, the description being usable to interpret the claims.
Claims (4)
1. An intelligent auxiliary makeup display method based on a display interface, which provides makeup feedback and guidance to a user through a makeup system having a display interface and a camera assembly, the makeup system having a built-in makeup database containing a plurality of makeup templates, each taking a standard face shape as its template and comprising presentation data and makeup flow data, characterized in that the method comprises the following specific steps:
s100, firstly, receiving a wake-up instruction of a user by the cosmetic system, and starting an auxiliary cosmetic process;
s200, acquiring facial feature data of a user by a camera component, and generating a facial model by a cosmetic system according to the facial feature data;
s300, the generated face model is displayed on the display interface together with a label for each makeup template; after the user selects a label, the cosmetic system matches the corresponding makeup template with the face model to generate a makeup-carrying model displayed on the display interface;
s400, after the user confirms the makeup-carrying model, the makeup system indicates the confirmed makeup type on the display interface according to the makeup steps preset by the makeup template, and asks the user on the display interface whether to enter the makeup flow;
s500, after the user confirms entry into the makeup flow, the display interface displays guidance information generated from the makeup flow data of the corresponding makeup template, and the user completes the makeup on the face step by step according to the guidance information;
in the step S300, the makeup system determines an adapted makeup type for the obtained face model according to a preset face analysis algorithm, and extracts presentation data of a makeup template corresponding to the adapted makeup type in a makeup database;
the display data of the extracted adaptation makeup template are adapted with the face model in advance to form makeup layer data, the makeup layer data are cached into the internal memory of the makeup system, and then any makeup layer data are adapted with the face model to form a makeup-carrying model to be displayed on a display interface;
the display interface is provided with a switching key bearing a makeup type description; the makeup type is switched through the switching key, and when a makeup type is selected, the makeup system retrieves the corresponding makeup layer data from memory and matches them with the face model to form a new makeup-carrying model displayed on the display interface;
the specific steps of adapting the extracted display data of the makeup template to the face model are as follows:
firstly, facial feature points and boundaries are determined on the three-dimensional face model; the three-dimensional coordinate data of each pixel of the face area within the boundaries are unfolded to form smooth planar coordinate data; the pixels corresponding to the planar coordinate data form a makeup reference plane in the same plane; and grid areas are divided on the makeup reference plane in the "three sections, five eyes" face-area division manner;
wherein "three sections, five eyes" divides the face into three equal regions along its length and five along its width, the width of an eye defining two of the five regions, arranged symmetrically about the center line of the nose bridge, and the remaining three regions being determined from these, their widths likewise symmetric about that center line;
in the presentation data, a number of color blocks attached to the boundary are defined with the facial feature points of a standard face as positioning reference points; each color block extends, from its facial feature point, a set of sector regions toward the boundary pixels of the color region, the boundary between the sectors of adjacent feature points in the same color region being the center line of the line connecting the two points; and the presentation data comprise the color values at the facial feature points and the color change rate within each sector region;
the presentation data further comprise the number of unit grid cells each color block occupies in the grid area; when the color blocks in the presentation data are adapted, they are first roughly positioned on the user's makeup reference plane using the facial feature points as positioning reference points; the unit grid values of each color block then determine its boundary on the user's makeup reference plane; finally, starting from the color value at each positioning reference point, color values are assigned to the pixels of all grid cells within the color block according to the change rate of each sector region;
during the rough positioning, if overlapping grid cells exist between adjacent color regions, their boundaries are eroded in equal proportion until no overlapping cells remain;
if blank grid cells exist between adjacent color regions, their boundaries are extended in equal proportion until no blank cells remain;
the boundary of a color region is defined by the condition that a pixel's color value equals that of the corresponding pixel of a standard face without makeup applied;
the specific steps of the step S400 are as follows:
s401, displaying any makeup model to be selected on a display interface, and determining a preselected makeup model by a user through switching the makeup type;
s402, displaying the preselected makeup model on a display interface in a three-dimensional image, and forming a plurality of color areas displayed in boundary dotted lines on the three-dimensional image;
s403, a plurality of operation points which are arranged at equal intervals are arranged on boundary virtual lines of all the color areas, a user adjusts the boundary range of the color areas by dragging the operation points on a display interface, and in the boundary adjustment process, the color values of the pixel points of the color areas are smoothly changed according to the color change rate of each sector area determined in the display data;
s404, after the user adjusts and confirms the makeup-carrying model, the makeup system presents it as a whole and asks the user on the display interface whether to enter the makeup flow, and the makeup system correspondingly adjusts the makeup flow data of the makeup template according to the presentation data of the confirmed makeup-carrying model.
2. The intelligent auxiliary display method for cosmetics based on the display interface according to claim 1, wherein the method comprises the following steps: the specific steps of the step S100 are as follows:
s101, firstly, defining a detection area and a collection area in a light receiving range of a camera component of a makeup system, wherein the makeup system is provided with a detection sensor for covering signals of the detection area;
s102, a passive start signal is determined after the detection sensor detects that an object has entered the detection area and stayed for longer than a set time; the cosmetic system starts up after receiving the passive start signal and prompts the user through the display interface to position the face within the acquisition area;
s103, the camera shooting assembly acquires image information in real time from the acquisition area after the cosmetic system is started, and prompts a face acquisition flow through a display interface after confirming that the face of a user is in the acquisition area.
3. The intelligent auxiliary display method for cosmetics based on the display interface according to claim 1, wherein the method comprises the following steps: the specific steps of the step S200 are as follows:
s201, firstly, displaying steps of a face acquisition flow on a display interface, and displaying each flow step on the display interface in a mode of combining characters with images and animation;
s202, the user first moves the face into the detection area according to the face acquisition flow and holds it steady toward the camera assembly for the prompted time, and the camera assembly acquires a first planar face image;
s203, after obtaining the first planar face image, the cosmetic system displays it on the display interface, and the user moves the whole face within the detection area in the directions requested by the prompts on the display interface;
s204, the cosmetic system processes the face image data obtained in real time, models it in combination with the stored three-dimensional model, displays the three-dimensional image on the display interface after the face model is generated, and proceeds to the next step after confirmation.
4. The intelligent auxiliary display method for cosmetics based on the display interface according to claim 1, wherein the specific steps of the step S500 are as follows:
s501, after a user determines to enter a makeup flow, the makeup system displays at least two virtual head images on a display interface, wherein one virtual head image is a determined makeup model with complete makeup, and the other virtual head image is a face model without makeup of the user, and the face model without makeup is displayed in a front view;
s502, prompting the whole process step of the makeup process by a character or an image on a display interface, entering a first process step, displaying the area to be made up by the process step on a face model without makeup by a virtual line, and prompting the type and the dosage of the cosmetics required in the process step by the character or the image on the display interface;
s503, when a user performs a first flow step under guidance, the camera shooting assembly acquires face images of the user in real time according to a set time interval, updates a changed area on a face model of a main view after each acquisition and processing, and displays the changed area, if the acquired changed area does not correspond to an area marked by a virtual line in the flow step, feedback prompt is performed through a display interface or a cosmetic system, and if the acquired changed area reaches a finishing condition corresponding to the area marked by the virtual line in the flow step, prompt of a next flow step is performed through the display interface or the cosmetic system;
s504, when the next flow step is carried out, updating a changed area in the face image when the previous flow step is finished to a face model of a front view by the camera component for display, marking a makeup area in the next flow step by a virtual line, and repeating the guiding operation of the previous flow step;
s505, after all the flow steps are completed, the face feature data of the user is obtained again through the camera component by the cosmetic system, an actual cosmetic model is generated, and the actual cosmetic model and the virtual cosmetic model are displayed together on a display interface.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311595590.2A CN117315165B (en) | 2023-11-28 | 2023-11-28 | Intelligent auxiliary cosmetic display method based on display interface |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117315165A CN117315165A (en) | 2023-12-29 |
CN117315165B true CN117315165B (en) | 2024-03-12 |
Family
ID=89288672
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311595590.2A Active CN117315165B (en) | 2023-11-28 | 2023-11-28 | Intelligent auxiliary cosmetic display method based on display interface |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117315165B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107783686A (en) * | 2016-08-24 | 2018-03-09 | 南京乐朋电子科技有限公司 | Vanity mirror based on virtual technology |
CN112733007A (en) * | 2019-10-14 | 2021-04-30 | 小卫(上海)生物科技有限公司 | Intelligent makeup method and makeup mirror |
CN113496459A (en) * | 2020-04-01 | 2021-10-12 | 华为技术有限公司 | Make-up assisting method, terminal device, storage medium, and program product |
WO2022042163A1 (en) * | 2020-08-27 | 2022-03-03 | 华为技术有限公司 | Display method applied to electronic device, and electronic device |
CN115424308A (en) * | 2021-05-12 | 2022-12-02 | 海信集团控股股份有限公司 | Intelligent cosmetic mirror and method for displaying auxiliary makeup |
CN116998816A (en) * | 2022-04-28 | 2023-11-07 | 北京百度网讯科技有限公司 | Dressing processing method, device, equipment, storage medium and program product |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11882918B2 (en) * | 2018-05-28 | 2024-01-30 | Boe Technology Group Co., Ltd. | Make-up assistance method and apparatus and smart mirror |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CB03 | Change of inventor or designer information | ||
Inventor after: Xu Xiangming Inventor after: Tao Quanyi Inventor after: Yang Jialin Inventor before: Yang Jialin Inventor before: Tao Quanyi |