CN111782854A - Multi-view projection makeup method and multi-view projection makeup wearable device

Multi-view projection makeup method and multi-view projection makeup wearable device

Info

Publication number: CN111782854A
Application number: CN202010646810.XA
Authority: CN (China)
Prior art keywords: makeup, face, two-dimensional, user, viewing angle
Legal status: Pending (the status listed is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventor: 颜寒松
Current Assignee: Kelong Shijing Biotechnology Shanghai Co ltd
Original Assignee: Kelong Shijing Biotechnology Shanghai Co ltd
Application filed 2020-07-07 by Kelong Shijing Biotechnology Shanghai Co ltd
Priority date: 2020-07-07
Publication date: 2020-10-16 (publication of CN111782854A)

Classifications

    • G06F 16/583: Information retrieval of still image data; retrieval characterised by using metadata automatically derived from the content
    • G06F 16/538: Information retrieval of still image data; querying; presentation of query results
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 40/168: Human faces; feature extraction; face representation
    • G06V 40/171: Human faces; local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • H04N 9/31: Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N 9/3179: Video signal processing for projection devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)

Abstract

The present application provides a multi-view projection makeup method and a multi-view projection makeup wearable device. Facial region information of a user is acquired in real time to obtain two-dimensional face images of different facial regions at different viewing angles; one or more parts are identified in each two-dimensional face image based on a face image recognition model; makeup information is loaded for each identified part to obtain a two-dimensional makeup image for each viewing angle; and each two-dimensional makeup image is projected onto the user's face, with the capture position of its viewing angle as the base point, to map the makeup effect. The method uses projection to save makeup time, avoids the environmental problems of manufacturing and discarding cosmetics and the skin damage caused by their long-term use, and, since no three-dimensional modeling is required, is simpler and more convenient to implement.

Description

Multi-view projection makeup method and multi-view projection makeup wearable device
Technical Field
The present application relates to the technical field of human-computer interaction based on virtual reality, and in particular to a multi-view projection makeup method and a multi-view projection makeup wearable device.
Background
With the development of society and the continuous improvement of living standards, more and more people enhance their appearance with makeup. Everyone loves beauty; makeup can quickly boost confidence and has become a basic skill for women and even for men. Makeup nevertheless has several drawbacks:
1) The makeup process usually takes a lot of time; many women spend at least half an hour on it every day.
2) The process is not simple, and skilled technique is needed to achieve a flawless look.
3) Cosmetics are generally not cheap, and different looks require many different products; most users accumulate a variety of cosmetics from various brands, so achieving a perfect makeup effect involves considerable expense.
4) The chemical components in cosmetics damage human skin through long-term exposure. Foundation and concealer, for example, accelerate skin wrinkling with prolonged use, and the wrinkles then have to be covered with even more powder, creating a vicious circle. Women who have worn makeup for years share a very noticeable characteristic: they look haggard after makeup removal, and their skin ages faster. The manufacture and disposal of cosmetics also cause serious environmental pollution.
Virtual and digital makeup effects and methods are already applied to makeup reference and to the promotion of makeup products, but in the traditional approach real makeup must still be applied with color cosmetics such as foundation, concealer, eye shadow, eyebrow pencil, blush and lip gloss to achieve the effect. How to turn a virtual, digital makeup effect into reality, presenting it truly on the user's face without applying any real makeup products so that it can be used in daily life, is the problem this application sets out to solve.
Disclosure of Invention
In view of the above shortcomings of the prior art, the technical problem to be solved by the present application is to provide a multi-view projection makeup method and a multi-view projection makeup wearable device that address at least one of the existing problems.
To achieve the above and other related objects, the present application provides a multi-view projection makeup method, comprising: acquiring facial region information of a user in real time to obtain two-dimensional face images of different facial regions at different viewing angles; identifying one or more parts in each two-dimensional face image based on a face image recognition model; loading the makeup information of each identified part to obtain a two-dimensional makeup image for each viewing angle; and projecting each two-dimensional makeup image onto the user's face, with the capture position of its viewing angle as the base point, to map the makeup effect.
In an embodiment of the present application, the different viewing angles include any one or combination of: left, lower-left, upper-left, lower-right, upper-right, downward (depression) and upward (elevation) viewing angles.
In an embodiment of the present application, the different viewing angles may capture any one or more of the following facial regions: 1) the whole face; 2) any one of the upper half, lower half, left half and right half of the face; 3) the region of any single facial part, or of a combination of several parts.
In an embodiment of the present application, identifying one or more parts in each two-dimensional face image based on a face image recognition model comprises: taking, in advance, two-dimensional face images of facial regions at different viewing angles as a training set; labeling the position and name of each part of the facial region at the different viewing angles with bounding boxes; and inputting the labeled two-dimensional face images into an image recognition model for training, to obtain the face image recognition model that recognizes one or more parts in a two-dimensional face image.
In an embodiment of the present application, identifying one or more parts in each two-dimensional face image based on a face image recognition model further comprises: applying highlighting processing to each two-dimensional face image to identify the contour of each part, so that the makeup information of each identified part can be loaded according to its contour.
In an embodiment of the present application, the highlighting processing includes any one or more of: color saturation adjustment, contrast adjustment, brightness adjustment, tint adjustment, black-and-white vector conversion, and line dispersion.
In an embodiment of the present application, projecting the two-dimensional makeup image onto the user's face to map the makeup effect comprises: projecting the two-dimensional makeup image onto the user's face at a frequency of M times per second, where each projected two-dimensional makeup image is the latest one, obtained by identifying the parts in the most recently captured two-dimensional face image and loading the corresponding makeup information.
To achieve the above and other related objects, the present application also provides a multi-view projection makeup wearable device, comprising: several cameras for capturing facial region information of a user in real time to obtain two-dimensional face images of different facial regions at different viewing angles; a processor for identifying one or more parts in each two-dimensional face image based on a face image recognition model and loading the makeup information of each identified part to obtain a two-dimensional makeup image for each viewing angle; and several projectors for projecting the two-dimensional makeup images onto the user's face, with the capture position of each viewing angle as the base point, to map the makeup effect.
In an embodiment of the present application, the device further includes: a memory for storing multiple preset sets of makeup information, which the device selects through a switch key or a selection key; or a communicator for establishing a communication connection with a mobile terminal to receive makeup information in any combination selected on the mobile terminal.
In an embodiment of the present application, the device is a hairpin provided with at least two interaction units that extend in front of the user's face when the hairpin is worn; each interaction unit comprises a camera and a projector whose capture and projection directions face the user's face. Alternatively, the device is a hat with a front brim, with at least two interaction units facing the user's face arranged under the brim; each interaction unit likewise comprises a camera and a projector directed at the user's face.
As described above, the present application provides a multi-view projection makeup method and a multi-view projection makeup wearable device: acquiring facial regions of a user in real time to obtain two-dimensional face images of different facial regions at different viewing angles; identifying one or more parts in each two-dimensional face image based on a face image recognition model; loading the makeup information of each identified part to obtain a two-dimensional makeup image for each viewing angle; and projecting each two-dimensional makeup image onto the user's face, with the capture position of its viewing angle as the base point, to map the makeup effect.
The following beneficial effects are achieved:
Projection saves makeup time and avoids both the environmental problems of manufacturing and discarding cosmetics and the skin damage their long-term use causes. In addition, no three-dimensional modeling, three-dimensional information processing or transformation of three-dimensional coordinates is needed, so the implementation is simpler and more convenient.
Drawings
Fig. 1A is a scene diagram of the multi-view projection makeup wearable device embodied as a hairpin in an embodiment of the present application.
Fig. 1B is a scene diagram of the multi-view projection makeup wearable device embodied as a hat in an embodiment of the present application.
Fig. 2 is a flow chart of the multi-view projection makeup method in an embodiment of the present application.
Fig. 3 is a structural diagram of the multi-view projection makeup wearable device in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application is provided by way of specific examples, and other advantages and effects of the present application will be readily apparent to those skilled in the art from the disclosure herein. The present application is capable of other and different embodiments and its several details are capable of modifications and/or changes in various respects, all without departing from the spirit of the present application. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings so that those skilled in the art to which the present application pertains can easily carry out the present application. The present application may be embodied in many different forms and is not limited to the embodiments described herein.
In order to clearly explain the present application, components that are not related to the description are omitted, and the same reference numerals are given to the same or similar components throughout the specification.
Throughout the specification, when a component is referred to as being "connected" to another component, this includes not only the case of being "directly connected" but also the case of being "indirectly connected" with another element interposed therebetween. In addition, when a component is referred to as "including" a certain constituent element, unless otherwise stated, it means that the component may include other constituent elements, without excluding other constituent elements.
When an element is referred to as being "on" another element, it can be directly on the other element, or intervening elements may also be present. When a component is referred to as being "directly on" another component, there are no intervening components present.
Although the terms first, second, etc. may be used herein to describe various elements in some instances, these elements should not be limited by these terms. These terms are only used to distinguish one element from another; for example, a first interface and a second interface. Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the singular forms "a", "an" and "the" include plural forms as long as the words do not expressly indicate a contrary meaning. The term "comprises/comprising" when used in this specification is taken to specify the presence of stated features, regions, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of other features, regions, integers, steps, operations, elements, and/or components.
Terms indicating relative spatial position, such as "lower" and "upper", may be used to more easily describe the relationship of one component to another illustrated in the drawings. Such terms are intended to cover not only the orientations indicated in the drawings but also other orientations of the device in use. For example, if the device in the figures is turned over, elements described as "below" other elements would then be oriented "above" them; the exemplary terms "under" and "beneath" therefore cover both above and below. The device may also be rotated by 90 degrees or other angles, and the terms describing relative space are to be interpreted accordingly.
The multi-view projection makeup method and the multi-view projection makeup wearable device of the present application mainly use multi-view projection to project and map the desired makeup accurately onto a person's face, thereby achieving a makeup effect.
In this application, a camera and a projection device can be embedded in a wearable accessory such as a hairpin or a peaked cap. With this technique, projection makeup is realized: no manual, physical makeup is needed, yet the effects of many makeup styles are fully available. In short, the face serves as the screen that receives the makeup projection from a projector, achieving the effect of wearing makeup.
The present application is an improvement on the application with patent No. 2020100712660, but differs substantially from it. Here, only two-dimensional image information needs to be acquired, generated and processed; the makeup effect is loaded onto the two-dimensional image and then projected, mapping the makeup information onto the corresponding part of the user's face. No three-dimensional modeling, three-dimensional information processing or three-dimensional coordinate transformation is needed; a stereoscopic projection effect is achieved simply with several projectors, turning virtual makeup into reality in a simpler and more convenient way.
First embodiment
Fig. 1A shows a scene diagram of the multi-view projection makeup wearable device embodied as a hairpin in an embodiment of the present application. As shown, the device is a hairpin 1 for fixing hair; at least two interaction units 11 are arranged on the hairpin 1, each comprising a micro camera and a micro projector.
In this embodiment, the micro cameras capture two-dimensional face images of the user's facial regions from different viewing angles; one or more parts and their contours are then recognized in the two-dimensional face images so that the makeup information corresponding to each part can be loaded. The makeup information may be pre-stored in the multi-view projection makeup wearable device, selected for example with a switch key or a selection key. It may also be chosen on a mobile terminal 2, which can be any device able to run an app (or a WeChat mini-program), such as a smartphone, tablet, smart watch or vehicle-mounted terminal; the makeup information may combine items such as eyebrow shape, lipstick color number, blush and eye shadow in a variety of colors. The mobile terminal 2 establishes a communication connection with the hairpin 1, for example via Bluetooth, WiFi, a local area network or a data cable, and sends the makeup information selected on the mobile terminal 2 to the hairpin 1.
Finally, the loaded makeup image is projected onto the user's face by the micro projectors to realize the makeup effect. Accurate projection usually requires each micro camera and micro projector to be co-located or very close together.
Second embodiment
Fig. 1B shows a scene diagram of the multi-view projection makeup wearable device embodied as a hat in an embodiment of the present application. As shown, the device is a hat 3 with a front brim; at least two interaction units 31, each comprising a micro camera and a micro projector, are arranged under the brim. The process is otherwise similar to the first embodiment and is not repeated here.
Fig. 2 is a flow chart of the multi-view projection makeup method according to an embodiment of the present disclosure. The method is mainly applied to a multi-view projection makeup wearable device. As shown, the method comprises:
Step S201: capturing facial region information of the user in real time to obtain two-dimensional face images of different facial regions at different viewing angles.
In an embodiment of the present application, unlike the usual frontal or side view of a face, the present application centers on the face and surrounds it with several capture devices at different viewing angles, including any one or combination of: left, lower-left, upper-left, lower-right, upper-right, downward (depression) and upward (elevation) viewing angles.
For example, a two-dimensional face image may be obtained by capturing a facial region from a position obliquely above the eyes, or from a position beside the jawbone.
It should be noted that a two-dimensional image of the facial region is captured, makeup information is loaded onto that two-dimensional image to form a two-dimensional makeup image, and projection is finally performed from the two-dimensional makeup image. This two-dimensional-image-based projection makeup scheme has the following characteristics.
On the one hand, in a two-dimensional face image obtained by this capture arrangement, the facial features are only slightly occluded (except around the nose) and are approximately symmetric. This property is very advantageous in combination with projection.
Human faces are, of course, not planar but three-dimensional. After the whole-face information is split into several pieces of two-dimensional image information and makeup information is loaded onto the two-dimensional image of each viewing angle, the projection light corresponding to that viewing angle is unobstructed or only slightly obstructed and can cover almost the entire face visible from that angle.
It is also known that a two-dimensional image projected onto a three-dimensional object deforms along the object's curved surfaces. The capture device records the three-dimensional face from a viewing angle with little occlusion, and the resulting two-dimensional face image can be regarded as a flattened (planar) version of the three-dimensional face information. Such an image may not correspond to a familiar observation angle, and the makeup information loaded onto it differs from ordinary makeup; for example, the drawn eyebrow may be only half its real length. Projection is then performed from the same viewing angle, and the deformation that occurs when the image lands on the three-dimensional face need not be computed: each part of the face is three-dimensional and acts as the projection screen, the two-dimensional image drapes naturally over it, and as long as the positioning is accurate the image is naturally restored to the original three-dimensional facial features, presenting a normal makeup effect.
For example, the two-dimensional face image captured by the device at the left viewing angle may include the left half of the mouth, the left eye and the left half of the nose. The nose or the inner corner of the eye may be partially occluded, but these locations usually need no makeup, so the regions that do receive makeup are unobstructed. Because of the viewing angle, the size, length, curve and position of each part in the two-dimensional face image differ from those of the real part: in the left view the eyebrow appears at half its real length, the opening angle of the eye corner differs from the real one, and so on. The makeup effect is loaded onto the positions and contours as they appear in the two-dimensional face image; when the loaded makeup image is projected, the makeup of each part deforms with the user's three-dimensional face, for example stretching adaptively, and exactly fills the corresponding part region at that viewing angle, presenting a normal makeup effect.
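The geometric cancellation described above can be checked with a small numeric sketch (an illustration added here, not part of the patent; the 60-degree angle and eyebrow length are assumed values): foreshortening shrinks apparent width by cos(theta) at capture, and projection from the same base point onto the same tilted surface stretches it back by 1/cos(theta).

```python
# Illustrative sketch (assumed numbers): capture foreshortening and projection
# stretch cancel when camera and projector share the same viewpoint.
import math

real_eyebrow_len = 5.0       # cm, length on the actual face surface (assumption)
theta = math.radians(60)     # angle between view direction and surface normal (assumption)

# Captured 2D image: the eyebrow appears foreshortened by cos(theta).
captured_len = real_eyebrow_len * math.cos(theta)
print(f"length in 2D image: {captured_len:.1f} cm")       # 2.5 cm, half the real length

# Projecting that 2.5 cm stroke back from the same position onto the
# 60-degree-tilted skin stretches it by 1/cos(theta), restoring 5.0 cm.
restored_len = captured_len / math.cos(theta)
print(f"length restored on face: {restored_len:.1f} cm")  # 5.0 cm
```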
On the other hand, in the present application the capture device and the projection device are co-located or placed very close together: projection is performed either from the overlapping capture position or with an adjustment for the small displacement between the two positions, which makes the effect described above achievable. In addition, the device applying the method is wearable, so the user's face remains essentially still relative to the capture and projection devices most of the time, which greatly reduces the difficulty of operating the method.
The present application captures different facial regions at different viewing angles; a face image captured from one of these unusual angles may include a partial region of one part, the complete region of one part, partial regions of several parts, or complete regions of several parts.
In this embodiment, the different viewing angles may capture any one or more of the following facial regions:
1) the whole face;
2) any one of the upper half, lower half, left half and right half of the face;
3) the region of any single facial part, or of a combination of several parts.
Further, depending on which parts the user wants made up, the whole face or only part of it may be captured, for example only the eyes when only eye makeup is wanted. The captured region can also be adjusted at any time, for example switching from capturing the full face to capturing only the eye region.
For example, the capture device may be a miniature high-precision portrait recognition camera.
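As a rough illustration of step S201 (device indices, view names and the OpenCV backend are assumptions, since the patent does not specify an implementation), several miniature cameras can be polled for one frame per viewing angle:

```python
# Hedged capture sketch for step S201: one 2D face frame per viewing angle.
import cv2

VIEW_ANGLES = ["left", "lower_left", "lower_right", "upper_right"]  # example subset
cameras = {name: cv2.VideoCapture(idx) for idx, name in enumerate(VIEW_ANGLES)}

def capture_views():
    """Return a {view_angle: BGR frame} dict for one acquisition round."""
    frames = {}
    for name, cam in cameras.items():
        ok, frame = cam.read()        # grab the latest frame from this camera
        if ok:
            frames[name] = frame
    return frames

frames = capture_views()              # e.g. frames["left"] is the left-view image
```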
Step S202: identifying one or more parts in each two-dimensional face image based on a face image recognition model.
In an embodiment of the present application, identifying one or more parts in each two-dimensional face image based on a face image recognition model includes:
A. Taking, in advance, two-dimensional face images of facial regions at different viewing angles as a training set. Specifically, two-dimensional face images of facial regions at various viewing angles are collected from users of different ages and genders.
B. Labeling the position and name of each part of the facial region at the different viewing angles with bounding boxes.
For example, the usual labeling method of image recognition algorithms is adopted to calibrate each part of the facial region at the different viewing angles in advance, such as marking the position and name of each part with a bounding box.
C. Inputting the labeled two-dimensional face images into an image recognition model for training, to obtain the face image recognition model that recognizes one or more parts in a two-dimensional face image.
For example, by feeding the labeled training set into an image recognition model and training it, a face image recognition model that can recognize each part in a two-dimensional face image is obtained.
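The patent does not name a specific image recognition model, so the following is only one plausible sketch: fine-tuning a torchvision Faster R-CNN detector on the box-labeled face parts described in steps A to C. The class list, learning rate and data loader are assumptions.

```python
# Hedged training sketch for the face image recognition model of step S202.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

PART_CLASSES = ["background", "eyebrow", "eye", "nose", "lip", "cheek"]  # assumed labels

# Start from a pretrained detector and swap in a head for the face-part classes.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, len(PART_CLASSES))

optimizer = torch.optim.SGD(model.parameters(), lr=5e-3, momentum=0.9)

def train_one_epoch(loader):
    """loader yields (images, targets); targets hold the labeled bounding boxes."""
    model.train()
    for images, targets in loader:
        # targets: list of {"boxes": FloatTensor[N, 4], "labels": IntTensor[N]}
        loss_dict = model(images, targets)   # detection losses in train mode
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```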
In an embodiment of the present application, step S202 further includes: applying highlighting processing to each two-dimensional face image to identify the contour of each part, so that for the one or more identified parts the makeup information of each part can be loaded according to its contour.
The highlighting processing includes any one or more of: color saturation adjustment, contrast adjustment, brightness adjustment, tint adjustment, black-and-white vector conversion, and line dispersion.
In the present application, the purpose of the highlighting processing is to make the contour of each part in the two-dimensional face image stand out more clearly; it can be implemented with common image operations such as contrast and brightness adjustment.
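A minimal sketch of this highlighting step, assuming OpenCV and illustrative gain and threshold values (the patent prescribes only the kinds of adjustment, not concrete parameters):

```python
# Hedged contour-extraction sketch: boost contrast/brightness, then find outlines.
import cv2

def part_contours(face_bgr, alpha=1.6, beta=10):
    """Return candidate part outlines from one 2D face image."""
    boosted = cv2.convertScaleAbs(face_bgr, alpha=alpha, beta=beta)  # contrast, brightness
    gray = cv2.cvtColor(boosted, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                                 # edge map
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours
```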
In the present application, the parts include, but are not limited to, any one or more of: eyebrows, eyes, eyelashes, nose, lips, cheeks, cheekbones, and the outer contour of the face.
Step S203: loading the makeup information corresponding to each identified part to obtain a two-dimensional makeup image for each viewing angle.
After one or more parts in a two-dimensional face image have been identified in step S202, the makeup information corresponding to each part is loaded according to the identified contour of that part.
In an embodiment of the present application, the makeup information is obtained in any one or more of the following ways.
In one way, multiple sets of makeup information are preset in the multi-view projection makeup wearable device and selected through a switch key or a selection key.
Preferably, preset makeup information may also be transmitted from a mobile terminal to the multi-view projection makeup wearable device and stored there, and the stored makeup information may be deleted or modified from the mobile terminal.
In another way, makeup information in any combination is selected on the mobile terminal, which communicates with the multi-view projection makeup wearable device to transmit it.
Preferably, the application can also store the makeup a user has worn so that it can later be re-applied with one key.
The makeup information includes, but is not limited to: eyebrow shape, eye shadow, eyelashes, eyeliner, lip gloss, blush, foundation, face contouring, highlight, shadow, and whitening.
For example, the makeup information may be eyebrow shapes of various forms, eye shadows of various colors, eyelashes of various lengths or densities, eyeliner of various thicknesses and lines, lip colors of various shades, and so on.
Preferably, different makeup information can be selected on the mobile terminal with makeup or retouching software similar to a makeup-show app; for example, makeup effects such as lighting adjustment, filters, added blush, eye shadow, and different lipstick choices and color numbers can be displayed on a virtual avatar, so that a pleasing makeup combination can be chosen.
For example, the software may provide makeup templates for selection, such as a Christmas look, a "mulley" look or a white-collar look. These templates are predefined and roughly include eyebrows, eye shadow, eyelashes, highlight, contouring, lipstick, blush, foundation and so on; the user can also adjust the makeup intensity and color of each part according to the effect on his or her own face.
Custom makeup: users can define makeup that suits them according to their own preferences. The adjustable content includes eyebrows, eye shadow, eyelashes, eyeliner, lipstick color, lip makeup style, and decorations such as stickers and sequins; local light-dark relationships can also be adjusted to achieve whitening, concealing, contouring and similar effects.
In this embodiment, the obtained makeup information carries a correspondence to one or more facial parts: lip gloss corresponds to the lips, eyebrow shape to the eyebrows, and eye shadow, eyelashes and eyeliner to the eyes or eyelids, while makeup information such as foundation, contouring, highlight, shadow and whitening corresponds to part or all of the face.
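This correspondence can be pictured as a simple lookup table; the sketch below is illustrative only, with assumed item and part names:

```python
# Hedged sketch of the makeup-information-to-face-part correspondence.
MAKEUP_TO_PARTS = {
    "lip_gloss":     ["lip"],
    "eyebrow_shape": ["eyebrow"],
    "eye_shadow":    ["eye", "eyelid"],
    "eyeliner":      ["eye"],
    "blush":         ["cheek"],
    "foundation":    ["whole_face"],   # foundation/highlight/shadow span larger regions
}

def applicable_layers(parts_found, selected_makeup):
    """Keep only the selected makeup items whose target parts were detected."""
    return {item: layer for item, layer in selected_makeup.items()
            if any(p in parts_found for p in MAKEUP_TO_PARTS.get(item, []))}
```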
Step S204: projecting each two-dimensional makeup image onto the user's face, with the capture position of its viewing angle as the base point, to map the makeup effect.
It should be noted that using the capture position of each viewing angle as the base point generally requires the micro camera and micro projector in the multi-view projection makeup wearable device to be as close together as possible, or even integrated into one unit, which facilitates real-time conversion and projection positioning of the makeup effect.
The present application can use micro projectors: their small size suits a wearable device, their projection light source generates little heat and is harmless to the skin, and they consume little power while offering high precision and low latency.
In an embodiment of the present application, the method may further include: projecting the two-dimensional makeup image onto the user's face at a frequency of M times per second, where each projected two-dimensional makeup image is the latest one, obtained by identifying the parts in the most recently captured two-dimensional face image and loading the corresponding makeup information.
In this embodiment, the method lets the makeup image follow changes in the user's facial expression and skin texture, reducing or eliminating the error or delay between the projected two-dimensional makeup image and the changing face, so that the makeup fits the face dynamically in real time and looks sufficiently true and natural.
Generally, when M reaches 24 or more, the observed makeup effect is continuous and natural thanks to the persistence of vision of the human eye; 24 is the standard number of film frames per second. Since the amplitude and speed of facial movement are small, M can be lower, or the frame rate can be increased for a better effect. A higher frame rate generally gives a better effect, but computation and energy consumption also rise, affecting standby time and device heating; beyond a certain point the human eye can hardly distinguish the difference. The number of frames per second should therefore be set to an appropriate value.
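A sketch of the per-view refresh loop, under stated assumptions: M is set to the 24 fps figure given above, and camera, recognizer, makeup_loader and projector are placeholder objects standing in for the components described in steps S201 to S204.

```python
# Hedged real-time loop: re-capture, re-recognize, re-load and re-project M times/s.
import time

M = 24                        # refresh rate in frames per second (see discussion above)
FRAME_BUDGET = 1.0 / M

def run_view(camera, recognizer, makeup_loader, projector):
    while True:
        t0 = time.monotonic()
        frame = camera.capture()                  # latest 2D face image for this view
        parts = recognizer(frame)                 # identified parts and contours
        makeup_img = makeup_loader(frame, parts)  # latest 2D makeup image for this view
        projector.show(makeup_img)                # project from the capture base point
        # Sleep off whatever remains of this frame's time budget.
        time.sleep(max(0.0, FRAME_BUDGET - (time.monotonic() - t0)))
```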
Projection technology is now quite mature, and this application relies on it for makeup: the projected makeup fits the user's face closely and changes as the face changes, so it looks natural. The makeup effect is achieved by multi-view projection, which differs from simple single-view projection: a human face cannot receive a projection flat like a screen, so facial information is captured from several viewing angles and projected from several angles, mapping the makeup the user selected onto the user's face with an effect very close to that of real makeup.
Fig. 3 is a structural diagram of the multi-view projection makeup wearable device according to an embodiment of the present application. As shown, the apparatus 300 includes:
several cameras 301 for capturing the user's facial region information in real time to obtain two-dimensional face images of different facial regions at different viewing angles;
a processor 302 for identifying one or more parts in each two-dimensional face image based on a face image recognition model, and for loading the makeup information of each identified part to obtain a two-dimensional makeup image for each viewing angle;
and several projectors 303 for projecting the two-dimensional makeup images onto the user's face, with the capture position of each viewing angle as the base point, to map the makeup effect.
Specifically, the processor 302 is further connected to a memory storing computer instructions; following the steps described for Fig. 2, the processor 302 loads one or more instructions corresponding to the application program into the memory and executes the stored application program, thereby implementing the method of Fig. 2.
The processor 302 may be a general-purpose processor, including a Central Processing Unit (CPU) or a Network Processor (NP); it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
Further, when the multi-view projection makeup wearable device 300 uses preset sets of makeup information, the apparatus 300 may further include a memory 304 for storing the preset sets of makeup information, to be selected by the device through a switch key or a selection key.
The memory 304 may include Random Access Memory (RAM) and may also include non-volatile memory, such as at least one disk memory.
When the multi-view projection makeup wearable device 300 receives user-customized combinations of makeup information from a mobile terminal, the apparatus 300 may further include a communicator 305 for establishing a communication connection with the mobile terminal to receive the makeup information in any combination selected there.
Specifically, the communication methods of the communicator 305 include any one or more of: WIFI, NFC, Bluetooth, Ethernet, GSM, 3G and GPRS. The networks usable for such communication include any one or more of: the Internet, an intranet, a Wide Area Network (WAN), a Local Area Network (LAN), a wireless network, a Digital Subscriber Line (DSL) network, a frame relay network, an Asynchronous Transfer Mode (ATM) network, a Virtual Private Network (VPN), and/or any other suitable communication network.
In an embodiment of the present application, the device 300 is a hairpin, as shown in Fig. 1A. The hairpin is provided with at least two interaction units that extend in front of the user's face when the hairpin is worn; each interaction unit includes one of the cameras 301 and one of the projectors 303, with both the capture and projection directions facing the user's face.
In another embodiment of the present application, the apparatus 300 may also be a hat with a front brim, as shown in Fig. 1B. At least two interaction units facing the user's face are arranged under the brim; each includes one of the cameras 301 and one of the projectors 303, with both the capture and projection directions facing the user's face.
In the present application, whether the device 300 is a hairpin or a hat, it is generally desirable that the camera 301 and the projector 303 be located very close together to achieve accurate projection. Operation is still possible if they are not close together, but the computation for projection positioning becomes slightly more complicated.
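One way to handle the non-co-located case, sketched here as an assumption rather than the patent's prescribed method, is a one-time planar homography calibration that re-maps camera-frame makeup images into the projector's frame; the four calibration point pairs below are placeholders.

```python
# Hedged sketch of a camera-to-projector re-mapping for offset positions.
import cv2
import numpy as np

# Four reference points as seen by the camera, and where the projector must
# draw them so they land on the same facial locations (assumed calibration).
cam_pts  = np.float32([[100, 120], [520, 110], [540, 420], [ 90, 430]])
proj_pts = np.float32([[ 80, 100], [560, 105], [575, 445], [ 70, 450]])

H = cv2.getPerspectiveTransform(cam_pts, proj_pts)

def to_projector_frame(makeup_img, proj_size=(640, 480)):
    """Warp a camera-frame makeup image into projector coordinates."""
    return cv2.warpPerspective(makeup_img, H, proj_size)
```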
In this embodiment, the device 300 may store electric energy in a small lithium battery, or the lithium battery may be replaced with a solar battery, making the device 300 lighter, more convenient, energy-saving and environmentally friendly.
The main problems this application solves are using projection to save makeup time and avoiding the environmental problems of manufacturing or discarding cosmetics and the skin damage their long-term use causes. Women who wear makeup for years face a problem that is hard to solve: looking haggard after makeup removal. With this method there is no need to wear a full face of makeup all day, the skin can breathe more comfortably, and over time the improvement in complexion becomes increasingly obvious.
In summary, the present application provides a multi-view projection makeup method and a multi-view projection makeup wearable device: acquiring facial region information of a user in real time to obtain two-dimensional face images of different facial regions at different viewing angles; identifying one or more parts in each two-dimensional face image based on a face image recognition model; loading the makeup information of each identified part to obtain a two-dimensional makeup image for each viewing angle; and projecting each two-dimensional makeup image onto the user's face, with the capture position of its viewing angle as the base point, to map the makeup effect.
The application effectively overcomes some defects in the prior art and has high industrial utilization value.
The above embodiments merely illustrate the principles and utility of the present application and do not limit it. Anyone skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present application. Accordingly, all equivalent modifications or changes made by persons of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present application.

Claims (10)

1. A multi-view projection makeup method, comprising:
acquiring facial region information of a user in real time to obtain two-dimensional face images of different facial regions at different viewing angles;
identifying one or more parts in each of the two-dimensional face images based on a face image recognition model;
loading the makeup information of each corresponding part for the identified part or parts to obtain a two-dimensional makeup image for each viewing angle; and
projecting each two-dimensional makeup image onto the user's face, with the capture position of its viewing angle as the base point, to map the makeup effect.
2. The multi-view projection makeup method according to claim 1, wherein the different viewing angles comprise any one or combination of: left, lower-left, upper-left, lower-right, upper-right, downward and upward viewing angles.
3. The multi-view projection makeup method according to claim 2, wherein the different viewing angles capture any one or more of the following facial regions:
1) the whole face;
2) any one of the upper half, lower half, left half and right half of the face;
3) the region of any single facial part, or of a combination of several parts.
4. The multi-view projection makeup method according to claim 1, wherein identifying one or more parts in each of the two-dimensional face images based on a face image recognition model comprises:
taking, in advance, two-dimensional face images of facial regions at different viewing angles as a training set;
labeling the position and name of each part of the facial region at the different viewing angles with bounding boxes; and
inputting the labeled two-dimensional face images into an image recognition model for training, to obtain the face image recognition model that recognizes one or more parts in a two-dimensional face image.
5. The multi-view projection makeup method according to claim 1, wherein identifying one or more parts in each of the two-dimensional face images based on a face image recognition model comprises:
applying highlighting processing to each two-dimensional face image to identify the contour of each part, so that the makeup information of each identified part is loaded according to its contour.
6. The multi-view projection makeup method according to claim 5, wherein the highlighting processing comprises any one or more of: color saturation adjustment, contrast adjustment, brightness adjustment, tint adjustment, black-and-white vector conversion, and line dispersion.
7. The multi-view projection makeup method according to any one of claims 1 to 6, wherein the method comprises:
projecting the two-dimensional makeup image onto the user's face at a frequency of M times per second, wherein each projected two-dimensional makeup image is the latest one, obtained by identifying the parts in the most recently captured two-dimensional face image and loading the corresponding makeup information.
8. A multi-view projection makeup wearable device, characterized in that the device comprises:
several cameras for capturing facial region information of a user in real time to obtain two-dimensional face images of different facial regions at different viewing angles;
a processor for identifying one or more parts in each of the two-dimensional face images based on a face image recognition model, and for loading the makeup information of each identified part to obtain a two-dimensional makeup image for each viewing angle; and
several projectors for projecting the two-dimensional makeup images onto the user's face, with the capture position of each viewing angle as the base point, to map the makeup effect.
9. The multi-view projection makeup wearable device according to claim 8, further comprising:
a memory for storing multiple preset sets of makeup information, which the device selects through a switch key or a selection key;
or,
a communicator for establishing a communication connection with a mobile terminal to receive makeup information in any combination selected on the mobile terminal.
10. The multi-view projection makeup wearable device according to claim 8, wherein the device is a hairpin provided with at least two interaction units that extend in front of the user's face when worn, each interaction unit comprising a camera and a projector whose capture and projection directions face the user's face; or the device is a hat with a front brim, with at least two interaction units facing the user's face arranged under the brim, each interaction unit comprising a camera and a projector whose capture and projection directions face the user's face.
CN202010646810.XA, filed 2020-07-07 (priority date 2020-07-07): Multi-view projection makeup method and multi-view projection makeup wearable device. Status: Pending. Published as CN111782854A (en).

Priority Applications (1)

Application Number: CN202010646810.XA. Priority date: 2020-07-07. Filing date: 2020-07-07. Title: Multi-view projection makeup method and multi-view projection makeup wearable device.


Publications (1)

Publication number: CN111782854A. Publication date: 2020-10-16.

Family

ID=72758181

Family Applications (1)

Application Number: CN202010646810.XA. Title: Multi-view projection makeup method and multi-view projection makeup wearable device. Priority date: 2020-07-07. Filing date: 2020-07-07. Status: Pending.

Country Status (1)

Country Link
CN (1) CN111782854A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004094917A (en) * 2002-07-08 2004-03-25 Toshiba Corp Virtual makeup device and method therefor
US20080285843A1 (en) * 2007-05-16 2008-11-20 Honda Motor Co., Ltd. Camera-Projector Duality: Multi-Projector 3D Reconstruction
CN106657849A (en) * 2016-12-31 2017-05-10 上海孩子国科教设备有限公司 Facial projection apparatus and system, and implementation method
JP2018195996A (en) * 2017-05-18 2018-12-06 株式会社デジタルハンズ Image projection apparatus, image projection method, and image projection program
CN107896324A (en) * 2017-11-02 2018-04-10 天衍互动(厦门)科技有限公司 A kind of actual situation hybrid projection devices and methods therefor
CN111179210A (en) * 2019-12-27 2020-05-19 浙江工业大学之江学院 Method and system for generating texture map of face and electronic equipment
CN111260587A (en) * 2020-01-21 2020-06-09 科珑诗菁生物科技(上海)有限公司 3D projection makeup method and 3D projection makeup dressing equipment

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112486263A (en) * 2020-11-30 2021-03-12 科珑诗菁生物科技(上海)有限公司 Eye protection makeup method based on projection and projection makeup dressing wearing equipment


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination