CN112819718A - Image processing method and device, electronic device and storage medium


Info

Publication number
CN112819718A
CN112819718A
Authority
CN
China
Prior art keywords
makeup; template; image; user; makeup template
Prior art date
Legal status
Pending
Application number
CN202110136553.XA
Other languages
Chinese (zh)
Inventor
王根在
罗彬
Current Assignee
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN202110136553.XA priority Critical patent/CN112819718A/en
Publication of CN112819718A publication Critical patent/CN112819718A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06F: ELECTRIC DIGITAL DATA PROCESSING
                • G06F 18/00: Pattern recognition
                    • G06F 18/20: Analysing
                        • G06F 18/22: Matching criteria, e.g. proximity measures
            • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 3/00: Geometric image transformations in the plane of the image
                    • G06T 3/04: Context-preserving transformations, e.g. by using an importance map
                • G06T 5/00: Image enhancement or restoration
                    • G06T 5/77: Retouching; Inpainting; Scratch removal
                • G06T 2207/00: Indexing scheme for image analysis or image enhancement
                    • G06T 2207/30: Subject of image; Context of image processing
                        • G06T 2207/30196: Human being; Person
                        • G06T 2207/30201: Face
            • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 10/00: Arrangements for image or video recognition or understanding
                    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
                        • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces
                            • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
                                • G06V 10/757: Matching configurations of points or features
                • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
                    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
                        • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
                            • G06V 40/172: Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method and apparatus, an electronic device, and a storage medium. The method includes: outputting makeup modes in response to receiving an instruction to start makeup, the makeup modes including a recommended-makeup-template mode and an autonomous makeup mode; in response to receiving a selection of the recommended-makeup-template mode, acquiring person attributes of a user; and searching a first makeup template library using the person attributes to obtain at least one makeup template whose feature data matches the person attributes.

Description

Image processing method and device, electronic device and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
At present, the beauty industry has great market potential. Makeup uses cosmetics and tools, applied with appropriate steps and techniques, to render and accentuate a person's face and so enhance their appearance. Makeup involves many steps, and the choices are complicated; as a result, most people do not know which makeup suits them.
The usual way for people to find makeup that suits them is to seek professional makeup consultation from specialist institutions. However, this approach is inefficient and inconvenient, and it cannot provide the user with an accurate makeup template tailored to the user's own characteristics.
Disclosure of Invention
The application provides an image processing method and device, an electronic device and a storage medium.
The application provides an image processing method, which includes the following steps:
outputting makeup modes in response to receiving an instruction to start makeup; the makeup modes include a recommended-makeup-template mode and an autonomous makeup mode;
in response to receiving a selection of the recommended-makeup-template mode, acquiring person attributes of the user;
and searching a first makeup template library using the person attributes to obtain at least one makeup template whose feature data matches the person attributes.
In combination with any embodiment of the present application, the at least one makeup template includes a first makeup template to be processed, and the method further includes:
acquiring a face image of a user;
and transferring the makeup of the first makeup template to be processed to the face image to obtain a first virtual makeup image.
With reference to any embodiment of the present application, the acquiring the person attributes of the user includes:
performing person attribute extraction processing on the face image to obtain the person attributes of the user in the face image.
In combination with any embodiment of the present application, each makeup template of the at least one makeup template includes a face image region;
before the step of transferring the makeup of the first makeup template to be processed to the face image to obtain the first virtual makeup image, the method further includes:
in a case where the number of the at least one makeup template exceeds 1, taking the highest-similarity makeup template as the first makeup template to be processed; the highest-similarity makeup template is the makeup template, among the at least one makeup template, with the highest similarity to the face image.
With reference to any embodiment of the present application, before transferring the makeup of the first makeup template to be processed to the face image of the user to obtain the first virtual makeup image, the method further includes:
displaying the first makeup template to be processed;
in a case where an instruction selecting to use the first makeup template to be processed is received, performing the step of transferring the makeup of the first makeup template to be processed to the face image of the user to obtain the first virtual makeup image;
and in a case where an instruction indicating dissatisfaction with the first makeup template to be processed is received, taking the makeup template with the second-highest similarity to the face image among the at least one makeup template as the first makeup template to be processed, or entering the autonomous makeup mode.
In combination with any embodiment of the present application, the method further comprises:
displaying the first virtual makeup image;
outputting makeup auxiliary information in a case where an instruction indicating satisfaction with the makeup effect of the first virtual makeup image is received; the makeup auxiliary information is used to guide the user in applying actual makeup so as to achieve the makeup effect of the first makeup template to be processed.
In combination with any embodiment of the present application, the at least one makeup template further includes a second makeup template to be processed different from the first makeup template to be processed, and the method further includes:
in a case where an instruction indicating dissatisfaction with the first virtual makeup image is received, transferring the makeup of the second makeup template to be processed to the face image to obtain a second virtual makeup image.
In combination with any embodiment of the present application, the method further includes:
acquiring an actual makeup image; the actual makeup image is an image obtained after the user applies actual makeup according to the makeup auxiliary information;
and performing matching degree detection on the actual makeup image and the first virtual makeup image to obtain a matching degree result between the actual makeup image and the first virtual makeup image.
In combination with any embodiment of the present application, in response to receiving a selection of the autonomous makeup mode, the first makeup template library is not used.
In combination with any embodiment of the present application, the method further comprises:
acquiring a sample makeup template and a second makeup template library;
and updating the makeup template in the second makeup template library by using the sample makeup template to obtain a first makeup template library.
With reference to any embodiment of the present application, before the makeup templates in the second makeup template library are updated using the sample makeup template to obtain the first makeup template library, the method further includes:
comparing the sample makeup template with makeup templates in the second makeup template library to obtain a similarity set;
updating the makeup template in the second makeup template library by using the sample makeup template to obtain a first makeup template library, including:
and under the condition that the maximum value in the similarity set is larger than a similarity threshold value, updating the makeup template in the second makeup template library by using the sample makeup template to obtain a first makeup template library.
In combination with any embodiment of the present application, the updating the makeup template in the second makeup template library by using the sample makeup template to obtain a first makeup template library includes:
fusing the sample makeup template and the first makeup template to obtain a second makeup template; the first makeup template is a makeup template corresponding to the maximum value in the similarity set in the second makeup template library;
and replacing the first makeup template in the second makeup template library by using the second makeup template to obtain the first makeup template library.
In some embodiments, the present application further provides an image processing apparatus, the apparatus including:
an output unit, configured to output makeup modes in response to receiving an instruction to start makeup; the makeup modes include a recommended-makeup-template mode and an autonomous makeup mode;
an obtaining unit, configured to acquire person attributes of a user in response to receiving a selection of the recommended-makeup-template mode;
and a retrieval unit, configured to search a first makeup template library using the person attributes to obtain at least one makeup template whose feature data matches the person attributes.
In combination with any embodiment of the present application, the at least one makeup template includes a first makeup template to be processed, and the obtaining unit is further configured to:
acquire a face image of the user;
the apparatus further includes: a transfer unit, configured to transfer the makeup of the first makeup template to be processed to the face image to obtain a first virtual makeup image.
With reference to any embodiment of the present application, the obtaining unit is specifically configured to:
perform person attribute extraction processing on the face image to obtain the person attributes of the user in the face image.
In combination with any embodiment of the present application, each makeup template of the at least one makeup template includes a face image region;
the apparatus further includes: a first processing unit, configured to, before the makeup of the first makeup template to be processed is transferred to the face image to obtain the first virtual makeup image, take the highest-similarity makeup template as the first makeup template to be processed in a case where the number of the at least one makeup template exceeds 1; the highest-similarity makeup template is the makeup template, among the at least one makeup template, with the highest similarity to the face image.
In combination with any embodiment of the present application, the apparatus further includes: a display unit, configured to display the first makeup template to be processed before the makeup of the first makeup template to be processed is transferred to the face image of the user to obtain the first virtual makeup image;
the transfer unit is configured to perform the step of transferring the makeup of the first makeup template to be processed to the face image of the user to obtain the first virtual makeup image in a case where an instruction selecting to use the first makeup template to be processed is received;
the apparatus further includes: a second processing unit, configured to, in a case where an instruction indicating dissatisfaction with the first makeup template to be processed is received, take the makeup template with the second-highest similarity to the face image among the at least one makeup template as the first makeup template to be processed, or enter the autonomous makeup mode.
In combination with any embodiment of the present application, the apparatus further includes: the display unit is configured to display the first virtual makeup image;
the output unit is further configured to output makeup auxiliary information in a case where an instruction indicating satisfaction with the makeup effect of the first virtual makeup image is received; the makeup auxiliary information is used to guide the user in applying actual makeup so as to achieve the makeup effect of the first makeup template to be processed.
In combination with any embodiment of the present application, the at least one makeup template further includes a second makeup template to be processed different from the first makeup template to be processed, and the transfer unit is further configured to:
in a case where an instruction indicating dissatisfaction with the first virtual makeup image is received, transfer the makeup of the second makeup template to be processed to the face image to obtain a second virtual makeup image.
With reference to any one of the embodiments of the present application, the obtaining unit is further configured to:
acquire an actual makeup image; the actual makeup image is an image obtained after the user applies actual makeup according to the makeup auxiliary information;
the apparatus further includes: a detection unit, configured to perform matching degree detection on the actual makeup image and the first virtual makeup image to obtain a matching degree result between the actual makeup image and the first virtual makeup image.
In combination with any embodiment of the present application, the apparatus further includes: a third processing unit, configured to not use the first makeup template library in response to receiving a selection of the autonomous makeup mode.
With reference to any embodiment of the present application, the obtaining unit is further configured to:
acquiring a sample makeup template and a second makeup template library;
the apparatus further includes: an updating unit, configured to update the makeup templates in the second makeup template library using the sample makeup template to obtain the first makeup template library.
In combination with any embodiment of the present application, the apparatus further includes: a comparison unit, configured to compare the sample makeup template with the makeup templates in the second makeup template library to obtain a similarity set before the makeup template in the second makeup template library is updated by using the sample makeup template to obtain a first makeup template library;
the updating unit is specifically configured to:
update the makeup templates in the second makeup template library using the sample makeup template to obtain the first makeup template library in a case where the maximum value in the similarity set is greater than a similarity threshold.
With reference to any embodiment of the present application, the update unit is specifically configured to:
fuse the sample makeup template and the first makeup template to obtain a second makeup template; the first makeup template is the makeup template in the second makeup template library corresponding to the maximum value in the similarity set;
and replace the first makeup template in the second makeup template library with the second makeup template to obtain the first makeup template library.
In some embodiments, the present application further provides a processor for performing the method of the first aspect and any one of its possible implementation manners.
In some embodiments, the present application further provides an electronic device comprising: a processor, input means, output means, and a memory for storing computer program code comprising computer instructions, which, when executed by the processor, cause the electronic device to perform the method of the first aspect and any one of its possible implementations.
In some embodiments, the present application further provides a computer-readable storage medium having stored therein a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of the first aspect and any one of its possible implementations.
In some embodiments, the present application also provides a computer program product comprising a computer program or instructions which, when run on a computer, cause the computer to perform the method of the first aspect and any one of its possible implementations.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present application, the drawings required to be used in the embodiments or the background art of the present application will be described below.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an image processing system according to an embodiment of the present application;
fig. 3 is a schematic flowchart of an application image processing method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 5 is a schematic diagram of a hardware structure of an image processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The embodiments of the present application will be described below with reference to the drawings.
Referring to fig. 1, fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure.
101. Outputting makeup modes in response to receiving an instruction to start makeup, where the makeup modes include a recommended-makeup-template mode and an autonomous makeup mode.
In the embodiments of the present application, the instruction to start makeup instructs the image processing apparatus to start the makeup program. In one possible implementation, the image processing apparatus has a communication connection with a display, and through this connection it shows a message box on the display asking whether to start the makeup program. The user can input the instruction to start makeup to the image processing apparatus through this message box.
In another possible implementation, the user inputs the instruction to start makeup by providing the image processing apparatus with voice data carrying the information to start makeup.
In yet another possible implementation, the image processing apparatus receives the instruction to start makeup from a terminal. Optionally, the terminal may be any one of the following: a mobile phone, a computer, a tablet computer, a server, or a wearable device.
In the embodiments of the present application, the recommended-makeup-template mode means that the image processing apparatus recommends makeup templates suitable for the user, while the autonomous makeup mode means that the user applies makeup on their own. Further, the user may upload an image of the effect of their autonomous makeup to the image processing apparatus as a makeup template.
Outputting the makeup modes means that, in response to receiving the instruction to start makeup, the image processing apparatus enters a makeup mode selection interface.
102. In response to receiving a selection of the recommended-makeup-template mode, acquiring person attributes of the user.
In the embodiments of the present application, the person attributes of the user comprise the user's appearance features, including age, hairstyle, hair color, gender, face shape, types of facial features, and skin color.
In one implementation of acquiring the person attributes of the user, the image processing apparatus receives the person attributes input by the user through an input component, where the input component includes: a keyboard, a mouse, a touch screen, a touch pad, an audio input device, etc. Alternatively, it receives the person attributes of the user sent by a terminal, where the terminal includes a mobile phone, a computer, a tablet computer, a server, etc.
In another implementation of acquiring the person attributes of the user, a terminal in communication connection with a server receives the person attributes input by the user and sends them to the server, so that the server acquires the person attributes of the user. Optionally, in this implementation, before the user inputs the person attributes through the terminal, the terminal may display candidate person attributes to the user and send the options the user selects to the server as the person attributes of the user.
For example, among the candidate person attributes displayed to the user by the terminal, the options for face shape include: standard face, round face, square face, diamond face, oval face, long face, and heart-shaped face; the options for gender include: male and female; the options for age group include: under 20, 20-25, 25-30, 30-35, 35-40, 40-45, 45-50, and 50 or older; the options for hair color include: black, white, red, orange, yellow, green, blue, violet, and brown; the options for eyebrow type include: crescent eyebrows, straight eyebrows, triangular eyebrows, sword eyebrows, willow-leaf eyebrows, and drooping eyebrows; the options for eye type include: standard eyes, round eyes, phoenix eyes, upturned eyes, slit eyes, triangular eyes, deep-set eyes, puffy eyes, long narrow eyes, and peach-blossom eyes; the options for mouth type include: thin lips, full lips, M-shaped lips, beaded lips, smiling lips, pouty lips, and heart-shaped lips; the options for nose type include: upturned nose, saddle nose, straight nose, wavy nose, high nose, and low nose; the options for skin color include: dark, fair, and yellow; the options for hairstyle include: long hair with bangs, short hair with bangs, and short hair without bangs.
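Where helpful, the candidate attributes above can be represented as a simple record keyed by attribute category. The following is a minimal sketch of one such representation; the class and field names, and the two example option lists, are illustrative and not prescribed by the patent.

```python
from dataclasses import dataclass
from typing import Optional

# Two of the candidate-option lists enumerated above (illustrative).
FACE_SHAPES = ["standard", "round", "square", "diamond", "oval", "long", "heart"]
SKIN_COLORS = ["dark", "fair", "yellow"]

@dataclass
class PersonAttributes:
    """Person attributes used as the retrieval key for the makeup template library.

    A None field means the attribute was not specified by the user."""
    face_shape: Optional[str] = None
    gender: Optional[str] = None
    age_group: Optional[str] = None
    eyebrow_type: Optional[str] = None
    eye_type: Optional[str] = None
    nose_type: Optional[str] = None
    mouth_type: Optional[str] = None
    hairstyle: Optional[str] = None
    hair_color: Optional[str] = None
    skin_color: Optional[str] = None
```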
103. Searching the first makeup template library using the person attributes to obtain at least one makeup template whose feature data matches the person attributes.
In the embodiments of the present application, the first makeup template library may be established before the person attributes of the user are acquired; it contains images together with the feature data of those images. The feature data of an image refers to the person attributes of the person object in the image, which include: face shape, gender, age group, eyebrow type, nose type, eye type, mouth type, hairstyle, hair color, and skin color.
Since each image in the first makeup template library has feature data, searching the library with the person attributes of the user means determining, from the library, the feature data that matches those attributes, and thereby determining the makeup templates whose feature data matches the person attributes of the user. It should be understood that the number of matching makeup templates may be one or more.
For example, suppose the person attributes of the user include: a round face shape, a high nose type, and black hair. Searching the first makeup template library with these attributes means determining, from the library, the images whose feature data is round face, high nose, and black hair, thereby obtaining at least one makeup template.
The image processing apparatus searches the data in the first makeup template library using the person attributes of the user, and determines the images in the library whose feature data matches those attributes, obtaining at least one makeup template. The user can then browse the retrieved templates and learn the makeup techniques of the person objects they contain, so as to find makeup that suits them.
For example, the person attributes of the user are: round face, peach-blossom eyes, high nose, female. The image processing apparatus searches the first makeup template library using these attributes and obtains the images whose feature data is "round face, peach-blossom eyes, high nose, female", that is, at least one makeup template.
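A minimal sketch of this attribute-based retrieval, assuming each library entry pairs a template image with its feature data stored as a PersonAttributes record (see the sketch above); the function name and data layout are illustrative assumptions, not the patent's prescribed implementation.

```python
def search_template_library(library, query_attrs):
    """Return all makeup templates whose feature data matches every
    attribute specified in query_attrs (None fields are ignored).

    `library` is assumed to be a list of (template_image, feature_data)
    pairs, where feature_data is a PersonAttributes instance."""
    matches = []
    for template_image, feature_data in library:
        if all(getattr(feature_data, field) == value
               for field, value in vars(query_attrs).items()
               if value is not None):
            matches.append(template_image)
    return matches
```

For the example above, a query with face_shape="round", eye_type="peach-blossom", nose_type="high", gender="female", and all other fields left as None would return every template annotated with those four attributes.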
In the embodiments of the present application, after entering the recommended-makeup-template mode, the image processing apparatus uses the person attributes of the user as the retrieval key and retrieves at least one makeup template suitable for the user from the first makeup template library.
As an alternative embodiment, the at least one makeup template includes a first makeup template to be processed, and the image processing apparatus further performs the steps of:
1. Acquire a face image of the user.
In the embodiments of the present application, the face image of the user is an image containing the user's face. In one implementation of acquiring it, the image processing apparatus receives the face image input by the user through an input component.
In another implementation, the image processing apparatus receives the face image of the user sent by a data terminal.
In yet another implementation, the image processing apparatus receives the face image of the user sent by a camera. Optionally, the image processing apparatus receives a video stream sent by the camera, decodes the video stream, and uses a decoded frame as the face image of the user.
2. Transfer the makeup of the first makeup template to be processed to the face image to obtain a first virtual makeup image.
In the embodiments of the present application, the first makeup template to be processed is one of the at least one makeup template. In one possible implementation of selecting it, the image whose features have the smallest Euclidean distance to the features of the user's face image is selected from the images contained in the at least one makeup template; that is, the template that most closely resembles the user's face image is taken as the first makeup template to be processed.
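A sketch of this nearest-template selection, assuming the face images have already been reduced to fixed-length feature vectors (for example, by the person attribute extraction network described below); the function name is illustrative.

```python
import numpy as np

def select_first_template(user_feature, template_features):
    """Pick the template whose feature vector has the smallest Euclidean
    distance to the user's facial feature vector.

    `user_feature` is a 1-D numpy array; `template_features` is a list of
    1-D numpy arrays, one per candidate makeup template. How the feature
    vectors are extracted is left open here."""
    distances = [np.linalg.norm(user_feature - f) for f in template_features]
    return int(np.argmin(distances))  # index of the closest template
```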
In the embodiments of the present application, the makeup of the first makeup template to be processed is transferred to the face image of the user to obtain the first virtual makeup image, that is, an image in which the user's face carries the makeup effect of the person object in the first makeup template to be processed.
In one possible implementation, the image processing apparatus transfers the makeup of the first makeup template to be processed to the face image of the user through a makeup transfer network, obtaining the first virtual makeup image. The makeup transfer network can be obtained by training a deep convolutional neural network with at least one labeled first training image as training data, where the label information marks the makeup regions of the person object in the first training image, such as the eyebrows, eyes, nose, and cheeks. Inputting the face image of the user into the trained makeup transfer network yields a makeup image, namely the first virtual makeup image.
In yet another possible implementation, the image processing apparatus transfers the makeup of the first makeup template to be processed to the face image of the user through a makeup transfer algorithm. The algorithm divides the pixel values of the first makeup template to be processed by the pixel values of its corresponding bare-face image (the same face without makeup) to obtain a cosmetic map (cp) between the two. The values of the cosmetic map are then multiplied by the pixel values of the face image of the user to obtain the pixel values of the first virtual makeup image. That is, the inputs of the makeup transfer algorithm are the first makeup template to be processed, its corresponding bare-face image, and the face image of the user; the output is the user's face image with the makeup applied, i.e., the first virtual makeup image.
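A minimal sketch of this ratio-based transfer. The original description's wording is ambiguous about which image is the numerator; the sketch assumes the cosmetic map is the makeup template divided by its bare-face counterpart, the direction that adds makeup when multiplied onto a plain face. The `eps` guard and function name are illustrative details not stated in the patent.

```python
import numpy as np

def makeup_transfer(template_makeup, template_plain, user_face, eps=1e-6):
    """Makeup migration by pixel-wise ratio, per the algorithm above.

    cp = template_makeup / template_plain   (the cosmetic map)
    result = cp * user_face

    All three images are arrays of identical shape, assumed already aligned
    to the same spatial layout (see the warping step described next).
    `template_plain` is the bare-face image corresponding to the template."""
    cp = template_makeup.astype(np.float32) / (template_plain.astype(np.float32) + eps)
    result = cp * user_face.astype(np.float32)
    return np.clip(result, 0, 255).astype(np.uint8)
```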
Optionally, to achieve a better effect when transferring the makeup of the first makeup template to be processed to the face image of the user, before the cosmetic map values are multiplied by the pixel values of the face image, the makeup transfer algorithm may warp the first makeup template to be processed so that its spatial layout is consistent with that of the face image. Feature points are extracted from the first makeup template to be processed and from the face image of the user respectively, and the two sets of feature points are matched to obtain the best matches. The correspondence between the first makeup template to be processed and the face image is then optimized using transformation parameters such as affine or perspective transforms, and the optimized parameters are used to warp the first makeup template to be processed into the same spatial layout as the face image of the user.
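A sketch of this warping step under stated assumptions: the description only requires feature-point matching plus an affine or perspective transform, so ORB features with RANSAC homography estimation are used here as one concrete choice (in practice, facial-landmark-based warping is a common alternative for faces). The OpenCV calls are real; the overall routine is illustrative.

```python
import cv2
import numpy as np

def align_template_to_face(template, user_face):
    """Warp the makeup template into the spatial layout of the user's face.

    Both inputs are BGR images. Feature points are detected and matched,
    then a perspective transform estimated with RANSAC is applied."""
    orb = cv2.ORB_create()
    g1 = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(user_face, cv2.COLOR_BGR2GRAY)
    kp1, des1 = orb.detectAndCompute(g1, None)
    kp2, des2 = orb.detectAndCompute(g2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Estimate the optimized perspective transform; RANSAC rejects outliers.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = user_face.shape[:2]
    return cv2.warpPerspective(template, H, (w, h))
```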
By acquiring the person attributes of the user, the image processing apparatus can accurately retrieve the makeup templates matching those attributes, that is, provide makeup templates that fit the user's own characteristics. The apparatus can display both a matching makeup template and the first virtual makeup image obtained by realistically transferring that template's makeup onto the face image of the user. The user can therefore see which makeup suits them from the displayed template and the resulting first virtual makeup image.
As an alternative embodiment, the image processing apparatus performs the following steps in the process of performing step 1:
3. Perform person attribute extraction processing on the face image to obtain the person attributes of the user in the face image.
In the embodiments of the present application, performing person attribute extraction processing on the face image of the user is, in essence, feature extraction processing. The feature extraction can be realized by a pre-trained neural network or a feature extraction model, which is not limited in the present application. The feature data obtained by the extraction can be understood as deeper-level semantic information of the face image; here, the feature data is the person attributes of the user in the face image.
In some possible implementations, the feature extraction is performed by convolving the face image of the user layer by layer through multiple stacked convolutional layers. Each convolutional layer extracts different feature content and semantic information; the processing abstracts the image features step by step and progressively discards relatively minor feature data, so the smaller feature maps extracted in later layers carry more concentrated content and semantic information. Convolving the face image step by step through multiple convolutional layers obtains the main content information of the face image (i.e., its feature data) while reducing the image size, which lowers the amount of computation and improves speed.
In one possible implementation, the convolution is performed as follows: the convolutional layer slides a convolution kernel over the face image of the user, multiplies the pixels of the face image by the corresponding values of the kernel, and sums all the products to form the pixel value of the output image at the position corresponding to the center of the kernel; eventually all pixels of the face image are convolved and the feature data is extracted.
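A minimal single-channel sketch of the sliding-window convolution just described, covering 'valid' positions only (no padding). Real implementations use vectorized library routines, but the loop form mirrors the description; the function name is illustrative.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image, multiply overlapping pixels by the
    kernel values, and sum the products to form each output pixel."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1), dtype=np.float32)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out
```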
It should be understood that the more person attributes the extraction obtains, the better the makeup templates retrieved with those attributes match the face image of the user, and the better the makeup effect obtained by transferring the makeup of a retrieved template onto the face image.
For example, suppose the person attributes obtained by extraction are: round face, full lips, peach-blossom eyes; searching with these attributes yields five makeup templates having those characteristics. If the extracted attributes are instead: round face, full lips, peach-blossom eyes, high nose, willow-leaf eyebrows, the search yields three makeup templates having all of those characteristics. The templates retrieved using more person attributes of the user match the user's face image more closely.
As an alternative embodiment, before performing step 2, the image processing apparatus further performs the following steps:
4. In a case where the number of the at least one makeup template exceeds 1, take the makeup template with the highest similarity to the face image among the at least one makeup template as the first makeup template to be processed.
In the embodiments of the present application, taking the makeup template with the highest similarity to the user's face image as the first makeup template to be processed requires comparing the similarity of two images. Any one of the following algorithms may be used for the comparison: pixel-point comparison, center-of-gravity comparison, projection comparison, block comparison, the OpenCV histogram method, image template matching, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and perceptual hashing.
Optionally, before comparing the similarity between the user's face image and the images contained in the at least one makeup template, note that if both are RGB images, the pixel values lie between 0 and 255; processing this range directly is cumbersome, so the images are first reduced in dimension by binarization. Binarization converts an image to pure black and white, where black is represented by 1 and white by 0, yielding a matrix containing only 0s and 1s; the OTSU algorithm can be used to choose the threshold.
Pixel-point comparison is used here to explain how the similarity between a first image and a second image is calculated. The first image is the binarized face image of the user, and the second image is the binarized first makeup template to be processed, which may be any one of the at least one makeup template; the two images have the same number of pixels. Pixels at the same positions in the two images are compared in turn, and whenever the pixel values at the same position are equal, the count of similar points is increased by one. The number of similar points divided by the number of pixels of the first image (or the second image) gives a value between 0 and 1, which is taken as the similarity of the two images.
For example, assume the at least one makeup template contains a template A and a template B. Template A and the user's face image each have 100 pixels, of which 30 differ, so the facial similarity between template A and the face image is 70%. Template B and the face image also each have 100 pixels, of which 40 differ, so their similarity is 60%. Since the similarity of template A is greater than that of template B, template A is recommended to the user as the first makeup template to be processed.
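A sketch combining the binarization and pixel-point comparison steps, using OpenCV's OTSU thresholding as suggested above; the inverted threshold makes black 1 and white 0 as in the description. The function name is illustrative, and both inputs are assumed to be same-sized BGR arrays.

```python
import cv2
import numpy as np

def pixel_similarity(image_a, image_b):
    """Similarity by pixel-point comparison: binarize both images with OTSU
    thresholding, then return the fraction of positions whose binary values
    are equal, a value between 0 and 1."""
    _, bin_a = cv2.threshold(cv2.cvtColor(image_a, cv2.COLOR_BGR2GRAY),
                             0, 1, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    _, bin_b = cv2.threshold(cv2.cvtColor(image_b, cv2.COLOR_BGR2GRAY),
                             0, 1, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    equal_points = np.count_nonzero(bin_a == bin_b)  # similar-point count
    return equal_points / bin_a.size
```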
As an alternative embodiment, before performing step 2, the image processing apparatus further performs the following steps:
5. Display the first makeup template to be processed.
6. In a case where an instruction selecting to use the first makeup template to be processed is received, perform the step of transferring the makeup of the first makeup template to be processed to the face image of the user to obtain the first virtual makeup image.
In the embodiments of the present application, the image processing apparatus receives an instruction indicating whether to use the first makeup template to be processed. Receiving an instruction selecting to use it indicates that the user is satisfied with the first makeup template to be processed and chooses to use it; the apparatus then transfers its makeup to the face image of the user to obtain the virtual makeup image.
7. In a case where an instruction indicating dissatisfaction with the first makeup template to be processed is received, take the makeup template with the second-highest similarity to the face image among the at least one makeup template as the first makeup template to be processed, or enter the autonomous makeup mode.
Receiving an instruction indicating dissatisfaction with the first makeup template to be processed means the user chooses not to use it.
In one case, the image processing apparatus then selects the makeup template with the second-highest facial similarity from the at least one makeup template as the new first makeup template to be processed. In another case, the image processing apparatus enters the autonomous makeup mode.
As an alternative embodiment, the image processing apparatus further performs the steps of:
8. Display the first virtual makeup image.
9. In a case where an instruction indicating satisfaction with the makeup effect of the first virtual makeup image is received, output makeup auxiliary information, which is used to guide the user in applying actual makeup so as to achieve the makeup effect of the first makeup template to be processed.
If the image processing apparatus receives an instruction indicating that the user is not satisfied with the makeup effect of the first virtual makeup image, the user does not like that effect. The apparatus therefore does not perform the step of guiding the user through actual makeup based on the first makeup template to be processed, thereby reducing the amount of data that must be computed and stored.
If the image processing apparatus receives an instruction indicating that the user is satisfied with the first virtual makeup image, the user likes its makeup effect, and the apparatus outputs the makeup auxiliary information, which guides the user in applying actual makeup so as to achieve the makeup effect of the first makeup template to be processed. The person applying the makeup may be the user or someone helping the user; the number and gender of makeup appliers are not limited in the present application. Optionally, the auxiliary information may take one or more of the following forms: voice, text, image, and video.
In one possible implementation, the database stores makeup templates together with their corresponding makeup methods. After receiving the user's satisfaction instruction for the first virtual makeup image, the image processing apparatus retrieves the makeup method corresponding to the first makeup template to be processed and displays its steps, so that the user can apply makeup accordingly.
In another possible implementation, while displaying the steps of the makeup method corresponding to the first makeup template to be processed, the image processing apparatus recognizes that the user is working on a particular makeup area and prompts the user by voice-broadcasting the makeup method for that area.
For example, the makeup method stored in the database for the first makeup template to be processed includes: apply an even-toned foundation over the face, with a darker foundation on the cheeks and chin; draw the front 2/3 of the eyebrows straight, not too high, without angling them downward, and slightly longer than usual (similar to straight eyebrows); draw the eyeliner into an oval shape; keep the nose shadow subtle and natural; do not overfill the upper lip with lipstick, but fill the lower lip fully; blush is best applied to the cheeks with horizontal brush strokes. If the image processing apparatus recognizes that the user is drawing their eyebrows, it can prompt the user by playing the eyebrow instructions, such as "draw the front 2/3 of the eyebrows straight, not too high, without angling them downward, and slightly longer."
In another possible implementation, the image processing apparatus obtains each target area of the user that requires makeup; then, combining a pre-established makeup data model with information such as the position of each specific makeup area in the live face image, it generates the auxiliary information needed for that area, including the area's shape and color, and superimposes the auxiliary information onto the corresponding facial position for display.
For example, the facial area corresponding to lipstick is the lips. An auxiliary line showing the lipstick shape to be applied and a prompt marking the lipstick color to be used are generated on the interface and then superimposed onto the position of the mouth in the face image for display.
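A minimal sketch of overlaying such an auxiliary line and color prompt, assuming lip-contour landmark positions are already available from some face landmark detector (the patent does not specify one); the colors, label text, and function name are illustrative.

```python
import cv2
import numpy as np

def draw_lip_guide(frame, lip_points, color_bgr=(60, 20, 200), label="rose red"):
    """Overlay a lipstick auxiliary line and a color prompt on the live face
    image. `lip_points` is an (N, 2) array of lip-contour landmark positions."""
    pts = np.asarray(lip_points, dtype=np.int32).reshape(-1, 1, 2)
    # Closed polyline traces the lipstick shape to be applied.
    cv2.polylines(frame, [pts], isClosed=True, color=color_bgr, thickness=2)
    x, y = pts[0, 0]
    # Text prompt marks the color the user should apply.
    cv2.putText(frame, f"lipstick: {label}", (int(x), int(y) - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, color_bgr, 1, cv2.LINE_AA)
    return frame
```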
In addition, during actual makeup the user may adjust their posture and mouth shape or move, in order to better check the makeup effect. The image processing apparatus therefore also needs to track changes of each target makeup area in the face image, including changes in position and shape, and adjust the attributes of the virtual auxiliary information in real time, so that it stays attached to the face image however the user moves. To this end, any one of the following motion tracking algorithms may be used to locate the face in real time: region-based tracking, feature-based tracking, or contour-based tracking.
As an optional implementation, the at least one makeup template further includes a second makeup template to be processed different from the first makeup template to be processed, and the image processing apparatus further performs the following steps:
10. and when receiving an instruction that the image after the first virtual makeup is not satisfied, transferring the makeup of the second makeup template to be processed to the face image to obtain a second image after the second virtual makeup.
In this embodiment, the number of makeup templates in the at least one makeup template is at least 2. Receiving an instruction indicating dissatisfaction with the first virtual makeup image means the user considers the makeup effect of the first makeup template to be processed unsuitable. The image processing apparatus therefore selects, from the at least one makeup template, a makeup template different from the first one as the second makeup template to be processed, and transfers its makeup to the face image to obtain the second virtual makeup image.
As an alternative embodiment, the image processing apparatus further performs the steps of:
11. Acquire an actual makeup image; the actual makeup image is an image obtained after the user applies actual makeup according to the makeup auxiliary information.
In the embodiments of the present application, the makeup auxiliary information guides the user in applying actual makeup so as to achieve the makeup effect of the makeup template to be processed. An image of the user after applying actual makeup according to that information, carrying the template's makeup effect, is the actual makeup image.
For example, Zhang San uses the image processing apparatus while applying actual makeup. The apparatus displays Zhang San's face during the process, recognizes the positions of Zhang San's eyebrows, mouth, and so on, and displays makeup auxiliary information at those positions: for instance, an eyebrow auxiliary line at the eyebrow position annotated with the color brown, and a mouth auxiliary line at the mouth position annotated with the color rose red. Zhang San applies actual makeup according to the auxiliary information displayed by the apparatus, producing the actual makeup image.
It should be understood that the actual makeup image may be an image of the user captured by someone else through another data terminal, an image the user captured of themselves through another data terminal, or an image captured by the image processing apparatus when it determines that the user has finished applying actual makeup.
In one implementation of obtaining the actual makeup image, the image processing apparatus receives the actual makeup image input by the user through an input component.
In another implementation, the image processing apparatus receives the actual makeup image sent by a data terminal.
12. Perform matching-degree detection on the actual makeup image and the first virtual makeup image to obtain a matching-degree result between the actual makeup image and the first virtual makeup image.
In this embodiment of the application, matching-degree detection is performed on the actual makeup image and the virtual makeup image to obtain a matching-degree result between the two. The detection is similar to the similarity comparison between two images described in step 4.
For example, if the binarized actual makeup image and the binarized virtual makeup image each contain 100 pixels and 30 of those pixels differ, the matching degree between the virtual makeup image and the actual makeup image is 70%.
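A minimal sketch of this matching-degree calculation, under the assumption that both images are grayscaled, binarized with a fixed threshold and compared pixel by pixel (the threshold value and the resizing step are illustrative choices, not specified by this application):

import cv2
import numpy as np

def matching_degree(actual_bgr, virtual_bgr, thresh=128):
    actual = cv2.cvtColor(actual_bgr, cv2.COLOR_BGR2GRAY)
    virtual = cv2.cvtColor(virtual_bgr, cv2.COLOR_BGR2GRAY)
    # Bring both images to the same size so pixels correspond one to one.
    virtual = cv2.resize(virtual, (actual.shape[1], actual.shape[0]))
    _, a_bin = cv2.threshold(actual, thresh, 255, cv2.THRESH_BINARY)
    _, v_bin = cv2.threshold(virtual, thresh, 255, cv2.THRESH_BINARY)
    same = np.count_nonzero(a_bin == v_bin)
    return same / a_bin.size  # 70 identical pixels out of 100 -> 0.70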
From the matching-degree result between the virtual makeup image and the actual makeup image, the user can judge his or her makeup level: the higher the matching degree, the better the user's makeup level. Optionally, the matching-degree result obtained each time the user performs actual makeup with a given makeup template is stored in a database and compared with the matching-degree result of the user's next actual makeup with the same template; the comparison result serves as a basis for judging whether the user's makeup level has improved.
For example, the matching degree between actual makeup image A, obtained the first time the user makes up with makeup template C, and the virtual makeup image is 70%, while the matching degree between actual makeup image B, obtained the second time the user makes up with template C, and the virtual makeup image is 78%. This indicates that the user's makeup level has improved.
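The per-template history described above can be sketched as follows; the in-memory dictionary is a hypothetical stand-in for the database, and all names are illustrative:

from collections import defaultdict

match_history = defaultdict(list)  # (user_id, template_id) -> list of results

def record_and_compare(user_id, template_id, result):
    # Returns True when the new matching degree beats the previous attempt
    # with the same template, i.e. the user's makeup level has improved.
    history = match_history[(user_id, template_id)]
    improved = bool(history) and result > history[-1]
    history.append(result)
    return improved

# record_and_compare("user", "template_C", 0.70)  -> False (first attempt)
# record_and_compare("user", "template_C", 0.78)  -> True  (level improved)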
By performing matching-degree detection on the actual makeup image and the virtual makeup image, the image processing apparatus lets the user know his or her makeup level. Recording the matching degree obtained each time the user uses a makeup template, and comparing it with the matching degree from the previous use of the same template, helps the user see whether his or her makeup level is improving and improves the experience of using the image processing apparatus.
When the user chooses the mode of self-service makeup, the image processing apparatus does not need to recommend makeup templates based on the first makeup template library. Therefore, as an alternative embodiment, the image processing apparatus does not use the first makeup template library upon receiving selection of the mode of self-service makeup, thereby reducing the data processing load.
As an alternative embodiment, the image processing apparatus further performs the steps of:
13. Acquire a sample makeup template and a second makeup template library.
In this step, the sample makeup template is an image whose makeup effect the user is satisfied with. For example, the sample makeup template may be a selfie taken after the user applies makeup to himself or herself; it may also be the first virtual makeup image, or the actual makeup image. It is understood that the source of the sample makeup template is not limited in this embodiment. The second makeup template library includes at least one makeup template and is different from the first makeup template library.
14. Update the makeup templates in the second makeup template library by using the sample makeup template to obtain the first makeup template library.
Because the user is satisfied with the makeup effect of the sample makeup template, updating the makeup templates in the second makeup template library with the sample makeup template can improve the makeup effect of the templates in the library.
Based on step 13 and step 14, the image processing apparatus can continuously update the makeup templates in the second makeup template library to obtain the first makeup template library. Recommending makeup templates to the user based on the first makeup template library can then improve the recommendation effect.
As an alternative embodiment, before executing step 14, the image processing apparatus further executes the following step:
15. Compare the sample makeup template with the makeup templates in the second makeup template library to obtain a similarity set.
In this step, the image processing apparatus compares the facial similarity between the sample makeup template and each makeup template in the second makeup template library to obtain the similarity set.
After obtaining the similarity set, the image processing apparatus performs the following step when executing step 14:
16. When the maximum value in the similarity set is greater than the similarity threshold, update the makeup templates in the second makeup template library by using the sample makeup template to obtain the first makeup template library.
A maximum value in the similarity set greater than the similarity threshold indicates that the second makeup template library contains a makeup template matching the sample makeup template. In this case, the image processing apparatus updates that template with the sample makeup template, so that the makeup effect of the templates in the resulting first makeup template library is better.
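Steps 15 and 16 can be sketched as below. Here compute_similarity stands in for the facial similarity comparison used elsewhere in this application, and fuse_templates for the fusion of steps 17 and 18 described next; both names and the 0.8 threshold are assumptions.

def update_library(sample, second_library, compute_similarity,
                   fuse_templates, sim_threshold=0.8):
    similarity_set = [compute_similarity(sample, t) for t in second_library]
    best = max(similarity_set)
    if best <= sim_threshold:
        return second_library  # no matching template; leave the library as-is
    # The template with the highest similarity is the "first makeup template".
    idx = similarity_set.index(best)
    first_library = list(second_library)
    first_library[idx] = fuse_templates(sample, second_library[idx])
    return first_library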
As an alternative embodiment, the image processing apparatus performs the following steps when executing step 14:
17. Fuse the sample makeup template and the first makeup template to obtain a second makeup template, where the first makeup template is the makeup template in the second makeup template library corresponding to the maximum value in the similarity set.
Optionally, the image processing apparatus fuses the sample makeup template and the first makeup template, thereby correcting the first makeup template with the sample makeup template to obtain the second makeup template.
18. Replace the first makeup template in the second makeup template library with the second makeup template to obtain the first makeup template library.
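One possible fusion operator for steps 17 and 18 is a pixel-wise weighted blend; the equal weights below are an assumption, since this application does not specify how the two templates are fused:

import cv2

def fuse_templates(sample_bgr, first_template_bgr, alpha=0.5):
    h, w = first_template_bgr.shape[:2]
    sample_resized = cv2.resize(sample_bgr, (w, h))
    # second template = alpha * sample + (1 - alpha) * first template
    return cv2.addWeighted(sample_resized, alpha,
                           first_template_bgr, 1 - alpha, 0)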
Referring to fig. 2, fig. 2 is a schematic diagram of an image processing system according to an embodiment of the present disclosure.
In this embodiment of the application, as shown in fig. 2, the image processing system includes an electronic apparatus with a face recognition function, a cloud service platform and a makeup application (App). Keeping all data on the device would require too much storage and slow the device down as the data is updated, so, to manage the data better, the image processing system stores the database on the cloud service platform. The makeup application is installed on the electronic apparatus with the face recognition function and executes the method described above, allowing the user to make up conveniently with the application.
The cloud service platform includes the first makeup template library, a user-creation sharing platform and an exchange sharing platform. The first makeup template library stores made-up face images of men and women. When the makeup App receives an instruction that the user needs a makeup template, it retrieves the template from the first makeup template library on the cloud service platform, which reduces the memory occupied by data on the electronic apparatus with the face recognition function. The user-creation sharing platform stores makeup images actually created by users. The exchange sharing platform stores exchange content shared by users about actual makeup performed with makeup templates. Optionally, the exchange content may be the first virtual makeup image, the actual makeup image, or the matching-degree result between the two, in one or more of the forms: audio, text, image, video.
Referring to fig. 3, fig. 3 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure.
In this embodiment of the application, as shown in fig. 3, when the image processing apparatus detects a human face through face recognition, it acquires the user's facial image. By performing feature extraction on the facial image, feature information such as the facial contour, facial features, facial proportions and facial texture can be obtained, from which the user's character attributes, such as face shape, gender, age range, facial-feature types and skin tone, can be determined. For example, when the image processing apparatus recognizes Zhang San's face, it first obtains Zhang San's facial image; feature extraction on that image yields Zhang San's character attributes, for example "round face, male, happy expression, aged 20-25, yellow skin tone". The image processing apparatus then receives an instruction indicating whether the user chooses to use a makeup template.
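The entry point of this flow can be summarized with the hedged sketch below; detect_face and attribute_model are hypothetical placeholders for the face recognition and attribute extraction components, not APIs of this application:

def get_character_attributes(frame, detect_face, attribute_model):
    face = detect_face(frame)  # face detection / recognition stage
    if face is None:
        return None
    features = attribute_model.extract(face)  # contour, proportions, texture...
    return {
        "face_shape": features["face_shape"],  # e.g. "round"
        "gender": features["gender"],          # e.g. "male"
        "age_range": features["age_range"],    # e.g. "20-25"
        "skin_tone": features["skin_tone"],    # e.g. "yellow"
    }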
If an instruction is received that the user chooses to use a makeup template, a makeup template matching the user's character attributes is acquired from the makeup templates in the database. In general, multiple makeup templates have feature data matching the user's character attributes. Optionally, to reduce the number of template retrievals and keep the image processing apparatus responsive, only one of these makeup templates is retrieved and displayed at a time. How that one template is selected from the multiple makeup templates is not limited in this application.
After displaying the makeup template, the image processing apparatus receives an instruction indicating whether the user is satisfied with the recommended makeup template.
If an instruction is received that the user is not satisfied with the recommended makeup template, another makeup template is retrieved from the multiple makeup templates and displayed, in the same way the first recommendation was selected, and an instruction is received indicating whether the user is satisfied with the new recommendation. If an instruction is received that the user is not satisfied with any of the makeup templates, an instruction is received indicating whether the user wants to create a makeup by himself or herself.
If an instruction is received that the user is satisfied with the recommended makeup template, the makeup of that template is migrated to the user's face image to obtain the first virtual makeup image, and an instruction is received indicating whether the user is satisfied with the makeup effect of the first virtual makeup image.
If an instruction is received that the user is not satisfied with the makeup effect of the first virtual makeup image, a recommended makeup template is again retrieved from the multiple makeup templates and displayed, in the same way the first recommendation was selected, and an instruction is received indicating whether the user is satisfied with it.
If an instruction is received that the user is satisfied with the makeup effect of the first virtual makeup image, the image processing apparatus outputs makeup auxiliary information to guide the user in performing actual makeup so as to obtain the makeup effect of the first makeup template to be processed. The image processing apparatus then receives an instruction indicating whether the user has finished the actual makeup and, upon receiving an instruction that the user has finished, acquires the actual makeup image.
The image processing apparatus receives an instruction indicating whether the user chooses to share the actual makeup image or the first virtual makeup image for exchange. Upon receiving an instruction that the user chooses to share, it outputs an edit box, in which the user may edit content in one or more of the forms: text, image, video, audio. When the image processing apparatus receives an instruction that the user has finished editing the content in the edit box, it uploads the edited content to the exchange sharing platform in the database.
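The interaction loop of fig. 3 up to this point can be condensed into the following control-flow sketch; every helper (show, ask, migrate_makeup, and so on) is a hypothetical placeholder for a UI or processing step described above:

def makeup_session(face_image, candidate_templates, ui, engine):
    for template in candidate_templates:        # recommend one template at a time
        ui.show(template)
        if not ui.ask("Satisfied with this template?"):
            continue                            # re-recommend the next template
        virtual = engine.migrate_makeup(template, face_image)
        ui.show(virtual)
        if not ui.ask("Satisfied with the virtual makeup effect?"):
            continue                            # re-recommend the next template
        engine.output_assist_info(template, face_image)
        actual = ui.wait_for_actual_makeup_image()
        if ui.ask("Share this result?"):
            engine.upload_to_sharing_platform(ui.edit_box(actual, virtual))
        return actual
    return None                                 # user rejected every template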
If an instruction is received that the user does not choose to use a makeup template, an interface for the user to create a makeup by himself or herself is output. Optionally, before outputting this interface, the database may be searched according to the user's face-shape attribute to obtain makeup and dressing suggestions corresponding to the user's face shape. For example, if the user has a long face, the makeup suggestion retrieved from the database may be: apply a foundation that evens out the skin tone over the face, with a darker foundation on the cheeks and chin; draw the front 2/3 of the eyebrow straight, neither too high nor sloping downward, and slightly longer (close to a straight eyebrow); draw the eyeliner in an oval shape; avoid a pronounced nose shadow in favor of a natural one; do not fill the upper lip of the lipstick too fully, but draw the lower lip full; and apply blush with horizontal strokes. The corresponding dressing suggestion is: the user suits clothes with a round neckline, and can also wear a high-collared coat, a polo shirt or a hat. Optionally, the database may also be searched according to the types of the user's facial features to obtain the makeup suggestions corresponding to those facial features. For example, if the user's eye type is the peach-blossom eye, the makeup suggestion retrieved from the database is a peach-blossom eye makeup.
The image processing apparatus displays the makeup suggestions retrieved from the database by the user's character attributes, and obtains the image of the makeup the user actually creates according to those suggestions.
After the image of the user's actually created makeup is obtained, it can serve as a new makeup template, updating and enriching the makeup templates contained in the database and giving users more templates to choose from. Because uploading this image requires the user's permission, the image processing apparatus receives an instruction indicating whether the user agrees to share the actually created makeup image.
If an instruction is received that the user agrees to share the actually created makeup image, the image is uploaded to the user-creation sharing platform in the database and then audited. It should be understood that the audit here may score the actually created makeup image and pass it when the score exceeds a certain threshold; or compare the similarity between the user's facial image and the actually created makeup image and pass the audit when the similarity is below a set value; or detect the completeness of the makeup in the actually created makeup image and pass the audit when the makeup is determined to be complete.
In one possible implementation, the audit may score the actually created makeup image and determine that it passes when the score reaches a certain number of points. For example, the scoring rule is: compare the actually created makeup image with the user's facial image, and add one point for each of the following detected changes: a change in the eyebrow color, a change in the mouth color, a change in the color of the upper eyelids, a change in the cheek color, and a change in the skin color of the face. The actually created makeup image is determined to pass the audit when its score reaches four points or more.
For example, the image processing apparatus compares Zhang San's actually created makeup image with Zhang San's facial image and detects that the eyebrow color, the upper-eyelid color, the mouth color and the skin color have all changed. Zhang San's actually created makeup image therefore scores four points and is determined to pass the audit.
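A sketch of this scoring audit, assuming per-region masks are available from a landmark model and using a mean color difference as the change test (both assumptions; this application only specifies one point per changed region and a pass at four points or more):

import cv2

REGIONS = ("eyebrows", "mouth", "upper_eyelids", "cheeks", "skin")

def region_color_changed(bare_bgr, made_up_bgr, mask, min_diff=15.0):
    # Mean absolute color difference inside the region mask.
    diff = cv2.absdiff(bare_bgr, made_up_bgr)
    return diff[mask > 0].mean() > min_diff

def audit_score(bare_bgr, made_up_bgr, region_masks):
    score = sum(region_color_changed(bare_bgr, made_up_bgr, region_masks[r])
                for r in REGIONS if r in region_masks)
    return score, score >= 4  # pass at four points or more

The similarity-based and completeness-based audits described next follow the same structure, swapping the per-region change test for a whole-face similarity comparison or a per-item completeness check.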
In another possible implementation, because the image of the user's actually created makeup differs from the user's facial image, the audit may instead compare the similarity between the two and use the comparison result as the criterion for whether the actually created makeup image can serve as a makeup template. When the similarity between the user's facial image and the actually created makeup image is below a set value, for example no more than 80%, the actually created makeup image is determined to pass the audit and can be stored in the database as a makeup template, enriching the makeup templates in the database and updating its data.
For example, if the image processing apparatus detects that the similarity between Zhang San's facial image and Zhang San's actually created makeup image does not exceed 80%, it determines that Zhang San's actually created makeup image passes the audit.
In yet another possible implementation, the audit may detect whether the makeup of the actually created makeup image is complete, and determine that the image passes the audit when the makeup is complete. For example, the completeness rule is: compare the actually created makeup image with the user's facial image, and consider the makeup complete when the face in the actually created makeup image has drawn eyebrows, lipstick, eye shadow, blush and highlight; in that case, the image is determined to pass the audit.
For example, the image processing apparatus compares Zhang San's facial image with Zhang San's actually created makeup image, detects drawn eyebrows, lipstick, eye shadow, blush and highlight on the face, determines that the makeup of Zhang San's actually created makeup image is complete, and thus determines that the image passes the audit.
If an instruction is received that the actually created makeup image fails the audit, prompt information indicating that the image has not passed the audit is output. Optionally, the image processing apparatus may output a prompt identifying the specific part where the user's makeup is missing, and may also output a prompt asking whether a makeup template is needed. The present application is not limited in this respect.
For example, image A is Zhang San's actually created makeup image, and the image processing apparatus detects that Zhang San in image A has neither drawn eyebrows nor applied lipstick, so it outputs a prompt that the eyebrow and mouth makeup is still to be completed. Optionally, the image processing apparatus may further retrieve from the database the makeup suggestions corresponding to Zhang San's face shape or facial features and output them to help Zhang San perfect the actually created makeup.
Optionally, the actually created makeup image that passes the audit is stored in the makeup template library as a makeup template.
Optionally, the actually created makeup image that passes the audit is first stored on the user-creation sharing platform in the database. The user-creation sharing platform then further screens the actually created makeup images, and the screened images are stored in the makeup template library of the database as makeup templates, enriching the data of the makeup template library and updating its makeup templates.
Optionally, after determining that an actually created makeup image has been stored in the makeup template library, the image processing apparatus outputs a request asking the user to describe the method of the actual makeup creation, and rewards the user. The reward may be points or a small gift; the present application is not limited here.
It will be understood by those skilled in the art that, in the methods of the present application, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible inherent logic.
The method of the embodiments of the present application is set forth above in detail and the apparatus of the embodiments of the present application is provided below.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. The apparatus 1 includes: an output unit 11, an obtaining unit 12 and a retrieval unit 13. Optionally, the apparatus 1 further includes: a migration unit 14, a first processing unit 15, a display unit 16, a second processing unit 17, a third processing unit 18, an updating unit 19 and a comparison unit 20, wherein:
an output unit 11, configured to output a makeup mode in response to receiving an instruction to start makeup; the makeup mode comprises a mode of recommending a makeup template and a mode of self-service makeup;
an obtaining unit 12, configured to obtain character attributes of the user in response to receiving selection of the mode of recommending a makeup template;
and a retrieval unit 13, configured to retrieve the first makeup template library by using the character attributes to obtain at least one makeup template with feature data matching the character attributes.
In combination with any embodiment of the present application, the at least one makeup template includes a first makeup template to be processed, and the obtaining unit 12 is further configured to:
acquiring a face image of a user;
the device further comprises: and the migration unit 14 is used for migrating the makeup of the first makeup template to be processed to the face image to obtain a first virtual makeup image.
With reference to any embodiment of the present application, the obtaining unit 12 is specifically configured to:
and performing character attribute extraction processing on the face image to obtain the character attribute of the user in the face image.
In combination with any embodiment of the present application, the at least one makeup template includes a face image region;
the device further comprises: a first processing unit 15, configured to, before the makeup of the first makeup template to be processed is migrated to the face image to obtain the first virtual makeup image, take the highest-similarity makeup template as the first makeup template to be processed when the number of the at least one makeup template exceeds 1; the highest-similarity makeup template is the makeup template, among the at least one makeup template, with the highest similarity to the face image.
In combination with any embodiment of the present application, the apparatus further includes: a display unit 16, configured to display the first makeup template to be processed before the makeup of the first makeup template to be processed is migrated to the face image of the user to obtain a first virtual makeup image;
the migration unit 14 is configured to, when receiving an instruction to select to use the first to-be-processed makeup template, execute the step of migrating the makeup of the first to-be-processed makeup template to the face image of the user to obtain a first virtual makeup image;
the device further comprises: a second processing unit 17, configured to, in a case where an instruction of dissatisfaction with the first makeup template to be processed is received, take the makeup template with the second-highest similarity to the face image among the at least one makeup template as the first makeup template to be processed, or enter the self-service makeup mode.
In combination with any embodiment of the present application, the apparatus further includes: a display unit 16, configured to display the first virtual makeup image;
the output unit 11 is further configured to output makeup auxiliary information when receiving an instruction of satisfaction with the makeup effect of the first virtual makeup image; the makeup auxiliary information is used for guiding the user to carry out actual makeup so as to obtain the makeup effect of the first makeup template to be processed.
In combination with any embodiment of the present application, the at least one makeup template further includes a second makeup template to be processed different from the first makeup template to be processed, and the migration unit 14 is further configured to:
and under the condition that an instruction of dissatisfaction with the first virtual makeup image is received, transferring the makeup of the second makeup template to be processed to the face image to obtain a second virtual makeup image.
With reference to any embodiment of the present application, the obtaining unit 12 is further configured to:
acquiring an image after actual makeup; the image after the actual makeup is an image obtained after the user performs actual makeup according to the makeup auxiliary information;
the device further comprises: and the detection unit is used for detecting the matching degree of the image after the actual makeup and the image after the first virtual makeup to obtain a matching degree result of the image after the actual makeup and the image after the first virtual makeup.
In combination with any embodiment of the present application, the apparatus further includes: a third processing unit 18, configured not to use the first makeup template library in response to receiving selection of the self-service makeup mode.
With reference to any embodiment of the present application, the obtaining unit 12 is further configured to:
acquiring a sample makeup template and a second makeup template library;
the device further comprises: and the updating unit 19 is configured to update the makeup template in the second makeup template library by using the sample makeup template to obtain a first makeup template library.
In combination with any embodiment of the present application, the apparatus further includes: a comparison unit 20, configured to compare the sample makeup template with the makeup templates in the second makeup template library to obtain a similarity set before the makeup template in the second makeup template library is updated by using the sample makeup template to obtain a first makeup template library;
the updating unit 19 is specifically configured to:
and under the condition that the maximum value in the similarity set is larger than a similarity threshold value, updating the makeup template in the second makeup template library by using the sample makeup template to obtain a first makeup template library.
With reference to any embodiment of the present application, the updating unit 19 is specifically configured to:
fusing the sample makeup template and the first makeup template to obtain a second makeup template; the first makeup template is a makeup template corresponding to the maximum value in the similarity set in the second makeup template library;
and replacing the first makeup template in the second makeup template library by using the second makeup template to obtain the first makeup template library.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present application may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Fig. 5 is a schematic diagram of a hardware structure of an image processing apparatus according to an embodiment of the present disclosure. The image processing apparatus 2 includes a processor 21, a memory 22, an input device 23, and an output device 24. The processor 21, the memory 22, the input device 23 and the output device 24 are coupled by a connector, which includes various interfaces, transmission lines or buses, etc., and the embodiment of the present application is not limited thereto. It should be appreciated that in various embodiments of the present application, coupled refers to being interconnected in a particular manner, including being directly connected or indirectly connected through other devices, such as through various interfaces, transmission lines, buses, and the like.
The processor 21 may be one or more Graphics Processing Units (GPUs), and in the case that the processor 21 is one GPU, the GPU may be a single-core GPU or a multi-core GPU. Alternatively, the processor 21 may be a processor group composed of a plurality of GPUs, and the plurality of processors are coupled to each other through one or more buses. Alternatively, the processor may be other types of processors, and the like, and the embodiments of the present application are not limited.
The memory 22 may be used to store computer program instructions and various types of computer program code for executing the solutions of the present application. Optionally, the memory includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM) or compact disc read-only memory (CD-ROM), and is used for the associated instructions and data.
The input means 23 are for inputting data and/or signals and the output means 24 are for outputting data and/or signals. The input device 23 and the output device 24 may be separate devices or may be an integral device.
It is understood that, in the embodiment of the present application, the memory 22 may be used to store not only the relevant instructions but also relevant data. For example, the memory 22 may store the character attributes and the facial image of the user acquired through the input device 23, or the first virtual makeup image obtained through the processor 21, and so on; the embodiment of the present application does not limit the data specifically stored in the memory.
It will be appreciated that fig. 5 only shows a simplified design of the image processing apparatus. In practical applications, the image processing apparatus may further include other necessary components, including but not limited to any number of input/output devices, processors and memories, and all image processing apparatuses that can implement the embodiments of the present application are within the scope of protection of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It is also clear to those skilled in the art that the descriptions of the various embodiments of the present application have different emphasis, and for convenience and brevity of description, the same or similar parts may not be repeated in different embodiments, so that the parts that are not described or not described in detail in a certain embodiment may refer to the descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in or transmitted over a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)), or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., Digital Versatile Disk (DVD)), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, and when executed, may include the processes of the above method embodiments. And the aforementioned storage medium includes: various media that can store program codes, such as a read-only memory (ROM) or a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Claims (15)

1. A method of image processing, the method comprising:
outputting a makeup mode in response to receiving an instruction to start makeup; the makeup mode comprises a mode of recommending a makeup template and a mode of self-service makeup;
in response to receiving selection of the mode of recommending a makeup template, acquiring character attributes of a user;
and searching a first makeup template library by using the character attributes to obtain at least one makeup template with feature data matching the character attributes.
2. The method of claim 1, wherein the at least one makeup template comprises a first makeup template to be processed, the method further comprising:
acquiring a face image of a user;
and transferring the makeup of the first makeup template to be processed to the face image to obtain a first virtual makeup image.
3. The method of claim 2, wherein the obtaining the character attributes of the user comprises:
and performing character attribute extraction processing on the face image to obtain the character attribute of the user in the face image.
4. The method according to claim 2 or 3, wherein the at least one makeup template comprises a face image region;
before the step of transferring the makeup of the first makeup template to be processed to the face image to obtain a first virtual makeup image, the method further comprises the following steps:
taking the highest-similarity makeup template as the first makeup template to be processed under the condition that the number of the at least one makeup template exceeds 1; the highest-similarity makeup template is the makeup template, among the at least one makeup template, with the highest similarity to the face image.
5. The method according to any one of claims 2 to 4, wherein before the transferring the makeup of the first makeup template to be processed to the facial image of the user to obtain a first virtual makeup image, the method further comprises:
displaying the first makeup template to be processed;
under the condition that an instruction of selecting to use the first to-be-processed makeup template is received, executing the step of transferring the makeup of the first to-be-processed makeup template to the face image of the user to obtain a first virtual makeup image;
and under the condition that an instruction of dissatisfaction with the first makeup template to be processed is received, taking the makeup template with the second-highest similarity to the face image among the at least one makeup template as the first makeup template to be processed, or entering the self-service makeup mode.
6. The method according to any one of claims 2 to 5, further comprising:
displaying the first virtual makeup image;
outputting makeup auxiliary information under the condition that an instruction of satisfaction with the makeup effect of the first virtual makeup image is received; the makeup auxiliary information is used for guiding a user to carry out actual makeup so as to obtain the makeup effect of the first makeup template to be processed.
7. The method of any one of claims 2 to 5, wherein the at least one makeup template further comprises a second makeup template to be processed that is different from the first makeup template to be processed, the method further comprising:
and under the condition that an instruction of dissatisfaction with the first virtual makeup image is received, transferring the makeup of the second makeup template to be processed to the face image to obtain a second virtual makeup image.
8. The method of claim 6, further comprising:
acquiring an image after actual makeup; the image after the actual makeup is an image obtained after the user performs actual makeup according to the makeup auxiliary information;
and carrying out matching degree detection on the image after the actual makeup and the image after the first virtual makeup to obtain a matching degree result of the image after the actual makeup and the image after the first virtual makeup.
9. The method of claim 1, wherein the first makeup template library is not used in response to receiving selection of the mode of self-service makeup.
10. The method according to any one of claims 1 to 9, further comprising:
acquiring a sample makeup template and a second makeup template library;
and updating the makeup template in the second makeup template library by using the sample makeup template to obtain a first makeup template library.
11. The method of claim 10, wherein prior to said updating the makeup template in the second makeup template library using the sample makeup template to obtain the first makeup template library, the method further comprises:
comparing the sample makeup template with makeup templates in the second makeup template library to obtain a similarity set;
updating the makeup template in the second makeup template library by using the sample makeup template to obtain a first makeup template library, including:
and under the condition that the maximum value in the similarity set is larger than a similarity threshold value, updating the makeup template in the second makeup template library by using the sample makeup template to obtain a first makeup template library.
12. The method of claim 11, wherein updating the makeup template in the second makeup template library using the sample makeup template to obtain a first makeup template library comprises:
fusing the sample makeup template and the first makeup template to obtain a second makeup template; the first makeup template is a makeup template corresponding to the maximum value in the similarity set in the second makeup template library;
and replacing the first makeup template in the second makeup template library by using the second makeup template to obtain the first makeup template library.
13. An apparatus for image processing, the apparatus comprising:
the output unit is used for outputting a makeup mode in response to receiving an instruction to start makeup; the makeup mode comprises a mode of recommending a makeup template and a mode of self-service makeup;
the obtaining unit is used for acquiring character attributes of a user in response to receiving selection of the mode of recommending a makeup template;
and the retrieval unit is used for retrieving a first makeup template library by using the character attributes to obtain at least one makeup template with feature data matching the character attributes.
14. An electronic device, comprising: a processor, input means, output means and a memory for storing computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method of any of claims 1 to 12.
15. A computer-readable storage medium, in which a computer program is stored, the computer program comprising program instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 12.
CN202110136553.XA 2021-02-01 2021-02-01 Image processing method and device, electronic device and storage medium Pending CN112819718A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110136553.XA CN112819718A (en) 2021-02-01 2021-02-01 Image processing method and device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110136553.XA CN112819718A (en) 2021-02-01 2021-02-01 Image processing method and device, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN112819718A true CN112819718A (en) 2021-05-18

Family

ID=75861097

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110136553.XA Pending CN112819718A (en) 2021-02-01 2021-02-01 Image processing method and device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN112819718A (en)

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018029963A1 (en) * 2016-08-08 2018-02-15 パナソニックIpマネジメント株式会社 Make-up assistance apparatus and make-up assistance method
CN106446207A (en) * 2016-09-30 2017-02-22 北京美到家科技有限公司 Makeup database creating method, personalized makeup aiding method and personalized makeup aiding device
KR20180110842A (en) * 2017-03-30 2018-10-11 최진은 Customized semi-permanent make-up recommendation system based on virtual experience and its service method
CN107317974A (en) * 2017-08-23 2017-11-03 三星电子(中国)研发中心 A kind of makeups photographic method and device
CN107886484A (en) * 2017-11-30 2018-04-06 广东欧珀移动通信有限公司 U.S. face method, apparatus, computer-readable recording medium and electronic equipment
CN108171143A (en) * 2017-12-25 2018-06-15 深圳市美丽控电子商务有限公司 Makeups method, smart mirror and storage medium
CN110110118A (en) * 2017-12-27 2019-08-09 广东欧珀移动通信有限公司 Dressing recommended method, device, storage medium and mobile terminal
CN108053365A (en) * 2017-12-29 2018-05-18 百度在线网络技术(北京)有限公司 For generating the method and apparatus of information
CN108229415A (en) * 2018-01-17 2018-06-29 广东欧珀移动通信有限公司 Information recommendation method, device, electronic equipment and computer readable storage medium
CN110135929A (en) * 2018-02-02 2019-08-16 英属开曼群岛商玩美股份有限公司 It is rendered in the system, method and storage media of virtual makeup application program
CN108932654A (en) * 2018-06-12 2018-12-04 苏州诚满信息技术有限公司 A kind of virtually examination adornment guidance method and device
CN109063671A (en) * 2018-08-20 2018-12-21 三星电子(中国)研发中心 Method and device for intelligent cosmetic
CN109376661A (en) * 2018-10-29 2019-02-22 百度在线网络技术(北京)有限公司 Method and apparatus for output information
KR20200107469A (en) * 2019-03-08 2020-09-16 주식회사 에이아이네이션 A method for providing recommendation services of personal makeup styles based on beauty scores
CN111783511A (en) * 2019-10-31 2020-10-16 北京沃东天骏信息技术有限公司 Beauty treatment method, device, terminal and storage medium
CN111291642A (en) * 2020-01-20 2020-06-16 深圳市商汤科技有限公司 Dressing method, dressing device, electronic equipment and storage medium
CN111539882A (en) * 2020-04-17 2020-08-14 华为技术有限公司 Interactive method for assisting makeup, terminal and computer storage medium
CN111968248A (en) * 2020-08-11 2020-11-20 深圳追一科技有限公司 Intelligent makeup method and device based on virtual image, electronic equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
TALEB ALASHKAR et al.: "Application Design Study of Chinese Female Makeup Recommendations", Proceedings of the AAAI Conference on Artificial Intelligence, vol. 31, no. 1, 12 February 2017 (2017-02-12), pages 941-947 *
TALEB ALASHKAR et al.: "Rule-Based Facial Makeup Recommendation System", 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), 29 June 2017 (2017-06-29), pages 325-330 *
LI, JIE: "Research on Real-time Virtual Makeup and Recommendation Methods Based on Image Processing", China Master's Theses Full-text Database, Information Science and Technology, vol. 2018, no. 6, 15 June 2018 (2018-06-15), pages 138-1645 *
LI, YANGFAN: "Research on Facial Makeup Recommendation Algorithms for Oriental Women", China Master's Theses Full-text Database, Information Science and Technology, vol. 2019, no. 1, 15 January 2019 (2019-01-15), pages 138-5034 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344837A (en) * 2021-06-28 2021-09-03 展讯通信(上海)有限公司 Face image processing method and device, computer readable storage medium and terminal
CN113642481A (en) * 2021-08-17 2021-11-12 百度在线网络技术(北京)有限公司 Recognition method, training method, device, electronic equipment and storage medium
JP7308318B1 (en) 2022-03-04 2023-07-13 株式会社Zozo Information processing device, information processing method and information processing program
JP7308317B1 (en) 2022-03-04 2023-07-13 株式会社Zozo Information processing device, information processing method and information processing program
WO2023166910A1 (en) * 2022-03-04 2023-09-07 株式会社Zozo Information processing device, information processing method, and information processing program
WO2023166911A1 (en) * 2022-03-04 2023-09-07 株式会社Zozo Information processing device, information processing method, and information processing program

Similar Documents

Publication Publication Date Title
CN112819718A (en) Image processing method and device, electronic device and storage medium
US10799010B2 (en) Makeup application assist device and makeup application assist method
EP3627392A1 (en) Object identification method, system and device, and storage medium
CN100468463C (en) Method,apparatua and computer program for processing image
WO2021147920A1 (en) Makeup processing method and apparatus, electronic device, and storage medium
CN111354079A (en) Three-dimensional face reconstruction network training and virtual face image generation method and device
JP4435809B2 (en) Virtual makeup apparatus and method
CN108053365A (en) For generating the method and apparatus of information
CN108920490A (en) Assist implementation method, device, electronic equipment and the storage medium of makeup
CN110110118A (en) Dressing recommended method, device, storage medium and mobile terminal
JP2004094917A (en) Virtual makeup device and method therefor
CN108932654B (en) Virtual makeup trial guidance method and device
CN113362263B (en) Method, apparatus, medium and program product for transforming an image of a virtual idol
CN110235169A (en) Evaluation system of making up and its method of operating
CN109741438B (en) Three-dimensional face modeling method, device, equipment and medium
US11978242B2 (en) Systems and methods for improved facial attribute classification and use thereof
CN110110611A (en) Portrait attribute model construction method, device, computer equipment and storage medium
CN110866139A (en) Cosmetic treatment method, device and equipment
CN111862116A (en) Animation portrait generation method and device, storage medium and computer equipment
CN111968248A (en) Intelligent makeup method and device based on virtual image, electronic equipment and storage medium
US11961169B2 (en) Digital makeup artist
CN112102157A (en) Video face changing method, electronic device and computer readable storage medium
Park et al. An automatic virtual makeup scheme based on personal color analysis
CN116830073A (en) Digital color palette
KR102289824B1 (en) System and method for recommending color of cosmetic product by sharing information with influencer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination