CN112686818A - Face image processing method and device and electronic equipment

Info

Publication number
CN112686818A
CN112686818A (application CN202011588167.6A)
Authority
CN
China
Prior art keywords
face image
image
network model
eyebrow
sample set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011588167.6A
Other languages
Chinese (zh)
Inventor
刘行
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202011588167.6A priority Critical patent/CN112686818A/en
Publication of CN112686818A publication Critical patent/CN112686818A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a face image processing method and apparatus and an electronic device, and belongs to the technical field of image processing. The method comprises: determining an eyebrow region in a first face image; performing edge detection on a stray-hair region in the eyebrow region to obtain a stray-hair mask image; removing the stray-hair mask image from the first face image to obtain a second face image; and inputting the second face image into a skin generator model to generate skin texture at the position of the stray-hair mask image in the second face image, so as to obtain a target corrected image. The scheme of the application can intelligently trim the eyebrows in a face image, automatically removing stray eyebrow hairs and generating realistic skin texture, so that the result shows no trace of retouching; it is convenient for the user and improves the overall aesthetics of the eyebrows.

Description

Face image processing method and device and electronic equipment
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a face image processing method and device and electronic equipment.
Background
At present, users pay more and more attention to the quality of mobile-phone selfies: beyond whitening and beautifying the face as a whole, they pay special attention to the local aesthetic details of the face, for example requiring their eyebrows to be compact and tidy without changing their original eyebrow shape.
In the process of implementing the present application, the inventor finds that at least the following problems exist in the prior art:
the existing eyebrow-trimming methods mainly require the user to manually edit the eyebrows with various beautification applications and to remove the stray hairs beside the eyebrows by hand; such manual eyebrow trimming is time-consuming, labor-intensive, and unfriendly to the user.
Disclosure of Invention
Embodiments of the present application aim to provide a face image processing method, a face image processing apparatus, and an electronic device, which can solve the prior-art problem that a user must manually edit the eyebrows and remove the stray hairs beside them with various beautification applications, which is time-consuming, labor-intensive, and unfriendly to the user.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a face image processing method, including:
determining an eyebrow region in a first face image;
performing edge detection on a stray-hair region in the eyebrow region to obtain a stray-hair mask image;
removing the stray-hair mask image from the first face image to obtain a second face image;
and inputting the second face image into a skin generator model to generate skin texture at the position of the stray-hair mask image in the second face image, so as to obtain a target corrected image.
In a second aspect, an embodiment of the present application provides a face image processing apparatus, including:
a first determining module, configured to determine an eyebrow region in a first face image;
a detection module, configured to perform edge detection on a stray-hair region in the eyebrow region to obtain a stray-hair mask image;
a first processing module, configured to remove the stray-hair mask image from the first face image to obtain a second face image;
and a second processing module, configured to input the second face image into a skin generator model to generate skin texture at the position of the stray-hair mask image in the second face image, so as to obtain a target corrected image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the present application, an eyebrow region in a first face image is determined, and the stray-hair region within the eyebrow region is analyzed to obtain a stray-hair mask image; the stray-hair mask image is then removed from the first face image to obtain a second face image; and the second face image is input into a skin generator model to generate skin texture at the position of the stray-hair mask image, so as to obtain a target corrected image. Eyebrows in a face image can thus be trimmed intelligently: stray hairs are removed automatically and realistic skin texture is generated, achieving a natural result with no trace of retouching, which is convenient for the user and helps improve the overall aesthetics of the eyebrows.
Drawings
FIG. 1 is a schematic flowchart of a face image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a stray-hair mask image according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a real face image from a first face sample set according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a face image in a second face sample set according to an embodiment of the present application;
FIG. 5 is an architecture diagram of a generative adversarial network according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a stray-hair detection template according to an embodiment of the present application;
FIG. 7 is a block diagram of a face image processing apparatus according to an embodiment of the present application;
FIG. 8 is a hardware configuration diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects it joins.
The following describes the face image processing method provided by the embodiments of the present application in detail through specific embodiments and application scenarios thereof, with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present application provides a face image processing method, including the following steps:
step 11: determining an eyebrow region in a first face image;
In this step, a first face image is displayed on the electronic device; a first input is received, and in response to the first input, the eyebrow region in the first face image is acquired. Specifically, a normalized eyebrow image region can be obtained based on the positions of the eyebrow feature points.
Illustratively, taking a mobile phone as the electronic device and considering the inference speed available on the phone, the eyebrow feature points are obtained with a deep network based on MobileNetV2 modules. Through depthwise separable convolutions and an inverted residual structure, the MobileNetV2 module improves the running speed of the network while maintaining accuracy.
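As a minimal sketch of the inverted residual building block that MobileNetV2 stacks, assuming PyTorch; the expansion factor and channel counts are illustrative assumptions, not the patent's specification:

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """MobileNetV2-style block: 1x1 expand -> 3x3 depthwise -> 1x1 project."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1, expand: int = 6):
        super().__init__()
        hidden = in_ch * expand
        self.use_skip = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),          # pointwise expansion
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride, 1,           # depthwise convolution:
                      groups=hidden, bias=False),             # groups=hidden separates channels
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False),         # linear projection (no activation)
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.block(x)
        return x + out if self.use_skip else out  # residual shortcut when shapes match
```

Stacking such blocks keeps most of the computation in cheap depthwise and pointwise convolutions, which is what makes on-device landmark detection fast enough.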
Step 12: performing edge detection on the stray-hair region in the eyebrow region to obtain a stray-hair mask image;
A mask is a binary image composed of 0s and 1s. FIG. 2 shows a schematic stray-hair mask image: the white portion is the stray-hair region, whose pixels have value 1, and the black portion has pixel value 0.
Step 13: removing the stray-hair mask image from the first face image to obtain a second face image;
In this step, removing the stray-hair mask image from the first face image means setting the corresponding stray-hair region in the first face image to black, thereby removing the stray hairs and obtaining the second face image.
Step 14: inputting the second face image into a skin generator model to generate skin texture at the position of the stray-hair mask image in the second face image, so as to obtain a target corrected image.
Specifically, the second face image is used as the input of a skin generator model trained on the basis of a generative adversarial network (GAN) model; running the skin generator model generates realistic skin texture at the positions in the second face image from which the stray-hair mask image was removed. In other words, the skin generator model trained on the GAN fills the missing stray-hair region of the second face image with realistic skin texture. The output of the skin generator model then serves as the target corrected image for the first face image.
In this embodiment, the eyebrow region in the first face image is determined and the stray-hair region within it is analyzed to obtain a stray-hair mask image; the stray-hair mask image is then removed from the first face image to obtain a second face image; and the second face image is input into the skin generator model to generate skin texture at the position of the stray-hair mask image, so as to obtain the target corrected image. In this way, eyebrows in a face image are trimmed intelligently: stray hairs are removed automatically and realistic skin texture is generated in their place, achieving a natural result with no trace of retouching, which is convenient for the user and improves the overall appearance of the eyebrows.
In one embodiment, the skin generator model is trained on the basis of a generative adversarial network (GAN) model, and the step of training the skin generator model based on the GAN model includes:
acquiring a first face sample set;
randomly generating mask image lines in each face image in the first face sample set to simulate the skin defects of a real stray-hair region, thereby obtaining a second face sample set;
inputting the second face sample set into the generator network model of the GAN model to obtain a generated sample set in which the defective skin has been repaired;
inputting the generated sample set and the first face sample set into the discriminator network model of the GAN model to obtain a discrimination result;
updating the generator network model by using the discrimination result, continuing to input the second face sample set into the generator network model to obtain a generated sample set in which the defective skin has been repaired, and performing iterative training;
and after the discrimination result satisfies the iterative-training end condition, taking the updated generator network model as the skin generator model.
It should be noted that directly detecting stray hairs to obtain a large set of training images and then performing supervised training with a convolutional neural network (CNN) cannot achieve the goal: the image regions found by stray-hair detection themselves contain stray hairs, i.e., a clean training sample set for the network cannot be obtained, so the CNN cannot learn to remove the stray hairs and regenerate the texture.
In this embodiment, the training set is therefore acquired in an unsupervised manner: a certain quantity (e.g., about 50,000) of high-definition pictures is collected as the first face sample set, for example by searching public databases or crawling web pages; such unlabeled pictures are very easy to obtain.
Illustratively, FIG. 3 is a schematic diagram of a complete real face image in the first face sample set, and FIG. 4 is a schematic diagram of the face image obtained by randomly generating mask image lines in the real face image of FIG. 3 to simulate the skin defects of a real stray-hair region; the mask image lines are black lines resembling stray hairs. FIG. 3 and FIG. 4 form a training data pair, from which the generator network model of the GAN learns how to fill the missing black-line parts of FIG. 4 with skin, so as to output a realistic skin region.
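A minimal sketch of how such stray-hair-like mask lines could be painted onto a clean face, assuming OpenCV and NumPy; the stroke count, segment jitter, and thickness ranges are illustrative assumptions:

```python
import cv2
import numpy as np

def random_hair_mask(h: int, w: int, n_strokes: int = 8, rng=None) -> np.ndarray:
    """Draw thin random polylines that mimic stray-hair-shaped skin defects.

    Returns a binary mask (1 = defect pixel), analogous to the black line
    pattern of FIG. 4.
    """
    rng = rng or np.random.default_rng()
    mask = np.zeros((h, w), dtype=np.uint8)
    for _ in range(n_strokes):
        x, y = int(rng.integers(0, w)), int(rng.integers(0, h))
        for _ in range(int(rng.integers(3, 8))):          # short chain of jittered segments
            nx = int(np.clip(x + rng.integers(-15, 16), 0, w - 1))
            ny = int(np.clip(y + rng.integers(-15, 16), 0, h - 1))
            cv2.line(mask, (x, y), (nx, ny), color=1, thickness=int(rng.integers(1, 3)))
            x, y = nx, ny
    return mask

# Defective training input (FIG. 4) from a clean face (FIG. 3), face in [0, 1]:
# defective = face * (1 - random_hair_mask(*face.shape[:2]))[..., None]
```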
It is noted that the generative adversarial network model comprises a generator network model and a discriminator network model: the generator network model is used to generate from the input data a fake picture that looks as real as possible, and the discriminator network model is used to judge whether an input picture is real or fake. Adversarial training means that the generator network model generates pictures to deceive the discriminator network model, while the discriminator network model judges whether the generated picture and the corresponding real picture are real or fake; during training, the abilities of the two models grow stronger and stronger until a steady state is finally reached.
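A minimal sketch of one such adversarial training step, assuming PyTorch; the binary cross-entropy losses and the added L1 reconstruction term are illustrative choices rather than the patent's prescribed losses, and `generator`, `discriminator`, and the optimizers are placeholders:

```python
import torch
import torch.nn.functional as F

def train_step(generator, discriminator, g_opt, d_opt, clean, defective):
    """One GAN step: clean = real faces (A), defective = masked faces (B)."""
    fake = generator(defective)                    # repaired faces A'

    # Discriminator: push real pictures toward 1, generated pictures toward 0.
    d_real = discriminator(clean)
    d_fake = discriminator(fake.detach())          # detach: don't update G here
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: fool the discriminator while staying close to the clean face.
    d_fake = discriminator(fake)
    g_loss = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake)) +
              F.l1_loss(fake, clean))              # reconstruction term (assumption)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```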
In one embodiment, the generator network model is constructed based on a U-Net network structure with skip connections.
In this embodiment, because both high-level semantic features and the low-level texture of the facial skin need to be generated, the generator network model of the generative adversarial network is designed on the basis of a U-Net structure with skip connections: the information extracted by the lower network layers is concatenated with the information extracted by the upper convolution layers, which greatly improves the learning effect of the network.
The U-Net structure comprises a down-sampling path and an up-sampling path. The down-sampling path gradually captures context information, while the up-sampling path combines the information of the corresponding down-sampling layers with its own input to restore detail information and gradually recover image resolution.
As shown in FIG. 5, image B from the second face sample set is passed through convolutions, the convolution output is passed through dilated (atrous) convolutions, and the dilated-convolution output is passed through deconvolutions to finally generate the fake image A'. Here, down-sampling is performed by convolution and up-sampling by deconvolution, and the corresponding down-sampling and up-sampling features are further concatenated together through skip connections.
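A minimal sketch of such a U-Net-style generator with a dilated-convolution bottleneck, assuming PyTorch; the depth, channel widths, and dilation rates are illustrative assumptions:

```python
import torch
import torch.nn as nn

class UNetGenerator(nn.Module):
    """Toy U-Net: conv down-sampling, dilated bottleneck, deconv up-sampling,
    with skip connections concatenating encoder and decoder features."""
    def __init__(self, ch: int = 32):
        super().__init__()
        self.down1 = nn.Sequential(nn.Conv2d(3, ch, 4, 2, 1), nn.ReLU(inplace=True))
        self.down2 = nn.Sequential(nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.ReLU(inplace=True))
        self.bottleneck = nn.Sequential(            # dilated convs enlarge the receptive field
            nn.Conv2d(ch * 2, ch * 2, 3, 1, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(ch * 2, ch * 2, 3, 1, padding=4, dilation=4), nn.ReLU(inplace=True),
        )
        self.up2 = nn.Sequential(nn.ConvTranspose2d(ch * 4, ch, 4, 2, 1), nn.ReLU(inplace=True))
        self.up1 = nn.ConvTranspose2d(ch * 2, 3, 4, 2, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d1 = self.down1(x)                          # 1/2 resolution features
        d2 = self.down2(d1)                         # 1/4 resolution features
        b = self.bottleneck(d2)
        u2 = self.up2(torch.cat([b, d2], dim=1))    # skip connection at 1/4 scale
        out = self.up1(torch.cat([u2, d1], dim=1))  # skip connection at 1/2 scale
        return torch.sigmoid(out)                   # repaired image A' in [0, 1]
```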
In one embodiment, the discriminator network model is constructed based on spectral normalization.
In this embodiment, using a discriminator network based on spectral normalization (SN) improves the stability of network training.
As shown in FIG. 5, the picture A' generated by the generator network model and the original picture A from the first face sample set are input into the discriminator network model for real/fake discrimination, and the output discrimination result is used to update the generator network model.
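A minimal sketch of a spectrally normalized discriminator, assuming PyTorch, whose torch.nn.utils.spectral_norm wrapper implements SN; the patch-wise layer layout and widths are illustrative assumptions:

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SNDiscriminator(nn.Module):
    """Real/fake critic; spectral_norm bounds each layer's Lipschitz constant,
    which stabilizes adversarial training."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            spectral_norm(nn.Conv2d(3, ch, 4, 2, 1)), nn.LeakyReLU(0.2, inplace=True),
            spectral_norm(nn.Conv2d(ch, ch * 2, 4, 2, 1)), nn.LeakyReLU(0.2, inplace=True),
            spectral_norm(nn.Conv2d(ch * 2, ch * 4, 4, 2, 1)), nn.LeakyReLU(0.2, inplace=True),
            spectral_norm(nn.Conv2d(ch * 4, 1, 4, 1, 1)),   # per-patch real/fake logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)
```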
In an embodiment, before the step of performing edge detection on the stray-hair region in the eyebrow region to obtain the stray-hair mask image, the method further includes:
in the case where it is detected that a drawn eyebrow region exists in the eyebrow region, determining the stray-hair region according to the drawn eyebrow region, where the determined stray-hair region is a region that does not overlap with the drawn eyebrow region.
In this embodiment, for an eyebrow region that has already been drawn, if stray hairs exist around it, they lie in a region that does not overlap the drawn eyebrow region, so the drawn eyebrows will not be accidentally damaged or otherwise altered even when the skin generator network model is applied; if there are no stray hairs around the drawn eyebrows, the stray-hair mask image is entirely black, and again the drawn eyebrows cannot be accidentally damaged by the skin generator network model. Therefore, this embodiment causes no damage to an already-drawn eyebrow shape.
In one embodiment, step 12 includes:
analyzing the stray-hair region of the eyebrow region according to a predetermined stray-hair detection template;
and performing edge detection on the stray-hair region with the Canny operator to determine the stray-hair mask image.
Illustratively, FIG. 6 shows a schematic diagram of a stray-hair detection template, which can be obtained through extensive experimental comparison and analysis; the template specifies where to protect and where to detect. The white area in FIG. 6 is the stray-hair detection area, and the hatched (twill) area is the area to be protected. Stray hairs in the white area have distinct edge information, so edge detection is performed on the stray-hair region with the Canny operator to determine the stray-hair mask image, thereby realizing automatic stray-hair detection and obtaining the stray-hair mask image shown in FIG. 2.
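A minimal sketch of this automatic detection step, assuming OpenCV; the Canny thresholds, the dilation kernel, and the convention that the template is binary (1 = detection area, 0 = protected area) are illustrative assumptions:

```python
import cv2
import numpy as np

def detect_stray_hair_mask(eyebrow_bgr: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Canny edge detection restricted to the template's detection area.

    eyebrow_bgr: normalized eyebrow region (H x W x 3, uint8).
    template:    binary stray-hair detection template (H x W, 1 = detect).
    Returns a binary stray-hair mask (1 = stray-hair pixel), as in FIG. 2.
    """
    gray = cv2.cvtColor(eyebrow_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, threshold1=50, threshold2=150)   # thresholds are assumptions
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))     # thicken thin hair edges
    return (edges > 0).astype(np.uint8) * template           # keep detection area only
```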
In one embodiment, step 13 comprises:
obtaining a first target image by inverting the pixel values of the stray-hair mask image;
multiplying the first target image by the first face image to obtain the second face image, where the second face image is the first face image with the stray-hair mask image removed.
In this embodiment, the pixel value of the stray-hair region in the stray-hair mask image is 1; after inversion it becomes 0, so multiplying the first target image by the first face image yields the second face image, in which the stray-hair parts become black defect pixels.
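A minimal sketch of steps 13 and 14 combined, assuming NumPy and a trained skin_generator callable (a placeholder for the generator sketched earlier):

```python
import numpy as np

def remove_and_repair(face: np.ndarray, hair_mask: np.ndarray, skin_generator):
    """face: H x W x 3 float image in [0, 1]; hair_mask: H x W binary (1 = stray hair)."""
    inverted = 1 - hair_mask                    # first target image: hair pixels -> 0
    defective = face * inverted[..., None]      # second face image: hairs blacked out
    return skin_generator(defective)            # skin filled in -> target corrected image
```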
In this embodiment, after the skin generator network model has been trained, the actual inference stage only needs to detect the stray-hair mask image with the automatic stray-hair detection module, multiply the inverted mask with the original input image, and feed the result into the trained skin generator network model; realistic skin texture is then generated while the stray hairs are removed. The scheme therefore trims eyebrows intelligently without any manual operation: it not only finds the useless stray hairs beside the eyebrows but also generates realistic skin texture in their place. Moreover, there is no accidental damage: for users who have already drawn their eyebrows, the stray-hair detection mask is entirely black, so the drawn eyebrows cannot be accidentally damaged or otherwise altered even by the skin generator model.
It should be noted that, in the face image processing method provided in the embodiments of the present application, the execution subject may be a face image processing apparatus, or a control module in the face image processing apparatus for executing the face image processing method. In the embodiments of the present application, a face image processing apparatus executing the face image processing method is taken as an example to describe the face image processing apparatus provided herein.
As shown in fig. 7, an embodiment of the present invention further provides a face image processing apparatus 700, including:
a first determining module 701, configured to determine an eyebrow region in a first face image;
a detection module 702, configured to perform edge detection on a stray-hair region in the eyebrow region to obtain a stray-hair mask image;
a first processing module 703, configured to remove the stray-hair mask image from the first face image to obtain a second face image;
a second processing module 704, configured to input the second face image into a skin generator model to generate skin texture at the position of the stray-hair mask image in the second face image, so as to obtain a target corrected image.
Optionally, the skin generator model is trained on the basis of a generative adversarial network (GAN) model, and the face image processing apparatus 700 further includes:
a second acquisition module, configured to acquire a first face sample set;
a third acquisition module, configured to randomly generate mask image lines in each face image in the first face sample set to simulate the skin defects of a real stray-hair region, thereby obtaining a second face sample set;
a generating module, configured to input the second face sample set into the generator network model of the GAN model to obtain a generated sample set in which the defective skin has been repaired;
a discriminating module, configured to input the generated sample set and the first face sample set into the discriminator network model of the GAN model to obtain a discrimination result;
a training update module, configured to update the generator network model by using the discrimination result, continue to input the second face sample set into the generator network model to obtain a generated sample set in which the defective skin has been repaired, and perform iterative training;
and a third processing module, configured to take the updated generator network model as the skin generator model after the discrimination result satisfies the iterative-training end condition.
Optionally, the generator network model is constructed based on a U-Net network structure with skip connections.
Optionally, the apparatus 700 further includes:
a second determining module, configured to, in the case where it is detected that a drawn eyebrow region exists in the eyebrow region, determine the stray-hair region according to the drawn eyebrow region, the determined stray-hair region being a region that does not overlap with the drawn eyebrow region.
Optionally, the discriminator network model is constructed based on spectrum normalization.
Optionally, the detection module 702 includes:
a first detection submodule, configured to analyze the stray-hair region of the eyebrow region according to a predetermined stray-hair detection template;
a second detection submodule, configured to perform edge detection on the stray-hair region with the Canny operator to determine the stray-hair mask image.
Optionally, the first processing module 703 includes:
a first processing submodule, configured to obtain a first target image by inverting the pixel values of the stray-hair mask image;
a second processing submodule, configured to multiply the first target image by the first face image to obtain the second face image, where the second face image is the first face image with the stray-hair mask image removed.
The face image processing apparatus in the embodiments of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine; the embodiments of the present application are not particularly limited in this respect.
The face image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The face image processing apparatus provided in the embodiments of the present application can implement each process of the method embodiment of FIG. 1, and details are not described here again to avoid repetition.
The face image processing apparatus 700 in the embodiments of the present application determines the eyebrow region in a first face image and analyzes the stray-hair region within it to obtain a stray-hair mask image; it then removes the stray-hair mask image from the first face image to obtain a second face image, and inputs the second face image into a skin generator model to generate skin texture at the position of the stray-hair mask image, so as to obtain a target corrected image. Eyebrows in a face image can thus be trimmed intelligently: stray hairs are removed automatically and realistic skin texture is generated, achieving a natural result with no trace of retouching, which is convenient for the user and improves the overall appearance of the eyebrows.
Optionally, an electronic device is further provided in the embodiments of the present application, including a processor 810, a memory 809, and a program or instructions stored in the memory 809 and executable on the processor 810, where the program or instructions, when executed by the processor 810, implement each process of the above face image processing method embodiment and can achieve the same technical effect; details are not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 8 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 800 includes, but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, and a processor 810.
Those skilled in the art will appreciate that the electronic device 800 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 810 via a power management system, so as to manage charging, discharging, and power consumption through the power management system. The electronic device structure shown in FIG. 8 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, and details are omitted here.
The processor 810 is configured to determine an eyebrow region in a first face image;
perform edge detection on a stray-hair region in the eyebrow region to obtain a stray-hair mask image;
remove the stray-hair mask image from the first face image to obtain a second face image;
and input the second face image into a skin generator model to generate skin texture at the position of the stray-hair mask image in the second face image, so as to obtain a target corrected image.
The electronic device 800 in the embodiments of the present application determines the eyebrow region in a first face image and analyzes the stray-hair region within it to obtain a stray-hair mask image; it then removes the stray-hair mask image from the first face image to obtain a second face image, and inputs the second face image into a skin generator model to generate skin texture at the position of the stray-hair mask image, so as to obtain a target corrected image. Eyebrows in a face image can thus be trimmed intelligently: stray hairs are removed automatically and realistic skin texture is generated, achieving a natural result with no trace of retouching, which is convenient for the user and improves the overall appearance of the eyebrows.
Optionally, the processor 810 is further configured to obtain a first face sample set;
randomly generate mask image lines in each face image in the first face sample set to simulate the skin defects of a real stray-hair region, thereby obtaining a second face sample set;
input the second face sample set into the generator network model of the generative adversarial network (GAN) model to obtain a generated sample set in which the defective skin has been repaired;
input the generated sample set and the first face sample set into the discriminator network model of the GAN model to obtain a discrimination result;
update the generator network model by using the discrimination result, continue to input the second face sample set into the generator network model to obtain a generated sample set in which the defective skin has been repaired, and perform iterative training;
and after the discrimination result satisfies the iterative-training end condition, take the updated generator network model as the skin generator model.
Optionally, the generator network model is constructed based on a U-Net network structure with skip connections.
Optionally, the processor 810 is further configured to, in the case where it is detected that a drawn eyebrow region exists in the eyebrow region, determine the stray-hair region according to the drawn eyebrow region, where the determined stray-hair region is a region that does not overlap with the drawn eyebrow region.
Optionally, the discriminator network model is constructed based on spectrum normalization.
Optionally, the processor 810 is further configured to analyze the stray-hair region of the eyebrow region according to a predetermined stray-hair detection template;
and perform edge detection on the stray-hair region with the Canny operator to determine the stray-hair mask image.
Optionally, the processor 810 is further configured to obtain a first target image by inverting the pixel values of the stray-hair mask image;
and multiply the first target image by the first face image to obtain the second face image, where the second face image is the first face image with the stray-hair mask image removed.
It should be understood that in the embodiment of the present application, the input Unit 804 may include a Graphics Processing Unit (GPU) 8041 and a microphone 8042, and the Graphics Processing Unit 8041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 807 includes a touch panel 8071 and other input devices 8072. A touch panel 8071, also referred to as a touch screen. The touch panel 8071 may include two portions of a touch detection device and a touch controller. Other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 809 may be used to store software programs as well as various data including, but not limited to, application programs and operating systems. The processor 810 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 810.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned embodiment of the face image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above embodiment of the face image processing method, and can achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-a-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved; e.g., the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A face image processing method is characterized by comprising the following steps:
determining an eyebrow region in a first face image;
performing edge detection on a stray-hair region in the eyebrow region to obtain a stray-hair mask image;
removing the stray-hair mask image from the first face image to obtain a second face image;
and inputting the second face image into a skin generator model to generate skin texture at the position of the stray-hair mask image in the second face image, so as to obtain a target corrected image.
2. The face image processing method according to claim 1, wherein the skin generator model is trained on the basis of a generative adversarial network (GAN) model, and the step of training the skin generator model based on the GAN model specifically comprises:
acquiring a first face sample set;
randomly generating mask image lines in each face image in the first face sample set to simulate the skin defects of a real stray-hair region, thereby obtaining a second face sample set;
inputting the second face sample set into the generator network model of the GAN model to obtain a generated sample set in which the defective skin has been repaired;
inputting the generated sample set and the first face sample set into the discriminator network model of the GAN model to obtain a discrimination result;
updating the generator network model by using the discrimination result, continuing to input the second face sample set into the generator network model to obtain a generated sample set in which the defective skin has been repaired, and performing iterative training;
and after the discrimination result satisfies the iterative-training end condition, taking the updated generator network model as the skin generator model.
3. The face image processing method according to claim 2, wherein the generator network model is constructed based on a U-Net network structure with skip connections.
4. The face image processing method according to claim 2, wherein the discriminator network model is constructed based on spectral normalization.
5. The face image processing method according to claim 1, wherein before the step of performing edge detection on the stray-hair region in the eyebrow region to obtain the stray-hair mask image, the method further comprises:
in the case where it is detected that a drawn eyebrow region exists in the eyebrow region, determining the stray-hair region according to the drawn eyebrow region, wherein the determined stray-hair region is a region that does not overlap with the drawn eyebrow region.
6. A face image processing apparatus, comprising:
a first determining module, configured to determine an eyebrow region in a first face image;
a detection module, configured to perform edge detection on a stray-hair region in the eyebrow region to obtain a stray-hair mask image;
a first processing module, configured to remove the stray-hair mask image from the first face image to obtain a second face image;
and a second processing module, configured to input the second face image into a skin generator model to generate skin texture at the position of the stray-hair mask image in the second face image, so as to obtain a target corrected image.
7. The face image processing apparatus according to claim 6, wherein the skin generator model is trained on the basis of a generative adversarial network (GAN) model, the apparatus further comprising:
a first acquisition module, configured to acquire a first face sample set;
a second acquisition module, configured to randomly generate mask image lines in each face image in the first face sample set to simulate the skin defects of a real stray-hair region, thereby obtaining a second face sample set;
a generating module, configured to input the second face sample set into the generator network model of the GAN model to obtain a generated sample set in which the defective skin has been repaired;
a discriminating module, configured to input the generated sample set and the first face sample set into the discriminator network model of the GAN model to obtain a discrimination result;
a training update module, configured to update the generator network model by using the discrimination result, continue to input the second face sample set into the generator network model to obtain a generated sample set in which the defective skin has been repaired, and perform iterative training;
and a third processing module, configured to take the updated generator network model as the skin generator model after the discrimination result satisfies the iterative-training end condition.
8. The face image processing apparatus according to claim 7, wherein the generator network model is constructed based on a U-Net network structure with skip connections.
9. The facial image processing apparatus according to claim 7, wherein said discriminator network model is constructed based on spectral normalization.
10. The face image processing apparatus according to claim 6, further comprising:
a second determining module, configured to, in the case where it is detected that a drawn eyebrow region exists in the eyebrow region, determine the stray-hair region according to the drawn eyebrow region, wherein the determined stray-hair region is a region that does not overlap with the drawn eyebrow region.
CN202011588167.6A 2020-12-29 2020-12-29 Face image processing method and device and electronic equipment Pending CN112686818A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011588167.6A CN112686818A (en) 2020-12-29 2020-12-29 Face image processing method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN112686818A true CN112686818A (en) 2021-04-20

Family

ID=75454624

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011588167.6A Pending CN112686818A (en) 2020-12-29 2020-12-29 Face image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112686818A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000011144A (en) * 1998-06-18 2000-01-14 Shiseido Co Ltd Eyebrow deformation system
EP1298597A2 (en) * 2001-10-01 2003-04-02 L'oreal Simulation of effects of cosmetic products using a three-dimensional facial image
CN1475969A (en) * 2002-05-31 2004-02-18 Eastman Kodak Co Method and system for enhancing human image patterns
US20120299945A1 (en) * 2006-05-05 2012-11-29 Parham Aarabi Method, system and computer program product for automatic and semi-automatic modificatoin of digital images of faces
WO2016101883A1 (en) * 2014-12-24 2016-06-30 掌赢信息科技(上海)有限公司 Method for face beautification in real-time video and electronic equipment
CN106485222A (en) * 2016-10-10 2017-03-08 上海电机学院 A kind of method for detecting human face being layered based on the colour of skin
CN108121978A (en) * 2018-01-10 2018-06-05 马上消费金融股份有限公司 A kind of face image processing process, system and equipment and storage medium
CN110197462A (en) * 2019-04-16 2019-09-03 浙江理工大学 A kind of facial image beautifies in real time and texture synthesis method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409185A (en) * 2021-05-14 2021-09-17 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113409185B (en) * 2021-05-14 2024-03-05 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination