CN111784611A - Portrait whitening method, portrait whitening device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN111784611A
CN111784611A (application CN202010636778.7A)
Authority
CN
China
Prior art keywords
portrait
image
whitening
network
mask
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010636778.7A
Other languages
Chinese (zh)
Other versions
CN111784611B (en)
Inventor
周铭柯
李启东
邹嘉伟
陈进山
何恕预
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Meitu Technology Co Ltd
Original Assignee
Xiamen Meitu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Meitu Technology Co Ltd filed Critical Xiamen Meitu Technology Co Ltd
Priority to CN202010636778.7A priority Critical patent/CN111784611B/en
Publication of CN111784611A publication Critical patent/CN111784611A/en
Application granted granted Critical
Publication of CN111784611B publication Critical patent/CN111784611B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T5/77
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Abstract

Embodiments of the present application provide a portrait whitening method and apparatus, an electronic device, and a readable storage medium, relating to the technical field of image processing. The method first obtains an image to be whitened that contains a portrait, then inputs the image into a portrait whitening model to obtain a whitening result image. The portrait whitening model is the trained portrait whitening main network of a pre-constructed portrait processing network comprising a portrait whitening main network and a portrait mask secondary network, obtained by training with portrait images as training samples. As a result, when the trained portrait whitening main network whitens the portrait in the image to be whitened, more image detail is retained and distortion of the image caused by whitening is avoided.

Description

Portrait whitening method, portrait whitening device, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for whitening a human image, an electronic device, and a readable storage medium.
Background
Portrait whitening is a retouching operation frequently performed by most beauty lovers. Manual retouching requires first matting out the skin area of the portrait, then adjusting the skin color, and finally smoothing the matted edge area so that the transition between the skin area and other areas is coordinated and natural. Such retouching requires a certain skill, and retouching a single photo takes considerable time, which is unfriendly to most beauty lovers. It is therefore urgent to develop an algorithm that can intelligently whiten portrait skin with one click.
At present, an image is generally whitened with a filter, that is, by adjusting the color of the whole image. However, this approach is hard to tailor to the skin area, affects the color of the image background, produces a strong artificial "filtered" look, and rarely achieves the desired retouching effect.
How to retain more image detail while whitening the portrait is therefore a problem worth studying.
Disclosure of Invention
In view of the above, the present application provides a method, an apparatus, an electronic device and a readable storage medium for whitening a portrait to solve the above problems.
The embodiment of the application can be realized as follows:
in a first aspect, an embodiment of the present application provides a portrait whitening method, including:
acquiring an image to be whitened, which contains a portrait;
and inputting the image to be whitened into a portrait whitening model for whitening to obtain a whitening result image, where the portrait whitening model is the trained portrait whitening main network of a pre-constructed portrait processing network comprising a portrait whitening main network and a portrait mask secondary network, obtained by training with portrait images as training samples.
In an alternative embodiment, the portrait whitening model is trained by the following steps:
acquiring a portrait image and a target image, wherein the target image is obtained by whitening the face of a portrait in the portrait image;
taking the portrait images as training samples, taking the target images as labels, and training the portrait processing network by adopting a pre-constructed loss function to obtain a trained portrait processing network;
and taking the main portrait whitening network in the trained portrait processing network as the portrait whitening model.
In an alternative embodiment, the target image comprises a target mask image and a target portrait image, and the portrait whitening primary network comprises a portrait mask perception sub-network and a portrait whitening sub-network;
the step of training the portrait processing network by using the portrait images as training samples and the target images as labels and adopting a pre-constructed loss function to obtain the trained portrait processing network comprises the following steps:
inputting the portrait image into the portrait mask perception sub-network, and performing mask perception on the portrait image by using the portrait mask perception sub-network to obtain a mask perception image;
inputting the mask perception image into the portrait mask sub-network, and performing portrait mask processing on the mask perception image by using the portrait mask sub-network to obtain a preliminary mask image;
inputting the mask perception image into the portrait whitening sub-network, and performing face whitening on the mask perception image by using the portrait whitening sub-network to obtain a whitened primary result image;
calculating a loss value of the loss function according to the preliminary result image, the preliminary mask image, the target mask image and the target portrait image;
and updating the parameters of the portrait processing network according to the loss value until the loss value meets a preset condition, so as to obtain the trained portrait processing network.
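The five training steps above can be sketched as a single forward-and-loss pass. This is a minimal illustration only: the function names are invented, and toy stand-in functions replace the real convolutional sub-networks, whose layer structure the patent gives only as an image (Table 1). The semantic loss is omitted here because it requires a pretrained VGG model.

```python
import numpy as np

def mask_perception_subnet(portrait):
    # Stand-in for the portrait mask perception sub-network (shared features).
    return portrait * 0.5

def mask_subnet(features):
    # Stand-in for the portrait mask secondary network: a preliminary mask.
    return (features > 0.2).astype(np.float32)

def whitening_subnet(features):
    # Stand-in for the portrait whitening sub-network: a preliminary result.
    return np.clip(features * 1.8, 0.0, 1.0)

def training_forward(portrait, target_portrait, target_mask):
    features = mask_perception_subnet(portrait)       # step 1: mask perception
    prelim_mask = mask_subnet(features)               # step 2: preliminary mask
    prelim_result = whitening_subnet(features)        # step 3: preliminary result
    # step 4: loss terms (semantic loss omitted; it needs a pretrained VGG)
    l1 = np.mean(np.abs(target_portrait - prelim_result))
    l2 = np.mean((target_mask - prelim_mask) ** 2)
    return prelim_result, prelim_mask, l1 + l2        # step 5 updates the parameters
```

A real implementation would backpropagate the returned loss through all three sub-networks; the sketch only shows the data flow between them.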
In an alternative embodiment, the loss functions include a semantic loss function, an L1 loss function, and a first L2 loss function, the loss values including a first output value of the semantic loss function, a second output value of the L1 loss function, and a third output value of the first L2 loss function;
the step of calculating the loss value of the loss function according to the preliminary result image, the preliminary mask image, the target mask image and the target portrait image comprises:
calculating a first output value of the semantic loss function by using the preliminary result image and the target portrait image;
calculating a second output value of the L1 loss function using the preliminary result image and the target portrait image;
calculating a third output value of the first L2 loss function using the preliminary mask image and the target mask image.
In an optional implementation, updating the parameters of the portrait processing network according to the loss value until the loss value meets a preset condition to obtain the trained portrait processing network includes:
calculating a weighted sum of the first output value, the second output value, and the third output value;
judging whether the weighted sum is smaller than a preset threshold value or not;
if so, stopping updating the parameters of the portrait processing network to obtain the trained portrait processing network;
if not, updating the parameters of the portrait processing network according to the first output value, the second output value and the third output value, and repeatedly executing the steps until the weighted sum is smaller than the preset threshold value, so as to obtain the trained portrait processing network.
In an alternative embodiment, the semantic loss function includes a pre-trained VGG model and a second L2 loss function;
the step of calculating a first output value of the semantic loss function using the preliminary result image and the target portrait image comprises:
inputting the preliminary result image into the VGG model to obtain a first feature map;
inputting the target portrait image into the VGG model to obtain a second feature map;
and calculating the output value of the second L2 loss function using the first feature map and the second feature map, and taking that output value as the first output value.
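The feature-space comparison described above can be sketched as follows. The real semantic loss runs both images through a pretrained VGG model; here a fixed hypothetical extractor `phi` (a simple 2x2 average pooling) stands in for VGG, since the patent does not specify which VGG variant or layer is used, and the function names are invented for this sketch.

```python
import numpy as np

def phi(image):
    # Hypothetical stand-in for the pretrained VGG model: pools the image
    # into a coarse "feature map" (the real model would output conv features).
    h, w = image.shape[:2]
    cropped = image[: h - h % 2, : w - w % 2]
    return cropped.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))

def semantic_loss(prelim_result, target_portrait):
    p = phi(prelim_result)      # first feature map
    q = phi(target_portrait)    # second feature map
    # second L2 loss between the two feature maps -> first output value
    return np.mean((p - q) ** 2)
```

Because the comparison happens in feature space rather than pixel space, this term penalizes differences in texture and edge structure rather than raw color, which is exactly the role the patent assigns to the semantic loss.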
In a second aspect, an embodiment of the present application provides a portrait whitening apparatus, including:
the first acquisition module is used for acquiring an image to be whitened, which comprises a portrait;
and the whitening module is used for inputting the image to be whitened into a portrait whitening model for whitening to obtain a whitening result image, where the portrait whitening model is the trained portrait whitening main network obtained by training, with portrait images as training samples, a pre-constructed portrait processing network comprising a portrait whitening main network and a portrait mask secondary network.
In an alternative embodiment, the apparatus further comprises:
the second acquisition module is used for acquiring a portrait image and a target image, wherein the target image is obtained by whitening the face of a portrait in the portrait image;
the training module is used for taking the portrait images as training samples, taking the target images as labels, and training the portrait processing network by adopting a pre-constructed loss function to obtain a trained portrait processing network;
and the portrait whitening model acquisition module is used for taking a portrait whitening main network in the trained portrait processing network as the portrait whitening model.
In a third aspect, an embodiment of the present application provides an electronic device, where the electronic device includes a processor, a memory, and a bus, where the memory stores machine-readable instructions executable by the processor, and when the electronic device runs, the processor and the memory communicate with each other through the bus, and the processor executes the machine-readable instructions to perform the steps of the portrait whitening method according to any one of the foregoing embodiments.
In a fourth aspect, an embodiment of the present application provides a readable storage medium, in which a computer program is stored, and the computer program, when executed, implements the portrait whitening method according to any one of the foregoing embodiments.
Embodiments of the present application provide a portrait whitening method, a portrait whitening apparatus, an electronic device, and a readable storage medium. Because the portrait whitening main network and the portrait mask secondary network are trained jointly, the trained portrait whitening main network retains more image detail when whitening the portrait and avoids distortion of the image caused by whitening.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, several embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; for those skilled in the art, other related drawings can be derived from these drawings without inventive effort.
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 2 is a flowchart of a portrait whitening method according to an embodiment of the present application.
Fig. 3 is a schematic diagram illustrating training of a portrait whitening model according to an embodiment of the present application.
Fig. 4 is one of portrait images provided in the embodiments of the present application.
Fig. 5 is one of the target images provided in the embodiment of the present application, which corresponds to the portrait image shown in fig. 4.
Fig. 6 is an object mask image included in the object image shown in fig. 5.
Fig. 7 is a functional block diagram of a portrait whitening apparatus according to an embodiment of the present application.
Reference numerals: 100 - electronic device; 110 - memory; 120 - processor; 130 - portrait whitening apparatus; 131 - first acquisition module; 132 - whitening module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present application, it should be noted that terms such as "upper", "lower", "inner", and "outer", if used, indicate orientations or positional relationships based on those shown in the drawings or those in which the product of the present application is usually placed in use. They are used only for convenience and simplicity of description, do not indicate or imply that the referred device or element must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the present application.
Furthermore, the appearances of the terms "first," "second," and the like, if any, are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
It should be noted that the features of the embodiments of the present application may be combined with each other without conflict.
As introduced in the background art, portrait whitening is a retouching operation frequently performed by beauty lovers. Manual retouching requires first matting out the skin area of the portrait, then adjusting the skin color, and finally smoothing the matted edge area so that the transition between the skin area and other areas is coordinated and natural. Such retouching requires a certain skill, and retouching a single photo takes considerable time, which is unfriendly to most beauty lovers. It is therefore urgent to develop an algorithm that can intelligently whiten portrait skin with one click.
At present, an image is generally whitened with a filter, that is, by adjusting the color of the whole image. However, this approach is hard to tailor to the skin area, affects the color of the image background, produces a strong artificial "filtered" look, and rarely achieves the desired retouching effect.
How to retain more image detail while whitening the portrait is therefore a problem worth studying.
In view of this, embodiments of the present application provide a portrait whitening method, apparatus, electronic device and readable storage medium, in which a to-be-whitened image including a portrait is input into a pre-trained portrait whitening model, and then a whitening result image is obtained. The above scheme is explained in detail below.
Referring to fig. 1, fig. 1 is a block diagram of an electronic device 100 according to an embodiment of the present disclosure. The electronic device 100 may include a processor 120, a memory 110, a portrait whitening apparatus 130, and a bus. The memory 110 stores machine-readable instructions executable by the processor 120; when the electronic device 100 runs, the processor 120 and the memory 110 communicate via the bus, and the processor 120 executes the machine-readable instructions to perform the steps of the portrait whitening method.
The memory 110, the processor 120, and other components are electrically connected to each other directly or indirectly to enable signal transmission or interaction.
For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The portrait whitening apparatus 130 includes at least one software functional module that can be stored in the memory 110 in the form of software or firmware. The processor 120 is configured to execute executable modules stored in the memory 110, such as the software functional modules or computer programs included in the portrait whitening apparatus 130.
The memory 110 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or the like.
The processor 120 may be an integrated circuit chip having signal processing capabilities. The processor 120 may be a general-purpose processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and so on.
The processor may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or execute the methods, steps, and logical blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In the embodiment of the present application, the memory 110 is used for storing a program, and the processor 120 is used for executing the program after receiving the execution instruction. The method defined by the process disclosed in any of the embodiments of the present application may be applied to the processor 120, or may be implemented by the processor 120.
In the embodiment of the present application, the electronic device 100 may be, but is not limited to, a smart phone, a personal computer, a tablet computer, or the like having a processing function.
It will be appreciated that the configuration shown in figure 1 is merely illustrative. Electronic device 100 may also have more or fewer components than shown in FIG. 1, or a different configuration than shown in FIG. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
As a possible implementation, an embodiment of the present application provides a portrait whitening method; please refer to fig. 2, which is a flowchart of the portrait whitening method provided in the embodiment of the present application.
The following is described in detail with reference to the specific flow shown in fig. 2.
Step S1, an image to be whitened including a portrait is acquired.
Step S2, inputting the image to be whitened into a portrait whitening model for whitening to obtain a whitening result image, where the portrait whitening model is the trained portrait whitening main network of a pre-constructed portrait processing network comprising a portrait whitening main network and a portrait mask secondary network, obtained by training with portrait images as training samples.
The image to be whitened may be captured by the current electronic device, or it may have been stored in the memory by the current electronic device in advance and retrieved from the memory when needed.
As a possible implementation scenario, after the image to be whitened containing a portrait is obtained in either of the two ways above, it is sent to the portrait whitening model, and the whitening result image is obtained.
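Once the model is loaded, the one-click flow just described amounts to a single function call. A minimal sketch, with a trivial stand-in "model" (a brightness lift) in place of the trained portrait whitening main network; the helper names are invented for illustration:

```python
import numpy as np

def load_whitening_model():
    # Stand-in for loading the trained portrait whitening main network.
    # Here the "model" is just a callable that brightens pixel values.
    return lambda img: np.clip(img * 1.2, 0.0, 1.0)

def whiten(image_to_whiten, model):
    # Feed the image to be whitened into the model; the output is the
    # whitening result image.
    return model(image_to_whiten)

model = load_whitening_model()
photo = np.full((4, 4, 3), 0.5)   # toy image to be whitened
result = whiten(photo, model)
```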
A pre-constructed portrait processing network comprising a portrait whitening main network and a portrait mask secondary network is trained, and only the trained portrait whitening main network is retained after training. Because the two networks are trained jointly in the training stage, the main network whitens the portrait while retaining more image detail and avoiding image distortion.
It is understood that the portrait whitening model may be obtained by pre-training in other electronic devices and then migrated to the current electronic device, or may be obtained by pre-training in the current electronic device and storing.
It should be understood that, in other embodiments, the order of some steps in the image whitening method of the embodiment of the present application may be interchanged according to actual needs, or some steps may be omitted or deleted.
As an alternative embodiment, please refer to fig. 3 in combination, the portrait whitening model is trained by the following steps:
step S100, a portrait image and a target image are obtained, wherein the target image is obtained by whitening the face of the portrait in the portrait image.
And step S200, taking the portrait images as training samples and the target images as labels, and training the portrait processing network by adopting a pre-constructed loss function to obtain the trained portrait processing network.
And step S300, taking the main portrait whitening network in the trained portrait processing network as a portrait whitening model.
The portrait image may comprise a plurality of portrait images, and the target images obtained by whitening the faces in those portrait images likewise comprise a plurality of images, with the target images corresponding one to one to the portrait images.
For example, as shown in fig. 4 and 5, fig. 4 is one of the portrait images provided in the embodiment of the present application, and fig. 5 is one of the target images corresponding to the portrait image shown in fig. 4 in the embodiment of the present application.
The target image in fig. 5 is obtained by locally whitening the facial skin of the portrait contained in the portrait image, without altering the background outside the portrait.
The portrait processing network is trained by adopting the portrait image and the target image, so that the trained portrait processing network has the effect of whitening the face of the portrait, and more image details are reserved.
Further, the target image comprises a target mask image and a target portrait image, and the portrait whitening main network comprises a portrait mask perception sub-network and a portrait whitening sub-network.
As shown in fig. 6, fig. 6 is an object mask image included in the object image shown in fig. 5.
As a possible implementation manner, please refer to table 1 in combination, where table 1 is a structural schematic diagram of a portrait processing network in the embodiment of the present application.
TABLE 1
[Table 1 appears as an image in the original publication; it specifies, layer by layer, the structure of the portrait processing network — the convolution kernel, padding, stride, input channels (imaps), and output channels (omaps) of each layer — as described in the following paragraphs.]
Here WSkM denotes the portrait processing network (Whitening Skin Model). ConX_ReLU indicates that a ReLU activation follows the convolution operation of convolution layer X. Skip_LayerX_LayerY indicates that the (activated) output of layer LayerX is added to the (activated) output of layer LayerY; for example, WSkM_Skip2_De4_Con6 indicates that the output of the WSkM_Dec4_ReLU layer is added to the output of the WSkM_Con6_ReLU layer.
Kernel is the convolution kernel size, padding is the padding parameter, stride is the stride of the convolution kernel, imaps is the number of input channels, omaps is the number of output channels, Output is the output, and Mask is the preliminary mask image.
As shown in the table, the portrait mask sub-network comprises the WSkM_Con11_ReLU layer through the WSkM_Con15 layer. The portrait whitening main network comprises the WSkM_Con1_ReLU layer through the WSkM_Con15 layer, the portrait mask perception sub-network comprises the WSkM_Con1_ReLU layer through the WSkM_Dec5_ReLU layer, and the portrait whitening sub-network comprises the WSkM_Dec6_ReLU layer through the WSkM_Dec10 layer.
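The Skip_LayerX_LayerY operation described above is a plain elementwise addition of two activated feature maps. A minimal sketch (the layer values are toy data invented for illustration):

```python
import numpy as np

def relu(x):
    # ReLU activation applied after a convolution, per the ConX_ReLU naming.
    return np.maximum(x, 0.0)

# Two activated layer outputs with matching shapes, standing in for
# WSkM_Dec4_ReLU and WSkM_Con6_ReLU in the patent's naming.
dec4_out = relu(np.array([[-1.0, 2.0], [3.0, -4.0]]))
con6_out = relu(np.array([[5.0, -6.0], [-7.0, 8.0]]))

# WSkM_Skip2_De4_Con6: add the two activated outputs elementwise.
skip_out = dec4_out + con6_out
```

Such skip connections let decoder layers reuse fine-grained encoder features, which is consistent with the stated goal of preserving image detail.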
Thus, given the structure of the portrait processing network, it can be trained as follows to obtain the trained portrait processing network:
firstly, inputting a portrait image into a portrait mask perception sub-network, and performing mask perception on the portrait image by using the portrait mask perception sub-network to obtain a mask perception image.
And then, inputting the mask perception image into a portrait mask sub-network, and performing portrait mask processing on the mask perception image by using the portrait mask sub-network to obtain a preliminary mask image.
And then, inputting the mask perception image into a portrait whitening sub-network, and carrying out face whitening on the mask perception image by using the portrait whitening sub-network to obtain a whitened preliminary result image.
And then, calculating the loss value of the loss function according to the initial result image, the initial mask image, the target mask image and the target portrait image.
And finally, updating parameters of the portrait processing network according to the loss value until the loss value meets a preset condition, and obtaining the trained portrait processing network.
The portrait images may be randomly scaled in advance to sizes between 256 and 512, which increases the diversity of the training samples and thereby improves the robustness of the trained portrait processing network.
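The random scaling above can be sketched as follows. The patent does not specify the resampling method, so this sketch uses nearest-neighbor index resizing to a random square side in [256, 512]; the helper name is invented:

```python
import numpy as np

def random_rescale(img, lo=256, hi=512, rng=None):
    # Pick a random target side in [lo, hi] and resize with a
    # nearest-neighbor index mapping (a simple stand-in resampler).
    rng = np.random.default_rng() if rng is None else rng
    side = int(rng.integers(lo, hi + 1))
    h, w = img.shape[:2]
    rows = np.arange(side) * h // side
    cols = np.arange(side) * w // side
    return img[rows][:, cols]
```

Applying this per epoch means the network sees each training portrait at many scales, which is the stated source of the added sample diversity.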
Further, the loss functions include a semantic loss function, an L1 loss function, and a first L2 loss function, and the loss values include a first output value of the semantic loss function, a second output value of the L1 loss function, and a third output value of the first L2 loss function.
As an alternative embodiment, the loss value of the loss function may be calculated from the preliminary result image, the preliminary mask image, the target mask image and the target portrait image by:
first, a first output value of a semantic loss function is calculated by using the preliminary result image and the target portrait image.
Then, using the preliminary result image and the target portrait image, a second output value of the L1 loss function is calculated.
Finally, a third output value of the first L2 loss function is calculated using the preliminary mask image and the target mask image.
Where the L1 loss function is:
$$D_{L1} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - f(x_i)\right|$$
where $D_{L1}$ is the L1 loss function, $y_i$ is the i-th pixel value in the target portrait image, $f(x_i)$ is the i-th pixel value in the preliminary result image, and $n$ is the number of pixels in the target portrait image (equal to that in the preliminary result image).
The first L2 loss function is:
$$D_{L2} = \frac{1}{n}\sum_{i=1}^{n}\left(z_i - g(x_i)\right)^2$$
where $D_{L2}$ is the first L2 loss function, $z_i$ is the i-th pixel value in the target mask image, $g(x_i)$ is the i-th pixel value in the preliminary mask image, and $n$ is the number of pixels in the preliminary mask image (equal to that in the target mask image).
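The two pixelwise losses above translate directly into code; a minimal sketch with invented function names:

```python
import numpy as np

def l1_loss(target, pred):
    # D_L1 = (1/n) * sum_i |y_i - f(x_i)|: mean absolute difference between
    # target-portrait and preliminary-result pixel values.
    return np.mean(np.abs(target - pred))

def l2_loss(target, pred):
    # D_L2 = (1/n) * sum_i (z_i - g(x_i))^2: mean squared difference between
    # target-mask and preliminary-mask pixel values.
    return np.mean((target - pred) ** 2)
```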
Further, the semantic loss function includes a pre-trained VGG model and a second L2 loss function.
As an alternative embodiment, the first output value of the semantic loss function may be calculated by:
First, the preliminary result image is input into the VGG model to obtain a first feature map.
Then, the target portrait image is input into the VGG model to obtain a second feature map.
Finally, the output value of the second L2 loss function is calculated using the first and second feature maps, and that output value is taken as the first output value.
The second L2 loss function is:
$$D_{L3} = \frac{1}{n}\sum_{i=1}^{n}\left(p_i - q(x_i)\right)^2$$
where $D_{L3}$ is the second L2 loss function, $p_i$ is the i-th pixel value in the first feature map, $q(x_i)$ is the i-th pixel value in the second feature map, and $n$ is the number of pixels in the first feature map (equal to that in the second feature map).
As a possible implementation manner, the parameters of the portrait processing network may be updated according to the loss value by the following method until the loss value satisfies the preset condition, so as to obtain the trained portrait processing network:
first, a weighted sum of the first output value, the second output value, and the third output value is calculated.
Secondly, whether the weighted sum is smaller than a preset threshold value is judged.
And if so, stopping updating the parameters of the portrait processing network to obtain the trained portrait processing network.
If not, updating the parameters of the portrait processing network according to the first output value, the second output value and the third output value, and repeatedly executing the steps until the weighted sum is smaller than a preset threshold value, so as to obtain the trained portrait processing network.
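The stopping test just described is a weighted sum compared against a preset threshold. A minimal sketch; the default weights and threshold here are illustrative placeholders, not values fixed by the patent:

```python
def should_stop(first, second, third, w=(1.0, 1e5, 1e5), threshold=1.0):
    # Weighted sum of the semantic-loss, L1-loss, and first-L2-loss outputs;
    # training stops once the sum drops below the preset threshold.
    total = w[0] * first + w[1] * second + w[2] * third
    return total < threshold
```

In a training loop, `should_stop` would be evaluated after each parameter update; while it returns False, the three output values keep driving further updates.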
As an alternative embodiment, the weight of the first output value may be 1, the weight of the second output value may be 10^5, and the weight of the third output value may be 10^5.
Therefore, the semantic loss function ensures that the preliminary result image and the target image are consistent in details such as texture and edges; the L1 loss function supervises the color information between the preliminary result image and the target image, ensuring that their colors are similar; and the first L2 loss function supervises the mask prediction, so that the portrait whitening main network whitens the facial skin region of the portrait without changing other image details.
Based on the same inventive concept, and referring to fig. 7, an embodiment of the present application further provides a portrait whitening apparatus corresponding to the portrait whitening method described above, the apparatus comprising:
the first obtaining module 131 is configured to obtain an image to be whitened, which includes a portrait.
The whitening module 132 is configured to input the image to be whitened into the portrait whitening model for whitening to obtain a whitening result image, where the portrait whitening model is the trained portrait whitening main network obtained by training a pre-constructed portrait processing network that includes a portrait whitening main network and a portrait mask secondary network, with the portrait image as a training sample.
Further, the apparatus further comprises:
and the second acquisition module is used for acquiring a portrait image and a target image, wherein the target image is obtained by whitening the face of the portrait in the portrait image.
And the training module is used for training the portrait processing network by taking the portrait images as training samples and the target images as labels and adopting a pre-constructed loss function to obtain the trained portrait processing network.
And the portrait whitening model acquisition module is used for taking the portrait whitening main network in the trained portrait processing network as a portrait whitening model.
Since the apparatus in the embodiment of the present application solves the problem on a principle similar to that of the portrait whitening method in the embodiment of the present application, the implementation of the apparatus can refer to the implementation of the method, and repeated details are not described again.
The embodiment of the present application further provides a readable storage medium in which a computer program is stored; when the computer program is executed, the portrait whitening method described above is implemented.
To sum up, the embodiments of the present application provide a portrait whitening method and apparatus, an electronic device, and a readable storage medium. The method first obtains an image to be whitened that contains a portrait, and then inputs the image to be whitened into a portrait whitening model for whitening to obtain a whitening result image, where the portrait whitening model is the trained portrait whitening main network obtained by training a pre-constructed portrait processing network, comprising a portrait whitening main network and a portrait mask secondary network, with the portrait image as a training sample. In this way, when the trained portrait whitening main network whitens the portrait, more image details are retained, and distortion of the image due to whitening is avoided.
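One way to picture the training/deployment split summarized above: the mask secondary network supervises training only, and deployment keeps the whitening main network alone. A toy sketch, where the dictionary layout and key names are assumptions for illustration:

```python
def build_portrait_whitening_model(trained_processing_net: dict):
    """Keep only the portrait whitening main network for deployment; the
    portrait mask secondary network exists only to supervise training."""
    return trained_processing_net["whitening_main"]

def whiten(image, portrait_whitening_model):
    """Inference: feed the image to be whitened to the deployed main network
    and return the whitening result image."""
    return portrait_whitening_model(image)
```

Discarding the secondary network at deployment time keeps inference cost at a single forward pass of the main network while still benefiting from the mask supervision learned during training.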
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures show the architecture, functionality, and operation of possible implementations of apparatuses, methods, and computer program products according to the embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
It should also be noted that, in this document, the terms "comprise", "include", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A portrait whitening method, the method comprising:
acquiring an image to be whitened, which contains a portrait;
and inputting the image to be whitened into a portrait whitening model for whitening to obtain a whitening result image, wherein the portrait whitening model is the trained portrait whitening main network obtained by training a pre-constructed portrait processing network, comprising a portrait whitening main network and a portrait mask secondary network, with the portrait image as a training sample.
2. The portrait whitening method according to claim 1, wherein the portrait whitening model is trained by the following steps:
acquiring a portrait image and a target image, wherein the target image is obtained by whitening the face of a portrait in the portrait image;
taking the portrait images as training samples, taking the target images as labels, and training the portrait processing network by adopting a pre-constructed loss function to obtain a trained portrait processing network;
and taking the main portrait whitening network in the trained portrait processing network as the portrait whitening model.
3. The portrait whitening method according to claim 2, wherein the target image comprises a target mask image and a target portrait image, and the portrait whitening main network comprises a portrait mask perception sub-network and a portrait whitening sub-network;
the step of training the portrait processing network by using the portrait images as training samples and the target images as labels and adopting a pre-constructed loss function to obtain the trained portrait processing network comprises the following steps:
inputting the portrait image into the portrait mask perception sub-network, and performing mask perception on the portrait image by using the portrait mask perception sub-network to obtain a mask perception image;
inputting the mask perception image into the portrait mask secondary network, and performing portrait mask processing on the mask perception image by using the portrait mask secondary network to obtain a preliminary mask image;
inputting the mask perception image into the portrait whitening sub-network, and performing face whitening on the mask perception image by using the portrait whitening sub-network to obtain a whitened primary result image;
calculating a loss value of the loss function according to the preliminary result image, the preliminary mask image, the target mask image and the target portrait image;
and updating the parameters of the portrait processing network according to the loss value until the loss value meets a preset condition, so as to obtain the trained portrait processing network.
4. The portrait whitening method according to claim 3, wherein the loss functions comprise a semantic loss function, an L1 loss function, and a first L2 loss function, and the loss values comprise a first output value of the semantic loss function, a second output value of the L1 loss function, and a third output value of the first L2 loss function;
the step of calculating the loss value of the loss function according to the preliminary result image, the preliminary mask image, the target mask image and the target portrait image comprises:
calculating a first output value of the semantic loss function by using the preliminary result image and the target portrait image;
calculating a second output value of the L1 loss function using the preliminary result image and the target portrait image;
calculating a third output value of the first L2 loss function using the preliminary mask image and the target mask image.
5. The portrait whitening method according to claim 4, wherein the step of updating the parameters of the portrait processing network according to the loss values until the loss values satisfy a preset condition to obtain the trained portrait processing network comprises:
calculating a weighted sum of the first output value, the second output value, and the third output value;
judging whether the weighted sum is smaller than a preset threshold value or not;
if so, stopping updating the parameters of the portrait processing network to obtain the trained portrait processing network;
if not, updating the parameters of the portrait processing network according to the first output value, the second output value and the third output value, and repeatedly executing the steps until the weighted sum is smaller than the preset threshold value, so as to obtain the trained portrait processing network.
6. The portrait whitening method according to claim 4, wherein the semantic loss function comprises a pre-trained VGG model and a second L2 loss function;
the step of calculating a first output value of the semantic loss function using the preliminary result image and the target portrait image comprises:
inputting the preliminary result image into the VGG model to obtain a first feature map;
inputting the target portrait image into the VGG model to obtain a second feature map;
and calculating an output value of the second L2 loss function by using the first feature map and the second feature map, and taking the output value as the first output value.
7. A portrait whitening apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring an image to be whitened, which comprises a portrait;
and the whitening module is used for inputting the image to be whitened into a portrait whitening model for whitening to obtain a whitening result image, wherein the portrait whitening model is the trained portrait whitening main network obtained by training a pre-constructed portrait processing network, comprising a portrait whitening main network and a portrait mask secondary network, with the portrait image as a training sample.
8. The portrait whitening apparatus according to claim 7, further comprising:
the second acquisition module is used for acquiring a portrait image and a target image, wherein the target image is obtained by whitening the face of a portrait in the portrait image;
the training module is used for taking the portrait images as training samples, taking the target images as labels, and training the portrait processing network by adopting a pre-constructed loss function to obtain a trained portrait processing network;
and the portrait whitening model acquisition module is used for taking a portrait whitening main network in the trained portrait processing network as the portrait whitening model.
9. An electronic device, comprising a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device is running, the processor and the memory communicate via the bus, and the processor executes the machine-readable instructions to perform the steps of the portrait whitening method according to any one of claims 1-6.
10. A readable storage medium, wherein a computer program is stored in the readable storage medium, and when executed, the computer program implements the portrait whitening method according to any one of claims 1 to 6.
CN202010636778.7A 2020-07-03 2020-07-03 Portrait whitening method, device, electronic equipment and readable storage medium Active CN111784611B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010636778.7A CN111784611B (en) 2020-07-03 2020-07-03 Portrait whitening method, device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010636778.7A CN111784611B (en) 2020-07-03 2020-07-03 Portrait whitening method, device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111784611A true CN111784611A (en) 2020-10-16
CN111784611B CN111784611B (en) 2023-11-03

Family

ID=72758642

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010636778.7A Active CN111784611B (en) 2020-07-03 2020-07-03 Portrait whitening method, device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111784611B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023160219A1 (en) * 2022-02-28 2023-08-31 荣耀终端有限公司 Light supplementing model training method, image processing method, and related device thereof

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105787878A (en) * 2016-02-25 2016-07-20 杭州格像科技有限公司 Beauty processing method and device
CN105825486A (en) * 2016-04-05 2016-08-03 北京小米移动软件有限公司 Beautifying processing method and apparatus
CN106611402A (en) * 2015-10-23 2017-05-03 腾讯科技(深圳)有限公司 Image processing method and device
CN108230271A (en) * 2017-12-31 2018-06-29 广州二元科技有限公司 Cosmetic method on face foundation cream in a kind of digital picture based on Face datection and facial feature localization
CN108537292A (en) * 2018-04-10 2018-09-14 上海白泽网络科技有限公司 Semantic segmentation network training method, image, semantic dividing method and device
CN109359527A (en) * 2018-09-11 2019-02-19 杭州格像科技有限公司 Hair zones extracting method and system neural network based
CN109410131A (en) * 2018-09-28 2019-03-01 杭州格像科技有限公司 The face U.S. face method and system of confrontation neural network are generated based on condition
CN109785258A (en) * 2019-01-10 2019-05-21 华南理工大学 A kind of facial image restorative procedure generating confrontation network based on more arbiters
WO2019128508A1 (en) * 2017-12-28 2019-07-04 Oppo广东移动通信有限公司 Method and apparatus for processing image, storage medium, and electronic device
US10381105B1 (en) * 2017-01-24 2019-08-13 Bao Personalized beauty system
CN110263737A (en) * 2019-06-25 2019-09-20 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, terminal device and readable storage medium storing program for executing
CN110580688A (en) * 2019-08-07 2019-12-17 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
CN110706179A (en) * 2019-09-30 2020-01-17 维沃移动通信有限公司 Image processing method and electronic equipment
CN111161131A (en) * 2019-12-16 2020-05-15 上海传英信息技术有限公司 Image processing method, terminal and computer storage medium
CN111163265A (en) * 2019-12-31 2020-05-15 成都旷视金智科技有限公司 Image processing method, image processing device, mobile terminal and computer storage medium
CN111160379A (en) * 2018-11-07 2020-05-15 北京嘀嘀无限科技发展有限公司 Training method and device of image detection model and target detection method and device
CN111311485A (en) * 2020-03-17 2020-06-19 Oppo广东移动通信有限公司 Image processing method and related device
CN111345834A (en) * 2018-12-21 2020-06-30 佳能医疗系统株式会社 X-ray CT system and method

Also Published As

Publication number Publication date
CN111784611B (en) 2023-11-03

Similar Documents

Publication Publication Date Title
CN109670558B (en) Digital image completion using deep learning
CN110473141B (en) Image processing method, device, storage medium and electronic equipment
CN109493350B (en) Portrait segmentation method and device
CN111950638B (en) Image classification method and device based on model distillation and electronic equipment
CN106778928B (en) Image processing method and device
CN108846814A (en) Image processing method, device, readable storage medium storing program for executing and computer equipment
CN103247036A (en) Multiple-exposure image fusion method and device
CN109117760A (en) Image processing method, device, electronic equipment and computer-readable medium
CN104866755B (en) Setting method and device for background picture of application program unlocking interface and electronic equipment
US20210097651A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN107172354A (en) Method for processing video frequency, device, electronic equipment and storage medium
CN109871871A (en) Image-recognizing method, device and electronic equipment based on optical neural network structure
CN111028142A (en) Image processing method, apparatus and storage medium
CN109005368A (en) A kind of generation method of high dynamic range images, mobile terminal and storage medium
CN106650615A (en) Image processing method and terminal
JP2021531571A (en) Certificate image extraction method and terminal equipment
CN109005367A (en) A kind of generation method of high dynamic range images, mobile terminal and storage medium
CN107256543A (en) Image processing method, device, electronic equipment and storage medium
CN110503704A (en) Building method, device and the electronic equipment of three components
CN111784611A (en) Portrait whitening method, portrait whitening device, electronic equipment and readable storage medium
CN115410274A (en) Gesture recognition method and device and storage medium
CN112465709A (en) Image enhancement method, device, storage medium and equipment
CN111815533B (en) Dressing processing method, device, electronic equipment and readable storage medium
CN103871014A (en) Image color changing method and device
CN107644455B (en) Face image synthesis method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant