CN108596062B - Face picture real-time highlight removal method and device based on deep learning - Google Patents


Info

Publication number
CN108596062B
CN108596062B (application CN201810327486.8A)
Authority
CN
China
Prior art keywords
highlight
face
network
picture
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810327486.8A
Other languages
Chinese (zh)
Other versions
CN108596062A (en)
Inventor
徐枫 (Xu Feng)
王至博 (Wang Zhibo)
刘烨斌 (Liu Yebin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201810327486.8A
Publication of CN108596062A
Application granted
Publication of CN108596062B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/161 Human faces: Detection; Localisation; Normalisation
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V40/168 Human faces: Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method and a device for real-time highlight removal from face pictures based on deep learning. The method comprises the following steps: extracting a face region from a face picture by a face-fitting method; extracting a face highlight distribution map from the face region to build a data set consisting of face pictures and their highlight distribution maps; constructing a generative adversarial network (GAN) through deep learning and training it on the data set to obtain a highlight-removal network model; and removing the highlight from the face picture with the highlight-removal network model to obtain the highlight-removal result. The method outputs pictures with a strong sense of realism while removing highlights, is simple to implement, and has a wide range of applications.

Description

Face picture real-time highlight removal method and device based on deep learning
Technical Field
The invention relates to the technical field of computer vision, in particular to a method and a device for removing highlight of a face picture in real time based on deep learning.
Background
Highlight removal from face pictures is important in computer vision and image processing, with applications in face recognition, three-dimensional face reconstruction, and related fields. For example, in face recognition, highlights on a face picture reduce recognition accuracy; in three-dimensional face reconstruction, they increase the reconstruction error of the model. Because the human eye is very sensitive to the realism of face pictures, it is essential to remove highlights while preserving that realism.
However, removing highlights from face pictures in real time remains a challenging open research problem. Although traditional highlight-removal methods can extract the distribution of highlights on a face, they cannot guarantee the realism of the result, and this problem urgently needs to be solved.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, one objective of the present invention is to provide a deep-learning-based method for real-time highlight removal from face pictures, which outputs pictures with a strong sense of realism while removing highlights, is simple to implement, and has a wide range of applications.
Another objective of the present invention is to provide a deep-learning-based device for real-time highlight removal from face pictures.
To achieve the above objectives, an embodiment of the present invention provides a deep-learning-based method for real-time highlight removal from face pictures, comprising the following steps: extracting a face region from a face picture by a face-fitting method; extracting a face highlight distribution map from the face region to build a data set consisting of the face picture and the face highlight distribution map; constructing a generative adversarial network through deep learning and training it on the data set to obtain a highlight-removal network model; and removing the highlight from the face picture with the highlight-removal network model to obtain the highlight-removal result.
According to this method, a data set of face highlight distribution maps is built, a generative adversarial network is trained to obtain a highlight-removal network model, and the model then removes highlights from face pictures. The method distinguishes highlights automatically, outputs pictures with a strong sense of realism while removing them, is simple to implement, and has a wide range of applications.
In addition, the method for removing highlight in real time of the face picture based on deep learning according to the above embodiment of the present invention may further have the following additional technical features:
Further, in an embodiment of the present invention, extracting a face region from a face picture by a face-fitting method further comprises: obtaining feature points of the face picture by a facial-feature extraction method; establishing an energy equation from the correspondence between the feature points and the vertices of a three-dimensional face model; and extracting the face region through iterative optimization using the fitted three-dimensional face model.
Further, in an embodiment of the present invention, obtaining the highlight-removal network model further comprises: judging whether an image patch contains highlight according to the downsampled value of the highlight component of that patch and a preset threshold; when training the discriminator network of the generative adversarial network, using the discriminator to judge whether each patch contains highlight, wherein patches containing non-face regions are not judged; generating the highlight component with the generator network, wherein the final highlight-removal result equals the difference between the network input and the network output; and appending a rectified linear unit (ReLU) to the generator network to constrain its output to be non-negative.
Further, in an embodiment of the present invention, removing the highlight of the face picture with the highlight-removal network model further comprises: taking the face region as the input of the highlight-removal network model and subtracting the network output (the predicted highlight component) from it to obtain the highlight-removal result.
Further, in an embodiment of the present invention, when training the adversarial network, the error between the discriminator output and the target discrimination result is optimized according to:

$$L_D = -\mathbb{E}_{I,i,j,\,F(i,j)=0}\Big[\,T(i,j)\cdot\big[\log(D(I)(i,j)) + \log(D(G'(I))(i,j))\big] + (1 - T(i,j))\cdot\big[\log(1 - D(I)(i,j)) + \log(1 - D(G'(I))(i,j))\big]\Big];$$

where $\mathbb{E}$ denotes the mathematical expectation, $G'(I)$ is the difference between the input picture and the generator output (i.e., the final highlight-removal result), and $T(i,j)$ is the target discrimination result used to train the discriminator of the generative adversarial network.
To achieve the above objectives, an embodiment of another aspect of the present invention provides a deep-learning-based device for real-time highlight removal from face pictures, comprising: an extraction module for extracting a face region from a face picture by a face-fitting method; a data-set creation module for extracting a face highlight distribution map from the face region to build a data set consisting of the face picture and the face highlight distribution map; a training module for constructing a generative adversarial network through deep learning and training it on the data set to obtain a highlight-removal network model; and a removal module for removing the highlight of the face picture with the highlight-removal network model to obtain the highlight-removal result.
According to this device, a data set of face highlight distribution maps is built, a generative adversarial network is trained to obtain a highlight-removal network model, and the model removes highlights from face pictures. The device distinguishes highlights automatically, outputs pictures with a strong sense of realism while removing them, is simple to implement, and has a wide range of applications.
In addition, the device according to the above embodiment of the present invention may further have the following additional technical features:
Further, in an embodiment of the present invention, the extraction module is further configured to: obtain feature points of the face picture by a facial-feature extraction method; establish an energy equation from the correspondence between the feature points and the vertices of the three-dimensional face model; and extract the face region through iterative optimization using the fitted three-dimensional face model.
Further, in an embodiment of the present invention, the data-set creation module is configured to: judge whether an image patch contains highlight according to the downsampled value of the highlight component of that patch and a preset threshold; when training the discriminator network of the generative adversarial network, use the discriminator to judge whether each patch contains highlight, wherein patches containing non-face regions are not judged; generate the highlight component with the generator network, wherein the final highlight-removal result equals the difference between the network input and the network output; and append a rectified linear unit (ReLU) to the generator network to constrain its output to be non-negative.
Further, in an embodiment of the present invention, the removal module is further configured to: take the face region as the input of the highlight-removal network model and subtract the network output (the predicted highlight component) from it to obtain the highlight-removal result.
Further, in an embodiment of the present invention, an error-optimization module is configured to optimize, when training the adversarial network, the error between the discriminator output and the target discrimination result according to:

$$L_D = -\mathbb{E}_{I,i,j,\,F(i,j)=0}\Big[\,T(i,j)\cdot\big[\log(D(I)(i,j)) + \log(D(G'(I))(i,j))\big] + (1 - T(i,j))\cdot\big[\log(1 - D(I)(i,j)) + \log(1 - D(G'(I))(i,j))\big]\Big];$$

where $\mathbb{E}$ denotes the mathematical expectation, $G'(I)$ is the difference between the input picture and the generator output (i.e., the final highlight-removal result), and $T(i,j)$ is the target discrimination result used to train the discriminator of the generative adversarial network.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flowchart of the deep-learning-based method for real-time highlight removal from face pictures according to an embodiment of the present invention;
FIG. 2 is a flowchart of generating the data used to train the discriminator network, according to an embodiment of the present invention;
FIG. 3 is a flowchart of training the generative adversarial network, according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating the effect of the method according to an embodiment of the present invention; and
FIG. 5 is a schematic structural diagram of the deep-learning-based device for real-time highlight removal from face pictures according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The method and the device for removing highlight in real time of a face picture based on deep learning provided by the embodiment of the invention are described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for removing highlights from a face picture in real time based on deep learning according to an embodiment of the present invention.
As shown in fig. 1, the method for removing highlight in real time from a face picture based on deep learning includes the following steps:
in step S101, a face region is extracted from the face picture by a face fitting method.
It can be understood that the feature points of the face picture are obtained by a facial-feature extraction method; an energy equation is established from the correspondence between the feature points and the vertices of a three-dimensional face model; and the face region is extracted through iterative optimization using the fitted three-dimensional face model.
Further, in an embodiment of the present invention, the face-fitting method first extracts the face region from the face picture: feature points are detected in the picture, an energy equation is built from the correspondence between these feature points and the vertices of the three-dimensional face model, and optimization yields the fitted three-dimensional face model. The area of the picture covered by the fitted model is the face region.
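As a minimal illustration of the region-extraction step, the sketch below rasterizes the binary face-region map (0 inside the face, 1 outside, matching the convention used later for F) from an assumed projected silhouette polygon of the fitted three-dimensional face model. The landmark detection and model fitting themselves are not shown; `face_region_map` and its inputs are hypothetical names for this sketch only.

```python
import numpy as np

def face_region_map(height, width, outline):
    """Rasterize a face-region map: 0 inside the projected face outline,
    1 outside (the convention used for F in this patent). `outline` is an
    (N, 2) array of (x, y) vertices of the fitted 3D face model's projected
    silhouette, listed in order around the polygon."""
    F = np.ones((height, width), dtype=np.uint8)
    n = len(outline)
    xs, ys = outline[:, 0], outline[:, 1]
    for y in range(height):
        for x in range(width):
            # even-odd ray casting: count crossings of a ray toward +x
            inside = False
            for k in range(n):
                x1, y1 = xs[k], ys[k]
                x2, y2 = xs[(k + 1) % n], ys[(k + 1) % n]
                if (y1 > y) != (y2 > y):  # edge straddles this scanline
                    x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                    if x < x_cross:
                        inside = not inside
            if inside:
                F[y, x] = 0
    return F
```

In practice the silhouette would come from projecting the optimized 3D face model into the image; the ray-casting loop above is only a readable stand-in for a rasterizer.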
In step S102, a face highlight distribution map is extracted from the face region to create a data set composed of the face pictures and their highlight distribution maps.
In an embodiment of the present invention, as shown in fig. 2, a rough highlight distribution map H of the face picture is first obtained using a traditional highlight-removal method.
With reference to fig. 2 and fig. 3, this yields the rough highlight distribution map and the face-region detection result F. The face-region detection result is a ground-truth map in which pixels inside the face region have value 0 and pixels outside it have value 1. The face pictures together with these ground-truth maps form the data set of face pictures and face highlight distribution maps.
Whether each small patch of the picture contains highlight is then judged by downsampling, yielding a ground-truth map of per-patch highlight presence. Each value of this map is computed as:
$$M(i,j)=\begin{cases}1, & H{\downarrow}(i,j) > t\\[2pt] 0, & H{\downarrow}(i,j)\le t\end{cases}$$

where $M(i,j)$ indicates whether the $(i,j)$-th patch contains highlight (1 if it does, 0 if it does not), $H{\downarrow}(i,j)$ is the downsampled highlight component of the patch, and $t$ is a given threshold; the threshold differs according to the input face picture. The same downsampling is applied to the face-region detection result:
$$F(i,j)=\begin{cases}0, & \text{every pixel of patch }(i,j)\text{ lies in the face region}\\[2pt] 1, & \text{otherwise}\end{cases}$$

Further, from the downsampled highlight map and face-region map, the target discrimination result for training the discriminator of the generative adversarial network is computed:

$$T(i,j)=\begin{cases}M(i,j), & F(i,j)=0\\[2pt] \text{undefined}, & F(i,j)=1\end{cases}$$

The undefined entries arise because the discriminator does not judge patches that contain non-face regions.
In step S103, a generative adversarial network is constructed through deep learning and trained on the data set to obtain the highlight-removal network model.
Specifically, whether an image patch contains highlight is judged according to the downsampled value of the highlight component of that patch and a preset threshold; when training the discriminator network of the generative adversarial network, the discriminator judges whether each patch contains highlight, wherein patches containing non-face regions are not judged; the highlight component is generated with the generator network, with the final highlight-removal result equal to the difference between the network input and the network output; and a rectified linear unit (ReLU) is appended to the generator network to constrain its output to be non-negative.
Further, when training the adversarial network, the error between the discriminator output and the target discrimination result is optimized according to:

$$L_D = -\mathbb{E}_{I,i,j,\,F(i,j)=0}\Big[\,T(i,j)\cdot\big[\log(D(I)(i,j)) + \log(D(G'(I))(i,j))\big] + (1 - T(i,j))\cdot\big[\log(1 - D(I)(i,j)) + \log(1 - D(G'(I))(i,j))\big]\Big];$$

where $\mathbb{E}$ denotes the mathematical expectation, $G'(I)$ is the difference between the input picture and the generator output (i.e., the final highlight-removal result), and $T(i,j)$ is the target discrimination result used to train the discriminator of the generative adversarial network.
In an embodiment of the present invention, as shown in fig. 2, the generator is trained within a generative adversarial framework whose training process involves a generator network G and a discriminator network D. A rectified linear unit (ReLU) is appended to the generator to constrain its output to be non-negative. The final highlight-removal result is the difference between the input picture and the generator output:

G′(I) = I − G(I),

where G′(I) is the highlight-removal result for input picture I and G(I) is the generator output when I is the input.
The discriminator does not score patches containing non-face regions. In the generative adversarial network, the discriminator outputs a map of the same size as the target discrimination result, and training minimizes the error between the discriminator output and that target:

$$L_D = -\mathbb{E}_{I,i,j,\,F(i,j)=0}\Big[\,T(i,j)\cdot\big[\log(D(I)(i,j)) + \log(D(G'(I))(i,j))\big] + (1 - T(i,j))\cdot\big[\log(1 - D(I)(i,j)) + \log(1 - D(G'(I))(i,j))\big]\Big],$$

where $\mathbb{E}$ denotes the mathematical expectation. When the generator is trained, the adversarial objective drives the discriminator to judge every face patch of the generator's de-highlighted picture as highlight-free, thereby realizing the adversarial training.
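A direct NumPy transcription of the masked, patch-wise loss L_D might look as follows, with a small epsilon added inside the logarithms for numerical safety (an implementation detail not stated in the patent; the function name is illustrative).

```python
import numpy as np

def discriminator_loss(D_real, D_fake, T, F):
    """Masked patch-wise cross-entropy following L_D: only patches with
    F(i,j) == 0 (fully inside the face) contribute. D_real = D(I),
    D_fake = D(G'(I)), T is the target discrimination map."""
    mask = (F == 0)
    eps = 1e-12  # numerical floor so log never sees exactly 0
    bce = (T * (np.log(D_real + eps) + np.log(D_fake + eps))
           + (1 - T) * (np.log(1 - D_real + eps) + np.log(1 - D_fake + eps)))
    return -bce[mask].mean()  # empirical expectation over valid patches
</n```

The boolean mask plays the role of the subscript condition F(i,j) = 0 in the expectation, discarding the patches whose target is undefined.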
Further, the generator is trained with the loss:

$$L_G = -\mathbb{E}_{I,i,j,\,F(i,j)=0}\big[\log\big(1 - D(G'(I))(i,j)\big)\big] + L_{\mathrm{reg}},$$

with the regularization term

$$L_{\mathrm{reg}} = \omega_L\,\mathbb{E}_I\big[\lVert G(I)\rVert_1\big],$$

where $\omega_L$ is the regularization coefficient, used to keep the predicted highlight component $G(I)$ from growing too large.
Through continuous iteration, the discriminator parameters are optimized to reduce $L_D$ and the generator parameters to reduce $L_G$. Training yields a network model that can be used to remove highlights.
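A corresponding sketch of the generator objective, assuming the adversarial term pushes de-highlighted face patches toward the highlight-free label and an l1 regularizer on the predicted highlight component (the original equation images are not reproduced, so this exact form and the names below are assumptions):

```python
import numpy as np

def generator_loss(D_fake, F, G_out, omega_l=0.01):
    """Generator objective sketch: adversarial term asking the discriminator
    to score de-highlighted face patches as highlight-free, plus an l1
    penalty keeping the predicted highlight component G(I) small."""
    eps = 1e-12
    adv = -np.log(1.0 - D_fake[F == 0] + eps).mean()  # patches inside the face
    reg = omega_l * np.abs(G_out).mean()              # omega_L * ||G(I)||_1
    return adv + reg
```

Alternating minimization of L_D (discriminator step) and L_G (generator step) over the data set is the standard GAN training loop the passage above describes.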
In step S104, the highlight of the face picture is removed with the highlight-removal network model to obtain the highlight-removal result.
Specifically, the face region is fed into the highlight-removal network model, and the predicted highlight component is subtracted from it to obtain the highlight-removal result.
In addition, in an embodiment of the present invention, each input picture is an extracted face region; it is fed into the trained highlight-removal network, and the corresponding highlight-removal result is obtained after network processing (see fig. 4 for results). The average processing time per picture is 4.2 milliseconds.
According to the deep-learning-based method for real-time highlight removal from face pictures of the embodiment of the present invention, a data set of face highlight distribution maps is built, a generative adversarial network is trained to obtain a highlight-removal network model, and the model removes highlights from face pictures. The method distinguishes highlights automatically, outputs pictures with a strong sense of realism while removing them, is simple to implement, and has a wide range of applications.
The following describes a real-time highlight removing device for a face picture based on deep learning according to an embodiment of the present invention with reference to the accompanying drawings.
Fig. 5 is a schematic structural diagram of a deep learning-based face picture real-time highlight removal device according to an embodiment of the present invention.
As shown in fig. 5, the device 10 for real-time highlight removal from face pictures based on deep learning includes: an extraction module 100, a data-set creation module 200, a training module 300, and a removal module 400.
The extraction module 100 extracts a face region from a face picture by a face-fitting method. The data-set creation module 200 extracts a face highlight distribution map from the face region to build a data set consisting of face pictures and their highlight distribution maps. The training module 300 constructs a generative adversarial network through deep learning and trains it on the data set to obtain the highlight-removal network model. The removal module 400 removes the highlight from the face picture with the highlight-removal network model to obtain the highlight-removal result. The device 10 outputs pictures with a strong sense of realism while removing highlights, is simple to implement, and has a wide range of applications.
Further, in an embodiment of the present invention, the extraction module 100 is further configured to: obtain feature points of the face picture by a facial-feature extraction method; establish an energy equation from the correspondence between the feature points and the vertices of the three-dimensional face model; and extract the face region through iterative optimization using the fitted three-dimensional face model.
Further, in an embodiment of the present invention, the data-set creation module 200 is configured to: judge whether an image patch contains highlight according to the downsampled value of the highlight component of that patch and a preset threshold; when training the discriminator network of the generative adversarial network, use the discriminator to judge whether each patch contains highlight, wherein patches containing non-face regions are not judged; generate the highlight component with the generator network, wherein the final highlight-removal result equals the difference between the network input and the network output; and append a rectified linear unit (ReLU) to the generator network to constrain its output to be non-negative.
Further, in an embodiment of the present invention, the removal module 400 is further configured to: take the face region as the input of the highlight-removal network model and subtract the network output (the predicted highlight component) from it to obtain the highlight-removal result.
Further, in an embodiment of the present invention, an error-optimization module is configured to optimize, when training the adversarial network, the error between the discriminator output and the target discrimination result according to:

$$L_D = -\mathbb{E}_{I,i,j,\,F(i,j)=0}\Big[\,T(i,j)\cdot\big[\log(D(I)(i,j)) + \log(D(G'(I))(i,j))\big] + (1 - T(i,j))\cdot\big[\log(1 - D(I)(i,j)) + \log(1 - D(G'(I))(i,j))\big]\Big];$$

where $\mathbb{E}$ denotes the mathematical expectation, $G'(I)$ is the difference between the input picture and the generator output (i.e., the final highlight-removal result), and $T(i,j)$ is the target discrimination result used to train the discriminator of the generative adversarial network.
It should be noted that the foregoing explanation of the embodiment of the method for removing highlight from a face image in real time based on deep learning is also applicable to the apparatus of the embodiment, and is not repeated here.
According to the deep-learning-based device for real-time highlight removal from face pictures of the embodiment of the present invention, a data set of face highlight distribution maps is built, a generative adversarial network is trained to obtain a highlight-removal network model, and the model removes highlights from face pictures. The device distinguishes highlights automatically, outputs pictures with a strong sense of realism while removing them, is simple to implement, and has a wide range of applications.
In the description of the present invention, it is to be understood that the terms "central," "longitudinal," "lateral," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "clockwise," "counterclockwise," "axial," "radial," "circumferential," and the like are used in the orientations and positional relationships indicated in the drawings for convenience in describing the invention and to simplify the description, and are not intended to indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and are therefore not to be considered limiting of the invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, be fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; they may be directly connected or indirectly connected through intervening media, or they may be connected internally or in any other suitable relationship, unless expressly stated otherwise. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In the present invention, unless otherwise expressly stated or limited, the first feature "on" or "under" the second feature may be directly contacting the first and second features or indirectly contacting the first and second features through an intermediate. Also, a first feature "on," "over," and "above" a second feature may be directly or diagonally above the second feature, or may simply indicate that the first feature is at a higher level than the second feature. A first feature being "under," "below," and "beneath" a second feature may be directly under or obliquely under the first feature, or may simply mean that the first feature is at a lesser elevation than the second feature.
In the description herein, references to "one embodiment," "some embodiments," "an example," "a specific example," "some examples," and the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, such schematic expressions do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples, and those skilled in the art may combine features of different embodiments or examples described in this specification provided they do not contradict each other.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (8)

1. A deep-learning-based real-time highlight removal method for a face picture, characterized by comprising the following steps:
extracting a face region from the face picture by a face fitting method;
extracting a face highlight distribution map from the face region, so as to establish a data set consisting of the face picture and the face highlight distribution map;
building a generative adversarial network through deep learning, and training the adversarial network on the data set to obtain a highlight removal network model, wherein obtaining the highlight removal network model further comprises: judging whether an image patch of the highlight-component picture contains highlight according to a down-sampled value of the highlight component of the patch and a preset threshold; when training the discrimination network of the generative adversarial network, using the discrimination network to judge whether each image patch contains highlight, wherein the discrimination network does not judge patches containing non-face regions; and generating the highlight component with the generation network of the adversarial network; and
removing highlight from the face picture through the highlight removal network model to obtain a highlight removal result of the face picture, wherein the highlight removal result is the difference between the face picture and the output of the generation network, and a linear rectification function is added to the generation network to ensure that the network output is positive.
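A minimal sketch of the removal step of claim 1, assuming NumPy and a hypothetical stand-in generator (the actual generation network is a trained deep network): the generator predicts a highlight component, a linear rectification (ReLU) keeps it non-negative, and the de-highlighted result is the input minus that component.

```python
import numpy as np

def remove_highlight(face, generator):
    """Sketch of claim 1's removal step (all helper names are hypothetical):
    the generator predicts a highlight component, a ReLU keeps it
    non-negative, and the result is the input minus that component."""
    component = generator(face)             # raw network output
    component = np.maximum(component, 0.0)  # linear rectification ensures a positive output
    return face - component                 # de-highlight result = input - highlight component

# Toy stand-in "generator": treats everything above 0.8 as highlight.
toy_gen = lambda img: img - 0.8
face = np.array([[0.5, 0.9], [1.0, 0.7]])
result = remove_highlight(face, toy_gen)   # → [[0.5, 0.8], [0.8, 0.7]]
```

Because the rectified component is non-negative, the subtraction can only darken pixels, which matches the claim's requirement that the network output be positive.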
2. The deep-learning-based real-time highlight removal method for a face picture according to claim 1, wherein extracting the face region from the face picture by the face fitting method further comprises:
acquiring feature points of the face picture by a face feature extraction method; and
establishing an energy equation according to the correspondence between the feature points and vertices on a three-dimensional face model, and extracting the face region through optimization iterations using the pre-fitted three-dimensional face model.
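The energy equation of claim 2 is not given explicitly. As an illustration only, under the strong simplifying assumption that the 3D model is already scaled and rotated and only a 2D translation t is optimized, the least-squares energy sum_k ||v_k + t − f_k||^2 has the closed-form minimizer sketched below (all names are hypothetical):

```python
import numpy as np

def fit_translation(model_pts, feature_pts):
    """Hypothetical, drastically simplified fitting step: minimize
    sum_k ||v_k + t - f_k||^2 over a 2D translation t. The energy is
    quadratic in t, so the optimum is the mean residual."""
    return (feature_pts - model_pts).mean(axis=0)

# Projected model vertices and detected feature points (toy data).
model = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
feats = model + np.array([2.0, -1.0])
t = fit_translation(model, feats)  # → [2.0, -1.0]
```

A real fitting pipeline would optimize pose, scale, and model shape parameters iteratively; this sketch only shows the structure of one least-squares correspondence term.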
3. The method according to claim 1, wherein removing highlight from the face picture through the highlight removal network model further comprises:
taking the face region as the input of the highlight removal network model to obtain the difference between the face region and the de-highlighted picture, so as to obtain the highlight removal result.
4. The deep-learning-based real-time highlight removal method for a face picture according to claim 3, further comprising:
during training of the adversarial network, optimizing the error between the output of the discrimination network and the target discrimination result according to the following formula:
L_D = -E_{I,i,j,F(i,j)=0}[ T(i,j)·[log(D(I)(i,j)) + log(D(G′(I))(i,j))] + (1 − T(i,j))·[log(1 − D(I)(i,j)) + log(1 − D(G′(I))(i,j))] ];
wherein E denotes the mathematical expectation, G′(I) is the difference between the input picture and the final highlight output of the generation network, and T(i,j) is the target discrimination result used to train the discrimination network of the generative adversarial network.
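The formula of claim 4 can be evaluated directly. The sketch below computes L_D for one image with NumPy, assuming T, D(I), and D(G′(I)) are given as per-patch arrays and that non-face patches (F(i,j) ≠ 0) have already been masked out upstream; eps is a small stabilizing constant, an implementation detail not stated in the claim.

```python
import numpy as np

def discriminator_loss(T, D_real, D_fake, eps=1e-12):
    """Per-image sketch of the claim-4 loss. T[i, j] is the per-patch
    target label (1 = patch contains highlight), D_real = D(I),
    D_fake = D(G'(I)); eps avoids log(0)."""
    pos = T * (np.log(D_real + eps) + np.log(D_fake + eps))
    neg = (1 - T) * (np.log(1 - D_real + eps) + np.log(1 - D_fake + eps))
    return -np.mean(pos + neg)  # negative expectation over patches

T = np.array([[1.0, 0.0]])       # patch (0,0) has highlight, (0,1) does not
D_real = np.array([[0.9, 0.1]])  # discriminator scores on the real image
D_fake = np.array([[0.8, 0.2]])  # discriminator scores on the generated image
loss = discriminator_loss(T, D_real, D_fake)
```

Minimizing this loss pushes the discriminator toward outputting 1 on patches labelled as containing highlight and 0 elsewhere, on both real and generated images.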
5. A deep-learning-based real-time highlight removal device for a face picture, characterized by comprising:
an extraction module for extracting a face region from the face picture by a face fitting method;
a data set establishing module for extracting a face highlight distribution map from the face region so as to establish a data set consisting of the face picture and the face highlight distribution map, the data set establishing module being specifically configured to:
judge whether an image patch of the highlight-component picture contains highlight according to a down-sampled value of the highlight component of the patch and a preset threshold;
when training the discrimination network of the generative adversarial network, use the discrimination network to judge whether each image patch contains highlight, wherein the discrimination network does not judge patches containing non-face regions; and
generate the highlight component with the generation network of the adversarial network;
a training module for building a generative adversarial network through deep learning and training the adversarial network on the data set to obtain a highlight removal network model; and
a highlight removal module for removing highlight from the face picture through the highlight removal network model to obtain a highlight removal result of the face picture, wherein the highlight removal result is the difference between the face picture and the output of the generation network,
and a linear rectification function is added to the generation network to ensure that the network output is positive.
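The patch-labelling rule shared by claims 1 and 5 (down-sample the highlight component of a patch and compare the value to a preset threshold) can be sketched as follows. Mean-pooling is an assumed choice of down-sampling operator, and the threshold value is illustrative; the claims fix neither.

```python
import numpy as np

def patch_contains_highlight(highlight_patch, threshold=0.1):
    """Sketch of the claim-1/claim-5 patch rule: down-sample one patch of
    the highlight-component image to a single value (mean-pooling assumed)
    and compare it against a preset threshold."""
    downsampled = highlight_patch.mean()      # down-sample the patch to one value
    return bool(downsampled > threshold)      # True -> patch is labelled "contains highlight"

patch = np.array([[0.0, 0.4], [0.3, 0.1]])    # mean 0.2 exceeds the threshold
```

During training, the discrimination network's target for each face-region patch would be set from this rule, while patches containing non-face regions are skipped.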
6. The deep-learning-based real-time highlight removal device for a face picture according to claim 5, wherein the extraction module is further configured to:
acquire feature points of the face picture by a face feature extraction method; and
establish an energy equation according to the correspondence between the feature points and vertices on a three-dimensional face model, and extract the face region through optimization iterations using the pre-fitted three-dimensional face model.
7. The deep-learning-based real-time highlight removal device for a face picture according to claim 5, wherein the highlight removal module is further configured to:
take the face region as the input of the highlight removal network model to obtain the difference between the face region and the de-highlighted picture, so as to obtain the highlight removal result.
8. The deep-learning-based real-time highlight removal device for a face picture according to claim 7, further comprising an error optimization module configured to:
during training of the adversarial network, optimize the error between the output of the discrimination network and the target discrimination result according to the following formula:
L_D = -E_{I,i,j,F(i,j)=0}[ T(i,j)·[log(D(I)(i,j)) + log(D(G′(I))(i,j))] + (1 − T(i,j))·[log(1 − D(I)(i,j)) + log(1 − D(G′(I))(i,j))] ];
wherein E denotes the mathematical expectation, G′(I) is the difference between the input picture and the final highlight output of the generation network, and T(i,j) is the target discrimination result used to train the discrimination network of the generative adversarial network.
CN201810327486.8A 2018-04-12 2018-04-12 Face picture real-time highlight removal method and device based on deep learning Active CN108596062B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810327486.8A CN108596062B (en) 2018-04-12 2018-04-12 Face picture real-time highlight removal method and device based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810327486.8A CN108596062B (en) 2018-04-12 2018-04-12 Face picture real-time highlight removal method and device based on deep learning

Publications (2)

Publication Number Publication Date
CN108596062A CN108596062A (en) 2018-09-28
CN108596062B true CN108596062B (en) 2021-04-06

Family

ID=63621945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810327486.8A Active CN108596062B (en) 2018-04-12 2018-04-12 Face picture real-time highlight removal method and device based on deep learning

Country Status (1)

Country Link
CN (1) CN108596062B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179196B (en) * 2019-12-28 2023-04-18 杭州电子科技大学 Multi-resolution depth network image highlight removing method based on divide-and-conquer
CN111275651B (en) * 2020-02-25 2023-05-12 东南大学 Face bright removal method based on antagonistic neural network
CN111311520B (en) * 2020-03-12 2023-07-18 Oppo广东移动通信有限公司 Image processing method, device, terminal and storage medium
CN111583128B (en) * 2020-04-09 2022-08-12 清华大学 Face picture highlight removal method based on deep learning and realistic rendering
CN111709886B (en) * 2020-05-27 2023-04-18 杭州电子科技大学 Image highlight removing method based on U-shaped cavity residual error network
CN111882495B (en) * 2020-07-05 2022-08-02 东北林业大学 Image highlight processing method based on user-defined fuzzy logic and GAN
CN112686819A (en) * 2020-12-29 2021-04-20 东北大学 Magic cube image highlight removal method and device based on generation countermeasure network

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800129B (en) * 2012-06-20 2015-09-30 浙江大学 A kind of scalp electroacupuncture based on single image and portrait edit methods
WO2017223530A1 (en) * 2016-06-23 2017-12-28 LoomAi, Inc. Systems and methods for generating computer ready animation models of a human head from captured data images
CN107067429A (en) * 2017-03-17 2017-08-18 徐迪 Video editing system and method that face three-dimensional reconstruction and face based on deep learning are replaced
CN107292813B (en) * 2017-05-17 2019-10-22 浙江大学 A kind of multi-pose Face generation method based on generation confrontation network
CN107154023B (en) * 2017-05-17 2019-11-05 电子科技大学 Based on the face super-resolution reconstruction method for generating confrontation network and sub-pix convolution
CN107392118B (en) * 2017-07-04 2020-04-03 竹间智能科技(上海)有限公司 Enhanced face attribute recognition method and system based on multitask confrontation generation network
CN107491771A (en) * 2017-09-21 2017-12-19 百度在线网络技术(北京)有限公司 Method for detecting human face and device
CN107895358A (en) * 2017-12-25 2018-04-10 科大讯飞股份有限公司 The Enhancement Method and system of facial image

Also Published As

Publication number Publication date
CN108596062A (en) 2018-09-28

Similar Documents

Publication Publication Date Title
CN108596062B (en) Face picture real-time highlight removal method and device based on deep learning
CN108090435B (en) Parking available area identification method, system and medium
EP3093823B1 (en) Static object reconstruction method and system
WO2020107717A1 (en) Visual saliency region detection method and apparatus
US9031282B2 (en) Method of image processing and device therefore
CN106716450B (en) Image-based feature detection using edge vectors
CN109903331B (en) Convolutional neural network target detection method based on RGB-D camera
CN106778668B (en) A kind of method for detecting lane lines of robust that combining RANSAC and CNN
US11379963B2 (en) Information processing method and device, cloud-based processing device, and computer program product
EP2819091A2 (en) Method and apparatus for processing a gray image
CN103473545B (en) A kind of text image method for measuring similarity based on multiple features
JP2005190400A (en) Face image detection method, system, and program
US11049275B2 (en) Method of predicting depth values of lines, method of outputting three-dimensional (3D) lines, and apparatus thereof
JP2011138388A (en) Data correction apparatus and method
CN105096307A (en) Method for detecting objects in paired stereo images
US9715729B2 (en) Method and apparatus for processing block to be processed of urine sediment image
CN115861210B (en) Transformer substation equipment abnormality detection method and system based on twin network
CN102542541B (en) Deep image post-processing method
CN114266894A (en) Image segmentation method and device, electronic equipment and storage medium
CN110874572B (en) Information detection method and device and storage medium
KR101779040B1 (en) Method and apparatus for extracting panel area from thermal infrared images of photovoltaic array
CN111680577A (en) Face detection method and device
CN110163103B (en) Live pig behavior identification method and device based on video image
CN109920049B (en) Edge information assisted fine three-dimensional face reconstruction method and system
CN112232403A (en) Fusion method of infrared image and visible light image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant