CN113743243A - Face beautifying method based on deep learning - Google Patents

Face beautifying method based on deep learning

Info

Publication number
CN113743243A
Authority
CN
China
Prior art keywords
face
image
thinning
makeup
key points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110931570.2A
Other languages
Chinese (zh)
Inventor
姚俊峰
张海诗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University
Original Assignee
Xiamen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University filed Critical Xiamen University
Priority to CN202110931570.2A priority Critical patent/CN113743243A/en
Publication of CN113743243A publication Critical patent/CN113743243A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/25 - Fusion techniques
    • G06F 18/253 - Fusion techniques of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a face beautifying method based on deep learning, which belongs to the technical field of image processing and comprises the following steps: step S10, acquiring a large number of face images, and preprocessing each face image to obtain a face data set; step S20, creating a face key point detection model, and training the face key point detection model by using the face data set; step S30, detecting the image to be beautified by using the face key point detection model to obtain 68 face key points, and performing a face-thinning operation on the image to be beautified by using the moving least squares method and the face key points to obtain a face-thinning image; step S40, obtaining a makeup image, and aligning the makeup image with the face-thinning image based on a triangulation algorithm; step S50, whitening the face-thinning image by using a skin whitening algorithm; and step S60, fusing the makeup image and the whitened face-thinning image to obtain a beautified face image. The invention greatly improves the interference resistance and imaging quality of face image beautification.

Description

Face beautifying method based on deep learning
Technical Field
The invention relates to the technical field of image processing, in particular to a face beautifying method based on deep learning.
Background
With the rapid development of computer vision, face image beautification technology has gradually matured, and the rendering of facial detail after a picture is taken has become increasingly refined. To present more flawless skin and remove blemishes such as wrinkles and freckles, extracting and transferring makeup in order to give users beautification suggestions and recommendations for suitable makeup is becoming a popular feature.
For face beautification, a makeup template has traditionally been applied to the face image, but this approach has drawbacks: the makeup templates are too uniform, the processed face image is not smooth and natural, and removing blemishes easily damages the surrounding skin. Methods combining face beautification with makeup beautification based on color and spatial models have therefore appeared, but the image being beautified is easily affected by image noise, pose changes and illumination changes; side faces in particular fare worse than with template-based beautification.
In summary, traditional face beautification methods produce unsatisfactory results, so how to provide a face beautifying method based on deep learning that improves the interference resistance and imaging quality of face image beautification has become a problem demanding an urgent solution.
Disclosure of Invention
The invention aims to provide a face beautifying method based on deep learning that improves the interference resistance and imaging quality of face image beautification.
The invention is realized by the following steps: a face beautifying method based on deep learning comprises the following steps:
step S10, acquiring a large number of face images, and preprocessing each face image to obtain a face data set;
step S20, a face key point detection model is established based on the convolutional neural network, and the face key point detection model is trained by utilizing the face data set;
step S30, detecting the image to be beautified by using the trained face key point detection model to obtain 68 face key points, and performing a face-thinning operation on the image to be beautified by using the moving least squares method and the face key points to obtain a face-thinning image;
step S40, obtaining a makeup image, and aligning the makeup image with a face-thinning image based on a triangulation algorithm;
step S50, whitening the face-thinning image by using a skin whitening algorithm;
and step S60, fusing the makeup image and the whitened face-thinning image to obtain a face beautifying image.
Further, the step S10 is specifically:
acquiring a large number of face images, detecting the face region of the single face in each face image through an SSD (Single Shot MultiBox Detector) algorithm, cropping each face region to a square, labeling the 68 face key points in each face region, and normalizing the labeled coordinates to obtain a face data set.
Further, in step S20, the face key point detection model adopts 5 convolutional layers with max pooling; the convolution kernel size is 2 × 2 with a stride of 2, and the ReLU function is used as the activation function.
Further, in step S20, the training of the face keypoint detection model by using the face data set specifically includes:
dividing the face data set into a training set and a verification set based on a preset proportion, and setting an accuracy threshold;
training the face key point detection model by using the training set, verifying the trained face key point detection model by using the verification set, judging whether the detection accuracy is greater than the accuracy threshold, if so, ending the training, and entering the step S30; if not, the face data set is expanded, and training is continued.
Further, the step S30 specifically includes:
step S31, acquiring an image to be beautified;
step S32, detecting the image to be beautified by using the trained face key point detection model to obtain 68 face key points of the image to be beautified;
and S33, selecting 4 face key points symmetric about the face edge, moving the 4 face key points inward by using the moving least squares method, and thereby performing the face-thinning operation on the image to be beautified to obtain a face-thinning image.
Further, the step S40 specifically includes:
step S41, moving the face key points at the upper end of the face upward along the y axis in steps of 10 pixels until the RGB value of the current pixel is 0, thereby expanding the 68 face key points into 79 face key points;
step S42, triangulating 79 face key points based on a triangulation algorithm to obtain a plurality of triangles;
step S43, a makeup image is acquired, and the makeup image is aligned with the face-thinning image based on each of the triangles.
Further, in step S50, the formula of the skin whitening algorithm is:
R_en = R_f (255 + w) / (R_f + w);
where R_f represents the face-thinning image, R_en represents the whitened face-thinning image, and w represents the whitening coefficient, with a value of 2.8.
Further, the step S60 specifically includes:
step S61, fusing skin details of the makeup image and the whitened face-thinning image;
step S62, fusing the skin color of the makeup image and the whitened face-thinning image;
step S63, performing highlight and shadow fusion on the face of the makeup image and the whitened face-thinning image;
step S64, transferring the lip makeup of the makeup image to a whitened face-thinning image;
and step S65, performing edge smoothing operation on the face-thinning image after fusion and lip makeup migration by using a mean value filtering algorithm to obtain a face beautifying image.
The invention has the advantages that:
Creating the face key point detection model with a convolutional neural network greatly improves the accuracy of face key point detection; performing the face-thinning operation with the moving least squares method and the face key points makes the face-thinning effect more natural; aligning the makeup image with the face-thinning image via a triangulation algorithm keeps the eyebrows, contours, eye shadow and mouth shapes of the two images aligned, which facilitates the subsequent image fusion; whitening the face-thinning image with a skin whitening algorithm guarantees the whitening effect while avoiding destroying too much of the natural illumination; and fusing the makeup image with the face-thinning image through skin detail fusion, skin color fusion, face highlight and shadow fusion, lip makeup transfer and edge smoothing yields the beautified face image, which in the end greatly improves the interference resistance and imaging quality of face image beautification.
Drawings
The invention will be further described with reference to the following examples and the accompanying drawings.
Fig. 1 is a flow chart of a face beautifying method based on deep learning according to the present invention.
Fig. 2 is a schematic diagram of the 68 face key points of the present invention.
Fig. 3 is a schematic diagram illustrating the detection effect of the 68 face key points on a face image according to the present invention.
Fig. 4 is a schematic diagram illustrating the effect of the face-thinning operation according to the present invention.
Fig. 5 is a schematic diagram illustrating the detection effect of the 79 face key points on a face image according to the present invention.
Fig. 6 is a schematic diagram of the effect of triangulation according to the invention.
Fig. 7 is a schematic diagram illustrating the alignment effect of the makeup image and the face-thinning image according to the present invention.
Fig. 8 is a schematic illustration of the effect of the lip makeup migration of the present invention.
FIG. 9 is a comparative schematic of the edge smoothing operation of the present invention.
FIG. 10 is a schematic diagram of the face beautification comparison of the present invention.
Detailed Description
Referring to fig. 1 to 10, a preferred embodiment of a face beautifying method based on deep learning according to the present invention includes the following steps:
step S10, acquiring a large number of face images, and preprocessing each face image to obtain a face data set;
step S20, creating a face key point detection model based on a convolutional neural network (CNN), and training the face key point detection model by using the face data set, i.e. training on the massive face data set to learn the relationship between face structure and pixel values, finally realizing face key point detection;
the convolutional neural network comprises an excitation layer, a pooling layer and a fully connected layer; the excitation layer applies a nonlinear mapping to the convolution results, and a common choice is the ReLU (Rectified Linear Unit), which guarantees fast convergence while keeping the gradient computation simple; the pooling layer takes the maximum or average value over each region of the previous layer's output, reducing the feature dimension; in general, feeding the output of a convolutional layer directly into the next layer can cause overfitting, and reducing the output dimension through the pooling layer alleviates this defect; in addition, pooling keeps the processing scale-invariant, so the features of images can still be learned even when their scales differ; the fully connected layer is the traditional form of a deep learning network, and its nodes connect the two layers so that local features are combined into global features, reducing the influence of feature position on classification;
step S30, detecting the image to be beautified by using the trained face key point detection model to obtain 68 face key points, and performing a face-thinning operation on the image to be beautified by using the moving least squares (MLS) method and the face key points to obtain a face-thinning image;
step S40, obtaining a makeup image, and aligning the makeup image with a face-thinning image based on a triangulation algorithm;
step S50, whitening the face-thinning image by using a skin whitening algorithm;
and step S60, fusing the makeup image and the whitened face-thinning image to obtain a face beautifying image.
The step S10 specifically includes:
acquiring a large number of face images, detecting the face region of the single face in each face image through an SSD (Single Shot MultiBox Detector) algorithm, cropping each face region to a square, labeling the 68 face key points in each face region, and normalizing the labeled coordinates to obtain a face data set.
In step S20, the face key point detection model adopts 5 convolutional layers with max pooling; the convolution kernel size is 2 × 2 with a stride of 2, and the ReLU function is used as the activation function. The Sigmoid function suffers from vanishing gradients, which degrades the training of the face key point detection model; the ReLU function does not have this problem, so it is adopted as the activation function.
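The spatial bookkeeping implied by the architecture above can be checked with a short helper. This is a sketch only: the patent does not state the network's input resolution, so the 64 × 64 crop below is a hypothetical example, and each 2 × 2, stride-2 stage is modeled purely by its effect on the feature-map side length.

```python
def stage_out(size, kernel=2, stride=2, padding=0):
    """Spatial output size of one 2x2, stride-2 conv/pool stage."""
    return (size + 2 * padding - kernel) // stride + 1

def feature_map_sizes(input_size, num_stages=5):
    """Trace the feature-map side length through the 5 described stages."""
    sizes = [input_size]
    for _ in range(num_stages):
        sizes.append(stage_out(sizes[-1]))
    return sizes

print(feature_map_sizes(64))  # 64 -> 32 -> 16 -> 8 -> 4 -> 2
```

Each stage halves the spatial resolution, which is why 5 stages suffice to reduce a modest input crop to a very small map before regressing the 68 key point coordinates.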
In step S20, the training of the face keypoint detection model by using the face data set specifically includes:
dividing the face data set into a training set and a verification set based on a preset proportion, and setting an accuracy threshold;
training the face key point detection model by using the training set, verifying the trained face key point detection model by using the verification set, judging whether the detection accuracy is greater than the accuracy threshold, if so, ending the training, and entering the step S30; if not, the face data set is expanded, and training is continued.
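The training procedure above can be sketched as a control-flow skeleton. The 80/20 split ratio, the 0.9 threshold, and the `train_fn`/`eval_fn`/`expand_fn` hooks are all placeholders, since the patent specifies neither numeric values nor the data-expansion procedure.

```python
import random

def split_dataset(samples, train_ratio=0.8, seed=0):
    """Split face samples into training and validation sets by a preset ratio."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

def train_until_accurate(train_fn, eval_fn, expand_fn, threshold=0.9, max_rounds=10):
    """Train, validate, and expand the data set until accuracy passes the threshold."""
    for _ in range(max_rounds):
        train_fn()                      # train on the training set
        if eval_fn() > threshold:       # validate on the validation set
            return True                 # accuracy exceeds threshold: stop training
        expand_fn()                     # otherwise expand the data set and retry
    return False
```

The `max_rounds` guard is an addition for safety; the patent's loop as described simply continues until the threshold is exceeded.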
The step S30 specifically includes:
step S31, acquiring an image to be beautified;
step S32, detecting the image to be beautified by using the trained face key point detection model to obtain 68 face key points of the image to be beautified;
and S33, selecting 4 face key points symmetric about the face edge, moving the 4 face key points inward by using the moving least squares method, and thereby performing the face-thinning operation on the image to be beautified to obtain a face-thinning image. Based on the moving least squares method, each pixel a_ij of the original image is mapped to its corresponding position f(a_ij) in the target image, and face thinning is performed through the transformation mapping from a_ij to f(a_ij). For example, the face key points [4, 5, 11, 12] in FIG. 4 are moved inward by 1/20 of the distance between the two points, and the specific distance moved (the face-thinning degree) is set according to the face shape.
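A minimal stand-in for the key point movement in this step: instead of the full moving least squares deformation that warps every pixel a_ij to f(a_ij), the sketch below only moves each symmetric pair of jaw-edge key points toward each other by 1/20 of the pair distance. The pairing [(4, 12), (5, 11)] and the toy coordinates are assumptions for illustration.

```python
def thin_face_points(points, pairs, fraction=1 / 20):
    """Move each symmetric pair of jaw key points toward each other.

    points: {index: (x, y)} landmark coordinates.
    pairs:  symmetric edge key point indices, e.g. [(4, 12), (5, 11)].
    fraction: how far each point moves toward its partner (face-thinning degree).
    """
    moved = dict(points)
    for a, b in pairs:
        ax, ay = points[a]
        bx, by = points[b]
        moved[a] = (ax + (bx - ax) * fraction, ay + (by - ay) * fraction)
        moved[b] = (bx + (ax - bx) * fraction, by + (ay - by) * fraction)
    return moved
```

In the patent, these displaced key points then serve as control points for the MLS deformation of the whole image; here only the control-point movement itself is shown.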
The step S40 specifically includes:
step S41, moving the face key points at the upper end of the face upward along the y axis in steps of 10 pixels until the RGB value of the current pixel is 0, thereby expanding the 68 face key points into 79 face key points;
compared with the 68 face key points, the 79 face key points add contour points on the forehead; the white region above the face key points [16,17,18,19,20,21,22,23,24,25,27] is large, and its top is the forehead or the edge of an occluding object on the forehead (such as a hat, hair or a hand); the face key points at the upper end of the face therefore only need to be moved upward along the y axis in turn, and when the queried pixel value is zero or close to zero, that pixel belongs to a key point position of the forehead, yielding the 79 face key points; moving upward in steps of 10 pixels places the forehead key points slightly above the forehead, making the overall makeup more realistic and natural;
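The forehead scan in step S41 can be sketched as a simple upward probe. The sketch assumes a grayscale image stored as a list of rows (`image[y][x]`) in which background pixels are exactly 0, an idealization of the "RGB value is 0" test in the patent.

```python
def extend_keypoint_up(image, x, y, step=10):
    """Move a key point up the y axis in fixed steps until a zero pixel is hit.

    Returns the y coordinate of the first probed pixel whose value is 0
    (or the highest reachable probe if no zero pixel is found).
    """
    while y - step >= 0:
        y -= step
        if image[y][x] == 0:   # background reached: forehead/occluder edge
            break
    return y
```

Because the probe stops on the first zero-valued step, the resulting key point sits slightly above the forehead edge, matching the patent's remark that this makes the makeup look more natural.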
step S42, triangulating 79 face key points based on a triangulation algorithm to obtain a plurality of triangles;
the triangulation algorithm adopts the Bowyer-Watson algorithm, whose steps are: (1) initialize a super-triangle containing all the scattered points and store it in an empty triangle list; (2) traverse all the scattered points of the insertion point set D; for each point, traverse the triangle list to find the influence triangles, i.e. the triangles whose circumscribed circles contain the insertion point, remove the common edges of these triangles, and connect the insertion point with every vertex of the influence triangles, thereby inserting one point into the Delaunay triangle list; (3) to prevent triangle edges from overlapping, optimize the locally formed new triangles according to an optimization method and place the resulting triangles into the list; (4) repeat step (2) until all the scattered points have been inserted;
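The "influence triangle" search in step (2) rests on one predicate: does the circumscribed circle of a triangle contain the insertion point? A standard determinant-based version of that test, assuming the triangle's vertices are given in counter-clockwise order, looks like this:

```python
def in_circumcircle(tri, p):
    """True if point p lies strictly inside the circumcircle of triangle tri.

    tri: three (x, y) vertices in counter-clockwise order; p: (x, y).
    Uses the sign of the standard 3x3 in-circle determinant after
    translating all vertices so that p is at the origin.
    """
    (ax, ay), (bx, by), (cx, cy) = tri
    px, py = p
    ax, ay = ax - px, ay - py
    bx, by = bx - px, by - py
    cx, cy = cx - px, cy - py
    det = (
        (ax * ax + ay * ay) * (bx * cy - cx * by)
        - (bx * bx + by * by) * (ax * cy - cx * ay)
        + (cx * cx + cy * cy) * (ax * by - bx * ay)
    )
    return det > 0
```

In a full Bowyer-Watson implementation, this predicate is evaluated against every triangle in the list for each inserted point; the triangles for which it returns True are the influence triangles whose shared edges are removed.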
step S43, a makeup image is obtained and aligned with the face-thinning image based on the triangles: the triangles of the triangulation are affine-transformed in turn to align the triangles of the makeup image with those of the face-thinning image, and the resulting triangles are then stitched together to form a face structure close to that of the face-thinning image.
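The per-triangle affine alignment in step S43 can be sketched by solving for the six affine coefficients from the three vertex correspondences (Cramer's rule on a 3 × 3 system). This is a generic construction, not code from the patent.

```python
def affine_from_triangles(src, dst):
    """Solve for the affine map (a, b, c, d, e, f) sending src triangle to dst.

    x' = a*x + b*y + c
    y' = d*x + e*y + f
    """
    (x1, y1), (x2, y2), (x3, y3) = src
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)

    def solve(v1, v2, v3):
        # Cramer's rule on [[xi, yi, 1]] * [p, q, r]^T = [vi]
        p = (v1 * (y2 - y3) - y1 * (v2 - v3) + (v2 * y3 - v3 * y2)) / det
        q = (x1 * (v2 - v3) - v1 * (x2 - x3) + (x2 * v3 - x3 * v2)) / det
        r = (x1 * (y2 * v3 - y3 * v2) - y1 * (x2 * v3 - x3 * v2)
             + v1 * (x2 * y3 - x3 * y2)) / det
        return p, q, r

    (u1, w1), (u2, w2), (u3, w3) = dst
    a, b, c = solve(u1, u2, u3)
    d, e, f = solve(w1, w2, w3)
    return a, b, c, d, e, f

def apply_affine(m, pt):
    """Apply the affine map m to a single point."""
    a, b, c, d, e, f = m
    x, y = pt
    return a * x + b * y + c, d * x + e * y + f
```

Applying one such map per triangle pair and warping the enclosed pixels reproduces the stitched, aligned face structure described above.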
In step S50, the formula of the skin whitening algorithm is:
R_en = R_f (255 + w) / (R_f + w);
where R_f represents the face-thinning image, R_en represents the whitened face-thinning image, and w represents the whitening coefficient, with a value of 2.8. The whitening degree of the image depends on w and is negatively correlated with it: the larger w is, the lower the whitening degree. Since too high or too low a whitening degree would affect the transfer of the other makeup, w is preferably set to 2.8.
The step S60 specifically includes:
step S61, fusing skin details of the makeup image and the whitened face-thinning image;
the formula for skin detail fusion is:
Figure BDA0003211125030000071
wherein the content of the first and second substances,
Figure BDA0003211125030000072
and betasAll represent skin detail fusion coefficients with the value range of (0,1), and
Figure BDA0003211125030000073
step S62, fusing the skin color of the makeup image and the whitened face-thinning image;
the formula for skin color fusion is: rc=(1-γ)Ic+γSc
Wherein R iscRepresenting the collection of a channel a and a channel b of the makeup image and the face-thinning image, namely an image with fused skin colors; i iscIs IaAnd IbA collection of (1), and Ic=Ia+Ib;ScIs SaAnd SbA set of (a); gamma represents a skin color fusion coefficient, and the value is preferably 0.8;
step S63, performing highlight and shadow fusion on the face of the makeup image and the whitened face-thinning image;
the formula for highlight of the face and shadow fusion of the face is as follows:
Figure BDA0003211125030000081
wherein the content of the first and second substances,
Figure BDA0003211125030000082
represents the Laplacian pyramid of the table; u () represents an upsampling operation on an image (representing 0 filling and using gaussian blur after doubling the image by different directions); d () represents the downsampling operation of the image (removing all lines and columns after Gaussian blurring the image); c1Representing the area behind the eyes, mouth and nose removed during the fusion process;
step S64, transferring the lip makeup of the makeup image to a whitened face-thinning image;
the purpose of lip makeup transfer is to make the beautification effect more vivid and attractive; the pixels for lip makeup transfer are therefore filled from the corresponding lip region pixels of the makeup image according to the related pixels of the face-thinning image, which preserves the similarity of the facial makeup to the greatest extent while keeping the basic texture of the lips, thus guaranteeing the beautification effect;
let p and q denote specific pixels in the lip region of the makeup image and the lip region of the face-thinning image respectively, let P() denote the lip region of the generated image, and let S() denote the lip region of the makeup image; the transfer formulas appear only as images in the original publication; in them, G() denotes a Gaussian transform, the pixel values of p and q in I(p) − S(q) are taken after histogram equalization of the related regions of the two images, and C_2 denotes the lip region of the makeup image;
step S65, performing edge smoothing operation on the face-thinning image after fusion and lip makeup migration by using a mean value filtering algorithm to obtain a face beautifying image;
Because of the above operations, the face contour mask C_1 has large, sharp corners and edges that are not smooth enough, which makes the makeup look unnatural, so an edge smoothing operation is needed. The steps are as follows: (1) apply mean filtering to the face contour mask C_1 with a filter of size (20, 20), and normalize the newly obtained mask; (2) for each pixel of the new mask, if its weight w > 0.6, set w = (w − 0.6)/0.4, otherwise set it to 0; (3) each pixel i of the new image satisfies N(i) = w·R(i) + (1 − w)·I(i), where N(i) denotes the image after smoothing, R(i) the image before smoothing, and I(i) the original image.
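The weight remapping and blending in steps (2) and (3) can be sketched directly, operating on flat pixel lists for simplicity (the mean filtering of step (1) is assumed to have been applied already).

```python
def remap_weight(w):
    """Step (2): weights above 0.6 rescale to (w - 0.6) / 0.4, the rest drop to 0."""
    return (w - 0.6) / 0.4 if w > 0.6 else 0.0

def smooth_edges(mask, smoothed, original):
    """Step (3): N(i) = w * R(i) + (1 - w) * I(i) per pixel, after remapping w."""
    out = []
    for w, r, i in zip(mask, smoothed, original):
        w = remap_weight(w)
        out.append(w * r + (1 - w) * i)
    return out
```

Pixels well inside the mask (w near 1) take the smoothed value R(i), while pixels outside or near the mask boundary (w ≤ 0.6) revert entirely to the original image I(i), which is what removes the sharp mask edges.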
In summary, the invention has the advantages that:
Creating the face key point detection model with a convolutional neural network greatly improves the accuracy of face key point detection; performing the face-thinning operation with the moving least squares method and the face key points makes the face-thinning effect more natural; aligning the makeup image with the face-thinning image via a triangulation algorithm keeps the eyebrows, contours, eye shadow and mouth shapes of the two images aligned, which facilitates the subsequent image fusion; whitening the face-thinning image with a skin whitening algorithm guarantees the whitening effect while avoiding destroying too much of the natural illumination; and fusing the makeup image with the face-thinning image through skin detail fusion, skin color fusion, face highlight and shadow fusion, lip makeup transfer and edge smoothing yields the beautified face image, which in the end greatly improves the interference resistance and imaging quality of face image beautification.
Although specific embodiments of the invention have been described above, it will be understood by those skilled in the art that the specific embodiments described are illustrative only and are not limiting upon the scope of the invention, and that equivalent modifications and variations can be made by those skilled in the art without departing from the spirit of the invention, which is to be limited only by the appended claims.

Claims (8)

1. A face beautifying method based on deep learning is characterized in that: the method comprises the following steps:
step S10, acquiring a large number of face images, and preprocessing each face image to obtain a face data set;
step S20, a face key point detection model is established based on the convolutional neural network, and the face key point detection model is trained by utilizing the face data set;
step S30, detecting the image to be beautified by using the trained face key point detection model to obtain 68 face key points, and performing a face-thinning operation on the image to be beautified by using the moving least squares method and the face key points to obtain a face-thinning image;
step S40, obtaining a makeup image, and aligning the makeup image with a face-thinning image based on a triangulation algorithm;
step S50, whitening the face-thinning image by using a skin whitening algorithm;
and step S60, fusing the makeup image and the whitened face-thinning image to obtain a face beautifying image.
2. The face beautifying method based on deep learning of claim 1, wherein: the step S10 specifically includes:
acquiring a large number of face images, detecting the face region of the single face in each face image through an SSD (Single Shot MultiBox Detector) algorithm, cropping each face region to a square, labeling the 68 face key points in each face region, and normalizing the labeled coordinates to obtain a face data set.
3. The face beautifying method based on deep learning of claim 1, wherein: in step S20, the face key point detection model adopts 5 convolutional layers with max pooling, the convolution kernel size is 2 × 2 with a stride of 2, and the ReLU function is used as the activation function.
4. The face beautifying method based on deep learning of claim 1, wherein: in step S20, the training of the face keypoint detection model by using the face data set specifically includes:
dividing the face data set into a training set and a verification set based on a preset proportion, and setting an accuracy threshold;
training the face key point detection model by using the training set, verifying the trained face key point detection model by using the verification set, judging whether the detection accuracy is greater than the accuracy threshold, if so, ending the training, and entering the step S30; if not, the face data set is expanded, and training is continued.
5. The face beautifying method based on deep learning of claim 1, wherein: the step S30 specifically includes:
step S31, acquiring an image to be beautified;
step S32, detecting the image to be beautified by using the trained face key point detection model to obtain 68 face key points of the image to be beautified;
and S33, selecting 4 face key points symmetric about the face edge, moving the 4 face key points inward by using the moving least squares method, and thereby performing the face-thinning operation on the image to be beautified to obtain a face-thinning image.
6. The face beautifying method based on deep learning of claim 1, wherein: the step S40 specifically includes:
step S41, moving the face key points at the upper end of the face upward along the y axis in steps of 10 pixels until the RGB value of the current pixel is 0, thereby expanding the 68 face key points into 79 face key points;
step S42, triangulating 79 face key points based on a triangulation algorithm to obtain a plurality of triangles;
step S43, a makeup image is acquired, and the makeup image is aligned with the face-thinning image based on each of the triangles.
7. The face beautifying method based on deep learning of claim 1, wherein: in step S50, the formula of the skin whitening algorithm is:
R_en = R_f (255 + w) / (R_f + w);
where R_f represents the face-thinning image, R_en represents the whitened face-thinning image, and w represents the whitening coefficient, with a value of 2.8.
8. The face beautifying method based on deep learning of claim 1, wherein: the step S60 specifically includes:
step S61, fusing skin details of the makeup image and the whitened face-thinning image;
step S62, fusing the skin color of the makeup image and the whitened face-thinning image;
step S63, performing highlight and shadow fusion on the face of the makeup image and the whitened face-thinning image;
step S64, transferring the lip makeup of the makeup image to a whitened face-thinning image;
and step S65, performing edge smoothing operation on the face-thinning image after fusion and lip makeup migration by using a mean value filtering algorithm to obtain a face beautifying image.
CN202110931570.2A 2021-08-13 2021-08-13 Face beautifying method based on deep learning Pending CN113743243A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110931570.2A CN113743243A (en) 2021-08-13 2021-08-13 Face beautifying method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110931570.2A CN113743243A (en) 2021-08-13 2021-08-13 Face beautifying method based on deep learning

Publications (1)

Publication Number Publication Date
CN113743243A true CN113743243A (en) 2021-12-03

Family

ID=78731076

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110931570.2A Pending CN113743243A (en) 2021-08-13 2021-08-13 Face beautifying method based on deep learning

Country Status (1)

Country Link
CN (1) CN113743243A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114418901A (en) * 2022-03-30 2022-04-29 江西中业智能科技有限公司 Image beautifying processing method, system, storage medium and equipment based on Retinaface algorithm

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654435A (en) * 2015-12-25 2016-06-08 武汉鸿瑞达信息技术有限公司 Facial skin softening and whitening method
CN105787878A (en) * 2016-02-25 2016-07-20 杭州格像科技有限公司 Beauty processing method and device
CN106203395A (en) * 2016-07-26 2016-12-07 厦门大学 Face attribute recognition method based on multi-task deep learning
CN107123083A (en) * 2017-05-02 2017-09-01 中国科学技术大学 Face edit methods
CN107622472A (en) * 2017-09-12 2018-01-23 北京小米移动软件有限公司 Face makeup transfer method and device
CN110782408A (en) * 2019-10-18 2020-02-11 杭州趣维科技有限公司 Intelligent beautifying method and system based on convolutional neural network
CN112784773A (en) * 2021-01-27 2021-05-11 展讯通信(上海)有限公司 Image processing method and device, storage medium and terminal
CN113160036A (en) * 2021-04-19 2021-07-23 金科智融科技(珠海)有限公司 Face changing method for image keeping face shape unchanged

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wu Jian: "Deep Learning and Deep Synthesis" (《深度学习与深度合成》), 31 October 2020, China Textile Press, pages 152-155 *
Wang Weiguang: "Facial makeup transfer algorithm based on deep learning" (《基于深度学习的人脸妆容迁移算法》), Application Research of Computers (《计算机应用研究》), vol. 38, no. 5, pages 1559-1562 *

Similar Documents

Publication Publication Date Title
CN112766160B (en) Face replacement method based on multi-stage attribute encoder and attention mechanism
CN109872397B (en) Three-dimensional reconstruction method of airplane parts based on multi-view stereo vision
CN109697688B (en) Method and device for image processing
JP4723834B2 (en) Photorealistic three-dimensional face modeling method and apparatus based on video
Guo et al. LIME: Low-light image enhancement via illumination map estimation
KR100682889B1 (en) Method and Apparatus for image-based photorealistic 3D face modeling
Liang et al. Parsing the hand in depth images
Liao et al. Automatic caricature generation by analyzing facial features
Shackleton et al. Classification of facial features for recognition
CN111243093A (en) Three-dimensional face grid generation method, device, equipment and storage medium
CN103839223A (en) Image processing method and image processing device
CN110096925A (en) Enhancement Method, acquisition methods and the device of Facial Expression Image
CN108197534A (en) A kind of head part's attitude detecting method, electronic equipment and storage medium
WO2022135574A1 (en) Skin color detection method and apparatus, and mobile terminal and storage medium
CN104794693A (en) Human image optimization method capable of automatically detecting mask in human face key areas
CN109325903A (en) The method and device that image stylization is rebuild
CN114120389A (en) Network training and video frame processing method, device, equipment and storage medium
WO2007145654A1 (en) Automatic compositing of 3d objects in a still frame or series of frames and detection and manipulation of shadows in an image or series of images
CN117157673A (en) Method and system for forming personalized 3D head and face models
CN113743243A (en) Face beautifying method based on deep learning
Kim et al. Facial landmark extraction scheme based on semantic segmentation
CN111275610A (en) Method and system for processing face aging image
CN116342519A (en) Image processing method based on machine learning
CN108109115B (en) Method, device and equipment for enhancing character image and storage medium
CN116681579A (en) Real-time video face replacement method, medium and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination