CN104794693A - Human image optimization method capable of automatically detecting mask in human face key areas - Google Patents


Info

Publication number
CN104794693A
CN104794693A (application CN201510184292.3A)
Authority
CN
China
Prior art keywords
face
region
image
rectangle
place
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510184292.3A
Other languages
Chinese (zh)
Other versions
CN104794693B (en)
Inventor
王进
鲁晓卉
方力洋
陆国栋
陈晓威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201510184292.3A priority Critical patent/CN104794693B/en
Publication of CN104794693A publication Critical patent/CN104794693A/en
Application granted granted Critical
Publication of CN104794693B publication Critical patent/CN104794693B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Abstract

The invention discloses a portrait optimization method that automatically detects a mask over key facial regions. The method comprises two main steps, face detection and image smoothing. In the face detection step, the face region is detected with the AdaBoost algorithm, Haar features, and a cascade detector; the face is coarsely partitioned into several regions using the "three courts and five eyes" rule; the positions of the facial organs are determined from the binarized image, Haar features, and integral projections; and the detected organs are taken as the mask. In the smoothing step, the skin portion of the face is smoothed according to the principle of bi-exponential edge-preserving filtering, and the mask is then used to cover the smoothed picture. The method has strong adaptability and reliability, and achieves more accurate facial organ detection and edge preservation.

Description

A portrait optimization method with automatic mask detection over key facial regions
Technical field
The present invention relates to the technical field of image processing, and in particular to a portrait optimization method that automatically detects a mask over key facial regions.
Background art
With the widespread use of computers, more and more companies are choosing to branch into e-commerce. To better publicize their own products, companies generally invest in promotional packaging, such as advertising campaigns and internet product promotion. The most intuitive promotional approach is therefore to shoot publicity photos of the products, and to attract more customers these photos generally receive some degree of beautification. For this demand there are currently mainly the following methods:
First, hiring digital-imaging staff to retouch every photo manually. Since thousands of photos must be screened, processed, and beautified every day, this approach requires staff with a considerable level of skill in image-editing software, and a company needs both the economic strength and the time to train a group of such photo-processing personnel. It is not only inefficient but also consumes substantial financial and material resources.
Second, processing the pictures with various algorithmic techniques, which divides into two steps: face detection and image smoothing.
Face detection currently has the following typical techniques: 1) Detection based on skin color and template matching. Several average face templates are trained in advance; regions matching the template size are extracted from the image under test and compared with the templates for similarity, so that successfully matched regions, namely face regions, are found. A skin-color model is added at the same time to strengthen detection accuracy. Experiments show, however, that the accuracy of this method is not high, and missed or false detections easily occur. 2) The eigenface technique. The shape of facial features and the proportional relationships between the organs are derived from a training set similar to the sample, and this geometric information is compared with the sample to judge the face region. This method is fast and rather convenient, but still has significant limitations.
Image smoothing currently has the following typical techniques: 1) Mean filtering. The eight pixels surrounding a given pixel form a template, and the pixel is replaced by the mean of the eight pixels in the template. 2) Gaussian filtering. A weighted average is applied over the whole image: each pixel value is replaced by a weighted average of itself and its adjacent pixels. 3) Median filtering. A pixel's gray value is replaced by the median of the pixels in its surrounding neighborhood. 4) Bilateral filtering. Its kernel is the product of two functions and, unlike the above algorithms, it considers two kinds of difference: spatial information (geometric distance) and gray-level similarity (the difference between pixel values). Although this algorithm improves edge preservation, its efficiency still leaves much room for improvement.
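For concreteness, the difference between plain averaging and edge-aware smoothing described above can be sketched in a few lines of numpy (a 1-D illustration under our own assumptions, not code from the patent; all names are ours):

```python
import numpy as np

def mean_filter_1d(x, radius=2):
    """Mean filter: replace each sample by the average of its neighborhood."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(x, kernel, mode="same")

def bilateral_filter_1d(x, radius=2, sigma_s=2.0, sigma_r=0.2):
    """Bilateral filter: each weight is the product of a spatial term and a
    range term, so averaging across a strong edge is suppressed."""
    out = np.empty(len(x), dtype=float)
    for i in range(len(x)):
        lo, hi = max(0, i - radius), min(len(x), i + radius + 1)
        idx = np.arange(lo, hi)
        w_spatial = np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2))
        w_range = np.exp(-((x[idx] - x[i]) ** 2) / (2 * sigma_r ** 2))
        w = w_spatial * w_range
        out[i] = np.sum(w * x[idx]) / np.sum(w)
    return out

# A clean step edge: the mean filter blurs it, the bilateral filter keeps it.
x = np.concatenate([np.zeros(10), np.ones(10)])
blurred = mean_filter_1d(x)
preserved = bilateral_filter_1d(x)
```

On this step signal `blurred` smears values across the jump while `preserved` stays near 0 and 1 on either side, which is exactly the edge-preservation property the background section attributes to bilateral filtering.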
Summary of the invention
The object of the present invention is to address the low efficiency and insufficient accuracy of existing portrait optimization techniques by providing a portrait optimization method that automatically detects a mask over key facial regions.
The object of the invention is achieved through the following technical solution: a portrait optimization method with automatic mask detection over key facial regions, comprising the following steps:
(1) Read in a color portrait image, use an integral image to accelerate the computation of Haar features on the color image, train a strong classifier that distinguishes faces from non-faces with the AdaBoost algorithm, and detect the face region with this strong classifier;
(2) Within the face region detected in step 1, detect the key facial organ regions, namely the eyebrow, eye, nose, and mouth positions, and take the detected organ regions as the mask;
(3) Smooth the skin portion of the face in the color image with a bi-exponential edge-preserving smoothing filter, combine the smoothed color image with the mask from step 2, and obtain the optimized face image after sharpening.
Further, step 2 comprises the following sub-steps:
(2.1) Convert the color image read in step 1 to a grayscale image, compute its gray-level histogram, and take the valley between the two peaks of the histogram as the gray threshold;
(2.2) Convert the grayscale image to a binary image using the gray threshold obtained in step 2.1;
(2.3) Divide the face region detected in step 1 into five regions according to the "three courts and five eyes" rule: the face rectangle containing the eyebrows, the face rectangle containing the left eye, the face rectangle containing the right eye, the face rectangle containing the nose, and the face rectangle containing the mouth;
(2.4) Apply a Haar cascade classifier to the left-eye and right-eye rectangles delimited in step 2.3 to obtain the eye regions;
(2.5) Traverse the pixel values of the left-eye and right-eye rectangles delimited in step 2.3; since the eye pixels have gray value 255 in the binary image and the other, skin pixels have gray value 0, this yields the eye regions;
(2.6) Combine the eye regions obtained in steps 2.4 and 2.5; the intersection of the two regions is the precise eye region, which is enclosed by a rectangle, namely the smallest rectangle containing all pixels of the intersection;
(2.7) Compute the horizontal and vertical integral projections of the inverted binary image (eyebrow gray value 0) of the eyebrow rectangle. In the vertical projection, the coordinates of the first and third valleys give the top and bottom of the eyebrow region; in the horizontal projection, the coordinates of the first and second peaks give the left and right ends of the left eyebrow, and those of the third and fourth peaks give the left and right ends of the right eyebrow. The two eyebrow regions are each enclosed by a rectangle;
(2.8) Form the inverted binary image (nostril gray value 255) of the nose rectangle, mark the contours present in it, and take the two marked contours containing the most pixels as the nostril regions, each enclosed by a rectangle;
(2.9) Compute the vertical integral projection of the grayscale image of the mouth rectangle; following a Gaussian-distribution assumption, compute the mean of the projection histogram and use it as a threshold; the largest connected region whose values fall below the threshold is the mouth region, enclosed by a rectangle;
(2.10) Take the seven organ rectangles within the face region as the mask.
Further, step (3) is specifically:
(3.1) Take the pixel data of the color image, excluding the parts enclosed by the rectangles, as the input sequence x;
(3.2) Perform one forward (progressive) recursion and one backward (regressive) recursion on the input sequence x of step 3.1 to obtain the auxiliary sequences φ̄[k] and φ[k]. The forward recursion is:
φ̄[k] = (1 − ρ̄[k]λ)x[k] + ρ̄[k]λφ̄[k−1] (1)
where ρ̄[k] = r(x[k], φ̄[k−1]), r is the range filter, and λ ∈ [0, 1) controls the smoothness (the decay of the impulse response of the spatial convolution filter); x[k] is the current sample, k ∈ Z; φ̄[k−1] is the preceding sample.
The backward recursion is:
φ[k] = (1 − ρ[k]λ)x[k] + ρ[k]λφ[k+1] (2)
where ρ[k] = r(x[k], φ[k+1]), r is the range filter, and λ ∈ [0, 1) controls the smoothness (the decay of the impulse response of the spatial convolution filter); x[k] is the current sample, k ∈ Z; φ[k+1] is the sample following φ[k];
(3.3) Merge the auxiliary sequences φ̄[k] and φ[k] to obtain the output sequence y;
(3.4) Cover the color image corresponding to the output sequence y with the mask from step 2.10, and sharpen the combined image to blend the boundary where the mask joins the image and to make the edges clearer, thereby better preserving the chin contour of the face; this yields the final optimized portrait.
The beneficial effects of the invention are as follows: the invention uses several high-accuracy, high-efficiency detection methods for the key facial organ regions; by taking the detected eyebrows, eyes, nose, and mouth as a mask and overlaying it on the smoothed image, skin smoothing is applied accurately and only to the skin portion of the face. For the image smoothing itself, a highly efficient filtering algorithm that effectively preserves facial contour edges is employed.
Brief description of the drawings
Fig. 1 is a flowchart of the portrait optimization method with automatic mask detection over key facial regions according to an embodiment of the invention;
Fig. 2 is the original test image in an embodiment of the invention;
Fig. 3 shows the face and organ detection results in an embodiment of the invention;
Fig. 4 shows the result of the portrait optimization process in an embodiment of the invention.
Detailed description of embodiments
The present invention is described in further detail below with reference to the drawings and specific embodiments.
As shown in Fig. 1, the portrait optimization method of the present invention with automatic mask detection over key facial regions comprises the following steps:
(1) Read in a color portrait image, use an integral image to accelerate the computation of Haar features on the color image, train a strong classifier that distinguishes faces from non-faces with the AdaBoost algorithm, and detect the face region with this strong classifier;
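The integral-image acceleration of Haar features in step (1) can be made concrete with a short numpy sketch (illustrative only; the patent publishes no code, and the two-rectangle feature below is just one of the feature shapes used by Viola-Jones-style detectors):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y, :x]; padded with a zero
    row and column so any rectangle sum needs only four lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of the h-by-w rectangle with top-left corner (y, x), in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect_vertical(ii, y, x, h, w):
    """Two-rectangle Haar feature: upper half minus lower half (h even).
    Responds strongly at horizontal edges such as the brow/eye boundary."""
    top = rect_sum(ii, y, x, h // 2, w)
    bottom = rect_sum(ii, y + h // 2, x, h // 2, w)
    return top - bottom

# Toy image: bright band over dark band -> strong two-rectangle response.
img = np.vstack([np.full((4, 8), 200), np.full((4, 8), 50)])
ii = integral_image(img)
response = haar_two_rect_vertical(ii, 0, 0, 8, 8)
```

Any rectangle sum, and hence any Haar feature, costs four table lookups regardless of rectangle size, which is what makes AdaBoost training over many thousands of candidate features feasible.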
(2) Within the face region detected in step 1, detect the key facial organ regions, namely the eyebrow, eye, nose, and mouth positions, and take the detected organ regions as the mask. This specifically comprises the following sub-steps:
(2.1) Convert the color image read in step 1 to a grayscale image, compute its gray-level histogram, and take the valley between the two peaks of the histogram as the gray threshold;
(2.2) Convert the grayscale image to a binary image using the gray threshold obtained in step 2.1;
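Sub-steps (2.1)-(2.2) can be sketched as follows (a minimal assumed version: a production implementation would smooth the histogram before searching for peaks and valleys):

```python
import numpy as np

def valley_threshold(gray, bins=256):
    """Gray threshold as in step (2.1): the lowest histogram bin lying
    between the two most populated bins."""
    hist, edges = np.histogram(gray, bins=bins, range=(0, 256))
    p1, p2 = sorted(np.argsort(hist)[-2:])   # two highest peaks, left to right
    valley = p1 + np.argmin(hist[p1:p2 + 1])
    return edges[valley]

def binarize(gray, thresh):
    """Step (2.2), using the convention stated in step (2.5): dark features
    (eyes, brows) map to 255 and brighter skin maps to 0."""
    return np.where(gray < thresh, 255, 0)

# Synthetic bimodal data: dark pixels around 50, bright pixels around 180.
gray = np.concatenate([np.full(400, 50), np.full(600, 180)])
thresh = valley_threshold(gray)
binary = binarize(gray, thresh)
```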
(2.3) Divide the face region detected in step 1 into five regions according to the "three courts and five eyes" rule: the face rectangle containing the eyebrows, the face rectangle containing the left eye, the face rectangle containing the right eye, the face rectangle containing the nose, and the face rectangle containing the mouth;
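Step (2.3) is simple rectangle arithmetic on the detected face box. The fractions below are illustrative assumptions based on the classical "three courts and five eyes" proportions; the patent does not disclose its exact values:

```python
def face_subregions(x, y, w, h):
    """Five coarse sub-rectangles (x, y, w, h) of a face box: the face height
    splits into three 'courts' and the width into five 'eye' widths."""
    return {
        "brows": (x, y + h // 4, w, h // 8),              # just above the eye line
        "left_eye": (x + w // 5, y + h // 3, w // 5, h // 5),
        "right_eye": (x + 3 * w // 5, y + h // 3, w // 5, h // 5),
        "nose": (x + 2 * w // 5, y + h // 3, w // 5, h // 3),
        "mouth": (x + w // 5, y + 2 * h // 3, 3 * w // 5, h // 4),
    }

regions = face_subregions(0, 0, 100, 150)
```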
(2.4) Apply a Haar cascade classifier to the left-eye and right-eye rectangles delimited in step 2.3 to obtain the eye regions;
(2.5) Traverse the pixel values of the left-eye and right-eye rectangles delimited in step 2.3; since the eye pixels have gray value 255 in the binary image and the other, skin pixels have gray value 0, this yields the eye regions;
(2.6) Combine the eye regions obtained in steps 2.4 and 2.5; the intersection of the two regions is the precise eye region, which is enclosed by a rectangle, namely the smallest rectangle containing all pixels of the intersection;
(2.7) Compute the horizontal and vertical integral projections of the inverted binary image (eyebrow gray value 0) of the eyebrow rectangle. In the vertical projection, the coordinates of the first and third valleys give the top and bottom of the eyebrow region; in the horizontal projection, the coordinates of the first and second peaks give the left and right ends of the left eyebrow, and those of the third and fourth peaks give the left and right ends of the right eyebrow. The two eyebrow regions are each enclosed by a rectangle;
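The integral projections and peak/valley reading of step (2.7) reduce to row/column sums plus a local-extremum search; a numpy sketch under our own naming (not the patent's):

```python
import numpy as np

def integral_projections(img):
    """Row sums (vertical projection) and column sums (horizontal projection)."""
    return img.sum(axis=1), img.sum(axis=0)

def local_extrema(profile):
    """Indices of strict local maxima and minima of a 1-D profile; the patent
    reads brow boundaries off the first few peaks/valleys of such profiles."""
    p = np.asarray(profile)
    i = np.arange(1, len(p) - 1)
    maxima = i[(p[i] > p[i - 1]) & (p[i] > p[i + 1])]
    minima = i[(p[i] < p[i - 1]) & (p[i] < p[i + 1])]
    return maxima, minima

# Toy strip with two bright blobs standing in for the two brows.
strip = np.zeros((5, 9), dtype=int)
strip[1:4, 1:3] = 1
strip[1:4, 6:8] = 1
vproj, hproj = integral_projections(strip)    # vproj peaks on the blob rows
maxima, minima = local_extrema([0, 2, 1, 3, 0])
```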
(2.8) Form the inverted binary image (nostril gray value 255) of the nose rectangle, mark the contours present in it, and take the two marked contours containing the most pixels as the nostril regions, each enclosed by a rectangle;
(2.9) Compute the vertical integral projection of the grayscale image of the mouth rectangle; following a Gaussian-distribution assumption, compute the mean of the projection histogram and use it as a threshold; the largest connected region whose values fall below the threshold is the mouth region, enclosed by a rectangle. The result of enclosing all the key facial organs in rectangles is shown in Fig. 3;
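Step (2.9) can be sketched as follows (a simplified assumption: we threshold the vertical projection at its mean and take the largest run of below-threshold rows as the mouth, in the same spirit as the patent's description):

```python
import numpy as np

def mouth_rows(gray_strip):
    """Vertical integral projection of the mouth rectangle; its mean serves
    as the threshold, and the largest connected run of rows falling below
    the threshold is taken as the mouth region (returned as (start, stop))."""
    proj = gray_strip.sum(axis=1).astype(float)
    below = proj < proj.mean()
    runs, start = [], None
    for i, b in enumerate(below):
        if b and start is None:
            start = i
        elif not b and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(below)))
    return max(runs, key=lambda r: r[1] - r[0]) if runs else None

# Bright skin strip with a dark horizontal band standing in for the mouth.
strip = np.full((8, 10), 200)
strip[3:6] = 50
rows = mouth_rows(strip)
```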
(2.10) Take the seven organ rectangles within the face region as the mask.
(3) Smooth the skin portion of the face in the color image with a bi-exponential edge-preserving smoothing filter, combine the smoothed color image with the mask from step 2, and obtain the optimized face image after sharpening. This specifically comprises the following sub-steps:
(3.1) Take the pixel data of the color image, excluding the parts enclosed by the rectangles, as the input sequence x;
(3.2) Perform one forward (progressive) recursion and one backward (regressive) recursion on the input sequence x of step 3.1 to obtain the auxiliary sequences φ̄[k] and φ[k]. The forward recursion is:
φ̄[k] = (1 − ρ̄[k]λ)x[k] + ρ̄[k]λφ̄[k−1] (1)
where ρ̄[k] = r(x[k], φ̄[k−1]), r is the range filter, and λ ∈ [0, 1) controls the smoothness (the decay of the impulse response of the spatial convolution filter); x[k] is the current sample, k ∈ Z; φ̄[k−1] is the preceding sample.
The backward recursion is:
φ[k] = (1 − ρ[k]λ)x[k] + ρ[k]λφ[k+1] (2)
where ρ[k] = r(x[k], φ[k+1]), r is the range filter, and λ ∈ [0, 1) controls the smoothness (the decay of the impulse response of the spatial convolution filter); x[k] is the current sample, k ∈ Z; φ[k+1] is the sample following φ[k].
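The two recursions can be sketched in 1-D numpy as below. The forward and backward passes follow equations (1) and (2); the merge step uses the combination from the cited "Bi-Exponential Edge-Preserving Smoother" (BEEPS) paper, which we take as an assumption here, along with a Gaussian range kernel:

```python
import numpy as np

def beeps_1d(x, lam=0.02, sigma=0.1):
    """1-D sketch of the bi-exponential edge-preserving smoother of step (3).
    r is a Gaussian range kernel; lam in [0, 1) controls smoothness."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def r(a, b):  # range kernel: small across strong intensity jumps
        return np.exp(-((a - b) ** 2) / (2 * sigma ** 2))

    phi_f = np.empty(n)          # forward (progressive) pass, eq. (1)
    phi_f[0] = x[0]
    for k in range(1, n):
        rho = r(x[k], phi_f[k - 1])
        phi_f[k] = (1 - rho * lam) * x[k] + rho * lam * phi_f[k - 1]

    phi_b = np.empty(n)          # backward (regressive) pass, eq. (2)
    phi_b[-1] = x[-1]
    for k in range(n - 2, -1, -1):
        rho = r(x[k], phi_b[k + 1])
        phi_b[k] = (1 - rho * lam) * x[k] + rho * lam * phi_b[k + 1]

    # Merge of the auxiliary sequences (assumed, from the cited BEEPS paper):
    # exact on constant signals, near-identity across strong edges.
    return (phi_f + phi_b - (1 - lam) * x) / (1 + lam)

smoothed_const = beeps_1d(np.full(16, 0.5))
smoothed_step = beeps_1d(np.concatenate([np.zeros(8), np.ones(8)]))
```

Because the range kernel collapses toward zero across a large intensity jump, the recursion effectively restarts at an edge, which is how the filter smooths skin texture while leaving the chin contour intact.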
(3.3) Merge the auxiliary sequences φ̄[k] and φ[k] to obtain the output sequence y.
For example, taking Fig. 2 as the input sequence x, when λ is set to 0.02 and the smoothing radius is set to 10, the smoothness of the output image y reaches the degree shown in Fig. 4; when λ is greater than or equal to 0.2, the output image y shows obvious distortion.
(3.4) Cover the color image corresponding to the output sequence y with the mask from step 2.10, and sharpen the combined image to blend the boundary where the mask joins the image and to make the edges clearer, thereby better preserving the chin contour of the face; this yields the final optimized portrait.
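The masking of step (3.4) can be sketched as a simple paste-back (sharpening omitted; all names are illustrative):

```python
import numpy as np

def composite_with_mask(original, smoothed, rects):
    """Start from the smoothed image and paste the original pixels back inside
    the organ rectangles, so only the skin keeps the smoothing. rects holds
    (x, y, w, h) tuples such as the seven rectangles from step (2.10)."""
    out = smoothed.copy()
    for x, y, w, h in rects:
        out[y:y + h, x:x + w] = original[y:y + h, x:x + w]
    return out

original = np.arange(36).reshape(6, 6)
smoothed = np.zeros((6, 6), dtype=int)
result = composite_with_mask(original, smoothed, [(1, 1, 2, 2)])
```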
Applying the method of the invention to the image shown in Fig. 2, the face and organ detection results are as shown in Fig. 3 and the fully optimized portrait as shown in Fig. 4. As Fig. 4 shows, the method applies skin smoothing accurately and only to the skin portion of the face, and the optimization effect is evident.

Claims (3)

1. A portrait optimization method with automatic mask detection over key facial regions, characterized by comprising the following steps:
(1) Read in a color portrait image, use an integral image to accelerate the computation of Haar features on the color image, train a strong classifier that distinguishes faces from non-faces with the AdaBoost algorithm, and detect the face region with this strong classifier;
(2) Within the face region detected in step 1, detect the key facial organ regions, namely the eyebrow, eye, nose, and mouth positions, and take the detected organ regions as the mask;
(3) Smooth the skin portion of the face in the color image with a bi-exponential edge-preserving smoothing filter, combine the smoothed color image with the mask from step 2, and obtain the optimized face image after sharpening.
2. The portrait optimization method with automatic mask detection over key facial regions according to claim 1, characterized in that step 2 comprises the following sub-steps:
(2.1) Convert the color image read in step 1 to a grayscale image, compute its gray-level histogram, and take the valley between the two peaks of the histogram as the gray threshold;
(2.2) Convert the grayscale image to a binary image using the gray threshold obtained in step 2.1;
(2.3) Divide the face region detected in step 1 into five regions according to the "three courts and five eyes" rule: the face rectangle containing the eyebrows, the face rectangle containing the left eye, the face rectangle containing the right eye, the face rectangle containing the nose, and the face rectangle containing the mouth;
(2.4) Apply a Haar cascade classifier to the left-eye and right-eye rectangles delimited in step 2.3 to obtain the eye regions;
(2.5) Traverse the pixel values of the left-eye and right-eye rectangles delimited in step 2.3; since the eye pixels have gray value 255 in the binary image and the other, skin pixels have gray value 0, this yields the eye regions;
(2.6) Combine the eye regions obtained in steps 2.4 and 2.5; the intersection of the two regions is the precise eye region, which is enclosed by a rectangle, namely the smallest rectangle containing all pixels of the intersection;
(2.7) Compute the horizontal and vertical integral projections of the inverted binary image of the eyebrow rectangle. In the vertical projection, the coordinates of the first and third valleys give the top and bottom of the eyebrow region; in the horizontal projection, the coordinates of the first and second peaks give the left and right ends of the left eyebrow, and those of the third and fourth peaks give the left and right ends of the right eyebrow. The two eyebrow regions are each enclosed by a rectangle;
(2.8) Form the inverted binary image of the nose rectangle, mark the contours present in it, and take the two marked contours containing the most pixels as the nostril regions, each enclosed by a rectangle;
(2.9) Compute the vertical integral projection of the grayscale image of the mouth rectangle; following a Gaussian-distribution assumption, compute the mean of the projection histogram and use it as a threshold; the largest connected region whose values fall below the threshold is the mouth region, enclosed by a rectangle;
(2.10) Take the seven organ rectangles within the face region as the mask.
3. The portrait optimization method with automatic mask detection over key facial regions according to claim 2, characterized in that step (3) is specifically:
(3.1) Take the pixel data of the color image as the input sequence x;
(3.2) Perform one forward (progressive) recursion and one backward (regressive) recursion on the input sequence x of step 3.1 to obtain the auxiliary sequences φ̄[k] and φ[k]. The forward recursion is:
φ̄[k] = (1 − ρ̄[k]λ)x[k] + ρ̄[k]λφ̄[k−1] (1)
where ρ̄[k] = r(x[k], φ̄[k−1]), r is the range filter, and λ ∈ [0, 1) controls the smoothness (the decay of the impulse response of the spatial convolution filter); x[k] is the current sample, k ∈ Z; φ̄[k−1] is the preceding sample.
The backward recursion is:
φ[k] = (1 − ρ[k]λ)x[k] + ρ[k]λφ[k+1] (2)
where ρ[k] = r(x[k], φ[k+1]), r is the range filter, and λ ∈ [0, 1) controls the smoothness (the decay of the impulse response of the spatial convolution filter); x[k] is the current sample, k ∈ Z; φ[k+1] is the sample following φ[k];
(3.3) Merge the auxiliary sequences φ̄[k] and φ[k] to obtain the output sequence y;
(3.4) Cover the color image corresponding to the output sequence y with the mask from step 2.10, and sharpen the combined image to obtain the final optimized portrait.
CN201510184292.3A 2015-04-17 2015-04-17 A kind of portrait optimization method of face key area automatic detection masking-out Active CN104794693B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510184292.3A CN104794693B (en) 2015-04-17 2015-04-17 A kind of portrait optimization method of face key area automatic detection masking-out

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510184292.3A CN104794693B (en) 2015-04-17 2015-04-17 A kind of portrait optimization method of face key area automatic detection masking-out

Publications (2)

Publication Number Publication Date
CN104794693A true CN104794693A (en) 2015-07-22
CN104794693B CN104794693B (en) 2017-07-14

Family

ID=53559473

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510184292.3A Active CN104794693B (en) 2015-04-17 2015-04-17 A kind of portrait optimization method of face key area automatic detection masking-out

Country Status (1)

Country Link
CN (1) CN104794693B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105205646A (en) * 2015-08-07 2015-12-30 江苏诚创信息技术研发有限公司 Automatic roll call system and realization method thereof
CN105354793A (en) * 2015-11-25 2016-02-24 小米科技有限责任公司 Facial image processing method and device
CN106169177A (en) * 2016-06-27 2016-11-30 北京金山安全软件有限公司 A kind of image mill skin method, device and electronic equipment
CN107341775A (en) * 2017-06-16 2017-11-10 广东欧珀移动通信有限公司 image processing method and device
CN107480577A (en) * 2016-06-07 2017-12-15 深圳市珍爱网信息技术有限公司 A kind of face sincerity recognition methods and device
CN107800966A (en) * 2017-10-31 2018-03-13 广东欧珀移动通信有限公司 Method, apparatus, computer-readable recording medium and the electronic equipment of image procossing
CN108427918A (en) * 2018-02-12 2018-08-21 杭州电子科技大学 Face method for secret protection based on image processing techniques
CN109376618A (en) * 2018-09-30 2019-02-22 北京旷视科技有限公司 Image processing method, device and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101526997A (en) * 2009-04-22 2009-09-09 无锡名鹰科技发展有限公司 Embedded infrared face image identifying method and identifying device
US20100177981A1 (en) * 2009-01-12 2010-07-15 Arcsoft Hangzhou Co., Ltd. Face image processing method
CN104331868A (en) * 2014-11-17 2015-02-04 厦门美图网科技有限公司 Optimizing method of image border

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100177981A1 (en) * 2009-01-12 2010-07-15 Arcsoft Hangzhou Co., Ltd. Face image processing method
CN101526997A (en) * 2009-04-22 2009-09-09 无锡名鹰科技发展有限公司 Embedded infrared face image identifying method and identifying device
CN104331868A (en) * 2014-11-17 2015-02-04 厦门美图网科技有限公司 Optimizing method of image border

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PHILIPPE THEVENAZ et al.: "Bi-Exponential Edge-Preserving Smoother", IEEE Transactions on Image Processing *
YANG Xinquan: "Research on Face Detection Based on Skin-Color Segmentation and the Continuous AdaBoost Algorithm", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105205646A (en) * 2015-08-07 2015-12-30 江苏诚创信息技术研发有限公司 Automatic roll call system and realization method thereof
CN105354793A (en) * 2015-11-25 2016-02-24 小米科技有限责任公司 Facial image processing method and device
CN107480577A (en) * 2016-06-07 2017-12-15 深圳市珍爱网信息技术有限公司 A kind of face sincerity recognition methods and device
CN106169177A (en) * 2016-06-27 2016-11-30 北京金山安全软件有限公司 A kind of image mill skin method, device and electronic equipment
CN107341775A (en) * 2017-06-16 2017-11-10 广东欧珀移动通信有限公司 image processing method and device
CN107800966A (en) * 2017-10-31 2018-03-13 广东欧珀移动通信有限公司 Method, apparatus, computer-readable recording medium and the electronic equipment of image procossing
CN107800966B (en) * 2017-10-31 2019-10-18 Oppo广东移动通信有限公司 Method, apparatus, computer readable storage medium and the electronic equipment of image procossing
CN108427918A (en) * 2018-02-12 2018-08-21 杭州电子科技大学 Face method for secret protection based on image processing techniques
CN108427918B (en) * 2018-02-12 2021-11-30 杭州电子科技大学 Face privacy protection method based on image processing technology
CN109376618A (en) * 2018-09-30 2019-02-22 北京旷视科技有限公司 Image processing method, device and electronic equipment

Also Published As

Publication number Publication date
CN104794693B (en) 2017-07-14

Similar Documents

Publication Publication Date Title
CN104794693A (en) Human image optimization method capable of automatically detecting mask in human face key areas
CN104834898B (en) A kind of quality classification method of personage's photographs
Lin Face detection in complicated backgrounds and different illumination conditions by using YCbCr color space and neural network
Gunay et al. Automatic age classification with LBP
CN106960202B (en) Smiling face identification method based on visible light and infrared image fusion
JP4414401B2 (en) Facial feature point detection method, apparatus, and program
CN103902958A (en) Method for face recognition
Venkatesh et al. A novel approach to classification of facial expressions from 3D-mesh datasets using modified PCA
CN102622589A (en) Multispectral face detection method based on graphics processing unit (GPU)
CN106570447B (en) Based on the matched human face photo sunglasses automatic removal method of grey level histogram
Richardson et al. Extracting scar and ridge features from 3D-scanned lithic artifacts
CN104008364B (en) Face identification method
Kim et al. Robust facial landmark extraction scheme using multiple convolutional neural networks
CN108230297B (en) Color collocation assessment method based on garment replacement
Shen et al. Image based hair segmentation algorithm for the application of automatic facial caricature synthesis
Ecins et al. Shadow free segmentation in still images using local density measure
US20140050404A1 (en) Combining Multiple Image Detectors
Vezzetti et al. Application of geometry to rgb images for facial landmark localisation-a preliminary approach
Agrawal et al. Support Vector Machine for age classification
Zhang et al. Symmetry-aware face completion with generative adversarial networks
Yi et al. Face detection method based on skin color segmentation and facial component localization
Jadhav et al. Introducing Celebrities in an Images using HAAR Cascade algorithm
Nguyen et al. Enhanced age estimation by considering the areas of non-skin and the non-uniform illumination of visible light camera sensor
Anchit et al. Comparative analysis of Haar and Skin color method for face detection
CN106940792A (en) The human face expression sequence truncation method of distinguished point based motion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant