CN106570447A - Face photo sunglass automatic removing method based on gray histogram matching - Google Patents


Info

Publication number
CN106570447A
Authority
CN
China
Prior art keywords
eyes
target
histogram
gray
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510941267.5A
Other languages
Chinese (zh)
Other versions
CN106570447B (en)
Inventor
黄开竹
江浩川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201510941267.5A priority Critical patent/CN106570447B/en
Publication of CN106570447A publication Critical patent/CN106570447A/en
Application granted granted Critical
Publication of CN106570447B publication Critical patent/CN106570447B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/273 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion, removing elements interfering with the pattern to be recognised
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses an automatic method for removing sunglasses from face photos based on gray-histogram matching. The method comprises the following steps: an initial face image is acquired, and from its feature-point information a description of the full facial contour and the positions and contour lines of the facial organs are obtained; the specific positions of the feature points describing the left and right eyes and the two eyebrows are obtained, and the horizontal and vertical extreme values of the left and right eyes are found so as to outline a target image region; a reference face image without sunglasses is obtained; the gray histograms of the eyebrow region and of the eye region inside the target image region are computed according to the obtained positions of the left and right eyes and the two eyebrows; histogram equalization is applied to the modified target gray histogram so that its gray distribution completely covers [0,255]; and the union of the obtained sunglasses target regions is taken to give the corrected sunglasses target region. The final recognition rate is greatly improved, which enhances the usability and practicality of the technology.

Description

Automatic method for removing sunglasses from face photos based on gray-histogram matching
Technical field
The present invention relates to face recognition and related fields, and in particular to an automatic method for removing sunglasses from face photos based on gray-histogram matching.
Background technology
In fields such as machine learning, pattern recognition and computer vision, face recognition is a popular topic of research and application. The technology performs identity recognition, or extracts and evaluates other facial information, directly from a person's facial features, and is widely used in areas such as security (access-control and surveillance systems, smartphone unlocking), entertainment (games, facial-expression imitation) and services (service robots).
Existing work in this field has mainly concentrated on extracting and recognizing faces that do not wear any occluding object (glasses, hats or masks); for face photos with glasses, it has focused mainly on glasses without tinted lenses. Tests show that once a face wears tinted sunglasses, the recognition rate drops sharply, which severely limits the promotion and application of the technology. To improve its usability and practicality, this patent proposes a method that restores the image information in the region covered by the sunglasses, thereby greatly improving the final recognition rate.
The content of the invention
The object of the present invention is to provide an automatic method, based on gray-histogram matching, for removing sunglasses from face photos. The method overcomes the deficiency that a face wearing tinted sunglasses suffers a sharply reduced recognition rate, which severely hinders the promotion and application of the technology; it greatly improves the final recognition rate and enhances the usability and practicality of the technology.
To achieve the above object, the technical solution adopted by the present invention is an automatic method for removing sunglasses from face photos based on gray-histogram matching, comprising the following steps:
Step 1: acquire the initial face image and obtain 79 feature points with a trained active shape model, so that, from the 79 feature points, a description of the full facial contour and the feature-point positions and contour lines of the facial organs are obtained;
Step 2: from the 79 feature points obtained in Step 1, select the two feature points describing the centers of the left and right eyeballs, rotate the whole photo by -θ to obtain the rotated initial face image, and repeat Step 1 on the rotated image to obtain the 79 corrected feature points, which describe the corrected full facial contour and the corrected feature-point positions and contour lines of the facial organs; the rotation angle is as follows:
θ = arctan((y_r - y_l) / (x_r - x_l)), where θ is the deflection angle of the actual photo, (x_l, y_l) are the pixel coordinates of the left eye center, and (x_r, y_r) are the pixel coordinates of the right eye center;
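As an illustration of this rotation correction, the following Python/OpenCV sketch (a minimal example under assumed variable and function names, not the patent's own code) computes the deflection angle from the two eye centers and rotates the photo so that the eye line becomes horizontal:

import cv2
import numpy as np

def correct_roll(image, left_eye, right_eye):
    """Rotate the photo so that the two eye centers lie on a horizontal line (i.e. apply -theta)."""
    (xl, yl), (xr, yr) = left_eye, right_eye
    theta = np.degrees(np.arctan2(yr - yl, xr - xl))   # deflection angle of the eye line, in degrees
    h, w = image.shape[:2]
    center = ((xl + xr) / 2.0, (yl + yr) / 2.0)        # rotate about the midpoint between the eyes
    M = cv2.getRotationMatrix2D(center, theta, 1.0)    # aligns the eye line with the horizontal
    return cv2.warpAffine(image, M, (w, h))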
Step 3: according to the specific positions of the feature points describing the two eyes and the two eyebrows among the 79 corrected feature points from Step 2, obtain their horizontal and vertical extreme values, so as to outline the target picture region covering the left and right eyes and their surroundings;
Step 4: within the facial region around the eyes, the gray distribution of the inner eye part differs markedly from that of the surrounding skin: the inner eye part is mainly white and black, so its gray values concentrate at the two end points of the gray range, while the skin around the eyes is mainly gray, so its gray values concentrate in the middle of the range. Therefore, obtain a reference face image without sunglasses, apply Steps 1 to 3 to it to obtain the reference picture region covering the left and right eyes and their surroundings, divide the reference picture into an eye region and a surrounding-skin region, and compute the target histogram of each region to obtain the corresponding eye reference gray histogram and surrounding-skin reference gray histogram. These histograms give the number of pixels of each gray level in the image and thus the frequency with which each gray level occurs, expressed as p_x(i) = n_i / n, i ∈ {0, 1, ..., 255}, where p_x(i) is the frequency of pixels with value i, n_i is the number of such pixels, and n is the total number of pixels in the image;
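As a minimal NumPy sketch (illustrative only; the function name is an assumption), the normalized gray histogram p_x(i) = n_i / n of an 8-bit region can be computed as follows; in the spirit of Step 4 it would be called once on the eye part and once on the surrounding-skin part of the reference region:

import numpy as np

def gray_histogram(gray_region):
    """Normalized gray histogram p_x(i) = n_i / n over the 256 gray levels of an 8-bit region."""
    counts = np.bincount(gray_region.ravel(), minlength=256)  # n_i for i = 0..255
    return counts / counts.sum()                              # p_x(i); the entries sum to 1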
Step 5-1: using the positions of the two eyes and the two eyebrows obtained in Step 3, compute the gray histograms of the eyebrow region and of the eye region inside the target picture region; match the histogram of the eyebrow region to the gray distribution of the image pixels of the forehead above it, and match the histogram of the eye region to the gray histogram of the image pixels of the corresponding surrounding skin, so as to obtain the modified target picture region in which the eyebrows and eyes have been erased; then compute its gray histogram to obtain the modified target gray histogram;
Step 5-2: apply histogram equalization to the modified target gray histogram, so that its gray distribution completely covers [0,255];
Step 5-3: introduce an upper and a lower threshold, acting like a band-pass filter: all pixels above the upper threshold or below the lower threshold are mapped to the highest gray level (255), while pixels between the lower and upper thresholds are left unchanged, giving the band-pass-filtered modified target picture;
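The band-pass-like threshold of Step 5-3 can be sketched as follows (an illustrative snippet; 50 and 75 are the preferred values given later in the text):

import numpy as np

def band_pass_threshold(gray, lower=50, upper=75):
    """Map pixels outside [lower, upper] to 255; leave pixels inside the band unchanged."""
    out = gray.copy()
    out[(gray < lower) | (gray > upper)] = 255
    return out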
Step 5-4: for the band-pass-filtered modified target picture, using the specific positions of the two eyes and of the eye centers obtained in Step 3, perform an operation similar to Step 3 to obtain the horizontal and vertical extreme values, so as to obtain the bounding boxes of the left and right eyes, and compute the length and height of each bounding box in pixels;
Step 5-5: run edge detection based on the gray-intensity gradient on the picture obtained by the above process to get the initial boundary contours. For each initial contour, compute the distance from every contour point to the corresponding eye center, and set an upper and a lower threshold on this distance: the upper threshold is half of the corresponding eye length, and the lower threshold is half of the corresponding eye height. Compare each point's distance to the center and count: if more than thirty percent of the points on a contour have a distance above the upper threshold or below the lower threshold, judge the contour to be an invalid glasses contour and reject it. If all contours are rejected, the face picture can be judged as not wearing glasses; if some contour remains, it can only be judged that the face picture may be wearing glasses;
Step 6: classify the pixels of the target region image obtained in Step 3 with the K-Means algorithm.
Since, by visual inspection, the part covered by the sunglasses differs markedly in gray value from the uncovered part, a K-Means clustering with two clusters is used. The concrete implementation is as follows:
(1) From all the pixels of the target picture, randomly select 2 pixels as the cluster centroids, denoted μ1 and μ2;
(2) Repeat the following procedure until convergence: {
1. Traverse all pixels in the region and, for each pixel, compute the cluster it should belong to: c(i) := arg min_j ||x(i) - μ_j||²,
where x(i) is the gray value of the current pixel;
2. Recalculate the centroids of the two clusters: μ_j := Σ_i 1{c(i) = j}·x(i) / Σ_i 1{c(i) = j},
where the indicator function satisfies 1{true} = 1 and 1{false} = 0 }
After convergence, compare the average gray values of the two clusters and take the cluster with the lower average gray value as the suspected lens region.
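A compact NumPy sketch of this two-cluster pixel classification (an illustration of the idea rather than the patent's exact implementation; function and variable names are assumptions):

import numpy as np

def kmeans_lens_mask(gray_region, n_iter=50, seed=0):
    """Cluster pixel gray values into 2 clusters and return a mask of the darker cluster (suspected lens)."""
    x = gray_region.ravel().astype(np.float64)
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=2, replace=False)                     # step (1): two random pixels as centroids
    for _ in range(n_iter):                                       # step (2): alternate assignment and update
        c = np.argmin(np.abs(x[:, None] - mu[None, :]), axis=1)   # c(i) = argmin_j ||x(i) - mu_j||^2
        new_mu = np.array([x[c == j].mean() if np.any(c == j) else mu[j] for j in (0, 1)])
        if np.allclose(new_mu, mu):                               # converged
            break
        mu = new_mu
    dark = int(np.argmin(mu))                                     # cluster with the lower average gray value
    return (c == dark).reshape(gray_region.shape)                 # suspected lens region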
Step 7-1: take the union of the sunglasses target regions obtained in Step 5 and Step 6 as the corrected sunglasses target region;
Step 7-2: for the corrected sunglasses target region, count the pixels in the regions corresponding to the left and right eyes respectively. For an input face picture of size 256*256, if the numbers of pixels covered by both the left-lens region and the right-lens region lie within [600, 1800], the face photo is confirmed to really be wearing glasses; if only one lens region satisfies the above condition, the region that does not satisfy it is deleted, the other lens region is mirrored onto its side, and the photo is still confirmed as a face wearing glasses; if neither region satisfies the above condition, the photo is confirmed as a face not wearing glasses.
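Step 7-2 reduces to a pixel-count check on the two lens masks; the sketch below (illustrative only, assuming the two masks are boolean arrays of equal size, each cropped around one eye) also shows the mirroring fallback:

import numpy as np

def validate_lens_masks(left_mask, right_mask, lo=600, hi=1800):
    """Return (wearing_glasses, left_mask, right_mask) after the pixel-count check of Step 7-2."""
    ok_l = lo <= int(left_mask.sum()) <= hi
    ok_r = lo <= int(right_mask.sum()) <= hi
    if ok_l and ok_r:
        return True, left_mask, right_mask                    # both lenses detected
    if ok_l or ok_r:
        good = left_mask if ok_l else right_mask
        mirrored = good[:, ::-1]                               # mirror the valid lens onto the other side
        return (True, good, mirrored) if ok_l else (True, mirrored, good)
    return False, None, None                                   # neither region passes: no glasses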
Step 8-1: according to the corrected sunglasses target region from Step 7 and the eye feature points, divide the corrected sunglasses target region into a sunglasses target eye region and a sunglasses target surrounding-skin region, and compute the histogram of each to obtain the corresponding target eye gray histogram and target surrounding-skin gray histogram;
Step 8-2: apply histogram equalization to the target eye gray histogram and the target surrounding-skin gray histogram to obtain the equalized target eye gray histogram and the equalized target surrounding-skin gray histogram;
and apply histogram equalization to the eye reference gray histogram and the surrounding-skin reference gray histogram of Step 4 to obtain the equalized eye reference gray histogram and the equalized surrounding-skin reference gray histogram;
Step 8-3: using the probability value corresponding to a given target gray value in the equalized target eye gray histogram, find in the equalized eye reference gray histogram the reference gray value whose probability value is closest, then replace that target gray value with the found reference gray value, and apply the same operation in turn to the other target gray values in the equalized target eye gray histogram;
Step 8-4: likewise, using the probability value corresponding to a given target gray value in the equalized target surrounding-skin gray histogram, find in the equalized surrounding-skin reference gray histogram the reference gray value whose probability value is closest, then replace that target gray value with the found reference gray value, and apply the same operation in turn to the other target gray values in the equalized target surrounding-skin gray histogram, thereby completing the automatic removal of the sunglasses from the face photo.
On the basis of the above technical solution, further improved technical schemes are as follows:
1. In the above scheme, the reference face image without sunglasses in Step 4 is chosen from the "Japanese Female Facial Expression database", all of whose photos are without glasses.
2. In the above scheme, in Step 5-3 the lower threshold is 50 and the upper threshold is 75.
3. In the above scheme, in Step 3 the horizontal and vertical extreme values of the left and right eyes are obtained respectively from the specific positions of the feature points of the two eyes and the two eyebrows among the 79 corrected feature points from Step 2, and, after a suitable enlargement, the target picture region covering the left and right eyes and their surroundings is outlined.
Owing to the use of the above technical solution, the present invention has the following advantages over the prior art:
1. The face-photo sunglasses removal method based on gray-histogram matching of the present invention overcomes the deficiency that a face wearing tinted sunglasses suffers a sharply reduced recognition rate, which severely hinders the promotion and application of the technology; it greatly improves the final recognition rate and enhances the usability and practicality of the technology.
2. During actual testing of the face-photo sunglasses removal method based on gray-histogram matching of the present invention, it was found that the Canny edge detector alone cannot reliably extract all sunglasses-lens regions; especially when the lens transmittance is high (>=75%), part of the glasses boundary is not correctly judged as belonging to the glasses and the obtained reliability drops sharply. Conversely, the scheme of K-Means clustering alone, which classifies the pixels of the target region image, sometimes fails to correctly judge the middle part of a lens as belonging to the eyewear. In view of this, the present invention proposes to take the union of the two detection schemes, so that both the boundary part and the middle part of the lenses are judged as glasses, which markedly improves the precision of lens-region extraction.
Description of the drawings
Figure 1 is a schematic diagram of the 79 facial feature points obtained automatically by the STASM software package used in the present invention;
Figure 2 shows the initial face image before and after rotation in the sunglasses removal method of the present invention;
Figure 3 is a schematic diagram of the position of the target picture region in the sunglasses removal method of the present invention;
Figure 4 is the reference target face image in the face-photo sunglasses removal method of the present invention;
Figure 5 is the image before recovery in the face-photo sunglasses removal method of the present invention;
Figure 6 is the image after recovery in the face-photo sunglasses removal method of the present invention;
Figure 7(a) is the modified target picture with eyebrows and eyes erased in Step 5-1 of the present invention;
Figure 7(b) is the histogram-equalized target picture in Step 5-2 of the present invention;
Figure 7(c) is the band-pass-filtered modified target picture obtained after thresholding in Step 5-3 of the present invention;
Figure 7(d) is the suspected lens region computed by the Canny operator in Step 5-5 of the present invention;
Figure 7(e) is the suspected lens region computed by K-Means in Step 6 of the present invention;
Figure 7(f) is the final lens region in Step 7 of the present invention;
Figure 8(a) is the gray distribution before equalization in the face-photo sunglasses removal method of the present invention;
Figure 8(b) is the gray distribution after equalization in the face-photo sunglasses removal method of the present invention.
Specific embodiment
The invention will be further described below with reference to an embodiment:
Embodiment: an automatic method for removing sunglasses from face photos based on gray-histogram matching. A method based on the "Active Shape Model (ASM)" is used to automatically locate the facial-organ contours (eyes, eyebrows, nose, mouth, ears) and the full facial contour in a frontal face photo. Then, according to the positions of the detected eyebrow and eye contours, the candidate region that may be covered by glasses is automatically cropped out by image cutting. The resulting contours are analyzed and processed to decide whether sunglasses are worn. Afterwards, for the cropped image region, the method of gray-histogram matching and the histogram of faces without sunglasses stored in a database (the target histogram) are used to automatically match the gray distribution of the target image region to the target histogram, so that the sunglasses are automatically removed. The method comprises the following steps:
Step 1: automatically obtain the contours and facial feature points
The "active shape model" learns, through training, the statistics of the feature-point distribution of the training image samples and the directions in which the feature points are allowed to vary, and then locates the corresponding feature points on a target image.
Suppose the training set contains N sample images, each with 79 manually labeled feature points. The coordinates of every point are recorded, normalized and vectorized. For each feature point, the corresponding local gray model is computed as the feature vector used for local feature-point adjustment. The above processing is applied to all sample images in the training set, and principal component analysis is used for dimensionality reduction, giving the statistical model of the facial contour and organ-contour feature points: x = x̄ + P·b, where x̄ is the mean shape, P consists of the first t eigenvectors of the covariance matrix of the vectorized coordinates, and b is the vector of corresponding shape coefficients.
For a new frontal face picture with unknown feature points, the contour is first initialized with the mean shape x̄ computed above. Then, for each feature point, an iterative search is performed: when looking for the next position of a feature point, the local gray model is used to find, along the direction perpendicular to the contour line, the position whose distance (Euclidean or Mahalanobis) to the original corresponding feature point is minimal; the current feature point is moved there and called a candidate point. One search pass over all feature points yields a candidate contour. The current model is then adjusted through its parameters so that it approximates the candidate contour as closely as possible, and convergence is reached by iterating. In this patent, the active shape model is used to search for the full facial contour and the contours of the facial organs (eyebrows, eyes, nose, ears, mouth). A software package based on the ASM model (STASM) is used; after applying it to a new frontal face photo, the 79 feature points are extracted automatically.
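The patent relies on the STASM package for its 79 ASM feature points. As a rough stand-in for readers without STASM, the sketch below uses dlib's 68-point landmark model (an assumed substitute, not the patent's tool, and the model file path is an assumption) merely to obtain the eye and eyebrow points needed by the later steps:

import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# assumed 68-point predictor file; any dlib-compatible landmark model works here
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def facial_landmarks(image_bgr):
    """Return 68 landmarks (a stand-in for the 79 STASM points) as an (N, 2) integer array, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    pts = np.array([(p.x, p.y) for p in shape.parts()])
    # in dlib's 68-point scheme the eyes are points 36-41 and 42-47, the eyebrows 17-21 and 22-26
    return pts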
Step 2: automatically correct the horizontal deflection of the face photo according to the feature points obtained above
From the 79 feature points obtained in Step 1, select the two feature points describing the centers of the left and right eyeballs, and correct the horizontal deflection of the whole photo. The correction is as follows:
θ = arctan((y_r - y_l) / (x_r - x_l)), where θ is the measured deflection angle of the photo, (x_l, y_l) are the pixel coordinates of the left eye center, and (x_r, y_r) are the pixel coordinates of the right eye center. The whole photo therefore needs to be rotated by -θ.
Step 3: automatically crop the picture region covering the eyes and their surroundings (the target picture region)
According to the specific positions of the feature points of the two eyes and the two eyebrows among the 79 feature points obtained by STASM, the extreme positions of these points in the four directions (up, down, left, right) are computed and suitably scaled, which roughly gives the picture region covering the left and right eyes and their surroundings.
In the corresponding figure, the two pictures in the upper row are the original images, and the two pictures in the lower row are the cropped left-eye and right-eye regions with their surroundings, i.e. the target picture regions.
Step 4: compute the average gray histogram of the target region of faces not wearing glasses, as the target histogram:
A gray histogram gives the number of pixels of each gray level in an image and thus the frequency with which each gray level occurs. If the overall pixel gray level of an image is regarded as a random variable, its distribution reflects the statistical characteristics of the image; it can be described and characterized by a probability density function, which appears as the gray histogram. Mathematically: p_x(i) = n_i / n, i ∈ {0, 1, ..., 255}, where p_x(i) is the frequency of pixels with value i, n_i is the number of such pixels, and n is the total number of pixels in the image.
First, because the gray distribution near the eyes of a face not wearing glasses is relatively simple, all pictures of the "Japanese Female Facial Expression (JAFFE) Database", none of which wear glasses, are used: the corresponding target region of every picture is cropped, its gray histogram is computed, and the average histogram is calculated and used as the target histogram for the subsequent matching.
Because the region covered by the sunglasses lenses contains both the entire eye part and the surrounding-skin part, whose gray distributions differ markedly, the computed target histogram is split into an eye target histogram and a surrounding-skin target histogram.
Step 5: crop the glasses contour from the frontal face photo wearing sunglasses and its surrounding region using the Canny edge detector
(i) Sunglasses strongly disturb the gray characteristics of the eyes and their surroundings. Therefore, when discussing how to recover the gray information occluded by the sunglasses, it is essential to select and locate the sunglasses contour accurately. At the same time, the algorithm for extracting the sunglasses contour described below can also effectively determine whether the person in the photo is wearing sunglasses.
(ii) The gray distributions of the eyebrows and of the eye parts are similar to that of the sunglasses, and spatially they overlap heavily with it, which would interfere with the subsequent extraction of the glasses contour. Therefore, using the histogram-matching algorithm (the concrete procedure is described in detail below) and the accurate eyebrow and eye positions obtained automatically by STASM, the gray histogram of the eyebrow part is matched to the gray distribution of the forehead pixels above it, and the gray histogram of the eye part is matched to the gray histogram of the pixels of the corresponding surrounding skin, so that the blurring effect of the eyebrows and eyes on the lens contour is reduced as far as possible.
(iii) To better highlight the part occluded by the glasses, histogram equalization is then applied to the target-region image to improve its global contrast. In fact, the function implementing histogram equalization is the cumulative distribution function (CDF) of the pixel frequencies over the whole gray range, cdf(k) = Σ_{i=0}^{k} p_x(i); whatever the original gray distribution of the target-region image, after this equalization step the gray distribution of the target region completely covers [0, 255].
(iv) After sunglasses are put on, the gray values of the part covered by the sunglasses are lower than when no sunglasses are worn, and they concentrate in a certain gray interval. A threshold operation can therefore be introduced, i.e. a band-pass filter: pixels above or below this gray interval are all mapped to the highest gray level (255), which further highlights the glasses contour. Based on experimental data, in this patent the lower threshold of this band-pass filter is chosen as 50 and the upper threshold as 75.
(v) The Canny edge detector is then applied to the picture obtained by the above process, which yields the rough contour of the sunglasses. Canny edge detection comprises three main steps: noise reduction, finding the brightness gradients in the image, and tracing the edges in the image.
(1) Noise reduction: the original image is convolved with a Gaussian smoothing kernel, giving an image slightly blurred with respect to the original. In this way, single-pixel noise becomes almost negligible on the Gaussian-smoothed image;
(2) Finding the brightness gradients in the image: edges in the image may point in different directions, so the Canny edge detector uses 4 different operators to detect edges in the horizontal, vertical and two diagonal directions. The convolution of the original image with each operator is stored and, for each pixel, the maximum convolution response and the corresponding edge direction are identified and recorded. In this way, the brightness gradient and its direction are obtained for every pixel of the original image;
(3) The Canny detector uses hysteresis thresholding to decide whether a detected large brightness gradient is a valid edge. There are a high threshold and a low threshold. Assuming that the important edges in the image are continuous curves, the blurred parts of a given curve are tracked while noise pixels that do not form a curve are prevented from being misread as edges. We therefore start from the larger threshold (points above it have a high probability of being true edges), use the direction information obtained before and, with the smaller threshold (which is better suited to tracking the blurred parts of a curve), trace the whole edge through the entire image, which guarantees that the starting point of the curve is eventually reached. The result is a binary image in which the value 0 or 1 of each point indicates whether the current pixel is an edge point.
(vi) The contours detected by the Canny operator contain not only the lens contour but also other edges that may appear in the target image region (for example the eyes, facial blemishes, etc.). Therefore, according to the actual size of the glasses and their shape characteristics, the glasses contour must be extracted accurately from the many detected contours and post-processed. The steps involved are as follows:
(1) Remove the invalid contours unrelated to the lenses (contour center too far from the corresponding eye center, enclosed area too small, etc.). If the number of remaining contours is zero, the face photo is judged as not wearing glasses; otherwise it can be judged as wearing glasses;
(2) Remove, on each contour, all points that coincide or are too close to each other;
(3) Rearrange the points on each contour in clockwise (or counterclockwise) order;
(4) Smooth the contour points;
(5) Interpolate the contour points so that the distance between adjacent points on a contour is less than one pixel;
(6) Fill the contours obtained above with a flood-fill algorithm, thereby forming the final lens region of the glasses.
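A sketch of the edge detection and contour filtering of this step (an OpenCV-based illustration, not the patent's implementation: the Canny hysteresis thresholds are placeholders, the acceptance test is the simplified thirty-percent distance check of Step 5-5, and the accepted contours are filled with drawContours instead of the flood fill described above):

import cv2
import numpy as np

def canny_lens_mask(band_passed, eye_center, eye_len, eye_h):
    """Detect edges, keep contours plausibly belonging to a lens around eye_center, and fill them."""
    edges = cv2.Canny(band_passed, 50, 150)                         # illustrative hysteresis thresholds
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)  # OpenCV 4.x signature
    mask = np.zeros_like(band_passed)
    upper, lower = eye_len / 2.0, eye_h / 2.0                        # distance thresholds from Step 5-5
    for cnt in contours:
        pts = cnt.reshape(-1, 2).astype(np.float64)
        d = np.linalg.norm(pts - np.asarray(eye_center, dtype=np.float64), axis=1)
        if np.mean((d > upper) | (d < lower)) > 0.3:                 # >30% of points out of range: invalid
            continue
        cv2.drawContours(mask, [cnt], -1, 255, thickness=-1)         # fill the accepted contour
    return mask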
Step 6: crop the glasses contour from the frontal face photo wearing sunglasses and its surrounding region using K-Means.
It was found during actual testing that the Canny edge detector alone cannot reliably extract all sunglasses-lens regions, especially when the lens transmittance is high (>=75%), in which case the obtained reliability drops sharply. This patent therefore additionally proposes a scheme that uses K-Means clustering to classify the pixels of the target region image.
Since, by visual inspection, the part covered by the sunglasses differs markedly in gray value from the uncovered part, a K-Means clustering with two clusters is used. The concrete implementation is as follows:
(1) Randomly select 2 cluster centroids, μ1 and μ2;
(2) Repeat the following until convergence: {
1. For each sample i, compute the cluster it should belong to: c(i) := arg min_j ||x(i) - μ_j||²;
2. For each class j, recalculate its centroid: μ_j := Σ_i 1{c(i) = j}·x(i) / Σ_i 1{c(i) = j},
where the indicator function satisfies 1{true} = 1 and 1{false} = 0.
}
The final detected sunglasses-lens region is the union of the regions detected in Step 5 and Step 6.
Step 7: recover the picture region covered by the sunglasses with the gray-histogram matching method. The concrete steps of the gray-histogram matching are as follows:
(1) For a face image that has been detected and judged to be wearing sunglasses, histogram equalization (using the cumulative distribution function, CDF) is first applied to the detected lens region; at the same time, the target histogram obtained in the previous steps is also histogram-equalized, so that both distributions cover [0, 255] and lie in the same projection space. The mathematical basis of the equalization is
s_k = T(r_k) = Σ_{j=0}^{k} n_j / n and v_k = G(z_k) = Σ_{j=0}^{k} m_j / m,
where s_k and v_k denote the equalized gray levels of the picture to be recovered and of the target picture respectively, n_j / n and m_j / m are the fractions of all pixels with gray value j in the picture to be recovered and in the target picture respectively, and T(r_j) and G(z_i) denote the histogram-equalization mappings of the two pictures. The gray histograms obtained by this operation are shown in Figure 8: Figures 8(a) and 8(b) show the gray distributions before and after equalization respectively, where Figure 8(a) shows the gray histograms of the original image (the image to be recovered, in red) and of the target image (in blue), and Figure 8(b) shows the distributions of both after histogram equalization.
(2) As described above, the gray distributions of the two pictures have been equalized and made to cover all gray values. They can therefore be regarded as lying in the same space, i.e. the condition v_k = s_k holds, that is:
T(r_j) = G(z_i)
From this it can be back-calculated that:
z_i = G^{-1}[T(r_j)]
According to the above formula and the gray histograms mentioned above, for any pixel value r_j in the photo to be recovered, its value T(r_j) is first found from its CDF; then the value G(z_i) equal to T(r_j) is found in the CDF of the target histogram, and the inverse function G^{-1} is applied to it, which gives the target pixel value z_i corresponding to the pixel value r_j. For every possible value from 0 to 255, the target pixel value in one-to-one correspondence with it can be found in this way.
In practice, the gray histograms of the detected lens region in the target image (split into the eye region and the skin region near the eyes) are first obtained statistically, and the target gray histogram stored in the database is read; then the distributions of both after histogram equalization are computed with the CDF. Now, for any gray value r_j of the original image, its equalized probability value T(r_j) is found first, and on the CDF of the target histogram the probability value G(z_i) equal to it (or closest in value) is found, i.e. T(r_j) = G(z_i); the gray value z_i corresponding to G(z_i) is then found by working backwards. Thus, for every gray value r_j, the one-to-one corresponding target gray value z_i can be found, which establishes a mapping. When recovering the image, a simple table lookup according to this mapping replaces each original pixel value r_j by the corresponding value z_i, which completes the whole glasses-repair process.
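The whole recovery therefore reduces to a 256-entry lookup table built from the two CDFs. A minimal NumPy sketch follows (illustrative only; as the text describes, it would be applied separately to the eye region and to the surrounding-skin region, each with its corresponding reference histogram):

import numpy as np

def match_histogram_lut(source_hist, target_hist):
    """Build the mapping r_j -> z_i by pairing the closest values of the two CDFs."""
    T = np.cumsum(source_hist) / np.sum(source_hist)        # T(r_j): CDF of the region to be recovered
    G = np.cumsum(target_hist) / np.sum(target_hist)        # G(z_i): CDF of the target (reference) histogram
    # for every T(r_j), pick the gray level z_i whose G(z_i) is closest, i.e. z_i = G^{-1}[T(r_j)]
    return np.array([np.argmin(np.abs(G - t)) for t in T], dtype=np.uint8)

def recover_region(region, lut, lens_mask):
    """Replace each covered pixel value r_j by the corresponding target value z_i via table lookup."""
    out = region.copy()
    out[lens_mask] = lut[region[lens_mask]]
    return out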
The above embodiment only illustrates the technical concept and features of the present invention; its purpose is to enable those skilled in the art to understand the content of the invention and implement it accordingly, and it does not limit the scope of protection of the invention. All equivalent changes or modifications made according to the spirit of the present invention shall fall within the scope of protection of the present invention.

Claims (4)

1. An automatic method for removing sunglasses from a face photo based on gray-histogram matching, characterized in that
it comprises the following steps:
Step 1: acquire the initial face image and obtain 79 feature points with a trained active shape model, so that, from the 79 feature points, a description of the full facial contour and the feature-point positions and contour lines of the facial organs are obtained;
Step 2: from the 79 feature points obtained in Step 1, select the two feature points describing the centers of the left and right eyeballs, rotate the whole photo by -θ to obtain the rotated initial face image, and repeat Step 1 on the rotated image to obtain the 79 corrected feature points, which describe the corrected full facial contour and the corrected feature-point positions and contour lines of the facial organs; the rotation angle is as follows:
θ = arctan((y_r - y_l) / (x_r - x_l)), where θ is the deflection angle of the actual photo, (x_l, y_l) are the pixel coordinates of the left eye center, and (x_r, y_r) are the pixel coordinates of the right eye center;
Step 3: according to the specific positions of the feature points of the two eyes and the two eyebrows among the 79 corrected feature points from Step 2, obtain respectively the horizontal and vertical extreme values of the left and right eyes, and outline the target picture region covering the left and right eyes and their surroundings;
Step 4: obtain a reference face image without sunglasses, apply Step 1 to Step 3 to the reference face image to obtain the reference picture region covering the left and right eyes and their surroundings, divide the reference picture into an eye region and a surrounding-skin region, compute the target histogram of the eye region and of the surrounding-skin region respectively, and obtain the corresponding eye reference gray histogram and surrounding-skin reference gray histogram; these histograms give the number of pixels of each gray level in the image and thus the frequency with which each gray level occurs, expressed as p_x(i) = n_i / n, i ∈ {0, 1, ..., 255}, where p_x(i) is the frequency of pixels with value i, n_i is the number of such pixels, and n is the total number of pixels in the image;
Step 5-1: using the positions of the two eyes and the two eyebrows obtained in Step 3, compute the gray histograms of the eyebrow region and of the eye region inside the target picture region; match the histogram of the eyebrow region to the gray distribution of the image pixels of the forehead above it, and match the histogram of the eye region to the gray histogram of the image pixels of the corresponding surrounding skin, so as to obtain the modified target picture region in which the eyebrows and eyes have been erased; then compute its gray histogram to obtain the modified target gray histogram;
Step 5-2: apply histogram equalization to the modified target gray histogram, so that its gray distribution completely covers [0,255];
Step 5-3: introduce an upper and a lower threshold, acting like a band-pass filter: all pixels above the upper threshold or below the lower threshold are mapped to the highest gray level (255), while pixels between the lower and upper thresholds are left unchanged, giving the band-pass-filtered modified target picture;
Step 5-4: for the band-pass-filtered modified target picture, using the specific positions of the two eyes and of the eye centers obtained in Step 3, perform an operation similar to Step 3 to obtain the horizontal and vertical extreme values, so as to obtain the bounding boxes of the left and right eyes, and compute the length and height of each bounding box in pixels;
Step 5-5: run edge detection based on the gray-intensity gradient on the picture obtained by the above process to get the initial boundary contours; for each initial contour, compute the distance from every contour point to the corresponding eye center, and set an upper and a lower threshold on this distance, the upper threshold being half of the corresponding eye length and the lower threshold half of the corresponding eye height; compare each point's distance to the center and count: if more than thirty percent of the points on a contour have a distance above the upper threshold or below the lower threshold, judge the contour to be an invalid glasses contour and reject it; if all contours are rejected, the face picture can be judged as not wearing glasses; if some contour remains, it can only be judged that the face picture may be wearing glasses;
Step 6: classify the pixels of the target region image obtained in Step 3 with the K-Means algorithm;
since, by visual inspection, the part covered by the sunglasses differs markedly in gray value from the uncovered part, a K-Means clustering with two clusters is used, implemented as follows:
(1) from all the pixels of the target picture, randomly select 2 pixels as the cluster centroids, denoted μ1 and μ2;
(2) repeat the following procedure until convergence: {
1. traverse all pixels in the region and, for each pixel, compute the cluster it should belong to: c(i) := arg min_j ||x(i) - μ_j||²,
where x(i) is the gray value of the current pixel;
2. recalculate the centroids of the two clusters: μ_j := Σ_i 1{c(i) = j}·x(i) / Σ_i 1{c(i) = j},
where the indicator function satisfies 1{true} = 1 and 1{false} = 0 }
after convergence, compare the average gray values of the two clusters and take the cluster with the lower average gray value as the suspected lens region;
Step 7-1: take the union of the sunglasses target regions obtained in Step 5 and Step 6 as the corrected sunglasses target region;
Step 7-2: for the corrected sunglasses target region, count the pixels in the regions corresponding to the left and right eyes respectively; for an input face picture of size 256*256, if the numbers of pixels covered by both the left-lens region and the right-lens region lie within [600, 1800], the face photo is confirmed to really be wearing glasses; if only one lens region satisfies the above condition, the lens region that does not satisfy it is deleted, the other lens region is mirrored onto its side, and the photo is still confirmed as a face wearing glasses; if neither region satisfies the above condition, the photo is confirmed as a face not wearing glasses;
Step 8-1: according to the corrected sunglasses target region from Step 7 and the eye feature points, divide the corrected sunglasses target region into a sunglasses target eye region and a sunglasses target surrounding-skin region, and compute the histogram of each to obtain the corresponding target eye gray histogram and target surrounding-skin gray histogram;
Step 8-2: apply histogram equalization to the target eye gray histogram and the target surrounding-skin gray histogram to obtain the equalized target eye gray histogram and the equalized target surrounding-skin gray histogram;
and apply histogram equalization to the eye reference gray histogram and the surrounding-skin reference gray histogram of Step 4 to obtain the equalized eye reference gray histogram and the equalized surrounding-skin reference gray histogram;
Step 8-3: using the probability value corresponding to a given target gray value in the equalized target eye gray histogram, find in the equalized eye reference gray histogram the reference gray value whose probability value is closest, then replace that target gray value with the found reference gray value, and apply the same operation in turn to the other target gray values in the equalized target eye gray histogram;
Step 8-4: using the probability value corresponding to a given target gray value in the equalized target surrounding-skin gray histogram, find in the equalized surrounding-skin reference gray histogram the reference gray value whose probability value is closest, then replace that target gray value with the found reference gray value, and apply the same operation in turn to the other target gray values in the equalized target surrounding-skin gray histogram, thereby completing the automatic removal of the sunglasses from the face photo.
2. The automatic method for removing sunglasses from a face photo based on gray-histogram matching according to claim 1, characterized in that: the reference face image without sunglasses in Step 4 is chosen from the "Japanese Female Facial Expression database", all of whose photos are without glasses.
3. The automatic method for removing sunglasses from a face photo based on gray-histogram matching according to claim 1, characterized in that: in Step 5-3, the lower threshold is 50 and the upper threshold is 75.
4. The automatic method for removing sunglasses from a face photo based on gray-histogram matching according to claim 1, characterized in that: in Step 3, the horizontal and vertical extreme values of the left and right eyes are obtained respectively from the specific positions of the feature points of the two eyes and the two eyebrows among the 79 corrected feature points from Step 2, and, after a suitable enlargement, the target picture region covering the left and right eyes and their surroundings is outlined.
CN201510941267.5A 2015-12-16 2015-12-16 Based on the matched human face photo sunglasses automatic removal method of grey level histogram Active CN106570447B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510941267.5A CN106570447B (en) 2015-12-16 2015-12-16 Based on the matched human face photo sunglasses automatic removal method of grey level histogram

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510941267.5A CN106570447B (en) 2015-12-16 2015-12-16 Based on the matched human face photo sunglasses automatic removal method of grey level histogram

Publications (2)

Publication Number Publication Date
CN106570447A true CN106570447A (en) 2017-04-19
CN106570447B CN106570447B (en) 2019-07-12

Family

ID=58508673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510941267.5A Active CN106570447B (en) 2015-12-16 2015-12-16 Based on the matched human face photo sunglasses automatic removal method of grey level histogram

Country Status (1)

Country Link
CN (1) CN106570447B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107730457A (en) * 2017-08-28 2018-02-23 广东数相智能科技有限公司 A kind of image completion method, apparatus, electronic equipment and storage medium
CN109977727A (en) * 2017-12-27 2019-07-05 广东欧珀移动通信有限公司 Sight protectio method, apparatus, storage medium and mobile terminal
CN110010228A (en) * 2019-03-26 2019-07-12 广州艾颜佳美容美发设备有限公司 A kind of facial skin rendering algorithm based on image analysis
US10769499B2 (en) * 2017-11-03 2020-09-08 Fujitsu Limited Method and apparatus for training face recognition model
CN113052120A (en) * 2021-04-08 2021-06-29 深圳市华途数字技术有限公司 Entrance guard's equipment of wearing gauze mask face identification
CN113095148A (en) * 2021-03-16 2021-07-09 深圳市雄帝科技股份有限公司 Method and system for detecting occlusion of eyebrow area, photographing device and storage medium
CN113435361A (en) * 2021-07-01 2021-09-24 南开大学 Mask identification method based on depth camera
CN117173158A (en) * 2023-10-25 2023-12-05 深圳市德海威实业有限公司 Intelligent detection method and system for quality of precise connector

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102163288A (en) * 2011-04-06 2011-08-24 北京中星微电子有限公司 Eyeglass detection method and device
CN103020579A (en) * 2011-09-22 2013-04-03 上海银晨智能识别科技有限公司 Face recognition method and system, and removing method and device for glasses frame in face image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102163288A (en) * 2011-04-06 2011-08-24 北京中星微电子有限公司 Eyeglass detection method and device
CN103020579A (en) * 2011-09-22 2013-04-03 上海银晨智能识别科技有限公司 Face recognition method and system, and removing method and device for glasses frame in face image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XUN WANG 等: "A New Facial Expression Recognition Method Based on Geometric Alignment and LBP Features", 《2014 IEEE 17TH INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE AND ENGINEERING》 *
ZHANG ZHIGANG ET AL.: "Method for removing glasses from face images", 《计算机工程与设计》 (Computer Engineering and Design) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107730457A (en) * 2017-08-28 2018-02-23 广东数相智能科技有限公司 A kind of image completion method, apparatus, electronic equipment and storage medium
CN107730457B (en) * 2017-08-28 2020-02-14 广东数相智能科技有限公司 Image completion method and device, electronic equipment and storage medium
US10769499B2 (en) * 2017-11-03 2020-09-08 Fujitsu Limited Method and apparatus for training face recognition model
CN109977727A (en) * 2017-12-27 2019-07-05 广东欧珀移动通信有限公司 Sight protectio method, apparatus, storage medium and mobile terminal
CN110010228A (en) * 2019-03-26 2019-07-12 广州艾颜佳美容美发设备有限公司 A kind of facial skin rendering algorithm based on image analysis
CN110010228B (en) * 2019-03-26 2022-12-23 广州艾颜佳美容美发设备有限公司 Face skin perspective algorithm based on image analysis
CN113095148A (en) * 2021-03-16 2021-07-09 深圳市雄帝科技股份有限公司 Method and system for detecting occlusion of eyebrow area, photographing device and storage medium
CN113052120A (en) * 2021-04-08 2021-06-29 深圳市华途数字技术有限公司 Entrance guard's equipment of wearing gauze mask face identification
CN113435361A (en) * 2021-07-01 2021-09-24 南开大学 Mask identification method based on depth camera
CN113435361B (en) * 2021-07-01 2023-08-01 南开大学 Mask identification method based on depth camera
CN117173158A (en) * 2023-10-25 2023-12-05 深圳市德海威实业有限公司 Intelligent detection method and system for quality of precise connector
CN117173158B (en) * 2023-10-25 2024-01-30 深圳市德海威实业有限公司 Intelligent detection method and system for quality of precise connector

Also Published As

Publication number Publication date
CN106570447B (en) 2019-07-12

Similar Documents

Publication Publication Date Title
CN106570447B (en) Based on the matched human face photo sunglasses automatic removal method of grey level histogram
Guo et al. Eyes tell all: Irregular pupil shapes reveal gan-generated faces
CN109344724B (en) Automatic background replacement method, system and server for certificate photo
CN103914676B (en) A kind of method and apparatus used in recognition of face
CN107403168B (en) Face recognition system
US7953253B2 (en) Face detection on mobile devices
CN103902958A (en) Method for face recognition
CN105205480B (en) Human-eye positioning method and system in a kind of complex scene
JP4893863B1 (en) Image processing apparatus and image processing method
CN104794693B (en) A kind of portrait optimization method of face key area automatic detection masking-out
CN103810491B (en) Head posture estimation interest point detection method fusing depth and gray scale image characteristic points
CN107609459A (en) A kind of face identification method and device based on deep learning
CN106960202A (en) A kind of smiling face's recognition methods merged based on visible ray with infrared image
CN104408462B (en) Face feature point method for rapidly positioning
CN101359365A (en) Iris positioning method based on Maximum between-Cluster Variance and gray scale information
CN108629336A (en) Face value calculating method based on human face characteristic point identification
CN103116749A (en) Near-infrared face identification method based on self-built image library
JP2007213377A (en) Facial feature point detection method, device and program
CN104091155A (en) Rapid iris positioning method with illumination robustness
JP6956986B1 (en) Judgment method, judgment device, and judgment program
CN106599785A (en) Method and device for building human body 3D feature identity information database
CN108416291A (en) Face datection recognition methods, device and system
CN114283052A (en) Method and device for cosmetic transfer and training of cosmetic transfer network
CN106611158A (en) Method and equipment for obtaining human body 3D characteristic information
CN103544478A (en) All-dimensional face detection method and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant