CN106503611B - Face-image glasses detection method based on edge-information projection for locating the frame crossbar - Google Patents

Face-image glasses detection method based on edge-information projection for locating the frame crossbar

Info

Publication number
CN106503611B
CN106503611B (application CN201610814099.8A)
Authority
CN
China
Prior art keywords
frame crossbar
region
pixel
horizontal line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610814099.8A
Other languages
Chinese (zh)
Other versions
CN106503611A (en)
Inventor
赵明华
张鑫
张飞飞
陈棠
曹慧
石争浩
王晓帆
王映辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Zhongchuang Yuhao Information Technology Co.,Ltd.
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN201610814099.8A priority Critical patent/CN106503611B/en
Publication of CN106503611A publication Critical patent/CN106503611A/en
Application granted granted Critical
Publication of CN106503611B publication Critical patent/CN106503611B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 — Feature extraction; face representation
    • G06V40/171 — Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a face-image glasses detection method based on edge-information projection for locating the frame crossbar. A mouth region is located in the face image and used to determine the x-direction position of the crossbar centre; the y-direction position of the crossbar centre is determined from the horizontal (row-wise) projection of the pixels of the face image's edge map; the crossbar region is then determined from the x- and y-direction positions of its centre. A horizontal line is searched for within the crossbar region: if the region contains a horizontal line whose length is approximately equal to the region's x-direction length, the face image is judged to be wearing glasses; otherwise it is judged not to be. The method effectively detects whether a face image wears glasses; it requires little feature information, locates the target region accurately, and is comparatively simpler and more efficient than other methods.

Description

Face-image glasses detection method based on edge-information projection for locating the frame crossbar
Technical field
The invention belongs to the fields of image processing and computer vision, and in particular relates to a face-image glasses detection method based on edge-information projection for locating the frame crossbar.
Background technique
In recent years, biometric identification technology has been widely applied in many fields, and face recognition is one of the important topics in biometric research. Face recognition technology has made great progress, but its performance still suffers from the influence of accessories, pose, expression and other complicating factors. Glasses are the most commonly worn facial accessory, and studies have found that framed glasses severely affect the accuracy of face detection and recognition. Glasses detection has wide application in pattern recognition and image processing, for example in face recognition, inspection of entry-exit certificate photos and online checking of uploaded photos. Detecting whether the face in a face image wears glasses has therefore attracted the attention of many researchers.
Existing methods for detecting whether a face image wears glasses fall into a few classes. The first class decides by analysing geometric features of the glasses region. Yoshikawa et al. (Glasses frame detection with 3D Hough transform, International Conference on Pattern Recognition, IEEE, 2002) studied the edge and geometric features around glasses and proposed a deformable-contour method to detect their presence, but the model must take many variables into account. Wu et al. (Automatic eyeglasses removal from face images, PAMI, 2004) built a joint probability distribution model of face images with glasses and the corresponding images without, and used it to synthesise the glasses-free face; the method requires fifteen feature points to be located precisely, which is extremely difficult for rimless glasses. Jiang et al. (Towards detection of glasses in facial images, ICPR, 1998) used six features of the edge information around the glasses — located at the nose bridge between the centres of the two eyes and below the two eyes — to judge the presence of glasses, combining them to improve detection; the result is very sensitive to eye localisation. The second class locates the region of interest by relative coordinates. Ren Minggang et al. (Glasses detection method based on face picture marginal information, Software Guide, 2014) located the nose-bridge region by its relative coordinates in the face image and detected glasses accordingly; this avoids the influence of illumination, wrinkles and similar factors, but the localisation of the bridge region is highly uncertain in practice. The third class uses statistical learning. Jing et al. (Glasses detection for face recognition using Bayes rules, ICMI, 2000) proposed a glasses localisation method based on Bayes rules, deciding whether a pixel belongs to the glasses region from the features of its neighbourhood and the glasses features learned by association. Wang et al. (Improvement of face recognition by eyeglasses removal, In Proceedings of the Sixth Conference on Intelligent Information Hiding and Multimedia Signal Processing, 2010) exploited the strong contrast between glasses and surrounding pixels, extracting the spectacle rim directly with an active appearance model (AAM) to locate the glasses and then extracting colour, gradient and other features of the surrounding region. Such statistical-learning methods must extract features from a large number of images and train a model before producing a result.
Summary of the invention
The object of the present invention is to provide a face-image glasses detection method based on edge-information projection for locating the frame crossbar, solving the problems that existing methods for detecting whether a face image wears glasses must consider many variables, cannot locate the target region accurately, and need training on large numbers of images.
The technical scheme adopted by the invention is a face-image glasses detection method based on edge-information projection for locating the frame crossbar: a mouth region is located in the face image and used to determine the x-direction position of the crossbar centre; the y-direction position of the crossbar centre is determined from the horizontal projection of the pixels of the face image's edge map; the crossbar region is determined from the x- and y-direction positions of its centre. A horizontal line is then searched for in the crossbar region: if the region contains a horizontal line whose length is approximately equal to the region's x-direction length, the face image is judged to wear glasses; otherwise it is judged not to.
Determining the x-direction position of the crossbar centre from the mouth region means taking the centre of the mouth region in the x direction as the x-direction position of the crossbar centre.
The mouth region is located by converting the original face image to the YCbCr colour space, computing with a Gaussian model the similarity of each pixel in the face region to the mouth via formula (1), and binarising the result to obtain the mouth region:
P = exp(−0.5 × (x−M)' × inv(cov) × (x−M)) (1)
In formula (1), cov and M are the covariance matrix and mean vector of the chroma vector; the mean is
M = (156.5599, 117.4361)^T (2)
Determining the y-direction position of the crossbar from the horizontal projection of the pixels of the face image's edge map means projecting the edge map row-wise according to formula (4) to obtain the sum R_j of the pixels in each row, and then determining the y-direction position of the crossbar according to formula (5):
R_j = pro(f'(x, y)) = Σ_i f'(i, j) (j = 1, 2, 3, …, m) (4)
Index(k) = find(R_j > μ × max(R)) (j = 1, 2, 3, …, m; k = 1, 2, 3, …, s) (5)
In formula (4), f'(i, j) is the pixel value of the edge-detected image f'(x, y) and pro denotes the projection operation; R_j is the value of row j of the resulting column vector. In formula (5), R is the column vector obtained in formula (4) and max(R) is its maximum value.
Further, μ is a threshold coefficient with 0.50 ≤ μ ≤ 0.65.
The crossbar region is determined from the x- and y-direction positions of the crossbar centre by computing its location with formulas (6)-(9):
G_xl = Glass_x − (α × W) (6)
G_xr = Glass_x + (α × W) (7)
G_yl = Glass_y − γ (8)
G_yr = Glass_y + (β × H) (9)
In formulas (6)-(9), Glass_x is the x-direction centre of the crossbar region, Glass_y is its y-direction position, W is the face width and H is the face height; G_xl, G_yl are the abscissa and ordinate of the region's upper-left corner, and G_xr, G_yr those of its lower-right corner; 0.010 ≤ α ≤ 0.028, 1 ≤ γ ≤ 5, 0.03 ≤ β ≤ 0.08.
The horizontal line is searched for in the crossbar region, and glasses are judged, as follows:
(1) In the edge information of the crossbar region, search row by row from top to bottom for the first pixel of each row; take the first pixel found as the starting point of a horizontal line, whose length is then 1.
(2) Starting from that point, keep detecting rightwards over the current row and its adjacent rows above and below. If the next pixel lies in an adjacent right position relative to the current pixel, judge it to be a pixel of the line and add 1 to the line length; otherwise the length computation for the current starting row stops and detection of that line ends.
(3) Compare the line length with the x-direction length of the crossbar region. If the line length is approximately equal to the region's x-direction length, judge the image to be wearing glasses and end detection. Otherwise return to step (1), continue searching for a starting pixel from the next row in the same way, compute the length of the next line, and judge again. If no line approximately equal to the region's x-direction length is found anywhere in the crossbar region, the image is judged not to be wearing glasses.
In step (2), "the next pixel lies in an adjacent right position" covers four cases: the next pixel is immediately to the right of the current pixel; in the adjacent lower-right corner; in the adjacent upper-right corner; or directly to the right with a gap of one background pixel.
In step (3), "the line length is approximately equal to the region's x-direction length" means line length > a × (x-direction length of the crossbar region), with 0.70 ≤ a ≤ 0.95.
The invention has the advantage that, compared with existing methods for detecting whether a face image wears glasses, it needs neither many variables nor a large number of training samples, and by locating the target region precisely it simplifies the detection operation and improves detection performance.
Detailed description of the invention
Fig. 1 is the flow chart of the face-image glasses detection method based on edge-information projection for locating the frame crossbar;
Fig. 2 is the image set to be detected that is input in the embodiment;
Fig. 3 is the preprocessing result of a representative image chosen from the image set of Fig. 2;
Fig. 4 is the edge-detection result corresponding to the image in Fig. 3;
Fig. 5 is the binary image of the mouth-similarity region computed with the Gaussian model in the YCbCr colour space from the original colour image of Fig. 3;
Fig. 6 is the mouth-region localisation result finally obtained from Fig. 5, where the box marks the mouth region;
Fig. 7 is the x-direction localisation of the crossbar region determined from the x-direction centre of the mouth region in Fig. 6, marked with a vertical line;
Fig. 8 shows the horizontal projection of the pixels of the edge-detection image of Fig. 4, where the unstarred curve is the raw statistics of the edge pixels and the starred curve is its smoothed version; subsequent operations are based on the starred curve;
Fig. 9 is the y-direction localisation of the crossbar region from the pixel projection of Fig. 8, marked with a horizontal line;
Fig. 10 is a schematic of the crossbar-region localisation;
Fig. 11 is the crossbar-region localisation result on the original colour image of Fig. 3, marked with a box; the asterisks mark the computed upper-left and lower-right corners of the crossbar region;
Fig. 12 is the crop of the crossbar region of Fig. 11 from the edge-detected image;
Fig. 13 is a schematic of the "potential" horizontal-line feature: the black cell is the current pixel during the search for the "potential" line, the hatched cells are the possible next pixels of the line, and the white cells are background pixels, which appear black in the edge-detection result;
Fig. 14a and Fig. 14b are examples of detected horizontal lines (white and black pixels are marked with different symbols);
Fig. 15a and Fig. 15b are correct detection results for face images with and without glasses, respectively.
Specific embodiment
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
In the face-image glasses detection method of the invention, the first part preprocesses the face image and performs edge detection, locates the mouth region in the colour image to determine the x-direction position of the crossbar centre, determines the y-direction position of the crossbar from the horizontal projection of the edge-detection pixels, and thereby obtains the crossbar region; the second part searches the crossbar region of the extracted edge map for a "potential" horizontal line and judges, from the line's coverage of the crossbar-region length, whether the face image wears glasses.
As shown in Figure 1, being specifically implemented according to the following steps:
Step 1: input the face image f(x, y), as shown in Fig. 2. Since the captured face image contains noise and other interference, the face part must be preprocessed. First the colour face image is converted to a grey-level image; then the grey-level image is smoothed with a Gaussian filter, as shown in Fig. 3; finally the edge map f'(x, y) is extracted from the smoothed grey-level image, as shown in Fig. 4.
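The preprocessing of step 1 can be sketched as follows. This is a minimal illustration rather than the patent's exact pipeline: the RGB-to-grey weights, the 3×3 Gaussian kernel, the gradient-magnitude edge operator and the 0.1 threshold are all assumed choices, since the patent does not specify its smoothing or edge operator.

```python
import numpy as np

def preprocess(rgb):
    """Grey conversion -> Gaussian smoothing -> binary edge map f'(x, y).

    `rgb` is an (H, W, 3) float array in [0, 1].  Kernel and threshold
    are illustrative assumptions, not values from the patent.
    """
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

    # Separable 3x3 Gaussian smoothing with kernel [1, 2, 1] / 4 per axis.
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    smooth = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, gray)
    smooth = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, smooth)

    # Central-difference gradient magnitude as the edge strength.
    gy, gx = np.gradient(smooth)
    mag = np.hypot(gx, gy)
    return (mag > 0.1).astype(np.uint8)   # binary edge map
```

On a synthetic image with a single vertical intensity step, the edge map fires along the step and nowhere in the flat interior.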
Step 2: convert the input original colour image f(x, y) from RGB to the YCbCr colour space, compute with the Gaussian model the similarity P of each pixel in the face region to the mouth via formula (1), and binarise it to obtain candidate mouth regions, as shown in Fig. 5. The mouth region is then determined by screening on shape, position and similar information, as shown in Fig. 6. The x-direction centre Glass_x of the mouth region is taken as the x-direction localisation of the crossbar centre, as shown in Fig. 7.
P = exp(−0.5 × (x−M)' × inv(cov) × (x−M)) (1)
Formula (1) is evaluated on the input colour face image f(x, y) after conversion from RGB to YCbCr, where x = (Cb, Cr)^T is the chroma vector of the pixel; inv denotes the matrix inverse; cov and M are the covariance matrix and mean vector of the chroma vector; P is the similarity of the pixel to the mouth — the larger its value, the more likely the pixel belongs to the mouth region, and vice versa. From experimental statistics the mean is
M = (156.5599, 117.4361)^T (2)
Step 3: project the edge map f'(x, y) of the face image row-wise according to formula (4) to obtain the sum R_j of the pixels in each row of f'(x, y), as shown in Fig. 8, and determine the y-direction localisation Glass_y of the crossbar centre with formula (5), as shown in Fig. 9:
R_j = pro(f'(x, y)) = Σ_i f'(i, j) (j = 1, 2, 3, …, m) (4)
Index(k) = find(R_j > μ × max(R)) (j = 1, 2, 3, …, m; k = 1, 2, 3, …, s) (5)
In formula (4), f'(i, j) is the pixel value of the edge-detected image f'(x, y), with x and y its horizontal and vertical coordinates; pro denotes the projection operation, which adds the element values of each row of the matrix to give a column vector, and R_j is the value of row j of that vector. In formula (5), R is the column vector obtained in formula (4) and max(R) is its maximum value; μ is a threshold coefficient, usually 0.50 ≤ μ ≤ 0.65, taken as 0.55 in this embodiment; find returns, whenever the condition holds, the subscript j of the satisfying R_j; Index(k) records the subscript of the k-th R_j satisfying the condition, i.e. the subscript j of a satisfying R_j is assigned to Index(k); s counts the satisfying rows. The subscript of the first satisfying R_j, i.e. Index(1), is the y-direction localisation Glass_y of the crossbar. The resulting crossbar y direction is shown in Fig. 10.
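Formulas (4) and (5) amount to a row sum followed by a threshold test, which can be sketched as follows (μ = 0.55 as in the embodiment; the function and variable names are illustrative).

```python
import numpy as np

def crossbar_y(edge_map, mu=0.55):
    """Formulas (4)-(5): row-wise projection of the binary edge map, then
    the first row whose projection exceeds mu * max(R).

    mu = 0.55 follows the embodiment; the patent allows 0.50 <= mu <= 0.65.
    """
    R = edge_map.sum(axis=1)                  # R_j: edge-pixel count of row j
    index = np.flatnonzero(R > mu * R.max())  # rows satisfying formula (5)
    return index[0], R                        # Index(1) = Glass_y
```

A dense row of edge pixels (the crossbar candidate) dominates the projection and its row index is returned as Glass_y.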
Step 4: obtain the face width W and face height H of the face image. From the localisation (Glass_x, Glass_y) of steps 2 and 3, the crossbar region Glass is obtained with formulas (6)-(9), as shown in Fig. 11, and the crossbar region Glass' is cropped from the edge map f'(x, y) at that position, as shown in Fig. 12:
G_xl = Glass_x − (α × W) (6)
G_xr = Glass_x + (α × W) (7)
G_yl = Glass_y − γ (8)
G_yr = Glass_y + (β × H) (9)
In formulas (6)-(9), Glass_x is the x-direction centre of the crossbar region, Glass_y its y-direction position, W the face width and H the face height; G_xl, G_yl are the abscissa and ordinate of the region's upper-left corner and G_xr, G_yr those of its lower-right corner. In formulas (6) and (7) α usually takes 0.010 ≤ α ≤ 0.028; in formulas (8) and (9), 1 ≤ γ ≤ 5 and 0.03 ≤ β ≤ 0.08. In this embodiment α = 0.025, γ = 3, β = 0.05.
Similarly, the region can equally be specified by the abscissa and ordinate of its upper-right and lower-left corners.
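Formulas (6)-(9) translate directly into code; the defaults below are the embodiment's α, γ, β, and the function name is illustrative.

```python
def crossbar_region(glass_x, glass_y, W, H, alpha=0.025, gamma=3, beta=0.05):
    """Formulas (6)-(9): bounding box of the crossbar region from the centre
    (glass_x, glass_y), face width W and face height H.  Defaults are the
    embodiment's values (ranges: 0.010-0.028, 1-5, 0.03-0.08)."""
    gxl = glass_x - alpha * W    # left  x, formula (6)
    gxr = glass_x + alpha * W    # right x, formula (7)
    gyl = glass_y - gamma        # top    y, formula (8)
    gyr = glass_y + beta * H     # bottom y, formula (9)
    return gxl, gyl, gxr, gyr
```

For example, with centre (100, 50) on a 200×240 face, the region is 95..105 in x and 47..62 in y.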
Step 5: model the frame crossbar as a "potential" horizontal line and search for it in the crossbar region of the extracted edge map. If the region contains a "potential" horizontal line whose length is approximately equal to the region's x-direction length, the frame crossbar is present in the region and the face image is judged to wear glasses; otherwise it is judged not to.
(1) In the edge information Glass' of the crossbar region, search row by row from top to bottom for the first pixel of each row; take the first pixel found in a row as the starting point of a "potential" horizontal line, let J be that row's index, and set the line length to 1.
(2) Starting from that point, keep detecting rightwards over the current row and its adjacent rows above and below, as shown in Fig. 13, where the black cell is the current pixel of the search and the hatched cells are the possible next pixels of the "potential" line. If the current pixel and the next pixel are in one of the four configurations of Fig. 13 — the next pixel immediately to the right of the current pixel, in the adjacent lower-right corner, in the adjacent upper-right corner, or directly to the right with a gap of one background pixel — the next pixel is judged to belong to the line, and the line length is increased by 1 according to formula (10).
In formula (10), Glass'(i, J) is the first pixel of the row, Glass'(i, j) is the next pixel satisfying the "potential"-line condition, and QL denotes the "potential" horizontal line. Examples of QL are shown in Figs. 14a and 14b: Fig. 14a shows a detected line of length 5 in the second pixel row, and Fig. 14b a line of length 2 in the second pixel row.
If the current pixel and the next pixel are in none of the four configurations of Fig. 13 — for instance the next pixel is separated from the current one by two or more background pixels, or lies directly below it — the length computation of that "potential" line stops and detection of the current row ends.
The lengths of the "potential" horizontal lines in the region are computed in this way.
(3) Compare the length of the "potential" horizontal line with the x-direction length of the crossbar region, G_xr − G_xl + 1, given a threshold a, usually 0.70 ≤ a ≤ 0.95 and a = 0.8 in this embodiment. If the line length len_J > a × (G_xr − G_xl + 1), the line covers most of the region's x-direction length, i.e. the "potential" line is the frame crossbar, and the image is judged to wear glasses; no further judgement is executed. If len_J ≤ a × (G_xr − G_xl + 1), the line may be a wrinkle, a shadow or another such feature; return to step (1), continue searching for a starting pixel from the next row in the same way, judge the line over that row and its adjacent rows, compute its length, and judge again. If no line satisfying len_J > a × (G_xr − G_xl + 1) is found in the whole crossbar region, the image is judged not to wear glasses, as shown in Figs. 15a and 15b.
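The search of step 5 can be sketched as follows on the cropped binary edge region. This is a simplified reading of sub-steps (1)-(3): the exact bookkeeping of formula (10) is not reproduced in the text, so the successor-selection order and the early-exit structure below are assumptions.

```python
import numpy as np

def has_crossbar(region, a=0.8):
    """Search the cropped binary edge region for a 'potential' horizontal
    line.  A line may step right, up-right, down-right, or skip one blank
    pixel straight right; if its length exceeds a * region_width, the image
    is judged 'wearing glasses'.  a = 0.8 follows the embodiment
    (0.70 <= a <= 0.95)."""
    h, w = region.shape
    for j in range(h):                       # (1) scan rows top to bottom
        cols = np.flatnonzero(region[j])
        if cols.size == 0:
            continue
        r, c = j, int(cols[0])               # line start: first pixel of row j
        length = 1
        while True:                          # (2) extend rightwards
            # the four allowed successor positions of Fig. 13
            moves = [(r, c + 1), (r - 1, c + 1), (r + 1, c + 1), (r, c + 2)]
            for nr, nc in moves:
                if 0 <= nr < h and nc < w and region[nr, nc]:
                    r, c = nr, nc
                    length += 1
                    break
            else:
                break                        # no valid successor: line ends
            if length > a * w:               # (3) coverage test
                return True
    return False
```

A full-width edge row (the crossbar) triggers the glasses judgement, while a short fragment such as a wrinkle does not.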
Table 1 gives the accuracy of the method of the present invention on a test set of 300 face images collected from the internet, containing faces under various expressions and environments; part of the test set is shown in Fig. 2. Of these, 177 face images wear glasses and 123 do not, and the glasses worn include both framed and rimless glasses.
Table 1. Detection results of the invention
As can be seen from Table 1, the method accurately detects whether a face image wears glasses, and judges correctly even when the face shows an exaggerated expression, the image is dark, or the face is slightly tilted, as shown in Figs. 15a and 15b. The method considers little in the judgement process: it only needs to extract the crossbar region of the glasses. Moreover, the crossbar region is not located by the relative position of the crossbar within the face, but horizontally and vertically via mouth-region detection and the transverse projection; whether glasses are worn is then judged from the edge information of the crossbar, without training on large numbers of images to obtain a result.
The above describes some embodiments of the invention, but the invention is not limited to them. The specific embodiments are illustrative, not restrictive. All specific extensions that use the method of the invention without departing from its purpose and the scope of the claimed protection fall within the protection scope of the invention.

Claims (8)

1. A face-image glasses detection method based on edge-information projection for locating the frame crossbar, characterised in that: a mouth region is located in the face image and used to determine the x-direction position of the crossbar centre; the y-direction position of the crossbar centre is determined from the horizontal projection of the pixels of the face image's edge map; the crossbar region is determined from the x- and y-direction positions of the crossbar centre; a horizontal line is searched for in the crossbar region, and if the region contains a horizontal line whose length is approximately equal to the region's x-direction length, the face image is judged to wear glasses, otherwise it is judged not to wear glasses;
the mouth region is located by converting the original face image to the YCbCr colour space, computing with a Gaussian model the similarity of each pixel in the face region to the mouth via formula (1), and binarising the result to obtain the mouth region;
P = exp(−0.5 × (x−M)' × inv(cov) × (x−M)) (1)
in formula (1), cov and M are the covariance matrix and mean vector of the chroma vector, x = (Cb, Cr)^T is the chroma vector of the pixel, inv denotes the matrix inverse, and P is the similarity of the pixel to the mouth; the mean is
M = (156.5599, 117.4361)^T (2)
2. The face-image glasses detection method according to claim 1, characterised in that determining the x-direction position of the crossbar centre from the mouth region means taking the x-direction centre of the mouth region as the x-direction position of the crossbar centre.
3. The face-image glasses detection method according to claim 1, characterised in that determining the y-direction position of the crossbar from the horizontal projection of the pixels of the face image's edge map means projecting the edge map row-wise according to formula (4) to obtain the sum R_j of the pixels in each row, and then determining the y-direction position of the crossbar according to formula (5):
R_j = pro(f'(x, y)) = Σ_i f'(i, j) (j = 1, 2, 3, …, m) (4)
Index(k) = find(R_j > μ × max(R)) (j = 1, 2, 3, …, m; k = 1, 2, 3, …, s) (5)
in formula (4), f'(i, j) is the pixel value of the edge-detected image f'(x, y) and pro denotes the projection operation, R_j being the value of row j of the resulting column vector; in formula (5), R is the column vector obtained in formula (4), max(R) is its maximum value, and Index(k) records the subscript of the k-th R_j satisfying the condition.
4. The facial image eyeglass detection method according to claim 3, characterized in that in formula (5), μ is a threshold coefficient with 0.50 ≤ μ ≤ 0.65.
5. The facial image eyeglass detection method according to claim 1, characterized in that the method of determining the frame crossbeam region from the x- and y-direction positions of the crossbeam centre is to calculate the position of the frame crossbeam region using formulas (6)-(9):

G_xl = Glass_x − (α × W) (6)

G_xr = Glass_x + (α × W) (7)

G_yl = Glass_y − γ (8)

G_yr = Glass_y + (β × H) (9)

In formulas (6)-(9), Glass_x is the centre of the frame crossbeam region in the x direction, Glass_y is the position of the frame crossbeam region in the y direction, W is the face width, and H is the face height; G_xl and G_yl are respectively the abscissa and ordinate of the upper-left corner of the frame crossbeam region, and G_xr and G_yr are respectively the abscissa and ordinate of the lower-right corner; 0.010 ≤ α ≤ 0.028, 1 ≤ γ ≤ 5, 0.03 ≤ β ≤ 0.08.
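Formulas (6)-(9) can be sketched as a single function that turns the crossbeam centre and face dimensions into a search box. The default α, γ, β below are example values chosen from the ranges in claim 5; the centre coordinates in the usage are assumptions.

```python
def crossbeam_region(glass_x, glass_y, W, H, alpha=0.02, gamma=3, beta=0.05):
    """Return (G_xl, G_yl, G_xr, G_yr): the upper-left and lower-right
    corners of the frame crossbeam region, per formulas (6)-(9).
    Claim 5 ranges: 0.010 <= alpha <= 0.028, 1 <= gamma <= 5, 0.03 <= beta <= 0.08."""
    g_xl = glass_x - alpha * W      # formula (6)
    g_xr = glass_x + alpha * W      # formula (7)
    g_yl = glass_y - gamma          # formula (8)
    g_yr = glass_y + beta * H       # formula (9)
    return g_xl, g_yl, g_xr, g_yr
```

The box is deliberately narrow in x (a few percent of the face width around the nose bridge) and shallow in y, since the crossbeam is a short, thin horizontal structure.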
6. The facial image eyeglass detection method according to claim 1, characterized in that the method of searching for a horizontal line in the frame crossbeam region and judging whether glasses are worn is as follows:

(1) in the edge information of the frame crossbeam region, search row by row from top to bottom for the first pixel in each row; take the first pixel found as the starting point of a horizontal line, whose length is then 1;

(2) starting from the horizontal line's starting point, continue to test rightwards the pixels of the current row and of the adjacent rows above and below; if the next pixel lies in an adjacent position to the right of the currently detected pixel, judge it to be a pixel on the horizontal line and add 1 to the line length; otherwise, stop computing the length of the current line and stop the detection;

(3) compare the line length with the x-direction length of the frame crossbeam region: if the line length is approximately equal to the x-direction length of the region, judge the image to be one wearing glasses and end the detection; otherwise, return to step (1), continue searching for a pixel from the next row in the same way, compute the length of the next horizontal line, and judge again; if no horizontal line in the frame crossbeam region is approximately equal to the x-direction length of the region, the image is one not wearing glasses.
7. The facial image eyeglass detection method according to claim 6, characterized in that in step (2), the next pixel lying in an adjacent position to the right of the currently detected pixel is one of the following four cases: the next pixel is immediately to the right of the current pixel; the next pixel is at the adjacent lower-right corner of the current pixel; the next pixel is at the adjacent upper-right corner of the current pixel; or the next pixel is directly to the right of the current pixel, separated from it by one invalid pixel.
8. The facial image eyeglass detection method according to claim 6, characterized in that the judgement in step (3) that the line length is approximately equal to the x-direction length of the frame crossbeam region is specifically: when the line length > a × the x-direction length of the frame crossbeam region, the line length is considered approximately equal to the x-direction length of the region; 0.70 ≤ a ≤ 0.95.
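The horizontal-line search of claims 6-8 can be sketched as below on a binary edge map of the crossbeam region. The four admissible right-neighbour positions follow claim 7 (directly right, upper-right, lower-right, and a one-pixel gap directly right), and the length test follows claim 8; counting a gap step as +1 to the length is an assumption, as the claim does not specify it.

```python
import numpy as np

def has_crossbeam(edge, a=0.8):
    """Scan rows top-down for a line start (step 1), extend it rightwards via
    the four neighbour cases of claim 7 (step 2), and declare glasses when a
    line exceeds a * region_width (step 3; claim 8: 0.70 <= a <= 0.95)."""
    h, w = edge.shape
    for r0 in range(h):
        cols = np.flatnonzero(edge[r0])
        if cols.size == 0:
            continue
        r, c, length = r0, cols[0], 1            # step (1): start, length 1
        while True:                              # step (2): extend rightwards
            for dr, dc in ((0, 1), (-1, 1), (1, 1), (0, 2)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and nc < w and edge[nr, nc]:
                    r, c, length = nr, nc, length + 1
                    break
            else:
                break                            # no right-neighbour: line ends
        if length > a * w:                       # step (3): compare with width
            return True
    return False
```

A full-width edge row (the crossbeam of the frame) triggers a positive decision; short edge runs (eyebrows, noise) fall below the a × width threshold and are rejected.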
CN201610814099.8A 2016-09-09 2016-09-09 Facial image eyeglass detection method based on marginal information projective iteration mirror holder crossbeam Active CN106503611B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610814099.8A CN106503611B (en) 2016-09-09 2016-09-09 Facial image eyeglass detection method based on marginal information projective iteration mirror holder crossbeam

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610814099.8A CN106503611B (en) 2016-09-09 2016-09-09 Facial image eyeglass detection method based on marginal information projective iteration mirror holder crossbeam

Publications (2)

Publication Number Publication Date
CN106503611A CN106503611A (en) 2017-03-15
CN106503611B true CN106503611B (en) 2019-11-22

Family

ID=58291395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610814099.8A Active CN106503611B (en) 2016-09-09 2016-09-09 Facial image eyeglass detection method based on marginal information projective iteration mirror holder crossbeam

Country Status (1)

Country Link
CN (1) CN106503611B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3699808B1 (en) 2017-11-14 2023-10-25 Huawei Technologies Co., Ltd. Facial image detection method and terminal device
CN107945126B (en) * 2017-11-20 2022-02-18 杭州登虹科技有限公司 Method, device and medium for eliminating spectacle frame in image

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093210A (en) * 2013-01-24 2013-05-08 北京天诚盛业科技有限公司 Method and device for glasses identification in face identification
CN104408426A (en) * 2014-11-27 2015-03-11 小米科技有限责任公司 Method and device for removing glasses in face image
CN105787427A (en) * 2016-01-08 2016-07-20 上海交通大学 Lip area positioning method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093210A (en) * 2013-01-24 2013-05-08 北京天诚盛业科技有限公司 Method and device for glasses identification in face identification
CN104408426A (en) * 2014-11-27 2015-03-11 小米科技有限责任公司 Method and device for removing glasses in face image
CN105787427A (en) * 2016-01-08 2016-07-20 上海交通大学 Lip area positioning method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A novel human eye detection method with feedback; Zhang Changguo; Programmable Controller & Factory Automation; 2012-04-15 (No. 4); p. 73, section 4 *
Glasses detection based on edge information of face images; Ren Minggang et al.; Software Guide; 2014-07-31; vol. 13, no. 7; pp. 142-143, sections 1.3-1.4, figs. 4-5 *

Also Published As

Publication number Publication date
CN106503611A (en) 2017-03-15

Similar Documents

Publication Publication Date Title
Lin Face detection in complicated backgrounds and different illumination conditions by using YCbCr color space and neural network
CN106709568B (en) The object detection and semantic segmentation method of RGB-D image based on deep layer convolutional network
CN105205480B (en) Human-eye positioning method and system in a kind of complex scene
CN105844252B (en) A kind of fatigue detection method of face key position
CN100423020C (en) Human face identifying method based on structural principal element analysis
CN110348319A (en) A kind of face method for anti-counterfeit merged based on face depth information and edge image
CN108898125A (en) One kind being based on embedded human face identification and management system
CN102324025A (en) Human face detection and tracking method based on Gaussian skin color model and feature analysis
CN107368778A (en) Method for catching, device and the storage device of human face expression
CN106022231A (en) Multi-feature-fusion-based technical method for rapid detection of pedestrian
CN106503644B (en) Glasses attribute detection method based on edge projection and color characteristic
CN104794693B (en) A kind of portrait optimization method of face key area automatic detection masking-out
CN106570447B (en) Based on the matched human face photo sunglasses automatic removal method of grey level histogram
CN106599785A (en) Method and device for building human body 3D feature identity information database
CN114359998B (en) Identification method of face mask in wearing state
CN106611158A (en) Method and equipment for obtaining human body 3D characteristic information
CN110533648A (en) A kind of blackhead identifying processing method and system
CN106909884A (en) A kind of hand region detection method and device based on hierarchy and deformable part sub-model
CN110929570B (en) Iris rapid positioning device and positioning method thereof
CN106503611B (en) Facial image eyeglass detection method based on marginal information projective iteration mirror holder crossbeam
Montazeri et al. Automatic extraction of eye field from a gray intensity image using intensity filtering and hybrid projection function
Yi et al. Face detection method based on skin color segmentation and facial component localization
Patil et al. Automatic detection of facial feature points in image sequences
CN112183215A (en) Human eye positioning method and system combining multi-feature cascade SVM and human eye template
Al-Hameed et al. Face detection based on skin color segmentation and eyes detection in the human face

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210325

Address after: 1213-156, building 3, No. 1366, Hongfeng Road, Kangshan street, Huzhou Economic and Technological Development Zone, Zhejiang Province, 313000

Patentee after: Zhejiang Zhongchuang Yuhao Information Technology Co.,Ltd.

Address before: 710048 No. 5 Jinhua South Road, Shaanxi, Xi'an

Patentee before: XI'AN University OF TECHNOLOGY
