CN105893946B - A kind of detection method of front face image - Google Patents
- Publication number: CN105893946B
- Application number: CN201610188392.8A
- Authority
- CN
- China
- Prior art keywords: image, face, noise, value, front face
- Legal status: Active (assumed; not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The present invention provides a detection method for frontal face images, comprising: step 1), capturing a video image; step 2), image preprocessing, including median filtering, illumination compensation, and image edge processing; step 3), face detection based on a method combining a cascade classifier with key feature points; step 4), screening out frontal face images using the geometric relationships of the key feature points on the two-dimensional plane. The invention improves the image preprocessing used in conventional face detection: illumination compensation and denoising are achieved while interference with the original image features is reduced. The improved face detection method lowers the false detection rate of a face detection system based on an AdaBoost cascade classifier, and assigning Gaussian-distributed weights to the local gray-level model of each feature point improves the efficiency of the face key-feature-point method. With the facial feature points accurately located, frontal face images are screened out using the planar geometric relationships of the feature points in two dimensions.
Description
Technical field
The present invention relates to the field of video image processing, and more particularly to a detection method for frontal face images.
Background art
With changes in the international security situation, public safety receives increasing attention from society. Video surveillance in large-scale settings with dense crowds monitors dangerous persons, warns of hazardous activities, and provides strong evidence for public security organs to solve cases. However, existing video surveillance technology still cannot meet society's demand for intelligent data analysis, and biometric modalities such as fingerprints and irises still require the active cooperation of the subject.
In recent years, the face, with its efficient biometric features and the concealment with which it can be captured, has increasingly become one of the main objects of video surveillance, and detecting the facial region is usually the key step before intelligent recognition and analysis based on facial biometrics. Existing methods mostly take the whole face, the facial organ regions, skin color, or facial key feature points as the detection object. The active shape model (ASM) method based on a point distribution model can quickly locate the key feature points of a face and satisfies the real-time requirement of face detection, but when illumination and noise are significant, the accuracy of its search and localization declines and cannot meet the system's stability requirement. To address ASM's vulnerability to illumination and noise during key-feature-point localization, histogram equalization is usually used to compensate for illumination, but the traditional method causes some details and edges in the image to disappear, losing image information. Likewise, when median filtering suppresses image noise, the template often cannot reach noise points on the image border; furthermore, the traditional median filtering process treats every pixel of the image, altering the gray values of non-noise pixels as well. These methods destroy information in the original image while performing illumination compensation and denoising, which affects later facial feature extraction. The efficiency of the image preprocessing based on illumination compensation and noise filtering used in face key-feature-point detection therefore needs improvement.
A cascade classifier based on Haar-like features can detect the whole face and facial organ regions. When the classifier threshold is high, some face targets are treated as non-face objects and misclassified; when the threshold is low, the number of non-face targets in the detection results increases, reducing detection accuracy. Compared with ASM-based key-feature-point detection, however, this approach uses features from the whole face region, carries more information, and has a certain robustness to pose, illumination, and similar factors.
In conclusion combination of the both methods in Face datection will improve the accuracy rate and stability of detection.Together
When identification face difficulty under natural conditions it is comparatively bigger, not only by illumination, the force majeure such as block and influenced, simultaneously
Inconsistent more serious the Generalization Capability for compromising recognition of face device of posture.And positive face screening can be to face under natural conditions
Carry out " filtering ", find face of those postures than calibration, so in the process of face recognition, would not be by posture in terms of
It influences, and most of correlation data is all the positive face of people in the database of monitoring system, therefore detect front face image to have
Important meaning.
Summary of the invention
In view of the above deficiencies of the prior art, the purpose of the present invention is to provide a detection method for frontal face images, solving the problems that face detection in prior-art surveillance systems has low accuracy and is vulnerable to illumination and noise, and that face key-feature-point detection techniques are insufficient.
To achieve the above and other related objects, the present invention provides a detection method for frontal face images, the detection method comprising: step 1), capturing an input video image as the original image; step 2), converting the original image into a grayscale image, removing impulse noise from the grayscale image by a median filtering algorithm, modifying the gray-level histogram of the original image into a uniformly distributed histogram by a gray-level transformation function, then realizing illumination compensation of the original image through the uniform histogram, applying Gaussian smoothing and Canny edge detection to the grayscale image, and adding the weighted edge image to the equalized image, correcting the original image to obtain the preprocessed image; step 3), performing low-threshold detection on the preprocessed image, marking the regions that may contain a face, and then screening out the regions that contain no face by locating the face key feature points within those regions; step 4), screening out frontal face images using the geometric relationships of the key feature points on the two-dimensional plane.
In a preferred embodiment of the detection method of frontal face images of the present invention, removing impulse noise from the grayscale image by the median filtering algorithm in step 2) comprises: step a), setting classification thresholds for noise and establishing a high-gray noise range and a low-gray noise range; step b), filtering with a template, comparing the template's center pixel with the template median to judge whether it is a noise point.
Preferably, step a) comprises: setting classification thresholds for noise, with [0, 60] as the low-gray noise range and [200, 255] as the high-gray noise range.
Preferably, step b) comprises: step b-1), keeping the pixels in the first row, first column, last row, and last column of the image matrix unchanged, moving the template from top to bottom and left to right until the element at the second-to-last row and second-to-last column, and judging the pixel value at the center of the template region; step b-2), performing high-gray noise filtering when the gray value i(x, y) of the template center pixel falls in the high-gray noise range [200, 255], as follows: step b-2-1), when i(x, y) is the maximum in the area covered by the template window, regard i(x, y) as a noise point and replace it with the template median M(x, y); step b-2-2), when i(x, y) is not the maximum and i(x, y) > M(x, y), take the median m(x, y) of a new 2 × 2 region centered on the pixel where M(x, y) is located; if i(x, y) > m(x, y), judge i(x, y) to be a noise point and replace it with M(x, y); if i(x, y) < m(x, y), judge i(x, y) not to be a noise point and keep its original value; step b-2-3), when i(x, y) is not the maximum and i(x, y) < M(x, y), judge i(x, y) not to be a noise point and keep its original value; step b-3), performing low-gray noise filtering when the gray value i(x, y) of the template center pixel falls in the low-gray noise range [0, 60], as follows: step b-3-1), when i(x, y) is the minimum in the area covered by the template window, regard i(x, y) as a noise point and replace it with the template median M(x, y); step b-3-2), when i(x, y) is not the minimum and i(x, y) < M(x, y), take the median m(x, y) of a new 2 × 2 region centered on the pixel where M(x, y) is located; if i(x, y) < m(x, y), judge i(x, y) to be a noise point and replace it with M(x, y); if i(x, y) > m(x, y), judge i(x, y) not to be a noise point and keep its original value; step b-3-3), when i(x, y) is not the minimum and i(x, y) > M(x, y), judge i(x, y) not to be a noise point and keep its original value.
In a preferred embodiment of the detection method of frontal face images of the present invention, in step 2), modifying the gray-level histogram of the original image into a uniformly distributed histogram by the gray-level transformation function comprises: step c), counting the probability of occurrence p(i) of each gray level i in the grayscale image to obtain the gray-level transformation formula T(i) = 255 × Σ p(k), the sum running over k = 0, ..., i; step d), using the transformation formula to change the gray value of each pixel of the original image: I'(x, y) = T(I(x, y)).
Preferably, in step 2), Gaussian smoothing and Canny edge detection are applied to the grayscale image, and the processed edge image I''(x, y) is weighted and added to the equalized image: I* = I'(x, y) + λI''(x, y).
In a preferred embodiment of the detection method of frontal face images of the present invention, step 3) comprises: step 3-1), detecting the face target objects in the preprocessed image with a low-threshold AdaBoost cascade classifier based on Haar-like features; step 3-2), marking the face and surrounding image region with a rectangular frame; if no face target object is detected, returning to step 1) and inputting the next frame; step 3-3), locating the face key feature points in the rectangular frame region with the active shape model (ASM) method based on a point distribution model, and then screening out the regions that contain no face according to the located key feature points.
In a preferred embodiment of the detection method of frontal face images of the present invention, step 4) comprises: step 4-1), inputting the position of the face target object detected in step 3) and the coordinates of the left eye corner, right eye corner, left mouth corner, right mouth corner, and nose-tip feature points; step 4-2), judging the angle between the line connecting the two eye corners and the horizontal direction; if the angle is less than a threshold, proceed to step 4-3), otherwise return to step 1) and input the next video image; step 4-3), judging the distance between the nose-tip feature point and the perpendicular bisector of the line connecting the two eye corners; if the distance is less than a threshold, proceed to step 4-4), otherwise return to step 1) and input the next video image; step 4-4), for faces whose pose turns left or right in the image plane, judging the distance between the midpoint of the line connecting the two mouth-corner feature points and the perpendicular bisector of the line connecting the two eye-corner feature points; if the distance is less than a threshold, the face target object is finally judged to be a frontal face, otherwise return to step 1) and input the next video image.
As described above, the detection method of frontal face images of the present invention has the following beneficial effects: it improves the image preprocessing used in conventional face detection, achieving illumination compensation and denoising while reducing interference with the original image features; the improved face detection method lowers the false detection rate of a face detection system based on an AdaBoost cascade classifier; and assigning Gaussian-distributed weights to the local gray-level model of each feature point improves the efficiency of the face key-feature-point method. With the facial feature points accurately located, frontal face images are screened out using the planar geometric relationships of the feature points in two dimensions.
Brief description of the drawings
Fig. 1 is a flow diagram of the steps of the detection method of frontal face images of the present invention.
Fig. 2 is a flow chart of face key-feature-point training in the detection method of frontal face images of the present invention.
Fig. 3 and Fig. 4 are flow charts of the median filtering used by the detection method of frontal face images of the present invention.
Fig. 5 is a schematic diagram of building the local gray-level model in the detection method of frontal face images of the present invention.
Fig. 6 is a flow chart of the face key-feature-point search in the detection method of frontal face images of the present invention.
Fig. 7 and Fig. 8 are schematic diagrams of screening frontal faces by the geometric features of the feature points in the detection method of frontal face images of the present invention.
Fig. 9 is a flow chart of the frontal-face decision in the detection method of frontal face images of the present invention.
Description of reference numerals
S11–S14: steps
Detailed description of the embodiments
The embodiments of the present invention are described below through specific examples; those skilled in the art can easily understand other advantages and effects of the present invention from the content disclosed in this specification. The present invention may also be implemented or applied through other different specific embodiments, and the various details in this specification may be modified or changed based on different viewpoints and applications without departing from the spirit of the invention.
Please refer to Fig. 1 to Fig. 9. It should be noted that the diagrams provided in this embodiment only illustrate the basic concept of the invention in a schematic way; the diagrams show only the components related to the invention rather than the actual number, shape, and size of components in implementation. In actual implementation, the type, quantity, and proportion of each component may change arbitrarily, and the component layout may be more complex.
As shown in Fig. 1 to Fig. 9, this embodiment provides a detection method of frontal face images, the detection method comprising:
As shown in Fig. 1, step 1) S11 is carried out first: the input captured video image serves as the original image.
As shown in Fig. 1, step 2) S12 is then carried out: the original image is converted into a grayscale image; impulse noise is removed from the grayscale image by the median filtering algorithm; the gray-level histogram of the original image is modified into a uniformly distributed histogram by the gray-level transformation function, and illumination compensation of the original image is then realized through the uniform histogram; Gaussian smoothing and Canny edge detection are applied to the grayscale image, and the weighted edge image is added to the equalized image, correcting the original image to obtain the preprocessed image.
As an example, in step 2), removing impulse noise from the grayscale image by the median filtering algorithm comprises:
First, step a) is carried out: set classification thresholds for noise, establishing a high-gray noise range and a low-gray noise range.
In this embodiment, this comprises: setting classification thresholds for noise, with [0, 60] as the low-gray noise range and [200, 255] as the high-gray noise range.
As shown in Fig. 3 and Fig. 4, step b) is then carried out: filter with a template, comparing the template's center pixel with the template median to judge whether it is a noise point.
In this embodiment, step b) comprises:
First, step b-1): keep the pixels in the first row, first column, last row, and last column of the image matrix unchanged; move the template from top to bottom and left to right until the element at the second-to-last row and second-to-last column, judging the pixel value at the center of the template region.
Then step b-2): perform high-gray noise filtering when the gray value i(x, y) of the template center pixel falls in the high-gray noise range [200, 255], as follows:
Step b-2-1): when i(x, y) is the maximum in the area covered by the template window, regard i(x, y) as a noise point and replace it with the template median M(x, y).
Step b-2-2): when i(x, y) is not the maximum and i(x, y) > M(x, y), take the median m(x, y) of a new 2 × 2 region centered on the pixel where M(x, y) is located; if i(x, y) > m(x, y), judge i(x, y) to be a noise point and replace it with M(x, y); if i(x, y) < m(x, y), judge i(x, y) not to be a noise point and keep its original value.
Step b-2-3): when i(x, y) is not the maximum and i(x, y) < M(x, y), judge i(x, y) not to be a noise point and keep its original value.
Then step b-3): perform low-gray noise filtering when the gray value i(x, y) of the template center pixel falls in the low-gray noise range [0, 60], as follows:
Step b-3-1): when i(x, y) is the minimum in the area covered by the template window, regard i(x, y) as a noise point and replace it with the template median M(x, y).
Step b-3-2): when i(x, y) is not the minimum and i(x, y) < M(x, y), take the median m(x, y) of a new 2 × 2 region centered on the pixel where M(x, y) is located; if i(x, y) < m(x, y), judge i(x, y) to be a noise point and replace it with M(x, y); if i(x, y) > m(x, y), judge i(x, y) not to be a noise point and keep its original value.
Step b-3-3): when i(x, y) is not the minimum and i(x, y) > M(x, y), judge i(x, y) not to be a noise point and keep its original value.
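The selective median filter described in steps a) and b) can be sketched as the following minimal pure-Python function. It is a sketch under stated assumptions, not the patent's exact implementation: the 3 × 3 template, the handling of the ambiguous "2 × 2 region centered on the median pixel" (approximated here by the central order statistics of the sorted window), and the border treatment are interpretation choices of this sketch.

```python
def improved_median_filter(img, low=(0, 60), high=(200, 255)):
    """Sketch of the patent's selective median filter.

    Only pixels whose gray value falls in the low- or high-gray noise
    range are candidates; all other pixels keep their original value,
    so non-noise pixels are not altered as they would be by a plain
    median filter.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):           # borders keep their initial values
        for x in range(1, w - 1):
            i = img[y][x]
            win = sorted(img[y + dy][x + dx]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            M = win[4]                  # median of the 3x3 template
            if high[0] <= i <= high[1]:
                if i == win[-1]:        # center is the window maximum
                    out[y][x] = M
                elif i > M:
                    # secondary check: the text's 2x2 sub-region median is
                    # approximated by the window's central order statistics
                    m = sorted(win[3:7])[1]
                    if i > m:
                        out[y][x] = M
            elif low[0] <= i <= low[1]:
                if i == win[0]:         # center is the window minimum
                    out[y][x] = M
                elif i < M:
                    m = sorted(win[2:6])[1]
                    if i < m:
                        out[y][x] = M
    return out
```

Note how a mid-gray pixel (outside both noise ranges) is left untouched, which is the point of the classification thresholds in step a).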
As an example, in step 2), modifying the gray-level histogram of the original image into a uniformly distributed histogram by the gray-level transformation function comprises:
Step c): count the probability of occurrence p(i) of each gray level i in the grayscale image to obtain the gray-level transformation formula T(i) = 255 × Σ p(k), the sum running over k = 0, ..., i.
Step d): use the transformation formula to change the gray value of each pixel of the original image: I'(x, y) = T(I(x, y)).
As an example, in step 2), Gaussian smoothing and Canny edge detection are applied to the grayscale image, and the processed edge image I''(x, y) is weighted and added to the equalized image: I* = I'(x, y) + λI''(x, y).
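Steps c) and d) plus the weighted fusion can be sketched in pure Python as follows. This is a minimal sketch, not the patent's implementation: `levels`, `lam`, and the clipping to 255 are assumptions, and the edge map passed to `fuse` would in practice come from Gaussian smoothing followed by Canny detection.

```python
def equalize(img, levels=256):
    """Histogram equalization via the cumulative gray-level
    probabilities: T(i) = (levels - 1) * sum of p(k) for k <= i."""
    n = len(img) * len(img[0])
    hist = [0] * levels
    for row in img:
        for v in row:
            hist[v] += 1
    out_map, acc = [0] * levels, 0
    for i in range(levels):
        acc += hist[i]
        out_map[i] = round((levels - 1) * acc / n)
    return [[out_map[v] for v in row] for row in img]

def fuse(equalized, edges, lam=0.5):
    """I* = I'(x, y) + lambda * I''(x, y): add the weighted edge image
    back so details suppressed by equalization are restored."""
    return [[min(255, round(a + lam * b)) for a, b in zip(r1, r2)]
            for r1, r2 in zip(equalized, edges)]
```

The fusion step is what distinguishes this preprocessing from plain histogram equalization: edges lost during equalization are re-injected with weight λ.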
As shown in Fig. 1, step 3) S13 is then carried out: perform low-threshold detection on the preprocessed image, mark the regions that may contain a face, and then screen out the regions that contain no face by locating the face key feature points within those regions. In this step, a low-threshold AdaBoost algorithm first detects faces, and then the facial-feature-point detection method re-examines the face regions calibrated by AdaBoost.
As an example, step 3) comprises:
Step 3-1): detect the face target objects in the preprocessed image with a low-threshold AdaBoost cascade classifier based on Haar-like features. This class of algorithm remains among the most efficient in the face detection field, with good real-time performance. To reduce the miss rate, the present invention selects a low-threshold classifier; the principle is that a high-threshold AdaBoost algorithm raises the miss rate of face detection but lowers the false-alarm rate, while a low-threshold AdaBoost algorithm lowers the miss rate but raises the false-alarm rate.
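The efficiency of Haar-like feature evaluation in the cascade comes from the integral image (summed-area table), which makes any rectangle sum a four-lookup operation. The sketch below illustrates this mechanism only; the feature's sign and orientation convention is an assumption of the sketch, not taken from the patent.

```python
def integral_image(img):
    """Summed-area table with one row/column of zero padding:
    ii[y][x] = sum of img over rows 0..y-1 and columns 0..x-1."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum over the w x h rectangle with top-left corner (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect_horizontal(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left half minus right half
    of a w x h window, evaluated in constant time."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

A cascade classifier thresholds many such feature values per stage; lowering those stage thresholds is exactly the low-threshold trade-off described above.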
Step 3-2): mark the face and surrounding image region with a rectangular frame; if no face target object is detected, return to step 1) and input the next frame.
Step 3-3): locate the face key feature points in the rectangular frame region with the active shape model (ASM) method based on a point distribution model, and then screen out the regions that contain no face according to the located key feature points, as shown in Fig. 5. Cascading the two detection steps of face detection and face key-feature-point detection improves the final detection accuracy.
In this embodiment, the method of locating the face key feature points in the rectangular frame region by the ASM method based on a point distribution model consists mainly of three parts: building the shape model, building the local gray-level model of the key feature points, and search-matching of the key feature points in the test image.
Step 3-3-1): building the shape model comprises:
1) In a training sample set composed of n face images, calibrate m two-dimensional feature points {x_i, y_i}, i ∈ 1, ..., m, by hand in each image; these m feature points form a shape vector s_i, and the vectors of all images form a learning set L = {(I_i, s_i) | i = 1, ..., n}.
2) Translate each sample so that its centroid is at the coordinate origin, select a sample from the training set (e.g., the first shape) as the initial estimate of the average shape, and normalize this shape.
3) Select an arbitrary shape in L as the standard shape, and align the other shapes to the standard shape under the same coordinate system by rotation, scaling, and translation, obtaining a new learning set L1; when the difference between the average shape in L1 and the previous average shape is less than a threshold, go to step 4), otherwise return to step 2).
4) Apply principal component analysis to the final learning set L', obtaining the statistical shape model s = s̄ + P_s · b_s, where s̄ is the average shape, P_s the matrix of principal-component eigenvectors, and b_s the shape parameter vector.
Step 3-3-2): building the local gray-level model of the key feature points comprises:
1) Compute the local gray-level vector g_ij of the i-th feature point on the j-th training sample: g_ij = [g_ij1, g_ij2, ..., g_ij(2m+1)]^T, where m is the number of pixels sampled on each side of the feature-point normal. That is, centered on the feature point, m pixels are selected on each side along the direction perpendicular to the line connecting the two neighboring feature points, and the gray values of these 2m + 1 pixels form a vector of length 2m + 1; the detailed process is shown in Fig. 2.
2) Building the weighted local gray-level model: the first-derivative vector of the gray values of the i-th feature point on the j-th sample is g'_ij = [(g_ij2 − g_ij1), ..., (g_ij(m+2) − g_ij(m+1)), ..., (g_ij(2m+1) − g_ij(2m))]^T, and the Gaussian-weighted first-derivative vector g''_ij is obtained by weighting its components with a Gaussian distribution function centered on the feature point.
3) Obtain the normalized first-derivative vector G_ij = g''_ij / Σ_k |g''_ijk|.
4) The weighted gray-level model of calibration point i is obtained by averaging over the samples, ḡ_i = (1/n) Σ_j G_ij, finally giving the local gray-level model of feature point i. The weighted model statistics represent more of the calibration point's information content, so in the search over the target image, candidate points more similar to the real feature point are obtained, making feature-point localization more accurate.
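Steps 1)–3) above, for a single training profile, can be sketched as follows. This is a minimal illustration, assuming a Gaussian weight centered on the feature point; the value of `sigma` is an assumption, not taken from the patent.

```python
import math

def weighted_profile_model(profile, sigma=1.0):
    """Gaussian-weighted, normalized first derivative of a sampled
    gray profile of length 2m+1, as in the improved local gray model:
    points farther from the feature point along the normal count less.
    """
    m = len(profile) // 2                     # profile has 2m+1 samples
    deriv = [profile[k + 1] - profile[k] for k in range(2 * m)]
    # Gaussian weights centered between the middle derivative samples
    centre = (len(deriv) - 1) / 2.0
    weights = [math.exp(-((k - centre) ** 2) / (2 * sigma ** 2))
               for k in range(len(deriv))]
    weighted = [d * wk for d, wk in zip(deriv, weights)]
    s = sum(abs(v) for v in weighted) or 1.0  # normalization sum
    return [v / s for v in weighted]
```

Averaging these vectors over the n training samples yields the mean profile ḡ_i used during the search.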
Step 3-3-3): the search localization of the key feature points is based on the above statistical shape model and local texture model. Given a new input face image I, ASM extracts the face shape in it, as shown in Fig. 6; the basic process is as follows:
1) Let t = 0 and use the average shape as the initialization shape s_t.
2) For each candidate point at the i-th feature point of the current shape on the input image, compute the Mahalanobis distance between the candidate point and the corresponding feature point of the shape model. During the search, for each boundary point at its current position, take m points on each side along the search direction; from these 2m + 1 points, take the gray profile of 2k + 1 points (m > k) at a time and compare it with the gray-level model, finding the best match among the 2(m − k) + 1 positions. Under the given distance metric, the point with the minimum distance is selected as the new position of the feature point.
3) Update the model parameter b_s and generate a new model instance s_(t+1) = s̄ + P_s · b_s, forcing the model to approach the target contour. When the gap between s_(t+1) and s_t meets the threshold, matching ends and the feature-point coordinate vector is returned; otherwise return to step 2).
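The candidate selection in step 2) can be sketched as follows. To keep the sketch self-contained it uses a diagonal covariance, a simplification of the full inverse covariance matrix S_i^-1 that ASM's Mahalanobis distance normally uses; the function and parameter names are illustrative.

```python
def mahalanobis_diag(g, mean, var):
    """Mahalanobis distance with a diagonal covariance (simplified):
    f(g) = sum over k of (g_k - mean_k)^2 / var_k."""
    return sum((a - b) ** 2 / v for a, b, v in zip(g, mean, var))

def best_candidate(candidates, mean, var):
    """Pick the candidate profile with minimum distance to the trained
    local gray model -- the feature point's new position in one ASM
    search iteration."""
    return min(range(len(candidates)),
               key=lambda k: mahalanobis_diag(candidates[k], mean, var))
```

Each search iteration runs this selection for every feature point, then projects the resulting shape back onto the statistical shape model via b_s.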
For the traditional AdaBoost face detection algorithm based on Haar-like features, when the classifier selects a high threshold, the false-alarm rate on non-face targets decreases but the miss rate on face targets increases; conversely, when the classifier selects a low threshold, the miss rate on face targets decreases but the false-alarm rate on non-face targets increases. Considering these face detection evaluation indices, the method proposed by the present invention selects a low-threshold AdaBoost cascade classifier to reduce the miss rate of face targets, and at the same time performs further screening with the improved face key-feature-point detection method, excluding non-face targets and thereby reducing the false-alarm rate, which finally improves the detection accuracy of the whole system. In addition, the present invention improves the building of the local gray-level model in the active shape method: compared with the traditional method, it considers that when building the local gray-level model of each key feature point P, along the line parallel to the perpendicular bisector of the line connecting the two neighboring feature points, the importance of the points on either side of P weakens with distance. A Gaussian distribution is therefore introduced to assign different weights to the points on the two sides of the key feature point, correctly reflecting the information content of candidate key feature points.
As shown in Fig. 1 and Figs. 7 to 9, step 4) S14 is finally carried out: screen out frontal face images using the geometric relationships of the key feature points on the two-dimensional plane.
As an example, step 4) comprises:
Step 4-1): input the position of the face target object detected in step 3) and the coordinates of the left eye corner, right eye corner, left mouth corner, right mouth corner, and nose-tip feature points.
Step 4-2): judge the angle between the line connecting the two eye corners and the horizontal direction; if the angle is less than a threshold, go to step 4-3), otherwise return to step 1) and input the next video image.
Step 4-3): judge the distance between the nose-tip feature point and the perpendicular bisector of the line connecting the two eye corners; if the distance is less than a threshold, go to step 4-4), otherwise return to step 1) and input the next video image.
Step 4-4): for faces whose pose turns left or right in the image plane, judge the distance between the midpoint of the line connecting the two mouth-corner feature points and the perpendicular bisector of the line connecting the two eye-corner feature points; if the distance is less than a threshold, the face target object is finally judged to be a frontal face, otherwise return to step 1) and input the next video image.
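Steps 4-1) to 4-4) can be sketched as the following check. The threshold values are assumptions (the patent leaves them unspecified), and the perpendicular bisector is approximated by the vertical line through the eye-line midpoint, which is valid once step 4-2) has confirmed the eye line is near horizontal.

```python
import math

def is_frontal(left_eye, right_eye, nose, left_mouth, right_mouth,
               angle_thresh_deg=10.0, dist_thresh=5.0):
    """Sketch of the step-4 frontal-face screening:
    1) the eye line must be near horizontal;
    2) the nose tip must lie near the perpendicular bisector of the
       eye line;
    3) the midpoint of the mouth corners must also lie near that
       bisector."""
    (x1, y1), (x2, y2) = left_eye, right_eye
    angle = math.degrees(math.atan2(abs(y2 - y1), abs(x2 - x1)))
    if angle >= angle_thresh_deg:
        return False                      # head tilted in the image plane
    mid_x = (x1 + x2) / 2.0               # near-horizontal eyes: bisector
    if abs(nose[0] - mid_x) >= dist_thresh:   # is approximately x = mid_x
        return False                      # head turned left or right
    mouth_mid_x = (left_mouth[0] + right_mouth[0]) / 2.0
    return abs(mouth_mid_x - mid_x) < dist_thresh
```

For example, eye corners at (30, 50) and (70, 50) with the nose tip at (50, 70) and mouth corners at (40, 90) and (60, 90) pass all three tests, while tilting the eye line or shifting the nose off-center fails the corresponding check.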
As described above, the detection method of frontal face images of the present invention has the following beneficial effects: it improves the image preprocessing used in conventional face detection, achieving illumination compensation and denoising while reducing interference with the original image features; the improved face detection method lowers the false detection rate of a face detection system based on an AdaBoost cascade classifier; and assigning Gaussian-distributed weights to the local gray-level model of each feature point improves the efficiency of the face key-feature-point method. With the facial feature points accurately located, frontal face images are screened out using the planar geometric relationships of the feature points in two dimensions. The present invention therefore effectively overcomes various shortcomings of the prior art and has high industrial utilization value.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone familiar with this technology may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes completed by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall be covered by the claims of the present invention.
Claims (8)
1. A detection method of frontal face images, characterized in that the detection method comprises:
Step 1): inputting a captured video image as the original image;
Step 2): converting the original image into an original gray-level image; removing impulse noise from the original gray-level image by a median filtering algorithm; modifying the gray-level histogram of the original image into a histogram of uniform gray-level distribution by a gray-level transformation function, thereby achieving illumination compensation of the original image through the uniform gray-level histogram; performing Gaussian smoothing filtering and Canny-operator edge detection on the original gray-level image; and weighting the processed new image and adding it to the image after histogram equalization, so as to modify the original image and obtain the preprocessed image;
Step 3): performing low-threshold detection on the preprocessed image to mark regions where faces may exist, then filtering out the regions not containing faces within these regions by locating facial key feature points; the facial key feature points of the rectangular-frame region are located by the active shape model (ASM) method based on a point distribution model, and the regions not containing faces are then filtered out in the rectangular-frame region by the located key feature points; the method of locating the facial key feature points of the rectangular-frame region with the ASM method based on a point distribution model comprises: establishing the local gray-level model of the key feature points, and adding weights to the local gray-level model of the key feature points through a Gaussian distribution function;
Step 4): screening out frontal face images using the geometric relationships of the key feature points on the two-dimensional plane.
2. The detection method of frontal face images according to claim 1, characterized in that in step 2), removing impulse noise from the original gray-level image by the median filtering algorithm comprises:
Step a): setting classification thresholds for the noise and establishing a high-gray noise range and a low-gray noise range;
Step b): filtering with a template, and during filtering judging whether the center pixel of the template is a noise point by comparing it with the template median.
3. The detection method of frontal face images according to claim 2, characterized in that step a) comprises: setting classification thresholds for the noise, with [0, 60] as the low-gray noise range and [200, 255] as the high-gray noise range.
4. The detection method of frontal face images according to claim 3, characterized in that step b) comprises:
Step b-1): leaving the pixels in the first row, first column, last row, and last column of the preprocessed image matrix unchanged, moving the template from top to bottom and from left to right until it reaches the element in the second-to-last row and second-to-last column, and judging the pixel value of the center point of the template region;
Step b-2): performing high-gray noise filtering: when the gray value i(x, y) of the center point of the template region falls in the high-gray noise range [200, 255], proceeding as follows:
Step b-2-1): when i(x, y) is the maximum value in the area covered by the template window, i(x, y) is regarded as a noise point and is replaced by the template median M(x, y);
Step b-2-2): when i(x, y) is not the maximum value in the area covered by the template window and i(x, y) > M(x, y), taking the median m(x, y) of the new 2 × 2 region centered on the pixel where M(x, y) is located; if i(x, y) > m(x, y), judging i(x, y) to be a noise point and replacing it with M(x, y); if i(x, y) < m(x, y), judging that i(x, y) is not a noise point and keeping its original value;
Step b-2-3): when i(x, y) is not the maximum value in the area covered by the template window and i(x, y) < M(x, y), judging that i(x, y) is not a noise point and keeping its original value;
Step b-3): performing low-gray noise filtering: when the gray value i(x, y) of the center point of the template falls in the low-gray noise range [0, 60], proceeding as follows:
Step b-3-1): when i(x, y) is the minimum value in the area covered by the template window, i(x, y) is regarded as a noise point and is replaced by the template median M(x, y);
Step b-3-2): when i(x, y) is not the minimum value in the area covered by the template window and i(x, y) < M(x, y), taking the median m(x, y) of the new 2 × 2 region centered on the pixel where M(x, y) is located; if i(x, y) < m(x, y), judging i(x, y) to be a noise point and replacing it with M(x, y); if i(x, y) > m(x, y), judging that i(x, y) is not a noise point and keeping its original value;
Step b-3-3): when i(x, y) is not the minimum value in the area covered by the template window and i(x, y) > M(x, y), judging that i(x, y) is not a noise point and keeping its original value.
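The noise-restricted median filtering of claims 2 to 4 can be sketched as follows. This is a simplified reading, not the claimed method: a 3 × 3 template is assumed (the claims do not fix its size), and the secondary 2 × 2 test around the median pixel (steps b-2-2 and b-3-2) is omitted for brevity, so only window extrema in the noise ranges are replaced.

```python
import numpy as np

def impulse_median_filter(img, low=(0, 60), high=(200, 255)):
    """Median filtering restricted to suspected impulse pixels.

    Border pixels are left unchanged, as in step b-1).  A pixel in the
    high (low) noise range that is the maximum (minimum) of its window
    is judged a noise point and replaced by the template median; all
    other pixels keep their original value."""
    img = np.asarray(img, dtype=int)
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            i = img[y, x]
            win = img[y - 1:y + 2, x - 1:x + 2]   # 3x3 template window
            M = int(np.median(win))               # template median M(x, y)
            if high[0] <= i <= high[1] and i == win.max():
                out[y, x] = M        # high-gray impulse (step b-2-1)
            elif low[0] <= i <= low[1] and i == win.min():
                out[y, x] = M        # low-gray impulse (step b-3-1)
    return out.astype(np.uint8)
```

Because only pixels inside the noise ranges are ever tested, mid-gray image detail passes through the filter untouched, which is the point of classifying the noise before filtering.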
5. The detection method of frontal face images according to claim 1, characterized in that in step 2), modifying the gray-level histogram of the original image into a histogram of uniform gray-level distribution by the gray-level transformation function comprises:
Step c): counting, for the original gray-level image, the probability of occurrence p(i) of each gray level i, and obtaining the gray-level transformation formula;
Step d): changing the gray value of each pixel in the original image with the gray-level transformation formula: I'(x, y) = T(I(x, y)).
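Claim 5's transformation formula appears as an image in the published text and is not reproduced above; the sketch below assumes the standard histogram-equalization mapping T(i) = round(255 · Σ_{k≤i} p(k)), which matches the claim's stated goal of a uniform gray-level distribution.

```python
import numpy as np

def equalize(img):
    """Histogram equalization per claim 5: estimate p(i) for each gray
    level i, build the cumulative transform T, then map every pixel
    through it, i.e. I'(x, y) = T(I(x, y))."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    p = hist / img.size                        # p(i): probability of level i
    T = np.round(255 * np.cumsum(p)).astype(np.uint8)
    return T[img]                              # apply the transform per pixel
```

A constant image maps entirely to 255, since the cumulative probability reaches 1 at its single occupied gray level.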
6. The detection method of frontal face images according to claim 5, characterized in that in step 2), Gaussian smoothing filtering and Canny-operator edge detection are performed on the original gray-level image, and the processed new image I''(x, y) is weighted and added to the image after histogram equalization:
I* = I'(x, y) + λI''(x, y).
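Claim 6's fusion can be sketched as below. The patent prescribes Gaussian smoothing followed by a Canny detector; to keep the sketch self-contained, a 3 × 3 mean filter and a plain gradient magnitude stand in for those two stages, so this illustrates only the weighted-addition structure I* = I' + λI''.

```python
import numpy as np

def fuse_edges(I_eq, gray, lam=0.5):
    """Weighted fusion of the equalized image I' with an edge image I''
    of the original gray image, per claim 6's I* = I' + lam * I''."""
    g = np.asarray(gray, dtype=float)
    # Crude smoothing: 3x3 mean filter (a Gaussian kernel in the patent).
    pad = np.pad(g, 1, mode='edge')
    sm = sum(pad[dy:dy + g.shape[0], dx:dx + g.shape[1]]
             for dy in range(3) for dx in range(3)) / 9.0
    # Gradient magnitude as the edge map (a Canny detector in the patent).
    gy, gx = np.gradient(sm)
    edges = np.hypot(gx, gy)
    fused = np.asarray(I_eq, dtype=float) + lam * edges
    return np.clip(fused, 0, 255).astype(np.uint8)
```

For a flat original image the edge term vanishes and I* reduces to the equalized image, which is the expected behavior of the fusion.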
7. The detection method of frontal face images according to claim 1, characterized in that step 3) comprises:
Step 3-1): detecting the face target object in the preprocessed image with a low-threshold AdaBoost cascade classifier based on Haar-like features;
Step 3-2): marking the face and the surrounding image region with a shape frame; if no face target object is detected, returning to step 1) and inputting the next frame image;
Step 3-3): locating the facial key feature points of the rectangular-frame region with the active shape model (ASM) method based on a point distribution model, and then filtering out the regions not containing faces in the rectangular-frame region by the located key feature points.
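The Haar-like features that claim 7's AdaBoost cascade thresholds are computed from an integral image. The sketch below shows only that building block, not the trained cascade itself; the function names and the two-rectangle feature layout are illustrative choices.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero first row/column, so any rectangle
    sum needs just four lookups: ii[y, x] = sum of img[:y, :x]."""
    return np.pad(np.cumsum(np.cumsum(img, 0), 1), ((1, 0), (1, 0)))

def rect_sum(ii, x, y, w, h):
    """Sum of the w x h rectangle whose top-left corner is (x, y)."""
    return int(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

def haar_two_rect_horizontal(ii, x, y, w, h):
    """Two-rectangle horizontal Haar-like feature: sum over the right
    half minus sum over the left half of a w x h window (w even).
    Weak classifiers in an AdaBoost cascade threshold such values."""
    half = w // 2
    return rect_sum(ii, x + half, y, half, h) - rect_sum(ii, x, y, half, h)
```

Because each rectangle sum costs four lookups regardless of size, a cascade can evaluate thousands of such features per window cheaply, which is what makes low-threshold detection over the whole frame practical.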
8. The detection method of frontal face images according to claim 1, characterized in that step 4) comprises:
Step 4-1): inputting the position of the face target object detected in step 3) and the coordinate positions of the left eye corner, right eye corner, left mouth corner, right mouth corner, and nose feature points;
Step 4-2): judging the angle between the line connecting the left and right eye corners and the horizontal; if the angle is less than the threshold, proceeding to step 4-3), otherwise returning to step 1) and inputting the next video image;
Step 4-3): judging the distance between the nose feature point and the perpendicular bisector of the line connecting the left and right eye corners; if the distance is less than the threshold, proceeding to step 4-4), otherwise returning to step 1) and inputting the next video image;
Step 4-4): for faces whose pose varies by in-plane rotation or by turning to the left or right, judging the distance between the midpoint of the line connecting the left and right mouth-corner feature points and the perpendicular bisector of the line connecting the left and right eye-corner feature points; if the distance is less than the threshold, finally judging the face target object to be a frontal face, otherwise returning to step 1) and inputting the next video image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610188392.8A CN105893946B (en) | 2016-03-29 | 2016-03-29 | A kind of detection method of front face image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105893946A CN105893946A (en) | 2016-08-24 |
CN105893946B true CN105893946B (en) | 2019-10-11 |
Family
ID=57014562
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610188392.8A Active CN105893946B (en) | 2016-03-29 | 2016-03-29 | A kind of detection method of front face image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105893946B (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107423684A (en) * | 2017-06-09 | 2017-12-01 | 湖北天业云商网络科技有限公司 | A kind of fast face localization method and system applied to driver fatigue detection |
CN107358174A (en) * | 2017-06-23 | 2017-11-17 | 浙江大学 | A kind of hand-held authentication idses system based on image procossing |
CN107729855B (en) * | 2017-10-25 | 2022-03-18 | 成都尽知致远科技有限公司 | Mass data processing method |
CN109918971B (en) * | 2017-12-12 | 2024-01-05 | 深圳光启合众科技有限公司 | Method and device for detecting number of people in monitoring video |
CN108921148A (en) * | 2018-09-07 | 2018-11-30 | 北京相貌空间科技有限公司 | Determine the method and device of positive face tilt angle |
CN109522853B (en) * | 2018-11-22 | 2019-11-19 | 湖南众智君赢科技有限公司 | Face datection and searching method towards monitor video |
CN109753886B (en) * | 2018-12-17 | 2024-03-08 | 北京爱奇艺科技有限公司 | Face image evaluation method, device and equipment |
CN109785300A (en) * | 2018-12-27 | 2019-05-21 | 华南理工大学 | A kind of cancer medical image processing method, system, device and storage medium |
CN112001203A (en) * | 2019-05-27 | 2020-11-27 | 北京君正集成电路股份有限公司 | Method for extracting front face from face recognition library |
CN110321841A (en) * | 2019-07-03 | 2019-10-11 | 成都汇纳智能科技有限公司 | A kind of method for detecting human face and system |
CN110427907B (en) * | 2019-08-09 | 2023-04-07 | 上海天诚比集科技有限公司 | Face recognition preprocessing method for gray level image boundary detection and noise frame filling |
CN110879972B (en) * | 2019-10-24 | 2022-07-26 | 深圳云天励飞技术有限公司 | Face detection method and device |
CN111161281A (en) * | 2019-12-25 | 2020-05-15 | 广州杰赛科技股份有限公司 | Face region identification method and device and storage medium |
CN111242189B (en) * | 2020-01-06 | 2024-03-05 | Oppo广东移动通信有限公司 | Feature extraction method and device and terminal equipment |
CN112257696B (en) * | 2020-12-23 | 2021-05-28 | 北京万里红科技股份有限公司 | Sight estimation method and computing equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101430759A (en) * | 2008-12-04 | 2009-05-13 | 上海大学 | Optimized recognition pretreatment method for human face |
CN103440479A (en) * | 2013-08-29 | 2013-12-11 | 湖北微模式科技发展有限公司 | Method and system for detecting living body human face |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101877981B1 (en) * | 2011-12-21 | 2018-07-12 | 한국전자통신연구원 | System for recognizing disguised face using gabor feature and svm classifier and method thereof |
- 2016-03-29 CN CN201610188392.8A patent/CN105893946B/en active Active
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105893946B (en) | A kind of detection method of front face image | |
CN106778586B (en) | Off-line handwritten signature identification method and system | |
CN104008370B (en) | A kind of video face identification method | |
Vukadinovic et al. | Fully automatic facial feature point detection using Gabor feature based boosted classifiers | |
Hsiao et al. | Occlusion reasoning for object detection under arbitrary viewpoint |
CN103632132B (en) | Face detection and recognition method based on skin color segmentation and template matching | |
CN102521565B (en) | Garment identification method and system for low-resolution video | |
CN106228137A (en) | A kind of ATM abnormal human face detection based on key point location | |
CN103279768B (en) | A kind of video face identification method based on incremental learning face piecemeal visual characteristic | |
CN106599785B (en) | Method and equipment for establishing human body 3D characteristic identity information base | |
CN106682641A (en) | Pedestrian identification method based on image with FHOG- LBPH feature | |
CN105701466A (en) | Rapid all angle face tracking method | |
TWI415032B (en) | Object tracking method | |
CN110826389A (en) | Gait recognition method based on attention 3D frequency convolution neural network | |
CN106570490A (en) | Pedestrian real-time tracking method based on fast clustering | |
Vishwakarma et al. | Simple and intelligent system to recognize the expression of speech-disabled person | |
CN115797970B (en) | Dense pedestrian target detection method and system based on YOLOv5 model | |
CN110222660B (en) | Signature authentication method and system based on dynamic and static feature fusion | |
CN114863464A (en) | Second-order identification method for PID drawing picture information | |
CN114038011A (en) | Method for detecting abnormal behaviors of human body in indoor scene | |
Wanjale et al. | Use of haar cascade classifier for face tracking system in real time video | |
KR101473991B1 (en) | Method and apparatus for detecting face | |
CN109858342A (en) | A kind of face pose estimation of fusion hand-designed description son and depth characteristic | |
Tu et al. | Improved pedestrian detection algorithm based on HOG and SVM | |
Zhou et al. | ROI-HOG and LBP based human detection via shape part-templates matching |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||