CN105893946A - Front face image detection method - Google Patents
- Publication number
- CN105893946A (application CN201610188392.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- face
- noise
- front face
- judged
- Prior art date
- Legal status (assumed, not a legal conclusion): Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The invention provides a frontal face image detection method comprising the steps of: capturing a video image; preprocessing the image by median filtering, illumination compensation and image edge processing; detecting faces with a method that combines a cascade classifier with key feature points; and screening out frontal face images using the geometric relationships of the key feature points on the two-dimensional plane. The method improves the image preprocessing stage of the conventional face detection process, reducing interference with the original image features while still achieving illumination compensation and de-noising. The improved face detection method lowers the false detection rate of a face detection system based on an AdaBoost cascade classifier. A Gaussian distribution function is used to weight the local gray-level model of each feature point, improving the efficiency of facial key feature point localization. On the basis of accurately located facial feature points, frontal face images are screened out using the planar geometric relationships of the feature points in the two-dimensional image.
Description
Technical field
The present invention relates to the field of video image processing, and in particular to a method for detecting frontal face images.
Background technology
With changes in the international security situation, public safety receives ever more attention. In crowded, large-scale public spaces, video surveillance plays a warning role by monitoring dangerous persons and activities, and also provides strong evidence for public security organs. However, existing video surveillance technology still falls short of society's needs for intelligent analysis of data: biometric modalities such as fingerprints and irises still require the active cooperation of the subject.

In recent years the face, as an efficient and unobtrusive biometric feature, has become one of the main objects of video surveillance. Detecting facial regions is usually the key step of intelligent recognition and analysis based on facial biometrics. Most existing methods take the whole face, the facial organ regions, the skin color, or the facial key feature points as the detection object. In particular, the active shape model method based on a point distribution model can quickly locate facial feature points and meet the real-time requirements of face detection; however, when illumination changes and noise are strong, its search and localization accuracy declines and it cannot meet the system's stability requirements. To address the sensitivity of active-shape-model-based key feature point localization to illumination and noise, histogram equalization is usually applied for illumination compensation, but the traditional method erases some details and edges in the image, causing a loss of image information. When median filtering is used to suppress image noise, the filter template often cannot reach noise points on the image border; moreover, traditional median filtering processes every pixel in the image, altering the gray values of non-noise pixels as well. These methods destroy information in the original image while compensating illumination and removing noise, which affects the later extraction of facial features. The efficiency of the image preprocessing (illumination compensation and noise filtering) on which facial key feature point detection relies therefore needs to be improved.
A cascade classifier based on Haar-like features can detect the whole face and the facial organ regions. When the classifier threshold is high, some face targets are treated as non-face objects and misclassified; when the threshold is low, the number of non-face targets in the detection results increases and detection precision declines. In contrast, key feature point detection based on the active shape model exploits features inside the face region, carries richer information, and has a degree of robustness to factors such as face pose and illumination.
In summary, combining the two approaches in face detection improves both the accuracy and the stability of detection. At the same time, recognizing faces in their natural state is very difficult: recognition is affected not only by uncontrollable factors such as illumination and occlusion, but also by inconsistent pose, which seriously compromises the generalization ability of a face recognizer. Frontal face screening can "filter" the faces in their natural state and find those whose pose is close to upright-frontal, so that face recognition is no longer affected by pose. Moreover, most reference data in the databases of surveillance systems are frontal face images. Detecting frontal face images is therefore of great significance.
Summary of the invention
In view of the above shortcomings of the prior art, an object of the present invention is to provide a method for detecting frontal face images, to solve the prior-art problems that the accuracy of face detection in surveillance systems is not high, that detection is easily affected by illumination and noise, and that facial key feature point detection techniques are inadequate.
To achieve the above and other related objects, the present invention provides a method for detecting frontal face images, the method comprising: step 1), capturing an input video image as the original image; step 2), converting the original image into a gray-level image, removing impulse noise from the gray-level image with a median filtering algorithm, modifying the gray-level histogram of the image into a uniformly distributed histogram by a gray-level transformation function so as to realize illumination compensation, applying Gaussian smoothing and Canny edge detection to the gray-level image, and adding the weighted edge image to the histogram-equalized image to obtain the preprocessed image; step 3), performing low-threshold detection on the preprocessed image to mark the regions that may contain a face, and then filtering out the regions that contain no face by locating facial key feature points within the marked regions; step 4), screening out frontal face images using the geometric relationships of the key feature points on the two-dimensional plane.
In a preferred version of the frontal face image detection method of the present invention, in step 2), removing impulse noise from the gray-level image by the median filtering algorithm comprises: step a), setting classification thresholds for noise, establishing a high gray-level noise range and a low gray-level noise range; step b), filtering with a template, comparing the template's center pixel with the template median to judge whether it is a noise point.

Preferably, step a) comprises: setting the noise classification thresholds with [0, 60] as the low gray-level noise range and [200, 255] as the high gray-level noise range.
Preferably, step b) comprises: step b-1), leaving the pixels in the first row, first column, last row and last column of the image matrix unprocessed, moving the template from top to bottom and from left to right until it reaches the element in the second-to-last row and second-to-last column, and judging the gray value of the template's center point; step b-2), high gray-level noise filtering, performed when the center pixel's gray value i(x, y) is judged to lie in the high noise range [200, 255]: step b-2-1), when i(x, y) is the maximum in the template window, i(x, y) is regarded as a noise point and is replaced by the template median M(x, y); step b-2-2), when i(x, y) is not the maximum in the template window but i(x, y) > M(x, y), the median m(x, y) of the 2 × 2 neighborhood centered at the pixel where M(x, y) is located is taken; if i(x, y) > m(x, y), then i(x, y) is judged to be a noise point and replaced by M(x, y); if i(x, y) < m(x, y), then i(x, y) is judged not to be a noise point and keeps its initial value; step b-2-3), when i(x, y) is not the maximum in the template window and i(x, y) < M(x, y), i(x, y) is judged not to be a noise point and keeps its initial value; step b-3), low gray-level noise filtering, performed when the center pixel's gray value i(x, y) is judged to lie in the low noise range [0, 60]: step b-3-1), when i(x, y) is the minimum in the template window, i(x, y) is regarded as a noise point and replaced by the template median M(x, y); step b-3-2), when i(x, y) is not the minimum in the template window but i(x, y) < M(x, y), the median m(x, y) of the 2 × 2 neighborhood centered at the pixel where M(x, y) is located is taken; if i(x, y) < m(x, y), then i(x, y) is judged to be a noise point and replaced by M(x, y); if i(x, y) > m(x, y), then i(x, y) is judged not to be a noise point and keeps its initial value; step b-3-3), when i(x, y) is not the minimum in the template window and i(x, y) > M(x, y), i(x, y) is judged not to be a noise point and keeps its initial value.
In a preferred version of the frontal face image detection method of the present invention, in step 2), modifying the gray-level histogram of the original image into a uniformly distributed histogram by the gray-level transformation function comprises: step c), counting for the gray-level image the occurrence probability p(i) of each gray level i to obtain the gray-level transformation function T; step d), using the transformation to change the gray value of each pixel in the image: I'(x, y) = T(I(x, y)).
Preferably, in step 2), the gray-level image is processed with Gaussian smoothing and Canny edge detection, and the resulting edge image I''(x, y) is weighted and added to the histogram-equalized image: I* = I'(x, y) + λI''(x, y).
In a preferred version of the frontal face image detection method of the present invention, step 3) comprises: step 3-1), detecting face target objects in the preprocessed image with a low-threshold AdaBoost cascade classifier based on Haar-like features; step 3-2), marking the face and its surrounding image region with a rectangular frame; if no face target object is detected, returning to step 1) and inputting the next frame; step 3-3), locating the facial key feature points inside the rectangular frame with the active shape model (ASM) method based on a point distribution model, and then filtering out the regions that contain no face according to the located key feature points.
In a preferred version of the frontal face image detection method of the present invention, step 4) comprises: step 4-1), inputting the position of the face target object detected in step 3) together with the coordinates of the left eye corner, right eye corner, left mouth corner, right mouth corner and nose tip feature points; step 4-2), judging the angle between the line through the two eye corners and the horizontal direction; if the angle is smaller than a threshold, proceeding to step 4-3), otherwise returning to step 1) to input the next video image; step 4-3), judging the distance between the nose tip feature point and the perpendicular bisector of the line through the two eye corners; if this distance is smaller than a threshold, proceeding to step 4-4), otherwise returning to step 1) to input the next video image; step 4-4), for faces whose pose simultaneously rotates in the image plane and turns left or right, judging the distance between the midpoint of the line through the two mouth corner feature points and the perpendicular bisector of the line through the two eye corner feature points; if this distance is smaller than a threshold, the face target object is finally judged to be a frontal face, otherwise returning to step 1) to input the next video image.
As described above, the frontal face image detection method of the present invention has the following advantageous effects. The invention improves the image preprocessing stage of the conventional face detection process, reducing interference with the original image features while achieving illumination compensation and de-noising. The improved face detection method lowers the false detection rate of a face detection system based on an AdaBoost cascade classifier. Meanwhile, weighting the local gray-level model of each feature point with a Gaussian distribution function improves the efficiency of facial key feature point localization. On the basis of accurately located facial feature points, frontal face images are screened out using the planar geometric relationships of the feature points in the two-dimensional image.
Accompanying drawing explanation
Fig. 1 is a flow chart of the steps of the frontal face image detection method of the present invention.
Fig. 2 is a flow chart of facial key feature point training in the frontal face image detection method of the present invention.
Fig. 3 and Fig. 4 are flow charts of the median filtering used in the frontal face image detection method of the present invention.
Fig. 5 is a schematic diagram of building the local gray-level model in the frontal face image detection method of the present invention.
Fig. 6 is a flow chart of the facial key feature point search in the frontal face image detection method of the present invention.
Fig. 7 and Fig. 8 are schematic diagrams of screening frontal faces according to the geometric properties of the feature points in the frontal face image detection method of the present invention.
Fig. 9 is a flow chart of the frontal face decision in the frontal face image detection method of the present invention.
Element numbers explanation
S11 to S14: steps
Detailed description of the invention
The embodiments of the present invention are described below by way of specific examples; those skilled in the art can easily understand other advantages and effects of the present invention from the contents disclosed in this specification. The present invention may also be implemented or applied through other different specific embodiments, and the details in this specification may be modified or changed in various ways based on different viewpoints and applications without departing from the spirit of the present invention.

Please refer to Figs. 1 to 9. It should be noted that the drawings provided with this embodiment only illustrate the basic conception of the invention schematically; they show only the components related to the invention rather than being drawn according to the number, shape and size of the components in an actual implementation. In actual implementation the form, number and proportion of each component may change arbitrarily, and the component layout may be more complex.
As shown in Figs. 1 to 9, this embodiment provides a method for detecting frontal face images, the method comprising the following steps.

As shown in Fig. 1, first, step 1) S11 is performed: an input video image is captured as the original image.

As shown in Fig. 1, step 2) S12 is then performed: the original image is converted into a gray-level image; impulse noise is removed from the gray-level image by a median filtering algorithm; the gray-level histogram of the image is modified into a uniformly distributed histogram by a gray-level transformation function, which realizes the illumination compensation of the image; Gaussian smoothing and Canny edge detection are applied to the gray-level image; and the weighted edge image is added to the histogram-equalized image to obtain the preprocessed image.
As an example, in step 2), removing impulse noise from the gray-level image by the median filtering algorithm comprises the following steps.

First, step a) is performed: classification thresholds are set for noise, establishing a high gray-level noise range and a low gray-level noise range. In this embodiment, [0, 60] is taken as the low gray-level noise range and [200, 255] as the high gray-level noise range.

As shown in Figs. 3 and 4, step b) is then performed: filtering is carried out with a template, and the template's center pixel is compared with the template median to judge whether it is a noise point.
In this embodiment, step b) comprises:

First, step b-1): the pixels in the first row, first column, last row and last column of the image matrix are left unprocessed; the template is moved from top to bottom and from left to right until it reaches the element in the second-to-last row and second-to-last column, and the gray value of the template's center point is judged.

Then step b-2), high gray-level noise filtering: when the center pixel's gray value i(x, y) is judged to lie in the high noise range [200, 255], the following operations are performed:

Step b-2-1): when i(x, y) is the maximum in the template window, i(x, y) is regarded as a noise point and is replaced by the template median M(x, y).

Step b-2-2): when i(x, y) is not the maximum in the template window but i(x, y) > M(x, y), the median m(x, y) of the 2 × 2 neighborhood centered at the pixel where M(x, y) is located is taken; if i(x, y) > m(x, y), then i(x, y) is judged to be a noise point and replaced by M(x, y); if i(x, y) < m(x, y), then i(x, y) is judged not to be a noise point and keeps its initial value.

Step b-2-3): when i(x, y) is not the maximum in the template window and i(x, y) < M(x, y), i(x, y) is judged not to be a noise point and keeps its initial value.

Then step b-3), low gray-level noise filtering: when the center pixel's gray value i(x, y) is judged to lie in the low noise range [0, 60], the following operations are performed:

Step b-3-1): when i(x, y) is the minimum in the template window, i(x, y) is regarded as a noise point and replaced by the template median M(x, y).

Step b-3-2): when i(x, y) is not the minimum in the template window but i(x, y) < M(x, y), the median m(x, y) of the 2 × 2 neighborhood centered at the pixel where M(x, y) is located is taken; if i(x, y) < m(x, y), then i(x, y) is judged to be a noise point and replaced by M(x, y); if i(x, y) > m(x, y), then i(x, y) is judged not to be a noise point and keeps its initial value.

Step b-3-3): when i(x, y) is not the minimum in the template window and i(x, y) > M(x, y), i(x, y) is judged not to be a noise point and keeps its initial value.
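The noise-classifying median filter above can be sketched in Python. This is a simplified illustration under stated assumptions: it keeps only the extreme-value tests of steps b-2-1) and b-3-1), replacing a high-range window maximum or low-range window minimum by the window median, and omits the secondary 2 × 2-neighborhood check of steps b-2-2) and b-3-2); all function and variable names are our own.

```python
import numpy as np

LOW_RANGE = (0, 60)      # low gray-level noise range from the text
HIGH_RANGE = (200, 255)  # high gray-level noise range from the text

def improved_median_filter(img, k=3):
    """Selective median filter: only a pixel whose gray value falls in the
    high/low noise range AND is the extreme of its k x k window is treated
    as impulse noise and replaced by the window median; every other pixel
    (and the image border, which the template cannot cover) keeps its
    original value, so non-noise detail is preserved."""
    h, w = img.shape
    r = k // 2
    out = img.copy()
    for y in range(r, h - r):
        for x in range(r, w - r):
            win = img[y - r:y + r + 1, x - r:x + r + 1]
            v = img[y, x]
            med = int(np.median(win))
            if HIGH_RANGE[0] <= v <= HIGH_RANGE[1] and v == win.max():
                out[y, x] = med   # bright impulse noise point
            elif LOW_RANGE[0] <= v <= LOW_RANGE[1] and v == win.min():
                out[y, x] = med   # dark impulse noise point
    return out
```

On a uniform patch with a single bright or dark spike, the spike is replaced by the window median while a mid-gray pixel outside both noise ranges is left untouched, in contrast with a traditional median filter that rewrites every pixel.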
As an example, in step 2), modifying the gray-level histogram of the image into a uniformly distributed histogram by the gray-level transformation function comprises: step c), counting for the gray-level image the occurrence probability p(i) of each gray level i to obtain the gray-level transformation function T; step d), using the transformation to change the gray value of each pixel: I'(x, y) = T(I(x, y)).

As an example, in step 2), the gray-level image is processed with Gaussian smoothing and Canny edge detection, and the resulting edge image I''(x, y) is weighted and added to the histogram-equalized image: I* = I'(x, y) + λI''(x, y).
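Steps c), d) and the weighted edge addition can be sketched as follows. Two assumptions are ours, since the concrete formulas are not reproduced in the text above: the transformation is taken as standard histogram equalization, T(i) = 255 · Σ_{k≤i} p(k), and a plain gradient magnitude stands in for Gaussian smoothing plus the Canny operator; λ and all names are illustrative.

```python
import numpy as np

def equalize(img):
    """Histogram equalization: map each gray level i through the cumulative
    distribution, T(i) = 255 * sum_{k<=i} p(k), flattening the histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    T = np.round(255 * cdf).astype(np.uint8)
    return T[img]

def edge_map(img):
    """Gradient-magnitude edge image (a crude stand-in for Gaussian
    smoothing followed by the Canny operator)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def preprocess(img, lam=0.3):
    """I* = I'(x, y) + lambda * I''(x, y): equalized image plus a weighted
    edge image, restoring edge detail lost by equalization."""
    out = equalize(img).astype(float) + lam * edge_map(img)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Adding the weighted edge term back is what lets the method compensate illumination without erasing the edges that the later feature point search depends on.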
As shown in Fig. 1, step 3) S13 is then performed: low-threshold detection is carried out on the preprocessed image to mark the regions that may contain a face, and the regions that contain no face are then filtered out by locating facial key feature points within the marked regions. In this step, a low-threshold AdaBoost algorithm is first selected to detect faces, and the face regions marked by AdaBoost are then detected again by the facial feature point detection method.
As an example, step 3) comprises:

Step 3-1): face target objects in the preprocessed image are detected with a low-threshold AdaBoost cascade classifier based on Haar-like features. This class of algorithm remains among the most efficient in the face detection field and offers good real-time performance. To reduce the miss rate, the present invention selects a low-threshold classifier: a high-threshold AdaBoost classifier raises the miss rate of face detection but lowers the false alarm rate, whereas a low-threshold AdaBoost classifier lowers the miss rate but raises the false alarm rate.

Step 3-2): the face and its surrounding image region are marked with a rectangular frame; if no face target object is detected, the method returns to step 1) and inputs the next frame.

Step 3-3): the facial key feature points inside the rectangular frame are located with the active shape model (ASM) method based on a point distribution model, and the regions that contain no face are then filtered out according to the located key feature points, as shown in Fig. 5. The two cascaded detection steps, face detection followed by facial key feature point detection, improve the final detection accuracy.
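The threshold trade-off described in step 3-1) can be illustrated with a toy cascade on synthetic scores. This is not the Haar-like feature computation itself; the score distributions and thresholds below are invented for illustration only.

```python
import random

def cascade_detect(stage_scores, stage_thresholds):
    """A window is accepted only if it clears every stage of the cascade,
    mirroring the early-rejection structure of an AdaBoost cascade."""
    return all(s >= t for s, t in zip(stage_scores, stage_thresholds))

random.seed(0)
# synthetic per-stage scores: face windows score high, clutter windows low
faces   = [[random.gauss(0.7, 0.15) for _ in range(3)] for _ in range(200)]
clutter = [[random.gauss(0.3, 0.15) for _ in range(3)] for _ in range(200)]

def rates(thresholds):
    """Miss rate on face windows and false alarm rate on clutter windows."""
    miss = sum(not cascade_detect(f, thresholds) for f in faces) / len(faces)
    false_alarm = sum(cascade_detect(c, thresholds) for c in clutter) / len(clutter)
    return miss, false_alarm

miss_hi, fa_hi = rates([0.6] * 3)  # high-threshold classifier
miss_lo, fa_lo = rates([0.4] * 3)  # low-threshold classifier (the choice made here)
```

With the seeded data, the low-threshold cascade misses far fewer face windows while admitting somewhat more clutter, which is exactly why step 3-3) adds the second, feature-point-based screening stage to reject the extra false alarms.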
In this embodiment, the method of locating facial key feature points in the rectangular frame with the point-distribution-model-based active shape model (ASM) consists of three parts: building the shape model, building the local gray-level model of the key feature points, and the search matching of the key feature points in the test image.

Step 3-3-1), building the shape model comprises:

1) In a training sample set of n face images, m two-dimensional feature points {xi, yi}, i ∈ 1, ..., m, are manually annotated on each image; these m feature points form the shape vector si of the image, and the shape vectors of all images form a learning set L = {(Ii, si) | i = 1, ..., n}.

2) Each sample shape is translated so that its centroid lies at the coordinate origin; one sample from the training set (for example the first shape) is chosen as the initial estimate of the mean shape and normalized.

3) A shape is chosen as the standard shape, and the other shapes in L are aligned to it under the same coordinate system by rotation, scaling and translation, giving a new learning set L1; when the difference between the mean shape of L1 and the mean shape of the previous iteration is smaller than a threshold, the process proceeds to step 4), otherwise it returns to step 2).

4) Principal component analysis is applied to the final learning set L' to obtain the statistical shape model.
Step 3-3-2), building the weighted local gray-level model of the key feature points comprises:

1) The local gray-level vector gij of the i-th feature point on the j-th sample in the training set is computed:

gij = [gij1, gij2, ..., gij(2m+1)]^T,

where m is the number of pixels sampled on each side of the normal: centered at the feature point, m pixels are selected on each side along the direction perpendicular to the line joining the previous and next feature points, so that the vector consists of the gray values of 2m + 1 pixels. The specific flow is shown in Fig. 2.

2) Building the weighted local gray-level model: the first-derivative vector g'ij of the gray values of the i-th feature point on the j-th sample is

g'ij = [(gij2 - gij1), ..., (gij(m+2) - gij(m+1)), ..., (gij(2m+1) - gij(2m))]^T,

and a Gaussian distribution function is used to weight it, giving the weighted first-derivative vector g''ij.

3) The normalized first-derivative vector Gij is obtained.

4) The weighted gray-level model of feature point i is the mean of the normalized vectors over all samples. This weighted gray-level model statistically represents more information about the point; in the search over the target image it yields candidate points closer to the real feature point, making the feature point localization more accurate.
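Steps 1) to 4) above can be sketched for a single landmark as follows, under two assumptions the text leaves open: the Gaussian weight is centred on the landmark, and normalisation divides by the sum of absolute components; sigma and all names are illustrative.

```python
import numpy as np

def weighted_profile(gray_samples, sigma=1.0):
    """Build the weighted local gray-level model for one landmark:
    take the first derivative of the 2m+1 gray samples along the normal,
    weight it with a Gaussian centred on the landmark so that samples far
    from the landmark count less, and normalise the result."""
    g = np.asarray(gray_samples, dtype=float)
    dg = np.diff(g)                       # first-derivative vector, length 2m
    centre = (len(dg) - 1) / 2.0
    idx = np.arange(len(dg))
    w = np.exp(-0.5 * ((idx - centre) / sigma) ** 2)  # Gaussian weights
    gw = w * dg                           # weighted first-derivative vector
    denom = np.sum(np.abs(gw))
    return gw / denom if denom > 0 else gw
```

In training, this profile would be averaged over all samples j to give the model for feature point i; at search time the same computation is applied to each candidate position.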
Step 3-3-3), the search localization of the key feature points: based on the above statistical shape model and local texture model, given a new input face image I, ASM extracts the face shape in it. As shown in Fig. 6, the basic process is as follows:

1) Set k = 0 and take the mean shape as the initial shape St.

2) For each candidate point of the i-th feature point of the current shape on the input image, compute the Mahalanobis distance between the corresponding feature point model and the candidate point. During the search, for the boundary point at each current position, m points are taken on each side along the search direction; from these 2m + 1 points, a profile of 2k + 1 points (m > k) is compared with the gray-level model each time, and the best matching position is found among the 2(m - k) + 1 positions. According to the given distance measure, the point with the minimum distance is selected as the new position of the feature point.

3) Update the model parameters bs to produce a new model instance, making the model approach the target contour; when the gap between St+1 and St satisfies the threshold, the matching ends and the coordinate vectors of the feature points are returned; otherwise return to step 1).
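The core of search step 2), picking the candidate with minimum Mahalanobis distance to the trained gray-level model, can be sketched as follows; the model mean and covariance would come from the training of step 3-3-2), and the values in the usage below are toy numbers.

```python
import numpy as np

def best_candidate(candidates, mean, cov):
    """Return the index of the candidate profile with the minimum
    Mahalanobis distance (g - mean)^T C^{-1} (g - mean) to the trained
    gray-level model, together with all the distances."""
    cov_inv = np.linalg.inv(cov)
    dists = []
    for g in candidates:
        d = np.asarray(g, dtype=float) - mean
        dists.append(float(d @ cov_inv @ d))
    return int(np.argmin(dists)), dists
```

For example, with mean [1, 0], identity covariance and candidate profiles [3, 4], [1, 0] and [0, 0], the second candidate (distance 0) is selected as the new landmark position.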
In the traditional Haar-like-feature-based AdaBoost face detection algorithm, when the classifier selects a high threshold, the false alarm rate for non-face targets decreases but the miss rate for face targets rises; conversely, when the classifier selects a low threshold, the miss rate for face targets decreases but the false alarm rate for non-face targets rises. With respect to these evaluation indices of face detection, the method proposed by the present invention selects a low-threshold AdaBoost cascade classifier to reduce the miss rate of face targets, and at the same time performs further screening with the improved facial key feature point detection to exclude non-face targets, thereby reducing the false alarm rate and finally improving the detection accuracy of the whole system. In addition, the present invention improves the building of the local gray-level model in the active shape method. Compared with the traditional method, it takes into account that, when building the local gray-level model of each key feature point, the importance of the sampled points on both sides of the key feature point P (along the line perpendicular to the line joining the previous and next feature points) decreases with distance from P. A Gaussian distribution is therefore introduced to give different weights to the points on the two sides of the key feature point, correctly reflecting the information content of candidate key feature points.
As shown in Fig. 1 and Figs. 7 to 9, step 4) S14 is finally performed: frontal face images are screened out using the geometric relationships of the key feature points on the two-dimensional plane.

As an example, step 4) comprises:

Step 4-1): the position of the face target object detected in step 3) is input, together with the coordinates of the left eye corner, right eye corner, left mouth corner, right mouth corner and nose tip feature points.

Step 4-2): the angle between the line through the two eye corners and the horizontal direction is judged; if the angle is smaller than a threshold, the method proceeds to step 4-3), otherwise it returns to step 1) to input the next video image.

Step 4-3): the distance between the nose tip feature point and the perpendicular bisector of the line through the two eye corners is judged; if this distance is smaller than a threshold, the method proceeds to step 4-4), otherwise it returns to step 1) to input the next video image.

Step 4-4): for faces whose pose simultaneously rotates in the image plane and turns left or right, the distance between the midpoint of the line through the two mouth corner feature points and the perpendicular bisector of the line through the two eye corner feature points is judged; if this distance is smaller than a threshold, the face target object is finally judged to be a frontal face, otherwise the method returns to step 1) to input the next video image.
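Steps 4-1) to 4-4) can be sketched as a single predicate. One simplifying assumption is ours: because the nose and mouth-midpoint tests run only after the eye line has passed the tilt check, the distance to the perpendicular bisector of the eye-corner line is approximated by the horizontal offset from the eye-line midpoint; the thresholds are illustrative, not values from the text.

```python
import math

def is_frontal(l_eye, r_eye, l_mouth, r_mouth, nose,
               max_tilt_deg=10.0, max_offset=5.0):
    """Frontal-face screening: the eye-corner line must be near horizontal
    (step 4-2), and both the nose tip (step 4-3) and the midpoint of the
    mouth-corner line (step 4-4) must lie near the perpendicular bisector
    of the eye-corner line, approximated here by the horizontal offset
    from the eye-line midpoint."""
    dx, dy = r_eye[0] - l_eye[0], r_eye[1] - l_eye[1]
    tilt = abs(math.degrees(math.atan2(dy, dx)))
    if tilt > max_tilt_deg:
        return False                     # head rolled in the image plane
    mid_eye_x = (l_eye[0] + r_eye[0]) / 2.0
    if abs(nose[0] - mid_eye_x) > max_offset:
        return False                     # head turned left or right
    mid_mouth_x = (l_mouth[0] + r_mouth[0]) / 2.0
    return abs(mid_mouth_x - mid_eye_x) <= max_offset
```

A symmetric landmark layout passes all three checks, while shifting the nose sideways (a turned head) or raising one eye corner (a rolled head) causes an early rejection and, in the full method, a return to step 1) for the next frame.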
As described above, the frontal face image detection method of the present invention has the following advantageous effects. The invention improves the image preprocessing stage of the conventional face detection process, reducing interference with the original image features while achieving illumination compensation and de-noising. The improved face detection method lowers the false detection rate of a face detection system based on an AdaBoost cascade classifier. Meanwhile, weighting the local gray-level model of each feature point with a Gaussian distribution function improves the efficiency of facial key feature point localization. On the basis of accurately located facial feature points, frontal face images are screened out using the planar geometric relationships of the feature points in the two-dimensional image. The present invention therefore effectively overcomes various shortcomings of the prior art and has high industrial value.
The above embodiments merely illustrate the principle and effects of the present invention and are not intended to limit it. Anyone skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes completed by persons of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall be covered by the claims of the present invention.
Claims (8)
1. A detection method of a frontal face image, characterized in that the detection method comprises:
step 1), inputting a captured video image as an original image;
step 2), converting the original image into an original gray image; removing impulse noise from the original gray image by a median filtering algorithm; modifying the gray histogram of the original image into a histogram of uniform gray distribution through a gray transformation function, thereby achieving illumination compensation of the original image through the uniform gray histogram; performing Gaussian smoothing filtering and Canny-operator edge detection on the original gray image; and adding the weighted processed new image to the histogram-equalized image, so as to correct the original image and obtain a preprocessed image;
step 3), performing low-threshold detection on the preprocessed image, marking the regions that may contain a face, and then filtering out the regions that do not contain a face by locating the key facial feature points in these regions;
step 4), filtering out frontal face images by using the geometric relationships of the key feature points on the two-dimensional plane.
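The four claimed steps can be read as a simple pipeline. The sketch below only fixes the claimed control flow; the four stage functions are injected as placeholders, and all names are illustrative:

```python
def detect_frontal_faces(frames, preprocess, detect_faces,
                         locate_landmarks, is_frontal):
    """Control flow of steps 1)-4); the four stage functions are
    injected, so this sketch only encodes the claimed order of
    operations, not any particular implementation of the stages."""
    for frame in frames:                       # step 1): next video image
        img = preprocess(frame)                # step 2): denoise + compensate
        for region in detect_faces(img):       # step 3): low-threshold detection
            landmarks = locate_landmarks(img, region)
            if landmarks and is_frontal(landmarks):   # step 4): geometric test
                yield region, landmarks
```

Frames on which any stage fails simply yield nothing, which mirrors the claims' "return to step 1) and input the next video image" branches.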
2. The detection method of a frontal face image according to claim 1, characterized in that in step 2), removing impulse noise from the original gray image by the median filtering algorithm comprises:
step a), setting classification thresholds for the noise, establishing a high-gray noise range and a low-gray noise range;
step b), filtering with a template, and during filtering judging whether the template center pixel is a noise point by comparing it with the template median.
3. The detection method of a frontal face image according to claim 2, characterized in that step a) comprises: setting classification thresholds for the noise, with [0, 60] as the low-gray noise range and [200, 255] as the high-gray noise range.
4. The detection method of a frontal face image according to claim 3, characterized in that step b) comprises:
step b-1), moving the template over the preprocessed image matrix from top to bottom and from left to right, with the template center running from the second row and second column to the second-to-last row and second-to-last column (the pixels in the first row, first column, last row and last column remain template border pixels), and judging the pixel value of the center point of the template region;
step b-2), performing high-gray noise filtering when the gray value i(x, y) of the template center pixel is judged to fall in the high-gray noise range [200, 255], by operating as follows:
step b-2-1), when i(x, y) is the maximum value in the region covered by the template window, regarding i(x, y) as a noise point and replacing i(x, y) with the template median M(x, y);
step b-2-2), when i(x, y) is not the maximum value in the region covered by the template window and i(x, y) > M(x, y), taking the median m(x, y) of the new 2 × 2 region centered on the pixel where M(x, y) is located; if i(x, y) > m(x, y), judging i(x, y) to be a noise point and replacing it with M(x, y); if i(x, y) < m(x, y), judging that i(x, y) is not a noise point and keeping its original value;
step b-2-3), when i(x, y) is not the maximum value in the region covered by the template window and i(x, y) < M(x, y), judging that i(x, y) is not a noise point and keeping its original value;
step b-3), performing low-gray noise filtering when the gray value i(x, y) of the template center pixel is judged to fall in the low-gray noise range [0, 60], by operating as follows:
step b-3-1), when i(x, y) is the minimum value in the region covered by the template window, regarding i(x, y) as a noise point and replacing i(x, y) with the template median M(x, y);
step b-3-2), when i(x, y) is not the minimum value in the region covered by the template window and i(x, y) < M(x, y), taking the median m(x, y) of the new 2 × 2 region centered on the pixel where M(x, y) is located; if i(x, y) < m(x, y), judging i(x, y) to be a noise point and replacing it with M(x, y); if i(x, y) > m(x, y), judging that i(x, y) is not a noise point and keeping its original value;
step b-3-3), when i(x, y) is not the minimum value in the region covered by the template window and i(x, y) > M(x, y), judging that i(x, y) is not a noise point and keeping its original value.
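The per-pixel decision of claim 4 for one template window can be sketched as follows. This is a simplified approximation, not the exact claimed procedure: the secondary 2 × 2 median test of steps b-2-2) and b-3-2) is folded into a single comparison against the window median, and the noise ranges of claim 3 are defaults:

```python
def classify_and_filter(window, low=(0, 60), high=(200, 255)):
    """Noise decision for the center pixel of one 3x3 template window.

    Simplification of claim 4: any high-gray pixel that is the window
    maximum or lies above the window median (resp. low-gray pixel that
    is the minimum or lies below it) is treated as impulse noise and
    replaced by the template median M(x, y)."""
    flat = sorted(v for row in window for v in row)
    median = flat[len(flat) // 2]          # template median M(x, y)
    centre = window[1][1]                  # template center i(x, y)
    if high[0] <= centre <= high[1]:       # candidate "salt" noise
        if centre == flat[-1] or centre > median:
            return median
    elif low[0] <= centre <= low[1]:       # candidate "pepper" noise
        if centre == flat[0] or centre < median:
            return median
    return centre                          # kept: not judged as noise
```

A mid-gray center outside both ranges is never touched, which is the point of the classification in claim 3: unlike a plain median filter, uncorrupted detail is passed through unchanged.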
5. The detection method of a frontal face image according to claim 1, characterized in that in step 2), modifying the gray histogram of the original image into a histogram of uniform gray distribution through the gray transformation function comprises:
step c), counting the occurrence probability p(i) of each gray level i in the original gray image and accumulating it to obtain the gray transformation formula T(i) = 255 × (p(0) + p(1) + … + p(i));
step d), transforming the gray value of each pixel in the original image by the gray transformation formula: I'(x, y) = T(I(x, y)).
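Assuming the gray transformation of claim 5 is the usual cumulative-probability (histogram-equalization) mapping, steps c) and d) amount to building a lookup table from the image histogram and applying it per pixel:

```python
def equalize_lut(hist):
    """Lookup table T(i) = round(255 * sum_{k<=i} p(k)) built from a
    256-bin gray histogram (step c)); step d) is then the per-pixel
    mapping I'(x, y) = lut[I(x, y)]."""
    total = sum(hist)
    lut, acc = [], 0
    for count in hist:
        acc += count                       # cumulative count = total * P(0..i)
        lut.append(round(255 * acc / total))
    return lut
```

Because the table is monotonic and its last entry is 255, the transformed histogram spreads over the full 8-bit range, which is the illumination compensation the claim describes.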
6. The detection method of a frontal face image according to claim 5, characterized in that in step 2), Gaussian smoothing filtering and Canny-operator edge detection are performed on the original gray image, and the processed new image I''(x, y) is weighted and added to the histogram-equalized image:
I*(x, y) = I'(x, y) + λ · I''(x, y).
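The weighted addition of claim 6 can be sketched as below, assuming I'(x, y) is the equalized image, I''(x, y) the smoothed edge map, and an illustrative λ; results are clamped to the 8-bit range, which the claim leaves implicit:

```python
def fuse(equalized, edges, lam=0.3):
    """I*(x, y) = I'(x, y) + lam * I''(x, y), clamped to [0, 255];
    lam is an illustrative weight, not a value given in the patent."""
    return [[min(255, round(e + lam * g)) for e, g in zip(row_e, row_g)]
            for row_e, row_g in zip(equalized, edges)]
```

A small λ keeps the equalized image dominant while sharpening the edge structure that the later Haar-like detector relies on.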
7. The detection method of a frontal face image according to claim 1, characterized in that step 3) comprises:
step 3-1), detecting human-face target objects in the preprocessed image with a low-threshold AdaBoost cascade classifier based on Haar-like features;
step 3-2), marking the face and its surrounding image region with a rectangular frame; if no human-face target object is detected, returning to step 1) to input the next frame of image;
step 3-3), locating the key facial feature points in the rectangular frame region with an active shape model (ASM) method based on a point distribution model, and then filtering out the regions in the rectangular frame that do not contain a face by the located key facial feature points.
8. The detection method of a frontal face image according to claim 1, characterized in that step 4) comprises:
step 4-1), inputting the position of the human-face target object detected in step 3), together with the coordinate positions of the left eye corner, right eye corner, left mouth corner, right mouth corner and nose-tip feature points;
step 4-2), judging the angle between the line connecting the left and right eye corners and the horizontal direction; if the angle is smaller than a threshold, proceeding to step 4-3), otherwise returning to step 1) to input the next video image;
step 4-3), judging the distance between the nose-tip feature point and the perpendicular bisector of the line connecting the left and right eye corners; if this distance is smaller than a threshold, proceeding to step 4-4), otherwise returning to step 1) to input the next video image;
step 4-4), for a face whose pose changes by in-plane rotation combined with turning to the left or right, judging the distance between the midpoint of the line connecting the left and right mouth-corner feature points and the perpendicular bisector of the line connecting the left and right eye-corner feature points; if this distance is smaller than a threshold, finally judging the human-face target object to be a frontal face, otherwise returning to step 1) to input the next video image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610188392.8A CN105893946B (en) | 2016-03-29 | 2016-03-29 | A kind of detection method of front face image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105893946A true CN105893946A (en) | 2016-08-24 |
CN105893946B CN105893946B (en) | 2019-10-11 |
Family
ID=57014562
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610188392.8A Active CN105893946B (en) | 2016-03-29 | 2016-03-29 | A kind of detection method of front face image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105893946B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101430759A (en) * | 2008-12-04 | 2009-05-13 | 上海大学 | Optimized recognition pretreatment method for human face |
US20130163829A1 (en) * | 2011-12-21 | 2013-06-27 | Electronics And Telecommunications Research Institute | System for recognizing disguised face using gabor feature and svm classifier and method thereof |
CN103440479A (en) * | 2013-08-29 | 2013-12-11 | 湖北微模式科技发展有限公司 | Method and system for detecting living body human face |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107423684A (en) * | 2017-06-09 | 2017-12-01 | 湖北天业云商网络科技有限公司 | A kind of fast face localization method and system applied to driver fatigue detection |
CN107358174A (en) * | 2017-06-23 | 2017-11-17 | 浙江大学 | A kind of hand-held authentication idses system based on image procossing |
CN107729855A (en) * | 2017-10-25 | 2018-02-23 | 成都尽知致远科技有限公司 | Mass data processing method |
WO2019114145A1 (en) * | 2017-12-12 | 2019-06-20 | 深圳光启合众科技有限公司 | Head count detection method and device in surveillance video |
CN109918971A (en) * | 2017-12-12 | 2019-06-21 | 深圳光启合众科技有限公司 | Number detection method and device in monitor video |
CN109918971B (en) * | 2017-12-12 | 2024-01-05 | 深圳光启合众科技有限公司 | Method and device for detecting number of people in monitoring video |
CN108921148A (en) * | 2018-09-07 | 2018-11-30 | 北京相貌空间科技有限公司 | Determine the method and device of positive face tilt angle |
CN109522853A (en) * | 2018-11-22 | 2019-03-26 | 湖南众智君赢科技有限公司 | Face datection and searching method towards monitor video |
CN109522853B (en) * | 2018-11-22 | 2019-11-19 | 湖南众智君赢科技有限公司 | Face datection and searching method towards monitor video |
CN109753886A (en) * | 2018-12-17 | 2019-05-14 | 北京爱奇艺科技有限公司 | A kind of evaluation method of facial image, device and equipment |
CN109753886B (en) * | 2018-12-17 | 2024-03-08 | 北京爱奇艺科技有限公司 | Face image evaluation method, device and equipment |
CN109785300A (en) * | 2018-12-27 | 2019-05-21 | 华南理工大学 | A kind of cancer medical image processing method, system, device and storage medium |
CN112001203A (en) * | 2019-05-27 | 2020-11-27 | 北京君正集成电路股份有限公司 | Method for extracting front face from face recognition library |
CN110321841A (en) * | 2019-07-03 | 2019-10-11 | 成都汇纳智能科技有限公司 | A kind of method for detecting human face and system |
CN110427907B (en) * | 2019-08-09 | 2023-04-07 | 上海天诚比集科技有限公司 | Face recognition preprocessing method for gray level image boundary detection and noise frame filling |
CN110427907A (en) * | 2019-08-09 | 2019-11-08 | 上海天诚比集科技有限公司 | A kind of recognition pretreatment method for human face of grayscale image border detection and noise frame filling |
CN110879972A (en) * | 2019-10-24 | 2020-03-13 | 深圳云天励飞技术有限公司 | Face detection method and device |
CN111161281A (en) * | 2019-12-25 | 2020-05-15 | 广州杰赛科技股份有限公司 | Face region identification method and device and storage medium |
CN111242189A (en) * | 2020-01-06 | 2020-06-05 | Oppo广东移动通信有限公司 | Feature extraction method and device and terminal equipment |
CN111242189B (en) * | 2020-01-06 | 2024-03-05 | Oppo广东移动通信有限公司 | Feature extraction method and device and terminal equipment |
CN112257696A (en) * | 2020-12-23 | 2021-01-22 | 北京万里红科技股份有限公司 | Sight estimation method and computing equipment |
Also Published As
Publication number | Publication date |
---|---|
CN105893946B (en) | 2019-10-11 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |