CN108446642A - A kind of Distributive System of Face Recognition - Google Patents
- Publication number
- CN108446642A (application CN201810245016.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- face
- recognition
- value
- boundary
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
- G06T7/41—Analysis of texture based on statistical description of texture
- G06T7/44—Analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20032—Median filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Ophthalmology & Optometry (AREA)
- Probability & Statistics with Applications (AREA)
- Geometry (AREA)
- Image Analysis (AREA)
Abstract
To reduce the complexity of face recognition algorithms, the present invention provides a distributed face recognition system based on individualized features, in particular lip-print (cheilogramma) images. The system reads a person's face image data, first acquiring lip-print images from the same angle, and then performs recognition based on the lips and the other facial regions in the image.
Description
Technical field
The invention belongs to the technical field of face recognition, and in particular relates to a distributed face recognition system.
Background technology
Face recognition refers specifically to computer techniques that analyse and compare facial features. It is an active research field of computer technology, covering face tracking and detection, automatic image zoom adjustment, night-time infrared detection, and automatic exposure adjustment. It belongs to biometric identification technology, which distinguishes individual organisms (here, specifically people) by their own biological characteristics.
Face recognition technology operates on a person's facial features in an input image or video stream. It first determines whether a face is present; if so, it further determines each face's position and size and the locations of the major facial organs. From this information it extracts the identity features contained in each face and compares them with known faces to identify each person.
There are many current face recognition systems, but each has its own shortcomings; we analyse them one by one below:
(1) Face recognition based on geometric features. Geometric features generally refer to the shapes of the eyes, nose, mouth and so on, and the geometric relationships between them, such as their mutual distances. Algorithms of this kind recognise quickly, but the recognition rate is relatively low.
(2) Face recognition based on eigenfaces (PCA). The eigenface method is a face recognition method based on the Karhunen-Loève (KL) transform, an optimal orthogonal transform for image compression. The KL transform maps the high-dimensional image space onto a new set of orthogonal bases; keeping the important bases spans a low-dimensional linear subspace. If the projections of faces onto this subspace are assumed to be separable, the projections can serve as feature vectors for recognition; this is the basic idea of the eigenface method. Such methods need many training samples and rely entirely on the statistical properties of image grey levels.
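The eigenface projection described above can be sketched in a few lines. This is an illustrative stand-in using synthetic data, not the patent's implementation; the function name, the use of SVD to obtain the orthogonal basis, and the toy image sizes are all assumptions.

```python
import numpy as np

def eigenfaces(faces, k):
    """Project face images onto the top-k KL-transform (PCA) basis.

    faces: (num_images, num_pixels) array of flattened grayscale faces.
    Returns (mean_face, basis, projections); the low-dimensional
    projections are the feature vectors used for recognition.
    """
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data yields the orthonormal basis directly;
    # its leading rows are the most important orthogonal bases.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                      # top-k orthonormal basis vectors
    projections = centered @ basis.T    # low-dimensional features
    return mean, basis, projections

# Toy example: 6 random 8x8 "faces" reduced to 3 dimensions.
rng = np.random.default_rng(0)
faces = rng.random((6, 64))
mean, basis, proj = eigenfaces(faces, k=3)
print(proj.shape)  # (6, 3)
```

Recognition would then compare the low-dimensional projections of a probe face against those of the known faces, e.g. by nearest-neighbour distance.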
(3) Face recognition based on neural networks. The network input can be a reduced-resolution face image, the autocorrelation function of a local region, the second-order moments of local texture, and so on. Such methods also need many samples for training, yet in many applications the sample size is very limited.
(4) Face recognition based on elastic graph matching. Elastic graph matching defines, in two-dimensional space, a distance that is invariant to common facial deformations, and represents the face by an attributed topological graph in which every vertex carries a feature vector recording the face's information near that vertex position. The method combines grey-level features with geometric factors, allows elastic deformation of the image during comparison, and copes well with expression changes; it also no longer needs multiple training samples per person. The algorithm, however, is relatively complex.
(5) Face recognition based on support vector machines (SVM). The support vector machine is a recent focus of statistical learning research; it seeks a compromise between empirical risk and generalisation ability in order to improve the performance of the learning machine. SVMs mainly solve two-class classification problems; their basic idea is to convert a low-dimensional, linearly inseparable problem into a high-dimensional, linearly separable one. Experimental results show that SVMs achieve good recognition rates, but they require a large number of training samples (around 300 per class), which is often impractical in real applications; moreover, SVM training is slow, the method is complex to implement, and there is no unified theory for choosing the kernel function.
Summary of the invention
In view of the above analysis, the main purpose of the present invention is to provide an integrated data-processing algorithm that overcomes the defects of the various face recognition systems described above.
The purpose of the present invention is achieved through the following technical solutions.
The system includes:
A lip-print image acquisition unit for reading a person's face image data, which first acquires lip-print images from the same angle;
A face image data obtaining unit for detecting, based on the face image data obtained by the lip-print image acquisition unit, the facial regions other than the lips in the lip image, and for extracting the person's face image from the captured complex background image by confirming the facial attributes of the detected object;
Extracting the person's face image includes calculating and identifying its boundary, with the following calculation process:
where k_mn denotes the grey value of image pixel (m, n), K = max(k_mn), and the shooting angle θ_mn ∈ [0, 1].
A radian greyscale transformation is applied to the image using the Tr formula:
N is a natural number greater than 2;
where
θ_c is the boundary recognition threshold, determined from empirical lip-boundary values. The following is then calculated:
the transformation coefficient k′_mn = (K − 1)·θ_mn.
The image boundary is then extracted; the extracted image boundary matrix is
Edges = [e_mn],
where
e_mn = |k′_mn − min{k′_ij : (i, j) ∈ W}|
and W is the 3 × 3 window centred on pixel (m, n).
The boundary decision is then verified: if it suffices for recognition, the process ends; if not, the boundary recognition threshold above is adjusted and the process is repeated until a good boundary recognition result is obtained. The boundary recognition threshold takes values in the range [0.3, 0.8];
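The boundary extraction above can be sketched as follows. The patent's Tr radian greyscale transform is not reproduced in the text, so a plain k/K normalisation stands in for it here; the function name, the test image, and the choice of edge padding are illustrative assumptions.

```python
import numpy as np

def extract_boundary(img, theta_c=0.5):
    """Sketch of the boundary extraction: normalise grey values to
    theta in [0, 1] (a stand-in for the Tr transform), form transform
    coefficients k' = (K-1)*theta, then take each pixel's distance to
    the minimum k' in its 3x3 window W as the edge strength e_mn."""
    img = np.asarray(img, dtype=float)
    K = img.max()
    theta = img / K                      # stand-in for the Tr transform
    kp = (K - 1) * theta                 # k'_mn = (K - 1) * theta_mn
    h, w = kp.shape
    pad = np.pad(kp, 1, mode='edge')
    edges = np.zeros_like(kp)
    for m in range(h):
        for n in range(w):
            window = pad[m:m + 3, n:n + 3]   # 3x3 window W around (m, n)
            edges[m, n] = abs(kp[m, n] - window.min())
    # pixels whose normalised value exceeds theta_c count toward the boundary
    return edges, theta > theta_c

img = np.array([[10, 10, 10, 10],
                [10, 200, 200, 10],
                [10, 200, 200, 10],
                [10, 10, 10, 10]])
edges, mask = extract_boundary(img, theta_c=0.5)
print(edges.shape)  # (4, 4)
```

Adjusting `theta_c` within [0.3, 0.8] and re-running mirrors the patent's verify-and-adjust loop.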
A recognition factor determination unit for a first-pass judgment of the recognition image. The judgment factors are facial pose, illuminance, occlusion, and face distance. Facial pose is judged first, by testing the recognition image for symmetry and completeness. The symmetry of the image acquired in the second step above is analysed: if it meets a predetermined threshold, the horizontal pose of the face is considered correct; if it exceeds the threshold, the horizontal pose is considered incorrect, i.e. the face is turned or tilted too far to the side. The concrete judgment algorithm binarizes the obtained image with a threshold of 80: pixels greater than 80 are set to 0 and the remainder to 1. The binarized image is divided into left and right halves, the horizontal projection of each half is computed, and the chi-square distance between the two resulting histograms is calculated; the larger the chi-square distance, the worse the symmetry. Facial completeness is then judged by inspecting the facial elements within the recognised face contour, checking whether the eyes, eyebrows, mouth and chin all appear completely; if an element is missing or incomplete, the pitch angle at recognition time is considered too large. Occlusion of the face is judged next, and processing continues only when the face is unoccluded. Finally, the suitability of the face distance is judged; at a suitable recognition distance, processing continues. When all conditions are satisfied, the image processing unit is started.
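The pose symmetry check (binarise at 80, split left/right, take horizontal projections, compare by chi-square distance) can be sketched like this. The function name and toy image are assumptions, and since the patent does not specify which chi-square variant it uses, the common symmetric form Σ (l − r)² / (l + r) stands in.

```python
import numpy as np

def symmetry_chi_square(gray):
    """Sketch of the pose check: binarise at threshold 80 (pixels above
    80 -> 0, others -> 1), split into left/right halves, take each
    half's horizontal projection histogram, and return the chi-square
    distance between them (larger = less symmetric)."""
    binary = (np.asarray(gray) <= 80).astype(float)
    mid = binary.shape[1] // 2
    left = binary[:, :mid].sum(axis=1)          # horizontal projection
    right = binary[:, mid:][:, ::-1].sum(axis=1)  # mirrored right half
    denom = left + right
    denom[denom == 0] = 1                       # avoid division by zero
    return float((((left - right) ** 2) / denom).sum())

# A left/right-symmetric pattern gives distance 0.
sym = np.array([[100, 50, 50, 100],
                [ 60, 90, 90,  60]])
print(symmetry_chi_square(sym))  # 0.0
```

The resulting distance would then be compared against the predetermined symmetry threshold to accept or reject the horizontal pose.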
An image processing unit for searching for key facial feature points within specific regions of the face image. Segmentation uses the grey-level histogram of the eye candidate region of the recognition image: image thresholding sets the pixels with the lowest grey values to 255 and all other pixels to 0. The image processing unit includes a pupil localisation subunit. Pupil centre localisation detects the reflection point in each of the two eye regions; eye blocks are detected using position and brightness information, by deleting the brighter connected blocks from the binarized images of the left and right eye regions and selecting the connected block at the lowest position as the eye block. The pupil localisation subunit is further used to: convert to a chroma space, retaining the luminance component, to obtain a luminance image of the eye region; enhance the luminance image by histogram linear equalisation and contrast enhancement; apply a threshold transform; erode and dilate the thresholded image; apply Gaussian and median smoothing filters to the processed binary eye region; apply a threshold transform again to the smoothed image; and then perform edge detection, fit ellipses and detect circles in the contours, the circle of maximum radius giving the centre of the pupil;
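The pupil localisation pipeline is long; the sketch below shows only its morphological step, thresholding an eye-region patch and applying a 3×3 erosion followed by dilation (an opening) to remove speckle noise before smoothing and circle fitting. Pure NumPy is used so the step is self-contained; the helper names and the dark-pixel threshold of 60 are illustrative assumptions.

```python
import numpy as np

def erode(mask):
    """3x3 binary erosion: a pixel stays set only if its whole
    3x3 neighbourhood is set."""
    p = np.pad(mask, 1)                      # pad border with False
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + mask.shape[0],
                     1 + dx:1 + dx + mask.shape[1]]
    return out

def dilate(mask):
    """3x3 binary dilation: a pixel is set if any neighbour is set."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + mask.shape[0],
                     1 + dx:1 + dx + mask.shape[1]]
    return out

# Threshold an eye-region patch (dark pupil pixels -> True), then open
# (erode + dilate) so the isolated dark speck disappears while the
# pupil blob survives.
eye = np.full((7, 7), 200)
eye[2:5, 2:5] = 30        # dark pupil blob
eye[0, 6] = 30            # isolated dark speck (noise)
mask = eye < 60
opened = dilate(erode(mask))
print(int(opened.sum()))  # 9
```

In the full pipeline this opened mask would then be smoothed and passed to edge detection and circle fitting to find the pupil centre.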
A texture feature information obtaining unit for processing the face recognition data after the above localisation. A high-pass filter standardises the image to a Gaussian distribution of zero mean and unit variance; the image is then partitioned into sub-blocks and reduced in dimension. For each pixel, the two-value (binary) relation between its grey value and those of its neighbouring points is computed; the corresponding comparisons are multiplied by weighting factors and summed to form the local binary pattern code. Finally, the multi-region histogram is used as the texture feature of the image. The local texture feature is calculated as follows:
H_{i,j} = Σ_{x,y} I{h(x, y) = i} · I{(x, y) ∈ R_j},  i = 0, 1, …, n − 1;  j = 0, 1, …, D − 1
where H_{i,j} is the number of pixels in region R_j of the image that fall into the i-th histogram bin, n is the number of statistical pattern features of the local binary pattern, and D is the number of regions of the face image. The above information is computed for the key and non-key facial regions and then concatenated; the synthesis yields the texture feature information of the whole face image;
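The LBP coding and the multi-region histogram H_{i,j} above can be sketched as follows. The patent does not fix the neighbour ordering or the weights, so the standard 8-neighbour LBP is used as a stand-in; the function names, block count, and toy image are assumptions.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour local binary pattern: each pixel is coded by
    comparing its 8 neighbours to the centre value (the two-value
    relation) and weighting the resulting bits by powers of two."""
    g = np.asarray(gray, dtype=int)
    p = np.pad(g, 1, mode='edge')
    h, w = g.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros((h, w), dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nb = p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        code += (nb >= g).astype(int) << bit   # weighted binary relation
    return code

def regional_histograms(code, blocks=2, bins=256):
    """Split the LBP map into blocks x blocks regions R_j, histogram
    each region (the counts H_{i,j}), and concatenate the histograms
    into one texture feature vector."""
    h, w = code.shape
    feats = []
    for by in range(blocks):
        for bx in range(blocks):
            r = code[by * h // blocks:(by + 1) * h // blocks,
                     bx * w // blocks:(bx + 1) * w // blocks]
            hist, _ = np.histogram(r, bins=bins, range=(0, bins))
            feats.append(hist)
    return np.concatenate(feats)

rng = np.random.default_rng(1)
face = rng.integers(0, 256, size=(8, 8))
feature = regional_histograms(lbp_image(face))
print(feature.shape)  # (1024,)
```

Comparison against a face archive would then measure the distance (e.g. chi-square) between such concatenated histograms.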
A comparison and recognition unit for comparing the texture feature information of the whole face image obtained above with the face texture feature information in a face archive database, thereby achieving face recognition.
The technical scheme of the present invention has the following advantages: it accurately performs face recognition data processing and extraction of face texture features while overcoming the above defects of the prior art, and the algorithm is relatively simple and easy to implement.
Description of the drawings
Fig. 1 shows the composition block diagram of the system according to the present invention.
Specific embodiments
As shown in Fig. 1, the distributed face recognition system of the present invention includes:
A lip-print image acquisition unit for reading a person's face image data, which first acquires lip-print images from the same angle;
A face image data obtaining unit for detecting, based on the face image data obtained by the lip-print image acquisition unit, the facial regions other than the lips in the lip image, and for extracting the person's face image from the captured complex background image by confirming the facial attributes of the detected object;
Extracting the person's face image includes calculating and identifying its boundary, with the following calculation process:
where k_mn denotes the grey value of image pixel (m, n), K = max(k_mn), and the shooting angle θ_mn ∈ [0, 1].
A radian greyscale transformation is applied to the image using the Tr formula:
N is a natural number greater than 2;
where
θ_c is the boundary recognition threshold, determined from empirical lip-boundary values. The following is then calculated:
the transformation coefficient k′_mn = (K − 1)·θ_mn.
The image boundary is then extracted; the extracted image boundary matrix is
Edges = [e_mn],
where
e_mn = |k′_mn − min{k′_ij : (i, j) ∈ W}|
and W is the 3 × 3 window centred on pixel (m, n).
The boundary decision is then verified: if it suffices for recognition, the process ends; if not, the boundary recognition threshold above is adjusted and the process is repeated until a good boundary recognition result is obtained. The boundary recognition threshold takes values in the range [0.3, 0.8];
A recognition factor determination unit for a first-pass judgment of the recognition image. The judgment factors are facial pose, illuminance, occlusion, and face distance. Facial pose is judged first, by testing the recognition image for symmetry and completeness. The symmetry of the image acquired in the second step above is analysed: if it meets a predetermined threshold, the horizontal pose of the face is considered correct; if it exceeds the threshold, the horizontal pose is considered incorrect, i.e. the face is turned or tilted too far to the side. The concrete judgment algorithm binarizes the obtained image with a threshold of 80: pixels greater than 80 are set to 0 and the remainder to 1. The binarized image is divided into left and right halves, the horizontal projection of each half is computed, and the chi-square distance between the two resulting histograms is calculated; the larger the chi-square distance, the worse the symmetry. Facial completeness is then judged by inspecting the facial elements within the recognised face contour, checking whether the eyes, eyebrows, mouth and chin all appear completely; if an element is missing or incomplete, the pitch angle at recognition time is considered too large. Occlusion of the face is judged next, and processing continues only when the face is unoccluded. Finally, the suitability of the face distance is judged; at a suitable recognition distance, processing continues. When all conditions are satisfied, the image processing unit is started.
An image processing unit for searching for key facial feature points within specific regions of the face image. Segmentation uses the grey-level histogram of the eye candidate region of the recognition image: image thresholding sets the pixels with the lowest grey values to 255 and all other pixels to 0. The image processing unit includes a pupil localisation subunit. Pupil centre localisation detects the reflection point in each of the two eye regions; eye blocks are detected using position and brightness information, by deleting the brighter connected blocks from the binarized images of the left and right eye regions and selecting the connected block at the lowest position as the eye block. The pupil localisation subunit is further used to: convert to a chroma space, retaining the luminance component, to obtain a luminance image of the eye region; enhance the luminance image by histogram linear equalisation and contrast enhancement; apply a threshold transform; erode and dilate the thresholded image; apply Gaussian and median smoothing filters to the processed binary eye region; apply a threshold transform again to the smoothed image; and then perform edge detection, fit ellipses and detect circles in the contours, the circle of maximum radius giving the centre of the pupil;
A texture feature information obtaining unit for processing the face recognition data after the above localisation. A high-pass filter standardises the image to a Gaussian distribution of zero mean and unit variance; the image is then partitioned into sub-blocks and reduced in dimension. For each pixel, the two-value (binary) relation between its grey value and those of its neighbouring points is computed; the corresponding comparisons are multiplied by weighting factors and summed to form the local binary pattern code. Finally, the multi-region histogram is used as the texture feature of the image. The local texture feature is calculated as follows:
H_{i,j} = Σ_{x,y} I{h(x, y) = i} · I{(x, y) ∈ R_j},  i = 0, 1, …, n − 1;  j = 0, 1, …, D − 1
where H_{i,j} is the number of pixels in region R_j of the image that fall into the i-th histogram bin, n is the number of statistical pattern features of the local binary pattern, and D is the number of regions of the face image. The above information is computed for the key and non-key facial regions and then concatenated; the synthesis yields the texture feature information of the whole face image;
A comparison and recognition unit for comparing the texture feature information of the whole face image obtained above with the face texture feature information in a face archive database, thereby achieving face recognition.
The foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit it; any modifications, equivalent substitutions, and improvements made within the spirit and principles of the present invention shall be included within its scope of protection.
Claims (1)
1. A distributed face recognition system, characterized in that it comprises:
A lip-print image acquisition unit for reading a person's face image data, which first acquires lip-print images from the same angle;
A face image data obtaining unit for detecting, based on the face image data obtained by the lip-print image acquisition unit, the facial regions other than the lips in the lip image, and for extracting the person's face image from the captured complex background image by confirming the facial attributes of the detected object;
Extracting the person's face image includes calculating and identifying its boundary, with the following calculation process:
where k_mn denotes the grey value of image pixel (m, n), K = max(k_mn), and the shooting angle θ_mn ∈ [0, 1].
A radian greyscale transformation is applied to the image using the Tr formula:
N is a natural number greater than 2;
where
θ_c is the boundary recognition threshold, determined from empirical lip-boundary values. The following is then calculated:
the transformation coefficient k′_mn = (K − 1)·θ_mn.
The image boundary is then extracted; the extracted image boundary matrix is
Edges = [e_mn],
where
e_mn = |k′_mn − min{k′_ij : (i, j) ∈ W}|
and W is the 3 × 3 window centred on pixel (m, n).
The boundary decision is then verified: if it suffices for recognition, the process ends; if not, the boundary recognition threshold above is adjusted and the process is repeated until a good boundary recognition result is obtained. The boundary recognition threshold takes values in the range [0.3, 0.8];
A recognition factor determination unit for a first-pass judgment of the recognition image. The judgment factors are facial pose, illuminance, occlusion, and face distance. Facial pose is judged first, by testing the recognition image for symmetry and completeness. The symmetry of the image acquired in the second step above is analysed: if it meets a predetermined threshold, the horizontal pose of the face is considered correct; if it exceeds the threshold, the horizontal pose is considered incorrect, i.e. the face is turned or tilted too far to the side. The concrete judgment algorithm binarizes the obtained image with a threshold of 80: pixels greater than 80 are set to 0 and the remainder to 1. The binarized image is divided into left and right halves, the horizontal projection of each half is computed, and the chi-square distance between the two resulting histograms is calculated; the larger the chi-square distance, the worse the symmetry. Facial completeness is then judged by inspecting the facial elements within the recognised face contour, checking whether the eyes, eyebrows, mouth and chin all appear completely; if an element is missing or incomplete, the pitch angle at recognition time is considered too large. Occlusion of the face is judged next, and processing continues only when the face is unoccluded. Finally, the suitability of the face distance is judged; at a suitable recognition distance, processing continues. When all conditions are satisfied, the image processing unit is started.
An image processing unit for searching for key facial feature points within specific regions of the face image. Segmentation uses the grey-level histogram of the eye candidate region of the recognition image: image thresholding sets the pixels with the lowest grey values to 255 and all other pixels to 0. The image processing unit includes a pupil localisation subunit. Pupil centre localisation detects the reflection point in each of the two eye regions; eye blocks are detected using position and brightness information, by deleting the brighter connected blocks from the binarized images of the left and right eye regions and selecting the connected block at the lowest position as the eye block. The pupil localisation subunit is further used to: convert to a chroma space, retaining the luminance component, to obtain a luminance image of the eye region; enhance the luminance image by histogram linear equalisation and contrast enhancement; apply a threshold transform; erode and dilate the thresholded image; apply Gaussian and median smoothing filters to the processed binary eye region; apply a threshold transform again to the smoothed image; and then perform edge detection, fit ellipses and detect circles in the contours, the circle of maximum radius giving the centre of the pupil;
The texture-feature-information obtaining unit is used, after the above positioning, to process the face recognition data: a high-pass filter is applied and the image is normalized to a Gaussian distribution of zero mean and unit variance; the image is then divided into sub-blocks and dimensionality reduction is performed; next, the two-valued relationship between the grey value of each image pixel and its adjacent points is computed, the corresponding points are multiplied by weights, and the results are summed to form the local binary pattern code; finally, the histograms of multiple regions are used as the texture features of the image. The local texture feature is calculated as follows:

H_{i,j} = Σ_{x,y} I{h(x, y) = i} · I{(x, y) ∈ R_j},  i = 0, 1, …, n−1;  j = 0, 1, …, D−1

where H_{i,j} denotes the number of pixels in region R_j of the image that fall into the i-th histogram bin, n is the number of local-binary-pattern statistical-model features, and D is the number of regions of the facial image. The above information is counted for the key and non-key regions of the face, then spliced and synthesized to obtain the texture feature information of the whole face image;
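The regional LBP descriptor above can be sketched directly: each pixel's eight neighbours are thresholded against the centre (the two-valued relationship), weighted by powers of two and summed into a code, and per-region histograms H_{i,j} are concatenated. The 2×2 region grid and n = 256 bins below are illustrative choices, not fixed by the patent:

```python
# Sketch of the regional LBP texture descriptor described above.
def lbp_codes(img):
    h, w = len(img), len(img[0])
    # 8 neighbours, clockwise from top-left; weights 1, 2, 4, ..., 128.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = [[0] * (w - 2) for _ in range(h - 2)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            code = 0
            for k, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= c:   # two-value relationship
                    code |= 1 << k             # multiply by weight and sum
            codes[y - 1][x - 1] = code
    return codes

def regional_histograms(codes, rows=2, cols=2, n=256):
    h, w = len(codes), len(codes[0])
    feat = []
    for rj in range(rows):
        for cj in range(cols):
            hist = [0] * n  # H_{i,j}, i = 0..n-1, for region R_j
            for y in range(rj * h // rows, (rj + 1) * h // rows):
                for x in range(cj * w // cols, (cj + 1) * w // cols):
                    hist[codes[y][x]] += 1
            feat.extend(hist)  # splice the regional histograms
    return feat  # whole-image texture feature
```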
The comparison recognition unit is used to compare the texture feature information of the whole face image obtained above with the face texture feature information in the face archive database, thereby realizing face recognition.
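The patent does not name the comparison metric used against the archive database; the chi-square distance is a common choice for comparing LBP histograms and is used in this hypothetical matcher as an assumption:

```python
# Hypothetical matcher: compare a probe's spliced LBP histogram against a
# face-archive database with the chi-square distance (the metric is an
# assumption; the patent does not specify one).
def chi_square(h1, h2, eps=1e-10):
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

def identify(probe, archive):
    # archive: {person_id: texture_feature}; returns the closest identity.
    return min(archive, key=lambda pid: chi_square(probe, archive[pid]))
```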
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810245016.7A CN108446642A (en) | 2018-03-23 | 2018-03-23 | A kind of Distributive System of Face Recognition |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810245016.7A CN108446642A (en) | 2018-03-23 | 2018-03-23 | A kind of Distributive System of Face Recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108446642A true CN108446642A (en) | 2018-08-24 |
Family
ID=63196765
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810245016.7A Pending CN108446642A (en) | 2018-03-23 | 2018-03-23 | A kind of Distributive System of Face Recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108446642A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101861118A (en) * | 2007-11-16 | 2010-10-13 | 视瑞尔技术公司 | Method and device for finding and tracking pairs of eyes |
CN101861118B (en) * | 2007-11-16 | 2012-05-16 | 视瑞尔技术公司 | Method and apparatus for finding and tracking eyes |
CN103914683A (en) * | 2013-12-31 | 2014-07-09 | 闻泰通讯股份有限公司 | Gender identification method and system based on face image |
CN106295549A (en) * | 2016-08-05 | 2017-01-04 | 深圳市鹰眼在线电子科技有限公司 | Multi-orientation Face collecting method and device |
CN107657218A (en) * | 2017-09-12 | 2018-02-02 | 广东欧珀移动通信有限公司 | Face identification method and Related product |
Non-Patent Citations (2)
Title |
---|
REN, YUEQING: "Research on Iris Image Segmentation Algorithms", 《China Master's Theses Full-text Database, Information Science and Technology》 * |
WANG, XIAOHUA et al.: "Occluded Facial Expression Recognition Fusing Local Features", 《Journal of Image and Graphics》 * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109579075A (en) * | 2018-12-06 | 2019-04-05 | 王银仙 | Intelligent liquefied gas stove |
CN109784335A (en) * | 2019-01-25 | 2019-05-21 | 太原科技大学 | A kind of E Zhou interesting image area boundary demarcation method based on least square fitting |
CN111179210A (en) * | 2019-12-27 | 2020-05-19 | 浙江工业大学之江学院 | Method and system for generating texture map of face and electronic equipment |
CN111179210B (en) * | 2019-12-27 | 2023-10-20 | 浙江工业大学之江学院 | Face texture map generation method and system and electronic equipment |
CN113128391A (en) * | 2021-04-15 | 2021-07-16 | 广东便捷神科技股份有限公司 | Goods loading and taking method of unmanned vending machine based on face recognition |
CN113128391B (en) * | 2021-04-15 | 2024-02-06 | 广东便捷神科技股份有限公司 | Face recognition-based method for loading and picking goods of vending machine |
CN113884538A (en) * | 2021-10-18 | 2022-01-04 | 沈阳工业大学 | Infrared thermal image detection method for micro defects in large wind turbine blade |
CN113884538B (en) * | 2021-10-18 | 2024-07-23 | 沈阳工业大学 | Infrared thermal image detection method for micro defects in large wind turbine blade |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103632132B (en) | Face detection and recognition method based on skin color segmentation and template matching | |
CN108446642A (en) | A kind of Distributive System of Face Recognition | |
US6661907B2 (en) | Face detection in digital images | |
US7953253B2 (en) | Face detection on mobile devices | |
CN109918971B (en) | Method and device for detecting number of people in monitoring video | |
CN109101865A (en) | A kind of recognition methods again of the pedestrian based on deep learning | |
CN107368778A (en) | Method for catching, device and the storage device of human face expression | |
CN108268859A (en) | A kind of facial expression recognizing method based on deep learning | |
CN107330371A (en) | Acquisition methods, device and the storage device of the countenance of 3D facial models | |
CN111191573A (en) | Driver fatigue detection method based on blink rule recognition | |
Celik et al. | Facial feature extraction using complex dual-tree wavelet transform | |
CN104794693B (en) | A kind of portrait optimization method of face key area automatic detection masking-out | |
CN107066969A (en) | A kind of face identification method | |
CN108334870A (en) | The remote monitoring system of AR device data server states | |
CN111291701A (en) | Sight tracking method based on image gradient and ellipse fitting algorithm | |
CN108446639A (en) | Low-power consumption augmented reality equipment | |
CN108491798A (en) | Face identification method based on individualized feature | |
CN108520208A (en) | Localize face recognition method | |
CN111274851A (en) | Living body detection method and device | |
CN107145820B (en) | Binocular positioning method based on HOG characteristics and FAST algorithm | |
Guha | A report on automatic face recognition: Traditional to modern deep learning techniques | |
Szczepański et al. | Pupil and iris detection algorithm for near-infrared capture devices | |
Campadelli et al. | Fiducial point localization in color images of face foregrounds | |
Yamamoto et al. | Algorithm optimizations for low-complexity eye tracking | |
Mohamed et al. | Face detection based on skin color in image by neural networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20180824 |