CN108537143B - A face recognition method and system based on key-region feature comparison - Google Patents
A face recognition method and system based on key-region feature comparison
- Publication number: CN108537143B (application CN201810234083.9A)
- Authority
- CN
- China
- Prior art keywords
- face
- pixel
- region
- sample
- key area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The invention belongs to the technical field of face recognition, and in particular relates to a face recognition method and system based on key-region feature comparison that offers faster recognition speed and higher precision. A face recognition method based on key-region feature comparison comprises: face image acquisition; face calibration; classifying facial features with a classifier; extracting facial features of key regions; comparing the facial features of key regions to complete face recognition; and deep learning through feature analysis. The invention extracts facial features, performs feature calibration, judges and classifies by error rate, finally achieves the purpose of face recognition, and improves accuracy through multi-model fusion.
Description
Technical field
The invention belongs to the technical field of face recognition, and in particular relates to a face recognition method and system based on key-region feature comparison that offers faster recognition speed and higher precision.
Background technique
Face recognition occurs constantly in daily life; it refers to the process of confirming a person's identity from facial feature information. The human brain can easily judge identity from an observed face, but for a computer this is not an easy task. In general, automatic face recognition refers to the process of first acquiring still images or dynamic video streams containing faces with an image capture device, then automatically detecting and tracking the faces in the images or video streams with computer algorithms, and finally extracting facial feature information from the detected or tracked faces and performing recognition. Research on face recognition began in the 1960s and has achieved significant progress over more than fifty years of development. In recent years especially, with the upsurge driven by artificial intelligence technology, face recognition has attracted wide attention from academia and industry: it is not only an important branch of computer vision research and a hot topic in pattern recognition, but also an important application of artificial intelligence in social life.
Why, then, does face recognition receive such attention today? An important reason it has become the "favorite" of the scientific and industrial communities is that it is significant in both theoretical research and practical application. In academic research, on the one hand, the study of face recognition helps us understand and investigate the human face-recognition mechanism in depth; on the other hand, as an important branch of computer vision, research on face recognition algorithms can promote the development of related disciplines. In application, the progress of image acquisition technology, the popularization of acquisition equipment (drones, high-definition cameras, handheld devices, etc.) and the rapid development of Internet technology have greatly improved both the quality and the quantity of collected face images; at the same time, the development of big-data technology promotes the conversion of unstructured data into structured data. These advances make the practical value of face recognition ever greater and its scope of influence ever wider, accelerating the full practical deployment of face recognition technology. Specifically, face recognition is widely applied in the following areas:
Identity verification: a face image registered in advance usually serves as a digital identity; once registration succeeds, it can be compared with the face image to be verified to judge identity. Nowadays e-commerce is increasingly popular and the era of "face-swipe" payment is at hand, so guaranteeing both the speed and the reliability of online transactions is valued more and more. Alibaba, Baidu Wallet and others are gradually rolling out "face-swipe" payment systems; in addition, Megvii is working to promote face recognition in the financial services field and will launch a face-based account-opening system. Security: security is the earliest and most important application field of face recognition, mainly covering safety monitoring and public safety. With the construction of safe cities and smart cities, face recognition plays a major role in video surveillance systems for the intelligent analysis of personal attributes and identity. Security enterprises represented by Hikvision have successively released face-information extraction and retrieval systems whose main functions include face retrieval, face capture and real-time face comparison. Face image retrieval: face image retrieval and analysis is an important application of face recognition in the Internet era. The Internet holds massive numbers of images, with large quantities of new images uploaded daily, many of which contain people; face recognition is an indispensable technical means for retrieving and analyzing these images. In exploring research directions for face recognition, Fang Chi et al. of Tsinghua University, in a 2011 paper on fusing local and global multi-feature face recognition methods, proposed a method that fuses global and local features at the classification level on the premise that local features are effective; Wang Wenwei et al., in "Face recognition based on local binary patterns and deep learning" (2015), proposed a recognition method that retains local structural features of the face while applying deep learning; Song Tiecheng et al. of the University of Electronic Science and Technology, in "Research on the extraction and description of image local features" (2015), argued that research on local-feature methods is of great significance; and Ma Xiao, Feng Jufu et al. of Peking University, in "A face recognition method based on sparse representation of deep-learning features" (2016), again chose to start from the feature level to enhance the accuracy of face recognition. These research ideas show to varying degrees that local-feature methods matter for face recognition, and that improving local-feature methods is a very meaningful research direction.
However, the above methods are not mature enough in their study of key regions when performing local-feature analysis; in other words, their understanding of the significance of key-region feature analysis for improving recognition is insufficient. Recognizing the key regions of the face can further reduce the computational load of the recognition method, while also helping to further improve recognition precision.
Summary of the invention
The purpose of the present invention is to provide a face recognition method and system based on key-region feature comparison that can improve recognition speed and recognition precision. The object of the present invention is achieved as follows.
A face recognition method based on key-region feature comparison, comprising:
(1) Face image acquisition:
An image capture device acquires color images in real time. According to the RGB color mode, in which the color of each pixel is represented by the three components red, green and blue, the color image is converted to grayscale, with gray or brightness values ranging from 0 to 255; 0 is darkest, representing black, and 255 is brightest, representing white.
(2) Face calibration:
The reference points of each key region of the face are determined in the acquired color image; the key regions include the eye-corner region, mouth-corner region, nose region and pupil region. Face cutting is performed according to the coordinates (x, y) of the reference points and the characteristic curve of the key region where each reference point lies, and the calibrated image is used for facial feature extraction.
(3) Facial features are classified by a classifier, and the key regions of the calibrated face image are separated out. All sample images are assigned the same weight, so for a training set of N samples the initial weight of each sample is 1/N. An iterative method emphasizes the samples that were not classified correctly and de-emphasizes the samples that were. Weak classifiers are compared during the iterations, and a weak classifier with higher classification accuracy receives a larger weight in the final strong classifier.
(4) Facial features of the key regions are extracted:
First a set region Z is determined with the reference-point pixel as the center pixel. Region Z is traversed pixel by pixel, first by abscissa and then by ordinate, and the gray value of each pixel (x, y) of region Z is compared with the gray values of its adjacent pixels. If the gray value of pixel (x, y) is less than that of an adjacent pixel, that adjacent pixel is marked 0; if the gray value of the center pixel is greater than or equal to the adjacent pixel's gray value, that adjacent pixel is marked with a fixed threshold O in 1-255.
(5) The facial features of the key regions are compared to complete face recognition:
The image of region Z after feature extraction is compared with the standard picture corresponding to region Z in the picture library. If the classification loss function of the pixel values fluctuates within a preset threshold range E, the face is recognized as the identity corresponding to a registered face picture in the library; if the classification loss function fluctuates outside the range E, the result is judged unqualified and the method returns to step (1) to re-acquire a face image.
(6) Feature analysis performs deep learning, and the facial features of newly acquired key regions are added to the picture library:
The verification loss function between region Z and the standard picture corresponding to region Z in the library is calculated. If the verification loss fluctuates within the threshold range I, the image is regarded as a new standard picture and replaces the original standard picture of region Z in the library; if it fluctuates outside the range I, the image is simply stored in the library. The facial features extracted in step (4) are compared with the standard pictures, and matching features are added to the picture library as a data source for later face recognition.
Preferably, the grayscale conversion specifically comprises: taking the upper-left corner of the color image as the origin, the upper boundary of the color image as the x-axis and the left boundary as the y-axis. For the pixel with coordinates (x, y), let R(x, y), G(x, y) and B(x, y) denote its three RGB components, and let q(x, y) denote its gray value after conversion. Then:
q(x, y) = Max[R(x, y), G(x, y), B(x, y)], when any one of R(x, y), G(x, y), B(x, y) is greater than 150, or when any one of them is less than or equal to 100;
q(x, y) = 0.3R(x, y) + 0.59G(x, y) + 0.11B(x, y), when any one of R(x, y), G(x, y), B(x, y) is greater than 100 and less than or equal to 150.
Preferably, the face cutting comprises:
A characteristic curve is composed of n sample straight-line segments. A standard curve is set for the region of each reference point, together with a slope range K_std of the standard curve and a slope library. The characteristic curve of each reference point is evenly divided into n segments, and each segment is approximated by a straight line, i.e. a sample line; the slope K_sample of each sample line is calculated. When the value of K_sample lies within K_std, the line is retained; when K_sample lies outside K_std, the slopes of the standard curve and of the characteristic curve are calculated segment by segment over the same region, where K_std,n is the slope of the n-th segment of the standard curve and K_sample,n is the slope of the n-th segment of the characteristic curve. The relative slope error of each segment of the characteristic curve is calculated; when the relative error is within the threshold range T, the n-th segment of the curve is retained; when the relative error is outside the error range, the n-th segment is discarded. Face calibration is completed after all curves of each region have been traversed.
A face recognition system based on key-region feature comparison, comprising the following structure:
(1) A face image acquisition module:
An image capture device acquires color images in real time. According to the RGB color mode, in which the color of each pixel is represented by the three RGB components, the color image is converted to grayscale, with gray or brightness values ranging from 0 to 255; 0 is darkest, representing black, and 255 is brightest, representing white.
(2) A face calibration module:
The reference points of each key region of the face are determined in the acquired color image; the key regions include the eye-corner region, mouth-corner region, nose region and pupil region. Face cutting is performed according to the coordinates (x, y) of the reference points and the characteristic curve of the key region where each reference point lies, and the calibrated image is used for facial feature extraction.
(3) A feature classifier:
Facial features are classified by the classifier, and the key regions of the calibrated face image are separated out. All sample images are assigned the same weight, so for a training set of N samples the initial weight of each sample is 1/N. An iterative method emphasizes the samples that were not classified correctly and de-emphasizes the samples that were. Weak classifiers are compared during the iterations, and a weak classifier with higher classification accuracy receives a larger weight in the final strong classifier.
(4) A key-region facial feature extraction module:
First a set region Z is determined with the reference-point pixel as the center pixel. Region Z is traversed pixel by pixel, first by abscissa and then by ordinate, and the gray value of each pixel (x, y) of region Z is compared with the gray values of its adjacent pixels. If the gray value of pixel (x, y) is less than that of an adjacent pixel, that adjacent pixel is marked 0; if the gray value of the center pixel is greater than or equal to the adjacent pixel's gray value, that adjacent pixel is marked with a fixed threshold O in 1-255.
(5) A face recognition module:
The facial features of the key regions are compared to complete face recognition. The image of region Z after feature extraction is compared with the standard picture corresponding to region Z in the picture library. If the classification loss function of the pixel values fluctuates within a preset threshold range E, the face is recognized as the identity corresponding to a registered face picture in the library; if it fluctuates outside the range E, the result is judged unqualified and face image acquisition is performed again.
(6) A feature-analysis learning module:
Deep learning is performed and the facial features of newly acquired key regions are added to the picture library. The verification loss function between region Z and the standard picture of the corresponding region in the library is calculated. If the verification loss fluctuates within the threshold range I, the image is regarded as a new standard picture and replaces the original standard picture of region Z in the library; if it fluctuates outside the range I, the image is stored in the library. The extracted facial features are thus compared with the standard pictures, and matching features are added to the picture library as a data source for later face recognition.
Preferably, the grayscale conversion specifically comprises: taking the upper-left corner of the color image as the origin, the upper boundary of the color image as the x-axis and the left boundary as the y-axis. For the pixel with coordinates (x, y), let R(x, y), G(x, y) and B(x, y) denote its three RGB components, and let q(x, y) denote its gray value after conversion. Then:
q(x, y) = Max[R(x, y), G(x, y), B(x, y)], when any one of R(x, y), G(x, y), B(x, y) is greater than 150, or when any one of them is less than or equal to 100;
q(x, y) = 0.3R(x, y) + 0.59G(x, y) + 0.11B(x, y), when any one of R(x, y), G(x, y), B(x, y) is greater than 100 and less than or equal to 150.
The face cutting comprises:
A standard curve is set for the region of each reference point, together with a slope range K_std of the standard curve and a slope library. The characteristic curve of each reference point is evenly divided into n segments, and each segment is approximated by a straight line, i.e. a sample line; the slope K_sample of each sample line is calculated. When the value of K_sample lies within K_std, the line is retained; when K_sample lies outside K_std, the slopes of the standard curve and of the characteristic curve are calculated segment by segment over the same region, where K_std,n is the slope of the n-th segment of the standard curve and K_sample,n is the slope of the n-th segment of the characteristic curve. The relative slope error of each segment of the characteristic curve is calculated; when the relative error is within the threshold range T, the n-th segment of the curve is retained; when the relative error is outside the error range, the n-th segment is discarded. After all curves of each region have been traversed, face calibration is completed.
The beneficial effects of the invention are as follows:
The invention proposes a face recognition method and system based on key-region feature comparison that can improve recognition speed and recognition precision. The method extracts facial features, performs feature calibration, judges and classifies by error rate, finally achieves the purpose of face recognition, and improves accuracy through multi-model fusion.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of the system of the present invention.
Specific embodiments
The present invention is described further below with reference to the accompanying drawing.
A face recognition method based on key-region feature comparison comprises the following steps:
(1) Face image acquisition:
An image capture device acquires color images in real time. According to the RGB (red-green-blue) color mode, in which the color of each pixel is represented by the three RGB components, the color image is converted to grayscale, with gray or brightness values ranging from 0 to 255; 0 is darkest, representing black, and 255 is brightest, representing white.
The grayscale conversion specifically comprises: taking the upper-left corner of the color image as the origin, the upper boundary of the color image as the x-axis and the left boundary as the y-axis. For the pixel with coordinates (x, y), let R(x, y), G(x, y) and B(x, y) denote its three RGB components, and let q(x, y) denote its gray value after conversion. Then:
q(x, y) = Max[R(x, y), G(x, y), B(x, y)], when any one of R(x, y), G(x, y), B(x, y) is greater than 150, or when any one of them is less than or equal to 100;
q(x, y) = 0.3R(x, y) + 0.59G(x, y) + 0.11B(x, y), when any one of R(x, y), G(x, y), B(x, y) is greater than 100 and less than or equal to 150.
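A minimal sketch of the piecewise rule above for a single pixel may help. The branch order when a pixel satisfies more than one condition is our assumption, since the patent text does not fix it:

```python
def gray_value(r, g, b):
    """Piecewise grayscale conversion q(x, y) described above.

    Branch order is an assumption: the bright/dark cases are checked
    before the weighted mid-range case.
    """
    if max(r, g, b) > 150 or min(r, g, b) <= 100:
        # Some component above 150, or some component at or below 100:
        # take the maximum component as the gray value.
        return max(r, g, b)
    # All components lie in (100, 150]: standard luminance weighting.
    return 0.3 * r + 0.59 * g + 0.11 * b
```

For a whole image the function is simply applied to every pixel in turn.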
(2) Face calibration:
The reference points of each key region of the face are determined in the acquired color image; the key regions include the coordinates of the eye-corner region Y, mouth-corner region Z, nose region J and pupil region T. Face cutting is performed according to the coordinates (x, y) of the reference points and the characteristic curve of the key region where each reference point lies, and the corrected image is used for facial feature extraction.
The face cutting comprises:
A standard curve is set for the region of each reference point, together with a slope range K_std of the standard curve and a slope library. The characteristic curve of each reference point is evenly divided into n segments, and each segment is approximated by a straight line, i.e. a sample line; the slope K_sample of each sample line is calculated. When the value of K_sample lies within K_std, the line is retained; when K_sample lies outside K_std, the slopes of the standard curve and of the sample curve are calculated segment by segment over the same region, where K_std,n is the slope of the n-th segment of the standard curve and K_sample,n is the slope of the n-th segment of the sample curve. The relative slope error of each segment of the sample curve is calculated; when the relative error is within the threshold range T, the n-th segment of the curve is retained; when the relative error is outside the error range, the n-th segment is discarded. Face calibration is completed after all curves of each region have been traversed.
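The segment-screening rule above can be sketched as follows; the function and parameter names (`k_std`, `k_range`, `T`) are illustrative, and the slope of a segment is estimated from its endpoints:

```python
def segment_slope(p0, p1):
    """Slope of the straight line through a segment's two endpoints."""
    (x0, y0), (x1, y1) = p0, p1
    return (y1 - y0) / (x1 - x0)

def keep_segment(k_sample, k_std, k_range, T):
    """Decide whether the n-th segment of a characteristic curve is kept.

    k_sample: slope of the sample line for this segment
    k_std:    slope of the standard curve over the same segment
    k_range:  (low, high) admissible slope range K_std of the standard curve
    T:        relative-error threshold
    """
    low, high = k_range
    if low <= k_sample <= high:
        return True                     # within the slope range: keep directly
    # Otherwise compare against the standard curve's slope for this segment.
    rel_err = abs(k_sample - k_std) / abs(k_std)
    return rel_err <= T
```

Running this test over all n segments of every curve in a region completes the screening step.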
(3) Facial features are classified by a classifier, and the key regions of the calibrated face image are separated out. All sample images are assigned the same weight, so for a training set of N samples the initial weight of each sample is 1/N. An iterative method emphasizes the samples that were not classified correctly and de-emphasizes the samples that were. Weak classifiers are compared during the iterations, and a weak classifier with higher classification accuracy receives a larger weight in the final strong classifier.
For sample images {(x1, y1, z1), (x2, y2, z2), ..., (xn, yn, zn)}, zn ∈ {-1, 1}, n ∈ N, zn indicates whether the sample is classified correctly or incorrectly.
The weights of all sample images are initialized as Di = (wi), i = 1, ..., n, where Di denotes the initial weight distribution and each wi is 1/N.
Let m denote the iteration round and M the maximum number of rounds. After learning from the sample images with weight distribution Di, the classifier obtained is denoted Gm(xi).
The classification error rate of the classifier over all samples is: em = Σi wi · I(Gm(xi) ≠ zi).
The weight of the classifier in the final strong classifier is: αm = (1/2) ln((1 − em)/em).
After each iteration, the weight distribution of the sample images for the next round is Di+1.
The final strong classifier is: G(x) = sign(Σm αm · Gm(x)).
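The iterative reweighting described above matches the standard AdaBoost scheme. A minimal sketch, with simple threshold functions as the assumed weak learners (the patent does not specify the weak-learner form), is:

```python
import math

def adaboost(samples, labels, weak_learners, M):
    """Standard AdaBoost, matching the reweighting scheme above.

    samples:       feature values x_i
    labels:        z_i in {-1, +1}
    weak_learners: candidate classifiers, each a function x -> -1 or +1
    M:             maximum number of rounds
    Returns a strong classifier G(x) = sign(sum_m alpha_m * G_m(x)).
    """
    n = len(samples)
    w = [1.0 / n] * n                       # D_1: uniform initial weights 1/N
    committee = []                          # (alpha_m, G_m) pairs
    for _ in range(M):
        # Pick the weak learner with the lowest weighted error e_m.
        errors = [
            (sum(wi for wi, x, z in zip(w, samples, labels) if g(x) != z), g)
            for g in weak_learners
        ]
        e, G = min(errors, key=lambda t: t[0])
        if e <= 0 or e >= 0.5:              # perfect or useless learner: stop
            if e <= 0:
                committee.append((1.0, G))
            break
        alpha = 0.5 * math.log((1 - e) / e)  # classifier weight alpha_m
        committee.append((alpha, G))
        # Re-weight: emphasize misclassified samples, de-emphasize the rest.
        w = [wi * math.exp(-alpha * z * G(x))
             for wi, x, z in zip(w, samples, labels)]
        s = sum(w)
        w = [wi / s for wi in w]            # renormalize to D_{m+1}

    def strong(x):
        return 1 if sum(a * g(x) for a, g in committee) >= 0 else -1
    return strong
```

A more accurate weak learner yields a larger alpha, so it dominates the final vote, exactly as the text describes.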
(4) Facial features of the key regions are extracted:
First a set region Z is determined with the reference-point pixel as the center pixel. Region Z is traversed pixel by pixel, first by abscissa and then by ordinate, and the gray value of each pixel (x, y) of region Z is compared with the gray values of its adjacent pixels. If the gray value of pixel (x, y) is less than that of an adjacent pixel, that adjacent pixel is marked 0; if the gray value of the center pixel is greater than or equal to the adjacent pixel's gray value, that adjacent pixel is marked with a fixed threshold O in 1-255.
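A minimal sketch of this neighborhood-marking rule for one center pixel of region Z; the use of the 8-neighborhood and the particular value of O are assumptions for illustration:

```python
def mark_neighbors(img, x, y, O=128):
    """Mark the 8 neighbors of center pixel (x, y) per the rule above.

    img is a 2-D list of gray values indexed as img[y][x]; O is the
    fixed threshold in 1-255 (128 here is an arbitrary choice).
    Returns a dict mapping neighbor coordinates to 0 or O.
    """
    c = img[y][x]
    marks = {}
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue                      # skip the center itself
            nx, ny = x + dx, y + dy
            if 0 <= ny < len(img) and 0 <= nx < len(img[0]):
                # Center darker than neighbor -> 0; otherwise -> O.
                marks[(nx, ny)] = 0 if c < img[ny][nx] else O
    return marks
```

Applying this at every reference point of region Z yields the binary-like pattern that step (5) compares against the picture library.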
(5) The facial features of the key regions are compared to complete face recognition:
The picture of the region after feature extraction is compared with the standard picture corresponding to that region in the picture library. If the classification loss function of the pixel values fluctuates within a preset threshold range E, the face is recognized as the identity corresponding to a registered face image in the library; if it fluctuates outside the range E, the result is judged unqualified and the method returns to step (1) to re-acquire a face image.
The classification loss function is: L_cls(f, t) = −Σi θi · pi · log p̂i, where f is the extracted facial feature, t denotes the feature region, θi denotes the fault-tolerance parameter of feature extraction, pi is the target probability distribution of the facial feature, and p̂i is the predicted probability distribution of the facial feature.
(6) Feature analysis performs deep learning, and newly acquired facial features are added to the learning database:
The verification loss function between the facial feature of the newly extracted key region and the facial feature of the standard picture of the corresponding region in the picture library is calculated. If the verification loss fluctuates within the threshold range I, the picture is regarded as a new standard picture and replaces the standard picture of the original region in the library; if it fluctuates outside the range I, the picture is regarded as a reference picture and stored in the reference picture library. The facial features extracted in step (4) are compared with the standard pictures, and matching features are added to the learning database as a data source for later face recognition.
The verification loss function is: L_ver(fi, fj, yij) = (1/2)·‖fi − fj‖² when yij = 1, and (1/2)·max(0, m − ‖fi − fj‖)² when yij = −1, where fi is the facial feature of the extracted key region, fj is the facial feature of the standard picture of the corresponding region in the library, yij = 1 indicates that the two features belong to the same person, yij = −1 indicates different people, and θe = m is the margin parameter of the verification loss function.
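One common verification loss consistent with the symbol definitions above (yij = ±1, margin θe = m) is the contrastive form sketched below; the patent's formula image is not reproduced in this text, so this particular form is an assumption:

```python
import math

def verification_loss(fi, fj, y, m):
    """Verification loss for a feature pair.

    fi, fj: feature vectors; y = 1 for same person, -1 for different;
    m: margin parameter (theta_e = m). Contrastive form is assumed.
    """
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(fi, fj)))
    if y == 1:
        return 0.5 * d * d                   # pull matching features together
    return 0.5 * max(0.0, m - d) ** 2        # push non-matches beyond margin m
```

Same-person pairs are penalized by their distance; different-person pairs are penalized only while their distance is still inside the margin.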
To address the problems of illumination, contrast, noise and blur that occur in actual image acquisition and processing, the invention performs image preprocessing, applying targeted filtering, noise reduction and image sharpening to equalize the grayscale image. Comparison before and after image processing shows that the grayscale conversion works well and feature contrast is enhanced: the boundary between a person's face, shoulders and the background becomes clearer, and the contours of the nose and mouth become more distinct, which benefits later feature extraction and face recognition. Face cutting and face calibration lay the foundation for subsequent feature extraction and guarantee the validity of face recognition. For the single-training-sample face recognition problem caused by factors such as illumination, occlusion and deformation, the invention proposes a single-sample face recognition method. Extracting features from the sample set with the above model effectively solves the problem of a limited number of training samples, and is in addition robust to illumination and occlusion. To solve the deformation problem, the invention processes the image by sub-regions and fuses the recognition results of the image blocks of all key regions through weighted fusion. Experimental results on multiple data sets show that the invention can effectively improve the accuracy of face recognition under single-training-sample conditions, while improving robustness to factors such as illumination, occlusion and deformation.
A kind of face identification system based on key area aspect ratio pair, comprises the following structure:
(1) man face image acquiring module:
Image capture device acquires color image in real time, and according to RGB color mode, i.e., the color of each pixel is by RGB
Color image is carried out gray processing processing by three representation in components, and value range is 0 to 255 gray value or brightness value;0 is
Most secretly indicate black, 255 be most bright expression white;
(2) Face normalization module:
Determine each datum mark of face key area in collected color image, key area includes: canthus region
Y, corners of the mouth region Z, nose region J, the coordinate of pupil region T;According to where the coordinate (x, y) of datum mark and each datum mark
The indicatrix of key area, carries out face cutting, and the image corrected is extracted for face characteristic;
(3) Feature classifier:
Face features are classified by a classifier, and the key areas of the calibrated face image are decomposed. All sample images are initially assigned the same weight; for a training set of N samples, each initial weight is 1/N. An iterative procedure emphasizes samples that were not classified correctly and de-emphasizes samples that were already distinguished; weak classifiers are obtained in the iterations, and a weak classifier with higher classification accuracy receives a larger weight in the final strong classifier;
(4) the face characteristic extraction module of key area:
It is determined first using the pixel as the region Z of the setting of center pixel, which is carried out according to elder generation from abscissa
Each pixel in traversal, then the order traversal image that is traversed by ordinate, pixel (x, y) to region Z and it
The gray value of adjacent pixels point be compared, if the gray value of pixel (x, y) is less than the gray value of its adjacent pixels,
This adjacent pixels is labeled as 0;If the gray value of center pixel is greater than or equal to adjacent pixels gray value, by this neighbour
Pixel is connect labeled as the threshold value O of a certain fixation in 1-255;
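A minimal sketch of the neighbor comparison in module (4), assuming 8-connected neighbors and a grayscale image stored as a list of rows; the function name and grid layout are illustrative, not from the patent:

```python
def label_neighbors(gray, x, y, O=1):
    """Label the neighbors of center pixel (x, y): an adjacent pixel
    gets 0 when the center is strictly darker than it, and the fixed
    threshold value O otherwise (center >= neighbor)."""
    h, w = len(gray), len(gray[0])
    labels = {}
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue  # skip the center pixel itself
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                labels[(nx, ny)] = 0 if gray[y][x] < gray[ny][nx] else O
    return labels
```

With O = 1 this reduces to the classical local binary pattern comparison, which matches the description of comparing each center pixel against its neighborhood.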
(5) Face recognition module:
The face features of the key areas are compared to complete face recognition: the picture of the region after feature extraction is compared with the standard picture of the corresponding region in the picture library. If the classification loss function of the pixel values fluctuates within a preset threshold range E, the face is identified as the identity associated with a registered face image in the library; if it fluctuates outside the threshold range E, the face is judged unqualified, and the process returns to step (1) to re-acquire the face image;
(6) Feature analysis and learning module:
Deep learning is carried out, and newly acquired face features are added to the learning database: the verification loss function between the face feature of the newly extracted key area and the face feature of the corresponding standard picture in the library is calculated. If the verification loss fluctuates within the threshold range I, the picture is accepted as a new standard picture and replaces the original standard picture of that region in the library; if it fluctuates outside the range I, the picture is treated as a reference picture and stored in the reference picture library. The face features extracted in step (4) are thus compared with the standard pictures, and matching features are added to the learning database as a data source for later face recognition.
The grayscale conversion specifically includes: take the upper-left corner of the color image as the origin, the upper boundary of the color image as the x-axis, and the left boundary as the y-axis. For the pixel at coordinate (x, y), let R(x, y), G(x, y), and B(x, y) denote its three RGB components, and let q(x, y) denote its gray value after conversion; then:
Q(x, y) = Max[R(x, y), G(x, y), B(x, y)],
when any one of R(x, y), G(x, y), B(x, y) is greater than 150,
or when any one of R(x, y), G(x, y), B(x, y) is less than or equal to 100;
Q(x, y) = 0.3R(x, y) + 0.59G(x, y) + 0.11B(x, y),
when any one of R(x, y), G(x, y), B(x, y) is greater than 100 and less than or equal to 150;
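The piecewise rule above can be sketched as follows. The source pairs no explicit formula with the case where a component is at most 100; this sketch assumes the Max rule also covers that case, and the function name is illustrative:

```python
def to_gray(r, g, b):
    """Piecewise grayscale conversion following the rules above.
    The <= 100 branch is an assumption: the source states the
    condition without a formula, so the Max rule is reused there."""
    if max(r, g, b) > 150 or min(r, g, b) <= 100:
        return max(r, g, b)
    # all components in (100, 150]: luminance-weighted average
    return round(0.3 * r + 0.59 * g + 0.11 * b)
```

The 0.3/0.59/0.11 weights are the usual luminance coefficients, so the middle branch behaves like a standard RGB-to-gray conversion while the outer branches preserve extreme highlights and shadows.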
The face cropping:
Set, for each datum-point region, the standard curve, the slope range K_std of the standard curve, and a slope library;
The characteristic curve of each datum point is evenly divided into n segments, and each segment is approximated by a straight line, the sample line; the slope K_sample of each sample line is calculated. When the value of K_sample lies within K_std, the line is retained; when it lies outside K_std, the slopes of the standard curve and the sample curve are calculated segment by segment over the same region, where
K_std,n is the slope of the n-th segment of the standard curve and K_sample,n is the slope of the n-th segment of the sample curve;
The relative slope error of each sample-curve segment is calculated;
when the relative error lies within the threshold range T, the n-th curve segment is retained; when it lies outside the error range, the n-th curve segment is discarded;
Face normalization is completed after all curves of every region have been traversed.
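The segment-slope test can be sketched as below. The patent's relative-error formula is not reproduced, so the definition |K_sample - K_std| / |K_std|, the chord-slope approximation, and the function names are assumptions:

```python
def segment_slopes(points, n):
    """Split a sampled curve (a list of (x, y) points) into n even
    segments and approximate each segment by the slope of its chord."""
    step = (len(points) - 1) // n
    slopes = []
    for k in range(n):
        (x0, y0), (x1, y1) = points[k * step], points[(k + 1) * step]
        slopes.append((y1 - y0) / (x1 - x0))
    return slopes

def keep_segment(k_sample, k_std, T):
    """Retain a segment when the assumed relative slope error
    |k_sample - k_std| / |k_std| is within the threshold T."""
    return abs(k_sample - k_std) / abs(k_std) <= T
```

Segments whose slope deviates too far from the standard curve are discarded, which is what lets the cropping step tolerate local deformation of a key area.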
The feature classifier in step (3): for sample images {(x1, y1, z1), (x2, y2, z2), …, (xn, yn, zn)}, zn ∈ {-1, 1}, n ∈ N, indicating correct or incorrect;
The weights of all the sample images are initialized:
Di = (wi), i = 1, …, n;
Di denotes the initialized weight distribution, each weight being 1/N;
m denotes the iteration round and M the maximum number of rounds; after learning from the sample images with the weight distribution Di, the resulting classifier is denoted Gm(xi);
The classification error rate of the classifier over all samples is:
em = Σi wi · I(Gm(xi) ≠ zi);
The weight of the classifier in the finally obtained strong classifier is:
αm = (1/2) ln((1 − em) / em);
After the iteration, the weight distribution of the next round of sample images is Di+1;
The final strong classifier is:
G(x) = sign(Σm αm Gm(x))
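The weighting scheme of step (3) matches textbook AdaBoost; below is a compact sketch under that assumption. The stump learners and the per-round selection loop are illustrative, not from the patent:

```python
import math

def adaboost(samples, labels, weak_learners, M):
    """AdaBoost: samples start at weight 1/N, misclassified samples
    are emphasized each round, and weak classifiers with lower
    weighted error get larger votes alpha in the strong classifier."""
    N = len(samples)
    w = [1.0 / N] * N
    ensemble = []
    for _ in range(M):
        # weighted error e_m of every candidate weak learner
        errs = [sum(wi for wi, x, z in zip(w, samples, labels) if h(x) != z)
                for h in weak_learners]
        e_m = min(errs)
        h_m = weak_learners[errs.index(e_m)]
        if e_m <= 0:          # perfect learner: give it a vote and stop
            ensemble.append((1.0, h_m))
            break
        if e_m >= 0.5:        # no better than chance: stop
            break
        alpha = 0.5 * math.log((1 - e_m) / e_m)
        ensemble.append((alpha, h_m))
        # emphasize misclassified samples, then renormalize
        w = [wi * math.exp(-alpha * z * h_m(x))
             for wi, x, z in zip(w, samples, labels)]
        Z = sum(w)
        w = [wi / Z for wi in w]
    def strong(x):
        return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
    return strong
```

The sign of the alpha-weighted vote is the final strong classifier, mirroring the description that more accurate weak classifiers carry larger weight.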
The classification loss function is:
Loss(f, t) = −Σi pi log p̂i,
where f is the extracted face feature, t denotes the feature region, θi denotes the fault-tolerance parameter of the feature extraction, pi is the target probability distribution of the face feature, and p̂i is the predicted probability distribution of the face feature;
The verification loss function is:
Verif(fi, fj, yij) = (1/2)‖fi − fj‖² when yij = 1, and (1/2) max(0, m − ‖fi − fj‖)² when yij = −1,
where fi is the face feature of the extracted key area and fj is the face feature of the corresponding standard picture in the picture library; yij = 1 indicates that the features belong to the same person, yij = −1 that they belong to different persons; θe = m is the margin parameter of the verification loss function.
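The symbols above (feature pair fi, fj; label yij = ±1; margin θe = m) match the standard contrastive verification loss, so the sketch below assumes that form; the function name is illustrative:

```python
def verification_loss(fi, fj, yij, m=1.0):
    """Contrastive verification loss (assumed form): same-person
    feature pairs (yij = 1) are penalized by half the squared
    distance; different-person pairs (yij = -1) are penalized only
    when they fall closer than the margin m."""
    d = sum((a - b) ** 2 for a, b in zip(fi, fj)) ** 0.5
    if yij == 1:
        return 0.5 * d * d
    return 0.5 * max(0.0, m - d) ** 2
```

A loss near zero for a same-person pair (or above zero for a different-person pair inside the margin) is what the learning module in step (6) compares against the threshold range I.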
Claims (5)
1. A face recognition method based on key area aspect ratio pair, characterized by comprising the following steps:
(1) Face image acquisition:
An image capture device acquires a color image in real time. Under the RGB color mode, the color of each pixel is represented by the three components R, G, and B; the color image is converted to grayscale, each gray value (brightness value) ranging from 0 to 255, where 0 is darkest (black) and 255 is brightest (white);
(2) Face normalization:
Determine the datum points of each face key area in the acquired color image; the key areas include the eye-corner region, the mouth-corner region, the nose region, and the pupil region. According to the coordinates (x, y) of each datum point and the characteristic curve of the key area where it lies, the face is cropped, and the calibrated image is used for face feature extraction;
(3) Face features are classified by a classifier, and the key areas of the calibrated face image are decomposed. All sample images are assigned the same weight; for a training set of N samples, each initial weight is 1/N. An iterative procedure emphasizes samples that were not classified correctly and de-emphasizes samples that were already distinguished; weak classifiers are obtained in the iterations, and a weak classifier with higher classification accuracy receives a larger weight in the final strong classifier;
(4) Face feature extraction of the key areas is carried out:
A region Z of set size, centered on the datum-point pixel, is first determined; the pixels of the image are traversed in order of abscissa first and ordinate second. The gray value of each pixel (x, y) in region Z is compared with the gray values of its adjacent pixels: if the gray value of pixel (x, y) is less than that of an adjacent pixel, that adjacent pixel is labeled 0; if the gray value of the center pixel is greater than or equal to that of the adjacent pixel, the adjacent pixel is labeled with a fixed threshold value O in the range 1-255;
(5) The face features of the key areas are compared to complete face recognition:
The picture of region Z after feature extraction is compared with the standard picture of the corresponding region Z in the picture library. If the classification loss function of the pixel values fluctuates within a preset threshold range E, the face is identified as the identity associated with a registered face picture in the library; if it fluctuates outside the threshold range E, the face is judged unqualified, and the process returns to step (1) to re-acquire the face image;
(6) Feature analysis carries out deep learning, and the newly acquired key-area face features are added to the picture library:
The verification loss function between region Z and the face feature of the standard picture of the corresponding region Z in the library is calculated. If the verification loss fluctuates within the threshold range I, the picture is accepted as a new standard picture and replaces the original standard picture of region Z in the library; if it fluctuates outside the range I, the picture is stored in the picture library as a reference. The face features extracted in step (4) are thus compared with the standard pictures, and matching features are added to the picture library as a data source for later face recognition.
2. The face recognition method based on key area aspect ratio pair according to claim 1, characterized in that the grayscale conversion specifically includes: take the upper-left corner of the color image as the origin, the upper boundary of the color image as the x-axis, and the left boundary as the y-axis. For the pixel at coordinate (x, y), let R(x, y), G(x, y), and B(x, y) denote its three RGB components, and let q(x, y) denote its gray value after conversion; then:
Q(x, y) = Max[R(x, y), G(x, y), B(x, y)],
when any one of R(x, y), G(x, y), B(x, y) is greater than 150,
or when any one of R(x, y), G(x, y), B(x, y) is less than or equal to 100;
Q(x, y) = 0.3R(x, y) + 0.59G(x, y) + 0.11B(x, y),
when any one of R(x, y), G(x, y), B(x, y) is greater than 100 and less than or equal to 150.
3. The face recognition method based on key area aspect ratio pair according to claim 1, characterized in that the face cropping:
Each characteristic curve is composed of n sample line segments;
Set, for each datum-point region, the standard curve, the slope range K_std of the standard curve, and a slope library;
The characteristic curve of each datum point is evenly divided into n segments, and each segment is approximated by a straight line, the sample line; the slope K_sample of each sample line is calculated. When the value of K_sample lies within K_std, the line is retained; when it lies outside K_std, the slopes of the standard curve and the characteristic curve are calculated segment by segment over the same region, where
K_std,n is the slope of the n-th segment of the standard curve and K_sample,n is the slope of the n-th segment of the characteristic curve;
The relative slope error of each segment of the characteristic curve is calculated;
when the relative error lies within the threshold range T, the n-th curve segment is retained; when it lies outside the error range, the n-th curve segment is discarded;
Face normalization is completed after all curves of every region have been traversed.
4. A face recognition system based on key area aspect ratio pair, characterized by comprising the following structures:
(1) Face image acquisition module:
An image capture device acquires a color image in real time. Under the RGB color mode, the color of each pixel is represented by the three components R, G, and B; the color image is converted to grayscale, each gray value (brightness value) ranging from 0 to 255, where 0 is darkest (black) and 255 is brightest (white);
(2) Face normalization module:
Determine the datum points of each face key area in the acquired color image; the key areas include the eye-corner region, the mouth-corner region, the nose region, and the pupil region. According to the coordinates (x, y) of each datum point and the characteristic curve of the key area where it lies, the face is cropped, and the calibrated image is used for face feature extraction;
(3) Feature classifier:
Face features are classified by a classifier, and the key areas of the calibrated face image are decomposed. All sample images are assigned the same weight; for a training set of N samples, each initial weight is 1/N. An iterative procedure emphasizes samples that were not classified correctly and de-emphasizes samples that were already distinguished; weak classifiers are obtained in the iterations, and a weak classifier with higher classification accuracy receives a larger weight in the final strong classifier;
(4) Key-area face feature extraction module:
A region Z of set size, centered on the datum-point pixel, is first determined; the pixels of the image are traversed in order of abscissa first and ordinate second. The gray value of each pixel (x, y) in region Z is compared with the gray values of its adjacent pixels: if the gray value of pixel (x, y) is less than that of an adjacent pixel, that adjacent pixel is labeled 0; if the gray value of the center pixel is greater than or equal to that of the adjacent pixel, the adjacent pixel is labeled with a fixed threshold value O in the range 1-255;
(5) Face recognition module:
The face features of the key areas are compared to complete face recognition: the picture of region Z after feature extraction is compared with the standard picture of the corresponding region Z in the picture library. If the classification loss function of the pixel values fluctuates within a preset threshold range E, the face is identified as the identity associated with a registered face picture in the library; if it fluctuates outside the threshold range E, the face is judged unqualified, and the face image is re-acquired;
(6) Feature analysis and learning module:
Deep learning is carried out, and the newly acquired key-area face features are added to the picture library: the verification loss function between region Z and the face feature of the standard picture of the corresponding region in the library is calculated. If the verification loss fluctuates within the threshold range I, the picture is accepted as a new standard picture and replaces the standard picture of the original region Z in the library; if it fluctuates outside the range I, the picture is stored in the picture library as a reference. The extracted face features are thus compared with the standard pictures, and matching features are added to the picture library as a data source for later face recognition.
5. The face recognition system based on key area aspect ratio pair according to claim 4, characterized in that the grayscale conversion specifically includes: take the upper-left corner of the color image as the origin, the upper boundary of the color image as the x-axis, and the left boundary as the y-axis. For the pixel at coordinate (x, y), let R(x, y), G(x, y), and B(x, y) denote its three RGB components, and let q(x, y) denote its gray value after conversion; then:
Q(x, y) = Max[R(x, y), G(x, y), B(x, y)],
when any one of R(x, y), G(x, y), B(x, y) is greater than 150,
or when any one of R(x, y), G(x, y), B(x, y) is less than or equal to 100;
Q(x, y) = 0.3R(x, y) + 0.59G(x, y) + 0.11B(x, y),
when any one of R(x, y), G(x, y), B(x, y) is greater than 100 and less than or equal to 150;
The face cropping:
Set, for each datum-point region, the standard curve, the slope range K_std of the standard curve, and a slope library;
The characteristic curve of each datum point is evenly divided into n segments, and each segment is approximated by a straight line, the sample line; the slope K_sample of each sample line is calculated. When the value of K_sample lies within K_std, the line is retained; when it lies outside K_std, the slopes of the standard curve and the characteristic curve are calculated segment by segment over the same region, where
K_std,n is the slope of the n-th segment of the standard curve and K_sample,n is the slope of the n-th segment of the characteristic curve;
The relative slope error of each segment of the characteristic curve is calculated;
when the relative error lies within the threshold range T, the n-th curve segment is retained; when it lies outside the error range, the n-th curve segment is discarded;
After all curves of every region have been traversed, face normalization is completed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810234083.9A CN108537143B (en) | 2018-03-21 | 2018-03-21 | A kind of face identification method and system based on key area aspect ratio pair |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108537143A CN108537143A (en) | 2018-09-14 |
CN108537143B true CN108537143B (en) | 2019-02-15 |
Family
ID=63484997
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810234083.9A Active CN108537143B (en) | 2018-03-21 | 2018-03-21 | A kind of face identification method and system based on key area aspect ratio pair |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108537143B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109711386B (en) * | 2019-01-10 | 2020-10-09 | 北京达佳互联信息技术有限公司 | Method and device for obtaining recognition model, electronic equipment and storage medium |
CN109686110A (en) * | 2019-01-17 | 2019-04-26 | 蜂寻(上海)信息科技有限公司 | Parking stall sky expires condition discrimination method and apparatus |
WO2021027440A1 (en) | 2019-08-15 | 2021-02-18 | 华为技术有限公司 | Face retrieval method and device |
CN113705280B (en) * | 2020-05-21 | 2024-05-10 | 北京聚匠艺传媒有限公司 | Human-computer interaction method and device based on facial features |
CN112070013A (en) * | 2020-09-08 | 2020-12-11 | 安徽兰臣信息科技有限公司 | Method and device for detecting facial feature points of children and storage medium |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106845421A (en) * | 2017-01-22 | 2017-06-13 | 北京飞搜科技有限公司 | Face characteristic recognition methods and system based on multi-region feature and metric learning |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100336070C (en) * | 2005-08-19 | 2007-09-05 | 清华大学 | Method of robust human face detection in complicated background image |
US8260008B2 (en) * | 2005-11-11 | 2012-09-04 | Eyelock, Inc. | Methods for performing biometric recognition of a human eye and corroboration of same |
CN101398886B (en) * | 2008-03-17 | 2010-11-10 | 杭州大清智能技术开发有限公司 | Rapid three-dimensional face identification method based on bi-eye passiveness stereo vision |
CN101383001B (en) * | 2008-10-17 | 2010-06-02 | 中山大学 | Quick and precise front human face discriminating method |
CN102194131B (en) * | 2011-06-01 | 2013-04-10 | 华南理工大学 | Fast human face recognition method based on geometric proportion characteristic of five sense organs |
Also Published As
Publication number | Publication date |
---|---|
CN108537143A (en) | 2018-09-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108537143B (en) | A kind of face identification method and system based on key area aspect ratio pair | |
CN106778664B (en) | Iris image iris area segmentation method and device | |
CN101142584B (en) | Method for facial features detection | |
CN109145742B (en) | Pedestrian identification method and system | |
CN104050471B (en) | Natural scene character detection method and system | |
CN111414862B (en) | Expression recognition method based on neural network fusion key point angle change | |
CN104517104A (en) | Face recognition method and face recognition system based on monitoring scene | |
CN104504383B (en) | A kind of method for detecting human face based on the colour of skin and Adaboost algorithm | |
CN107066969A (en) | A kind of face identification method | |
CN110298297A (en) | Flame identification method and device | |
CN104951940A (en) | Mobile payment verification method based on palmprint recognition | |
CN110008793A (en) | Face identification method, device and equipment | |
CN106127193B (en) | A kind of facial image recognition method | |
Shen et al. | Adaptive pedestrian tracking via patch-based features and spatial–temporal similarity measurement | |
CN106599785A (en) | Method and device for building human body 3D feature identity information database | |
CN110119695A (en) | A kind of iris activity test method based on Fusion Features and machine learning | |
CN106611158A (en) | Method and equipment for obtaining human body 3D characteristic information | |
CN107784263A (en) | Based on the method for improving the Plane Rotation Face datection for accelerating robust features | |
Akbarzadeh et al. | Design and matlab simulation of Persian license plate recognition using neural network and image filtering for intelligent transportation systems | |
Hsiao et al. | EfficientNet based iris biometric recognition methods with pupil positioning by U-net | |
Chen et al. | Fresh tea sprouts detection via image enhancement and fusion SSD | |
Hu et al. | Fast face detection based on skin color segmentation using single chrominance Cr | |
Das et al. | Human face detection in color images using HSV color histogram and WLD | |
CN108596121A (en) | A kind of face critical point detection method based on context and structural modeling | |
CN109165551B (en) | Expression recognition method for adaptively weighting and fusing significance structure tensor and LBP characteristics |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
Effective date of registration: 20190121 Address after: Room 3658, 3rd floor, 2879 Longteng Avenue, Xuhui District, Shanghai, 2002 Applicant after: Optical Control Teslian (Shanghai) Information Technology Co., Ltd. Address before: 100088 No. 303, Block C, Block C, 28 Xinjiekouwai Street, Xicheng District, Beijing Applicant before: Benitez Lian (Beijing) Technology Co. Ltd. |
|
GR01 | Patent grant | ||
GR01 | Patent grant |