CN108563999A - Person identity recognition method and device for low-quality video images - Google Patents

Person identity recognition method and device for low-quality video images

Info

Publication number
CN108563999A
CN108563999A
Authority
CN
China
Prior art keywords
face
image
rectangular area
training
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810224833.4A
Other languages
Chinese (zh)
Inventor
刘丰 (Liu Feng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Terminus Beijing Technology Co Ltd
Original Assignee
Terminus Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Terminus Beijing Technology Co Ltd filed Critical Terminus Beijing Technology Co Ltd
Priority to CN201810224833.4A
Publication of CN108563999A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification

Abstract

The present invention relates to a person identity recognition method and device for low-quality video images. The method comprises: sampling and quantizing a video image with a camera, converting it to a digital image after pixel re-computation, and inputting and storing it in a frame memory; classifying the digital image by geometric and statistical features to obtain a target image; translating and cropping rectangular regions of the target image; for any given rectangular region of the target image, searching to determine whether it contains a face and, if it does, extracting the characteristic information of the face, including position, size, and pose; and, from the extracted facial characteristic information, determining whether the person is a specific identity user and emitting the identity recognition result. The invention addresses the difficulty of quickly recognizing relatively blurred faces in low-resolution video images: it substantially reduces the computational load, greatly improves the recognition rate, and suits devices with limited computing power, such as embedded boards.

Description

Person identity recognition method and device for low-quality video images
Technical field
The invention belongs to the field of image-based identity recognition, and in particular relates to a person identity recognition method and device for low-quality video images.
Background technology
Identifying the identity of a target person from video images is a basic technology in many scenarios, such as security monitoring, trajectory tracking, automatic door opening, and self-service withdrawal and payment.
Face recognition is the main realization of video-based identity recognition. A human face carries enough information for identification, and every face has its own unique features; even identical twins do not have exactly the same face. Besides the reliability and accuracy that follow from this richness of information, face recognition, compared with fingerprint recognition, iris recognition, and other identity technologies, needs only conventional equipment such as cameras rather than dedicated sensors for feature acquisition; it is low-cost, easy to deploy, and requires no physical contact to perform identification. Research on face recognition began in the last century, when Galton pointed out the possibility of using the face as a feature for identification, although automatic face recognition was not yet involved at that time. Face recognition subsequently developed rapidly, and this development can be divided into four stages:
First stage (1964-1990)
In this stage, researchers did not study face recognition as a separate project but treated it as a general pattern-recognition problem; like other object-recognition work of the time, it was based on descriptive geometric structure features. Research focused on extracting and analyzing the structural features of facial contour curves. This approach required accurately measuring the geometric distances between feature points, and faces were identified with nearest-neighbor and similar methods. The face recognition of this stage was not intelligent: much of the work was carried out manually, making these non-automatic recognition methods.
Second stage (1991-1997)
In this stage face recognition technology developed rapidly and many important results were achieved. Turk and Pentland of the MIT Media Lab proposed the "eigenface", an unsupervised face recognition method based on the Karhunen-Loeve transform. Using principal component analysis, it projects the target image into the eigenface subspace, effectively reducing dimensionality, and judges and identifies faces from the position of the projection point in that subspace and the length of the projection. The drawback of the eigenface method is its lack of robustness to illumination change; nevertheless, its dimensionality-reduction idea greatly inspired later face recognition methods, and newly proposed methods were all more or less influenced by it. Another important study of the same period was by Brunelli and Poggio of MIT, who compared face recognition based on geometric features (the main approach of the first stage) with face recognition based on template matching and reached a conclusion: template matching outperforms geometric features. This guiding conclusion, together with the eigenface results, ended face recognition research based purely on structural features; statistical pattern recognition and appearance-based low-dimensional subspace modeling then gradually became the mainstream research direction.
Belhumeur et al. proposed the Fisherfaces method, which applies linear discriminant analysis (LDA) after the PCA dimensionality reduction of eigenfaces, further analyzing the obtained principal components to find projections with between-class scatter as large as possible and within-class scatter as small as possible, thereby effectively extracting the differences between samples. Variants of Fisherfaces were subsequently researched and developed, such as the null-space method, subspace discriminant models, and enhanced discriminant models. Many other excellent methods were also proposed: elastic graph matching (EGM), for example, models not only the global features of the face but also retains information about key local facial regions, and flexible models paved the way for the development of early face alignment techniques.
Third stage (1998-2011)
The third stage saw face recognition develop vigorously. Addressing the poor robustness to illumination common in the previous stage, Georghiades et al. proposed a model based on the illumination cone that effectively eliminates the influence of illumination and pose change on recognition results. Besides the search for features robust to expression, angle, illumination, and blur, more and more machine-learning methods, such as support vector machine classification, k-nearest neighbors, and Bayesian classifiers, were applied to face recognition. In 2001, Viola and Jones proposed a face-detection framework using simple features and cascaded classifiers; its high detection efficiency provides accurate and reliable facial image regions for face recognition.
Fourth stage (2012-present)
The concept of the artificial neural network (ANN) was proposed as early as the last century, but at the time several problems were intractable. In 1986, Hinton et al. proposed the back-propagation algorithm, reducing the cost of error correction to be proportional only to the number of neurons, while also solving the artificial neural network's inability to handle the XOR problem, laying the foundation for later deep learning. In 1989, Yann LeCun first successfully used the convolutional neural network LeNet-5, achieving important research results in English handwriting recognition.
In 2006, Hinton proposed in Science the concept of deep learning, that is, deep convolutional neural networks. This method simulates the learning process of the human brain: low-level features are combined and high-level features of the target are learned, and this learning process is automatic, requiring no manual intervention.
In 2012, Hinton, Alex Krizhevsky, and others used the deep convolutional neural network AlexNet in the ImageNet competition. It used the ReLU activation function to solve the vanishing-gradient problem and won the ImageNet title, far ahead of SVM, the standout of traditional machine learning. Deep learning's success in image classification made it clear that face recognition could also be solved by deep learning, greatly stimulated the application of deep learning to face recognition, and opened the deep-learning era. Universities, companies, and research institutes then developed their own deep convolutional network structures for face recognition: Facebook's DeepFace, the Chinese University of Hong Kong's DeepID, Google's FaceNet, and other now-classic CNNs were released one after another. GoogLeNet, Kaiming He's residual networks, the unsupervised CNN proposed by S. U. Rehman, and the mixed-objective optimization network proposed by E. M. Rudd continue to improve and optimize deep network structures. The recognition rate of deep learning has surpassed that of the human eye at 97.53%, and new records keep being set. Besides deep-learning methods, approaches based on 3D face models and face recognition methods combining different expressions have also achieved good results.
Despite this long history of research, face recognition in video still needs methods and devices of higher robustness and greater computational efficiency. Current technologies for person identity recognition based on captured face images generally require the image quality of the face image to be relatively good, so that characteristic information representing the person's identity, such as geometric structure, "eigenfaces", and image texture, can be extracted from the face image. When illumination is poor, focus is bad, or interference signals are present, the face image becomes blurred or noisy, and the characteristic information representing the person's identity can no longer be accurately extracted. Therefore, how to recognize a person's identity when the image is blurred or noisy is an urgent problem to be solved in this field.
Summary of the invention
The purpose of the present invention is to provide a person identity recognition method for low-quality video images. A further purpose of the present invention is to provide a person identity recognition device for low-quality video images. The aim of the present invention is to recognize a person's identity when noise, insufficient illumination, defocus, lens shake, or similar causes blur the video image, so that the invention can be applied in scenarios such as security monitoring, trajectory tracking, automatic door opening, and self-service withdrawal and payment, where conditions often permit only low-quality video images to be captured.
The object of the present invention is achieved as follows:
A person identity recognition method for low-quality video images comprises the following steps:
(1) Video image acquisition: the video image is sampled and quantized by the camera, converted to a digital image after pixel re-computation, and input and stored in a frame memory;
(2) Target image detection: the digital image is classified by geometric and statistical features to obtain a target image;
(3) Rectangular regions of the target image are obtained by translating and cropping;
(4) Face detection: any given rectangular region of the target image is searched to determine whether it contains a face; if it does, the characteristic information of the face, including position, size, and pose, is extracted; if there is no face, the method returns to step (1) and re-acquires video images;
(5) Face recognition: from the extracted facial characteristic information, it is determined whether the person is a specific identity user; if not, the method returns to step (1); if so, the facial characteristic information is cropped and extracted, and the identity recognition result is emitted.
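The control flow of steps (1)-(5) can be sketched as a loop. Every helper below is an illustrative stub of my own devising, not code from the patent; it only shows how the steps hand off to one another and when the flow returns to acquisition:

```python
# Stand-in stubs for steps (1)-(5); names and data shapes are assumptions.

def equalize(frame):
    return frame  # placeholder for the pixel re-computation of step (1)

def classify_target(frame):
    # step (2) stub: pass through frames flagged as containing a person
    return frame if frame.get("has_person") else None

def slide_rectangles(target):
    yield target  # step (3) stub: a single candidate rectangular region

def detect_face(region):
    return region.get("face")  # step (4) stub: None means no face found

def recognize(face_features):
    # step (5) stub: only one known identity in this toy example
    return "user-42" if face_features == "known" else None

def identify_person(frames):
    """Run the five-step loop until a known identity is found."""
    for frame in frames:
        target = classify_target(equalize(frame))
        if target is None:
            continue  # no target image: back to acquisition, step (1)
        for region in slide_rectangles(target):
            face = detect_face(region)
            if face is None:
                continue  # no face in this region
            identity = recognize(face)
            if identity is not None:
                return identity  # emit the identity recognition result
    return None
```

The two `continue` statements mirror the patent's "return to step (1)" branches for the no-face and non-specific-user cases.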
Preferably, the specific steps of converting the video image to a digital image after pixel re-computation include:
(1.1) Compute the probability q(r) with which each gray level r occurs in the image f(x, y):
q(r) = e_r/E
where T is the highest gray level in the image, e_r is the number of pixels with gray level r, x and y are the horizontal and vertical coordinates of a pixel in the image, and E is the total number of pixels in the image;
(1.2) Compute the cumulative pixel probability density u(r) of the gray levels in the image, u being the normalized cumulative histogram of the image:
u(r) = Σ_{i≤r} q(i)
where i is an index;
(1.3) Recompute the pixel value O(x, y) at each position of the image:
O(x, y) = (Omax - Omin)·u[O(x, y)] + Omin
where Omax and Omin are respectively the maximum and minimum pixel values in the image; the image after recomputation is denoted F(x, y).
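Steps (1.1)-(1.3) amount to histogram equalization with the output rescaled into the range [Omin, Omax]. A minimal NumPy sketch of that reading follows; the function name is mine, and the final rounding to the nearest gray level is an assumption not stated in the text:

```python
import numpy as np

def equalize(img):
    """Histogram equalization per my reading of steps (1.1)-(1.3)."""
    img = np.asarray(img)
    E = img.size                       # total number of pixels
    T = int(img.max())                 # highest gray level in the image
    # (1.1) probability of each gray level: q(r) = e_r / E
    q = np.bincount(img.ravel(), minlength=T + 1) / E
    # (1.2) normalized cumulative histogram: u(r) = sum of q(i) for i <= r
    u = np.cumsum(q)
    # (1.3) remap each pixel: O'(x, y) = (Omax - Omin) * u[O(x, y)] + Omin
    omin, omax = int(img.min()), int(img.max())
    return np.rint((omax - omin) * u[img] + omin).astype(img.dtype)
```

Because u is non-decreasing, the remapping preserves the ordering of gray levels while spreading them over the available range, which is what makes dark, low-contrast frames usable by the later stages.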
Preferably, the specific steps of classifying the digital image by geometric and statistical features to obtain the target image are as follows:
(2.1) Collect a set of digital image data in which s face pictures form the positive sample set and d non-face pictures form the negative sample set, expressed as
F = {(α1, β1), (α2, β2) … (αa, βa)}
where αi ∈ Gi, Gi being the feature vector of a digital image used as a positive or negative sample, and βi ∈ H = {0, 1}, H being the label of sample αi: 0 is the negative-sample label and 1 is the positive-sample label;
(2.2) Initialize the weight of each positive sample to 1/(2s) and the weight of each negative sample to 1/(2d);
(2.3) Train a model on the feature value of each sample αi to obtain a weak classifier over the sample features, where c(αi) is the feature value of the sample's digital image and γ is a threshold;
(2.4) Compute the weight vi of the weak classifier corresponding to each feature vector; by selection among the weak classifiers, cascade the weak classifiers of minimum error into a strong classifier δi, where the initial value of vi is the initialization weight J(i) of sample αi;
(2.5) Re-assign each selected weak-classifier weight vi the value vi·εi^(1-θ), where θ = 0 if the i-th sample αi was correctly classified and θ = 1 otherwise;
(2.6) Obtain the final strong classifier and classify the digital image with it; from the strong classification result, determine whether the digital image is a target image containing a face picture.
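Steps (2.2)-(2.6) follow the familiar AdaBoost-style recipe of the Viola-Jones detector. The toy sketch below implements one reading of a single boosting round: the weak-classifier form c(α) < γ and all the names are my assumptions, since the weak-classifier formula itself is not given in the text:

```python
import numpy as np

def train_cascade(features, labels, thresholds, rounds=1):
    """Toy boosting per my reading of steps (2.2)-(2.5).

    features:   (n, m) array of scalar feature values c(α) per sample
    labels:     1 = face (positive sample), 0 = non-face (negative sample)
    thresholds: one candidate threshold γ per feature
    """
    n, m = features.shape
    s = int(labels.sum())              # number of positive samples
    d = n - s                          # number of negative samples
    # (2.2) initial weights: 1/(2s) for positives, 1/(2d) for negatives
    w = np.where(labels == 1, 1.0 / (2 * s), 1.0 / (2 * d))
    chosen = []
    for _ in range(rounds):
        w = w / w.sum()
        # (2.3)-(2.4) pick the weak classifier with minimum weighted error
        best_err, best_j, best_pred = None, None, None
        for j in range(m):
            pred = (features[:, j] < thresholds[j]).astype(int)
            err = float(np.sum(w * (pred != labels)))
            if best_err is None or err < best_err:
                best_err, best_j, best_pred = err, j, pred
        # (2.5) reweight: v_i <- v_i * eps^(1-θ), θ = 0 when correct
        eps = best_err / (1.0 - best_err + 1e-12)
        theta = (best_pred != labels).astype(int)
        w = w * eps ** (1 - theta)     # correctly classified samples shrink
        chosen.append((best_j, thresholds[best_j]))
    return chosen                      # selected (feature index, γ) pairs
```

Down-weighting the correctly classified samples forces each subsequent round to concentrate on the samples the cascade still gets wrong, which is why cascading the minimum-error weak classifiers yields a strong classifier.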
Preferably, the specific steps of translating and cropping the rectangular regions of the target image include:
Compute the rectangular features of the target image with a rectangle template of predetermined size, used for screening faces. The rectangle template is composed of a first rectangle frame and a second rectangle frame, the two frames having a fixed spacing in both the x direction and the y direction. Extract the rectangular regions of the target image covered by the frames of the rectangle template, and compute the rectangle template feature value. Judge whether the rectangle template feature value exceeds a predetermined threshold: if it does, the rectangular regions extracted by the template are passed to the face-detection processing of step (4); if it does not, the template is moved a predetermined distance in the x direction and the y direction and the computation over the newly cropped rectangular regions is repeated, until the entire target image has been traversed.
The rectangle template feature value is the sum, over corresponding positions, of the absolute differences between the integral value of each pixel in the rectangular region extracted by the first rectangle frame and the integral value of the corresponding pixel in the rectangular region extracted by the second rectangle frame. The integral value A(x, y) of a pixel (x, y) is the sum of the pixel values of all pixels above and to the left of that pixel, i.e.
A(x, y) = Σ_{x′≤x, y′≤y} O(x′, y′)
where O(x′, y′) is the pixel value at (x′, y′).
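The integral value A(x, y) is what makes the sliding template cheap: once it is computed, the pixel sum over any rectangle is a constant-time lookup. A sketch under that reading (function names and the two-rectangle form of the feature are mine):

```python
import numpy as np

def integral_image(img):
    """A(x, y) = sum of all pixels above and to the left, per the claim."""
    return np.cumsum(np.cumsum(np.asarray(img, dtype=np.int64), axis=0),
                     axis=1)

def rect_sum(A, top, left, h, w):
    """Pixel sum over an h-by-w rectangle in O(1) via the integral image."""
    total = A[top + h - 1, left + w - 1]
    if top > 0:
        total -= A[top - 1, left + w - 1]
    if left > 0:
        total -= A[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += A[top - 1, left - 1]
    return int(total)

def template_feature(img, r1, r2):
    """Two-rectangle feature |sum(r1) - sum(r2)|, my reading of the text.

    r1, r2 are (top, left, h, w) tuples for the two template frames.
    """
    A = integral_image(img)
    return abs(rect_sum(A, *r1) - rect_sum(A, *r2))
```

In practice the integral image is computed once per frame and shared by every template position, so traversing the whole target image costs one addition pass plus four lookups per candidate rectangle.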
Preferably, the specific steps of face detection include:
(4.1) Collect a certain number of specific face images from a facial feature database to form a training face image set; each person in the set must be represented by face images under a certain number of different expressions and lighting conditions. The matrix set ζ of the M training face images represents each picture as an m×n matrix σi;
(4.2) Compute the matrix N of the training face set:
N = AᵀA
A = [ρ1, ρ2, …, ρM]
where ρi = σi − τ is the difference between a face image and the average face image, and the average image τ is obtained by traversing the matrices in the set ζ, accumulating them, and taking their mean.
Compute the eigenvectors and eigenvalues of the matrix N; each eigenvector μk characterizes the distribution of the differences ρi between the face images and the average face image. Select the M eigenvectors corresponding to the eigenvalues of highest correlation;
(4.3) Combine the standardized training image set to generate the eigenface pattern vectors
Ωi = μiᵀ(σi − τ), i = 1, 2, … M;
(4.4) For each known training person, compute a face-class vector as the mean of the eigenface pattern vectors computed from that person's original training images. Set a threshold θk denoting the maximum permissible distance to a training face class, and a threshold θ′k denoting the maximum permissible distance from the face space;
(4.5) For the rectangular region cropped in step (3), compute its pattern vector Ωi, its distance εk to each training face class, and its distance ε to the face space. If the minimum distance to a training face class satisfies εk < θk and the distance to the face space satisfies ε < θ′k, the face to be recognized is considered to belong to that face class; if the minimum distance satisfies εk ≥ θk but the distance to the face space still satisfies ε < θ′k, the face to be recognized is considered a stranger's face;
(4.6) If the face to be recognized is identified as a known training person, this face image is added to that person's original training image set, and the eigenfaces are then recomputed.
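Steps (4.1)-(4.3) are the classic eigenface construction, including the N = AᵀA device that keeps the eigenproblem M×M (number of training faces) rather than pixel-count-sized. A compact sketch of that reading, with names of my own choosing:

```python
import numpy as np

def train_eigenfaces(faces, k):
    """Eigenface training along the lines of steps (4.1)-(4.3).

    faces: (M, P) array, one flattened training face image per row.
    Returns the average image τ, the eigenfaces U, and the pattern
    vectors Ω_i of the training faces.
    """
    faces = np.asarray(faces, dtype=float)
    tau = faces.mean(axis=0)            # average image τ
    A = (faces - tau).T                 # columns ρ_i = σ_i - τ, shape (P, M)
    N = A.T @ A                         # M×M surrogate of the covariance
    vals, vecs = np.linalg.eigh(N)      # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]  # keep the k largest
    U = A @ vecs[:, order]              # lift back to image space: μ_k
    U /= np.linalg.norm(U, axis=0)      # unit-norm eigenfaces
    omegas = (U.T @ A).T                # pattern vectors Ω_i, shape (M, k)
    return tau, U, omegas

def project(face, tau, U):
    """Pattern vector Ω = Uᵀ(σ - τ) for a probe face."""
    return U.T @ (np.asarray(face, dtype=float) - tau)
```

Recognition in step (4.5) then reduces to comparing `project(probe, tau, U)` with each class's mean pattern vector, and comparing the reconstruction residual with the face-space threshold θ′k.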
Preferably, the specific steps of determining whether the person is a specific identity user include:
For each eigenvalue λk computed in step (4) for the identified face, train its corresponding classifier h(x, λk, ρ, θ);
when the average of the classifier outputs is greater than 0.47, the person is judged a specific identity user; when the average is less than or equal to 0.47, the person is judged a nonspecific identity user.
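The decision rule reduces to comparing the mean of the per-eigenvalue classifier outputs h(x, λk, ρ, θ) with the stated 0.47 threshold. A one-line sketch of that rule; treating the outputs as a plain numeric sequence is my assumption:

```python
def is_specific_user(classifier_outputs, threshold=0.47):
    """Average the per-eigenvalue classifier outputs and compare with
    the 0.47 threshold given in the text (strictly greater = match)."""
    avg = sum(classifier_outputs) / len(classifier_outputs)
    return avg > threshold
```

With binary classifier outputs this is simply a majority-style vote whose quorum is 47% rather than 50%.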
The present invention further provides a person identity recognition device for low-quality video images, comprising a video image acquisition module, a target image detection module, a rectangular region extraction module, a face detection module, and a face recognition module:
the video image acquisition module converts the video image, after camera sampling, quantization, and pixel re-computation, into a digital image and inputs and stores it in a frame memory;
the target image detection module classifies the digital image by geometric and statistical features to obtain a target image;
the rectangular region extraction module translates and crops the rectangular regions of the target image;
the face detection module searches any given rectangular region of the target image to determine whether it contains a face and, if so, extracts the characteristic information of the face, including position, size, and pose; if there is no face, it restarts the video image acquisition module to re-acquire video images;
the face recognition module determines from the extracted facial characteristic information whether the person is a specific identity user; if not, it restarts the video image acquisition module to re-acquire video images; if so, it crops and extracts the facial characteristic information, stores it, and emits the identity recognition result.
Preferably, the video image acquisition module performs the pixel re-computation of the video image as follows:
(1.1) Compute the probability q(r) with which each gray level r occurs in the image f(x, y):
q(r) = e_r/E
where T is the highest gray level in the image, e_r is the number of pixels with gray level r, x and y are the horizontal and vertical coordinates of a pixel in the image, and E is the total number of pixels in the image;
(1.2) Compute the cumulative pixel probability density u(r) of the gray levels in the image, u being the normalized cumulative histogram of the image:
u(r) = Σ_{i≤r} q(i)
where i is an index;
(1.3) Recompute the pixel value O(x, y) at each position of the image:
O(x, y) = (Omax - Omin)·u[O(x, y)] + Omin
where Omax and Omin are respectively the maximum and minimum pixel values in the image; the image after recomputation is denoted F(x, y).
Preferably, the target image detection module classifies the digital image by geometric and statistical features to obtain the target image as follows:
(2.1) Collect a set of digital image data in which s face pictures form the positive sample set and d non-face pictures form the negative sample set, expressed as
F = {(α1, β1), (α2, β2) … (αa, βa)}
where αi ∈ Gi, Gi being the feature vector of a digital image used as a positive or negative sample, and βi ∈ H = {0, 1}, H being the label of sample αi: 0 is the negative-sample label and 1 is the positive-sample label;
(2.2) Initialize the weight of each positive sample to 1/(2s) and the weight of each negative sample to 1/(2d);
(2.3) Train a model on the feature value of each sample αi to obtain a weak classifier over the sample features, where c(αi) is the feature value of the sample's digital image and γ is a threshold;
(2.4) Compute the weight vi of the weak classifier corresponding to each feature vector; by selection among the weak classifiers, cascade the weak classifiers of minimum error into a strong classifier δi, where the initial value of vi is the initialization weight J(i) of sample αi;
(2.5) Re-assign each selected weak-classifier weight vi the value vi·εi^(1-θ), where θ = 0 if the i-th sample αi was correctly classified and θ = 1 otherwise;
(2.6) Obtain the final strong classifier and classify the digital image with it; from the strong classification result, determine whether the digital image is a target image containing a face picture.
Preferably, the rectangular region extraction module translates and crops the rectangular regions of the target image as follows:
Compute the rectangular features of the target image with a rectangle template of predetermined size, used for screening faces. The rectangle template is composed of a first rectangle frame and a second rectangle frame, the two frames having a fixed spacing in both the x direction and the y direction. Extract the rectangular regions of the target image covered by the frames of the rectangle template, and compute the rectangle template feature value. Judge whether the rectangle template feature value exceeds a predetermined threshold: if it does, the rectangular regions extracted by the template are passed to the face-detection processing of step (4); if it does not, the template is moved a predetermined distance in the x direction and the y direction and the computation over the newly cropped rectangular regions is repeated, until the entire target image has been traversed.
The rectangle template feature value is the sum, over corresponding positions, of the absolute differences between the integral value of each pixel in the rectangular region extracted by the first rectangle frame and the integral value of the corresponding pixel in the rectangular region extracted by the second rectangle frame. The integral value A(x, y) of a pixel (x, y) is the sum of the pixel values of all pixels above and to the left of that pixel, i.e.
A(x, y) = Σ_{x′≤x, y′≤y} O(x′, y′)
where O(x′, y′) is the pixel value at (x′, y′).
Preferably, the face detection module performs face detection on the rectangular regions extracted by the rectangular region extraction module as follows:
(4.1) Collect a certain number of specific face images from a facial feature database to form a training face image set; each person in the set must be represented by face images under a certain number of different expressions and lighting conditions. The matrix set ζ of the M training face images represents each picture as an m×n matrix σi;
(4.2) Compute the matrix N of the training face set:
N = AᵀA
A = [ρ1, ρ2, …, ρM]
where ρi = σi − τ is the difference between a face image and the average face image, and the average image τ is obtained by traversing the matrices in the set ζ, accumulating them, and taking their mean.
Compute the eigenvectors and eigenvalues of the matrix N; each eigenvector μk characterizes the distribution of the differences ρi between the face images and the average face image. Select the M eigenvectors corresponding to the eigenvalues of highest correlation;
(4.3) Combine the standardized training image set to generate the eigenface pattern vectors
Ωi = μiᵀ(σi − τ), i = 1, 2, … M;
(4.4) For each known training person, compute a face-class vector as the mean of the eigenface pattern vectors computed from that person's original training images. Set a threshold θk denoting the maximum permissible distance to a training face class, and a threshold θ′k denoting the maximum permissible distance from the face space;
(4.5) For the rectangular region cropped by the rectangular region extraction module, compute its pattern vector Ωi, its distance εk to each training face class, and its distance ε to the face space. If the minimum distance to a training face class satisfies εk < θk and the distance to the face space satisfies ε < θ′k, the face to be recognized is considered to belong to that face class; if the minimum distance satisfies εk ≥ θk but the distance to the face space still satisfies ε < θ′k, the face to be recognized is considered a stranger's face;
(4.6) If the face to be recognized is identified as a known training person, this face image is added to that person's original training image set, and the eigenfaces are then recomputed.
Preferably, the face recognition module determines from the extracted facial characteristic information whether the person is a specific identity user as follows:
For each eigenvalue λk computed by the face detection module for the identified face, train its corresponding classifier h(x, λk, ρ, θ);
when the average of the classifier outputs is greater than 0.47, the person is judged a specific identity user; when the average is less than or equal to 0.47, the person is judged a nonspecific identity user.
The beneficial effects of the present invention are:
The present invention proposes a face recognition method and system for low-quality video images that solves the problem of quickly recognizing relatively blurred faces in low-resolution video. The invention first equalizes the pixels of the target extracted from the video image; using a cascade classification method based on geometric and statistical features, it obtains, through positive- and negative-sample training, a target image containing a face; it then computes the bounding rectangles of the target and, by translating and cropping the rectangular regions, ensures that the face lies within a rectangular region, so that subsequent face detection and recognition run only on the processed rectangular region images, substantially reducing the computational load of face recognition; finally, using the training algorithm and considering the position, size, and pose of the face region, it realizes identity recognition. The algorithm not only substantially reduces the computational load of face recognition but also greatly improves the recognition rate.
Description of the drawings
Fig. 1 is the flow chart of piece identity's recognition methods of the present invention towards low quality video image;
Fig. 2 is the structure chart of the system for identifying figures of the present invention towards low quality video image.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
After many experiments and analyses, the present invention recognizes that face recognition in low-quality video mainly faces the following difficulties. Angle: because the camera angle or the position of the person relative to the camera differs, the angle of a face in video varies greatly; the angle referred to here is not only a geometric angle change, since the profile features of the same face observed from different angles also differ considerably, i.e., posture variation, so a face recognition algorithm needs robustness (invariance) to angle change. Illumination: since video is an image sequence over a period of time, changes in sunlight or switching lights on and off may alter the illumination conditions and produce different shadows, so that even facial images at the same position show obvious differences; robustness to illumination change is therefore required. Expression: faces in video carry rich expressions, and expression changes cause changes in the relative positions of facial parts; this variation of global facial features also makes face recognition harder. Occlusion: a person may wear glasses or ornaments, and beards or bangs may also cover part of the facial information, which requires the face recognition algorithm to work from incomplete facial features. Blur: low resolution or fast motion in video causes the face in the image to blur, so that facial features become indistinct; the algorithm must be able to extract enough features for recognition even under blur. Speed and computation: video usually runs at more than ten or even tens of frames per second, leaving only tens of milliseconds to process one frame; this places very high demands on algorithm speed, and algorithms with excessive computation are not appropriate for face recognition in video with real-time requirements. Tracking: tracking means that once the face position has been determined in a certain frame, the face can continue to be located in the following frames even without running a face detection algorithm; if a face in video can be tracked, the change of the face over consecutive frames can be determined and the position where the face will appear in the next frame can be predicted. Foreground detection: directly performing face detection on the whole image with a sliding window is obviously very time-consuming and computationally expensive; if a foreground detection method is used to separate the foreground and background of the image, face detection can be performed only in the foreground rather than in the whole image each time, which significantly reduces computation time and saves computing resources.
The present invention can remove non-face images and improve the precision of face detection; finally, a face standardization algorithm is designed, and experiments show that it can perform fine-grained face relocation, which substantially improves the accuracy of subsequent face recognition. It should be noted that the images described below are blurred images or images of low video quality; for brevity of expression, they are uniformly referred to as images, without repeatedly calling them low-quality or blurred images. Moreover, if recognition is effective for blurred images, it is proved that it still holds for clear images, with even better results.
Embodiment 1
In view of the above problems, the present invention proposes a person identity recognition method for low-quality video images that solves the image blurring problem, including the following steps:
(1) Video image acquisition: the video image is sampled and quantized by a camera, converted to a digital image after re-computation, and input and stored into a frame memory;
(2) Target image detection: image classification based on geometric and statistical features is performed on the digital image to obtain a target image;
(3) The rectangular area of the target image is translated and intercepted;
(4) Face detection: for any given rectangular area of a target image, a search is performed to determine whether it contains a face; if it contains a face, the feature information of the face is extracted, including position, size, and posture information; if there is no face, the method returns to step (1) to re-perform video image acquisition;
(5) Face recognition: whether the face is a specific identity user is determined according to the extracted facial feature information; if not a specific identity user, the method returns to step (1); if it is a specific identity user, the facial feature information is intercepted and extracted, and an identity recognition result is issued.
The specific steps of converting the video image to a digital image after re-computation include:
(1.1) Calculating the probability q(r) that pixel values of the image f(x, y) occur at each gray level:
q(r) = e_r / E
where T is the highest gray level in the image, e_r is the number of pixels at gray level r, x and y are the horizontal and vertical coordinates of a pixel in the image, and E is the total number of image pixels;
(1.2) Calculating the cumulative pixel probability density u(r) of each gray level in the image, where u is the normalized cumulative histogram of the image:
u(r) = Σ_{i=0}^{r} q(i)
where i is an index;
(1.3) Recalculating the pixel value O(x, y) at each position of the image:
O(x, y) = (O_max − O_min)·u[f(x, y)] + O_min
where O_max and O_min are respectively the maximum and minimum pixel values in the image; the recalculated image is expressed as F(x, y).
After the re-computation of the digital image, the image contrast is greatly enhanced, which is beneficial to subsequent image processing.
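The re-computation of steps (1.1)-(1.3) is ordinary histogram equalization. A minimal sketch (not part of the patented method; the function and variable names are illustrative assumptions) could look like:

```python
import numpy as np

def equalize(f, o_min=0, o_max=255):
    """Histogram equalization per steps (1.1)-(1.3).

    f : 2-D uint8 array, the input gray image f(x, y)
    Returns the recomputed image F(x, y) with stretched contrast.
    """
    T = 255                                   # highest gray level
    E = f.size                                # total number of pixels E
    # (1.1) probability q(r) of each gray level r: q(r) = e_r / E
    e = np.bincount(f.ravel(), minlength=T + 1)
    q = e / E
    # (1.2) normalized cumulative histogram u(r)
    u = np.cumsum(q)
    # (1.3) recompute every pixel value
    return np.rint((o_max - o_min) * u[f] + o_min).astype(np.uint8)

img = np.array([[0, 0, 1], [1, 2, 255]], dtype=np.uint8)
out = equalize(img)
```

On the small example above, the sparse gray levels are spread across the full [0, 255] range, which is the contrast enhancement the description refers to.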
The image classification based on geometric and statistical features performed on the digital image to obtain the target image is as follows:
(2.1) Collecting digital image data, in which s face pictures form a positive sample set and d non-face pictures form a negative sample set, expressed as
F = {(α1, β1), (α2, β2) … (αa, βa)},
where αi ∈ Gi, Gi is the feature vector of a digital image serving as a positive or negative sample, βi ∈ H = {0, 1}, and H is the label of sample data αi; 0 is the negative sample label and 1 is the positive sample label;
(2.2) Initializing the weight of each positive sample to 1/(2s) and the weight of each negative sample to 1/(2d);
(2.3) Training a model on the feature value of each sample αi to obtain a weak classifier of the sample feature, where c(αi) is the feature value of the sample digital image and γ is a threshold;
(2.4) Calculating the weight vi of the weak classifier corresponding to each feature vector, and, by selecting among the weak classifiers, cascading the weak classifiers with minimum error into a strong classifier δi, where the initial value of vi is the initialization weight value J(i) corresponding to sample αi;
(2.5) Re-assigning the selected weak classifier weight vi as vi·εi^(1−θ), where θ = 0 if the i-th sample αi is correctly classified and θ = 1 otherwise;
(2.6) Obtaining the final strong classifier, and performing strong classification on the digital image by the strong classifier; according to the strong classification result of step (2.6), determining whether the digital image belongs to a target image containing a face picture.
By utilizing the above cascade classification algorithm, the present invention greatly reduces the number of sampling particles required for face picture recognition, and allows the classification criterion to be adjusted adaptively.
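The cascade training of steps (2.2)-(2.5) follows the familiar boosting pattern. The following is a single-round sketch under the assumption of scalar feature values c(αi) and threshold weak classifiers; the names and the candidate-threshold grid are illustrative, not prescribed by the patent:

```python
import numpy as np

def train_round(c, beta, w, gamma_grid):
    """One boosting round over steps (2.3)-(2.5).

    c          : feature value c(alpha_i) per sample
    beta       : labels, 1 = face, 0 = non-face
    w          : current sample weights J(i)
    gamma_grid : candidate thresholds gamma for the weak classifier
    Returns the best threshold, its weighted error, and the updated weights.
    """
    w = w / w.sum()                           # normalize weights
    best_gamma, best_err, best_pred = None, np.inf, None
    for gamma in gamma_grid:                  # (2.3) threshold weak learners
        pred = (c >= gamma).astype(int)
        err = np.sum(w * (pred != beta))      # weighted error
        if err < best_err:
            best_gamma, best_err, best_pred = gamma, err, pred
    eps = best_err / (1.0 - best_err)         # (2.5) re-weighting factor
    theta = (best_pred != beta).astype(int)   # theta = 0 if classified correctly
    w = w * eps ** (1 - theta)                # shrink weights of correct samples
    return best_gamma, best_err, w

c = np.array([0.9, 0.8, 0.6, 0.2])
beta = np.array([1, 0, 1, 0])
s, d = 2, 2
w = np.where(beta == 1, 1 / (2 * s), 1 / (2 * d))   # (2.2) weight init
g, e, w2 = train_round(c, beta, w, np.linspace(0, 1, 11))
```

After the round, weight stays concentrated on the misclassified sample (index 1), which is what drives the next weak classifier in the cascade.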
The specific steps of translating and intercepting the rectangular area of the target image include:
The rectangular features of the target image are calculated using a rectangle template of a predetermined size, for use in face detection. The rectangle template is composed of a first rectangle frame and a second rectangle frame, the two rectangle frames having fixed spacing in both the x direction and the y direction. The rectangular areas in the target image are extracted using the rectangle frames of the rectangle template, and the rectangle template feature value is calculated. Whether the rectangle template feature value exceeds a predetermined threshold is judged; if it exceeds the threshold, the rectangular area extracted by the rectangle template undergoes the face detection processing of step (4); if it does not exceed the threshold, the rectangle template is moved by predetermined distances in the x direction and the y direction, and the calculation for the intercepted rectangular area of the target image is performed again, until the entire target image has been traversed.
Here, the rectangle template feature value is the sum of absolute differences obtained by subtracting, from the integral value of each pixel in the rectangular area extracted by the first rectangle frame, the integral value of the pixel at each corresponding position in the rectangular area extracted by the second rectangle frame. The integral value A(x, y) of each pixel (x, y) is the sum of the pixel values of all pixels above and to the left of that pixel, i.e.
A(x, y) = Σ_{x′≤x, y′≤y} O(x′, y′)
where O(x′, y′) is the pixel value at (x′, y′).
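The point of the integral value A(x, y) is that the pixel sum over any rectangle frame can then be read off in constant time. A brief sketch with illustrative names, assuming row-major NumPy arrays:

```python
import numpy as np

def integral_image(o):
    """A(x, y) = sum of O(x', y') over all x' <= x, y' <= y."""
    return o.cumsum(axis=0).cumsum(axis=1)

def rect_sum(A, x0, y0, x1, y1):
    """Sum of pixels in the rectangle [x0, x1] x [y0, y1], using
    only four lookups into the integral image A."""
    s = A[x1, y1]
    if x0 > 0:
        s -= A[x0 - 1, y1]
    if y0 > 0:
        s -= A[x1, y0 - 1]
    if x0 > 0 and y0 > 0:
        s += A[x0 - 1, y0 - 1]
    return s

o = np.arange(16).reshape(4, 4)   # toy 4x4 image
A = integral_image(o)
```

This is why the template feature value can be evaluated cheaply while the template is translated across the whole target image.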
Face detection refers to detecting whether a face exists in any given image and, if so, identifying the position of the face. Face standardization refers to further determining the landmark points of the face (face landmarks) after the position of the face in the image has been obtained. These landmark points are typically facial positions considered to have certain semantics, such as the pupils, the tip of the nose, and the corners of the mouth. The specific steps of face detection include:
(4.1) A certain number of specific face images are collected from a facial feature database to form a training face image set; each person in the set needs to include facial images under a certain number of different expressions and different lighting conditions; the matrix set ζ of the M training face images is expressed with each picture as an m×n matrix σi;
(4.2) The matrix N of the training face set is calculated:
N = A^T A;
A = [ρ1, ρ2, …, ρM]
where ρi is the difference between a facial image and the average facial image,
ρi = σi − τ,
and the average image τ is obtained by traversing and accumulating the matrices in the set ζ and then taking their average.
The eigenvectors and eigenvalues of the matrix N are calculated; the eigenvector μk is the distribution law of the differences ρi between the facial images and the average facial image, and the M eigenvectors corresponding to the eigenvalues with the highest correlation are selected;
(4.3) The eigenface pattern vectors are generated from the standardized training image set:
Ωi = μi^T (σi − τ); i = 1, 2, … M;
(4.4) For each known training person, the face class vector is calculated from the mean of the eigenface pattern vectors computed from the original training images of the known person; a threshold θk is set to indicate the maximum allowable distance between training face classes, and a threshold θ′k is set to indicate the maximum allowable distance in face space;
(4.5) For the rectangular area intercepted in step (3), its pattern vector Ωi, its distance ε to each training face class, and its distance εk to the face space are calculated. If the minimum distance to a training face class satisfies εk < θk, and the distance to the face space satisfies ε < θ′k, the face to be identified is considered to belong to that face class; if the minimum distance to the training face classes satisfies εk ≥ θk but the distance to the face space satisfies ε < θ′k, the face to be identified is considered to be a strange face;
(4.6) If the face to be identified is recognized as a known training person, the facial image is added to that person's original training image set, and the eigenfaces are then recalculated.
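Steps (4.1)-(4.6) describe an eigenface-style scheme. A compact sketch under the assumption that the training faces are flattened into the rows of a matrix; the helper names and the small-matrix eigendecomposition shortcut (N = A^T A) are illustrative, not prescribed by the patent:

```python
import numpy as np

def train_eigenfaces(faces):
    """Eigenface training per steps (4.1)-(4.3).

    faces : M x (m*n) matrix, one flattened training face per row
    Returns the average face tau, the eigenfaces mu (one per row),
    and the pattern vectors Omega_i of the training faces.
    """
    tau = faces.mean(axis=0)                  # average image tau
    A = (faces - tau).T                       # columns rho_i = sigma_i - tau
    N = A.T @ A                               # small M x M matrix N = A^T A
    vals, vecs = np.linalg.eigh(N)
    order = np.argsort(vals)[::-1]
    keep = order[vals[order] > 1e-10]         # drop the null direction
    mu = (A @ vecs[:, keep]).T                # lift eigenvectors to image space
    mu /= np.linalg.norm(mu, axis=1, keepdims=True)
    omega = (faces - tau) @ mu.T              # pattern vectors Omega_i
    return tau, mu, omega

def classify(x, tau, mu, omega, theta):
    """Step (4.5): nearest face class if within allowable distance theta."""
    w = (x - tau) @ mu.T
    d = np.linalg.norm(omega - w, axis=1)
    k = int(np.argmin(d))
    return k if d[k] < theta else -1          # -1 = unknown face

faces = np.array([[1., 0., 0., 0., 0.],
                  [0., 1., 0., 0., 0.],
                  [0., 0., 1., 0., 0.]])      # toy "faces" of dimension 5
tau, mu, omega = train_eigenfaces(faces)
label = classify(faces[1], tau, mu, omega, theta=10.0)
```

The -1 return corresponds to the "strange face" branch; a full implementation would also track the step (4.4) per-person class means and the second threshold θ′k.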
The selection of threshold value largely influences the quality of testing result, and threshold value is excessive, then may lose moving target Useful information;Threshold value is too small, then picture noise is difficult to be effectively suppressed.And the speed of moving target is also to influence detection knot One of factor of fruit.Although frame differential method calculates simply, real-time is good, changes to illumination etc. insensitive.But its detection essence Degree is extremely difficult to the requirement of practical application, and testing result is largely influenced by threshold value, target itself motion conditions.This System subsequently needs to carry out recognition of face to the target area detected, therefore the closer practical feelings of the moving target split Condition is better, due to the algorithm EMS memory occupation is few, superior performance, with preferable anti-noise ability, have very high compatibility to software and hardware Property etc. is numerous excellent, and the segmentation and extraction of foreground target can be successfully realized using the algorithm.The characteristics of this method is that face is examined It surveys and calibration is fused to Unified frame, Face normalization information reacts on Face datection, substantially increases Face datection and calibration Efficiency and accuracy rate.
The specific steps of determining whether the face is a specific identity user include:
For each eigenvalue λk calculated in step (4) for the identified face, a corresponding classifier h(x, λk, ρ, θ) is trained.
When the average of the classifier outputs is greater than 0.47, the face is indicated as a specific identity user; when the average is less than or equal to 0.47, it is indicated as a non-specific identity user.
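The 0.47 decision rule itself is simple to state in code; a sketch, assuming the per-eigenvalue classifier outputs are collected into a list (the function name is an illustrative assumption):

```python
def is_specific_user(scores, threshold=0.47):
    """Identity decision per the description: accept as the specific
    identity user only if the classifier outputs average above 0.47."""
    return sum(scores) / len(scores) > threshold

accepted = is_specific_user([0.6, 0.5, 0.4])   # average 0.5 > 0.47
rejected = is_specific_user([0.4, 0.4])        # average 0.4 <= 0.47
```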
Embodiment 2
The present invention further provides a person identity recognition device for low-quality video images, including a video image acquisition module, a target image detection module, a rectangular area extraction module, a face detection module, and a face recognition module.
The video image acquisition module is used to convert the video image into a digital image after camera sampling, quantization, and re-computation, and to input and store it into a frame memory.
The target image detection module is used to perform image classification based on geometric and statistical features on the digital image to obtain a target image.
The rectangular area extraction module is used to translate and intercept the rectangular area of the target image.
The face detection module is used, for any given rectangular area of a target image, to search it to determine whether it contains a face and, if it contains a face, to extract the feature information of the face, including position, size, and posture information; if there is no face, the video image acquisition module is started to re-perform video image acquisition.
The face recognition module is used to determine whether the face is a specific identity user according to the extracted facial feature information; if not a specific identity user, the video image acquisition module is started to re-perform video image acquisition; if it is a specific identity user, the facial feature information is intercepted and extracted, and the identity recognition result is stored and issued.
The video image acquisition module performs the re-computation of the video image as follows:
(1.1) Calculating the probability q(r) that pixel values of the image f(x, y) occur at each gray level:
q(r) = e_r / E
where T is the highest gray level in the image, e_r is the number of pixels at gray level r, x and y are the horizontal and vertical coordinates of a pixel in the image, and E is the total number of image pixels;
(1.2) Calculating the cumulative pixel probability density u(r) of each gray level in the image, where u is the normalized cumulative histogram of the image:
u(r) = Σ_{i=0}^{r} q(i)
where i is an index;
(1.3) Recalculating the pixel value O(x, y) at each position of the image:
O(x, y) = (O_max − O_min)·u[f(x, y)] + O_min
where O_max and O_min are respectively the maximum and minimum pixel values in the image; the recalculated image is expressed as F(x, y).
The target image detection module performs the image classification based on geometric and statistical features on the digital image as follows to obtain the target image:
(2.1) Collecting digital image data, in which s face pictures form a positive sample set and d non-face pictures form a negative sample set, expressed as
F = {(α1, β1), (α2, β2) … (αa, βa)},
where αi ∈ Gi, Gi is the feature vector of a digital image serving as a positive or negative sample, βi ∈ H = {0, 1}, and H is the label of sample data αi; 0 is the negative sample label and 1 is the positive sample label;
(2.2) Initializing the weight of each positive sample to 1/(2s) and the weight of each negative sample to 1/(2d);
(2.3) Training a model on the feature value of each sample αi to obtain a weak classifier of the sample feature, where c(αi) is the feature value of the sample digital image and γ is a threshold;
(2.4) Calculating the weight vi of the weak classifier corresponding to each feature vector, and, by selecting among the weak classifiers, cascading the weak classifiers with minimum error into a strong classifier δi, where the initial value of vi is the initialization weight value J(i) corresponding to sample αi;
(2.5) Re-assigning the selected weak classifier weight vi as vi·εi^(1−θ), where θ = 0 if the i-th sample αi is correctly classified and θ = 1 otherwise;
(2.6) Obtaining the final strong classifier, and performing strong classification on the digital image by the strong classifier; according to the strong classification result of step (2.6), determining whether the digital image belongs to a target image containing a face picture.
The rectangular area extraction module translates and intercepts the rectangular area of the target image as follows:
The rectangular features of the target image are calculated using a rectangle template of a predetermined size, for use in face detection. The rectangle template is composed of a first rectangle frame and a second rectangle frame, the two rectangle frames having fixed spacing in both the x direction and the y direction. The rectangular areas in the target image are extracted using the rectangle frames of the rectangle template, and the rectangle template feature value is calculated. Whether the rectangle template feature value exceeds a predetermined threshold is judged; if it exceeds the threshold, the rectangular area extracted by the rectangle template undergoes face detection processing; if it does not exceed the threshold, the rectangle template is moved by predetermined distances in the x direction and the y direction, and the calculation for the intercepted rectangular area of the target image is performed again, until the entire target image has been traversed.
Here, the rectangle template feature value is the sum of absolute differences obtained by subtracting, from the integral value of each pixel in the rectangular area extracted by the first rectangle frame, the integral value of the pixel at each corresponding position in the rectangular area extracted by the second rectangle frame. The integral value A(x, y) of each pixel (x, y) is the sum of the pixel values of all pixels above and to the left of that pixel, i.e.
A(x, y) = Σ_{x′≤x, y′≤y} O(x′, y′)
where O(x′, y′) is the pixel value at (x′, y′).
The face detection module performs face detection on the rectangular area extracted by the rectangular area extraction module in the following way:
(4.1) A certain number of specific face images are collected from a facial feature database to form a training face image set; each person in the set needs to include facial images under a certain number of different expressions and different lighting conditions; the matrix set ζ of the M training face images is expressed with each picture as an m×n matrix σi;
(4.2) The matrix N of the training face set is calculated:
N = A^T A;
A = [ρ1, ρ2, …, ρM]
where ρi is the difference between a facial image and the average facial image,
ρi = σi − τ,
and the average image τ is obtained by traversing and accumulating the matrices in the set ζ and then taking their average.
The eigenvectors and eigenvalues of the matrix N are calculated; the eigenvector μk is the distribution law of the differences ρi between the facial images and the average facial image, and the M eigenvectors corresponding to the eigenvalues with the highest correlation are selected;
(4.3) The eigenface pattern vectors are generated from the standardized training image set:
Ωi = μi^T (σi − τ); i = 1, 2, … M;
(4.4) For each known training person, the face class vector is calculated from the mean of the eigenface pattern vectors computed from the original training images of the known person; a threshold θk is set to indicate the maximum allowable distance between training face classes, and a threshold θ′k is set to indicate the maximum allowable distance in face space;
(4.5) For the rectangular area intercepted by the rectangular area extraction module, its pattern vector Ωi, its distance ε to each training face class, and its distance εk to the face space are calculated. If the minimum distance to a training face class satisfies εk < θk, and the distance to the face space satisfies ε < θ′k, the face to be identified is considered to belong to that face class; if the minimum distance to the training face classes satisfies εk ≥ θk but the distance to the face space satisfies ε < θ′k, the face to be identified is considered to be a strange face;
(4.6) If the face to be identified is recognized as a known training person, the facial image is added to that person's original training image set, and the eigenfaces are then recalculated.
The face recognition module determines whether the face is a specific identity user according to the extracted facial feature information in the following way:
For each eigenvalue λk calculated by the face detection module for the identified face, a corresponding classifier h(x, λk, ρ, θ) is trained.
When the average of the classifier outputs is greater than 0.47, the face is indicated as a specific identity user; when the average is less than or equal to 0.47, it is indicated as a non-specific identity user.
For feature extraction, the present invention flexibly exploits the fast read and write speed of binary files, accelerating the reading of facial features during system initialization. The face recognition module can directly judge from a return value whether a face has passed verification or whether no face is present in the current frame, displaying the recognition result simply and clearly.
It should be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another entity or operation, without necessarily requiring or implying any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, replacements, and variations can be made to these embodiments without departing from the principles and spirit of the present invention, and the scope of the present invention is defined by the appended claims.

Claims (10)

1. A person identity recognition method for low-quality video images, characterized in that it includes the following steps:
(1) Video image acquisition: the video image is sampled and quantized by a camera, converted to a digital image after re-computation, and input and stored into a frame memory;
(2) Target image detection: image classification based on geometric and statistical features is performed on the digital image to obtain a target image;
(3) The rectangular area of the target image is translated and intercepted;
(4) Face detection: for any given rectangular area of a target image, a search is performed to determine whether it contains a face; if it contains a face, the feature information of the face is extracted, including position, size, and posture information; if there is no face, the method returns to step (1) to re-perform video image acquisition;
(5) Face recognition: whether the face is a specific identity user is determined according to the extracted facial feature information; if not a specific identity user, the method returns to step (1); if it is a specific identity user, the facial feature information is intercepted and extracted, and an identity recognition result is issued.
2. The person identity recognition method for low-quality video images according to claim 1, characterized in that the specific steps of converting the video image to a digital image after re-computation include:
(1.1) Calculating the probability q(r) that pixel values of the image f(x, y) occur at each gray level:
q(r) = e_r / E
where T is the highest gray level in the image, e_r is the number of pixels at gray level r, x and y are the horizontal and vertical coordinates of a pixel in the image, and E is the total number of image pixels;
(1.2) Calculating the cumulative pixel probability density u(r) of each gray level in the image, where u is the normalized cumulative histogram of the image:
u(r) = Σ_{i=0}^{r} q(i)
where i is an index;
(1.3) Recalculating the pixel value O(x, y) at each position of the image:
O(x, y) = (O_max − O_min)·u[f(x, y)] + O_min
where O_max and O_min are respectively the maximum and minimum pixel values in the image; the recalculated image is expressed as F(x, y).
3. The person identity recognition method for low-quality video images according to claim 2, characterized in that the image classification based on geometric and statistical features performed on the digital image to obtain the target image is as follows:
(2.1) Collecting digital image data, in which s face pictures form a positive sample set and d non-face pictures form a negative sample set, expressed as
F = {(α1, β1), (α2, β2) … (αa, βa)},
where αi ∈ Gi, Gi is the feature vector of a digital image serving as a positive or negative sample, βi ∈ H = {0, 1}, and H is the label of sample data αi; 0 is the negative sample label and 1 is the positive sample label;
(2.2) Initializing the weight of each positive sample to 1/(2s) and the weight of each negative sample to 1/(2d);
(2.3) Training a model on the feature value of each sample αi to obtain a weak classifier of the sample feature, where c(αi) is the feature value of the sample digital image and γ is a threshold;
(2.4) Calculating the weight vi of the weak classifier corresponding to each feature vector, and, by selecting among the weak classifiers, cascading the weak classifiers with minimum error into a strong classifier δi, where the initial value of vi is the initialization weight value J(i) corresponding to sample αi;
(2.5) Re-assigning the selected weak classifier weight vi as vi·εi^(1−θ), where θ = 0 if the i-th sample αi is correctly classified and θ = 1 otherwise;
(2.6) Obtaining the final strong classifier, and performing strong classification on the digital image by the strong classifier; according to the strong classification result of step (2.6), determining whether the digital image belongs to a target image containing a face picture.
4. The person identity recognition method for low-quality video images according to claim 3, characterized in that the specific steps of translating and intercepting the rectangular area of the target image include:
The rectangular features of the target image are calculated using a rectangle template of a predetermined size, for use in face detection. The rectangle template is composed of a first rectangle frame and a second rectangle frame, the two rectangle frames having fixed spacing in both the x direction and the y direction. The rectangular areas in the target image are extracted using the rectangle frames of the rectangle template, and the rectangle template feature value is calculated. Whether the rectangle template feature value exceeds a predetermined threshold is judged; if it exceeds the threshold, the rectangular area extracted by the rectangle template undergoes the face detection processing of step (4); if it does not exceed the threshold, the rectangle template is moved by predetermined distances in the x direction and the y direction, and the calculation for the intercepted rectangular area of the target image is performed again, until the entire target image has been traversed.
5. The person identity recognition method for low-quality video images according to claim 4, characterized in that the specific steps of face detection include:
(4.1) A certain number of specific face images are collected from a facial feature database to form a training face image set; each person in the set needs to include facial images under a certain number of different expressions and different lighting conditions; the matrix set ζ of the M training face images is expressed with each picture as an m×n matrix σi;
(4.2) The matrix N of the training face set is calculated:
N = A^T A;
A = [ρ1, ρ2, …, ρM]
where ρi is the difference between a facial image and the average facial image,
ρi = σi − τ,
and the average image τ is obtained by traversing and accumulating the matrices in the set ζ and then taking their average.
The eigenvectors and eigenvalues of the matrix N are calculated; the eigenvector μk is the distribution law of the differences ρi between the facial images and the average facial image, and the M eigenvectors corresponding to the eigenvalues with the highest correlation are selected;
(4.3) The eigenface pattern vectors are generated from the standardized training image set:
Ωi = μi^T (σi − τ); i = 1, 2, … M;
(4.4) For each known training person, the face class vector is calculated from the mean of the eigenface pattern vectors computed from the original training images of the known person; a threshold θk is set to indicate the maximum allowable distance between training face classes, and a threshold θ′k is set to indicate the maximum allowable distance in face space;
(4.5) For the rectangular area intercepted in step (3), its pattern vector Ωi, its distance ε to each training face class, and its distance εk to the face space are calculated. If the minimum distance to a training face class satisfies εk < θk, and the distance to the face space satisfies ε < θ′k, the face to be identified is considered to belong to that face class; if the minimum distance to the training face classes satisfies εk ≥ θk but the distance to the face space satisfies ε < θ′k, the face to be identified is considered to be a strange face;
(4.6) If the face to be identified is recognized as a known training person, the facial image is added to that person's original training image set, and the eigenfaces are then recalculated.
6. A person identity recognition device for low-quality video images, characterized by comprising a video image acquisition module, a target image detection module, a rectangular area extraction module, a face detection module and a face recognition module; wherein
the video image acquisition module is configured to convert a video image into a digital image through camera sampling, quantization and recalculation, input it, and store it into a frame memory;
the target image detection module is configured to perform image classification based on geometric and statistical features on the digital image to obtain a target image;
the rectangular area extraction module is configured to translate over the target image and intercept rectangular areas from it;
the face detection module is configured to scan the rectangular areas of any given target image to determine whether a face is contained therein, and if a face is contained, to extract characteristic information of the face including position, size and posture information; if no face is contained, to start the video image acquisition module to re-acquire video images;
the face recognition module is configured to determine, according to the extracted characteristic information of the face, whether the face belongs to a specific identity user; if not, to start the video image acquisition module to re-acquire video images; if so, to intercept and extract the characteristic information of the face, store it, and output an identity recognition result.
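The control flow among the five modules of claim 6 — re-acquiring frames until a face is detected and recognized as a specific identity user — can be sketched as a loop. All callable names below are illustrative stand-ins for the modules, not from the patent:

```python
def run_pipeline(acquire_frame, detect_target, extract_regions,
                 detect_face, recognize, max_frames=100):
    """Sketch of the claim-6 control flow: each callable stands in for one
    module; any failure at a stage falls back to re-acquiring a frame."""
    for _ in range(max_frames):
        frame = acquire_frame()                 # video image acquisition module
        target = detect_target(frame)           # target image detection module
        if target is None:
            continue                            # no face-bearing target: re-acquire
        for region in extract_regions(target):  # rectangular area extraction module
            face = detect_face(region)          # face detection module
            if face is None:
                continue
            identity = recognize(face)          # face recognition module
            if identity is not None:
                return identity                 # store and emit the recognition result
    return None
```

With stub callables, a frame without a detectable face simply triggers another acquisition, which is the retry behaviour the claim describes.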
7. The person identity recognition device for low-quality video images according to claim 6, characterized in that the video image acquisition module performs the recalculation of the video image as follows:
(1.1) calculating the probability q(r) with which each gray level occurs among the pixel values of the image f(x, y),
q(r) = e_r / E, r = 0, 1, ..., T,
where T is the highest gray level in the image, e_r is the number of pixels at gray level r, x and y are the horizontal and vertical coordinates of a pixel in the image, and E is the total number of image pixels;
(1.2) calculating the cumulative pixel probability density u(r) of each gray level in the image, u being the normalized cumulative histogram of the image,
u(r) = Σ_{i=0}^{r} q(i),
where i is an index;
(1.3) recalculating the pixel value O(x, y) at each position of the image,
O(x, y) = (O_max − O_min)·u[f(x, y)] + O_min,
where O_max and O_min are respectively the maximum and minimum pixel values in the image; the recalculated image is denoted F(x, y).
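Steps (1.1)-(1.3) describe standard histogram equalization. A minimal NumPy sketch, assuming integer gray levels and an output range of [o_min, o_max] (the function name and the default 8-bit range are illustrative):

```python
import numpy as np

def equalize(f, o_min=0, o_max=255):
    """Histogram equalization as in claim 7: per-level probabilities q(r),
    normalized cumulative histogram u(r), then the remapping
    O(x, y) = (Omax - Omin) * u[f(x, y)] + Omin."""
    f = np.asarray(f)
    levels = int(f.max()) + 1
    e = np.bincount(f.ravel(), minlength=levels)  # e_r: pixel count at gray level r
    q = e / f.size                                # step (1.1): q(r) = e_r / E
    u = np.cumsum(q)                              # step (1.2): cumulative histogram
    return ((o_max - o_min) * u[f] + o_min).astype(np.uint8)  # step (1.3)
```

On a low-contrast frame this stretches the occupied gray levels over the full output range, which is the point of the "recalculation" stage for low-quality video.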
8. The person identity recognition device for low-quality video images according to claim 7, characterized in that the target image detection module performs image classification based on geometric and statistical features on the digital image as follows to obtain the target image:
(2.1) acquiring a number of digital image data, wherein s face pictures form a positive sample set and d non-face pictures form a negative sample set, together expressed as
F = {(α_1, β_1), (α_2, β_2), ..., (α_a, β_a)}, a = s + d,
where α_i ∈ G_i, G_i being the feature vector of the digital image serving as a positive or negative sample, and β_i ∈ H = {0, 1}, H being the label of the sample data α_i; 0 is the negative sample label and 1 is the positive sample label;
(2.2) initializing the weight of each positive sample to 1/(2s) and the weight of each negative sample to 1/(2d);
(2.3) training a threshold model on the characteristic value of each sample α_i to obtain a weak classifier for the sample feature, where c(α_i) is the characteristic value of the sample digital image and γ is the threshold;
(2.4) calculating the weight v_i of the weak classifier corresponding to each feature vector, and, by selection among the weak classifiers, cascading the weak classifiers with the smallest errors into a strong classifier δ_i, where the initial value of v_i is the initialization weight value J(i) corresponding to the sample α_i;
(2.5) re-assigning the selected weak classifier weights v_i, with θ = 0 if the i-th sample α_i is classified correctly and θ = 1 otherwise;
(2.6) obtaining the final strong classifier and classifying the digital image with it; determining, according to the strong classification result, whether the digital image belongs to the target images containing face pictures.
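Steps (2.1)-(2.6) describe AdaBoost training of threshold weak classifiers in the Viola-Jones style. A simplified sketch using decision stumps over scalar feature values: the 1/(2s), 1/(2d) initialization and the β-based weight update follow the classic algorithm, while single-polarity stumps and exhaustive threshold search are simplifications for clarity (all names are illustrative):

```python
import numpy as np

def adaboost_stumps(features, labels, rounds):
    """Select `rounds` threshold weak classifiers (stumps) by weighted error,
    re-weighting samples after each round as in steps (2.2)-(2.5)."""
    X = np.asarray(features, dtype=float)          # (n_samples, n_features)
    y = np.asarray(labels)                         # labels beta_i in {0, 1}
    s, d = (y == 1).sum(), (y == 0).sum()
    w = np.where(y == 1, 1 / (2 * s), 1 / (2 * d))  # step (2.2) initialization
    stumps = []
    for _ in range(rounds):
        w = w / w.sum()                            # normalize weights
        best = None
        for j in range(X.shape[1]):                # step (2.3): each feature value
            for gamma in np.unique(X[:, j]):       # candidate thresholds gamma
                pred = (X[:, j] >= gamma).astype(int)
                err = w[pred != y].sum()           # weighted classification error
                if best is None or err < best[0]:
                    best = (err, j, gamma)         # keep the smallest-error stump
        err, j, gamma = best
        err = max(err, 1e-10)                      # guard against perfect stumps
        beta = err / (1 - err)
        alpha = np.log(1 / beta)                   # weak classifier weight v_i
        pred = (X[:, j] >= gamma).astype(int)
        w = w * beta ** (pred == y)                # step (2.5): theta=0 if correct
        stumps.append((alpha, j, gamma))
    return stumps

def strong_classify(stumps, x):
    """Step (2.6): weighted vote of the selected weak classifiers."""
    score = sum(a * (x[j] >= g) for a, j, g in stumps)
    return int(score >= 0.5 * sum(a for a, _, _ in stumps))
```

In the actual Viola-Jones detector the feature values c(α_i) would be rectangle-template responses; here any scalar feature column works, which is enough to show the boosting loop itself.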
9. The person identity recognition device for low-quality video images according to claim 8, characterized in that the rectangular area extraction module translates over the target image and intercepts rectangular areas from it as follows:
calculating rectangular features of the target image with a rectangle template of a predefined size, for use in face detection; the rectangle template is composed of a first rectangle frame and a second rectangle frame, the two rectangle frames having fixed spacings in the x direction and the y direction; extracting a rectangular area of the target image with each rectangle frame of the rectangle template; calculating the characteristic value of the rectangle template; judging whether the characteristic value of the rectangle template exceeds a predetermined threshold; if it exceeds the threshold, submitting the rectangular area extracted by the rectangle template to the face detection processing of the face detection module; if it does not exceed the threshold, moving the rectangle template by predetermined distances in the x direction and the y direction and then recalculating for the newly intercepted rectangular area of the target image, until the entire target image has been traversed.
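Sliding a rectangle template across the whole image is only practical when each rectangle sum costs O(1); the usual tool is an integral image (summed-area table). A sketch, with a two-rectangle difference feature standing in for the patent's first and second rectangle frames — the claim does not specify the exact template geometry, so the horizontal side-by-side layout below is an assumption:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: any rectangle sum becomes four array lookups."""
    return np.cumsum(np.cumsum(np.asarray(img, dtype=np.int64), axis=0), axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of img[top:top+h, left:left+w] read from the integral image ii."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return int(total)

def two_rect_feature(ii, top, left, h, w):
    """Characteristic value of a two-frame template: difference between the
    sums of two horizontally adjacent rectangle frames (assumed geometry)."""
    return rect_sum(ii, top, left, h, w) - rect_sum(ii, top, left + w, h, w)
```

The traversal in the claim then amounts to stepping `top` and `left` by the predetermined x and y distances and thresholding `two_rect_feature` at each position.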
10. The person identity recognition device for low-quality video images according to claim 9, characterized in that the face detection module performs face detection on the rectangular areas extracted by the rectangular area extraction module in the following way:
(4.1) collecting a certain number of specified face images through a facial feature database to form a training face image set, the set containing, for each collected person, face images under a certain number of different expressions and different lighting conditions; the matrix set ζ of the M training face images is expressed as an m×n matrix σ_i per picture;
(4.2) calculating a matrix N of the training face set,
N = A^T·A;
A = [ρ_1, ρ_2, ..., ρ_M],
where ρ_i is the difference between a face image and the average face image,
ρ_i = σ_i − τ,
and the average image τ is obtained by traversing the matrices in the set ζ, accumulating them and taking their average, τ = (1/M)·Σ_{i=1}^{M} σ_i;
calculating the eigenvectors and eigenvalues of the matrix N, where an eigenvector μ_k describes the distribution of the differences ρ_i between the face images and the average face image; and selecting the M eigenvectors corresponding to the eigenvalues of highest correlation;
(4.3) generating eigenface pattern vectors from the standardized training image set,
Ω_i = μ_i^T·(σ_i − τ); i = 1, 2, ..., M;
(4.4) for each known training person, computing a face class vector from the mean of the eigenface pattern vectors calculated from the person's original training images; setting a threshold θ_k denoting the maximum allowed distance between training face classes, and a threshold θ'_k denoting the maximum allowed distance from the face space;
(4.5) for the rectangular area intercepted by the rectangular area extraction module, calculating its pattern vector Ω_i, its distance ε_k to each training face class, and its distance ε to the face space; if the minimum distance to a training face class satisfies ε_k < θ_k and the distance to the face space satisfies ε < θ'_k, the face to be identified is deemed to belong to that face class; if the minimum distance satisfies ε_k ≥ θ_k but the distance to the face space satisfies ε < θ'_k, the face to be identified is deemed a stranger's face;
(4.6) if the face to be identified is recognized as a known training person, adding this face image to that person's original training image set and then recalculating the eigenfaces.
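The two-threshold decision of steps (4.4)-(4.5) — distance ε_k to the nearest face class versus distance ε to the face space — can be sketched as follows. The eigenfaces `mu`, average image `tau` and class mean vectors are assumed precomputed; the return labels and function name are illustrative:

```python
import numpy as np

def classify_face(face, mu, tau, class_means, theta_k, theta_prime):
    """Project an intercepted region onto the eigenfaces mu, then apply the
    two thresholds of step (4.5): eps to the face space (theta'_k) and
    eps_k to the nearest training face class (theta_k)."""
    face = np.asarray(face, dtype=float).ravel()
    pattern = mu.T @ (face - tau)                 # pattern vector Omega
    eps_k = np.linalg.norm(class_means - pattern, axis=1)
    reconstruction = tau + mu @ pattern           # projection back into face space
    eps = np.linalg.norm(face - reconstruction)   # distance eps to the face space
    if eps >= theta_prime:
        return "not a face"                       # too far from the face space
    k = int(np.argmin(eps_k))                     # nearest training face class
    return k if eps_k[k] < theta_k else "stranger"
```

The face-space distance ε is the reconstruction error of the projection: a region that projects well but matches no class mean is the "stranger's face" case of the claim.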
CN201810224833.4A 2018-03-19 2018-03-19 A kind of piece identity's recognition methods and device towards low quality video image Pending CN108563999A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810224833.4A CN108563999A (en) 2018-03-19 2018-03-19 A kind of piece identity's recognition methods and device towards low quality video image


Publications (1)

Publication Number Publication Date
CN108563999A true CN108563999A (en) 2018-09-21

Family

ID=63532673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810224833.4A Pending CN108563999A (en) 2018-03-19 2018-03-19 A kind of piece identity's recognition methods and device towards low quality video image

Country Status (1)

Country Link
CN (1) CN108563999A (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106446872A (en) * 2016-11-07 2017-02-22 湖南源信光电科技有限公司 Detection and recognition method of human face in video under low-light conditions
CN106874883A (en) * 2017-02-27 2017-06-20 中国石油大学(华东) A kind of real-time face detection method and system based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
吴健: "Research and Implementation of a Real-time Face Recognition Method Based on an Embedded System", China Master's Theses Full-text Database *
唐徙文 et al.: "A Fast Training Algorithm for Face Detection Cascade Classifiers", Computer Simulation *
邹建成 et al.: "Mathematics and Its Applications in Image Processing", Beijing University of Posts and Telecommunications Press, 31 July 2015 *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109241942A (en) * 2018-09-29 2019-01-18 佳都新太科技股份有限公司 Image processing method, device, face recognition device and storage medium
CN109360205A (en) * 2018-12-07 2019-02-19 泰康保险集团股份有限公司 Double record video quality detecting methods, device, medium and electronic equipment
CN109784244A (en) * 2018-12-29 2019-05-21 西安理工大学 A kind of low resolution face precise recognition method of specified target
CN109784244B (en) * 2018-12-29 2022-11-25 西安理工大学 Low-resolution face accurate identification method for specified target
CN111382649A (en) * 2018-12-31 2020-07-07 南京拓步智能科技有限公司 Face image recognition system and method based on nine-grid principle
CN109711386A (en) * 2019-01-10 2019-05-03 北京达佳互联信息技术有限公司 Obtain method, apparatus, electronic equipment and the storage medium of identification model
CN109858426A (en) * 2019-01-27 2019-06-07 武汉星巡智能科技有限公司 Face feature extraction method, device and computer readable storage medium
CN110008919A (en) * 2019-04-09 2019-07-12 南京工业大学 The quadrotor drone face identification system of view-based access control model
CN110378307A (en) * 2019-07-25 2019-10-25 广西科技大学 Texture image orientation estimate method based on deep learning
CN110378307B (en) * 2019-07-25 2022-05-03 广西科技大学 Texture image direction field estimation method based on deep learning
CN111091056A (en) * 2019-11-14 2020-05-01 泰康保险集团股份有限公司 Method and device for identifying sunglasses in image, electronic equipment and storage medium
CN111091056B (en) * 2019-11-14 2023-06-16 泰康保险集团股份有限公司 Method and device for identifying sunglasses in image, electronic equipment and storage medium
CN111832460A (en) * 2020-07-06 2020-10-27 北京工业大学 Face image extraction method and system based on multi-feature fusion
CN111967312A (en) * 2020-07-06 2020-11-20 中央民族大学 Method and system for identifying important persons in picture
CN112183394A (en) * 2020-09-30 2021-01-05 江苏智库智能科技有限公司 Face recognition method and device and intelligent security management system
CN113192239A (en) * 2021-03-12 2021-07-30 广州朗国电子科技有限公司 Face recognition-based antitheft door lock recognition method, antitheft door lock and medium
CN113591607A (en) * 2021-07-12 2021-11-02 辽宁科技大学 Station intelligent epidemic prevention and control system and method
CN113591607B (en) * 2021-07-12 2023-07-04 辽宁科技大学 Station intelligent epidemic situation prevention and control system and method
CN115424353A (en) * 2022-09-07 2022-12-02 杭银消费金融股份有限公司 AI model-based service user feature identification method and system

Similar Documents

Publication Publication Date Title
CN108563999A (en) A kind of piece identity's recognition methods and device towards low quality video image
CN110348319B (en) Face anti-counterfeiting method based on face depth information and edge image fusion
JP5010905B2 (en) Face recognition device
JP4543423B2 (en) Method and apparatus for automatic object recognition and collation
CN100397410C (en) Method and device for distinguishing face expression based on video frequency
US20070154095A1 (en) Face detection on mobile devices
KR100944247B1 (en) System and method for face recognition
Sahbi et al. A Hierarchy of Support Vector Machines for Pattern Detection.
CN109508700A (en) A kind of face identification method, system and storage medium
KR20080033486A (en) Automatic biometric identification based on face recognition and support vector machines
KR20170006355A (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
CN108171223A (en) A kind of face identification method and system based on multi-model multichannel
CN106599785A (en) Method and device for building human body 3D feature identity information database
Tsai et al. Face detection using eigenface and neural network
Nishiyama et al. Face recognition using the classified appearance-based quotient image
Andiani et al. Face recognition for work attendance using multitask convolutional neural network (MTCNN) and pre-trained facenet
Mohamed et al. Automated face recogntion system: Multi-input databases
Sudhakar et al. Facial identification of twins based on fusion score method
Guha A report on automatic face recognition: Traditional to modern deep learning techniques
Curran et al. The use of neural networks in real-time face detection
Sharanya et al. Online attendance using facial recognition
Amjed et al. Noncircular iris segmentation based on weighted adaptive hough transform using smartphone database
KR100711223B1 (en) Face recognition method using Zernike/LDA and recording medium storing the method
Zhou et al. Eye localization based on face alignment
Schimbinschi et al. 4D unconstrained real-time face recognition using a commodity depth camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180921
