CN103198305B - Facial video image verification method and embedded implementation device - Google Patents

Facial video image verification method and embedded implementation device

Info

Publication number
CN103198305B
CN103198305B (application CN201310139715.0A)
Authority
CN
China
Prior art keywords
training
sample
classification
test sample
class
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310139715.0A
Other languages
Chinese (zh)
Other versions
CN103198305A (en)
Inventor
宋晓宁
王卫东
杨习贝
祁云嵩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhenjiang kunyan Information Technology Co.,Ltd.
Original Assignee
Jiangsu University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University of Science and Technology filed Critical Jiangsu University of Science and Technology
Priority to CN201310139715.0A
Publication of CN103198305A
Application granted
Publication of CN103198305B
Active legal status
Anticipated expiration legal status

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses a facial video image verification method and an embedded implementation device. The method: a camera acquires a facial image sample of the person to be verified and sends it to a self-powered embedded computing module; using pattern recognition and image processing techniques, the module's processor computes the initial sparse representation coefficients of this test sample, accumulates the class contribution of each training class to the test sample, and computes the deviation degree of each training class; the class with the largest deviation degree is eliminated, the face test sample is re-described with all training samples of the remaining classes, and new sparse representation coefficients are computed iteratively until the stopping condition of the class elimination is met; the final class contributions of the training classes to the test sample are computed from all finally remaining training samples, and the class with the smallest deviation degree is chosen as the final candidate class of the face test sample; the processor announces the verification result to the person through a voice prompt device. The device is realized as a system consisting of the camera, the embedded computing module and the voice prompt device. The invention is easy to operate, offers high detection accuracy, requires little hardware, is easy to mass-produce, and has good versatility and robustness.

Description

Facial video image verification method and embedded implementation device
Technical field
The present invention relates to a facial video image verification method and an embedded implementation device, in particular to a facial video image verification method based on sparse deviation-degree assessment and its embedded implementation device. It involves sample sparse decomposition and sparse minimum-deviation-degree assessment, and specifically comprises facial image acquisition, preprocessing, sparse decomposition, deviation-degree assessment, training-atom updating and final classification verification. The invention belongs to the technical field of image processing and pattern recognition.
Background technology
Among the many existing biometric identification technologies, face recognition has drawn wide attention in recent years owing to advantages such as its non-intrusive operation and user friendliness. Applying pattern recognition techniques to the automatic analysis, verification, retrieval and identification of captured face images has great practical significance for guaranteeing the safety of related services, improving work efficiency and increasing processing accuracy. Face recognition has attracted extensive attention from research institutions and scholars at home and abroad because of its high academic and practical value. The technology mainly involves two aspects: first, preprocessing of the captured images, which is the prerequisite and foundation of all facial image understanding and recognition; second, modeling, analysis and recognition of the preprocessed image content, which is the criterion for judging the performance of automatic face recognition techniques.
At present, in face recognition research at home and abroad, techniques that solve the feature extraction, recognition and reconstruction of high-dimensional face samples by constructing effective sparse basis information are still at an early stage. Traditional algebraic feature description techniques preserve only the global structural attributes of a pattern and are mostly suited to linearly separable cases. Research shows, however, that many naturally acquired images, especially face images that vary with factors such as illumination and pose, are linearly inseparable. Moreover, some images lie on nonlinear manifold structures in high-dimensional space, which complicates feature analysis. These factors prevent traditional algebraic description methods from characterizing the intrinsic structure of patterns well, and thus lower the efficiency of classification and recognition. As is well known, typical manifold-structure-based methods include Locally Linear Embedding (LLE), Isomap and Laplacian Eigenmap, which have achieved good results on experimental data. However, the mappings these methods extract are established only on the training data and cannot accurately map test data, which is why they see relatively few applications in image recognition. The Locality Preserving Projections (LPP) method approximately linearizes Laplacian Eigenmap and thereby effectively maintains the local structure between samples; it has achieved good results in fields such as image processing, but its defect is that it does not fully exploit the class information of the images. Typical improved models of the above manifold methods include Supervised LPP (SLPP) and Locally Discriminating Projection (LDP); however, these methods still fail to strengthen and preserve the discriminative information of the image space: while describing the local structure between images, they ignore the non-local attributes between samples and the global information, so their robustness remains insufficient.
Summarizing the above theoretical basis, realizing a high-dimensional-manifold facial image verification system by constructing effective sparse basis information involves three key elements: sparsity, incoherent observation and optimized reconstruction. The prerequisite for constructing a sparse basis of samples is that the input signal has a certain degree of sparsity; however, the generated incoherent observations often contain too many measurements, and for the subsequent classifier an excessive number of measurements suppresses its performance.
Summary of the invention
Purpose of the invention: in view of the problems and deficiencies of the prior art, the sparse prior description of the samples needs to be further reconstructed and optimized, so that the sparse signals extracted from the samples possess a better measurement count and quality at the same reconstruction accuracy, thereby effectively solving the feature extraction, reconstruction and classification of high-dimensional samples. The invention therefore provides a facial video image verification method and an embedded implementation device for key-post personnel at duty sites with high security requirements, so as to guarantee vigilance and the identity security of key-post personnel flexibly. The invention overcomes the deficiencies of the prior art: it is easy to operate, offers high detection accuracy, and has good versatility and portability.
Technical scheme: a facial video image verification method comprising the following steps:
a. Acquire a facial image sample of the person to be verified with a camera, and store it in bmp format;
b. Detect the pupil positions in the facial image and accurately obtain the inter-pupil line of the face test sample;
c. Determine the tilt angle of the face test sample image from the inter-pupil line;
d. Rotate the face test sample to the horizontal according to the above tilt angle, and extract the facial image region for accurate segmentation;
e. Describe this face test sample linearly with the training samples already held by the system, obtain the description result y, and thereby compute the initial sparse representation coefficients of this test sample;
Step e comprises the following sub-steps:
e1. Assume the system holds n training samples x_1, …, x_n from C classes. Let X_i ∈ R^(m×n_i) (i = 1, 2, …, C) denote the training set of class i, each column of X_i being one training sample; the full training set of all C classes is then X = [X_1, …, X_C];
e2. The face test sample y ∈ R^m can be described linearly by the training samples of all classes as y = a_1 x_1 + … + a_n x_n;
e3. Solve the above linear description for its coefficients: if X is nonsingular, the sparse coefficient vector is a = X^(-1) y; otherwise it is obtained through the construction a = P y, where P = (X^T X + μI)^(-1) X^T, μ is a positive perturbation term and I is the identity matrix;
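A minimal numerical sketch of the coefficient solve in step e3. The function name and the default value of μ are illustrative, not taken from the patent:

```python
import numpy as np

def sparse_coefficients(X, y, mu=1e-3):
    """Solve the linear description y = X a of step e3.

    If X is square and nonsingular, a = X^-1 y; otherwise the
    perturbed construction a = (X^T X + mu I)^-1 X^T y is used,
    where mu is the positive disturbance term of the patent text.
    """
    m, n = X.shape
    if m == n:
        try:
            return np.linalg.solve(X, y)   # a = X^-1 y
        except np.linalg.LinAlgError:
            pass                           # X singular: fall through
    P = np.linalg.inv(X.T @ X + mu * np.eye(n)) @ X.T
    return P @ y
```

For an overdetermined X this is ridge-regularized least squares, so for very small μ it agrees with the ordinary least-squares solution.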
f. Compute the class contribution of each training class to y and the deviation degree r_i of each class i;
Step f comprises the following sub-steps:
f1. Step e yields the sparse decomposition coefficients a of y over all training samples X; g_i = X_i a_i is defined as the class contribution of training class i to y, where a_i collects the coefficients associated with the class-i training set X_i;
f2. The deviation degree formula r_i = ||y − g_i||² measures the contribution of the class-i training set X_i to the reconstruction of the face test sample y; r_i is described as the residual of reconstructing the signal y with g_i, i.e. the deviation degree. Clearly, the smaller the deviation degree r_i, the larger the contribution of g_i to the reconstruction of the test sample y.
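Steps f1 and f2 can be sketched as follows, given the per-class sample matrices and the concatenated coefficient vector from step e (function and variable names are illustrative):

```python
import numpy as np

def class_deviations(X_blocks, a, y):
    """Per-class contribution g_i = X_i a_i and deviation degree
    r_i = ||y - g_i||^2 (steps f1-f2).

    X_blocks : list of per-class sample matrices [X_1, ..., X_C]
    a        : coefficient vector for the concatenated matrix X
    y        : face test sample
    """
    r, start = [], 0
    for X_i in X_blocks:
        n_i = X_i.shape[1]
        a_i = a[start:start + n_i]     # coefficients of class i
        g_i = X_i @ a_i                # class contribution to y
        r.append(float(np.sum((y - g_i) ** 2)))
        start += n_i
    return r                           # smaller r_i = larger contribution
```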
g. Eliminate the class with the largest deviation degree, re-describe this face test sample with all training samples of the remaining classes, and obtain new sparse representation coefficients;
Step g comprises the following sub-steps:
g1. According to the computation of step f, eliminate the class-i training set X_i with the largest deviation degree r_i;
g2. Represent the face test sample again with all training samples of the remaining classes: supposing that p training samples remain after class k has been eliminated, linearly represent the test sample y again with these p remaining samples, and recompute the new sparse representation coefficients by the method of step e.
h. Repeat steps f and g until the number of eliminated training classes reaches a predefined threshold, then stop the class elimination.
i. Linearly represent the face test sample y once more with the final M remaining training samples to obtain its final sparse representation coefficients.
j. Among the final M remaining training samples, suppose the samples from class l are x_s, …, x_t; the class contribution of training class l to the test sample y is then g_l = b_s x_s + … + b_t x_t, where b_s, …, b_t are the final sparse representation coefficients associated with the class-l training set X_l.
k. Compute the deviation degree between y and every g_l, and choose the class with the smallest deviation degree D_l as the final candidate class of the face test sample.
l. Judge whether the deviation degree of the final candidate class of the above facial image sample exceeds a predetermined threshold: if so, a voice prompt announces that the verification failed; otherwise the verification succeeds and the person's name or ID number is announced.
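The whole classification pipeline of steps e through l can be sketched as one loop. All names, the number of eliminated classes and the rejection threshold below are illustrative assumptions, not values prescribed by the patent:

```python
import numpy as np

def verify_face(X_blocks, y, eliminate=1, reject_threshold=0.5, mu=1e-3):
    """Steps e-l: iteratively eliminate the worst-fitting classes,
    then pick the remaining class with the smallest final deviation
    degree D_l, rejecting if it exceeds the threshold (step l)."""

    def solve(X):                        # step e3 coefficients
        n = X.shape[1]
        return np.linalg.inv(X.T @ X + mu * np.eye(n)) @ (X.T @ y)

    def deviations(blocks, a):           # step f: r_i = ||y - X_i a_i||^2
        r, s = [], 0
        for B in blocks:
            a_i = a[s:s + B.shape[1]]
            r.append(float(np.sum((y - B @ a_i) ** 2)))
            s += B.shape[1]
        return r

    labels = list(range(len(X_blocks)))
    blocks = list(X_blocks)
    for _ in range(eliminate):           # steps g-h: drop the worst class
        a = solve(np.hstack(blocks))
        worst = int(np.argmax(deviations(blocks, a)))
        del blocks[worst], labels[worst]
    a = solve(np.hstack(blocks))         # step i: final coefficients b
    D = deviations(blocks, a)            # steps j-k: final D_l
    best = int(np.argmin(D))
    if D[best] > reject_threshold:       # step l: threshold decision
        return None                      # verification failed
    return labels[best]                  # final candidate class
```

With samples that reconstruct y well in one class, the function returns that class label; an impostor sample that no remaining class reconstructs within the threshold is rejected as `None`.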
An embedded implementation device for facial video image verification, comprising:
A camera, which acquires the facial image sample of the person to be verified and sends it to the sample storage library; its resolution is comparable to that of an ordinary webcam, and a resolution of 320 × 240 is usually sufficient for image acquisition;
An embedded computing module and a voice prompt device;
Wherein the camera is mounted directly opposite the face acquisition window and is connected to the embedded computing module through a USB interface; the embedded computing module can be relocated as required and is equipped with a power module; the voice prompt device is installed next to the duty personnel. The embedded computing module consists mainly of a CPU module, a memory module, a power module, an audio playback module and a facial image detection unit, connected by a PC/104 bus; the memory module comprises a sample storage library for storing facial image samples and a training sample library for holding the training samples. The embedded computing module performs the verification of the person's face image, information prompting and related work, computing according to the steps of the above facial video image verification method based on sparse deviation-degree assessment and the information prompting method. Among these, the CPU module, memory module and power module are relatively general-purpose modules; the video capture module is responsible for interacting with the camera, and the audio playback module is responsible for interacting with the voice prompt device, that is, for announcing the verification result through the speech player.
The facial image detection unit obtains the facial image sample from the sample storage library and detects the pupil positions to obtain the inter-pupil line of the face test sample; determines the tilt angle of the face test sample image from the inter-pupil line; rotates the face test sample to the horizontal according to the tilt angle and extracts the facial image region for accurate segmentation; describes the face test sample linearly with the training samples in the training sample library, obtains the description result y, and thereby computes the initial sparse representation coefficients of the test sample; accumulates the class contribution of each training class to y and computes the deviation degree r_i of each class i; eliminates the class with the largest deviation degree, re-describes the face test sample with all training samples of the remaining classes, and obtains new sparse representation coefficients; stops the class elimination when the number of eliminated training classes reaches the predefined threshold; linearly represents the face test sample again with the final M remaining training samples to obtain the final sparse representation coefficients of the test sample y; among the final M remaining training samples, computes the class contribution g_l of training class l to the test sample y; computes the deviation degree between y and every g_l and chooses the class with the smallest deviation degree D_l as the final candidate class of the face test sample; judges whether the deviation degree of the final candidate class exceeds a predetermined threshold, and if so, the voice prompt device announces that the verification failed; otherwise the verification succeeds and the voice prompt device announces the person's name or ID number.
Beneficial effects: compared with the prior art, the facial video image verification method and its embedded implementation device provided by the invention use less hardware together with software computation based on sparse deviation-degree assessment to realize the acquisition, processing and verification of facial video images effectively. The embedded implementation device guarantees vigilance and the identity security of all kinds of key-post personnel flexibly, without being restricted to one environmental area. The invention is easy to operate, offers high detection accuracy, requires little hardware, is easy to mass-produce, and has good versatility and robustness.
Description of the drawings
Fig. 1 is the method flow chart of the embodiment of the present invention;
Fig. 2 is the device structure block diagram of the embodiment of the present invention;
Fig. 3 is a schematic diagram of a test face sample of the present invention and part of the face database of key-post personnel;
Fig. 4 is a schematic diagram of the 4 face candidate classes with the smallest deviation degrees used for the final verification of the face test sample.
Embodiment
The present invention is further illustrated below in conjunction with specific embodiments. It should be understood that these embodiments serve only to illustrate the present invention and not to limit its scope; after reading the present invention, modifications by those skilled in the art to its various equivalent forms all fall within the scope defined by the claims appended to this application.
As shown in Fig. 1, the facial video image verification method comprises the following steps:
a. Acquire the face image of the person to be verified;
When acquiring the face image, the camera photographs the person at the face acquisition window, and the face test image sample is stored in bmp format. When capturing images with the camera, its parameters need to be calibrated: the external parameters include the height H of the camera relative to the acquisition surface, the depression angle and the deflection angle θ; the internal parameters include the focal length f, the field of view σ and the aperture F. Calibrating the internal and external camera parameters is the prerequisite for the subsequent computation.
b. Detect the pupil positions of the face test image with a variable-size template technique, and accurately obtain the inter-pupil line of the face test image.
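The patent does not detail the variable-size template technique of step b. The following numpy sketch illustrates the general idea of template-based pupil localization with a fixed-size template; the function names, template shape and masking strategy are all assumptions for illustration:

```python
import numpy as np

def match_template(img, tpl):
    """Brute-force normalized cross-correlation: return the top-left
    position and score of the best template match."""
    H, W = img.shape
    h, w = tpl.shape
    t = tpl - tpl.mean()
    best, score = (0, 0), -np.inf
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            win = img[r:r + h, c:c + w]
            win = win - win.mean()
            denom = np.sqrt((win ** 2).sum() * (t ** 2).sum()) + 1e-9
            s = float((win * t).sum() / denom)
            if s > score:
                score, best = s, (r, c)
    return best, score

def find_pupils(img, tpl):
    """Locate the two best template matches (left/right pupil
    candidates), masking out the first hit before the second search."""
    (r1, c1), _ = match_template(img, tpl)
    h, w = tpl.shape
    masked = img.astype(float).copy()
    masked[r1:r1 + h, c1:c1 + w] = masked.mean()  # suppress first pupil
    (r2, c2), _ = match_template(masked, tpl)
    return (r1, c1), (r2, c2)
```

A production implementation would scan several template sizes (hence "variable-size") and use an optimized correlation routine rather than this O(H·W·h·w) loop.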
c. Determine the tilt angle between the inter-pupil line of the face test sample and the horizontal.
d. Rotate the face test sample to the horizontal according to the above tilt angle, and extract the facial image region for accurate segmentation. The first image of Fig. 3 is the face test sample after acquisition and preprocessing; the remaining images are a schematic part of the face database of key-post personnel.
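Steps c and d reduce to computing the angle of the inter-pupil line and building a rotation that levels it. A small numpy sketch (function names are illustrative; the matrix has the same form as OpenCV's `getRotationMatrix2D`, which would typically perform the actual image warp):

```python
import numpy as np

def tilt_angle(left_pupil, right_pupil):
    """Step c: tilt angle in degrees between the inter-pupil line
    and the horizontal, from (x, y) pupil coordinates."""
    (x1, y1), (x2, y2) = left_pupil, right_pupil
    return float(np.degrees(np.arctan2(y2 - y1, x2 - x1)))

def rotation_matrix(angle_deg, center):
    """Step d: 2x3 affine matrix that, applied to homogeneous pixel
    coordinates, levels a line tilted by angle_deg about `center`
    (same form as OpenCV's getRotationMatrix2D(center, angle, 1.0))."""
    a = np.radians(angle_deg)
    cos, sin = np.cos(a), np.sin(a)
    cx, cy = center
    return np.array([[cos, sin, (1 - cos) * cx - sin * cy],
                     [-sin, cos, sin * cx + (1 - cos) * cy]])
```

Applying the matrix to the second pupil maps it onto the same image row as the first, i.e. the inter-pupil line becomes horizontal.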
e. Describe this face test sample y linearly with all training samples of every class held by the system, and thereby compute the initial sparse representation coefficients of y;
Step e comprises the following sub-steps:
e1. Assume the system holds n training samples x_1, …, x_n from C classes. Let X_i ∈ R^(m×n_i) (i = 1, 2, …, C) denote the training set of class i, each column of X_i being one training sample; the full training set of all C classes is then X = [X_1, …, X_C];
e2. The face test sample y ∈ R^m can be described linearly by the training samples of all classes as y = a_1 x_1 + … + a_n x_n;
e3. Solve the above formula for its coefficients: if X is nonsingular, the sparse coefficient vector is a = X^(-1) y; otherwise it is obtained through the construction a = P y, where P = (X^T X + μI)^(-1) X^T, μ is a positive perturbation term and I is the identity matrix.
f. Compute the class contribution of each training class to y and obtain the deviation degree r_i of each class i;
Step f comprises the following sub-steps:
f1. Step e yields the sparse decomposition coefficients a of y over all training samples X; g_i = X_i a_i is defined as the class contribution of training class i to y, where a_i collects the coefficients associated with the class-i training set X_i;
f2. The deviation degree formula r_i = ||y − g_i||² measures the contribution of the class-i training set X_i to the reconstruction of the face test sample y; r_i is described as the residual of reconstructing the signal y with g_i, i.e. the deviation degree. Clearly, the smaller the deviation degree r_i, the larger the contribution of g_i to the reconstruction of the test sample y.
g. Eliminate the class with the largest deviation degree, re-describe this face test sample with all training samples of the remaining classes, and obtain new sparse representation coefficients;
Step g comprises the following sub-steps:
g1. According to the computation of step f, eliminate the class-i training set X_i with the largest deviation degree r_i, i.e. Elimination_Class = argmax_i {r_i}, X ← X \ X_i (i = 1, 2, …, C);
g2. Represent this face test sample again with all training samples of the remaining classes: supposing that p training samples remain after class k has been eliminated, linearly represent the test sample y again with these p remaining samples, and recompute the new sparse representation coefficients by the method of step e.
h. Repeat steps f and g until the number of eliminated training classes reaches the predefined threshold, then stop the class elimination.
i. Linearly represent this face test sample once more with the final M remaining training samples, y = b_1 x_1 + … + b_M x_M, obtaining the final sparse representation coefficients b_1, …, b_M of the test sample y.
j. Among the final M remaining training samples, suppose the samples from class l are x_s, …, x_t; the class contribution of training class l to the test sample y is then g_l = b_s x_s + … + b_t x_t, where b_s, …, b_t are the final sparse representation coefficients associated with the class-l training set X_l.
k. The deviation degree formula D_l = ||y − g_l||², l ∈ {1, …, C}, measures the deviation of the class-l training set X_l in the final reconstruction of the face test sample y. Compute the deviation degree between y and every g_l, and choose the class with the smallest deviation degree D_l as the final candidate class of the face test sample.
l. Judge whether the deviation degree of the final candidate class of the above facial image sample exceeds a predetermined threshold: if so, a voice prompt announces that the verification failed; otherwise the verification succeeds and the person's name or ID number is announced. Fig. 4 is a schematic diagram of the 4 face candidate classes with the smallest deviation degrees used for the final verification of the face test sample.
As shown in Fig. 2, the embedded implementation device for facial video image verification comprises a camera, an embedded computing module and a voice prompt device;
The camera acquires the facial image sample of the person to be verified and sends it to the sample storage library; its resolution is comparable to that of an ordinary webcam, and a resolution of 320 × 240 is usually sufficient for image acquisition. The camera is mounted directly opposite the face acquisition window and is connected to the embedded computing module through a USB interface; the embedded computing module can be relocated as required and is equipped with a power module; the voice prompt device is installed next to the duty personnel. The embedded computing module consists mainly of a CPU module, a memory module, a power module, an audio playback module and a facial image detection unit, connected by a PC/104 bus; the memory module comprises a sample storage library for storing facial image samples and a training sample library for holding the training samples. The embedded computing module performs the verification of the person's face image, information prompting and related work, computing according to the steps of the above facial video image verification method based on sparse deviation-degree assessment and the information prompting method. Among these, the CPU module, memory module and power module are relatively general-purpose modules; the video capture module is responsible for interacting with the camera, and the audio playback module is responsible for interacting with the voice prompt device, that is, for announcing the verification result through the speech player.
The facial image detection unit obtains the facial image sample from the sample storage library and detects the pupil positions to obtain the inter-pupil line of the face test sample; determines the tilt angle of the face test sample image from the inter-pupil line; rotates the face test sample to the horizontal according to the tilt angle and extracts the facial image region for accurate segmentation; describes the face test sample linearly with the training samples in the training sample library, obtains the description result y, and thereby computes the initial sparse representation coefficients of the test sample; accumulates the class contribution of each training class to y and computes the deviation degree r_i of each class i; eliminates the class with the largest deviation degree, re-describes the face test sample with all training samples of the remaining classes, and obtains new sparse representation coefficients; stops the class elimination when the number of eliminated training classes reaches the predefined threshold; linearly represents the face test sample again with the final M remaining training samples to obtain the final sparse representation coefficients of the test sample y; among the final M remaining training samples, computes the class contribution g_l of training class l to the test sample y; computes the deviation degree between y and every g_l and chooses the class with the smallest deviation degree D_l as the final candidate class of the face test sample; judges whether the deviation degree of the final candidate class exceeds a predetermined threshold, and if so, the voice prompt device announces that the verification failed; otherwise the verification succeeds and the voice prompt device announces the person's name or ID number.

Claims (5)

1. A facial video image verification method, characterized by comprising the steps of:
(a) acquiring a facial image sample of the person to be verified with a camera and storing it;
(b) detecting the pupil positions in the facial image sample and obtaining the inter-pupil line of the face test sample;
(c) determining the tilt angle of the face test sample image from the inter-pupil line;
(d) rotating the face test sample to the horizontal according to said tilt angle and extracting the facial image region for segmentation;
(e) describing said face test sample linearly with the available training samples, obtaining the description result y, and thereby computing the initial sparse representation coefficients of said face test sample;
(f) computing the class contribution of each training class to y and the deviation degree r_i of each class i;
(g) eliminating the class with the largest deviation degree, re-describing said face test sample with all training samples of the remaining classes, and obtaining new sparse representation coefficients;
(h) repeating steps (f) and (g) until the number of eliminated training classes reaches a predefined threshold, then stopping the class elimination;
(i) linearly representing this face test sample again with the final M remaining training samples, obtaining the final sparse representation coefficients of the test sample y;
(j) among the final M remaining training samples, computing the class contribution g_l of training class l to the test sample y;
(k) computing the deviation degree between y and every g_l, and choosing the class with the smallest deviation degree D_l as the final candidate class of said face test sample;
(l) judging whether the deviation degree of the final candidate class of the above facial image sample exceeds a predetermined threshold: if so, a voice prompt announces that the verification failed; otherwise the verification succeeds and the person's name or ID number is announced;
said step (e) comprising the steps of:
(e1) assuming the system holds n training samples x_1, …, x_n from C classes, letting X_i ∈ R^(m×n_i) (i = 1, 2, …, C) denote the training set of class i, each column of X_i being one training sample, so that the full training set of all C classes is X = [X_1, …, X_C];
(e2) describing the face test sample y ∈ R^m linearly by the training samples of all classes as y = a_1 x_1 + … + a_n x_n;
(e3) solving the above formula for the sparse coefficients: if X is nonsingular, the sparse coefficient vector is a = X^(-1) y; otherwise it is obtained through the construction a = P y, where P = (X^T X + μI)^(-1) X^T, μ is a positive perturbation term and I is the identity matrix;
and said step (f) comprising the steps of:
(f1) step (e) yields the sparse decomposition coefficients a of y over all training samples X, and g_i = X_i a_i is defined as the class contribution of training class i to y, where a_i collects the coefficients associated with the class-i training set X_i;
(f2) the deviation degree formula r_i = ||y − g_i||² measures the contribution of the class-i training set X_i to the reconstruction of the face test sample y; r_i is described as the residual of reconstructing the signal y with g_i, i.e. the deviation degree; clearly, the smaller the deviation degree r_i, the larger the contribution of g_i to the reconstruction of the test sample y.
2. The facial video image verification method as claimed in claim 1, characterized in that said step (g) comprises the steps of:
(g1) according to the computation of step (f), eliminating the class-i training set X_i with the largest deviation degree r_i, i.e. Elimination_Class = argmax_i {r_i}, X ← X \ X_i (i = 1, 2, …, C);
(g2) representing this face test sample again with all training samples of the remaining classes: supposing that p training samples remain after class k has been eliminated, linearly representing the test sample y again with these p remaining samples, and recomputing the new sparse representation coefficients by the method of step (e).
3. The facial video image verification method as claimed in claim 2, characterized in that, in described step (i), the face test sample is linearly represented again with the final remaining M training samples as y = b_1·x_1 + … + b_M·x_M, obtaining the final sparse representation coefficients b_1, …, b_M of the test sample y.
4. The facial video image verification method as claimed in claim 3, characterized in that, in described step (j), among the final remaining M training samples, supposing that the training samples from class l are x_s, …, x_t, the class contribution that training class l produces for the test sample y is g_l = b_s·x_s + … + b_t·x_t, where b_s, …, b_t are the final sparse representation coefficients associated with the class-l training sample set X_l.
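Claims 3 and 4, together with the final candidate selection, can be sketched as follows: re-express y over the final remaining samples, form each class contribution g_l, and choose the class whose contribution deviates least from y. The function name `final_candidate` and the toy data are illustrative.

```python
import numpy as np

def final_candidate(class_mats, y, mu=1e-3):
    """Claims 3-4: final coefficients b over the remaining M samples,
    class contribution g_l = b_s x_s + ... + b_t x_t, and selection of
    the class with the smallest deviation degree D_l."""
    labels = sorted(class_mats)
    X = np.hstack([class_mats[c] for c in labels])
    b = np.linalg.inv(X.T @ X + mu * np.eye(X.shape[1])) @ X.T @ y
    D, start = {}, 0
    for c in labels:
        n_c = class_mats[c].shape[1]
        g_c = class_mats[c] @ b[start:start + n_c]   # class contribution g_l
        D[c] = float(np.linalg.norm(y - g_c))        # deviation degree D_l
        start += n_c
    return min(D, key=D.get), D

# Toy check: y lies in class A's span, so A is the final candidate class.
mats = {'A': np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]),
        'B': np.array([[0.0], [0.0], [1.0]])}
y = np.array([0.7, 0.3, 0.0])
label, D = final_candidate(mats, y)
```

In the full method, D[label] would then be compared against the predetermined threshold to accept or reject the verification.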
5. An embedded implementation device realizing the facial video image verification described in any one of claims 1-4, comprising:
a camera, for obtaining the facial image sample of the person to be verified and sending the obtained facial image sample to a sample storage library;
an embedded computing module, for verifying the person's facial image and prompting information;
and a voice prompting device;
wherein the camera is arranged directly opposite the face acquisition window and is connected to the embedded computing module through a USB interface; the embedded computing module mainly consists of a memory module, a power module, an audio playing module and a facial image detection unit; the memory module comprises a sample storage library and a training sample database; the sample storage library is used for storing facial image samples; the training sample database is used for depositing training samples; the video capture module receives the facial image samples of the person obtained by the camera; the audio playing module is responsible for interacting with the voice prompting device;
a facial image detection unit, which obtains a facial image sample from the sample storage library; performs pupil position detection on the facial image to obtain the interpupillary line of the face test sample; determines the image tilt angle of the face test sample from the interpupillary line; rotates the face test sample to the horizontal position according to the tilt angle and segments out the exact facial image region; linearly describes the face test sample with the training samples in the training sample database, obtains the description result y, and thereby computes the initial sparse representation coefficients of the test sample; separately counts the class contribution of each training class to y and computes the deviation degree r_i of each class i; rejects the class with the largest deviation degree, re-describes the face test sample with all remaining training samples of the remaining classes, and obtains new sparse representation coefficients; stops the class rejection operation once the number of rejected training classes reaches a defined threshold; linearly represents the face test sample again with the final remaining M training samples to obtain the final sparse representation coefficients of the test sample y; among the final remaining M training samples, supposing that all the training samples from class l are x_s, …, x_t, computes the class contribution g_l that training class l produces for the test sample y; computes the deviation degree between y and each g_l and chooses the class with the smallest deviation degree D_l as the final candidate class of the face test sample; judges whether the deviation degree of the final candidate class of the facial image sample exceeds a predetermined threshold: if it exceeds the predetermined threshold, the voice prompting device prompts that the person has failed verification; otherwise the verification passes and the voice prompting device announces the person's name or ID number.
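The tilt-correction step performed by the facial image detection unit (interpupillary line to image tilt angle) can be sketched as follows. The function name `tilt_angle_deg` and the pixel coordinates are illustrative assumptions; the patent does not specify the angle formula, and arctan2 of the pupil offsets is a standard choice.

```python
import numpy as np

def tilt_angle_deg(left_pupil, right_pupil):
    """Image tilt angle implied by the interpupillary line, computed
    before rotating the face test sample to the horizontal position.
    Pupil positions are (x, y) pixel coordinates."""
    dx = right_pupil[0] - left_pupil[0]
    dy = right_pupil[1] - left_pupil[1]
    return float(np.degrees(np.arctan2(dy, dx)))

# Toy check: right pupil 20 px lower than the left over an 80 px baseline.
angle = tilt_angle_deg((100, 120), (180, 140))
# rotating the image by -angle would bring the interpupillary line level
```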
CN201310139715.0A 2013-04-19 2013-04-19 A kind of facial video image verification method and embedded implement device thereof Active CN103198305B (en)

Publications (2)

Publication Number Publication Date
CN103198305A CN103198305A (en) 2013-07-10
CN103198305B true CN103198305B (en) 2016-04-27






