CN102411708A - Face recognition method combining dual-tree complex wavelet transform and discrete wavelet transform - Google Patents
Face recognition method combining dual-tree complex wavelet transform and discrete wavelet transform
- Publication number
- CN102411708A CN102411708A CN2011103960095A CN201110396009A CN102411708A CN 102411708 A CN102411708 A CN 102411708A CN 2011103960095 A CN2011103960095 A CN 2011103960095A CN 201110396009 A CN201110396009 A CN 201110396009A CN 102411708 A CN102411708 A CN 102411708A
- Authority
- CN
- China
- Prior art keywords
- matrix
- wavelet transform
- vector
- sigma
- calculate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Collating Specific Patterns (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a face recognition method combining the dual-tree complex wavelet transform and the discrete wavelet transform. The method comprises the following steps: extracting features from an input face image using a method that combines the dual-tree complex wavelet transform and the discrete wavelet transform; reducing the dimension of the extracted feature vector X with a supervised locally linear embedding method; computing the cosine similarity between the feature vector of the test face image and the feature vectors of the training set images; and assigning the input image to the class of the training image with the highest similarity, thereby obtaining the face recognition result. By combining the dual-tree complex wavelet transform and the discrete wavelet transform, multi-directional and rich facial features can be extracted and the dimension can be reduced quickly, realizing accurate and efficient face recognition.
Description
Technical field
The invention belongs to the fields of image processing and pattern recognition, and in particular relates to a face recognition method that fuses the dual-tree complex wavelet transform and the discrete wavelet transform.
Background art
Face recognition is not only the most common means of identity verification in daily life, it is also an important component of biometric identification technology. It involves several research fields, including image processing, pattern recognition and computer vision, is highly challenging, and is one of the most closely studied problems in current domestic and international security research. Because face recognition is contactless and non-intrusive, people do not reject the technology, which makes it the friendliest form of biometric identification. Moreover, since it is simple to operate, produces intuitive results and is unobtrusive, face recognition has broad application prospects in fields such as information security, video surveillance and criminal investigation. Nevertheless, face recognition methods still face a series of difficult problems; for example, when ambient illumination, head pose or facial expression changes, the recognition rate usually drops significantly. In particular, face recognition is now applied to many forms of material, from controlled static photographs to uncontrolled video sequences. There is therefore an urgent need for an innovative face recognition method that provides both multi-directional, rich facial feature extraction and fast, effective feature dimensionality reduction.
Although the dual-tree complex wavelet transform (DT-CWT) offers low redundancy, multi-directional selectivity, approximate shift invariance and high computational efficiency, it extracts features only in six fixed orientations that do not include the horizontal and vertical directions, and these two important directions carry the richest facial information. Face recognition methods based on the DT-CWT alone therefore suffer from limited recognition accuracy.
Summary of the invention
To solve the above technical problems of existing face recognition methods, the present invention provides a face recognition method with high recognition accuracy that fuses the dual-tree complex wavelet transform and the discrete wavelet transform.
The technical solution adopted by the present invention to solve the above technical problems comprises the following steps:
(1) For each given face image I, perform a two-dimensional dual-tree complex wavelet transform to obtain the complex coefficient matrix of each scale subband; compute the amplitude of every complex coefficient in each subband matrix, thereby converting it into a real matrix, i.e. the amplitude matrix of that subband;
(2) unfold each amplitude matrix column by column into a column vector, denoted V_{u,v}, where u ∈ {1, 2, 3, 4} and v ∈ {±15°, ±45°, ±75°} are respectively the scale and orientation parameters of the DT-CWT, and concatenate the column vectors of all subbands to obtain the face feature vector X_{D1}, where D1 denotes the dimension of the DT-CWT feature vector;
(3) apply the discrete wavelet transform in the vertical and horizontal directions to the given face image to obtain the coefficient matrix of each scale subband; compute the amplitude of each coefficient to obtain the amplitude matrix of that subband; unfold the amplitude matrices of the vertical and horizontal directions and concatenate them into a column vector X_{D2}, where D2 denotes the dimension of the DWT feature vector;
(4) form the feature vector X of face image I by concatenating the DT-CWT feature vector X_{D1} and the DWT feature vector X_{D2};
(5) reduce the dimension of the extracted feature vector X with the supervised locally linear embedding method; compute the cosine similarity between the feature vector of the test face image and the feature vectors of the training set images; and assign the input image to the class of the training image with the highest similarity, thereby obtaining the face recognition result.
The technical effects of the present invention are as follows. 1) The method based on fusing the dual-tree complex wavelet transform and the discrete wavelet transform extracts facial features effectively: the DT-CWT extracts approximately shift-invariant wavelet amplitude features at six orientations per scale from the given face image, while the DWT supplements the horizontal and vertical directions, the two directions that contain the richest facial information; their combination strengthens robustness to factors such as illumination and pose, realizing multi-directional and rich facial feature extraction. 2) Using the supervised locally linear embedding method for feature dimensionality reduction lowers the amount of computation while preserving the topological structure of the face samples of each class, which helps improve recognition accuracy.
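The overall flow of the method can be summarized in the following minimal Python sketch. The three helper functions it calls (extract_dtcwt_dwt_features, slle_embed, classify_cosine) are not part of the patent text; they are illustrative placeholders whose possible implementations are sketched in the embodiment section below, and the way the unlabeled test sample is folded into the supervised embedding (a placeholder label) is likewise an assumption.

```python
import numpy as np

def recognize(test_image, train_images, train_labels, k=7, alpha=0.5, n_components=40):
    # Steps (1)-(4): fused DT-CWT + DWT feature extraction for every image.
    feats = np.array([extract_dtcwt_dwt_features(img)
                      for img in list(train_images) + [test_image]])
    # Step (5a): supervised locally linear embedding of training and test
    # features together; the test sample receives a placeholder label (-1),
    # which is an assumption not spelled out in the patent.
    labels = np.append(np.asarray(train_labels), -1)
    embedded = slle_embed(feats, labels, k=k, alpha=alpha, n_components=n_components)
    # Step (5b): cosine similarity against every embedded training vector;
    # the most similar training image decides the recognition result.
    return classify_cosine(embedded[-1], embedded[:-1], np.asarray(train_labels))
```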
The present invention is described in further detail below with reference to the accompanying drawings and an embodiment.
Description of drawings
Fig. 1 is a schematic flow chart of the embodiment of the invention.
Fig. 2 shows part of the ORL face database used in the embodiment of the invention.
Fig. 3 shows the DWT result images used in the embodiment of the invention.
Fig. 4 shows the DT-CWT result images used in the embodiment of the invention.
Fig. 5 shows the experimental results of the embodiment of the invention.
Embodiment
As shown in Figs. 1 to 5, the specific implementation steps of the present invention are as follows:
1. Facial feature extraction
Feature extraction from the face image is realized by combining the DT-CWT and the DWT. The steps are as follows:
Step 1: For each given face image I, perform a 4-level two-dimensional DT-CWT with 6 orientations per level, obtaining the complex coefficient matrices of 24 scale subbands. Compute the amplitude of every complex coefficient in each subband matrix to convert it into a real matrix, i.e. the amplitude matrix of that subband.
Step 2: Unfold each amplitude matrix column by column into a column vector, denoted V_{u,v}, where u ∈ {1, 2, 3, 4} and v ∈ {±15°, ±45°, ±75°} are respectively the scale and orientation parameters of the DT-CWT. Concatenating the column vectors of these 24 subbands then yields the face feature vector X_{D1}, where D1 denotes the dimension of the DT-CWT feature vector.
Step 3: Similarly to the DT-CWT, the DWT (discrete wavelet transform) subbands in the vertical and horizontal directions (0° and 90°) are computed and converted into amplitude matrices; these are unfolded into column vectors for the 0° and 90° directions respectively and concatenated into X_{D2}, where D2 denotes the dimension of the DWT feature vector.
Step 4: The feature vector X of face image I is formed by concatenating the DT-CWT feature vector X_{D1} and the DWT feature vector X_{D2}: X = (X_{D1}, X_{D2}).
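A minimal sketch of this feature extraction stage is given below, assuming the third-party Python packages dtcwt and PyWavelets (pywt). The wavelet filters (the dtcwt defaults and 'db1'), the column-wise flattening order, and keeping the horizontal and vertical detail subbands at every DWT level are assumptions that mirror the description above rather than details mandated by the patent.

```python
import numpy as np
import dtcwt
import pywt

def extract_dtcwt_dwt_features(image, levels=4):
    image = np.asarray(image, dtype=float)

    # Steps 1-2: 4-level 2-D DT-CWT; each level yields 6 directional subbands
    # (+/-15, +/-45, +/-75 degrees) of complex coefficients.  The amplitude
    # matrices are flattened column by column and concatenated into X_D1.
    pyramid = dtcwt.Transform2d().forward(image, nlevels=levels)
    x_d1 = np.concatenate([np.abs(pyramid.highpasses[u])[:, :, v].flatten(order='F')
                           for u in range(levels) for v in range(6)])

    # Step 3: multi-level DWT; only the horizontal (0 deg) and vertical
    # (90 deg) detail subbands are kept, flattened and concatenated into X_D2.
    coeffs = pywt.wavedec2(image, 'db1', level=levels)
    x_d2 = np.concatenate([np.abs(band).flatten(order='F')
                           for (cH, cV, cD) in coeffs[1:]
                           for band in (cH, cV)])

    # Step 4: the final feature vector X = (X_D1, X_D2).
    return np.concatenate([x_d1, x_d2])
```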
2. Feature dimensionality reduction
The face feature vector X obtained by the above feature extraction is a high-dimensional vector produced jointly by the DWT and the 4-level DT-CWT. The high dimensionality imposes a heavy computational burden and also introduces noise. Therefore, the supervised locally linear embedding (SLLE) method is adopted for dimensionality reduction. The specific steps are as follows:
Step 1: Neighbourhood selection. Let the total number of sample points be n; for each sample point X_i, find its k nearest neighbours X_j, where i ∈ {1, …, n} and j ∈ {1, …, k}. In this step SLLE incorporates the class information of the sample points: when computing the distance between two points, the following formula is used:
D′ = D + α·max(D)·Δ, α ∈ [0, 1]
where D′ is the adjusted distance and D is the Euclidean distance that ignores class information. Δ is 0 when the two points belong to the same class and 1 otherwise. α is a parameter controlling the between-point distance: when α = 1, SLLE is fully supervised LLE; when α = 0, SLLE degenerates to unsupervised LLE; otherwise SLLE is semi-supervised LLE (α-SLLE).
Step 2: Compute the local reconstruction weight matrix W of the sample points. Define the reconstruction error function
ξ(W) = Σ_i ‖X_i − Σ_j W_{ij} X_j‖²
and, for each sample point X_i, compute the optimal reconstruction weights W_{ij} that minimize the error ξ of linearly reconstructing X_i from its k nearest neighbours, where W_{ij} is the weight between X_i and X_j.
Step 3: Use the weights W_{ij} to compute the embedded coordinates Y by minimizing the embedding cost function
ξ(Y) = Σ_i ‖Y_i − Σ_j W_{ij} Y_j‖², subject to Σ_i Y_i = 0 and (1/n)·Σ_i Y_i Y_i^T = I,
where I is the d × d identity matrix, Y_i is the output vector of X_i and the Y_j are the k nearest neighbours of Y_i. ξ(Y) can also be expressed as ξ(Y) = tr{Y M Y^T}, where M = (I − W)^T(I − W) is a sparse matrix. Minimizing ξ(Y) gives Y as the eigenvectors of M corresponding to its m smallest non-zero eigenvalues.
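The remaining SLLE steps can be sketched as follows. This is standard LLE machinery (reconstruction weights via a regularized local Gram matrix, embedding via the bottom eigenvectors of M) rather than a literal transcription of the patent; the regularization constant, k, α and the output dimension are all assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist

def slle_embed(X, labels, k=7, alpha=0.5, n_components=40, reg=1e-3):
    X, labels = np.asarray(X, dtype=float), np.asarray(labels)
    n = X.shape[0]

    # Step 1 (recap): supervised neighbour selection via D' = D + alpha*max(D)*Delta.
    D = cdist(X, X)
    D = D + alpha * D.max() * (labels[:, None] != labels[None, :])
    neighbours = np.argsort(D, axis=1)[:, 1:k + 1]

    # Step 2: reconstruction weights W_ij minimizing
    # sum_i || X_i - sum_j W_ij X_j ||^2 under the constraint sum_j W_ij = 1.
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[neighbours[i]] - X[i]                  # neighbours centred on X_i
        C = Z @ Z.T                                  # local Gram matrix
        C = C + reg * np.trace(C) * np.eye(k)        # regularize for numerical stability
        w = np.linalg.solve(C, np.ones(k))
        W[i, neighbours[i]] = w / w.sum()

    # Step 3: embedding minimizing tr(Y^T M Y) with M = (I - W)^T (I - W);
    # the coordinates are the eigenvectors of the smallest non-zero eigenvalues.
    I_minus_W = np.eye(n) - W
    M = I_minus_W.T @ I_minus_W
    _, eigvecs = np.linalg.eigh(M)                   # eigenvalues in ascending order
    return eigvecs[:, 1:n_components + 1]            # rows are the embedded points
```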
3. Pattern classification
Classification is an important step in face recognition. Pattern classification assigns the feature vector of a test sample, obtained through feature extraction and dimensionality reduction, to the class to which it belongs. The present invention classifies the extracted training and test sample features using the cosine similarity: during classification, the cosine similarity between the feature vector of the input test face image and the feature vectors of the training set images is computed, and the input image is assigned to the class of the training image with the highest similarity, thereby realizing face recognition.
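A sketch of this cosine-similarity nearest-neighbour classification in plain numpy; the small epsilon guarding against a zero norm is an implementation detail, not part of the patent.

```python
import numpy as np

def classify_cosine(test_y, train_Y, train_labels):
    # Cosine similarity between the test vector and every training vector.
    sims = train_Y @ test_y / (np.linalg.norm(train_Y, axis=1) * np.linalg.norm(test_y) + 1e-12)
    # The input image is assigned to the class of the most similar training image.
    return np.asarray(train_labels)[np.argmax(sims)]
```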
The related experiments of the present invention were carried out on the ORL face database, which consists of 400 images of 40 subjects, 10 images per subject, taken at different times. In the experiments, 5 images of each subject were drawn at random for training and the remaining 5 were used for testing. The proposed method was compared with classical face recognition methods. As shown in Fig. 5, its recognition rate is clearly superior to that of the traditional PCA (principal component analysis) method. It was also compared with supervised locally linear embedding (SLLE) alone and with (DT-CWT+DWT)+PCA: the former comparison shows that the proposed scheme extracts richer facial features, and the latter shows that the proposed feature dimensionality reduction is more effective than PCA.
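The random 5/5 split per subject described above could look like the following sketch. The fixed seed is used only for reproducibility, and the indexing assumes the 400 ORL images are ordered subject by subject, which is an assumption about the data layout.

```python
import numpy as np

rng = np.random.default_rng(0)
train_idx, test_idx = [], []
for subject in range(40):                        # 40 subjects, 10 images each
    perm = rng.permutation(10)
    train_idx.extend(subject * 10 + perm[:5])    # 5 random images for training
    test_idx.extend(subject * 10 + perm[5:])     # the remaining 5 for testing
```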
Claims (4)
1. A face recognition method that fuses the dual-tree complex wavelet transform and the discrete wavelet transform, comprising the steps of:
(1) for each given face image I, performing a two-dimensional dual-tree complex wavelet transform to obtain the complex coefficient matrix of each scale subband, computing the amplitude of each complex coefficient in each subband matrix and converting the matrix into a real matrix, i.e. the amplitude matrix of that subband;
(2) unfolding each amplitude matrix column by column into a column vector, denoted V_{u,v}, where u ∈ {1, 2, 3, 4} and v ∈ {±15°, ±45°, ±75°} are respectively the scale and orientation parameters of the DT-CWT, and concatenating the column vectors of all subbands to obtain the face feature vector X_{D1}, where D1 denotes the dimension of the DT-CWT feature vector;
(3) applying the discrete wavelet transform in the vertical and horizontal directions to the given face image to obtain the coefficient matrix of each scale subband, computing the amplitude of each coefficient to obtain the amplitude matrix of that subband, and unfolding the amplitude matrices of the vertical and horizontal directions into a column vector X_{D2}, where D2 denotes the dimension of the DWT feature vector;
(4) forming the feature vector X of face image I by concatenating the DT-CWT feature vector X_{D1} and the DWT feature vector X_{D2};
(5) reducing the dimension of the extracted feature vector X with the supervised locally linear embedding method, computing the cosine similarity between the feature vector of the test face image and the feature vectors of the training set images, and assigning the input image to the class of the training image with the highest similarity to obtain the face recognition result.
2. The method according to claim 1, characterized in that the DT-CWT of said step (1) has 4 levels and 6 orientations, yielding the amplitude matrices of 24 scale subbands.
3. The method according to claim 1, characterized in that in said step (4) X = (X_{D1}, X_{D2}).
4. The method according to claim 1, characterized in that the dimensionality reduction with the supervised locally linear embedding method in said step (5) comprises the steps of:
Step 1: neighbourhood selection: let the total number of sample points be n, and for each sample point X_i find its k nearest neighbours X_j, where i ∈ {1, …, n} and j ∈ {1, …, k};
Step 2: compute the local reconstruction weight matrix W of the sample points by defining the reconstruction error function ξ(W) = Σ_i ‖X_i − Σ_j W_{ij} X_j‖² and computing, for each sample point X_i, the optimal reconstruction weights W_{ij} that minimize the error ξ of linearly reconstructing X_i from its k nearest neighbours, where W_{ij} is the weight between X_i and X_j;
Step 3: use the optimal reconstruction weights W_{ij} to compute the coordinates Y in the low-dimensional space by minimizing ξ(Y) = Σ_i ‖Y_i − Σ_j W_{ij} Y_j‖², subject to Σ_i Y_i = 0 and (1/n)·Σ_i Y_i Y_i^T = I, where I is the d × d identity matrix, Y_i is the output vector of X_i and the Y_j are the k nearest neighbours of Y_i; ξ(Y) can also be expressed as ξ(Y) = tr{Y M Y^T}, where M = (I − W)^T(I − W) is a sparse matrix; minimizing ξ(Y) gives Y as the eigenvectors of M corresponding to its m smallest non-zero eigenvalues.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011103960095A CN102411708A (en) | 2011-12-02 | 2011-12-02 | Face recognition method combining dual-tree complex wavelet transform and discrete wavelet transform |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011103960095A CN102411708A (en) | 2011-12-02 | 2011-12-02 | Face recognition method combining dual-tree complex wavelet transform and discrete wavelet transform |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102411708A true CN102411708A (en) | 2012-04-11 |
Family
ID=45913775
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2011103960095A Pending CN102411708A (en) | 2011-12-02 | 2011-12-02 | Face recognition method combining dual-tree complex wavelet transform and discrete wavelet transform |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102411708A (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103076084A (en) * | 2012-12-07 | 2013-05-01 | 北京物资学院 | Matching pursuit weak signal extraction method based on FrDT-CWT (fractional dual-tree complex wavelet transform) |
CN103336960A (en) * | 2013-07-26 | 2013-10-02 | 电子科技大学 | Human face identification method based on manifold learning |
CN103942526A (en) * | 2014-01-17 | 2014-07-23 | 山东省科学院情报研究所 | Linear feature extraction method for discrete data point set |
CN105426822A (en) * | 2015-11-05 | 2016-03-23 | 郑州轻工业学院 | Non-stable signal multi-fractal feature extraction method based on dual-tree complex wavelet transformation |
CN105574546A (en) * | 2015-12-22 | 2016-05-11 | 洛阳师范学院 | Computer image mode identification method based on SLLE algorithm and system utilizing the same |
CN106250858A (en) * | 2016-08-05 | 2016-12-21 | 重庆中科云丛科技有限公司 | A kind of recognition methods merging multiple face recognition algorithms and system |
CN108520210A (en) * | 2018-03-26 | 2018-09-11 | 河南工程学院 | Based on wavelet transformation and the face identification method being locally linear embedding into |
CN109255748A (en) * | 2018-06-07 | 2019-01-22 | 上海出版印刷高等专科学校 | Digital watermark treatment method and system based on dual-tree complex wavelet |
CN111597896A (en) * | 2020-04-15 | 2020-08-28 | 卓望数码技术(深圳)有限公司 | Abnormal face recognition method, abnormal face recognition device, abnormal face recognition equipment and abnormal face recognition storage medium |
CN105447498B (en) * | 2014-09-22 | 2021-01-26 | 三星电子株式会社 | Client device, system and server system configured with neural network |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101271521A (en) * | 2008-05-13 | 2008-09-24 | 清华大学 | Human face recognition method based on anisotropic double-tree complex wavelet package transforms |
- 2011-12-02 CN CN2011103960095A patent/CN102411708A/en active Pending
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101271521A (en) * | 2008-05-13 | 2008-09-24 | 清华大学 | Human face recognition method based on anisotropic double-tree complex wavelet package transforms |
Non-Patent Citations (3)
Title |
---|
Li Jianwei et al.: "Application of supervised locally linear embedding in face recognition", Journal of Chongqing University (重庆大学学报) * |
Chai Zhi et al.: "Face recognition based on complex wavelets and Gabor wavelets", Computer Engineering (计算机工程) * |
Chai Zhi et al.: "Face recognition using complex wavelets and independent component analysis", Journal of Computer Applications (计算机应用) * |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103076084A (en) * | 2012-12-07 | 2013-05-01 | 北京物资学院 | Matching pursuit weak signal extraction method based on FrDT-CWT (fractional dual-tree complex wavelet transform) |
CN103336960A (en) * | 2013-07-26 | 2013-10-02 | 电子科技大学 | Human face identification method based on manifold learning |
CN103942526A (en) * | 2014-01-17 | 2014-07-23 | 山东省科学院情报研究所 | Linear feature extraction method for discrete data point set |
CN103942526B (en) * | 2014-01-17 | 2017-02-08 | 山东省科学院情报研究所 | Linear feature extraction method for discrete data point set |
US11875268B2 (en) | 2014-09-22 | 2024-01-16 | Samsung Electronics Co., Ltd. | Object recognition with reduced neural network weight precision |
US11593586B2 (en) | 2014-09-22 | 2023-02-28 | Samsung Electronics Co., Ltd. | Object recognition with reduced neural network weight precision |
CN105447498B (en) * | 2014-09-22 | 2021-01-26 | 三星电子株式会社 | Client device, system and server system configured with neural network |
CN105426822A (en) * | 2015-11-05 | 2016-03-23 | 郑州轻工业学院 | Non-stable signal multi-fractal feature extraction method based on dual-tree complex wavelet transformation |
CN105574546B (en) * | 2015-12-22 | 2018-11-16 | 洛阳师范学院 | A kind of computer picture mode identification method and system based on SLLE algorithm |
CN105574546A (en) * | 2015-12-22 | 2016-05-11 | 洛阳师范学院 | Computer image mode identification method based on SLLE algorithm and system utilizing the same |
CN106250858B (en) * | 2016-08-05 | 2021-08-13 | 重庆中科云从科技有限公司 | Recognition method and system fusing multiple face recognition algorithms |
CN106250858A (en) * | 2016-08-05 | 2016-12-21 | 重庆中科云丛科技有限公司 | A kind of recognition methods merging multiple face recognition algorithms and system |
CN108520210A (en) * | 2018-03-26 | 2018-09-11 | 河南工程学院 | Based on wavelet transformation and the face identification method being locally linear embedding into |
CN109255748A (en) * | 2018-06-07 | 2019-01-22 | 上海出版印刷高等专科学校 | Digital watermark treatment method and system based on dual-tree complex wavelet |
CN111597896A (en) * | 2020-04-15 | 2020-08-28 | 卓望数码技术(深圳)有限公司 | Abnormal face recognition method, abnormal face recognition device, abnormal face recognition equipment and abnormal face recognition storage medium |
CN111597896B (en) * | 2020-04-15 | 2024-02-20 | 卓望数码技术(深圳)有限公司 | Abnormal face recognition method, recognition device, recognition apparatus, and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102411708A (en) | Face recognition method combining dual-tree complex wavelet transform and discrete wavelet transform | |
Gao et al. | Infar dataset: Infrared action recognition at different times | |
Tao et al. | Person re-identification by regularized smoothing kiss metric learning | |
Sikka et al. | Exploring bag of words architectures in the facial expression domain | |
CN105574510A (en) | Gait identification method and device | |
CN103400154B (en) | A kind of based on the human motion recognition method having supervision Isometric Maps | |
CN102722699A (en) | Face identification method based on multiscale weber local descriptor and kernel group sparse representation | |
CN107679461A (en) | Pedestrian's recognition methods again based on antithesis integration analysis dictionary learning | |
Mu et al. | Palmprint recognition based on discriminative local binary patterns statistic feature | |
CN104239856A (en) | Face recognition method based on Gabor characteristics and self-adaptive linear regression | |
CN115830637B (en) | Method for re-identifying blocked pedestrians based on attitude estimation and background suppression | |
Li et al. | A statistical PCA method for face recognition | |
CN106096528B (en) | A kind of across visual angle gait recognition method analyzed based on two-dimentional coupling edge away from Fisher | |
Dikmen et al. | A data driven method for feature transformation | |
CN109165612A (en) | Pedestrian's recognition methods again based on depth characteristic and two-way KNN sorting consistence | |
CN102004902B (en) | Near infrared human face image identification method based on wavelet theory and sparse representation theory | |
CN110909678B (en) | Face recognition method and system based on width learning network feature extraction | |
Xie et al. | Facial expression recognition using intra‐class variation reduced features and manifold regularisation dictionary pair learning | |
CN101877065B (en) | Extraction and identification method of non-linear authentication characteristic of facial image under small sample condition | |
Panner Selvam et al. | Gender recognition based on face image using reinforced local binary patterns | |
Kalluri et al. | Palmprint identification using Gabor and wide principal line features | |
Okawa et al. | Offline writer verification using pen pressure information from infrared image | |
Umer et al. | Biometric recognition system for challenging faces | |
CN105718858B (en) | A kind of pedestrian recognition method based on positive and negative broad sense maximum pond | |
CN101482917B (en) | Human face recognition system and method based on second-order two-dimension principal component analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20120411 |