CN111488905B - Robust image recognition method based on high-dimensional PCANet - Google Patents
- Publication number
- CN111488905B (application CN202010147000.XA / CN202010147000A)
- Authority
- CN
- China
- Prior art keywords
- feature
- dimensional
- image
- pcanet
- atlas
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
A robust image recognition method based on a high-dimensional PCANet comprises robust feature extraction and chi-square-distance-based nearest-neighbor classification. The feature extraction process combines flat convolution and stereo (three-dimensional) convolution of the feature maps: the stereo convolution fully accounts for the correlation among channels, while the flat convolution fully decomposes the principal directions of each channel of the input image, so the resulting pattern maps carry richer features than those of the original PCANet and effectively improve its robustness. The classification process comprises the following steps: step 1, in the high-dimensional histogram feature space, compute the chi-square distance from the image to be recognized to each training image; step 2, assign to the image to be recognized the class label of the training sample at the smallest distance. The method can effectively handle variations such as occlusion, illumination change and resolution differences in the image to be recognized, and effectively improves the recognition rate of shifted images.
Description
Technical Field
The invention relates to the fields of image processing and pattern recognition, and in particular to robust image recognition in which the image to be recognized differs substantially from the training images; it is mainly intended for processing and recognizing real-world images.
Background
In the field of computer vision and image recognition, deep neural networks (Deep Neural Network, DNN), represented by convolutional neural networks (Convolutional Neural Networks, CNN), have achieved great success. On some public datasets, the classification ability of state-of-the-art deep learning methods even exceeds that of humans, for example: verification accuracy on the LFW face database, image classification accuracy on ImageNet, and handwritten digit recognition accuracy on MNIST. In practice, however, the image to be recognized often differs greatly in distribution or structure from the training images, which causes DNNs to make recognition errors on a large scale; in deep learning this phenomenon is called covariate shift. Covariate shift leads to the technical defects of low accuracy and poor feasibility in existing image recognition methods.
Disclosure of Invention
To overcome the defects of low image recognition accuracy and poor feasibility caused by covariate shift, the invention provides a robust image recognition method with high accuracy and good feasibility based on a High-dimensional PCANet (HPCANet). It effectively alleviates the recognition problems caused by covariate shift, and in particular greatly improves image recognition performance when the images to be recognized exhibit large-amplitude shifts such as occlusion, illumination change and resolution differences.
The technical scheme adopted for solving the technical problems is as follows:
a robust image recognition method based on high-dimensional PCANet comprises the following steps:
Step 3: construct the patch matrix X̄^(l) = [x̄_{1,1,1}, …, x̄_{N,C_l,mn}], where x̄_{i,c,b} = Vec(x_{i,c,b}) − μ_{i,c,b}, μ_{i,c,b} is the mean of Vec(x_{i,c,b}), x_{i,c,b} denotes the b-th (b ∈ {1,2,…,mn}) feature block of size k×k extracted from the c-th channel of X_i^(l), and Vec(·) denotes the operation of stretching a matrix into a column vector;
Step 7: compute the feature atlas X^(l+1) of the (l+1)-th convolution layer;
Step 8: let l = l+1 and repeat steps 3 to 7 until l = L, where L denotes the maximum number of convolution layers, given in advance;
Step 12: compute the principal directions V̄^(l) = [v̄_1, …, v̄_{C_{l+1}}] of Ȳ^(l), where v̄_{i″} is the i″-th eigenvector of the covariance matrix Ȳ^(l)(Ȳ^(l))^T, the corresponding eigenvalue is λ_{i″}, and λ_1 ≥ λ_2 ≥ …;
Step 14: compute the feature atlas Y^(l+1) of the (l+1)-th convolution layer;
Step 17: encode the feature atlas F into a pattern atlas P = {P_{i,β}}_{i=1,…,N; β=1,…,B}, where P_{i,β} = Σ_{t=1}^{T} 2^{t−1}·USF(F_{i,(β−1)T+t}) is the β-th (β ∈ {1,…,B}) pattern map of the i-th sample, F_{i,t} denotes the t-th channel of the feature-map subset F_i, T denotes the number of channels involved in encoding a single pattern map, and USF(·) denotes the unit step function (Unit Step Function, USF), which binarizes its input by comparison with 0: USF(x) = 1 if x > 0, and USF(x) = 0 otherwise;
Step 18: extract the histogram features H = [H_i]_{i=1,…,N} from the pattern atlas P, where H_i = [H_{i,1}, …, H_{i,B}]^T and H_{i,β} = Qhist(P_{i,β}); Qhist(P_{i,β}) divides the pattern map P_{i,β} into Q blocks and extracts a histogram from each block, each histogram using 2^T bins, i.e. for each block the frequency with which the code values of the pattern map fall into each of the 2^T bins is counted;
Step 21: compute the metric matrix M = [M_{i,j}]_{i=1,…,J; j=1,…,K}, where M_{i,j} = Σ_{d=1}^{D} (H_i(d) − H_j(d))² / (H_i(d) + H_j(d)) is the chi-square distance between the histogram feature H_i of the i-th training sample and the histogram feature H_j of the j-th test sample; here D denotes the common length of H_i and H_j, and H_i(d) and H_j(d) denote their d-th elements;
Step 22: compute the class labels Id = [Id_i]_{i=1,…,K} of the samples in the test set Z: Id_i is the class label of the training sample indexed by MinIndx(M_i), where M_i denotes the i-th column vector of the metric matrix M and MinIndx(·) returns the index of the smallest element of M_i.
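Steps 21 and 22 can be sketched in a few lines of NumPy. This is an illustrative implementation of chi-square nearest-neighbor classification over histogram features, not the patented code; the function names (`chi_square_metric`, `nn_classify`) and the small epsilon guarding empty bins are our own additions.

```python
import numpy as np

def chi_square_metric(H_train, H_test):
    """Chi-square distance matrix between training histograms (J x D)
    and test histograms (K x D); eps guards division by zero-valued bins."""
    eps = 1e-12
    diff = H_train[:, None, :] - H_test[None, :, :]
    summ = H_train[:, None, :] + H_test[None, :, :] + eps
    return np.sum(diff ** 2 / summ, axis=2)  # shape (J, K)

def nn_classify(H_train, labels, H_test):
    """Give each test sample the label of its chi-square nearest training sample."""
    M = chi_square_metric(np.asarray(H_train, float), np.asarray(H_test, float))
    return [labels[np.argmin(M[:, j])] for j in range(M.shape[1])]
```

The chi-square distance weights each bin's squared difference by the combined mass in that bin, which is why it is a common metric for comparing occurrence histograms.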
Further, in said step 7, the feature atlas X^(l+1) of the (l+1)-th convolution layer is computed as follows: 7.1) project X̄^(l) onto W^(l+1): X̃^(l+1) = (W^(l+1))^T X̄^(l); 7.2) reorganize the elements of X̃^(l+1) into the feature atlas X^(l+1) = {X_i^(l+1)}_{i=1,…,N}: the entries of X̃^(l+1) belonging to the i-th sample are rearranged by mat_{m×n}(·) into the C_{l+1} channels of X_i^(l+1), the channel index being c = j % C_{l+1}; here v[a:b, c] denotes the column vector formed by rows a to b of column c, a % b denotes the remainder of a divided by b, ⌊a⌋ denotes rounding the real number a down, and mat_{m×n}(v) denotes rearranging an arbitrary column vector v ∈ R^{mn} into an m×n matrix.
Still further, in said step 14, the feature atlas Y^(l+1) of the (l+1)-th convolution layer is computed as follows: 14.1) project Ȳ^(l) onto W̄^(l+1): Ỹ^(l+1) = (W̄^(l+1))^T Ȳ^(l); 14.2) reorganize the elements of Ỹ^(l+1) into the feature atlas Y^(l+1) = {Y_i^(l+1)}_{i=1,…,N}, where the matrices belonging to the same sample are connected in the channel direction.
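The Vec(·) and mat_{m×n}(·) operators used in steps 7.2) and 14.2) can be illustrated with NumPy reshapes. This sketch assumes the conventional column-major (column-stacking) definition of vec, which is one common convention; the function names are illustrative.

```python
import numpy as np

def vec(A):
    # Stack the columns of A into a single column vector (column-major order).
    return np.asarray(A).flatten(order="F")

def mat(v, m, n):
    # Inverse of vec: rearrange a length m*n column vector into an m x n matrix.
    return np.asarray(v).reshape((m, n), order="F")
```

With these definitions, mat(vec(A), m, n) recovers A exactly, which is the round trip depicted in FIG. 4.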
The technical concept of the invention is as follows. When the images to be recognized and the training-set images exhibit large shifts such as occlusion, illumination change or resolution differences, the recognition performance of existing neural network models often drops sharply, whereas PCANet handles these problems better. However, PCANet has two drawbacks: (1) it uses only flat convolution, so the correlation among the channels of the feature maps is not fully considered; (2) when encoding the pattern maps it compresses the generated feature maps by a factor of 8, so the resulting pattern maps lack rich discriminative features. To solve these problems, the invention combines stereo convolution with flat convolution: the stereo convolution fully accounts for the correlation among channels, while the flat convolution fully decomposes the principal directions of each channel of the input image. The resulting pattern maps therefore carry richer features than those of the original PCANet, and the robustness of PCANet is effectively improved.
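The contrast between flat and stereo convolution can be illustrated by how the training patches for the PCA filters are gathered. The following NumPy sketch (with illustrative function names, and without the boundary zero-padding of FIGS. 5 and 6, for brevity) draws k×k patches from each channel separately in the flat case and k×k×C patches spanning all channels jointly in the stereo case, then takes the leading eigenvectors of the patch covariance as filters.

```python
import numpy as np

def pca_filters(patches, num_filters):
    """Leading principal directions of the mean-removed patch matrix (rows = patches)."""
    X = patches - patches.mean(axis=1, keepdims=True)  # remove each patch's mean
    eigvals, eigvecs = np.linalg.eigh(X.T @ X)         # symmetric eigendecomposition
    order = np.argsort(eigvals)[::-1]                  # sort eigenvalues descending
    return eigvecs[:, order[:num_filters]].T           # (num_filters, patch_dim)

def flat_patches(feature_maps, k):
    """k x k patches drawn from each channel separately (flat convolution)."""
    C, m, n = feature_maps.shape
    out = [feature_maps[c, i:i + k, j:j + k].ravel()
           for c in range(C)
           for i in range(m - k + 1)
           for j in range(n - k + 1)]
    return np.array(out)  # patch dimension: k*k

def stereo_patches(feature_maps, k):
    """k x k patches spanning all C channels jointly (stereo convolution)."""
    C, m, n = feature_maps.shape
    out = [feature_maps[:, i:i + k, j:j + k].ravel()
           for i in range(m - k + 1)
           for j in range(n - k + 1)]
    return np.array(out)  # patch dimension: C*k*k
```

The flat filters decompose each channel independently, while the stereo filters act on all channels at once, which is how the cross-channel correlation enters the decomposition.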
The beneficial effects of the invention are mainly as follows: the method handles variations such as occlusion, illumination change and resolution differences in the image to be recognized more effectively, and thereby effectively improves the recognition rate of shifted images.
Drawings
FIG. 1 shows the feature-map extraction process of the high-dimensional PCANet of the invention; the flat convolution operation is detailed in step 7 of the summary, ∪ denotes merging the feature-map subsets, and the block-histogram feature extraction from the pattern maps is detailed in step 18 of the summary;
FIG. 2 shows the classification process of the high-dimensional PCANet of the invention (see steps 21 and 22 of the summary), where NN denotes the nearest-neighbor classifier and Id denotes the final class label of the image to be recognized;
FIG. 3 is a training set sample and a test set sample from an AR face database, where (a) a sample of test set I, (b) a sample of test set II, (c) a sample of test set III, and (d) a sample of training set;
FIG. 4(a) shows the process by which the Vec(·) operator stretches a matrix into a column vector, and FIG. 4(b) the process by which the mat_{m×n}(·) operator resets a column vector into a matrix;
FIG. 5 is a process diagram of extracting feature blocks from a feature map in a flat convolution, where (a) is the original feature map, (b) is boundary zero padding, (c) is feature block selection, and (d) is the selected multi-channel feature block;
FIG. 6 is a process diagram of extracting feature blocks from a feature map in a stereo convolution, where (a) is the original feature map, (b) is boundary zero padding, (c) is feature block selection, and (d) is the selected multi-channel feature block;
fig. 7 (a) is a one-dimensional illustration of a flat filter, and fig. 7 (b) is a one-dimensional illustration of a stereo filter;
FIG. 8 is a two-dimensional illustration of a flat filter/stereo filter, wherein (a) represents the flat convolution kernel of convolution layer 1, (b) represents the flat convolution kernel of convolution layer 2, and (c) represents the stereo convolution kernel of convolution layer 2;
FIG. 9 shows the feature maps generated from an image to be recognized by the 2-layer flat convolution and the stereo convolution, where (a) is the image to be recognized (with illumination change and occlusion), (b) shows the magnitudes of the 64 feature maps generated by the 2-layer flat convolution, and (c) the magnitudes of the 64 feature maps generated by the 2-layer stereo convolution;
FIG. 10 shows 16 pattern maps generated by the high-dimensional PCANet method; the 8 pattern maps in the first row come from feature maps generated by the flat convolution, and the 8 pattern maps in the second row from feature maps generated by the stereo convolution.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to FIGS. 1 to 10, a robust image recognition method based on a High-dimensional PCANet (HPCANet) comprises the following steps:
Step 3: construct the patch matrix X̄^(l) = [x̄_{1,1,1}, …, x̄_{N,C_l,mn}], where x̄_{i,c,b} = Vec(x_{i,c,b}) − μ_{i,c,b}, μ_{i,c,b} is the mean of Vec(x_{i,c,b}), x_{i,c,b} denotes the b-th (b ∈ {1,2,…,mn}) feature block of size k×k extracted from the c-th channel of X_i^(l), and Vec(·) denotes the operation of stretching a matrix into a column vector. FIG. 4(a) specifically depicts the process by which Vec(·) stretches a matrix into a column vector, and FIG. 5 details the process of extracting feature blocks from feature maps in the flat convolution;
Step 6: obtain the flat filter bank W^(l+1) of C_{l+1} filters from V^(l). FIGS. 7(a) and 8(b) show one-dimensional and two-dimensional illustrations of a flat filter, respectively;
Step 7: compute the feature atlas X^(l+1) of the (l+1)-th convolution layer as follows: 7.1) project X̄^(l) onto W^(l+1): X̃^(l+1) = (W^(l+1))^T X̄^(l); 7.2) reorganize the elements of X̃^(l+1) into the feature atlas X^(l+1) = {X_i^(l+1)}_{i=1,…,N}: the entries of X̃^(l+1) belonging to the i-th sample are rearranged by mat_{m×n}(·) into the C_{l+1} channels of X_i^(l+1), the channel index being c = j % C_{l+1}; here v[a:b, c] denotes the column vector formed by rows a to b of column c, a % b denotes the remainder of a divided by b, ⌊a⌋ denotes rounding the real number a down, and mat_{m×n}(v) denotes rearranging an arbitrary column vector v ∈ R^{mn} into an m×n matrix;
Step 8: let l = l+1 and repeat steps 3 to 7 until l = L, where L denotes the maximum number of convolution layers, given in advance;
Step 12: compute the principal directions V̄^(l) = [v̄_1, …, v̄_{C_{l+1}}] of Ȳ^(l), where v̄_{i″} is the i″-th eigenvector of the covariance matrix Ȳ^(l)(Ȳ^(l))^T, the corresponding eigenvalue is λ_{i″}, and λ_1 ≥ λ_2 ≥ …;
Step 13: obtain the stereo (three-dimensional) filter bank W̄^(l+1) of C_{l+1} filters from V̄^(l). FIGS. 7(b) and 8(c) show one-dimensional and two-dimensional illustrations of a stereo filter, respectively;
Step 14: compute the feature atlas Y^(l+1) of the (l+1)-th convolution layer as follows: 14.1) project Ȳ^(l) onto W̄^(l+1): Ỹ^(l+1) = (W̄^(l+1))^T Ȳ^(l); 14.2) reorganize the elements of Ỹ^(l+1) into the feature atlas Y^(l+1) = {Y_i^(l+1)}_{i=1,…,N}, where the matrices belonging to the same sample are connected in the channel direction;
Step 16: combine the feature atlases X^(L) and Y^(L) into a new feature atlas F = X^(L) ∪ Y^(L). FIG. 9 shows the magnitudes of the 64×2 = 128 feature maps generated from the image to be recognized (FIG. 9(a)) by the 2-layer flat convolution and the stereo convolution;
Step 17: encode the feature atlas F into a pattern atlas P = {P_{i,β}}_{i=1,…,N; β=1,…,B}, where P_{i,β} = Σ_{t=1}^{T} 2^{t−1}·USF(F_{i,(β−1)T+t}) is the β-th (β ∈ {1,…,B}) pattern map of the i-th sample, F_{i,t} denotes the t-th channel of the feature-map subset F_i, T denotes the number of channels involved in encoding a single pattern map, and USF(·) denotes the unit step function (Unit Step Function, USF), which binarizes its input by comparison with 0: USF(x) = 1 if x > 0, and USF(x) = 0 otherwise;
FIG. 10 illustrates pattern maps generated by the high-dimensional PCANet method;
Step 18: extract the histogram features H = [H_i]_{i=1,…,N} from the pattern atlas P, where H_i = [H_{i,1}, …, H_{i,B}]^T and H_{i,β} = Qhist(P_{i,β}); Qhist(P_{i,β}) divides the pattern map P_{i,β} into Q blocks and extracts a histogram from each block, each histogram using 2^T bins, i.e. for each block the frequency with which the code values of the pattern map fall into each of the 2^T bins is counted;
Step 21: compute the metric matrix M = [M_{i,j}]_{i=1,…,J; j=1,…,K}, where M_{i,j} = Σ_{d=1}^{D} (H_i(d) − H_j(d))² / (H_i(d) + H_j(d)) is the chi-square distance between the histogram feature H_i of the i-th training sample and the histogram feature H_j of the j-th test sample; here D denotes the common length of H_i and H_j, and H_i(d) and H_j(d) denote their d-th elements;
Step 22: compute the class labels Id = [Id_i]_{i=1,…,K} of the samples in the test set Z: Id_i is the class label of the training sample indexed by MinIndx(M_i), where M_i denotes the i-th column vector of the metric matrix M and MinIndx(·) returns the index of the smallest element of M_i.
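Steps 17 and 18 (pattern-map encoding and block-histogram extraction) can be sketched as follows. The sketch assumes the standard PCANet-style binary weighting 2^{t−1} over T binarized maps and uses simple horizontal strips as the Q blocks; the function names are illustrative, not the patented implementation.

```python
import numpy as np

def encode_pattern_map(channel_stack):
    """Binarize T feature maps with a unit step at 0 and pack them into a
    single integer pattern map with code values in [0, 2^T)."""
    T = channel_stack.shape[0]
    bits = (channel_stack > 0).astype(np.int64)     # USF applied channel-wise
    weights = 2 ** np.arange(T).reshape(T, 1, 1)    # weights 2^(t-1), t = 1..T
    return (bits * weights).sum(axis=0)

def block_histogram(pattern_map, T, blocks):
    """Split the pattern map into `blocks` horizontal strips and concatenate
    a 2^T-bin occurrence histogram computed from each strip."""
    hists = []
    for strip in np.array_split(pattern_map, blocks, axis=0):
        h, _ = np.histogram(strip, bins=2 ** T, range=(0, 2 ** T))
        hists.append(h)
    return np.concatenate(hists)
```

Each group of T binary maps thus collapses into one pattern map whose per-block code-value histograms form the final high-dimensional feature vector.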
Table 1 compares the recognition rates of three versions of HPCANet (HPCANet-1, HPCANet-2, HPCANet-3) with existing methods (VGG-Face, LCNN, PCANet) on the training set and test sets given in FIG. 3. All three versions of HPCANet use two convolution layers; the numbers of flat convolution kernels are 8 (convolution layer 1) + 8 (convolution layer 2); the numbers of stereo convolution kernels are 8 and 24 for HPCANet-1, 8 and 32 for HPCANet-2, and 8 and 40 for HPCANet-3.
TABLE 1
As can be seen from Table 1, HPCANet-1 to HPCANet-3 all outperform PCANet, and the advantage is especially pronounced when the resolution of the image to be recognized is low; in addition, from HPCANet-1 to HPCANet-3, the recognition performance of HPCANet gradually increases as the feature dimension grows.
Claims (3)
1. A robust image recognition method based on high-dimensional PCANet, the method comprising the steps of:
step 1: select J images A = {A_1, …, A_J} as the training set, with corresponding class labels; let Z = {Z_1, …, Z_K} be the set of images to be recognized, i.e. the test set; here A_i and Z_j respectively denote images of size m×n with C_0 ∈ {1,3} channels over the real number domain;
step 2: initialize parameters and input data: let flag ∈ {0,1}, where flag indicates the stage the network is in, flag = 0 indicating that the network is in the training stage and flag = 1 that the network is in the test stage; let l = 0, where l indicates the layer at which the input image or feature map sits in the network; let X^(0) = {X_i^(0)}_{i=1,…,N}, where X_i^(0) is the i-th input image, the input being the training set A in the training stage and the test set Z in the test stage;
step 3: construct the patch matrix X̄^(l) = [x̄_{1,1,1}, …, x̄_{N,C_l,mn}], where x̄_{i,c,b} = Vec(x_{i,c,b}) − μ_{i,c,b}, μ_{i,c,b} is the mean of Vec(x_{i,c,b}), x_{i,c,b} denotes the b-th (b ∈ {1,2,…,mn}) feature block of size k×k extracted from the c-th channel of X_i^(l), and Vec(·) denotes the operation of stretching a matrix into a column vector;
step 4: if flag = 1, i.e. the network is in the test stage, jump to step 7; otherwise, execute the next step;
step 5: compute the principal directions V^(l) = [v_1, …, v_{C_{l+1}}] of X̄^(l), where v_{i′} is the i′-th eigenvector of the covariance matrix X̄^(l)(X̄^(l))^T, the corresponding eigenvalue is λ_{i′}, and λ_1 ≥ λ_2 ≥ …;
step 7: compute the feature atlas X^(l+1) of the (l+1)-th convolution layer;
step 8: let l = l+1 and repeat steps 3 to 7 until l = L, where L denotes the maximum number of convolution layers, given in advance;
step 10: construct the patch matrix Ȳ^(l) = [ȳ_{1,1,1}, …, ȳ_{N,C_l,mn}], where ȳ_{i,c,b} = Vec(y_{i,c,b}) − μ̄_{i,c,b}, μ̄_{i,c,b} is the mean of Vec(y_{i,c,b}), and y_{i,c,b} denotes the b-th (b ∈ {1,2,…,mn}) feature block of size k×k extracted from the c-th channel of Y_i^(l);
step 12: compute the principal directions V̄^(l) = [v̄_1, …, v̄_{C_{l+1}}] of Ȳ^(l), where v̄_{i″} is the i″-th eigenvector of the covariance matrix Ȳ^(l)(Ȳ^(l))^T, the corresponding eigenvalue is λ_{i″}, and λ_1 ≥ λ_2 ≥ …;
step 14: compute the feature atlas Y^(l+1) of the (l+1)-th convolution layer;
Step 15, let l=l+1, execute the above steps 10 to 14 until l=l;
step 17: encode the feature atlas F into a pattern atlas P = {P_{i,β}}_{i=1,…,N; β=1,…,B}, where P_{i,β} = Σ_{t=1}^{T} 2^{t−1}·USF(F_{i,(β−1)T+t}) is the β-th (β ∈ {1,…,B}) pattern map of the i-th sample, F_{i,t} denotes the t-th channel of the feature-map subset F_i, T denotes the number of channels involved in encoding a single pattern map, and USF(·) denotes the unit step function, which binarizes its input by comparison with 0: USF(x) = 1 if x > 0, and USF(x) = 0 otherwise;
step 18: extract the histogram features H = [H_i]_{i=1,…,N} from the pattern atlas P, where H_i = [H_{i,1}, …, H_{i,B}]^T and H_{i,β} = Qhist(P_{i,β}); Qhist(P_{i,β}) divides the pattern map P_{i,β} into Q blocks and extracts a histogram from each block, each histogram using 2^T bins, i.e. for each block the frequency with which the code values of the pattern map fall into each of the 2^T bins is counted;
step 21: compute the metric matrix M = [M_{i,j}]_{i=1,…,J; j=1,…,K}, where M_{i,j} = Σ_{d=1}^{D} (H_i(d) − H_j(d))² / (H_i(d) + H_j(d)) is the chi-square distance between the histogram feature H_i of the i-th training sample and the histogram feature H_j of the j-th test sample; here D denotes the common length of H_i and H_j, and H_i(d) and H_j(d) denote their d-th elements;
step 22: compute the class labels Id = [Id_i]_{i=1,…,K} of the samples in the test set Z: Id_i is the class label of the training sample indexed by MinIndx(M_i), where M_i denotes the i-th column vector of the metric matrix M and MinIndx(·) returns the index of the smallest element of M_i.
2. The robust image recognition method based on high-dimensional PCANet according to claim 1, wherein in said step 7 the feature atlas X^(l+1) of the (l+1)-th convolution layer is computed as follows: 7.1) project X̄^(l) onto W^(l+1): X̃^(l+1) = (W^(l+1))^T X̄^(l); 7.2) reorganize the elements of X̃^(l+1) into the feature atlas X^(l+1) = {X_i^(l+1)}_{i=1,…,N}: the entries of X̃^(l+1) belonging to the i-th sample are rearranged by mat_{m×n}(·) into the C_{l+1} channels of X_i^(l+1), the channel index being c = j % C_{l+1}; here v[a:b, c] denotes the column vector formed by rows a to b of column c, a % b denotes the remainder of a divided by b, ⌊a⌋ denotes rounding the real number a down, and mat_{m×n}(v) denotes rearranging an arbitrary column vector v ∈ R^{mn} into an m×n matrix.
3. The robust image recognition method based on high-dimensional PCANet according to claim 1 or 2, wherein in said step 14 the feature atlas Y^(l+1) of the (l+1)-th convolution layer is computed as follows: 14.1) project Ȳ^(l) onto W̄^(l+1): Ỹ^(l+1) = (W̄^(l+1))^T Ȳ^(l); 14.2) reorganize the elements of Ỹ^(l+1) into the feature atlas Y^(l+1) = {Y_i^(l+1)}_{i=1,…,N}, where the matrices belonging to the same sample are connected in the channel direction.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010147000.XA CN111488905B (en) | 2020-03-05 | 2020-03-05 | Robust image recognition method based on high-dimensional PCANet |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111488905A CN111488905A (en) | 2020-08-04 |
CN111488905B true CN111488905B (en) | 2023-07-14 |
Family
ID=71794288
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010147000.XA Active CN111488905B (en) | 2020-03-05 | 2020-03-05 | Robust image recognition method based on high-dimensional PCANet |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111488905B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104573729A (en) * | 2015-01-23 | 2015-04-29 | 东南大学 | Image classification method based on kernel principal component analysis network |
CN106778554A (en) * | 2016-12-01 | 2017-05-31 | 广西师范大学 | Cervical cell image-recognizing method based on union feature PCANet |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10713563B2 (en) * | 2017-11-27 | 2020-07-14 | Technische Universiteit Eindhoven | Object recognition using a convolutional neural network trained by principal component analysis and repeated spectral clustering |
US10747989B2 (en) * | 2018-08-21 | 2020-08-18 | Software Ag | Systems and/or methods for accelerating facial feature vector matching with supervised machine learning |
Non-Patent Citations (2)
- Li Xiaoxin, et al. "Adaptive Weberfaces for occlusion-robust face representation and recognition." IET Image Processing, 2017, Vol. 11, No. 11, pp. 964-975.
- Han Bing, et al. "Improved principal component analysis network method for aurora image classification." Journal of Xidian University, 2017, Vol. 44, No. 1, pp. 83-88.
Also Published As
Publication number | Publication date |
---|---|
CN111488905A (en) | 2020-08-04 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |