CN110633691A - Binocular in-vivo detection method based on visible light and near-infrared camera - Google Patents
- Publication number
- CN110633691A (application CN201910911025.XA)
- Authority
- CN
- China
- Prior art keywords
- correlation
- detection method
- binocular
- method based
- visible light
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a binocular in-vivo detection method based on visible light and near-infrared cameras, which belongs to the technical field of computer multimedia and comprises the following steps. S1: extract illumination-robust features: partition the images received by the VIS camera and the NIR camera into blocks and compute a histogram for each block to obtain its histogram features. S2: for each image block, learn its projection direction so as to maximize the correlation coefficient, and compute the projection magnitude and degree of correlation of the block. S3: establish a correlation confidence map and automatically remove image blocks that are useless to the anti-spoofing detection system. The invention judges whether the detected object is a real face according to the correlation it exhibits under the VIS and NIR spectra, thereby effectively resisting face spoofing attacks in their various forms, such as photos, video replay and three-dimensional face masks.
Description
Technical Field
The invention relates to the technical field of computer multimedia, in particular to a binocular in-vivo detection method based on visible light and near infrared cameras.
Background
As face recognition systems are deployed in more and more settings, the requirements on their security also increase. Attacks in which the face information of a legitimate user is forged to deceive a face recognition system occur frequently. There is therefore a need for a living-body detection method that can effectively distinguish whether a face recognition system is facing a real face or a fake one.
Living-body detection methods fall roughly into three categories, which analyze the authenticity of a face from texture information, from motion information, or from information about a specific part of the living body.
A detection algorithm based on the Bagging strategy obtains three feature vectors from the frequency-domain information of 2D Gabor features, a gray-level co-occurrence matrix (GLCM) and the Fourier transform, reduces their dimensionality with principal component analysis to select a combined feature, and feeds that feature into a Bagging classifier for discrimination.
A living-body detection method based on color-texture analysis represents an image by joint color (RGB, HSV and YCbCr) texture information extracted with an LBP descriptor, and inputs the features to an SVM classifier to distinguish real from fake.
A method based on image-quality assessment distinguishes real from fake using 25 image-quality analysis indexes (pixel-difference analysis, correlation analysis, edge-feature analysis, spectral difference, structural similarity, distortion analysis, natural-image estimation, and so on).
A living-body detection method based on visual-rhythm analysis copes effectively with video-replay attacks: it computes the horizontal and vertical visual rhythms of a video after the Fourier transform, represents them with three features (LBP, the gray-level co-occurrence matrix GLCM, and HOG), reduces their dimensionality, and then distinguishes the authenticity of the identified object with an SVM classifier and Partial Least Squares (PLS).
A living-body detection method based on an image diffusion-speed model addresses liveness detection on handheld terminals. Its principle is that, compared with a live face, a forged photo reflects light in a more uniform and more slowly diffusing way. Total-variation (TV) flow is introduced to obtain the diffusion speed; on the resulting speed map, local speed feature vectors are extracted with LSP coding and input into an SVM classifier to distinguish the authenticity of the identified object.
A face living-body detection method based on a codebook algorithm exploits the observation that noise such as banding and moiré fringes appears in forged images after resampling: it computes a corresponding video descriptor, applies conversion and pooling operations to obtain an input vector, and feeds that vector to an SVM classifier or PLS to distinguish real from fake.
A face living-body detection method based on Image Distortion Analysis (IDA) first extracts IDA feature vectors of the detected object, comprising specular reflection, blur level, image chromaticity and contrast variation, and color diversity, and then inputs the feature vectors into an SVM classifier to distinguish real from fake.
The above methods all have limitations in practice: some can only detect low-quality photo attacks, some cannot detect video-replay attacks, and some place specific requirements on the ambient illumination.
Therefore, the invention provides a face spoofing attack detection method with stronger adaptability.
Disclosure of Invention
1. Technical problem to be solved
To address the problems in the prior art, the method provided by the invention judges whether the detected object is a real face according to the correlation it exhibits under the VIS and NIR spectra. It thereby effectively resists face spoofing attacks in their various forms, such as photos, video replay and three-dimensional face masks, and is robust to changes in ambient illumination.
2. Technical scheme
In order to solve the above problems, the present invention adopts the following technical solutions.
A binocular in-vivo detection method based on visible light and near infrared cameras comprises the following steps:
s1: extracting illumination-robust features: the images received by the VIS camera and the NIR camera are each divided evenly into l parts in the vertical and horizontal directions, giving m = l × l image blocks (the larger l is, the smaller the resulting sub-blocks). A descriptor unaffected by illumination must therefore be selected, and its local descriptor must satisfy certain requirements; a local binary pattern is adopted to cope with face spoofing detection, and the histogram of each image block is obtained by counting the frequency of occurrence of each number;
s2: performing correlation analysis: histogram features of the divided blocks are computed to obtain the feature vectors of the corresponding image pairs; consistency information is then extracted for each pair of image blocks to find the most correlated factors of the two feature vectors, and for each pair of image blocks the correlation features comprise the projection magnitudes of the two feature vectors and their degree of association;
s3: establishing a correlation confidence map: a confidence map is built for each block in the training stage; the weight of each block in the final classification is adjusted automatically according to its confidence factor, and leave-one-out verification is used to detect whether the identified object is a real face.
Further, the requirements that the local descriptor in S1 must satisfy are as follows:
firstly, the living body detection method is not influenced by illumination, and can effectively process the task of living body detection under various illumination;
secondly, the pixel characteristics of the object can be described in a representative way;
thirdly, potential face and pose transformation situations can be robustly processed;
fourthly, the calculation efficiency is high.
Further, the method for coping with face spoofing detection by using the local binary pattern in S1 is as follows: in an image block, a circle with a radius of 1 is drawn around each pixel and 8 sample points are selected clockwise on the circumference; if the pixel value of a sample point is larger than that of the central point, the sample point is represented by 1, otherwise by 0, so that an 8-bit binary number is generated for each central pixel.
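The per-pixel comparison just described can be sketched as follows. This is a minimal illustration, not the patent's implementation; the strict "larger than" comparison follows the text, whereas standard LBP implementations often use "greater than or equal to":

```python
import numpy as np

def lbp_codes(block):
    """8-bit LBP code map for the interior pixels of a grayscale block.

    For each pixel, the 8 neighbours at radius 1 are visited clockwise;
    a neighbour strictly larger than the centre contributes a 1-bit,
    otherwise a 0-bit (the strict comparison follows the patent text).
    """
    block = np.asarray(block, dtype=np.int32)
    h, w = block.shape
    # Clockwise neighbour offsets starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = block[1:h - 1, 1:w - 1]
    codes = np.zeros_like(centre, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = block[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour > centre).astype(np.uint8) << np.uint8(bit)
    return codes
```

A pixel whose 8 neighbours are all brighter thus receives code 255, and a pixel in a flat region receives code 0.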
Further, the most correlated factors of the two feature vectors in S2 are found as follows: for the feature vectors extracted from the i-th image block pair, a pair of projection directions is learned by canonical correlation analysis so that the correlation coefficient between the two projected vectors is maximized, and the optimal projection directions are computed to obtain the correlation of the two feature vectors.
Further, the canonical correlation analysis can be used to correlate the two sets and learn the mapping information between them.
Further, the leave-one-out verification in S3 proceeds as follows: from all NIR and VIS features, the features of the i-th block are removed, and the correlation coefficient after removal of the i-th block is computed and maximized by canonical correlation analysis; the confidence factor of the i-th block is then calculated for the final SVM classification.
Further, the SVM used in the final classification is equipped with an RBF kernel, in which a confidence factor adjusts the distance scale.
3. Advantageous effects
Compared with the prior art, the invention has the advantages that:
according to the scheme, a binocular live body detection method based on visible light and near-infrared cameras is provided, the extracted features are guaranteed not to be influenced by illumination change by adopting an illumination robust feature extraction method, the features are more robust, and correlation analysis is performed on VIS and NIR images to obtain correlation features: VIS image characteristic projection, NIR image characteristic projection and two image correlation degrees; the feature can represent the correlation between two images, can effectively solve the problem of face spoofing attack, and finally establishes a method of a correlation confidence map, can automatically remove image blocks which have no value to an anti-spoofing detection system according to the confidence of each block, adjust the weight of the image blocks in an SVM classifier, and avoid the occurrence of the situation of manually setting an analysis area.
Drawings
Fig. 1 is an overall work flow diagram of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention; it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments, and all other embodiments obtained by those skilled in the art without any inventive work are within the scope of the present invention.
Example 1:
referring to fig. 1, a binocular in vivo detection method based on visible light and near infrared cameras includes the following steps:
s1: extracting illumination-robust features: the images received by the VIS camera and the NIR camera are each divided evenly into l parts in the vertical and horizontal directions, giving m = l × l image blocks. The larger l is, the smaller the resulting sub-blocks, which makes it easier to find the correlation between corresponding VIS and NIR image blocks and reduces the blocks' sensitivity to illumination, but also increases the amount of computation, lengthens the feature dimension, and raises the probability of overfitting. A descriptor unaffected by illumination must therefore be selected, and its local descriptor must satisfy the following requirements:
firstly, the living body detection method is not influenced by illumination, and can effectively process the task of living body detection under various illumination;
secondly, the pixel characteristics of the object can be described in a representative way;
thirdly, potential face and pose transformation situations can be robustly processed;
fourthly, the calculation efficiency is high.
The local binary pattern copes effectively with face spoofing detection. In an image block, a circle with a radius of 1 is drawn around each pixel and 8 sample points are selected clockwise on the circumference; if the pixel value of a sample point is larger than that of the central point, the sample point is represented by 1, otherwise by 0, so that an 8-bit binary number is generated for each central pixel. The histogram of the image block is obtained by counting the frequency of occurrence of each number; the histograms of the i-th image block computed from the k-th VIS and NIR image pair are denoted $V_i^k$ and $N_i^k$, respectively;
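Step S1 (block partitioning and per-block histograms) can be sketched as below. For brevity the raw pixel values stand in for a per-pixel LBP code map — an assumption made only to keep the example self-contained; any 8-bit code map can be passed in instead:

```python
import numpy as np

def block_histograms(img, l=3, bins=256):
    """Divide `img` evenly into l x l blocks (m = l*l in total) and
    return one normalised histogram per block, as in step S1.

    `img` is treated as a per-pixel code map; raw grayscale values are
    accepted here purely as a stand-in for LBP codes.
    """
    img = np.asarray(img)
    h, w = img.shape
    bh, bw = h // l, w // l              # block size; trailing pixels are dropped
    feats = []
    for r in range(l):
        for c in range(l):
            blk = img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            hist, _ = np.histogram(blk, bins=bins, range=(0, bins))
            feats.append(hist / hist.sum())   # frequency of each code value
    return np.array(feats)                    # shape (l*l, bins)
```

For the k-th VIS/NIR pair, the per-block features $V_i^k$ and $N_i^k$ would then be row i of `block_histograms(vis_img)` and `block_histograms(nir_img)`.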
s2: performing correlation analysis: histogram features are computed for each of the m divided blocks to obtain the feature vectors of the k-th image pair, which are expressed in the following form:

$V^k = \{V_1^k, V_2^k, \ldots, V_m^k\}, \qquad N^k = \{N_1^k, N_2^k, \ldots, N_m^k\}$
the two feature vectors essentially encode the local pixel density to reflect the spatial characteristics of the pixels, then, consistency information is extracted from each pair of image blocks, the two sets can be associated by adopting typical correlation analysis, so that the most relevant factors of the two feature vectors are found by utilizing the typical correlation analysis,
For the feature vectors $V_i, N_i$ extracted from the i-th image block pair, canonical correlation analysis is used to learn a pair of projection directions $\omega_{V_i}, \omega_{N_i}$ such that the correlation coefficient $\rho_i$ between the two projected vectors $\omega_{V_i}^T V_i$ and $\omega_{N_i}^T N_i$ is maximized:

$\rho_i = \dfrac{E\left[\omega_{V_i}^T V_i N_i^T \omega_{N_i}\right]}{\sqrt{E\left[\omega_{V_i}^T V_i V_i^T \omega_{V_i}\right]\, E\left[\omega_{N_i}^T N_i N_i^T \omega_{N_i}\right]}}$
in the above formula, E represents expectation, and an intra-class covariance matrix C is introducedVV,CNNInter-class covariance matrix CVNMeanwhile, in order to avoid overfitting, a regularization parameter lambda is added to all the covariance matrixes in the class, and the correlation coefficient is rewritten into the following form:
the above formula can be used for solving the maximization by using the regularized typical correlation analysis and calculating the optimal projection direction Then, the correlation of two feature vectors is obtained, and for each pair of image blocks, the correlation features include three contents: size of projectionAndand the relevance is calculated by adopting the following formula:
Denoting the projection magnitudes and the degree of association by $a_i$, $b_i$ and $r_i$ respectively, the correlation feature vector of the i-th image block can be written as $f_i = \{a_i, b_i, r_i\}$; this feature vector contains the correlation information of the VIS and NIR images. For the k-th image pair, the feature vectors of the m blocks are concatenated into a single vector, which can be expressed as $F^k = \{f_1, f_2, \ldots, f_m\}$;
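A regularized CCA for one block pair can be sketched as follows. The eigenvalue formulation and the per-sample feature matrices are standard CCA machinery used here for illustration, not details taken from the patent:

```python
import numpy as np

def regularized_cca(X, Y, lam=1e-3):
    """Learn projection directions w_v, w_n maximising the regularised
    correlation coefficient rho between X @ w_v and Y @ w_n (step S2).

    X, Y: (n_samples, d) matrices -- e.g. the VIS and NIR histograms of
    one block over the training pairs.  Returns (w_v, w_n, rho).
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cvv = Xc.T @ Xc / n + lam * np.eye(X.shape[1])  # regularised intra-class cov.
    Cnn = Yc.T @ Yc / n + lam * np.eye(Y.shape[1])
    Cvn = Xc.T @ Yc / n                              # inter-class covariance
    # Maximising rho is equivalent to the eigenproblem
    #   Cvv^-1 Cvn Cnn^-1 Cvn^T w_v = rho^2 w_v.
    M = np.linalg.solve(Cvv, Cvn) @ np.linalg.solve(Cnn, Cvn.T)
    vals, vecs = np.linalg.eig(M)
    k = int(np.argmax(vals.real))
    w_v = vecs[:, k].real
    rho = float(np.sqrt(max(vals[k].real, 0.0)))
    w_n = np.linalg.solve(Cnn, Cvn.T @ w_v)          # matching NIR direction
    nrm = np.linalg.norm(w_n)
    if nrm > 0:
        w_n = w_n / nrm
    return w_v, w_n, rho
```

The correlation features of block i would then be $a_i = |V_i \cdot w_v|$, $b_i = |N_i \cdot w_n|$ and $r_i = \rho$.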
S3: establishing a correlation confidence map: a confidence map is built for each block in the training stage, and the weight of each block in the final classification is adjusted automatically according to its confidence factor. Verification uses the leave-one-out method: from all the NIR and VIS features, the features of the i-th block are removed, and the remaining features are denoted $V_{(-i)}^k$ and $N_{(-i)}^k$. The correlation coefficient $\rho_{(-i)}$ after removal of the i-th block is then computed, by analogy with the formula of step S2, as

$\rho_{(-i)} = \dfrac{\omega_{V_{(-i)}}^T C_{V_{(-i)}N_{(-i)}}\,\omega_{N_{(-i)}}}{\sqrt{\omega_{V_{(-i)}}^T\big(C_{V_{(-i)}V_{(-i)}}+\lambda I\big)\,\omega_{V_{(-i)}}\;\omega_{N_{(-i)}}^T\big(C_{N_{(-i)}N_{(-i)}}+\lambda I\big)\,\omega_{N_{(-i)}}}}$

where $\omega_{V_{(-i)}}$ and $\omega_{N_{(-i)}}$ are the projection directions of $V_{(-i)}^k$ and $N_{(-i)}^k$, solved by canonical correlation analysis so that $\rho_{(-i)}$ is maximized; the confidence factor $c_i$ of the i-th block is then calculated from the resulting correlation coefficients for use in the final classification.
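The leave-one-out confidence map can be sketched as below. Defining the confidence factor as the drop in overall correlation when block i is removed is a plausible reading assumed for illustration, not the patent's stated formula; `cca` stands for any routine returning the maximized correlation coefficient, such as a regularized CCA:

```python
import numpy as np

def confidence_factors(V, N, cca):
    """Leave-one-out confidence map of step S3.

    V, N: (n_samples, m, d) per-block features of the training VIS/NIR
    pairs.  `cca(X, Y)` must return the maximised correlation coefficient
    for two (n_samples, features) matrices.  c_i is taken here as the
    drop in correlation when block i is removed (an assumed definition).
    """
    n, m, _ = V.shape
    rho_all = cca(V.reshape(n, -1), N.reshape(n, -1))
    c = np.empty(m)
    for i in range(m):
        keep = [j for j in range(m) if j != i]
        rho_wo = cca(V[:, keep].reshape(n, -1), N[:, keep].reshape(n, -1))
        c[i] = rho_all - rho_wo   # useless blocks get low (or negative) confidence
    return c
```

Blocks whose removal barely changes the correlation contribute little to liveness discrimination and can be down-weighted or discarded.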
The final classification is performed with an SVM equipped with an RBF kernel, in which the confidence factor $c_i$ adjusts the distance scale. For each image block the feature vector $f_i$ has been obtained; in the weighting matrix $Q$ of the kernel, the element in the i-th row and j-th column uses the confidence factor $c_t$ with subscript $t = \lfloor (i+2)/3 \rfloor$, where $\lfloor\cdot\rfloor$ rounds the bracketed number down, so that the three feature dimensions of one block share that block's confidence factor.
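One plausible form of the confidence-weighted RBF kernel is sketched below. The patent's exact kernel expression is not reproduced in this text, so the scaling scheme here — each feature dimension weighted by its block's confidence factor — is an assumption:

```python
import numpy as np

def weighted_rbf(F1, F2, c, gamma=1.0):
    """Confidence-weighted RBF kernel between two sets of concatenated
    correlation feature vectors f = {a_1, b_1, r_1, ..., a_m, b_m, r_m}.

    F1: (n1, 3m), F2: (n2, 3m), c: (m,) confidence factors.  Each feature
    dimension is weighted by the confidence of the block it belongs to --
    three consecutive dimensions per block, matching t = floor((i+2)/3)
    for 1-based i.
    """
    w = c[np.arange(F1.shape[1]) // 3]        # per-dimension block weights
    diff = F1[:, None, :] - F2[None, :, :]
    return np.exp(-gamma * np.einsum('ijk,k,ijk->ij', diff, w, diff))
```

The resulting Gram matrix can be fed to an SVM that accepts a precomputed kernel (e.g. scikit-learn's `SVC(kernel='precomputed')`).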
whether the identified object is a real face or not can be detected through SVM classification.
This technical scheme provides a binocular in-vivo detection method based on visible light and near-infrared cameras. An illumination-robust feature extraction method guarantees that the extracted features are unaffected by illumination changes, making them more robust. Correlation analysis of the VIS and NIR images yields the correlation features: the VIS image feature projection, the NIR image feature projection, and the degree of correlation of the two images. These features represent the correlation between the two images and can effectively counter face spoofing attacks. Finally, the correlation confidence map method automatically removes image blocks that are of no value to the anti-spoofing detection system and adjusts their weight in the SVM classifier, avoiding the need to set the analysis region manually.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (7)
1. A binocular in-vivo detection method based on visible light and near infrared cameras is characterized by comprising the following steps:
s1: extracting illumination-robust features: the images received by the VIS camera and the NIR camera are each divided evenly into l parts in the vertical and horizontal directions, giving m = l × l image blocks (the larger l is, the smaller the resulting sub-blocks); a descriptor unaffected by illumination must therefore be selected, its local descriptor must satisfy certain requirements, a local binary pattern is adopted to cope with face spoofing detection, and the histogram of each image block is obtained by counting the frequency of occurrence of each number;
s2: performing correlation analysis: histogram features of the divided blocks are computed to obtain the feature vectors of the corresponding image pairs; consistency information is then extracted for each pair of image blocks to find the most correlated factors of the two feature vectors, and for each pair of image blocks the correlation features comprise the projection magnitudes of the two feature vectors and their degree of association;
s3: establishing a correlation confidence map: a confidence map is built for each block in the training stage; the weight of each block in the final classification is adjusted automatically according to its confidence factor, and leave-one-out verification is used to detect whether the identified object is a real face.
2. The binocular in-vivo detection method based on the visible light and near infrared camera as claimed in claim 1, wherein: the requirements that the local descriptors in S1 have to satisfy are as follows:
firstly, the living body detection method is not influenced by illumination, and can effectively process the task of living body detection under various illumination;
secondly, the pixel characteristics of the object can be described in a representative way;
thirdly, potential face and pose transformation situations can be robustly processed;
fourthly, the calculation efficiency is high.
3. The binocular in-vivo detection method based on the visible light and near infrared camera as claimed in claim 1, wherein the method for coping with face spoofing detection by using the local binary pattern in S1 is as follows: in an image block, a circle with a radius of 1 is drawn around each pixel and 8 sample points are selected clockwise on the circumference; if the pixel value of a sample point is larger than that of the central point, the sample point is represented by 1, otherwise by 0, so that an 8-bit binary number is generated for each central pixel.
4. The binocular in-vivo detection method based on the visible light and near infrared camera as claimed in claim 1, wherein the most correlated factors of the two feature vectors in S2 are found as follows: for the feature vectors extracted from the i-th image block pair, a pair of projection directions is learned by canonical correlation analysis so that the correlation coefficient between the two projected vectors is maximized, and the optimal projection directions are computed to obtain the correlation of the two feature vectors.
5. The binocular in-vivo detection method based on the visible light and near infrared camera as claimed in claim 4, wherein: the canonical correlation analysis can be used to correlate the two sets and learn the mapping information between them.
6. The binocular in-vivo detection method based on the visible light and near infrared camera as claimed in claim 1, wherein the leave-one-out verification in S3 proceeds as follows: from all NIR and VIS features, the features of the i-th block are removed, and the correlation coefficient after removal of the i-th block is computed and maximized by canonical correlation analysis; the confidence factor of the i-th block is then calculated for the final SVM classification.
7. The binocular in-vivo detection method based on the visible light and near infrared camera as claimed in claim 6, wherein the SVM adopted in the final classification is equipped with an RBF kernel, and a confidence factor adjusts the distance scale in the RBF kernel.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910911025.XA CN110633691A (en) | 2019-09-25 | 2019-09-25 | Binocular in-vivo detection method based on visible light and near-infrared camera |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110633691A true CN110633691A (en) | 2019-12-31 |
Family
ID=68973902
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910911025.XA Pending CN110633691A (en) | 2019-09-25 | 2019-09-25 | Binocular in-vivo detection method based on visible light and near-infrared camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110633691A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107862299A (en) * | 2017-11-28 | 2018-03-30 | 电子科技大学 | A kind of living body faces detection method based on near-infrared Yu visible ray binocular camera |
CN107918773A (en) * | 2017-12-13 | 2018-04-17 | 汉王科技股份有限公司 | A kind of human face in-vivo detection method, device and electronic equipment |
CN108549886A (en) * | 2018-06-29 | 2018-09-18 | 汉王科技股份有限公司 | A kind of human face in-vivo detection method and device |
CN110119695A (en) * | 2019-04-25 | 2019-08-13 | 江苏大学 | A kind of iris activity test method based on Fusion Features and machine learning |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113723243A (en) * | 2021-08-20 | 2021-11-30 | 南京华图信息技术有限公司 | Thermal infrared image face recognition method for wearing mask and application |
CN113723243B (en) * | 2021-08-20 | 2024-05-17 | 南京华图信息技术有限公司 | Face recognition method of thermal infrared image of wearing mask and application |
CN115205939A (en) * | 2022-07-14 | 2022-10-18 | 北京百度网讯科技有限公司 | Face living body detection model training method and device, electronic equipment and storage medium |
CN115205939B (en) * | 2022-07-14 | 2023-07-25 | 北京百度网讯科技有限公司 | Training method and device for human face living body detection model, electronic equipment and storage medium |
CN115578797A (en) * | 2022-09-30 | 2023-01-06 | 北京百度网讯科技有限公司 | Model training method, image recognition device and electronic equipment |
CN115578797B (en) * | 2022-09-30 | 2023-08-29 | 北京百度网讯科技有限公司 | Model training method, image recognition device and electronic equipment |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20191231 |