AU2020101825A4 - Ear Recognition Method Based On Invariant Features - Google Patents

Ear Recognition Method Based On Invariant Features

Info

Publication number
AU2020101825A4
AU2020101825A4
Authority
AU
Australia
Prior art keywords
sift
gabor
feature
image
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2020101825A
Inventor
Ying TIAN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology Liaoning USTL
Original Assignee
University of Science and Technology Liaoning USTL
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology Liaoning USTL filed Critical University of Science and Technology Liaoning USTL
Priority to AU2020101825A priority Critical patent/AU2020101825A4/en
Application granted granted Critical
Publication of AU2020101825A4 publication Critical patent/AU2020101825A4/en
Ceased legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/15Correlation function computation including computation of convolution operations
    • G06F17/156Correlation function computation including computation of convolution operations using a domain transform, e.g. Fourier transform, polynomial transform, number theoretic transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/42Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Algebra (AREA)
  • Image Analysis (AREA)

Abstract

In this paper, we compare the performance of different methods in image recognition with respect to image compression, illumination invariance and geometric description. In all of the above situations, the SIFT descriptor has the best performance. [The abstract page reproduces FIG. 4, a CMS curve for SIFT, Gabor-SIFT and SIFT+Geo, and FIG. 5, a recognition-rate comparison of the SIFT, Gabor-SIFT, SIFT+Gabor-SIFT and SIFT+Geo methods.]

Description

[FIG. 4: recognition rate (80-100%) versus rank (1-26) for SIFT, Gabor-SIFT and SIFT+Geo]
[FIG. 5: recognition rates (80-100%) of the SIFT, Gabor-SIFT, SIFT+Gabor-SIFT and SIFT+Geo methods]
Editorial Note 2020101825: There are only seven pages of the description.
Ear Recognition Method Based On Invariant Features

Technical field

The invention relates to the field of biometric image recognition, and in particular to an ear recognition method based on invariant features.

Background technology

Accurately extracting feature vectors that fully represent the ear is a key step in ear recognition. Existing ear feature extraction methods fall into two categories: algebraic methods based on global features and geometric methods based on discrete features. Although the physical meaning of an algebraic feature vector is not intuitive, it reflects the nature of the image and plays an important role in pattern classification; PCA, ICA and Fisher discriminant analysis are the dominant methods of this kind, and they differ considerably in calculation speed and in how the ear parameters are computed. Geometric methods extract discrete features, which usually include the edge curves, feature points and angles formed by the ear contour or inner sulcus, for example using the inner ear edge. However, the recognition rate then rests on the accuracy of feature-point and feature-edge extraction. Because edge and point features are sensitive to direction and scale, errors usually occur when the scale or rotation angle of the image changes, and the extracted edges will also contain errors.

Summary of the invention

In order to effectively solve the above problems, an object of the present invention is to provide a human ear recognition method based on invariant features.
Although the SIFT method has the advantages mentioned above, the dimension of its feature descriptor is high, which increases the computational load. Scholars have therefore extended the SIFT operator. At present there are two extension methods in the literature; both retain the first three steps of the operator, redesign the descriptor in the fourth step, and use principal component analysis. Creating a PCA-SIFT descriptor consists of the following two steps. The first step is to generate a PCA-SIFT projection matrix offline. This matrix is generated in advance and needs to be calculated only once and saved. A series of representative images is selected and all key points in these images are detected. An image patch of 41 x 41 pixels is then taken around each key point, and its vertical and horizontal gradients are calculated to form a vector of size 39 x 39 x 2 = 3042. These vectors are stacked into a matrix A of size K x 3042, where K is the number of detected key points. The eigenvalues and eigenvectors of the covariance matrix of A are calculated; the first n eigenvectors are selected, and the projection matrix is the n x 3042 matrix composed of these eigenvectors. Here n can be a fixed value set from experience or chosen dynamically from the eigenvalues; the matrix is then stored. Step 2: create descriptors.
A 41 x 41 image patch is extracted around a key point at its detected scale and rotated to its main direction. The 39 x 39 horizontal and vertical gradients are calculated to form a 3042-dimensional vector, which is multiplied by the precomputed n x 3042 projection matrix. In this way an n-dimensional PCA-SIFT descriptor is generated. The advantage of PCA-SIFT is that the descriptor dimension is variable (20 dimensions or even fewer are recommended in the literature) while the invariance of the SIFT method is retained, which greatly reduces calculation time. In recent years, multi-scale and multi-directional Gabor wavelet transforms have gradually become one of the mainstream approaches. This is mainly because the Gabor wavelet models well the receptive-field profile of single cells in the cerebral cortex and captures salient visual attributes; in particular, it can extract multi-scale and multi-directional spatial frequency-domain features from specific areas of an image, magnifying gray-scale changes like a microscope, which means it obtains the best joint localization in both the time and frequency domains. The 2D Gabor transform was first applied to the field of computer vision, and later to fingerprint recognition, palmprint recognition and many other fields, especially face recognition. Because Gabor decomposition greatly increases the dimensionality of the data, especially when the image is large, the dimension must be reduced to avoid the curse of dimensionality. The elastic graph matching method applies the Gabor transform only at some key feature points of the image and uses the transform coefficients as the attributes of the feature points for matching; however, this method requires high accuracy in feature selection and location.
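The two PCA-SIFT steps above can be sketched in a few lines of numpy. This is a minimal illustration, not the patented implementation: the random "patches" stand in for real gradient windows extracted around detected key points, and the function names are hypothetical.

```python
import numpy as np

def build_projection_matrix(patches, n_components=20):
    """Offline step: PCA projection matrix from K gradient vectors.

    patches: (K, 3042) array -- vertical and horizontal gradients of a
    39 x 39 window around each key point (39 * 39 * 2 = 3042).
    The rows of Vt are the eigenvectors of the covariance matrix of the
    centred data, ordered by decreasing eigenvalue, so the SVD yields
    the same projection matrix as the eigendecomposition in the text
    (but much more cheaply when K < 3042). Computed once and saved.
    """
    A = patches - patches.mean(axis=0)
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:n_components]                     # (n, 3042)

def pca_sift_descriptor(gradient_vector, projection):
    """Online step: project one 3042-d gradient vector to n dimensions."""
    return projection @ gradient_vector

rng = np.random.default_rng(0)
patches = rng.standard_normal((50, 3042))        # 50 toy "key points"
P = build_projection_matrix(patches)
desc = pca_sift_descriptor(patches[0], P)
print(P.shape, desc.shape)                       # (20, 3042) (20,)
```

The projection matrix is built once offline; at run time each key point costs only one matrix-vector product.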
Since the SIFT operator can accurately locate key feature points, the stable feature points extracted by the SIFT method can be used directly. Then, taking each feature point as the center, the multi-scale and multi-directional Gabor transform values can be calculated as the local feature descriptor of that point. This not only shortens the dimension of the feature descriptor but also inherits the salient features of Gabor, obtaining the best localization in both the time and frequency domains. The two-dimensional Gabor wavelet transform is a powerful tool for multi-scale representation and analysis of images. As the only function that attains the lower bound of the joint uncertainty in the spatial and frequency domains, the Gabor function is often used as the wavelet basis. The Gabor wavelet transform represents or approximates a signal by convolving it with a set of filter functions. A commonly used form of the kernel (reconstructed here from the description of its real and imaginary parts) is

psi_{u,v}(z) = (||k_{u,v}||^2 / sigma^2) exp(-||k_{u,v}||^2 ||z||^2 / (2 sigma^2)) [exp(i k_{u,v} . z) - exp(-sigma^2 / 2)]

where z = (x, y) is the image coordinate of the given position, k_{u,v} is the center frequency of the filter, sigma is the Gaussian standard deviation, v is the scale parameter and u is the direction parameter. The parameter sigma controls the width of the Gaussian window relative to the wavelength and direction of the oscillatory part, that is, the number of oscillations contained in the Gaussian envelope. The first term in the square brackets is a complex plane wave, which makes the oscillatory part of the Gabor kernel a cosine plane wave in its real part and a sine plane wave in its imaginary part. Because the cosine plane wave is even-symmetric about the center of the Gaussian window, its integral is not zero within the support of the Gaussian envelope; the sine plane wave is odd-symmetric about the center of the window, so its integral is zero within that support.
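The kernel described above can be written down directly. The sketch below uses assumed values for sigma and the grid size and only verifies the stated symmetries: the real part is an even (cosine) wave under the Gaussian envelope and the imaginary part is an odd (sine) wave.

```python
import numpy as np

def gabor_kernel(size, freq, theta, sigma=np.pi):
    """2D Gabor kernel with the DC-compensation term exp(-sigma^2/2).

    freq  : magnitude of the centre frequency k of the plane wave
    theta : orientation of the wave vector
    sigma : Gaussian window width (oscillations under the envelope)
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    kx, ky = freq * np.cos(theta), freq * np.sin(theta)
    envelope = (freq ** 2 / sigma ** 2) * np.exp(
        -(freq ** 2) * (x ** 2 + y ** 2) / (2 * sigma ** 2))
    # complex plane wave minus the DC-compensation constant, so the
    # kernel response to a constant (DC) image is approximately zero
    wave = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma ** 2 / 2)
    return envelope * wave

k = gabor_kernel(31, freq=np.pi / 2, theta=0.0)
# real part even (cosine), imaginary part odd (sine) about the centre
print(np.allclose(k.real, k.real[::-1, ::-1]))    # True
print(np.allclose(k.imag, -k.imag[::-1, ::-1]))   # True
```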
In order to eliminate the influence of the DC component of the image on the two-dimensional Gabor wavelet transform, the term exp(-sigma^2/2) is subtracted from the complex plane wave. This makes the 2D Gabor filter a complex-valued function whose real part is a Gaussian-windowed cosine wave and whose imaginary part is a Gaussian-windowed sine wave. All Gabor kernels defined in this way have similar shapes but different directions and sizes. The Gabor transform of a two-dimensional image at a given point is defined by convolution of the image with the kernel. For a DC image, i.e. when the gray values of all pixels are equal, the properties of the 2D Gabor filter give a zero response. The filter has good frequency selectivity and good spatial resolution in the Gabor domain, which determines its ability to represent signals. A two-dimensional Gabor wavelet family is a group of filters generated from the 2D Gabor filter function by scaling and rotation. The selection of its parameters is usually considered in frequency space: in order to sample the whole frequency domain of an image, a Gabor filter bank with multiple center frequencies and directions can be used to describe the image. Different choices of the two-dimensional Gabor wavelets correspond to different sampling modes in frequency and in direction space, respectively. The half-peak bandwidth, expressed in octaves, determines the relation between sigma and the center frequency; the filter parameters are selected on the basis of neurophysiological experimental data. In principle the frequency can take any value from zero to infinity over the whole frequency space.
Because the actual frequency content of an image occupies a limited range, the filter parameters need only be selected within a small range for the local characteristics of the image. Considering the symmetry of the Gabor filter, the direction parameter only needs to cover half of the full circle, from 0 to pi. The invention adopts 12 Gabor filters composed of 3 central frequencies and 4 directions.
The parameter values are set accordingly. The Gabor convolution actually produces a complex response composed of a real part and an imaginary part. Near an edge, the real and imaginary parts of the Gabor transform oscillate instead of producing a smooth peak response, which is not conducive to matching in the recognition phase. The general remedy is to give up the linearity of the Gabor transform itself and retain only the amplitude of the Gabor response, i.e. the square root of the sum of squares of the real and imaginary parts. The amplitude reflects the intensity of the local energy spectrum of the image, which can also be understood as the edge intensity in a specific direction, and it is smooth near real edges, which is conducive to recognition. The figure shows the 12 different amplitude maps obtained by computing each Gabor transform pixel by pixel for a human ear image. By the above method, 12 amplitude features are calculated at each image position; they reflect the energy distribution in the frequency domain of the local region centered at that position. These 12 amplitude features cascaded together are called a jet, abbreviated J; that is, the jet at position (x, y) of image I consists of the 12 amplitudes at that position. If the jets of all pixel positions are cascaded, the Gabor feature representation of the input image I is obtained. For a 64 x 64 image this Gabor feature has dimension 64 x 64 x 12 = 49152, and if more scales and directions are selected the dimension grows even larger. It is very difficult to classify and recognize such high-dimensional feature vectors directly, so dimensionality reduction is generally applied; however, dimensionality reduction not only causes information loss, but filtering the whole original image with such a large Gabor bank in advance also increases the system overhead.
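A jet at a single pixel can be computed without filtering the whole image, since convolution evaluated at one location reduces to an inner product of the kernel with the surrounding patch. The 3 frequencies and 4 orientations below are illustrative values, not necessarily those of the patent.

```python
import numpy as np

def gabor_kernel(size, freq, theta, sigma=np.pi):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    env = (freq ** 2 / sigma ** 2) * np.exp(
        -(freq ** 2) * (x ** 2 + y ** 2) / (2 * sigma ** 2))
    wave = np.exp(1j * (freq * np.cos(theta) * x +
                        freq * np.sin(theta) * y)) - np.exp(-sigma ** 2 / 2)
    return env * wave

def jet(image, row, col, size=31):
    """12 Gabor amplitudes (3 frequencies x 4 directions) at one pixel.

    Each entry is |sum(patch * kernel)|: the magnitude of the complex
    Gabor response at (row, col), as described in the text.
    """
    half = size // 2
    patch = image[row - half:row + half + 1, col - half:col + half + 1]
    freqs = (np.pi / 2, np.pi / 4, np.pi / 8)            # 3 frequencies
    thetas = (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)    # 4 directions
    return np.array([abs(np.sum(patch * gabor_kernel(size, f, t)))
                     for f in freqs for t in thetas])

rng = np.random.default_rng(1)
img = rng.random((64, 64))
J = jet(img, 32, 32)
print(J.shape)   # (12,)
```

Cascading the jets of all 64 x 64 positions would give exactly the 49152-dimensional representation mentioned above; computing jets only at key points avoids that cost.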
In order to avoid the above problems, the invention first extracts key feature points from the ear image, so that Gabor filtering need not be performed on the original image pixel by pixel: the Gabor transform is applied only at the key-point positions, the jet of each feature point is taken as its feature description vector, and matching recognition is then carried out, which greatly simplifies the computation. Considering the excellent time-frequency characteristics of the Gabor transform and the robustness of SIFT, we combine SIFT with the Gabor transform to construct a new feature extraction and description method, Gabor-SIFT. First SIFT is used to locate the key points accurately, and then the Gabor function is used to extract the feature attributes of the key points.
1) Feature point detection: as in the first two steps of the SIFT method, extremum points are found in scale space and the key points are then accurately located.
2) Finding the main direction of a key point: the gradient direction and magnitude of each pixel are calculated, samples are taken in a neighborhood window centered on the key point, and a histogram is used to count the gradient directions of the neighborhood pixels. The peak of the histogram represents the main direction of the neighborhood gradient at the key point and is taken as the direction of the key point.
3) Finding the feature descriptor of a key point: the coordinate axes are rotated to the main direction of the key point, and each key point extracted by the SIFT method is then convolved with the 12 Gabor filters, so that each key point yields 12 complex outputs. The amplitudes computed from these 12 complex outputs reflect the energy distribution in the local area centered at the key point, and the 12 amplitude features are cascaded as the energy-spectrum feature at the key point. To speed up the convolution, the invention adopts the fast Fourier transform.
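Steps 1-3 can be sketched as follows for key points that SIFT has already detected and oriented (detection and orientation assignment are not reproduced here). One FFT-domain product per filter replaces spatial convolution, as the text suggests; filter parameters are illustrative.

```python
import numpy as np

def gabor_kernel(size, freq, theta, sigma=np.pi):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    env = (freq ** 2 / sigma ** 2) * np.exp(
        -(freq ** 2) * (x ** 2 + y ** 2) / (2 * sigma ** 2))
    return env * (np.exp(1j * (freq * np.cos(theta) * x +
                               freq * np.sin(theta) * y))
                  - np.exp(-sigma ** 2 / 2))

def gabor_sift_descriptors(image, keypoints, ksize=31):
    """Jet descriptors at given key points via FFT convolution.

    keypoints: list of (row, col) positions from the SIFT detector.
    One FFT per filter produces a full amplitude map; the jet of each
    key point is the 12-vector of amplitudes sampled at its position.
    (Circular convolution; adequate away from the image borders.)
    """
    F = np.fft.fft2(image)
    H, W = image.shape
    responses = []
    for freq in (np.pi / 2, np.pi / 4, np.pi / 8):
        for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            pad = np.zeros((H, W), dtype=complex)
            pad[:ksize, :ksize] = gabor_kernel(ksize, freq, theta)
            responses.append(np.abs(np.fft.ifft2(F * np.fft.fft2(pad))))
    resp = np.stack(responses)                         # (12, H, W)
    return np.array([resp[:, r, c] for r, c in keypoints])  # (K, 12)

rng = np.random.default_rng(2)
img = rng.random((64, 64))
descs = gabor_sift_descriptors(img, [(20, 20), (40, 33)])
print(descs.shape)   # (2, 12)
```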
To sum up, compared with common methods the invention has the following advantages. The present invention introduces SIFT, a geometric feature extraction method and the Gabor wavelet transform, and proposes two new ear recognition methods based on SIFT. Gabor-SIFT features describe the local structural information of the image; although they have no obvious visual significance like corner and edge features, they estimate scale and direction. Experimental results show that the feature extraction method based on Gabor-SIFT can not only effectively solve the problem of automatic ear recognition from static ear images, but is also strongly robust and insensitive to changes in image illumination conditions and rotation angle.
Description of drawings
Fig. 1: relationship between matching accuracy, threshold tN and matching points.
Fig. 2: relationship between recognition rate and weight.
Fig. 3: CMS curve on image library II.
Fig. 4: CMS curve on image library III.
Fig. 5: recognition rates of the different methods.
Mode of implementation
The experiments of the invention use image library II and image library III. 200 images are selected from image library II and 150 images from image library III, 350 images in total, as a small-scale sample library. The main purpose is to verify the robustness of the method to changes in image illumination, rotation angle and scale, so the images need not be normalized in advance. To test the performance of the proposed ear recognition method, two experiments are carried out. The experiment on image library II studies the effectiveness of the method in a controlled environment; the experiment on image library III verifies the robustness of the method when the illumination conditions and rotation angles of the ear images change. The whole experimental process is fully automatic. The method of the invention requires the following parameters to be set in advance; the experimental data used in parameter determination are the averages of the experimental results on image library II and image library III. 1) Principal component analysis. For the classical principal component analysis method, we take a 120-dimensional feature vector, which retains 98.5% of the information, and use the Euclidean distance to calculate similarity. The first two images in image library II are used as training samples and the last one as a test sample; the first two images in image library III are used as training samples and the last two as test samples. 2) SIFT, Gabor-SIFT and the fusion method. The threshold tN and the number of matching point pairs PM need to be set for the matching stage of all three methods. The distribution of the number of matching point pairs between two images for different values of tN is shown in Fig. 1. The threshold tN = 0.7 is selected for all three methods.
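The roles of the threshold tN and the matched-pair count PM can be illustrated with a standard nearest-neighbour distance-ratio test; this is a common reading of such a matching rule, and the exact criterion used in the patent's experiments may differ.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, t_n=0.7):
    """Ratio-test matching: a key point in A matches its nearest
    neighbour in B only when the nearest distance is below t_n times
    the second-nearest distance. Returns matched index pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < t_n * dists[k]:
            matches.append((i, j))
    return matches

def images_match(desc_a, desc_b, p_m=2, t_n=0.7):
    """Decision rule from the text: success when more than p_m pairs match."""
    return len(match_descriptors(desc_a, desc_b, t_n)) > p_m

# toy demo: B contains noisy copies of A's descriptors plus distractors
rng = np.random.default_rng(3)
a = rng.random((10, 128))
b = np.vstack([a + 0.01 * rng.random((10, 128)), rng.random((5, 128))])
print(images_match(a, b))   # True
```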
The table shows the experimental results when tN = 0.7. If more than two point pairs match, the two images are considered matched successfully. The experimental data in Fig. 1 are obtained with the standard SIFT method, taking image library III as an example. With PM = 2, different weights w have different influences on the recognition rate; w = 0 and w = 1 correspond to recognition using the global geometric features and the SIFT descriptors separately. It can be seen that the SIFT feature descriptor, having local character, depends only on the local area around the feature point, so in the recognition process it shows more stable behavior and a higher recognition rate. The geometric features, having global character, are easily affected by factors such as edge detection errors and inaccurate positioning of special points on the edge, so their recognition rate is not high and they can only serve as auxiliary features with a smaller weight; an appropriate fusion of the two can further improve the recognition rate. Based on these results, w = 0.7 is chosen. Compared with PCA (120-d) and standard SIFT (128-d), the two methods we propose are effective for image recognition: Gabor-SIFT has the same Rank-1 recognition rate as PCA and SIFT, reaches 100% at Rank 5 faster than PCA, and the fusion of global and local features gives the highest recognition rate. The Gabor-SIFT method is also faster than the other methods because of the small dimension of its feature vectors. Table 5.2 compares the matching time and recognition rate of the two ear recognition methods proposed in the present invention with the PCA method and the standard SIFT method on image library II.
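The weighting scheme can be made concrete with a short sketch. The score values and the `identify` helper are hypothetical; only the w = 0.7 weighting and the roles of w = 0 and w = 1 come from the text.

```python
def fused_similarity(s_sift, s_geo, w=0.7):
    """Weighted fusion of local SIFT and global geometric similarity.

    w = 1 uses the SIFT score alone, w = 0 the geometric score alone;
    the text settles on w = 0.7, giving the less stable global
    geometric features the smaller weight.
    """
    return w * s_sift + (1 - w) * s_geo

def identify(probe_scores, w=0.7):
    """Rank gallery identities by fused score; probe_scores is a list
    of (gallery_id, s_sift, s_geo) triples (hypothetical format)."""
    ranked = sorted(probe_scores,
                    key=lambda t: fused_similarity(t[1], t[2], w),
                    reverse=True)
    return [g for g, _, _ in ranked]

scores = [("A", 0.9, 0.6), ("B", 0.7, 0.9), ("C", 0.5, 0.5)]
print(identify(scores))   # ['A', 'B', 'C']
```

With w = 0 the ranking would follow the geometric scores alone, illustrating why the weight shifts the outcome toward the more reliable local features.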
In the process of feature extraction, although the Gabor transform itself is somewhat slow, it is applied only at the key points, and only about 30 key points are extracted from an ear, so the feature vectors are extracted very quickly and the extraction times of the methods hardly differ. It can be seen from the table that the matching time of the Gabor-SIFT method is much shorter than that of the other methods. Experiment 2 tests the robustness of the Gabor-SIFT method and of the global-local feature fusion method to changes in illumination conditions and rotation angle. The experiments are carried out on image library III and show that all the methods perform somewhat worse than in Experiment 1. This is mainly because the images in Experiment 1 are relatively ideal: there is little difference between the two images of a subject, only slight illumination changes, so the recognition effect is better. In Experiment 2 there are changes in illumination, scaling, translation and rotation, so the recognition rate naturally decreases compared with Experiment 1. The recognition rate of the standard PCA method is greatly affected by image rotation, illumination and scale changes and drops below 50%, whereas the SIFT-based and Gabor-SIFT-based methods and the fusion method still maintain high recognition rates; a comparison with the PCA method is therefore not meaningful. Fig. 4 shows the CMS (cumulative match score) curves of the two proposed methods and the standard SIFT method on the two image databases. The Rank-1 recognition rate is very high; in particular, the fusion of SIFT features and geometric features reaches a recognition rate of nearly 95%, and the three methods all reach 100% at around Rank 23.
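The CMS curve referred to above is the standard cumulative match characteristic. A minimal sketch of how such a curve is computed from ranked candidate lists (toy data, not the patent's results):

```python
import numpy as np

def cms_curve(rank_lists, true_ids):
    """Cumulative Match Score: fraction of probes whose true identity
    appears within the top-k candidates, for k = 1..gallery size."""
    n = len(rank_lists[0])
    hits = np.zeros(n)
    for ranked, truth in zip(rank_lists, true_ids):
        hits[ranked.index(truth):] += 1      # hit at this rank and beyond
    return hits / len(rank_lists)

# two probes against a gallery of three identities
curve = cms_curve([["A", "B", "C"], ["C", "B", "A"]], ["A", "B"])
print(curve)   # rank-1 rate 0.5, rank-2 onward 1.0
```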
The experimental results of Experiments 1 and 2 show that although the images in image library III vary greatly, these variations have little impact on the classification results, mainly because the extracted feature vectors are invariant to scale, translation and rotation; scaling, translating or rotating the ear image therefore does not affect the feature vectors or the recognition results. It can be seen that the method is strongly robust to changes in illumination conditions and rotation angle in ear recognition. In the experiments it was also found that if the 128-dimensional feature vector obtained by the standard SIFT method is appropriately fused with the 12-dimensional feature vector obtained by the Gabor-SIFT method, the recognition rate improves further. For each detected key point, we normalize and concatenate the two feature vectors to build a 140-dimensional feature vector composed of the SIFT descriptor and the Gabor-SIFT descriptor, and again use the Euclidean distance for matching recognition. Experiments on the two image databases above show recognition rate improvements of varying degrees; the average recognition rates are compared in Figure 5, from which it can be seen that the fusion method based on global geometric features and local SIFT features has the best recognition rate and the strongest robustness. Fig. 6 shows a group of correct matches on image database III using the SIFT and geometric feature fusion method: the rotation angle changes between the two images in Fig. 6A, the illumination changes in Fig. 6B, and the resolution changes in Fig. 6C. The results show that the recognition is not affected by the resolution change. Fig. 7 shows a set of correct matches on image library III using the Gabor-SIFT method.
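The 140-dimensional fused vector described above can be built directly. The per-vector L2 normalisation below is an assumption; the text says "normalize and concatenate" without specifying the norm.

```python
import numpy as np

def fused_descriptor(sift_desc, gabor_jet):
    """Normalise and concatenate a 128-d SIFT descriptor with a 12-d
    Gabor-SIFT jet into the 140-d vector used for Euclidean matching."""
    s = sift_desc / (np.linalg.norm(sift_desc) + 1e-12)
    g = gabor_jet / (np.linalg.norm(gabor_jet) + 1e-12)
    return np.concatenate([s, g])        # 128 + 12 = 140 dimensions

rng = np.random.default_rng(4)
v = fused_descriptor(rng.random(128), rng.random(12))
print(v.shape)   # (140,)
```

Normalising each part separately keeps the 12 jet components from being swamped by the 128 SIFT components when Euclidean distances are compared.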
Although the two methods achieve high recognition rates on the selected image databases, their false rejection rates and equal error rates are not low; that is, the intra-class matching error rate is high. Fig. 8 shows the average ROC curve of the method on the two image databases; the average equal error rate of the method is 2.9%. These errors are mainly due to differences in the depth (out-of-plane) rotation between the two images.

Claims (2)

  1. Editorial Note 2020101825: There is only one page of the claims.
    Right-claiming document 1. An ear recognition method based on invariant features, comprising the following two steps for creating a PCA-SIFT descriptor: Step 1: offline generation of a PCA-SIFT projection matrix. This matrix is generated in advance and needs to be calculated only once and saved. The generation method is as follows: select a series of representative images and detect all key points of these images; then select a 41 x 41 pixel image patch around each key point and calculate its vertical and horizontal gradients to form a vector of size 39 x 39 x 2 = 3042; these vectors form a matrix A of size K x 3042, where K is the number of detected key points. First, the eigenvalues and eigenvectors of the covariance matrix of matrix A are calculated; second, the first n eigenvectors are selected, and the projection matrix is the n x 3042 matrix composed of these eigenvectors; n can be a fixed value set from experience or chosen dynamically from the eigenvalues, and the matrix is stored. Step 2: establish a descriptor: extract a 41 x 41 image patch around a key point at a given scale, rotate it to its main direction, calculate the 39 x 39 horizontal and vertical gradients to form a 3042-dimensional vector, and multiply this vector by the precomputed n x 3042 projection matrix to generate an n-dimensional PCA-SIFT descriptor.
  2. The ear recognition method based on invariant features according to claim 1, characterized in that the key feature points can be accurately located, so the stable feature points extracted by the SIFT method can be used; the multi-scale and multi-directional Gabor transform values are then calculated as the local feature descriptor of the feature points, which shortens the dimension of the feature descriptors and retains Gabor's remarkable property of obtaining the best localization in both the time and frequency domains.
AU2020101825A 2020-08-14 2020-08-14 Ear Recognition Method Based On Invariant Features Ceased AU2020101825A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2020101825A AU2020101825A4 (en) 2020-08-14 2020-08-14 Ear Recognition Method Based On Invariant Features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2020101825A AU2020101825A4 (en) 2020-08-14 2020-08-14 Ear Recognition Method Based On Invariant Features

Publications (1)

Publication Number Publication Date
AU2020101825A4 true AU2020101825A4 (en) 2020-09-24

Family

ID=72513295

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2020101825A Ceased AU2020101825A4 (en) 2020-08-14 2020-08-14 Ear Recognition Method Based On Invariant Features

Country Status (1)

Country Link
AU (1) AU2020101825A4 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116824168A (en) * 2023-08-29 2023-09-29 青岛市中医医院(青岛市海慈医院、青岛市康复医学研究所) Ear CT feature extraction method based on image processing
CN116824168B (en) * 2023-08-29 2023-11-21 青岛市中医医院(青岛市海慈医院、青岛市康复医学研究所) Ear CT feature extraction method based on image processing

Similar Documents

Publication Publication Date Title
Sifre et al. Rigid-motion scattering for texture classification
Wang et al. Global ridge orientation modeling for partial fingerprint identification
Li et al. Overview of principal component analysis algorithm
CN103136520B (en) The form fit of Based PC A-SC algorithm and target identification method
Wang et al. Vibration mode shape recognition using image processing
Zhang et al. Comparison of wavelet, Gabor and curvelet transform for face recognition
CN112001257A (en) SAR image target recognition method and device based on sparse representation and cascade dictionary
Hwa Kim et al. Multi-resolution shape analysis via non-euclidean wavelets: Applications to mesh segmentation and surface alignment problems
Bronstein et al. Making laplacians commute
AU2020101825A4 (en) Ear Recognition Method Based On Invariant Features
CN104008389A (en) Object recognition method with combination of Gabor wavelet and SVM
Dash et al. Non-redundant stockwell transform based feature extraction for handwritten digit recognition
Shao et al. A multi-scale and multi-orientation image retrieval method based on rotation-invariant texture features
Deng et al. Expression-robust 3D face recognition based on feature-level fusion and feature-region fusion
Retsinas et al. Isolated character recognition using projections of oriented gradients
Teuner et al. Orientation-and scale-invariant recognition of textures in multi-object scenes
Kautkar et al. Face recognition based on ridgelet transforms
Mehrdad et al. 3D object retrieval based on histogram of local orientation using one-shot score support vector machine
US8953875B2 (en) Multiscale modulus filter bank and applications to pattern detection, clustering, classification and registration
Limberger et al. Curvature-based spectral signatures for non-rigid shape retrieval
Mude et al. Gabor filter for accurate iris segmentation analysis
Zheng et al. Finger vein recognition based on PCA and sparse representation
Sharma et al. Linearized kernel representation learning from video tensors by exploiting manifold geometry for gesture recognition
Hahmann et al. Model interpolation for eye localization using the Discriminative Generalized Hough Transform
JS et al. Feature Reduction and Optimization using Learnt Kernel Matrix

Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry