CN110188646B - Human ear identification method based on fusion of gradient direction histogram and local binary pattern - Google Patents
- Publication number
- CN110188646B (granted publication of application CN201910433620.7A)
- Authority
- CN
- China
- Prior art keywords
- human ear
- image
- value
- pixel
- binary pattern
- Prior art date
- Legal status: Active (assumed; Google has not performed a legal analysis and makes no representation as to accuracy)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
- G06V10/507—Summing image-intensity values; Histogram projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/467—Encoded features or binary features, e.g. local binary patterns [LBP]
Abstract
The invention discloses a human ear identification method based on fusion of the gradient direction histogram and the local binary pattern, which addresses the low recognition rate obtained from human ear images. The method first extracts the gradient direction histogram of the image and reduces its dimensionality with principal component analysis, then extracts the texture features of the image with the local binary pattern, fuses the two features, and finally classifies with a minimum distance classifier. By fusing multiple features, the invention improves the recognition rate of human ear recognition and is both practical and effective.
Description
Technical Field
The invention relates to a human ear identification method based on fusion of the gradient direction histogram and the local binary pattern, and belongs to the interdisciplinary field of biometric recognition, deep learning, and artificial intelligence.
Background
As a newer biometric recognition technology, human ear recognition has drawn increasing attention from researchers at home and abroad in recent years, in both its theory and its applications, and it has important theoretical significance and practical application value.
Human ear recognition performs feature-based identification with human ear images as the research object. It can serve as a useful supplement to other biometric technologies or be applied on its own for personal identification. Among biometric identification technologies, ear recognition has many advantages: ear images are small, so the computational load is low; the color distribution of the outer ear is uniform, so little information is lost when converting to grayscale; the ear is unaffected by changes in facial expression; and recognition can be performed unobtrusively.
Current ear recognition methods fall into two categories according to the features they extract. The first category is based on geometric features, constructed by locating key points on the ear contour and its internal structures; these methods are easily affected by illumination and imaging angle and lack robustness. The second category is based on algebraic features, such as principal component analysis, invariant-moment methods, and wavelet-transform methods; these achieve satisfactory results when the ear pose varies little and the image quality is good. However, when the rotation angle of the ear changes, its two-dimensional image deforms considerably and the recognition rate of conventional methods drops sharply, so more cost-effective and accurate ear recognition methods still demand substantial research.
Disclosure of Invention
The technical problem is as follows: the invention aims to solve the technical problem of how to perform human ear recognition on an input human ear image by using a minimum distance classifier so as to improve the training speed and accuracy of human ear recognition.
The technical scheme is as follows: the invention discloses a human ear identification method based on fusion of a gradient direction histogram and a local binary pattern, which comprises the following steps of:
step 1) acquiring an image of a human ear from an ear image library;
step 2) calculating a characteristic value of each pixel in the human ear image, and obtaining the gradient direction histogram characteristics of the human ear image after blocking and standardizing;
step 3) performing a spatial transformation on the gradient direction histogram of the human ear image by principal component analysis, projecting the original coordinates into a new, lower-dimensional space with mutually orthogonal axes, thereby reducing the dimensionality of the gradient direction histogram features of the human ear image;
step 4) calculating a local binary pattern value of each pixel on the human ear image to obtain a local binary pattern characteristic of the human ear image;
step 5) concatenating the feature vectors of the gradient direction histogram feature and the local binary pattern feature to obtain a new feature vector, realizing feature fusion;
and 6) inputting the data into a minimum distance classifier for classification and identification.
Wherein,
the step 2) is as follows:
Step 21) Perform color standardization on the human ear image obtained in step 1) and convert it uniformly to a grayscale image with the formula H(x, y) = 0.3 × R(x, y) + 0.59 × G(x, y) + 0.11 × B(x, y), where R(x, y), G(x, y), B(x, y) are the red, green, and blue values of each pixel in the image and H(x, y) is the grayscale value of that pixel.
Step 22) Compute the gradient magnitude and direction angle of each pixel with the Sobel operator: G(x, y) = sqrt(G_x(x, y)² + G_y(x, y)²), α(x, y) = arctan(G_y(x, y) / G_x(x, y)),
where G_x and G_y are the horizontal and vertical Sobel responses on the grayscale image H(x, y), G(x, y) is the gradient magnitude of the pixel, and α(x, y) is its gradient direction.
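As an illustration (not the patent's own code; the function names are ours), the grayscale conversion of step 21) and the Sobel gradients of step 22) can be sketched in Python as:

```python
import numpy as np

def to_gray(img):
    """Step 21): H = 0.3*R + 0.59*G + 0.11*B for an H x W x 3 RGB array."""
    return 0.3 * img[..., 0] + 0.59 * img[..., 1] + 0.11 * img[..., 2]

def sobel_gradients(gray):
    """Step 22): per-pixel gradient magnitude G(x, y) and direction
    alpha(x, y) from 3x3 Sobel kernels (naive loops; borders stay 0)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = gray.shape
    gx, gy = np.zeros((h, w)), np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = gray[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = (kx * patch).sum()
            gy[y, x] = (ky * patch).sum()
    return np.hypot(gx, gy), np.arctan2(gy, gx)

# On a left-to-right unit ramp every interior pixel has magnitude 8
# (Sobel weights 1+2+1 times a column difference of 2) and direction 0.
ramp = np.tile(np.arange(5.0), (5, 1))
mag, ang = sobel_gradients(ramp)
print(mag[2, 2], ang[2, 2])  # 8.0 0.0
```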
Step 23) space and direction cell weighted voting. Firstly, calculating a direction weight: x (i) cos (θ), y (i) sin (θ), θ + pi/(N) direction +1), wherein: i is a direction serial number; theta is an angle, and the initial value is 0; x (i) is the weight of the x-axis difference in the i direction; y (i) is the weight of the y-axis difference in the i direction, N direction The total direction number is generally set to 9;
step 24) calculating the amplitude and the direction, wherein the amplitude is the mean square value of the image difference of the x axis or the y axis, and the direction value is taken as the weighted maximum value in each direction;
Step 25) Block construction and normalization: assemble the cell features into combined blocks, where B(x) and B(y) denote the total x-axis and y-axis values of the block, C(x) and C(y) the total x-axis and y-axis values of the cell, B(size) the block size, and B(step) the step size of the block shift;
and 26) summarizing the characteristic values in different directions and blocks to construct a direction gradient histogram of the image.
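Steps 23)–26) amount to accumulating each cell's gradient magnitudes into an orientation histogram and normalizing over blocks. A simplified hard-assignment sketch (the patent's weighted voting scheme is more involved; the 9-bin default follows step 23), and the function name is ours):

```python
import numpy as np

def cell_histogram(mag, ang, n_bins=9):
    """Accumulate the gradient magnitudes of one cell into an orientation
    histogram (unsigned orientations over [0, pi)), then L2-normalize
    as a simple stand-in for block normalization."""
    bins = ((ang % np.pi) / np.pi * n_bins).astype(int) % n_bins
    hist = np.zeros(n_bins)
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m
    return hist / (np.linalg.norm(hist) + 1e-6)

# A cell whose gradients all point the same way puts all mass in one bin.
h = cell_histogram(np.ones((4, 4)), np.zeros((4, 4)))
print(h[0])  # close to 1.0; every other bin is 0
```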
The step 3) is as follows:
Step 31) Compute the mean x̄ of the gradient direction histogram features of the corresponding pixels over all the human ear images.
Step 32) Compute the covariance matrix U^T = (1/n) Σᵢ (xᵢ − x̄)(xᵢ − x̄)^T, where xᵢ is a feature requiring dimension reduction and U^T is the covariance matrix.
Step 33) Take the first p principal components of the covariance matrix and reduce each gradient direction histogram feature value of the human ear image, obtaining the p-dimensional, PCA-reduced gradient direction histogram feature. The reduction is y = Pᵀ(x − x̄), where P is the matrix of the first p principal components and y represents the principal component feature. The value of p is determined experimentally: too large a p slows computation, while too small a p hurts accuracy.
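A compact sketch of the dimension reduction in steps 31)–33), using an eigendecomposition of the covariance matrix (the names are ours, and the random matrix stands in for real histogram features):

```python
import numpy as np

def pca_reduce(X, p):
    """Project the rows of X (n_samples x d) onto the first p
    principal components of their covariance matrix."""
    mean = X.mean(axis=0)                       # step 31): feature mean
    Xc = X - mean                               # center the features
    cov = Xc.T @ Xc / X.shape[0]                # step 32): covariance matrix
    vals, vecs = np.linalg.eigh(cov)            # symmetric eigendecomposition
    P = vecs[:, np.argsort(vals)[::-1][:p]]     # step 33): top-p components
    return Xc @ P                               # y = P^T (x - mean), per row

X = np.random.default_rng(0).normal(size=(20, 6))   # 20 stand-in feature vectors
Y = pca_reduce(X, 3)
print(Y.shape)  # (20, 3)
```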
The step 4) is as follows:
Step 41) Define the center pixel of the 3 × 3 window as the threshold and compare the gray values of the remaining 8 pixels with it in turn, marking a pixel 1 if it is not less than the center pixel and 0 otherwise; the resulting 8-bit binary number is the binary representation of the window's local binary pattern value: LBP(x_c, y_c) = Σ_{p=0..7} s(i_p − i_c) · 2^p, where (x_c, y_c) is the center element of the 3 × 3 neighborhood with pixel value i_c, i_p are the values of the other pixels in the neighborhood, s(x) is the sign function taking 1 when x ≥ 0 and 0 otherwise, and LBP(x_c, y_c) is the binary representation of the local binary pattern value of the center pixel.
Step 42) Convert each pixel's LBP(x_c, y_c) from binary to a decimal number to obtain its final local binary pattern value, and aggregate these values to obtain the local binary pattern features of the human ear image.
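A sketch of steps 41)–42) for a single 3 × 3 window (the clockwise neighbour ordering is our assumption; the patent fixes only the thresholding and the binary-to-decimal conversion):

```python
import numpy as np

def lbp_value(window):
    """Local binary pattern of the centre pixel of a 3x3 window:
    neighbours >= centre are coded 1, then the 8 bits are read as
    a decimal number (clockwise from the top-left corner)."""
    c = window[1, 1]
    # clockwise neighbour coordinates starting at the top-left
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if window[y, x] >= c else 0 for y, x in coords]
    return int("".join(map(str, bits)), 2)     # step 42): binary -> decimal

w = np.array([[9, 9, 9],
              [0, 5, 0],
              [0, 0, 0]])
print(lbp_value(w))  # top row 9,9,9 >= 5 -> bits 11100000 -> 224
```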
The step 6) is as follows:
Input the feature vectors obtained by processing the human ear image to be identified and the human ear images of known classes through steps 2) to 5) into the minimum distance classifier, and compute the distance between the feature vector of the image to be identified and the feature vector of each known-class image; the class corresponding to the minimum distance is the class of the image to be identified. The distance is computed as d(i) = sqrt( (t_1(i) − p_1)² + (t_2(i) − p_2)² + … + (t_n(i) − p_n)² ), where d(i) is the distance between the ear image to be identified and the i-th known-class ear image, t_1(i) is the first feature component of the i-th known-class ear image, p_1 is the first feature component of the image to be identified, t_n(i) is the n-th feature component of the i-th known-class image, and p_n is the n-th feature component of the image to be identified.
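Step 6)'s minimum distance classifier is a nearest-neighbour rule under the Euclidean distance d(i); a minimal sketch (the gallery rows stand in for the fused feature vectors of the known classes):

```python
import numpy as np

def classify(query, gallery):
    """Return the index of the gallery feature vector closest to the
    query vector under the Euclidean distance (minimum distance rule)."""
    dists = [np.linalg.norm(query - g) for g in gallery]   # d(i) for each class
    return int(np.argmin(dists))

gallery = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 0.0]])
print(classify(np.array([0.9, 1.2]), gallery))  # closest row is [1, 1] -> 1
```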
Beneficial effects: compared with the prior art, the technical scheme of the invention has the following effects.
The invention reduces the dimensionality of the gradient direction histogram features with principal component analysis, filtering out a large amount of redundant information; this greatly improves the accuracy of ear recognition while shrinking the feature vector and speeding up recognition. Fusing the local binary pattern feature with the gradient direction histogram feature suppresses some noise interference, improves the robustness of the feature vectors, and stabilizes the ear recognition algorithm. The minimum distance classifier has low computational complexity and high speed. Through these measures, the accuracy and stability of ear recognition are improved while the computational complexity is reduced, making the system more cost-effective. Specifically:
(1) the invention adopts multi-feature fusion for classification and identification, and has higher accuracy compared with single feature.
(2) The invention uses the principal component analysis method to reduce the dimension of the gradient direction histogram characteristics, filters a large amount of redundant information and greatly improves the accuracy of human ear identification.
(3) The invention uses the local binary pattern characteristic and the gradient direction histogram characteristic, overcomes the interference of some noises, improves the robustness of the characteristic vector and improves the stability of the human ear recognition algorithm.
(4) The invention uses principal component analysis method to reduce dimension of the histogram feature of the gradient direction, compared with the traditional histogram feature of the gradient direction, the dimension of the feature vector is reduced, and the speed of the whole human ear recognition is improved.
(5) Compared with other classifiers, the minimum distance classifier based on the Euclidean distance is lower in calculation complexity, and the speed of human ear recognition is improved.
Drawings
Fig. 1 is the flow of the human ear identification method based on fusion of the gradient direction histogram and the local binary pattern.
Detailed Description
In specific implementation, fig. 1 shows the flowchart of the human ear identification method based on fusion of the gradient direction histogram and the local binary pattern.
This example uses the ear image library of the ear recognition laboratory of the University of Science and Technology Beijing as the experimental subject, covering 77 subjects. Each ear in the library has four images: a frontal image of the ear, images of the ear rotated +30 degrees and −30 degrees under normal conditions, and a frontal image under dim illumination.
In a specific implementation, each person has 4 images of the ear, 3 for training and 1 for testing.
First, the 3 ear images of each person are input into the system and color-standardized, and each image is converted to grayscale with H(x, y) = 0.3 × R(x, y) + 0.59 × G(x, y) + 0.11 × B(x, y), where R(x, y), G(x, y), B(x, y) are the red, green, and blue values of each pixel and H(x, y) is its gray value. The gradient magnitude and direction angle of each pixel are computed with the Sobel operator. The direction weights, amplitude, and direction are then calculated, the amplitude being the mean square value of the x-axis or y-axis image difference and the direction value the weighted maximum over the directions. Blocks are built and normalized by assembling the cell features into combined blocks, where B(x) and B(y) denote the total x-axis and y-axis values of the block, C(x) and C(y) the total x-axis and y-axis values of the cell, B(size) the block size, and B(step) the step size of the block shift. Finally, the feature values over the different directions and blocks are aggregated to construct the directional gradient histogram of the image.
Then, the gradient direction histogram of the ear image is spatially transformed by principal component analysis, projecting the original coordinates into a new, lower-dimensional space with mutually orthogonal axes and reducing the dimensionality of the gradient direction histogram features. Concretely, the covariance matrix is computed as U^T = (1/n) Σᵢ (xᵢ − x̄)(xᵢ − x̄)^T, where xᵢ is a feature requiring dimension reduction, x̄ is the mean of the gradient direction histogram features of the ear images, and U^T is the covariance matrix. The first p principal components of the covariance matrix are taken and each gradient direction histogram feature value of the ear image is reduced by y = Pᵀ(x − x̄), yielding the p-dimensional, PCA-reduced gradient direction histogram feature, where y denotes the principal component feature.
Next, the center pixel of a 3 × 3 window is defined as the threshold and the gray values of the remaining 8 pixels are compared with it in turn, marked 1 if not less than the center pixel and 0 otherwise, forming an 8-bit binary number, i.e., the binary representation of the window's local binary pattern value: LBP(x_c, y_c) = Σ_{p=0..7} s(i_p − i_c) · 2^p, where (x_c, y_c) is the center element of the 3 × 3 neighborhood with pixel value i_c, i_p are the values of the other pixels in the neighborhood, and s(x) is the sign function taking 1 when x ≥ 0 and 0 otherwise. The binary representation of each pixel's local binary pattern value is converted to a decimal number to obtain the final local binary pattern value, giving the local binary pattern features of the ear image.
And finally, cascading the feature vectors of the gradient direction histogram feature and the local binary pattern feature to obtain a new feature vector, realizing feature fusion, and inputting the feature vector into a minimum distance classifier for classification and identification.
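The fusion in this step is a simple concatenation of the two feature vectors; for example (the short vectors here are hypothetical stand-ins for the real dimension-reduced HOG feature and LBP histogram):

```python
import numpy as np

# Hypothetical dimension-reduced HOG feature and LBP histogram feature
hog_reduced = np.array([0.2, 0.5, 0.1])
lbp_hist = np.array([0.4, 0.3, 0.2, 0.1])

fused = np.concatenate([hog_reduced, lbp_hist])  # step 5): cascade the vectors
print(fused.shape)  # (7,)
```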
The specific method is as follows: input the feature vectors obtained by processing the ear image to be identified and the several ear images of known classes into the minimum distance classifier, and compute the distance between the feature vector of the image to be identified and the feature vector of each known-class image; the class corresponding to the minimum distance is the class of the image to be identified. The distance is computed as d(i) = sqrt( (t_1(i) − p_1)² + (t_2(i) − p_2)² + … + (t_n(i) − p_n)² ), where d(i) is the distance between the ear image to be identified and the i-th known-class ear image, t_1(i) is the first feature component of the i-th known-class ear image, p_1 is the first feature component of the image to be identified, t_n(i) is the n-th feature component of the i-th known-class image, and p_n is the n-th feature component of the image to be identified.
The above description is only of the preferred embodiments of the present invention, and it should be noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the invention and these are intended to be within the scope of the invention.
Claims (3)
1. A human ear recognition method based on fusion of a gradient direction histogram and a local binary pattern is characterized by comprising the following steps:
step 1) collecting a plurality of known classified human ear images;
step 2) calculating a characteristic value of each pixel in the human ear image, and obtaining the gradient direction histogram characteristics of the human ear image after blocking and standardization;
step 3) carrying out spatial transformation on the gradient direction histogram of the human ear image by using a principal component analysis method, so that the original coordinates are projected to a new space with lower dimensionality and orthogonal to each other, and the dimensionality reduction of the gradient direction histogram characteristics of the human ear image is realized;
step 4), calculating a local binary pattern value of each pixel on the human ear image to obtain a local binary pattern characteristic of the human ear image;
step 5) concatenating the dimension-reduced gradient direction histogram feature and the local binary pattern feature to obtain a new feature vector, realizing feature fusion;
step 6) inputting the feature vectors obtained after the to-be-recognized ear image and the plurality of known classified ear images are respectively processed in the steps 2) to 5) into a minimum distance classifier, calculating the distance between the feature vector of the to-be-recognized ear image and the feature vector of each known classified ear image, wherein the classification corresponding to the minimum distance is the classification of the to-be-recognized ear image;
the step 2) is specifically as follows:
step 21) performing color standardization on the human ear image obtained in step 1) and converting it uniformly to a grayscale image with the formula H(x, y) = 0.3 × R(x, y) + 0.59 × G(x, y) + 0.11 × B(x, y), where R(x, y), G(x, y), B(x, y) are the red, green, and blue values of each pixel in the image and H(x, y) is the grayscale value of that pixel;
step 22) computing the gradient magnitude and direction angle of each pixel with the Sobel operator: G(x, y) = sqrt(G_x(x, y)² + G_y(x, y)²), α(x, y) = arctan(G_y(x, y) / G_x(x, y)), where G_x and G_y are the horizontal and vertical Sobel responses on the grayscale image H(x, y), G(x, y) is the gradient magnitude of the pixel, and α(x, y) is its gradient direction;
step 23) weighted voting over spatial and orientation cells; first computing the direction weights: x(i) = cos(θ), y(i) = sin(θ), θ ← θ + π/(N_direction + 1), where i is the direction index; θ is the angle, initialized to 0; x(i) is the weight of the x-axis difference in direction i; y(i) is the weight of the y-axis difference in direction i; and N_direction is the total number of directions;
step 24) calculating the amplitude and the direction, wherein the amplitude is the mean square value of the image difference of the x axis or the y axis, and the direction value is taken as the weighted maximum value in each direction;
step 25) block construction and normalization: assembling the cell features into combined blocks, where B(x) and B(y) denote the total x-axis and y-axis values of the block, C(x) and C(y) the total x-axis and y-axis values of the cell, B(size) the block size, and B(step) the step size of the block shift;
and 26) summarizing the characteristic values in different directions and blocks to construct a direction gradient histogram of the image.
2. The method for recognizing the human ear based on the fusion of the histogram of gradient directions and the local binary pattern as claimed in claim 1, wherein the step 3) is specifically as follows:
step 31) computing the mean x̄ of the gradient direction histogram features of the corresponding pixels over all the human ear images;
step 32) computing the covariance matrix U^T = (1/n) Σᵢ (xᵢ − x̄)(xᵢ − x̄)^T, where xᵢ is a feature requiring dimension reduction, x̄ is the mean of the gradient direction histogram features of the human ear images, and U^T is the covariance matrix;
step 33) taking the first p principal components of the covariance matrix and reducing each gradient direction histogram feature value of the human ear image to obtain the p-dimensional, PCA-reduced gradient direction histogram feature; the reduction is y = Pᵀ(x − x̄), where P is the matrix of the first p principal components and y represents the principal component feature.
3. The method for recognizing the human ear based on the fusion of the histogram of gradient directions and the local binary pattern as claimed in claim 1, wherein the step 4) is specifically as follows:
step 41) defining the center pixel of a 3 × 3 window as the threshold and comparing the gray values of the remaining 8 pixels with it in turn, marking a pixel 1 if it is not less than the center pixel and 0 otherwise, forming an 8-bit binary number that is the binary representation of the window's local binary pattern value: LBP(x_c, y_c) = Σ_{p=0..7} s(i_p − i_c) · 2^p, where (x_c, y_c) is the center element of the 3 × 3 neighborhood with pixel value i_c, i_p are the values of the other pixels in the neighborhood, s(x) is the sign function taking 1 when x ≥ 0 and 0 otherwise, and LBP(x_c, y_c) is the binary representation of the local binary pattern value of the center pixel;
step 42) converting each pixel's LBP(x_c, y_c) from binary to a decimal number to obtain the final local binary pattern value, and aggregating these values to obtain the local binary pattern features of the human ear image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910433620.7A CN110188646B (en) | 2019-05-23 | 2019-05-23 | Human ear identification method based on fusion of gradient direction histogram and local binary pattern |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110188646A CN110188646A (en) | 2019-08-30 |
CN110188646B (en) | 2022-08-02
Family
ID=67717439
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910433620.7A Active CN110188646B (en) | 2019-05-23 | 2019-05-23 | Human ear identification method based on fusion of gradient direction histogram and local binary pattern |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110188646B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111461147B (en) * | 2020-04-30 | 2023-05-23 | 柳州智视科技有限公司 | Binary coding organization algorithm based on image features |
CN111967531B (en) * | 2020-08-28 | 2022-09-16 | 南京邮电大学 | High-precision indoor image positioning method based on multi-feature fusion |
CN115547475A (en) * | 2022-12-05 | 2022-12-30 | 医修技术服务(北京)有限公司 | Intelligent automatic control method based on weighing type medical consumable management cabinet |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106599870A (en) * | 2016-12-22 | 2017-04-26 | 山东大学 | Face recognition method based on adaptive weighting and local characteristic fusion |
CN107066958A (en) * | 2017-03-29 | 2017-08-18 | 南京邮电大学 | Face recognition method based on HOG features and SVM multi-classifiers |
CN107578007A (en) * | 2017-09-01 | 2018-01-12 | 杭州电子科技大学 | Deep learning face recognition method based on multi-feature fusion |
CN108549868A (en) * | 2018-04-12 | 2018-09-18 | 中国矿业大学 | Pedestrian detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||