WO2018151357A1 - Human face recognition method based on improved multi-channel Gabor filter - Google Patents

Human face recognition method based on improved multi-channel Gabor filter

Info

Publication number
WO2018151357A1
WO2018151357A1 (PCT/KR2017/001886)
Authority
WO
WIPO (PCT)
Prior art keywords
lbp
image
images
gabor filter
face recognition
Prior art date
Application number
PCT/KR2017/001886
Other languages
French (fr)
Korean (ko)
Inventor
이응주
이석환
왕계원
호앙응옥
동후민넷
Original Assignee
동명대학교산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 동명대학교산학협력단
Publication of WO2018151357A1 publication Critical patent/WO2018151357A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/164Detection; Localisation; Normalisation using holistic features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/446Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering using Haar-like filters, e.g. using integral image techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/469Contour-based spatial representations, e.g. vector-coding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507Summing image-intensity values; Histogram projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • The present invention relates to an improved multi-channel Gabor filter-based human face recognition method, and more specifically to a human face recognition method that combines a Gabor filter with center-symmetric local binary patterns (CS-LBP) in order to extract feature points that are robust to noise and to reduce the high dimensionality of the extracted facial feature points.
  • The present invention is directed to solving the above problems, and an object of the present invention is to provide an improved multi-channel Gabor filter-based human face recognition method that combines a Gabor filter with center-symmetric local binary patterns (CS-LBP) so as to extract feature points that are robust to noise and to reduce the high dimensionality of the extracted facial feature points.
  • Another object of the present invention is to provide an improved multi-channel Gabor filter-based human face recognition method that reduces the dimensionality of the feature images by combining Gabor feature images across different orientations and scales, and extracts low-dimensional facial feature points from the feature images based on CS-LBP.
  • A further object of the present invention is to provide an improved multi-channel Gabor filter-based human face recognition method that, compared to the conventional Gabor filter and LBP approach, reduces the feature dimensionality, storage space, and computation time through the combination of the Gabor filter and CS-LBP while achieving high face recognition accuracy.
  • An improved multi-channel Gabor filter-based human face recognition method according to an embodiment of the present invention comprises: a first step of obtaining feature images, where the input is the face images of N persons (N being a natural number of 2 or more) and the output is an image represented by a cascade (concatenated) block-histogram feature vector of the statistics; a second step of dividing each feature image, in the same manner, into a plurality of overlapping sub-images; and a third step of generating histograms of all the sub-images and extracting a cascade histogram feature vector sequence.
  • The method is further characterized by comprising a fifth step of extracting a face image feature vector after the feature extraction of the first to fourth steps.
  • The first step is characterized by comprising: a step 1-1 of applying the Gabor transform to the face image to obtain a Gabor amplitude spectrum; a step 1-2 of obtaining a superposed image from the n different amplitude spectra by superposition; and a step 1-3 of obtaining an image for each CS-LBP code.
  • By combining a Gabor filter with center-symmetric local binary patterns (CS-LBP), the method provides the effect of extracting feature points that are robust to noise and of reducing the high dimensionality of the extracted facial feature points.
  • The improved multi-channel Gabor filter-based human face recognition method also provides the effect of reducing the dimensionality of the feature images by combining the Gabor feature images across different orientations and scales, and of extracting low-dimensional facial feature points from the feature images based on CS-LBP.
  • In addition, compared to the conventional Gabor filter and LBP approach, the improved multi-channel Gabor filter-based human face recognition method reduces the feature dimensionality, storage space, and computation time through the combined Gabor filter and CS-LBP method while providing the effect of high face recognition accuracy.
  • FIG. 1 is a diagram illustrating a feature extraction algorithm module 100 in which an improved multi-channel Gabor filter-based human face recognition method is performed according to an embodiment of the present invention.
  • FIG. 2 illustrates an image of a Gabor amplitude spectrum used in an improved multi-channel Gabor filter-based human face recognition method according to an embodiment of the present invention.
  • FIG. 3 is a diagram illustrating an improved multi-channel Gabor filter-based human face recognition method according to an embodiment of the present invention.
  • FIG. 4 is a diagram illustrating a Yale face database used in an improved multi-channel Gabor filter-based human face recognition method according to an embodiment of the present invention.
  • FIG. 5 illustrates the effect of the image block size and the number of histogram bins on CS-LBP, for explaining an improved multi-channel Gabor filter-based human face recognition method according to an embodiment of the present invention.
  • FIG. 6 is a graph showing the effect of the image block size and the number of histogram bins on LBP.
  • FIG. 7 illustrates an ORL face database used in an improved multi-channel Gabor filter-based human face recognition method according to an embodiment of the present invention.
  • FIG. 8 illustrates a FERET face database used in an improved multi-channel Gabor filter-based human face recognition method according to an embodiment of the present invention.
  • In the present specification, when one component 'transmits' data or a signal to another component, this means that the component may transmit the data or signal to the other component directly, or may transmit the data or signal to the other component through at least one further component.
  • FIG. 1 is a diagram illustrating a feature extraction algorithm module 100 in which an improved multi-channel Gabor filter-based human face recognition method is performed according to an embodiment of the present invention.
  • a "feature extraction algorithm module 100" based on a combination of Center-Symmetric Local Binary Patterns (CS-LBP) and Gabor Wavelet is performed.
  • CS-LBP Center-Symmetric Local Binary Patterns
  • Feature extraction algorithm module 100 is a Gabor filter using Gabor Wavelet transform means 110 and a CS-LBP operator to perform Gabor wavelet transform combined with CS-LBP face image feature extraction algorithm.
  • Image texture extracting function means 120 is a Gabor filter using Gabor Wavelet transform means 110 and a CS-LBP operator to perform Gabor wavelet transform combined with CS-LBP face image feature extraction algorithm.
  • First, the Gabor wavelet transform means 110 extracts multi-scale images by exploiting selective properties of the Gabor wavelet such as good spatial locality and spatial-frequency selectivity.
  • Second, the Gabor filter image texture extraction function means 120 using the CS-LBP operator improves robustness to changes in the external environment, because the resulting features are robust to changes such as the orientation of local regions, illumination, expression, pose, and shading.
  • Accordingly, compared with the Gabor wavelet transform combined with the conventional LBP operator, the algorithm of the present invention based on the combination of Center-Symmetric Local Binary Patterns (CS-LBP) and the Gabor wavelet not only reduces the spatial and temporal overhead of the algorithm but also achieves a significant recognition rate.
  • The Gabor filter image texture extraction function means 120 using the CS-LBP operator can analyze the gray-scale changes of the image at all scales and in all directions by means of a bank of filters of various scales and orientations.
  • The Gabor filter image texture extraction function means 120 using the CS-LBP operator has good time-frequency localization and multi-resolution characteristics, and has the ability to extract the local nuances of an image.
  • Therefore, the Gabor filter image texture extraction function means 120 using the CS-LBP operator ensures robustness to illumination variations, image rotations, and deformations.
  • The Gabor wavelet kernel function used by the Gabor filter image texture extraction function means 120 using the CS-LBP operator is expressed by Equation 1 below.
  • μ and v correspond to the orientation and scale of the filter, respectively, so the orientation and scale of the filter are selected by adjusting μ and v.
  • Here, v ∈ {0, 1, 2, 3, 4} and μ ∈ {0, 1, 2, 3, 4, 5, 6, 7}.
  • FIG. 2 shows images of Gabor amplitude spectra used in an improved multi-channel Gabor filter-based human face recognition method according to an embodiment of the present invention.
  • The feature extraction algorithm module 100 combines the Gabor filter images according to the following two methods in order to reduce the size of the Gabor feature: the multi-channel Gabor function is used to reduce the number of Gabor filter images while still extracting multi-scale information from the original image.
  • Gabor filter images of the two combination methods are as follows.
  • First, the feature extraction algorithm module 100 superposes, for each scale of the multi-channel Gabor feature images, the Gabor filter images over all orientations to obtain the multi-frequency Gabor channels (MFGC).
  • Second, the feature extraction algorithm module 100 superposes, for each orientation of the multi-channel Gabor feature images, the Gabor filter images over all scales to obtain the multi-orientation Gabor channels (MOGC).
  • FIG. 3 is a diagram illustrating an improved multi-channel Gabor filter-based human face recognition method according to an embodiment of the present invention.
  • The improved multi-channel Gabor filter-based human face recognition method is performed by the feature extraction algorithm module 100 of FIG. 1, and may be carried out by combining MOGC and CS-LBP on the basis of a multi-channel Gabor filter and CS-LBP face image feature extraction algorithm.
  • For facial feature point extraction and facial feature reduction, the feature extraction algorithm module 100 may perform the image Gabor wavelet transform, superposition of the Gabor amplitude spectra, CS-LBP coding of the superposed MOGC images, blocking and statistical histogram computation, and cascade histogram feature vector sequencing.
  • When the input is the face images of N persons and the output is an image represented by a cascade block-histogram feature vector of the statistics, the feature extraction algorithm module 100 acquires the feature images in the first step.
  • More specifically, in step 1-1 the face image is Gabor-transformed to obtain the Gabor amplitude spectra; in step 1-2 a superposed image is obtained from the n different amplitude spectra by superposition; and in step 1-3 an image is obtained for each CS-LBP code.
  • In the second step, the feature extraction algorithm module 100 divides each feature image, in the same manner, into a plurality of overlapping sub-images.
  • In the third step, the feature extraction algorithm module 100 generates histograms of all the sub-images and extracts the cascade histogram feature vector sequence.
  • In the fourth step, the feature extraction algorithm module 100 extracts the corresponding feature vector of each cascade sequence.
  • the feature extraction algorithm module 100 extracts a face image feature vector after feature extraction in the first to fourth steps described above.
  • In the face recognition step, the feature extraction algorithm module 100 may compute the similarity between a test sample and the training samples using the distance function of Equation 2.
  • T denotes a one-dimensional histogram feature vector of the training sample
  • S denotes the one-dimensional histogram feature vector of the test sample.
  • P denotes the number of sub-images
  • Q denotes the number of sub-image histogram bins.
  • r and i are exponents for P and Q, respectively.
  • For parameter selection, the CS-LBP operator and the LBP operator are considered. The block size of the sub-images affects recognition performance: if the block is too large, in the extreme the block size equals the original image size and the benefit of analyzing local regions of the image is lost.
  • The Yale face database contains 15 subjects with 11 images each, giving a total of 165 frontal face images including variations in facial expression and illumination.
  • FIG. 4 shows some sample images of one subject from the Yale face database.
  • FIG. 5 shows the effect of the image block size and the number of bins on CS-LBP, and FIG. 6 shows the effect of the image block size and the number of bins on LBP.
  • Regardless of whether the number of bins is 16, 8, or 4, the recognition rate is highest at an image block size of 8 × 8, and since the two recognition rate curves for 16 and 8 bins are very close, the CS-LBP algorithm is chosen with an image block size of 8 × 8 and 8 bins.
  • The feature dimension extracted by the CS-LBP algorithm based on MFGC and MOGC is 1/32 of that of LBP, so the CS-LBP algorithm extracts feature points of a much lower dimension than the LBP algorithm.
  • CS-LBP also has a clear advantage in the time required to train on the samples.
  • The CS-LBP algorithm is likewise superior in terms of test time, requiring half the time of LBP.
  • The longest-running Gabor + LBP algorithm requires 7.0 seconds, whereas the MOGC + CS-LBP algorithm requires 0.27 seconds. Therefore, CS-LBP extracts lower-dimensional feature points than LBP, has a greater advantage in the time required for the training and test samples, and can perform image feature point extraction more effectively.
  • the test plan randomly selects 3, 4, 5, 6, 7 images as training samples, the rest as test samples, and repeats 10 tests.
  • Table 2 shows the recognition rate results of the five compared algorithms on the Yale face database.
  • The CS-LBP algorithm achieved a noticeably better recognition rate than the LBP algorithm, with the recognition rate increasing by almost 1% for training sample sizes of 4 and 6.
  • The Gabor + LBP and MOGC + CS-LBP algorithms achieved significantly better recognition rates than the LBP algorithm, and improved the recognition rate by about 2% compared to the CS-LBP algorithm.
  • The ORL face database contains 40 subjects, each with 10 images, for a total of 400 face images.
  • FIG. 7 shows ten sample images of one subject in the ORL database.
  • Table 3 shows the test results of the five algorithms under the repeated test; that is, Table 3 shows the recognition rate results of the five algorithms on the ORL database.
  • Training sample number | 3 | 4 | 5 | 6
    LBP | 86.93 | 91.08 | 94.10 | 96.00
    CS-LBP | 88.09 | 91.70 | 94.40 | 96.00
    Gabor + LBP | 89.78 | 93.32 | 95.54 | 96.89
    MFGC + CS-LBP | 88.36 | 91.79 | 93.50 | 94.69
    MOGC + CS-LBP | 90.32 | 93.54 | 95.65 | 97.00
  • Table 4 shows the test comparison results of the five algorithms under the repeated test, averaged over ten runs; that is, Table 4 shows the recognition rate results of the five algorithms on FERET.
  • Training sample number | 2 | 3 | 4
    LBP | 82.79 | 88.86 | 92.33
    CS-LBP | 82.21 | 89.00 | 91.79
    Gabor + LBP | 86.70 | 91.33 | 92.89
    MFGC + CS-LBP | 85.86 | 90.03 | 92.00
    MOGC + CS-LBP | 87.37 | 91.42 | 93.00
  • Compared to the Gabor + LBP algorithm, the MOGC + CS-LBP algorithm has a lower feature extraction dimensionality while achieving a significant recognition rate; that is, the MOGC + CS-LBP algorithm achieves the best recognition rate and improves recognition accuracy.
  • the invention can also be embodied as computer readable code on a computer readable recording medium.
  • Computer-readable recording media include all kinds of recording devices that store data that can be read by a computer system.
  • Examples of computer-readable recording media include ROM, RAM, CD-ROMs, magnetic tape, floppy disks, and optical data storage devices, and also include media implemented in the form of carrier waves (e.g., transmission over the Internet).
  • the computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
  • functional programs, codes and code segments for implementing the present invention can be easily inferred by programmers in the art to which the present invention belongs.
  • The present invention relates to an improved multi-channel Gabor filter-based human face recognition method that combines a Gabor filter with center-symmetric local binary patterns (CS-LBP) in order to extract feature points robust to noise and to reduce the high dimensionality of the extracted facial feature points.
  • The improved multi-channel Gabor filter-based human face recognition method provides the effect of reducing the dimensionality of the feature images by combining the Gabor feature images across different orientations and scales, and of extracting low-dimensional facial feature points from the feature images based on CS-LBP.
  • In addition, compared to the conventional Gabor filter and LBP approach, the improved multi-channel Gabor filter-based human face recognition method reduces the feature dimensionality, storage space, and computation time through the combined Gabor filter and CS-LBP method while providing the effect of high face recognition accuracy.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a human face recognition method based on an improved multi-channel Gabor filter. The present invention comprises: a first step of acquiring feature images, where the input is the face images of N persons (N being a natural number of two or more) and the output is an image represented by a cascade block-histogram feature vector of the statistics; a second step of dividing each feature image, in the same manner, into a plurality of overlapping sub-images; and a third step of generating histograms of all the sub-images so as to extract a cascade histogram feature vector sequence. Therefore, the Gabor filter and center-symmetric local binary patterns (CS-LBP) are combined, thereby extracting feature points that are robust against noise and providing the effect of reducing the high dimensionality of the extracted facial feature points.

Description

Human Face Recognition Method Based on an Improved Multi-Channel Gabor Filter
The present invention relates to an improved multi-channel Gabor filter-based human face recognition method, and more specifically to a human face recognition method that combines a Gabor filter with center-symmetric local binary patterns (CS-LBP) in order to extract feature points robust to noise and to reduce the high dimensionality of the extracted facial feature points.

For a safer society, there is growing interest in building reliable access control systems based on personal identification. For more secure identification, biometric technologies that use a person's own biometric information are being actively studied rather than conventional token methods (cards, keys, and the like). Among biometrics, face recognition causes the least user resistance and is the most natural biometric method, so much research effort has been concentrated on it. However, depending on differences in the environment such as illumination, pose, facial expression, and aging, even face images of the same person vary greatly, and in some cases the correlation with another person's face image can be higher than the correlation between images of the same person. For these reasons, it is well known that it is very difficult to develop a stable face recognition algorithm that is insensitive to illumination, pose, expression, and aging.

The present invention is intended to solve the above problems, and an object of the present invention is to provide an improved multi-channel Gabor filter-based human face recognition method that combines a Gabor filter with center-symmetric local binary patterns (CS-LBP) so as to extract feature points robust to noise and to reduce the high dimensionality of the extracted facial feature points.

Another object of the present invention is to provide an improved multi-channel Gabor filter-based human face recognition method that reduces the dimensionality of the feature images by combining Gabor feature images across different orientations and scales, and extracts low-dimensional facial feature points from the feature images based on CS-LBP.

A further object of the present invention is to provide an improved multi-channel Gabor filter-based human face recognition method that, compared to the conventional Gabor filter and LBP approach, reduces the feature dimensionality, storage space, and computation time through the combination of the Gabor filter and CS-LBP while achieving high face recognition accuracy.

However, the objects of the present invention are not limited to the objects mentioned above, and other objects not mentioned will be clearly understood by those skilled in the art from the following description.
In order to achieve the above objects, an improved multi-channel Gabor filter-based human face recognition method according to an embodiment of the present invention comprises: a first step of obtaining feature images, where the input is the face images of N persons (N being a natural number of 2 or more) and the output is an image represented by a cascade (concatenated) block-histogram feature vector of the statistics; a second step of dividing each feature image, in the same manner, into a plurality of overlapping sub-images; and a third step of generating histograms of all the sub-images and extracting a cascade histogram feature vector sequence.

After the third step, the method may further comprise a fourth step of extracting the corresponding feature vector of each cascade sequence.

After the fourth step, the method may further comprise a fifth step of extracting a face image feature vector after the feature extraction of the first to fourth steps.

The first step may comprise: a step 1-1 of applying the Gabor transform to the face image to obtain a Gabor amplitude spectrum; a step 1-2 of obtaining a superposed image from the n different amplitude spectra by superposition; and a step 1-3 of obtaining an image for each CS-LBP code.
The improved multi-channel Gabor filter-based human face recognition method according to an embodiment of the present invention combines a Gabor filter with center-symmetric local binary patterns (CS-LBP), thereby providing the effect of extracting feature points robust to noise and of reducing the high dimensionality of the extracted facial feature points.

In addition, the improved multi-channel Gabor filter-based human face recognition method according to another embodiment of the present invention provides the effect of reducing the dimensionality of the feature images by combining the Gabor feature images across different orientations and scales, and of extracting low-dimensional facial feature points from the feature images based on CS-LBP.

Furthermore, compared to the conventional Gabor filter and LBP approach, the improved multi-channel Gabor filter-based human face recognition method according to another embodiment of the present invention reduces the feature dimensionality, storage space, and computation time through the combined Gabor filter and CS-LBP method while providing the effect of high face recognition accuracy.
FIG. 1 is a diagram illustrating a feature extraction algorithm module 100 in which an improved multi-channel Gabor filter-based human face recognition method is performed according to an embodiment of the present invention.

FIG. 2 illustrates images of the Gabor amplitude spectra used in the improved multi-channel Gabor filter-based human face recognition method according to an embodiment of the present invention.

FIG. 3 is a diagram illustrating the improved multi-channel Gabor filter-based human face recognition method according to an embodiment of the present invention.

FIG. 4 is a diagram illustrating the Yale face database used in the improved multi-channel Gabor filter-based human face recognition method according to an embodiment of the present invention.

FIG. 5 illustrates the effect of the image block size and the number of histogram bins on CS-LBP, and FIG. 6 is a graph showing the effect of the image block size and the number of histogram bins on LBP, for explaining the improved multi-channel Gabor filter-based human face recognition method according to an embodiment of the present invention.

FIG. 7 is a diagram illustrating the ORL face database used in the improved multi-channel Gabor filter-based human face recognition method according to an embodiment of the present invention.

FIG. 8 is a diagram illustrating the FERET face database used in the improved multi-channel Gabor filter-based human face recognition method according to an embodiment of the present invention.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the following description of the present invention, detailed descriptions of well-known functions or configurations will be omitted when they may unnecessarily obscure the subject matter of the present invention.

In the present specification, when one component 'transmits' data or a signal to another component, this means that the component may transmit the data or signal to the other component directly, or may transmit the data or signal to the other component through at least one further component.
FIG. 1 is a diagram illustrating a feature extraction algorithm module 100 in which the improved multi-channel Gabor filter-based human face recognition method is performed according to an embodiment of the present invention. Referring to FIG. 1, in the improved multi-channel Gabor filter-based human face recognition method, a "feature extraction algorithm module 100" based on a combination of Center-Symmetric Local Binary Patterns (CS-LBP) and the Gabor wavelet is used. The "feature extraction algorithm module 100" comprises a Gabor wavelet transform means 110 and a Gabor filter image texture extraction function means 120 using the CS-LBP operator, in order to perform the Gabor wavelet transform combined with the CS-LBP face image feature extraction algorithm.
First, the Gabor wavelet transform means 110 extracts multi-scale images by exploiting selective properties of the Gabor wavelet such as good spatial locality and spatial-frequency selectivity.

Second, the Gabor filter image texture extraction function means 120 using the CS-LBP operator improves robustness to changes in the external environment, because the resulting features are robust to changes such as the orientation of local regions, illumination, expression, pose, and shading.

Accordingly, compared with the Gabor wavelet transform combined with the conventional LBP operator, the algorithm of the present invention based on the combination of Center-Symmetric Local Binary Patterns (CS-LBP) and the Gabor wavelet not only reduces the spatial and temporal overhead of the algorithm but also achieves a significant recognition rate.
First, the Gabor filter image texture extraction function means 120 using the CS-LBP operator can analyze the gray-scale changes of the image at all scales and in all directions by means of a bank of filters of various scales and orientations.

The Gabor filter image texture extraction function means 120 using the CS-LBP operator has good time-frequency localization and multi-resolution characteristics, and has the ability to extract the local nuances of an image. Therefore, it ensures robustness to illumination variations, image rotations, and deformations. The Gabor wavelet kernel function used by the Gabor filter image texture extraction function means 120 using the CS-LBP operator is given by Equation 1 below.

[Equation 1 - the Gabor wavelet kernel ψ_{μ,v}, provided as an equation image in the original publication]

Here, μ and v correspond to the orientation and scale of the filter, respectively, so the orientation and scale of the filter are selected by adjusting μ and v, where v ∈ {0, 1, 2, 3, 4} and μ ∈ {0, 1, 2, 3, 4, 5, 6, 7}.
Meanwhile, FIG. 2 shows images of the Gabor amplitude spectra used in the improved multi-channel Gabor filter-based human face recognition method according to an embodiment of the present invention.

Referring to FIG. 2, the images of the Gabor amplitude spectra are shown; the relevant parameters are v ∈ {0, 1, 2}, μ ∈ {0, 1, 2, 3, 4, 5, 6, 7}, and σ = 2π.
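As a rough illustration of such a filter bank, the sketch below builds a Gabor wavelet kernel of the widely used form ψ_{μ,v}(z) = (‖k_{μ,v}‖²/σ²)·exp(−‖k_{μ,v}‖²‖z‖²/(2σ²))·(exp(i k_{μ,v}·z) − exp(−σ²/2)), with k_v = k_max/f^v and φ_μ = πμ/8; this is a hedged reconstruction, and the concrete values of k_max and f used here (π/2 and √2) are assumptions, since the patent supplies the kernel and its parameters only as equation images.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(mu, v, size=31, sigma=2 * np.pi, k_max=np.pi / 2, f=np.sqrt(2)):
    """Complex Gabor wavelet kernel for orientation index mu and scale index v.
    k_max and f are assumed values; the patent gives them as equation images."""
    k_v = k_max / (f ** v)                 # scale-dependent frequency
    phi_mu = np.pi * mu / 8.0              # orientation angle
    kx, ky = k_v * np.cos(phi_mu), k_v * np.sin(phi_mu)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    k_sq = kx ** 2 + ky ** 2
    envelope = (k_sq / sigma ** 2) * np.exp(-k_sq * (x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma ** 2 / 2)  # DC-free carrier
    return envelope * carrier

def gabor_magnitude(image, mu, v):
    """Gabor amplitude spectrum (magnitude response) of one (mu, v) channel."""
    response = fftconvolve(image, gabor_kernel(mu, v), mode="same")
    return np.abs(response)
```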
The feature extraction algorithm module 100 combines the Gabor filter images according to the following two methods in order to reduce the size of the Gabor feature: the multi-channel Gabor function is used to reduce the number of Gabor filter images while still extracting multi-scale information from the original image. The Gabor filter images of the two combination methods are obtained as follows.
First, the feature extraction algorithm module 100 superposes, for each scale of the multi-channel Gabor feature images, the Gabor filter images over all orientations to obtain the multi-frequency Gabor channels (MFGC).

Second, the feature extraction algorithm module 100 superposes, for each orientation of the multi-channel Gabor feature images, the Gabor filter images over all scales to obtain the multi-orientation Gabor channels (MOGC).
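A minimal sketch of these two channel combinations is given below, reusing the gabor_magnitude helper above; it assumes that "superposition" means a plain pixel-wise sum of the amplitude spectra, which the text does not state explicitly.

```python
def mfgc_channels(image, n_scales=3, n_orients=8):
    """Multi-Frequency Gabor Channels: one channel per scale,
    obtained by summing the Gabor magnitude images over all orientations."""
    return [sum(gabor_magnitude(image, mu, v) for mu in range(n_orients))
            for v in range(n_scales)]

def mogc_channels(image, n_scales=3, n_orients=8):
    """Multi-Orientation Gabor Channels: one channel per orientation,
    obtained by summing the Gabor magnitude images over all scales."""
    return [sum(gabor_magnitude(image, mu, v) for v in range(n_scales))
            for mu in range(n_orients)]
```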
FIG. 3 is a diagram illustrating the improved multi-channel Gabor filter-based human face recognition method according to an embodiment of the present invention. Referring to FIG. 3, the improved multi-channel Gabor filter-based human face recognition method is performed by the feature extraction algorithm module 100 of FIG. 1 described above, and may be carried out by combining MOGC and CS-LBP on the basis of a multi-channel Gabor filter and CS-LBP face image feature extraction algorithm.

That is, for facial feature point extraction and facial feature reduction, the feature extraction algorithm module 100 may perform the image Gabor wavelet transform, superposition of the Gabor amplitude spectra, CS-LBP coding of the superposed MOGC images, blocking and statistical histogram computation, and cascade histogram feature vector sequencing.
If the input to the feature extraction algorithm module 100 is the face images of N persons (N being a natural number of 2 or more) and the output is an image represented by a cascade block-histogram feature vector of the statistics, the feature extraction algorithm module 100 acquires the feature images in the first step.
More specifically, in step 1-1 the face image is Gabor-transformed to obtain the Gabor amplitude spectra; in step 1-2 a superposed image is obtained from the n different amplitude spectra by superposition; and in step 1-3 an image is obtained for each CS-LBP code.
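The CS-LBP coding itself is not spelled out in this text; the sketch below therefore uses the standard center-symmetric LBP operator on a 3×3 neighbourhood (four center-symmetric pixel pairs, hence codes 0 to 15), with the comparison threshold as an assumption.

```python
def cs_lbp(image, threshold=0.0):
    """Standard CS-LBP code image: each of the 4 center-symmetric pixel pairs
    in the 3x3 neighbourhood contributes one bit, giving codes in 0..15."""
    img = np.asarray(image, dtype=np.float64)
    # 8 neighbours of every interior pixel, clockwise from the top-left corner.
    n = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
         img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    code = np.zeros_like(n[0], dtype=np.uint8)
    for i in range(4):                     # compare pixel i with its opposite i+4
        code |= ((n[i] - n[i + 4]) > threshold).astype(np.uint8) << i
    return code
```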
Next, in the second step, the feature extraction algorithm module 100 divides each feature image, in the same manner, into a plurality of overlapping sub-images.
Next, in the third step, the feature extraction algorithm module 100 generates histograms of all the sub-images and extracts the cascade histogram feature vector sequence.
Next, in the fourth step, the feature extraction algorithm module 100 extracts the corresponding feature vector of each cascade sequence.
The feature extraction algorithm module 100 extracts the face image feature vector after the feature extraction of the first to fourth steps described above.
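Steps 2 to 4 can be pictured with the block-histogram sketch below, which reuses the cs_lbp and mogc_channels helpers above; the block size, overlap step, and bin count shown here are placeholders rather than the patent's exact settings.

```python
def block_histograms(code_image, block=8, bins=8, step=None):
    """Cut a CS-LBP code image into (possibly overlapping) blocks, histogram each
    block, and concatenate the histograms into one cascade feature vector."""
    step = step or block                   # step < block yields overlapping sub-images
    h, w = code_image.shape
    feats = []
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            patch = code_image[y:y + block, x:x + block]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 16))
            feats.append(hist)
    return np.concatenate(feats).astype(np.float64)

def mogc_cslbp_feature(image):
    """Face image feature vector: CS-LBP block histograms of every MOGC channel."""
    return np.concatenate([block_histograms(cs_lbp(ch)) for ch in mogc_channels(image)])
```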
Here, in the face recognition step, the feature extraction algorithm module 100 may compute the similarity between a test sample and the training samples using the distance function of Equation 2 below.
[Equation 2 - distance function between the test-sample and training-sample histogram feature vectors, provided as an equation image in the original publication]
In Equation 2, T denotes the one-dimensional histogram feature vector of a training sample, S denotes the one-dimensional histogram feature vector of the test sample, P denotes the number of sub-images, and Q denotes the number of sub-image histogram bins; r and i are the indices over P and Q, respectively. When the similarity between the test sample and each training sample is expressed in this way, the nearest-neighbour classifier principle, a simple computation widely used in face recognition, is used to classify the test sample effectively.
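A sketch of this matching stage is given below; a chi-square form is assumed for the distance of Equation 2, since the exact expression appears only as an equation image in the source.

```python
def histogram_distance(s, t, eps=1e-12):
    """Assumed chi-square distance between a test feature vector s and a
    training feature vector t (both are concatenated block histograms)."""
    s, t = np.asarray(s, dtype=np.float64), np.asarray(t, dtype=np.float64)
    return np.sum((s - t) ** 2 / (s + t + eps))

def nearest_neighbour(test_vec, train_vecs, train_labels):
    """Nearest-neighbour classification of a test sample over the training set."""
    distances = [histogram_distance(test_vec, t) for t in train_vecs]
    return train_labels[int(np.argmin(distances))]
```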
In the following, test results and an analysis of the improved multi-channel Gabor filter-based human face recognition method described above are examined. The recognition performance of the algorithms is compared on the standard Yale, ORL, and FERET libraries for LBP, CS-LBP, Gabor + LBP, and the CS-LBP methods based on MFGC and MOGC. It is assumed that, during the tests, all images were cropped and resized to 64 × 64 using bilinear interpolation.
First, regarding parameter selection, the CS-LBP operator and the LBP operator are considered. The block size of the sub-images affects recognition performance: if the block is too large, in the extreme the block size equals the original image size and the benefit of analyzing local regions of the image is lost.

In the opposite extreme, block sizes approaching pixel-level analysis are too small, which increases the sensitivity to image registration, increases the computational complexity, and tends to introduce noise characteristics into the features. At the same time, the choice of the number of histogram bins has a certain influence on recognition performance. To select appropriate block sizes and numbers of histogram bins, tests are performed on the Yale face standard library to find the most suitable block image size and number of histogram bins for CS-LBP and LBP.
The Yale face database contains 15 subjects with 11 images each, giving a total of 165 frontal face images including variations in facial expression and illumination. FIG. 4 shows some sample images of one subject from the Yale face database.
During this test, five images per subject are selected at random for training and the remaining images are used for testing, and the test is repeated ten times.
The test results for the effect of the sub-image size and the number of histogram bins on CS-LBP and LBP are shown in FIGS. 5 and 6, respectively. FIG. 5 shows the effect of the image block size and the number of bins on CS-LBP, and FIG. 6 shows the effect of the image block size and the number of bins on LBP.
As shown in FIG. 5, when the image block is not large, the number of bins has little effect on the recognition rate; conversely, a smaller number of bins lowers the recognition rate. Regardless of whether the number of bins is 16, 8, or 4, the recognition rate is highest at an image block size of 8 × 8, and since the recognition rate curves for 16 and 8 bins are very close, the CS-LBP algorithm is configured with an image block size of 8 × 8 and 8 bins.
Referring to FIG. 6, when the image block size is 4 × 4, the number of bins has little effect on the recognition rate. Since the overall recognition rate curve is best when the number of bins is 256 and the recognition rate is highest at an image block size of 8 × 8, an image block size of 8 × 8 with 256 bins is selected for the LBP algorithm parameters; these are the optimal parameters used for the LBP and CS-LBP algorithms in the tests described below.
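As a usage note, the selected CS-LBP setting maps onto the block_histograms sketch above roughly as follows; the variable names are illustrative only.

```python
face_image = np.random.rand(64, 64)                                  # stand-in for a cropped 64x64 face
cslbp_vec = block_histograms(cs_lbp(face_image), block=8, bins=8)    # 8x8 blocks, 8 bins (CS-LBP setting)
```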
Using the parameters described above, the performance of the algorithms is compared.

The five algorithms are used to extract features from the samples, five samples per subject are used with the nearest-neighbour classifier, and the training and test times required for the comparison are measured. The training and test times are averaged over ten runs. The test results are shown in Table 1, which lists the feature point dimensions and the training and test times.
Method | Feature dimension | Training time (s) | Test time (s)
LBP | 16384 | 14.30 | 0.06
CS-LBP | 512 | 0.27 | 0.03
Gabor + LBP | 373216 | 454.0 | 7.0
MFGC + CS-LBP | 1536 | 12.32 | 0.11
MOGC + CS-LBP | 4096 | 17.10 | 0.27
From Table 1: (1) the feature dimension extracted by the CS-LBP algorithms based on MFGC and MOGC is 1/32 of that of LBP, and the CS-LBP algorithm extracts feature points of a much lower dimension than the LBP algorithm; (2) CS-LBP has a clear advantage in the time required to train on the samples; and (3) the CS-LBP algorithm is also superior in terms of test time, requiring half the time of LBP. The longest-running Gabor + LBP algorithm requires 7.0 seconds, whereas the MOGC + CS-LBP algorithm requires 0.27 seconds. Therefore, CS-LBP extracts lower-dimensional feature points than LBP, has a greater advantage in the time required for the training and test samples, and can perform image feature point extraction more effectively.
Next, the test results on the Yale face database are examined; since the basic details of this library were introduced above, they are omitted here. The test plan randomly selects 3, 4, 5, 6, or 7 images per subject as training samples and uses the rest as test samples, repeating the test ten times. The recognition rate results of the five compared algorithms on the Yale face database are shown in Table 2.
Training sample number | 3 | 4 | 5 | 6 | 7
LBP | 88.17 | 89.33 | 90.78 | 92.40 | 93.00
CS-LBP | 88.83 | 90.67 | 92.33 | 93.33 | 93.50
Gabor + LBP | 89.45 | 91.33 | 93.20 | 94.07 | 95.00
MFGC + CS-LBP | 85.50 | 88.19 | 90.56 | 93.00 | 93.83
MOGC + CS-LBP | 90.08 | 91.43 | 93.22 | 94.27 | 95.00
From Table 2: (1) the CS-LBP algorithm achieved a noticeably better recognition rate than the LBP algorithm, with the recognition rate increasing by almost 1% for training sample sizes of 4 and 6; and (2) the Gabor + LBP and MOGC + CS-LBP algorithms achieved significantly better recognition rates than the LBP algorithm, and improved the recognition rate by about 2% compared to the CS-LBP algorithm.
Next, the test results on the ORL face database are examined. The ORL face database contains 40 subjects, each with 10 images, for a total of 400 face images; each individual's images include slight pose deviations, occlusions, and the like, as shown in FIG. 7. That is, FIG. 7 shows ten sample images of one subject in the ORL database. In the test, 3, 4, 5, or 6 images per subject were randomly selected as training samples and the remaining images were used as test samples. The test results of the five algorithms under this repeated test are shown in Table 3; that is, Table 3 shows the recognition rate results of the five algorithms on the ORL database.
Training sample number | 3 | 4 | 5 | 6
LBP | 86.93 | 91.08 | 94.10 | 96.00
CS-LBP | 88.09 | 91.70 | 94.40 | 96.00
Gabor + LBP | 89.78 | 93.32 | 95.54 | 96.89
MFGC + CS-LBP | 88.36 | 91.79 | 93.50 | 94.69
MOGC + CS-LBP | 90.32 | 93.54 | 95.65 | 97.00
From Table 3: (1) for the different training sample sizes, the CS-LBP algorithm showed a higher recognition rate than the LBP algorithm; (2) the MFGC + CS-LBP and CS-LBP algorithms showed similar recognition rates; and (3) the MOGC + CS-LBP algorithm achieved the best recognition rate.
Next, the test results on the FERET face database are examined. 120 subjects were selected from the FERET face standard library, with 6 images each, for a total of 720 face images. Each selected image includes variations in expression, illumination, and age. FIG. 8 shows one set of six images selected from the FERET face standard library; that is, FIG. 8 shows six sample images of one subject in FERET.
In the test, 2 to 4 images per subject were randomly selected as training samples and the rest were used as test samples. The test comparison results of the five algorithms under the repeated test, averaged over ten runs, are shown in Table 4; that is, Table 4 shows the recognition rate results of the five algorithms on FERET.
Training sample number | 2 | 3 | 4
LBP | 82.79 | 88.86 | 92.33
CS-LBP | 82.21 | 89.00 | 91.79
Gabor + LBP | 86.70 | 91.33 | 92.89
MFGC + CS-LBP | 85.86 | 90.03 | 92.00
MOGC + CS-LBP | 87.37 | 91.42 | 93.00
From Table 4: (1) the LBP and CS-LBP algorithms achieved similar recognition rates; and (2) the Gabor + LBP and MOGC + CS-LBP algorithms still achieve notably better recognition rates.
From the analysis of these test results, the MOGC + CS-LBP algorithm has a lower feature extraction dimensionality than the Gabor + LBP algorithm while achieving a significant recognition rate; that is, the MOGC + CS-LBP algorithm obtains the best recognition rate and improves recognition accuracy.
The present invention can also be embodied as computer-readable code on a computer-readable recording medium. Computer-readable recording media include all kinds of recording devices in which data that can be read by a computer system are stored.

Examples of computer-readable recording media include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices, and also include media implemented in the form of carrier waves (for example, transmission over the Internet).

The computer-readable recording medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion. Functional programs, code, and code segments for implementing the present invention can be easily inferred by programmers skilled in the art to which the present invention pertains.

As described above, the present specification and drawings disclose preferred embodiments of the present invention. Although specific terms are used, they are used only in a general sense to easily explain the technical content of the invention and to aid understanding, and are not intended to limit the scope of the invention. It will be apparent to those of ordinary skill in the art that other modifications based on the technical idea of the present invention may be practiced in addition to the embodiments disclosed herein.
The present invention relates to an improved multi-channel Gabor filter based human face recognition method that combines a Gabor filter with center-symmetric local binary patterns (CS-LBP) in order to extract feature points that are robust to noise and to reduce the high dimensionality of the extracted facial feature points.
By combining the Gabor filter with CS-LBP, the method provides noise-robust feature point extraction while reducing the high dimensionality of the extracted facial feature points.
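As an illustration of why CS-LBP lowers the feature dimension, a minimal sketch of the operator is given below, assuming the common 8-pixel neighbourhood and a small threshold T; the parameter values are illustrative assumptions rather than figures taken from this description. Because only the four centre-symmetric neighbour pairs are compared, each pixel yields a 4-bit code, so a sub-block histogram needs only 16 bins instead of the 256 bins of standard LBP.

```python
# Sketch of the CS-LBP operator on a 3x3 neighbourhood (illustrative only).
import numpy as np

def cs_lbp(image, T=0.01):
    """Return the CS-LBP code image (values 0..15) for a 2-D grayscale array."""
    img = image.astype(np.float64)
    # Offsets of the 8 neighbours; entries i and i + 4 form centre-symmetric pairs.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    # neigh[i][y, x] holds the pixel value at (y + dy, x + dx) for offsets[i].
    neigh = [np.roll(np.roll(img, -dy, axis=0), -dx, axis=1) for dy, dx in offsets]
    code = np.zeros(img.shape, dtype=np.uint8)
    for i in range(4):  # compare the 4 centre-symmetric pairs
        code |= ((neigh[i] - neigh[i + 4]) > T).astype(np.uint8) << i
    return code[1:-1, 1:-1]  # drop the border affected by wrap-around
```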
In addition, the improved multi-channel Gabor filter based human face recognition method according to another embodiment of the present invention combines the Gabor feature images of different orientations and scales to reduce the dimensionality of the feature images, and extracts low-dimensional facial feature points from the feature images on the basis of CS-LBP.
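The paragraph above can be read as the following rough sketch, assuming a Gabor filter bank from scikit-image, a simple per-scale superposition of the orientation responses, and the cs_lbp helper sketched earlier; the scale and orientation counts, block size, and step are illustrative assumptions, not parameters stated here.

```python
# Rough sketch (under the stated assumptions) of combining Gabor feature
# images across orientations at each scale and extracting cascaded CS-LBP
# sub-block histograms from the combined images.
import numpy as np
from scipy import ndimage
from skimage.filters import gabor_kernel

def gabor_amplitude(image, frequency, theta):
    """Amplitude (magnitude) of the complex Gabor response."""
    img = image.astype(np.float64)
    k = gabor_kernel(frequency, theta=theta)
    real = ndimage.convolve(img, np.real(k), mode='reflect')
    imag = ndimage.convolve(img, np.imag(k), mode='reflect')
    return np.hypot(real, imag)

def mogc_cs_lbp_features(image, frequencies=(0.1, 0.2, 0.3, 0.4),
                         n_orient=8, block=16, step=8):
    feats = []
    for f in frequencies:
        # Superpose the amplitude spectra of all orientations at this scale,
        # so each scale contributes a single combined feature image.
        combined = sum(gabor_amplitude(image, f, np.pi * k / n_orient)
                       for k in range(n_orient))
        codes = cs_lbp(combined)                   # 4-bit CS-LBP code image
        h, w = codes.shape
        for y in range(0, h - block + 1, step):    # overlapping sub-images
            for x in range(0, w - block + 1, step):
                hist, _ = np.histogram(codes[y:y + block, x:x + block],
                                       bins=16, range=(0, 16))
                feats.append(hist)
    # Cascade all sub-block histograms into a single feature vector.
    return np.concatenate(feats)
```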
Furthermore, in the improved multi-channel Gabor filter based human face recognition method according to another embodiment of the present invention, the combination of the Gabor filter with CS-LBP reduces the feature dimensionality, storage space, and computation time compared with the conventional Gabor filter and LBP approach, while achieving high face recognition accuracy.
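As a rough, assumption-based illustration of the saving (the exact scale and orientation counts are not given in this passage): with, say, 5 scales and 8 orientations, a conventional Gabor + LBP pipeline keeps 5 × 8 = 40 filtered images, each described by 256-bin LBP histograms, i.e. 40 × 256 = 10,240 bins per sub-block, whereas combining the 8 orientations at each scale and coding with CS-LBP leaves 5 combined images with 16-bin histograms, i.e. 5 × 16 = 80 bins per sub-block — roughly a 128-fold reduction in histogram length and storage.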

Claims (4)

  1. An improved multi-channel Gabor filter based human face recognition method, comprising:
    a first step of obtaining the feature images (Figure PCTKR2017001886-appb-I000034) ((Figure PCTKR2017001886-appb-I000035), (Figure PCTKR2017001886-appb-I000036)), where the input is the face images (Figure PCTKR2017001886-appb-I000032) of N persons (N being a natural number of 2 or more) and the output is the images (Figure PCTKR2017001886-appb-I000033) represented by the cascaded statistical particle histogram feature vectors;
    a second step of partitioning each (Figure PCTKR2017001886-appb-I000037) into a plurality of overlapping sub-images in the same manner; and
    a third step of generating the histograms of all sub-images of (Figure PCTKR2017001886-appb-I000038) and extracting the cascaded histogram feature vector sequence.
  2. The method according to claim 1, further comprising, after the third step,
    a fourth step of extracting the corresponding eigenvector of the cascade sequence of each (Figure PCTKR2017001886-appb-I000039).
  3. The method according to claim 2, further comprising, after the fourth step,
    a fifth step of extracting a face image feature vector after the feature extraction of the first to fourth steps.
  4. The method according to claim 1, wherein the first step comprises:
    a step 1-1 of Gabor-transforming the face image (Figure PCTKR2017001886-appb-I000040) to obtain the Gabor amplitude spectra;
    a step 1-2 of obtaining (Figure PCTKR2017001886-appb-I000041) for the n different amplitude spectra by superposition; and
    a step 1-3 of obtaining the image (Figure PCTKR2017001886-appb-I000042) for each CS-LBP code.
PCT/KR2017/001886 2017-02-15 2017-02-21 Human face recognition method based on improved multi-channel cabor filter WO2018151357A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020170020770A KR101993729B1 (en) 2017-02-15 2017-02-15 FACE RECOGNITION Technique using Multi-channel Gabor Filter and Center-symmetry Local Binary Pattern
KR10-2017-0020770 2017-02-15

Publications (1)

Publication Number Publication Date
WO2018151357A1 true WO2018151357A1 (en) 2018-08-23

Family

ID=63170362

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/001886 WO2018151357A1 (en) 2017-02-15 2017-02-21 Human face recognition method based on improved multi-channel cabor filter

Country Status (2)

Country Link
KR (1) KR101993729B1 (en)
WO (1) WO2018151357A1 (en)


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109902581B (en) * 2019-01-28 2020-11-10 重庆邮电大学 Single-sample partially-occluded face recognition method based on multi-step weighting
CN110084135B (en) * 2019-04-03 2024-04-23 平安科技(深圳)有限公司 Face recognition method, device, computer equipment and storage medium
CN110705375B (en) * 2019-09-11 2022-07-12 西安交通大学 Skeleton detection system and method for noise image
CN110781800B (en) * 2019-10-23 2022-04-12 北京远舢智能科技有限公司 Image recognition system
CN111126300B (en) * 2019-12-25 2023-09-08 成都极米科技股份有限公司 Human body image detection method and device, electronic equipment and readable storage medium
CN111539271B (en) * 2020-04-10 2023-05-02 哈尔滨新光光电科技股份有限公司 Face recognition method based on wearable equipment and wearable face detection equipment for frontier defense
CN111967542B (en) * 2020-10-23 2021-01-29 江西小马机器人有限公司 Meter identification secondary positioning method based on depth feature points
CN113626553B (en) * 2021-07-15 2024-02-20 人民网股份有限公司 Cascade binary Chinese entity relation extraction method based on pre-training model

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080107311A1 (en) * 2006-11-08 2008-05-08 Samsung Electronics Co., Ltd. Method and apparatus for face recognition using extended gabor wavelet features
KR20100096686A (en) * 2009-02-25 2010-09-02 오리엔탈종합전자(주) Illumination-robust face recognition based on illumination-separated eigenfaces
KR20110067480A (en) * 2009-12-14 2011-06-22 한국전자통신연구원 Sensor actuator node and method for conditions meeting using thereof
US20130004028A1 (en) * 2011-06-28 2013-01-03 Jones Michael J Method for Filtering Using Block-Gabor Filters for Determining Descriptors for Images
KR101314293B1 (en) * 2012-08-27 2013-10-02 재단법인대구경북과학기술원 Face recognition system robust to illumination change

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102199094B1 (en) * 2014-05-26 2021-01-07 에스케이텔레콤 주식회사 Method and Apparatus for Learning Region of Interest for Detecting Object of Interest

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109753910A (en) * 2018-12-27 2019-05-14 北京字节跳动网络技术有限公司 Crucial point extracting method, the training method of model, device, medium and equipment
CN109753910B (en) * 2018-12-27 2020-02-21 北京字节跳动网络技术有限公司 Key point extraction method, model training method, device, medium and equipment

Also Published As

Publication number Publication date
KR20180094453A (en) 2018-08-23
KR101993729B1 (en) 2019-06-27

Similar Documents

Publication Publication Date Title
WO2018151357A1 (en) Human face recognition method based on improved multi-channel cabor filter
CN103886301B (en) Human face living detection method
US8027522B2 (en) Image recognition system and recognition method thereof and program
US7483569B2 (en) Reduced complexity correlation filters
CN102938065B (en) Face feature extraction method and face identification method based on large-scale image data
Chen et al. Invariant pattern recognition using contourlets and AdaBoost
CN112464730B (en) Pedestrian re-identification method based on domain-independent foreground feature learning
WO2014194620A1 (en) Method, module, apparatus and system for extracting, training and detecting image characteristic
Çevik et al. A novel high-performance holistic descriptor for face retrieval
Mazloom et al. Combinational method for face recognition: Wavelet, PCA and ANN
Lian Pedestrian detection using quaternion histograms of oriented gradients
JPH0520442A (en) Face picture collation device
CN113111797B (en) Cross-view gait recognition method combining self-encoder and view transformation model
KR20110094947A (en) Door on/off switching system using face recognition and detection method therefor
Al-Abaji et al. The using of PCA, wavelet and GLCM in face recognition system, a comparative study
CN109598262A (en) A kind of children's facial expression recognizing method
Rahman et al. Curvelet texture based face recognition using principal component analysis
Rahman et al. Performance of mpeg-7 edge histogram descriptor in face recognition using principal component analysis
Mazloom et al. Face recognition using wavelet, pca, and neural networks
Schwartz Scalable people re-identification based on a one-against-some classification scheme
CN111709312A (en) Local feature face recognition method based on joint main mode
Chen et al. Palmprint classification using contourlets
Irhebhude et al. Exploring the Efficacy of Face Recognition Algorithm on Different Skin Colors.
Zhou et al. Cross-channel similarity based histograms of oriented gradients for color images
Mahmoud et al. 2D-multiwavelet transform 2D-two activation function wavelet network-based face recognition

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17896385

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17896385

Country of ref document: EP

Kind code of ref document: A1