KR101921717B1 - Face recognition method and facial feature extraction method using local contour patten - Google Patents

Face recognition method and facial feature extraction method using local contour patten

Info

Publication number
KR101921717B1
Authority
KR
South Korea
Prior art keywords
face
value
mask
extracting
image
Prior art date
Application number
KR1020150051821A
Other languages
Korean (ko)
Other versions
KR20160122323A (en)
Inventor
이승호
전태준
Original Assignee
(주)리얼아이즈
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)리얼아이즈 filed Critical (주)리얼아이즈
Priority to KR1020150051821A priority Critical patent/KR101921717B1/en
Publication of KR20160122323A publication Critical patent/KR20160122323A/en
Application granted granted Critical
Publication of KR101921717B1 publication Critical patent/KR101921717B1/en

Classifications

    • G06K9/00221
    • G06K9/00268
    • G06K9/6276

Abstract

A face feature extraction method using a local contour pattern and a face recognition method using the same are disclosed. The method includes acquiring a face image and adjusting its size, extracting edge components with characteristics similar to human vision by applying a mask to the adjusted face image, extracting facial features by applying a local contour pattern to the extracted edge components, and classifying faces using the extracted facial features. Face recognition can therefore be performed by extracting features having characteristics similar to human vision.

Description

TECHNICAL FIELD: The present invention relates to a facial feature extraction method using a local contour pattern, and a face recognition method using the same.

More particularly, the present invention relates to a facial feature extraction method using a local contour pattern that extracts features with characteristics similar to human vision, and a face recognition method using the same.

Recently, the need for biometric information has grown due to the massive personal-information leaks that have occurred at various institutions. As global networks built on the Internet have formed, the theft of important personal information by others has become a serious problem. One way to address this problem is biometrics, which uses a person's physical information to verify identity, and interest in it is increasing; biometric technology is already used in various systems, such as access management systems. Among biometric technologies, face recognition has an advantage in convenience because, unlike iris or fingerprint recognition, the recognition process requires no special physical contact or action from the user. It also has the advantage of being applicable to various fields such as access control, unmanned monitoring using CCTV, and entertainment.

On the other hand, a person's face does not exist as a single image; it is influenced by various environmental factors and appears as many different images. First, the face changes with the person's emotional state. Second, the appearance of the face changes little by little over time. Third, illumination introduces a great deal of noise into the face image, and there are various additional obstructions (noise, beards, plastic surgery, etc.). Because of these factors, extracting human facial features is difficult. Therefore, to commercialize a face recognition system, a robust recognition technology that can secure high recognition performance under various environmental factors is indispensable.

Korean Patent Publication No. 10-2000-0044789 (published July 15, 2000); Korean Patent Publication No. 10-2000-0007799 (published February 2, 2000).

SUMMARY OF THE INVENTION: It is an object of the present invention to provide a facial feature extraction method using a local contour pattern that extracts features having characteristics similar to human vision, and a face recognition method using the same.

According to an aspect of the present invention, there is provided a face recognition method comprising: acquiring a face image and adjusting its size; extracting edge components with characteristics similar to human vision by applying a mask to the adjusted face image; extracting facial features by applying a local contour pattern to the extracted edge components; and classifying faces using the extracted facial features.

Here, the mask is a LoG (Laplacian of Gaussian) mask.

At this time, applying the LoG mask includes removing noise of the face image by adjusting the sigma value.

At this time, the step of applying the LoG mask includes detecting illumination changes and applying a different mask size accordingly.

In this case, the step of using the local contour pattern may include searching each pixel in the vertical, horizontal, and diagonal directions, converting each direction into a binary pattern by applying a threshold to the search value, converting the binary pattern value into a decimal number, and applying the converted value to the center pixel as a new label value.

At this time, the threshold value is 3 for the small mask with a Gaussian sigma of 0.3, and 2 for the large mask with a Gaussian sigma of 1.8.

At this time, the new label value has a value of 0 to 255 when the number of search directions n is 8.

At this time, the extracting step includes dividing the face image into predetermined regions, calculating a histogram for each region, and extracting a sum of the histograms.

In this case, the classifying step includes analyzing the similarity between the feature vectors extracted as the facial features, and classifying the face class from the analyzed feature vectors using the nearest neighbor classifier.

At this time, the classifying step includes calculating the distance or similarity between the extracted feature vectors by Euclidean distance, histogram intersection, or chi-square statistics.

When the facial feature extraction method using the local contour pattern according to the present invention and the face recognition method using it are applied, faces can be recognized by extracting features having characteristics similar to human vision.

FIG. 1 is a block diagram illustrating the configuration of a terminal according to an embodiment of the present invention.
FIG. 2 is a flowchart illustrating a face recognition method according to an embodiment of the present invention.
FIG. 3 is a general outline of face recognition according to an embodiment of the present invention.
FIG. 4 is an exemplary view illustrating the process of normalizing a face image.
FIG. 5 is an exemplary view showing vertical/horizontal and diagonal Laplacian masks.
FIG. 6 is an exemplary view showing a Laplacian mask in all directions.
FIG. 7 is an exemplary diagram showing the Laplacian differential method.
FIG. 8 is a diagram illustrating an example of a Gaussian image from which image noise has been removed.
FIG. 9 is an example of extracting an edge having characteristics similar to human vision.
FIG. 10 shows the function change according to the sigma value of the LoG mask.
FIG. 11 shows the basic LBP calculation method.
FIG. 12 is a general outline of a face feature extraction method using a local contour pattern according to an embodiment of the present invention.
FIG. 13 shows edge detection results according to the sigma value in a general image.
FIGS. 14 and 15 show edge detection results according to the mask size in illumination-change images.
FIG. 16 is an example of searching for pixels in the vertical, horizontal, and diagonal directions.
FIG. 17 shows the result of searching for an edge in eight directions.
FIG. 18 is an exemplary diagram showing the LCP algorithm.
FIG. 19 shows a result of comparing the LBP and LCP algorithms.
FIG. 20 shows a result of a comparative experiment in which LBP and LCP are applied to face images in a lighting-change environment.
FIG. 21 is a result of applying the methods proposed in the present invention to face images with various lighting-change environments.
FIG. 22 is an example of dividing an image.
FIG. 23 is an exemplary diagram showing a feature vector.
FIG. 24 is an exemplary diagram showing K-NN.
FIG. 25 is an exemplary diagram showing an SVM error.
FIG. 26 is an exemplary diagram illustrating SVM advantages.
FIG. 27 is an example of analyzing the similarity of all data.

Hereinafter, preferred embodiments according to the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a block diagram illustrating the configuration of a terminal according to an embodiment of the present invention.

The terminal 100 comprises a processor 110 for executing a face recognition program, a memory 120 for storing the face recognition program and face images, a camera 130 for photographing a face, and a display (LCD) for showing the recognition result. The memory stores the operating system and the face recognition program; the operating system provides the operating environment in which the face recognition program is executed. The face recognition program is executed by the processor 110, and the terminal 100 operates accordingly.

The processor 110 executes the executable code of the face recognition program in the memory 120; the memory 120 stores the face recognition execution code and the face data; the camera 130 photographs the face image; and the display shows the recognition result. One embodiment of the terminal 100 based on this device configuration is as follows.

The processor 110 acquires a face image and adjusts its size, extracts edge components similar to human vision by applying a mask to the adjusted face image, extracts facial features using the local contour pattern for the extracted edge components, classifies faces using the extracted facial features, and displays the face recognition result. The configuration that enables this operation of the terminal 100 is described below.

The processor 110 acquires the face image and adjusts its size. In another embodiment, the processor 110 may create a tilted image by correcting the face image for an up-and-down tilt angle: a CCTV camera is installed on the ceiling, so the face is photographed at a tilt, and the processor 110 can tilt the comparison face image to match the photographing angle. The processor 110 may use a cosine transform to produce the tilted image.

The processor 110 applies a mask to the adjusted face image to extract edge components similar to human vision. The mask is a LoG (Laplacian of Gaussian) mask. In applying the LoG mask, the processor 110 adjusts the sigma value to remove noise from the face image, and detects illumination changes to apply a different mask size.

The processor 110 extracts facial features using the local contour pattern for the extracted edge components. Using the local contour pattern, the processor searches each pixel in the vertical, horizontal, and diagonal directions, transforms each direction into a binary pattern by applying a threshold to the search value, converts the binary pattern value into a decimal number, and applies the converted value to the center pixel as a new label value. The threshold is 3 for the small mask with a Gaussian sigma of 0.3, and 2 for the large mask with a Gaussian sigma of 1.8. The new label value ranges from 0 to 255 when the number of search directions n is 8.

When extracting facial features, the processor 110 divides the face image into predetermined regions, calculates a histogram for each region, and extracts the sum of the histograms.

The processor 110 classifies faces using the extracted facial features. The processor 110 analyzes the similarity between the feature vectors extracted as facial features and classifies the face class from the analyzed feature vectors using the nearest neighbor classifier. The processor 110 calculates the distance or similarity between extracted feature vectors by Euclidean distance, histogram intersection, or chi-square statistics.

The processor 110 displays the face recognition result, classified into a face class, on the LCD. The processor 110 may display both the photographed face image and the face recognition result, and outputs an identification code corresponding to the face as the recognition result; the identification code is the identification value corresponding to the matched comparison face image. After recognition is finished, the processor 110 can update the photographed face image into the comparison target face image, keeping the face database fresh.

FIG. 2 is a flowchart illustrating a face recognition method according to an embodiment of the present invention.

The following describes how the terminal recognizes faces.

The terminal includes a program memory for storing a program, a data memory for storing data, and a processor for executing the program.

The program memory stores a program comprising a step 210 of acquiring a face image and adjusting its size, a step 220 of extracting edge components similar to human vision by applying a mask to the adjusted face image, a step 230 of extracting facial features using the local contour pattern for the extracted edge components, and a step 240 of classifying faces using the extracted facial features, followed by a step of displaying the face recognition result after the face classification step.

The terminal executes the program stored in the program memory on the processor.

The procedures executed in the terminal are described below in chronological order.

FIG. 3 is a general outline of face recognition according to an embodiment of the present invention.

The terminal processes the face recognition by applying a face feature extraction technique using a local contour pattern (LCP).

First, the terminal acquires and aligns the face image (210).

Second, the terminal detects edges using a LoG (Laplacian of Gaussian) mask (220). The terminal detects accurate edges having characteristics similar to human vision by using the Laplacian, which is robust against changes in facial expression and illumination, together with a Gaussian mask that removes noise.

Third, the terminal extracts facial features using the local contour pattern (230). To represent the edge components of the face detected by the LoG mask efficiently, the terminal searches for edges in n directions around the center pixel, expresses the result as a binary value, converts it to a decimal number, and applies the label value to the center pixel; the larger the edge component at the center pixel, the brighter the label value. Next, the terminal extracts the face histogram: it divides the face image into a grid for efficient representation, generates a histogram for each cell, and produces the face histogram by combining all the histograms.

Fourth, the terminal classifies the face class (240). The terminal measures the similarity between the training image and the test image using the Nearest Neighbor Classifier technique, and classifies the face image by selecting the class with the highest similarity.

In summary, the terminal acquires and aligns face images, detects edges using the LoG mask, extracts facial features using the local contour pattern, and classifies the face class.

1. Face image acquisition and alignment (210)

The terminal obtains and aligns the face image. The terminal detects the face area in a general image using a camera.

FIG. 4 is an exemplary view illustrating the process of normalizing a face image.

The terminal converts the 256-gray-level image into floating-point form, locates the face area in the image using eye coordinates, cuts out only the face region with an elliptical mask, equalizes the histogram, and normalizes the pixels.

The terminal aligns all facial images to a size of 130x150 to maintain consistency of the facial images.
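As an illustration, this normalization pipeline can be sketched in a few lines. The sketch below assumes OpenCV and NumPy; the eye-coordinate cropping is elided, and the function name, ellipse geometry, and normalization constants are illustrative assumptions rather than the patent's exact procedure.

```python
import cv2
import numpy as np

def normalize_face(gray):
    """Sketch of the normalization step: resize, elliptical face mask,
    histogram equalization, then pixel normalization in floating point."""
    img = cv2.resize(gray, (130, 150))       # align every face to 130x150
    # Elliptical mask keeps only the face region (geometry is illustrative).
    mask = np.zeros((150, 130), np.uint8)
    cv2.ellipse(mask, (65, 75), (60, 70), 0, 0, 360, 255, -1)
    eq = cv2.equalizeHist(img)               # histogram equalization
    out = eq.astype(np.float32)              # 256 gray levels -> floating point
    out[mask == 0] = 0.0
    # Normalize the pixels inside the mask to zero mean and unit variance.
    face = out[mask > 0]
    out[mask > 0] = (face - face.mean()) / (face.std() + 1e-8)
    return out
```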

2. Edge Detection Using Differential Computation

The terminal detects edges as part of image processing, using the differential operation, a mathematical tool that measures the amount of spatial variation. For an image $f$, the standard tool for finding the edge strength and direction at a position $(x, y)$ is the gradient $\nabla f$, defined as Equation (1.1).

$\nabla f \equiv \mathrm{grad}(f) = \begin{bmatrix} g_x \\ g_y \end{bmatrix} = \begin{bmatrix} \partial f / \partial x \\ \partial f / \partial y \end{bmatrix}$ - (1.1)

The vector of Equation (1.1) points in the direction in which $f$ changes most rapidly at the position $(x, y)$. The magnitude of the gradient vector is defined as Equation (1.2), and the edge direction angle given by the two gradient components is defined as Equation (1.3).

$M(x, y) = \mathrm{mag}(\nabla f) = \sqrt{g_x^2 + g_y^2}$ - (1.2)

$\alpha(x, y) = \tan^{-1}\left(g_y / g_x\right)$ - (1.3)

The terminal detects edges through differential operations in the spatial domain by performing spatial filtering with a derivative mask; this spatial filtering is called convolution. For spatial filtering, the terminal multiplies an m × n mask with the image region it covers, as shown in Equation (1.4), and sums the results to obtain the mask response at the center pixel of the region.

$g(x, y) = \sum_{s=-a}^{a} \sum_{t=-b}^{b} w(s, t)\, f(x+s, y+t)$ - (1.4)

where $w$ denotes the mask coefficients and, for a 3 × 3 neighborhood, the image pixels under the mask are labeled $z_1$ (top-left) through $z_9$ (bottom-right).
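As a concrete illustration of Equation (1.4), spatial filtering can be written directly as a double loop. This is a minimal, unoptimized sketch in NumPy; note that Equation (1.4) as written is a correlation, which coincides with convolution for the symmetric masks used below.

```python
import numpy as np

def spatial_filter(f, w):
    """Equation (1.4): at each pixel, multiply the m x n mask w with the
    image region it covers and sum the products to get the mask response."""
    a, b = w.shape[0] // 2, w.shape[1] // 2
    fp = np.pad(f.astype(np.float64), ((a, a), (b, b)), mode='edge')
    g = np.zeros(f.shape, dtype=np.float64)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = np.sum(w * fp[x:x + w.shape[0], y:y + w.shape[1]])
    return g
```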

Typical edge detection methods using first-order derivatives include the Roberts and Sobel masks, and each mask has its own characteristics. First, the Roberts mask is defined as Equation (1.5). The Roberts mask is small and operates at very high speed, so it can be used effectively; however, it cannot average out outlying values and has the disadvantage of being sensitive to noise.

$g_x = z_9 - z_5, \qquad g_y = z_8 - z_6$ - (1.5)

corresponding to the 2 × 2 masks $\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}$ and $\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$.

Second, the horizontal and vertical edge detection of the Sobel mask is defined as Equation (1.6). The Sobel mask extracts edges in all directions and has the advantage of averaging out outlying values relatively well. However, it is comparatively slow in operation, and noisy regions can be brightened strongly enough to be recognized as contours.

$g_x = (z_7 + 2z_8 + z_9) - (z_1 + 2z_2 + z_3), \qquad g_y = (z_3 + 2z_6 + z_9) - (z_1 + 2z_4 + z_7)$ - (1.6)
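The masks of Equations (1.5) and (1.6) can be written out explicitly and applied with the spatial_filter() sketch above. The orientation convention (which mask is gx and which is gy) varies between texts and is an assumption here.

```python
import numpy as np

# Roberts cross masks, Equation (1.5).
ROBERTS_GX = np.array([[-1, 0],
                       [ 0, 1]], dtype=np.float64)
ROBERTS_GY = np.array([[ 0, -1],
                       [ 1,  0]], dtype=np.float64)

# Sobel masks, Equation (1.6).
SOBEL_GX = np.array([[-1, -2, -1],
                     [ 0,  0,  0],
                     [ 1,  2,  1]], dtype=np.float64)
SOBEL_GY = np.array([[-1, 0, 1],
                     [-2, 0, 2],
                     [-1, 0, 1]], dtype=np.float64)

def sobel_magnitude(f):
    """Gradient magnitude of Equation (1.2) computed with the Sobel masks."""
    gx = spatial_filter(f, SOBEL_GX)
    gy = spatial_filter(f, SOBEL_GY)
    return np.sqrt(gx ** 2 + gy ** 2)
```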

Edge detectors using the second derivative have the advantage of detecting accurate edges, forming connected closed curves without the detected edges being cut off. However, they are sensitive to noise and yield only the strength of the contour, not its direction. A typical edge detection method using the second-order derivative is the Laplacian operator, defined as Equation (1.7). To express Equation (1.7) in discrete form, the x-direction term is written as Equation (1.8); similarly, the y-direction term is written as Equation (1.9). The discrete Laplacian of the two variables then follows from the preceding three equations as Equation (1.10).

$\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$ - (1.7)

$\frac{\partial^2 f}{\partial x^2} = f(x+1, y) + f(x-1, y) - 2f(x, y)$ - (1.8)

$\frac{\partial^2 f}{\partial y^2} = f(x, y+1) + f(x, y-1) - 2f(x, y)$ - (1.9)

$\nabla^2 f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)$ - (1.10)

FIG. 5 is an exemplary view showing vertical/horizontal and diagonal Laplacian masks.

The discrete Laplacian of Equation (1.10) corresponds to the mask shown in FIG. 5(a), and adding the diagonal directions to FIG. 5(a) gives the mask shown in FIG. 5(b). The Laplacian operation emphasizes edges in all directions: values corresponding to low-frequency components are suppressed, while high-frequency components appear more clearly. However, it has the drawback of being sensitive to noise.
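In code, the two masks of FIG. 5 are small constant arrays; the sketch below shows the usual 4-neighbor and 8-neighbor forms (the sign convention, negative vs. positive center, varies between texts).

```python
import numpy as np

# FIG. 5(a): vertical/horizontal Laplacian mask of Equation (1.10).
LAPLACIAN_VH = np.array([[ 0,  1,  0],
                         [ 1, -4,  1],
                         [ 0,  1,  0]], dtype=np.float64)

# FIG. 5(b): Equation (1.10) extended with the diagonal directions.
LAPLACIAN_ALL = np.array([[ 1,  1,  1],
                          [ 1, -8,  1],
                          [ 1,  1,  1]], dtype=np.float64)

# Applying either mask with spatial_filter() yields FIG. 6(b)-style output:
# 0 in constant-brightness regions, negative values on the dark side of an
# edge, positive values on the bright side.
```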

FIG. 6 is an exemplary view showing a Laplacian mask in all directions.

The terminal detects edge components as shown in FIG. 6(b) by applying the vertical/horizontal Laplacian mask of FIG. 5(a) to all pixels, as shown in FIG. 6(a). In FIG. 6(b), the value is 0 in regions of constant brightness, pixel components in dark areas take negative values, and pixel components in bright areas take positive values.

FIG. 7 is an exemplary diagram showing the Laplacian differential method.

The terminal differentiates face images using the Laplacian differential method. As shown in FIG. 7, the facial elements obtained by differentiation have very strong edge components. The terminal uses the Laplacian differential method to detect accurate edges in the vertical, horizontal, and diagonal directions.

3. Edge detection using LoG (Laplacian of Gaussian) (220)

The facial elements carry various edge components in the face image; their position, shape, size, and surface pattern do not change easily and carry a great deal of information.

However, the texture information of flat areas of the face image, such as the cheeks and forehead, is subject to many changeable external factors. Therefore, the present invention uses a Laplacian mask to minimize the influence of illumination on the face image and to emphasize the important edge components of the face in the vertical, horizontal, and diagonal directions. However, since the Laplacian mask is sensitive to noise, the present invention uses an edge detection method based on a LoG mask combining the Laplacian and the Gaussian to improve the accuracy of edge detection. The LoG mask is implemented by applying a Gaussian mask first and then a Laplacian mask.

FIG. 8 is a diagram illustrating an example of a Gaussian image from which image noise has been removed.

The Gaussian mask is defined as Equation (1.11) and yields a Gaussian image from which image noise has been removed, as shown in FIG. 8. The Gaussian distribution is the most common distribution used across the sciences and serves as a noise-removal filter in image processing. The width of the Gaussian mask is controlled by the standard deviation: the larger the standard deviation, the greater the noise-removal effect, but the more blurred the image becomes.

$G(x, y) = e^{-\frac{x^2 + y^2}{2\sigma^2}}$ - (1.11)

The terminal removes most of the noise by applying the Gaussian mask and then detects the edges of the face image using the Laplacian mask. With most of the noise removed, only strong edges appear.
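A minimal sketch of this two-step realization of the LoG, assuming OpenCV; the kernel size and sigma are illustrative defaults, corresponding to the parameters tuned later in this document.

```python
import cv2

def log_edges(gray, sigma=1.0, ksize=7):
    """LoG as described above: a Gaussian mask first to remove noise,
    then a Laplacian mask to emphasize the remaining strong edges."""
    smoothed = cv2.GaussianBlur(gray, (ksize, ksize), sigma)
    return cv2.Laplacian(smoothed, cv2.CV_64F)
```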

FIG. 9 is an example of extracting an edge having characteristics similar to human vision.

Edges having characteristics similar to human vision can be extracted, as shown in FIG. 9(b). The LoG mask is defined as Equation (1.12), the combination of Equation (1.7) and Equation (1.11). The mask size of the LoG should be chosen as the smallest odd integer of about 6σ or more; if a smaller mask is selected, the LoG function is truncated.

$\nabla^2 G(x, y) = \frac{x^2 + y^2 - 2\sigma^2}{\sigma^4} e^{-\frac{x^2 + y^2}{2\sigma^2}}$ - (1.12)

The LoG mask not only resembles the characteristics of the human visual system, but also has the important advantage of being isotropic in response to changes in brightness in all directions. Therefore, in the present invention, an edge detection method using a LoG mask is used.
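Alternatively, Equation (1.12) can be sampled directly onto a mask whose size follows the 6σ rule above. A sketch in NumPy; the zero-mean correction on the last line is a common practical adjustment, not something stated in this document.

```python
import numpy as np

def log_mask(sigma):
    """Sample Equation (1.12) on a grid; the mask size is the smallest odd
    integer >= 6*sigma so that the LoG function is not truncated."""
    size = int(np.ceil(6 * sigma))
    if size % 2 == 0:
        size += 1                              # force an odd mask size
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x ** 2 + y ** 2
    mask = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return mask - mask.mean()                  # zero-sum: flat regions -> 0
```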

FIG. 10 shows the function change according to the sigma value of the LoG mask.

Meanwhile, the sigma value can be varied to detect different kinds of edges. FIG. 10 shows how the LoG function changes with the sigma value. When the sigma value is large, noise removal is effective but edges are detected broadly; conversely, when the sigma value is small, the result is more affected by noise, but sharply sloped, accurate edges are detected.

4. Facial feature extraction using local contour pattern (230)

Finding efficient descriptors that represent facial image information well is an important issue in face recognition. A good descriptor should be easy to compute, have low variance among images of the same person and high variance across images of different people, and be minimally affected by factors such as lighting and noise. Recently, the LBP technique, originally used for texture analysis, has become widely used as a facial representation in the face recognition field.

Among the facial feature extraction methods, the LBP technique is defined as Equation (1.13).

$LBP_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c)\, 2^p, \qquad s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}$ - (1.13)

FIG. 11 shows the basic LBP calculation method.

The terminal extracts a binary set by comparing the gray-scale values of the neighboring pixels against the center pixel, converts the extracted binary set into a decimal number, and uses it as the label value of the center pixel.

The LBP calculation is simple, and extending it in circular form around the center pixel has the advantage of extracting various feature vectors of the face. However, because the LBP calculation uses a simple magnitude comparison, it is highly susceptible to changes in illumination and noise.
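For reference, the basic 3 × 3 LBP of Equation (1.13) can be computed as below. This is a sketch; the neighbor ordering, and hence the bit weights, is a convention.

```python
def lbp_label(block):
    """Equation (1.13) for one 3x3 block: compare the 8 neighbors with the
    center pixel and read the resulting bits as a decimal label (0..255)."""
    c = block[1][1]
    neighbors = [block[0][0], block[0][1], block[0][2], block[1][2],
                 block[2][2], block[2][1], block[2][0], block[1][0]]
    return sum(1 << p for p, g in enumerate(neighbors) if g >= c)
```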

Accordingly, in the present invention, only the significant edge components are detected from the face using a LoG mask, which is little affected by illumination change and noise, and the local contour pattern technique is applied to represent the edge components accurately as label values.

FIG. 12 is a general outline of a face feature extraction method using a local contour pattern according to an embodiment of the present invention.

The facial feature extraction method is divided into three processes: Laplacian Gaussian mask application process, local contour pattern calculation process, and histogram generation process.

① Laplacian Gaussian mask application process

The Laplacian Gaussian mask application process emphasizes the edge components in the vertical, horizontal, and diagonal directions to represent the outline of the face image accurately. The facial elements carry various edge components in the face image, do not change easily, and carry a great deal of information. Conversely, the texture information of flat areas is distorted into non-edge components by noise, which interferes with facial feature extraction. Therefore, this process detects edges using a LoG mask, which is little affected by illumination changes and noise.

The terminal detects the edge using the LoG mask. The terminal can extract various edge components by using the mask size and the sigma value as parameters.

FIG. 13 shows edge detection results according to the sigma value in a general image.

First, increasing the sigma value has the advantage of eliminating noise in the face image. However, detailed edges are not detected.

FIGS. 14 and 15 show edge detection results according to the mask size in illumination-change images.

Second, in the present invention, for an image in which the illumination change is not severe, as shown in FIG. 14, a small mask is applied to detect detailed edges.

However, for an image with large illumination changes, as in FIG. 15(a), a small mask cannot detect edges under the illumination, as shown in FIG. 15(b), so a large mask is applied.

Therefore, in the Laplacian Gaussian mask application process, edges are detected by selecting the sigma value and mask size that emphasize facial features while minimizing the influence of illumination change and noise.

② Local contour pattern calculation process

The local contour pattern calculation process searches each pixel in the vertical, horizontal, and diagonal directions, applies a threshold to the detected values to convert each direction into a binary pattern, converts the binary pattern value into a decimal number, and applies the result to the center pixel as a new label value.

FIG. 16 is an example of searching for pixels in the vertical, horizontal, and diagonal directions.

The terminal defines arbitrary image blocks as shown in FIG. 16 for calculating a contour pattern, and searches for pixels in the directions of vertical, horizontal, and diagonal lines with respect to the center pixel.

The terminal searches the pixels in the n directions, compares them with the center pixel, and accumulates the search results whenever a pixel in a given direction is less than or equal to the center pixel. This is defined as Equation (1.14).

$A_n = \sum_{i=0}^{R_{max}-R} s(g_c - g_{n,R+i}), \qquad s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}$ - (1.14)

FIG. 17 shows the result of searching for an edge in eight directions.

In Equation (1.14), $g_c$ denotes the center pixel and $g_{n,R+i}$ denotes the pixel value of the neighboring pixel at a distance of $R+i$ in direction $n$. FIG. 17 shows the result of searching for edges up to a distance of 5 in 8 directions using Equation (1.14).

As shown in FIG. 17(a), when the center pixel has a dark value, the neighboring pixel values in each direction are likely to be brighter than the center pixel, so the accumulated value is small. The function that applies a threshold to the accumulated result of the n-direction edge search is defined as Equation (1.15); by applying the threshold, the value for each of the n directions is converted into a binary number of 0 or 1, as shown in FIG. 17(b). Here the threshold is 3 for the small mask with a Gaussian sigma of 0.3, and 2 for the large mask with a Gaussian sigma of 1.8.

$b_n = \begin{cases} 1, & A_n \ge t \\ 0, & A_n < t \end{cases}$ - (1.15)

The terminal applies Equation (1.15) to each direction around the center pixel to obtain the thresholded binary pattern, converts the binary pattern into a decimal number, and assigns it to the center pixel as a new label value. The expression assigning the new label value to the center pixel is defined by Equation (1.16); the value converted by Equation (1.16) ranges from 0 to 255 when n is 8.

$LCP = \sum_{n=0}^{N-1} b_n\, 2^n$ - (1.16)
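Putting Equations (1.14) to (1.16) together, the LCP label of one pixel can be sketched as follows. The exact indexing of the search distances between R and Rmax is an assumption read off the prose above, and the caller must keep (x, y) at least Rmax pixels from the image border.

```python
# The 8 search directions: vertical, horizontal, and the two diagonals.
DIRECTIONS = [(-1, 0), (-1, 1), (0, 1), (1, 1),
              (1, 0), (1, -1), (0, -1), (-1, -1)]

def lcp_label(img, x, y, R=1, R_max=5, t=2):
    """Equations (1.14)-(1.16): per direction, count how many pixels at
    distances R..R_max are <= the center pixel (1.14), threshold the count
    into one bit (1.15), and read the 8 bits as a decimal label (1.16)."""
    g_c = img[x][y]
    label = 0
    for n, (dx, dy) in enumerate(DIRECTIONS):
        acc = sum(1 for i in range(R, R_max + 1)
                  if img[x + i * dx][y + i * dy] <= g_c)
        if acc >= t:              # Equation (1.15): threshold -> binary digit
            label |= 1 << n       # Equation (1.16): binary -> decimal label
    return label
```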

FIG. 18 is an exemplary diagram showing the LCP algorithm.

FIG. 18 shows a result of assigning a new label value to the center pixel by applying the LCP algorithm to all the pixels.

FIG. 19 shows a result of comparing LBP and LCP algorithms.

Meanwhile, FIG. 19 compares the results of applying the LBP and LCP algorithms. The LCP algorithm searches a wide radius and applies a threshold to the accumulated value, so it can express a consistent pattern under illumination change and noise. The LBP algorithm, by contrast, uses a simple magnitude comparison and is strongly affected by illumination change and noise; as shown in FIG. 19(b), it reacts to them sensitively and cannot extract a consistent pattern.

FIG. 20 shows a result of a comparative experiment in which LBP and LCP are applied to face images in a lighting change environment.

As shown in FIG. 20(b), the LBP algorithm applied to the face image is sensitive to illumination changes; the same holds for noise, so the LBP algorithm cannot extract a coherent pattern. The local contour pattern proposed in the present invention extracts facial features while minimizing the influence of illumination change and noise, even for images that are difficult to distinguish by eye due to illumination change, as shown in FIG. 20(c).

FIG. 21 is a result of applying the methods proposed in the present invention to face images with various lighting-change environments.

The parameters applied in FIG. 21 are sigma: 1 and a mask size of 7 × 7 for the LoG mask, and R: 1, t: 2, and Rmax: 5 for the LCP algorithm. Even when the image is severely degraded by illumination, the local contour pattern extracts facial features while minimizing the influence of illumination change and noise.

③ Histogram generation process

The histogram generation process divides the face image into a grid and constructs a histogram for every grid cell in order to represent the face image efficiently. Each histogram carries local facial feature information, and the final feature vector of the face image is generated from the combination of all the histogram bins.

FIG. 22 is an example of dividing an image.

The terminal completes the calculation of new label values for all pixels of the image using the LCP algorithm and generates the feature vector of the face image from the histogram bins. As shown in FIG. 22, the terminal divides the image into M × N regions for efficient representation of the face.

The terminal divides the face area into grid cells of a certain size and then obtains an LCP histogram for each cell. Equation (1.17) shows how an input image of size M × N is accumulated into the histogram, where m and n denote the position of each pixel and i is the label value calculated by the LCP. The number of histogram bins is determined by the number of directions of the LCP code; for 8 directions, the histogram has 256 bins. The histograms of the individual cells are then connected in a line and used as the final feature vector describing the entire face image.

$H_i = \sum_{m=1}^{M} \sum_{n=1}^{N} I\{f_{LCP}(m, n) = i\}, \qquad i = 0, \ldots, 255$ - (1.17)
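A sketch of the grid histogram of Equation (1.17), assuming NumPy; the grid shape is a free parameter of this illustration.

```python
import numpy as np

def lcp_feature_vector(lcp_img, grid=(4, 4), bins=256):
    """Divide the LCP label image into grid cells, histogram each cell per
    Equation (1.17), and connect the histograms in a line."""
    feats = []
    for rows in np.array_split(lcp_img, grid[0], axis=0):
        for cell in np.array_split(rows, grid[1], axis=1):
            h, _ = np.histogram(cell, bins=bins, range=(0, bins))
            feats.append(h)
    return np.concatenate(feats)   # length = grid[0] * grid[1] * bins
```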

FIG. 23 is an exemplary diagram showing a feature vector.

Therefore, the feature vector is composed of all the histogram bins, as shown in FIG. 23. The terminal divides the image into s regions $R_0, R_1, \ldots, R_{s-1}$, as shown in FIG. 23, and computes the histogram $H_j$ of each region. Finally, the terminal concatenates all of the locally calculated histogram bins to represent the feature vector of the entire face, producing one large histogram. The final histogram size is the product of the number of divided regions s and the length of the LCP code.

5. Face classification for face recognition (240)

Classification methods are based on multivariate analysis: from multivariate observations belonging to previously known, mutually exclusive groups, a model describing the characteristics of each group is created, and new observations of unknown membership are assigned to one of the groups. Face classification therefore refers to automatically assigning face data to classes without class labels. Two classification methods are used for face recognition: K-Nearest Neighbor (K-NN) and the Support Vector Machine (SVM).

FIG. 24 is an exemplary diagram showing K-NN.

As shown in FIG. 24, K-NN calculates the distances or similarities between the test data to be classified and all the training data, finds the k nearest data, and assigns the class with the highest frequency among them. When k is 1, this is the Nearest Neighbor Classifier. The nearest neighbor classifier has the advantages of high accuracy and easy implementation; however, because the distance between the data to be classified and all the training data must be computed, it has the disadvantage of long computation time.

FIG. 25 is an exemplary diagram showing an SVM error.

The SVM (Support Vector Machine) is a pattern classification algorithm based on the statistical learning theory developed by Vapnik beginning in the 1960s. The SVM has the following advantages compared with centroid-based classification methods. First, centroid-based classification produces errors of the kind shown in FIG. 25.

FIG. 26 is an exemplary diagram illustrating SVM advantages.

As shown in FIG. 26, the SVM resolves such errors by focusing on the data at the boundary between the two groups rather than on the group centers, and once trained it produces results quickly. However, the SVM takes considerable time to train on the training images, and it has the disadvantage that re-training is required to learn new data.

Meanwhile, the face recognition rate varies with the performance of the classifier, so selecting the optimal classifier is important. Therefore, in the present invention, to evaluate the objective recognition rate of the proposed feature extraction technique, the nearest neighbor classifier, as used by Adin Ramirez Rivera et al., is adopted.

To classify faces using the nearest neighbor classifier technique, the terminal computes the distance or similarity between the feature vectors extracted from the training image (G) and the test image (P). The Euclidean distance, histogram intersection, or chi-square statistic can be used to calculate the distance or similarity.

First, the Euclidean distance, defined by Equation (1.18), is generally used to calculate the distance between two vectors; the closer the distance, the more similar the vectors.

Second, the histogram intersection, defined by Equation (1.19), uses the minimum values of the two histograms; it yields a similarity of 1 when the two histograms are identical, and approaches 0 when they are completely different.

Third, the chi-square statistic is defined by Equation (1.20); the closer the chi-square distance between two vectors is to zero, the more similar they are.

● Euclidean distance

$D(G, P) = \sqrt{\sum_{i} (G_i - P_i)^2}$ - (1.18)

● Histogram Intersection

$S(G, P) = \sum_{i} \min(G_i, P_i)$ - (1.19)

● Chi-square distance

$\chi^2(G, P) = \sum_{i} \frac{(G_i - P_i)^2}{G_i + P_i}$ - (1.20)

In the present invention, as a result of experiments applying Equations (1.18), (1.19), and (1.20), the histogram intersection, the similarity measure that most effectively distinguished the features, is adopted. The terminal converts all training and test images into histogram form through the LCP operation and thereby obtains the final feature vector data of the training and test images.

FIG. 27 is an example of analyzing the similarity of all data.

Each data sample has one label. As shown in FIG. 27, the terminal analyzes the similarity between all the data samples of the training images and the test images. The terminal then finds the label with the highest similarity between the test data to be classified and the training data, and assigns that class.
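The classification step then reduces to one similarity scan per test sample. A sketch reusing histogram_intersection() from above; the function and variable names are illustrative.

```python
import numpy as np

def classify(test_vec, train_vecs, train_labels):
    """Nearest neighbor classification: compare the test feature vector
    against every training vector and return the most similar label."""
    sims = [histogram_intersection(test_vec, g) for g in train_vecs]
    return train_labels[int(np.argmax(sims))]
```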

It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the present invention as defined by the following claims.

100: Terminal 110: Processor
120: memory 130: camera

Claims (10)

delete
delete
delete
delete
A facial feature extraction method for extracting facial features by extracting edge components by applying a mask to a face image and extracting facial features using a local contour pattern derived for the extracted edge components,
wherein the process of calculating the local contour pattern for the extracted edge components,
in order to extract a local contour pattern that is consistent under illumination-changing environments,
searches each pixel in the vertical, horizontal, and diagonal directions, compares the searched pixels with the center pixel, and accumulates the search values when the pixel value in each direction is less than or equal to the pixel value of the center pixel,
wherein, because a dark center pixel makes it highly probable that the searched surrounding pixel values in each direction are higher than the center pixel value, the accumulated search value is then small, and a threshold is therefore applied to the accumulated search value in each direction to convert it into a binary pattern,
and converts the binary pattern value into a decimal number and applies it to the center pixel as a new label value,
the method further comprising the step of extracting facial features using the derived local contour pattern,
wherein the step of extracting facial features using the local contour pattern includes dividing the face image into predetermined regions, calculating a histogram for each region, and extracting a sum of the histograms,
A feature extracting method using a local contour pattern.
The method of claim 5,
Wherein the threshold value is 3 for the small mask with a Gaussian sigma of 0.3, and 2 for the large mask with a Gaussian sigma of 1.8.
The method of claim 5,
Wherein the new label value has a value of 0 to 255 when the number of search directions n is 8.
delete
A face recognition method comprising: extracting facial features using the local contour pattern according to claim 5; and
classifying the face using the extracted facial features.
The method of claim 9,
wherein the step of classifying faces comprises:
analyzing the similarity between the feature vectors extracted as the facial features, and
classifying the face class from the analyzed feature vectors using the nearest neighbor classifier.
KR1020150051821A 2015-04-13 2015-04-13 Face recognition method and facial feature extraction method using local contour patten KR101921717B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150051821A KR101921717B1 (en) 2015-04-13 2015-04-13 Face recognition method and facial feature extraction method using local contour patten

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150051821A KR101921717B1 (en) 2015-04-13 2015-04-13 Face recognition method and facial feature extraction method using local contour patten

Publications (2)

Publication Number Publication Date
KR20160122323A KR20160122323A (en) 2016-10-24
KR101921717B1 (en) 2018-11-26

Family

ID=57256845

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150051821A KR101921717B1 (en) 2015-04-13 2015-04-13 Face recognition method and facial feature extraction method using local contour patten

Country Status (1)

Country Link
KR (1) KR101921717B1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102174175B1 (en) * 2018-11-06 2020-11-06 숙명여자대학교산학협력단 Facial emotional recognition apparatus for Identify Emotion and method thereof
KR102220237B1 (en) * 2019-04-17 2021-02-25 주식회사 태산솔루젼스 3D Modularization and Method of CT Image Information for the Restoration of Cultural Heritage
CN111783621B (en) * 2020-06-29 2024-01-23 北京百度网讯科技有限公司 Method, device, equipment and storage medium for facial expression recognition and model training
CN111860343B (en) * 2020-07-22 2023-04-28 杭州海康威视数字技术股份有限公司 Method and device for determining face comparison result
KR102366364B1 (en) * 2021-08-25 2022-02-23 주식회사 포스로직 Method for geomatrical pattern matching and device for performing the method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100300961B1 (en) 1998-07-07 2001-09-06 윤종용 Optimum face region extraction method & face recognition method thereof
KR20000044789A (en) 1998-12-30 2000-07-15 전주범 Face profile line approximation method by using standard face profile pattern

Also Published As

Publication number Publication date
KR20160122323A (en) 2016-10-24

Similar Documents

Publication Publication Date Title
US9971929B2 (en) Fingerprint classification system and method using regular expression machines
US5715325A (en) Apparatus and method for detecting a face in a video image
KR100724932B1 (en) apparatus and method for extracting human face in a image
KR101921717B1 (en) Face recognition method and facial feature extraction method using local contour patten
US7957560B2 (en) Unusual action detector and abnormal action detecting method
Abedin et al. Traffic sign recognition using surf: Speeded up robust feature descriptor and artificial neural network classifier
Campos et al. Discrimination of abandoned and stolen object based on active contours
Gilly et al. A survey on license plate recognition systems
Liu et al. Smoke-detection framework for high-definition video using fused spatial-and frequency-domain features
KR100664956B1 (en) Method and apparatus for eye detection
Jebarani et al. Robust face recognition and classification system based on SIFT and DCP techniques in image processing
Jillela et al. Methods for iris segmentation
Patel et al. Robust face detection using fusion of haar and daubechies orthogonal wavelet template
Nigam et al. Iris classification based on its quality
Aqel et al. Traffic video surveillance: Background modeling and shadow elimination
Alobaidi et al. Face detection based on probability of amplitude distribution of local binary patterns algorithm
Almomani et al. Object tracking via Dirichlet process-based appearance models
Higashi et al. New feature for shadow detection by combination of two features robust to illumination changes
Reddy et al. Driver drowsiness monitoring based on eye map and mouth contour
Sinaga et al. Real Time Catalog of Uniqueness Face Using the CAMShift and Gabor Wavelet Algorithims
Carvajal-González et al. Feature selection by relevance analysis for abandoned object classification
Bandara et al. A feature clustering approach based on Histogram of Oriented Optical Flow and superpixels
Patel et al. An introduction to license plate detection system
Bakr et al. Detecting moving shadow using a fusion of local binary pattern and gabor features
Zhu et al. Robust text segmentation in low quality images via adaptive stroke width estimation and stroke based superpixel grouping

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E902 Notification of reason for refusal
N231 Notification of change of applicant
E701 Decision to grant or registration of patent right
GRNT Written decision to grant