KR101521136B1 - Method of recognizing face and face recognition apparatus - Google Patents


Info

Publication number
KR101521136B1
Authority
KR
South Korea
Prior art keywords
image
test image
local feature
feature descriptor
block
Prior art date
Application number
KR1020130156639A
Other languages
Korean (ko)
Inventor
박혜영
서정인
Original Assignee
경북대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 경북대학교 산학협력단 filed Critical 경북대학교 산학협력단
Priority to KR1020130156639A
Application granted granted Critical
Publication of KR101521136B1


Abstract

Disclosed is a method for recognizing a face including the steps of: receiving a test image, dividing the test image into a plurality of blocks, and then obtaining a plurality of local feature descriptors for each divided block; selecting one local feature descriptor for each block of the test image by using an object probability model; calculating a weighted value for each block of the test image; calculating distance values of similarity between the blocks, matched with each other, by using the selected local feature descriptor and the weighted value while sequentially comparing a plurality of reference images with the test image; and recognizing an image among the reference images which corresponds to the test image by comparing the calculated distance values of similarity.

Description

METHOD OF RECOGNIZING FACE AND FACE RECOGNITION APPARATUS

The present invention relates to a face recognition method and a face recognition apparatus, and more particularly, to a face recognition method and a face recognition apparatus that exhibit robust recognition performance for face images obtained in various environments.

Image recognition has been a subject of long-standing research in the fields of computer vision and machine learning over recent decades. Despite the great interest in image recognition, however, conventional image recognition studies have validated their results through experiments using images acquired in limited environments.

Particularly, in face recognition, images acquired in a general environment contain deformations such as occlusion, movement, and rotation, as well as changes in illumination and facial expression. These have been factors that significantly degrade the performance of face authentication systems using face images.

Therefore, there is a need for a device capable of maintaining a robust image recognition performance even for images having various distortions detected in a real-time environment.

It is an object of the present invention to provide an image recognition method and an image recognition apparatus that exhibit robust recognition performance even for images obtained in various environments.

According to an aspect of the present invention, there is provided an image recognition method including: dividing a test image into a plurality of blocks and obtaining a plurality of local feature descriptors for each of the divided blocks; selecting one local feature descriptor for each block of the test image using an object probability model estimated based on a plurality of reference images; calculating a weight for each block of the test image using the object probability model; calculating similarity distance values between matching blocks using the selected local feature descriptors and the weights while sequentially comparing the plurality of reference images with the test image; and comparing the calculated similarity distance values to recognize the image corresponding to the test image among the plurality of reference images.

In this case, the image recognition method according to an embodiment of the present invention may further include: dividing each of the plurality of reference images into a plurality of blocks and obtaining one local feature descriptor for each block; and estimating and storing the object probability model using the obtained local feature descriptors.

The obtained plurality of local feature descriptors may have different directional properties, and the selecting may include selecting, for each block of the test image, the one of the obtained local feature descriptors that corresponds to the directionality of the test image.

In this case, the selecting may be performed according to the following equation:

Figure 112013115118319-pat00001

where m is the index of each block,

Figure 112013115118319-pat00002
is the k-th local feature descriptor (k = 0, ..., 8) of block m of the test image, p m is the object probability model,
Figure 112013115118319-pat00003
is the selected local feature descriptor (m = 1, ..., M) of block m of the test image, and I tst is the test image.

Meanwhile, the image recognition method according to an embodiment of the present invention may further include determining whether to reject the recognition of the test image using the one local feature descriptor selected for each block and the weight of each block, wherein the calculating of the similarity distance values and the recognizing of the image may be selectively performed according to the result of the determining.

In this case, if the value calculated by applying the selected local feature descriptor to the object probability model is less than the first threshold value, the determining step may reject the recognition of the test image.

Also, the determining may include selecting an image having the smallest similarity distance value from the test image among the plurality of reference images, and rejecting the recognition of the test image if the similarity distance value between the selected image and the test image is greater than a second threshold value.

In the calculating of the similarity distance values, the similarity distance values may be calculated according to the following equation:

Figure 112013115118319-pat00004

where

Figure 112013115118319-pat00005
is the local feature descriptor of block m of the i-th reference image, d(·, ·) is the similarity distance calculation function,
Figure 112013115118319-pat00006
is the weight of block m of the test image, and I i is the i-th reference image.

Meanwhile, the image recognition apparatus according to an embodiment of the present invention includes: a storage unit for storing an object probability model estimated using local feature descriptors of the plurality of blocks constituting each of a plurality of reference images; an input unit for receiving a test image; a detection unit for dividing the test image into a plurality of blocks, acquiring a plurality of local feature descriptors for each divided block, and detecting one local feature descriptor for each block of the test image using the object probability model; a calculation unit for calculating a weight for each block of the test image using the object probability model; and an image recognition unit for sequentially comparing the plurality of reference images with the test image, calculating similarity distance values between matching blocks using the detected local feature descriptors and the weights, and comparing the calculated similarity distance values to recognize the image corresponding to the test image among the plurality of reference images.

In this case, the obtained plurality of local feature descriptors may have different directional properties, and the detection unit may detect, for each block of the test image, the one of the obtained local feature descriptors that corresponds to the directionality of the test image.

In this case, the detection unit may detect the local feature descriptor according to the following equation:

Figure 112013115118319-pat00007

where m is the index of each block,

Figure 112013115118319-pat00008
is the k-th local feature descriptor (k = 0, ..., 8) of block m of the test image, p m is the object probability model,
Figure 112013115118319-pat00009
is the detected local feature descriptor (m = 1, ..., M) of block m of the test image, and I tst is the test image.

The image recognition apparatus according to an embodiment of the present invention may further include a determination unit for determining whether to reject the recognition of the test image, and the image recognition unit may selectively operate according to the determination result of the determination unit.

In this case, the determination unit may reject the recognition of the test image if the value calculated by applying the detected local feature descriptor to the object probability model is less than the first threshold value.

Also, the determination unit may select an image having the smallest similarity distance value from the test image among the plurality of reference images, and may reject the recognition of the test image if the similarity distance value between the selected image and the test image is greater than a second threshold value.

Meanwhile, the image recognition unit may calculate the similarity distance values according to the following equation:

Figure 112013115118319-pat00010

where

Figure 112013115118319-pat00011
is the local feature descriptor of block m of the i-th reference image, d(·, ·) is the similarity distance calculation function,
Figure 112013115118319-pat00012
is the weight of block m of the test image, and I i is the i-th reference image.

According to the above-described various embodiments, image recognition can be performed considering various variables included in the input image, so that excellent recognition performance can be expected.

FIG. 1 is a diagram for explaining a method of estimating an object probability model according to an embodiment of the present invention;
FIG. 2 and FIG. 4 are flowcharts for explaining an image recognition method according to various embodiments of the present invention;
FIG. 3 is a diagram for explaining feature selection according to an exemplary embodiment of the present invention;
FIG. 5 is a block diagram schematically showing an image recognition apparatus according to an embodiment of the present invention; and
FIG. 6 through FIG. 8 show various experimental examples to which the present invention is applied.

Hereinafter, an image recognition method and an image recognition apparatus according to the present invention will be described in detail with reference to the accompanying drawings.

According to the present image recognition method, the image most similar to a test image among a plurality of reference images can be detected using an object probability model generated based on the plurality of reference images, even when the test image contains movement deformation.

The plurality of reference images are images in which an arbitrary object is photographed, that is, images to be learned for estimating the object probability model. The test image is an image containing the object to be recognized.

The image recognition method can be divided into a learning step and a recognition step. In the learning step, a plurality of reference images are learned to estimate the object probability model, and in the recognition step, feature selection for the test image is performed. First, a method of estimating an object probability model in a learning step will be described with reference to FIG.

First, an object image 1, which is one arbitrary reference image as shown in FIG. 1, is divided into a plurality (M) of blocks. For example, when the recognition target object is a face, the image recognition according to an embodiment of the present invention uses the entire face image. Therefore, rather than specific parts of the object image 1 (eyes, nose, mouth, etc.), the entire image is partitioned into blocks of the same size. At this time, the blocks may be divided in a grid pattern. The divided blocks are hereinafter referred to as P1 to PM.
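The grid partitioning described above can be sketched as follows. This is an illustrative Python sketch only; the image size, the grid dimensions, and the `divide_into_blocks` helper name are assumptions, since the patent specifies only equal-size blocks in a grid pattern.

```python
import numpy as np

def divide_into_blocks(image, rows, cols):
    """Partition an image into a rows x cols grid of equal-size blocks
    (P1 .. PM in row-major order, M = rows * cols). Trailing pixels that
    do not divide evenly are dropped -- one simple convention; the patent
    does not fix this detail."""
    h, w = image.shape[:2]
    bh, bw = h // rows, w // cols
    return [image[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(rows) for c in range(cols)]

# A 64x64 face image divided into an 8x8 grid gives M = 64 blocks of 8x8 pixels.
face = np.zeros((64, 64))
blocks = divide_into_blocks(face, 8, 8)
```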

Next, one local feature descriptor is obtained for each of the divided blocks P1 to PM. For example, local feature descriptors can be obtained through SIFT, HOG, or LBP. Since SIFT, HOG, and LBP are well known in the art, detailed descriptions thereof are omitted.

A case in which the Scale-Invariant Feature Transform (SIFT) is used to acquire the local feature descriptors will be described as an example. The process of acquiring a local feature descriptor is as follows.

SIFT goes through two computation steps to create a set of image features of the object image 1. The first step selects important pixels from the reference image 1; each selected pixel is referred to as a 'keypoint'. The second step defines an appropriate descriptor for each selected keypoint to represent meaningful local properties of the reference image 1; this descriptor is called a 'local feature descriptor'. Thus, one object image 1 can be represented by the local feature descriptors of the blocks P1 to PM.

SIFT uses a scale-space Difference-of-Gaussian (DOG) function to detect multiple keypoints within each image. For an input object image I(x, y), the scale-space is a function L(x, y, σ) obtained by convolving the image with a variable-scale Gaussian G(x, y, σ). Accordingly, the DOG function is defined as Equation (1).

[Equation 1]

Figure 112013115118319-pat00013

Figure 112013115118319-pat00014

Here, x is the x-axis coordinate, y is the y-axis coordinate, σ is the scale, and k is a multiplicative factor. In this case, the local maximum and minimum values of the D(x, y, σ) function are determined based on the eight neighboring blocks surrounding each block in the object image 1.
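The DOG computation of Equation (1) can be illustrated with a short sketch. This is a minimal illustration, not the patent's implementation; the use of `scipy.ndimage.gaussian_filter` and the sample values of σ and k are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog(image, sigma, k=np.sqrt(2)):
    """Difference-of-Gaussians of Equation (1):
    D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma),
    where L is the image blurred with a Gaussian of the given scale."""
    return gaussian_filter(image, k * sigma) - gaussian_filter(image, sigma)

# Keypoint candidates are the local extrema of D; a textureless (constant)
# image gives an essentially zero response everywhere.
img = np.random.default_rng(0).random((32, 32))
d = dog(img, sigma=1.6)
```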

On the other hand, in the case of face recognition, only a very small number of keypoints may be extracted because face images lack texture. In this case, the problem can be solved by applying the Dense-SIFT extraction method, another local feature extraction method, instead of SIFT.

Each local feature descriptor extracted through the SIFT extraction method is represented by a 128-dimensional vector with four attributes: the location at which the feature is selected, the scale (σ), the direction, and the gradient. In this case, the gradient magnitude m(x, y) and the direction θ(x, y) of each keypoint located at the two-dimensional coordinate (x, y) are calculated through the following Equations (2) and (3).

[Equation 2]

Figure 112013115118319-pat00015

[Equation 3]

Figure 112013115118319-pat00016
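Equations (2) and (3) can be sketched with centred finite differences. The helper name, the axis convention, and the ramp example below are illustrative assumptions.

```python
import numpy as np

def gradient_magnitude_orientation(L):
    """Gradient magnitude m(x, y) and direction theta(x, y) of Equations
    (2) and (3), from centred finite differences:
      m     = sqrt((L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2)
      theta = atan2(L(x, y+1) - L(x, y-1), L(x+1, y) - L(x-1, y))
    Here axis 0 plays the role of x and axis 1 the role of y."""
    dx = np.zeros_like(L)
    dy = np.zeros_like(L)
    dx[1:-1, :] = L[2:, :] - L[:-2, :]
    dy[:, 1:-1] = L[:, 2:] - L[:, :-2]
    return np.sqrt(dx ** 2 + dy ** 2), np.arctan2(dy, dx)

# On a linear ramp the interior gradient is constant: magnitude 2, angle 0.
ramp = np.tile(np.arange(5.0).reshape(5, 1), (1, 5))
m, theta = gradient_magnitude_orientation(ramp)
```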

Then, in order to represent the object image 1 with the SIFT extraction method, the locations of the M local feature descriptors are first determined on a regular grid. Since each local feature descriptor is represented by a descriptor vector y m , one face image I can be represented by a set of M descriptor vectors. In this case, the face image I can be expressed by the following Equation (4).

[Equation 4]

Figure 112013115118319-pat00017

If the plurality of reference images are {I i } i = 1, ... , N , the entire reference image set T can be expressed by the following Equation (5).

[Equation 5]

Figure 112013115118319-pat00018

Then, an 'object probability model' can be derived using the reference image set T. For example, when the reference image is a face image, a probability model of a normal face can be derived. In order to obtain the object probability model, first, the reference image set T is divided into M subsets according to the positions of keypoints as shown in Equation (6) below. Here, m is the index of each block.

[Equation 6]

Figure 112013115118319-pat00019

Using each subset T m , the object probability model (p m ) for the local feature corresponding to block m can be estimated using simple Gaussian estimation as shown in Equation (7) below.

[Equation 7]

Figure 112013115118319-pat00020

In Equation (7), Z is a normalization factor, and the two model parameters, the mean (μ m ) and the covariance matrix (Σ m ), can be estimated as the sample mean and sample covariance matrix of each subset T m .

Using the estimated object probability model as described above, the probability that each local feature descriptor is found at a specific position of the input test image can be calculated.
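The per-block Gaussian estimation of Equation (7) can be sketched as follows. The ridge term and the toy 8-dimensional descriptors are assumptions for illustration; the patent specifies only the sample mean and covariance of each subset T m.

```python
import numpy as np

def estimate_block_model(T_m):
    """Per-block Gaussian model of Equation (7): the mean (mu_m) and
    covariance matrix (Sigma_m) are the sample mean and sample covariance
    of the subset T_m (rows = descriptors of block m across references)."""
    return T_m.mean(axis=0), np.cov(T_m, rowvar=False)

def log_density(y, mu, Sigma, eps=1e-6):
    """Unnormalised Gaussian log-density of a descriptor y under (mu, Sigma).
    The small ridge eps (an assumption) keeps the covariance invertible;
    log-space avoids underflow for 128-D SIFT descriptors."""
    S = Sigma + eps * np.eye(len(mu))
    diff = y - mu
    return -0.5 * diff @ np.linalg.solve(S, diff)

# Toy 8-D case: descriptors near the block mean score higher than outliers.
rng = np.random.default_rng(1)
T_m = rng.normal(size=(50, 8))
mu, Sigma = estimate_block_model(T_m)
near = log_density(mu, mu, Sigma)
far = log_density(mu + 10.0, mu, Sigma)
```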

Hereinafter, a step of recognizing a test image using the object probability model will be described with reference to the flowchart of FIG.

Referring to FIG. 2, first, a test image is input (S210). The test image is the image to be compared with the reference image.

The input test image is divided into a plurality of blocks so as to correspond to the blocks of the reference image (S220). For example, if the reference image is divided into M blocks as described above, the test image is also divided into M identical segments.

A plurality of local feature descriptors are obtained for each block of the divided test image (S230). Since a plurality of local features are extracted for each block, more features are extracted than from the reference image. Specifically, more features are extracted from the test image by reducing the keypoint interval relative to the reference image. Therefore, there is no overlap between the blocks of the reference image, but there is a 50% overlap between two adjacent partial areas of the test image. For example, as shown in FIG. 3, comparing the general face model (a) estimated from the plurality of reference images with the test image (b), more features can be seen in the mutually corresponding block m of the test image. The plurality of local feature descriptors of block m of the test image (

Figure 112013115118319-pat00021
) can be expressed by the following Equation (8).

[Equation 8]

Figure 112013115118319-pat00022

Next, one local feature descriptor is selected for each block of the test image (S240). That is, from the plurality of local feature descriptor candidates (

Figure 112013115118319-pat00023
) of block m, one local feature descriptor representing block m (
Figure 112013115118319-pat00024
) is selected. According to one embodiment of the present invention, the plurality of local feature descriptor candidates c of block m have different directional properties. For example, as shown in FIG. 3, the local feature descriptor at the position corresponding to a keypoint of the general face model (a) (
Figure 112013115118319-pat00025
) and the eight neighboring local feature descriptors (
Figure 112013115118319-pat00026
, ... ,
Figure 112013115118319-pat00027
) have different directional properties. The local feature descriptor candidates are applied to the object probability model, and when this process is repeated for all blocks, the index k max for which the highest probability value is obtained can be found. This is obtained through the following Equation (9).

[Equation 9]

Figure 112013115118319-pat00028

Here,

Figure 112013115118319-pat00029
is the k-th local feature descriptor (k = 0, ..., 8) of block m of the test image.

For example, if the local feature descriptor selected in FIG. 3 through Equation (9) is η 5 , this means that the test image (b) has the corresponding directionality. In fact, the test image (b) of FIG. 3, in which the lower lip is cut off, has a downward directionality compared with the general face model (a).

According to the above-described embodiment, it is possible to determine in which direction the test image is shifted relative to the general face image, and face recognition performance for shifted images can be improved by selecting features of that direction.

Using k max obtained through Equation (9), the local feature descriptors of the test image I tst can be acquired as shown in Equation (10). For example, when η 5 is selected, the local feature descriptor having the direction corresponding to η 5 is selected for each block of the test image. In this case, k max becomes 5.

[Equation 10]

Figure 112013115118319-pat00030

Here,

Figure 112013115118319-pat00031
is the selected local feature descriptor (m = 1, ..., M) of block m of the test image.
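The feature selection of Equations (9) and (10) reduces, per block, to an argmax over the directional candidates. A toy sketch, with an assumed stand-in scoring function in place of the estimated Gaussian model:

```python
import numpy as np

def select_descriptor(candidates, log_p_m):
    """Feature selection of Equations (9)-(10): among the directional
    candidate descriptors of one block (k = 0, ..., 8), keep the one that
    the block's object probability model p_m scores highest."""
    scores = [log_p_m(c) for c in candidates]
    k_max = int(np.argmax(scores))
    return k_max, candidates[k_max]

# Toy sketch: a stand-in scoring function that prefers descriptors close
# to a reference mean (in place of the estimated Gaussian model).
mu = np.zeros(4)
log_p = lambda y: -np.sum((y - mu) ** 2)
candidates = [np.full(4, float(k)) for k in range(9)]  # 9 directional candidates
k_max, eta = select_descriptor(candidates, log_p)
```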

In the next step, a weight for each block of the test image is calculated using the object probability model (S250). Specifically, the local feature descriptors of the test image obtained through Equation (10) are applied to the object probability model, and the weight of each block of the test image is calculated. Here, the weight w m can be expressed by Equation (11).

[Equation 11]

Figure 112013115118319-pat00032

For example, if a region of the test image is occluded, the weight of the corresponding block will be low.
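The weighting of Equation (11) can be sketched as scoring each block's selected descriptor under its block model. The softmax normalisation across blocks is an illustrative assumption; the patent fixes only that the weight is derived from the object probability model p m, so occluded blocks score low.

```python
import numpy as np

def block_weights(selected, log_p_models):
    """Per-block weights of Equation (11): each block's selected descriptor
    is scored by that block's model p_m, so blocks whose descriptors are
    unlikely under the model (e.g. occluded blocks) get low weight. The
    softmax normalisation across blocks is an illustrative assumption."""
    scores = np.array([lp(eta) for lp, eta in zip(log_p_models, selected)])
    w = np.exp(scores - scores.max())
    return w / w.sum()

# Three blocks under the same stand-in model; the last block is "occluded"
# (its descriptor is far from the model mean) and is down-weighted.
log_p = lambda y: -np.sum(y ** 2)
selected = [np.zeros(2), np.zeros(2), np.full(2, 3.0)]
w = block_weights(selected, [log_p, log_p, log_p])
```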

Then, the obtained test image I tst and the plurality of reference images I i are sequentially compared with each other to calculate the similarity distance values between matching blocks (S260). In this case, the lower the similarity distance value, the higher the degree of similarity between the reference image and the test image. The similarity distance value can be calculated using the following Equation (12).

[Equation 12]

Figure 112013115118319-pat00033

Here,

Figure 112013115118319-pat00034
is the local feature descriptor of block m of the i-th reference image, d(·, ·) is the similarity distance calculation function,
Figure 112013115118319-pat00035
is the weight of block m of the test image, and I i is the i-th reference image.
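Equation (12) is a weighted sum of block-wise distances. A minimal sketch, where the Euclidean distance used for d(·, ·) is an assumption, since the patent leaves the similarity distance function open:

```python
import numpy as np

def similarity_distance(test_desc, ref_desc, weights,
                        d=lambda a, b: float(np.linalg.norm(a - b))):
    """Weighted similarity distance of Equation (12): block-wise distances
    d(., .) between matching blocks of the test image and one reference
    image, weighted by w_m and summed. Euclidean d is an assumption; the
    patent leaves the distance function open."""
    return sum(w * d(t, r) for w, t, r in zip(weights, test_desc, ref_desc))

# The reference with the smallest distance is recognised as the match.
test_img = [np.zeros(2), np.ones(2)]
ref_a = [np.zeros(2), np.ones(2)]   # identical blocks -> distance 0
ref_b = [np.ones(2), np.zeros(2)]
w = [0.5, 0.5]
d_a = similarity_distance(test_img, ref_a, w)
d_b = similarity_distance(test_img, ref_b, w)
```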

Finally, the calculated similarity distance values are compared, and an image corresponding to the test image among the plurality of reference images, that is, the image most similar to the test image is recognized (S270).

FIG. 4 is a flowchart illustrating an image recognition method according to another embodiment of the present invention. Hereinafter, a method for enhancing the reliability of image recognition by rejecting in advance a test image that is not sufficiently similar to any reference image will be described. The embodiment described below adds an image rejection step to the above-described embodiment.

Referring to FIG. 4, the image recognition method according to the present embodiment can be largely divided into a learning process and a recognition process. In FIG. 4, the solid line represents the learning process, and the dotted line represents the recognition process.

First, a plurality of reference images are stored (S410). Then, each of the plurality of reference images is divided into a plurality of blocks, and a regional feature descriptor is obtained for each divided block (S412). Then, the object probability model is estimated using the obtained local feature descriptor (S414). Then, a test image is input (S420), and a plurality of regional feature descriptors for each of the divided blocks of the test image are obtained as described above (S422).

Then, in consideration of the directionality of the test image, some of the obtained local feature descriptors are selected and defined as the local feature descriptors representing the test image (S424). These steps are as described above, so detailed description is omitted.

In the next step, it is determined whether to reject the test image using the estimated object probability model (S426). If the test image is rejected at this stage, image recognition is no longer performed. According to one embodiment, the rejection of an image may be determined through a determination according to Equation (13) below. This step is called the first image rejection.

[Equation 13]

If

Figure 112013115118319-pat00036
, the test image I tst is rejected.

Here, the threshold value appearing in Equation (13) is an arbitrarily set first threshold value.

In addition to or independently of the image rejection method described above, an image rejection method according to another embodiment may be considered. This is called the second image rejection. Specifically, an image having the smallest similarity distance value is selected from among the plurality of reference images; if the similarity distance value between the selected image and the test image is greater than a second threshold value, the test image is rejected. For example, the plurality of reference images are classified using a K-nearest-neighbor classifier, and the nearest reference image (I nrst ) is detected. The detection of I nrst uses the following Equation (14).

[Equation 14]

Figure 112013115118319-pat00037

If the similarity distance value to which the detected I nrst is applied is larger than the arbitrarily set second threshold value, the test image is rejected.
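The two rejection tests of Equations (13) and (14) can be combined in a short sketch. The parameter names `theta1` and `theta2` are illustrative assumptions; the patent calls them simply the first and second threshold values.

```python
def should_reject(model_score, nearest_distance, theta1, theta2):
    """Two-stage rejection sketch. First rejection (Equation (13)): the
    selected descriptors are too unlikely under the object probability
    model (the input is probably not a face). Second rejection (Equation
    (14)): even the nearest reference image I_nrst is too far away."""
    if model_score < theta1:       # first image rejection
        return True
    if nearest_distance > theta2:  # second image rejection
        return True
    return False
```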

If the image is not rejected, the similarity distance values between the plurality of reference images and the test image are calculated (S430). Meanwhile, since the above-described second image rejection is also performed through the calculation of a similarity distance value, it can be performed simultaneously with the similarity distance value calculation step S430 of FIG. 4.

In the next step, the calculated values are compared with each other to finally recognize the image (S432).

The image rejection steps described above have the advantage of effectively handling images in which an object detector, for example a face detector, has erroneously detected a region that is not a face.

The above-described image recognition method can be implemented in an image recognition apparatus. FIG. 5 is a block diagram schematically showing the configuration of an image recognition apparatus according to an embodiment of the present invention.

Referring to FIG. 5, the image recognition apparatus 100 includes a storage unit 110, an input unit 120, and a control unit 130. Meanwhile, the image recognition apparatus 100 may be implemented by various electronic apparatuses such as a smart phone, a tablet PC, a desktop PC, and the like.

The storage unit 110 stores various programs and data used in the image recognition apparatus 100. In particular, the storage unit 110 may store a plurality of reference images to be learned for generating the object probability model. The reference images may be stored in advance in the image recognition apparatus 100 or may be received from an external apparatus through a communication unit 140. The storage unit 110 also stores the object probability model estimated through learning of the plurality of reference images.

The input unit 120 is configured to input a test image to the image recognition apparatus 100. For example, the input unit 120 may be a photographing apparatus that photographs an object, an interface that receives a test image from outside, or the like.

The control unit 130 can control the overall operation of the image recognition apparatus 100. In particular, the control unit 130 may include a detection unit 131, a calculation unit 132, and an image recognition unit 133.

The detection unit 131 divides the test image into a plurality of blocks, acquires a plurality of local feature descriptors for each divided block, and detects one local feature descriptor for each block of the test image using the object probability model. The detection unit 131 also detects the local feature descriptors for the reference images.

According to an embodiment of the present invention, the detection unit 131 may detect more local feature descriptors for the test image than for the reference images, and feature selection may be performed to select one local feature descriptor per block from the detected descriptors. Specifically, the directionality of the test image can be considered using the object probability model, and the detection unit 131 'selects' a specific local feature descriptor among the detected local feature descriptors according to that directionality. Therefore, recognition performance can be maintained even when the test image is shifted.

The calculation unit 132 calculates the weight of each block of the test image. For example, if the test image is a face image, some blocks may contain an occlusion region where the face is masked by an external object. For such regions, the calculation unit 132 calculates a low weight. Therefore, robust recognition results can be obtained despite occlusion regions.

The image recognition unit 133 sequentially compares the plurality of reference images with the test image, calculates similarity distance values between matching blocks using the selected local feature descriptors and the weights, compares the calculated similarity distance values, and recognizes the image corresponding to the test image among the plurality of reference images. The image recognition unit 133 can recognize the reference image having the smallest similarity distance value as the image matching the test image. Alternatively, the image recognition unit 133 can determine that no image is recognized if even the smallest similarity distance value exceeds a predetermined threshold.

The specific operations of the components included in the image recognition apparatus 100 overlap with the description of the above-described image recognition method and will not be repeated.

According to another embodiment of the present invention, the control unit 130 may include a determination unit 134 for determining whether to reject the test image.

Whether the image recognition unit 133 performs recognition can be determined according to the determination result of the determination unit 134. For example, when the image recognition apparatus 100 is intended to recognize a face, if the input test image is not a face image but, say, a landscape image, the similarity calculation with the reference images is meaningless. In this case, the efficiency of the image recognition apparatus 100 can be improved by rejecting the test image in advance.

According to an embodiment of the present invention, the determination unit 134 rejects the test image if the value calculated by applying the detected local feature descriptors to the object probability model is less than the first threshold value. According to another embodiment, the determination unit 134 selects the image having the smallest similarity distance value from the test image among the plurality of reference images, and rejects the test image if the similarity distance value between the selected image and the test image is greater than the second threshold value. The above embodiments of the determination unit 134 may be applied to the image recognition apparatus 100 independently, or may be applied simultaneously for a synergistic effect. The image rejection methods of the determination unit 134 are as described above, so detailed description is not repeated.

FIG. 6 illustrates an example of images used in the image recognition method or image recognition apparatus according to an embodiment of the present invention.

In FIG. 6(a), three test images of user A 610, user B 620, and user C 630 are input.

For example, the test images of user A 610 may include images shifted in eight arbitrary directions. Specifically, they may include a left-downward shifted image 611, a leftward shifted image 612, a left-upward shifted image 613, an upward shifted image 614, a right-upward shifted image 615, a rightward shifted image 616, a right-downward shifted image 617, and a downward shifted image 618.

Even when a face detector boasts excellent performance, it commonly detects face images to which such movement deformation is applied. According to the above-described various embodiments, the directionality caused by the movement deformation of the test image can be considered, providing better recognition results than when directionality is not considered. Referring to FIG. 7, the recognition accuracy shown by graph 710, for the case where the weighted distance is considered and the local feature descriptor selection step is added, is higher than that of graph 720 for the case where they are not. In the graph of FIG. 7, the x-axis represents the degree of movement deformation of the test image, and the y-axis represents the recognition accuracy.

FIG. 6(b) includes images that reflect deformations obtainable in actual situations. Specifically, the test images of (b) include translational deformation as in (a), and partial occlusion is also present. Experimental results for the test images of (b) are shown in FIG. 8, which compares Comparative Example 1 (810), using a distance calculation function such as the L1 norm; Comparative Example 2 (820), in which SIFT and the weight are considered; and Experimental Example 830, in which SIFT, the weight, and feature selection are all considered. As expected, the accuracy is highest in Experimental Example 830.

The test images of FIG. 6(c) include a face image 640 and images 650 that are not face images. Using these test images, image rejection is performed by the determination unit 134 according to an embodiment of the present invention. As a result, the images 650 that are not face images are effectively rejected, and consequently the reliability of the face recognition is improved. The results are shown in Table 1 below.

[Table 1]

Figure 112013115118319-pat00038

Here, "No rejection" corresponds to the case in which no image rejection is introduced, i.e., a comparative example; "Single rejection" corresponds to the case in which the first image rejection described above is introduced; and "Dual rejection" corresponds to the case in which the first image rejection and the second image rejection are introduced simultaneously. As shown in the table, the reliability of image recognition is doubled in the embodiments in which image rejection is introduced.

Meanwhile, the image recognition methods according to the various embodiments described above may be implemented in software and stored in a non-transitory readable medium. Such a non-transitory readable medium may be used in various devices.

A non-transitory readable medium is not a medium that stores data for a short period of time, such as a register, cache, or memory, but a medium that stores data semi-permanently and is readable by a device. Specifically, it may be a CD, a DVD, a hard disk, a Blu-ray disc, a USB drive, a memory card, a ROM, or the like.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, the invention is not limited to the disclosed exemplary embodiments. It will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the present invention.

100: Image recognition device 110:
120: input unit 130:

Claims (15)

In the image recognition method,
Dividing the test image into a plurality of blocks and acquiring a plurality of local feature descriptors for each of the divided blocks when a test image is input;
Selecting one local feature descriptor for each block of the test image using an estimated object probability model based on a plurality of reference images;
Calculating a weight per block of the test image using the object probability model;
Calculating similarity distance values between blocks matching each other using the selected local feature descriptor and the weight value while sequentially comparing the plurality of reference images and the test image; And
Comparing the calculated similarity distance values and recognizing an image corresponding to the test image among the plurality of reference images.
The method according to claim 1,
Dividing the plurality of reference images into a plurality of blocks and obtaining one local feature descriptor for each block before a test image is input;
And estimating and storing the object probability model using the obtained local feature descriptors of the plurality of reference images when acquiring the local feature descriptor.
The method according to claim 1,
Wherein the obtained plurality of local feature descriptors have different directional properties,
Wherein the selecting comprises:
Selecting, for each block of the test image, one local feature descriptor corresponding to the directionality of the test image from among the plurality of obtained local feature descriptors.
The method of claim 3,
Wherein the selecting comprises:
Figure 112013115118319-pat00039

Wherein the one local feature descriptor is selected according to the following equation,
Here, m is the index of each block,
Figure 112013115118319-pat00040
is the k-direction local feature descriptor (k = 0, ..., 8) of the m-th block of the test image,
p m is the object probability model,
Figure 112013115118319-pat00041
is the selected local feature descriptor of the m-th block (m = 1, ..., M) of the test image,
And I tst is a test image.
The method according to claim 1,
Further comprising, when the local feature descriptor is selected and the weight is calculated, a determination step of determining whether to reject the recognition of the test image using the one local feature descriptor selected for each block and the calculated weight per block,
Wherein the step of calculating the similarity distance values and the step of recognizing the image are selectively performed according to the determination result of the determination step.
The method of claim 5,
Wherein,
Wherein the recognition of the test image is rejected if the value calculated by applying the selected local feature descriptor to the object probability model is less than a first threshold value.
The method of claim 5,
Wherein,
Wherein an image having the smallest similarity distance value to the test image is selected from among the plurality of reference images, and the recognition of the test image is rejected if the similarity distance value between the selected image and the test image is greater than a second threshold value.
The method of claim 2,
Wherein the calculating of the similarity distance values comprises:
Figure 112013115118319-pat00042

Wherein the similarity distance values are calculated according to the following equation,
Here,
Figure 112013115118319-pat00043
is the local feature descriptor of the m-th block of the i-th reference image,
d () is the similarity distance calculation function,
Figure 112013115118319-pat00044
is the weight for the m-th block of the test image,
And I i is an i-th reference image.
An image recognition apparatus comprising:
A storage unit for storing an object probability model estimated using local feature descriptors for a plurality of blocks constituting each of a plurality of reference images;
An input unit for receiving a test image;
A detector for dividing the test image into a plurality of blocks, acquiring a plurality of local feature descriptors for each divided block, and detecting one local feature descriptor for each block of the test image using the object probability model;
A calculating unit for calculating a weight per block of the test image using the object probability model; And
Comparing the plurality of reference images and the test image sequentially, calculating similarity distance values between blocks matching each other using the detected local feature descriptor and the weight, comparing the calculated similarity distance values, And an image recognition unit for recognizing an image corresponding to the test image among a plurality of reference images.
The apparatus of claim 9,
Wherein the obtained plurality of local feature descriptors have different directional properties,
Wherein the detector:
Selects, for each block of the test image, one local feature descriptor corresponding to the directionality of the test image from among the plurality of obtained local feature descriptors.
The apparatus of claim 10,
Wherein the detector:
Figure 112013115118319-pat00045

Detects the one local feature descriptor according to the following equation,
Here, m is the index of each block,
Figure 112013115118319-pat00046
is the k-direction local feature descriptor (k = 0, ..., 8) of the m-th block of the test image,
p m is the object probability model,
Figure 112013115118319-pat00047
is the selected local feature descriptor of the m-th block (m = 1, ..., M) of the test image,
And I tst is a test image.
The apparatus of claim 9,
Further comprising a determination unit configured to determine whether to reject the recognition of the test image,
Wherein the image recognizing unit selectively operates according to the determination result of the determining unit.
The apparatus of claim 12,
Wherein the determination unit rejects recognition of the test image if a value calculated by applying the detected local feature descriptor to the object probability model is less than a first threshold value.
The apparatus of claim 12,
Wherein the determination unit selects, from among the plurality of reference images, an image having the smallest similarity distance value to the test image, and rejects recognition of the test image if the similarity distance value between the selected image and the test image is greater than a second threshold value.
The apparatus of claim 9,
Wherein the image recognizing unit:
Figure 112013115118319-pat00048

Calculates the similarity distance values according to the following equation,
Here,
Figure 112013115118319-pat00049
is the local feature descriptor of the m-th block of the i-th reference image,
d () is the similarity distance calculation function,
Figure 112013115118319-pat00050
is the weight for the m-th block of the test image,
And I i is an i-th reference image.


KR1020130156639A 2013-12-16 2013-12-16 Method of recognizing face and face recognition apparatus KR101521136B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020130156639A KR101521136B1 (en) 2013-12-16 2013-12-16 Method of recognizing face and face recognition apparatus


Publications (1)

Publication Number Publication Date
KR101521136B1 true KR101521136B1 (en) 2015-05-20

Family

ID=53394944

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020130156639A KR101521136B1 (en) 2013-12-16 2013-12-16 Method of recognizing face and face recognition apparatus

Country Status (1)

Country Link
KR (1) KR101521136B1 (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20040001319A (en) * 2002-06-27 2004-01-07 주식회사 케이티 Face awareness method to use face information abstraction method and he
JP2008501172A (en) * 2004-05-28 2008-01-17 ソニー・ユナイテッド・キングダム・リミテッド Image comparison method
KR101314293B1 (en) * 2012-08-27 2013-10-02 재단법인대구경북과학기술원 Face recognition system robust to illumination change

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20040001319A (en) * 2002-06-27 2004-01-07 주식회사 케이티 Face awareness method to use face information abstraction method and he
JP2008501172A (en) * 2004-05-28 2008-01-17 ソニー・ユナイテッド・キングダム・リミテッド Image comparison method
KR101314293B1 (en) * 2012-08-27 2013-10-02 재단법인대구경북과학기술원 Face recognition system robust to illumination change

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101851695B1 (en) * 2016-11-15 2018-06-11 인천대학교 산학협력단 System and Method for Controlling Interval Type-2 Fuzzy Applied to the Active Contour Model
CN113158928A (en) * 2021-04-27 2021-07-23 浙江云奕科技有限公司 Image recognition-based anti-counterfeiting method for concrete test block
CN113158928B (en) * 2021-04-27 2023-09-19 浙江云奕科技有限公司 Concrete test block anti-counterfeiting method based on image recognition
CN116840693A (en) * 2023-06-30 2023-10-03 深圳市盛弘新能源设备有限公司 Charge and discharge test control method and system based on artificial intelligence
CN116840693B (en) * 2023-06-30 2024-03-19 深圳市盛弘新能源设备有限公司 Charge and discharge test control method and system based on artificial intelligence


Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
FPAY Annual fee payment

Payment date: 20190429

Year of fee payment: 5