KR101748048B1 - Apparatus and method for constructing combined feature based on discriminant analysis for face recognition - Google Patents

Apparatus and method for constructing combined feature based on discriminant analysis for face recognition Download PDF

Info

Publication number
KR101748048B1
Authority
KR
South Korea
Prior art keywords
feature
distance
image
discrimination
vector
Prior art date
Application number
KR1020150184261A
Other languages
Korean (ko)
Inventor
최상일
김준모
신원용
Original Assignee
단국대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 단국대학교 산학협력단 filed Critical 단국대학교 산학협력단
Priority to KR1020150184261A priority Critical patent/KR101748048B1/en
Application granted granted Critical
Publication of KR101748048B1 publication Critical patent/KR101748048B1/en

Links

Images

Classifications

    • G06K9/00268
    • G06K9/00228
    • G06K9/42
    • G06K9/4661

Landscapes

  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The present invention proposes an apparatus and method for generating a combined feature based on discriminant analysis, in which the combined feature is constructed from discriminant features extracted from locally normalized and shadow-compensated images. The proposed apparatus generates a local normalized image and a shadow compensated image by applying local normalization and shadow compensation to a plurality of images recorded under different illumination states, extracts discriminant features from the local normalized image and the shadow compensated image, and generates a combined feature based on the local normalization discriminant feature vector and the shadow compensation discriminant feature vector included in the extracted discriminant features.

Description

APPARATUS AND METHOD FOR CONSTRUCTING COMBINED FEATURE BASED ON DISCRIMINANT ANALYSIS FOR FACE RECOGNITION

The present invention relates to an apparatus and method for generating a combined feature based on discriminant analysis for face recognition and, more particularly, to an apparatus and method for generating a combined feature based on discriminant analysis that produces reference parameters for face recognition.

Face recognition technology identifies a person by comparing an input image with the images of other people stored in a database. Various methods are being developed to improve its accuracy.

Representative appearance-based face recognition methods include the Eigenface method, the Fisherface method, and the DCV (Discriminant Common Vector) method.

Conventional face recognition methods perform well under ideal conditions in which illumination and face pose do not change, but their performance degrades in real environments where lighting conditions, face pose, and the like vary widely.

In particular, changes in lighting conditions cause significant distortion of the face image to be recognized, because various types of shading form and obscure facial features. Such distortion degrades face recognition accuracy. Therefore, a face recognition system must be robust to changes in illumination conditions.

Approaches to the problems caused by changing illumination conditions fall into three categories: face modeling methods, shadow compensation methods applied as preprocessing, and illumination-invariant feature extraction methods. Face modeling methods handle changing illumination by building a physical model of the face.

Generally, to create a physical model, three-dimensional shape information such as surface normals and albedo is required.

The SCFA (Shadow Compensation using Fourier Analysis) and SCWAD (Shadow Compensation using Weighted Average Difference) methods compensate, as a preprocessing step, for the shadows produced by illumination changes, and then perform face recognition through various image processing techniques.

The LN (Local Normalization), LBP (Local Binary Pattern), and MCT (Modified Census Transform) methods extract illumination-invariant features and perform face recognition with them.

The shadow compensation methods and the illumination-invariant feature extraction methods involve neither a learning step nor a modeling step and can be applied directly to 2D face images without 3D facial shape information. Accordingly, they are frequently used in face recognition systems because they do not require a large amount of computation.

However, no single one of these illumination-robust face recognition methods provides optimal recognition performance for all types of illumination change: each algorithm has its own advantages and disadvantages, and recognition performance varies with the kind of illumination change.

Korean Registered Patent No. 10-1326691 (Name: Robust face recognition method through statistical learning of regional features)

Disclosure of Invention Technical Goals: The present invention has been proposed to solve the above problems of the related art. An object of the present invention is to provide an apparatus and method for generating a combined feature based on discriminant analysis, which extract discriminant features from a local normalized image and a shadow compensated image generated through local normalization and shadow compensation of images recorded under different illumination states, and generate a combined feature based on the local normalization discriminant feature vector and the shadow compensation discriminant feature vector included in the extracted discriminant features.

According to an aspect of the present invention, there is provided an apparatus for generating a combined feature based on discriminant analysis for face recognition, comprising: an input unit for receiving a plurality of images recorded in different illumination states; a local normalization processing unit for performing local normalization on the plurality of images input to the input unit to generate a local normalized image; a shading compensation processing unit for performing shading compensation on the plurality of images input to the input unit to generate a shading compensation image; a discrimination feature extraction unit for extracting a discrimination feature including a local normalization discrimination feature vector and a shading compensation discrimination feature vector from the local normalized image generated by the local normalization processing unit and the shading compensation image generated by the shading compensation processing unit; and a combined feature generation unit for generating a combined feature based on the local normalization discrimination feature vector and the shading compensation discrimination feature vector included in the discrimination feature extracted by the discrimination feature extraction unit.

Using a local normalization (LN) method, the local normalization processing unit calculates an illumination value for each pixel of each image recorded in a different illumination state with reference to the image recorded in the forward lighting state among the plurality of images, calculates locally normalized illumination values for those images using local average and variance values, and generates a local normalized image of each image based on the calculated locally normalized illumination values.

The local normalization processing unit calculates the illumination value for each pixel of the pieces constituting the image using multiplicative noise and additive noise.

Using a shadow compensation method (SCFA), the shading compensation processor calculates the frequency-magnitude component of a non-shadowed average image, compensates the frequency-magnitude component of the shadowed image with the calculated frequency-magnitude component to produce a frequency-magnitude component compensation value, and generates a shadow compensation image through an inverse Fourier transform of the frequency-magnitude component compensation value.

The discrimination feature extraction unit converts the local normalized image and the shadow compensation image into a local normalization vector and a shading compensation vector through a discriminant common vector (DCV) method, and extracts the discrimination features, including the local normalization discrimination feature vector and the shading compensation discrimination feature vector, based on a matrix consisting of projection vectors satisfying an objective function together with the local normalization vector and the shading compensation vector.

The combined feature generation unit defines the discrimination distance of each discrimination feature based on its within-class distance and between-class distance, generates a discrimination distance vector whose elements are the discrimination distances of the mixed vector obtained by combining the local normalization discrimination feature vector and the shadow compensation discrimination feature vector, and generates the combined feature based on the discrimination distance vector.

The combined feature generation unit calculates the within-class distance and the between-class distance of the discrimination features by the following equation:

Figure 112015126048113-pat00001

where D_W^j is the within-class distance of the j-th feature, D_B^j is the between-class distance of the j-th feature, m_i^j is the j-th element of the mean of class C_i, m^j is the j-th element of the mean of the training data samples, and N_i is the number of samples of class C_i.

The combined feature generation unit calculates the discrimination distance of each discrimination feature by the equation: discrimination distance_j = D_B^j - βD_W^j, where discrimination distance_j is the discrimination distance of the j-th discrimination feature, D_W^j is the within-class distance, D_B^j is the between-class distance, and β is a penalty value for D_W^j.

The combined feature generation unit selects, as the combined feature, the discrimination features corresponding to the largest values of the generated discrimination distance vector.

According to another aspect of the present invention, there is provided a method of generating a combined feature based on discriminant analysis for face recognition, the method comprising: receiving a plurality of images recorded in different illumination states; performing local normalization on the input images to generate local normalized images; performing shadow compensation on the input images to generate shadow compensation images; extracting discrimination features including a local normalization discrimination feature vector and a shadow compensation discrimination feature vector from the generated local normalized images and shadow compensation images; and generating a combined feature based on the local normalization discrimination feature vector and the shadow compensation discrimination feature vector included in the extracted discrimination features.

Wherein the step of generating the local normalized image comprises: calculating an illumination value for each pixel of the image recorded in the different illumination state based on the image recorded in the forward illumination state among the plurality of images; Calculating an illumination value locally normalized for each pixel of the image recorded in different illumination states based on the image recorded in the forward lighting state using the local average value and the variance value; And generating a local normalized image of each image based on the calculated local normalized illumination values.

The step of calculating the illumination value for each pixel of the image calculates an illumination value for each pixel of the pieces constituting the image using multiplicative noise and additive noise.

Generating a shadow compensation image comprises: calculating a frequency-magnitude component of the non-shadowed average image; calculating a frequency-magnitude component compensation value by compensating the frequency-magnitude component of the shadowed image with the calculated frequency-magnitude component; and generating a shadow compensated image through an inverse Fourier transform of the frequency-magnitude component compensation value.

The step of extracting the discriminating feature comprises the steps of converting the local normalized image and the shade compensation image into a local normalization vector and a shading compensation vector through a discriminative common vector (DCV) method; Extracting a matrix composed of a projection vector satisfying an objective function through a discriminant common vector method; And extracting a discrimination feature including a local normalization discrimination feature vector and a shading compensation discrimination feature vector based on the extracted matrix, the local normalization vector, and the shading compensation vector.

The generating of the combined feature may include: calculating a within-class distance and a between-class distance of the extracted discrimination features; calculating a discrimination distance of each discrimination feature based on the within-class distance and the between-class distance; generating a discrimination distance vector whose elements are the discrimination distances of the mixed vector obtained by combining the local normalization discrimination feature vector and the shading compensation discrimination feature vector; and generating a combined feature based on the generated discrimination distance vector.

In the step of calculating the within-class distance and the between-class distance, the distances are calculated by the following equation:

Figure 112015126048113-pat00002

where D_W^j is the within-class distance, D_B^j is the between-class distance, m_i^j is the j-th element of the mean of class C_i, m^j is the j-th element of the mean of the training data samples, and N_i is the number of samples of class C_i.

In the step of calculating the discrimination distance, the discrimination distance of each discrimination feature is calculated by the equation discrimination distance_j = D_B^j - βD_W^j, where discrimination distance_j is the discrimination distance of the j-th discrimination feature, D_W^j is the within-class distance, D_B^j is the between-class distance, and β is a penalty value for D_W^j.

In the step of generating the combined feature, the discrimination features having the largest values of the generated discrimination distance vector are selected as the combined feature.

According to the present invention, the apparatus and method for generating a combined feature based on discriminant analysis for face recognition generate a local normalized image and a shadow compensated image through local normalization and shadow compensation of a plurality of images recorded under different illumination states, and generate the combined feature from the local normalization discriminant feature vector and the shadow compensation discriminant feature vector included in the discriminant features extracted from those images. Unlike conventional methods that use a single technique, this maximizes the recognition rate on all types of databases.

FIG. 1 is a block diagram for explaining a combined feature generation apparatus based on discriminant analysis for face recognition according to an embodiment of the present invention.
FIG. 2 is a diagram for explaining the input unit of FIG. 1.
FIG. 3 is a diagram for explaining the local normalization processing unit of FIG. 1.
FIG. 4 is a diagram for explaining the shading compensation processing unit of FIG. 1.
FIG. 5 is a diagram for explaining the combined feature generation unit of FIG. 1.
FIG. 6 is a flowchart illustrating a combined feature generation method based on discriminant analysis for face recognition according to an embodiment of the present invention.
FIG. 7 is a flowchart for explaining the local normalized image generation step of FIG. 6.
FIG. 8 is a flowchart for explaining the shadow compensation image generation step of FIG. 6.
FIG. 9 is a flowchart for explaining the discrimination feature extraction step of FIG. 6.
FIG. 10 is a flowchart for explaining the combined feature generation step of FIG. 6.
FIGS. 11 to 15 are diagrams for explaining face recognition experiment results based on combined features generated through the combined feature generation apparatus and method based on discriminant analysis for face recognition according to an embodiment of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings so that a person skilled in the art can easily carry out the technical idea of the present invention. In the drawings, the same reference numerals designate the same or similar components throughout. In the following description, detailed descriptions of known functions and configurations are omitted when they would obscure the subject matter of the present invention.

Hereinafter, a combined feature generation apparatus based on discriminant analysis for face recognition according to an embodiment of the present invention will be described in detail with reference to the accompanying drawings. FIG. 1 is a block diagram for explaining the apparatus; FIG. 2 is a diagram for explaining the input unit of FIG. 1; FIG. 3 is a diagram for explaining the local normalization processing unit of FIG. 1; FIG. 4 is a diagram for explaining the shading compensation processing unit of FIG. 1; and FIG. 5 is a diagram for explaining the combined feature generation unit of FIG. 1.

As shown in FIG. 1, a combined feature generation apparatus 100 based on discriminant analysis for face recognition (hereinafter, the combined feature generation apparatus 100) includes an input unit 110, a local normalization processing unit 130, a shading compensation processing unit 150, a discrimination feature extraction unit 170, and a combined feature generation unit 190.

The input unit 110 receives images to be recognized, that is, images recorded under changing illumination. As shown in FIG. 2, the input unit 110 receives a plurality of images recorded in different illumination change states.

The local normalization processing unit 130 performs local normalization on the images input to the input unit 110. That is, the local normalization processing unit 130 generates local normalized images using the local normalization (LN) method, one of the effective illumination-invariant feature extraction methods.

The LN method treats a person's face as a sequence of small, flat pieces. If the image recorded in the forward lighting state is I and an image recorded under another type of illumination change is I_V, the local normalization processing unit 130 can express the illumination value I_V(x, y) of the pixel at position (x, y) using a multiplicative noise A and an additive noise B, as in Equation (1) below.

I_V(x, y) = A · I(x, y) + B (1)

At this time, the basic idea of the LN method is that if I(x, y) and I_V(x, y) in each piece are locally normalized to have zero mean and unit variance, the resulting images I^LN and I_V^LN are identical. Therefore, the local normalization processing unit 130 expresses I^LN and I_V^LN as in Equation (2) below.

I^LN(x, y) = (I(x, y) - E(I)) / √Var(I),  I_V^LN(x, y) = (I_V(x, y) - E(I_V)) / √Var(I_V) (2)

Where E (·) is the local average value in the piece W and Var (·) is the variance value in the piece W.

If each piece is defined as a rectangle for simple and easy handling, the local normalization processing unit 130 can calculate I^LN and I_V^LN by applying an N × N filter to the pixel at position (x, y). As shown in FIG. 3, the local normalization processing unit 130 generates a local normalized image I^LN of the image input to the input unit 110 using the LN method.
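For illustration, a minimal Python/NumPy sketch of this LN step is given below. The patch size, the uniform-filter computation of the local mean and variance, and the eps guard against zero variance are assumptions of the sketch, not values fixed by the patent.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_normalize(image, patch_size=7, eps=1e-6):
    """Local normalization (LN): give each N x N piece zero mean and unit
    variance, cancelling the per-piece illumination model I_V = A*I + B."""
    img = image.astype(np.float64)
    local_mean = uniform_filter(img, size=patch_size)             # E(.) over piece W
    local_sq_mean = uniform_filter(img ** 2, size=patch_size)     # E(I^2) over piece W
    local_var = np.maximum(local_sq_mean - local_mean ** 2, 0.0)  # Var(.) over piece W
    return (img - local_mean) / np.sqrt(local_var + eps)          # I^LN, Equation (2)
```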

The shading compensation processing unit 150 performs shadow compensation on the images input to the input unit 110. That is, it generates an image in which the shadows on the face of the input image are compensated, using the SCFA (shadow compensation using Fourier analysis) method among the shadow compensation methods.

An image signal I(x, y) ∈ R^(M×N) of size M × N in the spatial domain can be expressed, through the Fourier transform, by the magnitude component and the phase component of the frequency domain as in Equation (3) below.

I(x, y) = (1/MN) Σ_u Σ_v |F_I(u, v)| e^(jφ_I(u, v)) e^(j2π(ux/M + vy/N)) (3)

The shading caused by illumination change mainly affects the frequency magnitude component and has little effect on the phase component. For an image signal, the phase component φ_I(u, v), which carries the image structure information, therefore plays a more important role than the magnitude component |F_I(u, v)|.

Accordingly, the shading compensation processing unit 150 calculates the frequency-magnitude component |F_IAUX(u, v)| of a non-shadowed average image using the SCFA method, and compensates the frequency-magnitude component |F_IA(u, v)| of the shadowed image with |F_IAUX(u, v)| to calculate the frequency-magnitude component compensation value |F_C(u, v)| as in Equation (4) below.

Figure 112015126048113-pat00006

The shading compensation processing unit 150 obtains the shadow compensated image I_SCFA(x, y) through the inverse Fourier transform of the frequency-magnitude component compensation value |F_C(u, v)| together with the phase component, as in Equation (5) below. As shown in FIG. 4, the shading compensation processing unit 150 generates a shadow compensation image I_SCFA of the image input to the input unit 110 using the SCFA method.

I_SCFA(x, y) = F^(-1){ |F_C(u, v)| e^(jφ_IA(u, v)) } (5)
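For illustration, a Python/NumPy sketch of this magnitude-compensation idea follows. Because Equation (4) is reproduced here only as an image, the blending rule inside the sketch (averaging the two magnitude spectra) and the function names are assumptions.

```python
import numpy as np

def scfa_compensate(shadowed, reference_avg):
    """SCFA-style shadow compensation sketch: keep the phase of the shadowed
    image (structure information) and compensate its Fourier magnitude with
    that of a non-shadowed average image."""
    F_in = np.fft.fft2(shadowed.astype(np.float64))        # shadowed image spectrum
    F_aux = np.fft.fft2(reference_avg.astype(np.float64))  # source of |F_IAUX|
    mag_in, phase_in = np.abs(F_in), np.angle(F_in)
    # Assumed compensation rule standing in for Equation (4): blend the
    # two magnitude spectra (the exact rule is not reproduced in this text).
    mag_c = 0.5 * (mag_in + np.abs(F_aux))                 # |F_C(u, v)|
    # Equation (5): inverse transform of compensated magnitude + original phase.
    return np.real(np.fft.ifft2(mag_c * np.exp(1j * phase_in)))
```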

The discrimination feature extraction unit 170 extracts discrimination features from the local normalized image I^LN generated by the local normalization processing unit 130 and the shadow compensation image I_SCFA generated by the shading compensation processing unit 150. At this time, the discrimination feature extraction unit 170 extracts the discrimination features using the DCV (Discriminant Common Vector) method, which is effective for classifying high-dimensional data.

The discrimination feature extraction unit 170 converts each image of size M × N into a local normalization vector X_LN and a shading compensation vector X_SCFA of dimension n (n = M · N), and extracts the discrimination features through linear discriminant analysis.

If the number of classes, the between-class covariance matrix, and the within-class covariance matrix are denoted C, S_B, and S_W, respectively, the null space of S_W contains a large amount of useful discrimination information.

Using the DCV method, the discrimination feature extraction unit 170 extracts a matrix W = [w_1, w_2, ..., w_m] consisting of projection vectors w_l (l = 1, ..., m) that satisfy the objective function (see Equation 6), that is, that maximize |W^T S_B W| subject to W^T S_W W = 0.

W = arg max |W^T S_B W| subject to W^T S_W W = 0 (6)

The discrimination feature extraction unit 170 calculates the local normalization discrimination feature vector Y_LN = [Y_1^LN, ..., Y_m^LN]^T for the local normalization vector X_LN and the shading compensation discrimination feature vector Y_SCFA = [Y_1^SCFA, ..., Y_m^SCFA]^T for the shading compensation vector X_SCFA, as in Equation (7) below.

Y_LN = W^T X_LN,  Y_SCFA = W^T X_SCFA (7)
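For illustration, the following Python/NumPy sketch implements a DCV-style projection consistent with Equations (6) and (7). The eigendecomposition route to the null space of S_W and the numerical tolerance are assumptions of the sketch.

```python
import numpy as np

def dcv_projection(X, labels, m):
    """DCV-style sketch: find projection vectors W with W^T S_W W = 0 (null
    space of the within-class scatter) maximizing |W^T S_B W| (Equation 6).
    X: (n_features, n_samples); labels: (n_samples,) integer class labels."""
    n = X.shape[0]
    mean_all = X.mean(axis=1, keepdims=True)
    S_W = np.zeros((n, n))
    S_B = np.zeros((n, n))
    for c in np.unique(labels):
        Xc = X[:, labels == c]
        mc = Xc.mean(axis=1, keepdims=True)
        S_W += (Xc - mc) @ (Xc - mc).T                            # within-class scatter
        S_B += Xc.shape[1] * (mc - mean_all) @ (mc - mean_all).T  # between-class scatter
    w_vals, w_vecs = np.linalg.eigh(S_W)
    null = w_vecs[:, w_vals < 1e-8 * max(w_vals.max(), 1.0)]      # null space of S_W
    b_vals, b_vecs = np.linalg.eigh(null.T @ S_B @ null)
    return null @ b_vecs[:, -m:]          # W; project features as Y = W^T x (Eq. 7)
```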

The combined feature generation unit 190 generates the combined feature based on the local normalization discrimination feature vector Y_LN = [Y_1^LN, ..., Y_m^LN]^T and the shading compensation discrimination feature vector Y_SCFA = [Y_1^SCFA, ..., Y_m^SCFA]^T extracted by the discrimination feature extraction unit 170.

A comparative analysis of face recognition performance with Y_LN, Y_LBP, Y_SCFA, and Y_SCWAD was performed on the Multi-PIE, AR, and Yale databases, which are used as references by many studies on illumination change. As shown in FIG. 5, the local normalization discriminant feature vector Y_LN shows better performance than the shadow compensation discriminant feature vector Y_SCFA on the AR database, while on other databases Y_SCFA exhibits a better recognition rate than Y_LN.

This is because the type, shape, and size of the shading can vary under the same lighting conditions depending on the facial characteristics of the individual. Individuals with deep-set eyes and protruding noses tend to have more shaded area than others.

Likewise, because the local normalization discriminant feature vector Y_LN and the shadow compensation discriminant feature vector Y_SCFA exhibit different characteristics depending on the shape of the shading, neither of them provides the best performance under all circumstances.

Therefore, the combined feature generation unit 190 extracts only the features having good discriminability from among the features of the local normalization discriminant feature vector Y_LN and the shadow compensation discriminant feature vector Y_SCFA, and combines them.

To effectively utilize the different characteristics of the LN method and the SCFA method, the combined feature generation unit 190 evaluates the features produced by each of the two methods and constructs a combined feature that joins the two.

To this end, the combined feature generation unit 190 measures, using a discrimination distance criterion, the separability of the discrimination features (i.e., Y_i^LN, Y_i^SCFA, i = 1, ..., m) extracted from the local normalized image I^LN and the shadow compensation image I_SCFA.

In order to measure the discrimination distance, the combined feature generation unit 190 defines the within-class distance D_W^j and the between-class distance D_B^j as in Equation (8).

Figure 112015126048113-pat00010

Here, m_i^j is the j-th element of the mean of class C_i, m^j is the j-th element of the mean of the training data samples, and N_i is the number of samples of class C_i.

Based on the within-class distance D_W^j and the between-class distance D_B^j, the combined feature generation unit 190 defines the discrimination distance of the j-th feature as D_B^j - βD_W^j. Separability increases as the between-class distance grows and the within-class distance shrinks. Here β, a penalty on D_W^j in the discrimination distance, is a user parameter chosen according to the attributes of the data; β can be set to a small value when the data are widely spread within a class but still highly discriminative.

The combined feature generation unit 190 generates a discrimination distance vector Dist = [D_1, D_2, ..., D_2m]^T whose elements D_i are the discrimination distances of the i-th features Y_i^C (i = 1, 2, ..., 2m) of the mixed vector Y_C = [Y_LN^T Y_SCFA^T]^T obtained by combining the local normalization discrimination feature vector Y_LN and the shading compensation discrimination feature vector Y_SCFA.

The combined feature generation unit 190 selects the features whose discrimination distance vector elements have large values as the combined feature Y_CF. The combined feature vector Y_CF is input to the classifier 200 for face recognition and is used as the reference factor for face recognition.
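For illustration, the selection step can be sketched in Python/NumPy as follows. Since Equation (8) is reproduced here only as an image, the per-feature within-class and between-class scatter formulas below are a common variant offered as an assumption, as are β and the number k of kept features.

```python
import numpy as np

def discrimination_distances(Y, labels, beta=1.0):
    """Discrimination distance D_j = D_B^j - beta * D_W^j for every feature j
    of the mixed vector. The per-feature scatter formulas below are assumed,
    not quoted from the patent."""
    mean_all = Y.mean(axis=1)
    D_W = np.zeros(Y.shape[0])
    D_B = np.zeros(Y.shape[0])
    for c in np.unique(labels):
        Yc = Y[:, labels == c]                  # samples of class C_i
        m_c = Yc.mean(axis=1)                   # m_i^j for every feature j
        D_W += ((Yc - m_c[:, None]) ** 2).sum(axis=1) / Yc.shape[1]
        D_B += (m_c - mean_all) ** 2
    return D_B - beta * D_W                     # discrimination distance vector

def select_combined_feature(Y_LN, Y_SCFA, labels, k, beta=1.0):
    """Form the mixed vector Y_C = [Y_LN^T Y_SCFA^T]^T and keep the k features
    with the largest discrimination distances (the combined feature Y_CF)."""
    Y_C = np.vstack([Y_LN, Y_SCFA])
    idx = np.argsort(discrimination_distances(Y_C, labels, beta))[::-1][:k]
    return Y_C[idx], idx
```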

Hereinafter, a combined feature generation method based on discriminant analysis for face recognition according to an embodiment of the present invention will be described in detail with reference to the accompanying drawings. FIG. 6 is a flowchart illustrating the method; FIG. 7 is a flowchart for explaining the local normalized image generation step of FIG. 6; FIG. 8 is a flowchart for explaining the shadow compensation image generation step of FIG. 6; FIG. 9 is a flowchart for explaining the discrimination feature extraction step of FIG. 6; and FIG. 10 is a flowchart for explaining the combined feature generation step of FIG. 6.

A combined feature generation apparatus 100 (hereinafter, a combined feature generation apparatus 100) based on a discrimination analysis for face recognition receives a plurality of images recorded in different illumination change states (S100).

The combined feature generation apparatus 100 performs a local normalization (LN) on a plurality of images to generate a local normalized image (S200). That is, the combined feature generation apparatus 100 performs local normalization on a plurality of images input in step S100 using a local normalization (LN) method, which is one of effective illumination invariant feature extraction methods. As a result, the combined feature generation apparatus 100 generates a local normalized image for each of a plurality of images. This will be described with reference to FIG. 7 attached hereto.

The combined feature generation apparatus 100 calculates an illumination value for each pixel of the image I_V recorded in a different illumination state based on the image I recorded in the forward lighting state (S220). That is, the LN method treats a person's face as a sequence of small, flat pieces; when the image recorded in the forward lighting state is I and an image recorded under another type of illumination change is I_V, the illumination value I_V(x, y) can be represented by I(x, y), a multiplicative noise A, and an additive noise B (see Equation 1).

The combined feature generation apparatus 100 calculates a locally normalized illumination value for each pixel constituting the images I and I_V using the local average value and the variance value (S240). That is, the basic idea of the LN method is that if I(x, y) and I_V(x, y) in each piece are locally normalized to have zero mean and unit variance, the resulting images I^LN and I_V^LN are identical. Accordingly, the apparatus computes the locally normalized illumination values I^LN(x, y) and I_V^LN(x, y) from the illumination values calculated in step S220, the local average value E, and the variance value Var (see Equation 2). If each piece is defined as a rectangle for simple and easy processing, the apparatus obtains I^LN and I_V^LN by applying an N × N filter to the pixel at position (x, y).

The combined feature generation apparatus 100 generates a local normalized image I LN of each image based on the local normalized illumination value (S260). That is, the combined feature generation apparatus 100 generates a local normalized image (I LN ) using the local normalized illumination value of each pixel calculated in step S240.

The combined feature generation apparatus 100 performs shadow compensation (SCFA) on the plurality of images to generate shadow compensation images (S300). That is, the apparatus generates, for each image input in step S100, a shadow compensation image in which the shadows on the face are compensated, using the SCFA (shadow compensation using Fourier analysis) method among the shadow compensation methods. This will be described with reference to FIG. 8 attached hereto.

An image signal I(x, y) ∈ R^(M×N) of size M × N in the spatial domain is represented by the magnitude component and the phase component of the frequency domain through the Fourier transform (see Equation 3). The shading caused by illumination change mainly affects the frequency magnitude component and has little effect on the phase component; hence the phase component φ_I(u, v), which carries the image structure information, plays a more important role than the magnitude component |F_I(u, v)|. Based on this, the combined feature generation apparatus 100 calculates the frequency-magnitude component |F_IAUX(u, v)| of the non-shadowed average image using the SCFA method (S320).

The combined feature generation apparatus 100 compensates the frequency-magnitude component |F_IA(u, v)| of the shadowed image with the frequency-magnitude component |F_IAUX(u, v)| to calculate the frequency-magnitude component compensation value |F_C(u, v)| (see Equation 4) (S340).

The combined feature generation apparatus 100 generates the shadow compensation image I_SCFA through the inverse Fourier transform of the calculated frequency-magnitude component compensation value |F_C(u, v)| (S360).

The combined feature generation apparatus 100 extracts the discrimination feature from the local normalized image and the shade compensation image (S400). That is, the combined feature generation apparatus 100 generates the discrimination characteristic from the local normalized image I LN generated in step S200 and the shaded compensation image I SCFA generated in step S300 using the DCV method effective for classifying the high dimensional data . This will be described with reference to FIG. 9 attached hereto.

The combined feature generation apparatus 100 converts the images into a local normalization vector and a shadow compensation vector through the DCV method (S420). That is, the apparatus converts each image of size M × N into a local normalization vector X_LN and a shading compensation vector X_SCFA of dimension n (n = M · N).

The combined feature generation apparatus 100 extracts a matrix composed of a plurality of projection vectors through the DCV method (S440). That is, if the number of classes, the between-class covariance matrix, and the within-class covariance matrix are denoted C, S_B, and S_W, a large amount of discrimination information is contained in the null space of S_W. Using the DCV method, the apparatus extracts a matrix W = [w_1, w_2, ..., w_m] consisting of projection vectors w_l (l = 1, ..., m) that satisfy the objective function (see Equation 6), maximizing |W^T S_B W| subject to W^T S_W W = 0.

The combined feature generation apparatus 100 calculates the discrimination features based on the local normalization vector, the shading compensation vector, and the extracted matrix (S460). That is, using the local normalization vector X_LN and the shading compensation vector X_SCFA obtained in step S420 and the matrix extracted in step S440, the apparatus calculates the local normalization discrimination feature vector Y_LN = [Y_1^LN, ..., Y_m^LN]^T and the shading compensation discrimination feature vector Y_SCFA = [Y_1^SCFA, ..., Y_m^SCFA]^T (see Equation 7).

The combined feature generation apparatus 100 generates the combined feature based on the discrimination features (S500). That is, the apparatus generates the combined feature based on the local normalization discrimination feature vector Y_LN = [Y_1^LN, ..., Y_m^LN]^T and the shadow compensation discrimination feature vector Y_SCFA = [Y_1^SCFA, ..., Y_m^SCFA]^T.

As noted above, a comparative analysis of face recognition performance with Y_LN, Y_LBP, Y_SCFA, and Y_SCWAD was performed on the Multi-PIE, AR, and Yale databases, which are used as references by many studies on illumination change. As shown in FIG. 5, Y_LN shows better performance than Y_SCFA on the AR database, while on other databases Y_SCFA exhibits a better recognition rate than Y_LN.

This is because the type, shape, and size of the shading can vary under the same lighting conditions depending on the facial characteristics of the individual. Individuals with deep-set eyes and protruding noses tend to have more shaded area than others.

Likewise, because the local normalization discriminant feature vector Y_LN and the shadow compensation discriminant feature vector Y_SCFA exhibit different characteristics depending on the shape of the shading, neither of them provides the best performance under all circumstances.

Therefore, the combined feature generation apparatus 100 extracts only the features having good discriminability from among the features of the local normalization discriminant feature vector Y_LN and the shadow compensation discriminant feature vector Y_SCFA, and combines them.

The combined feature generation apparatus 100 evaluates the features of each of the two methods and constructs the combined feature that joins them, in order to effectively utilize the different characteristics of the LN method and the SCFA method.

For this purpose, the combined feature generation apparatus 100 measures, using the discrimination distance criterion, the separability of the discrimination features (i.e., Y_i^LN, Y_i^SCFA, i = 1, ..., m) extracted from the local normalized image I^LN and the shadow compensation image I_SCFA. The apparatus then generates the combined feature based on the measured separability.

This will be described with reference to FIG. 10 attached hereto.

The combined feature generation apparatus 100 defines the within-class distance and the between-class distance of the discrimination features for measurement of the discrimination distance (S520). That is, the apparatus defines the within-class distance D_W^j and the between-class distance D_B^j based on the j-th element m_i^j of the mean of class C_i, the j-th element m^j of the mean of the training data samples, and the number of samples N_i of class C_i (see Equation 8).

The combined feature generation apparatus 100 defines the discrimination distance of each discrimination feature based on the within-class distance and the between-class distance (S540). That is, the combined feature generation unit 190 defines the discrimination distance using the within-class distance D_W^j and the between-class distance D_B^j calculated in step S520: the between-class distance minus the penalized within-class distance (i.e., D_B^j - βD_W^j) is defined as the discrimination distance of the j-th discrimination feature. Separability increases as the between-class distance grows and the within-class distance shrinks. Here β, a penalty on D_W^j in the discrimination distance, is a user parameter chosen according to the attributes of the data; it can be set to a small value when the data are widely spread within a class but still highly discriminative.

The combined feature generation apparatus 100 generates a discrimination distance vector whose elements are the discrimination distances of the mixed vector of the local normalization discrimination feature vector and the shadow compensation discrimination feature vector (S560). That is, the apparatus generates the discrimination distance vector Dist = [D_1, D_2, ..., D_2m]^T whose elements D_i are the discrimination distances of the i-th features Y_i^C (i = 1, 2, ..., 2m) of the mixed vector Y_C = [Y_LN^T Y_SCFA^T]^T of the local normalization discrimination feature vector Y_LN and the shading compensation discrimination feature vector Y_SCFA.

The combined feature generation apparatus 100 generates the combined feature based on the discrimination distance vector (S580). That is, the apparatus selects the features whose discrimination distance vector elements have large values as the combined feature Y_CF. The combined feature vector Y_CF is input to the classifier 200 for face recognition and is used as the reference factor for face recognition.
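For illustration, a hedged end-to-end sketch tying steps S200 to S580 together, reusing the helper functions sketched earlier, might look like this; all names and parameter values are illustrative assumptions rather than the patent's reference implementation.

```python
import numpy as np

def build_combined_features(images, reference_avg, labels, m=50, k=40, beta=1.0):
    """End-to-end sketch of steps S200-S580, reusing the helpers sketched
    above (local_normalize, scfa_compensate, dcv_projection,
    select_combined_feature). All parameter values are illustrative."""
    # S200 / S300: two illumination-robust representations of every image.
    X_LN = np.stack([local_normalize(img).ravel() for img in images], axis=1)
    X_SCFA = np.stack([scfa_compensate(img, reference_avg).ravel()
                       for img in images], axis=1)
    # S400: DCV projection per representation (Equations 6 and 7); whether one
    # matrix W or one per representation is trained is an assumption here.
    Y_LN = dcv_projection(X_LN, labels, m).T @ X_LN
    Y_SCFA = dcv_projection(X_SCFA, labels, m).T @ X_SCFA
    # S500-S580: keep the most discriminative elements of the mixed vector.
    return select_combined_feature(Y_LN, Y_SCFA, labels, k, beta)
```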

Hereinafter, face recognition experiment results based on combined features generated through the combined feature generation apparatus and method based on discriminant analysis for face recognition according to an embodiment of the present invention will be described in detail with reference to the accompanying drawings. FIGS. 11 to 15 are diagrams for explaining these results.

To confirm the effect of the combined features generated by the combined feature generation apparatus and method based on discriminant analysis for face recognition, face recognition performance is evaluated on various face databases commonly used in research on illumination change.

Images taken under seven different illumination conditions in the frontal pose of the CMU-PIE database are selected as the training data, and images from the Multi-PIE, Yale B, AR, and Yale databases are used to measure the recognition rate.

The characteristics of each database are as shown in FIG. 11. All images used here are manually detected and aligned to a size of 120 × 100 based on the coordinates of the two eyes.

The Multi-PIE database consists of images of a total of 249 people acquired under 20 different lighting conditions per person. Among them, the one image per person obtained under frontal illumination is selected as the gallery image, and the remaining 19 images are used as probe images.

The Yale B database consists of frontal-pose images of 10 people acquired under 45 different lighting conditions per person.

Similar to the multi-PIE database, a single image obtained under the forward illumination is selected as the gallery image, and the remaining 44 images are used as the probe image.

Both the AR and the Yale database include a number of changes including facial expression changes, partial-occlusion or illumination changes, but only changes involving illumination changes are selected for measurement of the recognition rate.

The AR database consists of images of a total of 118 people acquired under eight different lighting conditions, while the Yale database consists of images of a total of 15 people acquired under five different lighting conditions.

The set of combined features Y_CF generated by the combined feature generation apparatus and method based on discriminant analysis for face recognition was compared with the discriminant features (Y_LN, Y_LBP, Y_SCFA, and Y_SCWAD) extracted by the LN, LBP, SCFA, and SCWAD methods. The Euclidean distance is used as the distance measure, and the nearest neighbor classifier is used as the classifier 200.
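For reference, a minimal nearest neighbor classifier of the kind described (Euclidean distance; the function and variable names are illustrative) could be sketched as:

```python
import numpy as np

def nearest_neighbor_classify(gallery, gallery_labels, probe):
    """1-NN classifier with Euclidean distance: each probe feature vector gets
    the label of the closest gallery feature vector.
    gallery: (k, n_gallery); probe: (k, n_probe); gallery_labels: ndarray."""
    # Pairwise Euclidean distances between probe and gallery columns.
    d = np.linalg.norm(probe[:, :, None] - gallery[:, None, :], axis=0)
    return gallery_labels[np.argmin(d, axis=1)]
```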

The higher the eigenvalue of a projection vector w_i constituting the matrix W, the better the i-th feature y_i of the discriminant feature vector y satisfies the objective function. The combined feature generation apparatus and method based on discriminant analysis for face recognition use the projection vectors in ascending order of eigenvalue, and the tendency of the face recognition rate is examined as the dimension of the feature space increases.

FIGS. 12 to 15 show the recognition rates when using the combined feature vector Y_CF, whose elements are only the features with good discriminability among the Y_LN and Y_SCFA features for each database, and when using only Y_LN, Y_LBP, Y_SCFA, or Y_SCWAD, with the dimensions increased in ascending order of eigenvalues. FIG. 12 shows the recognition rate on the Multi-PIE database, FIG. 13 on the Yale B database, FIG. 14 on the AR database, and FIG. 15 on the Yale database.

As shown in Figs. 12 to 15, the recognition rate improves as all dimensions of Y LN , Y LBP , Y SCFA, and Y SCWAD and Y CF increase.

The Multi-PIE and Yale B databases exhibit a higher overall recognition rate than the AR and Yale databases, which is assumed to be related to the characteristics of the CMU-PIE database used as training data.

The multi-PIE database contains images acquired under illumination set at an angle similar to the illumination angle of the CMU-PIE database.

Although the illumination setup angles of the Yale B database differ from those of the CMU-PIE database, all images in both databases were acquired with flash light only, without ambient illumination, so both show similarly dark shadows.

In contrast, all images in the AR and Yale databases were acquired with additional flash light under background illumination, and the installation angles of the flash light fixtures differed from the installation angles of the CMU-PIE database, so these two databases have a relatively lower recognition rate.

For the multi-PIE database, when only the illumination invariant feature extraction method or the shading compensation method was used, the recognition rate ranged from 97.2% to 98.1% depending on the method.

In the combined feature generation apparatus and method based on discriminant analysis for face recognition, the synergy of Y_LN and Y_SCFA yielded the highest recognition rate, 99.3%. Similarly, on the Yale B database, using the combined feature shows a recognition rate 0.0% to 7.7% higher than using a single method.

On the other hand, the AR and Yale databases showed varying recognition rates depending on the method used. For the AR database, when only LN, LBP, SCFA, or SCWAD was used, Y_LN showed the best performance, followed by Y_SCWAD, Y_SCFA, and Y_LBP in that order.

However, Y SCWAD showed the best performance in the Yale database experiment, while Y LN showed the lowest recognition rate.

This phenomenon is difficult to explain in terms of the properties of any specific method alone, because it results from combined factors such as the similarity between the training data and the gallery images, the similarity in illumination conditions between probe images, and the differing shading characteristics of individuals.

Although Y_LN and Y_SCFA may give conflicting results depending on the database type, the combined feature formed by extracting only the highly discriminative elements of Y_LN and Y_SCFA shows recognition rates higher by 6.3% and 8.3% than Y_LN on the AR database and Y_SCWAD on the Yale database, respectively.

As described above, the combined feature generation apparatus and method based on discriminant analysis for face recognition generate a local normalized image and a shadow compensated image through local normalization and shadow compensation of a plurality of images recorded under different illumination states, and generate the combined feature from the local normalization discriminant feature vector and the shadow compensation discriminant feature vector included in the discriminant features extracted from those images. Unlike conventional methods that use a single technique, this maximizes the recognition rate on all types of databases.

That is, illumination change is the most representative variation that can occur in a real environment. The combined feature generation apparatus and method based on discriminant analysis for face recognition propose a hybrid face recognition method combining the local normalization method and the shadow compensation method to handle illumination change. Previously developed methods each have advantages and disadvantages depending on the characteristics of the shading involved, and their performance varies widely with the environmental conditions (or experimental database).

The combined feature generation apparatus and method based on discriminant analysis for face recognition extract discrimination features using each of the illumination-invariant feature extraction method and the shadow compensation method, which have different characteristics, and measure the separability of the extracted discrimination features using the discrimination distance criterion. Then, the apparatus and method construct a feature set combining only the features with good discriminability. According to the experimental results, conventional individual methods show different recognition rates on different types of databases, while the combined feature generation apparatus and method based on discriminant analysis for face recognition maximize the recognition rate on all types of databases.

While the present invention has been described in connection with what is presently considered to be practical exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, and that many variations and modifications may be made without departing from the scope of the present invention.

100: combined feature generation apparatus 110: input unit
130: local normalization processing unit 150: shading compensation processing unit
170: discrimination feature extraction unit 190: combined feature generation unit
200: classifier

Claims (18)

An input unit for receiving a plurality of images recorded in different illumination states;
A local normalization unit for performing local normalization on the plurality of images input to the input unit to generate a local normalized image;
A shading compensation processor for performing shading compensation on a plurality of images input to the input unit to generate a shading compensation image;
A discrimination feature extraction unit for extracting a discrimination feature including a local normalization discrimination feature vector and a shading compensation discrimination feature vector from the local normalized image generated by the local normalization processing unit and the shading compensation image generated by the shading compensation processing unit; And
And a combined feature generation unit for generating a combined feature based on the local normalization discriminant feature vector and the shading compensation discriminant feature vector included in the discriminant feature extracted by the discrimination feature extractor,
Wherein the combining feature generation unit comprises:
Defines the discrimination distance of each discrimination feature based on its within-class distance and between-class distance, generates a discrimination distance vector using the discrimination distances of the mixed vector obtained by combining the local normalization discrimination feature vector and the shadow compensation discrimination feature vector, and generates the combined feature based on the discrimination distance vector.
The apparatus according to claim 1,
The local normalization processing unit,
Calculates, using a local normalization (LN) method, an illumination value for each pixel of the images recorded in the different illumination states based on the image recorded in the forward lighting state among the plurality of images, calculates locally normalized illumination values for those images using a local average value and a variance value, and generates a local normalized image of each image based on the calculated locally normalized illumination values.
The apparatus of claim 2,
The local normalization processing unit,
Wherein the illumination value for each pixel of the pieces constituting the image is calculated using multiplicative noise and additive noise.
The apparatus according to claim 1,
Wherein the shading compensation processing unit comprises:
Calculates the frequency-magnitude component of a non-shadowed average image using a shadow compensation method (SCFA), compensates the frequency-magnitude component of the shadowed image with the calculated frequency-magnitude component to calculate a frequency-magnitude component compensation value, and generates a shadow compensation image through an inverse Fourier transform of the frequency-magnitude component compensation value.
The apparatus according to claim 1,
Wherein the discrimination feature extraction unit:
Converts the local normalized image and the shadow compensated image into a local normalization vector and a shading compensation vector through a discriminant common vector (DCV) method, and extracts the discrimination features, including the local normalization discrimination feature vector and the shading compensation discrimination feature vector, based on a matrix consisting of projection vectors satisfying an objective function together with the local normalization vector and the shading compensation vector.
delete
The apparatus according to claim 1,
Wherein the combining feature generation unit comprises:
Equation
Figure 112017003541870-pat00011

(where D_W^j is the within-class distance, D_B^j is the between-class distance, m_i^j is the j-th element of the mean of class C_i, m^j is the j-th element of the mean of the training data samples, and N_i is the number of samples of class C_i), wherein the within-class distance and the between-class distance of the discrimination features are calculated by the above equation.
The apparatus according to claim 1,
Wherein the combining feature generation unit comprises:
Equation
Discrimination distance_j = D_B^j - βD_W^j
(where discrimination distance_j is the discrimination distance of the j-th discrimination feature, D_W^j is the within-class distance, D_B^j is the between-class distance, and β is a penalty value for D_W^j), wherein the discrimination distance of each discrimination feature is calculated using the above equation.
The apparatus according to claim 1,
Wherein the combining feature generation unit comprises:
Wherein the discrimination feature having the largest value of the discrimination distance vector is generated as a combining feature.
A combined feature generation method based on discriminant analysis for face recognition using a combined feature generation apparatus, the method comprising:
Receiving a plurality of images recorded in different illumination states;
Performing local normalization on the input plurality of images to generate a local normalized image;
Performing shadow compensation on the input plurality of images to generate a shadow compensation image;
Extracting a discriminating feature including a local normalization discriminating feature vector and a shading compensation discriminating feature vector from the generated local normalized image and the shading compensation image; And
And generating a combined feature based on the local normalization discrimination feature vector and the shadow compensation discrimination feature vector included in the extracted discrimination features,
Wherein generating the combining feature comprises:
Calculating a within-class distance and a between-class distance of the extracted discrimination features;
Calculating a discrimination distance of the discrimination feature based on the distance in the class and the distance between the classes;
Generating a discrimination distance vector having a discrimination distance of a mixed vector obtained by combining the local normalization discrimination characteristic vector and the shading compensation discrimination characteristic vector as elements; And
And generating a combined feature based on the generated discriminant distance vector.
The method of claim 10,
Wherein generating the local normalized image comprises:
Calculating an illumination value for each pixel of the images recorded in the different illumination states, based on the image recorded in the frontal illumination state among the plurality of images;
Calculating a locally normalized illumination value for each pixel of the images recorded in the different illumination states, relative to the image recorded in the frontal illumination state, using a local mean value and a local variance value; and
Generating a local normalized image of each image on the basis of the calculated locally normalized illumination values.
The method of claim 11,
Wherein calculating an illumination value for each pixel of the image comprises:
Calculating the illumination value for each pixel of the patches constituting the image using multiplicative noise and additive noise.
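Claims 11 and 12 together describe local normalization under a multiplicative/additive illumination model. A minimal Python sketch under the common assumption I = n_m * I0 + n_a, with both noise terms varying slowly across the image, is given below; the window size and the scipy-based implementation are assumptions, not disclosed values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_normalize(img, win=7, eps=1e-6):
    """Local normalization with a sliding local mean and variance.

    Assumes the usual imaging model I = n_m * I0 + n_a with slowly
    varying multiplicative (n_m) and additive (n_a) illumination noise:
    subtracting the local mean suppresses n_a, and dividing by the
    local standard deviation cancels n_m. The window size `win` is an
    assumption; the claims do not fix it.
    """
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, size=win)
    local_sq_mean = uniform_filter(img * img, size=win)
    local_var = np.maximum(local_sq_mean - local_mean ** 2, 0.0)
    return (img - local_mean) / (np.sqrt(local_var) + eps)
```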
The method of claim 10,
Wherein generating the shadow compensation image comprises:
Calculating a frequency-magnitude component of the non-shadowed average image;
Calculating a frequency-magnitude component compensation value by compensating the frequency-magnitude component of the shadowed image with the calculated frequency-magnitude component; and
Generating a shadow compensated image through an inverse Fourier transform of the frequency-magnitude component compensation value.
The method of claim 10,
Wherein the step of extracting the discriminating feature comprises:
Converting the local normalized image and the shadow compensated image into a local normalization vector and a shadow compensation vector through a discriminant common vector (DCV) method;
Extracting a matrix composed of projection vectors satisfying an objective function through the discriminant common vector method; and
Extracting a discriminating feature including a local normalization discriminating feature vector and a shadow compensation discriminating feature vector on the basis of the extracted matrix, the local normalization vector, and the shadow compensation vector, in the method for generating a combined feature based on discriminant analysis for face recognition.
delete
The method of claim 10,
In the step of calculating the within-class distance and the between-class distance,
the within-class distance and the between-class distance of the discrimination feature are calculated on the basis of the equation
Figure 112017003541870-pat00012
(where D_W^i is the within-class distance, D_B^i is the between-class distance, m_i^j is the j-th element of the mean of class c_i, m^j is the j-th element of the mean of all training data samples, and N_i is the number of samples in class c_i).
The method of claim 10,
In the step of calculating the discrimination distance,
the discrimination distance of each discrimination feature is calculated using the equation
discrimination distance_j = D_B^j - β * D_W^j
(where discrimination distance_j is the discrimination distance of the j-th discrimination feature, D_W^j is the within-class distance, D_B^j is the between-class distance, and β is a penalty value on D_W^j), in the method for generating a combined feature based on discriminant analysis for face recognition.
The method of claim 10,
In generating the combining feature,
Wherein the discrimination features having the largest values in the discrimination distance vector are generated as the combining feature.
KR1020150184261A 2015-12-22 2015-12-22 Apparatus and method for constructing combined feature based on discriminant analysis for face recognition KR101748048B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150184261A KR101748048B1 (en) 2015-12-22 2015-12-22 Apparatus and method for constructing combined feature based on discriminant analysis for face recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150184261A KR101748048B1 (en) 2015-12-22 2015-12-22 Apparatus and method for constructing combined feature based on discriminant analysis for face recognition

Publications (1)

Publication Number Publication Date
KR101748048B1 true KR101748048B1 (en) 2017-06-15

Family

ID=59217535

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150184261A KR101748048B1 (en) 2015-12-22 2015-12-22 Apparatus and method for constructing combined feature based on discriminant analysis for face recognition

Country Status (1)

Country Link
KR (1) KR101748048B1 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101515928B1 (en) * 2013-11-29 2015-05-06 재단법인대구경북과학기술원 Apparatus and method for face recognition using variable weight fusion

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071415A (en) * 2023-02-08 2023-05-05 淮阴工学院 Stereo matching method based on improved Census algorithm
CN116071415B (en) * 2023-02-08 2023-12-01 淮阴工学院 Stereo matching method based on improved Census algorithm

Similar Documents

Publication Publication Date Title
Pala et al. Multimodal person reidentification using RGB-D cameras
Bashir et al. Cross view gait recognition using correlation strength.
CN106778468B (en) 3D face identification method and equipment
Karande et al. Independent component analysis of edge information for face recognition
Jüngling et al. View-invariant person re-identification with an implicit shape model
CN111191655A (en) Object identification method and device
Wang et al. Real-time hand posture recognition based on hand dominant line using kinect
Chaa et al. Features-level fusion of reflectance and illumination images in finger-knuckle-print identification system
CN105139013B (en) A kind of object identification method merging shape feature and point of interest
Liang et al. Bayesian multi-distribution-based discriminative feature extraction for 3D face recognition
Nambiar et al. Frontal gait recognition combining 2D and 3D data
CN104636729A (en) Three-dimensional face recognition method based on Bayesian multivariate distribution characteristic extraction
Jin et al. Cross-modality 2D-3D face recognition via multiview smooth discriminant analysis based on ELM
Zhao et al. Amplitude spectrum-based gait recognition
KR101748048B1 (en) Apparatus and method for constructing combined feature based on discriminant analysis for face recognition
CN101739545A (en) Face detection method
DelPozo et al. Detecting specular surfaces on natural images
Karamizadeh et al. Race classification using gaussian-based weight K-nn algorithm for face recognition.
Salah et al. A Proposed Generalized Eigenfaces System for Face Recognition Based on One Training Image
Berretti et al. 3D partial face matching using local shape descriptors
Nishiyama et al. Illumination normalization using quotient image-based techniques
Chen et al. Cross-view gait recognition based on human walking trajectory
Xu et al. MultiView-based hand posture recognition method based on point cloud
Singh et al. Face liveness detection through face structure analysis
Ibikunle et al. Face recognition using line edge mapping approach

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant