KR20150135745A - Device and method for face recognition - Google Patents
Device and method for face recognition
- Publication number
- KR20150135745A KR1020150070651A KR20150070651A
- Authority
- KR
- South Korea
- Prior art keywords
- quality
- face
- quality measurement
- image
- value
- Prior art date
Classifications
- G06K9/00268
- G06K9/00281
- G06K9/00597
Abstract
The present invention relates to a face recognition apparatus and method, and more particularly, to a face recognition apparatus and method with improved face recognition accuracy.
Description
The present invention relates to a face recognition apparatus and method, and more particularly, to a face recognition apparatus and method with improved face recognition accuracy.
Biometric technologies applied to existing user-authentication systems use human-specific characteristic information such as fingerprints, face, iris, and voice. In particular, since the face is the primary means by which people identify one another, face recognition is the most natural and least intrusive biometric technology.
However, conventional face recognition technology that uses facial feature information may suffer degraded recognition performance due to changes in pose, focus, and illumination during face detection and tracking in a continuous image sequence.
The background art of the present invention is disclosed in Korean Patent Laid-Open Publication No. 2013-0114893 (published on October 21, 2013).
The present invention proposes a face recognition apparatus and method capable of selecting a high quality face image by calculating an image quality score by adaptively setting a quality measurement item for each image included in a continuous face image.
The present invention proposes a face recognition apparatus and method that improves face recognition performance by performing face recognition by selecting a high-quality face image from a continuous face image.
According to an aspect of the present invention, a face recognition apparatus is disclosed.
A face recognition apparatus according to an embodiment of the present invention includes an input unit that receives a plurality of continuous face images; a detection unit that detects feature points in the plurality of continuous face images; a quality measurement unit that measures the quality of each face image for each of a plurality of preset quality measurement items using the detected feature points and calculates a plurality of quality measurement values for each face image; a score calculation unit that adaptively selects at least two of the plurality of quality measurement values, inputs the selected quality measurement values into fuzzy logic, and calculates a quality score for each face image from the resulting output value; a selection unit that selects a preset number of continuous face images in descending order of quality score; and a face recognition unit that performs face recognition using the selected continuous face images.
The score calculation unit calculates, for each quality measurement item, the variance of that item's quality measurement values over the plurality of continuous face images, compares the calculated variances to select a predetermined number of quality measurement items with the highest variance, and uses the quality measurement values corresponding to the selected items as the inputs of the fuzzy logic, thereby adaptively selecting the quality measurement values.
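As a rough illustration of this adaptive selection step, the per-item variance can be computed over the image sequence and the highest-variance items kept. This is a minimal sketch, not the patent's implementation; the function name, the use of NumPy, and the example data are all illustrative:

```python
import numpy as np

def select_metrics(qm, k=4):
    """Return the (sorted) column indices of the k quality-measurement
    items whose values vary most across the continuous face images."""
    variances = qm.var(axis=0)            # per-item variance over the sequence
    top_k = np.argsort(variances)[::-1][:k]
    return np.sort(top_k)

# 4 images x 6 quality-measurement items, already normalized to [0, 1]
qm = np.array([
    [0.5, 0.0, 0.40, 0.9, 0.50, 0.2],
    [0.5, 1.0, 0.60, 0.1, 0.55, 0.8],
    [0.5, 0.0, 0.40, 0.9, 0.50, 0.2],
    [0.5, 1.0, 0.60, 0.1, 0.45, 0.8],
])
selected = select_metrics(qm, k=4)        # items 1, 3, 5 and 2 vary most
```

The values of the selected columns would then be fed into the fuzzy logic as described.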
The quality measurement unit measures the degree of difference between the registered head pose and the predicted head pose, the degree of illumination change of the face image, the sharpness of the face image, the degree of openness of the detected eyes, the contrast of the face image, and the image resolution to calculate the plurality of quality measurement values.
The quality measure is normalized from 0 to 1 for application to the fuzzy logic.
The quality measuring unit measures the rotation angle of the face recognition object based on the position of the feature point, compares the measured rotation angle with the registered rotation angle, and calculates the difference value as the quality measurement value.
The quality measuring unit measures the degree of left-right symmetry with respect to the line dividing the face region detected from the face image into left and right halves, and calculates the degree of illumination change of the face image as a quality measurement value.
The quality measuring unit obtains the difference between the pixel values of the face image and the result of applying a low-pass filter to those pixel values, thereby calculating a sharpness reflecting the mid-frequency and high-frequency components as the quality measurement value.
The quality measuring unit calculates an open value of the eye, which is calculated using the standard deviation of the number of black pixels projected on the horizontal axis, as a quality measurement value.
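The eye-openness measure just described can be sketched by counting black pixels per column of a binarized eye image and taking the standard deviation of those counts. The binarization convention (black = 0) and the toy images below are illustrative assumptions, not from the patent:

```python
import numpy as np

def eye_openness(eye_bin):
    """F4 sketch: standard deviation of the per-column black-pixel
    counts of a binarized eye image (black = 0). An open eye
    concentrates dark pupil pixels in a few columns (high std); a
    closed eye leaves a thin, even lash line (low std)."""
    black_per_column = (eye_bin == 0).sum(axis=0)
    return float(black_per_column.std())

open_eye = np.full((6, 8), 255)
open_eye[:, 3:5] = 0            # dark pupil block in the middle columns
closed_eye = np.full((6, 8), 255)
closed_eye[3, :] = 0            # thin, even closed-eye line
```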
The quality measuring unit calculates a contrast value by dividing the difference between the pixel values at the 25% and 75% positions of the cumulative histogram of the face image by the range of pixel brightness values.
The quality measuring unit calculates a distance between the detected two eyes as a resolution value.
The face recognition unit calculates the weight of each selected continuous face image as the ratio of that image's quality score to the sum of the quality scores of the selected continuous face images, and performs face recognition using the calculated weights.
According to another aspect of the present invention, a face recognition method performed by a face recognition apparatus using a plurality of continuous face images is disclosed.
A face recognition method according to an embodiment of the present invention includes receiving a plurality of continuous face images; detecting feature points in the plurality of continuous face images; measuring the quality of each face image for each of a plurality of preset quality measurement items using the detected feature points to calculate a plurality of quality measurement values for each face image; adaptively selecting at least two of the plurality of quality measurement values; calculating a quality score for each face image from the output value obtained by inputting the selected quality measurement values into fuzzy logic; selecting a preset number of continuous face images based on the quality scores; and performing face recognition using the selected continuous face images.
Adaptively selecting at least two of the plurality of quality measurement values includes calculating, for each quality measurement item, the variance of that item's quality measurement values over the plurality of continuous face images; comparing the calculated variances to select a predetermined number of quality measurement items with the highest variance; and determining the quality measurement values corresponding to the selected items as the inputs of the fuzzy logic.
Calculating the plurality of quality measurement values for each face image includes measuring the degree of difference between the registered head pose and the predicted head pose, the degree of illumination change of the face image, the sharpness of the face image, the degree of openness of the detected eyes, the contrast of the face image, and the image resolution.
Performing face recognition may further include calculating the weight of each selected continuous face image as the ratio of that image's quality score to the sum of the quality scores of the selected continuous face images, and performing face recognition using the calculated weights.
The present invention improves face recognition performance by performing face recognition by selecting a high-quality face image from a continuous face image.
The present invention can improve the face recognition performance by calculating an image quality score by adaptively setting a quality measurement item for each image included in the continuous face image.
BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a diagram illustrating an environment to which a face recognition apparatus according to an embodiment of the present invention is applied.
2 is a view for explaining a concept of a face recognition apparatus according to an embodiment of the present invention.
3 is a view schematically illustrating a configuration of a face recognition apparatus according to an embodiment of the present invention.
FIG. 4 is a view showing an example of binarized images of an open eye and a closed eye and their corresponding histograms according to an embodiment of the present invention.
5 is a diagram illustrating a procedure for finally calculating the quality score of an input image using a fuzzy system according to an embodiment of the present invention.
6 illustrates an example of a symmetric input fuzzy membership function according to an embodiment of the present invention.
Figure 7 illustrates an example of a symmetric output fuzzy membership function according to an embodiment of the present invention;
8 illustrates an example of obtaining an output value using an input membership function according to an embodiment of the present invention.
9 is a diagram illustrating an example of defuzzification based on the membership function for the output value and IV according to an embodiment of the present invention.
FIG. 10 is a view showing an example of a face image selected from a continuous face image and a face log according to an embodiment of the present invention.
11 is a diagram illustrating a concept of MLBP feature extraction according to an embodiment of the present invention.
12 is a flowchart illustrating a face recognition method according to an embodiment of the present invention.
13 is a view showing an example of a face image obtained in the initial registration step according to the embodiment of the present invention.
14 illustrates an example of face and facial features detected and tracked in a recognition step according to an embodiment of the present invention.
15 to 22 are views for explaining experimental results and analysis results of a face recognition method according to an embodiment of the present invention.
While the present invention has been described in connection with certain exemplary embodiments, it should be understood that the invention is not limited to the disclosed embodiments, but on the contrary covers all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, the present invention will be described in detail with reference to the accompanying drawings. The numerals (e.g., first, second, etc.) used in the description of the present invention are merely identifiers for distinguishing one component from another.
Also, in this specification, when an element is referred to as being "connected" or "coupled" to another element, it should be understood that the element may be directly connected or coupled to the other element or, unless stated otherwise, connected or coupled via an intervening element.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. To facilitate a thorough understanding of the present invention, the same reference numerals are used for the same means regardless of the drawing number.
1 is a diagram illustrating an environment to which a face recognition apparatus according to an embodiment of the present invention is applied.
Referring to FIG. 1, the
2 is a view for explaining a concept of a face recognition apparatus according to an embodiment of the present invention.
As shown in FIG. 2, the
Here, in order to improve the face recognition performance when the face recognition is performed by the
Accordingly, the
3 is a schematic view illustrating a configuration of a face recognition apparatus according to an embodiment of the present invention.
3, the
The
The
The
That is, the
The
[Equation 1]
Here, P (t, a) is the head pose value, t is the face image number, and a is the head pose number representing the five head pose values obtained in the registration step. x t and y t are rotation angles calculated in the X and Y axis directions with respect to the face image in the recognition step. xm a and ym a are the average rotation angles of all registered users in head pose a. Thus, the final head pose value (F 1 ) of the t-th image, which is the closest value among the five head pose values, can be calculated by the following equation.
[Equation 2]
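The bodies of Equations 1 and 2 are not reproduced in this text, so the sketch below makes an explicit assumption: P(t, a) is taken as the Euclidean distance between the predicted rotation angles (x_t, y_t) and each registered pose average (xm_a, ym_a), and F_1 as the minimum over the five registered poses. All names and numbers are illustrative:

```python
import math

def head_pose_quality(xt, yt, registered_poses):
    """F1 sketch: distance between the predicted head pose (xt, yt)
    and the closest of the five registered head poses (xm_a, ym_a).
    A Euclidean distance is assumed here, since the equation body is
    not reproduced in this text."""
    return min(math.hypot(xt - xm, yt - ym) for xm, ym in registered_poses)

# Five registered average poses (xm_a, ym_a), one per gazing point
poses = [(0, 0), (15, 0), (-15, 0), (0, 10), (0, -10)]
f1 = head_pose_quality(3.0, 4.0, poses)   # closest registered pose is (0, 0)
```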
The
&Quot; (3) "
Here, I is the average pixel value of the face region, IMG (x, y) t is the pixel value at the (x, y) position of the t-th face image, and W and H are the width and height of the face image. F 2 represents an illumination value based on the difference between the average pixel values of the left and right face regions. Based on the symmetry of the left and right face regions, F 2 becomes smaller when the entire face region is uniformly illuminated.
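Following the description around Equation 3, the illumination measure can be sketched as the absolute difference of the mean pixel values of the left and right halves of the face region. This is a NumPy-based sketch under that reading; the names and the toy data are illustrative:

```python
import numpy as np

def illumination_quality(face):
    """F2 sketch: absolute difference between the mean pixel values of
    the left and right halves of the face region. Smaller values mean
    more uniform illumination across the face."""
    h, w = face.shape
    left = face[:, : w // 2].mean()
    right = face[:, w - w // 2 :].mean()
    return abs(left - right)

face = np.full((4, 6), 100.0)
face[:, 3:] = 120.0                   # brighter right half
f2 = illumination_quality(face)       # mean 100 vs mean 120
```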
The
[Equation 4]
Here, IMG (x, y) t is the pixel value at the (x, y) position of the t-th face image, and LowPass(IMG (x, y) t ) is the result of applying a low-pass filter to IMG (x, y) t . By taking the difference between IMG (x, y) t and LowPass(IMG (x, y) t ), F 3 , which reflects the mid-frequency and high-frequency components of IMG (x, y) t , can be calculated.
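The sharpness measure can be sketched as follows. The patent does not specify the low-pass filter, so a 3x3 box filter is assumed here; the mean absolute difference between the image and its blurred copy serves as the score:

```python
import numpy as np

def sharpness_quality(img):
    """F3 sketch: mean absolute difference between the image and a
    low-pass filtered copy (a 3x3 box filter is assumed), so mid- and
    high-frequency content raises the score."""
    h, w = img.shape
    p = np.pad(img.astype(float), 1, mode="edge")
    low = sum(p[dy:dy + h, dx:dx + w]
              for dy in range(3) for dx in range(3)) / 9.0
    return float(np.abs(img - low).mean())

flat = np.full((8, 8), 50.0)                        # no detail: blur-invariant
checker = (np.indices((8, 8)).sum(0) % 2) * 100.0   # high-frequency pattern
```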
The
[Equation 5]
Here, F 4 is the openness value of the eye, calculated using the standard deviation of the number of black pixels projected onto the horizontal axis. i and n represent the position of the projected pixel and the range of the histogram, respectively. x i is the number of black pixels at the i-th position, and the barred term is the average of the number of black pixels. F 4 takes a high value for an open eye and a low value for a closed eye. The
[Equation 6]
Here, H q1 and H q3 are the pixel values at the 25% and 75% positions of the cumulative histogram of the face image, respectively, and I r is the range of pixel brightness values of the face image. In general, high-contrast images have a wider range of pixel brightness values than low-contrast images, so F 5 is larger for high-contrast images.
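Equation 6's contrast measure can be sketched directly from this description: the spread between the 25% and 75% points of the cumulative histogram, divided by the brightness range I_r. The percentile interpolation and the example data are implementation choices, not from the patent:

```python
import numpy as np

def contrast_quality(img, i_r=255.0):
    """F5 sketch: (H_q3 - H_q1) / I_r, where H_q1 and H_q3 are the
    pixel values at the 25% and 75% points of the cumulative histogram
    and I_r is the range of pixel brightness values."""
    h_q1, h_q3 = np.percentile(img, [25, 75])
    return float((h_q3 - h_q1) / i_r)

high = np.array([0.0, 0.0, 255.0, 255.0] * 4)      # bimodal: high contrast
low = np.array([120.0, 125.0, 130.0, 135.0] * 4)   # narrow range: low contrast
```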
The
The
5 is a diagram illustrating a procedure for finally calculating the quality score of an input image using a fuzzy system according to an embodiment of the present invention.
As shown in FIG. 5, four quality measurement values may be selected and input to the fuzzy system to obtain the quality score of the face image. To use the fuzzy system, the membership functions must be determined.
6 is a diagram illustrating an example of a symmetric input fuzzy membership function according to an embodiment of the present invention.
As shown in FIG. 6, the membership function can be designed according to the input values QM 1 to QM 4 . Hereinafter, four selected quality measurement values (QM 1 to QM 4 ) will be referred to as
For example, if the values of all elements are low, the image quality can be considered low and the output value can be set low. If the values of two elements are low and the values of the remaining two elements are high, the output value can be set to intermediate. If the values of all elements are high, the image quality can be considered high and the output value can be set high. Since all the weights of
8 is a diagram illustrating an example of obtaining an output value using an input membership function according to an embodiment of the present invention.
As shown in FIG. 8, one input value of an element corresponds to two output values using a membership function. Since there are four input values (
FIG. 9 is a diagram illustrating an example of defuzzification based on the membership function for the output value and IV according to the embodiment of the present invention.
As shown in FIG. 9, for each IV, one of two output values (quality scores) can be obtained. If IV is 0.2 (low), the corresponding output value is S 1 . Thus, a plurality of output values (S 1 , S 2 , ..., S N ) can be obtained from the sixteen IVs, and the final output value (quality score) is determined by a defuzzification method. Five defuzzification methods can be used: first of maxima (FOM), last of maxima (LOM), middle of maxima (MOM), mean of maxima (MeOM), and center of gravity (COG). FOM selects, as the output value, the first output value (S 2 ) among the output values calculated using the maximum IV (0.8 (intermediate)). LOM selects the last such output value (S 3 ). MOM selects the middle of those output values ((S 2 + S 3 ) / 2). MeOM selects their average ((S 2 + S 3 ) / 2). The output value of COG can be determined as S 5 , the geometric center of the common region of the three regions R 1 , R 2 and R 3 , as shown in FIG. 9 (b). The geometric center can be calculated as the weighted average over all regions defined by all IVs.
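The maxima-based defuzzification methods above (FOM, LOM, MOM, MeOM) can be sketched as follows. COG is omitted, since it would need the geometry of the output membership regions; the example (IV, score) pairs are illustrative, not taken from the figure:

```python
def defuzzify(pairs, method="MOM"):
    """Pick the final quality score from (IV, output score) pairs.
    FOM/LOM/MOM/MeOM all operate on the scores attached to the
    maximum IV, as described in the text (MOM and MeOM coincide when
    there are only two maxima)."""
    max_iv = max(iv for iv, _ in pairs)
    maxima = sorted(score for iv, score in pairs if iv == max_iv)
    if method == "FOM":
        return maxima[0]
    if method == "LOM":
        return maxima[-1]
    if method in ("MOM", "MeOM"):
        return sum(maxima) / len(maxima)
    raise ValueError(f"unknown method: {method}")

# Illustrative (IV, output score) pairs; the maximum IV 0.8 occurs twice
pairs = [(0.2, 0.1), (0.8, 0.5), (0.8, 0.7), (0.4, 0.9)]
```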
After all the quality scores of a plurality of continuous face images are calculated, the
FIG. 10 is a view showing an example of a face image selected from a continuous face image and a face log according to an embodiment of the present invention. The quality score shown in FIG. 10 (b) represents the quality score of the i-th image in FIG.
The
11 is a diagram illustrating a concept of MLBP feature extraction according to an embodiment of the present invention.
Referring to FIG. 11, the face region is divided into sub-blocks, and an LBP histogram can be obtained from each block as shown in FIG. 11 (c). The histograms of all blocks can be concatenated to form the final feature vector for face recognition, as shown in FIG. 11 (d). For example, the chi-squared distance (matching score) can be used to measure the difference between the registered face histogram feature and the face histogram feature of the input image. To handle head pose changes (horizontal and vertical rotation), the histogram feature of the input image is matched against the five registered face histogram features, and the smallest of the five distances may be taken as the matching score. The matching scores obtained for the selected face images in the face log can then be fused into the final matching score. The weight of each matching score can be calculated by the following equation.
[Equation 7]
Here, w i is the weight of the matching score of the i-th face image in the face log, m is the number of face images in the face log, and Score i is the quality score of the i-th image in the face log.
The final matching score for face recognition can be calculated using the weight of the selected face image as shown in the following equation.
[Equation 8]
Using the matching scores (MS 1 , ..., MS m ) obtained by MLBP and the weights (w 1 , ..., w m ) of the face images in the face log, the fused matching score (FMS) can be calculated by Equation (8).
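Equations 7 and 8 can be sketched as a quality-weighted fusion of the per-image MLBP matching scores. A weighted sum is assumed for Equation 8, since its body is not reproduced here; the quality scores and matching scores below are illustrative numbers:

```python
def fused_matching_score(quality_scores, matching_scores):
    """FMS sketch: w_i = Score_i / sum(Score) (Equation 7), then the
    fused matching score is the weighted sum of the per-image MLBP
    matching scores MS_i (assumed form of Equation 8)."""
    total = sum(quality_scores)
    weights = [s / total for s in quality_scores]
    return sum(w * ms for w, ms in zip(weights, matching_scores))

quality = [0.9, 0.6, 0.5]    # quality scores of the m = 3 selected images
ms = [10.0, 14.0, 20.0]      # per-image chi-squared matching scores
fms = fused_matching_score(quality, ms)   # 0.45*10 + 0.30*14 + 0.25*20
```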
12 is a flowchart illustrating a face recognition method according to an embodiment of the present invention. FIG. 12 schematically illustrates a face recognition method performed by the face recognition apparatus according to the embodiment of the present invention.
First, the initial registration step will be described.
In step S1210, the
In step S1220, the
In step S1230, the
Next, the recognition step will be described.
In step S1240, the
In step S1250, the
In step S1260, the
In step S1270, the
In step S1280, the
In step S1290, the
In step S1300, the
Five face images corresponding to different gazing points are obtained in the initial registration stage to improve the accuracy of face recognition regardless of head pose. In the initial registration phase, the user can face the TV at a Z-axis distance of 2 m.
13 is a diagram showing an example of face images obtained in the initial registration step. The face and eye regions can be detected in the acquired face image using the Adaboost algorithm. In addition, the nostril region can be detected using a sub-block-based template matching method. Facial feature information (face, eyes, and nostrils), from which the head pose of the image is estimated at the recognition step, is stored, and face histogram feature information according to head pose, which identifies each user, is registered.
Also, in the recognition step, the continuous face image information can be used for face detection and tracking. The face region detected by the Adaboost algorithm can be tracked by the CamShift algorithm. Although the Adaboost method has a high detection rate, it is computationally time-consuming. In contrast, face tracking using the CamShift algorithm has the advantages of high processing speed and robustness to head pose changes.
Eye detection and tracking can be performed using the Adaboost algorithm and the ATM method, respectively. The nostril region can then be detected using a nostril detection mask based on sub-block based template matching and tracked by the ATM method. A facial image for quality evaluation can be obtained using the facial feature information. 14 is a diagram showing an example of face and face features detected and tracked in the recognition step.
15 to 22 are diagrams for explaining experimental results and analysis results of the face recognition method according to the embodiment of the present invention.
Although many face databases exist, most do not include all of the elements considered in the experiment (head pose, illumination, sharpness, eye openness, contrast, and image resolution) in continuous face images, so a self-built database (Database I) was used. The self-built database contains all of these elements in continuous face images for the experiment.
When creating the self-built database, 20 groups were defined based on the 20 people participating in the experiment. Three subjects in each group performed three recordings with varying Z-axis distance (2 m and 2.5 m) and sitting position (left, middle, and right). Participants blinked and looked naturally at random points on the TV screen, during which continuous images were acquired. The self-built database contains a total of 31,234 images for measuring the performance of the face recognition system. In addition, at the initial registration stage, five images per person in each group were acquired at a Z-axis distance of 2 m. 15 shows an example of images obtained for the experiment.
As shown in FIG. 16, the self-built database includes variation in head pose, illumination, sharpness, eye openness, contrast, and image resolution. The most influential quality measurement item for creating a face log is the head pose. Since there is little change in the Z-axis distance, the resolution was observed to have little influence as a quality measurement item. In FIG. 16, the bars in the first column represent the number of times each quality measurement value was selected first; similarly, the bars in the fourth column represent the number of times each was selected fourth. That is, the vertical axis in FIG. 16 is the number of selected quality measurement values.
In the first experiment, the accuracy of the face recognition method was measured based on the genuine acceptance rate (GAR) with the number of registered persons set to three. Either the MIN or MAX method was selected to obtain an IV. Finally, the final quality score was obtained using one of the five defuzzification methods (FOM, LOM, MOM, MeOM, COG). Thus, as shown in Table 2 below, the accuracy of face recognition was compared between the MIN and MAX methods and across the defuzzification methods.
Here, the number of continuous face images is 10, and the accuracy of face recognition is measured while changing the number of selected images in the face log. In Table 2, no fusion means that the matching scores (MS 1 , ..., MS m ) of
The experimental results show that the methods based on MIN and COG achieve higher face recognition accuracy than the other methods. The highest accuracy (92.94%) was obtained with the fuzzy MIN rule and COG when fusing the five selected images.
In the next experiment, the accuracy of face recognition was measured according to the number of facial images with respect to the GAR, as shown in Table 3 below.
In Table 3, fusion means that the FMS of
In addition, the accuracy of the proposed method was compared with methods based on fixed quality measurement items. Using the results of FIG. 16, the four most influential quality measurement items (head pose, illumination, sharpness, eye openness) were selected for the fixed-item method. In this experiment, n and m were set to 25 and 5, respectively. The method based on fixed quality measurement items is less accurate than the proposed method because it cannot evaluate the other quality measurement items that affect the accuracy of face recognition. The accuracy of existing methods using all images and of the method based on fixed quality measurement items was compared with that of the proposed method; Table 4 below shows the results. The experimental results show that the proposed method is more accurate than the other methods.
17 and 18 show face images of correct recognition results according to an embodiment of the present invention. As shown in FIGS. 17A and 17B, even when part of a face is hidden by a hand, the face region is not detected correctly, or the eyes are closed, such low-quality images can be excluded. Based on these results, it can be seen that the proposed method can accurately select good images and correctly match them to registered face images of the same person.
19 and 20 show face images of incorrect recognition results according to an embodiment of the present invention. As shown in FIGS. 19 and 20, even when closed-eye images are present, the proposed method can exclude the low-quality images and accurately select good-quality face images. However, the selected images were incorrectly matched with registered face images of other people. This is because there is a difference in size between the face region of the registered image and that of the input image, caused by incorrect detection of the face and eye regions. This can be solved by redefining the face region based on a more accurate detection algorithm.
For the experiment, as shown in Fig. 13, images obtained while each user naturally watched TV were used. As shown in Fig. 21, when severe head rotation occurs, it is difficult to detect the face and the facial features; however, when a user normally watches TV, severe head rotation does not occur. Thus, images of this case were not used in the experiment.
When a user watches TV as usual, the basic factors determining the degree of head pose change are the size of the TV and the viewing distance between the user and the TV. The relationship between TV size and proper viewing distance is already defined: the larger the TV, the greater the viewing distance should be, and the smaller the TV, the closer the viewing distance should be.
22 (a) to 22 (c) illustrate cases where (a) a user views a 50-inch TV at a viewing distance of about 2 m, (b) a user views a 60-inch TV, and (c) a user views a 70-inch TV at a viewing distance of about 2.8 m. As shown in FIGS. 22 (a) to 22 (c), although the image resolution decreases as the viewing distance increases, when each user gazes at the same position (lower left side) on the TV, the degree of head pose change is similar in all cases. As shown in the bottom images of FIGS. 22 (a) to 22 (c), the proposed method detects the accurate regions of the face and facial features, and the proposed quality measurement of the face image operates successfully. This shows that the performance of the proposed method is not affected by using a small or large TV when the proper viewing distance is considered.
Meanwhile, the face recognition method according to an embodiment of the present invention may be implemented in the form of program instructions executable by various means for electronically processing information and recorded in a storage medium. The storage medium may include program instructions, data files, data structures, and the like, alone or in combination.
The program instructions recorded on the storage medium may be specially designed and constructed for the present invention, or may be known and available to those skilled in software. Examples of storage media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory. The medium may also be a transmission medium, such as an optical or metal wire or a waveguide, including a carrier wave transmitting a signal designating program instructions, data structures, and the like. Examples of program instructions include machine code, such as that produced by a compiler, as well as high-level language code that can be executed by a device that electronically processes information using an interpreter or the like.
The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the present invention, and vice versa.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention as defined in the appended claims.
100: face recognition device
111:
112:
113: Quality measurement section
114: score calculation unit
115:
116:
Claims (16)
A detecting unit detecting feature points in the plurality of continuous face images;
A quality measuring unit for measuring a quality of each face image by using a plurality of preset quality metrics using the detected feature points and calculating a plurality of quality measurement values of each face image;
A score calculation unit for adaptively selecting at least two of the plurality of quality measurement values, inputting the selected quality measurement value into fuzzy logic, and calculating a quality score of each face image with the calculated output value;
A selection unit for selecting a predetermined number of consecutive face images from the upper side based on the quality score; And
And a face recognition unit for performing face recognition using the selected continuous face image.
Wherein the score calculating unit calculates, for each quality measurement item, the variance of the quality measurement values over the plurality of continuous face images, compares the calculated variances to select a predetermined number of quality measurement items having a high variance, and adaptively selects the quality measurement values by using the quality measurement values corresponding to the selected quality measurement items as the input of the fuzzy logic.
Wherein the quality measurement unit calculates the plurality of quality measurement values by measuring at least two of: the degree of difference between a registered head pose and a predicted head pose, the degree of illumination change of the face image, the sharpness of the face image, the degree of opening of the detected eyes, the contrast of the face image, and the resolution of the face image.
Wherein each quality measurement value is normalized to a value between 0 and 1 for application to the fuzzy logic.
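The 0-to-1 normalization can be done with a simple min-max scaling over the measured values; a minimal sketch (the function name and scaling choice are illustrative assumptions, as the claim does not specify the normalization method):

```python
def normalize(values):
    """Min-max normalize raw quality measurements into [0, 1] so they
    can serve as fuzzy logic inputs on a common scale."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # Constant measurements carry no ranking information.
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

print(normalize([10, 20, 40]))  # → [0.0, 0.3333333333333333, 1.0]
```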
Wherein the quality measurement unit measures the rotation angle of the face recognition target based on the positions of the feature points, compares the measured rotation angle with the registered rotation angle, and calculates the difference value as a quality measurement value.
Wherein the quality measurement unit measures the degree of left-right symmetry with respect to a line dividing the face region detected from the face image into left and right halves, and calculates the degree of illumination change of the face image as a quality measurement value.
Wherein the quality measurement unit calculates, as a quality measurement value, a sharpness reflecting the intermediate- and high-frequency components, obtained as the difference between the pixel values of the face image and the result of applying a low-pass filter to those pixel values.
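A 1-D sketch of this sharpness measure, using a simple moving average as the low-pass filter; the filter choice and names are assumptions, since the claim does not specify which low-pass filter is used:

```python
def sharpness(pixels, radius=1):
    """Sharpness as the mean absolute difference between the signal and
    a low-pass (moving-average) version of it: the residual that
    survives subtraction is the mid/high-frequency content, so a
    sharper image yields a larger value. Shown on a 1-D row of pixel
    values for brevity; a real image applies the same idea in 2-D."""
    n = len(pixels)
    residual = 0.0
    for i in range(n):
        lo = max(0, i - radius)
        hi = min(n, i + radius + 1)
        smooth = sum(pixels[lo:hi]) / (hi - lo)  # box low-pass filter
        residual += abs(pixels[i] - smooth)
    return residual / n

# A hard edge keeps more energy after the low-pass result is
# subtracted than a smooth ramp does.
print(sharpness([0, 0, 0, 255, 255, 255]) > sharpness([0, 51, 102, 153, 204, 255]))  # → True
```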
Wherein the quality measurement unit calculates, as a quality measurement value, a degree of eye opening computed using the standard deviation of the number of black pixels projected onto the horizontal axis.
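A sketch of this eye-openness measure on a small binarized eye region; the bitmaps and names are illustrative assumptions:

```python
from statistics import pstdev

def eye_openness(eye_bitmap):
    """Degree of eye opening: project the black (1) pixels of a
    binarized eye region onto the horizontal axis by counting them
    per column, then take the standard deviation of those counts.
    An open eye yields tall columns near the pupil and short ones at
    the corners (high deviation); a closed eye is a thin, even line
    (near-zero deviation)."""
    cols = len(eye_bitmap[0])
    counts = [sum(row[c] for row in eye_bitmap) for c in range(cols)]
    return pstdev(counts)

open_eye = [
    [0, 0, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 1, 1, 0, 0],
]
closed_eye = [
    [0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
print(eye_openness(open_eye) > eye_openness(closed_eye))  # → True
```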
Wherein the quality measurement unit calculates the contrast as the difference between the pixel brightness values at the 25% and 75% positions of the cumulative histogram of the face image, divided by the maximum pixel brightness value.
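One plausible reading of this contrast measure, sketched in Python. The exact definition of the 25%/75% positions and the normalization term are assumptions, as the original claim wording is ambiguous:

```python
def contrast(pixels, max_value=255):
    """Contrast from the cumulative brightness histogram: the spread
    between the brightness values reached at 25% and 75% of the
    cumulative pixel count, normalized by the maximum brightness.
    Sorting the pixel values and indexing by rank is equivalent to
    walking the cumulative histogram."""
    ordered = sorted(pixels)
    n = len(ordered)
    q25 = ordered[int(0.25 * (n - 1))]
    q75 = ordered[int(0.75 * (n - 1))]
    return (q75 - q25) / max_value

# A wide brightness spread scores higher than a flat gray patch.
print(contrast([0, 40, 128, 220, 255]) > contrast([120, 125, 128, 130, 135]))  # → True
```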
Wherein the quality measurement unit calculates the distance between the two detected eyes as a resolution value.
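This resolution measure reduces to the Euclidean distance between the two detected eye centers; a minimal sketch (the coordinate convention is an assumption):

```python
from math import dist  # Python 3.8+

def face_resolution(left_eye, right_eye):
    """Proxy for face image resolution: the Euclidean distance in
    pixels between the two detected eye centers. A larger inter-eye
    distance means the face occupies more pixels in the frame."""
    return dist(left_eye, right_eye)

print(face_resolution((30, 50), (90, 50)))  # → 60.0
```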
Wherein the face recognition unit calculates a weight for each selected consecutive face image as the ratio of the quality score of that face image to the sum of the quality scores of all selected consecutive face images, and performs face recognition using the calculated weights.
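The score-proportional weighting in this claim can be sketched as follows (the function name is illustrative):

```python
def score_weights(quality_scores):
    """Weight of each selected face image: its quality score divided
    by the sum of the scores of all selected images, so that better
    images contribute more to the final recognition decision and the
    weights sum to 1."""
    total = sum(quality_scores)
    return [s / total for s in quality_scores]

weights = score_weights([0.9, 0.6, 0.5])
print([round(w, 2) for w in weights])  # → [0.45, 0.3, 0.25]
```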
Receiving a plurality of consecutive face images;
Detecting feature points in the plurality of consecutive face images;
Calculating a plurality of quality measurement values for each face image by measuring the quality of each face image for each of a plurality of preset quality measurement items using the detected feature points;
Adaptively selecting at least two of the plurality of quality measurement values;
Inputting the selected quality measurement values into fuzzy logic and calculating a quality score of each face image from the resulting output value;
Selecting a predetermined number of consecutive face images from the top, based on the quality score; and
Performing face recognition using the selected consecutive face images.
Wherein adaptively selecting at least two of the plurality of quality measurement values comprises:
Calculating, for each quality measurement item, a variance of the quality measurement values across the plurality of consecutive face images;
Comparing the calculated variances to select a preset number of quality measurement items having the highest variance; and
Determining the quality measurement values corresponding to the selected quality measurement items as the inputs of the fuzzy logic.
Wherein calculating a plurality of quality measurement values for each face image comprises:
Measuring at least two of: the degree of difference between a registered head pose and a predicted head pose, the degree of illumination change of the face image, the sharpness of the face image, the degree of opening of the detected eyes, the contrast of the face image, and the resolution of the face image, to calculate the plurality of quality measurement values.
Wherein performing face recognition comprises:
Calculating a weight for each selected consecutive face image as the ratio of the quality score of that face image to the sum of the quality scores of all selected consecutive face images, and performing face recognition using the calculated weights.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20140062385 | 2014-05-23 | ||
KR1020140062385 | 2014-05-23 |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20150135745A true KR20150135745A (en) | 2015-12-03 |
KR101756919B1 KR101756919B1 (en) | 2017-07-13 |
Family
ID=54872027
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150070651A KR101756919B1 (en) | 2014-05-23 | 2015-05-20 | Device and method for face recognition |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101756919B1 (en) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101254181B1 (en) * | 2012-12-13 | 2013-04-19 | 위아코퍼레이션 주식회사 | Face recognition method using data processing technologies based on hybrid approach and radial basis function neural networks |
- 2015-05-20 KR KR1020150070651A patent/KR101756919B1/en active IP Right Grant
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017112310A1 (en) * | 2015-12-24 | 2017-06-29 | Intel Corporation | Facial contour recognition for identification |
US10977509B2 (en) | 2017-03-27 | 2021-04-13 | Samsung Electronics Co., Ltd. | Image processing method and apparatus for object detection |
US11908117B2 (en) | 2017-03-27 | 2024-02-20 | Samsung Electronics Co., Ltd. | Image processing method and apparatus for object detection |
US11403878B2 (en) | 2018-12-31 | 2022-08-02 | Samsung Electronics Co., Ltd. | Apparatus and method with user verification |
Also Published As
Publication number | Publication date |
---|---|
KR101756919B1 (en) | 2017-07-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10049262B2 (en) | Method and system for extracting characteristic of three-dimensional face image | |
Huang et al. | Face detection and precise eyes location | |
US8213690B2 (en) | Image processing apparatus including similarity calculating unit, image pickup apparatus, and processing method for the apparatuses | |
KR102339607B1 (en) | Apparatuses and Methods for Iris Based Biometric Recognition | |
US20160335495A1 (en) | Apparatus and method for acquiring image for iris recognition using distance of facial feature | |
Boehnen et al. | A fast multi-modal approach to facial feature detection | |
CN111344703B (en) | User authentication device and method based on iris recognition | |
MX2012010602A (en) | Face recognizing apparatus, and face recognizing method. | |
Hollingsworth et al. | Iris recognition using signal-level fusion of frames from video | |
CN106056064A (en) | Face recognition method and face recognition device | |
CN111291701B (en) | Sight tracking method based on image gradient and ellipse fitting algorithm | |
CN109858375A (en) | Living body faces detection method, terminal and computer readable storage medium | |
JP2013065119A (en) | Face authentication device and face authentication method | |
Lee et al. | An automated video-based system for iris recognition | |
CN113614731A (en) | Authentication verification using soft biometrics | |
KR101756919B1 (en) | Device and method for face recognition | |
CN114270417A (en) | Face recognition system and method capable of updating registered face template | |
CN113920591A (en) | Medium-distance and long-distance identity authentication method and device based on multi-mode biological feature recognition | |
CN106156739A (en) | A kind of certificate photo ear detection analyzed based on face mask and extracting method | |
KR20070088982A (en) | Deformation-resilient iris recognition methods | |
Sathish et al. | Multi-algorithmic iris recognition | |
WO2022244357A1 (en) | Body part authentication system and authentication method | |
Lee et al. | Improvements in video-based automated system for iris recognition (vasir) | |
CN104751144A (en) | Frontal face quick evaluation method for video surveillance | |
Proença et al. | A method for the identification of inaccuracies in pupil segmentation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant |