KR20170080116A - Face Recognition System using Depth Information - Google Patents

Face Recognition System using Depth Information

Info

Publication number
KR20170080116A
Authority
KR
South Korea
Prior art keywords
unit
depth
image
facial
information
Prior art date
Application number
KR1020150191344A
Other languages
Korean (ko)
Other versions
KR101818984B1 (en)
Inventor
권순각
Original Assignee
동의대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 동의대학교 산학협력단 filed Critical 동의대학교 산학협력단
Priority to KR1020150191344A priority Critical patent/KR101818984B1/en
Publication of KR20170080116A publication Critical patent/KR20170080116A/en
Application granted granted Critical
Publication of KR101818984B1 publication Critical patent/KR101818984B1/en

Classifications

    • G06K9/00221
    • G06K9/00248
    • G06K9/00281
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a face recognition system using depth information, and more particularly to a system in which the facial depth values of persons are stored in advance in a database; when a person is photographed with a depth camera, the depth image is corrected using the depth values, facial features are extracted from the depth values of the photographed image, the depth values of the extracted features are compared with the facial data of the persons stored in the database, and the person is recognized as the corresponding individual when they match.
The face recognition system using depth information according to the present invention includes a face storage unit for storing facial feature depth information; a depth image capturing unit for capturing facial depth images; a depth image correction unit for correcting depth value errors; a depth image rotation unit for image rotation transformation and facial alignment; a depth image expansion/contraction unit for expanding or contracting the facial image according to the capturing distance; a face detection unit for extracting the face portion of the depth image; a facial feature extraction unit for extracting facial features; a facial feature comparison unit for comparing the extracted features with the data stored in the face storage unit; and an identity matching determination unit for determining the degree of matching with the stored data.

Description

{Face Recognition System using Depth Information}

The present invention relates to a face recognition system using depth information, and more particularly to a system in which the facial depth values of persons are stored in advance in a database; when a person is photographed with a depth camera, the depth image is corrected using the depth values, facial features are extracted from the depth values of the photographed image, the depth values of the extracted features are compared with the facial data of the persons stored in the database, and the person is recognized as the corresponding individual when they match.

Face recognition is a field of biometrics in which a machine automatically identifies and authenticates people using the unique feature information contained in each face. A facial image, which can be captured easily and naturally by various imaging media, is processed by separating the face from a complex background, locating features such as the eyes, nose, and mouth, aligning and normalizing the size, extracting the feature information needed for recognition, and storing a statistical template in a database; the template is then used to register, recognize, and authenticate faces. Among biometric systems, fingerprint, vein, and iris recognition are widely used and commercialized, but they have the disadvantages of requiring physical contact, making data collection difficult, and yielding data that is not intuitive. In contrast, face recognition has the advantage of being contactless, since it only requires taking an image, and the recorded data, being facial photographs, is very intuitive.

Face recognition technology commonly compares patterns against previously stored reference images using image processing techniques. Typical applications include optical inspection systems that determine the presence of defects on printed circuit board (PCB) manufacturing lines, automatic license plate recognition for intelligent traffic systems, and pattern matching in Internet-of-Things technology. When a conventional camera captures an image for such pattern recognition, the image is geometrically distorted by perspective depending on the camera position, which prevents pattern detection and recognition from proceeding smoothly. To correct this perspective distortion, methods have been proposed that compute distortion coefficients using a calibration object, use multi-directional pattern images, or exploit geometric features such as lines or vanishing points; however, correction cannot be performed when such auxiliary information is unavailable. Moreover, methods that acquire image information from color alone are weak in environments that degrade color images.

For image capture using a camera, the H.264 and MPEG-4 Part 10 coding techniques were established as video coding standards, and the HEVC standard has now reached completion. Motion estimation and compensation in video coding are essential for eliminating redundancy in the temporal direction. In H.264, HEVC, and similar codecs, block matching is used for motion estimation: a block, i.e., a bundle of spatially neighboring pixels in the current picture, is matched to the closest block in a temporally neighboring reference picture. The block-matching process must consider the evaluation metric used to find the block, the size of the search area in the reference picture, and the size of the current block. Since motion estimation accounts for more than 70% of the implementation complexity of video coding, fast motion estimation methods that reduce this complexity have been studied since the beginning of video coding. Block-matching motion estimation is accurate for lateral camera movement and for spatial movement of objects within the image, but its accuracy is low for image expansion and contraction. Expansion/contraction motion means that the current picture appears enlarged or reduced relative to the reference picture; to extract the exact ratio, every possible expansion/contraction ratio would have to be tested, and since the number of possible ratios is far too large, testing all cases is impossible. Techniques for reducing the implementation complexity have therefore been developed: performing a simple motion estimation on a few selected pixels in a first stage and refining it in a second stage, iterative least-squares estimation that trades off accuracy over all pixels against computational efficiency, use of an interpolated reference picture, and simplified motion estimation using a 3-D diamond search pattern. However, because the conventional art uses only the color information of the camera, it is difficult to estimate an accurate expansion/contraction motion vector, and there is a limit to how far the estimation complexity can be reduced.

A prior art similar to the face recognition system using depth information according to the present invention is Korean Patent Laid-Open No. 10-2009-0011133, 'Pattern recognition and security management system using RFID'. This prior art includes a pattern recognition unit for extracting a face image vector from a video signal captured by a camera; an RFID device unit that acquires a tag ID from the RFID tag of an ID card through an RFID reader, performs a primary identity check by querying the tag ID, and maps the registered image to the ID card whose primary identity has been confirmed; and a security management unit that compares the face image vector extracted by the pattern recognition unit with the image mapped by the RFID device unit to confirm the secondary identity.

Another prior art is Korean Registered Patent No. 10-1549599, 'Restricted Area Access Warning System', which identifies persons entering a restricted area and tracks their movement paths using face recognition and tag recognition. This prior art includes an identification tag in which identification information is registered and which can transmit an identification signal over a radio path; a tag reader unit attached to the restricted area entrance that receives the identification tag signal and recognizes the worker's identification information; an identity recognition unit positioned below the tag reader unit that acquires a facial image and identification information of the person wearing safety equipment; and a main server unit that receives the registered identity information from the identification tag unit and the identification information from the identity recognition unit, compares and reads them to verify identity, transmits a worker access control signal, and tracks movement paths while monitoring workers entering through the gateway.

Another similar prior art is Korean Patent Registration No. 10-1576630, 'Method of Displaying a Personalized Screen for Each User by Intelligent Pattern Recognition'. This prior art includes a user face recognition step in which the control unit recognizes the face of each user based on intelligent pattern recognition of image information input from an image sensor; a step of recognizing a user-specific distance by measuring the ratio between the recognized user facial image and a predetermined user facial image; and a step of retrieving from a database the screen magnification of the user whose face was recognized, designating it as the display magnification, adjusting the display to the user-specific size according to the user-specific distance derived from the ratio between the facial images, and displaying it on a screen display unit.

However, none of the prior art described above discloses a technology in which the facial depth values of persons are stored in advance in a database; when a person is photographed with a depth camera, the depth image is corrected using the depth values, facial features are extracted from the depth values of the photographed image, the depth values of the extracted facial features are compared with the facial data of the persons stored in the database, and the person is recognized as the corresponding individual when they match.

KR 10-2009-0011133 A
KR 10-1549599 B1
KR 10-1576630 B1

The present invention aims to satisfy the technical needs arising from the background described above. In particular, it is an object of the present invention to provide a face recognition system including a depth image rotation unit based on perspective-distortion correction using depth information, in order to solve the prior art problem of correcting the perspective distortion of an image. It is a further object to provide a face recognition system including a depth image expansion/contraction unit based on expansion/contraction estimation using depth information, in order to solve the prior art difficulty of estimating an expansion/contraction motion vector from a color camera image.

The technical objects to be achieved by the present invention are not limited to those mentioned above, and other technical objects not mentioned will be clearly understood by those skilled in the art from the following description.

To achieve these and other advantages and in accordance with the purpose of the present invention, as embodied and broadly described herein, there is provided a face recognition system using depth information, including a face storage unit for storing facial feature depth information; a depth image capturing unit for capturing facial depth images; a depth image correction unit for correcting depth value errors; a depth image rotation unit for image rotation transformation and facial alignment; a depth image expansion/contraction unit for expanding or contracting the facial image according to the capturing distance; a face detection unit for extracting the face portion of the depth image; a facial feature extraction unit for extracting facial features; a facial feature comparison unit for comparing the extracted features with the data stored in the face storage unit; and an identity matching determination unit for determining the degree of matching with the stored data.

As described above, according to the present invention, depth information is acquired by photographing a planar region with a depth camera, and a depth image rotation unit with improved processing speed and accuracy corrects the perspective distortion that arises in the corresponding image using the depth information. Even when a person is photographed from the side rather than from the front, the system can correct the person's perspective by itself, improving the performance of the face recognition system. In addition, since a depth image expansion/contraction unit with improved estimation accuracy computes the expansion/contraction ratio from the depth information to reduce the motion estimation difference signal, errors caused by the capturing distance can be corrected by the system's own scaling adjustment.

The technical effects of the present invention are not limited to those mentioned above, and other technical effects not mentioned will be clearly understood by those skilled in the art from the description of the claims.

FIG. 1 is a block diagram of the main parts of a face recognition system using depth information according to the present invention;
FIG. 2 illustrates an example of depth image capturing using the depth image capturing unit of the present invention;
FIG. 3 illustrates an example of correcting error pixels using the depth image correction unit of the present invention;
FIG. 4 illustrates interpolation of the eyeglass frame of a spectacle wearer using the depth image correction unit of the present invention;
FIG. 5 is a block diagram of the detailed modules of the depth image rotation unit according to the present invention;
FIG. 6 is a flowchart showing an embodiment in which facial alignment is performed in the depth image rotation unit according to the present invention;
FIG. 7 illustrates an example of a facial image expansion/contraction operation using the depth image expansion/contraction unit according to the present invention;
FIG. 8 is a block diagram of the detailed modules of the depth image expansion/contraction unit according to the present invention;
FIG. 9 is a flowchart illustrating the process of expanding or contracting a facial image in the depth image expansion/contraction unit according to the present invention;
FIG. 10 illustrates the process of extracting a face using depth values in the face detection unit according to the present invention;
FIG. 11 illustrates differences in the depth values of the main features extracted by the facial feature extraction unit according to the present invention;
FIG. 12 illustrates a facial region extracted by the facial feature extraction unit according to the present invention;
FIG. 13 illustrates the process of rotating the positions of the two eyes parallel to a horizontal line in order to correct a tilted face;
FIG. 14 shows an example of measuring the relative height of the nose by calculating the depth difference between the tip of the nose and the surrounding area;
FIG. 15 illustrates extraction of the jaw region using the face recognition system according to the present invention;
FIG. 16 illustrates facial width measurement using the face recognition system according to the present invention.

The above and other objects, features, and advantages of the present invention will become more apparent from the following detailed description taken in conjunction with the accompanying drawings; the embodiments described do not limit the invention. In the following description, the same components are denoted by the same reference numerals and symbols, and redundant description thereof is omitted.

Prior to the detailed description of each step of the invention, the terms and words used in this specification and the claims should not be construed as limited to their ordinary or dictionary meanings; on the principle that an inventor may properly define the concepts of terms in order to describe his own invention in the best way, they should be interpreted with meanings and concepts consistent with the technical idea of the present invention. Therefore, the embodiments described in this specification and the configurations shown in the drawings are only the most preferred embodiments of the present invention and do not represent all of its technical ideas, and it should be understood that various equivalents and modifications are possible.

Referring to FIG. 1, the apparatus configuration of the face recognition system using depth information according to the present invention includes a face storage unit 100 for storing facial feature depth information; a depth image capturing unit 200 for capturing facial depth images; a depth image correction unit 300 for correcting depth value errors; a depth image rotation unit 400 for image rotation transformation and facial alignment; a depth image expansion/contraction unit 500 for expanding or contracting the facial image according to the capturing distance; a face detection unit 600 for extracting the face portion of the depth image; a facial feature extraction unit 700 for extracting facial features from the depth image; a facial feature comparison unit 800 for comparing the extracted features with the data stored in the face storage unit 100; and an identity matching determination unit 900 for determining the degree of matching with the stored data.

The facial features of the persons to be identified are stored in the face storage unit 100 in the form of depth values. In addition to the depth information of the face, the physical features stored here preferably include the positions and shapes of the eyes, nose, and mouth, the width of the face, the height of the nose, and the outline of the jaw.

The depth image capturing unit 200 is installed at a position where it can capture a portrait, and the person's face is photographed as shown in FIG. 2. A feature of the depth image obtained this way is that the nose is located nearest to the depth image capturing unit 200 and therefore has the smallest depth value. Also, the depth values of the face differ greatly from those of other regions, so the face portion can be extracted using the face detection unit 600. Pixels containing errors may arise when the depth image is captured by the depth image capturing unit 200; these error pixels are corrected by interpolation using the depth image correction unit 300, as shown in FIG. 3.
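As an illustrative sketch of this error-pixel correction (the function name, the zero-as-invalid convention, and the median choice are assumptions for illustration, not taken from the patent), the interpolation can be written as:

```python
import numpy as np

def interpolate_error_pixels(depth, invalid=0):
    """Replace error pixels (value == invalid) with the median of
    their valid 8-neighbors, in the spirit of the correction of FIG. 3."""
    corrected = depth.astype(np.float32).copy()
    h, w = depth.shape
    ys, xs = np.where(depth == invalid)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - 1), min(h, y + 2)
        x0, x1 = max(0, x - 1), min(w, x + 2)
        window = depth[y0:y1, x0:x1]
        valid = window[window != invalid]
        if valid.size:                 # leave the pixel unchanged if no valid neighbor
            corrected[y, x] = np.median(valid)
    return corrected
```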

A person wearing glasses may also be a subject of face recognition. When a spectacle wearer is captured in a depth image, the eyeglass lenses are glass and do not affect the depth measurement, but the eyeglass frame is measured as having its own depth value, which may cause errors in the face recognition process. The eyeglass frame can be distinguished from the face region by exploiting the fact that its depth is smaller than the average depth of the face. By interpolating the frame region with the surrounding depth values, as shown in FIG. 4, an image with the eyeglass region removed from the depth image can be obtained.
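A hedged sketch of this eyeglass-frame handling: pixels whose depth is noticeably smaller than the average face depth are treated as frame pixels and filled by the interpolation above. The margin value is an assumption for illustration, not a value from the patent.

```python
import numpy as np

def remove_glasses_frame(face_depth, margin=15.0):
    """Mask pixels lying in front of the face surface (depth smaller than
    the face average by more than `margin` mm) and fill them with the
    error-pixel interpolation sketched above, as in FIG. 4."""
    mean_depth = face_depth[face_depth > 0].mean()
    frame_mask = (face_depth > 0) & (face_depth < mean_depth - margin)
    cleaned = face_depth.copy()
    cleaned[frame_mask] = 0                   # mark frame pixels as errors
    return interpolate_error_pixels(cleaned)  # defined in the sketch above
```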

Since the depth image capturing unit 200 may not always photograph the person from the front, the depth image rotation unit 400 can correct the perspective distortion of the face.

Referring to FIG. 5, the depth image rotation unit 400 includes a depth information calculation unit 410 for calculating depth information from the image of a plane photographed by the depth image capturing unit 200; a coordinate transformation unit 420 for calculating the position of each pixel in the coordinate system of the depth image capturing unit 200 using the depth information calculated by the depth information calculation unit 410; a local normal vector calculation unit 430 for calculating the local normal vector of each pixel using the neighboring positions calculated by the coordinate transformation unit 420; a plane normal vector calculation unit 440 for obtaining the normal vector of the entire plane from the local normal vectors obtained by the local normal vector calculation unit 430; a rotation matrix calculation unit 450 for calculating the rotation axis and the rotation angle and computing the rotation matrix; and a perspective distortion correction unit 460 for correcting the distortion of the image caused by the position of the depth image capturing unit 200 by applying the rotation transformation.

The coordinate transformation unit 420 photographs a plane through the depth image capturing unit 200, obtains the position P(x, y) of each pixel in the depth image together with its depth value D(x, y), and converts them, using the internal factors of the depth image capturing unit 200, into a position in the coordinate system whose origin is the focal point of the depth image capturing unit 200 and whose z axis is the front optical-axis direction.

Here, the transformation into the coordinate system of the depth image capturing unit 200 first converts the position information P(x, y), defined in the depth image coordinate system whose origin is the upper-left corner of the image, into the position P_v(x_v, y_v) in the image coordinate system whose origin is the image center, where z_c denotes the distance along the z axis in the coordinate system of the depth image capturing unit 200.

To calculate the position of each pixel in the coordinate system of the depth image capturing unit 200, the coordinate transformation unit 420 uses the viewing angle and resolution, which are internal parameters of the depth image capturing unit 200: the distance f to the viewport is obtained either from the vertical viewing angle FOV_v of the depth image capturing unit 200 and its vertical resolution h, or from the horizontal viewing angle FOV_h and its horizontal resolution w, and the position P_c(x, y) = (x_c, y_c, z_c) in the coordinate system of the depth image capturing unit 200 is then obtained.
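This conversion is the standard pinhole back-projection; a minimal sketch under that assumption (the formula f = (h/2)/tan(FOV_v/2) and the function name are illustrative, consistent with the reverse projection x'_v = x'_c·f / z'_c given later):

```python
import numpy as np

def to_camera_coords(depth, fov_v_deg):
    """Convert a depth image D(x, y) into positions P_c = (x_c, y_c, z_c)
    in the camera coordinate system, with the optical axis as the z axis."""
    h, w = depth.shape
    f = (h / 2.0) / np.tan(np.radians(fov_v_deg) / 2.0)  # distance to viewport
    # image coordinates with the origin moved to the image center
    xv, yv = np.meshgrid(np.arange(w) - w / 2.0, np.arange(h) - h / 2.0)
    z_c = depth.astype(np.float32)
    x_c = xv * z_c / f
    y_c = yv * z_c / f
    return np.stack([x_c, y_c, z_c], axis=-1)  # shape (h, w, 3)
```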

If the position in the coordinate system of the depth image capturing unit 200 corresponding to the position P(x, y) in the image coordinate system is P_c(x, y), the local normal vector calculation unit 430 takes the positions P_c(x, y+1) and P_c(x, y-1) of the vertically adjacent points and the positions P_c(x+1, y) and P_c(x-1, y) of the horizontally adjacent points, forms the two vectors v1 = P_c(x+1, y) - P_c(x-1, y) and v2 = P_c(x, y+1) - P_c(x, y-1), and obtains the local normal vector N_xy at pixel P(x, y) as the cross product of the two vectors, N_xy = v1 × v2.

The plane normal vector calculation unit 440 obtains the normal vector N of the plane region by summing the local normal vectors of the pixels obtained by the local normal vector calculation unit 430. The image captured by the depth image capturing unit 200 is then rotated so that the normal vector N of the plane becomes parallel to the z axis, making the plane image parallel to the xy plane and removing the perspective distortion; the unit vector of the plane normal vector after the rotation transformation is N' = (0, 0, 1).
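A sketch of the two normal-vector steps, applying the central differences above over the camera-space positions; boundary pixels are skipped for brevity (an illustrative simplification, not specified by the patent):

```python
import numpy as np

def plane_normal(points):
    """points: (h, w, 3) camera-space positions P_c(x, y).
    Local normal at each interior pixel: N_xy = v1 x v2 with
    v1 = P_c(x+1, y) - P_c(x-1, y), v2 = P_c(x, y+1) - P_c(x, y-1).
    The plane normal N is the sum of the local normals."""
    v1 = points[1:-1, 2:] - points[1:-1, :-2]   # horizontal neighbors
    v2 = points[2:, 1:-1] - points[:-2, 1:-1]   # vertical neighbors
    local_normals = np.cross(v1, v2)
    n = local_normals.reshape(-1, 3).sum(axis=0)
    return n / np.linalg.norm(n)                # unit plane normal N
```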

If the unit vector of the plane normal vector after the rotation transformation is N', the unit vector serving as the rotation axis is the normalized cross product of the normal vectors before and after the transformation, u = (N × N') / (|N × N'|) = (u_x, u_y, u_z), and the rotation angle is θ = cos⁻¹((N · N') / (|N| |N'|)). The rotation matrix calculation unit 450 computes the rotation matrix

R = cosθ·I + sinθ·[u]× + (1 - cosθ)·(u ⊗ u),

where the outer-product and cross-product matrices are

u ⊗ u = [ u_x², u_x·u_y, u_x·u_z ; u_x·u_y, u_y², u_y·u_z ; u_x·u_z, u_y·u_z, u_z² ],

[u]× = [ 0, -u_z, u_y ; u_z, 0, -u_x ; -u_y, u_x, 0 ],

and the position P_c(i, j) of each pixel in the coordinate system of the depth image capturing unit 200 is rotated through

(x'_c, y'_c, z'_c)ᵀ = R · (x_c, y_c, z_c)ᵀ

to obtain the transformed position P'_c(i, j) = (x'_c, y'_c, z'_c).
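The matrix R above is the Rodrigues rotation formula; a direct numpy transcription (assuming N and N' are unit-length plane normals as defined in the text):

```python
import numpy as np

def rotation_between(n, n_prime):
    """Rotation matrix R = cos(t) I + sin(t) [u]x + (1 - cos(t)) u(x)u
    that rotates plane normal n onto n_prime (both unit vectors)."""
    axis = np.cross(n, n_prime)
    u = axis / np.linalg.norm(axis)                  # rotation axis u
    cos_t = np.clip(np.dot(n, n_prime), -1.0, 1.0)   # cos(theta)
    sin_t = np.sqrt(1.0 - cos_t ** 2)                # theta in [0, pi]
    u_cross = np.array([[0, -u[2], u[1]],
                        [u[2], 0, -u[0]],
                        [-u[1], u[0], 0]])           # [u]x
    return cos_t * np.eye(3) + sin_t * u_cross + (1 - cos_t) * np.outer(u, u)
```

Each camera-space point is then transformed as p' = R @ p, giving P'_c(i, j).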

The perspective distortion correction unit 460 projects the transformed positions P'_c(i, j) = (x'_c, y'_c, z'_c) back onto the viewport at distance f. Since these are coordinates in the coordinate system of the depth image capturing unit 200, they are converted back to image coordinates by x'_v = (x'_c · f) / z'_c and y'_v = (y'_c · f) / z'_c, yielding the position P'_v(x'_v, y'_v) in the image coordinate system whose origin is at the screen center; the origin is then moved back to the upper-left corner, and each source pixel P(x, y) is mapped to the corrected pixel P'(x', y').
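And the reprojection step, again as a minimal sketch following the equations above (function and parameter names are assumptions):

```python
import numpy as np

def reproject(points_c, f, w, h):
    """Project rotated camera-space points P'_c back to image coordinates:
    x'_v = x'_c * f / z'_c, y'_v = y'_c * f / z'_c, then move the origin
    back from the image center to the upper-left corner."""
    x_c, y_c, z_c = points_c[..., 0], points_c[..., 1], points_c[..., 2]
    x_img = x_c * f / z_c + w / 2.0
    y_img = y_c * f / z_c + h / 2.0
    return x_img, y_img   # each source pixel P(x, y) maps to P'(x', y')
```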

Referring to FIG. 6, an embodiment of facial alignment in the depth image rotation unit 400 includes calculating depth information from the image of a plane photographed by the depth image capturing unit 200 (S401); calculating the position of each pixel in the coordinate system of the depth image capturing unit 200 using the calculated depth information (S402); calculating the local normal vector of each pixel using the calculated neighboring positions (S403); obtaining the normal vector of the entire plane from the local normal vectors (S404); calculating the rotation axis and the rotation angle and computing the rotation matrix (S405); and correcting the distortion of the image caused by the position of the depth image capturing unit 200 by applying the rotation transformation (S406).

The depth image and the color image of the person are captured simultaneously through the depth image capturing unit 200. Depending on the capturing distance, the size of the face stored in the face storage unit 100 and the size of the face in the photographed image may differ. In that case, the two face depth values are compared, and the depth image expansion/contraction unit 500 expands or contracts the captured image so that the face size matches the one in the face storage unit 100. For example, referring to FIG. 7, if the left side is the facial image stored in the face storage unit 100 and the right side is the facial image actually photographed, the average depth value of each is used to perform the expansion or contraction.

Referring to FIG. 8, the depth image expansion/contraction unit 500 includes an information acquisition unit 510 that acquires color information from a color camera and depth information from the depth image capturing unit 200; an average calculation unit 520 that calculates, from the color and depth information obtained by the information acquisition unit 510, the averages of the color information and depth information for the block in the current picture and for the search block in the reference picture; an expansion/contraction ratio calculation unit 530 that calculates the expansion/contraction ratio from the average depth values of the current block and the reference block; and a difference signal calculation unit 540 that enlarges or reduces the reference picture according to the ratio calculated by the expansion/contraction ratio calculation unit 530 and calculates the motion estimation difference signal between the current block and the reference block.

The information acquisition unit 510 obtains the RGB or YUV color information of the moving image from the color camera and the distance between the depth image capturing unit 200 and the object from the depth image capturing unit 200, acquiring the color information in units of pixels and the depth information in units of millimeters.

The expansion/contraction ratio calculation unit 530 calculates the expansion/contraction ratio S as

S = (α·P̄_r + β) / (α·P̄_c + β) + γ,

where P_c is the depth information of the pixels in the current block of the current picture, P_r is the depth information of the pixels in the reference block of the reference picture, P̄_c denotes the average depth of the block calculated from the depth of each pixel in the current block, P̄_r denotes the average depth of the reference block, and α, β, and γ are constants. Assuming γ = 0, S = (α·P̄_r + β) / (α·P̄_c + β) is obtained. When P̄_r > P̄_c, S > 1 and the reference block is enlarged; when P̄_r < P̄_c, S < 1 and the reference block is reduced.
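A sketch of the scaling step under the γ = 0 simplification, further assuming β = 0 for illustration so that S reduces to the ratio of average block depths (the patent's exact constants are not assumed here):

```python
import numpy as np
from scipy.ndimage import zoom

def stretch_reference_block(cur_depth, ref_depth, ref_block):
    """Expansion/contraction ratio S from the average block depths:
    S > 1 enlarges the reference block, S < 1 reduces it."""
    s = ref_depth[ref_depth > 0].mean() / cur_depth[cur_depth > 0].mean()
    resized = zoom(ref_block, s, order=1)   # bilinear resize by factor S
    return resized, s
```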

Referring to FIG. 9, the execution flow of expanding or contracting the facial image in the depth image expansion/contraction unit 500 includes a color and depth information acquisition step (S501) of acquiring color information from the color camera and depth information from the depth image capturing unit 200; an average calculation step (S502) of calculating, from the color and depth information obtained in step S501, the averages of the color information and depth information for the block in the current picture and for the search block in the reference picture; an expansion/contraction ratio calculation step (S503) of calculating the expansion/contraction ratio from the average depth values of the current block and the reference block calculated in step S502; and a motion estimation difference signal calculation step (S504) of enlarging or reducing the reference picture according to the ratio calculated in step S503 and calculating the motion estimation difference signal between the current block and the reference block.

The process of extracting the face using depth values in the face detection unit 600 is as follows. The person is photographed by the depth image capturing unit 200, and regions are separated by labeling according to depth value. Since the face is located close to the depth image capturing unit 200, the average depth value of the face region is the smallest, which can be used to separate the face from the other parts of the body. Referring to FIG. 10, where lower depth values appear brighter, the depth of the facial region is lower than that of the other regions, so the facial region can be separated from them.

After the face detection unit 600 has detected the face, the facial feature extraction unit 700 extracts the features to be compared with those stored in the face storage unit 100. The extracted features are preferably the facial contour, the depths and positions of the eyes, nose, and mouth, the shape of the jaw, the height of the cheekbones, the height of the brow bone, the height of the nose, and the facial width. First the contour of the face is extracted, and then the eyes, nose, and mouth. Within the face, the nose region has the lowest depth value and the eye regions relatively large ones, which allows them to be detected. The depth value of the mouth is larger than that of the nose, but since the mouth protrudes relative to the surrounding facial parts, its depth value is relatively small, so the eyes, nose, and mouth can all be extracted using these properties (FIG. 11; FIG. 12). From the detected eye, nose, and mouth contours, their relative positions are computed. If the face is tilted, the eye, nose, and mouth positions fluctuate, so the depth image is rotated until the line through the two eyes is parallel to the horizontal; the positions of the two eyes, the nose, and the mouth are then measured with respect to the midpoint between the two eyes (FIG. 13). The height of the nose is extracted by measuring, through the depth image capturing unit 200, the depth difference between the tip of the nose and its surroundings (FIG. 14). The height of the cheekbones below the eyes and the height of the brow bone above the eyes are measured in the same manner and used as facial features. Next, the shape of the jaw region is extracted (FIG. 15): the jaw is taken as the area from below the lips to the bottom of the face, and the shape of its outline is extracted. Finally, the facial width is measured; the actual distance can be obtained from the depth value, the relative position in the depth image, and the internal factors of the depth image capturing unit 200 (FIG. 16).
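A sketch of the detection step: connected-component labeling on the depth image and selection of the region with the smallest average depth. scipy's labeling and the depth cutoff are stand-ins chosen for illustration, not the patent's own labeling method.

```python
import numpy as np
from scipy import ndimage

def detect_face_region(depth, max_depth=2000):
    """Label connected regions of valid depth and return the mask of the
    region with the smallest average depth, which the text identifies
    as the face (it lies closest to the camera)."""
    valid = (depth > 0) & (depth < max_depth)
    labels, n = ndimage.label(valid)
    if n == 0:
        return np.zeros_like(valid)
    means = ndimage.mean(depth, labels=labels, index=np.arange(1, n + 1))
    face_label = int(np.argmin(means)) + 1
    return labels == face_label
```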

When the facial features that can identify a person have been extracted, the facial feature comparison unit 800 compares them with the feature data of each person stored in the face storage unit 100. If the comparison result falls below a certain degree of similarity, the identity matching determination unit 900 determines that the person is not the specific person. Conversely, when the facial feature comparison unit 800 confirms that all the features match, the identity matching determination unit 900 determines that the person is the specific person.
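The final decision reduces to a thresholded similarity between feature vectors; a minimal sketch in which the cosine-similarity measure, the threshold value, and the database layout are all illustrative assumptions rather than the patent's comparison rule:

```python
import numpy as np

def match_identity(features, database, threshold=0.9):
    """Compare an extracted feature vector against stored ones and return
    the best-matching person id, or None when no similarity clears the
    threshold (the 'not the specific person' decision)."""
    best_id, best_sim = None, threshold
    for person_id, stored in database.items():
        # cosine similarity between feature vectors (illustrative choice)
        sim = np.dot(features, stored) / (
            np.linalg.norm(features) * np.linalg.norm(stored))
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id
```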

While the present invention has been described with reference to what are presently considered to be practical exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications and equivalent arrangements. Accordingly, the true scope of the present invention should be determined only by the appended claims.

Claims (9)

A face recognition system using depth information, comprising:
a face storage unit (100) for storing facial feature depth information;
a depth image capturing unit (200) for capturing facial depth images;
a depth image correction unit (300) for correcting depth value errors;
a depth image rotation unit (400) for image rotation transformation and facial alignment;
a depth image expansion/contraction unit (500) for expanding or contracting the facial image according to the capturing distance;
a face detection unit (600) for extracting the face portion of the depth image;
a facial feature extraction unit (700) for extracting facial features from the depth image;
a facial feature comparison unit (800) for comparing the extracted features with the data stored in the face storage unit (100); and
an identity matching determination unit (900) for determining the degree of identity matching.
The system according to claim 1,
wherein the facial features of the persons to be identified are stored in the face storage unit (100) in the form of depth values, the stored physical features including, in addition to the depth information of the face, the positions and shapes of the eyes, nose, and mouth, the width of the face, the height of the nose, and depth information on the outline of the jaw;
wherein, in the depth image of the person obtained by the depth image capturing unit (200), the nose is located nearest to the depth image capturing unit (200) and has the smallest depth value, and the depth values of the face differ greatly from those of other regions, so that the face portion can be extracted from the depth image data alone by the face detection unit (600); wherein pixels containing errors in the captured image are interpolated by the depth image correction unit (300); and wherein, when the face is not photographed from the front, the depth image rotation unit (400) performs perspective distortion correction to align the face.
The system according to claim 1,
wherein the depth image rotation unit (400) comprises a depth information calculation unit (410) for calculating depth information from the image of a plane photographed by the depth image capturing unit (200); a coordinate transformation unit (420) for calculating positions in the coordinate system of the depth image capturing unit (200) using the depth information calculated by the depth information calculation unit (410); a local normal vector calculation unit (430) for calculating the local normal vector of each pixel using the neighboring positions calculated by the coordinate transformation unit (420); a plane normal vector calculation unit (440) for obtaining the normal vector of the entire plane from the local normal vectors obtained by the local normal vector calculation unit (430); a rotation matrix calculation unit (450) for calculating the rotation axis and the rotation angle between the depth images and computing the rotation matrix; and a perspective distortion correction unit (460) for correcting the distortion of the image caused by the position of the depth image capturing unit (200) by applying the rotation transformation; and
wherein the depth image expansion/contraction unit (500) comprises an information acquisition unit (510) for simultaneously acquiring color information and depth information; an average calculation unit (520) for calculating, from the color and depth information obtained by the information acquisition unit (510), the averages of the color information and depth information for the current block in the current picture and for the reference block in the reference picture; an expansion/contraction ratio calculation unit (530) for calculating the expansion/contraction ratio from the average depth values of the current block and the reference block; and a difference signal calculation unit (540) for enlarging or reducing the reference picture according to the ratio calculated by the expansion/contraction ratio calculation unit (530) and calculating the motion estimation difference signal between the current block and the reference block.
The system of claim 3,
wherein the coordinate transformation unit (420) photographs a plane through the depth image capturing unit (200), obtains the depth value D(x, y) at each pixel position P(x, y) in the depth image, and converts it into a position in the coordinate system whose origin is the focal point of the depth image capturing unit (200) and whose z axis is the front optical-axis direction; wherein, in the transformation into the coordinate system of the depth image capturing unit (200), P(x, y) is the position information in the depth image coordinate system whose origin is the upper-left corner of the depth image, z_c is the distance along the z axis in the coordinate system of the depth image capturing unit (200), and P(x, y) is first converted into the position P_v(x_v, y_v) in the image coordinate system whose origin is the image center; and wherein the coordinate transformation unit (420) obtains the distance f to the viewport either from the vertical viewing angle FOV_v of the depth image capturing unit (200) and its vertical resolution h, or from the horizontal viewing angle FOV_h and its horizontal resolution w, and thereby obtains the position P_c(x, y) = (x_c, y_c, z_c) in the coordinate system of the depth image capturing unit (200).
The system of claim 3,
wherein the local normal vector calculation unit (430) takes the positions P_c(x, y+1) and P_c(x, y-1) of the vertically adjacent points in the coordinate system of the depth image capturing unit (200) and the positions P_c(x+1, y) and P_c(x-1, y) of the horizontally adjacent points, forms the two vectors v1 = P_c(x+1, y) - P_c(x-1, y) and v2 = P_c(x, y+1) - P_c(x, y-1), and obtains the local normal vector N_xy at P(x, y) as the cross product N_xy = v1 × v2; and
wherein the plane normal vector calculation unit (440) obtains the normal vector N of the plane region by summing the local normal vectors of the pixels obtained by the local normal vector calculation unit (430), the image photographed by the depth image capturing unit (200) being rotated so that the normal vector N of the plane becomes parallel to the z axis and the plane image becomes parallel to the xy plane, thereby eliminating the perspective distortion, the unit vector of the plane normal vector after the rotation transformation being N' = (0, 0, 1).
The system of claim 3,
wherein the rotation matrix calculation unit (450) computes, from the normal vector N of the plane region and the unit vector N' of the plane normal vector after the rotation transformation, the rotation axis u = (N × N') / (|N × N'|) = (u_x, u_y, u_z), the normalized cross product of the normal vectors before and after the transformation, and the rotation angle θ = cos⁻¹((N · N') / (|N| |N'|)); wherein the rotation matrix calculation unit (450) computes

R = cosθ·I + sinθ·[u]× + (1 - cosθ)·(u ⊗ u),

with

u ⊗ u = [ u_x², u_x·u_y, u_x·u_z ; u_x·u_y, u_y², u_y·u_z ; u_x·u_z, u_y·u_z, u_z² ],

[u]× = [ 0, -u_z, u_y ; u_z, 0, -u_x ; -u_y, u_x, 0 ],

and the position P_c(i, j) of each pixel in the coordinate system of the depth image capturing unit (200) is rotated through (x'_c, y'_c, z'_c)ᵀ = R·(x_c, y_c, z_c)ᵀ to obtain the transformed position P'_c(i, j) = (x'_c, y'_c, z'_c);
wherein the perspective distortion correction unit (460) converts the transformed position P'_c(i, j) = (x'_c, y'_c, z'_c) back to image coordinates through the distance f to the viewport on which the depth image is projected, by x'_v = (x'_c·f) / z'_c and y'_v = (y'_c·f) / z'_c, obtaining the position P'_v(x'_v, y'_v) in the image coordinate system whose origin is the screen center, after which the origin is moved back and each pixel P(x, y) is mapped to the corrected pixel P'(x', y'); and
wherein the expansion/contraction ratio S calculated by the expansion/contraction ratio calculation unit (530) is

S = (α·P̄_r + β) / (α·P̄_c + β) + γ,

where P_c is the depth information of the pixels in the current block, P_r is the depth information of the pixels in the reference block, P̄_c denotes the average depth of the block calculated from the depth of each pixel in the current block, P̄_r denotes the average depth of the reference block, and α, β, and γ are constants; assuming γ = 0, S = (α·P̄_r + β) / (α·P̄_c + β) is obtained, and when P̄_r > P̄_c the ratio satisfies S > 1 and the reference block is enlarged, while when P̄_r < P̄_c the ratio satisfies S < 1 and the reference block is reduced.
The system according to claim 1,
wherein the depth image rotation unit (400) performs:
calculating depth information from the image of a plane photographed by the depth image capturing unit (200) (S401);
calculating the position of each pixel in the coordinate system of the depth image capturing unit (200) using the calculated depth information (S402);
calculating the local normal vector of each pixel using the calculated neighboring positions (S403);
obtaining the normal vector of the entire plane from the local normal vectors (S404);
calculating the rotation axis and the rotation angle and computing the rotation matrix (S405); and
correcting the distortion of the image caused by the position of the depth image capturing unit (200) by applying the rotation transformation (S406).
The system according to claim 1,
wherein the method of expanding or contracting the facial image in the depth image expansion/contraction unit (500) comprises:
a color and depth information acquisition step (S501) of acquiring the color information and depth information of the facial image;
an average calculation step (S502) of calculating the average color information and depth information of the current block and of the reference block;
an expansion/contraction ratio calculation step (S503) of calculating the expansion/contraction ratio from the average depth values of the current block and the reference block; and
a motion estimation difference signal calculation step (S504) of enlarging or reducing the reference picture according to the expansion/contraction ratio and calculating the motion estimation difference signal between the current block and the reference block.
The system according to claim 1,
wherein the depth image and the color image of the person are captured simultaneously through the depth image capturing unit (200), and when the size of the face stored in the face storage unit (100) differs from the size of the face in the captured image owing to the capturing distance, the two face depth values are compared and the depth image expansion/contraction unit (500) expands or contracts the captured image so that the face size matches the one stored in the face storage unit (100);
wherein, when the face is detected using depth values in the face detection unit (600), the person is photographed by the depth image capturing unit (200) and regions are separated by labeling according to depth value, the face being detected from the difference in average depth values owing to its proximity to the depth image capturing unit (200);
wherein the facial feature extraction unit (700) extracts, after face detection by the face detection unit (600), features to be compared with those stored in the face storage unit (100), the extracted features being the facial contour, the depths and positions of the eyes, nose, and mouth, the shape of the jaw, the height of the cheekbones, the height of the brow bone, the height of the nose, and the facial width; and
wherein the feature data of each person stored in the face storage unit (100) is compared in the facial feature comparison unit (800), the identity matching determination unit (900) determining that the person is not the specific person when the comparison result falls below a certain degree of similarity, and determining that the person is the specific person when the facial feature comparison unit (800) confirms that all the features match.
KR1020150191344A 2015-12-31 2015-12-31 Face Recognition System using Depth Information KR101818984B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150191344A KR101818984B1 (en) 2015-12-31 2015-12-31 Face Recognition System using Depth Information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150191344A KR101818984B1 (en) 2015-12-31 2015-12-31 Face Recognition System using Depth Information

Publications (2)

Publication Number Publication Date
KR20170080116A 2017-07-10
KR101818984B1 2018-01-16

Family

ID=59356352

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150191344A KR101818984B1 (en) 2015-12-31 2015-12-31 Face Recognition System using Depth Information

Country Status (1)

Country Link
KR (1) KR101818984B1 (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101438011B1 (en) * 2013-11-08 2014-09-04 수원대학교산학협력단 Three-dimensional face recognition system using 3d scanner

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108932505A (en) * 2018-07-25 2018-12-04 维沃移动通信有限公司 A kind of image processing method and electronic equipment
CN111986097A (en) * 2019-05-24 2020-11-24 北京小米移动软件有限公司 Image processing method and device
CN111986097B (en) * 2019-05-24 2024-02-09 北京小米移动软件有限公司 Image processing method and device
KR20210108258A (en) 2020-02-25 2021-09-02 동의대학교 산학협력단 Quantization Parameter Determination Method, Device And Non-Transitory Computer Readable Recording Medium of Face Depth Image Encoding, And Face recognition Method And device Using The Same
KR20210108260A (en) 2020-02-25 2021-09-02 동의대학교 산학협력단 Identity Recognition System And Method, Spherical Modeling Method, Device And Non-Transitory Computer Readable Recording Medium of Face Depth Picture for Identity Recognition
CN114089433A (en) * 2020-08-05 2022-02-25 恒景科技股份有限公司 Presence detection system
CN114089433B (en) * 2020-08-05 2024-02-09 恒景科技股份有限公司 Presence detection system
KR20220162351A (en) * 2021-06-01 2022-12-08 주식회사 인포쉐어 Intelligent CCTV And Surveillance System Using It
CN113610864A (en) * 2021-07-23 2021-11-05 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN113610864B (en) * 2021-07-23 2024-04-09 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
KR101818984B1 (en) 2018-01-16

Similar Documents

Publication Publication Date Title
KR101818984B1 (en) Face Recognition System using Depth Information
KR101821144B1 (en) Access Control System using Depth Information based Face Recognition
Breitenstein et al. Real-time face pose estimation from single range images
JP3954484B2 (en) Image processing apparatus and program
JP5074322B2 (en) Image processing apparatus, image processing method, image processing program, and imaging apparatus
US20150262346A1 (en) Image processing apparatus, image processing method, and image processing program
CN108470356B (en) Target object rapid ranging method based on binocular vision
CN105740778B (en) Improved three-dimensional human face in-vivo detection method and device
US20120069007A1 (en) Method for Enhancing Depth Maps
JP5672112B2 (en) Stereo image calibration method, stereo image calibration apparatus, and computer program for stereo image calibration
US10909363B2 (en) Image acquisition system for off-axis eye images
JP2000105829A (en) Method and device for face parts image detection
KR101642402B1 (en) Apparatus and method for capturing digital image for guiding photo composition
US11727637B2 (en) Method for generating 3D skeleton using joint-based calibration acquired from multi-view camera
US11315360B2 (en) Live facial recognition system and method
JP4952267B2 (en) Three-dimensional shape processing apparatus, three-dimensional shape processing apparatus control method, and three-dimensional shape processing apparatus control program
CN110120013A (en) A kind of cloud method and device
Labati et al. Fast 3-D fingertip reconstruction using a single two-view structured light acquisition
KR101775874B1 (en) Integrated Control method for Vehicle Drive using Depth Information based Face Recognition
KR101053253B1 (en) Apparatus and method for face recognition using 3D information
Hanna et al. A System for Non-Intrusive Human Iris Acquisition and Identification.
CN109410272B (en) Transformer nut recognition and positioning device and method
KR101818992B1 (en) COSMETIC SURGERY method USING DEPTH FACE RECOGNITION
Li et al. Detecting and tracking human faces in videos
KR20210136092A (en) Image processing apparatus, image processing method and image processing program

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant