KR101818992B1 - COSMETIC SURGERY method USING DEPTH FACE RECOGNITION - Google Patents

COSMETIC SURGERY method USING DEPTH FACE RECOGNITION

Info

Publication number
KR101818992B1
Authority
KR
South Korea
Prior art keywords
image
unit
depth
facial
face
Prior art date
Application number
KR1020150191333A
Other languages
Korean (ko)
Other versions
KR20170080112A (en)
Inventor
권순각
이동석
Original Assignee
동의대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 동의대학교 산학협력단
Priority to KR1020150191333A
Publication of KR20170080112A
Application granted
Publication of KR101818992B1

Classifications

    • G06K9/00228
    • G06K9/00214
    • G06K9/00255
    • G06K9/00268
    • G06K9/00288
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • H04N13/0275
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis

Abstract

According to an aspect of the present invention, there is provided a cosmetic surgery medical system comprising: a face storage unit for storing facial feature depth information; a depth image capturing unit for capturing a facial depth image; a depth image correcting unit for correcting depth value errors; a face detection unit for extracting the face portion of the depth image; a depth image converting unit for stretching and aligning the facial image according to an image rotation/stretching transformation and the capturing distance; a facial feature extraction unit for extracting facial features; a facial feature comparing unit for comparing the extracted features with the data stored in the face storage unit; a color image capturing unit for capturing a color image of the person; a surgery result calculation unit for calculating the post-surgery depth information; a face rendering unit for rendering the calculation result as a 3D image; and a pre-/post-surgery image display unit for comparing and displaying the rendered image against the initial facial image.

Description

TECHNICAL FIELD [0001] The present invention relates to a method of deriving a cosmetic surgery result image using the depth information of an image.

The present invention relates to a technique that allows a user to preview the post-surgery facial image by means of a face recognition system using depth information, and that provides a 3D comparison image for confirming and adjusting the expected facial change.

Recently, as living standards have improved, social interest in plastic surgery has increased greatly, and virtual plastic surgery technology has developed rapidly along with it.

Under such a conventional virtual surgery system, a surgery-related medical institution (for example, a plastic surgery clinic) photographs the customer's target site, for example the customer's face, with a capture camera and a pattern generator to obtain a CIV. Three-dimensional shape data is then generated from the CIV and edited with a commercial image editing tool, and the resulting virtual representation is presented to the customer, so that the plastic surgery procedure can proceed more smoothly in cooperation with the customer.

Of course, under such a conventional system, the CIV generated by the capture camera and pattern generator serves as the base image for the three-dimensional shape data finally generated for virtual surgery, and therefore occupies a very important position in the plastic surgery procedure.

Nevertheless, conventional surgery-related medical institutions typically arrange the photographing apparatuses simply according to prior experience, without systematically considering variables such as the illumination around the target site, and then carry out the photographing procedure. Unless other measures are taken, it is therefore very difficult to secure an optimal CIV, and the resulting three-dimensional shape data also exhibits low quality. As a result, under the conventional system, both the customer and the medical institution suffer from a greatly reduced reliability of the virtual surgery.

KR 10-2009-0011133 (A)

KR 10-1549599 (B1)

KR 10-1576630 (B1)

SUMMARY OF THE INVENTION It is an object of the present invention to solve the above problems of the prior art.

More specifically, it is an object of the present invention to allow a user to predict the post-surgery facial image through a face recognition system using depth information, and to provide a 3D comparison image for confirming and adjusting the changed facial image.

According to an aspect of the present invention, there is provided a cosmetic surgery medical system comprising: a face storage unit 100 for storing facial feature depth information photographed by a camera; a depth image capturing unit 200 for capturing a depth image; a depth image correcting unit 300 for correcting the depth values of the image photographed by the depth image capturing unit 200; a face detection unit 400 for extracting the face portion of the image corrected by the depth image correcting unit 300; a depth image converting unit 500 for stretching and aligning the facial image detected by the face detection unit 400 according to an image rotation/stretching transformation and the capturing distance; a facial feature extraction unit 600 for extracting facial features from the image transformed by the depth image converting unit 500; a facial feature comparing unit 700 for comparing the extracted features with the data already stored in the face storage unit 100; a feature matching unit 800 for matching the features compared by the facial feature comparing unit 700; a color image capturing unit 900 for capturing a color image of the person photographed by the depth image capturing unit 200; a surgery result calculation unit 910 for calculating the post-surgery depth information on the image photographed by the color image capturing unit 900; a face rendering unit 920 for rendering the calculation result of the surgery result calculation unit 910 as a 3D image; and a pre-/post-surgery image display unit 930 for comparing and displaying the image rendered by the face rendering unit 920 against the initial facial image.

As described above, according to the present invention, a facial depth image of the user is acquired with a depth camera, prediction data for the planned surgery is applied to the acquired depth image, and the predicted post-surgery facial image is provided to the user. The user thus receives a 3D image of his or her post-surgery appearance and, through this image, can discuss changes or adjustments to the procedure, enabling a more satisfactory surgical outcome.

Brief Description of the Drawings FIG. 1 is a block diagram of a cosmetic surgery medical system using depth face recognition according to the present invention.
FIG. 2 illustrates an example of depth image capturing using the depth image capturing unit of the present invention.
FIG. 3 illustrates an example of correcting error pixels using the depth image correcting unit of the present invention.
FIG. 4 illustrates interpolation of the eyeglass frame of a spectacle wearer using the depth image correcting unit according to the present invention.
FIG. 5 is a block diagram of the detailed modules of the depth image converting unit according to the present invention.
FIG. 6 is a flowchart illustrating facial alignment performed in the depth image converting unit according to the present invention.
FIG. 7 illustrates an example of extracting the face using depth values in the face detection unit according to the present invention.
FIG. 8 is an exemplary view showing differences in the depth values of the main features extracted by the facial feature extraction unit according to the present invention.
FIG. 9 is an exemplary view of a facial region extracted by the facial feature extraction unit according to the present invention.
FIG. 10 illustrates an example of rotating the positions of the two eyes parallel to the horizontal line in order to correct a tilted face.
FIG. 11 illustrates a method of measuring relative depth by calculating the depth difference between the nose tip and the rest of the face.
FIG. 12 is an exemplary view of jaw region extraction using the facial recognition system according to the present invention.
FIG. 13 is an exemplary view of facial width measurement using the facial recognition system according to the present invention.
FIG. 14 is a flowchart of the method of deriving a post-surgery image according to the present invention.

DETAILED DESCRIPTION OF THE EMBODIMENTS The above and other objects, features and advantages of the present invention will become more apparent from the following detailed description taken in conjunction with the accompanying drawings; the invention is not, however, limited to the embodiments described. In the following description, the same components are denoted by the same reference numerals and symbols, and redundant description thereof is omitted.

Prior to the detailed description of each step of the invention, the terms and words used in this specification and the claims should not be construed as limited to their ordinary or dictionary meanings; they should be interpreted with meanings and concepts consistent with the technical idea of the present invention, on the principle that the inventor may properly define the concepts of terms in order to describe his or her own invention in the best manner. Therefore, the embodiments described in this specification and the configurations shown in the drawings are merely the most preferred embodiments of the present invention and do not represent all of its technical ideas, and it should be understood that various equivalents and modifications are possible.

Referring to FIG. 1, a cosmetic surgery medical system using depth face recognition according to the present invention includes: a face storage unit 100 for storing facial feature depth information photographed by a camera; a depth image capturing unit 200 for capturing a depth image; a depth image correcting unit 300 for correcting the depth values of the image photographed by the depth image capturing unit 200; a face detection unit 400 for extracting the face portion of the image corrected by the depth image correcting unit 300; a depth image converting unit 500 for stretching and aligning the facial image detected by the face detection unit 400 according to an image rotation/stretching transformation and the capturing distance; a facial feature extraction unit 600 for extracting facial features from the image transformed by the depth image converting unit 500; a facial feature comparing unit 700 for comparing the extracted features with the data already stored in the face storage unit 100; a feature matching unit 800 for matching the features compared by the facial feature comparing unit 700; a color image capturing unit 900 for capturing a color image of the person photographed by the depth image capturing unit 200; a surgery result calculation unit 910 for calculating the post-surgery depth information on the image photographed by the color image capturing unit 900; a face rendering unit 920 for rendering the calculation result of the surgery result calculation unit 910 as a 3D image; and a pre-/post-surgery image display unit 930 for comparing and displaying the image rendered by the face rendering unit 920 against the initial facial image.

Referring to FIG. 14, the post-surgery image derivation method of the surgery result calculation unit 910 includes: a surgery result data input step (S100) of inputting the result data of each surgical procedure; a surgery site designation step (S200) of designating the site to be operated on in the stored facial image; a procedure confirmation step (S300) of confirming the type of operation for the site designated in the surgery site designation step (S200); a surgery result assignment step (S400) of, after the procedure confirmation step (S300), looking up the result entered in the surgery result data input step (S100) for the confirmed operation and assigning it to the designated site; and a result transmission step (S500) of transmitting the value derived after the assignment step (S400) to the face rendering unit 920.
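
As a rough illustration of steps S100 to S500, the following Python sketch applies a stored per-pixel depth offset (the surgery result data) to a designated region of the stored facial depth image. The function and variable names, and the idea of representing the result data as an offset map in millimetres, are illustrative assumptions rather than the patent's own data format.

    import numpy as np

    def apply_procedure_result(depth_face, region_mask, offset_map):
        """Apply a stored per-pixel depth offset (S400) to the designated
        region (S200) of the stored facial depth image.

        depth_face : (H, W) float array, facial depth image in mm
        region_mask: (H, W) bool array, True inside the designated site
        offset_map : (H, W) float array, expected depth change for the
                     confirmed procedure type (S300), in mm (assumed format)
        """
        predicted = depth_face.copy()
        predicted[region_mask] += offset_map[region_mask]
        return predicted  # handed to the face rendering unit (S500)

    # Toy usage: raise a small hypothetical nose-bridge region by 2 mm
    # on a flat 100x100 "face" placed 80 cm from the camera.
    face = np.full((100, 100), 800.0)
    mask = np.zeros_like(face, dtype=bool)
    mask[45:55, 48:52] = True
    offsets = np.where(mask, -2.0, 0.0)   # smaller depth = closer = higher
    after = apply_procedure_result(face, mask, offsets)
    print(after[50, 50], face[50, 50])    # 798.0 800.0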

The facial features of the person to be identified are stored in the face storage unit 100 in the form of depth values. In addition to the depth information of the face, the physical features stored at this time are preferably the positions and shapes of the eyes, nose and mouth, the width of the face, the height of the nose, and the outline information of the jaw. The face storage unit 100 stores images that have been transformed by the depth image converting unit 500 so that the smallest depth value in the depth image equals the reference depth value D_reference.

When the depth image capturing unit 200 is installed at a position where a portrait can be taken, the person's face is photographed as shown in FIG. 2. In the obtained depth image the nose is located nearest to the depth image capturing unit 200, so its depth value is the smallest. The depth values of the face and of the other regions also differ greatly, so the facial portion can be extracted by the face detection unit 400. When the depth image is captured through the depth image capturing unit 200, pixels with erroneous depth values may occur in the photographed image. These error pixels are corrected by interpolation using the depth image correcting unit 300, as shown in FIG. 3.

A wearer of glasses may also be a subject of face recognition. When a spectacle wearer is photographed, the eyeglass lenses are glass and therefore do not affect the depth measurement, but the eyeglass frame is measured with its own depth value, which may cause errors in the face recognition process. The frame can be distinguished from the face area by using the fact that its depth is smaller than the average depth of the face. By interpolating the frame area with the surrounding depth values, as shown in FIG. 4, an image in which the spectacle area is removed from the depth image is obtained.
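
The correction described above can be sketched as follows. The assumptions that error pixels are flagged in a mask, that the frame is any face pixel noticeably closer than the average face depth, and that interpolation uses the median of valid neighbours are illustrative readings of FIGS. 3 and 4, not the patent's exact rules.

    import numpy as np

    def interpolate_invalid(depth, invalid_mask, win=2):
        """Replace masked pixels with the median of valid neighbours."""
        h, w = depth.shape
        out = depth.copy()
        ys, xs = np.nonzero(invalid_mask)
        for y, x in zip(ys, xs):
            y0, y1 = max(0, y - win), min(h, y + win + 1)
            x0, x1 = max(0, x - win), min(w, x + win + 1)
            patch = depth[y0:y1, x0:x1]
            valid = patch[~invalid_mask[y0:y1, x0:x1]]
            if valid.size:
                out[y, x] = np.median(valid)
        return out

    def remove_glasses_frame(depth, face_mask, margin=15.0):
        """Treat face pixels noticeably closer than the average face depth
        as the spectacle frame and interpolate them away (FIG. 4 idea)."""
        face_mean = depth[face_mask].mean()
        frame = face_mask & (depth < face_mean - margin)
        return interpolate_invalid(depth, frame)
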
The process of extracting the face using depth values in the face detection unit 400 is as follows. The person is photographed by the depth image capturing unit 200, and regions are separated by labeling according to depth value. Since the face is located close to the depth image capturing unit 200, the average depth value of the face region is the smallest, and this is used to separate the face from the other parts of the body. Referring to FIG. 7, which shows a depth image rendered brighter where the depth value is lower, the depth of the facial region is lower than that of the other regions, so the facial region can be separated from them.
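
A minimal sketch of this depth-based face extraction, assuming a simple depth threshold splits foreground from background and that scipy's connected-component labeling stands in for the labeling step; the threshold value and the use of connected components are simplifications, and the depth image is assumed to be already corrected.

    import numpy as np
    from scipy import ndimage

    def detect_face_region(depth, background_thresh=50.0):
        """Label connected foreground regions and return the region with the
        smallest average depth, i.e. the face nearest the camera."""
        foreground = depth < (depth.min() + background_thresh)
        labels, n = ndimage.label(foreground)
        best_label, best_mean = 0, np.inf
        for lab in range(1, n + 1):
            mean_d = depth[labels == lab].mean()
            if mean_d < best_mean:
                best_label, best_mean = lab, mean_d
        return labels == best_label   # boolean face mask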

Since the depth image capturing unit 200 may not always photograph the person from directly in front, this must be corrected, and the depth image converting unit 500 applies a transformation to align the face.

Referring to FIG. 5, the depth image converting unit 500 includes: a depth information calculation unit 510 for calculating depth information on the plane image taken by the depth image capturing unit 200; a coordinate transformation unit 520 for calculating the position of each pixel in the coordinate system of the depth image capturing unit 200 using the depth information calculated by the depth information calculation unit 510; a local normal vector calculation unit 530 for calculating the local normal vector of each pixel using the neighboring positions calculated by the coordinate transformation unit 520; a plane normal vector calculation unit 540 for obtaining the normal vector of the entire plane from the local normal vectors obtained by the local normal vector calculation unit 530; a transformation matrix calculation unit 550 for obtaining a rotation matrix by calculating the rotation axis and the rotation angle of the image, and a translation matrix using the depth values of the image and the reference depth value stored in the face storage unit 100; and a transformation applying unit 560 for applying the transformation using these matrices and aligning the face in the image so that it can be compared with the face storage unit 100.

The coordinate transformation unit 520 uses the position P(x, y) of each pixel in the depth image captured by the depth image capturing unit 200 and the distance D(x, y) between that pixel and the depth image capturing unit 200, together with the internal parameters of the depth image capturing unit 200, to compute the pixel's position in the camera coordinate system, in which the depth image capturing unit 200 is the origin of the x-y plane and its forward optical axis is the z axis.

Here, the transformation into the coordinate system of the depth image capturing unit 200 starts from the position P(x, y) of the pixel in the depth image coordinate system, whose origin is the upper-left corner of the image. This position is first converted to the position P_v(x_v, y_v) in an image coordinate system whose origin is the image center, while z_c denotes the distance of the pixel along the z axis of the coordinate system of the depth image capturing unit 200.

To calculate the position of each pixel in the coordinate system of the depth image capturing unit 200, the coordinate transformation unit 520 uses the viewing angle and resolution, which are internal parameters of the depth image capturing unit 200. The distance f to the viewport is obtained from the vertical viewing angle fov_v and the vertical resolution h of the depth image capturing unit 200, or equivalently from the horizontal viewing angle fov_h and the horizontal resolution w, as f = (h/2) / tan(fov_v/2) = (w/2) / tan(fov_h/2). Using f, the position P_c(x, y) = (x_c, y_c, z_c) in the coordinate system of the depth image capturing unit 200 is obtained as x_c = x_v · z_c / f and y_c = y_v · z_c / f.
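
A small sketch of this back-projection, assuming the standard pinhole relation between viewing angle, resolution and viewport distance; the array shapes and the y-axis flip are implementation choices, not specified by the patent.

    import numpy as np

    def viewport_distance(fov_v_deg, height):
        """Distance f to the viewport from the vertical viewing angle and
        vertical resolution (the horizontal pair gives the same value)."""
        return (height / 2.0) / np.tan(np.radians(fov_v_deg) / 2.0)

    def to_camera_coords(depth, fov_v_deg):
        """Back-project every pixel P(x, y) with depth z_c into the camera
        coordinate system P_c = (x_c, y_c, z_c)."""
        h, w = depth.shape
        f = viewport_distance(fov_v_deg, h)
        ys, xs = np.mgrid[0:h, 0:w]
        x_v = xs - w / 2.0            # image-centre origin
        y_v = h / 2.0 - ys            # flip so +y points up
        z_c = depth.astype(float)
        x_c = x_v * z_c / f
        y_c = y_v * z_c / f
        return np.stack([x_c, y_c, z_c], axis=-1)   # (H, W, 3)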

If the position in the coordinate system of the depth image capturing unit 200 corresponding to the image position P(x, y) is P_c(x, y), the local normal vector calculation unit 530 forms two vectors from the camera-coordinate positions of the upper and lower neighbors P_c(x, y+1), P_c(x, y-1) and of the left and right neighbors P_c(x+1, y), P_c(x-1, y): v_1 = P_c(x+1, y) - P_c(x-1, y) and v_2 = P_c(x, y+1) - P_c(x, y-1). The local normal vector of the pixel P(x, y) is then obtained as the cross product of the two vectors, N_xy = v_1 × v_2.
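
The local normal computation maps directly onto array slicing; this sketch crops the one-pixel border where the neighbours needed for v_1 and v_2 do not exist, and normalizes the result, which is a convenience rather than a requirement of the text.

    import numpy as np

    def local_normals(P_c):
        """Per-pixel normal N_xy = v1 x v2 from the camera-space positions of
        the left/right and upper/lower neighbours of each interior pixel."""
        v1 = P_c[1:-1, 2:] - P_c[1:-1, :-2]    # P_c(x+1, y) - P_c(x-1, y)
        v2 = P_c[2:, 1:-1] - P_c[:-2, 1:-1]    # P_c(x, y+1) - P_c(x, y-1)
        n = np.cross(v1, v2)
        norm = np.linalg.norm(n, axis=-1, keepdims=True)
        return n / np.maximum(norm, 1e-9)      # unit normals, border cropped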

The plane normal vector calculation unit 540 obtains the normal vector N of the plane region by summing the local normal vectors of the pixels obtained by the local normal vector calculation unit 530. The image taken by the depth image capturing unit 200 is then rotated so that the plane normal vector becomes parallel to the z axis, which corrects the plane image to one parallel to the x-y plane and removes the perspective distortion. The unit vector of the plane normal vector after this rotation is N' = (0, 0, 1).

If the unit vector of the plane normal vector after the rotation is N', the unit vector that serves as the rotation axis is u = (N × N') / |N × N'|, the normalized cross product of the normal vectors before and after the rotation, and the rotation angle is θ = cos⁻¹((N · N') / (|N| |N'|)). With u = (u_x, u_y, u_z), the transformation matrix calculation unit 550 obtains the rotation transformation matrix by Rodrigues' formula,

R = cos θ · I + sin θ · [u]_x + (1 − cos θ) · u⊗u,

where u⊗u is the outer-product matrix with entries u_i·u_j and [u]_x is the skew-symmetric cross-product matrix of u. The smallest depth value D_min of the image and the reference depth value D_reference of the face storage unit 100 are then used to calculate the parallel translation matrices T_1 and T_2, and the position P_c(i, j) of each pixel in the coordinate system of the depth image capturing unit 200 is transformed through the translation matrices T_1 and T_2 and the rotation R to obtain the converted position P'_c(i, j) = (x'_c, y'_c, z'_c).
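
A sketch of the alignment transform under the reconstruction above: the plane normal is summed from the local normals, Rodrigues' formula gives R, and the two translation matrices T_1 and T_2 of the patent are collapsed here into a single depth shift that places the smallest depth D_min onto D_reference; that simplification, and all function names, are assumptions.

    import numpy as np

    def plane_rotation(normals):
        """Rotation R (Rodrigues' formula) taking the summed plane normal N
        onto the optical axis N' = (0, 0, 1)."""
        N = normals.reshape(-1, 3).sum(axis=0)
        N = N / np.linalg.norm(N)
        Np = np.array([0.0, 0.0, 1.0])
        axis = np.cross(N, Np)
        s = np.linalg.norm(axis)
        if s < 1e-9:                                # already facing the camera
            return np.eye(3)
        u = axis / s                                # u = (N x N') / |N x N'|
        theta = np.arccos(np.clip(np.dot(N, Np), -1.0, 1.0))
        ux = np.array([[0, -u[2], u[1]],
                       [u[2], 0, -u[0]],
                       [-u[1], u[0], 0]])           # [u]_x
        return (np.cos(theta) * np.eye(3)
                + np.sin(theta) * ux
                + (1 - np.cos(theta)) * np.outer(u, u))

    def align_to_reference(P_c, normals, d_reference):
        """Rotate all camera-space points, then shift depths so the smallest
        depth value D_min lands on the stored reference depth D_reference."""
        R = plane_rotation(normals)
        aligned = P_c.reshape(-1, 3) @ R.T
        aligned[:, 2] += d_reference - aligned[:, 2].min()
        return R, aligned.reshape(P_c.shape)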


The transformation applying unit 560 takes the converted position P'_c(i, j) = (x'_c, y'_c, z'_c) of each pixel and the distance f to the viewport onto which the image is projected. Because P'_c is a coordinate in the coordinate system of the depth image capturing unit 200, it is converted back to image coordinates by x'_v = (x'_c · f) / z'_c and y'_v = (y'_c · f) / z'_c. The position P'_v(x'_v, y'_v) in the image coordinate system whose origin lies at the center of the screen is then converted back to the coordinate system whose origin is the upper-left corner, so that the original pixel P(x, y) is mapped to the converted pixel P'(x', y'). Finally, the depth value of the pixel P'(x', y') in the depth image is set to z'_c, so that the face in the image is aligned and can be compared with the face storage unit 100.
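
A sketch of this re-projection step, assuming that when several points map to the same pixel the nearest one is kept and that unmapped pixels are left as holes for later interpolation; both policies are assumptions, since the text does not state how collisions and holes are handled.

    import numpy as np

    def reproject_to_depth_image(P_aligned, f, height, width):
        """Project aligned camera-space points back to pixel coordinates and
        write z'_c as the new depth value of the mapped pixel P'(x', y')."""
        pts = P_aligned.reshape(-1, 3)
        x_c, y_c, z_c = pts[:, 0], pts[:, 1], pts[:, 2]
        x_v = x_c * f / z_c                   # x'_v = x'_c * f / z'_c
        y_v = y_c * f / z_c                   # y'_v = y'_c * f / z'_c
        xs = np.round(x_v + width / 2.0).astype(int)   # back to top-left origin
        ys = np.round(height / 2.0 - y_v).astype(int)
        depth = np.full((height, width), np.inf)
        ok = (xs >= 0) & (xs < width) & (ys >= 0) & (ys < height) & (z_c > 0)
        np.minimum.at(depth, (ys[ok], xs[ok]), z_c[ok])  # keep nearest on clashes
        depth[np.isinf(depth)] = 0.0          # unmapped holes, to be interpolated
        return depth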

Referring to FIG. 6, an embodiment of facial alignment in the depth image converting unit 500 includes: calculating depth information in the plane image photographed by the depth image capturing unit 200 (S501); calculating the position of each pixel in the coordinate system of the depth image capturing unit 200 using the calculated depth information (S502); calculating the local normal vector of each pixel using the calculated positions of its neighbors (S503); obtaining the normal vector of the entire plane from the local normal vectors (S504); calculating the rotation matrix from the rotation axis and the rotation angle of the image (S505); and applying the transformation using the transformation matrix (S506).
After the face detection unit 400 detects the face, the facial feature extraction unit 600 extracts the features of the face so that they can be compared with the features stored in the face storage unit 100. The extracted features are preferably the face contour, the depths and positions of the eyes, nose and mouth, the shape of the jaw, the height of the cheekbones, the height of the brow bone, the height of the nose, and the face width. The contour of the face is extracted first, and then the eyes, nose and mouth. In the depth values of the face, the nose region is the lowest and the eye regions are relatively high, and this property is used for detection. Although the depth value of the mouth is larger than that of the nose, the mouth protrudes compared with the surrounding facial parts, so its depth value is relatively small; this makes feature extraction for the eyes, nose and mouth possible (FIG. 8; FIG. 9). The contours of the detected eyes, nose and mouth are then used to determine their relative positions. If the face is tilted, the eye, nose and mouth positions may shift, so the depth image is rotated until the positions of the two eyes are parallel to the horizontal line. The positions of the two eyes, the nose and the mouth are then measured with respect to the midpoint between the two eyes (FIG. 10). The height of the nose can also be extracted: the depth difference between the nose tip and the rest of the face is measured through the depth image capturing unit 200 (FIG. 11). The height of the cheekbone below the eye and the height of the brow bone above the eye are measured in the same manner and used as facial features. The shape of the jaw is then extracted (FIG. 12); the jaw is taken as the region from below the lips to the bottom of the face, and the shape of its outline is extracted. Finally, the face width is measured: the actual facial distance can be obtained from the depth value, the relative position in the depth image, and the internal parameters of the depth image capturing unit 200 (FIG. 13).
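
Two of these measurements are easy to make concrete. The sketch below converts a face width measured in pixels into millimetres using the viewport distance f from the camera's internal parameters, and measures a relative nose height as the difference between the average face depth and the nose-tip depth; the specific numbers and the use of the average face depth as the reference are illustrative assumptions.

    import numpy as np

    def real_face_width(pixel_width, depth_mm, fov_v_deg, image_height):
        """Convert a width measured in pixels at a given depth into millimetres
        using the viewport distance f (FIG. 13 idea)."""
        f = (image_height / 2.0) / np.tan(np.radians(fov_v_deg) / 2.0)
        return pixel_width * depth_mm / f

    def nose_height(face_depth, face_mask, nose_xy):
        """Relative nose height: average face depth minus the nose-tip depth
        (FIG. 11 idea)."""
        return face_depth[face_mask].mean() - face_depth[nose_xy[1], nose_xy[0]]

    # e.g. 180 px wide at 800 mm with a 45-degree vertical FOV on 480 rows:
    print(round(real_face_width(180, 800, 45.0, 480), 1))   # about 248.5 mm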


Once facial features that can identify a person have been extracted, the facial feature comparing unit 700 performs a comparison against the feature data of each person stored in the face storage unit 100. If the comparison result falls below a certain degree of similarity, the feature matching unit 800 determines that the person is not the specific person. Conversely, when the facial feature comparing unit 700 confirms that all the features match, the feature matching unit 800 determines that the person is the specific person.
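
A toy sketch of this comparison, assuming features are stored as a fixed-length numeric vector per person and that "all features match" means every per-feature similarity ratio clears a threshold; the feature choice, the ratio-based similarity, and the 0.9 threshold are assumptions, not the patent's criterion.

    import numpy as np

    def match_person(features, stored_db, threshold=0.9):
        """Compare extracted facial features with each stored person and decide
        identity only when every feature is sufficiently similar."""
        for person_id, stored in stored_db.items():
            ratios = np.minimum(features, stored) / np.maximum(features, stored)
            if np.all(ratios >= threshold):     # all features match
                return person_id
        return None                             # below similarity: no match

    # Hypothetical stored features: eye gap, nose height, face width (mm).
    db = {"person_a": np.array([62.0, 24.0, 140.0])}
    print(match_person(np.array([61.0, 23.5, 139.0]), db))   # person_a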

While the present invention has been described in connection with what are presently considered to be practical exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications and equivalent arrangements. Accordingly, the true scope of the present invention should be determined only by the appended claims.

Claims (6)

A face storage unit (100) for storing facial feature depth information photographed by a camera; a depth image capturing unit (200) for capturing a depth image; a depth image correcting unit (300) for correcting the depth values of the image photographed by the depth image capturing unit (200); a face detection unit (400) for extracting the face portion of the image corrected by the depth image correcting unit (300); a depth image converting unit (500) for stretching and aligning the facial image detected by the face detection unit (400) according to an image rotation/stretching transformation and the capturing distance; a facial feature extraction unit (600) for extracting facial features from the image transformed by the depth image converting unit (500); a facial feature comparing unit (700) for comparing the extracted features with the data already stored in the face storage unit (100); a feature matching unit (800) for matching the features compared by the facial feature comparing unit (700); a color image capturing unit (900) for capturing a color image of the person photographed by the depth image capturing unit (200); a surgery result calculation unit (910) for calculating the post-surgery depth information on the image photographed by the color image capturing unit (900); a face rendering unit (920) for rendering the calculation result of the surgery result calculation unit (910) as a 3D image; and a pre-/post-surgery image display unit (930) for comparing and displaying the image rendered by the face rendering unit (920) against the initial facial image, the method comprising:

A surgery result data input step (S100) of inputting the result value data of each surgical procedure;
A surgery site designation step (S200) of designating the site to be operated on in the stored facial image;
A procedure confirmation step (S300) of confirming the type of operation for the site designated in the surgery site designation step (S200);
A surgery result assignment step (S400) of, after the procedure confirmation step (S300), looking up the result entered in the surgery result data input step (S100) for the confirmed operation and assigning it to the designated site;
A comparison step in which, after the surgical procedure, the transformation applying unit (560) takes the converted position P'_c(i, j) = (x'_c, y'_c, z'_c) obtained by the rotation transformation and the distance f to the viewport onto which the image is projected, converts it back to image coordinates by x'_v = (x'_c · f) / z'_c and y'_v = (y'_c · f) / z'_c because P'_c is a coordinate in the coordinate system of the depth image capturing unit (200), converts the position P'_v(x'_v, y'_v) in the image coordinate system whose origin lies at the center of the screen back to the coordinate system whose origin is the upper-left corner so that the original pixel P(x, y) is mapped to the converted pixel P'(x', y'), sets the depth value of the pixel P'(x', y') in the depth image to z'_c, and compares the face in the image with the face storage unit (100);
And a result transmission step (S500) of transmitting the value derived after the surgery result assignment step (S400) to the face rendering unit (920), whereby the surgery result image is derived using the depth information of the image.
The method according to claim 1,
Wherein the face storage unit (100) stores the facial features of the person to be identified in the form of depth values.
The method according to claim 1,
Wherein the physical characteristics stored in the face storage unit (100) are the depth information of the face, the positions and shapes of the eyes, nose and mouth, the width of the face, the height of the nose, and the depth information of the outline of the jaw.
The method according to claim 1,
Wherein, in the depth image of the person obtained by the depth image capturing unit (200), the nose is located at the shortest distance from the depth image capturing unit (200) and therefore has the smallest depth value, and the depth values of the face and the non-face regions differ greatly, so that the face portion can be extracted from the depth image data alone using the face detection unit (400).
The method according to claim 1,
Wherein the depth image correcting unit (300) corrects the pixels in which errors occur in the captured image when the depth image is acquired using the depth image capturing unit (200).
KR1020150191333A 2015-12-31 2015-12-31 COSMETIC SURGERY method USING DEPTH FACE RECOGNITION KR101818992B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150191333A KR101818992B1 (en) 2015-12-31 2015-12-31 COSMETIC SURGERY method USING DEPTH FACE RECOGNITION


Publications (2)

Publication Number Publication Date
KR20170080112A KR20170080112A (en) 2017-07-10
KR101818992B1 (en) 2018-02-28

Family

ID=59356395

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150191333A KR101818992B1 (en) 2015-12-31 2015-12-31 COSMETIC SURGERY method USING DEPTH FACE RECOGNITION

Country Status (1)

Country Link
KR (1) KR101818992B1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109145865A (en) * 2018-09-07 2019-01-04 北京相貌空间科技有限公司 Face standard level calculating method and device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102476758B1 (en) * 2018-05-09 2022-12-09 삼성전자주식회사 Method and device to normalize image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101397476B1 (en) * 2012-11-28 2014-05-20 주식회사 에스하이텍 Virtual cosmetic surgery method by 3d virtual cosmetic surgery device
KR101438011B1 (en) * 2013-11-08 2014-09-04 수원대학교산학협력단 Three-dimensional face recognition system using 3d scanner



Also Published As

Publication number Publication date
KR20170080112A (en) 2017-07-10


Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant