CN101499132A - Three-dimensional transformation search method for extracting characteristic points in human face image - Google Patents

Info

Publication number
CN101499132A
CN101499132A (application CN200910037867A; granted as CN101499132B)
Authority
CN
China
Prior art keywords
prime
coordinate
theta
dimension
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200910037867
Other languages
Chinese (zh)
Other versions
CN101499132B (en)
Inventor
易法令 (Yi Faling)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GUANGZHOU HENGBIKANG INFORMATION TECHNOLOGY CO.,LTD.
Guangdong Pharmaceutical University
Original Assignee
Guangdong Pharmaceutical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Pharmaceutical University filed Critical Guangdong Pharmaceutical University
Priority to CN 200910037867 priority Critical patent/CN101499132B/en
Publication of CN101499132A publication Critical patent/CN101499132A/en
Application granted granted Critical
Publication of CN101499132B publication Critical patent/CN101499132B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a three-dimensional transformation search method for extracting feature points in facial images. Building on the ASM (Active Shape Models) approach to face-image localization, it replaces the two-dimensional shape-search transform of standard ASM with a three-dimensional one. The method proceeds as follows: first, construct a standard three-dimensional face model; second, use that model to assign third-dimension coordinates to the two-dimensional statistical model (base shape) of the facial feature points in the ASM training set; finally, apply a three-dimensional transformation to the base shape carrying those three-dimensional coordinates and project it onto the two-dimensional plane to approximate the target feature-point shape during the search. Because the method reflects genuine changes in head pose, it searches more accurately; tests show that it approximates the true feature points more closely than the existing two-dimensional transformation search.

Description

A three-dimensional transformation search method for extracting feature points in a facial image
Technical field
The invention belongs to the field of face recognition, and relates specifically to methods for extracting the feature points of facial organs.
Background technology
Face recognition is a biometric identification technology based on facial feature information, and facial feature point extraction is its foundation. As a means of personal identification it has broad application prospects; although some practical face-recognition systems have begun to enter the market, the technology is still far from mature, and both performance and accuracy leave much room for improvement. At present, facial feature points are generally extracted with the ASM (Active Shape Models) localization method, which comprises three steps: (1) obtain a true shape description by aligning the training sample set; (2) capture the statistical information of the aligned shapes; (3) search for shape instances in the image. This method works well for locating roughly frontal faces, but poorly for faces deflected by some angle. Analysis and experiment show that this is related to the way the image is searched: the current ASM method approximates the target shape by rotating, scaling, and translating a two-dimensional base shape, whereas a face is a three-dimensional object, so these operations cannot fully reflect changes in head pose and the shape search may deviate considerably when approximating.
Summary of the invention
The object of the invention is to address the above problems by providing a method that searches for changes in head pose from a three-dimensional perspective. The method improves the accuracy of the ASM shape search for faces, and thereby the precision and efficiency of the whole face-recognition system.
Technical scheme of the present invention is:
A three-dimensional transformation search method for extracting feature points in a facial image: first, construct a standard three-dimensional face model; second, use this model to derive the third-dimension coordinate of each two-dimensional facial feature point; finally, perform a three-dimensional-transformation ASM shape search on the basis of these three-dimensional coordinates. The method comprises the following steps:
(1) Construct the standard three-dimensional face model, including the three-dimensional coordinates (x, y, z) of the facial feature points (the front of the face is the XY plane);
(2) Based on the standard three-dimensional model, and according to the two-dimensional statistical model (x1, y1, x2, y2, …, xn, yn) of the facial feature points in the ASM training set, determine proportionally the third-dimension coordinate zi of each feature point in the two-dimensional statistical model;
(3) Rotate the base shape, now carrying three-dimensional coordinates, about the Z, X, and Y axes respectively, with scaling and translation; finally project the transformed result onto the XY plane and use the projection to approximate the shape being searched.
In step (1) the standard three-dimensional face model is built by actual measurement: select more than 50 faces from the training set and measure the three-dimensional coordinates (x1, y1, z1, x2, y2, z2, …, xn, yn, zn) of each feature point, taking the central plane of the neck as the zero plane for the third-dimension (z-axis) coordinate; normalize these data and average them to obtain the standard three-dimensional face model.
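The measure-normalize-average construction of this step can be sketched in Python as follows. The precise normalization is an assumption (the text only says the data are normalized and then averaged); here each measured face is centered on its centroid and scaled to unit size before averaging, and all names are illustrative:

```python
import numpy as np

def build_standard_model(faces):
    """Average measured 3-D feature-point sets into a standard face model.

    `faces` has shape (m, n, 3): m measured faces, n feature points each,
    with z measured from the central plane of the neck.  Each face is
    centered on its centroid and scaled to unit Frobenius norm before
    averaging (an assumed normalization scheme).
    """
    faces = np.asarray(faces, dtype=float)
    centered = faces - faces.mean(axis=1, keepdims=True)          # remove position
    scale = np.linalg.norm(centered.reshape(len(faces), -1), axis=1)
    normalized = centered / scale[:, None, None]                  # remove size
    return normalized.mean(axis=0)                                # average faces
```

With this normalization, two measurements of the same face at different scales and positions contribute identically to the model.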
Step (2) is carried out as follows:
1) Build the third-dimension (Z-direction) coordinate array SZ = [z1, z2, …, zn] corresponding to the two-dimensional statistical model of the facial feature points in the ASM training set; its data come from the third-dimension coordinates of the standard three-dimensional face model and correspond one-to-one with the feature points in the ASM training set.
2) Choose three feature points in the standard three-dimensional face model and record their plane (x, y) coordinate values; the calculations below use these three points. The three chosen points are the outer corners of the two eyes and the tip of the nose, P1, P2, P3 (corresponding to points 13, 26, and 41 in Fig. 2). In the standard three-dimensional face model their plane coordinates are known, say (xc1, yc1), (xc2, yc2), (xc3, yc3); the corresponding three points in the two-dimensional statistical model of the facial feature points of the ASM training set are also known, say (x1, y1), (x2, y2), (x3, y3).
3) Compute the horizontal scale factor Cx from P1 and P2, and the vertical scale factor Cy from the midpoint of P1P2 and P3:

Cx = sqrt((x2 − x1)² + (y2 − y1)²) / (xc2 − xc1)
Cy = sqrt((x3 − (x1 + x2)/2)² + (y3 − (y1 + y2)/2)²) / (yc3 − yc1)
Take the mean of the two as the scale factor in the Z direction:

Cz = (Cx + Cy) / 2

Multiplying the third-dimension coordinate array SZ by Cz then yields the third-dimension (Z-axis) coordinates of the two-dimensional face image.
Step (3) requires finding the rotation angles θx, θy, θz about the three axes, the scale parameters Sx, Sy, Sz, and the offsets Tx, Ty, Tz (for convenience of calculation Tz is set to 0). Given the initial face shape vector x and the target shape vector x′, both projections of the three-dimensional face onto the XY plane, a geometric transformation M is sought that minimizes the distance between M(x) and x′, i.e. that minimizes:

E(θx, θy, θz, Sx, Sy, Sz, Tx, Ty) = |M(x) − x′|²  (1)
Adopt the method for two step conversion and iterative approach to make formula (1) approach optimum parameter value.
The first step of the two-step transformation is rotation about the Z axis plus translation in the XY plane. Concretely: given two similar shapes x and x′, find a rotation angle θ, scale s, and translation t such that the geometric transformation X = M(s, θ)[x] + t minimizes the distance between the transformed x and x′:

E = (M(s, θ)[x] + t − x′)ᵀ (M(s, θ)[x] + t − x′)  (2)

where:

M(s, θ)[xi, yi]ᵀ = [(s·cosθ)·xi − (s·sinθ)·yi, (s·sinθ)·xi + (s·cosθ)·yi]ᵀ
t = (tx, ty, …, tx, ty)ᵀ

Let a = s·cosθ and b = s·sinθ, so that s² = a² + b² and θ = tan⁻¹(b/a). Then:

M(s, θ)[xi, yi]ᵀ = [[a, −b], [b, a]] [xi, yi]ᵀ + [tx, ty]ᵀ  (3)

Here a, b, tx, ty are the four pose parameters to be computed; choosing them to minimize E in formula (2) makes the computed transformation agree with the actual variation.
The second step of the two-step transformation rotates about the Y axis and the X axis respectively and then projects onto the XY plane. It is implemented as follows:

Let (Xz, Yz) be the coordinates after the in-plane rotation about the Z axis and translation (the z coordinate unchanged). First rotate about the Y axis by angle θy, with scale factor Sy and horizontal offset Tx′:

Xy = Xz·Sy·cosθy + Z·Sy·sinθy + Tx′;  Yy = Yz;  Zy = −Xz·Sy·sinθy + Z·Sy·cosθy  (6)

Then rotate about the X axis by angle θx, with scale factor Sx and vertical offset Ty′:

Xx = Xy;  Yx = Yy·Sx·cosθx − Zy·Sx·sinθx + Ty′;  Zx = Yy·Sx·sinθx + Zy·Sx·cosθx  (7)

Combining the two and projecting onto the XY plane gives the transformed coordinates:

Xe = Xx = Xz·Sy·cosθy + Z·Sy·sinθy + Tx′
Ye = Yx = Yz·Sx·cosθx − (−Xz·Sy·sinθy + Z·Sy·cosθy)·Sx·sinθx + Ty′  (8)
Let ay = Sy·cosθy, by = Sy·sinθy, ax = Sx·cosθx, bx = Sx·sinθx, and let (x′, y′) be the actual coordinates after transformation. Substituting equation (8) into formula (1) gives:

|Xz·ay + Z·by + Tx′ − x′|² + |Yz·ax − (−Xz·by + Z·ay)·bx + Ty′ − y′|²  (9)

Minimizing (9) by taking the partial derivatives with respect to the parameters yields:

Xz·ay + Z·by + Tx′ − x′ = 0  (10)
Yz·ax − (−Xz·by + Z·ay)·bx + Ty′ − y′ = 0  (11)
Multiple linear regression on formula (10) yields the parameter values ay, by, Tx′. With n feature points in total, the procedure is:

1) Compute the means

X̄z = (1/n) Σi Xzi
Z̄ = (1/n) Σi Zi
x̄′ = (1/n) Σi x′i

2) Compute the sums of squares and cross-products

S11 = Σi (Xzi − X̄z)²
S22 = Σi (Zi − Z̄)²
L = Σi (x′i − x̄′)²
S12 = S21 = Σi (Xzi − X̄z)(Zi − Z̄)
S10 = Σi (Xzi − X̄z)(x′i − x̄′)
S20 = Σi (Zi − Z̄)(x′i − x̄′)

3) Then

ay = (S10·S22 − S20·S12) / (S11·S22 − S12²)
by = (S20·S11 − S10·S21) / (S11·S22 − S12²)
Tx′ = x̄′ − ay·X̄z − by·Z̄
Substituting the values of ay, by, Tx′ into formula (11) and proceeding in the same way gives ax, bx, Ty′. Denoting the second-part transformation by M2, M2(ay, by, Tx′, ax, bx, Ty′)[Xzi, Yzi]ᵀ then comes closest to the target points.
The iterative approximation refers to obtaining the intermediate state, namely the coordinates (Xz, Yz) of the feature points in the XY plane after the Z-axis rotation and translation, by repeated iteration. With intermediate state (Xz, Yz), the concrete steps are:

1) Initially let (Xz, Yz) be the final values (x′, y′);
2) Substitute (Xz, Yz) into formula (1) in place of x′ and, following the current two-dimensional ASM transformation method, solve for the four transformation parameters a, b, tx, ty of formula (3);
3) Substitute a, b, tx, ty into formula (3) to obtain the intermediate state (Xz, Yz);
4) Apply the second-part transformation M2 to (Xz, Yz) to obtain the parameters ay, by, Tx′, ax, bx, Ty′;
5) Starting from (x′, y′), compute the inverse of M2 to obtain the intermediate state (X′z, Y′z), i.e.:

(X′z, Y′z) = M2(ay, by, Tx′, ax, bx, Ty′)⁻¹ (x′, y′)

6) Substitute (X′z, Y′z) into formula (1) and, by the two-dimensional ASM transformation method, solve again for the four parameters a, b, tx, ty of formula (3); then return to step 3) and iterate. Ten iterations suffice to obtain the ten parameters to the required accuracy.
The benefit of the invention relative to the prior art is: compared with the current two-dimensional search method, the three-dimensional ASM transformation search reflects changes in head pose more faithfully, and therefore searches for and approximates the feature points better.
Description of drawings
The invention is explained in further detail below with reference to the drawings and specific embodiments.
Fig. 1 is the flow chart of the three-dimensional face search method of the invention;
Fig. 2 is a schematic diagram of the feature points calibrated on a two-dimensional face image in the tests of the invention;
Fig. 3 compares the relative degree of approximation on training-set images when approximating given concrete feature-point coordinates;
Fig. 4 compares the relative degree of approximation on non-training-set images when approximating given concrete feature-point coordinates;
Fig. 5 compares the relative degree of approximation on training-set images when searching actual faces;
Fig. 6 compares the relative degree of approximation on non-training-set images when searching actual faces.
Embodiment
The flow chart of the three-dimensional transformation search method of the invention is shown in Fig. 1. Human heads and facial organs are similar in shape and relative position, and on this basis a standard three-dimensional face model can be constructed. The invention uses this standard model to determine the three-dimensional coordinates of the facial feature points, and then performs the three-dimensional-transformation ASM shape search on the basis of those coordinates; because the actual search takes place in a two-dimensional image, the result must finally be projected onto the two-dimensional plane. The three-dimensional search transformation proceeds as follows:
First, construct the standard three-dimensional face model (x1, y1, z1, x2, y2, z2, …, xn, yn, zn), with the front of the face as the XY plane. The feature points of the three-dimensional model should include all the feature points of the actual two-dimensional image (to keep the standard model adaptable, it may contain more than the two-dimensional image does).
Second, when searching a two-dimensional face image, determine proportionally, based on the standard three-dimensional model and according to the two-dimensional statistical model (base shape) (x1, y1, x2, y2, …, xn, yn) of the facial feature points of the ASM training set, the third-dimension coordinate zi of each feature point in the two-dimensional statistical model.
Third, during the shape-search transformation, rotate the base shape about the Z, X, and Y axes respectively, with scaling and translation; finally project the transformed result onto the XY plane and use the projection to approximate the shape being searched.
Three topics are described below: first, obtaining the three-dimensional coordinates of the two-dimensional facial feature points from the standard three-dimensional face model; second, obtaining the ten transformation parameters of the three-dimensional search by the two-step-transformation-plus-iteration method; third, testing the implementation.
(1) Obtaining the third-dimension coordinates of the two-dimensional statistical model of the facial feature points
In general, the facial organs are not only fixed in relative position on the two-dimensional plane; their heights in the third dimension (the facial relief) are also broadly consistent. The heights of individual organs do differ, for example some noses are higher than others, but if the central plane of the neck (the center of three-dimensional rotation) is taken as the reference plane for the third-dimension coordinate, the differences are very small and do not affect the search precision during approximation. The standard three-dimensional face model is likewise built from feature points, expressed in three-dimensional coordinate form (x1, y1, z1, x2, y2, z2, …, xn, yn, zn); it should contain all the feature points selected in the two-dimensional face. Fig. 2 shows the feature points selected on the two-dimensional images during the tests, 59 in all. The third-dimension coordinates of the two-dimensional statistical model of the facial feature points are obtained as follows:
(1) Build the third-dimension (Z-direction) coordinate array SZ = [z1, z2, …, zn] corresponding to the two-dimensional statistical model of the facial feature points in the ASM training set; its data come from the third-dimension coordinates of the standard three-dimensional face model and correspond one-to-one with the feature points in the ASM training set.
(2) Choose three feature points in the standard three-dimensional face model and record their plane coordinate values. In the actual tests the three points chosen were the outer corners of the two eyes and the tip of the nose, P1, P2, P3 (corresponding to points 13, 26, and 41 in Fig. 2). In the standard three-dimensional face model their plane coordinates are known, say (xc1, yc1), (xc2, yc2), (xc3, yc3); the corresponding three points in the two-dimensional statistical model of the facial feature points of the ASM training set are also known, say (x1, y1), (x2, y2), (x3, y3).
(3) Compute the horizontal scale factor Cx from P1 and P2, and the vertical scale factor Cy from the midpoint of P1P2 and P3:

Cx = sqrt((x2 − x1)² + (y2 − y1)²) / (xc2 − xc1)
Cy = sqrt((x3 − (x1 + x2)/2)² + (y3 − (y1 + y2)/2)²) / (yc3 − yc1)

Take the mean of the two as the scale factor in the Z direction:

Cz = (Cx + Cy) / 2

Multiplying the third-dimension coordinate array SZ by Cz then yields the third-dimension (Z-axis) coordinates of the two-dimensional face image.
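The scale-and-transfer computation can be sketched as follows; only the Cx, Cy, Cz formulas come from the text, while the function and argument names are illustrative:

```python
import numpy as np

def third_dim_coords(p1, p2, p3, pc1, pc2, pc3, SZ):
    """Scale the standard model's z array SZ onto a 2-D face shape.

    p1, p2, p3: outer eye corners and nose tip in the 2-D statistical
    model; pc1, pc2, pc3: the same three points in the standard 3-D model.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    (xc1, yc1), (xc2, yc2), (xc3, yc3) = pc1, pc2, pc3
    # horizontal scale factor from the eye-corner distance
    Cx = np.hypot(x2 - x1, y2 - y1) / (xc2 - xc1)
    # vertical scale factor from the eye-midpoint-to-nose distance
    Cy = np.hypot(x3 - (x1 + x2) / 2, y3 - (y1 + y2) / 2) / (yc3 - yc1)
    Cz = (Cx + Cy) / 2                   # Z scale: mean of the two
    return Cz * np.asarray(SZ, float)    # scaled third-dimension coordinates
```

A face twice the size of the standard model thus gets its z array doubled as well.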
(2) The three-dimensional transformation search
Given the initial face shape vector x and the target shape vector x′, both projections of the three-dimensional face onto the XY plane, the operations of the first section turn the initial shape vector x into a three-dimensional one; the three-dimensional search transformation then applies three-dimensional transformations to this augmented x in order to approximate the target shape vector x′. Compared with the current ASM search on facial images, the three-dimensional ASM search must find the rotation angles θx, θy, θz about the three axes, the scale parameters Sx, Sy, Sz, and the offsets Tx, Ty, Tz (since the result is finally projected onto the XY plane, Tz can be set to 0 for convenience of calculation); the geometric transformation M must minimize the distance between M(x) and x′, i.e. minimize:

E(θx, θy, θz, Sx, Sy, Sz, Tx, Ty) = |M(x) − x′|²  (1)

The conventional way to minimize this is to take the partial derivative of the left-hand side with respect to each parameter, set it to 0, and solve the resulting equations simultaneously. But because the parameters are numerous, and every parameter is coupled to the (x, y) coordinates of some point, the conventional approach can hardly recover the parameter values. A two-step transformation with iteration is therefore adopted to approach the optimal parameter values.
1) The two-step transformation
The whole three-dimensional transformation is divided into two parts: the first is rotation about the Z axis plus translation in the XY plane; the second rotates about the Y axis and the X axis respectively and projects onto the XY plane. The first part is identical to the current ASM shape-search process. Concretely: given two similar shapes x and x′, find a rotation angle θ, scale s, and translation t such that the geometric transformation X = M(s, θ)[x] + t minimizes the distance between the transformed x and x′:

E = (M(s, θ)[x] + t − x′)ᵀ (M(s, θ)[x] + t − x′)  (2)

where:

M(s, θ)[xi, yi]ᵀ = [(s·cosθ)·xi − (s·sinθ)·yi, (s·sinθ)·xi + (s·cosθ)·yi]ᵀ
t = (tx, ty, …, tx, ty)ᵀ

Let a = s·cosθ and b = s·sinθ, so that s² = a² + b² and θ = tan⁻¹(b/a). Then:

M(s, θ)[xi, yi]ᵀ = [[a, −b], [b, a]] [xi, yi]ᵀ + [tx, ty]ᵀ  (3)
Here a, b, tx, ty are the four pose parameters to be computed; choosing them to minimize E in formula (2) makes the computed transformation agree with the actual variation. The computation is identical to the current two-dimensional ASM transformation method. With n feature points, it runs as follows:

(1) Substituting formula (3) into formula (2) gives:

E(a, b, tx, ty) = |M(x) − x′|² = Σi [(a·xi − b·yi + tx − x′i)² + (b·xi + a·yi + ty − y′i)²]  (4)
(2) For convenience of notation, define the following mean values:

Sx = (1/n) Σi xi;  Sy = (1/n) Σi yi
Sx′ = (1/n) Σi x′i;  Sy′ = (1/n) Σi y′i
Sxx = (1/n) Σi xi²;  Syy = (1/n) Σi yi²
Sxy = (1/n) Σi xi·yi
Sxx′ = (1/n) Σi xi·x′i;  Syy′ = (1/n) Σi yi·y′i
Sxy′ = (1/n) Σi xi·y′i;  Syx′ = (1/n) Σi yi·x′i
(3) Taking the partial derivative of formula (4) with respect to each parameter and setting it to 0 gives:

a(Sxx + Syy) + tx·Sx + ty·Sy = Sxx′ + Syy′
b(Sxx + Syy) + ty·Sx − tx·Sy = Sxy′ − Syx′
a·Sx − b·Sy + tx = Sx′
b·Sx + a·Sy + ty = Sy′  (5)
(4) Solving the system (5) simultaneously: to simplify the calculation, move the center of the initial state x to the origin, so that Sx = Sy = 0. The values of the four parameters are then:

tx = Sx′;  ty = Sy′
a = (Sxx′ + Syy′) / (Sxx + Syy)
b = (Sxy′ − Syx′) / (Sxx + Syy)
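The closed-form solution above can be sketched as follows; the function and argument names are illustrative, and the initial shape is centered inside the function so that Sx = Sy = 0 as assumed:

```python
import numpy as np

def pose_2d(x, y, xp, yp):
    """Solve system (5) for a, b, t_x, t_y after moving the centroid of
    the initial shape (x, y) to the origin; (xp, yp) is the target shape.
    """
    x = np.asarray(x, float); y = np.asarray(y, float)
    xp = np.asarray(xp, float); yp = np.asarray(yp, float)
    x = x - x.mean(); y = y - y.mean()            # center: S_x = S_y = 0
    d = (x * x).mean() + (y * y).mean()           # S_xx + S_yy
    a = ((x * xp).mean() + (y * yp).mean()) / d   # (S_xx' + S_yy') / d
    b = ((x * yp).mean() - (y * xp).mean()) / d   # (S_xy' - S_yx') / d
    return a, b, xp.mean(), yp.mean()             # t_x = S_x', t_y = S_y'
```

Applying a known similarity transform to a shape and fitting it back recovers the pose parameters exactly.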
The second part of the transformation is implemented as follows:

Let (Xz, Yz) be the coordinates after the in-plane rotation about the Z axis and translation (the z coordinate unchanged). First rotate about the Y axis by angle θy, with scale factor Sy and horizontal offset Tx′:

Xy = Xz·Sy·cosθy + Z·Sy·sinθy + Tx′;  Yy = Yz;  Zy = −Xz·Sy·sinθy + Z·Sy·cosθy  (6)

Then rotate about the X axis by angle θx, with scale factor Sx and vertical offset Ty′:

Xx = Xy;  Yx = Yy·Sx·cosθx − Zy·Sx·sinθx + Ty′;  Zx = Yy·Sx·sinθx + Zy·Sx·cosθx  (7)

Combining the two and projecting onto the XY plane gives the transformed coordinates:

Xe = Xx = Xz·Sy·cosθy + Z·Sy·sinθy + Tx′
Ye = Yx = Yz·Sx·cosθx − (−Xz·Sy·sinθy + Z·Sy·cosθy)·Sx·sinθx + Ty′  (8)
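Equation (8) is a direct elementwise computation; it can be transcribed as follows (the function name is illustrative, and the arguments use the substitutions ay = Sy·cosθy, by = Sy·sinθy, ax = Sx·cosθx, bx = Sx·sinθx introduced just below):

```python
import numpy as np

def project_xy(Xz, Yz, Z, ay, by, ax, bx, Txp, Typ):
    """Equation (8): rotate about Y, then about X, then keep only the
    XY-plane projection of the result."""
    Xe = Xz * ay + Z * by + Txp
    Ye = Yz * ax - (-Xz * by + Z * ay) * bx + Typ
    return Xe, Ye
```

With θy = θx = 0 and unit scales (ay = ax = 1, by = bx = 0) the projection leaves the shape unchanged, while a 90° Y rotation (ay = 0, by = 1) maps depth Z onto the horizontal axis.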
Let ay = Sy·cosθy, by = Sy·sinθy, ax = Sx·cosθx, bx = Sx·sinθx, and let (x′, y′) be the actual coordinates after transformation. Substituting equation (8) into formula (1) gives:

|Xz·ay + Z·by + Tx′ − x′|² + |Yz·ax − (−Xz·by + Z·ay)·bx + Ty′ − y′|²  (9)

Minimizing (9) by taking the partial derivatives with respect to the parameters yields:

Xz·ay + Z·by + Tx′ − x′ = 0  (10)
Yz·ax − (−Xz·by + Z·ay)·bx + Ty′ − y′ = 0  (11)
Multiple linear regression on formula (10) yields the parameter values ay, by, Tx′. With n feature points in total, the procedure is:

(1) Compute the means

X̄z = (1/n) Σi Xzi
Z̄ = (1/n) Σi Zi
x̄′ = (1/n) Σi x′i

(2) Compute the sums of squares and cross-products

S11 = Σi (Xzi − X̄z)²
S22 = Σi (Zi − Z̄)²
L = Σi (x′i − x̄′)²
S12 = S21 = Σi (Xzi − X̄z)(Zi − Z̄)
S10 = Σi (Xzi − X̄z)(x′i − x̄′)
S20 = Σi (Zi − Z̄)(x′i − x̄′)

(3) Then

ay = (S10·S22 − S20·S12) / (S11·S22 − S12²)
by = (S20·S11 − S10·S21) / (S11·S22 − S12²)
Tx′ = x̄′ − ay·X̄z − by·Z̄
Substituting the values of ay, by, Tx′ into formula (11) and proceeding in the same way gives ax, bx, Ty′. Denoting the second-part transformation by M2, M2(ay, by, Tx′, ax, bx, Ty′)[Xzi, Yzi]ᵀ then comes closest to the target points.
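The regression of steps (1)-(3) can be sketched as follows; the function name is illustrative, and the same routine solves equation (11) when fed the appropriate regressors:

```python
import numpy as np

def regress_pose(Xz, Z, xp):
    """Two-variable least squares for equation (10): returns a_y, b_y, T_x'."""
    Xz = np.asarray(Xz, float); Z = np.asarray(Z, float)
    xp = np.asarray(xp, float)
    Xm, Zm, xm = Xz.mean(), Z.mean(), xp.mean()
    S11 = ((Xz - Xm) ** 2).sum()
    S22 = ((Z - Zm) ** 2).sum()
    S12 = ((Xz - Xm) * (Z - Zm)).sum()          # = S21
    S10 = ((Xz - Xm) * (xp - xm)).sum()
    S20 = ((Z - Zm) * (xp - xm)).sum()
    den = S11 * S22 - S12 ** 2
    ay = (S10 * S22 - S20 * S12) / den
    by = (S20 * S11 - S10 * S12) / den
    Txp = xm - ay * Xm - by * Zm                # intercept
    return ay, by, Txp
```

On data that satisfies equation (10) exactly, the regression recovers the parameters exactly.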
2) Iterative approximation

The key to the second-step transformation is obtaining the coordinates (Xz, Yz) of the feature points in the XY plane after the Z-axis rotation and translation. Because this intermediate state is unknown, the design obtains it, and finally the actual transformation parameters, by repeated iterative approximation. With intermediate state (Xz, Yz), the concrete steps are:

(1) Initially let (Xz, Yz) be the final values (x′, y′);
(2) Substitute (Xz, Yz) into formula (4) and, following the current two-dimensional ASM transformation method, solve for the four transformation parameters a, b, tx, ty of formula (4);
(3) Substitute a, b, tx, ty into formula (3) to obtain the intermediate state (Xz, Yz);
(4) Apply the second-part transformation M2 to (Xz, Yz) to obtain the parameters ay, by, Tx′, ax, bx, Ty′;
(5) Starting from (x′, y′), compute the inverse of M2 to obtain the intermediate state (X′z, Y′z), i.e.:

(X′z, Y′z) = M2(ay, by, Tx′, ax, bx, Ty′)⁻¹ (x′, y′)

(6) Substitute (X′z, Y′z) into formula (4) and solve again by the two-dimensional ASM transformation method for the four parameters a, b, tx, ty; then return to step (3) and loop. In general, ten iterations reach the required accuracy.
The above iterative method finally yields the ten parameters of the three-dimensional image transformation, namely the 4 parameters of the first transformation (rotation about the Z axis plus translation in the XY plane) and the 6 parameters of the second (rotation about the Y axis and the X axis respectively, followed by projection onto the XY plane).
(3) Implementation tests
A feature-point extraction system using the three-dimensional ASM method was tested; on non-training-set data it is considerably more accurate than the two-dimensional ASM method. Two types of test were carried out: first, given concrete feature-point coordinates, both methods approximate them and the closeness is compared; second, given a concrete face, both methods run the same search algorithm for the feature points and the differences between the search results and the true feature points are compared. Each type of test covers both training-set and non-training-set data. The results show that on training-set data the improvement brought by this method is not obvious, but on non-training-set data it is considerable. Since in practical applications most image data belongs to no training set, the method has high utility.
To build the test system, 100 face images in different poses were selected as training data, with another 30 images as test data; the image resolution was 125×150, and feature points were calibrated manually on all images as shown in Fig. 2, 59 points per image. To compare the two methods more exactly, a relative degree of approximation was defined. Let D1 be the mean distance between the feature points computed by the three-dimensional transformation search of the invention and the actual calibrated points, and D2 the corresponding mean distance for the conventional two-dimensional transformation search. The relative degree of approximation RN is:

RN = (D2 − D1) / D1 × 100%
A positive RN means the three-dimensional approximation is better, a negative RN that the two-dimensional one is; its magnitude indicates the degree.
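The metric is a one-liner (the function name is illustrative):

```python
def relative_approximation(D1, D2):
    """RN = (D2 - D1) / D1 * 100: D1 is the mean point error of the 3-D
    search, D2 that of the 2-D search; positive RN favors the 3-D method."""
    return (D2 - D1) / D1 * 100
```

For example, D1 = 2 px against D2 = 3 px gives RN = 50%, a clear win for the three-dimensional search.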
1. Tests approximating concrete coordinates

Twelve images were chosen from the training set, their coordinates substituted directly and approximated with both methods; the results are shown in Fig. 3. As a rule, both methods approximate equally well. Fig. 4 shows the relative degree of approximation when directly approximating images outside the training set; in most cases the three-dimensional method comes closer to the target values.
2. Tests searching concrete faces

For the search of concrete faces, 15 images were chosen from the training set and searched; the results are shown in Fig. 5. They are basically consistent with expectation, and the difference between the two methods is not obvious. Fig. 6 gives the results after search matching on the 30 non-training-set images: the three-dimensional transformation is clearly better than the two-dimensional one, and the relative approximation effect is better than when directly approximating given targets, because during the approximation the target may be adjusted repeatedly.

Claims (9)

1. A three-dimensional transformation search method for extracting feature points from a face image, characterized in that: first, a standard three-dimensional model of the human face is constructed; second, the third-dimension coordinates of the two-dimensional facial feature points are obtained on the basis of this standard three-dimensional model; finally, a three-dimensional-transformation ASM shape search is performed on the basis of the three-dimensional coordinates.
2. The three-dimensional transformation search method for feature point extraction in a face image according to claim 1, characterized by comprising the steps of:
(1) constructing a standard three-dimensional model of the human face, the model containing the three-dimensional coordinates (x, y, z) of the facial feature points, wherein the front of the face lies in the XY plane;
(2) based on the standard three-dimensional model, determining proportionally the third-dimension coordinate z_i of each feature point in the two-dimensional statistical model (x_1, y_1, x_2, y_2, ..., x_n, y_n) of the facial feature points of the ASM training set;
(3) rotating the base shape containing the three-dimensional coordinates about the Z axis, the X axis and the Y axis respectively, scaling and translating it, finally projecting the transformed result onto the XY plane, and using the projection to approximate the shape currently being searched.
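The rotate-scale-translate-project operation of step (3) can be sketched as follows. This is a simplified illustration, not the patent's exact parameterization: it applies the rotations in the order Z, then Y, then X (the order used in the later claims), and uses a single overall scale with an in-plane translation, whereas the claims allow per-axis zoom factors; all names are ours.

```python
import numpy as np

def project_shape(points3d, theta_x=0.0, theta_y=0.0, theta_z=0.0,
                  scale=1.0, t=(0.0, 0.0)):
    """Rotate an (n, 3) base shape about Z, then Y, then X, scale and
    translate it, and orthographically project it onto the XY plane."""
    cx, sx = np.cos(theta_x), np.sin(theta_x)
    cy, sy = np.cos(theta_y), np.sin(theta_y)
    cz, sz = np.cos(theta_z), np.sin(theta_z)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    R = Rx @ Ry @ Rz                         # Z rotation applied first
    p = scale * (np.asarray(points3d, float) @ R.T)
    return p[:, :2] + np.asarray(t, float)   # dropping Z projects onto XY
```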
3. The three-dimensional transformation search method for feature point extraction in a face image according to claim 2, characterized in that the standard three-dimensional face model of step (1) is constructed by direct measurement: first, more than 50 faces are selected arbitrarily from the training set, and the three-dimensional coordinates (x_1, y_1, z_1, x_2, y_2, z_2, ..., x_n, y_n, z_n) of each feature point are measured, the third-dimension (z-axis) coordinate being measured from the central plane of the neck taken as the zero plane; the data are then normalized and averaged, which yields the standard three-dimensional face model.
4. The three-dimensional transformation search method for feature point extraction in a face image according to claim 2, characterized in that step (2) is realized as follows:
1) build the third-dimension (Z-direction) coordinate array SZ = [z_1, z_2, ..., z_n] corresponding to the two-dimensional statistical model of the facial feature points of the ASM training set; its data come from the third-dimension coordinates of the standard three-dimensional face model and correspond one-to-one with the facial feature points of the ASM training set;
2) choose three feature points in the standard three-dimensional face model and record their plane coordinates (x, y) for the subsequent calculation; the three chosen feature points are the outer corners of the two eyes and the tip of the nose (P1, P2, P3). In the standard three-dimensional face model the plane coordinates of these three points are known, denoted (xc1, yc1), (xc2, yc2), (xc3, yc3); the coordinates of the corresponding three points in the two-dimensional statistical model of the facial feature points of the ASM training set are also known, denoted (x1, y1), (x2, y2), (x3, y3);
3) compute the horizontal zoom factor C_x from points P1 and P2, and the vertical zoom factor C_y from the midpoint of P1P2 and point P3, as follows:
C_x = sqrt((x2 - x1)^2 + (y2 - y1)^2) / (xc2 - xc1)
C_y = sqrt((x3 - (x1 + x2)/2)^2 + (y3 - (y1 + y2)/2)^2) / (yc3 - yc1)
The zoom factor in the Z direction is taken as the mean of the two, i.e.:
C_z = (C_x + C_y)/2
Multiplying C_z with the third-dimension coordinate array SZ then gives the third-dimension coordinates of the two-dimensional face image, i.e. the Z-axis coordinates.
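A sketch of this step under the stated conventions (P1, P2 are the outer eye corners, P3 the nose tip; the helper name is ours):

```python
import numpy as np

def z_from_standard_model(p1, p2, p3, pc1, pc2, pc3, sz):
    """Estimate per-landmark depth for a 2D face from the standard 3D
    model's depth array `sz`.  p1..p3 are the 2D statistical model's
    outer eye corners and nose tip; pc1..pc3 are the same three points
    in the standard model's XY plane."""
    p1, p2, p3 = (np.asarray(p, float) for p in (p1, p2, p3))
    # Horizontal zoom factor C_x from the eye-corner distance.
    cx = np.linalg.norm(p2 - p1) / (pc2[0] - pc1[0])
    # Vertical zoom factor C_y from the eye midpoint to the nose tip.
    mid = (p1 + p2) / 2.0
    cy = np.linalg.norm(p3 - mid) / (pc3[1] - pc1[1])
    # Depth zoom factor C_z is the mean of the two.
    cz = (cx + cy) / 2.0
    return cz * np.asarray(sz, float)
```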
5. The three-dimensional transformation search method for feature point extraction in a face image according to claim 2, characterized in that step (3) requires finding the rotation angles θ_x, θ_y, θ_z about the three axes, the zoom parameters S_x, S_y, S_z, and the offsets T_x, T_y, T_z along the three directions; for convenience of calculation T_z is set to 0. Given the initial shape vector x of the face and the target shape vector x′, both being projections of the three-dimensional face onto the XY plane, a geometric transformation M is performed so as to minimize the distance between x and x′, i.e. to minimize:
E(θ_x, θ_y, θ_z, S_x, S_y, S_z, T_x, T_y) = |M(x) - x′|^2    (1)
6. The three-dimensional transformation search method for feature point extraction in a face image according to claim 5, characterized in that a two-step transformation with iterative approximation is used to drive formula (1) to the optimal parameter values.
7. The three-dimensional transformation search method for feature point extraction in a face image according to claim 6, characterized in that the first step of the two-step transformation is rotation about the Z axis, scaling and translation in the XY plane. The procedure is as follows: given two similar shapes x and x′, find the rotation angle θ, scale s and translation t such that the geometric transformation X = M(s, θ)[x] + t minimizes the distance between the transformed x and x′:
E = (M(s, θ)[x] + t - x′)^T (M(s, θ)[x] + t - x′)    (2)
where: M(s, θ)[x_i, y_i]^T = [(s·cosθ)·x_i - (s·sinθ)·y_i, (s·sinθ)·x_i + (s·cosθ)·y_i]^T
t = (t_x, t_y, ..., t_x, t_y)^T
Let a = s·cosθ and b = s·sinθ, so that s^2 = a^2 + b^2 and θ = tan^-1(b/a).
Then: M(s, θ)[x_i, y_i]^T = [[a, -b], [b, a]]·[x_i, y_i]^T + [t_x, t_y]^T    (3)
where a, b, t_x, t_y are the four pose parameters to be computed; these four parameters are chosen so as to minimize the value of E in formula (2), making the computed transformation consistent with the actual variation.
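The least-squares solution for a, b, t_x, t_y in formula (2) has a closed form; the sketch below uses the standard ASM shape-alignment result, obtained after centering both shapes on their centroids (the function name is ours):

```python
import numpy as np

def fit_similarity_2d(src, dst):
    """Least-squares a, b, tx, ty minimising |M(s,theta)[x] + t - x'|^2
    of equation (2), with a = s*cos(theta), b = s*sin(theta).
    Closed form after centering both (n, 2) shapes on their centroids."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    sm, dm = src.mean(axis=0), dst.mean(axis=0)
    u, v = (src - sm).T          # centred source coordinates
    x, y = (dst - dm).T          # centred target coordinates
    norm = (u * u + v * v).sum()
    a = (u * x + v * y).sum() / norm
    b = (u * y - v * x).sum() / norm
    # Translation maps the transformed source centroid onto the target centroid.
    tx, ty = dm - np.array([a * sm[0] - b * sm[1], b * sm[0] + a * sm[1]])
    return a, b, tx, ty
```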
8. The three-dimensional transformation search method for feature point extraction in a face image according to claim 6, characterized in that the second step of the two-step transformation is rotation about the Y axis and the X axis respectively, followed by projection onto the XY plane; the implementation procedure is as follows:
Let (X_z, Y_z) be the coordinates after the horizontal rotation (about the Z axis) and the displacement (the z coordinate being unchanged). The shape is first rotated about the Y axis by angle θ_y, with zoom factor S_y and horizontal offset T_x′, giving:
X_y = X_z·S_y·cosθ_y + Z·S_y·sinθ_y + T_x′;  Y_y = Y_z;  Z_y = -X_z·S_y·sinθ_y + Z·S_y·cosθ_y    (6)
It is then rotated about the X axis by angle θ_x, with zoom factor S_x and vertical offset T_y′, giving:
X_x = X_y;  Y_x = Y_y·S_x·cosθ_x - Z_y·S_x·sinθ_x + T_y′;  Z_x = Y_y·S_x·sinθ_x + Z_y·S_x·cosθ_x    (7)
Combining the two formulas and projecting onto the XY plane yields the transformed coordinates:
X_e = X_x = X_z·S_y·cosθ_y + Z·S_y·sinθ_y + T_x′
Y_e = Y_x = Y_z·S_x·cosθ_x - (-X_z·S_y·sinθ_y + Z·S_y·cosθ_y)·S_x·sinθ_x + T_y′    (8)
Let a_y = S_y·cosθ_y, b_y = S_y·sinθ_y, a_x = S_x·cosθ_x, b_x = S_x·sinθ_x in equation (8), and let (x′, y′) denote the actual coordinates after the transformation; substituting equation (8) into formula (1) gives:
|X_z·a_y + Z·b_y + T_x′ - x′|^2 + |Y_z·a_x - (-X_z·b_y + Z·a_y)·b_x + T_y′ - y′|^2    (9)
Minimizing formula (9) by setting the partial derivatives with respect to the parameters to zero yields:
X_z·a_y + Z·b_y + T_x′ - x′ = 0    (10)
Y_z·a_x - (-X_z·b_y + Z·a_y)·b_x + T_y′ - y′ = 0    (11)
Multiple linear regression analysis of formula (10) yields the parameter values a_y, b_y and T_x′. With n feature points in total, the procedure is as follows:
1) compute the means:
X̄_z = (1/n) Σ_{i=1..n} X_zi
Z̄ = (1/n) Σ_{i=1..n} Z_i
x̄′ = (1/n) Σ_{i=1..n} x_i′
2) compute the sums of squares and cross-products:
S_11 = Σ_{i=1..n} (X_zi - X̄_z)^2
S_22 = Σ_{i=1..n} (Z_i - Z̄)^2
L = Σ_{i=1..n} (x_i′ - x̄′)^2
S_12 = S_21 = Σ_{i=1..n} (X_zi - X̄_z)(Z_i - Z̄)
S_10 = Σ_{i=1..n} (X_zi - X̄_z)(x_i′ - x̄′)
S_20 = Σ_{i=1..n} (Z_i - Z̄)(x_i′ - x̄′)
3) a_y = (S_10·S_22 - S_20·S_12) / (S_11·S_22 - S_12^2)
b_y = (S_20·S_11 - S_10·S_21) / (S_11·S_22 - S_12^2)
T_x′ = x̄′ - a_y·X̄_z - b_y·Z̄
Substituting the values of a_y, b_y and T_x′ into formula (11) and applying the same method yields a_x, b_x and T_y′. Denoting the second-part transformation by M_2, then:
M_2(a_y, b_y, T_x′, a_x, b_x, T_y′)[X_zi, Y_zi]^T is closest to the target point.
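The regression of steps 1)-3) maps directly to code; a sketch (names are ours):

```python
import numpy as np

def regress_ay_by_tx(Xz, Z, xp):
    """Solve equation (10) by two-variable linear regression:
    x' ~ a_y * Xz + b_y * Z + Tx', following steps 1)-3) above."""
    Xz, Z, xp = (np.asarray(v, float) for v in (Xz, Z, xp))
    Xm, Zm, xm = Xz.mean(), Z.mean(), xp.mean()
    # Sums of squares and cross-products about the means.
    s11 = ((Xz - Xm) ** 2).sum()
    s22 = ((Z - Zm) ** 2).sum()
    s12 = ((Xz - Xm) * (Z - Zm)).sum()
    s10 = ((Xz - Xm) * (xp - xm)).sum()
    s20 = ((Z - Zm) * (xp - xm)).sum()
    det = s11 * s22 - s12 ** 2
    a_y = (s10 * s22 - s20 * s12) / det
    b_y = (s20 * s11 - s10 * s12) / det
    tx = xm - a_y * Xm - b_y * Zm
    return a_y, b_y, tx
```

The same routine, applied to equation (11) with the recovered a_y, b_y, T_x′ substituted in, would give a_x, b_x and T_y′.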
9. The three-dimensional transformation search method for feature point extraction in a face image according to claim 6, characterized in that the iterative approximation refers to repeatedly approximating the intermediate state, namely the coordinates (X_z, Y_z) of the feature points in the XY plane after the rotation about the Z axis and the translation.
With the intermediate state denoted (X_z, Y_z), the concrete steps are as follows:
1) initially set (X_z, Y_z) to the final values (x′, y′);
2) substitute (X_z, Y_z) into formula (1) in place of x′, and solve for the four transformation parameters a, b, t_x, t_y of formula (3) by the current two-dimensional ASM method;
3) substitute the parameters a, b, t_x, t_y into formula (3) to obtain the intermediate state (X_z, Y_z);
4) apply the second-part transformation M_2 to (X_z, Y_z) to obtain the parameters a_y, b_y, T_x′, a_x, b_x, T_y′;
5) starting from (x′, y′), compute the inverse of M_2 to obtain the intermediate state (X′_z, Y′_z), i.e.:
(X′_z, Y′_z) = M_2(a_y, b_y, T_x′, a_x, b_x, T_y′)^-1 (x′, y′)
6) substitute (X′_z, Y′_z) into formula (1) and solve for the four transformation parameters a, b, t_x, t_y of formula (3) by the two-dimensional ASM transformation method; then return to step 3) and iterate; ten iterations suffice to obtain the 10 parameters to the required accuracy.
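Steps 4) and 5) hinge on the transformation M_2 of claim 8 and its inverse; assuming the depth Z of each point is known and held fixed, both directions have closed forms. A sketch (function names are ours):

```python
def m2_forward(Xz, Yz, Z, ay, by, tx, ax, bx, ty):
    """Second-part transformation M_2, i.e. equation (8) with
    a_y = Sy*cos(theta_y), b_y = Sy*sin(theta_y), etc."""
    xe = Xz * ay + Z * by + tx
    ye = Yz * ax - (-Xz * by + Z * ay) * bx + ty
    return xe, ye

def m2_inverse(xe, ye, Z, ay, by, tx, ax, bx, ty):
    """Invert M_2 to recover the intermediate state (X'_z, Y'_z) of
    step 5), assuming the depth Z of the point is known and fixed."""
    Xz = (xe - Z * by - tx) / ay
    Yz = (ye + (-Xz * by + Z * ay) * bx - ty) / ax
    return Xz, Yz
```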
CN 200910037867 2009-03-12 2009-03-12 Three-dimensional transformation search method for extracting characteristic points in human face image Expired - Fee Related CN101499132B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200910037867 CN101499132B (en) 2009-03-12 2009-03-12 Three-dimensional transformation search method for extracting characteristic points in human face image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200910037867 CN101499132B (en) 2009-03-12 2009-03-12 Three-dimensional transformation search method for extracting characteristic points in human face image

Publications (2)

Publication Number Publication Date
CN101499132A true CN101499132A (en) 2009-08-05
CN101499132B CN101499132B (en) 2013-05-01

Family

ID=40946200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200910037867 Expired - Fee Related CN101499132B (en) 2009-03-12 2009-03-12 Three-dimensional transformation search method for extracting characteristic points in human face image

Country Status (1)

Country Link
CN (1) CN101499132B (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104899563A (en) * 2015-05-29 2015-09-09 深圳大学 Two-dimensional face key feature point positioning method and system
CN105404861A (en) * 2015-11-13 2016-03-16 中国科学院重庆绿色智能技术研究院 Training and detecting methods and systems for key human facial feature point detection model
CN105426929A (en) * 2014-09-19 2016-03-23 佳能株式会社 Object shape alignment device, object processing device and methods thereof
CN105989326A (en) * 2015-01-29 2016-10-05 北京三星通信技术研究有限公司 Method and device for determining three-dimensional position information of human eyes
CN106022281A (en) * 2016-05-27 2016-10-12 广州帕克西软件开发有限公司 Face data measurement method and system
CN106203248A (en) * 2014-09-05 2016-12-07 三星电子株式会社 Method and apparatus for face recognition
CN106503682A (en) * 2016-10-31 2017-03-15 北京小米移动软件有限公司 Crucial independent positioning method and device in video data
CN106845327A (en) * 2015-12-07 2017-06-13 展讯通信(天津)有限公司 The training method of face alignment model, face alignment method and device
CN107016319A (en) * 2016-01-27 2017-08-04 北京三星通信技术研究有限公司 A kind of key point localization method and device
CN107341784A (en) * 2016-04-29 2017-11-10 掌赢信息科技(上海)有限公司 A kind of expression moving method and electronic equipment
CN108932459A (en) * 2017-05-26 2018-12-04 富士通株式会社 Face recognition model training method and device and recognition algorithms
CN108985220A (en) * 2018-07-11 2018-12-11 腾讯科技(深圳)有限公司 A kind of face image processing process, device and storage medium
CN109606728A (en) * 2019-01-24 2019-04-12 中国人民解放军国防科技大学 Method and system for designing precursor of hypersonic aircraft
CN109692476A (en) * 2018-12-25 2019-04-30 广州华多网络科技有限公司 Game interaction method, apparatus, electronic equipment and storage medium
CN110032941A (en) * 2019-03-15 2019-07-19 深圳英飞拓科技股份有限公司 Facial image detection method, facial image detection device and terminal device
CN110520056A (en) * 2017-04-07 2019-11-29 国立研究开发法人产业技术综合研究所 Measuring instrument installation auxiliary device and measuring instrument install householder method
CN112052847A (en) * 2020-08-17 2020-12-08 腾讯科技(深圳)有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4793698B2 (en) * 2005-06-03 2011-10-12 日本電気株式会社 Image processing system, three-dimensional shape estimation system, object position / posture estimation system, and image generation system

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106203248A (en) * 2014-09-05 2016-12-07 三星电子株式会社 Method and apparatus for face recognition
CN105426929A (en) * 2014-09-19 2016-03-23 佳能株式会社 Object shape alignment device, object processing device and methods thereof
CN105426929B (en) * 2014-09-19 2018-11-27 佳能株式会社 Object shapes alignment device, object handles devices and methods therefor
CN105989326A (en) * 2015-01-29 2016-10-05 北京三星通信技术研究有限公司 Method and device for determining three-dimensional position information of human eyes
CN105989326B (en) * 2015-01-29 2020-03-03 北京三星通信技术研究有限公司 Method and device for determining three-dimensional position information of human eyes
CN104899563B (en) * 2015-05-29 2020-01-07 深圳大学 Two-dimensional face key feature point positioning method and system
CN104899563A (en) * 2015-05-29 2015-09-09 深圳大学 Two-dimensional face key feature point positioning method and system
CN105404861B (en) * 2015-11-13 2018-11-02 中国科学院重庆绿色智能技术研究院 Training, detection method and the system of face key feature points detection model
CN105404861A (en) * 2015-11-13 2016-03-16 中国科学院重庆绿色智能技术研究院 Training and detecting methods and systems for key human facial feature point detection model
CN106845327A (en) * 2015-12-07 2017-06-13 展讯通信(天津)有限公司 The training method of face alignment model, face alignment method and device
CN106845327B (en) * 2015-12-07 2019-07-02 展讯通信(天津)有限公司 Training method, face alignment method and the device of face alignment model
CN107016319A (en) * 2016-01-27 2017-08-04 北京三星通信技术研究有限公司 A kind of key point localization method and device
CN107016319B (en) * 2016-01-27 2021-03-05 北京三星通信技术研究有限公司 Feature point positioning method and device
CN107341784A (en) * 2016-04-29 2017-11-10 掌赢信息科技(上海)有限公司 A kind of expression moving method and electronic equipment
WO2017202191A1 (en) * 2016-05-27 2017-11-30 广州帕克西软件开发有限公司 Facial data measurement method and system
CN106022281A (en) * 2016-05-27 2016-10-12 广州帕克西软件开发有限公司 Face data measurement method and system
CN106503682B (en) * 2016-10-31 2020-02-04 北京小米移动软件有限公司 Method and device for positioning key points in video data
CN106503682A (en) * 2016-10-31 2017-03-15 北京小米移动软件有限公司 Crucial independent positioning method and device in video data
CN110520056A (en) * 2017-04-07 2019-11-29 国立研究开发法人产业技术综合研究所 Measuring instrument installation auxiliary device and measuring instrument install householder method
CN110520056B (en) * 2017-04-07 2022-08-05 国立研究开发法人产业技术综合研究所 Surveying instrument installation assisting device and surveying instrument installation assisting method
CN108932459A (en) * 2017-05-26 2018-12-04 富士通株式会社 Face recognition model training method and device and recognition algorithms
CN108985220A (en) * 2018-07-11 2018-12-11 腾讯科技(深圳)有限公司 A kind of face image processing process, device and storage medium
CN108985220B (en) * 2018-07-11 2022-11-04 腾讯科技(深圳)有限公司 Face image processing method and device and storage medium
CN109692476A (en) * 2018-12-25 2019-04-30 广州华多网络科技有限公司 Game interaction method, apparatus, electronic equipment and storage medium
CN109692476B (en) * 2018-12-25 2022-07-01 广州方硅信息技术有限公司 Game interaction method and device, electronic equipment and storage medium
CN109606728B (en) * 2019-01-24 2019-10-29 中国人民解放军国防科技大学 Method and system for designing precursor of hypersonic aircraft
CN109606728A (en) * 2019-01-24 2019-04-12 中国人民解放军国防科技大学 Method and system for designing precursor of hypersonic aircraft
CN110032941A (en) * 2019-03-15 2019-07-19 深圳英飞拓科技股份有限公司 Facial image detection method, facial image detection device and terminal device
CN110032941B (en) * 2019-03-15 2022-06-17 深圳英飞拓科技股份有限公司 Face image detection method, face image detection device and terminal equipment
CN112052847A (en) * 2020-08-17 2020-12-08 腾讯科技(深圳)有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN112052847B (en) * 2020-08-17 2024-03-26 腾讯科技(深圳)有限公司 Image processing method, apparatus, electronic device, and computer-readable storage medium

Also Published As

Publication number Publication date
CN101499132B (en) 2013-05-01

Similar Documents

Publication Publication Date Title
CN101499132B (en) Three-dimensional transformation search method for extracting characteristic points in human face image
Eisenberger et al. Smooth shells: Multi-scale shape registration with functional maps
Khoury et al. Learning compact geometric features
CN102999942B (en) Three-dimensional face reconstruction method
Robles-Kelly et al. A Riemannian approach to graph embedding
EP2048599B1 (en) System and method for 3D object recognition
Ghezelghieh et al. Learning camera viewpoint using CNN to improve 3D body pose estimation
CN110363849A (en) A kind of interior three-dimensional modeling method and system
CN104346824A (en) Method and device for automatically synthesizing three-dimensional expression based on single facial image
CN105701455A (en) Active shape model (ASM) algorithm-based face characteristic point acquisition and three dimensional face modeling method
Kroemer et al. Point cloud completion using extrusions
Wu et al. On signature invariants for effective motion trajectory recognition
Shamai et al. Efficient inter-geodesic distance computation and fast classical scaling
CN107507218B (en) Component movement prediction method based on static frame
Leymarie et al. The SHAPE Lab: New technology and software for archaeologists
Bronstein et al. Feature-based methods in 3D shape analysis
Chen et al. Learning shape priors for single view reconstruction
Spek et al. A fast method for computing principal curvatures from range images
Lee et al. Noniterative 3D face reconstruction based on photometric stereo
Mian et al. 3D face recognition
SANDOVAL et al. Robust sphere detection in unorganized 3D point clouds using an efficient Hough voting scheme based on sliding voxels
Sharma Representation, segmentation and matching of 3D visual shapes using graph laplacian and heat-kernel
Deng et al. Adaptive feature selection based on reconstruction residual and accurately located landmarks for expression-robust 3D face recognition
Einecke et al. Direct surface fitting
Ma et al. Visual reconstruction method of architectural space under laser point cloud big data

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C53 Correction of patent of invention or patent application
CB03 Change of inventor or designer information

Inventor after: Yi Faling

Inventor after: Xiong Wei

Inventor after: Huang Zhanpeng

Inventor after: Zhao Jie

Inventor before: Yi Faling

COR Change of bibliographic data

Free format text: CORRECT: INVENTOR; FROM: YI FALING TO: YI FALING XIONG WEI HUANG ZHANPENG ZHAO JIE

C14 Grant of patent or utility model
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: No. 280 Outer Ring East Road, University City, Guangzhou, Guangdong 510006

Patentee after: Guangdong Pharmaceutical University

Address before: No. 280 Outer Ring East Road, University City, Guangzhou, Guangdong 510006

Patentee before: Guangdong Pharmaceutical University

CP03 Change of name, title or address
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20170605

Address after: Room 207, Building 1, Guangdong Pharmaceutical University, No. 280 Outer Ring East Road, Xiaoguwei Street, Panyu District, Guangzhou, Guangdong 510000

Patentee after: GUANGZHOU HENGBIKANG INFORMATION TECHNOLOGY CO.,LTD.

Address before: No. 280 Outer Ring East Road, University City, Guangzhou, Guangdong 510006

Patentee before: Guangdong Pharmaceutical University

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130501

Termination date: 20190312