CN100430963C - Method for modeling personalized human face based on orthogonal image - Google Patents

Method for modeling personalized human face based on orthogonal image Download PDF

Info

Publication number
CN100430963C
CN100430963C CNB2005101081365A CN200510108136A
Authority
CN
China
Prior art keywords
face
grid
image
projection
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2005101081365A
Other languages
Chinese (zh)
Other versions
CN1940996A (en)
Inventor
陶建华 (Tao Jianhua)
李永林 (Li Yonglin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Feng Cheng Powerise Technology Co Ltd
Original Assignee
Institute of Automation, Chinese Academy of Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CNB2005101081365A priority Critical patent/CN100430963C/en
Publication of CN1940996A publication Critical patent/CN1940996A/en
Application granted granted Critical
Publication of CN100430963C publication Critical patent/CN100430963C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Abstract

A fast personalized face modeling method based on orthogonal images includes: capture a frontal and a profile photograph of the face; normalize the two images; project a generic face mesh model to obtain a projection mesh for each of the two orthogonal views; select corresponding feature points on the face images and the projection meshes and solve for the best matching transform; apply the transform to the projection meshes and use radial basis function interpolation to obtain an accurate match over all vertices; recover the three-dimensional data from the matched projection meshes to obtain the personalized face mesh; finally, texture-map the mesh to obtain the final personalized face model.

Description

Fast method for personalized face modeling based on orthogonal images
Technical field
The present invention relates to the field of three-dimensional data modeling, and in particular to a method for obtaining a personalized face model from two orthogonal face images.
Background technology
Face modeling has attracted researchers' attention because of its wide range of applications. One particularly important direction is personalized face modeling, which for the same reason has drawn more and more interest from face modeling researchers.
Many personalized modeling methods exist. In terms of data source, modeling can start from three-dimensional scan data, from multiple images (such as orthogonal images), or from video. The present invention focuses on the personalized modeling process that starts from orthogonal images.
There are two main approaches to personalized modeling from orthogonal images. The first selects corresponding feature points in the two images, such as all points clearly visible in both face images (the nose tip, the mouth corners, and so on), and then recovers their three-dimensional positions. The same feature points are marked on a generic face model, yielding two corresponding sets of three-dimensional data. Feature-point matching gives the transform parameters between the two sets, and the generic face model is transformed accordingly to obtain the personalized three-dimensional face model. However, all computation in this method is carried out in three-dimensional space, which makes it comparatively complex; moreover, because the face is non-rigid, the accuracy of the three-dimensional match is hard to guarantee.
The second approach follows the idea proposed by Takaaki Akimoto et al. (NTT Human Interface Laboratories, Japan): first project the generic face model to the frontal and profile views, then select corresponding feature points on the face images and the projection meshes, and obtain the positions of the non-feature vertices by interpolating the feature-point displacements. Three-dimensional reconstruction then yields the personalized face model. This method assigns the correspondences between face image and projection mesh directly, without computing a matching transform between them, and the interpolation operates only on the feature points, so the results are inaccurate and the resulting face model differs noticeably from the real person.
In addition, Liu et al. at Microsoft Research work from a single frontal image, matching it against a generic face model of their own design; the matching parameters are computed in two-dimensional rather than three-dimensional space, which reduces computational complexity. Because only one frontal image is used, however, the resulting personalized face model is convincing only from a narrow range of viewing angles.
Summary of the invention
The fast personalized face modeling method based on orthogonal images of the present invention improves on the first two approaches and realizes fast personalized face modeling. Its steps comprise:
capturing orthogonal images of a face with a camera and normalizing them, then projecting a generic face mesh model to obtain a frontal projection mesh and a profile projection mesh;
selecting corresponding feature points on the face images and the projection meshes, matching them, and solving for the matching parameters;
transforming the projection meshes with the matching parameters to obtain the transformed projection meshes, then applying the radial basis function interpolation algorithm to the transformed projection meshes to obtain the interpolated projection meshes;
recovering three-dimensional information from the frontal and profile projection mesh results to obtain the personalized mesh model;
texture-mapping the orthogonal images onto the mesh to obtain the personalized three-dimensional face model.
The system captures two orthogonal face images with an ordinary camera and normalizes them using the fact that the face height must be equal in the two views. Feature points are selected in the face images according to their importance; the generic face mesh is projected to the two views, and corresponding feature points are selected on the projection meshes with reference to those in the images. The matching parameters between the two feature point sets are computed, the projection meshes are transformed accordingly to obtain the personalized face meshes, and texture mapping then produces the personalized face model.
The computation of this system is simple, easy to implement, and accurate, and the result can be applied directly in later-stage fields such as facial animation and face recognition.
The present invention captures a frontal and a profile face image with an ordinary camera and uses a generic face model as prior knowledge; with a small amount of manual interaction it quickly and conveniently produces a realistic personalized face model. The face model obtained by this method can be widely used in facial animation, visual speech synthesis, and face recognition.
Description of drawings
Fig. 1: Schematic framework of the personalized face modeling process.
Fig. 2: Normalized orthogonal face images captured by the camera.
Fig. 3: The generic face mesh model.
Fig. 4: Flow of the matching process.
Fig. 5: Frontal and profile face meshes after matching.
Fig. 6: The texture-mapped personalized face model seen from different angles.
Embodiment
Fig. 1 shows the overall framework of the personalized face modeling process.
First, two orthogonal face images are captured with an ordinary digital camera: one frontal face image and one profile face image. The two views must be strictly orthogonal, that is, the camera direction must differ by 90 degrees between the two shots.
1) image normalization and general projection grid
The captured photographs are normalized, using the fact that the height of the face cannot change between photographs taken at the same time.
a. Let FrontHeight be the face height in the frontal image and SideHeight the face height in the profile image, and let R = FrontHeight/SideHeight. The profile image is scaled by R so that the face heights in the two orthogonal images agree. This avoids the deviation between the two face images caused by differences in shooting position.
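The scaling step above can be sketched as follows. This is an illustrative NumPy sketch, not code from the patent; the function name and the nearest-neighbour resampling are assumptions made for brevity.

```python
import numpy as np

def normalize_side_image(front_height: float, side_height: float,
                         side_img: np.ndarray) -> np.ndarray:
    """Scale the profile image vertically by R = FrontHeight/SideHeight
    so that the face heights in the two orthogonal views agree.
    side_img is an H x W (or H x W x C) image array; nearest-neighbour
    resampling is used here purely to keep the sketch self-contained."""
    R = front_height / side_height
    new_h = int(round(side_img.shape[0] * R))
    # Map every output row back to its nearest source row.
    rows = np.clip((np.arange(new_h) / R).astype(int), 0, side_img.shape[0] - 1)
    return side_img[rows]
```

In practice the width would be scaled by the same factor (or a proper image-resampling routine used) so the aspect ratio of the profile image is preserved.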
b. The generic face mesh model is projected according to the frontal and profile views. Many generic face meshes with good topology are available: the Candide generic face model developed at Linköping University (Sweden), the IST generic face model from Instituto Superior Técnico (Portugal), or a model built by the user in 3D modeling software such as 3ds Max, Maya, SoftImage, or Polyworks. In this embodiment we use a model modified from the IST generic face model, with animation parameters defined to correspond to Chinese speech. The generic face mesh is projected orthographically onto the XY plane and the YZ plane, giving the two projection meshes of the generic model.
Fig. 2 shows the normalized orthogonal face images captured by the camera.
Fig. 3 shows the generic face model viewed from different angles. The generic face model is a three-dimensional data structure in Cartesian coordinates, and orthographic projection does not change its topology. Let the model data be M(x, y, z); its projection onto the XY plane is Mf(x, y) and its projection onto the YZ plane is Ms(y, z). The original data remain unchanged after projection, and we obtain the two projection meshes Mf and Ms of the generic model.
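The orthographic projections Mf and Ms amount to dropping one coordinate per view; a minimal sketch, with variable names following the text:

```python
import numpy as np

# M: N x 3 vertex array (x, y, z) of the generic face model (toy values).
M = np.array([[0.0, 1.0, 2.0],
              [3.0, 4.0, 5.0]])

Mf = M[:, [0, 1]]  # XY-plane (frontal) orthographic projection
Ms = M[:, [1, 2]]  # YZ-plane (profile) orthographic projection
```

Because only coordinates are dropped, the vertex connectivity of the mesh is untouched, as the text requires.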
2) Solving for the matching parameters
a. Face feature points are selected according to their importance in the face image; manual interaction is allowed to guarantee accurate feature positions. Suppose a total of n feature points {Pif1, Pif2, ..., Pifn} (n > 0) are chosen. The concrete steps are as follows:
To guarantee accuracy, this method selects feature points manually, although image processing methods could of course be used to select them automatically. For the frontal face image the following feature points are considered:
left eyebrow; right eyebrow; left eye; right eye; nose; mouth; chin; jaw.
In addition, to make the matching parameters more accurate, feature points on the frontal face contour can be added, such as the forehead and the ear outline.
Suppose that for the frontal image a total of k feature points Pif1, Pif2, Pif3, ..., Pifk are selected, k ≥ 3.
For the profile image, occlusion means the feature points cannot be the same as in the frontal image, so a different set is chosen. The main feature points are:
nose tip; eye corner; mouth corner; forehead; jaw.
Additional feature points may include the ear and the hairline contour.
Suppose that for the profile image a total of m feature points Pis1, Pis2, Pis3, ..., Pism are selected, m ≥ 3.
b. The corresponding feature points are marked at the same positions on the projection meshes, referring to the result of step a, and serve as the initial matching points. For the frontal projection mesh, the corresponding k points Pmf1, Pmf2, Pmf3, ..., Pmfk are selected; for the profile projection mesh, the corresponding m points Pms1, Pms2, Pms3, ..., Pmsm are selected.
c. From these initial feature point pairs the matching parameters can be computed. Concretely, for the frontal face image: let s ∈ R be a scale factor, R a 2 × 2 rotation matrix, and T a 2 × 1 translation vector. Then s, R, T are obtained by minimizing

E(s, R, T) = Σ_{i=1}^{n} ||s·R·P_mf,i + T − P_if,i||²   (1)

where the n feature-point pairs P_mf, P_if are related by

s·R·P_mf + T = P_if.   (2)
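Minimizing E(s, R, T) over a scale, a 2 × 2 rotation, and a translation has a standard closed-form least-squares solution (Umeyama's method). The sketch below is illustrative, assuming the least-squares reading of equations (1) and (2); the function name is an assumption:

```python
import numpy as np

def fit_similarity_2d(P_mf, P_if):
    """Least-squares similarity transform (s, R, T) minimizing
    sum_i ||s R p_i + T - q_i||^2 over corresponding 2D points.
    P_mf, P_if: n x 2 arrays of matched mesh/image feature points."""
    mu_p, mu_q = P_mf.mean(axis=0), P_if.mean(axis=0)
    Pc, Qc = P_mf - mu_p, P_if - mu_q
    # SVD of the cross-covariance gives the optimal rotation.
    U, D, Vt = np.linalg.svd(Qc.T @ Pc)
    S = np.eye(2)
    if np.linalg.det(U @ Vt) < 0:   # guard against a reflection solution
        S[1, 1] = -1
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / (Pc ** 2).sum()
    T = mu_q - s * R @ mu_p
    return s, R, T
```

A 2D similarity transform has four degrees of freedom, so the k ≥ 3 pairs the text requires over-determine it, which is what makes the least-squares fit in equation (1) meaningful.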
3) Matching transform and interpolation
a. Once the matching parameters s, R, T have been obtained, the frontal projection mesh is matched: every vertex of the frontal projection mesh is transformed according to equation (2), giving the transformed projection mesh model.
b. The radial basis function interpolation algorithm is then applied to the feature-point displacements, where the interpolating function may include a linear polynomial term. This yields the interpolated projection mesh, that is, the final transformed frontal projection mesh model.
The profile projection mesh is transformed by the same method as the frontal one, giving the transformed profile face projection mesh model.
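The radial basis function step can be sketched as follows, interpolating the feature-point displacements over all mesh vertices with an RBF plus the linear polynomial term mentioned above. The patent does not specify the kernel; the Gaussian kernel and the function name here are assumptions.

```python
import numpy as np

def rbf_warp(src_pts, dst_pts, query_pts):
    """Warp query_pts by the RBF interpolant of the feature-point
    displacements dst_pts - src_pts, with an affine (linear polynomial)
    term so that affine residual fields are reproduced exactly.
    src_pts, dst_pts: n x 2 matched feature points; query_pts: m x 2."""
    def phi(r):
        return np.exp(-(r ** 2))  # Gaussian kernel (width 1), an assumed choice

    n = len(src_pts)
    d = np.linalg.norm(src_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    P = np.hstack([np.ones((n, 1)), src_pts])   # linear polynomial terms [1, x, y]
    A = np.block([[phi(d), P],
                  [P.T, np.zeros((3, 3))]])     # standard RBF saddle system
    rhs = np.vstack([dst_pts - src_pts, np.zeros((3, 2))])
    coef = np.linalg.solve(A, rhs)
    w, a = coef[:n], coef[n:]
    dq = np.linalg.norm(query_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    Pq = np.hstack([np.ones((len(query_pts), 1)), query_pts])
    return query_pts + phi(dq) @ w + Pq @ a
```

By construction the warp interpolates the feature points exactly (a mesh feature point lands on its image feature point), while the remaining vertices move smoothly with them.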
Fig. 4 shows the detailed matching process.
4) Recovering the three-dimensional information to obtain the personalized mesh model
After the transforms above, the transformed frontal projection mesh supplies the XY coordinates (Xf, Yf) of each vertex of the face model, and the transformed profile projection mesh supplies the YZ coordinates (Ys, Zs). The three-dimensional coordinates of the transformed personalized face model can then be recovered as

X = Xf,  Y = (Yf + Ys)/2,  Z = Zs,

which directly gives the personalized face mesh model.
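The coordinate fusion above, averaging the y estimate shared by the two views, is a one-liner per axis; a minimal sketch (function name assumed):

```python
import numpy as np

def recover_3d(front_xy, side_yz):
    """Fuse the matched frontal (Xf, Yf) and profile (Ys, Zs) projections of
    each vertex into 3D coordinates, averaging the shared y estimate."""
    x = front_xy[:, 0]                            # X = Xf
    y = 0.5 * (front_xy[:, 1] + side_yz[:, 0])    # Y = (Yf + Ys) / 2
    z = side_yz[:, 1]                             # Z = Zs
    return np.stack([x, y, z], axis=1)
```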
Fig. 5 shows the personalized face mesh model after matching.
5) Texture mapping
After the personalized face mesh model is obtained, it still lacks texture, so the realism of the face model cannot yet be expressed; texture mapping is therefore required.
This invention proposes a simple texture-mapping scheme that uses the two orthogonal face images directly, removing the step of synthesizing a texture image. The idea is as follows. For each triangle of the personalized face mesh, compute the angle between its normal vector and the z axis. Intuitively, if this angle is large, the triangle lies toward the side or back of the face. We therefore set a threshold in the range of 45 to 60 degrees: when the angle exceeds it, the profile image is used as the texture source for that triangle; otherwise the frontal image is used. This yields a realistic personalized face model. Because no texture image needs to be synthesized in advance and no texture coordinates need to be recomputed, this method greatly reduces running time and complexity without any loss of texture quality, effectively preserving the realism of the face model.
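The per-triangle view selection can be sketched as follows. The 50-degree threshold is one choice inside the 45 to 60 degree band the text allows, and the function name is an assumption:

```python
import numpy as np

def choose_texture_view(normal, threshold_deg=50.0):
    """Pick the texture source image for one mesh triangle from the angle
    between its normal vector and the z (viewing) axis; the threshold sits
    in the 45-60 degree band described in the text."""
    n = normal / np.linalg.norm(normal)
    # abs() treats inward- and outward-pointing normals alike.
    angle = np.degrees(np.arccos(abs(n[2])))
    return "side" if angle > threshold_deg else "front"
```

A forward-facing triangle (normal near the z axis) is textured from the frontal image; a triangle turned toward the side of the head falls back to the profile image.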
Fig. 6 shows the personalized face model after texture mapping; the algorithm achieves a very lifelike result.

Claims (6)

1. A fast personalized face modeling method based on orthogonal images, comprising the steps of:
capturing orthogonal images of a face with a camera and normalizing them, then projecting a generic face mesh model to obtain a frontal projection mesh and a profile projection mesh;
selecting corresponding feature points on the face images and the projection meshes, matching them, and solving for the matching parameters;
transforming the projection meshes with the matching parameters to obtain the transformed projection meshes, then applying the radial basis function interpolation algorithm to the transformed projection meshes to obtain the interpolated projection meshes;
recovering three-dimensional information from the frontal and profile projection mesh results to obtain the personalized mesh model;
texture-mapping the orthogonal images onto the mesh to obtain the personalized three-dimensional face model.
2. The method of claim 1, wherein the normalization of the orthogonal images and the projection comprise the steps of:
a. letting FrontHeight be the face height in the captured frontal image and SideHeight the face height in the profile image, setting R = FrontHeight/SideHeight, and scaling the profile image by R so that the face heights in the frontal and profile images agree;
b. projecting the generic face mesh model according to the frontal and profile views, the generic mesh being modified from the IST generic face model with animation parameters defined to correspond to Chinese speech, and projecting it onto the XY plane and the YZ plane to obtain the two projection meshes of the generic model.
3. The method of claim 1, wherein solving for the matching parameters comprises the steps of:
a. selecting face feature points according to their importance in the face image, allowing manual interaction, that is, manual selection, to guarantee accurate feature positions; suppose a total of n feature points {Pif1, Pif2, ..., Pifn} (n > 0) are chosen;
b. marking the corresponding feature points at the same positions on the frontal and profile projection meshes according to the result of step a, likewise choosing a total of n feature points {Pmf1, Pmf2, ..., Pmfn} (n > 0);
c. computing the matching parameters from these initial feature points: with scale factor s ∈ R, 2 × 2 rotation matrix R, and translation vector T, obtaining s, R, T by minimizing

E(s, R, T) = Σ_{i=1}^{n} ||s·R·P_mf,i + T − P_if,i||²   (1)

where the n chosen feature-point pairs satisfy

s·R·P_mf + T = P_if.   (2)
4. The method of claim 3, wherein the matching transform and interpolation comprise the steps of:
a. after obtaining s, R, T, transforming all mesh vertices to obtain the transformed projection mesh;
b. then applying the radial basis function interpolation algorithm to all feature points to obtain the interpolated projection mesh.
5. The method of claim 1, wherein recovering the three-dimensional information to obtain the personalized mesh model comprises:
obtaining the (Xf, Yf) coordinates from the transformed projection mesh of the frontal image and the (Ys, Zs) coordinates from the transformed projection mesh of the profile image, so that the three-dimensional coordinates of the face model are

X = Xf,  Y = (Yf + Ys)/2,  Z = Zs,

giving the personalized face mesh model.
6. The method of claim 1, wherein the texture-mapping step is:
computing the angle between the normal vector of each triangle of the mesh model and the z axis; if the angle exceeds a threshold of 45 to 60 degrees, selecting the profile image as the texture image, otherwise selecting the frontal image; and performing texture mapping to obtain a realistic personalized face model.
CNB2005101081365A 2005-09-29 2005-09-29 Method for modeling personalized human face based on orthogonal image Expired - Fee Related CN100430963C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2005101081365A CN100430963C (en) 2005-09-29 2005-09-29 Method for modeling personalized human face based on orthogonal image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2005101081365A CN100430963C (en) 2005-09-29 2005-09-29 Method for modeling personalized human face based on orthogonal image

Publications (2)

Publication Number Publication Date
CN1940996A CN1940996A (en) 2007-04-04
CN100430963C true CN100430963C (en) 2008-11-05

Family

ID=37959145

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2005101081365A Expired - Fee Related CN100430963C (en) 2005-09-29 2005-09-29 Method for modeling personalized human face based on orthogonal image

Country Status (1)

Country Link
CN (1) CN100430963C (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100468465C (en) * 2007-07-13 2009-03-11 中国科学技术大学 Stereo vision three-dimensional human face modelling approach based on dummy image
CN101916454B (en) * 2010-04-08 2013-03-27 董洪伟 Method for reconstructing high-resolution human face based on grid deformation and continuous optimization
CN101977314B (en) * 2010-09-26 2012-10-03 深圳大学 Image interpolation processing method and device and display terminal
CN102034097B (en) * 2010-12-21 2012-07-04 中国科学院半导体研究所 Method for recognizing human face by comprehensively utilizing front and lateral images
CN102157010A (en) * 2011-05-25 2011-08-17 上海大学 Method for realizing three-dimensional facial animation based on layered modeling and multi-body driving
EP2755164A3 (en) * 2013-01-09 2017-03-01 Samsung Electronics Co., Ltd Display apparatus and control method for adjusting the eyes of a photographed user
CN104346822B (en) * 2013-07-23 2017-07-21 富士通株式会社 texture mapping method and device
CN104777329B (en) * 2014-01-13 2018-06-05 北京航空航天大学 A kind of linear programming algorithm for the reconstruct of particle image velocimetry three dimensional particles field
CN106033621B (en) * 2015-03-17 2018-08-24 阿里巴巴集团控股有限公司 A kind of method and device of three-dimensional modeling
CN110428491B (en) * 2019-06-24 2021-05-04 北京大学 Three-dimensional face reconstruction method, device, equipment and medium based on single-frame image
CN110458121B (en) * 2019-08-15 2023-03-14 京东方科技集团股份有限公司 Method and device for generating face image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002013144A1 (en) * 2000-08-10 2002-02-14 Ncubic Corp. 3d facial modeling system and modeling method
US20050031195A1 (en) * 2003-08-08 2005-02-10 Microsoft Corporation System and method for modeling three dimensional objects from a single image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002013144A1 (en) * 2000-08-10 2002-02-14 Ncubic Corp. 3d facial modeling system and modeling method
US20050031195A1 (en) * 2003-08-08 2005-02-10 Microsoft Corporation System and method for modeling three dimensional objects from a single image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Automatic Creation of 3D Facial Models. Akimoto T., Suenaga Y., Wallace R.S. IEEE Computer Graphics and Applications, Vol. 13, No. 5, 1993 *
Generation of person-specific 3D face meshes based on feature points (基于特征点的特定人脸三维网格的生成). Li Baozhou, He Xin, et al. Computer Engineering, Vol. 25, No. 9, 1999 *

Also Published As

Publication number Publication date
CN1940996A (en) 2007-04-04

Similar Documents

Publication Publication Date Title
CN100430963C (en) Method for modeling personalized human face based on orthogonal image
CN103606186B (en) The virtual hair style modeling method of a kind of image and video
CN102222363B (en) Method for fast constructing high-accuracy personalized face model on basis of facial images
CN106067190B (en) A kind of generation of fast face threedimensional model and transform method based on single image
JP4865093B2 (en) Method and system for animating facial features and method and system for facial expression transformation
CN109978930A (en) A kind of stylized human face three-dimensional model automatic generation method based on single image
CN101916454B (en) Method for reconstructing high-resolution human face based on grid deformation and continuous optimization
CN107730449B (en) Method and system for beautifying facial features
CN107564049B (en) Faceform's method for reconstructing, device and storage medium, computer equipment
CN113744374B (en) Expression-driven 3D virtual image generation method
JP2011170891A (en) Facial image processing method and system
WO2006049147A1 (en) 3d shape estimation system and image generation system
CN109741382A (en) A kind of real-time three-dimensional method for reconstructing and system based on Kinect V2
US11928778B2 (en) Method for human body model reconstruction and reconstruction system
CN115951784B (en) Method for capturing and generating motion of wearing human body based on double nerve radiation fields
CN109191393A (en) U.S. face method based on threedimensional model
Hilton et al. From 3d shape capture to animated models
Azevedo et al. An augmented reality virtual glasses try-on system
CN112116699A (en) Real-time real-person virtual trial sending method based on 3D face tracking
Maninchedda et al. Face reconstruction on mobile devices using a height map shape model and fast regularization
JPH08147494A (en) Picture forming device for hair of head
CN112802031B (en) Real-time virtual trial sending method based on three-dimensional head tracking
Zhang et al. Constructing a realistic face model of an individual for expression animation
CN115861525A (en) Multi-view face reconstruction method based on parameterized model
JP2001222725A (en) Image processor

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: SHENZHEN ZHONGCHUANG FUTURE TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES

Effective date: 20141010

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 100080 HAIDIAN, BEIJING TO: 518172 SHENZHEN, GUANGDONG PROVINCE

TR01 Transfer of patent right

Effective date of registration: 20141010

Address after: Plant No. 3, Tian An Digital Innovation Park, No. 441 Huangge Road, Central City, Longgang Street, Longgang District, Shenzhen, Guangdong 518172

Patentee after: SHENZHEN ZHONGCHUANG FUTURE TECHNOLOGY CO., LTD.

Address before: No. 95 Zhongguancun East Road, Haidian District, Beijing 100080

Patentee before: Institute of Automation, Chinese Academy of Sciences

C56 Change in the name or address of the patentee
CP03 Change of name, title or address

Address after: Rooms 706/707, Block B2, Shenzhen Bay Science and Technology Ecological Park, Nanshan District, Shenzhen, Guangdong 518172

Patentee after: Shenzhen Feng Cheng Powerise Technology Co. Ltd.

Address before: Plant No. 3, Tian An Digital Innovation Park, No. 441 Huangge Road, Central City, Longgang Street, Longgang District, Shenzhen, Guangdong 518172

Patentee before: SHENZHEN ZHONGCHUANG FUTURE TECHNOLOGY CO., LTD.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20081105

Termination date: 20180929

CF01 Termination of patent right due to non-payment of annual fee