CN109377544A - Method, apparatus and readable medium for generating a three-dimensional face image - Google Patents

Method, apparatus and readable medium for generating a three-dimensional face image

Info

Publication number
CN109377544A
Authority
CN
China
Prior art keywords
face
three-dimensional image
target
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811459413.0A
Other languages
Chinese (zh)
Other versions
CN109377544B (en)
Inventor
陈雅静
林祥凯
宋奕兵
凌永根
暴林超
刘威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN201811459413.0A priority Critical patent/CN109377544B/en
Publication of CN109377544A publication Critical patent/CN109377544A/en
Application granted granted Critical
Publication of CN109377544B publication Critical patent/CN109377544B/en
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method, an apparatus and a readable medium for generating a three-dimensional face image, belonging to the technical field of image processing. In the method and apparatus provided by the invention, facial feature parameters, photographing environment features and photographing parameter information are identified from a target two-dimensional face image using an adjustable training model; a target three-dimensional face model is reconstructed from the facial feature parameters and the three-dimensional basis models in a standard face template library; the identified photographing environment features and photographing parameter information are then simulated to render the target three-dimensional face model, yielding an intermediate two-dimensional face image. When the target two-dimensional face image and the intermediate two-dimensional face image are determined not to satisfy a consistency condition, the training model is adjusted and the step of obtaining an intermediate two-dimensional face image from the target two-dimensional face image is repeated with the adjusted training model. When the consistency condition is determined to be satisfied, the target three-dimensional face image is obtained from the most recently reconstructed target three-dimensional face model, improving the fidelity of the target three-dimensional face image.

Description

Method, apparatus and readable medium for generating a three-dimensional face image
Technical field
The present invention relates to the technical field of image processing, and in particular to a method, an apparatus and a readable medium for generating a three-dimensional face image.
Background art
The face is the most expressive part of the human body, and is both individualized and diverse. With the development of various software applications, these applications are no longer content to display and convey information using only two-dimensional face images; a three-dimensional face image is considered to convey, more vividly and intuitively, information that a two-dimensional face image cannot.
When generating a three-dimensional face image, the prior art uses a neural network to learn the features of the input two-dimensional face image directly and to fit the parameters of a real three-dimensional face image, thereby obtaining the three-dimensional face image. Although this method can learn the shape, expression and texture of the face from the two-dimensional face image, the lack of an identity constraint between input and output causes the output three-dimensional face image and the input two-dimensional face image to resemble each other only weakly, as if they were not the same person.
Therefore, how to improve the fidelity of the generated three-dimensional face image is one of the primary concerns in computing.
Summary of the invention
Embodiments of the present invention provide a method, an apparatus and a readable medium for generating a three-dimensional face image, so as to improve the fidelity of the generated three-dimensional face image.
In a first aspect, an embodiment of the present invention provides a method for generating a three-dimensional face image, comprising:
obtaining an intermediate two-dimensional face image from a target two-dimensional face image using an adjustable training model, wherein the intermediate two-dimensional face image is obtained by reconstructing a target three-dimensional face model from the facial feature parameters identified from the target two-dimensional face image and the three-dimensional basis models in a standard face template library, and by rendering the target three-dimensional face model while simulating the photographing environment features and photographing parameter information identified from the target two-dimensional face image;
determining whether the target two-dimensional face image and the intermediate two-dimensional face image satisfy a set consistency condition;
when the consistency condition is determined not to be satisfied, adjusting the training model and returning to the step of obtaining an intermediate two-dimensional face image from the target two-dimensional face image using the adjusted training model;
when the consistency condition is determined to be satisfied, obtaining a target three-dimensional face image based on the most recently reconstructed target three-dimensional face model.
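For illustration only (this is not part of the claimed method), the four steps above can be sketched as an analysis-by-synthesis loop. Everything in this sketch is an assumption reduced to a toy problem: `reconstruct` and `render` collapse to identity functions, the consistency condition is an L2 threshold, and "adjusting the training model" is a single gradient step on the residual.

```python
import numpy as np

def generate_face_3d(target, lr=0.5, tol=1e-3, max_iters=1000):
    """Toy analysis-by-synthesis loop: produce parameters, 'reconstruct'
    and 'render' them, test consistency against the target image, and
    adjust the model until the consistency condition is met."""
    rng = np.random.default_rng(0)
    params = rng.normal(size=np.shape(target))  # stand-in for the encoder output
    model_3d = params
    for _ in range(max_iters):
        model_3d = params            # "reconstruct target 3-D face model" (identity here)
        intermediate = model_3d      # "render intermediate 2-D image" (identity here)
        residual = intermediate - target
        if np.linalg.norm(residual) < tol:   # consistency condition satisfied
            break
        params = params - lr * residual      # "adjust the training model"
    return model_3d
```

In the actual method the encoder would be a network such as a VGG Face encoder, rendering would simulate the identified photographing environment, and the consistency condition would compare identity features rather than raw pixels.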
In a second aspect, an embodiment of the present invention provides an apparatus for generating a three-dimensional face image, comprising:
an obtaining unit, configured to obtain an intermediate two-dimensional face image from a target two-dimensional face image using an adjustable training model, wherein the intermediate two-dimensional face image is obtained by reconstructing a target three-dimensional face model from the facial feature parameters identified from the target two-dimensional face image and the three-dimensional basis models in a standard face template library, and by rendering the target three-dimensional face model while simulating the photographing environment features and photographing parameter information identified from the target two-dimensional face image;
a determination unit, configured to determine whether the target two-dimensional face image and the intermediate two-dimensional face image satisfy a set consistency condition;
an adjustment unit, configured to, when the determination unit determines that the consistency condition is not satisfied, adjust the training model and return to the step of obtaining the intermediate two-dimensional face image from the target two-dimensional face image using the adjusted training model;
a generation unit, configured to, when the determination unit determines that the consistency condition is satisfied, obtain a target three-dimensional face image based on the most recently reconstructed target three-dimensional face model.
In a third aspect, an embodiment of the present invention provides a computer-readable medium storing computer-executable instructions, the computer-executable instructions being used to perform the method for generating a three-dimensional face image provided by the present application.
In a fourth aspect, an embodiment of the present invention provides an electronic device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor so as to enable the at least one processor to perform the method for generating a three-dimensional face image provided by the present application.
The invention has the following advantages:
In the method, apparatus and readable medium for generating a three-dimensional face image provided by embodiments of the present invention, after a target two-dimensional face image is acquired it is input into an adjustable training model, which identifies facial feature parameters, photographing environment features and photographing parameter information from the target two-dimensional face image. A target three-dimensional face model is then reconstructed from the identified facial feature parameters and the three-dimensional basis models in a standard face template library, and the identified photographing environment features and photographing parameter information are simulated to render the target three-dimensional face model, yielding an intermediate two-dimensional face image. It is then determined whether the target two-dimensional face image and the intermediate two-dimensional face image satisfy a set consistency condition. When the consistency condition is determined not to be satisfied, the training model is adjusted and the step of obtaining the intermediate two-dimensional face image from the target two-dimensional face image is repeated with the adjusted training model; when the consistency condition is determined to be satisfied, the target three-dimensional face image is obtained based on the most recently reconstructed target three-dimensional face model. With the above method, the adjustable training model is adjusted dynamically, so that the facial feature parameters it produces come closer to the target face in the target two-dimensional face image, and the identified photographing environment features and photographing parameter information come closer to those in effect when the target two-dimensional face image was photographed. The target three-dimensional face model obtained on this basis is therefore closer to the target face, and the fidelity of the resulting target three-dimensional face image is higher.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by practicing the invention. The objectives and other advantages of the invention may be realized and attained by the structures particularly pointed out in the written description, the claims and the accompanying drawings.
Brief description of the drawings
The drawings described herein are provided for a further understanding of the present invention and constitute a part of the present invention. The illustrative embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
Fig. 1 is a schematic diagram of an application scenario of the method for generating a three-dimensional face image provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of the method for generating a three-dimensional face image provided by an embodiment of the present invention;
Fig. 3 is a first logical architecture diagram of the method for generating a three-dimensional face image provided by an embodiment of the present invention;
Fig. 4a is a schematic flowchart of identifying shape feature parameters from the target two-dimensional face image using the adjustable training model, provided by an embodiment of the present invention;
Fig. 4b is a schematic diagram of feature points in the target two-dimensional face image provided by an embodiment of the present invention;
Fig. 5a is a schematic flowchart of identifying expression parameters from the target two-dimensional face image using the adjustable training model, provided by an embodiment of the present invention;
Fig. 5b is a schematic diagram of the basic-expression basis models of a standard face provided by an embodiment of the present invention;
Fig. 6a is a schematic flowchart of identifying texture parameters from the target two-dimensional face image using the adjustable training model, provided by an embodiment of the present invention;
Fig. 6b is a schematic diagram of three-dimensional texture basis models of a face rotated about the Y axis by -70, -50, -30, -15, 0, 15, 30, 50 and 70 degrees, provided by an embodiment of the present invention;
Fig. 7 is a schematic flowchart of reconstructing the target three-dimensional face model provided by an embodiment of the present invention;
Fig. 8 is a schematic flowchart of determining whether the identity feature information characterized by the target two-dimensional face image and by the intermediate two-dimensional face image is consistent, provided by an embodiment of the present invention;
Fig. 9 is a schematic diagram of the effect of the target three-dimensional face model reconstructed from the target two-dimensional face image of Fig. 4a, provided by an embodiment of the present invention;
Fig. 10 is a second logical architecture diagram of the method for generating a three-dimensional face image provided by an embodiment of the present invention;
Fig. 11 is a schematic structural diagram of the apparatus for generating a three-dimensional face image provided by an embodiment of the present invention;
Fig. 12 is a schematic structural diagram of a computing device for implementing the method for generating a three-dimensional face image, provided by an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention provide a method, an apparatus and a readable medium for generating a three-dimensional face image, so as to improve the fidelity of the generated three-dimensional face image.
Preferred embodiments of the present invention are described below with reference to the accompanying drawings of the specification. It should be understood that the preferred embodiments described herein are only used to illustrate and explain the present invention and are not intended to limit the present invention; in the absence of conflict, the embodiments of the present invention and the features in the embodiments may be combined with each other.
To facilitate understanding, technical terms used in the present invention are defined as follows:
1. VGG Face: a deep face recognition model proposed by the Visual Geometry Group (VGG) of the University of Oxford; the model is implemented as a convolutional neural network. The present invention uses VGG Face to extract, from a two-dimensional face image, the facial feature parameters of the face together with the photographing environment features, photographing parameter information and the like.
2. Convolutional neural network (CNN): a neural network for two-dimensional input recognition problems, composed of one or more convolutional layers and pooling layers. Its main characteristic is weight sharing, which reduces the number of parameters, and it is highly invariant to translation, scaling, tilting and other forms of deformation.
3. Spherical harmonic lighting: a real-time rendering technique that can produce high-quality rendering and shading effects.
4. Terminal device: an electronic device on which various application programs can be installed and which can display objects provided in an installed application program. The electronic device may be mobile or fixed, for example a mobile phone, a tablet computer, wearable devices of all kinds, a vehicle-mounted device, a personal digital assistant (PDA), a point-of-sales (POS) terminal, a monitoring device in a subway station, or another electronic device capable of realizing the above functions.
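The spherical harmonic lighting entry above can be made concrete with a minimal sketch. The patent does not specify the SH order; the common second-order, 9-coefficient irradiance formulation with the standard real SH basis constants is assumed here, and `sh_irradiance` is an illustrative name.

```python
import numpy as np

def sh_basis(n):
    """First 9 real spherical harmonic basis functions evaluated at a
    unit normal n = (x, y, z), with the usual constant factors."""
    x, y, z = n
    return np.array([
        0.282095,                    # Y_00
        0.488603 * y,                # Y_1,-1
        0.488603 * z,                # Y_1,0
        0.488603 * x,                # Y_1,1
        1.092548 * x * y,            # Y_2,-2
        1.092548 * y * z,            # Y_2,-1
        0.315392 * (3 * z * z - 1),  # Y_2,0
        1.092548 * x * z,            # Y_2,1
        0.546274 * (x * x - y * y),  # Y_2,2
    ])

def sh_irradiance(coeffs, normal):
    """Shade a surface point: dot the 9 lighting coefficients with the
    SH basis evaluated at the (normalized) surface normal."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    return float(np.dot(coeffs, sh_basis(n)))
```

In a renderer, `coeffs` would be estimated from the photographing environment features, and `sh_irradiance` would be evaluated per vertex or per pixel of the reconstructed face model.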
Among the methods for generating a three-dimensional face image used in the prior art, some obtain the three-dimensional face image by estimation rather than from the real data of the two-dimensional face image. With such a method, the face in the output three-dimensional face image resembles the face in the two-dimensional face image only weakly, as if they were not the same person, so the generated three-dimensional face image is not sufficiently lifelike.
To solve the problem that the images obtained by the prior-art methods for generating a three-dimensional face image have low fidelity, an embodiment of the present invention provides a solution. Referring to the application scenario diagram shown in Fig. 1, a client capable of invoking a photographing function is installed on user equipment 11. User 10 photographs a two-dimensional image of his or her face through the client installed on user equipment 11, and the two-dimensional face image is then sent to server 12. After receiving the two-dimensional face image of user 10, server 12 can carry out the method for generating a three-dimensional face image provided by the present invention; the two-dimensional face image of user 10 then serves as the target two-dimensional face image. The implementation of the method is substantially as follows: server 12 uses the adjustable training model to identify, from the two-dimensional face image of user 10, the facial feature parameters of user 10 together with the photographing environment features and photographing parameter information in effect when the image was taken. Server 12 then reconstructs the three-dimensional face model of user 10 from the identified facial feature parameters and the three-dimensional basis models in the standard face template library, and simulates the identified photographing environment features and photographing parameter information to render the reconstructed three-dimensional face model of user 10, obtaining an intermediate two-dimensional face image. Server 12 then determines whether the intermediate two-dimensional face image and the two-dimensional face image of user 10 satisfy the set consistency condition. When server 12 determines that they do not, the similarity between the intermediate two-dimensional face image and the two-dimensional face image of user 10 is low, so the adjustable training model is adjusted and the step of obtaining an intermediate two-dimensional face image from the two-dimensional face image of user 10 is re-executed with the adjusted training model; it is then determined again whether the newly obtained intermediate two-dimensional face image and the two-dimensional face image of user 10 satisfy the consistency condition, and if not, the training model is adjusted again, until the intermediate two-dimensional face image obtained with the adjusted training model and the two-dimensional face image of user 10 satisfy the consistency condition. When the consistency condition is determined to be satisfied, the similarity between the intermediate two-dimensional face image and the two-dimensional face image of user 10 is very high, i.e. the three-dimensional face model reconstructed with the current training model is good, so the three-dimensional face image of user 10 is obtained from the three-dimensional face model most recently reconstructed when the consistency condition was met; the obtained three-dimensional face image of user 10 can then be used to perform further operations. Using the above method greatly improves the fidelity of the obtained three-dimensional face model and, in turn, the fidelity of the three-dimensional face image obtained from it.
It should be noted that user equipment 11 and server 12 are communicatively connected through a network, which may be a local area network, a wide area network, or the like. User equipment 11 may be a portable device (for example a mobile phone, a tablet or a laptop computer) or a personal computer (PC); server 12 may be any device capable of providing Internet services; and the client on user equipment 11 may be any client that can invoke the camera function, such as WeChat, QQ or a game client.
It should also be noted that the method for generating a three-dimensional face image provided by the present invention can likewise be implemented in the client: when the demands on the processing capability and memory of user equipment 11 are not particularly high, user equipment 11 itself may carry out the method provided by the present invention. The specific implementation process is similar to that of server 12 and is not described in detail here.
An application scenario of the method for generating a three-dimensional face image provided by an embodiment of the present invention is a client in which three-dimensional face images can be used, such as virtual reality; for example, the image of a character in a game may use a three-dimensional face image. After a user opens a game client, the client may prompt the user that the character in the game needs to be created using the user's face. The user then invokes the camera function from the game client and photographs a two-dimensional image of the user's face, which the client sends to the game server. After obtaining the user's two-dimensional face image, the game server can use the adjustable training model to identify, from the image, the facial feature parameters of the user's face together with the photographing environment features and photographing parameter information. It then reconstructs a three-dimensional face model from the identified facial feature parameters of the user's face and the three-dimensional basis models in the standard face template library, and simulates the identified photographing environment features and photographing parameters to render the reconstructed three-dimensional face model, obtaining an intermediate two-dimensional face image. It is then determined whether the intermediate two-dimensional face image and the user's two-dimensional face image satisfy the consistency condition. If they do not, the facial feature parameters, photographing environment features and photographing parameter information obtained by the current training model are not valid enough, so the training model is adjusted and the above process is executed again based on the adjusted training model, until the intermediate two-dimensional face image obtained with the adjusted model and the user's two-dimensional face image satisfy the consistency condition. The user's three-dimensional face image is then obtained from the newest three-dimensional face model at the time the consistency condition is satisfied; it can then be applied to the image of the character in the game and presented to the user through the game client. In this way the user can play the game with this character image, and the above method makes the character image finally presented to the user more faithful to the user's face, improving user satisfaction.
Since the similarity between the reconstructed three-dimensional face model and the user's face is not necessarily high, the present invention refers to the two-dimensional image rendered from the reconstructed three-dimensional face model as the intermediate two-dimensional image. If the similarity between the intermediate two-dimensional image and the user's two-dimensional face image is high, i.e. the consistency condition is met, the intermediate two-dimensional image can be regarded as the user's own two-dimensional face image.
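The text leaves the consistency condition abstract here. One plausible realization (an assumption, not the claimed condition) is to compare identity embeddings of the target and intermediate images, e.g. as produced by a face recognition encoder such as VGG Face, using cosine similarity; `meets_consistency` and its threshold are illustrative names.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def meets_consistency(emb_target, emb_intermediate, threshold=0.9):
    """One possible consistency condition: the identity embeddings of the
    target and intermediate images must be sufficiently close."""
    return cosine_similarity(emb_target, emb_intermediate) >= threshold
```

Comparing embeddings rather than raw pixels makes the condition sensitive to identity while tolerating differences in lighting and pose, which matches the stated goal of keeping the output recognizable as the same person.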
With reference to the application scenario shown in Fig. 1, the method for generating a three-dimensional face image provided according to illustrative embodiments of the present invention is described below with reference to Figs. 2-12. It should be noted that the above application scenario is shown merely for the convenience of understanding the spirit and principle of the present invention, and the embodiments of the present invention are not limited in this respect; on the contrary, the embodiments of the present invention may be applied to any applicable scenario.
As shown in Fig. 2, which is a schematic flowchart of the method for generating a three-dimensional face image provided by an embodiment of the present invention, the method is illustrated as applied in a server; the process by which the server implements the method may comprise the following steps:
S21: from the target two-dimensional face image, identify the facial feature parameters, photographing environment features and photographing parameter information using the adjustable training model.
Specifically, with reference to the logical architecture diagram shown in Fig. 3, the target two-dimensional face image is input into the adjustable training model; based on the model, the facial feature parameters of the target face can then be identified from the target two-dimensional face image, together with the photographing environment features and photographing parameter information in effect when the target two-dimensional face image was taken.
In a specific implementation, the adjustable training model in the present invention may be, but is not limited to, a VGG Face encoder, a FaceNet network structure, or the like. When a VGG Face encoder is used, the adjustable VGG Face encoder can learn the facial feature parameters of the target face, the photographing environment features and the photographing parameter information from the target two-dimensional face image.
S22: reconstruct the target three-dimensional face model from the identified facial feature parameters and the three-dimensional basis models in the standard face template library.
Besides the user's identity, the user's facial features best represent the user and can also uniquely identify the user. Facial features generally comprise face shape, expression, texture and so on; therefore, obtaining the shape, expression and texture of the target face from the target two-dimensional face image suffices to determine the face and, in turn, to identify the user. The shape features of the target face generally comprise parts such as the eyebrows, eyes, nose, mouth and cheeks. Expressions are varied, but every expression can be obtained by combining a set of basic expressions, and the texture can likewise be obtained by combining textures from different angles. Based on the foregoing, in order to better reconstruct the target three-dimensional face model, the facial feature parameters identified in the present invention may include, but are not limited to, shape feature parameters, expression parameters, texture parameters and so on.
It should be noted that when the standard face template library comprises a shape template library of standard faces, the three-dimensional basis models in the standard face template library are three-dimensional shape basis models; when the standard face template library comprises a basic-expression template library of standard faces, the three-dimensional basis models are basic-expression basis models; and when the standard face template library comprises a texture template library of standard faces, the three-dimensional basis models are three-dimensional texture basis models.
The processes for obtaining the shape feature parameters, expression parameters and texture parameters are described next.
Preferably, the shape feature parameters can be identified from the target two-dimensional face image using the adjustable training model according to the process shown in Fig. 4a:
S41: use the adjustable training model to determine the weight coefficient of each three-dimensional shape basis model capable of composing the shape of the target face.
S42: take the weight coefficients of the three-dimensional shape basis models as the shape feature parameters.
In steps S41-S42, a shape template library of a standard face can be generated in advance from the shapes of a large number of faces; the shape template library contains the three-dimensional shape basis templates of the standard face, each of which is a complete three-dimensional face shape.
After the server obtains the target two-dimensional face image, it can first identify the feature points characterizing each feature part of the face, with reference to Fig. 4b. Since three-dimensional reconstruction is to be performed, each feature point needs to be fitted in 3D to obtain its corresponding three-dimensional feature point, which is equivalent to adding a depth value to the two-dimensional image. An obtained three-dimensional feature point can be expressed as (x, y, z), where x is the abscissa of the pixel corresponding to the three-dimensional feature point, y is the ordinate of the pixel corresponding to the three-dimensional feature point, and z is the depth value of the three-dimensional feature point; x and y are identical to the x and y values of the feature point obtained from the two-dimensional image. After the three-dimensional feature point of each feature point is obtained, the shape composed of the three-dimensional feature points can be matched against each three-dimensional shape basis model, in order to determine the weight coefficient of each three-dimensional shape basis model capable of composing the shape of the target face. This can be understood as solving for the coefficients in the linear equation f(x) = a1·x1 + a2·x2 + a3·x3 + ... + ai·xi + ... + an·xn, where f(x) is the shape composed of the three-dimensional feature points of the target face, xi is the i-th three-dimensional shape basis model, and ai is the weight coefficient of the i-th three-dimensional shape basis model. Based on the above process, the weight coefficient of each three-dimensional shape basis model capable of forming the shape of the target face in the target two-dimensional face image is determined, and the determined weight coefficients of the three-dimensional shape basis models are the shape feature parameters.
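The linear system above can be solved in the least-squares sense. A minimal numpy sketch follows, under the assumption that each basis model is supplied as an array of corresponding 3-D feature points (the text does not fix a particular solver):

```python
import numpy as np

def fit_shape_weights(target_points, basis_models):
    """Solve f = a1*x1 + ... + an*xn in the least-squares sense:
    stack each basis model's 3-D feature points as one column of B
    and solve B a ~= f for the weight vector a."""
    f = np.asarray(target_points, dtype=float).ravel()              # shape (3k,)
    B = np.stack([np.asarray(m, dtype=float).ravel()
                  for m in basis_models], axis=1)                   # shape (3k, n)
    a, *_ = np.linalg.lstsq(B, f, rcond=None)
    return a
```

Flattening each model's points into a single vector turns the shape-matching problem into an ordinary overdetermined linear system, for which least squares gives the best-fit weight coefficients even when the target shape is not exactly in the span of the basis.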
Specifically, the ratio between the maximum value of the 3D feature points obtained from the target face 2D image and the maximum value of the feature points in each 3D shape basis model may also be taken as the weight coefficient of that 3D shape basis model. Alternatively, the pixel values of the 3D feature points obtained from the target face may be averaged, the pixel values of the 3D feature points of each 3D shape basis model may likewise be averaged, and the two averages compared to determine the weight coefficient of that 3D shape basis model; the weight coefficient of each 3D shape basis model can be obtained in the same way.
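The coefficient-solving process described above can be sketched as an ordinary least-squares fit: stack the 3D feature points of the target face into a vector f, make each 3D shape basis model a column of a matrix, and solve for the weights. This is a minimal illustration under invented toy data, not the patent's actual solver; the basis matrices and point counts are assumptions for the example.

```python
import numpy as np

def fit_shape_weights(target_points, basis_models):
    """Solve f = sum_i a_i * x_i for the weight coefficients a_i
    by least squares, where each basis model is a flattened set
    of 3D feature points."""
    # Each column of X is one flattened 3D shape basis model.
    X = np.stack([b.ravel() for b in basis_models], axis=1)
    f = target_points.ravel()
    weights, *_ = np.linalg.lstsq(X, f, rcond=None)
    return weights

# Toy example: 4 feature points (x, y, z), 2 basis models.
basis = [np.ones((4, 3)), np.arange(12, dtype=float).reshape(4, 3)]
target = 2.0 * basis[0] + 0.5 * basis[1]   # known combination
a = fit_shape_weights(target, basis)
print(np.round(a, 6))  # recovers ≈ [2.0, 0.5]
```

Because the toy target is an exact linear combination of the bases, least squares recovers the weights exactly; for real, noisy feature points it returns the best approximation in the least-squares sense.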
Preferably, the expression parameters can be identified from the target face 2D image using the adjustable training model according to the process shown in Fig. 5a:
S51: Using the adjustable training model, determine the weight coefficient of each basic expression basis model capable of forming the expression shown in the target face 2D image.
S52: Determine the weight coefficients of the basic expression basis models as the expression parameters.
In steps S51~S52, a basic expression template library for a standard face can also be generated in advance from all the expressions a face can express; the basic expression template library contains each basic expression basis model. Referring to Fig. 5b, the present invention provides a schematic diagram of the basic expression basis models of a standard face. The first model in Fig. 5b is the 3D model of the standard face, i.e. the expressionless basic expression basis model, in which each facial feature is in its natural state; in each of the remaining basic expression basis models, only one facial feature changes. For example, in Fig. 5b the expression of the 8th basic expression basis model in the first row is an open mouth, and the expression of the 11th basic expression basis model is a raised left mouth corner. The determined 3D feature points are then matched against each basic expression basis model in Fig. 5b, on which basis the weight coefficient of each basic expression basis model capable of forming the expression shown in the target face 2D image can be determined; these weight coefficients are the expression parameters. Specifically, the 3D feature points obtained from the target face 2D image may form a matrix C, and the 3D feature points of each basic expression basis model may form a matrix Mi. The difference matrix between C and Mi can then be determined, its element-wise absolute value taken and denoted Di, and the sum of the elements of each Di computed. A smaller element sum indicates that the expression shown by that basic expression basis model is closer to the expression shown in the target face 2D image, while a larger element sum of Di indicates that the two expressions differ more. Based on the foregoing, the basic expression basis model with the smallest element sum can be assigned a larger weight coefficient, and the basic expression basis model with the largest element sum a smaller one, thereby obtaining the weight coefficient of each basic expression basis model capable of forming the expression shown in the target face 2D image. The weight coefficient of each basic expression basis model may of course also be determined by a method similar to that of Fig. 4b.
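The matching described above can be sketched as follows: score each basic expression basis model by the element sum of |C − Mi|, then convert the scores to weights so that smaller sums (closer expressions) receive larger weights. The patent only requires that weights decrease as the sum grows; the inverse-distance normalization below is an assumed choice, and the matrices are toy data.

```python
import numpy as np

def expression_weights(C, basis_expressions, eps=1e-8):
    """Score each basic expression basis model Mi by the element sum
    of |C - Mi|; smaller sums (closer expressions) get larger weights.
    The inverse-distance normalization is an assumed choice."""
    sums = np.array([np.abs(C - Mi).sum() for Mi in basis_expressions])
    inv = 1.0 / (sums + eps)           # invert so closer => larger
    return inv / inv.sum()             # normalize to sum to 1

# Toy example: 3 feature points, 2 basis expressions.
C = np.array([[0., 0., 0.], [1., 1., 1.], [2., 2., 2.]])
near = C + 0.1          # almost the target expression
far = C + 5.0           # very different expression
w = expression_weights(C, [near, far])
print(w[0] > w[1])  # → True
```

The closer basis dominates the weight vector, which matches the rule stated in the text: the smallest element sum gets the largest weight coefficient.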
Optionally, the texture parameters can be identified from the target face 2D image using the adjustable training model according to the process shown in Fig. 6a:
S61: Using the adjustable training model, determine the weight coefficient of each 3D texture basis model capable of forming the texture of the target face in the target face 2D image.
S62: Determine the weight coefficients of the 3D texture basis models as the texture parameters.
In steps S61~S62, a texture template library for a standard face can also be generated in advance from the textures of a large number of faces; the texture template library contains the 3D texture basis models of the standard face at various angles. For example, Fig. 6b shows the 3D texture basis models of a face rotated about the Y axis by -70, -50, -30, -15, 0, 15, 30, 50 and 70 degrees. Based on these 3D texture basis models, the weight coefficient of each 3D texture basis model capable of forming the texture of the target face in the target face 2D image can be determined; for details, refer to the descriptions of Fig. 4a and Fig. 5a, which are not repeated here.
Having introduced the above three face feature parameter acquisition processes, the target 3D face model can be reconstructed based on the identified face feature parameters of the target face and the 3D basis models in the standard face template library. Since the face feature parameters include the shape feature parameters, the expression parameters and the texture parameters, the standard face template library correspondingly includes the shape template library, the basic expression template library and the texture template library of the standard face; the target 3D face model is then reconstructed from the shape feature parameters, the expression parameters and the texture parameters together with the corresponding template libraries.
Specifically, the target 3D face model can be reconstructed according to the process shown in Fig. 7, which may comprise the following steps:
S71: Determine the target face 3D shape model based on each 3D shape basis model in the shape template library and the determined weight coefficient of each 3D shape basis model.
In practical applications, 3D shape basis models, basic expression basis models and 3D texture models are all, in essence, matrices. Therefore, in implementing step S71, a weighted sum can be computed over the matrix of each 3D shape basis model and the weight coefficient of each 3D shape basis model determined in Fig. 4a; the weighted-sum result is the target face 3D shape model.
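The weighted summation of step S71 can be sketched directly: treat each basis as a matrix of vertex coordinates and accumulate the weighted sum. The matrices below are invented toy data for illustration.

```python
import numpy as np

def weighted_shape_model(basis_models, weights):
    """Step-S71-style weighted sum: each 3D shape basis model is a
    matrix of vertex coordinates; the target face 3D shape model is
    sum_i a_i * S_i."""
    out = np.zeros_like(basis_models[0], dtype=float)
    for S_i, a_i in zip(basis_models, weights):
        out += a_i * S_i
    return out

S1 = np.full((4, 3), 1.0)    # toy basis: 4 vertices of (x, y, z)
S2 = np.full((4, 3), 3.0)
shape = weighted_shape_model([S1, S2], [0.5, 0.5])
print(shape[0])  # ≈ [2.0, 2.0, 2.0]
```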
S72: Determine the expression offset term of the target face based on each basic expression basis model in the basic expression template library and the determined weight coefficient of each basic expression basis model.
Specifically, the difference matrix between the matrix of each basic expression basis model and the standard face 3D model can be determined, and a weighted sum computed over the difference matrix corresponding to each basic expression basis model and the weight coefficient, determined in Fig. 5a, of each basic expression basis model capable of forming the expression shown in the target face 2D image; the weighted-sum result is the expression offset term of the target face.
S73: Based on the target face 3D shape model and the expression offset term, generate the target 3D face model bearing the expression in the target face 2D image.
In this step, the expression offset term generated in step S72 is fused into the target face 3D shape model generated in step S71, that is: the matrix corresponding to the expression offset term generated in step S72 is summed with the weighted-sum result generated in step S71, and the summation result is the target 3D face model bearing the expression in the target face 2D image.
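Steps S72 and S73 together can be sketched as: accumulate a weighted sum of the differences between each basic expression basis model and the standard face model, then add that offset to the shape model. The "open mouth" basis and all coordinate values below are invented toy data.

```python
import numpy as np

def expression_offset(standard_model, expr_bases, expr_weights):
    """Step S72: weighted sum of the difference matrices between each
    basic expression basis model and the standard face 3D model."""
    offset = np.zeros_like(standard_model, dtype=float)
    for E_i, b_i in zip(expr_bases, expr_weights):
        offset += b_i * (E_i - standard_model)
    return offset

def add_expression(shape_model, offset):
    """Step S73: fuse the offset into the shape model by summation."""
    return shape_model + offset

# Toy data: a flat "standard" face and one basis that moves the last
# vertex downward (a stand-in for an open-mouth expression basis).
standard = np.zeros((3, 3))
open_mouth = standard.copy()
open_mouth[2] = [0.0, -1.0, 0.0]
offset = expression_offset(standard, [open_mouth], [0.5])
result = add_expression(np.ones((3, 3)), offset)
print(result[2])  # ≈ [1.0, 0.5, 1.0]
```

Only the vertex touched by the expression basis moves; the rest of the shape model is unchanged, which is the point of expressing expressions as offsets from the neutral model.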
S74: Determine the 3D texture model of the target face based on each 3D texture basis model in the texture template library and the determined weight coefficient of each 3D texture basis model.
Specifically, a weighted sum can be computed over the matrix of each 3D texture basis model and the weight coefficient of each 3D texture basis model determined in Fig. 6a, yielding the 3D texture model of the target face.
S75: Perform fusion processing on the target 3D face model bearing the expression in the target face 2D image and the 3D texture model of the target face, reconstructing the target 3D face model.
In this step, the 3D texture model of the target face determined in step S74 is fused onto the target 3D face model bearing the expression in the target face 2D image, obtaining the target 3D face model.
S23: Simulate the identified photographing environment features and photographing parameter information, and perform rendering processing on the target 3D face model to obtain an intermediate face 2D image.
In this step, orthogonal projection and a spherical harmonics illumination model can be used to simulate the identified photographing environment features and photographing parameter information, thereby simulating an environment consistent with the shooting environment features and shooting parameters under which the target face 2D image was captured, for example determining the light source and lighting position; the target 3D face model can then be subjected to a simulated photographing process and rendered into an intermediate face 2D image. It should be noted that the Phong reflection model may also be used in place of the spherical harmonics illumination model.
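The Phong reflection model mentioned as an alternative can be sketched per vertex as ambient + diffuse + specular terms. This is a textbook formulation, not the patent's renderer; the reflection coefficients, shininess and light direction below are assumed illustrative values.

```python
import numpy as np

def phong_shade(normal, view_dir, light_dir,
                ka=0.1, kd=0.7, ks=0.2, shininess=16):
    """Per-vertex intensity under the Phong reflection model:
    ambient + diffuse + specular. All directions are normalized;
    the coefficients are illustrative, not taken from the patent."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    diffuse = max(n @ l, 0.0)
    # Reflect the light direction about the normal for the specular term.
    r = 2.0 * (n @ l) * n - l
    specular = max(r @ v, 0.0) ** shininess if diffuse > 0 else 0.0
    return ka + kd * diffuse + ks * specular

# Light shining straight along the surface normal, viewed head-on:
i = phong_shade(np.array([0., 0., 1.]),
                np.array([0., 0., 1.]),
                np.array([0., 0., 1.]))
print(round(i, 3))  # → 1.0
```

With the light aligned with the normal, all three terms reach their maxima (0.1 + 0.7 + 0.2 = 1.0); a light grazing at 90 degrees leaves only the ambient term.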
Specifically, the photographing environment features in the present invention may be illumination information and the like, and the photographing parameter information may be, but is not limited to, alignment parameter information and the like.
S24: Determine whether the target face 2D image and the intermediate face 2D image satisfy the set consistency conditions; if not, perform step S25; if so, perform step S26.
In this step, after an intermediate face 2D image has been rendered in step S23, in order to establish whether the target face 2D image and the rendered intermediate face 2D image are sufficiently similar, the present invention proposes determining whether they satisfy the set consistency conditions. Only when the conditions are satisfied does this show that the face feature parameters, photographing environment features and photographing parameter information identified by the currently configured training model in step S21 have very high fidelity, so that the target 3D face model reconstructed from the face feature parameters is also very similar to the target face; the target face 3D image can then be obtained from this target 3D face model, output and displayed.
Preferably, the consistency conditions provided by the present invention include: the identity feature information respectively characterized by the target face 2D image and the intermediate face 2D image is consistent.
By setting the identity feature information consistency condition, it can be determined whether the face in the intermediate face 2D image and the face in the target face 2D image belong to the same person. Only if they do is the target 3D face model that generated the intermediate face 2D image a relatively good model, with the features of the target face 3D image obtained from it more like those of the face in the target face 2D image, which improves the fidelity of the target face 3D image.
Specifically, whether the identity feature information respectively characterized by the target face 2D image and the intermediate face 2D image is consistent can be determined according to the method shown in Fig. 8:
S81: Using a trained identity recognition model, respectively determine the geographical locations, on the surface of a simulated earth sphere, of the user corresponding to the target face 2D image and the user corresponding to the intermediate face 2D image.
Specifically, the target face 2D image and the intermediate face 2D image can be input into the trained identity recognition model, on the basis of which the geographical locations on the surface of the simulated earth sphere of the user corresponding to the target face 2D image and the user corresponding to the intermediate face 2D image can be determined.
It should be noted that the identity recognition model in the present invention is constrained via the feature vector of its last feature layer, so as to determine whether the identity feature information of the input target face 2D image and intermediate face 2D image is consistent. In addition, the identity recognition model in the present invention may be, but is not limited to, a fixed VGG Face classifier, a FaceNet network model, and the like. Through the constraint on identity feature consistency, the identity recognition model enables the finally obtained training model to learn identity-related facial features from the target face, such as nose height, eye-socket depth and lip thickness, so that the target 3D face model reconstructed on this basis is more like the target face.
Specifically, the training process for obtaining the fixed VGG Face classifier is roughly as follows: a VGG Face classifier is trained on a data set to obtain the fixed VGG Face classifier, enabling the classifier to determine the geographical location, on the surface of the simulated earth sphere, of the user corresponding to a face 2D image input into it. The data set includes the face 2D images of a large number of users and the geographical location of each user. The VGG Face classifier can then, according to the geographical location of each user, simulate a geographical location on the sphere for each user in the data set, the locations simulated on the sphere for different users being distinct.
Based on the above training process, after a face 2D image is input into the fixed VGG Face classifier, the geographical location on the sphere of the user corresponding to the input face 2D image can be computed.
S82: Judge whether the geographical location on the simulated earth sphere of the user corresponding to the target face 2D image and the geographical location on the simulated earth sphere of the user corresponding to the intermediate face 2D image satisfy the identity matching condition; if so, perform step S83; if not, perform step S84.
S83: Determine that the identity feature information characterized by the target face 2D image and the intermediate face 2D image is consistent.
S84: Determine that the identity feature information characterized by the target face 2D image and the intermediate face 2D image is inconsistent.
In steps S82~S84, the identity matching condition in the present invention may be that the difference in geographical location is within a permitted range, and the like. If it is determined, based on the VGG Face classifier, that the difference between the geographical location on the sphere of the user corresponding to the target face 2D image and the geographical location on the sphere of the user corresponding to the intermediate face 2D image is within the permitted range, it is determined that the identity feature information characterized by the target face 2D image and the intermediate face 2D image is consistent, i.e. the users in the two images are the same person. If the difference is not within the permitted range, this shows that the users characterized in the two images are not the same person and differ considerably, which further shows that the fidelity of the face feature parameters, photographing environment features and photographing parameter information identified by the training model in step S21 is insufficient, and the training model in step S21 needs to be adjusted.
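The identity matching condition can be sketched by treating each user's location as a point on a unit "earth" sphere and comparing the great-circle distance against a permitted range. The threshold and the example points are invented for illustration; the patent does not specify how the permitted range is chosen.

```python
import numpy as np

def sphere_distance(p, q):
    """Great-circle (geodesic) distance between two points on the
    unit sphere representing the simulated earth."""
    p = p / np.linalg.norm(p)
    q = q / np.linalg.norm(q)
    return float(np.arccos(np.clip(p @ q, -1.0, 1.0)))

def same_identity(loc_target, loc_intermediate, max_dist=0.3):
    """Identity matching condition: locations within the permitted
    range count as the same person. The threshold is an assumed
    illustrative value, in radians."""
    return sphere_distance(loc_target, loc_intermediate) <= max_dist

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.99, 0.1, 0.0])     # nearby point on the sphere
c = np.array([0.0, 1.0, 0.0])      # a quarter of the globe away
print(same_identity(a, b), same_identity(a, c))  # → True False
```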
Preferably, the consistency conditions provided by the present invention further include: the Euclidean distance between feature points in the target face 2D image and corresponding feature points in the intermediate face 2D image satisfies the feature point matching condition, and/or the pixel values between the target face 2D image and the intermediate face 2D image satisfy the image matching condition.
Specifically, the feature point matching condition in the present invention may be, but is not limited to, not exceeding a preset Euclidean distance threshold, and the image matching condition in the present invention may be, but is not limited to, not exceeding a pixel value difference threshold.
Specifically, in the present invention at least 68 feature points can be identified from each of the target face 2D image and the intermediate face 2D image, the 68 feature points being distributed over the eyebrows, eyes, nose, mouth and cheeks. For example, feature points are extracted from the target face 2D image and the intermediate face 2D image according to the feature point schematic diagram shown in Fig. 4a. Then, for each feature point, it is determined whether the Euclidean distance between the position of the feature point in the target face 2D image and the position of the corresponding feature point in the intermediate face 2D image exceeds the preset Euclidean distance threshold. If it does not, it is determined that the feature point matching condition is satisfied, i.e. the two images are well aligned, which also ensures that the expression in the intermediate face 2D image is highly similar to the expression in the target face 2D image. If it exceeds the preset Euclidean distance threshold, it is determined that the feature point matching condition is not satisfied, and the training model needs to be adjusted so that the intermediate face 2D image obtained with the adjusted model is aligned with the target face 2D image.
Specifically, in the present invention the pixel value difference between the target face 2D image and the intermediate face 2D image can also be determined, and it is then judged whether the pixel value difference exceeds the pixel value difference threshold. If it does not, this shows that the image matching condition is satisfied, and that the shooting environment features and texture information (such as skin and color) of the target face 2D image identified by the training model have high fidelity. When it is determined that the image matching condition is not satisfied, the training model can be adjusted so that the normal vectors of the reconstructed target 3D face model are adjusted according to the shading of the target face 2D image, making the shape of the resulting target face 3D image more similar to the shape in the target face 2D image.
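Both matching conditions can be sketched concretely: a per-landmark Euclidean distance check over the 68 points, and an image-level pixel difference check. The thresholds and the use of the mean absolute pixel difference are assumptions; the patent only says the difference must not exceed a threshold.

```python
import numpy as np

def landmarks_match(pts_target, pts_intermediate, dist_threshold=2.0):
    """Feature point matching condition: every corresponding landmark
    pair (e.g. the 68 points over eyebrows, eyes, nose, mouth and
    cheeks) must lie within the preset Euclidean distance threshold."""
    dists = np.linalg.norm(pts_target - pts_intermediate, axis=1)
    return bool(np.all(dists <= dist_threshold))

def images_match(img_target, img_intermediate, pixel_threshold=10.0):
    """Image matching condition: the mean absolute pixel difference
    must not exceed the threshold. Using the mean is an assumed
    reading of 'pixel value difference'."""
    diff = np.abs(img_target.astype(float) - img_intermediate.astype(float))
    return bool(diff.mean() <= pixel_threshold)

pts_a = np.zeros((68, 2))
pts_b = pts_a + 1.0                 # every landmark off by sqrt(2) px
img_a = np.full((4, 4), 100.0)
img_b = img_a + 5.0
print(landmarks_match(pts_a, pts_b), images_match(img_a, img_b))
# → True True
```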
S25: Adjust the training model, and continue with step S21.
Specifically, when adjusting the training model of step S21, the convolution kernels and biases in the training model can be adjusted based on the identification result of the identity recognition model, the feature point matching result and/or the image matching result; steps S21~S24 are then executed again based on the adjusted training model, until the judgment result of step S24 is yes, i.e. the target face 2D image and the intermediate face 2D image satisfy the set consistency conditions.
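The S21~S26 loop can be sketched abstractly: render an intermediate result from the current model parameters, check the consistency condition, adjust, and repeat. Everything below is a toy stand-in — the "image" is a single brightness value, `render` stands in for the encoder-plus-renderer forward pass, and `adjust` for the update of convolution kernels and biases, all of which are far more complex in practice.

```python
def fit_until_consistent(target, render, adjust, params,
                         tolerance=1e-4, max_iters=100):
    """Sketch of the S21-S26 loop: render an intermediate image from
    the current parameters, check the consistency condition, and
    adjust the model until the condition holds."""
    for _ in range(max_iters):
        intermediate = render(params)
        if abs(intermediate - target) <= tolerance:  # consistency check
            return params, intermediate
        params = adjust(params, target, intermediate)
    return params, render(params)

# Toy stand-ins: a linear "renderer" and a small corrective step.
target_value = 0.8
render = lambda p: p * 2.0
adjust = lambda p, t, i: p + 0.1 * (t - i)
params, out = fit_until_consistent(target_value, render, adjust, 0.0)
print(round(out, 3))  # → 0.8
```

The loop converges because each adjustment moves the parameters a fraction of the remaining error toward the target, mirroring how the encoder is nudged until the rendered intermediate image matches the photograph.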
S26: Obtain the target face 3D image based on the most recently reconstructed target 3D face model.
When the judgment result of step S24 is yes, the target face 3D image is obtained based on the most recently reconstructed target 3D face model, and the target face 3D image thus obtained has higher fidelity. Fig. 4a gives an effect diagram of the target face 2D image; Fig. 9 gives an effect diagram of the target 3D face model reconstructed from the target face 2D image of Fig. 4a. It can be concluded that, with the method provided by the present invention, the obtained target 3D face model is highly similar to the target face.
Referring to Fig. 10, which is a logical architecture diagram for implementing the face 3D image generation method provided by the present invention, taking the adjustable training model to be a VGG Face encoder, the identity recognition model to be a fixed VGG Face classifier, and the photographing environment feature to be illumination information, the implementation process is roughly as follows. The face image numbered 1 in Fig. 10 is the target face 2D image; it is input into the adjustable VGG Face encoder, which can then learn the shape feature parameters, expression parameters, texture parameters, photographing parameter information and illumination information from the target face 2D image. The target 3D face model is first reconstructed from the face feature parameters; the photographing parameter information and illumination information are then simulated, and the reconstructed target 3D face model is rendered to obtain the intermediate face 2D image, i.e. the image numbered 2 in Fig. 10. The target face 2D image and the intermediate face 2D image are then input into the fixed VGG Face classifier, which can determine whether the identity feature information characterized by the target face 2D image and the intermediate face 2D image is consistent; in addition, it can be determined whether the feature points in the target face 2D image and the intermediate face 2D image satisfy the feature point matching condition, and whether the pixel values between the target face 2D image and the intermediate face 2D image satisfy the image matching condition. When any of these conditions is not satisfied, the adjustable VGG Face encoder is adjusted and the logic shown in Fig. 10 is executed again, until it is determined that all three conditions are satisfied; when all three conditions are satisfied, the target face 3D image is obtained based on the most recent target 3D face model.
In the face 3D image generation method provided by the present invention, after the target face 2D image is obtained, it is input into the adjustable training model, which can identify the face feature parameters, photographing environment features and photographing parameter information from the target face 2D image. The target 3D face model is then reconstructed based on the identified face feature parameters and the 3D basis models in the standard face template library; the identified photographing environment features and photographing parameter information are simulated, and the target 3D face model is rendered to obtain the intermediate face 2D image. It is then determined whether the target face 2D image and the intermediate face 2D image satisfy the set consistency conditions. When they do not, the training model is adjusted and the step of obtaining the intermediate face 2D image from the target face 2D image using the adjusted training model is returned to; when they do, the target face 3D image is obtained based on the most recently reconstructed target 3D face model. With the above method, the adjustable training model is dynamically adjusted so that the face feature parameters obtained on this basis are closer to the target face in the target face 2D image, and the identified photographing environment features and photographing parameter information are closer to those under which the target face 2D image was captured, so that the target 3D face model obtained on this basis is closer to the target face and the resulting target face 3D image has higher fidelity.
Based on the same inventive concept, an embodiment of the present invention further provides a face 3D image generation apparatus. Since the principle by which the apparatus solves the problem is similar to that of the face 3D image generation method, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
As shown in Fig. 11, which is a structural schematic diagram of the face 3D image generation apparatus provided by an embodiment of the present invention, the apparatus comprises:
an obtaining unit 111, configured to obtain an intermediate face 2D image from a target face 2D image using an adjustable training model, the intermediate face 2D image being obtained by reconstructing a target 3D face model from the face feature parameters identified from the target face 2D image and the 3D basis models in a standard face template library, and by simulating the photographing environment features and photographing parameter information identified from the target face 2D image and performing rendering processing on the target 3D face model;
a determination unit 112, configured to determine whether the target face 2D image and the intermediate face 2D image satisfy set consistency conditions;
an adjustment unit 113, configured to, when the determination unit determines that the consistency conditions are not satisfied, adjust the training model and return to the step of obtaining the intermediate face 2D image from the target face 2D image using the adjusted training model; and
a generation unit 114, configured to, when the determination unit determines that the consistency conditions are satisfied, obtain a target face 3D image based on the most recently reconstructed target 3D face model.
Preferably, the consistency conditions include:
the identity feature information characterized by the target face 2D image and the intermediate face 2D image is consistent.
Optionally, the consistency conditions further include:
the Euclidean distance between feature points in the target face 2D image and corresponding feature points in the intermediate face 2D image satisfies a feature point matching condition, and/or the pixel values between the target face 2D image and the intermediate face 2D image satisfy an image matching condition.
Preferably, the face feature parameters include shape feature parameters, the standard face template library includes a shape template library of a standard face, and the 3D basis models include 3D shape basis models; then
the obtaining unit 111 is specifically configured to determine, using the adjustable training model, the weight coefficient of each 3D shape basis model capable of forming the shape of the target face, and to determine the weight coefficients of the 3D shape basis models as the shape feature parameters.
Optionally, the face feature parameters further include expression parameters, the standard face template library further includes a basic expression template library of the standard face, and the 3D basis models further include basic expression basis models;
the obtaining unit 111 is further configured to determine, using the adjustable training model, the weight coefficient of each basic expression basis model capable of forming the expression shown in the target face 2D image, and to determine the weight coefficients of the basic expression basis models as the expression parameters.
Optionally, the face feature parameters further include texture parameters, the standard face template library further includes a texture template library of the standard face, and the 3D basis models further include 3D texture basis models;
the obtaining unit 111 is further configured to determine, using the adjustable training model, the weight coefficient of each 3D texture basis model capable of forming the texture of the target face in the target face 2D image, and to determine the weight coefficients of the 3D texture basis models as the texture parameters.
Further, the obtaining unit 111 is specifically configured to: determine a target face 3D shape model based on each 3D shape basis model in the shape template library and the determined weight coefficient of each 3D shape basis model; determine an expression offset term of the target face based on each basic expression basis model in the basic expression template library and the determined weight coefficient of each basic expression basis model; generate, based on the target face 3D shape model and the expression offset term, a target 3D face model bearing the expression in the target face 2D image; determine a 3D texture model of the target face based on each 3D texture basis model in the texture template library and the determined weight coefficient of each 3D texture basis model; and perform fusion processing on the target 3D face model bearing the expression in the target face 2D image and the 3D texture model of the target face to reconstruct the target 3D face model.
Optionally, the determination unit 112 is specifically configured to determine, by the following method, whether the identity feature information characterized by the target face 2D image and the intermediate face 2D image is consistent: using a trained identity recognition model, respectively determine the geographical locations, on the surface of a simulated earth sphere, of the user corresponding to the target face 2D image and the user corresponding to the intermediate face 2D image; if it is determined that the geographical location on the simulated earth sphere of the user corresponding to the target face 2D image and the geographical location on the simulated earth sphere of the user corresponding to the intermediate face 2D image satisfy an identity matching condition, determine that the identity feature information characterized by the target face 2D image and the intermediate face 2D image is consistent; if it is determined that they do not, determine that the identity feature information characterized by the two images is inconsistent.
For convenience of description, the components above are divided by function and described as separate modules (or units). Of course, when implementing the present invention, the functions of the modules (or units) may be implemented in one or more pieces of software or hardware.
Having described the face three-dimensional image generation method, apparatus, and readable medium of the exemplary embodiments of the present invention, a computing device according to another exemplary embodiment of the present invention is introduced next.
Those skilled in the art will appreciate that aspects of the present invention may be implemented as a system, a method, or a program product. Therefore, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, and the like), or an embodiment combining hardware and software aspects, which may be collectively referred to herein as a "circuit", "module", or "system".
In some possible implementations, a computing device according to the present invention may include at least one processing unit and at least one storage unit. The storage unit stores program code which, when executed by the processing unit, causes the processing unit to perform the steps of the face three-dimensional image generation method according to the various exemplary embodiments described above in this specification. For example, the processing unit may perform steps S21 to S26 of the face three-dimensional image generation flow shown in Figure 2.
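The generation flow referenced here (steps S21 to S26) is an analysis-by-synthesis loop: regress parameters from the target image, reconstruct the 3D model, re-render it under the estimated capture conditions, compare with the target, and adjust the model until the consistency condition holds. A sketch of that loop, in which every callable is a hypothetical stand-in since the patent fixes no concrete interfaces:

```python
import numpy as np

def fit_face_model(target_image, predict, reconstruct, render, adjust,
                   tol=1e-3, max_iters=100):
    """Analysis-by-synthesis loop sketch (cf. steps S21-S26)."""
    mesh = None
    for _ in range(max_iters):
        params = predict(target_image)      # regress shape/expression/texture,
                                            # shooting-environment and camera params
        mesh = reconstruct(params)          # rebuild the 3D face from the bases
        rendered = render(mesh, params)     # re-render under estimated conditions
        # Consistency condition (placeholder): mean pixel difference below tol.
        if np.mean(np.abs(target_image - rendered)) < tol:
            break
        adjust(target_image, rendered)      # update the trainable model, iterate
    return mesh
```

A usage example with toy callables: a "model" whose single parameter creeps toward the target brightness each time `adjust` is called, so the loop terminates once the rendered image matches the target.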
A computing device 110 according to this embodiment of the present invention is described below with reference to Figure 12. The computing device 110 shown in Figure 12 is merely an example and should not impose any limitation on the functions or scope of use of the embodiments of the present invention.
As shown in Figure 12, the computing device 110 takes the form of a general-purpose computing device. Its components may include, but are not limited to: the at least one processing unit 111 and at least one storage unit 112 mentioned above, and a bus 113 connecting the different system components (including the storage unit 112 and the processing unit 111).
The bus 113 represents one or more of several classes of bus structures, including a memory bus or memory controller, a peripheral bus, a processor bus, or a local bus using any of a variety of bus architectures.
The storage unit 112 may include readable media in the form of volatile memory, such as a random access memory (RAM) 1121 and/or a cache memory 1122, and may further include a read-only memory (ROM) 1123.
The storage unit 112 may also include a program/utility 1125 having a set of (at least one) program modules 1124. Such program modules 1124 include, but are not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
The computing device 110 may also communicate with one or more external devices 114 (such as a keyboard or pointing device), with one or more devices that enable a user to interact with the computing device 110, and/or with any device (such as a router or modem) that enables the computing device 110 to communicate with one or more other computing devices. Such communication may take place through an input/output (I/O) interface 115. In addition, the computing device 110 may communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 116. As shown, the network adapter 116 communicates with the other modules of the computing device 110 through the bus 113. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the computing device 110, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
In some possible implementations, aspects of the face three-dimensional image generation method provided by the present invention may also be implemented as a program product that includes program code. When the program product runs on a computing device, the program code causes the computing device to perform the steps of the face three-dimensional image generation method according to the various exemplary embodiments described above in this specification; for example, the computing device may perform steps S21 to S26 of the face three-dimensional image generation flow shown in Figure 2.
The program product may employ any combination of one or more readable media. A readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination thereof.
The program product for the face three-dimensional image generation method of the embodiments of the present invention may employ a portable compact disc read-only memory (CD-ROM), include program code, and be runnable on a computing device. However, the program product of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium containing or storing a program that can be used by, or in connection with, an instruction execution system, apparatus, or device.
A readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device.
The program code contained on a readable medium may be transmitted over any suitable medium, including but not limited to wireless, wired, optical cable, RF, or any suitable combination thereof.
Program code for carrying out the operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server. In the case involving a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the detailed description above, this division is merely exemplary and not mandatory. In fact, according to embodiments of the present invention, the features and functions of two or more units described above may be embodied in a single unit; conversely, the features and functions of one unit described above may be further divided and embodied in multiple units.
In addition, although the operations of the method of the present invention are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve the desired result. Additionally or alternatively, certain steps may be omitted, multiple steps may be merged into one step, and/or one step may be decomposed into multiple steps.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art may make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as encompassing the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make various changes and variations to the present invention without departing from its spirit and scope. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to encompass them.

Claims (13)

1. A face three-dimensional image generation method, comprising:
identifying, from a target two-dimensional face image using an adjustable trainable model, facial feature parameters, shooting-environment features, and shooting parameter information;
reconstructing a target three-dimensional face model from the identified facial feature parameters and the three-dimensional basis models in a standard face template library;
rendering the target three-dimensional face model while simulating the identified shooting-environment features and shooting parameter information, to obtain an intermediate two-dimensional face image;
determining whether the target two-dimensional face image and the intermediate two-dimensional face image satisfy a preset consistency condition;
when the consistency condition is not satisfied, adjusting the trainable model and returning, with the adjusted model, to the step of obtaining the intermediate two-dimensional face image from the target two-dimensional face image; and
when the consistency condition is satisfied, obtaining a target face three-dimensional image from the most recently reconstructed target three-dimensional face model.
2. The method according to claim 1, wherein the consistency condition comprises:
the identity feature information characterized by the target two-dimensional face image being consistent with that characterized by the intermediate two-dimensional face image.
3. The method according to claim 2, wherein the consistency condition further comprises:
the Euclidean distances between feature points in the target two-dimensional face image and the corresponding feature points in the intermediate two-dimensional face image satisfying a feature-point matching condition, and/or the pixel-value differences between the target two-dimensional face image and the intermediate two-dimensional face image satisfying an image matching condition.
4. The method according to any one of claims 1 to 3, wherein the facial feature parameters comprise shape feature parameters, the standard face template library comprises a shape template library of a standard face, and the three-dimensional basis models comprise three-dimensional shape basis models; and
identifying facial feature parameters from the target two-dimensional face image using the adjustable trainable model comprises:
determining, using the adjustable trainable model, the weight coefficients of the three-dimensional shape basis models that together constitute the shape of the target face; and
taking the weight coefficients of the three-dimensional shape basis models as the shape feature parameters.
5. The method according to claim 4, wherein the facial feature parameters further comprise expression parameters, the standard face template library further comprises a basic-expression template library of the standard face, and the three-dimensional basis models further comprise basic-expression basis models; and
identifying facial feature parameters from the target two-dimensional face image using the adjustable trainable model further comprises:
determining, using the adjustable trainable model, the weight coefficients of the basic-expression basis models that together constitute the expression shown in the target two-dimensional face image; and
taking the weight coefficients of the basic-expression basis models as the expression parameters.
6. The method according to claim 5, wherein the facial feature parameters further comprise texture parameters, the standard face template library further comprises a texture template library of the standard face, and the three-dimensional basis models further comprise three-dimensional texture basis models; and
identifying facial feature parameters from the target two-dimensional face image using the adjustable trainable model further comprises:
determining, using the adjustable trainable model, the weight coefficients of the three-dimensional texture basis models that together form the texture of the target face in the target two-dimensional face image; and
taking the weight coefficients of the three-dimensional texture basis models as the texture parameters.
7. The method according to claim 6, wherein reconstructing the target three-dimensional face model from the facial feature parameters identified from the target two-dimensional face image and the three-dimensional basis models in the standard face template library specifically comprises:
determining a target three-dimensional face shape model based on the three-dimensional shape basis models in the shape template library and the determined weight coefficient of each three-dimensional shape basis model;
determining an expression offset term of the target face based on the basic-expression basis models in the basic-expression template library and the determined weight coefficient of each basic-expression basis model;
generating, based on the target three-dimensional face shape model and the expression offset term, a target three-dimensional face model bearing the expression shown in the target two-dimensional face image;
determining a three-dimensional texture model of the target face based on the three-dimensional texture basis models in the texture template library and the determined weight coefficient of each three-dimensional texture basis model; and
fusing the target three-dimensional face model bearing the expression shown in the target two-dimensional face image with the three-dimensional texture model of the target face to reconstruct the target three-dimensional face model.
8. The method according to claim 2, wherein whether the identity feature information characterized by the target two-dimensional face image and by the intermediate two-dimensional face image is consistent is determined as follows:
using a trained recognition model, determining the geographic position, on the surface of a simulated globe, of the user corresponding to the target two-dimensional face image and of the user corresponding to the intermediate two-dimensional face image, respectively;
if the geographic position of the user corresponding to the target two-dimensional face image and that of the user corresponding to the intermediate two-dimensional face image on the surface of the simulated globe satisfy an identity matching condition, determining that the identity feature information characterized by the target two-dimensional face image and by the intermediate two-dimensional face image is consistent; and
if the condition is not satisfied, determining that the identity feature information characterized by the target two-dimensional face image and by the intermediate two-dimensional face image is inconsistent.
9. A face three-dimensional image generation apparatus, comprising:
an obtaining unit, configured to obtain an intermediate two-dimensional face image from a target two-dimensional face image using an adjustable trainable model, wherein the intermediate two-dimensional face image is obtained by reconstructing a target three-dimensional face model from the facial feature parameters identified from the target two-dimensional face image and the three-dimensional basis models in a standard face template library, and rendering the target three-dimensional face model while simulating the shooting-environment features and shooting parameter information identified from the target two-dimensional face image;
a determination unit, configured to determine whether the target two-dimensional face image and the intermediate two-dimensional face image satisfy a preset consistency condition;
an adjustment unit, configured to, when the determination unit determines that the consistency condition is not satisfied, adjust the trainable model and return, with the adjusted model, to the step of obtaining the intermediate two-dimensional face image from the target two-dimensional face image; and
a generation unit, configured to, when the determination unit determines that the consistency condition is satisfied, obtain a target face three-dimensional image from the most recently reconstructed target three-dimensional face model.
10. The apparatus according to claim 9, wherein the consistency condition comprises:
the identity feature information characterized by the target two-dimensional face image being consistent with that characterized by the intermediate two-dimensional face image.
11. The apparatus according to claim 10, wherein the consistency condition further comprises:
the Euclidean distances between feature points in the target two-dimensional face image and the corresponding feature points in the intermediate two-dimensional face image satisfying a feature-point matching condition, and/or the pixel-value differences between the target two-dimensional face image and the intermediate two-dimensional face image satisfying an image matching condition.
12. A computer-readable medium storing computer-executable instructions, wherein the computer-executable instructions are configured to perform the method according to any one of claims 1 to 8.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1 to 8.
CN201811459413.0A 2018-11-30 2018-11-30 Human face three-dimensional image generation method and device and readable medium Active CN109377544B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811459413.0A CN109377544B (en) 2018-11-30 2018-11-30 Human face three-dimensional image generation method and device and readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811459413.0A CN109377544B (en) 2018-11-30 2018-11-30 Human face three-dimensional image generation method and device and readable medium

Publications (2)

Publication Number Publication Date
CN109377544A true CN109377544A (en) 2019-02-22
CN109377544B CN109377544B (en) 2022-12-23

Family

ID=65376343

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811459413.0A Active CN109377544B (en) 2018-11-30 2018-11-30 Human face three-dimensional image generation method and device and readable medium

Country Status (1)

Country Link
CN (1) CN109377544B (en)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109902767A (en) * 2019-04-11 2019-06-18 网易(杭州)网络有限公司 Model training method, image processing method and device, equipment and medium
CN110163953A (en) * 2019-03-11 2019-08-23 腾讯科技(深圳)有限公司 Three-dimensional facial reconstruction method, device, storage medium and electronic device
CN110298319A (en) * 2019-07-01 2019-10-01 北京字节跳动网络技术有限公司 Image composition method and device
CN110310224A (en) * 2019-07-04 2019-10-08 北京字节跳动网络技术有限公司 Light efficiency rendering method and device
CN110428491A (en) * 2019-06-24 2019-11-08 北京大学 Three-dimensional facial reconstruction method, device, equipment and medium based on single-frame images
CN110675475A (en) * 2019-08-19 2020-01-10 腾讯科技(深圳)有限公司 Face model generation method, device, equipment and storage medium
CN110807451A (en) * 2020-01-08 2020-02-18 腾讯科技(深圳)有限公司 Face key point detection method, device, equipment and storage medium
CN110941332A (en) * 2019-11-06 2020-03-31 北京百度网讯科技有限公司 Expression driving method and device, electronic equipment and storage medium
CN111080626A (en) * 2019-12-19 2020-04-28 联想(北京)有限公司 Detection method and electronic equipment
CN111161399A (en) * 2019-12-10 2020-05-15 盎锐(深圳)信息科技有限公司 Data processing method and component for generating three-dimensional model based on two-dimensional image
CN111243106A (en) * 2020-01-21 2020-06-05 杭州微洱网络科技有限公司 Method for correcting three-dimensional human body model based on 2D human body image
CN111695471A (en) * 2020-06-02 2020-09-22 北京百度网讯科技有限公司 Virtual image generation method, device, equipment and storage medium
CN111729314A (en) * 2020-06-30 2020-10-02 网易(杭州)网络有限公司 Virtual character face pinching processing method and device and readable storage medium
CN111881709A (en) * 2019-05-03 2020-11-03 爱唯秀股份有限公司 Face image processing method and device
CN112052834A (en) * 2020-09-29 2020-12-08 支付宝(杭州)信息技术有限公司 Face recognition method, device and equipment based on privacy protection
CN112150615A (en) * 2020-09-24 2020-12-29 四川川大智胜软件股份有限公司 Face image generation method and device based on three-dimensional face model and storage medium
WO2021093453A1 (en) * 2019-11-15 2021-05-20 腾讯科技(深圳)有限公司 Method for generating 3d expression base, voice interactive method, apparatus and medium
CN112884881A (en) * 2021-01-21 2021-06-01 魔珐(上海)信息科技有限公司 Three-dimensional face model reconstruction method and device, electronic equipment and storage medium
CN113129425A (en) * 2019-12-31 2021-07-16 Tcl集团股份有限公司 Face image three-dimensional reconstruction method, storage medium and terminal device
CN113177466A (en) * 2021-04-27 2021-07-27 北京百度网讯科技有限公司 Identity recognition method and device based on face image, electronic equipment and medium
CN113240802A (en) * 2021-06-23 2021-08-10 中移(杭州)信息技术有限公司 Three-dimensional reconstruction whole-house virtual dimension installing method, device, equipment and storage medium
US20210303923A1 (en) * 2020-03-31 2021-09-30 Sony Corporation Cleaning dataset for neural network training
CN113838176A (en) * 2021-09-16 2021-12-24 网易(杭州)网络有限公司 Model training method, three-dimensional face image generation method and equipment
CN113963425A (en) * 2021-12-22 2022-01-21 北京的卢深视科技有限公司 Testing method and device of human face living body detection system and storage medium
CN114549728A (en) * 2022-03-25 2022-05-27 北京百度网讯科技有限公司 Training method of image processing model, image processing method, device and medium
CN116188640A (en) * 2022-12-09 2023-05-30 北京百度网讯科技有限公司 Three-dimensional virtual image generation method, device, equipment and medium
CN116912639A (en) * 2023-09-13 2023-10-20 腾讯科技(深圳)有限公司 Training method and device of image generation model, storage medium and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592309A (en) * 2011-12-26 2012-07-18 北京工业大学 Modeling method of nonlinear three-dimensional face
US20130215113A1 (en) * 2012-02-21 2013-08-22 Mixamo, Inc. Systems and methods for animating the faces of 3d characters using images of human faces
CN103593870A (en) * 2013-11-12 2014-02-19 杭州摩图科技有限公司 Picture processing device and method based on human faces
CN107067429A (en) * 2017-03-17 2017-08-18 徐迪 Video editing system and method that face three-dimensional reconstruction and face based on deep learning are replaced
CN108510437A (en) * 2018-04-04 2018-09-07 科大讯飞股份有限公司 A kind of virtual image generation method, device, equipment and readable storage medium storing program for executing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIKEWIND1993: Notes on "A Morphable Model for the Synthesis of 3D Faces", CSDN blog, https://blog.csdn.net/likewind1993/article/details/79177566 *

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163953A (en) * 2019-03-11 2019-08-23 腾讯科技(深圳)有限公司 Three-dimensional facial reconstruction method, device, storage medium and electronic device
CN110163953B (en) * 2019-03-11 2023-08-25 腾讯科技(深圳)有限公司 Three-dimensional face reconstruction method and device, storage medium and electronic device
CN109902767A (en) * 2019-04-11 2019-06-18 网易(杭州)网络有限公司 Model training method, image processing method and device, equipment and medium
CN109902767B (en) * 2019-04-11 2021-03-23 网易(杭州)网络有限公司 Model training method, image processing device, model training apparatus, image processing apparatus, and computer-readable medium
CN111881709A (en) * 2019-05-03 2020-11-03 爱唯秀股份有限公司 Face image processing method and device
CN110428491A (en) * 2019-06-24 2019-11-08 北京大学 Three-dimensional facial reconstruction method, device, equipment and medium based on single-frame images
CN110428491B (en) * 2019-06-24 2021-05-04 北京大学 Three-dimensional face reconstruction method, device, equipment and medium based on single-frame image
CN110298319A (en) * 2019-07-01 2019-10-01 北京字节跳动网络技术有限公司 Image composition method and device
CN110310224A (en) * 2019-07-04 2019-10-08 北京字节跳动网络技术有限公司 Light efficiency rendering method and device
CN110675475A (en) * 2019-08-19 2020-01-10 腾讯科技(深圳)有限公司 Face model generation method, device, equipment and storage medium
CN110675475B (en) * 2019-08-19 2024-02-20 腾讯科技(深圳)有限公司 Face model generation method, device, equipment and storage medium
CN110941332A (en) * 2019-11-06 2020-03-31 北京百度网讯科技有限公司 Expression driving method and device, electronic equipment and storage medium
US11748934B2 (en) 2019-11-15 2023-09-05 Tencent Technology (Shenzhen) Company Limited Three-dimensional expression base generation method and apparatus, speech interaction method and apparatus, and medium
WO2021093453A1 (en) * 2019-11-15 2021-05-20 腾讯科技(深圳)有限公司 Method for generating 3d expression base, voice interactive method, apparatus and medium
CN111161399B (en) * 2019-12-10 2024-04-19 上海青燕和示科技有限公司 Data processing method and assembly for generating three-dimensional model based on two-dimensional image
CN111161399A (en) * 2019-12-10 2020-05-15 盎锐(深圳)信息科技有限公司 Data processing method and component for generating three-dimensional model based on two-dimensional image
CN111080626A (en) * 2019-12-19 2020-04-28 联想(北京)有限公司 Detection method and electronic equipment
CN113129425A (en) * 2019-12-31 2021-07-16 Tcl集团股份有限公司 Face image three-dimensional reconstruction method, storage medium and terminal device
CN110807451B (en) * 2020-01-08 2020-06-02 腾讯科技(深圳)有限公司 Face key point detection method, device, equipment and storage medium
CN110807451A (en) * 2020-01-08 2020-02-18 腾讯科技(深圳)有限公司 Face key point detection method, device, equipment and storage medium
CN111243106B (en) * 2020-01-21 2021-05-25 杭州微洱网络科技有限公司 Method for correcting three-dimensional human body model based on 2D human body image
CN111243106A (en) * 2020-01-21 2020-06-05 杭州微洱网络科技有限公司 Method for correcting three-dimensional human body model based on 2D human body image
US11748943B2 (en) * 2020-03-31 2023-09-05 Sony Group Corporation Cleaning dataset for neural network training
US20210303923A1 (en) * 2020-03-31 2021-09-30 Sony Corporation Cleaning dataset for neural network training
CN111695471A (en) * 2020-06-02 2020-09-22 北京百度网讯科技有限公司 Avatar generation method, apparatus, device and storage medium
CN111695471B (en) * 2020-06-02 2023-06-27 北京百度网讯科技有限公司 Avatar generation method, apparatus, device and storage medium
CN111729314A (en) * 2020-06-30 2020-10-02 网易(杭州)网络有限公司 Virtual character face customization ("face pinching") processing method, device and readable storage medium
CN112150615A (en) * 2020-09-24 2020-12-29 四川川大智胜软件股份有限公司 Face image generation method and device based on three-dimensional face model and storage medium
CN112052834B (en) * 2020-09-29 2022-04-08 支付宝(杭州)信息技术有限公司 Face recognition method, device and equipment based on privacy protection
CN112052834A (en) * 2020-09-29 2020-12-08 支付宝(杭州)信息技术有限公司 Face recognition method, device and equipment based on privacy protection
CN112884881A (en) * 2021-01-21 2021-06-01 魔珐(上海)信息科技有限公司 Three-dimensional face model reconstruction method and device, electronic equipment and storage medium
CN113177466A (en) * 2021-04-27 2021-07-27 北京百度网讯科技有限公司 Identity recognition method and device based on face image, electronic equipment and medium
CN113240802A (en) * 2021-06-23 2021-08-10 中移(杭州)信息技术有限公司 Whole-house virtual decoration method, device, equipment and storage medium based on three-dimensional reconstruction
CN113838176A (en) * 2021-09-16 2021-12-24 网易(杭州)网络有限公司 Model training method, three-dimensional face image generation method and equipment
CN113838176B (en) * 2021-09-16 2023-09-15 网易(杭州)网络有限公司 Model training method, three-dimensional face image generation method and three-dimensional face image generation equipment
CN113963425B (en) * 2021-12-22 2022-03-25 北京的卢深视科技有限公司 Method and device for testing a face liveness detection system, and storage medium
CN113963425A (en) * 2021-12-22 2022-01-21 北京的卢深视科技有限公司 Method and device for testing a face liveness detection system, and storage medium
CN114549728A (en) * 2022-03-25 2022-05-27 北京百度网讯科技有限公司 Training method of image processing model, image processing method, device and medium
CN116188640A (en) * 2022-12-09 2023-05-30 北京百度网讯科技有限公司 Three-dimensional virtual image generation method, device, equipment and medium
CN116188640B (en) * 2022-12-09 2023-09-08 北京百度网讯科技有限公司 Three-dimensional virtual image generation method, device, equipment and medium
CN116912639B (en) * 2023-09-13 2024-02-09 腾讯科技(深圳)有限公司 Training method and device of image generation model, storage medium and electronic equipment
CN116912639A (en) * 2023-09-13 2023-10-20 腾讯科技(深圳)有限公司 Training method and device of image generation model, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN109377544B (en) 2022-12-23

Similar Documents

Publication Publication Date Title
CN109377544A (en) Face three-dimensional image generation method, device and readable medium
Gonzalez-Franco et al. The rocketbox library and the utility of freely available rigged avatars
US20240087200A1 (en) Systems and methods for real-time complex character animations and interactivity
Magnenat-Thalmann et al. Handbook of virtual humans
US9245176B2 (en) Content retargeting using facial layers
CN111542861A (en) System and method for rendering an avatar using a depth appearance model
CN110163054A (en) Face three-dimensional image generation method and device
CN101751689A (en) Three-dimensional facial reconstruction method
WO2022205762A1 (en) Three-dimensional human body reconstruction method and apparatus, device, and storage medium
US11514638B2 (en) 3D asset generation from 2D images
Clarke et al. Automatic generation of 3D caricatures based on artistic deformation styles
CN110458924A (en) Three-dimensional face model establishment method, device and electronic equipment
CN116977522A (en) Rendering method and device of three-dimensional model, computer equipment and storage medium
CN111754622B (en) Face three-dimensional image generation method and related equipment
CN116863044A (en) Face model generation method and device, electronic equipment and readable storage medium
Han et al. Customizing blendshapes to capture facial details
US20240127539A1 (en) Mechanical weight index maps for mesh rigging
CN117218300B (en) Three-dimensional model construction method, three-dimensional model construction training method and device
Bai et al. Construction of virtual image synthesis module based on computer technology
Zhao et al. Implementation of Computer Aided Dance Teaching Integrating Human Model Reconstruction Technology
Wang et al. SketchBodyNet: A Sketch-Driven Multi-faceted Decoder Network for 3D Human Reconstruction
Zhang et al. Digital Media Technology Application Research Based on 5g Internet of Things Virtual Reality Technology as the Carrier
WO2023224634A1 (en) Blendshape weights predicted for facial expression of hmd wearer using machine learning model for cohort corresponding to facial type of wearer
CN117671090A (en) Expression processing method and device, electronic equipment and storage medium
Yu et al. Real-time individual 3D facial animation by combining parameterized model and muscular model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant