CN104376594A - Three-dimensional face modeling method and device - Google Patents

Three-dimensional face modeling method and device

Info

Publication number
CN104376594A
CN104376594A (application CN201410687577.4A)
Authority
CN
China
Prior art keywords
picture
transformation
standard model
original portrait
three-dimensional face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410687577.4A
Other languages
Chinese (zh)
Other versions
CN104376594B (en)
Inventor
吴拥民
叶仲雯
许凯杰
何汉鑫
苏珠明
李春数
陈吉
刘德建
陈宏展
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian TQ Digital Co Ltd
Original Assignee
Fujian TQ Digital Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian TQ Digital Co Ltd filed Critical Fujian TQ Digital Co Ltd
Priority to CN201410687577.4A priority Critical patent/CN104376594B/en
Publication of CN104376594A publication Critical patent/CN104376594A/en
Application granted granted Critical
Publication of CN104376594B publication Critical patent/CN104376594B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The invention provides a three-dimensional face modeling method for efficient three-dimensional face modeling. The method comprises the steps of: obtaining an original portrait and a standard model; obtaining bone point information from the standard model; marking feature points on the original portrait and recording their coordinate information; applying a first transformation to the original portrait; applying a second transformation to the standard model; unfolding the surface of the standard model after the second transformation into a texture image by a preset method; and mapping the original portrait after the first transformation onto the texture image. The invention further provides a corresponding three-dimensional face modeling device implementing the method. The technical scheme is simple to operate, fits the face well and restores it faithfully; it also runs faster and more efficiently, simplifies the implementation, calculation and data-collection processes, and improves the practicality and adaptability of the system, giving it a clear advantage in scenarios with low demands on face depth information.

Description

Three-dimensional face modeling method and device
Technical field
The present invention relates to the fields of computer graphics and digital image processing, and more particularly to a three-dimensional face modeling method and device.
Background art
With the rapid development of computer graphics and image processing technology, three-dimensional face modeling, as a technique at the intersection of the two disciplines, is widely used in industries such as game animation, medicine and cosmetic surgery, film and advertising, video conferencing and videophony, and has gradually become a hot research topic.
The main current techniques for three-dimensional face modeling include the following:
Three-dimensional face modeling based on 3D scanners and similar hardware: a 3D scanner is used to scan the face and obtain its three-dimensional information, and the face model is then reconstructed in the computer from the acquired 3D data. This method yields accurate face models. However, modeling with hardware such as 3D scanners has poor versatility and flexibility, the operation is relatively complex, and the hardware is expensive, so it is generally suitable only for special occasions.
Image-based face modeling: the three-dimensional information of facial feature points is obtained from single-view or multi-view face images or video sequences, and this information is used to reconstruct the three-dimensional geometric model of the face. Image-based face modeling is mainly divided into modeling from a single face image and modeling from several face images.
Modeling from a single face image: the data of a single collected face image are analyzed and the face is fitted and reconstructed in three dimensions. The basic approach is to estimate the depth information of the face model from the light-and-dark information of a slightly turned face image, and thereby complete the three-dimensional face modeling. However, modeling from a single face image forces a trade-off between the shape and the appearance of the face, and it is difficult to fit a model that accurately matches the real face. Because the left and right halves of a face are not perfectly symmetric, collecting depth information from a slightly turned face inevitably distorts the reconstructed appearance; and when depth is computed from the shadows and highlights of a frontal face image, the computation is complex, takes a long time and is unreliable, so good results are often hard to obtain.
Modeling from several face images: the three-dimensional data of the face are obtained from several face images taken at different angles, for example frontal, left-profile and right-profile images, and the face is modeled in three dimensions from them. Modeling from several face images can capture the frontal texture, depth and other information of the face more directly and completely, which helps build a more accurate three-dimensional face model. However, although this approach captures the frontal texture and depth of the face fairly completely, it adds manual steps and reduces the flexibility of operation. Moreover, on some handheld devices it is difficult for the user to conveniently provide profile images that meet the requirements, which also greatly affects the accuracy of the face modeling.
Summary of the invention
Therefore, there is a need for a three-dimensional face modeling method and device that avoid the computation of face depth information, are easy to operate, convert quickly and automatically, and produce true and reliable modeling results.
To achieve the above object, the inventors provide a three-dimensional face modeling method, comprising the steps of:
acquiring an original portrait and a standard model;
acquiring bone point information from the standard model, the bone points being the feature points whose deformation falls within a preset interval when the mesh of the standard model deforms;
marking feature points on the original portrait and recording their coordinate information;
applying a first transformation to the original portrait, the first transformation aligning a preset reference position on the original portrait with the corresponding preset reference position on the standard model, and applying the same coordinate transformation to the bone points of the portrait;
applying a second transformation to the standard model, the second transformation comprising translating the planar projections of the bone points of the standard model to the corresponding bone points of the original portrait after the first transformation;
unfolding the surface of the standard model after the second transformation into a texture image by a preset method;
mapping the original portrait after the first transformation onto the texture image.
Further, in the three-dimensional face modeling method, "unfolding the surface of the standard model after the second transformation into a texture image by a preset method; mapping the original portrait after the first transformation onto the texture image" specifically comprises:
unfolding the surface of the standard model after the second transformation into a UV map by a preset method;
mapping the original portrait after the first transformation onto the UV map.
Further, in the three-dimensional face modeling method, before the step of mapping the original portrait after the first transformation onto the UV map, the method further comprises a skin-color matching process applied to the UV map, the skin-color matching process specifically comprising:
selecting one or more preset regions on the original portrait and obtaining a skin-color sample value by a preset algorithm;
using the skin-color sample value to recolor the UV map.
Further, in the three-dimensional face modeling method, the step of using the skin-color sample value to recolor the UV map specifically comprises:
using the skin-color sample value to build a skin-color sample image of the same size as the UV map, and performing graph-cut blending with the UV map as the source image and the skin-color sample image as the target image.
Further, in the three-dimensional face modeling method, mapping the original portrait after the first transformation onto the UV map specifically comprises:
using a feathered mask of a preset size as the mask, mapping the original portrait after the first transformation onto the skin-color-matched UV map by feathered stitching.
Further, in the three-dimensional face modeling method, the preset reference position is the left and right pupils among the feature points;
the first transformation of the original portrait comprises rotation, scaling or translation;
the rotation specifically comprises: rotating the original portrait so that its left and right pupils are horizontally level with the left and right pupils of the standard portrait corresponding to the standard model;
the scaling specifically comprises: scaling the original portrait so that its interpupillary distance equals that of the standard portrait corresponding to the standard model;
the translation specifically comprises: translating the original portrait so that its left and right pupils align with the left and right pupils of the standard portrait corresponding to the standard model.
Further, in the three-dimensional face modeling method, the second transformation specifically comprises the steps of:
projecting the bone points of the standard model onto the screen plane by an orthographic projection transformation;
translating the planar projections of the bone points of the standard model to the corresponding bone points of the original portrait after the first transformation;
applying the inverse of the orthographic projection transformation to the translated planar projections of the bone points of the standard model.
Further, in the three-dimensional face modeling method, the orthographic projection transformation specifically comprises:
using a model matrix Model to transform the bone points of the standard model from their local coordinate system to the world coordinate system;
using a view matrix View to transform the bone points of the standard model from the world coordinate system to the view coordinate system;
using the orthographic projection of a projection matrix Projection to drop the Z coordinate of the view coordinate system so that the points fall on the screen plane.
Further, in the three-dimensional face modeling method, the standard model is a standard model matching the ethnicity of the original portrait.
Further, in the three-dimensional face modeling method, the original portrait satisfies a preset resolution condition or a preset brightness-difference condition.
The inventors also provide a three-dimensional face modeling device, comprising an input unit, a bone point determining unit, a feature point marking unit, a transformation unit, a texture unfolding unit and a mapping unit;
the input unit is configured to acquire an original portrait and a standard model;
the bone point determining unit is configured to acquire bone point information from the standard model, the bone points being the feature points whose deformation falls within a preset interval when the mesh of the standard model deforms;
the feature point marking unit is configured to mark feature points on the original portrait and record their coordinate information;
the transformation unit is configured to apply a first transformation to the original portrait, the first transformation aligning a preset reference position on the original portrait with the corresponding preset reference position on the standard model and applying the same coordinate transformation to the bone points of the portrait;
the transformation unit is further configured to apply a second transformation to the standard model, the second transformation comprising translating the planar projections of the bone points of the standard model to the corresponding bone points of the original portrait after the first transformation;
the texture unfolding unit is configured to unfold the surface of the standard model after the second transformation into a texture image by a preset method;
the mapping unit is configured to map the original portrait after the first transformation onto the texture image.
Further, in the three-dimensional face modeling device, the texture unfolding unit is configured to unfold the surface of the standard model after the second transformation into a UV map by a preset method, and the mapping unit is configured to map the original portrait after the first transformation onto the UV map.
Further, the three-dimensional face modeling device further comprises a skin-color matching unit configured to select one or more preset regions on the original portrait, obtain a skin-color sample value by a preset algorithm, and use the skin-color sample value to recolor the UV map.
Further, in the three-dimensional face modeling device, the skin-color matching unit using the skin-color sample value to recolor the UV map specifically comprises:
using the skin-color sample value to build a skin-color sample image of the same size as the UV map, and performing graph-cut blending with the UV map as the source image and the skin-color sample image as the target image.
Further, in the three-dimensional face modeling device, the mapping unit mapping the original portrait after the first transformation onto the UV map specifically comprises:
using a feathered mask of a preset size as the mask, mapping the original portrait after the first transformation onto the skin-color-matched UV map by feathered stitching.
Further, in the three-dimensional face modeling device, the preset reference position is the left and right pupils among the feature points;
the first transformation applied by the transformation unit to the original portrait comprises rotation, scaling or translation;
the rotation specifically comprises: rotating the original portrait so that its left and right pupils are horizontally level with the left and right pupils of the standard portrait corresponding to the standard model;
the scaling specifically comprises: scaling the original portrait so that its interpupillary distance equals that of the standard portrait corresponding to the standard model;
the translation specifically comprises: translating the original portrait so that its left and right pupils align with the left and right pupils of the standard portrait corresponding to the standard model.
Further, in the three-dimensional face modeling device, the second transformation performed by the transformation unit specifically comprises the steps of:
projecting the bone points of the standard model onto the screen plane by an orthographic projection transformation;
translating the planar projections of the bone points of the standard model to the corresponding bone points of the original portrait after the first transformation;
applying the inverse of the orthographic projection transformation to the translated planar projections of the bone points of the standard model.
Further, in the three-dimensional face modeling device, the orthographic projection transformation performed by the transformation unit specifically comprises:
using a model matrix Model to transform the bone points of the standard model from their local coordinate system to the world coordinate system;
using a view matrix View to transform the bone points of the standard model from the world coordinate system to the view coordinate system;
using the orthographic projection of a projection matrix Projection to drop the Z coordinate of the view coordinate system so that the points fall on the screen plane.
Further, in the three-dimensional face modeling device, the standard model is a standard model matching the ethnicity of the original portrait.
Further, in the three-dimensional face modeling device, the original portrait satisfies a preset resolution condition or a preset brightness-difference condition.
Unlike the prior art, the above technical scheme uses bone alignment to achieve automated three-dimensional face modeling; it is simple to operate, fits well and restores the face faithfully. Compared with traditional three-dimensional modeling from a single image, it simplifies the implementation and calculation, runs faster and more efficiently, and can satisfy systems with certain real-time requirements. Compared with three-dimensional modeling from multiple images, the technical scheme of the invention also simplifies data acquisition and improves the practicality and adaptability of the system, giving it a great advantage in scenarios with low demands on face depth information.
Brief description of the drawings
Fig. 1 is a flowchart of the three-dimensional face modeling method according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of the three-dimensional face modeling device according to an embodiment of the present invention.
Description of reference numerals:
1 — input unit
2 — bone point determining unit
3 — feature point marking unit
4 — transformation unit
5 — skin-color matching unit
6 — mapping unit
7 — texture unfolding unit
Detailed description of the embodiments
The technical content, structural features, objects and effects of the technical scheme are explained in detail below with reference to specific embodiments and the accompanying drawings.
Referring to Fig. 1, which is a flowchart of the three-dimensional face modeling method according to an embodiment of the present invention, the method comprises the following steps:
S1: acquire an original portrait and a standard model.
Further, the standard model is a standard model matching the ethnicity of the original portrait, and the original portrait is a two-dimensional frontal face photograph satisfying a preset resolution condition and a preset brightness-difference condition.
Because the core of the technical scheme of the invention is to model the three-dimensional face by fitting a deformable mesh model, and different deformation models differ greatly in how well they fit a face, this embodiment builds different standard models from the characteristic information of different ethnic groups. For Asians, for example, a large number of Asian faces are collected, their head shape, contour, facial features and other information are analyzed comprehensively, features are extracted, and an Asian standard head model is built. Standard models for other ethnic groups are built in the same way. The standard model is then chosen according to the ethnicity of the original portrait and used for the subsequent fitting.
In addition, to ensure accurate and reliable modeling, the input original portrait must also satisfy certain conditions. First, to retain the appearance information of the face as completely as possible, the original portrait should be a frontal face image satisfying a preset resolution condition. Second, lighting should be controlled when the face image is captured: a face image captured in dim light cannot yield accurate skin-color information and may introduce errors into the face landmark calibration, reducing the similarity of the modeled face. Therefore, the original portrait should satisfy a preset resolution condition and a preset left-right face brightness-difference condition, as sketched below.
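As an illustration of these input checks, the sketch below tests a portrait against a minimum-resolution condition and a left/right brightness-difference condition. The thresholds, the use of image halves as the comparison regions, and the function name are assumptions for illustration only; the patent does not fix concrete values.

```python
import numpy as np

def portrait_meets_input_conditions(image, min_size=(512, 512), max_brightness_diff=20.0):
    """Check a frontal portrait against a preset resolution condition and a preset
    left/right brightness-difference condition (thresholds are illustrative assumptions)."""
    h, w = image.shape[:2]
    if h < min_size[0] or w < min_size[1]:
        return False                              # resolution condition not met

    gray = image.astype(np.float64).mean(axis=2)  # rough luminance
    left_mean = gray[:, : w // 2].mean()          # average brightness of the left half-face
    right_mean = gray[:, w // 2 :].mean()         # average brightness of the right half-face
    return abs(left_mean - right_mean) <= max_brightness_diff
```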
S2: obtain bone point information from the standard model, the bone points being the feature points whose deformation falls within a preset interval when the mesh of the standard model deforms.
S3: mark feature points on the original portrait and record their coordinate information.
The feature points mentioned in steps S2 and S3 are, in face localization and recognition technology, the points with salient feature meaning that are used to calibrate the face shape, eyebrows, eyes, nose, mouth and other facial features. Different feature point models are available from different face calibration research; this embodiment uses a system of 83 feature points dividing the face into 7 salient features (face outer contour, left and right eyebrows, left and right eyes, nose, mouth) according to general facial knowledge. Other feature point calibration methods from the prior art may also be used in other embodiments.
Bone key points are those feature points that strongly influence the facial contour and facial features during mesh deformation. They are normally obtained by running deformation experiments on the face wireframe model, analyzing the influence of each feature point on the mesh deformation, and confirming as bone key points those that satisfy a preset deformation-magnitude condition, as sketched below.
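A minimal sketch of such a deformation experiment: the displacement of each annotated feature point between the rest mesh and the deformed mesh is measured, and the points whose displacement falls inside a preset interval are kept as bone key points. The interval bounds and the data layout are assumptions for illustration.

```python
import numpy as np

def select_bone_points(rest_vertices, deformed_vertices, feature_indices, lo=0.5, hi=50.0):
    """Keep the feature points whose displacement under a mesh-deformation experiment
    falls inside the preset interval [lo, hi]; bounds and units are illustrative.

    rest_vertices, deformed_vertices: N x 3 arrays of mesh vertex positions before
    and after the deformation experiment; feature_indices: vertex indices of the
    annotated feature points."""
    displacement = np.linalg.norm(deformed_vertices - rest_vertices, axis=1)
    return [i for i in feature_indices if lo <= displacement[i] <= hi]
```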
S4: apply a first transformation to the original portrait.
The first transformation aligns a preset reference position on the original portrait with the corresponding preset reference position on the standard model, and applies the same coordinate transformation to the bone points of the portrait.
In this embodiment the preset reference position is the left and right pupils among the feature points.
The first transformation of the original portrait comprises rotation, scaling or translation.
The rotation specifically comprises rotating the original portrait so that its left and right pupils are horizontally level with the left and right pupils of the standard portrait corresponding to the standard model. Concretely, the left and right pupil positions are taken from the feature point annotation of the original portrait, their angle to the horizontal is computed, and the portrait is rotated by that angle about the midpoint of the pupil line so that the pupil line becomes horizontal.
The scaling specifically comprises scaling the original portrait according to the interpupillary distance of the standard portrait, so that its interpupillary distance equals that of the standard portrait corresponding to the standard model.
The translation specifically comprises translating the original portrait so that its left and right pupils align with the left and right pupils of the standard portrait corresponding to the standard model.
Where necessary a cropping step is also included: a face region of a specific size is cropped according to the portrait's feature point positions and placed into an image of the same size as the standard portrait.
In effect, the transformation in this step is a preprocessing of the input original portrait: by rotating, scaling, translating or cropping the original portrait, a preset position on it (the pupils in this embodiment) is aligned with the corresponding preset position (the pupils) of the standard portrait obtained by unfolding the standard model. The rotation, scaling and translation can all be expressed as matrix operations, so the same matrix can be applied to the feature points annotated on the portrait to obtain their coordinates in the new image, as in the sketch below.
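The sketch below builds one such matrix from the pupil positions: a rotation about the pupil midpoint that levels the pupil line, a scale that matches the standard interpupillary distance, and a translation onto the standard pupils, and then applies the same matrix to the annotated feature points. The exact decomposition and the helper names are assumptions; the patent only requires that the pupils end up aligned.

```python
import numpy as np

def pupil_alignment_matrix(left_pupil, right_pupil, std_left_pupil, std_right_pupil):
    """Homogeneous 3x3 matrix of the first transformation: rotate about the pupil
    midpoint until the pupil line is horizontal, scale to the standard interpupillary
    distance, then translate the pupil midpoint onto the standard pupil midpoint."""
    l, r = np.asarray(left_pupil, float), np.asarray(right_pupil, float)
    sl, sr = np.asarray(std_left_pupil, float), np.asarray(std_right_pupil, float)

    angle = np.arctan2(r[1] - l[1], r[0] - l[0])              # tilt of the pupil line
    scale = np.linalg.norm(sr - sl) / np.linalg.norm(r - l)   # match interpupillary distance
    c, s = np.cos(-angle), np.sin(-angle)
    rs = scale * np.array([[c, -s], [s, c]])                  # rotation plus scale

    center = (l + r) / 2.0                                    # pupil midpoint of the portrait
    t = (sl + sr) / 2.0 - rs @ center                         # move it onto the standard midpoint

    m = np.eye(3)
    m[:2, :2] = rs
    m[:2, 2] = t
    return m

def transform_points(m, points):
    """Apply the same coordinate transformation to the annotated feature points (N x 2)."""
    pts = np.hstack([np.asarray(points, float), np.ones((len(points), 1))])
    return (pts @ m.T)[:, :2]
```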
S5: apply a second transformation to the standard model.
The second transformation comprises translating the planar projections of the bone points of the standard model to the corresponding bone points of the original portrait after the first transformation.
Further, the second transformation specifically comprises the following sub-steps:
S51: project the bone points of the standard model onto the screen plane by an orthographic projection transformation.
S52: translate the planar projections of the bone points of the standard model to the corresponding bone points of the original portrait after the first transformation.
Because the preprocessed original portrait has already been pupil-aligned with the standard portrait, the projections of the standard model's bone points on the screen need only be moved to the bone point positions of the standardized portrait.
S53: apply the inverse of the orthographic projection transformation to the translated planar projections of the standard model's bone points. This step is in fact the inverse (Model*View*Projection)^-1 of the matrix computation Model*View*Projection in S51; the inverse transformation recovers the three-dimensional coordinates of the aligned bone points in the model space, and the result is the bone-aligned model.
Further, the orthographic projection transformation used in step S51 specifically comprises:
using a model matrix Model to transform the bone points of the standard model from their local coordinate system to the world coordinate system;
using a view matrix View to transform the bone points of the standard model from the world coordinate system to the view coordinate system;
using the orthographic projection of a projection matrix Projection to drop the Z coordinate of the view coordinate system so that the points fall on the screen plane, as sketched below.
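The sketch below strings sub-steps S51 to S53 together for one set of bone points. It assumes column-vector 4x4 matrices, an invertible orthographic Model*View*Projection whose x/y output coincides with the coordinate frame of the aligned portrait, and that the portrait bone points are supplied in that same frame; none of these conventions is fixed by the patent.

```python
import numpy as np

def second_transformation(bone_points_3d, portrait_bone_points_2d, model, view, projection):
    """S51: project the standard model's bone points to the screen plane with an
    orthographic Model*View*Projection; S52: move each projection onto the corresponding
    bone point of the aligned portrait; S53: map back with (Model*View*Projection)^-1."""
    mvp = projection @ view @ model
    mvp_inv = np.linalg.inv(mvp)

    aligned = np.empty_like(np.asarray(bone_points_3d, float))
    for i, (p3, p2) in enumerate(zip(bone_points_3d, portrait_bone_points_2d)):
        h = mvp @ np.append(p3, 1.0)      # S51: orthographic projection to the screen plane
        h[0], h[1] = p2                   # S52: translate the projection onto the portrait bone point
        back = mvp_inv @ h                # S53: inverse transform back to the model space
        aligned[i] = back[:3] / back[3]
    return aligned
```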
S6: unfold the surface of the standard model after the second transformation into a UV map by a preset method.
S7: apply skin-color matching to the UV map.
The skin-color matching specifically comprises the following sub-steps:
S71: select one or more preset regions on the original portrait and obtain a skin-color sample value by a preset algorithm. Because an ordinary portrait is affected to varying degrees by shadows and highlights from the capture environment, which can distort the true skin tone, the skin color needs to be sampled so that the color of the model after modeling is closer to the true skin tone of the face. For example, using the result of the face calibration, the regions between the two sides of the nose and the cheek feature points are chosen, and the average color of the two regions is taken as the skin-color sample value, as in the sketch below.
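A minimal sketch of the sampling step, assuming the two sampling regions have already been derived from the feature point annotation and are passed in as (x, y, w, h) rectangles:

```python
import numpy as np

def sample_skin_color(portrait, regions):
    """Average the pixel colors inside the chosen sampling regions (e.g. the two patches
    between the sides of the nose and the cheek feature points) and return the mean
    color as the skin-color sample value."""
    patches = [portrait[y : y + h, x : x + w].reshape(-1, 3) for (x, y, w, h) in regions]
    return np.concatenate(patches, axis=0).astype(np.float64).mean(axis=0)
```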
S72: use the skin-color sample value to recolor the UV map.
More specifically, the recoloring comprises: using the skin-color sample value to build a skin-color sample image of the same size as the UV map, and performing graph-cut blending with the UV map as the source image and the skin-color sample image as the target image, as sketched below.
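The patent names graph-cut blending between the UV map (source) and a same-size skin-color sample image (target). Implementing a graph cut is beyond a short sketch, so the stand-in below only illustrates the intent of the recoloring step, keeping the UV map's per-pixel detail while shifting its mean color to the sampled skin color; it is not the blending method claimed by the patent.

```python
import numpy as np

def recolor_uv_map(uv_map, skin_sample):
    """Detail-preserving recolor used here as a simplified stand-in for the graph-cut
    blend: subtract the UV map's mean color and add the sampled skin color instead."""
    uv = uv_map.astype(np.float64)
    sample_image = np.ones_like(uv) * np.asarray(skin_sample, dtype=np.float64)  # same-size sample image
    blended = uv - uv.reshape(-1, 3).mean(axis=0) + sample_image
    return np.clip(blended, 0, 255).astype(np.uint8)
```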
S8: map the original portrait after the first transformation onto the UV map.
This step specifically comprises: using a feathered mask of a preset size as the mask, mapping the original portrait after the first transformation onto the skin-color-matched UV map by feathered stitching, as in the sketch below.
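A sketch of the feathered stitching, assuming the aligned portrait has already been resized to the UV map and the feathered mask is a preset grayscale image of the same size (white where the portrait should show, black where the UV map should show, soft at the border):

```python
import numpy as np

def feathered_composite(uv_map, aligned_portrait, feathered_mask):
    """Blend the aligned portrait over the skin-color-matched UV map through a feathered
    mask: the mask value acts as a per-pixel alpha, so the soft edge stitches the two."""
    alpha = feathered_mask.astype(np.float64)[..., None] / 255.0
    out = alpha * aligned_portrait.astype(np.float64) + (1.0 - alpha) * uv_map.astype(np.float64)
    return np.clip(out, 0, 255).astype(np.uint8)
```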
In fact, the UV map is a defined two-dimensional texture coordinate system used to determine how a texture image is placed onto the surface of the three-dimensional model. In other embodiments, other coordinate systems that achieve the same purpose may also be used to convert the standard model into a skin texture image.
The method of this embodiment uses bone alignment: the coordinates of the facial feature points drive the bone key points of the standard model in an alignment operation, which effectively fits the real face shape. Building differentiated standard head models according to the differences in head and facial features between ethnic groups reduces the fitting error as much as possible and improves the fidelity of the facial contour and features. For the bone alignment, this embodiment uses the pupils as the alignment reference: the standardized face image and the standard map are first pupil-pre-aligned, so that the subsequent work only needs to translate the bone points to complete the alignment, which effectively simplifies the model-fitting computation. In addition, the face image provided by the user is used for skin-color sampling and computation, and the color of the map is blended according to the sampled skin-color value; while the detail of the source image is retained, the skin color of the face model is matched to the sampled color value, so that the model reproduces the overall color of the face well. Meanwhile, based on an analysis of the face wireframe model, this embodiment extracts the feature points that most influence the facial contour and features as bone key points and fits the contour and features of the face from these key points. In summary, the three-dimensional face modeling method provided by this embodiment achieves three-dimensional face modeling that is highly versatile and flexible, easy to operate, low in cost, accurate in fitting, true in restoration, and fast and automatic in conversion.
Referring to Fig. 2, which is a schematic structural diagram of the three-dimensional face modeling device according to an embodiment of the present invention, the device comprises an input unit 1, a bone point determining unit 2, a feature point marking unit 3, a transformation unit 4, a texture unfolding unit 5 and a mapping unit 6.
The input unit 1 acquires an original portrait and a standard model; the standard model is a standard model matching the ethnicity of the original portrait, and the original portrait satisfies a preset resolution condition or a preset brightness-difference condition.
Because the core of the technical scheme of the invention is to model the three-dimensional face by fitting a deformable mesh model, and different deformation models differ greatly in how well they fit a face, in this embodiment the original portrait and the standard model obtained by the input unit 1 must satisfy certain conditions. First, the standard model is built by analyzing the characteristic information of different ethnic groups. For Asians, for example, a large number of Asian faces are collected, their head shape, contour, facial features and other information are analyzed comprehensively, features are extracted, and an Asian standard head model is built; standard models for other ethnic groups are built in the same way. The standard model is then chosen according to the ethnicity of the original portrait and used for the subsequent fitting.
In addition, to ensure accurate and reliable modeling, the original portrait must also satisfy certain conditions. First, to retain the appearance information of the face as completely as possible, the original portrait should be a frontal face image satisfying a preset resolution condition. Second, lighting should be controlled when the face image is captured: a face image captured in dim light cannot yield accurate skin-color information and may introduce errors into the face landmark calibration, reducing the similarity of the modeled face. Therefore, the original portrait should satisfy a preset resolution condition and a preset left-right face brightness-difference condition.
The bone point determining unit 2 obtains bone point information from the standard model, the bone points being the feature points whose deformation falls within a preset interval when the mesh of the standard model deforms.
The feature point marking unit 3 marks feature points on the original portrait and records their coordinate information.
The processing performed by the bone point determining unit 2 and the feature point marking unit 3 involves the concept of feature points: in face localization and recognition technology, these are the points with salient feature meaning used to calibrate the face shape, eyebrows, eyes, nose, mouth and other facial features. Different feature point models are available from different face calibration research; this embodiment uses a system of 83 feature points dividing the face into 7 salient features (face outer contour, left and right eyebrows, left and right eyes, nose, mouth) according to general facial knowledge, and other feature point calibration methods from the prior art may also be used in other embodiments. Bone key points are those feature points that strongly influence the facial contour and facial features during mesh deformation; they are normally obtained by running deformation experiments on the face wireframe model, analyzing the influence of each feature point on the mesh deformation, and confirming as bone key points those that satisfy a preset deformation-magnitude condition.
Converter unit 4 is for doing the first conversion to described primitive man's picture, and described first conversion makes described primitive man as the preset standard aligned in position on upper preset standard position and described master pattern, and does same coordinate transform to the skeleton point of portrait;
In present embodiment, described preset standard position is the left and right pupil in unique point;
Described first conversion of converter unit 4 pairs of primitive man's pictures comprises rotation, convergent-divergent or translation;
Described rotation specifically comprises: rotate described primitive man's picture, makes the left and right pupil horizontal level in the standard portrait that its left and right pupil and master pattern are corresponding consistent;
Described convergent-divergent specifically comprises: primitive man's picture described in convergent-divergent, makes the left and right interpupillary distance in the standard portrait that its left and right interpupillary distance and master pattern are corresponding consistent;
Described translation specifically comprises: primitive man's picture described in translation, and the left and right pupil in the standard portrait making its left and right pupil corresponding with master pattern aligns.
In fact, the first conversion that converter unit 4 does is a kind of pretreatment work of the primitive man's picture to input, mainly through rotating described primitive man's picture, convergent-divergent, translation or cutting process, the predeterminated position (pupil) making one predeterminated position (being pupil position in present embodiment) and master pattern launch the standard portrait of gained aligns.The step of above-mentioned rotation, convergent-divergent, translation all can the form of matrix disposal represent, therefore also can carry out corresponding coordinate transform by matrix identical with it, to obtain the coordinate position in new images to the unique point of demarcating in portrait.
The transformation unit 4 also applies a second transformation to the standard model, the second transformation comprising translating the planar projections of the bone points of the standard model to the corresponding bone points of the original portrait after the first transformation.
Further, the second transformation performed by the transformation unit 4 specifically comprises the steps of:
projecting the bone points of the standard model onto the screen plane by an orthographic projection transformation;
translating the planar projections of the bone points of the standard model to the corresponding bone points of the original portrait after the first transformation;
applying the inverse of the orthographic projection transformation to the translated planar projections of the standard model's bone points.
Further, the orthographic projection transformation performed by the transformation unit 4 specifically comprises:
using a model matrix Model to transform the bone points of the standard model from their local coordinate system to the world coordinate system; using a view matrix View to transform the bone points of the standard model from the world coordinate system to the view coordinate system; and using the orthographic projection of a projection matrix Projection to drop the Z coordinate of the view coordinate system so that the points fall on the screen plane.
The texture unfolding unit 5 unfolds the surface of the standard model after the second transformation into a texture image by a preset method.
The mapping unit 6 maps the original portrait after the first transformation onto the texture image.
Further, the texture unfolding unit 5 unfolds the surface of the standard model after the second transformation into a UV map by a preset method,
and the mapping unit 6 maps the original portrait after the first transformation onto the UV map.
Further, the mapping unit 6 mapping the original portrait after the first transformation onto the UV map specifically comprises:
using a feathered mask of a preset size as the mask, mapping the original portrait after the first transformation onto the skin-color-matched UV map by feathered stitching.
In addition, the three-dimensional face modeling device preferably further comprises a skin-color matching unit 7, which selects one or more preset regions on the original portrait, obtains a skin-color sample value by a preset algorithm, and uses the skin-color sample value to recolor the UV map.
Further, the skin-color matching unit 7 using the skin-color sample value to recolor the UV map specifically comprises:
using the skin-color sample value to build a skin-color sample image of the same size as the UV map, and performing graph-cut blending with the UV map as the source image and the skin-color sample image as the target image.
Because an ordinary portrait is affected to varying degrees by shadows and highlights from the capture environment, which can distort the true skin tone, the skin color needs to be sampled so that the color of the model after modeling is closer to the true skin tone of the face. For example, using the result of the face calibration, the regions between the two sides of the nose and the cheek feature points are chosen, the average color of the two regions is taken as the skin-color sample value, and the recoloring is then performed with this sample value.
In addition, the UV map is in fact a defined two-dimensional texture coordinate system used to determine how a texture image is placed onto the surface of the three-dimensional model. In other embodiments, other coordinate systems or methods that achieve the same purpose may also be used to convert the standard model into a skin texture image.
This embodiment uses bone alignment: the coordinates of the facial feature points drive the bone key points of the standard model in an alignment operation, which effectively fits the real face shape. Building differentiated standard head models according to the differences in head and facial features between ethnic groups reduces the fitting error as much as possible and improves the fidelity of the facial contour and features. For the bone alignment, this embodiment uses the pupils as the alignment reference: the standardized face image and the standard map are first pupil-pre-aligned, so that the subsequent work only needs to translate the bone points to complete the alignment, which effectively simplifies the model-fitting computation. In addition, the face image provided by the user is used for skin-color sampling and computation, and the color of the map is blended according to the sampled skin-color value; while the detail of the source image is retained, the skin color of the face model is matched to the sampled color value, so that the model reproduces the overall color of the face well. Meanwhile, based on an analysis of the face wireframe model, this embodiment extracts the feature points that most influence the facial contour and features as bone key points and fits the contour and features of the face from these key points. In summary, the three-dimensional face modeling device provided by this embodiment achieves three-dimensional face modeling that is highly versatile and flexible, easy to operate, low in cost, accurate in fitting, true in restoration, and fast and automatic in conversion.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between them. Moreover, the terms "comprise", "include" and any variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or terminal device comprising a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such process, method, article or terminal device. Without further limitation, an element qualified by "comprising a..." or "including a..." does not exclude the presence of additional identical elements in the process, method, article or terminal device comprising that element. In addition, in this document, "greater than", "less than", "exceeding" and the like are understood to exclude the stated number, while "above", "below", "within" and the like are understood to include it.
Those skilled in the art should understand that the above embodiments may be provided as a method, a device or a computer program product, and may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. All or part of the steps of the methods in the above embodiments may be implemented by hardware instructed by a program; the program may be stored in a storage medium readable by a computer device and used to perform all or part of the steps of the methods in the above embodiments. The computer device includes, but is not limited to, a personal computer, a server, a general-purpose computer, a special-purpose computer, a network device, an embedded device, a programmable device, an intelligent mobile terminal, a smart home device, a wearable smart device, an in-vehicle smart device and the like; the storage medium includes, but is not limited to, RAM, ROM, a magnetic disk, a magnetic tape, an optical disk, flash memory, a USB drive, a portable hard disk, a memory card, a memory stick, web server storage, network cloud storage and the like.
The above embodiments are described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to the embodiments. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a computer device to produce a machine, so that the instructions executed by the processor of the computer device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-device-readable memory capable of directing the computer device to work in a particular manner, so that the instructions stored in the memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer device, so that a sequence of operational steps is performed on the device to produce a computer-implemented process, whereby the instructions executed on the device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although the above embodiments have been described, those skilled in the art, once aware of the basic inventive concept, may make other changes and modifications to these embodiments. Accordingly, the foregoing is merely embodiments of the present invention and does not thereby limit the patent scope of the invention; any equivalent structural or process transformation made using the description and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the patent scope of the present invention.

Claims (20)

1. A three-dimensional face modeling method, comprising the steps of:
acquiring an original portrait and a standard model;
acquiring bone point information from the standard model, the bone points being the feature points whose deformation falls within a preset interval when the mesh of the standard model deforms;
marking feature points on the original portrait and recording their coordinate information;
applying a first transformation to the original portrait, the first transformation aligning a preset reference position on the original portrait with the corresponding preset reference position on the standard model, and applying the same coordinate transformation to the bone points of the portrait;
applying a second transformation to the standard model, the second transformation comprising translating the planar projections of the bone points of the standard model to the corresponding bone points of the original portrait after the first transformation;
unfolding the surface of the standard model after the second transformation into a texture image by a preset method;
mapping the original portrait after the first transformation onto the texture image.
2. The three-dimensional face modeling method according to claim 1, wherein "unfolding the surface of the standard model after the second transformation into a texture image by a preset method; mapping the original portrait after the first transformation onto the texture image" specifically comprises:
unfolding the surface of the standard model after the second transformation into a UV map by a preset method;
mapping the original portrait after the first transformation onto the UV map.
3. The three-dimensional face modeling method according to claim 2, wherein before the step of mapping the original portrait after the first transformation onto the UV map, the method further comprises a skin-color matching process applied to the UV map, the skin-color matching process specifically comprising:
selecting one or more preset regions on the original portrait and obtaining a skin-color sample value by a preset algorithm;
using the skin-color sample value to recolor the UV map.
4. The three-dimensional face modeling method according to claim 3, wherein the step of using the skin-color sample value to recolor the UV map specifically comprises:
using the skin-color sample value to build a skin-color sample image of the same size as the UV map, and performing graph-cut blending with the UV map as the source image and the skin-color sample image as the target image.
5. The three-dimensional face modeling method according to claim 2 or 3, wherein mapping the original portrait after the first transformation onto the UV map specifically comprises:
using a feathered mask of a preset size as the mask, mapping the original portrait after the first transformation onto the skin-color-matched UV map by feathered stitching.
6. The three-dimensional face modeling method according to claim 1 or 2, wherein the preset reference position is the left and right pupils among the feature points;
the first transformation of the original portrait comprises rotation, scaling or translation;
the rotation specifically comprises: rotating the original portrait so that its left and right pupils are horizontally level with the left and right pupils of the standard portrait corresponding to the standard model;
the scaling specifically comprises: scaling the original portrait so that its interpupillary distance equals that of the standard portrait corresponding to the standard model;
the translation specifically comprises: translating the original portrait so that its left and right pupils align with the left and right pupils of the standard portrait corresponding to the standard model.
7. The three-dimensional face modeling method according to claim 1 or 2, wherein the second transformation specifically comprises the steps of:
projecting the bone points of the standard model onto the screen plane by an orthographic projection transformation;
translating the planar projections of the bone points of the standard model to the corresponding bone points of the original portrait after the first transformation;
applying the inverse of the orthographic projection transformation to the translated planar projections of the bone points of the standard model.
8. The three-dimensional face modeling method according to claim 7, wherein the orthographic projection transformation specifically comprises:
using a model matrix Model to transform the bone points of the standard model from their local coordinate system to the world coordinate system;
using a view matrix View to transform the bone points of the standard model from the world coordinate system to the view coordinate system;
using the orthographic projection of a projection matrix Projection to drop the Z coordinate of the view coordinate system so that the points fall on the screen plane.
9. The three-dimensional face modeling method according to claim 1 or 2, wherein the standard model is a standard model matching the ethnicity of the original portrait.
10. The three-dimensional face modeling method according to claim 1 or 2, wherein the original portrait satisfies a preset resolution condition or a preset brightness-difference condition.
11. 1 kinds of three-dimensional face model building devices, comprise input block, skeleton point determining unit, unique point indexing unit, converter unit, texture mapping unit and map unit;
Described input block is for obtaining primitive man's picture and master pattern;
Skeleton point determining unit is used for obtaining skeleton point information according to master pattern, and described skeleton point is the unique point that deformation quantity is positioned at a pre-set interval when the deformation of master pattern grid;
Unique point indexing unit is used on described primitive man's picture, marking unique point and recording its coordinate information;
Converter unit is used for doing the first conversion to described primitive man's picture, and described first conversion makes described primitive man as the preset standard aligned in position on upper preset standard position and described master pattern, and does same coordinate transform to the skeleton point of portrait;
Converter unit is also for doing the second conversion to described master pattern, and described second conversion comprises the corresponding skeleton point plane projection of the skeleton point on master pattern point being moved to the primitive man's picture through the first conversion;
Texture mapping unit be used for will through second conversion master pattern with presetting method unfolded surface for texture image;
Map unit is used for the primitive man's picture through the first conversion to map to described texture image.
12. The three-dimensional face modeling device according to claim 11, wherein the texture mapping unit is used for unfolding the surface of the standard model that has undergone the second conversion into a UV map by the preset method;
The mapping unit is used for mapping the original portrait that has undergone the first conversion onto the UV map.
13. The three-dimensional face modeling device according to claim 12, further comprising a skin color matching unit, wherein the skin color matching unit is used for selecting one or more preset regions on the original portrait, obtaining a skin color sample value according to a preset algorithm, and using the skin color sample value to apply color-change processing to the UV map.
14. The three-dimensional face modeling device according to claim 13, wherein using the skin color sample value to apply color-change processing to the UV map specifically comprises:
Creating from the skin color sample value a skin color sample image of the same size as the UV map, and performing graph cut blending with the UV map as the source picture and the skin color sample image as the target picture.
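
For illustration, a sketch of the skin color sampling and recoloring of claims 13 and 14; the rectangular regions, the averaging step and the constant-weight blend are stand-ins for the patent's unspecified preset regions, preset algorithm and graph cut blend.

```python
import numpy as np

def sample_skin_color(portrait, regions):
    """Average the color over one or more preset regions of the portrait,
    each given as an (x, y, w, h) rectangle; averaging stands in for the
    patent's unspecified preset algorithm."""
    samples = [portrait[y:y + h, x:x + w].reshape(-1, 3) for x, y, w, h in regions]
    return np.concatenate(samples).mean(axis=0)

def recolor_uv_map(uv_map, skin_color, weight=0.5):
    """Build a skin color sample image of the same size as the UV map and
    blend the two; this constant-weight blend only stands in for the
    graph cut blend named in the claim."""
    sample_image = np.empty_like(uv_map, dtype=np.float32)
    sample_image[...] = skin_color
    blended = (1.0 - weight) * uv_map.astype(np.float32) + weight * sample_image
    return blended.clip(0, 255).astype(np.uint8)
```
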
15. The three-dimensional face modeling device according to claim 12 or 13, wherein mapping, by the mapping unit, the original portrait that has undergone the first conversion onto the UV map specifically comprises:
Using a feathered mask of a preset size as the mask, mapping the original portrait that has undergone the first conversion onto the skin-color-matched UV map by feathered stitching.
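
For illustration, a sketch of feathered stitching with a feathered mask of preset size; the linearly faded rectangular mask and its margin are assumptions rather than details taken from the patent.

```python
import numpy as np

def feathered_mask(height, width, margin=40):
    """A feathered rectangular mask: 1.0 in the interior, fading linearly to
    0.0 over `margin` pixels at the border (a blurred mask would also work)."""
    ramp_y = np.minimum(np.arange(height), np.arange(height)[::-1]) / margin
    ramp_x = np.minimum(np.arange(width), np.arange(width)[::-1]) / margin
    return np.clip(np.minimum(ramp_y[:, None], ramp_x[None, :]), 0.0, 1.0)

def feathered_paste(uv_map, portrait, mask):
    """Paste the converted portrait onto the skin-color-matched UV map using
    the feathered mask, so the seam blends smoothly. All arrays are assumed
    to share the same height and width."""
    alpha = mask.astype(np.float32)
    if alpha.ndim == 2:          # expand to per-channel weights
        alpha = alpha[..., None]
    out = alpha * portrait.astype(np.float32) + (1.0 - alpha) * uv_map.astype(np.float32)
    return out.clip(0, 255).astype(np.uint8)
```
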
16. The three-dimensional face modeling device according to claim 11 or 12, wherein the preset standard positions are the left and right pupils among the feature points;
The first conversion applied by the conversion unit to the original portrait comprises rotation, scaling or translation;
The rotation specifically comprises: rotating the original portrait so that the horizontal positions of its left and right pupils are consistent with those of the standard portrait corresponding to the standard model;
The scaling specifically comprises: scaling the original portrait so that its interpupillary distance matches the interpupillary distance of the standard portrait corresponding to the standard model;
The translation specifically comprises: translating the original portrait so that its left and right pupils align with the left and right pupils of the standard portrait corresponding to the standard model.
17. The three-dimensional face modeling device according to claim 11 or 12, wherein the second conversion performed by the conversion unit specifically comprises the steps of:
Projecting the skeleton points of the standard model onto the screen plane by an orthographic projection transformation;
Moving the planar projection points of the skeleton points of the standard model to the corresponding skeleton points of the original portrait that has undergone the first conversion;
Applying the inverse of the orthographic projection transformation to the translated planar projection points of the skeleton points of the standard model.
18. The three-dimensional face modeling device according to claim 17, wherein the orthographic projection transformation performed by the conversion unit specifically comprises:
Transforming the skeleton points of the standard model from their own (self-defined) coordinate system to the world coordinate system using the model matrix Model;
Transforming the skeleton points of the standard model from the world coordinate system to the view coordinate system using the view matrix View;
Removing the Z-axis coordinate from the view coordinates by the orthographic projection of the projection matrix Projection, so that the points fall on the screen plane.
19. The three-dimensional face modeling device according to claim 11 or 12, wherein the standard model is a standard model whose ethnicity matches that of the original portrait.
20. The three-dimensional face modeling device according to claim 11 or 12, wherein the original portrait meets a preset resolution condition or a preset light-dark difference condition.
CN201410687577.4A 2014-11-25 2014-11-25 Three-dimensional face modeling method and device Active CN104376594B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410687577.4A CN104376594B (en) 2014-11-25 2014-11-25 Three-dimensional face modeling method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410687577.4A CN104376594B (en) 2014-11-25 2014-11-25 Three-dimensional face modeling method and device

Publications (2)

Publication Number Publication Date
CN104376594A true CN104376594A (en) 2015-02-25
CN104376594B CN104376594B (en) 2017-09-29

Family

ID=52555483

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410687577.4A Active CN104376594B (en) 2014-11-25 2014-11-25 Three-dimensional face modeling method and device

Country Status (1)

Country Link
CN (1) CN104376594B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6944320B2 (en) * 2000-03-09 2005-09-13 Microsoft Corporation Rapid computer modeling of faces for animation
CN101968891A (en) * 2009-07-28 2011-02-09 上海冰动信息技术有限公司 System for automatically generating three-dimensional figure of picture for game
JP2011039869A (en) * 2009-08-13 2011-02-24 Nippon Hoso Kyokai <Nhk> Face image processing apparatus and computer program
CN103116902A (en) * 2011-11-16 2013-05-22 华为软件技术有限公司 Three-dimensional virtual human head image generation method, and method and device of human head image motion tracking
CN103606190A (en) * 2013-12-06 2014-02-26 上海明穆电子科技有限公司 Method for automatically converting single face front photo into three-dimensional (3D) face model
CN103646416A (en) * 2013-12-18 2014-03-19 中国科学院计算技术研究所 Three-dimensional cartoon face texture generation method and device

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104809687A (en) * 2015-04-23 2015-07-29 上海趣搭网络科技有限公司 Three-dimensional human face image generation method and system
CN107452049A (en) * 2016-05-30 2017-12-08 腾讯科技(深圳)有限公司 A kind of three-dimensional head modeling method and device
CN106570822B (en) * 2016-10-25 2020-10-16 宇龙计算机通信科技(深圳)有限公司 Face mapping method and device
WO2018076437A1 (en) * 2016-10-25 2018-05-03 宇龙计算机通信科技(深圳)有限公司 Method and apparatus for human facial mapping
CN106570822A (en) * 2016-10-25 2017-04-19 宇龙计算机通信科技(深圳)有限公司 Human face mapping method and device
CN106618734A (en) * 2016-11-04 2017-05-10 王敏 Face-lifting-model-comparison imprinting device
CN106815568A (en) * 2016-12-30 2017-06-09 易瓦特科技股份公司 For the method and system being identified for destination object
CN110326034A (en) * 2017-03-21 2019-10-11 宝洁公司 Method for the simulation of age appearance
CN106934073A (en) * 2017-05-02 2017-07-07 成都通甲优博科技有限责任公司 Face comparison system, method and mobile terminal based on three-dimensional image
CN108932459B (en) * 2017-05-26 2021-12-10 富士通株式会社 Face recognition model training method and device and face recognition method
CN108932459A (en) * 2017-05-26 2018-12-04 富士通株式会社 Face recognition model training method and device and recognition algorithms
CN107316340B (en) * 2017-06-28 2020-06-19 河海大学常州校区 Rapid face modeling method based on single photo
CN107316340A (en) * 2017-06-28 2017-11-03 河海大学常州校区 A kind of fast human face model building based on single photo
CN107274493A (en) * 2017-06-28 2017-10-20 河海大学常州校区 A kind of three-dimensional examination hair style facial reconstruction method based on mobile platform
CN107274493B (en) * 2017-06-28 2020-06-19 河海大学常州校区 Three-dimensional virtual trial type face reconstruction method based on mobile platform
CN107578469A (en) * 2017-09-08 2018-01-12 明利 A kind of 3D human body modeling methods and device based on single photo
CN108171788A (en) * 2017-12-19 2018-06-15 西安蒜泥电子科技有限责任公司 Body variation representation method based on three-dimensional modeling
CN108171788B (en) * 2017-12-19 2021-02-19 西安蒜泥电子科技有限责任公司 Body change representation method based on three-dimensional modeling
CN108171789B (en) * 2017-12-21 2022-01-18 迈吉客科技(北京)有限公司 Virtual image generation method and system
CN108171789A (en) * 2017-12-21 2018-06-15 迈吉客科技(北京)有限公司 A kind of virtual image generation method and system
CN108470321A (en) * 2018-02-27 2018-08-31 北京小米移动软件有限公司 U.S. face processing method, device and the storage medium of photo
CN108470321B (en) * 2018-02-27 2022-03-01 北京小米移动软件有限公司 Method and device for beautifying photos and storage medium
CN108596827A (en) * 2018-04-18 2018-09-28 太平洋未来科技(深圳)有限公司 Three-dimensional face model generation method, device and electronic equipment
CN109118579A (en) * 2018-08-03 2019-01-01 北京微播视界科技有限公司 The method, apparatus of dynamic generation human face three-dimensional model, electronic equipment
WO2020024569A1 (en) * 2018-08-03 2020-02-06 北京微播视界科技有限公司 Method and device for dynamically generating three-dimensional face model, and electronic device
CN109191505A (en) * 2018-08-03 2019-01-11 北京微播视界科技有限公司 Static state generates the method, apparatus of human face three-dimensional model, electronic equipment
CN109191393B (en) * 2018-08-16 2021-03-26 Oppo广东移动通信有限公司 Three-dimensional model-based beauty method
CN109191393A (en) * 2018-08-16 2019-01-11 Oppo广东移动通信有限公司 U.S. face method based on threedimensional model
CN110853147B (en) * 2018-08-21 2023-06-20 东方梦幻文化产业投资有限公司 Three-dimensional face transformation method
CN110853147A (en) * 2018-08-21 2020-02-28 东方梦幻文化产业投资有限公司 Three-dimensional face transformation method
CN109191508A (en) * 2018-09-29 2019-01-11 深圳阜时科技有限公司 A kind of simulation beauty device, simulation lift face method and apparatus
CN109859305A (en) * 2018-12-13 2019-06-07 中科天网(广东)科技有限公司 Three-dimensional face modeling, recognition methods and device based on multi-angle two-dimension human face
CN110032927A (en) * 2019-02-27 2019-07-19 视缘(上海)智能科技有限公司 A kind of face identification method
CN109993689A (en) * 2019-03-14 2019-07-09 珠海天燕科技有限公司 A kind of makeups method and apparatus
CN110111247A (en) * 2019-05-15 2019-08-09 浙江商汤科技开发有限公司 Facial metamorphosis processing method, device and equipment
US11100709B2 (en) 2019-05-15 2021-08-24 Zhejiang Sensetime Technology Development Co., Ltd Method, apparatus and device for processing deformation of virtual object, and storage medium
WO2020228322A1 (en) * 2019-05-15 2020-11-19 浙江商汤科技开发有限公司 Three-dimensional partial human body model generation method, device and equipment
US11367236B2 (en) 2019-05-15 2022-06-21 Zhejiang Sensetime Technology Development Co., Ltd Method, apparatus and device for generating three-dimensional local human body model
CN110751078A (en) * 2019-10-15 2020-02-04 重庆灵翎互娱科技有限公司 Method and equipment for determining non-skin color area of three-dimensional face
CN110751078B (en) * 2019-10-15 2023-06-20 重庆灵翎互娱科技有限公司 Method and equipment for determining non-skin color region of three-dimensional face
CN112949360A (en) * 2019-12-11 2021-06-11 广州市久邦数码科技有限公司 Video face changing method and device
CN111696184A (en) * 2020-06-10 2020-09-22 上海米哈游天命科技有限公司 Bone skin fusion determination method, device, equipment and storage medium
CN111696184B (en) * 2020-06-10 2023-08-29 上海米哈游天命科技有限公司 Bone skin fusion determination method, device, equipment and storage medium
CN112418195B (en) * 2021-01-22 2021-04-09 电子科技大学中山学院 Face key point detection method and device, electronic equipment and storage medium
CN112418195A (en) * 2021-01-22 2021-02-26 电子科技大学中山学院 Face key point detection method and device, electronic equipment and storage medium
CN113554745A (en) * 2021-07-15 2021-10-26 电子科技大学 Three-dimensional face reconstruction method based on image
CN113920282A (en) * 2021-11-15 2022-01-11 广州博冠信息科技有限公司 Image processing method and device, computer readable storage medium, and electronic device

Also Published As

Publication number Publication date
CN104376594B (en) 2017-09-29

Similar Documents

Publication Publication Date Title
CN104376594A (en) Three-dimensional face modeling method and device
Huang et al. Texturenet: Consistent local parametrizations for learning from high-resolution signals on meshes
CN108509848B Real-time detection method and system for three-dimensional objects
CN110458939B (en) Indoor scene modeling method based on visual angle generation
CN108648269B (en) Method and system for singulating three-dimensional building models
Qi et al. Volumetric and multi-view cnns for object classification on 3d data
CN111243093B (en) Three-dimensional face grid generation method, device, equipment and storage medium
WO2021093453A1 (en) Method for generating 3d expression base, voice interactive method, apparatus and medium
CN110264416A (en) Sparse point cloud segmentation method and device
CN103729885B Hand-drawn scene three-dimensional modeling method combining multi-view projection with three-dimensional registration
CN115100339A (en) Image generation method and device, electronic equipment and storage medium
Pan et al. Dense 3D reconstruction combining depth and RGB information
CN104376596A (en) Method for modeling and registering three-dimensional scene structures on basis of single image
CN106155299B Method and device for performing gesture control on a smart device
CN104680532A (en) Object labeling method and device
CN110889901B (en) Large-scene sparse point cloud BA optimization method based on distributed system
CN105261064A (en) Three-dimensional cultural relic reconstruction system and three-dimensional cultural relic reconstruction method based on computer stereo vision
CN110751097A (en) Semi-supervised three-dimensional point cloud gesture key point detection method
Chen et al. Autosweep: Recovering 3d editable objects from a single photograph
Özbay et al. A voxelize structured refinement method for registration of point clouds from Kinect sensors
Yin et al. Virtual reconstruction method of regional 3D image based on visual transmission effect
Zhai et al. Image real-time augmented reality technology based on spatial color and depth consistency
CN109345570A Geometry-based multi-channel three-dimensional color point cloud registration method
Hyeon et al. Automatic spatial template generation for realistic 3d modeling of large-scale indoor spaces
CN109118576A (en) Large scene three-dimensional reconstruction system and method for reconstructing based on BDS location-based service

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant