CN1971615A - Method for generating cartoon portrait based on photo of human face - Google Patents

Method for generating cartoon portrait based on photo of human face

Info

Publication number
CN1971615A
CN1971615A (application numbers CNA2006101144941A / CN200610114494A; granted as CN100487732C)
Authority
CN
China
Prior art keywords
human face
cartoon
face picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2006101144941A
Other languages
Chinese (zh)
Other versions
CN100487732C (en)
Inventor
刘军发 (Liu Junfa)
陈益强 (Chen Yiqiang)
高文 (Gao Wen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CNB2006101144941A priority Critical patent/CN100487732C/en
Publication of CN1971615A publication Critical patent/CN1971615A/en
Application granted granted Critical
Publication of CN100487732C publication Critical patent/CN100487732C/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Generation (AREA)

Abstract

The invention discloses a method for generating cartoon portraits, comprising the following steps: 1) collecting and processing face picture data; 2) training a deformation mapping model from real face pictures to cartoon face pictures; 3) generating a deformed face picture from an input real face picture using the deformation mapping model; 4) applying a texture transformation to the deformed face picture of step 3). The processing of the face picture data comprises the following steps: (1) extracting contour points from the face picture data; (2) aligning the contours of all face pictures to the mean contour points. Step 2) comprises the following steps: (1) extracting the principal components of the contour points of all face pictures; (2) obtaining the deformation mapping model from real face pictures to cartoon face pictures. The advantages of the invention are that it produces synchronized, coordinated variation of multiple facial features, and that it yields cartoon face pictures with both an exaggerated shape effect and a texture effect.

Description

Method for generating a cartoon portrait from a human face photograph
Technical field
The present invention relates to computer graphics and image-processing technology, and in particular to face-image processing.
Background technology
In recent years, digital media technology has gradually permeated everyday life and profoundly changed how people live and work. Computer-generated cartoon portraits of people are widely used in many fields: in video conferencing, online games, and mobile digital entertainment, replacing a real face with a virtual cartoon portrait not only increases the speed of data transmission but also helps create a relaxed, harmonious atmosphere.
In the prior art, computer generation of cartoon portraits (also called caricature portraits) of human faces falls into two main categories of methods: the first applies exaggerated deformation to the facial contour, and the second applies cartoon-style processing to the facial texture.
Contour-exaggeration methods first extract the facial contour to obtain a line representation of the face, then deform those lines to generate a "cartoon stick figure" of the face, as in Document 1: Sun Yuhong, Tu Changhe, Meng Xiangxu, "Stick-figure style conversion and deformation based on shape evolution", Journal of Computer-Aided Design & Computer Graphics, Vol. 18, No. 3. Fig. 1 gives three examples of such stick figures, which consist of lines only, without texture. The strength of this method is that it produces a concise line portrait of the face; its shortcoming is that, having no texture, it cannot depict facial characteristics well. Moreover, each deformation operates on only one feature at a time — a linear change such as enlarging or shrinking the forehead, eyebrows, eyes, nose, or mouth — and if several features are changed one after another, overall harmony among the features cannot be guaranteed.
The second category further enriches the cartoon effect by processing the facial texture, producing, for example, oil-painting or pencil-drawing effects. Document 2: Chen Hong, Zheng Nanning, Liang Lin, Xu Yingqing, Shen Xiangyang, "Example-based automatic portrait generation", Chinese Journal of Computers, 2003, No. 2, discloses such texture processing. Fig. 2 shows three portrait drawings with texture. The main shortcoming of portraits produced this way is that they lack either exaggerated deformation of the facial features or a good texture-transformation effect.
Therefore, to overcome these deficiencies of the prior art, a new method of generating cartoon portraits from face photographs is desired, one whose generated portraits have both exaggerated deformation and rich cartoon texture.
Summary of the invention
The object of the present invention is to overcome the shortcoming of performing line deformation and texture variation independently, and to provide a new method for generating cartoon portraits from face photographs so that the final result has both an exaggerated deformation effect and a rich texture effect.
To achieve the above object, the present invention adopts the following technical solution:
A method for generating a cartoon portrait from a face photograph, comprising the following steps:
1) collecting and processing face picture data, the face pictures comprising a number of real face pictures and a number of cartoon face pictures;
2) training a deformation mapping model from real face pictures to cartoon face pictures;
3) generating a deformed face picture from an input real face picture using the deformation mapping model.
In the above technical solution, the method further comprises step 4): applying a texture transformation to the deformed face picture obtained in step 3).
In the above technical solution, the processing of the face picture data in step 1) comprises the following steps:
(1) extracting contour points from the face picture data;
(2) aligning the contour of every face picture according to the mean contour points.
In the above technical solution, step 2) specifically comprises the following steps:
(1) extracting the contour-point principal components of all pictures;
(2) based on the contour-point principal components, performing machine learning on the real face pictures and their one-to-one corresponding cartoon face pictures to obtain the deformation mapping model from real face pictures to cartoon face pictures.
Further, in the above steps, the number of contour points extracted from each face picture may be any integer between 40 and 200.
In the above technical solution, further, the number of contour points extracted from each face picture is 118.
In the above technical solution, the face pictures collected in step 1) comprise at least 200 real face photographs and cartoon face pictures in total.
Compared with the prior art, the beneficial effects of the present invention are:
1) PCA analysis finds the principal components of face shape, and varying the data along these principal components indirectly produces a synchronized, coordinated change of multiple facial features;
2) combining the two steps of shape warping and texture processing gives the final result both shape exaggeration and a texture effect.
Description of drawings
Fig. 1 shows three stick figures without texture.
Fig. 2 shows three cartoon portraits with some texture characteristics.
Fig. 3 shows four cartoon portraits of the present invention with both shape and texture variation.
Fig. 4 is a schematic diagram of extracting facial feature points with ASM.
Fig. 5 is a schematic diagram of reducing data dimensionality with PCA.
Fig. 6 is a schematic diagram of the mesh deformation method of the present invention.
Fig. 7 is a block diagram of the flow of the deformation method of the present invention.
Fig. 8 is a schematic diagram of facial contour alignment in the present invention.
Fig. 9 is a block diagram of the main flow of the present invention.
Embodiment
The present invention is described in further detail below with reference to the drawings and specific embodiments:
The present invention combines two steps — shape exaggeration and texture transformation — so that the generated cartoon portrait has both an exaggerated deformation effect and rich cartoon texture. For shape warping, machine learning is adopted so that the generated result approaches an artist's style as closely as possible. A large number of real faces and their corresponding caricature portraits are first collected and the features of both are extracted; the principal components are then analyzed to establish a subspace, and machine learning in this subspace is used to learn the mapping between the two sets of features. The result of the learning is a mapping model Y = f(X), where Y is the feature vector of the cartoon portrait and X is the feature vector of the real face. Once this model is obtained, for a new face image its features X are extracted and the mapping model f is applied to obtain the feature values Y of the corresponding cartoon portrait, from which the new cartoon shape data are generated. Operating on the principal components — that is, in the low-dimensional subspace — is in effect a synchronized operation on the high-dimensional space (the 118-dimensional features introduced below), so multiple facial features are exaggerated and coordinated simultaneously. This is one advantage of the present method; conventional deformation methods always deform only one facial feature at a time.
For texture transformation, specific image-processing algorithms generate various cartoon effects, such as pencil-drawing and watercolor effects. Fig. 3 shows cartoon portraits produced by this method that have both texture variation and exaggerated deformation. Fig. 7 shows the flowchart of the deformation process in a specific embodiment of the present invention, divided into a training stage and an application stage. In the training stage, mapping learning between the two data sets establishes the mapping model F between normal-face features and cartoon-face features. In the application stage, the obtained mapping model is used to perform deformation processing on the input face image.
A method for generating a cartoon portrait from a face photograph comprises the following steps:
1) Collecting and processing face picture data; the face pictures comprise a number of real face pictures and a number of cartoon face pictures.
As an example, 1000 real face pictures and 200 cartoon face pictures were collected here, among which 100 real face pictures and 100 cartoon face pictures form one-to-one pairs.
The processing of the face picture data comprises the following steps:
(A) Extracting contour points from each face picture, i.e., feature extraction. For real face photographs this is done with the ASM (Active Shape Model) method. ASM is a contour-extraction algorithm widely adopted in this field, such as the technique disclosed in T.F. Cootes, C.J. Taylor, D. Cooper, J. Graham, "Active shape models — their training and application", Computer Vision and Image Understanding, 1995, 61(1): 38-59; after training, it can extract facial feature points well. For cartoon/caricature face pictures, however, the exaggerated shapes no longer follow the statistical distribution of ordinary faces on which ASM relies, so the extraction cannot be completed by ASM and is instead done interactively. Fig. 4 shows an example of ASM feature-point extraction; the extracted contour points form a 118-dimensional one-dimensional array.
(B) Aligning the contour of every face picture according to the contour points. The purpose of alignment is to normalize all faces to a uniform scale, since the face images may have been captured at different scales and are of unequal size. Alignment first computes a mean facial contour and then enlarges or shrinks each facial contour until it is as close as possible to the mean face. Fig. 8 illustrates face alignment: (a) the originally extracted facial contours, (b) the mean shape of all contours, and (c) all contours after alignment.
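The alignment step above can be sketched numerically. The snippet below is a minimal illustration, not the patent's implementation: it assumes contour "size" is measured as the RMS distance of the points from their centroid (the patent only says contours are enlarged or shrunk toward the mean face), and it uses three synthetic 59-point contours of the same face at different scales.

```python
# Contour-alignment sketch (synthetic data): scale every face contour
# to the size of the mean contour, as in step 1)(B).
import numpy as np

rng = np.random.default_rng(0)
base = rng.normal(size=(59, 2))              # 59 (x, y) points -> 118 dims
contours = [base * s + rng.normal(scale=0.01, size=(59, 2))
            for s in (0.8, 1.0, 1.3)]        # same face at different scales

def size(c):
    """Root-mean-square distance of the points from their centroid."""
    return np.sqrt(((c - c.mean(axis=0)) ** 2).sum(axis=1).mean())

mean_contour = np.mean(contours, axis=0)
target = size(mean_contour)
aligned = [(c - c.mean(axis=0)) * (target / size(c)) + mean_contour.mean(axis=0)
           for c in contours]

sizes = [size(c) for c in aligned]           # all contours now share one scale
```

After this normalization every contour vector lives at the same scale, which is what makes the PCA of the next step meaningful.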
2) Training the deformation mapping model from real face pictures to cartoon face pictures, which specifically comprises the following steps:
(A) Extracting the contour-point principal components from the 1000 real face pictures and 200 cartoon face pictures. After contour alignment, the normal-face data and the cartoon-face data are at the same scale; PCA (Principal Component Analysis) is then applied to the 118-dimensional arrays to obtain the face space and the principal-component vectors. PCA is a widely adopted method for reducing the dimensionality of complex data. Once the face space is established, it both reduces dimensionality and makes the spatial relationships between samples observable in terms of the principal components. As an example, Fig. 5 illustrates PCA dimensionality reduction on two-dimensional data: (a) the original two-dimensional scatter data; (b) the principal-component vector obtained by PCA (the line L in Fig. 5), which establishes a one-dimensional subspace with that line as its coordinate axis; (c) any original two-dimensional point, such as the point S, can be projected onto the line to obtain a one-dimensional coordinate that represents its spatial position. The two lines perpendicular to the axis illustrate recovering the two-dimensional data from the one-dimensional data by the inverse PCA computation. In the same way, once the face space is established, all face data (both real and caricature) are projected onto the principal components, which reduces dimensionality while keeping their spatial relationships observable.
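The two-dimensional PCA illustration of Fig. 5 can be reproduced in a few lines. Everything here — the synthetic scatter data and its noise level — is an assumption for illustration; only the projection and inverse-projection mechanics mirror the text.

```python
# PCA sketch mirroring Fig. 5: project 2-D scatter data onto one
# principal component, then recover 2-D points by the inverse computation.
import numpy as np

rng = np.random.default_rng(1)
t = rng.normal(size=200)
X = np.stack([t, 0.5 * t], axis=1) + rng.normal(scale=0.05, size=(200, 2))

mu = X.mean(axis=0)
# Principal axes = eigenvectors of the sample covariance matrix.
eigvals, eigvecs = np.linalg.eigh(np.cov((X - mu).T))
L = eigvecs[:, np.argmax(eigvals)]           # the line L of Fig. 5

coords = (X - mu) @ L                        # 1-D subspace coordinates
X_rec = mu + np.outer(coords, L)             # inverse PCA computation

err = np.abs(X - X_rec).max()                # small: the data is nearly 1-D
```

In the patent the same projection is applied not to 2-D points but to the 118-dimensional contour vectors, keeping a handful of leading components instead of one.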
(B) Based on the contour-point principal components, projecting the 100 real face pictures and their 100 one-to-one corresponding cartoon face pictures to obtain two sets of projection data. As described in step (A), the correlation between these two sets of projections can be observed in the subspace. The approach adopted here is machine learning, which yields the deformation mapping model from real face pictures to cartoon face pictures. Any of several mapping-learning methods may be used; here an Artificial Neural Network (ANN) and Support Vector Regression (SVR) are employed. ANN is a common machine-learning method that can perform effective regression between two data sets, such as the techniques disclosed in "Artificial Neural Networks and Simulated Evolutionary Computation" (2nd ed.), Yan Pingfan, Zhang Changshui, Tsinghua University Press, September 2005. SVR is one of the methods capable of effective regression analysis on high-dimensional data, such as the techniques disclosed in "The Nature of Statistical Learning Theory", Vladimir N. Vapnik, translated by Zhang Xuegong, Tsinghua University Press, September 2000, and "An Introduction to Support Vector Machines", Nello Cristianini et al., translated by Li Guozheng et al., Electronic Industry Press, March 2004, both well known to those skilled in the art. As noted above, because learning and prediction on the face-shape principal components affect every dimension of the 118-dimensional facial-contour sample space, this deformation is a synchronized, holistic deformation of all the features.
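On synthetic stand-in data, the subspace mapping learning can be sketched as follows. The patent uses ANN and SVR; plain ridge regression is substituted here purely to keep the sketch short, and the "exaggeration" matrix `W_true` and all data are invented for the example.

```python
# Mapping-model sketch: learn Y = f(X) between the subspace projections
# of paired real/cartoon contours (ridge regression as a simple stand-in
# for the ANN/SVR named in the text; all data synthetic).
import numpy as np

rng = np.random.default_rng(2)
k = 10                                        # subspace dimension
X = rng.normal(size=(100, k))                 # projections of 100 real faces
W_true = np.eye(k) * 1.5                      # invented "exaggeration" map
Y = X @ W_true + rng.normal(scale=0.01, size=(100, k))  # cartoon projections

lam = 1e-3                                    # ridge regularization strength
W = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ Y)

def f(x):
    """Deformation mapping model: real-face subspace coords -> cartoon."""
    return x @ W

pred = f(X)
```

Whatever regressor is used, the learned `f` plays the role of the mapping model Y = f(X) described earlier: it is fit once in the training stage and then applied to any new face's projection.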
3) Generating a deformed face picture from the input real face picture using the deformation mapping model.
Based on the obtained mapping model, the input face image can be deformed. The detailed process, as shown in Fig. 7, is: the face photograph is input; ASM extracts the facial contour data; the contour is aligned with the mean face to obtain a contour-point vector at uniform scale; this vector is projected into the face space to obtain its subspace coordinates; the aforementioned mapping model maps those coordinates to obtain a new projection vector; the inverse PCA computation recovers the new vector from the low-dimensional subspace back into the 118-dimensional sample space; and the face image is then deformed according to these data, yielding a cartoon portrait with an exaggerated deformation effect. The image-deformation algorithm employed here is the widely used mesh-deformation algorithm: a set of image feature points is determined and triangulated, and the feature points are then edited — moved to new positions according to the demands of the target image — such as the techniques disclosed in "Algorithmic Foundations of Computer Animation", Bao Hujun et al., Zhejiang University Press, December 2000. Fig. 6 is a schematic of mesh deformation.
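The core operation of the mesh-deformation step — moving triangulated feature points and carrying the interior points of each triangle along — reduces to barycentric interpolation per triangle. A single-triangle sketch with hypothetical coordinates:

```python
# Mesh-deformation sketch: warp a point inside one triangle when a
# vertex is moved, using barycentric coordinates (all data hypothetical).
import numpy as np

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # source triangle
dst = np.array([[0.0, 0.0], [1.4, 0.0], [0.0, 1.0]])   # one vertex moved

def barycentric(p, tri):
    """Barycentric weights of point p with respect to triangle tri."""
    a, b, c = tri
    T = np.array([b - a, c - a]).T
    u, v = np.linalg.solve(T, p - a)
    return np.array([1 - u - v, u, v])

def warp(p, src_tri, dst_tri):
    """Carry p from the source triangle into the deformed triangle."""
    return barycentric(p, src_tri) @ dst_tri

p = np.array([0.2, 0.3])
q = warp(p, src, dst)        # the point follows the deformed triangle
```

A full mesh warp repeats this for every triangle of the triangulated feature points; vertices map exactly to their edited positions, and interior pixels are interpolated.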
4) Applying a texture transformation to the deformed face picture of step 3) to obtain the cartoon face picture.
The texture transformation uses existing published texture-generation techniques, such as the oil-painting effect disclosed in A. Hertzmann, "Painterly Rendering with Curved Brush Strokes of Multiple Sizes", Proceedings of SIGGRAPH '98, Florida, USA, 1998: 453-460; the watercolor effect disclosed in C.J. Curtis, S.E. Anderson, J.E. Seims, Kurt W. Fleischer and David H. Salesin, "Computer-Generated Watercolor", Proceedings of SIGGRAPH '97, Los Angeles, CA, USA, 1997: 421-430; and the pencil-drawing effect disclosed in X. Mao, Y. Nagasaka and A. Imamiya, "Automatic generation of pencil drawing from 2D images using line integral convolution", CAD/Graphics 2001: 240-248.
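To give the flavor of such a texture effect, the snippet below applies the classic "color dodge" of a grayscale image with its blurred inverse, a minimal pencil-drawing-like effect. This is a stand-in of my own, not the cited line-integral-convolution or stroke-based techniques, and the image is a synthetic array.

```python
# Texture-effect sketch: pencil-drawing-like "dodge" of an image with
# its blurred inverse (simplified stand-in for the cited techniques).
import numpy as np

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(32, 32)).astype(float)  # grayscale stand-in

def box_blur(a, r=2):
    """Simple box blur with edge padding."""
    p = np.pad(a, r, mode="edge")
    out = np.zeros_like(a)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy : r + dy + a.shape[0], r + dx : r + dx + a.shape[1]]
    return out / (2 * r + 1) ** 2

inv_blur = box_blur(255.0 - img)
# Dodge: brighten each pixel by its blurred inverse; flat regions wash
# out toward white while edges keep dark strokes.
sketch = np.minimum(255.0, img * 255.0 / np.maximum(255.0 - inv_blur, 1e-6))
```

Any of the cited effects can be dropped into the same slot of the pipeline, since the texture stage only maps one image to another.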
Based on the above steps, the cartoonization of a new real face picture can be organized as the flow shown in Fig. 9. Unlike other methods, the present invention first extracts facial feature points with the ASM method, so that processing can target local features of the face. Because the facial characteristics are perceived automatically, manual interaction is avoided and the method can run automatically, while also allowing more precise control and deformation of the face. By comparison, existing tools achieve deformation by compressing or stretching the face image as a whole; this is one of the advantages of the present invention.
After facial feature points have been extracted with ASM, the user's preset options determine whether texture processing is performed. If no texture effect is needed, the flow proceeds directly to the deformation step; otherwise the corresponding texture effect is generated on the face image according to the user's setting. The flow then enters the deformation module, which first checks whether the user's setting requires deformation; if not, the result of the previous step is output directly and the flow ends. If deformation is required, the deformation flow of Fig. 7 — i.e. step 3) above — is entered.
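The conditional flow just described can be sketched as a small driver. The `texture_fn`/`deform_fn` stand-ins here are assumptions for illustration; in the actual system they would be the texture-effect module and the Fig. 7 deformation flow.

```python
# Flow sketch of Fig. 9: optional texture effect, then optional
# deformation, controlled by the user's preset options. The transform
# functions are trivial string stand-ins, not the real pipeline stages.
def cartoonize(image, want_texture, want_deform,
               texture_fn=lambda im: im + "+texture",
               deform_fn=lambda im: im + "+deform"):
    """Apply the user-selected stages in the order described above."""
    result = image
    if want_texture:                 # texture-effect module
        result = texture_fn(result)
    if not want_deform:              # skip deformation: output previous result
        return result
    return deform_fn(result)         # deformation flow of Fig. 7 (step 3)

out = cartoonize("face", True, True)
```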
Finally, it should be noted that the above embodiments merely illustrate, and do not limit, the technical solution of the present invention. Although the invention has been described in detail with reference to embodiments, those of ordinary skill in the art should understand that modifications or equivalent substitutions of the technical solution that do not depart from its spirit and scope are all intended to be covered by the claims of the present invention.

Claims (7)

1. A method for generating a cartoon portrait from a face photograph, comprising the following steps:
1) collecting and processing face picture data, the face pictures comprising a number of real face pictures and a number of cartoon face pictures;
2) training a deformation mapping model from real face pictures to cartoon face pictures;
3) generating a deformed face picture from an input real face picture using the deformation mapping model.
2. The method for generating a cartoon portrait from a face photograph according to claim 1, further comprising step 4): applying a texture transformation to the deformed face picture of step 3).
3. The method for generating a cartoon portrait from a face photograph according to claim 1, wherein the processing of the face picture data in step 1) comprises the following steps:
(1) extracting contour points from the face picture data;
(2) aligning the contour of every face picture according to the mean contour points.
4. The method for generating a cartoon portrait from a face photograph according to claim 1, wherein step 2) specifically comprises the following steps:
(1) extracting the contour-point principal components of all pictures;
(2) based on the contour-point principal components, performing machine learning on the real face pictures and their one-to-one corresponding cartoon face pictures to obtain the deformation mapping model from real face pictures to cartoon face pictures.
5. The method for generating a cartoon portrait from a face photograph according to claim 3, wherein in step (1) the number of contour points extracted from each face picture is any integer between 40 and 200.
6. The method for generating a cartoon portrait from a face photograph according to claim 5, wherein the number of contour points extracted from each face picture is 118.
7. The method for generating a cartoon portrait from a face photograph according to claim 1, wherein the face pictures collected in step 1) comprise at least 200 real face photographs and cartoon face pictures in total.
CNB2006101144941A 2006-11-10 2006-11-10 Method for generating cartoon portrait based on photo of human face Active CN100487732C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006101144941A CN100487732C (en) 2006-11-10 2006-11-10 Method for generating cartoon portrait based on photo of human face


Publications (2)

Publication Number Publication Date
CN1971615A (en) 2007-05-30
CN100487732C CN100487732C (en) 2009-05-13

Family

ID=38112419

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006101144941A Active CN100487732C (en) 2006-11-10 2006-11-10 Method for generating cartoon portrait based on photo of human face

Country Status (1)

Country Link
CN (1) CN100487732C (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7660482B2 (en) * 2004-06-23 2010-02-09 Seiko Epson Corporation Method and apparatus for converting a photo to a caricature image
TW200614094A (en) * 2004-10-18 2006-05-01 Reallusion Inc System and method for processing comic character

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102096934A (en) * 2011-01-27 2011-06-15 电子科技大学 Human face cartoon generating method based on machine learning
CN103366390A (en) * 2012-03-29 2013-10-23 展讯通信(上海)有限公司 Terminal, image processing method and device thereof
CN103366390B (en) * 2012-03-29 2016-04-06 展讯通信(上海)有限公司 terminal and image processing method and device
CN105374055A (en) * 2014-08-20 2016-03-02 腾讯科技(深圳)有限公司 Image processing method and device
CN105374055B (en) * 2014-08-20 2018-07-03 腾讯科技(深圳)有限公司 Image processing method and device
CN106791347A (en) * 2015-11-20 2017-05-31 比亚迪股份有限公司 A kind of image processing method, device and the mobile terminal using the method
CN107967667A (en) * 2017-12-21 2018-04-27 广东欧珀移动通信有限公司 Generation method, device, terminal device and the storage medium of sketch
CN108564127A (en) * 2018-04-19 2018-09-21 腾讯科技(深圳)有限公司 Image conversion method, device, computer equipment and storage medium
CN108564127B (en) * 2018-04-19 2022-02-18 腾讯科技(深圳)有限公司 Image conversion method, image conversion device, computer equipment and storage medium
CN109615958A (en) * 2018-12-04 2019-04-12 深圳市诺信连接科技有限责任公司 The processing method and VR of interactive VR image
WO2021232708A1 (en) * 2020-05-21 2021-11-25 北京达佳互联信息技术有限公司 Image processing method and electronic device
CN113706369A (en) * 2020-05-21 2021-11-26 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN100487732C (en) 2009-05-13

Similar Documents

Publication Publication Date Title
CN100487732C (en) Method for generating cartoon portrait based on photo of human face
CN100557640C Interactive multi-viewpoint three-dimensional model reconstruction method
CN106067190B Fast face three-dimensional model generation and transformation method based on a single image
CN102332095B Face motion tracking method, face motion tracking system and method for augmented reality
CN104376596B Three-dimensional scene structure modeling and registration method based on a single image
CN101098241A Method and system for implementing a virtual image
CN106778628A Facial expression capture method based on a TOF depth camera
CN111951384B Three-dimensional face reconstruction method and system based on a single face picture
CN111476710B Video face-swapping method and system based on a mobile platform
CN102567716B Face synthesis system and implementation method
CN102096934B Human face cartoon generating method based on machine learning
CN106709931B Method for mapping facial makeup to a face and facial-makeup mapping device
CN111951381B Three-dimensional face reconstruction system based on a single face picture
CN103606190A Method for automatically converting a single frontal face photo into a three-dimensional face model
CN106780713A Three-dimensional face modeling method and system based on a single photo
CN103745206B Face recognition method and system
KR102353556B1 Apparatus for generating an avatar reproducing a user's facial expressions and poses
CN109146808A Portrait beautification method and system
CN109766866B Real-time facial feature point detection method and detection system based on three-dimensional reconstruction
CN111951368A Deep learning method fusing point clouds, voxels and multiple views
CN107358645A Product three-dimensional model reconstruction method and system
CN110717978B Three-dimensional head reconstruction method based on a single image
Peng et al. RGB-D human matting: A real-world benchmark dataset and a baseline method
He Application of a local color simulation method for landscape painting based on deep-learning generative adversarial networks
CN110348344A Facial expression recognition method based on two- and three-dimensional fusion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant