CN108765273A - Virtual face-shaping method and apparatus for face photographing - Google Patents
Virtual face-shaping method and apparatus for face photographing
- Publication number
- CN108765273A (application CN201810551058.3A)
- Authority
- CN
- China
- Prior art keywords
- face
- dimensional model
- original
- human face
- dimensional
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/04
- G06T3/06
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Abstract
The application proposes a virtual face-shaping method and apparatus for face photographing. The method includes: obtaining a current original two-dimensional face image of a user, and depth information corresponding to the original two-dimensional face image; performing three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original three-dimensional face model; querying pre-registered face information and determining whether the user is registered; if the user is determined to be registered, obtaining three-dimensional face model shaping parameters corresponding to the user, and adjusting key points on the original three-dimensional face model according to the shaping parameters to obtain a target three-dimensional face model after virtual face-shaping; and mapping the target three-dimensional face model after virtual face-shaping to a two-dimensional plane to obtain a target two-dimensional face image. Beautification is thereby performed for registered users on the basis of a three-dimensional face model, which optimizes the beautification effect and improves the target user's satisfaction with the beautification effect and stickiness to the product.
Description
Technical field
The application relates to the technical field of face image processing, and in particular to a virtual face-shaping method and apparatus for face photographing.
Background art
With the popularization of terminal devices, more and more users are accustomed to taking photos with terminal devices, and the camera functions of terminal devices have accordingly diversified; for example, related photographing applications provide users with beautification functions. In the related art, beautification is performed on the basis of a two-dimensional face image; the processing effect is poor, and the processed image lacks realism.
Summary of the application
The application aims to solve at least one of the technical problems in the related art.
To this end, an embodiment of a first aspect of the application proposes a virtual face-shaping method for face photographing, including: obtaining a current original two-dimensional face image of a user, and depth information corresponding to the original two-dimensional face image; performing three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original three-dimensional face model; querying pre-registered face information and determining whether the user is registered; if the user is determined to be registered, obtaining three-dimensional face model shaping parameters corresponding to the user, and adjusting key points on the original three-dimensional face model according to the shaping parameters to obtain a target three-dimensional face model after virtual face-shaping; and mapping the target three-dimensional face model after virtual face-shaping to a two-dimensional plane to obtain a target two-dimensional face image.
To achieve the above object, an embodiment of a second aspect of the application proposes a virtual beautification apparatus for face photographing, including: an acquisition module for obtaining a current original two-dimensional face image of a user and depth information corresponding to the original two-dimensional face image; a reconstruction module for performing three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original three-dimensional face model; a query module for querying pre-registered face information and determining whether the user is registered; an adjustment module for, when it is determined that the user is registered, obtaining three-dimensional face model shaping parameters corresponding to the user and adjusting key points on the original three-dimensional face model according to the shaping parameters to obtain a target three-dimensional face model after virtual face-shaping; and a mapping module for mapping the target three-dimensional face model after virtual face-shaping to a two-dimensional plane to obtain a target two-dimensional face image.
To achieve the above object, an embodiment of a third aspect of the application proposes an electronic device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the virtual face-shaping method for face photographing described in the embodiments of the first aspect is realized.
To achieve the above object, an embodiment of a fourth aspect of the application proposes a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the virtual face-shaping method for face photographing described in the embodiments of the first aspect is realized.
To achieve the above object, an embodiment of a fifth aspect of the application proposes an image processing circuit. The image processing circuit includes an image unit, a depth information unit, and a processing unit.

The image unit is configured to output a current original two-dimensional face image of a user.

The depth information unit is configured to output depth information corresponding to the original two-dimensional face image.

The processing unit is electrically connected to the image unit and the depth information unit respectively, and is configured to: perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original three-dimensional face model; query pre-registered face information and determine whether the user is registered; if the user is determined to be registered, obtain three-dimensional face model shaping parameters corresponding to the user and adjust key points on the original three-dimensional face model according to the shaping parameters to obtain a target three-dimensional face model after virtual face-shaping; and map the target three-dimensional face model after virtual face-shaping to a two-dimensional plane to obtain a target two-dimensional face image.

The technical solutions provided by the application include at least the following beneficial effect: beautification is performed for registered users on the basis of a three-dimensional face model, which optimizes the beautification effect and improves the target user's satisfaction with the beautification effect and stickiness to the product.
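The mapping step described above, projecting the target three-dimensional face model onto a two-dimensional plane, can be illustrated with a minimal sketch. This is not the patent's actual implementation: it assumes a simple pinhole-camera model, and the intrinsic parameters (`focal`, `cx`, `cy`) are made-up values.

```python
def project_to_plane(vertices, focal=500.0, cx=320.0, cy=240.0):
    """Project 3D model vertices (x, y, z) onto a 2D image plane
    using a simple pinhole-camera model (assumed intrinsics)."""
    points_2d = []
    for x, y, z in vertices:
        if z <= 0:
            continue  # skip points behind the camera
        u = focal * x / z + cx
        v = focal * y / z + cy
        points_2d.append((u, v))
    return points_2d

# A toy "model": three vertices at 1 m depth
model = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.0), (0.0, 0.1, 1.0)]
print(project_to_plane(model))  # → [(320.0, 240.0), (370.0, 240.0), (320.0, 290.0)]
```

A real implementation would use the calibrated intrinsics of the device camera and render the textured mesh, rather than projecting vertices alone.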
Additional aspects and advantages of the application will be set forth in part in the following description, and will in part become apparent from the description or be learned through practice of the application.
Description of the drawings
The above and/or additional aspects and advantages of the application will become apparent and readily understood from the following description of the embodiments in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a virtual face-shaping method for face photographing provided by an embodiment of the application;
Fig. 2 is a schematic flowchart of a virtual face-shaping method for face photographing provided by another embodiment of the application;
Fig. 3 is a schematic structural diagram of a depth image acquisition assembly provided by an embodiment of the application;
Fig. 4(a) is a schematic diagram of the technical flow of a virtual face-shaping method for face photographing provided by an embodiment of the application;
Fig. 4(b) is a schematic diagram of the technical flow of a virtual face-shaping method for face photographing provided by another embodiment of the application;
Fig. 5 is a schematic structural diagram of a virtual beautification apparatus for face photographing according to an embodiment of the application;
Fig. 6 is a schematic structural diagram of a virtual beautification apparatus for face photographing according to another embodiment of the application;
Fig. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the application; and
Fig. 8 is a schematic diagram of an image processing circuit in one embodiment;
Fig. 9 is a schematic diagram of an image processing circuit as one possible implementation.
Detailed description of the embodiments
Embodiments of the application are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements, or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary and are intended to explain the application; they are not to be construed as limiting the application.
To address the technical problem in the prior art that beautification based on a two-dimensional face image yields a poor processing effect and a processed image lacking realism, the embodiments of the application obtain a two-dimensional face image and its corresponding depth information, perform three-dimensional reconstruction from the depth information and the face image to obtain a three-dimensional face model, and perform beautification based on that model. Compared with two-dimensional beautification, the depth information of the face is taken into account, which enables differentiated beautification of different facial regions and improves the realism of the result. For example, when beautification is performed based on the three-dimensional face model and skin smoothing is applied to the nose region, the depth information helps clearly distinguish the nose from other regions, so that mistakenly smoothing other regions and blurring the face is avoided.
A virtual face-shaping method and apparatus for face photographing according to embodiments of the application are described below with reference to the accompanying drawings.

Fig. 1 is a schematic flowchart of a virtual face-shaping method for face photographing provided by an embodiment of the application.
The virtual face-shaping method of the embodiments of the application may be applied to a computer device equipped with an apparatus for acquiring depth information and color information (two-dimensional information), such as a dual-camera system. The computer device may be a hardware device with an operating system, a touch screen and/or a display screen, such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
Step 101: obtain a current original two-dimensional face image of the user, and depth information corresponding to the original two-dimensional face image.
It should be noted that, depending on the application scenario, the hardware devices for obtaining the depth information and the original two-dimensional face image information differ in the embodiments of the application:

As one possible implementation, the hardware device for obtaining the original two-dimensional face information is a visible-light RGB image sensor; the original two-dimensional face image may be obtained by the RGB visible-light image sensor in the computer device. Specifically, the visible-light RGB image sensor may include a visible-light camera, which captures the visible light reflected from the imaged object for imaging, obtaining the original two-dimensional face image corresponding to the imaged object.
As one possible implementation, the depth information is obtained by a structured-light sensor. Specifically, as shown in Fig. 2, obtaining the depth information corresponding to each face image includes the following steps:

Step 201: project structured light onto the current user's face.

Step 202: capture the structured-light image modulated by the current user's face.

Step 203: demodulate the phase information corresponding to each pixel of the structured-light image to obtain the depth information corresponding to the face image.
In this example, referring to Fig. 3, when the computer device is a smartphone 1000, the depth image acquisition assembly 12 includes a structured-light projector 121 and a structured-light camera 122. Step 201 may be implemented by the structured-light projector 121, and steps 202 and 203 may be implemented by the structured-light camera 122.

In other words, the structured-light projector 121 may be used to project structured light onto the current user's face; the structured-light camera 122 may be used to capture the structured-light image modulated by the current user's face and to demodulate the phase information corresponding to each pixel of the structured-light image to obtain the depth information.

Specifically, after the structured-light projector 121 projects structured light of a certain pattern onto the current user's face, a structured-light image modulated by the face is formed on the surface of the face. The structured-light camera 122 captures the modulated structured-light image and demodulates it to obtain the depth information. The pattern of the structured light may be laser stripes, Gray codes, sinusoidal fringes, a non-uniform speckle pattern, or the like.

The structured-light camera 122 may further be used to demodulate the phase information corresponding to each pixel in the structured-light image, convert the phase information into depth information, and generate a depth image from the depth information.

Specifically, compared with unmodulated structured light, the phase information of the modulated structured light changes, so the structured light shown in the structured-light image is distorted, and the change in phase can characterize the depth information of the object. Therefore, the structured-light camera 122 first demodulates the phase information corresponding to each pixel in the structured-light image, and then calculates the depth information from the phase information.
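The phase-to-depth conversion of steps 201-203 can be sketched in a highly simplified form. The relation below is a toy fringe-projection triangulation, not the demodulation algorithm the patent actually uses; the `wavelength`, `baseline`, and `focal` values are illustrative assumptions.

```python
import math

def phase_to_depth(phase_map, ref_phase, wavelength=10.0, baseline=50.0, focal=500.0):
    """Convert a demodulated phase map into depth values: the phase shift
    between the measured phase and a flat-reference phase is taken as
    proportional to surface height, scaled by assumed sensor geometry."""
    depth = []
    for row, ref_row in zip(phase_map, ref_phase):
        depth_row = []
        for phi, phi0 in zip(row, ref_row):
            dphi = phi - phi0
            # simplified triangulation: height ~ (dphi / 2π) · wavelength · (baseline / focal)
            h = dphi * wavelength / (2 * math.pi) * (baseline / focal)
            depth_row.append(h)
        depth.append(depth_row)
    return depth
```

A one-period phase shift (2π) over a 10-unit fringe with this geometry maps to a height of 1.0; real systems additionally need phase unwrapping and per-device calibration.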
Step 102: perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original three-dimensional face model.

Specifically, three-dimensional reconstruction is performed according to the depth information and the original two-dimensional face image, assigning each reference point both depth information and two-dimensional information; the reconstruction yields the original three-dimensional face model. The model is a three-dimensional stereoscopic model that can fully restore the face; compared with a two-dimensional face model, it further contains information such as the three-dimensional angles of the facial features.
Depending on the application scenario, the ways of performing three-dimensional reconstruction from the depth information and the face image to obtain the original three-dimensional face model include, but are not limited to, the following:

As one possible implementation, key point recognition is performed on each two-dimensional sample face image to determine positioning key points. For each face image, the relative position of each positioning key point in three-dimensional space is determined according to the depth information of the positioning key point and its distances on the two-dimensional sample face image, including the x-axis distance and y-axis distance in two-dimensional space. Adjacent positioning key points are then connected according to the relative positions of the positioning key points in three-dimensional space, generating the original sample three-dimensional face model. The key points are facial feature points, and may include points on the corners of the eyes, the tip of the nose, the corners of the mouth, and so on.
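The key-point implementation above can be sketched as follows: each two-dimensional positioning key point is lifted to a three-dimensional position using its x/y image coordinates and the depth value at that pixel. The depth-map layout (row-major, indexed `[y][x]`) and units are assumptions for illustration.

```python
def lift_keypoints(keypoints_2d, depth_map):
    """Lift 2D positioning key points to 3D: the x-axis and y-axis
    distances come from the image plane, the z value from depth."""
    points_3d = []
    for x, y in keypoints_2d:
        z = depth_map[y][x]  # assumed row-major depth map indexed [y][x]
        points_3d.append((x, y, z))
    return points_3d

# Toy example: two key points on a 2x2 depth map
depth = [[10, 12],
         [11, 15]]
print(lift_keypoints([(0, 0), (1, 1)], depth))  # → [(0, 0, 10), (1, 1, 15)]
```

Adjacent key points would then be connected into a triangular mesh according to these relative three-dimensional positions.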
As another possible implementation, original two-dimensional face images at multiple angles are obtained, and face images of higher definition are filtered out as the initial data. Feature point positioning is performed, and the face angle is roughly estimated from the feature positioning result; a coarse three-dimensional deformable face model is established according to the face angle and contour, and the facial feature points are adjusted to the same scale as the deformable model through translation and scaling operations; the coordinate information of the points corresponding to the facial feature points is then extracted to form a sparse three-dimensional deformable face model.

Then, according to the rough face-angle estimate and the sparse deformable model, iterative three-dimensional face reconstruction is performed using particle swarm optimization to obtain a three-dimensional geometric face model. After the geometric model is obtained, the face texture information in the input two-dimensional image is mapped onto it by texture pasting, yielding a complete original three-dimensional face model.
In one embodiment of the application, to improve the beautification effect, the original three-dimensional face model may also be built from a beautified original two-dimensional face image; the original model built in this way is more attractive, which ensures the aesthetics of the beautification.

Specifically, the user attribute features of the user are extracted, where the attribute features may include gender, age, ethnicity, and skin color. The attribute features may be obtained from the personal information entered at user registration, or by analyzing the two-dimensional face image information collected at registration. The original two-dimensional face image is beautified according to the attribute features, obtaining a beautified original two-dimensional face image. The beautification according to the attribute features may use a pre-established correspondence between user attribute features and beautification parameters; for example, the beautification parameters for women may be acne removal, skin smoothing, and whitening, while those for men may be acne removal. After the attribute features of a user are obtained, the correspondence is queried to obtain the matching beautification parameters, and the original two-dimensional face image is beautified according to them.

Of course, besides the beautification above, the processing applied to the original two-dimensional face image may also include brightness optimization, definition improvement, noise removal, occlusion handling, and the like, so as to ensure that the original three-dimensional face model is more accurate.
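The pre-established correspondence between attribute features and beautification parameters can be sketched as a simple lookup table. The keys and parameter names below are hypothetical; only the female/male example pairs follow the text.

```python
# Hypothetical correspondence between user attribute features and
# beautification parameters (example pairs follow the description above).
BEAUTY_PARAMS = {
    "female": ["acne_removal", "skin_smoothing", "whitening"],
    "male": ["acne_removal"],
}

def beauty_params_for(user_attributes):
    """Query the attribute-to-parameter correspondence for a user;
    fall back to an empty parameter list when no entry matches."""
    return BEAUTY_PARAMS.get(user_attributes.get("gender"), [])

print(beauty_params_for({"gender": "female", "age": 30}))
# → ['acne_removal', 'skin_smoothing', 'whitening']
```

A fuller table could key on combinations of gender, age, ethnicity, and skin color, as the text suggests those features are all available.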
Step 103: query the pre-registered face information and determine whether the user is registered.

It can be understood that, in this embodiment, an optimized beautification service is provided on the basis of registered users. On the one hand, registered users obtain the best facial effect when taking photos, especially group photos, which improves their satisfaction; on the other hand, this helps promote the related product. In practical applications, to further enhance the photographing experience of registered users, a registered user may be marked with a distinctive identifier when recognized; for example, the registered user may be highlighted with a focus frame of a different color, or with a focus frame of a different shape.

In different application scenarios, the ways of querying the pre-registered face information and determining whether the user is registered include, but are not limited to, the following:

As one possible implementation, the facial features of registered users are obtained in advance, for example special-mark features such as birthmarks, and the shape and position features of facial parts such as the nose and eyes. The original two-dimensional face image is analyzed, for example by extracting the user's facial features with image recognition technology, and the pre-registered facial database is queried for those facial features. If they exist, the user is determined to be registered; if not, the user is determined to be unregistered.
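The database query above can be given as a minimal sketch. The feature representation (a numeric vector) and the matching tolerance are assumptions; the actual feature extraction by image recognition is out of scope here.

```python
def is_registered(face_features, face_db, tol=0.1):
    """Query a pre-registered facial database: the user is considered
    registered if some stored feature vector lies within `tol` of the
    extracted one (Chebyshev distance, chosen here for simplicity)."""
    def dist(a, b):
        return max(abs(x - y) for x, y in zip(a, b))
    return any(dist(face_features, stored) <= tol for stored in face_db)

# Toy database of two registered users' feature vectors
db = [[0.45, 0.55], [0.90, 0.10]]
print(is_registered([0.50, 0.50], db))  # → True
```

A production system would use a learned face-embedding space and a calibrated decision threshold rather than raw coordinates.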
Step 104: if it is determined that the user is registered, obtain the three-dimensional face model shaping parameters corresponding to the user, and adjust the key points on the original three-dimensional face model according to the shaping parameters to obtain the target three-dimensional face model after virtual face-shaping.

The three-dimensional face model shaping parameters include, but are not limited to, the adjustment positions and distances of the target key points to be adjusted in the three-dimensional face model.

Specifically, if it is determined that the user is a registered user, then in order to provide the optimized beautification service for that registered user, the shaping parameters corresponding to the user are obtained, and the key points on the original three-dimensional face model are adjusted according to them to obtain the target three-dimensional face model after virtual face-shaping. It can be understood that the original three-dimensional face model is actually built from key points and the triangular mesh formed by connecting them; therefore, when the key points of a region to be reshaped on the original model are adjusted, the corresponding three-dimensional face model changes accordingly, yielding the target face model after virtual face-shaping.
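Representing the shaping parameters as per-key-point adjustment directions and distances, the adjustment of the original model's key points can be sketched as follows; the data layout (a list of vertex tuples plus an index-to-offset map) is an assumption for illustration.

```python
def apply_shaping(model_keypoints, shaping_params):
    """Adjust key points on the original 3D face model: shaping_params
    maps a key-point index to a (dx, dy, dz) offset encoding the
    adjustment direction and distance for that key point."""
    adjusted = list(model_keypoints)
    for idx, (dx, dy, dz) in shaping_params.items():
        x, y, z = adjusted[idx]
        adjusted[idx] = (x + dx, y + dy, z + dz)
    return adjusted

# Toy model with two key points; reshape only the second one
target = apply_shaping([(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)],
                       {1: (0.1, 0.0, -0.2)})
print(target)  # → [(0.0, 0.0, 0.0), (1.1, 1.0, 0.8)]
```

Because the mesh triangles are defined by these key points, moving a key point deforms the surrounding surface of the model, as the text describes.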
The shaping parameters corresponding to the user may be registered actively by the user, or may be generated automatically after analyzing the user's original three-dimensional face model.

As one possible implementation, two-dimensional sample face images of the user at multiple angles are obtained, together with the depth information corresponding to each sample image; three-dimensional reconstruction is performed according to the depth information and the sample images to obtain an original sample three-dimensional face model; the key points of the regions to be reshaped on the original sample model are adjusted to obtain a target sample three-dimensional face model after virtual face-shaping; and the original sample model and the target sample model are compared to extract the shaping parameters corresponding to the user, for example by generating corresponding coordinate-difference information from the coordinate differences of the key points of the same regions.
In this embodiment, to make the adjustment of the three-dimensional face model more convenient, the key points of each region to be reshaped are displayed on the original sample three-dimensional face model, for example in a highlighted manner. A movement operation performed by the user on a key point of a region to be reshaped is detected, for example a drag operation on a selected key point; the key point is adjusted according to the movement operation; and the target sample three-dimensional face model after virtual face-shaping is obtained from the connections between the adjusted key point and its adjacent key points.

In practical implementations, the adjustment of the key points of the regions to be reshaped on the original sample model may be received in different ways, illustrated as follows:
First example:

In this example, for ease of operation, adjustment controls may be provided to the user, so that the user can adjust the three-dimensional face model in real time by operating the controls.

Specifically, an adjustment control corresponding to each key point of a region to be reshaped is generated; a touch operation performed by the user on the adjustment control of a key point is detected, and the corresponding adjustment parameters are obtained; the key points of the regions to be reshaped on the original sample model are adjusted according to the adjustment parameters to obtain the target sample three-dimensional face model after virtual face-shaping; and the shaping parameters are obtained from the difference between the target sample model after virtual face-shaping and the original sample model. The adjustment parameters include the movement direction and movement distance of a key point, and so on.
In this embodiment, face-shaping suggestion information may also be provided to the user, for example terminal suggestions such as "plump the lips, fill out the apple cheeks". The suggestion information may be in written form, speech form, or the like. If the user confirms the face-shaping suggestion, the key points of the regions to be reshaped and the adjustment parameters are determined from it. For example, if the user confirms the suggestion above, the shaping parameters are determined as adjustments of the depth values of the mouth and the cheeks, where the magnitude of the depth-value change may be determined from the depth values of the corresponding parts on the user's original sample three-dimensional face model. To ensure a natural adjustment effect, the difference between the adjusted depth value and the initial depth value is kept within a certain range. The key points of the regions to be reshaped on the original sample model are adjusted according to the adjustment parameters to obtain the target sample three-dimensional face model after virtual face-shaping.
Further, to improve the aesthetic quality of the face-lifting result, before the key points of the part to be reshaped on the original three-dimensional face model are adjusted, the skin texture map covering the surface of the original three-dimensional face model may also be beautified to obtain a beautified original three-dimensional face model.
It can be understood that when acne appears in the face image, the color at the corresponding position in the skin texture map may be red; when freckles appear, the color at the corresponding position may be brown or black; and when a dark mole appears, the color at the corresponding position may be black.
Therefore, whether abnormal regions exist can be determined from the colors of the skin texture map of the original three-dimensional face model. When no abnormal region exists, no processing is needed; when abnormal regions exist, a corresponding beautification strategy can further be selected, according to the relative spatial positions of the points within each abnormal region and the color information of the region, and applied to beautify the abnormal region.
In general, acne protrudes from the skin surface and dark moles may also protrude, while freckles do not. Therefore, in this embodiment of the present application, the abnormality type of an abnormal region, for example raised or not raised, can be determined from the height difference between the center point and the edge points of the region. After the abnormality type is determined, the corresponding beautification strategy can be selected according to the type and the color information, and skin smoothing can be applied to the abnormal region using the filter range and filtering strength indicated by the strategy, together with the matching skin tone of the region.
For example, when the abnormality type is raised and the color information is red, the abnormal region may be acne, and the corresponding skin-smoothing strength is stronger; when the abnormality type is not raised and the color is cyan, the region may be a tattoo, and the corresponding smoothing strength is weaker.
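The type-plus-color decision above can be sketched as a small classifier. The height threshold, the color labels, and the strength names here are illustrative assumptions; the patent does not give concrete values.

```python
def classify_blemish(center_height, edge_height, color):
    """Classify an abnormal region by whether its center is raised above its
    edge points and by its color, and pick a skin-smoothing strength."""
    raised = (center_height - edge_height) > 0.2  # assumed height threshold
    if raised and color == "red":
        return "acne", "strong"        # raised + red: smooth strongly
    if raised and color == "black":
        return "mole", "strong"
    if not raised and color in ("brown", "black"):
        return "freckle", "medium"     # flat + brown/black: freckle
    if not raised and color == "cyan":
        return "tattoo", "weak"        # flat + cyan: smooth weakly
    return "unknown", "none"
```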
Alternatively, the skin tone within the abnormal region can be filled in according to the matching skin tone of the region.
For example, when the abnormality type is raised and the color information is red, the abnormal region may be acne, and the anti-acne beautification strategy may be: apply skin smoothing to the acne and fill the skin tone within the acne region according to the normal skin tone near the acne, which is referred to in this embodiment as the matching skin tone. Alternatively, when the abnormality type is not raised and the color is brown, the region may be a freckle, and the anti-freckle strategy may be: take the normal skin tone near the freckle as the matching skin tone and fill the skin tone within the freckle region with it.
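One simple way to realize "fill with the matching skin tone" is to average the normal skin immediately surrounding the abnormal region and paint the region with that average. This is a sketch under that assumption; the patent does not specify how the matching tone is computed, and the `margin` width is invented for the example.

```python
import numpy as np

def fill_with_matching_tone(texture, mask, margin=4):
    """Fill an abnormal region (True in `mask`) with the mean color of the
    normal skin around it -- the 'matching skin tone'."""
    mask = mask.astype(bool)
    ys, xs = np.where(mask)
    y0 = max(ys.min() - margin, 0)
    y1 = min(ys.max() + margin + 1, texture.shape[0])
    x0 = max(xs.min() - margin, 0)
    x1 = min(xs.max() + margin + 1, texture.shape[1])
    window = texture[y0:y1, x0:x1]
    window_mask = mask[y0:y1, x0:x1]
    matching_tone = window[~window_mask].mean(axis=0)  # average of normal skin
    out = texture.astype(float).copy()
    out[mask] = matching_tone
    return out
```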
In the present application, because the depth information within each closed region of the original three-dimensional face model, formed with key points as vertices, is consistent, each closed region can be beautified separately when the skin texture map covering the model surface is beautified. This increases the reliability of the pixel values within each beautified closed region and improves the beautification effect.
As another possible implementation of this embodiment, a beautification strategy can be preset for each local face part, where the local face parts may include the nose, lips, eyes, cheeks, and so on. For example, the strategy for the nose may be to highlight the bridge and shade the wings of the nose to enhance its three-dimensional appearance, while the strategy for the cheeks may be to add blush and/or apply skin smoothing.
Accordingly, in this embodiment, the local face parts can be identified from the skin texture map according to the color information and their relative positions on the original three-dimensional face model, and each part can then be beautified according to its preset strategy.
Optionally, when the local face part is an eyebrow, skin smoothing can be applied to it with the filtering strength indicated by the eyebrow's beautification strategy.
When the local face part is a cheek, skin smoothing can be applied to it with the filtering strength indicated by the cheek's beautification strategy. It should be noted that, to make the beautified result more natural and the beautification effect more prominent, the filtering strength indicated for the cheeks may be greater than that indicated for the eyebrows.
When the local face part belongs to the nose, the shadow of the part can be deepened according to the shadow strength indicated by the nose's beautification strategy.
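The preset per-part strategies described above amount to a lookup table. The operations and strength values below are illustrative placeholders chosen to match the description (cheek filtering stronger than eyebrow filtering); they are not values from the patent.

```python
# Preset beautification strategies keyed by local face part (assumed values).
LOCAL_FACE_STRATEGIES = {
    "nose":    {"op": "highlight_bridge_shadow_wings", "shadow_strength": 0.4},
    "cheek":   {"op": "blush_and_smooth",              "filter_strength": 0.6},
    "eyebrow": {"op": "smooth",                        "filter_strength": 0.3},
    "lip":     {"op": "tint",                          "filter_strength": 0.2},
}

def strategy_for(part):
    """Look up the preset beautification strategy for a local face part."""
    return LOCAL_FACE_STRATEGIES.get(part)
```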
In the present application, beautifying each local face part based on its relative position on the original three-dimensional face model makes the beautified skin texture map more natural and the beautification effect more prominent. Moreover, targeted beautification of local face parts improves the imaging effect and the user's photographing experience.
Of course, in practical applications, even if it is known that the user is not registered, a good face beautification service can still be provided for the user.
In this embodiment, if it is known that the user is not registered, the user's attribute features are extracted, where the attribute features may include gender, age, ethnicity, and skin tone. For example, the user's hairstyle, head accessories, presence or absence of makeup, and so on can be identified based on image analysis to determine the attribute features. Then, the preset standard-face three-dimensional model reshaping parameters corresponding to those attribute features are obtained, the key points on the original three-dimensional face model are adjusted according to these parameters, and the target three-dimensional face model after virtual face-lifting is obtained.
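The unregistered-user fallback can be sketched as a table of standard-face parameters keyed by attribute features. The attribute keys and parameter values below are invented placeholders for illustration only.

```python
# Preset standard-face reshaping parameters keyed by (gender, age group).
# All keys and depth offsets are illustrative assumptions.
STANDARD_RESHAPING_PARAMS = {
    ("female", "young"): {"lip_depth": 0.3, "cheek_depth": 0.2},
    ("female", "adult"): {"lip_depth": 0.2, "cheek_depth": 0.1},
    ("male",   "young"): {"lip_depth": 0.0, "cheek_depth": 0.1},
    ("male",   "adult"): {"lip_depth": 0.0, "cheek_depth": 0.0},
}

def params_for_unregistered(gender, age_group):
    """Fall back to standard-face parameters when the user is not registered."""
    return STANDARD_RESHAPING_PARAMS.get(
        (gender, age_group), {"lip_depth": 0.0, "cheek_depth": 0.0})
```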
Step 105: the target three-dimensional face model after virtual face-lifting is mapped to the two-dimensional plane to obtain the target two-dimensional face image.
Specifically, after the key points of the part to be reshaped on the original three-dimensional face model are adjusted and the target three-dimensional face model after virtual face-lifting is obtained, the target model can be mapped to the two-dimensional plane to obtain the target two-dimensional face image, and face beautification can then be applied to the target two-dimensional face image.
In the present application, because the skin texture map is beautified in three dimensions, the beautified skin texture map is more natural. Mapping the target three-dimensional face model, generated by virtual face-lifting from the beautified model, to the two-dimensional plane yields a beautified target two-dimensional face image, and applying face beautification to that image makes the final result more realistic and the beautification effect more prominent. This gives the user a preview of the beautified, face-lifted result and further improves the user's face-lifting experience.
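The mapping of the three-dimensional model to the two-dimensional plane can be illustrated with a standard pinhole projection. The intrinsic parameters (focal lengths and principal point) below are assumed example values; in practice they would come from the camera calibration mentioned later in the document.

```python
import numpy as np

def project_to_image_plane(points, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Perspective-project 3D model points (x, y, z) onto the 2D image plane
    using an assumed pinhole camera model."""
    pts = np.asarray(points, dtype=float)
    z = pts[:, 2]
    u = fx * pts[:, 0] / z + cx   # horizontal pixel coordinate
    v = fy * pts[:, 1] / z + cy   # vertical pixel coordinate
    return np.stack([u, v], axis=1)
```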
To make the flow of the virtual face-lifting method for face photographing clearer to those skilled in the art, its application in a concrete scenario is described below:
In this example, calibration refers to calibrating the camera so that each key point in the face image can be mapped to its corresponding key point in three-dimensional space.
In the registration phase, as shown in Fig. 4(a), two-dimensional sample face images of the user at multiple angles can be obtained through preview scanning of the face by the camera module; for example, about 20 two-dimensional sample face images at different angles, together with their depth maps, are collected for subsequent three-dimensional face reconstruction, and missing angles and the scan progress can be prompted during scanning. Three-dimensional reconstruction is then performed from each two-dimensional sample face image and its corresponding depth information to obtain the original sample three-dimensional face model. Face analysis, covering, for example, face shape, nose width, nose height, eye size, and lip thickness, is performed on the 3D face model, and face-lifting advisory information is provided. If the user confirms the advisory information, the key points of the part to be reshaped and the adjustment parameters are determined from it, the key points of that part on the original sample three-dimensional face model are adjusted according to the adjustment parameters, and the target sample three-dimensional face model after virtual face-lifting is obtained.
Then, as shown in Fig. 4(b), in the recognition phase, the user's current original two-dimensional face image and the corresponding depth information are obtained; three-dimensional reconstruction is performed from the depth information and the original two-dimensional face image to obtain the original three-dimensional face model; the pre-registered face information is queried to judge whether the user is registered; and if the user is known to be registered, the three-dimensional face model reshaping parameters corresponding to the user are obtained, the key points on the original three-dimensional face model are adjusted according to these parameters to obtain the target three-dimensional face model after virtual face-lifting, and the target model is mapped to the two-dimensional plane to obtain the target two-dimensional face image.
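The recognition-phase flow can be condensed into a small service sketch: look up the user's stored reshaping parameters and apply them if registered, otherwise pass the model through unchanged. Everything here is a stand-in abstraction (the model is a flat dict of key-point depths), not the patent's data structures.

```python
class FaceliftService:
    """Sketch of the recognition-phase flow from the scenario above."""

    def __init__(self):
        self.registry = {}  # facial-feature key -> per-user reshaping parameters

    def register(self, feature_key, params):
        """Registration phase: store the parameters confirmed by the user."""
        self.registry[feature_key] = params

    def process(self, feature_key, model):
        """Recognition phase: apply stored depth offsets if the user is registered."""
        params = self.registry.get(feature_key)
        if params is None:        # unregistered: no per-user reshaping
            return model
        return {k: v + params.get(k, 0.0) for k, v in model.items()}
```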
In conclusion the virtual lift face method that the face of the embodiment of the present application is taken pictures, obtains the current original two dimensional of user
Facial image, and depth information corresponding with original two dimensional facial image, according to depth information and original two dimensional facial image
Three-dimensionalreconstruction is carried out, original human face three-dimensional model is obtained, inquires pre-registered face information, judge whether user registers, if
Know that user has registered, then human face three-dimensional model shaping parameter corresponding to the user is obtained, according to human face three-dimensional model shaping
Parameter is adjusted the key point on original human face three-dimensional model, obtains the target human face three-dimensional model after virtual lift face, will
Target human face three-dimensional model after virtual lift face is mapped to two dimensional surface, obtains target two-dimension human face image.It is based on face as a result,
Threedimensional model carries out U.S. face to chartered user, optimizes U.S. face effect, improving target user expires U.S. face effect
Meaning degree and viscosity with product.
To implement the above embodiments, the present application also proposes a virtual face-lifting apparatus for face photographing. Fig. 5 is a structural schematic diagram of such an apparatus according to an embodiment of the present application. As shown in Fig. 5, the apparatus includes an acquisition module 10, a reconstruction module 20, a query module 30, an adjustment module 40, and a mapping module 50.
The acquisition module 10 is configured to obtain the user's current original two-dimensional face image and the depth information corresponding to the original two-dimensional face image.
The reconstruction module 20 is configured to perform three-dimensional reconstruction from the depth information and the original two-dimensional face image to obtain the original three-dimensional face model.
The query module 30 is configured to query the pre-registered face information and judge whether the user is registered.
In one embodiment of the present application, as shown in Fig. 6, the query module 30 includes an extraction unit 31 and a determination unit 32.
The extraction unit 31 is configured to analyze the original two-dimensional face image and extract the user's facial features.
The determination unit 32 is configured to query the pre-registered face database and judge whether the facial features exist in it; if they exist, it is determined that the user is registered, and if not, that the user is not registered.
The adjustment module 40 is configured to, when it is known that the user is registered, obtain the three-dimensional face model reshaping parameters corresponding to the user and adjust the key points on the original three-dimensional face model according to these parameters to obtain the target three-dimensional face model after virtual face-lifting.
The mapping module 50 is configured to map the target three-dimensional face model after virtual face-lifting to the two-dimensional plane to obtain the target two-dimensional face image.
It should be noted that the foregoing explanation of the virtual face-lifting method embodiments also applies to the virtual face-lifting apparatus of this embodiment, so details are not repeated here.
In conclusion the virtual beauty device that the face of the embodiment of the present application is taken pictures, obtains the current original two dimensional of user
Facial image, and depth information corresponding with original two dimensional facial image, according to depth information and original two dimensional facial image
Three-dimensionalreconstruction is carried out, original human face three-dimensional model is obtained, inquires pre-registered face information, judge whether user registers, if
Know that user has registered, then human face three-dimensional model shaping parameter corresponding to the user is obtained, according to human face three-dimensional model shaping
Parameter is adjusted the key point on original human face three-dimensional model, obtains the target human face three-dimensional model after virtual lift face, will
Target human face three-dimensional model after virtual lift face is mapped to two dimensional surface, obtains target two-dimension human face image.It is based on face as a result,
Threedimensional model carries out U.S. face to chartered user, optimizes U.S. face effect, improving target user expires U.S. face effect
Meaning degree and viscosity with product.
To implement the above embodiments, the present application also proposes a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor of a mobile terminal, the virtual face-lifting method for face photographing described in the foregoing embodiments is implemented.
To implement the above embodiments, the present application also proposes an electronic device.
Fig. 7 is a schematic diagram of the internal structure of an electronic device 200 in one embodiment. The electronic device 200 includes a processor 220, a memory 230, a display 240, and an input device 250 connected through a system bus 210. The memory 230 of the electronic device 200 stores an operating system and computer-readable instructions. The computer-readable instructions can be executed by the processor 220 to implement the face beautification method of the embodiments of the present application. The processor 220 provides computing and control capability and supports the operation of the entire electronic device 200. The display 240 of the electronic device 200 may be a liquid crystal display, an electronic-ink display, or the like, and the input device 250 may be a touch layer covering the display 240, a button, trackball, or trackpad arranged on the housing of the electronic device 200, or an external keyboard, trackpad, mouse, or the like. The electronic device 200 may be a mobile phone, a tablet computer, a laptop, a personal digital assistant, or a wearable device (such as a smart band, a smartwatch, a smart helmet, or smart glasses).
Those skilled in the art will understand that the structure shown in Fig. 7 is only a schematic diagram of the part of the structure relevant to the solution of the present application and does not limit the electronic device 200 to which the solution is applied; a specific electronic device 200 may include more or fewer components than shown in the figure, combine certain components, or use a different arrangement of components.
To implement the above embodiments, the present application also provides an image processing circuit, which includes an image unit 310, a depth information unit 320, and a processing unit 330.
The image unit 310 is configured to output the user's current original two-dimensional face image.
The depth information unit 320 is configured to output the depth information corresponding to the original two-dimensional face image.
The processing unit 330 is electrically connected to the image unit and the depth information unit respectively, and is configured to perform three-dimensional reconstruction from the depth information and the original two-dimensional face image to obtain the original three-dimensional face model; query the pre-registered face information and judge whether the user is registered; and, if the user is known to be registered, obtain the three-dimensional face model reshaping parameters corresponding to the user, adjust the key points on the original three-dimensional face model according to these parameters to obtain the target three-dimensional face model after virtual face-lifting, and map the target model to the two-dimensional plane to obtain the target two-dimensional face image.
In this embodiment of the present application, the image unit 310 may specifically include an image sensor 311 and an image signal processing (ISP) processor 312 that are electrically connected.
The image sensor 311 is configured to output raw image data.
The ISP processor 312 is configured to output the original two-dimensional face image according to the raw image data.
In this embodiment of the present application, the raw image data captured by the image sensor 311 is first processed by the ISP processor 312, which analyzes the raw image data to capture image statistics that can be used to determine one or more control parameters of the image sensor 311, and outputs a face image in YUV or RGB format. The image sensor 311 may include a color filter array (such as a Bayer filter) and corresponding photosensitive units; it can obtain the light intensity and wavelength information captured by each photosensitive unit and provide a set of raw image data that can be processed by the ISP processor 312. After processing the raw image data, the ISP processor 312 obtains the face image in YUV or RGB format and sends it to the processing unit 330.
When processing the raw image data, the ISP processor 312 can process it pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 312 can perform one or more image processing operations on the raw image data and collect statistical information about the image data, where the image processing operations can be performed with the same or different bit-depth precision.
As a possible implementation, the depth information unit 320 includes a structured-light sensor 321 and a depth map generation chip 322 that are electrically connected.
The structured-light sensor 321 is configured to generate an infrared speckle pattern.
The depth map generation chip 322 is configured to output the depth information corresponding to the original two-dimensional face image according to the infrared speckle pattern.
In this embodiment of the present application, the structured-light sensor 321 projects structured light onto an object, obtains the structured light reflected by the object, and forms the infrared speckle pattern by imaging the reflected structured light. The structured-light sensor 321 sends the infrared speckle pattern to the depth map generation chip 322, which determines the deformation of the structured light from the infrared speckle pattern, derives the depth of the object from that deformation, and obtains a depth map (Depth Map) indicating the depth of each pixel in the infrared speckle pattern. The depth map generation chip 322 sends the depth map to the processing unit 330.
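The patent does not specify how the depth map generation chip computes depth from the speckle deformation, but a common structured-light formulation is triangulation: the shift (disparity) of each speckle relative to a reference pattern is converted to depth from the focal length and the emitter-camera baseline. The sketch below uses that standard relation with assumed focal-length and baseline values.

```python
import numpy as np

def depth_from_speckle_disparity(disparity_px, fx=580.0, baseline_m=0.075):
    """Convert speckle disparity (pixels) to depth (meters) by triangulation:
    depth = focal_length * baseline / disparity. fx and baseline are assumed."""
    d = np.asarray(disparity_px, dtype=float)
    return fx * baseline_m / d
```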
As a possible implementation, the processing unit 330 includes a CPU 331 and a GPU (Graphics Processing Unit) 332 that are electrically connected.
The CPU 331 is configured to align the face image with the depth map according to the calibration data and output the three-dimensional face model from the aligned face image and depth map.
The GPU 332 is configured to, if it is known that the user is registered, obtain the three-dimensional face model reshaping parameters corresponding to the user, adjust the key points on the original three-dimensional face model according to these parameters to obtain the target three-dimensional face model after virtual face-lifting, and map the target model to the two-dimensional plane to obtain the target two-dimensional face image.
In this embodiment of the present application, the CPU 331 obtains the face image from the ISP processor 312 and the depth map from the depth map generation chip 322; combining these with the previously obtained calibration data, it can align the face image with the depth map and thereby determine the depth information corresponding to each pixel in the face image. The CPU 331 then performs three-dimensional reconstruction from the depth information and the face image to obtain the three-dimensional face model.
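Once the depth map is aligned with the face image, the reconstruction step amounts to back-projecting each pixel into 3D through the camera intrinsics from calibration. This is a minimal sketch; the intrinsic values are illustrative, not the patent's calibration data.

```python
import numpy as np

def backproject_to_3d(depth_map, fx=500.0, fy=500.0, cx=160.0, cy=120.0):
    """Lift every pixel of an aligned depth map into a 3D point cloud
    using assumed pinhole intrinsics (inverse of perspective projection)."""
    h, w = depth_map.shape
    v, u = np.mgrid[0:h, 0:w].astype(float)  # v: row (y), u: column (x)
    z = depth_map.astype(float)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)      # shape (h, w, 3)
```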
The CPU 331 sends the three-dimensional face model to the GPU 332, so that the GPU 332 executes the virtual face-lifting method for face photographing described in the foregoing embodiments according to the three-dimensional face model and obtains the target two-dimensional face image.
Further, the image processing circuit may also include a first display unit 341. The first display unit 341 is electrically connected to the processing unit 330 and is configured to display the adjustment controls corresponding to the key points of the part to be reshaped.
Further, the image processing circuit may also include a second display unit 342. The second display unit 342 is electrically connected to the processing unit 330 and is configured to display the target sample three-dimensional face model after virtual face-lifting.
Optionally, the image processing circuit may also include an encoder 350 and a memory 360.
In this embodiment of the present application, the beautified face image obtained by the GPU 332 can also be encoded by the encoder 350 and stored in the memory 360, where the encoder 350 can be implemented by a coprocessor.
In one embodiment, there may be multiple memories 360, or the memory 360 may be divided into multiple storage spaces; the image data processed by the GPU 332 can be stored in a dedicated memory or dedicated storage space, and may support Direct Memory Access (DMA). The memory 360 can be configured to implement one or more frame buffers.
The above process is described in detail below with reference to Fig. 9.
It should be noted that Fig. 9 is a schematic diagram of one possible implementation of the image processing circuit; for ease of illustration, only the aspects relevant to this embodiment of the present application are shown.
As shown in Fig. 9, the raw image data captured by the image sensor 311 is first processed by the ISP processor 312, which analyzes the raw image data to capture image statistics that can be used to determine one or more control parameters of the image sensor 311, and outputs a face image in YUV or RGB format. The image sensor 311 may include a color filter array (such as a Bayer filter) and corresponding photosensitive units; it can obtain the light intensity and wavelength information captured by each photosensitive unit and provide a set of raw image data that can be processed by the ISP processor 312. After processing the raw image data, the ISP processor 312 obtains the face image in YUV or RGB format and sends it to the CPU 331.
When processing the raw image data, the ISP processor 312 can process it pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 312 can perform one or more image processing operations on the raw image data and collect statistical information about the image data, where the image processing operations can be performed with the same or different bit-depth precision.
As shown in Fig. 9, the structured-light sensor 321 projects structured light onto an object, obtains the structured light reflected by the object, and forms an infrared speckle pattern by imaging the reflected structured light. The structured-light sensor 321 sends the infrared speckle pattern to the depth map generation chip 322, which determines the deformation of the structured light from the infrared speckle pattern, derives the depth of the object from that deformation, and obtains a depth map (Depth Map) indicating the depth of each pixel in the infrared speckle pattern. The depth map generation chip 322 sends the depth map to the CPU 331.
The CPU 331 obtains the face image from the ISP processor 312 and the depth map from the depth map generation chip 322; combining these with the previously obtained calibration data, it can align the face image with the depth map and thereby determine the depth information corresponding to each pixel in the face image. The CPU 331 then performs three-dimensional reconstruction from the depth information and the face image to obtain the three-dimensional face model.
The CPU 331 sends the three-dimensional face model to the GPU 332, so that the GPU 332 executes the method described in the foregoing embodiments according to the three-dimensional face model, realizes virtual face-lifting, and obtains the face image after virtual face-lifting. The face image after virtual face-lifting obtained by the GPU 332 can be displayed by the display 340 (which includes the first display unit 341 and the second display unit 342 described above), and/or encoded by the encoder 350 and stored in the memory 360, where the encoder 350 can be implemented by a coprocessor.
In one embodiment, there may be multiple memories 360, or the memory 360 may be divided into multiple storage spaces; the image data processed by the GPU 332 can be stored in a dedicated memory or dedicated storage space, and may support Direct Memory Access (DMA). The memory 360 can be configured to implement one or more frame buffers.
For example, the following are the steps of the control method implemented using the processor 220 in Fig. 7 or the image processing circuit in Fig. 9 (specifically the CPU 331 and the GPU 332):
The CPU 331 obtains the two-dimensional face image and the depth information corresponding to the face image; the CPU 331 performs three-dimensional reconstruction from the depth information and the face image to obtain the three-dimensional face model; the GPU 332 obtains the three-dimensional face model reshaping parameters corresponding to the user and adjusts the key points on the original three-dimensional face model according to these parameters to obtain the target three-dimensional face model after virtual face-lifting; and the GPU 332 maps the target three-dimensional face model after virtual face-lifting to the two-dimensional plane to obtain the target two-dimensional face image.
In the description of this specification, references to the terms "one embodiment", "some embodiments", "an example", "a specific example", "some examples", and the like mean that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, provided they do not contradict each other, those skilled in the art may combine the different embodiments or examples, and the features of the different embodiments or examples, described in this specification.
In addition, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, for example two or three, unless specifically defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing a custom logic function or the steps of the process. The scope of the preferred embodiments of the present application also includes other implementations, in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order, according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
Logic and/or steps represented in a flowchart, or otherwise described herein, for example an ordered list of executable instructions that can be considered to implement logic functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, apparatus, or device and execute them). For the purposes of this specification, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that each part of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware that is stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any of the following technologies known in the art, or a combination thereof, may be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiment methods may be completed by instructing relevant hardware through a program. The program may be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present application may be integrated in one processing module, or each unit may exist physically alone, or two or more units may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and shall not be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.
Claims (22)
1. A virtual face-lifting method for face photographing, comprising:
acquiring a current original two-dimensional face image of a user, and depth information corresponding to the original two-dimensional face image;
performing three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original three-dimensional face model;
querying pre-registered face information to determine whether the user is registered;
if it is determined that the user is registered, acquiring a three-dimensional face model shaping parameter corresponding to the user, and adjusting key points on the original three-dimensional face model according to the three-dimensional face model shaping parameter to obtain a target three-dimensional face model after virtual face-lifting; and
mapping the target three-dimensional face model after virtual face-lifting to a two-dimensional plane to obtain a target two-dimensional face image.
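Read as an algorithm, claim 1 chains four operations: reconstruct a 3D face model from the 2D image plus depth map, look up the user's stored shaping parameters, displace the model's key points accordingly, and project the adjusted model back to a 2D image. The sketch below illustrates only the last two steps with a simple pinhole camera; the function names, the offset-style representation of the shaping parameters, and the camera intrinsics are all illustrative assumptions, not details taken from the patent.

```python
def apply_shaping(model_points, shaping_params):
    """Displace 3D key points by per-user shaping offsets.

    model_points: list of (x, y, z) key-point coordinates.
    shaping_params: dict mapping key-point index -> (dx, dy, dz) offset.
    """
    adjusted = []
    for i, (x, y, z) in enumerate(model_points):
        dx, dy, dz = shaping_params.get(i, (0.0, 0.0, 0.0))
        adjusted.append((x + dx, y + dy, z + dz))
    return adjusted


def project_to_2d(points_3d, focal=500.0, center=(320.0, 240.0)):
    """Pinhole projection of the adjusted model back onto the image plane."""
    return [(focal * x / z + center[0], focal * y / z + center[1])
            for x, y, z in points_3d]


# Toy model: two key points in camera space (units arbitrary);
# key point 1 is displaced 5 units along -y by the shaping parameter.
model = [(0.0, 0.0, 500.0), (30.0, -20.0, 510.0)]
params = {1: (0.0, -5.0, 0.0)}
target_2d = project_to_2d(apply_shaping(model, params))
```

In the claimed method the shaping parameters would come from the registration flow of claims 6-11; here they are hand-written offsets purely for illustration.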
2. The method according to claim 1, wherein querying the pre-registered face information to determine whether the user is registered comprises:
analyzing the original two-dimensional face image to extract facial features of the user; and
querying a pre-registered facial database to determine whether the facial features exist therein; if they exist, determining that the user is registered; if not, determining that the user is not registered.
3. The method according to claim 1, further comprising, after determining whether the user is registered:
if it is determined that the user is not registered, extracting user attribute features of the user; and
acquiring a preset standard three-dimensional face model shaping parameter corresponding to the user attribute features, and adjusting the key points on the original three-dimensional face model according to the standard three-dimensional face model shaping parameter to obtain the target three-dimensional face model after virtual face-lifting.
4. The method according to claim 3, wherein the user attribute features comprise:
gender, age, ethnicity, and skin color.
5. The method according to claim 1, wherein, before the three-dimensional reconstruction is performed according to the depth information and the original two-dimensional face image, the method further comprises:
extracting user attribute features of the user; and
beautifying the original two-dimensional face image according to the user attribute features to obtain a beautified original two-dimensional face image.
6. The method according to claim 1, further comprising:
acquiring two-dimensional sample face images of the user from multiple angles, and depth information corresponding to each two-dimensional sample face image;
performing three-dimensional reconstruction according to the depth information and the two-dimensional sample face images to obtain an original sample three-dimensional face model;
adjusting key points of a part to be face-lifted on the original sample three-dimensional face model to obtain a target sample three-dimensional face model after virtual face-lifting; and
comparing the original sample three-dimensional face model with the target sample three-dimensional face model to extract the three-dimensional face model shaping parameter corresponding to the user.
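One plausible reading of the comparison step above is to record, for each key point, how far it moved between the original sample model and the confirmed target sample model; those per-point deltas then serve as the reusable shaping parameter of claim 1. A minimal sketch under that assumption (the function name and the delta-per-key-point representation are illustrative, not the patent's specification):

```python
def extract_shaping_params(original_points, target_points, eps=1e-6):
    """Record, per key point, the displacement between the original and
    target sample models; unmoved points are skipped so the stored
    parameter set stays compact."""
    params = {}
    for i, (orig, tgt) in enumerate(zip(original_points, target_points)):
        delta = tuple(t - o for o, t in zip(orig, tgt))
        if any(abs(d) > eps for d in delta):
            params[i] = delta
    return params


# Sample models: the user narrowed the face by moving points 1 and 2 inward.
original = [(0.0, 0.0, 500.0), (30.0, -20.0, 510.0), (-30.0, -20.0, 510.0)]
target = [(0.0, 0.0, 500.0), (28.0, -20.0, 510.0), (-28.0, -20.0, 510.0)]
params = extract_shaping_params(original, target)
```

Storing only the moved points mirrors the claim's intent: the parameter captures the user's preferred adjustments once, so later photographs can be reshaped without repeating the interactive session.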
7. The method according to claim 6, further comprising, after obtaining the original three-dimensional face model:
beautifying a skin texture map covering the surface of the original three-dimensional face model to obtain a beautified original three-dimensional face model.
8. The method according to claim 6, wherein performing the three-dimensional reconstruction according to the depth information and the two-dimensional sample face images to obtain the original sample three-dimensional face model comprises:
performing key point recognition on each two-dimensional sample face image to obtain positioning key points;
for each face image, determining the relative positions of the positioning key points in three-dimensional space according to the depth information of the positioning key points and their distances on the two-dimensional sample face image; and
connecting adjacent positioning key points according to their relative positions in three-dimensional space to generate the original sample three-dimensional face model.
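The positioning step of claim 8 amounts to lifting each detected 2D key point into camera space using its depth value; connecting adjacent lifted points then yields the face mesh. A minimal sketch assuming a pinhole camera, with made-up intrinsics (focal length and principal point are not specified by the patent):

```python
def backproject(u, v, depth, focal=500.0, center=(320.0, 240.0)):
    """Inverse pinhole projection: pixel (u, v) plus its depth value
    gives the key point's relative position in camera space."""
    x = (u - center[0]) * depth / focal
    y = (v - center[1]) * depth / focal
    return (x, y, depth)


def build_point_cloud(keypoints_2d, depths):
    """Lift every detected positioning key point into 3D; connecting
    adjacent lifted points would then form the triangles of the mesh."""
    return [backproject(u, v, d) for (u, v), d in zip(keypoints_2d, depths)]


# Two detected key points with their sensed depths (arbitrary units).
pts = build_point_cloud([(320.0, 240.0), (370.0, 240.0)], [500.0, 500.0])
```

With multiple angles, as the claim requires, each view would be lifted this way and the resulting partial clouds registered into one model.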
9. The method according to claim 6, wherein adjusting the key points of the part to be face-lifted on the original sample three-dimensional face model to obtain the target sample three-dimensional face model after virtual face-lifting comprises:
generating an adjustment control corresponding to each key point of the part to be face-lifted;
detecting a touch operation performed by the user on the adjustment control corresponding to a key point of the part to be face-lifted to obtain a corresponding adjustment parameter; and
adjusting the key points of the part to be face-lifted on the original sample three-dimensional face model according to the adjustment parameter to obtain the target sample three-dimensional face model after virtual face-lifting.
10. The method according to claim 6, wherein adjusting the key points of the part to be face-lifted on the original sample three-dimensional face model to obtain the target sample three-dimensional face model after virtual face-lifting comprises:
displaying the key points of each part to be face-lifted on the original sample three-dimensional face model; and
detecting a movement operation performed by the user on a key point of the part to be face-lifted, and adjusting the key point according to the movement operation to obtain the target sample three-dimensional face model after virtual face-lifting.
11. The method according to claim 6, wherein adjusting the key points of the part to be face-lifted on the original sample three-dimensional face model to obtain the target sample three-dimensional face model after virtual face-lifting comprises:
providing face-lifting suggestion information to the user;
if the user confirms the face-lifting suggestion information, determining the key points of the part to be face-lifted and the adjustment parameters according to the face-lifting suggestion information; and
adjusting the key points of the part to be face-lifted on the original sample three-dimensional face model according to the adjustment parameters to obtain the target sample three-dimensional face model after virtual face-lifting.
12. A virtual face-lifting apparatus for face photographing, comprising:
an acquisition module, configured to acquire a current original two-dimensional face image of a user, and depth information corresponding to the original two-dimensional face image;
a reconstruction module, configured to perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original three-dimensional face model;
a query module, configured to query pre-registered face information to determine whether the user is registered;
an adjustment module, configured to, when it is determined that the user is registered, acquire a three-dimensional face model shaping parameter corresponding to the user, and adjust key points on the original three-dimensional face model according to the three-dimensional face model shaping parameter to obtain a target three-dimensional face model after virtual face-lifting; and
a mapping module, configured to map the target three-dimensional face model after virtual face-lifting to a two-dimensional plane to obtain a target two-dimensional face image.
13. The apparatus according to claim 12, wherein the query module comprises:
an extraction unit, configured to analyze the original two-dimensional face image and extract facial features of the user; and
a determination unit, configured to query a pre-registered facial database and determine whether the facial features exist therein; if they exist, determine that the user is registered; if not, determine that the user is not registered.
14. An electronic device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein, when the processor executes the computer program, the virtual face-lifting method for face photographing according to any one of claims 1-11 is implemented.
15. A computer-readable storage medium on which a computer program is stored, wherein, when the program is executed by a processor, the virtual face-lifting method for face photographing according to any one of claims 1-11 is implemented.
16. An image processing circuit, comprising: an image unit, a depth information unit, and a processing unit;
the image unit is configured to output a current original two-dimensional face image of a user;
the depth information unit is configured to output depth information corresponding to the original two-dimensional face image; and
the processing unit, electrically connected to the image unit and the depth information unit respectively, is configured to: perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain an original three-dimensional face model; query pre-registered face information to determine whether the user is registered; if it is determined that the user is registered, acquire a three-dimensional face model shaping parameter corresponding to the user, and adjust key points on the original three-dimensional face model according to the three-dimensional face model shaping parameter to obtain a target three-dimensional face model after virtual face-lifting; and map the target three-dimensional face model after virtual face-lifting to a two-dimensional plane to obtain a target two-dimensional face image.
17. The image processing circuit according to claim 16, wherein the image unit comprises an image sensor and an image signal processing (ISP) processor that are electrically connected;
the image sensor is configured to output raw image data; and
the ISP processor is configured to output the original two-dimensional face image according to the raw image data.
18. The image processing circuit according to claim 16, wherein the depth information unit comprises a structured-light sensor and a depth map generation chip that are electrically connected;
the structured-light sensor is configured to generate an infrared speckle pattern; and
the depth map generation chip is configured to output the depth information corresponding to the original two-dimensional face image according to the infrared speckle pattern.
19. The image processing circuit according to claim 18, wherein the processing unit comprises a CPU and a GPU that are electrically connected;
wherein the CPU is configured to perform three-dimensional reconstruction according to the depth information and the original two-dimensional face image to obtain the original three-dimensional face model, and to query pre-registered face information to determine whether the user is registered; and
the GPU is configured to, if it is determined that the user is registered, acquire the three-dimensional face model shaping parameter corresponding to the user, adjust the key points on the original three-dimensional face model according to the three-dimensional face model shaping parameter to obtain the target three-dimensional face model after virtual face-lifting, and map the target three-dimensional face model after virtual face-lifting to a two-dimensional plane to obtain the target two-dimensional face image.
20. The image processing circuit according to claim 19, wherein the GPU is further configured to:
extract user attribute features of the user; and
beautify the original two-dimensional face image according to the user attribute features to obtain a beautified original two-dimensional face image.
21. The image processing circuit according to any one of claims 16-20, further comprising a first display unit;
the first display unit is electrically connected to the processing unit and is configured to display the adjustment control corresponding to each key point of the part to be face-lifted.
22. The image processing circuit according to any one of claims 16-20, further comprising a second display unit;
the second display unit is electrically connected to the processing unit and is configured to display the target sample three-dimensional face model after virtual face-lifting.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810551058.3A CN108765273B (en) | 2018-05-31 | 2018-05-31 | Virtual face-lifting method and device for face photographing |
PCT/CN2019/089348 WO2019228473A1 (en) | 2018-05-31 | 2019-05-30 | Method and apparatus for beautifying face image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810551058.3A CN108765273B (en) | 2018-05-31 | 2018-05-31 | Virtual face-lifting method and device for face photographing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108765273A true CN108765273A (en) | 2018-11-06 |
CN108765273B CN108765273B (en) | 2021-03-09 |
Family
ID=64001237
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810551058.3A Active CN108765273B (en) | 2018-05-31 | 2018-05-31 | Virtual face-lifting method and device for face photographing |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108765273B (en) |
WO (1) | WO2019228473A1 (en) |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110020600A (en) * | 2019-03-05 | 2019-07-16 | 厦门美图之家科技有限公司 | Generate the method for training the data set of face alignment model |
CN110189406A (en) * | 2019-05-31 | 2019-08-30 | 阿里巴巴集团控股有限公司 | Image data mask method and its device |
CN110278029A (en) * | 2019-06-25 | 2019-09-24 | Oppo广东移动通信有限公司 | Data transfer control method and Related product |
CN110310318A (en) * | 2019-07-03 | 2019-10-08 | 北京字节跳动网络技术有限公司 | A kind of effect processing method and device, storage medium and terminal |
CN110321849A (en) * | 2019-07-05 | 2019-10-11 | 腾讯科技(深圳)有限公司 | Image processing method, device and computer readable storage medium |
CN110473295A (en) * | 2019-08-07 | 2019-11-19 | 重庆灵翎互娱科技有限公司 | A kind of method and apparatus that U.S. face processing is carried out based on three-dimensional face model |
WO2019228473A1 (en) * | 2018-05-31 | 2019-12-05 | Oppo广东移动通信有限公司 | Method and apparatus for beautifying face image |
CN110675489A (en) * | 2019-09-25 | 2020-01-10 | 北京达佳互联信息技术有限公司 | Image processing method and device, electronic equipment and storage medium |
CN111031305A (en) * | 2019-11-21 | 2020-04-17 | 北京市商汤科技开发有限公司 | Image processing method and apparatus, image device, and storage medium |
CN111178337A (en) * | 2020-01-07 | 2020-05-19 | 南京甄视智能科技有限公司 | Human face key point data enhancement method, device and system and model training method |
CN111353931A (en) * | 2018-12-24 | 2020-06-30 | 黄庆武整形医生集团(深圳)有限公司 | Shaping simulation method, shaping simulation system, readable storage medium and device |
CN111370100A (en) * | 2020-03-11 | 2020-07-03 | 深圳小佳科技有限公司 | Face-lifting recommendation method and system based on cloud server |
CN111539882A (en) * | 2020-04-17 | 2020-08-14 | 华为技术有限公司 | Interactive method for assisting makeup, terminal and computer storage medium |
CN111966852A (en) * | 2020-06-28 | 2020-11-20 | 北京百度网讯科技有限公司 | Virtual face-lifting method and device based on human face |
CN112150618A (en) * | 2020-10-16 | 2020-12-29 | 四川大学 | Processing method and device for virtual shaping of canthus |
CN112927343A (en) * | 2019-12-05 | 2021-06-08 | 杭州海康威视数字技术股份有限公司 | Image generation method and device |
CN113657357A (en) * | 2021-10-20 | 2021-11-16 | 北京市商汤科技开发有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN113724396A (en) * | 2021-09-10 | 2021-11-30 | 广州帕克西软件开发有限公司 | Virtual face-lifting method and device based on face mesh |
CN113763285A (en) * | 2021-09-27 | 2021-12-07 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
CN113902790A (en) * | 2021-12-09 | 2022-01-07 | 北京的卢深视科技有限公司 | Beauty guidance method, device, electronic equipment and computer readable storage medium |
CN114120414A (en) * | 2021-11-29 | 2022-03-01 | 北京百度网讯科技有限公司 | Image processing method, image processing apparatus, electronic device, and medium |
US11450068B2 (en) | 2019-11-21 | 2022-09-20 | Beijing Sensetime Technology Development Co., Ltd. | Method and device for processing image, and storage medium using 3D model, 2D coordinates, and morphing parameter |
CN115239888A (en) * | 2022-08-31 | 2022-10-25 | 北京百度网讯科技有限公司 | Method, apparatus, electronic device, and medium for reconstructing three-dimensional face image |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2591994A (en) * | 2020-01-31 | 2021-08-18 | Fuel 3D Tech Limited | A method for generating a 3D model |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6283858B1 (en) * | 1997-02-25 | 2001-09-04 | Bgk International Incorporated | Method for manipulating images |
CN101777195A (en) * | 2010-01-29 | 2010-07-14 | 浙江大学 | Three-dimensional face model adjusting method |
CN105938627A (en) * | 2016-04-12 | 2016-09-14 | 湖南拓视觉信息技术有限公司 | Processing method and system for virtual plastic processing on face |
CN106940880A (en) * | 2016-01-04 | 2017-07-11 | 中兴通讯股份有限公司 | A kind of U.S. face processing method, device and terminal device |
CN107705356A (en) * | 2017-09-11 | 2018-02-16 | 广东欧珀移动通信有限公司 | Image processing method and device |
CN107730445A (en) * | 2017-10-31 | 2018-02-23 | 广东欧珀移动通信有限公司 | Image processing method, device, storage medium and electronic equipment |
CN107993209A (en) * | 2017-11-30 | 2018-05-04 | 广东欧珀移动通信有限公司 | Image processing method, device, computer-readable recording medium and electronic equipment |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108040208A (en) * | 2017-12-18 | 2018-05-15 | 信利光电股份有限公司 | A kind of depth U.S. face method, apparatus, equipment and computer-readable recording medium |
CN108765273B (en) * | 2018-05-31 | 2021-03-09 | Oppo广东移动通信有限公司 | Virtual face-lifting method and device for face photographing |
- 2018-05-31: CN201810551058.3A filed in China; granted as CN108765273B (Active)
- 2019-05-30: PCT/CN2019/089348 filed; published as WO2019228473A1 (Application Filing)
Non-Patent Citations (1)
Title |
---|
TIAN WEI: "Research on Key Technologies of Computer-Aided Three-Dimensional Plastic Surgery", China Master's Theses Full-text Database, Information Science and Technology Series * |
Cited By (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019228473A1 (en) * | 2018-05-31 | 2019-12-05 | Oppo广东移动通信有限公司 | Method and apparatus for beautifying face image |
CN111353931B (en) * | 2018-12-24 | 2023-10-03 | 黄庆武整形医生集团(深圳)有限公司 | Shaping simulation method, system, readable storage medium and apparatus |
WO2020135286A1 (en) * | 2018-12-24 | 2020-07-02 | 甄选医美邦(杭州)网络科技有限公司 | Shaping simulation method and system, readable storage medium and device |
CN111353931A (en) * | 2018-12-24 | 2020-06-30 | 黄庆武整形医生集团(深圳)有限公司 | Shaping simulation method, shaping simulation system, readable storage medium and device |
CN110020600A (en) * | 2019-03-05 | 2019-07-16 | 厦门美图之家科技有限公司 | Generate the method for training the data set of face alignment model |
CN110020600B (en) * | 2019-03-05 | 2021-04-16 | 厦门美图之家科技有限公司 | Method for generating a data set for training a face alignment model |
CN110189406A (en) * | 2019-05-31 | 2019-08-30 | 阿里巴巴集团控股有限公司 | Image data mask method and its device |
CN110189406B (en) * | 2019-05-31 | 2023-11-28 | 创新先进技术有限公司 | Image data labeling method and device |
CN110278029B (en) * | 2019-06-25 | 2020-12-22 | Oppo广东移动通信有限公司 | Data transmission control method and related product |
CN110278029A (en) * | 2019-06-25 | 2019-09-24 | Oppo广东移动通信有限公司 | Data transfer control method and Related product |
CN110310318A (en) * | 2019-07-03 | 2019-10-08 | 北京字节跳动网络技术有限公司 | A kind of effect processing method and device, storage medium and terminal |
CN110321849B (en) * | 2019-07-05 | 2023-12-22 | 腾讯科技(深圳)有限公司 | Image data processing method, device and computer readable storage medium |
CN110321849A (en) * | 2019-07-05 | 2019-10-11 | 腾讯科技(深圳)有限公司 | Image processing method, device and computer readable storage medium |
CN110473295A (en) * | 2019-08-07 | 2019-11-19 | 重庆灵翎互娱科技有限公司 | A kind of method and apparatus that U.S. face processing is carried out based on three-dimensional face model |
CN110473295B (en) * | 2019-08-07 | 2023-04-25 | 重庆灵翎互娱科技有限公司 | Method and equipment for carrying out beautifying treatment based on three-dimensional face model |
CN110675489A (en) * | 2019-09-25 | 2020-01-10 | 北京达佳互联信息技术有限公司 | Image processing method and device, electronic equipment and storage medium |
CN110675489B (en) * | 2019-09-25 | 2024-01-23 | 北京达佳互联信息技术有限公司 | Image processing method, device, electronic equipment and storage medium |
CN111031305A (en) * | 2019-11-21 | 2020-04-17 | 北京市商汤科技开发有限公司 | Image processing method and apparatus, image device, and storage medium |
US11450068B2 (en) | 2019-11-21 | 2022-09-20 | Beijing Sensetime Technology Development Co., Ltd. | Method and device for processing image, and storage medium using 3D model, 2D coordinates, and morphing parameter |
CN112927343A (en) * | 2019-12-05 | 2021-06-08 | 杭州海康威视数字技术股份有限公司 | Image generation method and device |
CN112927343B (en) * | 2019-12-05 | 2023-09-05 | 杭州海康威视数字技术股份有限公司 | Image generation method and device |
CN111178337A (en) * | 2020-01-07 | 2020-05-19 | 南京甄视智能科技有限公司 | Human face key point data enhancement method, device and system and model training method |
CN111370100A (en) * | 2020-03-11 | 2020-07-03 | 深圳小佳科技有限公司 | Face-lifting recommendation method and system based on cloud server |
CN111539882A (en) * | 2020-04-17 | 2020-08-14 | 华为技术有限公司 | Interactive method for assisting makeup, terminal and computer storage medium |
CN111966852A (en) * | 2020-06-28 | 2020-11-20 | 北京百度网讯科技有限公司 | Virtual face-lifting method and device based on human face |
CN111966852B (en) * | 2020-06-28 | 2024-04-09 | 北京百度网讯科技有限公司 | Face-based virtual face-lifting method and device |
CN112150618B (en) * | 2020-10-16 | 2022-11-29 | 四川大学 | Processing method and device for virtual shaping of canthus |
CN112150618A (en) * | 2020-10-16 | 2020-12-29 | 四川大学 | Processing method and device for virtual shaping of canthus |
CN113724396A (en) * | 2021-09-10 | 2021-11-30 | 广州帕克西软件开发有限公司 | Virtual face-lifting method and device based on face mesh |
CN113763285A (en) * | 2021-09-27 | 2021-12-07 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
CN113657357B (en) * | 2021-10-20 | 2022-02-25 | 北京市商汤科技开发有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN113657357A (en) * | 2021-10-20 | 2021-11-16 | 北京市商汤科技开发有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN114120414A (en) * | 2021-11-29 | 2022-03-01 | 北京百度网讯科技有限公司 | Image processing method, image processing apparatus, electronic device, and medium |
CN113902790B (en) * | 2021-12-09 | 2022-03-25 | 北京的卢深视科技有限公司 | Beauty guidance method, device, electronic equipment and computer readable storage medium |
CN113902790A (en) * | 2021-12-09 | 2022-01-07 | 北京的卢深视科技有限公司 | Beauty guidance method, device, electronic equipment and computer readable storage medium |
CN115239888B (en) * | 2022-08-31 | 2023-09-12 | 北京百度网讯科技有限公司 | Method, device, electronic equipment and medium for reconstructing three-dimensional face image |
CN115239888A (en) * | 2022-08-31 | 2022-10-25 | 北京百度网讯科技有限公司 | Method, apparatus, electronic device, and medium for reconstructing three-dimensional face image |
Also Published As
Publication number | Publication date |
---|---|
CN108765273B (en) | 2021-03-09 |
WO2019228473A1 (en) | 2019-12-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108765273A (en) | Virtual face-lifting method and apparatus for face photographing | |
CN108447017A (en) | Face virtual face-lifting method and device | |
CN109118569A (en) | Rendering method and device based on threedimensional model | |
CN108764180A (en) | Face identification method, device, electronic equipment and readable storage medium storing program for executing | |
CN105843386B (en) | A kind of market virtual fitting system | |
CN105556508B (en) | The devices, systems, and methods of virtual mirror | |
CN108550185A (en) | Beautifying faces treating method and apparatus | |
CN108876709A (en) | Method for beautifying faces, device, electronic equipment and readable storage medium storing program for executing | |
CN107484428B (en) | Method for displaying objects | |
CN107479801A (en) | Displaying method of terminal, device and terminal based on user's expression | |
US20100189357A1 (en) | Method and device for the virtual simulation of a sequence of video images | |
CN107563304A (en) | Unlocking terminal equipment method and device, terminal device | |
Fyffe et al. | Multi‐view stereo on consistent face topology | |
CN109978984A (en) | Face three-dimensional rebuilding method and terminal device | |
CN107480613A (en) | Face identification method, device, mobile terminal and computer-readable recording medium | |
CN109102559A (en) | Threedimensional model treating method and apparatus | |
CN101779218A (en) | Makeup simulation system, makeup simulation apparatus, makeup simulation method, and makeup simulation program | |
CN109147024A (en) | Expression replacing options and device based on threedimensional model | |
CN108682050A (en) | Beautification method and apparatus based on three-dimensional model | |
WO2020034698A1 (en) | Three-dimensional model-based special effect processing method and device, and electronic apparatus | |
TW200805175A (en) | Makeup simulation system, makeup simulation device, makeup simulation method and makeup simulation program | |
CN109191393A (en) | Beautification method based on three-dimensional model | |
CN108537126A (en) | A kind of face image processing system and method | |
CN109242760A (en) | Processing method, device and the electronic equipment of facial image | |
CN109191584A (en) | Threedimensional model processing method, device, electronic equipment and readable storage medium storing program for executing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||