CN109147037A - Special effect processing method and device based on a three-dimensional model, and electronic equipment - Google Patents
- Publication number
- CN109147037A (application number CN201810934012.XA)
- Authority
- CN
- China
- Prior art keywords
- model
- special effect
- three-dimensional model
- facial image
- effect processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
Abstract
The present application proposes a special effect processing method and device based on a three-dimensional model, and electronic equipment. The method includes: obtaining a captured two-dimensional facial image and depth information corresponding to the facial image; performing three-dimensional reconstruction of the face according to the depth information and the facial image to obtain a three-dimensional model corresponding to the face; identifying an expression category corresponding to the two-dimensional facial image; and fusing the three-dimensional model with a special effect model corresponding to the expression category to obtain a three-dimensional model after special effect processing. The method requires no manual switching between special effect models by the user, which improves the degree of automation of special effect addition, increases the user's enjoyment and the playability of the process, and enhances the realism of the added special effect, so that the processed result looks more natural.
Description
Technical field
The present application relates to the technical field of electronic equipment, and more particularly to a special effect processing method and device based on a three-dimensional model, and electronic equipment.
Background
With the popularity of electronic equipment, more and more users like to take photos or record their lives using the camera function of an electronic device. To make captured images more interesting, various applications have been developed that beautify images or add special effects to them. A user can select a favorite effect from the effects provided by an application according to his or her own needs and apply it to an image, making the image vivid and interesting.
However, because the addition of facial special effects such as tears depends on the user's active selection, the degree of automation of special effect addition is low. In addition, special effects are added on the two-dimensional image, so an effect cannot be perfectly fitted or matched to the image, resulting in a poor image processing effect and weak realism of the added special effect.
Summary of the invention
The present application proposes a special effect processing method and device based on a three-dimensional model, and electronic equipment, so that different special effect models can be applied without manual switching by the user, improving the degree of automation of special effect addition, increasing the user's enjoyment and the playability of the process, and enhancing the realism of the added special effect so that the processed result looks more natural. This solves the technical problems in the prior art that the added special effect lacks realism and that the degree of automation is low.
An embodiment of one aspect of the present application proposes a special effect processing method based on a three-dimensional model, including:
obtaining a captured two-dimensional facial image and depth information corresponding to the facial image;
performing three-dimensional reconstruction of the face according to the depth information and the facial image to obtain a three-dimensional model corresponding to the face;
identifying an expression category corresponding to the two-dimensional facial image; and
fusing the three-dimensional model with a special effect model corresponding to the expression category to obtain a three-dimensional model after special effect processing.
With the special effect processing method based on a three-dimensional model of the embodiments of the present application, a captured two-dimensional facial image and the depth information corresponding to the facial image are obtained; three-dimensional reconstruction of the face is then performed according to the depth information and the facial image to obtain a three-dimensional model corresponding to the face; an expression category corresponding to the two-dimensional facial image is then identified; and finally the three-dimensional model is fused with a special effect model corresponding to the expression category. As a result, the user does not need to manually switch between different special effect models, which improves the degree of automation of special effect addition and increases the user's enjoyment and the playability of the process. In addition, the corresponding special effect model is determined according to the expression the user makes, and the special effect model is fused with the three-dimensional model, which enhances the realism of the added special effect so that the processed result looks more natural.
An embodiment of another aspect of the present application proposes a special effect processing device based on a three-dimensional model, including:
an obtaining module, configured to obtain a captured two-dimensional facial image and depth information corresponding to the facial image;
a reconstruction module, configured to perform three-dimensional reconstruction of the face according to the depth information and the facial image to obtain a three-dimensional model corresponding to the face;
an identification module, configured to identify an expression category corresponding to the two-dimensional facial image; and
a fusion module, configured to fuse the three-dimensional model with a special effect model corresponding to the expression category to obtain a three-dimensional model after special effect processing.
With the special effect processing device based on a three-dimensional model of the embodiments of the present application, a captured two-dimensional facial image and the depth information corresponding to the facial image are obtained; three-dimensional reconstruction of the face is then performed according to the depth information and the facial image to obtain a three-dimensional model corresponding to the face; an expression category corresponding to the two-dimensional facial image is then identified; and finally the three-dimensional model is fused with a special effect model corresponding to the expression category to obtain a three-dimensional model after special effect processing. As a result, the user does not need to manually switch between different special effect models, which improves the degree of automation of special effect addition and increases the user's enjoyment and the playability of the process. In addition, the corresponding special effect model is determined according to the expression the user makes, and the special effect model is fused with the three-dimensional model, which enhances the realism of the added special effect so that the processed result looks more natural.
An embodiment of another aspect of the present application proposes electronic equipment, including a memory, a processor, and a computer program stored in the memory and runnable on the processor. When the processor executes the program, the special effect processing method based on a three-dimensional model proposed by the foregoing embodiments of the present application is implemented.
An embodiment of another aspect of the present application proposes a computer-readable storage medium on which a computer program is stored. When the program is executed by a processor, the special effect processing method based on a three-dimensional model proposed by the foregoing embodiments of the present application is implemented.
Additional aspects and advantages of the present application will be set forth in part in the following description, will become apparent in part from the following description, or will be learned through practice of the application.
Brief description of the drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of embodiments with reference to the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of the special effect processing method based on a three-dimensional model provided by Embodiment 1 of the present application;
Fig. 2 is a schematic flowchart of the special effect processing method based on a three-dimensional model provided by Embodiment 2 of the present application;
Fig. 3 is a schematic flowchart of the special effect processing method based on a three-dimensional model provided by Embodiment 3 of the present application;
Fig. 4 is a schematic structural diagram of the special effect processing device based on a three-dimensional model provided by Embodiment 4 of the present application;
Fig. 5 is a schematic structural diagram of the special effect processing device based on a three-dimensional model provided by Embodiment 5 of the present application;
Fig. 6 is a schematic diagram of the internal structure of the electronic equipment in one embodiment;
Fig. 7 is a schematic diagram of an image processing circuit as one possible implementation;
Fig. 8 is a schematic diagram of an image processing circuit as another possible implementation.
Detailed description of embodiments
Embodiments of the present application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary and are intended to explain the present application; they should not be construed as limiting the application.
The present application mainly addresses the technical problems in the prior art that added special effects lack realism and that the degree of automation is low, and proposes a special effect processing method based on a three-dimensional model.
With the special effect processing method based on a three-dimensional model of the embodiments of the present application, a captured two-dimensional facial image and the depth information corresponding to the facial image are obtained; three-dimensional reconstruction of the face is then performed according to the depth information and the facial image to obtain a three-dimensional model corresponding to the face; an expression category corresponding to the two-dimensional facial image is then identified; and finally the three-dimensional model is fused with a special effect model corresponding to the expression category to obtain a three-dimensional model after special effect processing. As a result, the user does not need to manually switch between different special effect models, which improves the degree of automation of special effect addition and increases the user's enjoyment and the playability of the process. In addition, the corresponding special effect model is determined according to the expression the user makes, and the special effect model is fused with the three-dimensional model, which enhances the realism of the added special effect so that the processed result looks more natural.
The special effect processing method and device based on a three-dimensional model and the electronic equipment of the embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of the special effect processing method based on a three-dimensional model provided by Embodiment 1 of the present application.
As shown in Fig. 1, the special effect processing method based on a three-dimensional model includes the following steps.
Step 101: obtain a captured two-dimensional facial image and depth information corresponding to the facial image.
In an embodiment of the present application, the electronic device may include a visible-light image sensor, and the two-dimensional facial image may be obtained based on the visible-light image sensor in the electronic device. Specifically, the visible-light image sensor may include a visible-light camera, which can capture visible light reflected by the face for imaging to obtain the two-dimensional facial image.
In an embodiment of the present application, the electronic device may also include a structured-light image sensor, and the depth information corresponding to the facial image may be obtained based on the structured-light image sensor in the electronic device. Optionally, the structured-light image sensor may include a laser lamp and a laser camera. Pulse width modulation (PWM) can drive the laser lamp to emit structured light; the structured light is projected onto the face, and the laser camera captures the structured light reflected by the face for imaging to obtain a structured-light image of the face. A depth engine can then calculate the depth information corresponding to the face, i.e., the depth information corresponding to the two-dimensional facial image, from the structured-light image.
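The patent does not specify how the depth engine converts the structured-light image into depth. A minimal sketch of one common approach, triangulating depth from the disparity between the observed pattern and a reference pattern, might look like the following; the baseline and focal length values are illustrative assumptions, not taken from the patent.

```python
def depth_from_disparity(disparity_px, baseline_mm=50.0, focal_px=1800.0):
    """Convert a structured-light disparity (pixels) to depth (mm).

    Classic triangulation: depth = baseline * focal / disparity.
    baseline_mm (projector-camera distance) and focal_px are hypothetical
    illustrative values, not parameters given in the patent.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_mm * focal_px / disparity_px
```

Running this per pixel over the whole structured-light image would yield the depth map that accompanies the two-dimensional facial image.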
Step 102: perform three-dimensional reconstruction of the face according to the depth information and the facial image to obtain a three-dimensional model corresponding to the face.
In an embodiment of the present application, after the depth information and the facial image are obtained, three-dimensional reconstruction of the face can be performed according to the depth information and the facial image to obtain the three-dimensional model corresponding to the face. In the present application, the three-dimensional model of the face is constructed by performing three-dimensional reconstruction according to the depth information and the facial image, rather than by simply obtaining RGB data and depth data.
As a possible implementation, the depth information and the color information corresponding to the two-dimensional facial image can be fused to obtain the three-dimensional model corresponding to the face. Specifically, based on a face key point detection technique, the key points of the face can be extracted from the depth information and from the color information; the key points extracted from the depth information and those extracted from the color information are then registered and fused; and finally the three-dimensional model corresponding to the face is generated from the fused key points. A key point is a visually salient point on the face, or a point at a key position; for example, a key point can be the corner of an eye, the tip of the nose, or a corner of the mouth.
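The fusion of color-image key points with depth information can be sketched, under the assumption of a standard pinhole back-projection (the intrinsics fx, fy, cx, cy are illustrative, and the key point names are hypothetical):

```python
def fuse_keypoints(color_kps, depth_map, fx=1800.0, fy=1800.0, cx=320.0, cy=240.0):
    """Lift 2D face key points to 3D vertices using the depth map.

    color_kps: dict mapping key point name -> (u, v) pixel coordinates,
    as produced by a 2D key point detector on the color image.
    depth_map: row-major 2D sequence of depth values, indexed [v][u].
    The pinhole intrinsics are illustrative assumptions, not patent values.
    """
    vertices = {}
    for name, (u, v) in color_kps.items():
        z = float(depth_map[int(v)][int(u)])
        # Standard pinhole back-projection of pixel (u, v) at depth z.
        vertices[name] = ((u - cx) * z / fx, (v - cy) * z / fy, z)
    return vertices
```

The resulting named 3D vertices would then be connected into the face's three-dimensional model.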
As another possible implementation, key point identification can be performed on the facial image based on a face key point detection technique to obtain second key points corresponding to the facial image. Then, according to the depth information of the second key points and their positions on the facial image, the relative positions of the corresponding first key points in the three-dimensional model of the face are determined, so that adjacent first key points can be connected according to their relative positions in three-dimensional space to generate local facial three-dimensional skeletons. A local facial region may include facial parts such as the nose, lips, eyes, or cheeks.
After the local facial three-dimensional skeletons are generated, the different local skeletons can be spliced according to the first key points they share, to obtain the three-dimensional model corresponding to the face.
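The splicing of local skeletons by shared key points is described only at a high level. A hypothetical simplification, treating each local skeleton as a dict of named 3D key points so that parts sharing boundary points (e.g. the nose and eye regions both containing the nose bridge) merge into one model, might be:

```python
def splice_skeletons(skeletons):
    """Splice local facial skeletons into one model via shared key points.

    skeletons: list of dicts, each mapping a key point name -> (x, y, z).
    Local parts (nose, lips, eyes, cheeks) share boundary key points;
    splicing here simply unions the vertices, keeping a single copy of
    each shared point. A hypothetical sketch, not the patent's method.
    """
    model = {}
    for part in skeletons:
        for name, pos in part.items():
            model.setdefault(name, pos)  # shared key points merge into one
    return model
```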
Step 103: identify the expression category corresponding to the two-dimensional facial image.
As a possible implementation, the user can prerecord reference expressions corresponding to different expression categories; for example, the user can prerecord reference expressions for categories such as sad, happy, dejected, angry, and pensive. After the two-dimensional facial image is obtained, it can be matched against the reference expressions, and the expression category corresponding to the matching target reference expression is taken as the expression category of the facial image.
As another possible implementation, at least one frame of facial image acquired before the current frame can be obtained, and the expression category can then be determined according to the facial image of the current frame and the at least one earlier frame. For example, from these frames it can be determined whether the eyebrows are raised or lowered, the eyes become larger or smaller, and the mouth corners are raised or lowered, and the expression category can be determined accordingly. For example, when it is determined from the facial image of the current frame and at least one earlier frame that the eyes become smaller and the eye corners and mouth corners are raised, the expression category can be determined to be happy.
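The frame-comparison implementation can be sketched as a rule-based classifier over key point movement. This is a toy illustration of only the "happy" rule given in the text; the key point names are hypothetical, and image y coordinates are assumed to grow downward.

```python
def classify_expression(prev_kps, cur_kps):
    """Toy rule-based expression classifier over key point movement.

    prev_kps / cur_kps map key point names to (x, y) image coordinates.
    Rules follow the example above: eyes narrowing plus raised eye and
    mouth corners -> "happy". Names and the rule set are assumptions.
    """
    eye_open_prev = prev_kps["left_eye_bottom"][1] - prev_kps["left_eye_top"][1]
    eye_open_cur = cur_kps["left_eye_bottom"][1] - cur_kps["left_eye_top"][1]
    eyes_smaller = eye_open_cur < eye_open_prev
    # A point moving up in image coordinates means its y value decreases.
    corners_up = (cur_kps["mouth_corner"][1] < prev_kps["mouth_corner"][1]
                  and cur_kps["eye_corner"][1] < prev_kps["eye_corner"][1])
    if eyes_smaller and corners_up:
        return "happy"
    return "neutral"
```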
Step 104: fuse the three-dimensional model with the special effect model corresponding to the expression category to obtain a three-dimensional model after special effect processing.
In an embodiment of the present application, the correspondence between expression categories and special effect models can be stored in advance. For example, when the expression category is sad, the special effect model can be tears; when the expression category is angry, the special effect model can be flames; and when the expression category is nervous, the special effect model can be cold sweat.
The special effect models can be stored in a material library of an application program of the electronic device, with different special effect models stored in the library. Alternatively, the application program on the electronic device can also download new special effect models from a server in real time, and newly downloaded special effect models can be stored in the material library.
Optionally, after the expression category is determined, the above correspondence can be queried to obtain the special effect model matching the expression category; the three-dimensional model is then fused with that special effect model to obtain the three-dimensional model after special effect processing.
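The prestored correspondence amounts to a lookup table. A minimal sketch, using the example pairings given above (sad → tears, angry → flame, nervous → cold sweat); real entries would reference assets in the application's material library:

```python
# Correspondence between expression categories and special effect models,
# following the examples in the text. The string values stand in for
# material-library assets and are illustrative only.
EFFECT_BY_EXPRESSION = {
    "sad": "tears",
    "angry": "flame",
    "nervous": "cold_sweat",
}

def lookup_effect(expression_category):
    """Return the stored special effect model for an expression, if any."""
    return EFFECT_BY_EXPRESSION.get(expression_category)
```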
As a possible implementation, in order to improve the display effect of the three-dimensional model after special effect processing and enhance its realism, in an embodiment of the present application the angle of the special effect model relative to the three-dimensional model can be adjusted so that the two match in angle; the special effect model is then rendered and mapped onto the three-dimensional model.
Further, after the three-dimensional model after special effect processing is obtained, it can be displayed on the display interface of the electronic device, so that the user can intuitively see the processed three-dimensional model.
With the special effect processing method based on a three-dimensional model of the embodiments of the present application, a captured two-dimensional facial image and the depth information corresponding to the facial image are obtained; three-dimensional reconstruction of the face is then performed according to the depth information and the facial image to obtain a three-dimensional model corresponding to the face; an expression category corresponding to the two-dimensional facial image is then identified; and finally the three-dimensional model is fused with a special effect model corresponding to the expression category to obtain a three-dimensional model after special effect processing. As a result, the user does not need to manually switch between different special effect models, which improves the degree of automation of special effect addition and increases the user's enjoyment and the playability of the process. In addition, the corresponding special effect model is determined according to the expression the user makes, and the special effect model is fused with the three-dimensional model, which enhances the realism of the added special effect so that the processed result looks more natural.
As a possible implementation, in order to improve the efficiency and accuracy of expression category identification, in the present application, after the at least one frame of facial image acquired before the current frame is obtained, the expression category of the current frame is identified only when the difference between the positions of the key points in the at least one earlier frame and the positions of the key points in the facial image of the current frame is greater than a threshold. This process is described in detail below with reference to Fig. 2.
Fig. 2 is a schematic flowchart of the special effect processing method based on a three-dimensional model provided by Embodiment 2 of the present application.
Referring to Fig. 2, on the basis of the embodiment shown in Fig. 1, step 103 can specifically include the following sub-steps.
Step 201: identify the positions of the key points in the facial image of the current frame.
Specifically, the positions of the key points in the facial image of the current frame can be identified based on a key point identification technique.
Step 202: identify the positions of the key points in at least one frame of facial image acquired before the current frame.
In an embodiment of the present application, at least one frame of facial image acquired before the current frame can be obtained, and the positions of the key points in that at least one frame can be determined based on the key point identification technique.
Step 203: judge whether the difference between the positions of the key points in the at least one earlier frame and the positions of the key points in the facial image of the current frame is greater than a threshold; if so, execute step 204; otherwise, execute step 205.
The threshold can be preset in a built-in program of the electronic device, or can be set by the user; no limitation is imposed on this.
Step 204: identify the expression category corresponding to the current frame.
In an embodiment of the present application, when the difference between the positions of the key points in the at least one earlier frame and those in the facial image of the current frame is greater than the threshold, it indicates that the expression the user is making has changed considerably, and the user may wish to add a special effect. The expression category corresponding to the current frame can therefore be further identified, triggering the subsequent special effect addition steps. For the specific identification process, refer to the execution of step 103 in the above embodiment, which is not repeated here.
Step 205: perform no processing.
In an embodiment of the present application, when the difference between the positions of the key points in the at least one earlier frame and those in the facial image of the current frame is not greater than the threshold, it indicates that the expression the user is making has not changed considerably; the user may not wish to add a special effect, and therefore no processing may be performed.
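Steps 203 through 205 can be sketched as a single gating function. The displacement metric (sum of per-key-point Euclidean distances) and the threshold value are illustrative assumptions; the patent only requires that some positional difference be compared against a threshold.

```python
def should_identify_expression(prev_kps, cur_kps, threshold=5.0):
    """Decide whether to run expression identification for the current frame.

    Returns True only when the total key point displacement between an
    earlier frame and the current frame exceeds the threshold, mirroring
    steps 203-205 above. The sum-of-Euclidean-distances metric and the
    default threshold are illustrative assumptions.
    """
    total = 0.0
    for name, (x0, y0) in prev_kps.items():
        x1, y1 = cur_kps[name]
        total += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    return total > threshold
```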
As a possible implementation, referring to Fig. 3, on the basis of the embodiment shown in Fig. 1, step 104 can specifically include the following sub-steps.
Step 301: obtain the corresponding special effect model according to the expression category.
As a possible implementation, the correspondence between expression categories and special effect models can be stored in advance. After the expression category corresponding to the facial image is determined, the correspondence can be queried to obtain the special effect model matching the expression category, which is simple to operate and easy to implement.
Step 302: adjust the angle of the special effect model relative to the three-dimensional model so that the three-dimensional model and the special effect model match in angle.
It should be noted that before the special effect model is mapped onto the three-dimensional model, the angle of the special effect model relative to the three-dimensional model needs to be adjusted so that the two match in angle. For example, when the special effect model is tears, the display effect of the tears differs depending on whether the face in the three-dimensional model faces the screen or is turned to the side; therefore, the rotation angle of the special effect model needs to be adjusted according to the deflection angle of the face so that the three-dimensional model and the special effect model match in angle, improving the subsequent special effect processing result.
As a possible implementation, the applicable angle parameter of each special effect model can be determined in advance, where the angle parameter can be a fixed value or a value range (for example, [-45°, 45°]); no limitation is imposed on this. After the special effect model corresponding to the expression category is determined, its applicable angle parameter can be queried, and the special effect model is then rotated so that the angle between a first line connecting preset target key points in the special effect model and a second line connecting preset reference key points in the three-dimensional model satisfies the angle parameter.
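The angle check between the two key point lines can be sketched as follows. This is a 2D simplification of the step described above (the patent's lines live in 3D); the angle-range values echo the [-45°, 45°] example.

```python
import math

def angle_between(line_a, line_b):
    """Signed angle in degrees between two lines, each given as two (x, y) points.

    line_a stands for the first line through the effect model's preset
    target key points; line_b for the second line through the
    three-dimensional model's preset reference key points. A 2D
    simplification of the 3D relation described in the text.
    """
    (ax0, ay0), (ax1, ay1) = line_a
    (bx0, by0), (bx1, by1) = line_b
    deg = math.degrees(math.atan2(ay1 - ay0, ax1 - ax0)
                       - math.atan2(by1 - by0, bx1 - bx0))
    # Normalize to (-180, 180].
    while deg <= -180:
        deg += 360
    while deg > 180:
        deg -= 360
    return deg

def satisfies_angle_parameter(deg, lo=-45.0, hi=45.0):
    """Check the angle against a range-valued angle parameter."""
    return lo <= deg <= hi
```

In use, the effect model would be rotated iteratively (or by the exact correction `-deg`) until `satisfies_angle_parameter` holds.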
Step 303: according to the special effect model, query the corresponding to-be-mapped key points in the three-dimensional model.
It should be understood that different special effect models correspond to different to-be-mapped key points in the three-dimensional model. For example, when the special effect model is tears, tears generally run down from the key point corresponding to the eye corner to the key point corresponding to the nose wing, and then from the nose wing down to the key point corresponding to the mouth corner; therefore, the key points from the eye corner to the nose wing and from the nose wing to the mouth corner can be taken as the to-be-mapped key points. Alternatively, when the special effect model is cold sweat, the sweat generally runs down from the key point corresponding to the forehead to the key point corresponding to the eyebrow tail, then down to the key point corresponding to the cheek, and then from the cheek down to the key point corresponding to the chin; therefore, the key points from forehead to eyebrow tail, eyebrow tail to cheek, and cheek to chin can be taken as the to-be-mapped key points.
As a possible implementation, the correspondence between different special effect models and to-be-mapped key points can be established in advance; after the special effect model corresponding to the expression category is determined, the correspondence can be queried to obtain the to-be-mapped key points corresponding to that special effect model in the three-dimensional model.
Step 304: in the three-dimensional model, take the region of the to-be-mapped key points corresponding to the special effect model as the to-be-mapped region.
In an embodiment of the present application, different special effect models have different to-be-mapped regions. After the to-be-mapped key points corresponding to the special effect model in the three-dimensional model are determined, the region formed by those key points can be taken as the to-be-mapped region.
Step 305: deform the special effect model according to the to-be-mapped region of the three-dimensional model, so that the deformed special effect model covers the to-be-mapped region.
In an embodiment of the present application, after the to-be-mapped region in the three-dimensional model is determined, the special effect model can be deformed so that the deformed special effect model covers the to-be-mapped region, improving the special effect processing result.
Step 306: render the special effect model and then map it onto the three-dimensional model.
In order to match the special effect model to the three-dimensional model and thereby guarantee the display effect of the three-dimensional model after special effect processing, in the present application the special effect model is rendered before being mapped onto the three-dimensional model.
As a possible implementation, the special effect model can be rendered according to the lighting of the three-dimensional model, so that the lighting of the rendered special effect model matches the three-dimensional model, improving the display effect of the three-dimensional model after special effect processing.
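The lighting-matched rendering is not detailed in the patent. A hypothetical, much-simplified stand-in is to scale the effect texture's color so its mean brightness matches that of the model's to-be-mapped region:

```python
def match_lighting(effect_rgb, model_luminance, effect_luminance):
    """Scale an effect texture's color so its brightness matches the model.

    effect_rgb: an (r, g, b) tuple in [0, 255]; the two luminance values
    are the mean brightness of the model's to-be-mapped region and of the
    effect texture. A hypothetical simplification of rendering the effect
    under the three-dimensional model's lighting, not the patent's method.
    """
    if effect_luminance <= 0:
        return effect_rgb
    gain = model_luminance / effect_luminance
    return tuple(min(255, round(c * gain)) for c in effect_rgb)
```

A full implementation would instead relight the effect model with the scene's estimated light direction and intensity in the rendering pipeline.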
In order to realize the above embodiments, the present application also proposes a special effect processing device based on a three-dimensional model.
Fig. 4 is a schematic structural diagram of the special effect processing device based on a three-dimensional model provided by Embodiment 4 of the present application.
As shown in Fig. 4, the special effect processing device 100 based on a three-dimensional model includes: an obtaining module 110, a reconstruction module 120, an identification module 130, and a fusion module 140.
The obtaining module 110 is configured to obtain a captured two-dimensional facial image and depth information corresponding to the facial image.
The reconstruction module 120 is configured to perform three-dimensional reconstruction of the face according to the depth information and the facial image to obtain a three-dimensional model corresponding to the face.
The identification module 130 is configured to identify an expression category corresponding to the two-dimensional facial image.
The fusion module 140 is configured to fuse the three-dimensional model with a special effect model corresponding to the expression category to obtain a three-dimensional model after special effect processing.
Further, in a possible implementation of the embodiments of the present application, referring to Fig. 5, on the basis of the embodiment shown in Fig. 4, the special effect processing apparatus 100 based on a three-dimensional model may further include the following.
In a possible implementation, the identification module 130 includes the following sub-modules.
A first identification sub-module 131 is configured to identify the position of each key point in the face image of the current frame.
A second identification sub-module 132 is configured to identify, for at least one frame of face image captured before the current frame, the position of each key point in the at least one frame of face image.
A third identification sub-module 133 is configured to identify the expression category corresponding to the current frame if the difference between the position of each key point in the at least one frame of face image and the position of the corresponding key point in the face image of the current frame is greater than a threshold.
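A minimal sketch of this key-point displacement check follows. The use of mean Euclidean distance and the pixel threshold value are illustrative assumptions; the patent only requires that the positional difference between frames exceed a threshold.

```python
import numpy as np

def expression_changed(prev_keypoints, curr_keypoints, threshold=5.0):
    """Return True when face key points moved more than `threshold`
    pixels on average between a previous frame and the current frame."""
    prev = np.asarray(prev_keypoints, dtype=float)
    curr = np.asarray(curr_keypoints, dtype=float)
    displacement = np.linalg.norm(curr - prev, axis=1)  # per-keypoint distance
    return float(displacement.mean()) > threshold

prev_pts = [(100, 120), (140, 120), (120, 160)]   # e.g. eye corners, mouth
still_pts = [(101, 120), (140, 121), (120, 160)]  # almost no motion
smile_pts = [(100, 112), (140, 112), (120, 175)]  # large mouth/eye motion
```

Only when the check fires would the (comparatively expensive) expression classification of the current frame be run.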
In a possible implementation, the fusion module 140 includes the following sub-modules.
An acquisition sub-module 141 is configured to obtain the corresponding special effect model according to the expression category.
An adjusting sub-module 142 is configured to adjust the angle of the special effect model relative to the three-dimensional model, so that the three-dimensional model and the special effect model match in angle.
In a possible implementation, the adjusting sub-module 142 is specifically configured to: query the applicable angle parameter of the special effect model; and rotate the special effect model so that the angle between a first line connecting preset target key points in the special effect model and a second line connecting preset reference key points in the three-dimensional model meets the angle parameter.
A mapping sub-module 143 is configured to map the special effect model onto the three-dimensional model after the special effect model has been rendered.
In a possible implementation, the mapping sub-module 143 is specifically configured to render the special effect model according to the lighting of the three-dimensional model.
A deformation sub-module 144 is configured to deform the special effect model according to the region of the three-dimensional model to be mapped, after the special effect model has been rendered and before it is mapped onto the three-dimensional model, so that the deformed special effect model covers the region to be mapped.
A query sub-module 145 is configured to query, according to the special effect model, the corresponding key points to be mapped in the three-dimensional model, before the special effect model is deformed according to the region to be mapped.
A processing sub-module 146 is configured to take, in the three-dimensional model, the region where the key points to be mapped corresponding to the special effect model are located as the region to be mapped.
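The deformation step can be approximated, under the assumption that a per-axis affine fit is sufficient, by scaling and translating the special effect model's points so that they cover the bounding box spanned by the key points of the region to be mapped:

```python
import numpy as np

def fit_to_region(effect_pts, region_pts):
    """Affinely scale and translate effect-model points so their bounding
    box covers the bounding box of the region to be mapped.
    Assumes the effect model has non-zero extent along each axis."""
    e = np.asarray(effect_pts, float)
    r = np.asarray(region_pts, float)
    e_min, e_max = e.min(axis=0), e.max(axis=0)
    r_min, r_max = r.min(axis=0), r.max(axis=0)
    scale = (r_max - r_min) / (e_max - e_min)   # per-axis stretch
    return (e - e_min) * scale + r_min
```

A real deformation would typically be a smoother, mesh-based warp; the bounding-box fit above only illustrates the "cover the region to be mapped" requirement.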
It should be noted that the foregoing explanation of the embodiments of the special effect processing method based on a three-dimensional model also applies to the special effect processing apparatus 100 based on a three-dimensional model of this embodiment, and details are not repeated here.
The special effect processing apparatus based on a three-dimensional model of the embodiments of the present application acquires a captured two-dimensional face image and the depth information corresponding to the face image, performs three-dimensional reconstruction on the face according to the depth information and the face image to obtain the three-dimensional model corresponding to the face, identifies the expression category corresponding to the two-dimensional face image, and finally fuses the three-dimensional model with the special effect model corresponding to the expression category to obtain the three-dimensional model after special effect processing. As a result, the user does not need to switch between different special effect models manually, which improves the automation of special effect addition and makes the process more enjoyable and playable for the user. In addition, because the corresponding special effect model is determined from the expression the user makes and then fused with the three-dimensional model, the realism of the added special effect is improved and the processed result looks more natural.
To implement the foregoing embodiments, the present application further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the program, the special effect processing method based on a three-dimensional model proposed in the foregoing embodiments of the present application is implemented.
To implement the foregoing embodiments, the present application further provides a computer-readable storage medium on which a computer program is stored. When the program is executed by a processor, the special effect processing method based on a three-dimensional model proposed in the foregoing embodiments of the present application is implemented.
Fig. 6 is a schematic diagram of the internal structure of an electronic device 200 in one embodiment. The electronic device 200 includes a processor 220, a memory 230, a display 240 and an input device 250 connected by a system bus 210. The memory 230 of the electronic device 200 stores an operating system and computer-readable instructions. The computer-readable instructions can be executed by the processor 220 to implement the special effect processing method based on a three-dimensional model of the embodiments of the present application. The processor 220 provides computing and control capability and supports the operation of the entire electronic device 200. The display 240 of the electronic device 200 may be a liquid crystal display, an electronic ink display, or the like; the input device 250 may be a touch layer covering the display 240, a key, trackball or trackpad provided on the housing of the electronic device 200, or an external keyboard, trackpad or mouse. The electronic device 200 may be a mobile phone, a tablet computer, a laptop, a personal digital assistant, or a wearable device (such as a smart band, a smart watch, a smart helmet, or smart glasses).
Those skilled in the art will understand that the structure shown in Fig. 6 is only a schematic diagram of the part of the structure relevant to the solution of the present application and does not limit the electronic device 200 to which the solution of the present application is applied; a specific electronic device 200 may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
To clearly explain the electronic device provided in this embodiment, referring to Fig. 7, an image processing circuit of the embodiments of the present application is provided; the image processing circuit can be implemented with hardware and/or software components.
It should be noted that Fig. 7 is a schematic diagram of the image processing circuit as one possible implementation. For ease of explanation, only the aspects relevant to the embodiments of the present application are shown.
As shown in Fig. 7, the image processing circuit specifically includes an image unit 310, a depth information unit 320 and a processing unit 330. Specifically:
The image unit 310 is configured to output a two-dimensional face image.
The depth information unit 320 is configured to output depth information.
In the embodiments of the present application, the two-dimensional face image can be obtained by the image unit 310, and the depth information corresponding to the face image can be obtained by the depth information unit 320.
The processing unit 330 is electrically connected to the image unit 310 and the depth information unit 320 respectively, and is configured to: perform three-dimensional reconstruction on the face according to the two-dimensional face image obtained by the image unit 310 and the corresponding depth information obtained by the depth information unit 320, to obtain the three-dimensional model corresponding to the face; identify the expression category corresponding to the two-dimensional face image; and fuse the three-dimensional model with the special effect model corresponding to the expression category, to obtain the three-dimensional model after special effect processing.
In the embodiments of the present application, the two-dimensional face image obtained by the image unit 310 can be sent to the processing unit 330, and the depth information corresponding to the face image obtained by the depth information unit 320 can be sent to the processing unit 330. The processing unit 330 can perform three-dimensional reconstruction on the face according to the face image and the depth information to obtain the three-dimensional model corresponding to the face, identify the expression category corresponding to the two-dimensional face image, and fuse the three-dimensional model with the special effect model corresponding to the expression category to obtain the three-dimensional model after special effect processing. For the specific implementation, reference may be made to the explanation of the special effect processing method based on a three-dimensional model in the embodiments of Figs. 1 to 3 above, which is not repeated here.
Further, as a possible implementation of the present application, referring to Fig. 8, on the basis of the embodiment shown in Fig. 7, the image processing circuit may further include the following.
In a possible implementation, the image unit 310 may specifically include an image sensor 311 and an image signal processing (ISP) processor 312 that are electrically connected. Specifically:
The image sensor 311 is configured to output raw image data.
The ISP processor 312 is configured to output the face image according to the raw image data.
In the embodiments of the present application, the raw image data captured by the image sensor 311 is first processed by the ISP processor 312, which analyzes the raw image data to capture image statistics that can be used to determine one or more control parameters of the image sensor 311, including the face image in YUV format or RGB format. The image sensor 311 may include a color filter array (such as a Bayer filter) and corresponding photosensitive units; the image sensor 311 can obtain the light intensity and wavelength information captured by each photosensitive unit and provide a set of raw image data that can be processed by the ISP processor 312. After processing the raw image data, the ISP processor 312 obtains the face image in YUV format or RGB format and sends it to the processing unit 330.
When processing the raw image data, the ISP processor 312 can process the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12 or 14 bits, and the ISP processor 312 can perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations can be performed with the same or different bit depth precision.
In a possible implementation, the depth information unit 320 includes a structured light sensor 321 and a depth map generation chip 322 that are electrically connected. Specifically:
The structured light sensor 321 is configured to generate an infrared speckle pattern.
The depth map generation chip 322 is configured to output the depth information according to the infrared speckle pattern; the depth information includes a depth map.
In the embodiments of the present application, the structured light sensor 321 projects structured light onto the subject, obtains the structured light reflected by the subject, and obtains an infrared speckle pattern by imaging the reflected structured light. The structured light sensor 321 sends the infrared speckle pattern to the depth map generation chip 322, so that the depth map generation chip 322 determines, from the infrared speckle pattern, how the structured light has been deformed, and then determines the depth of the subject accordingly to obtain a depth map (Depth Map), which indicates the depth of each pixel in the infrared speckle pattern. The depth map generation chip 322 sends the depth map to the processing unit 330.
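One common way such a chip recovers depth from the deformation of the speckle pattern is triangulation against a reference pattern: the pixel shift (disparity) of each speckle relative to where it appears at a known reference distance is converted to depth via Z = f·b/d. The sketch below assumes this formulation, with an illustrative focal length and projector-camera baseline; the patent does not specify the on-chip algorithm.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m, eps=1e-6):
    """Convert per-pixel speckle disparity (shift versus a reference
    pattern, in pixels) into depth in metres via Z = f * b / d."""
    d = np.asarray(disparity_px, dtype=float)
    depth = np.where(d > eps, focal_px * baseline_m / np.maximum(d, eps), 0.0)
    return depth  # 0.0 marks pixels with no valid disparity

# Assumed parameters: 500 px focal length, 8 cm baseline.
depth_map = disparity_to_depth([[100.0, 0.0]], focal_px=500.0, baseline_m=0.08)
```

Larger disparity means the surface is closer; pixels where no speckle match is found are left at 0.0 as "no depth".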
In a possible implementation, the processing unit 330 includes a CPU 331 and a GPU (Graphics Processing Unit) 332 that are electrically connected. Specifically:
The CPU 331 is configured to align the face image and the depth map according to calibration data, and to output the three-dimensional model corresponding to the face according to the aligned face image and depth map.
The GPU 332 is configured to identify the expression category corresponding to the two-dimensional face image, and to fuse the three-dimensional model with the special effect model corresponding to the expression category to obtain the three-dimensional model after special effect processing.
In the embodiments of the present application, the CPU 331 obtains the face image from the ISP processor 312 and the depth map from the depth map generation chip 322 and, combining these with previously obtained calibration data, can align the face image with the depth map to determine the depth information corresponding to each pixel in the face image. The CPU 331 then performs three-dimensional reconstruction on the face according to the depth information and the face image, to obtain the three-dimensional model corresponding to the face.
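The alignment of the face image with the depth map can be sketched as a standard depth-to-colour registration, assuming the calibration data consists of the two cameras' intrinsic matrices and the depth-to-colour extrinsics; the patent does not detail the format of the calibration data.

```python
import numpy as np

def register_depth_pixel(u, v, z, K_depth, K_color, R, t):
    """Map a depth-camera pixel (u, v) with depth z (metres) to the
    corresponding colour-image pixel, using calibration data:
    intrinsics K_depth and K_color, depth-to-colour extrinsics (R, t)."""
    p_depth = z * (np.linalg.inv(K_depth) @ np.array([u, v, 1.0]))  # back-project
    p_color = R @ p_depth + t                                       # change frame
    uvw = K_color @ p_color                                         # re-project
    return uvw[:2] / uvw[2]

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
# Sanity case: identical intrinsics and identity extrinsics map a pixel
# to itself, i.e. the two images are already aligned.
uv = register_depth_pixel(100.0, 50.0, 1.0, K, K, np.eye(3), np.zeros(3))
```

Repeating this for every depth pixel yields the per-pixel depth of the face image that the reconstruction step consumes.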
The CPU 331 sends the three-dimensional model corresponding to the face to the GPU 332, so that the GPU 332 executes, according to the three-dimensional model corresponding to the face, the special effect processing method based on a three-dimensional model described in the foregoing embodiments, fusing the three-dimensional model with the special effect model corresponding to the expression category to obtain the three-dimensional model after special effect processing.
Further, the image processing circuit may also include a display unit 340.
The display unit 340 is electrically connected to the GPU 332 and is configured to display the three-dimensional model after special effect processing.
Specifically, the three-dimensional model after special effect processing obtained by the GPU 332 can be displayed by the display 340.
Optionally, the image processing circuit may also include an encoder 350 and a memory 360.
In the embodiments of the present application, the three-dimensional model after special effect processing obtained by the GPU 332 can also be encoded by the encoder 350 and then stored in the memory 360, where the encoder 350 can be implemented by a coprocessor.
In one embodiment, there may be multiple memories 360, or the memory 360 may be divided into multiple storage spaces; the image data processed by the GPU 332 can be stored in a dedicated memory or a dedicated storage space, and may include DMA (Direct Memory Access) features. The memory 360 can be configured to implement one or more frame buffers.
The above process is described in detail below with reference to Fig. 8.
As shown in Fig. 8, the raw image data captured by the image sensor 311 is first processed by the ISP processor 312, which analyzes the raw image data to capture image statistics that can be used to determine one or more control parameters of the image sensor 311, including the face image in YUV format or RGB format, and sends it to the CPU 331.
As shown in Fig. 8, the structured light sensor 321 projects structured light onto the subject, obtains the structured light reflected by the subject, and obtains an infrared speckle pattern by imaging the reflected structured light. The structured light sensor 321 sends the infrared speckle pattern to the depth map generation chip 322, so that the depth map generation chip 322 determines, from the infrared speckle pattern, how the structured light has been deformed, and then determines the depth of the subject accordingly to obtain a depth map (Depth Map). The depth map generation chip 322 sends the depth map to the CPU 331.
The CPU 331 obtains the face image from the ISP processor 312 and the depth map from the depth map generation chip 322 and, combining these with previously obtained calibration data, can align the face image with the depth map to determine the depth information corresponding to each pixel in the face image. The CPU 331 then performs three-dimensional reconstruction on the face according to the depth information and the face image, to obtain the three-dimensional model corresponding to the face.
The CPU 331 sends the three-dimensional model corresponding to the face to the GPU 332, so that the GPU 332 executes, according to the three-dimensional model of the face, the special effect processing method based on a three-dimensional model described in the foregoing embodiments, fusing the three-dimensional model with the special effect model corresponding to the expression category to obtain the three-dimensional model after special effect processing. The three-dimensional model after special effect processing obtained by the GPU 332 can be displayed by the display 340, and/or encoded by the encoder 350 and then stored in the memory 360.
For example, the following are the steps of implementing the control method with the processor 220 in Fig. 6 or with the image processing circuit (specifically the CPU 331 and GPU 332) in Fig. 8:
The CPU 331 acquires a two-dimensional face image and the depth information corresponding to the face image; the CPU 331 performs three-dimensional reconstruction on the face according to the depth information and the face image, to obtain the three-dimensional model corresponding to the face; the GPU 332 identifies the expression category corresponding to the two-dimensional face image; the GPU 332 fuses the three-dimensional model with the special effect model corresponding to the expression category, to obtain the three-dimensional model after special effect processing.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", "some examples" and the like means that a specific feature, structure, material or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine different embodiments or examples described in this specification, and features of different embodiments or examples, provided they do not contradict each other.
In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, for example two or three, unless otherwise specifically defined.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment or portion of code that includes one or more executable instructions for implementing the steps of a custom logic function or process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
Logic and/or steps represented in a flowchart or otherwise described herein, for example an ordered list of executable instructions that may be considered to implement logic functions, may be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, apparatus or device and execute them). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate or transmit a program for use by, or in combination with, an instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) with one or more wirings, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that each part of the present application can be implemented with hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods can be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented with hardware, as in another embodiment, any one of the following technologies well known in the art, or a combination thereof, can be used: a discrete logic circuit with logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those skilled in the art will understand that all or part of the steps carried by the methods of the above embodiments can be completed by instructing the relevant hardware through a program; the program can be stored in a computer-readable storage medium and, when executed, includes one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The above integrated module can be implemented in the form of hardware or in the form of a software function module. If the integrated module is implemented in the form of a software function module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present application have been shown and described above, it can be understood that the above embodiments are exemplary and shall not be understood as limiting the present application; those skilled in the art can change, modify, replace and vary the above embodiments within the scope of the present application.
Claims (10)
1. A special effect processing method based on a three-dimensional model, characterized in that the method comprises:
acquiring a captured two-dimensional face image and depth information corresponding to the face image;
performing three-dimensional reconstruction on a face according to the depth information and the face image, to obtain a three-dimensional model corresponding to the face;
identifying an expression category corresponding to the two-dimensional face image; and
fusing the three-dimensional model with a special effect model corresponding to the expression category, to obtain the three-dimensional model after special effect processing.
2. The special effect processing method according to claim 1, characterized in that the identifying an expression category corresponding to the two-dimensional face image comprises:
identifying the position of each key point in the face image of a current frame;
identifying, for at least one frame of face image captured before the current frame, the position of each key point in the at least one frame of face image; and
identifying the expression category corresponding to the current frame if the difference between the position of each key point in the at least one frame of face image and the position of the corresponding key point in the face image of the current frame is greater than a threshold.
3. The special effect processing method according to claim 1, characterized in that the fusing the three-dimensional model with a special effect model corresponding to the expression category, to obtain the three-dimensional model after special effect processing, comprises:
obtaining the corresponding special effect model according to the expression category;
adjusting an angle of the special effect model relative to the three-dimensional model, so that the three-dimensional model and the special effect model match in angle; and
mapping the special effect model onto the three-dimensional model after rendering the special effect model.
4. The special effect processing method according to claim 3, characterized in that, after the rendering of the special effect model and before the mapping onto the three-dimensional model, the method further comprises:
deforming the special effect model according to a region of the three-dimensional model to be mapped, so that the deformed special effect model covers the region to be mapped.
5. The special effect processing method according to claim 4, characterized in that, before the deforming of the special effect model according to the region of the three-dimensional model to be mapped, the method further comprises:
querying, according to the special effect model, corresponding key points to be mapped in the three-dimensional model; and
taking, in the three-dimensional model, the region where the key points to be mapped corresponding to the special effect model are located as the region to be mapped.
6. The special effect processing method according to claim 3, characterized in that the adjusting an angle of the special effect model relative to the three-dimensional model, so that the three-dimensional model and the special effect model match in angle, comprises:
querying an applicable angle parameter of the special effect model; and
rotating the special effect model so that the angle between a first line connecting preset target key points in the special effect model and a second line connecting preset reference key points in the three-dimensional model meets the angle parameter.
7. The special effect processing method according to claim 3, characterized in that the rendering of the special effect model comprises:
rendering the special effect model according to the lighting of the three-dimensional model.
8. A special effect processing apparatus based on a three-dimensional model, characterized in that the apparatus comprises:
an acquisition module, configured to acquire a captured two-dimensional face image and depth information corresponding to the face image;
a reconstruction module, configured to perform three-dimensional reconstruction on a face according to the depth information and the face image, to obtain a three-dimensional model corresponding to the face;
an identification module, configured to identify an expression category corresponding to the two-dimensional face image; and
a fusion module, configured to fuse the three-dimensional model with a special effect model corresponding to the expression category, to obtain the three-dimensional model after special effect processing.
9. An electronic device, characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein, when the processor executes the program, the special effect processing method based on a three-dimensional model according to any one of claims 1 to 7 is implemented.
10. A computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the special effect processing method based on a three-dimensional model according to any one of claims 1 to 7 is implemented.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810934012.XA CN109147037B (en) | 2018-08-16 | 2018-08-16 | Special effect processing method and device based on three-dimensional model and electronic equipment |
PCT/CN2019/088118 WO2020034698A1 (en) | 2018-08-16 | 2019-05-23 | Three-dimensional model-based special effect processing method and device, and electronic apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810934012.XA CN109147037B (en) | 2018-08-16 | 2018-08-16 | Special effect processing method and device based on three-dimensional model and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109147037A true CN109147037A (en) | 2019-01-04 |
CN109147037B CN109147037B (en) | 2020-09-18 |
Family
ID=64789563
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810934012.XA Active CN109147037B (en) | 2018-08-16 | 2018-08-16 | Special effect processing method and device based on three-dimensional model and electronic equipment |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109147037B (en) |
WO (1) | WO2020034698A1 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110310318A (en) * | 2019-07-03 | 2019-10-08 | 北京字节跳动网络技术有限公司 | A kind of effect processing method and device, storage medium and terminal |
WO2020034698A1 (en) * | 2018-08-16 | 2020-02-20 | Oppo广东移动通信有限公司 | Three-dimensional model-based special effect processing method and device, and electronic apparatus |
CN111639613A (en) * | 2020-06-04 | 2020-09-08 | 上海商汤智能科技有限公司 | Augmented reality AR special effect generation method and device and electronic equipment |
CN112004020A (en) * | 2020-08-19 | 2020-11-27 | 北京达佳互联信息技术有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN113538696A (en) * | 2021-07-20 | 2021-10-22 | 广州博冠信息科技有限公司 | Special effect generation method and device, storage medium and electronic equipment |
WO2023142650A1 (en) * | 2022-01-30 | 2023-08-03 | 上海商汤智能科技有限公司 | Special effect rendering |
WO2023179346A1 (en) * | 2022-03-25 | 2023-09-28 | 北京字跳网络技术有限公司 | Special effect image processing method and apparatus, electronic device, and storage medium |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0930585A1 (en) * | 1998-01-14 | 1999-07-21 | Canon Kabushiki Kaisha | Image processing apparatus |
US6088040A (en) * | 1996-09-17 | 2000-07-11 | Atr Human Information Processing Research Laboratories | Method and apparatus of facial image conversion by interpolation/extrapolation for plurality of facial expression components representing facial image |
CN101021952A (en) * | 2007-03-23 | 2007-08-22 | 北京中星微电子有限公司 | Method and apparatus for realizing three-dimensional video special efficiency |
CN101452582A (en) * | 2008-12-18 | 2009-06-10 | 北京中星微电子有限公司 | Method and device for implementing three-dimensional video specific action |
CN102054291A (en) * | 2009-11-04 | 2011-05-11 | 厦门市美亚柏科信息股份有限公司 | Method and device for reconstructing three-dimensional face based on single face image |
US20140088750A1 (en) * | 2012-09-21 | 2014-03-27 | Kloneworld Pte. Ltd. | Systems, methods and processes for mass and efficient production, distribution and/or customization of one or more articles |
US20140362091A1 (en) * | 2013-06-07 | 2014-12-11 | Ecole Polytechnique Federale De Lausanne | Online modeling for real-time facial animation |
CN104346824A (en) * | 2013-08-09 | 2015-02-11 | 汉王科技股份有限公司 | Method and device for automatically synthesizing three-dimensional expression based on single facial image |
CN104732203A (en) * | 2015-03-05 | 2015-06-24 | 中国科学院软件研究所 | Emotion recognizing and tracking method based on video information |
CN104978764A (en) * | 2014-04-10 | 2015-10-14 | 华为技术有限公司 | Three-dimensional face mesh model processing method and three-dimensional face mesh model processing equipment |
CN105118082A (en) * | 2015-07-30 | 2015-12-02 | 科大讯飞股份有限公司 | Personalized video generation method and system |
CN106920274A (en) * | 2017-01-20 | 2017-07-04 | 南京开为网络科技有限公司 | Face modeling method for rapidly converting mobile-terminal 2D key points into a 3D blend-shape model |
CN107452034A (en) * | 2017-07-31 | 2017-12-08 | 广东欧珀移动通信有限公司 | Image processing method and its device |
CN108062791A (en) * | 2018-01-12 | 2018-05-22 | 北京奇虎科技有限公司 | Method and apparatus for reconstructing a three-dimensional face model |
CN108154550A (en) * | 2017-11-29 | 2018-06-12 | 深圳奥比中光科技有限公司 | Real-time three-dimensional face reconstruction method based on RGBD cameras |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109147037B (en) * | 2018-08-16 | 2020-09-18 | Oppo广东移动通信有限公司 | Special effect processing method and device based on three-dimensional model and electronic equipment |
- 2018
  - 2018-08-16 CN CN201810934012.XA patent/CN109147037B/en active Active
- 2019
  - 2019-05-23 WO PCT/CN2019/088118 patent/WO2020034698A1/en active Application Filing
Non-Patent Citations (1)
Title |
---|
Ma Qian (马倩), "Research on Three-Dimensional Face Reconstruction Based on a Single Photograph", China Master's Theses Full-text Database, Information Science and Technology series * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020034698A1 (en) * | 2018-08-16 | 2020-02-20 | Oppo广东移动通信有限公司 | Three-dimensional model-based special effect processing method and device, and electronic apparatus |
CN110310318A (en) * | 2019-07-03 | 2019-10-08 | 北京字节跳动网络技术有限公司 | A kind of effect processing method and device, storage medium and terminal |
CN110310318B (en) * | 2019-07-03 | 2022-10-04 | 北京字节跳动网络技术有限公司 | Special effect processing method and device, storage medium and terminal |
CN111639613A (en) * | 2020-06-04 | 2020-09-08 | 上海商汤智能科技有限公司 | Augmented reality AR special effect generation method and device and electronic equipment |
CN111639613B (en) * | 2020-06-04 | 2024-04-16 | 上海商汤智能科技有限公司 | Augmented reality AR special effect generation method and device and electronic equipment |
CN112004020A (en) * | 2020-08-19 | 2020-11-27 | 北京达佳互联信息技术有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN112004020B (en) * | 2020-08-19 | 2022-08-12 | 北京达佳互联信息技术有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN113538696A (en) * | 2021-07-20 | 2021-10-22 | 广州博冠信息科技有限公司 | Special effect generation method and device, storage medium and electronic equipment |
WO2023142650A1 (en) * | 2022-01-30 | 2023-08-03 | 上海商汤智能科技有限公司 | Special effect rendering |
WO2023179346A1 (en) * | 2022-03-25 | 2023-09-28 | 北京字跳网络技术有限公司 | Special effect image processing method and apparatus, electronic device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109147037B (en) | 2020-09-18 |
WO2020034698A1 (en) | 2020-02-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109147037A (en) | Special effect processing method and apparatus based on three-dimensional model, and electronic device | |
CN108764180A (en) | Face recognition method and apparatus, electronic device and readable storage medium | |
EP3096208B1 (en) | Image processing for head mounted display devices | |
CN108765273A (en) | Virtual face-lifting method and apparatus for face photographing | |
CN109147024A (en) | Expression replacement method and apparatus based on three-dimensional model | |
CN105404392B (en) | Virtual wearing method and system based on monocular camera | |
CN109118569A (en) | Rendering method and apparatus based on three-dimensional model | |
CN108876709A (en) | Face beautification method and apparatus, electronic device and readable storage medium | |
TWI421781B (en) | Make-up simulation system, make-up simulation method, and make-up simulation program | |
CN108550185A (en) | Face beautification processing method and apparatus | |
CN108447017A (en) | Virtual face-lifting method and apparatus | |
US20100079491A1 (en) | Image compositing apparatus and method of controlling same | |
CN107563304A (en) | Terminal device unlocking method and apparatus, and terminal device | |
CN107481304A (en) | Method and apparatus for building a virtual avatar in a game scene | |
KR20090098798A (en) | Method and device for the virtual simulation of a sequence of video images | |
CN109191584A (en) | Three-dimensional model processing method and apparatus, electronic device and readable storage medium | |
CN109191393A (en) | Face beautification method based on three-dimensional model | |
CN109584358A (en) | Three-dimensional face reconstruction method and apparatus, device and storage medium | |
CN107551549 (en) | Video game image adjustment method and apparatus | |
CN107656611 (en) | Motion-sensing game implementation method and apparatus, and terminal device | |
CN109272579A (en) | Makeup method and apparatus based on three-dimensional model, electronic device and storage medium | |
CN109191552 (en) | Three-dimensional model processing method and apparatus, electronic device and storage medium | |
CN109285214 (en) | Three-dimensional model processing method and apparatus, electronic device and readable storage medium | |
CN109242760 (en) | Face image processing method and apparatus, and electronic device | |
CN107469355 (en) | Game image creation method and apparatus, and terminal device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||