CN109857311A - Method, apparatus, terminal and storage medium for generating a three-dimensional face model - Google Patents
Method, apparatus, terminal and storage medium for generating a three-dimensional face model
- Publication number
- CN109857311A, CN201910113530.XA
- Authority
- CN
- China
- Prior art keywords
- face
- hair style
- active user
- dimensional model
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The disclosure relates to a method, apparatus, terminal and storage medium for generating a three-dimensional face model, belonging to the field of Internet technology. The method comprises: displaying a three-dimensional model creation interface when a generation instruction for a three-dimensional face model is detected; obtaining the hairstyle feature of the current user based on the creation interface, and obtaining the facial features of the current user based on the creation interface; and generating a first three-dimensional face model of the current user based on the hairstyle feature and the facial features. Because the virtual face image is determined from the current user's hairstyle feature and facial features, the virtual face image more closely matches the user's appearance and is unique to the user.
Description
Technical field
This disclosure relates to the field of Internet technology, and in particular to a method, apparatus, terminal and storage medium for generating a three-dimensional face model.
Background technique
With the development of Internet technology, users can share their lives through various kinds of software. However, a user may sometimes be unwilling to appear with his or her true appearance, for example because of not wearing makeup. In that case, the user typically chooses a virtual face model to cover the user's head in the shared picture, so that the user appears in the picture with a virtual face image.
In the related art, software provides pre-designed virtual face images, and the user can select one of them to cover the user's head in the shared picture.
In the related art described above, the user can only choose among the pre-designed virtual face images, so the virtual face image cannot match the user's appearance, and multiple people may end up using the same virtual face image. As a result, the current user cannot be distinguished by the virtual face image.
Summary of the invention
The disclosure provides a method, apparatus, terminal and storage medium for generating a three-dimensional face model, which can overcome the problem that a virtual face image cannot match the user's appearance, that multiple people may use the same virtual face image, and that the current user therefore cannot be distinguished by the virtual face image.
According to a first aspect of the embodiments of the disclosure, a method for generating a three-dimensional face model is provided. The method includes:
displaying a three-dimensional model creation interface when a generation instruction for a three-dimensional face model is detected;
obtaining the hairstyle feature of the current user based on the three-dimensional model creation interface, and obtaining the facial features of the current user based on the three-dimensional model creation interface;
generating a first three-dimensional face model of the current user based on the hairstyle feature and the facial features.
In one possible implementation, the three-dimensional model creation interface includes multiple hairstyle elements and, for each hairstyle element, multiple corresponding hairstyle element features.
Obtaining the hairstyle feature of the current user based on the three-dimensional model creation interface includes:
obtaining at least one selected hairstyle element;
for each selected hairstyle element, displaying the multiple hairstyle element features corresponding to that hairstyle element, and obtaining the selected hairstyle element feature;
composing the hairstyle feature from the at least one selected hairstyle element and the at least one selected hairstyle element feature.
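The selection-based composition described above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the element names ("length", "fringe", "color") and their feature values are invented for the example.

```python
# Illustrative catalog of hairstyle elements and their candidate features.
HAIRSTYLE_ELEMENTS = {
    "length": ["short", "medium", "long"],
    "fringe": ["none", "straight", "side-swept"],
    "color":  ["black", "brown", "blond"],
}

def compose_hairstyle_feature(selections):
    """Validate each (element, feature) pair chosen in the interface and
    compose the hairstyle feature, mirroring the 'compose' step above."""
    feature = {}
    for element, chosen in selections.items():
        options = HAIRSTYLE_ELEMENTS.get(element)
        if options is None:
            raise ValueError(f"unknown hairstyle element: {element}")
        if chosen not in options:
            raise ValueError(f"unknown feature {chosen!r} for element {element}")
        feature[element] = chosen
    return feature

hairstyle = compose_hairstyle_feature({"length": "short", "fringe": "none"})
print(hairstyle)
```

The user need not select every element; the composed feature contains only the elements actually chosen.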
In another possible implementation, the three-dimensional model creation interface includes a first shooting button.
Obtaining the hairstyle feature of the current user based on the three-dimensional model creation interface includes:
when the first shooting button is detected to be triggered, photographing the head of the current user through a camera to obtain a head image;
recognizing the hairstyle feature of the current user from the head image.
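The shoot-and-recognize flow can be sketched as below. The patent does not specify a capture API or a recognition algorithm, so both are modelled here as injectable callables (in practice the recognizer would be a trained hairstyle classifier).

```python
def acquire_hairstyle_feature(capture_head_image, recognize_hairstyle):
    """Run the two-step flow: capture a head image when the first shooting
    button fires, then recognize the hairstyle feature from that image."""
    head_image = capture_head_image()        # triggered by the first shooting button
    return recognize_hairstyle(head_image)   # e.g. a trained classifier in practice

# Stubs standing in for the camera and the recognizer:
fake_capture = lambda: b"head_image_bytes"
fake_recognize = lambda img: {"length": "long", "fringe": "straight"}

print(acquire_hairstyle_feature(fake_capture, fake_recognize))
```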
In another possible implementation, the three-dimensional model creation interface includes multiple facial elements and, for each facial element, multiple corresponding facial element features.
Obtaining the facial features of the current user based on the three-dimensional model creation interface includes:
obtaining at least one selected facial element;
for each selected facial element, displaying the multiple facial element features corresponding to that facial element, and obtaining the selected facial element feature;
composing the facial features from the at least one selected facial element and the at least one selected facial element feature.
In another possible implementation, the three-dimensional model creation interface includes a second shooting button.
Obtaining the facial features of the current user based on the three-dimensional model creation interface includes:
when the second shooting button is detected to be triggered, photographing the face of the current user through a camera to obtain a first face image;
recognizing the facial features of the current user from the first face image.
In another possible implementation, generating the first three-dimensional face model of the current user based on the hairstyle feature and the facial features includes:
obtaining the identity characteristic of the current user based on the three-dimensional model creation interface;
determining, according to the identity characteristic of the current user, a standard three-dimensional model corresponding to the identity characteristic;
mapping the hairstyle feature and the facial features onto the standard three-dimensional model to obtain the first three-dimensional face model of the current user.
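A minimal sketch of this lookup-and-map step, under assumptions the patent does not state: standard models are kept in a registry keyed by identity characteristic, and "mapping" is reduced to attaching the features to a copy of the base model (a real implementation would deform mesh geometry). All names here are illustrative.

```python
# Hypothetical registry of standard models keyed by identity characteristic.
STANDARD_MODELS = {
    ("female", "adult"): {"base_mesh": "female_adult.obj"},
    ("male", "adult"):   {"base_mesh": "male_adult.obj"},
}

def build_face_model(identity, hairstyle_feature, facial_features):
    """Select the standard model for the identity, then map the hairstyle
    and facial features onto a copy of it."""
    base = STANDARD_MODELS[identity]       # determine the standard model
    model = dict(base)                     # copy so the registry stays intact
    model["hairstyle"] = hairstyle_feature # map the hairstyle feature
    model["face"] = facial_features        # map the facial features
    return model

m = build_face_model(("male", "adult"), {"length": "short"}, {"eyes": "round"})
print(m["base_mesh"])
```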
In another possible implementation, obtaining the identity characteristic of the current user based on the three-dimensional model creation interface includes:
the three-dimensional model creation interface includes multiple identity elements and, for each identity element, a corresponding identity element feature; obtaining at least one selected identity element; for each selected identity element, displaying the multiple identity element features corresponding to that identity element and obtaining the selected identity element feature; and composing the identity characteristic from the at least one selected identity element and the at least one selected identity element feature; or,
the three-dimensional model creation interface includes a third shooting button; when the third shooting button is detected to be triggered, photographing the face of the current user through a camera to obtain a second face image, and recognizing the identity characteristic of the current user from the second face image.
In another possible implementation, mapping the hairstyle feature and the facial features onto the standard three-dimensional model to obtain the first three-dimensional face model of the current user includes:
mapping the hairstyle feature and the facial features onto the standard three-dimensional model to obtain a second three-dimensional face model of the current user;
setting the hairstyle feature and the facial features in the second three-dimensional face model to an editable state;
in the editable state, when a selected first feature is detected, obtaining a second feature resulting from modification of the first feature;
replacing the first feature in the second three-dimensional face model with the second feature to obtain the first three-dimensional face model of the current user.
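The edit step can be sketched as follows, assuming a simple dict representation of the second model; the feature names and grouping are illustrative, not from the patent.

```python
def edit_model(second_model, selected_feature, modified_value):
    """Replace the selected first feature with its modified second value,
    yielding the first face model without mutating the second one."""
    model = {k: dict(v) if isinstance(v, dict) else v
             for k, v in second_model.items()}   # shallow-copy nested feature groups
    group, name = selected_feature               # e.g. ("face", "nose")
    if name not in model.get(group, {}):
        raise KeyError(f"{selected_feature} is not an editable feature")
    model[group][name] = modified_value          # first feature -> second feature
    return model

second = {"base_mesh": "m.obj",
          "face": {"nose": "flat"},
          "hairstyle": {"length": "short"}}
first = edit_model(second, ("face", "nose"), "pointed")
print(first["face"]["nose"])
```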
In another possible implementation, after generating the first three-dimensional face model of the current user based on the hairstyle feature and the facial features, the method further includes:
obtaining a third face image of the current user while the current user is shooting a video;
recognizing facial expression parameters of the current user from the third face image based on a facial expression recognition model;
driving, based on the facial expression parameters, the first three-dimensional face model to show a virtual expression corresponding to the third face image.
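A sketch of the per-frame driving loop: an expression-recognition model (stubbed here, since the patent names no specific model) yields expression parameters for each video frame, and those parameters drive the face model. The parameter names are illustrative.

```python
def drive_model(face_model, frames, recognize_expression):
    """For each frame, recognize expression parameters and apply them to the
    face model so it shows the corresponding virtual expression."""
    shown = []
    for frame in frames:
        params = recognize_expression(frame)   # e.g. {"mouth_open": 0.8}
        face_model["expression"] = params      # model mirrors the user's expression
        shown.append(dict(params))
    return shown

# Stub recognizer standing in for the facial expression recognition model:
stub = lambda frame: {"mouth_open": 1.0 if frame == "f2" else 0.0}
model = {"base_mesh": "m.obj"}
history = drive_model(model, ["f1", "f2"], stub)
```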
In another possible implementation, the method further includes:
collecting a voice signal of the current user;
driving the first three-dimensional face model to play the voice signal.
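A minimal sketch of the collect-then-play flow: the captured voice signal is queued and played back through the model, so the avatar appears to speak. The class and its queue are illustrative; in practice playback would also drive lip-sync on the model.

```python
class FaceModelPlayer:
    """Hypothetical helper that collects voice signals and plays them
    back through the three-dimensional face model."""
    def __init__(self):
        self.queue = []

    def collect(self, voice_signal):
        # Step 1: collect the current user's voice signal.
        self.queue.append(voice_signal)

    def play(self):
        # Step 2: drive the face model to play the next queued signal.
        return self.queue.pop(0) if self.queue else None

player = FaceModelPlayer()
player.collect("hello.wav")
```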
According to a second aspect of the embodiments of the disclosure, an apparatus for generating a three-dimensional face model is provided. The apparatus includes:
a display module, configured to display a three-dimensional model creation interface when a generation instruction for a three-dimensional face model is detected;
a first obtaining module, configured to obtain the hairstyle feature of the current user based on the three-dimensional model creation interface, and to obtain the facial features of the current user based on the three-dimensional model creation interface;
a generation module, configured to generate a first three-dimensional face model of the current user based on the hairstyle feature and the facial features.
In one possible implementation, the three-dimensional model creation interface includes multiple hairstyle elements and, for each hairstyle element, multiple corresponding hairstyle element features. The first obtaining module is further configured to obtain at least one selected hairstyle element; for each selected hairstyle element, display the multiple hairstyle element features corresponding to that hairstyle element and obtain the selected hairstyle element feature; and compose the hairstyle feature from the at least one selected hairstyle element and the at least one selected hairstyle element feature.
In another possible implementation, the three-dimensional model creation interface includes a first shooting button. The first obtaining module is further configured to, when the first shooting button is detected to be triggered, photograph the head of the current user through a camera to obtain a head image, and recognize the hairstyle feature of the current user from the head image.
In another possible implementation, the three-dimensional model creation interface includes multiple facial elements and, for each facial element, multiple corresponding facial element features. The first obtaining module is further configured to obtain at least one selected facial element; for each selected facial element, display the multiple facial element features corresponding to that facial element and obtain the selected facial element feature; and compose the facial features from the at least one selected facial element and the at least one selected facial element feature.
In another possible implementation, the three-dimensional model creation interface includes a second shooting button. The first obtaining module is further configured to, when the second shooting button is detected to be triggered, photograph the face of the current user through a camera to obtain a first face image, and recognize the facial features of the current user from the first face image.
In another possible implementation, the generation module is further configured to obtain the identity characteristic of the current user based on the three-dimensional model creation interface; determine, according to the identity characteristic of the current user, a standard three-dimensional model corresponding to the identity characteristic; and map the hairstyle feature and the facial features onto the standard three-dimensional model to obtain the first three-dimensional face model of the current user.
In another possible implementation, the three-dimensional model creation interface includes multiple identity elements and, for each identity element, a corresponding identity element feature, and the generation module is further configured to obtain at least one selected identity element; for each selected identity element, display the multiple identity element features corresponding to that identity element and obtain the selected identity element feature; and compose the identity characteristic from the at least one selected identity element and the at least one selected identity element feature. Alternatively, the three-dimensional model creation interface includes a third shooting button, and the generation module is further configured to, when the third shooting button is detected to be triggered, photograph the face of the current user through a camera to obtain a second face image, and recognize the identity characteristic of the current user from the second face image.
In another possible implementation, the generation module is further configured to map the hairstyle feature and the facial features onto the standard three-dimensional model to obtain a second three-dimensional face model of the current user; set the hairstyle feature and the facial features in the second three-dimensional face model to an editable state; in the editable state, when a selected first feature is detected, obtain a second feature resulting from modification of the first feature; and replace the first feature in the second three-dimensional face model with the second feature to obtain the first three-dimensional face model of the current user.
In another possible implementation, the apparatus further includes:
a second obtaining module, configured to obtain a third face image of the current user while the current user is shooting a video;
an identification module, configured to recognize facial expression parameters of the current user from the third face image based on a facial expression recognition model;
a first drive module, configured to drive, based on the facial expression parameters, the first three-dimensional face model to show a virtual expression corresponding to the third face image.
In another possible implementation, the apparatus further includes:
an acquisition module, configured to collect a voice signal of the current user;
a second drive module, configured to drive the first three-dimensional face model to play the voice signal.
According to a third aspect of the embodiments of the disclosure, a terminal is provided. The terminal includes:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
display a three-dimensional model creation interface when a generation instruction for a three-dimensional face model is detected;
obtain the hairstyle feature of the current user based on the three-dimensional model creation interface, and obtain the facial features of the current user based on the three-dimensional model creation interface;
generate a first three-dimensional face model of the current user based on the hairstyle feature and the facial features.
According to a fourth aspect of the embodiments of the disclosure, a non-transitory computer-readable storage medium is provided. When instructions in the storage medium are executed by a processor of a terminal, the terminal is enabled to perform a method for generating a three-dimensional face model, the method including:
displaying a three-dimensional model creation interface when a generation instruction for a three-dimensional face model is detected;
obtaining the hairstyle feature of the current user based on the three-dimensional model creation interface, and obtaining the facial features of the current user based on the three-dimensional model creation interface;
generating a first three-dimensional face model of the current user based on the hairstyle feature and the facial features.
According to a fifth aspect of the embodiments of the disclosure, an application program is provided. When instructions in the application program are executed by a processor of a computer device, the computer device is enabled to perform a method for generating a three-dimensional face model, the method including:
displaying a three-dimensional model creation interface when a generation instruction for a three-dimensional face model is detected;
obtaining the hairstyle feature of the current user based on the three-dimensional model creation interface, and obtaining the facial features of the current user based on the three-dimensional model creation interface;
generating a first three-dimensional face model of the current user based on the hairstyle feature and the facial features.
The technical solutions provided by the embodiments of the disclosure can include the following beneficial effects:
In the embodiments of the disclosure, the hairstyle feature and the facial features of the current user are determined through the three-dimensional model creation interface, and the first three-dimensional face model of the current user is generated from the hairstyle feature and the facial features. Because the virtual face image is determined from the current user's own hairstyle feature and facial features, it more closely matches the user's appearance and is unique to the user.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the disclosure.
Brief description of the drawings
The drawings herein are incorporated into and form part of this specification, illustrate embodiments consistent with the disclosure, and together with the specification serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of a method for generating a three-dimensional face model according to an exemplary embodiment.
Fig. 2 is a flowchart of a method for generating a three-dimensional face model according to an exemplary embodiment.
Fig. 3 is a schematic diagram of an application interface of a three-dimensional face model according to an exemplary embodiment.
Fig. 4 is a flowchart of a method for generating a three-dimensional face model according to an exemplary embodiment.
Fig. 5 is a schematic diagram of a three-dimensional model creation interface according to an exemplary embodiment.
Fig. 6 is a schematic diagram of a three-dimensional model creation interface according to an exemplary embodiment.
Fig. 7 is a flowchart of a method for generating a three-dimensional face model according to an exemplary embodiment.
Fig. 8 is a block diagram of an apparatus for generating a three-dimensional face model according to an exemplary embodiment.
Fig. 9 is a block diagram of a terminal for generating a three-dimensional face model according to an exemplary embodiment.
Detailed description of the embodiments
Exemplary embodiments are described in detail here, with examples illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the disclosure; on the contrary, they are merely examples of devices and methods consistent with some aspects of the disclosure as detailed in the appended claims.
Fig. 1 is a flowchart of a method for generating a three-dimensional face model according to an exemplary embodiment. As shown in Fig. 1, the method includes the following steps:
In step S101, when a generation instruction for a three-dimensional face model is detected, a three-dimensional model creation interface is displayed.
In step S102, the hairstyle feature of the current user is obtained based on the three-dimensional model creation interface, and the facial features of the current user are obtained based on the three-dimensional model creation interface.
In step S103, a first three-dimensional face model of the current user is generated based on the hairstyle feature and the facial features.
In one possible implementation, the three-dimensional model creation interface includes multiple hairstyle elements and, for each hairstyle element, multiple corresponding hairstyle element features. Obtaining the hairstyle feature of the current user based on the creation interface includes: obtaining at least one selected hairstyle element; for each selected hairstyle element, displaying the multiple hairstyle element features corresponding to that hairstyle element and obtaining the selected hairstyle element feature; and composing the hairstyle feature from the at least one selected hairstyle element and the at least one selected hairstyle element feature.
In another possible implementation, the creation interface includes a first shooting button. Obtaining the hairstyle feature of the current user based on the creation interface includes: when the first shooting button is detected to be triggered, photographing the head of the current user through a camera to obtain a head image; and recognizing the hairstyle feature of the current user from the head image.
In another possible implementation, the three-dimensional model creation interface includes multiple facial elements and, for each facial element, multiple corresponding facial element features. Obtaining the facial features of the current user based on the creation interface includes: obtaining at least one selected facial element; for each selected facial element, displaying the multiple facial element features corresponding to that facial element and obtaining the selected facial element feature; and composing the facial features from the at least one selected facial element and the at least one selected facial element feature.
In another possible implementation, the creation interface includes a second shooting button. Obtaining the facial features of the current user based on the creation interface includes: when the second shooting button is detected to be triggered, photographing the face of the current user through a camera to obtain a first face image; and recognizing the facial features of the current user from the first face image.
In another possible implementation, generating the first three-dimensional face model of the current user based on the hairstyle feature and the facial features includes: obtaining the identity characteristic of the current user based on the creation interface; determining, according to the identity characteristic of the current user, a standard three-dimensional model corresponding to the identity characteristic; and mapping the hairstyle feature and the facial features onto the standard three-dimensional model to obtain the first three-dimensional face model of the current user.
In another possible implementation, obtaining the identity characteristic of the current user based on the creation interface includes: the creation interface includes multiple identity elements and, for each identity element, a corresponding identity element feature; at least one selected identity element is obtained; for each selected identity element, the multiple identity element features corresponding to that identity element are displayed and the selected identity element feature is obtained; and the identity characteristic is composed from the at least one selected identity element and the at least one selected identity element feature. Alternatively, the creation interface includes a third shooting button; when the third shooting button is detected to be triggered, the face of the current user is photographed through a camera to obtain a second face image, and the identity characteristic of the current user is recognized from the second face image.
In another possible implementation, mapping the hairstyle feature and the facial features onto the standard three-dimensional model to obtain the first three-dimensional face model of the current user includes: mapping the hairstyle feature and the facial features onto the standard three-dimensional model to obtain a second three-dimensional face model of the current user; setting the hairstyle feature and the facial features in the second three-dimensional face model to an editable state; in the editable state, when a selected first feature is detected, obtaining a second feature resulting from modification of the first feature; and replacing the first feature in the second three-dimensional face model with the second feature to obtain the first three-dimensional face model of the current user.
In another possible implementation, after generating the first three-dimensional face model of the current user based on the hairstyle feature and the facial features, the method further includes: obtaining a third face image of the current user while the current user is shooting a video; recognizing facial expression parameters of the current user from the third face image based on a facial expression recognition model; and driving, based on the facial expression parameters, the first three-dimensional face model to show a virtual expression corresponding to the third face image.
In another possible implementation, the method further includes: collecting a voice signal of the current user; and driving the first three-dimensional face model to play the voice signal.
In the embodiments of the disclosure, the hairstyle feature and the facial features of the current user are determined through the three-dimensional model creation interface, and the first three-dimensional face model of the current user is generated from the hairstyle feature and the facial features. Because the virtual face image is determined from the current user's own hairstyle feature and facial features, it more closely matches the user's appearance and is unique to the user.
Fig. 2 is a flowchart of a method for generating a three-dimensional face model according to an exemplary embodiment. In the embodiments of the disclosure, the description takes as an example the terminal recognizing the user's hairstyle feature and facial features from images. As shown in Fig. 2, the method includes the following steps:
In step S201, when a generation instruction for a three-dimensional face model is detected, the terminal displays a three-dimensional model creation interface.
Application software is installed in the terminal, and the terminal can detect a generation instruction for a three-dimensional face model in the application software; when the generation instruction is detected, the terminal displays the three-dimensional model creation interface. The creation interface includes a first shooting button and a second shooting button. The first shooting button triggers hairstyle feature recognition, and the second shooting button triggers facial feature recognition. The first shooting button and the second shooting button may be the same button or different buttons.
In one possible implementation, a creation button for the three-dimensional face model is displayed in a target interface of the application software. When the terminal detects that the creation button is triggered, the terminal determines that the generation instruction for the three-dimensional face model has been detected and, based on the generation instruction, displays the three-dimensional model creation interface on the display interface of the terminal. The target interface may be the main interface of the application software or its shooting page.
In another possible implementation, when the terminal detects a face in the shooting picture, the terminal prompts the user whether to enable virtual face mode. When the terminal detects that the user chooses to enable virtual face mode, it determines that the generation instruction for the three-dimensional face model has been detected and, based on the generation instruction, displays the three-dimensional model creation interface on the display interface of the terminal.
The application software may be a live-streaming application; accordingly, the terminal may generate the three-dimensional face model before the user starts live streaming and use it directly during the stream, or may generate it during the stream. The application software may also be any software capable of sharing pictures or videos; accordingly, the terminal may generate the three-dimensional face model before shooting a picture or video and use it directly while shooting, or may generate it during shooting. The application software may also be any application that supports logging in and setting a profile picture; accordingly, when setting the profile picture, the user may choose to take a photo, trigger the creation button of the three-dimensional face model in the shooting picture, and use the created three-dimensional face model as the profile picture, and may then appear directly in the picture with that model during subsequent live streaming or shooting.
It should be noted that, before determining that a generation instruction for the three-dimensional face model has been detected, the terminal determines whether a three-dimensional face model of the user has already been generated. When no model has been generated, the terminal performs the step of determining that the generation instruction has been detected. When a model has already been generated, the terminal may directly retrieve the generated three-dimensional face model, or prompt the user whether to regenerate it; upon receiving a confirmation instruction, the terminal performs the step of determining that the generation instruction has been detected.
In step S202, when the terminal detects that a first shooting button in the three-dimensional model creation interface is triggered, the terminal shoots the head of the current user through a camera to obtain a head image.

When the terminal detects that the first shooting button is triggered, the terminal scans the user's head to obtain the head image of the user. The terminal may be any device with a shooting function. The camera may be the terminal's front camera or rear camera, or any camera connected to the terminal. The head image includes at least one image containing the user's hairstyle features; it may be a front view, a back view, or a side view of the user's head.
In another possible implementation, the terminal may also select the head image from a local image library. Correspondingly, step S202 may be replaced with: when the terminal detects that the first shooting button is triggered, the terminal opens the camera and displays a first shooting interface that includes a gallery button; when the terminal detects that the gallery button is triggered, it displays the local image library, from which the current user can select at least one first local image that contains at least the head features of the current user; the terminal obtains the first local image selected by the current user and obtains the head image of the current user from it.
In step S203, the terminal identifies the hairstyle features of the current user from the head image.

The hairstyle features include hairstyle elements such as the color, length, and shape of the user's hair. From the head image, the terminal identifies the features of these hairstyle elements.

In one possible implementation, a hairstyle feature identification model is stored in the terminal. The terminal inputs the head image into the hairstyle feature identification model and outputs the hairstyle features of the current user. The hairstyle feature identification model is used to identify hairstyle features based on a head image.
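The on-device path of step S203 can be sketched as below. This is a minimal illustration, not the disclosed implementation: the `HairstyleFeatureModel` class, its label sets, and the byte-sum "inference" are all stand-ins for a trained recognition network.

```python
class HairstyleFeatureModel:
    """Stand-in for the hairstyle feature identification model stored in the terminal."""

    COLORS = ("black", "brown", "blonde")
    LENGTHS = ("short", "medium", "long")
    SHAPES = ("straight", "curly")

    def predict(self, head_image: bytes) -> dict:
        # Placeholder "inference": derive a deterministic pseudo-score from the
        # image bytes. A real terminal would run a trained network here.
        h = sum(head_image)
        return {
            "color": self.COLORS[h % len(self.COLORS)],
            "length": self.LENGTHS[h % len(self.LENGTHS)],
            "shape": self.SHAPES[h % len(self.SHAPES)],
        }


def identify_hairstyle(head_image: bytes) -> dict:
    # Step S203: input the head image into the model, output hairstyle features.
    return HairstyleFeatureModel().predict(head_image)


features = identify_hairstyle(b"\x01\x02\x03")
```

The output is a dictionary with one feature per hairstyle element (color, length, shape), which later steps map onto the three-dimensional model.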
In another possible implementation, the terminal obtains the hairstyle features of the current user by having a server identify them from the head image. Correspondingly, this process may be: the terminal sends a first identification request to the server, the request carrying the head image of the current user; the server receives the first identification request, identifies the head image carried in the request, and sends the identified hairstyle features of the current user to the terminal; and the terminal receives the hairstyle features of the current user sent by the server.
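The server-assisted exchange can be sketched as follows. The JSON field names, base64 encoding, and in-process "server" are illustrative assumptions; the disclosure only specifies that the request carries the head image and the reply carries the recognized hairstyle features.

```python
import base64
import json


def build_first_identification_request(head_image: bytes) -> str:
    """Terminal side: wrap the head image into a first identification request."""
    return json.dumps({
        "type": "first_identification",
        "head_image": base64.b64encode(head_image).decode("ascii"),
    })


def server_handle(request: str) -> str:
    """Server side: decode the image, recognize it, and reply with the features."""
    req = json.loads(request)
    image = base64.b64decode(req["head_image"])
    # Stand-in for the server-side recognizer; a real server would run a model
    # over `image` instead of returning fixed features.
    features = {"color": "brown", "length": "long", "shape": "straight"}
    return json.dumps({"hairstyle_features": features})


reply = json.loads(server_handle(build_first_identification_request(b"img")))
```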
It should be noted that the terminal may identify the user's hairstyle features from a single head image or from multiple head images. When identifying from a single head image, that image may be any image containing the user's head, and the terminal determines the user's hairstyle features by recognizing it. When identifying from multiple head images, the terminal may recognize the user's head from the different angles shown in the different images and integrate the per-angle recognition results to obtain the current hairstyle features.
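One way to "integrate" the per-angle results is a per-element majority vote, sketched below. The disclosure does not specify the aggregation strategy, so the voting rule here is an assumption.

```python
from collections import Counter


def merge_angle_results(results: list) -> dict:
    """Merge per-angle recognition results: majority vote per hairstyle element."""
    merged = {}
    for key in results[0]:
        votes = Counter(r[key] for r in results)
        merged[key] = votes.most_common(1)[0][0]  # most frequent value wins
    return merged


front = {"color": "black", "length": "long"}
side = {"color": "black", "length": "medium"}
back = {"color": "black", "length": "long"}
merged = merge_angle_results([front, side, back])
```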
In step S204, when the terminal detects that a second shooting button is triggered, the terminal shoots the face of the current user through the camera to obtain a first face image.

When the terminal detects that the second shooting button is triggered, the terminal scans the user's face to obtain the first face image of the user. The first face image includes at least one image containing the user's facial features, and may be a front view or a side view of the user's face.
In one possible implementation, the terminal may also select the first face image from the local image library. Correspondingly, step S204 may be replaced with: when the terminal detects that the second shooting button is triggered, the terminal opens the camera and displays a second shooting interface that includes a gallery button; when the terminal detects that the gallery button is triggered, it displays the local image library, from which the current user can select at least one second local image that contains at least the facial features of the current user; the terminal obtains the second local image selected by the current user and obtains the first face image of the current user from it.
It should be noted that the first shooting button and the second shooting button may be the same button or different buttons. When they are the same button, after the terminal detects that the shooting button is triggered, it opens the camera, which shoots the user's head image and first face image at the same time; the head image and the first face image may be the same image or different images. When they are different buttons, the terminal separately detects whether each is triggered: when the terminal detects that the first shooting button is triggered, it opens the camera and shoots the user's head image; when the terminal detects that the second shooting button is triggered, it opens the camera and shoots the user's first face image.
In step S205, the terminal identifies the facial features of the current user from the first face image.

The facial features include facial elements and facial element features. The facial elements may include the facial features proper and the skin color, where the facial features proper may be the face shape, eyebrows, eyes, nose, and lips. A facial element feature is the feature corresponding to a facial element. For example, when the facial element is the eyes, the corresponding facial element features describe the shape and/or size of the eyes, such as large round eyes or small narrow eyes.
In one possible implementation, a facial feature identification model is stored in the terminal. The terminal inputs the first face image into the facial feature identification model and outputs the facial features of the current user. The facial feature identification model is used to identify facial features based on a face image.
In another possible implementation, the terminal may obtain the facial features of the current user by having a server identify the first face image. Correspondingly, this process may be: the terminal sends a second identification request to the server, the request carrying the first face image of the current user; the server receives the second identification request, identifies the first face image carried in the request, and sends the identified facial features of the current user to the terminal; and the terminal receives the facial features of the current user sent by the server.
It should be noted that, because the first shooting button and the second shooting button may be the same button or different buttons, when they are the same button and the terminal detects that the shooting button is triggered, the terminal may identify the hairstyle features and facial features of the user in the image synchronously; alternatively, when the shooting button is triggered, the terminal may prompt the user to choose whether to identify the hairstyle features or the facial features, and then identify according to the user's choice. When the first shooting button and the second shooting button are different buttons, the terminal obtains and identifies the image according to whichever button the user triggers.
It should also be noted that the head image and the first face image may be the same image or different images. When they are the same image, that image contains both the user's hairstyle features and facial features; correspondingly, the terminal may identify the user's hairstyle features and facial features from the image simultaneously, or identify them one after the other.
It should further be noted that this disclosure places no specific requirement on the order in which the terminal obtains the hairstyle features and the facial features: the terminal may perform step S202 before step S204, or step S204 before step S202. Likewise, the order of steps S203 and S205 is not specifically limited: the terminal may perform step S203 before step S205, or step S205 before step S203. However, step S203 is performed after step S202, and step S205 is performed after step S204.
In step S206, the terminal generates the first three-dimensional face model of the current user based on the hairstyle features and the facial features.

In one possible implementation, the terminal generates the three-dimensional face model of the current user directly from the recognized hairstyle features and facial features. Correspondingly, this step may be: the terminal determines an original three-dimensional face model and maps the recognized hairstyle features and facial features onto the original model to generate the first three-dimensional face model.
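The direct path can be sketched as below, with the three-dimensional model reduced to a plain dictionary of feature slots for illustration; a real terminal would adjust mesh geometry and textures instead.

```python
# Original (template) face model with default slots; an assumed representation.
ORIGINAL_MODEL = {
    "hairstyle": {"color": "black", "length": "short", "shape": "straight"},
    "face": {"eyes": "default", "skin": "default"},
}


def map_features(hairstyle: dict, facial: dict) -> dict:
    """Map recognized features onto a copy of the original model (step S206)."""
    model = {
        "hairstyle": dict(ORIGINAL_MODEL["hairstyle"]),
        "face": dict(ORIGINAL_MODEL["face"]),
    }
    model["hairstyle"].update(hairstyle)  # overwrite recognized hairstyle slots
    model["face"].update(facial)          # overwrite recognized facial slots
    return model


first_model = map_features({"color": "brown"}, {"eyes": "double-eyelid"})
```

Slots that were not recognized keep the template defaults, which matches the idea of mapping features onto an original model rather than building one from scratch.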
In another possible implementation, the terminal obtains the identity features of the user and generates the three-dimensional face model of the current user from the identity features, the hairstyle features, and the facial features. Correspondingly, this may be realized through the following steps (1)-(3):
(1) The terminal obtains the identity features of the current user based on the three-dimensional model creation interface.

The identity features include identity elements such as the user's gender and age. The user may select the identity features manually, as in the first implementation below; or the terminal may identify the identity features of the user from a second face image, as in the second implementation below.
In the first implementation, the three-dimensional model creation interface includes multiple identity elements and the identity element features corresponding to each identity element. The terminal obtains at least one selected identity element and, for each selected identity element, displays the multiple identity element features corresponding to that element and obtains the selected identity element feature; the at least one selected identity element and the at least one selected identity element feature together form the identity features.

Here, the identity element features are the multiple different feature values of an identity element. For example, when the identity element is gender, the identity element features are male and female; when the identity element is age, the identity element features may be any age, such as 10, 15, 20, or 25.
In the second implementation, the three-dimensional model creation interface includes a third shooting button. When the terminal detects that the third shooting button is triggered, it shoots the face of the current user through the camera to obtain a second face image and identifies the identity features of the current user from the second face image.
In this step, an identity feature identification model may be stored in the terminal: the terminal inputs the second face image into the identity feature identification model and outputs the identity features of the current user. The terminal may also identify the identity features of the current user through a server. Correspondingly, this process may be:

The terminal sends a third identification request to the server, the request carrying the second face image of the current user; the server receives the third identification request, identifies the second face image carried in the request, and sends the identified identity features of the current user to the terminal; and the terminal receives the identity features of the current user sent by the server.
It should be noted that the third shooting button may be the same shooting button as the first shooting button or a different one; likewise, the third shooting button and the second shooting button may be the same button or different buttons.

It should also be noted that the first face image and the second face image may be the same image or different images. When they are the same image, the terminal may identify the user's facial features and identity features simultaneously.
(2) The terminal determines, according to the identity features of the current user, the standard three-dimensional model corresponding to those identity features.

Before this step, different standard three-dimensional models may be stored in the terminal according to users' identity features. After determining the identity features of the current user, the terminal selects the standard three-dimensional model corresponding to those identity features from the stored set. Correspondingly, the terminal pre-stores the correspondence between identity features and standard three-dimensional models. For example, for the identity features "male, 20 years old" the corresponding standard model is a first standard three-dimensional model, and for "female, 20 years old" it is a second standard three-dimensional model.
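The stored correspondence can be sketched as a lookup table keyed by (gender, age). The nearest-age bucketing is an assumption added so that ages between stored entries still resolve; the disclosure only gives exact example pairs.

```python
# Assumed pre-stored correspondence between identity features and standard models.
STANDARD_MODELS = {
    ("male", 20): "first_standard_model",
    ("female", 20): "second_standard_model",
}


def select_standard_model(gender: str, age: int) -> str:
    """Pick the standard model whose stored age bucket is nearest for the gender."""
    candidates = [(g, a) for (g, a) in STANDARD_MODELS if g == gender]
    g, a = min(candidates, key=lambda key: abs(key[1] - age))
    return STANDARD_MODELS[(g, a)]


model = select_standard_model("female", 22)
```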
(3) The terminal maps the hairstyle features and the facial features onto the standard three-dimensional model to obtain the first three-dimensional face model of the current user.

In one possible implementation, after mapping the user's hairstyle features and facial features onto the standard three-dimensional model, the terminal directly obtains the first three-dimensional face model and stores it. In another possible implementation, after mapping the user's hairstyle features and facial features onto the standard three-dimensional model, the terminal generates a second three-dimensional face model; the user may modify the hairstyle features and facial features in the second model, and after the user confirms, the second three-dimensional face model is confirmed as the first three-dimensional face model. Correspondingly, this step may be realized through the following steps (3-1)-(3-4):
(3-1) The terminal maps the hairstyle features and the facial features onto the standard three-dimensional model to obtain a second three-dimensional face model of the current user.

(3-2) The terminal sets the hairstyle features and the facial features in the second three-dimensional face model to an editable state.

When the user needs to modify the hairstyle features or the facial features in the second three-dimensional face model, these features can be set to an editable state. In one possible implementation, an edit button is displayed in the terminal interface; when the terminal detects that the edit button is triggered, it sets the hairstyle features and the facial features in the second three-dimensional face model to an editable state. In another possible implementation, when the terminal detects a touch operation on a particular hairstyle feature or facial feature, the terminal sets that feature to an editable state.
(3-3) In the editable state, when the terminal detects that a first feature is selected, the terminal obtains the second feature into which the first feature is to be modified.

From the first feature, the terminal determines the element it belongs to and displays the other features of that element; when a second feature is selected among them, the second feature replaces the first feature.
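The feature replacement of step (3-3) can be sketched as below, again with the model reduced to a dictionary of feature slots; the group/element addressing is an assumed simplification.

```python
def edit_feature(model: dict, group: str, element: str, second_feature: str) -> dict:
    """Replace the selected first feature of one element with the chosen second
    feature, returning an edited copy so the unconfirmed model is untouched."""
    edited = {g: dict(elements) for g, elements in model.items()}
    edited[group][element] = second_feature
    return edited


second_model = {"hairstyle": {"color": "black"}, "face": {"eyes": "single-eyelid"}}
first_model = edit_feature(second_model, "face", "eyes", "double-eyelid")
```

Returning a copy mirrors the flow in which the second model is only confirmed as the first model after the user's edits are accepted.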
(3-4) The terminal modifies the first feature in the second three-dimensional face model into the second feature to obtain the first three-dimensional face model of the current user.

The modified second three-dimensional face model is taken as the first three-dimensional face model and stored. As shown in Fig. 3, the first three-dimensional face model is the current user's exclusive three-dimensional face model. When the current user is unwilling to appear with their true face, they can choose to appear in the video picture in cartoon form: in the shooting interface, under a "cute face" option, the current user selects their exclusive three-dimensional face model to cover the user's head in the video picture.
In the embodiments of the present disclosure, by selecting the shooting button in the three-dimensional model creation interface, the terminal scans and analyzes the head of the current user, determines the hairstyle features and facial features of the current user, and generates the first three-dimensional face model of the current user from those features. While shooting video, the user can thus cover their head in the video picture with a virtual face image that closely resembles their own appearance. Moreover, because the virtual face image is determined from the hairstyle features and facial features of the current user, the virtual face image fits the user's appearance more closely, making it a virtual face image unique to that user.

Further, the terminal can modify the automatically generated virtual face image, letting the user freely change the hairstyle features or facial features of the virtual face, which adds interest to the video-shooting process and improves the user experience.
Fig. 4 is a flowchart of a method for generating a three-dimensional face model according to an exemplary embodiment. In this embodiment, the description takes as an example the terminal displaying multiple hairstyle features and facial features and the current user manually selecting among them. As shown in Fig. 4, the method includes the following steps:
In step S401, when a generation instruction for the three-dimensional face model is detected, the terminal displays the three-dimensional model creation interface.

The process by which the terminal displays the three-dimensional model creation interface is the same as in step S301 and is not repeated here. The three-dimensional model creation interface includes at least one hairstyle element, from which the user can select. The creation interface may also include a hairstyle feature trigger button and a facial feature trigger button: when the terminal detects that the hairstyle feature trigger button is clicked, it determines that the user has selected the hairstyle features, thereby triggering the terminal to display at least one hairstyle element; when the terminal detects that the facial feature trigger button is clicked, it determines that the user has selected the facial features, thereby triggering the terminal to display at least one facial element.
In step S402, the terminal obtains at least one hairstyle element selected in the three-dimensional model creation interface.

When the creation interface includes at least one hairstyle element, the current user can choose one or more hairstyle elements; when the terminal detects that at least one hairstyle element is clicked, it determines that the hairstyle element is selected and obtains the selected element. The hairstyle elements include the color, length, and shape of the hair, among others.

When the creation interface includes a hairstyle feature trigger button, at least one hairstyle element is displayed once the button is triggered, and the current user can select one or more hairstyle elements by clicking; when the terminal detects that at least one hairstyle element is clicked, it determines that the hairstyle element is selected and obtains the selected element. As shown in Fig. 5, a "hairstyle" button is displayed in the creation interface; when the terminal detects that the user touches or clicks the "hairstyle" button, it determines that the hairstyle feature trigger button is triggered and displays hairstyle elements such as the color, length, and shape of the hair in the creation interface in the form of images.
In step S403, for each selected hairstyle element, the terminal displays the multiple hairstyle element features corresponding to that element and obtains the selected hairstyle element feature.

A hairstyle element feature is a feature value of a hairstyle element. For example, when the hairstyle element is color, its features may be black, brown, golden, and so on; when the hairstyle element is length, its features may be long hair, medium-length hair, short hair, and so on; when the hairstyle element is shape, its features may be straight hair, curly hair, a single ponytail, twin ponytails, and so on.

After obtaining the at least one selected hairstyle element, the terminal may directly display the multiple features of each selected element; as shown in Fig. 5, the terminal displays multiple color features and multiple length features of the hair. The terminal may also first display the features of one hairstyle element and, after detecting that a feature of that element has been selected, display the features of the next hairstyle element.

Here, a feature button is displayed in the terminal for each hairstyle element feature. The terminal detects whether a feature button is touched or clicked; when it detects that a button is touched or clicked, the terminal determines that the corresponding hairstyle element feature is selected.
In step S404, the terminal composes the hairstyle features from the at least one selected hairstyle element and the at least one selected hairstyle element feature.

From the at least one hairstyle element selected by the user and the feature selected for each element, the terminal determines the hairstyle features of the user. For example, if the terminal detects that the selected hairstyle elements are color and length, and correspondingly that the selected hairstyle element features are brown and long hair, the terminal determines the user's hairstyle feature to be brown long hair.
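The composition of steps S402-S404 can be sketched as follows; the fixed element order and string joining are illustrative choices, not part of the disclosure.

```python
def compose_hairstyle(selected: dict) -> str:
    """Compose the hairstyle feature from the user's selected elements,
    e.g. {"color": "brown", "length": "long hair"} -> "brown long hair"."""
    order = ("color", "length", "shape")  # assumed display order of elements
    return " ".join(selected[k] for k in order if k in selected)


hairstyle = compose_hairstyle({"color": "brown", "length": "long hair"})
```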
It should be noted that the hairstyle features of the at least one hairstyle element may be displayed in text form or in image form; this disclosure does not specifically limit the display format of the at least one hairstyle element. Moreover, the hairstyle features of the at least one hairstyle element may be features stored in the terminal, or features generated by the terminal from a scanned head image of the user.
In step S405, the terminal obtains at least one selected facial element.

The three-dimensional model creation interface includes at least one facial element, and the terminal may allow multiple facial elements to be configured at the same time. As shown in Fig. 6, the creation interface includes facial elements such as skin color, face shape, eyebrows, eyes, nose, and lips, and the skin color and face shape can be displayed and configured on the terminal interface simultaneously. The terminal may also configure the at least one facial element one after another. The facial elements include the eyes, nose, mouth, and skin color of the face, among others.
In step S406, for each selected facial element, the terminal displays the multiple facial element features corresponding to that element and obtains the selected facial element feature.

A facial element feature is a feature value of a facial element. For example, when the facial element is the eyes, its features may be single-fold eyelids, double-fold eyelids, and so on; when the facial element is skin color, its features may be fair skin, yellow-toned skin, dark skin, and so on.

Here, a feature button is displayed in the terminal for each facial element feature. The terminal detects whether a feature button is touched or clicked; when it detects that a button is touched or clicked, the terminal determines that the corresponding facial element feature is selected.
In step S407, the terminal composes the facial features from the at least one selected facial element and the at least one selected facial element feature.

From the at least one facial element selected by the user and the feature selected for each element, the terminal determines the facial features of the user. For example, if the terminal detects that the selected facial elements are the eyes and the skin color, and correspondingly that the selected facial element features are double-fold eyelids and yellow-toned skin, the terminal determines the user's facial features to be double-fold eyelids and yellow-toned skin.
It should be noted that the facial features of the at least one facial element may be displayed in text form or in image form; this disclosure does not specifically limit the display format of the at least one facial element. Moreover, the facial features of the at least one facial element may be features stored in the terminal, or features generated by the terminal from a scanned first face image of the user.
In step S408, the terminal generates the first three-dimensional face model of the current user based on the hairstyle features and the facial features.

This step is the same as step S206 and is not repeated here.
It should be noted that this disclosure places no specific requirement on the order in which the terminal obtains the hairstyle features and the facial features: the terminal may perform step S402 before step S405, or step S405 before step S402. Likewise, the order of steps S403-S404 and steps S406-S407 is not specifically limited: the terminal may perform steps S403-S404 before steps S406-S407, or steps S406-S407 before steps S403-S404. However, steps S403-S404 are performed after step S402, and steps S406-S407 are performed after step S405.
It should also be noted that, in the embodiments of the present disclosure, the method by which the terminal obtains the hairstyle features and the facial features is not specifically limited. For example, the terminal may obtain the user's hairstyle features through steps S202-S203 and the user's facial features through steps S204-S205; similarly, the terminal may obtain the user's hairstyle features through steps S402-S404 and the user's facial features through steps S204-S205.
In the embodiments of the present disclosure, by selecting hairstyle features and facial features in the three-dimensional model creation interface, the terminal determines the hairstyle features and facial features of the current user and generates the first three-dimensional face model of the current user from those features. Because the virtual face image is determined from the hairstyle features and facial features of the current user, the virtual face image fits the user's appearance more closely, making it a virtual face image unique to that user.

Further, the terminal can modify the automatically generated virtual face image, letting the user freely change the hairstyle features or facial features of the virtual face, which adds interest to the video-shooting process and improves the user experience.
Fig. 7 is a flowchart of a method for generating a three-dimensional face model according to an exemplary embodiment. In this embodiment, the description takes as an example using the model, once generated, to display the user's expressions in real time. As shown in Fig. 7, the method includes the following steps:
In step S701, while the active user is shooting a video, the terminal obtains a third face image of the active user.
The terminal obtains pictures while the user shoots the video; the terminal may periodically acquire a frame of the video being shot and obtain the face image from that frame. The period for acquiring video frames can be configured and changed according to the user's needs; in the embodiments of the present disclosure, the length of the period is not specifically limited. For example, the period may be 5 ms, 10 ms, or 15 ms.
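The periodic frame sampling described above can be sketched as follows. This is a minimal illustration only: the `FakeCamera` class, the `read()` interface, and the sampling parameters are assumptions for the example, not part of the disclosure.

```python
import time

def sample_frames(video_source, period_ms=10, max_frames=5):
    """Periodically grab frames from a video source (hypothetical interface).

    `video_source` is assumed to expose a `read()` method returning the
    latest video picture; 10 ms is one of the example periods in the text.
    """
    frames = []
    for _ in range(max_frames):
        frames.append(video_source.read())   # grab the current picture
        time.sleep(period_ms / 1000.0)       # wait one sampling period
    return frames

class FakeCamera:
    """Stand-in for the terminal's camera, for illustration only."""
    def __init__(self):
        self.n = 0
    def read(self):
        self.n += 1
        return f"frame-{self.n}"

frames = sample_frames(FakeCamera(), period_ms=1, max_frames=3)
print(frames)  # ['frame-1', 'frame-2', 'frame-3']
```

In a real implementation the configured period would simply control how often the latest camera frame is handed to the expression-recognition step.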
It should be noted that the video-shooting process may be a process in which the user conducts a live broadcast through application software, or a process in which the user shoots a video through application software; the embodiments of the present disclosure do not specifically limit this.
In step S702, the terminal identifies the facial expression parameters of the active user from the third face image based on a facial expression recognition model.
The terminal inputs the acquired video frame into the facial expression recognition model, which identifies the user's current facial expression parameters. In one possible implementation, the parameters form a vector in which each value represents the stretching or contraction of a facial muscle. The length of the vector can be configured and changed as needed; in the embodiments of the present disclosure, the length of the vector is not specifically limited. For example, the length of the vector may be 50, 51, 52, or 53.
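The parameter vector can be sketched as a fixed-length list of per-muscle activations. The clamping range [-1, 1] and the padding behavior are assumptions made for this sketch; the disclosure only says each value represents stretching or contraction of a facial muscle.

```python
def normalize_expression_params(raw_values, length=52):
    """Represent facial-expression parameters as a fixed-length vector.

    Each entry models the stretch (positive) or contraction (negative) of
    one facial muscle; the length 52 is one of the examples in the text.
    The recognition model producing `raw_values` is out of scope here.
    """
    # pad or truncate to the configured vector length
    vec = list(raw_values)[:length] + [0.0] * max(0, length - len(raw_values))
    # clamp each activation into an assumed [-1, 1] range
    return [max(-1.0, min(1.0, v)) for v in vec]

params = normalize_expression_params([0.3, -1.7, 2.0], length=5)
print(params)  # [0.3, -1.0, 1.0, 0.0, 0.0]
```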
In step S703, the terminal drives the first three-dimensional face model to display a virtual expression corresponding to the third face image, based on the facial expression parameters.
It should be noted that the terminal can collect a voice signal of the active user and drive the first three-dimensional face model to play the voice signal. In one possible implementation, the terminal determines the virtual expression in the first three-dimensional face model corresponding to the voice signal according to the facial expression parameters. In another possible implementation, the terminal synchronously receives the user's audio signal and determines the virtual expression corresponding to the audio signal according to audio features such as syllables. Moreover, the voice signal that finally drives the first three-dimensional face model may be the original voice signal of the active user collected by the terminal, or a voice-changed version of that collected signal.
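Driving the model with the expression parameters can be sketched as a blendshape-style combination: each muscle parameter weights a per-vertex offset added to a neutral mesh. The blendshape representation is a common technique assumed here for illustration; the disclosure itself does not specify how the parameters deform the model.

```python
def drive_model(base_vertices, blendshape_deltas, params):
    """Drive a 3D face model with expression parameters (blendshape sketch).

    `base_vertices` is the neutral mesh, `blendshape_deltas[i]` the
    per-vertex offset of muscle i at full activation, and `params[i]` the
    activation from the recognition step. All names are illustrative.
    """
    out = [list(v) for v in base_vertices]
    for deltas, w in zip(blendshape_deltas, params):
        for vi, d in enumerate(deltas):
            for c in range(3):           # x, y, z components
                out[vi][c] += w * d[c]   # add the weighted muscle offset
    return out

base = [[0.0, 0.0, 0.0]]                          # one-vertex toy mesh
deltas = [[[1.0, 0.0, 0.0]], [[0.0, 2.0, 0.0]]]   # two "muscles"
print(drive_model(base, deltas, [0.5, 0.25]))     # [[0.5, 0.5, 0.0]]
```

Re-running this every sampling period with fresh parameters is what makes the model mirror the user's expression in real time.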
In the embodiments of the present disclosure, the terminal obtains the current facial expression of the user by acquiring the third face image of the active user, determines the user's current expression parameters from that expression, and maps the expression onto the first three-dimensional face model according to those parameters. This breaks the monotony of traditional approaches, records the user's facial expression with a three-dimensional face model, increases the fun of shooting videos, and improves the user experience.
Further, the voice signal is combined with the virtual expression of the user's current first three-dimensional face model, so that a talking three-dimensional model is no longer found only in film and television. This considerably increases the fun of shooting videos and improves the user experience.
Fig. 8 is a block diagram of a device for generating a three-dimensional face model according to an exemplary embodiment. As shown in Fig. 8, the device includes:
a display module 801, configured to display a three-dimensional model creation interface when a generation instruction for a three-dimensional face model is detected;
a first obtaining module 802, configured to obtain the hair style feature of the active user based on the three-dimensional model creation interface, and to obtain the facial features of the active user based on the three-dimensional model creation interface;
a generation module 803, configured to generate the first three-dimensional face model of the active user based on the hair style feature and the facial features.
In one possible implementation, the three-dimensional model creation interface includes multiple hair style elements and multiple hair style element features corresponding to each hair style element;
the first obtaining module 802 is further configured to obtain at least one selected hair style element; for each selected hair style element, display the multiple hair style element features corresponding to the hair style element and obtain the selected hair style element feature; and compose the at least one hair style element and the at least one selected hair style element feature into the hair style feature.
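The composition of selected elements and element features into a single hair style feature can be sketched as a simple mapping. The element names ('length', 'bangs') and the dictionary data model are invented for illustration; the disclosure does not define a concrete representation.

```python
def compose_hair_style_feature(selections):
    """Compose selected hair-style elements and their chosen element
    features into one hair-style feature (illustrative data model).

    `selections` maps each chosen element (e.g. 'bangs') to the feature
    chosen for that element (e.g. 'side-swept'); the names are invented.
    """
    if not selections:
        raise ValueError("at least one hair style element must be selected")
    # the composed hair style feature is simply the set of element choices
    return dict(selections)

feature = compose_hair_style_feature({"length": "short", "bangs": "side-swept"})
print(feature["length"])  # short
```

The same pattern would apply to the face facial elements and identity elements described below: each selected element contributes its selected element feature to the composed feature.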
In another possible implementation, the three-dimensional model creation interface includes a first shooting button;
the first obtaining module 802 is further configured to, when it is detected that the first shooting button is triggered, shoot the head of the active user through a camera to obtain a head image, and identify the hair style feature of the active user from the head image.
In another possible implementation, the three-dimensional model creation interface includes multiple face facial elements and multiple facial element features corresponding to each face facial element;
the first obtaining module 802 is further configured to obtain at least one selected face facial element; for each selected face facial element, display the multiple facial element features corresponding to the face facial element and obtain the selected facial element feature; and compose the at least one face facial element and the at least one selected facial element feature into the facial features.
In another possible implementation, the three-dimensional model creation interface includes a second shooting button;
the first obtaining module 802 is further configured to, when it is detected that the second shooting button is triggered, shoot the face of the active user through a camera to obtain a first face image, and identify the facial features of the active user from the first face image.
In another possible implementation, the generation module 803 is further configured to obtain an identity feature of the active user based on the three-dimensional model creation interface; determine a standard three-dimensional model corresponding to the identity feature according to the identity feature of the active user; and map the hair style feature and the facial features into the standard three-dimensional model to obtain the first three-dimensional face model of the active user.
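The identity-based selection and feature mapping can be sketched as a lookup followed by attaching the obtained features. The identity categories, the model registry, and the dictionary model representation are all assumptions made for this illustration.

```python
# Hypothetical registry of standard models keyed by identity feature.
STANDARD_MODELS = {
    ("female", "adult"): {"mesh": "standard-female-adult"},
    ("male", "adult"): {"mesh": "standard-male-adult"},
}

def build_first_face_model(identity, hair_style_feature, facial_features):
    """Pick the standard model for an identity, then map the user's
    hair style feature and facial features onto it (sketch only)."""
    base = STANDARD_MODELS.get(identity)
    if base is None:
        raise KeyError(f"no standard model for identity {identity}")
    model = dict(base)                  # copy the standard model
    model["hair"] = hair_style_feature  # map the hair style feature onto it
    model["face"] = facial_features     # map the facial features onto it
    return model

m = build_first_face_model(("female", "adult"),
                           {"length": "short"}, {"eyes": "round"})
print(m["mesh"])  # standard-female-adult
```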
In another possible implementation, the three-dimensional model creation interface includes multiple identity elements and identity element features corresponding to each identity element, and the generation module 803 is further configured to obtain at least one selected identity element; for each selected identity element, display the multiple identity element features corresponding to the identity element and obtain the selected identity element feature; and compose the at least one identity element and the at least one selected identity element feature into the identity feature; alternatively,
the three-dimensional model creation interface includes a third shooting button, and the generation module 803 is further configured to, when it is detected that the third shooting button is triggered, shoot the face of the active user through a camera to obtain a second face image, and identify the identity feature of the active user from the second face image.
In another possible implementation, the generation module 803 is further configured to map the hair style feature and the facial features into the standard three-dimensional model to obtain a second three-dimensional face model of the active user; set the hair style feature and the facial features in the second three-dimensional face model to an editable state; in the editable state, when a selected first feature is detected, obtain the second feature into which the first feature is modified; and revise the first feature in the second three-dimensional face model into the second feature to obtain the first three-dimensional face model of the active user.
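The edit flow — generate a second model, let the user select a feature, and replace it to obtain the first model — can be sketched as follows. The feature keys ('nose', 'hair') and the dictionary model representation are invented for illustration.

```python
def edit_model(second_model, selected_key, modified_value):
    """Replace one editable feature of the second model to produce the
    first model, leaving the second model and all other features intact.
    """
    if selected_key not in second_model:
        raise KeyError(f"feature {selected_key!r} not found")
    first_model = dict(second_model)            # keep the rest unchanged
    first_model[selected_key] = modified_value  # first feature -> second feature
    return first_model

second = {"hair": "short", "nose": "straight"}
first = edit_model(second, "nose", "upturned")
print(first["nose"], second["nose"])  # upturned straight
```

Copying rather than mutating means the automatically generated model survives the edit, so the user can discard changes and start over.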
In another possible implementation, the device further includes:
a second obtaining module, configured to obtain a third face image of the active user while the active user is shooting a video;
an identification module, configured to identify facial expression parameters of the active user from the third face image based on a facial expression recognition model;
a first drive module, configured to drive the first three-dimensional face model to display a virtual expression corresponding to the third face image, based on the facial expression parameters.
In another possible implementation, the device further includes:
an acquisition module, configured to collect a voice signal of the active user;
a second drive module, configured to drive the first three-dimensional face model to play the voice signal.
In the embodiments of the present disclosure, the hair style feature and facial features of the active user are determined in the three-dimensional model creation interface, and the first three-dimensional face model of the active user is generated from the hair style feature and the facial features. Since the virtual face image is determined according to the hair style feature and facial features of the active user, the virtual face image more closely matches the appearance of the active user, making the virtual face image unique.
Regarding the device for generating a three-dimensional face model in the above embodiments, the specific manner in which each module performs operations has been described in detail in the embodiments of the method for generating a three-dimensional face model, and will not be elaborated here.
Fig. 9 is a block diagram of a terminal 900 for generating a three-dimensional face model according to an exemplary embodiment. For example, the terminal 900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Referring to Fig. 9, the terminal 900 may include one or more of the following components: a processing component 902, a memory 904, a power component 906, a multimedia component 908, an audio component 910, an input/output (I/O) interface 912, a sensor component 914, and a communication component 916.
The processing component 902 typically controls the overall operations of the terminal 900, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 902 may include one or more processors 920 to execute instructions to perform all or part of the steps of the methods described above. In addition, the processing component 902 may include one or more modules that facilitate the interaction between the processing component 902 and other components. For example, the processing component 902 may include a multimedia module to facilitate the interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support the operation of the terminal 900. Examples of such data include instructions for any application or method operated on the terminal 900, contact data, phone book data, messages, pictures, videos, and so on. The memory 904 may be implemented using any type of volatile or non-volatile memory device, or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disk.
The power component 906 provides power to the various components of the terminal 900. The power component 906 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the terminal 900.
The multimedia component 908 includes a screen providing an output interface between the terminal 900 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide action. In some embodiments, the multimedia component 908 includes a front camera and/or a rear camera. When the terminal 900 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
The audio component 910 is configured to output and/or input audio signals. For example, the audio component 910 includes a microphone (MIC) configured to receive external audio signals when the terminal 900 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may be further stored in the memory 904 or transmitted via the communication component 916. In some embodiments, the audio component 910 further includes a speaker for outputting audio signals.
The I/O interface 912 provides an interface between the processing component 902 and peripheral interface modules, such as a keyboard, a click wheel, or buttons. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 914 includes one or more sensors to provide status assessments of various aspects of the terminal 900. For example, the sensor component 914 can detect the open/closed status of the terminal 900 and the relative positioning of components, such as the display and keypad of the terminal 900; the sensor component 914 can also detect a change in position of the terminal 900 or a component of the terminal 900, the presence or absence of user contact with the terminal 900, the orientation or acceleration/deceleration of the terminal 900, and a change in temperature of the terminal 900. The sensor component 914 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 914 may also include an accelerometer, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate wired or wireless communication between the terminal 900 and other devices. The terminal 900 can access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 916 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 916 further includes a near field communication (NFC) module to facilitate short-range communication.
In an exemplary embodiment, the terminal 900 may be implemented with one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method for generating a three-dimensional face model.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 904 including instructions, where the above instructions are executable by the processor 920 of the terminal 900 to complete the above method for generating a three-dimensional face model. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium; when the instructions in the storage medium are executed by a processor of a terminal, the terminal is enabled to perform a method for generating a three-dimensional face model, the method comprising:
when a generation instruction for a three-dimensional face model is detected, displaying a three-dimensional model creation interface;
obtaining a hair style feature of an active user based on the three-dimensional model creation interface, and obtaining facial features of the active user based on the three-dimensional model creation interface;
generating a first three-dimensional face model of the active user based on the hair style feature and the facial features.
In an exemplary embodiment, there is also provided an application program; when the instructions in the application program are executed by a processor of a terminal, the terminal is enabled to perform a method for generating a three-dimensional face model, the method comprising:
when a generation instruction for a three-dimensional face model is detected, displaying a three-dimensional model creation interface;
obtaining a hair style feature of an active user based on the three-dimensional model creation interface, and obtaining facial features of the active user based on the three-dimensional model creation interface;
generating a first three-dimensional face model of the active user based on the hair style feature and the facial features.
Those skilled in the art will readily conceive of other embodiments of the disclosure after considering the specification and practicing the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow the general principles of the disclosure and include common knowledge or customary technical means in the art not disclosed by the disclosure. The specification and examples are to be considered illustrative only, with the true scope and spirit of the disclosure indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. A method for generating a three-dimensional face model, characterized in that the method comprises:
when a generation instruction for a three-dimensional face model is detected, displaying a three-dimensional model creation interface;
obtaining a hair style feature of an active user based on the three-dimensional model creation interface, and obtaining facial features of the active user based on the three-dimensional model creation interface;
generating a first three-dimensional face model of the active user based on the hair style feature and the facial features.
2. The method according to claim 1, characterized in that the three-dimensional model creation interface includes multiple hair style elements and multiple hair style element features corresponding to each hair style element;
the obtaining the hair style feature of the active user based on the three-dimensional model creation interface comprises:
obtaining at least one selected hair style element;
for each selected hair style element, displaying the multiple hair style element features corresponding to the hair style element, and obtaining the selected hair style element feature;
composing the at least one hair style element and the at least one selected hair style element feature into the hair style feature.
3. The method according to claim 1, characterized in that the three-dimensional model creation interface includes a first shooting button;
the obtaining the hair style feature of the active user based on the three-dimensional model creation interface comprises:
when it is detected that the first shooting button is triggered, shooting the head of the active user through a camera to obtain a head image;
identifying the hair style feature of the active user from the head image.
4. The method according to claim 1, characterized in that the three-dimensional model creation interface includes multiple face facial elements and multiple facial element features corresponding to each face facial element;
the obtaining the facial features of the active user based on the three-dimensional model creation interface comprises:
obtaining at least one selected face facial element;
for each selected face facial element, displaying the multiple facial element features corresponding to the face facial element, and obtaining the selected facial element feature;
composing the at least one face facial element and the at least one selected facial element feature into the facial features.
5. The method according to claim 1, characterized in that the three-dimensional model creation interface includes a second shooting button;
the obtaining the facial features of the active user based on the three-dimensional model creation interface comprises:
when it is detected that the second shooting button is triggered, shooting the face of the active user through a camera to obtain a first face image;
identifying the facial features of the active user from the first face image.
6. The method according to claim 1, characterized in that the generating the first three-dimensional face model of the active user based on the hair style feature and the facial features comprises:
obtaining an identity feature of the active user based on the three-dimensional model creation interface;
determining a standard three-dimensional model corresponding to the identity feature according to the identity feature of the active user;
mapping the hair style feature and the facial features into the standard three-dimensional model to obtain the first three-dimensional face model of the active user.
7. The method according to any one of claims 1-6, characterized in that, after the generating the first three-dimensional face model of the active user based on the hair style feature and the facial features, the method further comprises:
obtaining a third face image of the active user while the active user is shooting a video;
identifying facial expression parameters of the active user from the third face image based on a facial expression recognition model;
driving the first three-dimensional face model to display a virtual expression corresponding to the third face image, based on the facial expression parameters.
8. A device for generating a three-dimensional face model, characterized in that the device comprises:
a display module, configured to display a three-dimensional model creation interface when a generation instruction for a three-dimensional face model is detected;
a first obtaining module, configured to obtain a hair style feature of an active user based on the three-dimensional model creation interface, and to obtain facial features of the active user based on the three-dimensional model creation interface;
a generation module, configured to generate a first three-dimensional face model of the active user based on the hair style feature and the facial features.
9. A terminal, characterized in that the terminal comprises:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
when a generation instruction for a three-dimensional face model is detected, display a three-dimensional model creation interface;
obtain a hair style feature of an active user based on the three-dimensional model creation interface, and obtain facial features of the active user based on the three-dimensional model creation interface;
generate a first three-dimensional face model of the active user based on the hair style feature and the facial features.
10. A non-transitory computer-readable storage medium, characterized in that, when the instructions in the storage medium are executed by a processor of a terminal, the terminal is enabled to perform a method for generating a three-dimensional face model, the method comprising:
when a generation instruction for a three-dimensional face model is detected, displaying a three-dimensional model creation interface;
obtaining a hair style feature of an active user based on the three-dimensional model creation interface, and obtaining facial features of the active user based on the three-dimensional model creation interface;
generating a first three-dimensional face model of the active user based on the hair style feature and the facial features.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910113530.XA CN109857311A (en) | 2019-02-14 | 2019-02-14 | Generate method, apparatus, terminal and the storage medium of human face three-dimensional model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109857311A true CN109857311A (en) | 2019-06-07 |
Family
ID=66897930
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910113530.XA Pending CN109857311A (en) | 2019-02-14 | 2019-02-14 | Generate method, apparatus, terminal and the storage medium of human face three-dimensional model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109857311A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110339570A (en) * | 2019-07-17 | 2019-10-18 | 网易(杭州)网络有限公司 | Exchange method, device, storage medium and the electronic device of information |
CN110557625A (en) * | 2019-09-17 | 2019-12-10 | 北京达佳互联信息技术有限公司 | live virtual image broadcasting method, terminal, computer equipment and storage medium |
CN110755847A (en) * | 2019-10-30 | 2020-02-07 | 腾讯科技(深圳)有限公司 | Virtual operation object generation method and device, storage medium and electronic device |
CN110766777A (en) * | 2019-10-31 | 2020-02-07 | 北京字节跳动网络技术有限公司 | Virtual image generation method and device, electronic equipment and storage medium |
CN110782515A (en) * | 2019-10-31 | 2020-02-11 | 北京字节跳动网络技术有限公司 | Virtual image generation method and device, electronic equipment and storage medium |
CN110796721A (en) * | 2019-10-31 | 2020-02-14 | 北京字节跳动网络技术有限公司 | Color rendering method and device of virtual image, terminal and storage medium |
CN110827378A (en) * | 2019-10-31 | 2020-02-21 | 北京字节跳动网络技术有限公司 | Virtual image generation method, device, terminal and storage medium |
CN111638784A (en) * | 2020-05-26 | 2020-09-08 | 浙江商汤科技开发有限公司 | Facial expression interaction method, interaction device and computer storage medium |
US11380037B2 (en) | 2019-10-30 | 2022-07-05 | Tencent Technology (Shenzhen) Company Limited | Method and apparatus for generating virtual operating object, storage medium, and electronic device |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101488234A (en) * | 2009-03-02 | 2009-07-22 | 中山大学 | Facial expression animation synthesizing method based on muscle model |
CN102663820A (en) * | 2012-04-28 | 2012-09-12 | 清华大学 | Three-dimensional head model reconstruction method |
CN104063890A (en) * | 2013-03-22 | 2014-09-24 | 中国移动通信集团福建有限公司 | Method for cartooning human face and system thereof |
CN106372333A (en) * | 2016-08-31 | 2017-02-01 | 北京维盛视通科技有限公司 | Method and device for displaying clothes based on face model |
CN107077750A (en) * | 2014-12-11 | 2017-08-18 | 英特尔公司 | Incarnation selection mechanism |
CN107274493A (en) * | 2017-06-28 | 2017-10-20 | 河海大学常州校区 | A kind of three-dimensional examination hair style facial reconstruction method based on mobile platform |
CN108513089A (en) * | 2017-02-24 | 2018-09-07 | 腾讯科技(深圳)有限公司 | The method and device of group's video session |
CN108629834A (en) * | 2018-05-09 | 2018-10-09 | 华南理工大学 | A kind of three-dimensional hair method for reconstructing based on single picture |
CN108898068A (en) * | 2018-06-06 | 2018-11-27 | 腾讯科技(深圳)有限公司 | A kind for the treatment of method and apparatus and computer readable storage medium of facial image |
CN109191505A (en) * | 2018-08-03 | 2019-01-11 | 北京微播视界科技有限公司 | Static state generates the method, apparatus of human face three-dimensional model, electronic equipment |
Non-Patent Citations (1)
Title |
---|
宋连党: "一分钟做卡通头像", 《电脑爱好者》 * |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110339570A (en) * | 2019-07-17 | 2019-10-18 | 网易(杭州)网络有限公司 | Exchange method, device, storage medium and the electronic device of information |
CN110557625A (en) * | 2019-09-17 | 2019-12-10 | 北京达佳互联信息技术有限公司 | live virtual image broadcasting method, terminal, computer equipment and storage medium |
CN110755847B (en) * | 2019-10-30 | 2021-03-16 | 腾讯科技(深圳)有限公司 | Virtual operation object generation method and device, storage medium and electronic device |
CN110755847A (en) * | 2019-10-30 | 2020-02-07 | 腾讯科技(深圳)有限公司 | Virtual operation object generation method and device, storage medium and electronic device |
US11380037B2 (en) | 2019-10-30 | 2022-07-05 | Tencent Technology (Shenzhen) Company Limited | Method and apparatus for generating virtual operating object, storage medium, and electronic device |
CN110766777A (en) * | 2019-10-31 | 2020-02-07 | 北京字节跳动网络技术有限公司 | Virtual image generation method and device, electronic equipment and storage medium |
CN110827378A (en) * | 2019-10-31 | 2020-02-21 | 北京字节跳动网络技术有限公司 | Virtual image generation method, device, terminal and storage medium |
CN110796721A (en) * | 2019-10-31 | 2020-02-14 | 北京字节跳动网络技术有限公司 | Color rendering method and device for a virtual image, terminal and storage medium |
CN110782515A (en) * | 2019-10-31 | 2020-02-11 | 北京字节跳动网络技术有限公司 | Virtual image generation method and device, electronic equipment and storage medium |
CN110827378B (en) * | 2019-10-31 | 2023-06-09 | 北京字节跳动网络技术有限公司 | Virtual image generation method, device, terminal and storage medium |
CN110766777B (en) * | 2019-10-31 | 2023-09-29 | 北京字节跳动网络技术有限公司 | Method and device for generating virtual image, electronic equipment and storage medium |
CN111638784A (en) * | 2020-05-26 | 2020-09-08 | 浙江商汤科技开发有限公司 | Facial expression interaction method, interaction device and computer storage medium |
CN111638784B (en) * | 2020-05-26 | 2023-07-18 | 浙江商汤科技开发有限公司 | Facial expression interaction method, interaction device and computer storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109857311A (en) | Method, apparatus, terminal and storage medium for generating a three-dimensional face model | |
JP6616288B2 (en) | Method, user terminal, and server for information exchange in communication | |
CN105825486B (en) | Method and device for face beautification processing |
JP2024028390A (en) | An electronic device that generates an image including a 3D avatar that reflects facial movements using a 3D avatar that corresponds to the face. | |
CN109618184A (en) | Video processing method and device, electronic equipment and storage medium |
CN109872297A (en) | Image processing method and device, electronic equipment and storage medium | |
CN104580886B (en) | Filming control method and device | |
CN106506335B (en) | Method and device for sharing video files |
CN107172497A (en) | Live broadcasting method, apparatus and system | |
CN105447150B (en) | Music playing method, device and terminal device based on a face photo album |
CN107240143A (en) | Emoticon pack generation method and device |
CN106126017A (en) | Intelligent recognition method, device and terminal device |
WO2022198934A1 (en) | Method and apparatus for generating video synchronized to beat of music | |
CN105095917B (en) | Image processing method, device and terminal | |
EP4300431A1 (en) | Action processing method and apparatus for virtual object, and storage medium | |
CN109428859A (en) | Synchronous communication method, terminal and server |
CN109033423A (en) | Simultaneous interpretation caption presentation method and device, intelligent meeting method, apparatus and system | |
WO2021232875A1 (en) | Method and apparatus for driving digital person, and electronic device | |
CN110309327A (en) | Audio generation method and device, and audio generating apparatus |
CN105528080A (en) | Method and device for controlling mobile terminal | |
CN113014471A (en) | Session processing method, device, terminal and storage medium | |
CN109922252A (en) | Short video generation method and device, and electronic equipment |
CN114880062B (en) | Chat expression display method, device, electronic device and storage medium | |
CN109325908A (en) | Image processing method and device, electronic equipment and storage medium | |
CN112151041B (en) | Recording method, device, equipment and storage medium based on recorder program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190607 |