CN109829965A - Action processing method and device of face model, storage medium and electronic equipment - Google Patents

Action processing method and device of face model, storage medium and electronic equipment

Info

Publication number
CN109829965A
CN109829965A (application CN201910145480.3A)
Authority
CN
China
Prior art keywords
face model
action
local model
face
feature information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910145480.3A
Other languages
Chinese (zh)
Other versions
CN109829965B (en)
Inventor
王宇鹭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910145480.3A
Publication of CN109829965A
Application granted
Publication of CN109829965B
Status: Active (current)

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present application disclose an action processing method and apparatus for a face model, a storage medium, and an electronic device. The method includes: obtaining facial expression feature information of a target object; determining, according to the facial expression feature information, an expression action for each local model in a face model of the target object; and controlling each corresponding local model according to the expression action of that local model, so as to form the expression action of the face model. By collecting the facial expression feature information of the target object and controlling the expression actions of the individual local models in the face model, the embodiments of the present application form the same expression actions as the target object, enriching the face model's expressions and improving their personalization and distinctiveness.

Description

Action processing method and device of face model, storage medium and electronic equipment
Technical field
Embodiments of the present application relate to the technical field of electronic devices, and in particular to an action processing method and device for a face model, a storage medium, and an electronic device.
Background
Three-dimensional modeling is one of the most valuable applications of computer graphics, and the three-dimensional models it produces are widely used across many different fields.
Current three-dimensional models are generally static, or perform only fixed and simple actions, and therefore cannot give users an immersive feeling. Moreover, existing face models use fixed expression patterns when they move, so every face looks the same.
Summary of the invention
Embodiments of the present application provide an action processing method and apparatus for a face model, a storage medium, and an electronic device, which enhance the expression actions of a face model.
In a first aspect, an embodiment of the present application provides an action processing method for a face model, including:
obtaining facial expression feature information of a target object;
determining, according to the facial expression feature information, an expression action for each local model in a face model of the target object; and
controlling each corresponding local model according to the expression action of that local model, so as to form the expression action of the face model.
In a second aspect, an embodiment of the present application provides an action processing apparatus for a face model, including:
an expression feature information obtaining module, configured to obtain facial expression feature information of a target object;
an expression action determining module, configured to determine, according to the facial expression feature information, an expression action for each local model in a face model of the target object; and
an expression control module, configured to control each corresponding local model according to the expression action of that local model, so as to form the expression action of the face model.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the action processing method for a face model described in the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides an electronic device including a memory, a processor, and a computer program stored in the memory and executable by the processor, where the processor, when executing the computer program, implements the action processing method for a face model described in the embodiments of the present application.
In the action processing method for a face model provided in the embodiments of the present application, facial expression feature information of a target object is obtained; an expression action is determined for each local model in the face model of the target object according to the facial expression feature information; and each corresponding local model is controlled according to its expression action, so as to form the expression action of the face model. By collecting the facial expression feature information of the target object and controlling the expression actions of the individual local models, the embodiments of the present application form the same expression actions as the target object, enriching the face model's expressions and improving their personalization and distinctiveness.
Brief Description of the Drawings
Fig. 1 is a schematic flowchart of an action processing method for a face model provided by an embodiment of the present application;
Fig. 2 is a schematic flowchart of another action processing method for a face model provided by an embodiment of the present application;
Fig. 3 is a schematic flowchart of another action processing method for a face model provided by an embodiment of the present application;
Fig. 4 is a schematic flowchart of another action processing method for a face model provided by an embodiment of the present application;
Fig. 5 is a structural schematic diagram of an action processing apparatus for a face model provided by an embodiment of the present application;
Fig. 6 is a structural schematic diagram of an electronic device provided by an embodiment of the present application;
Fig. 7 is a structural schematic diagram of another electronic device provided by an embodiment of the present application.
Detailed Description
The technical solutions of the present application are further described below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein only explain the present application and do not limit it. It should also be noted that, for ease of description, the accompanying drawings show only the parts relevant to the present application rather than the entire structure.
Before the exemplary embodiments are discussed in greater detail, it should be mentioned that some of them are described as processes or methods depicted as flowcharts. Although a flowchart describes the steps as a sequential process, many of the steps may be performed in parallel, concurrently, or simultaneously, and the order of the steps may be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, and so on.
Fig. 1 is a schematic flowchart of an action processing method for a face model provided by an embodiment of the present application. The method may be executed by an action processing apparatus for a face model, where the apparatus may be implemented in software and/or hardware and is generally integrated in an electronic device. As shown in Fig. 1, the method includes:
Step 101: obtain facial expression feature information of a target object.
Step 102: determine, according to the facial expression feature information, an expression action for each local model in a face model of the target object.
Step 103: control each corresponding local model according to the expression action of that local model in the face model, so as to form the expression action of the face model.
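For orientation only, the three steps above can be read as a small pipeline. The following Python sketch is illustrative, not part of the patent disclosure; every helper name (extract_expression_features, determine_local_action, apply_action) is an assumption:

```python
# Minimal sketch of the three-step pipeline (steps 101-103).
# All helper names are hypothetical illustrations, not APIs from the patent.

def process_face_model_action(face_image, face_model):
    # Step 101: obtain facial expression feature information of the target object.
    features = extract_expression_features(face_image)

    # Step 102: determine an expression action for each local model
    # (eyes, mouth, cheeks, ...) from the extracted features.
    actions = {name: determine_local_action(name, features)
               for name in face_model.local_models}

    # Step 103: drive each local model with its action so the whole
    # face model reproduces the target object's expression.
    for name, action in actions.items():
        face_model.local_models[name].apply_action(action)
    return face_model
```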
For example, the electronic device in the embodiments of the present application may include smart devices such as a mobile phone, a tablet computer, and a computer.
The target object is the expression reference object of the model. For example, the facial expression feature information of user A may be obtained, and the three-dimensional face model of user A may be controlled to form the same expression as user A, which enhances the personalization and accuracy of the face model's expression actions and avoids stereotyped model expressions.
Optionally, obtaining the facial expression feature information of the target object includes: obtaining a face image of the target object; identifying each key region in the face image, where the key regions include facial-feature regions (eyebrows, eyes, nose, mouth, ears) and cheek regions; and extracting the expression feature information of each key region from the face image. The face image, which contains depth information, may be obtained with a device such as a dual camera, a structured-light camera, or a depth camera; it may be collected in real time or pre-stored in the electronic device. Image recognition is performed on the obtained face image to determine its key regions, which may include a left-eye region, a right-eye region, a left-eyebrow region, a right-eyebrow region, a nose region, a mouth region, a left-ear region, a right-ear region, a left-cheek region, and a right-cheek region. Optionally, key-region identification may be performed by a pre-trained neural network model that has the function of dividing a face into key regions: the face image is input to the pre-trained neural network model, and the output is the division of the face image into key regions.
After the key regions in the face image are identified, the expression feature information of each key region is determined. Here the expression feature information of each key region is static expression feature information, which may be composed of the spatial position information of the feature points within the key region. For example, the feature points of the mouth region may be, but are not limited to, the left and right mouth corners, the center of the upper lip, and the center of the lower lip; determining the spatial positions of these feature points determines the shape of the mouth region and hence forms the expression of the mouth region. The spatial position information of a feature point may include horizontal position information and depth information.
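As an illustration only, static expression features could be gathered as the 3D positions of a few mouth landmarks. The landmark indices, the detector output format, and the pinhole intrinsics (fx, fy, cx, cy) below are assumptions, not specified by the patent:

```python
import numpy as np

# Hypothetical landmark indices for the mouth region; real indices
# depend on whichever landmark detector is used.
MOUTH_POINTS = {"left_corner": 0, "right_corner": 1,
                "upper_lip_center": 2, "lower_lip_center": 3}

def static_mouth_features(landmarks_2d, depth_map, fx, fy, cx, cy):
    """Lift 2D mouth landmarks to 3D using per-pixel depth.

    landmarks_2d: (N, 2) pixel coordinates from a landmark detector.
    depth_map:    H x W depth image aligned with the color image.
    fx, fy, cx, cy: assumed pinhole intrinsics of the capture device.
    """
    features = {}
    for name, idx in MOUTH_POINTS.items():
        u, v = landmarks_2d[idx]
        z = depth_map[int(v), int(u)]          # depth at the landmark
        x = (u - cx) * z / fx                  # back-project to camera space
        y = (v - cy) * z / fy
        features[name] = np.array([x, y, z])
    return features
```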
Optionally, obtaining the facial expression feature information of the target object may also include: obtaining a face video of the target object; identifying each key region in the face video, where the key regions include facial-feature regions and cheek regions; and extracting the expression feature information of each key region from the face video. In this case, the expression feature information of each key region is dynamic expression feature information. In this embodiment, the key regions in each video frame are identified in turn, the expression feature information of each key region in each frame is determined, and the successive per-frame expression features, ordered by frame, form the dynamic expression feature information.
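A minimal sketch of that per-frame accumulation, reusing the static extractor above; detect_landmarks is a hypothetical helper, not part of the patent:

```python
def dynamic_mouth_features(frames, depths, timestamps, intrinsics):
    """Collect per-frame static features into a time-ordered sequence.

    frames/depths/timestamps are parallel lists for the face video;
    intrinsics = (fx, fy, cx, cy). detect_landmarks is hypothetical.
    """
    sequence = []
    for frame, depth, t in zip(frames, depths, timestamps):
        lm = detect_landmarks(frame)                       # 2D landmarks
        feats = static_mouth_features(lm, depth, *intrinsics)
        sequence.append({"t": t, "features": feats})       # keep frame order
    return sequence
```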
The face model of the target object is a three-dimensional face model created from the face feature information of the target object. The local models in the face model correspond one-to-one to the face regions of the target object, and the face model contains at least facial-feature models and cheek models; for example, the left-eye model in the face model corresponds to the left-eye region of the target object, and the mouth model corresponds to the mouth region. The expression action of each local model is determined from the facial expression feature information of the corresponding face region; for example, the expression actions of the mouth model may include, but are not limited to, smiling, laughing with an open mouth, guffawing, smirking, pouting, and sticking the tongue out. If the facial expression feature information is static expression feature information, a static expression action of the corresponding local model can be determined; if it is dynamic expression feature information, a dynamic expression action of the corresponding local model can be determined. It should be noted that the expression actions of the local models determined in this embodiment carry the expression parameters of the target object (such as the spatial positions of the feature points) rather than a uniform expression pattern, which improves the personalization and distinctiveness of the face model's expressions.
After the expression action of each local model is determined, the corresponding local model is controlled to complete the expression action, so that the same expression as the target object is formed on the face model, which enriches the expression actions of the face model and improves its usability. For example, the face model formed by the embodiments of the present application can display the target user's rich expression actions. In a video call, showing the three-dimensional form and expression of the target object through the three-dimensional model gives the other party an immersive sense of reality. In an electronic device such as a virtual fitting mirror, when goods such as cosmetics are tried on via a three-dimensional face model, the model can display the same expression as the target object, improving the realism of the fitting.
In the action processing method for a face model provided in the embodiments of the present application, facial expression feature information of a target object is obtained, an expression action is determined for each local model in the face model of the target object according to the facial expression feature information, and each corresponding local model is controlled according to its expression action, so as to form the expression action of the face model. By collecting the facial expression feature information of the target object and controlling the expression actions of the local models in the face model, the above scheme forms the same expression actions as the target object, enriching the face model's expressions and improving their personalization and distinctiveness.
Fig. 2 is a schematic flowchart of another action processing method for a face model provided by an embodiment of the present application. Referring to Fig. 2, the method of this embodiment includes the following steps:
Step 201: obtain a face image of the target object.
Step 202: identify each key region in the face image, where the key regions include facial-feature regions and cheek regions.
Step 203: extract the static expression feature information of each key region from the face image.
Step 204: compare the static expression feature information of any key region with at least one expression action mode of the corresponding local model, and determine the current expression action mode of that local model according to the comparison result.
Step 205: identify, in the static expression feature information of the key region, the spatial position information of at least one feature point of the key region.
Step 206: control the corresponding local model according to the spatial position information of the at least one feature point of the key region and the current expression action mode of that local model, so as to form the static expression action of the face model.
For each local model, multiple expression action modes are pre-stored in the electronic device. Each expression action mode is traversed, the similarity between the static expression feature information of the local model's key region and each expression action mode is determined, and the current expression action mode of the local model is determined from the similarities; for example, the mode with the highest similarity may be chosen as the current expression action mode. The static expression feature information of a key region may be the image of the key region or its expression parameters (such as the spatial positions of the feature points in the key-region image). Specifically, after the key regions in the face image are identified, the face image may be segmented by key region, and the segmented key-region images matched against the expression action modes of the corresponding local models to determine the similarity; alternatively, the expression parameters of a key region may be matched against the expression parameters in the expression action modes of the corresponding local model to determine the similarity.
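A minimal sketch of the mode-selection step, assuming each stored mode carries reference feature-point positions; the distance metric and the data layout are illustrative assumptions:

```python
import numpy as np

def select_expression_mode(region_features, stored_modes):
    """Pick the pre-stored expression action mode most similar to the
    observed static features of one key region.

    region_features: dict mapping feature-point name -> 3D position.
    stored_modes:    dict mapping mode name (e.g. "smile", "pout") ->
                     reference dict with the same feature-point names.
    """
    def similarity(ref):
        # Illustrative metric: negative mean distance between
        # corresponding feature points (higher is more similar).
        dists = [np.linalg.norm(region_features[k] - ref[k]) for k in ref]
        return -float(np.mean(dists))

    # Traverse every stored mode and keep the highest-similarity one.
    return max(stored_modes, key=lambda name: similarity(stored_modes[name]))
```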
In this embodiment, multiple feature points are stored for each key region. The feature points of a key region may be set according to user demand, i.e., selected in advance by the user within the key region; for example, the feature points of the mouth region may be, but are not limited to, the left and right mouth corners, the center of the upper lip, and the center of the lower lip. The feature points may also be obtained by performing edge recognition on the key region and selecting points at intervals along the recognized edge, with more points selected at edge corners (turning points) and fewer points in smooth edge regions. For any key region, the more feature points there are, the higher the precision of expression control; correspondingly, the fewer the feature points, the lower the precision.
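The interval-based selection along a recognized edge could look like the following sketch, assuming precomputed per-point curvature values; the step rule is an illustrative assumption, not taken from the patent:

```python
def sample_feature_points(edge_points, curvatures, base_step=8):
    """Pick feature points along a key region's recognized edge,
    denser where the edge turns sharply, sparser where it is smooth.

    edge_points: ordered list of points along the edge.
    curvatures:  per-point curvature estimates (same length, assumed).
    """
    selected = []
    i = 0
    while i < len(edge_points):
        selected.append(edge_points[i])
        # Smaller step (more points) at high-curvature corners.
        step = max(1, int(base_step / (1.0 + 5.0 * curvatures[i])))
        i += step
    return selected
```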
In this embodiment, the face image includes the spatial position information of each feature point. Specifically, the spatial position of each feature point of the target object may be determined from the horizontal position information and depth information of each feature point in the face image together with the distance between the face-image capture device and the target object; or it may be determined from the horizontal position information and depth information of each feature point together with the scale ratio between the face image and the face model.
For any local model in the face model, the local model is set to its current expression action mode, and a spatial position difference is determined between the spatial position of each feature point in the corresponding key region and the spatial position of that feature point in the local model under the current expression action mode; the corresponding feature points are then adjusted according to the spatial position differences, so as to form the same static expression as the target object. In this embodiment, since the expression action modes are highly uniform, adjusting the current expression action mode by the spatial positions of the feature points in the face image improves the personalization and distinctiveness of the static expression.
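The adjustment step might look like the following sketch; the set_mode/control_points interface is an assumption, and a real face model would likely map feature points onto mesh control points or blendshape weights:

```python
def refine_local_model(local_model, mode_name, observed_features):
    """Set a local model to its matched expression action mode, then
    nudge each control point by the observed-vs-mode position difference.

    local_model:       object exposing set_mode() and control points
                       keyed by feature-point name (assumed interface).
    observed_features: dict feature-point name -> observed 3D position.
    """
    local_model.set_mode(mode_name)  # uniform base expression
    for name, observed_pos in observed_features.items():
        mode_pos = local_model.control_points[name]
        delta = observed_pos - mode_pos        # spatial position difference
        # Applying the full difference lands on the observed position,
        # personalizing the uniform mode to this target object.
        local_model.control_points[name] = mode_pos + delta
```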
It should be noted that, in some embodiments, steps 204-206 may instead be: identifying, in the static expression feature information of any key region, the spatial position information of at least one feature point of that key region; and adjusting the spatial positions of the feature points in the corresponding local model according to the spatial position information of the at least one feature point, so that the feature points of the local model have the same spatial positions, or the same position ratios, as the corresponding feature points of the key region, thereby forming the static expression action of the face model.
In the action processing method for a face model provided in this embodiment, the current expression action mode of each local model and the spatial positions of the feature points of the corresponding key regions in the face image are determined; after each local region's model is set to its current expression action mode, its parameters are adjusted based on the spatial positions of the feature points, forming a personalized static expression of the target object. This improves the personalization, distinctiveness, and usability of the face model.
Fig. 3 is a schematic flowchart of another action processing method for a face model provided by an embodiment of the present application. This embodiment is an optional variant of the above embodiments. As shown in Fig. 3, the method of this embodiment includes the following steps:
Step 301: obtain a face video of the target object.
Step 302: identify each key region in the face video, where the key regions include facial-feature regions and cheek regions.
Step 303: extract the dynamic expression feature information of each key region from the face video.
Step 304: identify, from the dynamic expression feature information of any key region, the spatial position change trajectory of at least one feature point in that key region.
Step 305: determine the spatial position change trajectory of the at least one feature point in the key region as the expected spatial position change trajectory of the corresponding feature point of the corresponding local model.
Step 306: for any local model, control the spatial positions of the corresponding feature points according to the expected spatial position change trajectories of the feature points, so as to form the dynamic expression action of the face model.
The face video may be pre-stored in the electronic device or collected in real time. For example, the face video of the target object may be captured by a device such as a dual camera, a structured-light camera, or a depth camera, and the captured video data sent to the electronic device at fixed intervals; the video capture device may be built into the electronic device or provided in another electronic device.
In this embodiment, the face region may first be cropped from each frame of the video shot of the target object to obtain the face video, and the key regions identified in each face video frame. The dynamic expression feature information of a key region includes the spatial position information of each feature point in that region; from the continuity of the video frames, the continuously changing spatial positions of each feature point are obtained, yielding the feature point's spatial position change trajectory. A local model in the face model contains the same feature points as its corresponding key region in the face image, so the spatial position change trajectory of a feature point in the key region is determined as the expected spatial position change trajectory of the corresponding feature point in the local model. The expected spatial position change trajectory contains the spatial position of each feature point in the local model as it changes over time, i.e., timestamps and the spatial position corresponding to each timestamp. Optionally, the expected trajectory contains multiple data packets, where each data packet holds the spatial positions of all feature points at the same timestamp; correspondingly, controlling the spatial positions of the corresponding feature points according to the expected trajectories may consist of reading the data packets in time order and adjusting the feature points of the local model according to the spatial positions in each packet, forming the dynamic expression action.
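A minimal sketch of that packet replay, assuming the data-packet layout described above (a timestamp plus per-feature-point positions) and the hypothetical control-point interface used earlier:

```python
import time

def replay_trajectory(local_model, packets):
    """Drive a local model along an expected spatial position change
    trajectory.

    packets: list of {"t": timestamp_seconds,
                      "positions": {feature_point_name: 3D position}},
             assumed sorted by timestamp.
    """
    start = time.monotonic()
    for packet in packets:                     # read packets in time order
        # Wait until this packet's timestamp is due.
        delay = packet["t"] - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        # Move every feature point to its position for this timestamp.
        for name, pos in packet["positions"].items():
            local_model.control_points[name] = pos
```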
In the action processing method for a face model provided in this embodiment, a face video is collected to obtain the expected spatial position change trajectory of each feature point of each local model in the face model, and the spatial positions of the feature points in the face model are controlled accordingly, forming the same dynamic expression as the target object. This enriches the expression actions and realism of the face model and improves its usability.
Fig. 4 is a schematic flowchart of another action processing method for a face model provided by an embodiment of the present application. This embodiment is an optional variant of the above embodiments. As shown in Fig. 4, the method of this embodiment includes the following steps:
Step 401: obtain face feature information of the target object, and create a face model of the target object according to the face feature information.
Step 402: obtain facial expression feature information of the target object.
Step 403: determine, according to the facial expression feature information, an expression action for each local model in the face model of the target object.
Step 404: control each corresponding local model according to the expression action of that local model, so as to form the expression action of the face model.
A face video or face image of the target object may be obtained, and face recognition performed on the face image or on at least one frame of the video to determine the face feature information. The face feature information may include structural parameters of each face region; the face regions may include, but are not limited to, eyebrows, eyes, nose, mouth, ears, and cheeks, and the structural parameters may include, but are not limited to, length, diameter, color, position, and depth. Optionally, the face video or face image is obtained with a device such as a dual camera, a structured-light camera, or a depth camera; by performing face recognition on the video or image, together with the distance between the device and the target object, the face feature information of the target object can be determined.
In this embodiment, creating the face model of the target object from the face feature information may consist of creating the local models one by one from the feature information of each face region — for example, a facial contour model, an eyebrow model, an eye model, a nose model, a mouth model, an ear model, and a cheek model — and combining these local models into the face model of the target object. Alternatively, the face model may be obtained by adjusting the parameters of an already created model (for example, another user's face model or a historical face model of the target object) according to the face feature information. For example, creating the face model of the target object according to the face feature information includes: matching the face feature information in a model database to determine a benchmark face model of the target object, where the model database includes at least one created model; determining, according to the face feature information, a local model to be adjusted in the benchmark face model; determining a standard local model according to the feature information of the target object corresponding to the local model to be adjusted; and updating the local model to be adjusted in the benchmark face model with the standard local model, to generate the face model of the target object.
A model database is provided in the electronic device, which stores historical face models that have been created. The face feature information is parameter-matched against the created models stored in the model database to determine the similarity between the face feature information and each created model, and the benchmark face model of the target object is determined from the similarities. Specifically, determining the similarity between the face feature information and a created model may include: dividing the face feature information into multiple local feature information items according to the local regions of the target object's face; matching each local feature information item against the corresponding local model in the created model, and counting the successfully matched local models; and generating the similarity between the face feature information and the created model from the number of successfully matched local models, where the similarity is positively correlated with that number. Correspondingly, the created model with the highest similarity may be determined as the benchmark face model of the target object.
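A minimal sketch of that database lookup, under the assumptions that each created model exposes per-region local models and that a hypothetical matches() predicate decides whether a local feature item matches a stored local model:

```python
def select_benchmark_model(local_features, model_database):
    """Choose the created model whose local models match the most
    local feature items of the target face.

    local_features: dict region name (e.g. "mouth") -> feature item.
    model_database: list of created models, each with .local_models
                    keyed by the same region names (assumed layout).
    """
    def similarity(model):
        # Count successfully matched local models; similarity is
        # positively correlated with this count.
        return sum(1 for region, item in local_features.items()
                   if region in model.local_models
                   and matches(item, model.local_models[region]))

    return max(model_database, key=similarity)
```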
Optionally, determining the standard local model according to the feature information of the target object corresponding to the local model to be adjusted may consist of: determining a parameter adjustment value from the local feature information of the local model to be adjusted and the local feature information of the target object, adjusting the local model to be adjusted based on that value, and updating it to the standard local model; or modeling from the feature information of the target object corresponding to the local model to be adjusted, to generate the standard local model; or matching the feature information of the target object corresponding to the local model to be adjusted in the model database, to determine a standard local model that matches the local feature information of the target object.
In this embodiment, the benchmark face model of the target object is determined by similarity, and parameters are adjusted on the basis of an existing model. Created models are fully reused, which improves the efficiency of face model creation, avoids repeatedly creating identical local models, and simplifies the creation process.
After the face model is created, the expression feature information of the target object is collected in real time, and the face model is controlled according to it to form and display the same expression actions. For example, during a video call between user A and user B, user A's first electronic device (the video capture device built into the first electronic device, or a video capture device associated with it) collects user A's face video and sends it to user B's second electronic device. The second electronic device creates a three-dimensional face model of user A from the face feature information in user A's face video and, according to the facial expression feature information in the video, controls the three-dimensional face model to form the same expression actions, so that user B can see user A's accurate, humanized expression actions on the electronic device in real time. Correspondingly, user B's second electronic device (its built-in or associated video capture device) collects user B's face video and sends it to user A's first electronic device, which likewise forms a three-dimensional face model of user B with expression actions. This improves the realism of the video call and the user experience.
In the action processing method for a face model provided in the embodiments of the present application, the facial expression feature information of the target object is collected and the expression actions of the local models in the face model are controlled, forming the same expression actions as the target object, which enriches the face model's expressions and improves their personalization and distinctiveness.
Fig. 5 is a structural block diagram of an action processing apparatus for a face model provided by an embodiment of the present application. The apparatus may be implemented in software and/or hardware, is generally integrated in an electronic device, and can control the expression of a face model by executing the electronic device's action processing method for a face model. As shown in Fig. 5, the apparatus includes an expression feature information obtaining module 501, an expression action determining module 502, and an expression control module 503.
The expression feature information obtaining module 501 is configured to obtain facial expression feature information of a target object.
The expression action determining module 502 is configured to determine, according to the facial expression feature information, an expression action for each local model in a face model of the target object.
The expression control module 503 is configured to control each corresponding local model according to the expression action of that local model, so as to form the expression action of the face model.
The action processing apparatus for a face model provided in the embodiments of the present application collects the facial expression feature information of the target object and controls the expression actions of the local models in the face model, forming the same expression actions as the target object, which enriches the face model's expressions and improves their personalization and distinctiveness.
On the basis of the above embodiments, the expression feature information obtaining module 501 is configured to:
obtain a face image or face video of the target object;
identify each key region in the face image or face video, where the key regions include facial-feature regions and cheek regions; and
extract the expression feature information of each key region from the face image or the face video.
On the basis of the above embodiments, the expression feature information of each key region includes static expression feature information or dynamic expression feature information.
On the basis of the above embodiments, the expression action determining module 502 includes:
an expression action mode matching unit, configured to compare the static expression feature information of any key region with at least one expression action mode of the corresponding local model; and
a current expression action mode determining unit, configured to determine the current expression action mode of the corresponding local model according to the comparison result.
On the basis of the above embodiments, the expression control module 503 includes:
a spatial position information determining unit, configured to identify, in the static expression feature information of any key region, the spatial position information of at least one feature point of that key region; and
a first expression action processing module, configured to control the corresponding local model according to the spatial position information of the at least one feature point of the key region and the current expression action mode of that local model, so as to form the static expression action of the face model.
On the basis of the above embodiments, the expression action determining module 502 includes:
a spatial position change trajectory determining unit, configured to identify, according to the dynamic expression feature information of any key region, the spatial position change trajectory of at least one feature point in that key region; and
an expected spatial position change trajectory determining unit, configured to determine the spatial position change trajectory of the at least one feature point in the key region as the expected spatial position change trajectory of the corresponding feature point of the corresponding local model.
On the basis of the above embodiments, the expected spatial position change trajectory contains the spatial position information of each feature point in the local model as it changes over time.
The expression control module 503 is configured to:
for any local model, control the spatial positions of the corresponding feature points according to the expected spatial position change trajectories of the feature points, so as to form the dynamic expression action of the face model.
On the basis of the above embodiments, the apparatus further includes:
a face feature information obtaining module, configured to obtain face feature information of the target object; and
a face model creation module, configured to create the face model of the target object according to the face feature information.
On the basis of the above embodiments, the face model creation module is configured to:
match the face feature information in a model database to determine a benchmark face model of the target object, where the model database includes at least one created model;
determine, according to the face feature information, a local model to be adjusted in the benchmark face model;
determine a standard local model according to the feature information of the target object corresponding to the local model to be adjusted; and
update the local model to be adjusted in the benchmark face model with the standard local model, to generate the face model of the target object.
An embodiment of the present application also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform an action processing method for a face model, the method including:
obtaining facial expression feature information of a target object;
determining, according to the facial expression feature information, an expression action for each local model in a face model of the target object; and
controlling each corresponding local model according to the expression action of that local model, so as to form the expression action of the face model.
Storage medium — any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media, such as CD-ROMs, floppy disks, or tape devices; computer system memory or random access memory, such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, and the like; non-volatile memory, such as flash memory or magnetic media (e.g., a hard disk or optical storage); registers or other similar types of memory elements, and so on. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in a first computer system in which the program is executed, or in a different, second computer system connected to the first computer system through a network (such as the Internet); the second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations (e.g., in different computer systems connected through a network). The storage medium may store program instructions executable by one or more processors (e.g., implemented as computer programs).
Of course, in the storage medium containing computer-executable instructions provided by the embodiments of the present application, the computer-executable instructions are not limited to the action processing operations for a face model described above, and may also perform related operations in the action processing method for a face model provided by any embodiment of the present application.
An embodiment of the present application provides an electronic device in which the action processing apparatus for a face model provided by the embodiments of the present application may be integrated. Fig. 6 is a structural schematic diagram of an electronic device provided by an embodiment of the present application. The electronic device 600 may include a memory 601, a processor 602, and a computer program stored in the memory 601 and executable by the processor 602, where the processor 602, when executing the computer program, implements the action processing method for a face model described in the embodiments of the present application.
The electronic device provided by the embodiments of the present application collects the facial expression feature information of the target object and controls the expression actions of the local models in the face model, forming the same expression actions as the target object, which enriches the face model's expressions and improves their personalization and distinctiveness.
Fig. 7 is a structural schematic diagram of another electronic device provided by an embodiment of the present application. The electronic device may include a housing (not shown), a memory 701, a central processing unit (CPU) 702, a circuit board (not shown), and a power supply circuit (not shown). The circuit board is placed in the space enclosed by the housing; the CPU 702 and the memory 701 are arranged on the circuit board; the power supply circuit supplies power to each circuit or device of the electronic device; the memory 701 stores executable program code; and the CPU 702 runs a computer program corresponding to the executable program code by reading the executable program code stored in the memory 701, to perform the following steps:
obtaining facial expression feature information of a target object;
determining, according to the facial expression feature information, an expression action for each local model in a face model of the target object; and
controlling each corresponding local model according to the expression action of that local model, so as to form the expression action of the face model.
The electronic device further includes: a peripheral interface 703, an RF (radio frequency) circuit 705, an audio circuit 706, a speaker 711, a power management chip 708, an input/output (I/O) subsystem 709, other input/control devices 710, a touch screen 712, and an external port 704, which communicate through one or more communication buses or signal lines 707.
It should be understood that the illustrated electronic device 700 is only one example of an electronic device, and the electronic device 700 may have more or fewer components than shown in the figure, may combine two or more components, or may have a different configuration of components. The various components shown in the figure may be implemented in hardware, software, or a combination of hardware and software, including one or more signal-processing and/or application-specific integrated circuits.
The electronic device for action processing of a face model provided by this embodiment is described in detail below, taking a mobile phone as an example.
The memory 701 may be accessed by the CPU 702, the peripheral interface 703, and so on; it may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other volatile solid-state storage devices.
The peripheral interface 703 may connect the input and output peripherals of the device to the CPU 702 and the memory 701.
The I/O subsystem 709 may connect the input/output peripherals of the device, such as the touch screen 712 and the other input/control devices 710, to the peripheral interface 703. The I/O subsystem 709 may include a display controller 7091 and one or more input controllers 7092 for controlling the other input/control devices 710. The one or more input controllers 7092 receive electrical signals from, or send electrical signals to, the other input/control devices 710, which may include physical buttons (push buttons, rocker buttons, etc.), dials, slide switches, joysticks, and click wheels. Notably, an input controller 7092 may be connected to any of the following: a keyboard, an infrared port, a USB interface, or a pointing device such as a mouse.
The touch screen 712 is the input and output interface between the consumer electronic device and the user, displaying visual output to the user; the visual output may include graphics, text, icons, video, and so on.
The display controller 7091 in the I/O subsystem 709 receives electrical signals from, or sends electrical signals to, the touch screen 712. The touch screen 712 detects contact on the touch screen, and the display controller 7091 converts the detected contact into interaction with user-interface objects displayed on the touch screen 712, realizing human-computer interaction; the user-interface objects displayed on the touch screen 712 may be icons for running games, icons for connecting to corresponding networks, and so on. Notably, the device may also include an optical mouse, which is a touch-sensitive surface that does not display visual output, or an extension of the touch-sensitive surface formed by the touch screen.
The RF circuit 705 is mainly used to establish communication between the mobile phone and the wireless network (i.e., the network side), realizing sending and receiving between the phone and the wireless network, such as sending and receiving short messages, e-mails, and so on. Specifically, the RF circuit 705 receives and sends RF signals, also called electromagnetic signals; it converts electrical signals into electromagnetic signals or electromagnetic signals into electrical signals, and communicates with communication networks and other devices through the electromagnetic signals. The RF circuit 705 may include known circuits for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC (coder-decoder) chipset, a Subscriber Identity Module (SIM), and so on.
The audio circuit 706 is mainly used to receive audio data from the peripheral interface 703, convert the audio data into an electrical signal, and send the electrical signal to the speaker 711.
The speaker 711 is used to restore the voice signal received by the mobile phone from the wireless network through the RF circuit 705 to sound, and to play the sound to the user.
The power management chip 708 is used to supply power to, and manage the power of, the hardware connected through the CPU 702, the I/O subsystem, and the peripheral interface.
The action processing apparatus for a face model, the storage medium, and the electronic device provided in the above embodiments can execute the action processing method for a face model provided by any embodiment of the present application, and have the corresponding functional modules and beneficial effects of executing that method. For technical details not described in detail in the above embodiments, reference may be made to the action processing method for a face model provided by any embodiment of the present application.
Note that the above are only preferred embodiments of the present application and the technical principles applied. Those skilled in the art will understand that the present application is not limited to the specific embodiments described here, and that various obvious changes, readjustments, and substitutions can be made by those skilled in the art without departing from the protection scope of the present application. Therefore, although the present application has been described in further detail through the above embodiments, the present application is not limited to the above embodiments; without departing from the concept of the present application, it may also include many other equivalent embodiments, and the scope of the present application is determined by the scope of the appended claims.

Claims (12)

1. An action processing method for a face model, comprising:
obtaining facial expression feature information of a target object;
determining, according to the facial expression feature information, an expression action for each local model in a face model of the target object; and
controlling each corresponding local model according to the expression action of that local model in the face model, so as to form the expression action of the face model.
2. The method according to claim 1, wherein obtaining the facial expression feature information of the target object comprises:
obtaining a face image or face video of the target object;
identifying each key region in the face image or face video, wherein the key regions comprise facial-feature regions and cheek regions; and
extracting the expression feature information of each key region from the face image or the face video.
3. The method according to claim 2, wherein the expression feature information of each key region comprises static expression feature information or dynamic expression feature information.
4. The method according to claim 3, wherein determining, according to the facial expression feature information, the expression action for each local model in the face model of the target object comprises:
comparing the static expression feature information of any key region with at least one expression action mode of the corresponding local model; and
determining the current expression action mode of the corresponding local model according to the comparison result.
5. The method according to claim 4, wherein controlling each corresponding local model according to the expression action of that local model in the face model, so as to form the expression action of the face model, comprises:
identifying, in the static expression feature information of any key region, the spatial position information of at least one feature point of that key region; and
controlling the corresponding local model according to the spatial position information of the at least one feature point of the key region and the current expression action mode of the corresponding local model, so as to form the static expression action of the face model.
6. The method according to claim 3, wherein determining, according to the facial expression feature information, the expression action for each local model in the face model of the target object comprises:
identifying, according to the dynamic expression feature information of any key region, the spatial position change trajectory of at least one feature point in that key region; and
determining the spatial position change trajectory of the at least one feature point in the key region as the expected spatial position change trajectory of the corresponding feature point of the corresponding local model.
7. The method according to claim 6, wherein the expected spatial position change trajectory contains the spatial position information of each feature point in the local model as it changes over time, and controlling each corresponding local model according to the expression action of that local model in the face model, so as to form the expression action of the face model, comprises:
for any local model, controlling the spatial positions of the corresponding feature points according to the expected spatial position change trajectories of the feature points, so as to form the dynamic expression action of the face model.
8. The method according to claim 1, further comprising, before determining, according to the facial expression feature information, the expression action for each local model in the face model of the target object:
obtaining face feature information of the target object; and
creating the face model of the target object according to the face feature information.
9. The method according to claim 8, wherein creating the face model of the target object according to the face feature information comprises:
performing matching in a model database according to the face feature information to determine a reference face model of the target object, wherein the model database includes at least one created model;
determining a partial model to be adjusted in the reference face model according to the face feature information;
determining a standard partial model according to the feature information of the target object corresponding to the partial model to be adjusted; and
updating the partial model to be adjusted in the reference face model based on the standard partial model, to generate the face model of the target object.
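A minimal sketch of the model-creation flow in claim 9: match against a database of created models to pick a reference face model, then replace each partial model whose features deviate from the target object's own. Representing a face as one flat feature vector with per-region slices, and using a fixed deviation threshold, are simplifying assumptions, not the patent's method.

```python
import copy
import numpy as np

def create_face_model(face_features: np.ndarray, model_database: list,
                      threshold: float = 0.2) -> dict:
    """Pick the closest created model as the reference face model, then swap
    in standard partial models built from the target object's own features."""
    # 1) Match the face feature information against the model database.
    reference = min(model_database,
                    key=lambda m: np.linalg.norm(m["features"] - face_features))
    model = copy.deepcopy(reference)  # leave the database entry untouched
    # 2)-4) For each region, if the reference deviates too much, replace that
    # partial model's features with the target object's own.
    for region, sl in model["regions"].items():
        if np.linalg.norm(model["features"][sl] - face_features[sl]) > threshold:
            model["features"][sl] = face_features[sl]
    return model

db = [{"features": np.zeros(6),
       "regions": {"eyes": slice(0, 3), "mouth": slice(3, 6)}}]
target = np.array([0.0, 0.1, 0.0, 0.5, 0.4, 0.6])
print(create_face_model(target, db)["features"])  # eyes kept, mouth replaced
```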
10. An action processing apparatus for a face model, comprising:
an expression feature information obtaining module, configured to obtain facial expression feature information of a target object;
an expression action determining module, configured to determine the expression action of each partial model in the face model of the target object according to the facial expression feature information; and
an expression control module, configured to control the corresponding partial model according to the expression action of each partial model in the face model, to form the expression action of the face model.
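The apparatus of claim 10 maps naturally onto three cooperating software modules. Below is a structural sketch with capture and rendering stubbed out; the class and method names are invented here, not taken from the patent.

```python
class ActionProcessor:
    """Three modules of claim 10 wired in sequence: obtain expression features,
    determine per-partial-model expression actions, control the partial models."""

    def __init__(self, face_model: dict):
        self.face_model = face_model  # partial-model name -> control points

    # Expression feature information obtaining module
    def obtain_features(self, frame: dict) -> dict:
        # A real device would run landmark detection on a camera frame; here
        # the "frame" is already a dict of key-area feature points.
        return frame

    # Expression action determining module
    def determine_actions(self, features: dict) -> dict:
        return {area: {"mode": "from_features", "points": pts}
                for area, pts in features.items()}

    # Expression control module
    def control_models(self, actions: dict) -> None:
        for area, action in actions.items():
            self.face_model[area] = action["points"]

    def process(self, frame: dict) -> None:
        self.control_models(self.determine_actions(self.obtain_features(frame)))

proc = ActionProcessor({"mouth": None, "eyes": None})
proc.process({"mouth": [(-0.1, 0.0, 0.0), (0.1, 0.0, 0.0)]})
print(proc.face_model["mouth"])
```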
11. A computer-readable storage medium on which a computer program is stored, wherein when the program is executed by a processor, the action processing method for a face model according to any one of claims 1-9 is implemented.
12. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, the action processing method for a face model according to any one of claims 1-9 is implemented.
CN201910145480.3A 2019-02-27 2019-02-27 Action processing method and device of face model, storage medium and electronic equipment Active CN109829965B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910145480.3A CN109829965B (en) 2019-02-27 2019-02-27 Action processing method and device of face model, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN109829965A true CN109829965A (en) 2019-05-31
CN109829965B CN109829965B (en) 2023-06-27

Family

ID=66864622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910145480.3A Active CN109829965B (en) 2019-02-27 2019-02-27 Action processing method and device of face model, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN109829965B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180260994A1 (en) * 2016-03-10 2018-09-13 Tencent Technology (Shenzhen) Company Limited Expression animation generation method and apparatus for human face model
WO2018137455A1 (en) * 2017-01-25 2018-08-02 迈吉客科技(北京)有限公司 Image interaction method and interaction apparatus
CN107330371A (en) * 2017-06-02 2017-11-07 深圳奥比中光科技有限公司 Acquisition methods, device and the storage device of the countenance of 3D facial models
CN107368778A (en) * 2017-06-02 2017-11-21 深圳奥比中光科技有限公司 Method for catching, device and the storage device of human face expression
CN108525305A (en) * 2018-03-26 2018-09-14 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN108646920A (en) * 2018-05-16 2018-10-12 Oppo广东移动通信有限公司 Identify exchange method, device, storage medium and terminal device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Zeqiang et al.: "Design and Implementation of a Facial Expression Animation System Based on the Candide-3 Model", Fujian Computer (《福建电脑》) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111405361A (en) * 2020-03-27 2020-07-10 咪咕文化科技有限公司 Video acquisition method, electronic equipment and computer readable storage medium
CN111405361B (en) * 2020-03-27 2022-06-14 咪咕文化科技有限公司 Video acquisition method, electronic equipment and computer readable storage medium
CN111638784A (en) * 2020-05-26 2020-09-08 浙江商汤科技开发有限公司 Facial expression interaction method, interaction device and computer storage medium

Also Published As

Publication number Publication date
CN109829965B (en) 2023-06-27

Similar Documents

Publication Publication Date Title
US8725507B2 (en) Systems and methods for synthesis of motion for animation of virtual heads/characters via voice processing in portable devices
US20160134840A1 (en) Avatar-Mediated Telepresence Systems with Enhanced Filtering
CN108958610A (en) Special efficacy generation method, device and electronic equipment based on face
CN105930035A (en) Interface background display method and apparatus
CN109147818A (en) Acoustic feature extracting method, device, storage medium and terminal device
CN109064387A (en) Image special effect generation method, device and electronic equipment
CN108566516A (en) Image processing method, device, storage medium and mobile terminal
CN106936995A (en) A kind of control method of mobile terminal frame per second, device and mobile terminal
CN108646920A (en) Identify exchange method, device, storage medium and terminal device
CN108156317A (en) call voice control method, device and storage medium and mobile terminal
CN109063580A (en) Face identification method, device, electronic equipment and storage medium
CN110363079A (en) Expression exchange method, device, computer installation and computer readable storage medium
KR20190030140A (en) Method for eye-tracking and user terminal for executing the same
CN108629821A (en) Animation producing method and device
CN108920070A (en) Split screen method, apparatus, storage medium and mobile terminal based on special-shaped display screen
CN109446303A (en) Robot interactive method, apparatus, computer equipment and readable storage medium storing program for executing
CN109472912A (en) Method of adjustment, device and the storage medium and intelligent elevated table of intelligent elevated table
CN111583355B (en) Face image generation method and device, electronic equipment and readable storage medium
CN110794964A (en) Interaction method and device for virtual robot, electronic equipment and storage medium
CN109829965A (en) Action processing method, device, storage medium and the electronic equipment of faceform
CN108920071A (en) Control method, device, storage medium and mobile terminal based on special-shaped display screen
CN107608523A (en) Control method, device and the storage medium and mobile terminal of mobile terminal
CN107968890A (en) theme setting method, device, terminal device and storage medium
WO2019184679A1 (en) Method and device for implementing game, storage medium, and electronic apparatus
CN114007099A (en) Video processing method and device for video processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant