CN110442237A - Expression model generation method and related product - Google Patents
- Publication number
- CN110442237A CN110442237A CN201910701501.5A CN201910701501A CN110442237A CN 110442237 A CN110442237 A CN 110442237A CN 201910701501 A CN201910701501 A CN 201910701501A CN 110442237 A CN110442237 A CN 110442237A
- Authority
- CN
- China
- Prior art keywords
- expression
- coefficient
- intended particle
- target
- particle emission
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/011—Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Architecture (AREA)
- Human Computer Interaction (AREA)
- Image Processing (AREA)
Abstract
Embodiments of the present application provide an expression model generation method and related products, wherein an expression coefficient of a face in a target facial image is obtained; a target particle emission parameter is determined according to the expression coefficient; and particles are controlled to be emitted in a target expression model according to the target particle emission parameter, the target expression model being a model that simulates the facial expression according to the expression coefficient. The accuracy of driving actions of a virtual character can thereby be improved.
Description
Technical field
This application relates to the technical field of data processing, and in particular to an expression model generation method and related products.
Background
With the continuous development of electronic technology, electronics has brought great changes to people's lives. With the continuous development of human-computer interaction technology, it has gradually become possible to drive different virtual models to perform corresponding actions through a user's facial expression; for example, a virtual character in a virtual model can be driven to open its mouth when the user opens his or her mouth. However, when a virtual model is driven, the expression generally only drives the virtual character to make the corresponding expression. As a result, the sense of realism when driving actions of a virtual character is low, which in turn leads to low accuracy when driving the virtual character.
Summary of the invention
Embodiments of the present application provide an expression model generation method and related products, which can improve the accuracy of driving actions of a virtual character.
In a first aspect, an embodiment of the present application provides an expression model generation method, the method comprising:
obtaining an expression coefficient of a face in a target facial image;
determining a target particle emission parameter according to the expression coefficient; and
controlling particles to be emitted in a target expression model according to the target particle emission parameter, the target expression model being a model that simulates the facial expression according to the expression coefficient.
In this example, the expression coefficient of the face in the target facial image can be obtained, the target particle emission parameter can be determined according to the expression coefficient, and particles can be controlled to be emitted in the target expression model according to the target particle emission parameter, where the target expression model is a model that simulates the facial expression according to the expression coefficient. Therefore, the particle emission parameter can be determined according to the user's expression coefficient, and particle emission can be performed in the target expression model with the target particle emission parameter, which improves the realism of the displayed effect after the model is driven and, in turn, improves to a certain extent the accuracy of driving a virtual character with the target expression model.
Optionally, the determining the target particle emission parameter according to the expression coefficient comprises:
in a case where the expression coefficient is greater than a preset expression coefficient threshold, determining a semantic coefficient corresponding to the expression coefficient; and
determining the target particle emission parameter according to the semantic coefficient.
In this example, when the expression coefficient is greater than the preset expression coefficient threshold, the semantic coefficient corresponding to the expression coefficient is determined, and the target particle emission parameter is determined according to the semantic coefficient. The target particle emission parameter can thus be determined dynamically according to the value of the expression coefficient, which improves to a certain extent the accuracy of driving the target expression model.
Optionally, the target particle emission parameter includes at least one of: the number of particles emitted in each period, a target particle emission starting point, a target particle emission rate, a target particle emission direction, and a target particle color.
Optionally, in a case where the target particle emission parameter is the target particle emission rate, the determining the target particle emission parameter according to the semantic coefficient comprises:
determining the target particle emission rate based on a sine function of the semantic coefficient.
Optionally, in a case where the target particle emission parameter is the target particle emission direction, the determining the target particle emission parameter according to the semantic coefficient comprises:
obtaining a target particle emission position, the target particle emission position being a position in the target expression model at which particles are emitted;
determining a particle emission deflection angle according to the semantic coefficient, the deflection angle being the angle of the particle emission direction relative to the centre line of the emission area indicated by the target particle emission position, the centre line being the straight line that passes through the centre point of the emission area and is perpendicular to the plane in which the emission area lies; and
determining the target particle emission direction according to the particle emission deflection angle.
In this example, the particle emission deflection angle is determined according to the semantic coefficient of the target expression, and the target particle emission direction is determined according to the particle emission deflection angle. Determining the emission direction through a deflection angle improves the accuracy of the determined target particle emission direction.
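As a rough sketch of how a deflection angle can yield an emission direction (the patent gives no formula; the 2D simplification and all names here are hypothetical), the centre-line direction of the emission area can be rotated by the deflection angle:

```python
import math

def emission_direction(center_line, deflection_angle_deg):
    """Rotate the (2D, unit-length) centre-line direction of the emission
    area by the particle emission deflection angle. In the patent the centre
    line passes through the centre of the emission area and is perpendicular
    to its plane; here it is simplified to a 2D direction vector."""
    x, y = center_line
    a = math.radians(deflection_angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))
```

With a deflection angle of 0 the particles travel along the centre line itself; one plausible choice, consistent with the text above, is to map larger semantic coefficients to larger deflection angles.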
Optionally, the obtaining a target particle emission position comprises:
determining, according to the target expression indicated by the expression coefficient, an expression region corresponding to the target expression; and
determining the target particle emission position according to the expression region.
In this example, the target particle emission position is determined from the expression region corresponding to the target expression indicated by the expression coefficient. Because the emission position is determined according to the expression region, a specific emission position can be determined for a specific expression, which improves to a certain extent the accuracy with which the target particle emission position is determined.
Optionally, in a case where the target particle emission parameter is the target particle color, the determining the target particle emission parameter according to the semantic coefficient comprises:
determining a texture color of a particle system according to the semantic coefficient; and
determining the target particle color according to the texture color.
In this example, the texture color is determined according to the semantic coefficient of the target expression, so the texture color can be determined dynamically from that coefficient; the target particle color is then determined according to the texture color, which improves the accuracy with which the target particle color is determined.
Optionally, the determining the texture color of the particle system according to the semantic coefficient comprises:
in a case where the semantic coefficient is greater than or equal to a preset semantic coefficient threshold, determining that the texture color is a first color; and
in a case where the semantic coefficient is less than the preset semantic coefficient threshold, determining that the texture color is a second color.
Optionally, the obtaining the expression coefficient of the face in the target facial image comprises:
determining the expression coefficient using a target prior model according to the target facial image, wherein the target prior model is a bilinear principal component analysis (PCA) model.
In a second aspect, an embodiment of the present application provides an expression model display method, the method comprising:
obtaining a facial image of a target user in real time through a camera; and
displaying, on a display device, the target expression model generated by the method of any implementation of the first aspect.
In a third aspect, an embodiment of the present application provides an expression model generation apparatus, the apparatus comprising an obtaining unit, a determination unit and a control unit, wherein:
the obtaining unit is configured to obtain an expression coefficient of a face in a target facial image;
the determination unit is configured to determine a target particle emission parameter according to the expression coefficient; and
the control unit is configured to control particles to be emitted in a target expression model according to the target particle emission parameter, the target expression model being a model that simulates the facial expression according to the expression coefficient.
Optionally, in terms of determining the target particle emission parameter according to the expression coefficient, the determination unit is configured to:
in a case where the expression coefficient is greater than a preset expression coefficient threshold, determine a semantic coefficient corresponding to the expression coefficient; and
determine the target particle emission parameter according to the semantic coefficient.
Optionally, the target particle emission parameter includes at least one of: the number of particles emitted in each period, a target particle emission starting point, a target particle emission rate, a target particle emission direction, and a target particle color.
Optionally, in a case where the target particle emission parameter is the target particle emission rate, in terms of determining the target particle emission parameter according to the semantic coefficient, the determination unit is specifically configured to:
determine the target particle emission rate based on a sine function of the semantic coefficient.
Optionally, in a case where the target particle emission parameter is the target particle emission direction, in terms of determining the target particle emission parameter according to the semantic coefficient, the determination unit is specifically configured to:
obtain a target particle emission position, the target particle emission position being a position in the target expression model at which particles are emitted;
determine a particle emission deflection angle according to the semantic coefficient, the deflection angle being the angle of the particle emission direction relative to the centre line of the emission area indicated by the target particle emission position, the centre line being the straight line that passes through the centre point of the emission area and is perpendicular to the plane in which the emission area lies; and
determine the target particle emission direction according to the particle emission deflection angle.
Optionally, in terms of obtaining the target particle emission position, the determination unit is specifically configured to:
determine, according to the target expression indicated by the expression coefficient, an expression region corresponding to the target expression; and
determine the target particle emission position according to the expression region.
Optionally, in a case where the target particle emission parameter is the target particle color, in terms of determining the target particle emission parameter according to the semantic coefficient, the determination unit is configured to:
determine a texture color of a particle system according to the semantic coefficient; and
determine the target particle color according to the texture color.
Optionally, in terms of determining the texture color of the particle system according to the semantic coefficient, the determination unit is specifically configured to:
in a case where the semantic coefficient is greater than or equal to a preset semantic coefficient threshold, determine that the texture color is a first color; and
in a case where the semantic coefficient is less than the preset semantic coefficient threshold, determine that the texture color is a second color.
Optionally, in terms of obtaining the expression coefficient of the face in the target facial image, the obtaining unit is configured to:
determine the expression coefficient using a target prior model according to the target facial image, wherein the target prior model is a bilinear principal component analysis (PCA) model.
In a fourth aspect, an embodiment of the present application provides an expression model display apparatus, the display apparatus comprising an obtaining unit and a display unit, wherein:
the obtaining unit is configured to obtain a facial image of a target user in real time through a camera; and
the display unit is configured to display, on a display device, the target expression model generated by the expression model generation apparatus of any implementation of the third aspect.
In a fifth aspect, an embodiment of the present application provides a terminal, comprising a processor, an input device, an output device and a memory, which are connected to one another, wherein the memory is configured to store a computer program comprising program instructions, and the processor is configured to invoke the program instructions to execute the steps of the method described in the first aspect or the second aspect of the embodiments of the present application.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program for electronic data interchange, wherein the computer program causes a computer to execute some or all of the steps described in the first aspect or the second aspect of the embodiments of the present application.
In a seventh aspect, an embodiment of the present application provides a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to execute some or all of the steps described in the first aspect or the second aspect of the embodiments of the present application. The computer program product may be a software installation package.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a structural schematic diagram of a virtual display system provided by an embodiment of the present application;
Fig. 2A is a schematic flowchart of an expression model generation method provided by an embodiment of the present application;
Fig. 2B is a schematic diagram of a target particle emission direction provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of another expression model generation method provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of another expression model generation method provided by an embodiment of the present application;
Fig. 5 is a schematic diagram of another expression model generation method provided by an embodiment of the present application;
Fig. 6 is a structural schematic diagram of a terminal provided by an embodiment of the present application;
Fig. 7 is a structural schematic diagram of an expression model generation apparatus provided by an embodiment of the present application;
Fig. 8 is a structural schematic diagram of an expression model display apparatus provided by an embodiment of the present application.
Detailed description of embodiments
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of this application.
The terms "first", "second" and the like in the description, the claims and the above drawings of this application are used to distinguish different objects and are not used to describe a particular order. In addition, the terms "comprise" and "have" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, product or device that contains a series of steps or units is not limited to the listed steps or units, but optionally further comprises steps or units that are not listed, or optionally further comprises other steps or units inherent to the process, method, product or device.
Reference to "an embodiment" in this application means that a particular feature, structure or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of this phrase in various places in the description do not necessarily all refer to the same embodiment, nor are they separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
The electronic device involved in the embodiments of the present application may include various handheld devices with wireless communication functions, in-vehicle devices, wearable devices, computing devices or other processing devices connected to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices and the like. For convenience of description, the devices mentioned above are collectively referred to as electronic devices.
To better understand the expression model generation method provided by the embodiments of the present application, the virtual display system using the method is briefly introduced first. Referring to Fig. 1, Fig. 1 is a structural schematic diagram of a virtual display system provided by an embodiment of the present application. As shown in Fig. 1, the virtual display system 100 includes a processor 101 and a target expression model 102. The virtual display system 100 receives a target facial image, and the processor 101 obtains the expression coefficient of the face in the target facial image. After determining the expression coefficient, the processor 101 can simulate the facial expression in an expression model according to the expression coefficient to obtain the target expression model 102. The processor 101 determines a target particle emission parameter according to the expression coefficient, the target particle emission parameter including at least one of: the number of particles emitted in each period, a target particle emission starting point, a target particle emission rate, a target particle emission direction, and a target particle color. The processor 101 controls particles to be emitted in the target expression model 102 according to the target particle emission parameter to obtain a display effect, the target expression model 102 being a model that simulates the facial expression according to the expression coefficient. Therefore, compared with existing schemes, the particle emission parameter can be determined according to the user's expression coefficient, and particle emission can be performed in the target expression model with the target particle emission parameter, which improves the realism of the displayed effect after the model is driven and, in turn, improves to a certain extent the accuracy of driving a virtual character with the target expression model.
Optionally, the target expression model can be a model of the expression of any virtual character, for example a game character, a virtual person and the like. The virtual display system includes a particle system (a virtual system); the particle system is used to emit particles to form dynamic particle effects, which can enhance the authenticity and richness of the simulated expression of the target expression model. The particle system can be, for example, a 3ds Max particle system.
Referring to Fig. 2A, Fig. 2A is a schematic flowchart of an expression model generation method provided by an embodiment of the present application. As shown in Fig. 2A, the expression model generation method includes steps 201-203, as follows:
201. Obtain the expression coefficient of the face in the target facial image.
The expression coefficient can be determined according to the target facial image using a principal component analysis (PCA) model.
202. Determine the target particle emission parameter according to the expression coefficient.
The particle emission parameter is a parameter used when the particle system performs particle emission; for example, it may be the number of particles, the particle emission starting point, the particle emission direction, the particle color, and the like.
Optionally, the target particle emission parameter includes parameters of multiple classes, for example at least one of: the number of particles emitted in each period, a target particle emission starting point, a target particle emission rate, a target particle emission direction, and a target particle color. The duration of each period can be set through empirical values or historical data.
203. Control particles to be emitted in the target expression model according to the target particle emission parameter, the target expression model being a model that simulates the facial expression according to the expression coefficient.
Particle emission can be performed by controlling the particle system with the target particle emission parameter. Specifically: when the target particle parameter is the target particle emission starting point, particles are emitted at that starting point; when it is the target particle emission rate, particles are emitted at that rate; when it is the number of particles emitted in each period, that number of particles is emitted per period; when it is the target particle emission direction, particles are emitted in that direction; and when it is the target particle color, particles are emitted with that color. Of course, emission can also be performed with any combination of the above particle emission parameters, that is, particles can be emitted while multiple target particle emission parameters apply simultaneously.
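This combination of parameter classes can be sketched minimally as follows (a plain dict stands in for a real particle system; every key name is hypothetical): whichever target parameters are present are applied, singly or in any combination.

```python
def configure_emitter(emitter, params):
    """Copy whichever target particle emission parameters are present onto
    the emitter; any subset or combination of the parameter classes may be
    applied at once, as described above."""
    for key in ("start_point", "rate", "count_per_period", "direction", "color"):
        if key in params:
            emitter[key] = params[key]
    return emitter

# Emit at a given rate and color only; the remaining parameters keep
# whatever defaults the particle system provides.
emitter = configure_emitter({}, {"rate": 30.0, "color": (1.0, 0.6, 0.2)})
```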
In a possible embodiment, a possible method for obtaining the expression coefficient of the face in the target facial image is:
determining the expression coefficient according to the target facial image using a target prior model, wherein the target prior model is a bilinear principal component analysis (PCA) model.
A possible method for establishing the target prior model is: building a face PCA model offline. The offline model is a bilinear PCA model; "bilinear" can be understood as being a linear function of two dimensions, the face-shape parameters and the expression parameters. This bilinear PCA model serves as the prior model of the three-dimensional face shape. When the prior model is run online, facial datum points are calibrated on the target facial image to obtain M high-precision key points, and the parameters of the PCA model are optimized using the calibrated facial datum points, so that the positional error between the projections, on the two-dimensional plane, of the three-dimensional coordinate points in the PCA model corresponding to the high-precision key points and the corresponding key points on the two-dimensional plane is minimized. Minimizing the positional error can be understood as minimizing the distance between the projected positions; in the best case the distance is 0, and it is usually an infinitesimal tending to 0. After the target prior model is built, the target facial image is input into the model, and the expression coefficient can then be determined; the expression coefficient is the output result of the target prior model.
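The patent does not disclose the optimization itself. As a loose sketch under a strong simplification — a purely linear landmark model with a hypothetical toy basis, rather than the bilinear (face shape and expression) PCA prior with projection — expression coefficients can be recovered from observed key points by least squares:

```python
# Hypothetical toy basis: a mean landmark vector plus two expression blend
# directions (flattened 2D landmark coordinates). In the patent the prior is
# a bilinear PCA model fitted by minimizing the 2D projection error of M
# calibrated high-precision key points.
MEAN  = [0.0, 0.0, 1.0, 0.0]
BASIS = [[0.0, 1.0, 0.0, 1.0],    # e.g. a "mouth open" direction
         [1.0, 0.0, -1.0, 0.0]]   # e.g. a "mouth stretch" direction

def fit_expression_coefficients(observed):
    """Least-squares fit of c in: MEAN + sum_k c[k] * BASIS[k] ~ observed.
    Solves the 2x2 normal equations directly (the two basis directions are
    linearly independent, so the solution is unique)."""
    r = [o - m for o, m in zip(observed, MEAN)]          # residual
    g = [[sum(a * b for a, b in zip(BASIS[i], BASIS[j])) for j in range(2)]
         for i in range(2)]                              # Gram matrix
    rhs = [sum(b * x for b, x in zip(BASIS[i], r)) for i in range(2)]
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    return [(rhs[0] * g[1][1] - rhs[1] * g[0][1]) / det,
            (rhs[1] * g[0][0] - rhs[0] * g[1][0]) / det]

# A synthetic observation built from known coefficients 0.7 and 0.2:
obs = [m + 0.7 * b0 + 0.2 * b1 for m, b0, b1 in zip(MEAN, BASIS[0], BASIS[1])]
coeffs = fit_expression_coefficients(obs)
```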
In a possible embodiment, a possible method for determining the target particle emission parameter according to the expression coefficient includes steps A1-A2, as follows:
A1. In a case where the expression coefficient is greater than a preset expression coefficient threshold, determine a semantic coefficient corresponding to the expression coefficient.
A2. Determine the target particle emission parameter according to the semantic coefficient.
Wherein, preset table feelings coefficient threshold is to be set by empirical value or historical data.Semantic coefficient is characterization expression
Amplitude, the amplitude of expression are understood that the presentation degree of expression, for example, semantic coefficient can indicate to open so that expression is to open one's mouth as an example
The amplitude of mouth.
The semantic coefficient corresponding to the expression coefficient can be determined using a preset mapping relationship between expression coefficients and semantic coefficients, where different expression coefficient ranges correspond to different semantic coefficients. For example, for a mouth-opening expression, the expression can be divided into N groups (N is an integer, for example 53), where each group has a different degree of mouth opening, a different expression coefficient range, and a different semantic coefficient. The semantic coefficient takes a value in [0, 1]: 0 indicates that the expression is absent, and 1 indicates the maximum amplitude of the expression. For example, for a mouth-opening expression, the semantic coefficient at the maximum mouth-opening amplitude is 1.
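The preset mapping from expression-coefficient ranges to semantic coefficients in [0, 1] can be sketched as a simple bucketing. The group count of 53 follows the example above; the range bounds are assumptions:

```python
def semantic_coefficient(expr_coeff, expr_min, expr_max, n_groups=53):
    """Map a raw expression coefficient to a semantic coefficient in
    [0, 1] by bucketing its range into n_groups groups, as in the
    preset mapping described above. 0 means the expression is absent;
    1 means its maximum amplitude."""
    if expr_coeff <= expr_min:
        return 0.0
    if expr_coeff >= expr_max:
        return 1.0
    # index of the group this coefficient falls into
    group = int((expr_coeff - expr_min) / (expr_max - expr_min) * n_groups)
    return group / (n_groups - 1)
```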
When determining the target particle emission parameter according to the semantic coefficient, a method corresponding to each parameter type can be used. For example, when the target particle emission parameter is the target particle emission quantity, it can be determined using a mapping relationship between semantic coefficients and particle emission quantities; when it is the target particle emission starting point, it can be determined from the expression region corresponding to the expression indicated by the expression coefficient; when it is the target particle emission rate, it can be determined from a sine function of the semantic coefficient; when it is the target particle emission direction, it can be determined from a particle emission deflection angle; and when it is the target particle color, it can be determined from the texture color of the particle system.
In a possible embodiment, when the target particle emission parameter is the target particle emission rate, a possible method for determining the target particle emission parameter according to the semantic coefficient is:
determining the target particle emission rate based on a sine function of the semantic coefficient.
Specifically, the target particle emission rate can be determined by the following formula:
where rate is the target particle emission rate, sin() is the sine function, and jaw is the target expression semantic coefficient.
Optionally, when determining the target particle emission rate from the semantic coefficient, other functional relationships can also be used, for example a linear function or an exponential function.
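A minimal sketch of the sine-based rate, assuming the rate is simply an assumed maximum rate scaled by sin(jaw) of the semantic coefficient (the exact formula is not given above, only its variable list):

```python
import math

def emission_rate(jaw, max_rate=100.0):
    """Particle emission rate from the mouth-opening semantic
    coefficient `jaw` in [0, 1]. max_rate is an assumed scale
    factor, not a value from the patent."""
    return max_rate * math.sin(jaw)
```

Since sin is monotonic on [0, 1], a wider mouth opening yields a higher emission rate.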
In this example, the target expression semantic coefficient is determined through the mapping relationship between expression coefficients and expression semantic coefficients, and the target particle emission rate is determined from the sine function of the target expression semantic coefficient, which can improve, to a certain extent, the accuracy with which the target particle emission rate is determined.
In a possible embodiment, a possible method for obtaining the target particle emission position includes steps B1-B2, as follows:
B1. Determine the expression region corresponding to the target expression indicated by the expression coefficient;
B2. Determine the target particle emission position according to the expression region.
The expression region can be understood as the region where the facial action occurs: if the expression is a blink, the expression region can be understood as the eye region; if the expression is mouth opening, the expression region can be understood as the mouth region. The expression region corresponding to an expression can be determined from a mapping relationship between expressions and preset expression regions, which is set from empirical values or historical data.
Optionally, the position corresponding to the expression region can be used as the target particle emission position. The target particle emission position can also be understood as the target particle emission starting point, which can change as the action changes; for example, if the imitated effect is a flame expression, the target particle emission starting point follows the movement of the mouth.
In this example, the target particle emission position is determined from the expression region corresponding to the target expression indicated by the expression coefficient. Since the emission position is determined from the expression region, a specific target particle emission position can be determined for a specific expression, which can improve, to a certain extent, the accuracy with which the target particle emission position is determined.
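The region-to-position steps (B1-B2) can be sketched as a preset lookup followed by a centroid, so that the emission starting point follows the region as it moves. The expression labels and landmark indices below are hypothetical, not from the patent:

```python
import numpy as np

# Illustrative preset mapping from expression label to face region;
# the 68-point landmark indices are assumptions for the sketch.
EXPRESSION_REGION = {"mouth_open": "mouth", "blink": "eyes"}
REGION_LANDMARKS = {"mouth": [48, 54, 62, 66], "eyes": [36, 39, 42, 45]}

def emission_position(expression, landmarks):
    """Emission starting point = centroid of the expression's region
    landmarks, so it tracks the region (e.g. the mouth) frame by frame.
    landmarks: (68, 3) array of current face landmark positions."""
    region = EXPRESSION_REGION[expression]
    idx = REGION_LANDMARKS[region]
    return np.mean(landmarks[idx], axis=0)
```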
In a possible embodiment, when the target particle emission parameter is the target particle emission direction, a possible method for determining the target particle emission parameter according to the semantic coefficient includes steps C1-C3, as follows:
C1. Obtain the target particle emission position, which is the position in the target expression model at which particles are emitted;
C2. Determine the particle emission deflection angle according to the semantic coefficient; the emission deflection angle is the deflection of the particle emission direction relative to the center line of the emission area indicated by the target particle emission position, where the center line is the straight line through the center point of the emission area and perpendicular to the plane of the emission area;
C3. Determine the target particle emission direction according to the particle emission deflection angle.
The target particle emission position can be determined from the expression region corresponding to the expression indicated by the expression coefficient.
A possible method for determining the particle emission deflection angle according to the semantic coefficient is the following formula:
θ = sin(jaw),
where θ is the particle emission deflection angle and jaw is the semantic coefficient.
Optionally, a possible method for determining the target particle emission direction according to the emission deflection angle is: if the particle effect is a jet-type effect, use the direction at the particle emission deflection angle pointing away from the virtual rendering model as the target particle emission direction (jet-type effects include, for example, flame, water jets, and sprays); if the particle effect is a suction-type effect, use the direction at the particle emission deflection angle pointing toward the virtual rendering model as the target particle emission direction (suction-type effects include, for example, inhaling air or sucking in flame).
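Steps C2-C3 can be sketched in 2D using θ = sin(jaw) from above, flipping the direction's sign for suction-type effects. Treating the emission area's center line as the +x axis is an assumption of the sketch:

```python
import math

def emission_direction(jaw, jet=True):
    """2D sketch: rotate the center line (taken as the +x axis) by the
    deflection angle theta = sin(jaw). Jet-type effects point away
    from the model; suction-type effects reverse the direction."""
    theta = math.sin(jaw)
    d = (math.cos(theta), math.sin(theta))
    return d if jet else (-d[0], -d[1])
```

A larger mouth-opening semantic coefficient yields a larger deflection from the center line, matching the behavior described for Fig. 2B below.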
As shown in Fig. 2B, a schematic diagram of the target particle emission direction illustrated with a mouth-opening expression: the larger the mouth-opening amplitude, the more the target particle emission direction deviates from the target center line (direction 2); the smaller the mouth-opening amplitude, the closer the target particle emission direction is to the target center line (direction 1). During particle emission, all areas at the target position emit particles. Of course, the target particle emission direction can serve only as the initial direction: after a particle is emitted, it follows a thrown, curved trajectory, ultimately forming the corresponding particle effect.
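The thrown, curved motion after emission can be sketched as constant-acceleration ballistics; the gravity constant and the 2D simplification are assumptions of the sketch:

```python
def particle_position(p0, v0, g=(0.0, -9.8), t=1.0):
    """Position of a particle at time t, given its emission point p0
    and initial velocity v0 along the target emission direction, under
    an assumed constant acceleration g (a thrown, parabolic path)."""
    return (p0[0] + v0[0] * t + 0.5 * g[0] * t * t,
            p0[1] + v0[1] * t + 0.5 * g[1] * t * t)
```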
In a possible embodiment, when the target particle emission parameter is the target particle color, a possible method for determining the target particle emission parameter according to the semantic coefficient includes steps D1-D2, as follows:
D1. Determine the texture color of the particle system according to the semantic coefficient;
D2. Determine the target particle color according to the texture color.
The particle system is the virtual system that emits particles. The texture color can directly map to the color of the emitted particles; for example, if the texture color is red, the particle color is red. The target particle color can also be obtained by mapping the colors of multiple textures, in which case the color-mixing principle can be used during mapping to obtain the target particle color.
Optionally, a possible method for determining the texture color of the particle system according to the semantic coefficient includes steps D11-D12, as follows:
D11. When the semantic coefficient is greater than or equal to a preset semantic coefficient threshold, determine the texture color to be a first color;
D12. When the semantic coefficient is less than the preset semantic coefficient threshold, determine the texture color to be a second color.
The preset expression semantic coefficient threshold is set from empirical values or historical data. The first color can be, for example, red, and the second color can be, for example, purple. For example, if the effect shown after particle emission is a flame effect, red flame is sprayed when the target expression semantic coefficient is greater than or equal to the preset expression semantic coefficient threshold, and purple flame is sprayed otherwise. Of course, the first color and the second color can be set arbitrarily; the above settings are examples only and are not specifically limited. The texture color can also be determined in other ways, for example through a mapping relationship between semantic coefficients and texture colors, which is set from empirical values or historical data; this is likewise an example only and is not specifically limited.
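The two-color threshold rule (D11-D12) is a one-line branch; the threshold value and the red/purple color names below follow the illustrative example above and are not fixed by the patent:

```python
def texture_color(semantic, threshold=0.5,
                  first_color="red", second_color="purple"):
    """Select the particle system's texture color by comparing the
    semantic coefficient to a preset threshold: at or above it, the
    first color (e.g. red flame); below it, the second (purple)."""
    return first_color if semantic >= threshold else second_color
```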
A method for determining the target particle color according to the texture color is: use the texture color directly as the target particle color, so that when particles are emitted, the texture color of the particle system is reflected as the color of the emitted particles.
In this example, the texture color is determined from the target semantic coefficient, so the texture color can be determined dynamically according to the target expression semantic coefficient, improving the validity with which the texture color is determined.
In a possible embodiment, an embodiment of the present application provides an expression model display method, which includes steps E1-E2, as follows:
E1. Obtain the facial image of the target user in real time through a camera;
E2. Display the target expression model on a display device using the method of any of the above embodiments.
When the facial image of the target user is obtained in real time through the camera, the target user faces the camera, i.e., faces the direction opposite to the detection direction of the camera.
In this example, the facial image of the target user can be obtained in real time through the camera, and the target expression model is displayed on the display device of the electronic device according to that facial image, so the user's expression can be obtained and displayed in real time, which can improve the user experience to a certain extent.
Referring to Fig. 3, Fig. 3 is a schematic diagram of another expression model generating method provided by an embodiment of the present application. As shown in Fig. 3, the expression model generating method includes steps 301-304, as follows:
301. Obtain the expression coefficient of the face in the target facial image;
302. When the expression coefficient is greater than a preset expression coefficient threshold, determine the semantic coefficient corresponding to the expression coefficient;
303. Determine the target particle emission parameter according to the semantic coefficient;
where the target particle emission parameter includes at least one of: the number of particles emitted per period, the target particle emission starting point, the target particle emission rate, the target particle emission direction, and the target particle color.
304. Control particles to be emitted in a target expression model according to the target particle emission parameter, where the target expression model is a model that simulates the facial expression according to the expression coefficient.
In this example, when the expression coefficient is greater than the preset expression coefficient threshold, the semantic coefficient corresponding to the expression coefficient is determined, and the target particle emission parameter is determined according to the semantic coefficient. The target particle emission parameter can therefore be determined dynamically from the value of the expression coefficient, which can improve, to a certain extent, the accuracy with which the target expression model is driven.
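Steps 301-304 can be strung together in one sketch. The threshold value, the clamped-identity stand-in for the coefficient-to-semantic mapping, and the specific parameter formulas are all assumptions for illustration:

```python
import math

def generate_emission_params(expr_coeff, expr_threshold=0.1):
    """End-to-end sketch of steps 301-304: gate on the preset
    expression coefficient threshold (302), derive a semantic
    coefficient, then compute the emission parameters (303)."""
    if expr_coeff <= expr_threshold:
        return None  # expression too weak: no particle emission
    # stand-in for the preset coefficient-to-semantic mapping
    semantic = min(max(expr_coeff, 0.0), 1.0)
    return {
        "rate": 100.0 * math.sin(semantic),        # sine-based rate
        "deflection": math.sin(semantic),          # theta = sin(jaw)
        "color": "red" if semantic >= 0.5 else "purple",
    }
```

The returned dictionary would then drive the particle system in step 304.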
Referring to Fig. 4, Fig. 4 is a schematic diagram of another expression model generating method provided by an embodiment of the present application. As shown in Fig. 4, the expression model generating method includes steps 401-408, as follows:
401. Obtain the expression coefficient of the face in the target facial image;
402. Determine the target particle emission parameter according to the expression coefficient;
where the target emission parameter can be the target particle emission position.
403. Determine the expression region corresponding to the target expression indicated by the expression coefficient;
404. Determine the target particle emission position according to the expression region;
405. Obtain the target particle emission position, which is the position in the target expression model at which particles are emitted;
406. Determine the particle emission deflection angle according to the semantic coefficient; the emission deflection angle is the deflection of the particle emission direction relative to the center line of the emission area indicated by the target particle emission position, where the center line is the straight line through the center point of the emission area and perpendicular to the plane of the emission area;
407. Determine the target particle emission direction according to the particle emission deflection angle;
408. Control particles to be emitted in the target expression model according to the target particle emission direction, where the target expression model is a model that simulates the facial expression according to the expression coefficient.
In this example, the particle emission deflection angle is determined from the target expression semantic coefficient, and the target particle emission direction is determined from the particle emission deflection angle. Since the emission direction is determined through the deflection angle, the accuracy with which the target particle emission direction is determined can be improved.
Referring to Fig. 5, Fig. 5 is a schematic diagram of another expression model generating method provided by an embodiment of the present application. As shown in Fig. 5, the expression model generating method includes steps 501-507, as follows:
501. Obtain the expression coefficient of the face in the target facial image;
502. When the expression coefficient is greater than a preset expression coefficient threshold, determine the semantic coefficient corresponding to the expression coefficient;
503. Determine the texture color of the particle system according to the semantic coefficient;
504. When the semantic coefficient is greater than or equal to a preset semantic coefficient threshold, determine the texture color to be a first color;
505. When the semantic coefficient is less than the preset semantic coefficient threshold, determine the texture color to be a second color;
where the preset expression semantic coefficient threshold is set from empirical values or historical data. The first color can be, for example, red, and the second color can be, for example, purple. For example, if the effect shown after particle emission is a flame effect, red flame is sprayed when the target expression semantic coefficient is greater than or equal to the preset expression semantic coefficient threshold, and purple flame is sprayed otherwise. Of course, the first color and the second color can be set arbitrarily; the above settings are examples only and are not specifically limited.
506. Determine the target particle color according to the texture color;
507. Control particles to be emitted in the target expression model according to the target particle color, where the target expression model is a model that simulates the facial expression according to the expression coefficient.
In this example, the texture color is determined from the target semantic coefficient, so the texture color can be determined dynamically according to the target expression semantic coefficient, and the target particle color is then determined from the texture color, which can improve the accuracy with which the target particle color is determined.
Consistent with the above embodiments, referring to Fig. 6, Fig. 6 is a structural schematic diagram of a terminal provided by an embodiment of the present application. As shown, the terminal includes a processor, an input device, an output device, and a memory, which are connected to one another. The memory stores a computer program comprising program instructions, and the processor is configured to call the program instructions, where the program includes instructions for executing the following steps:
obtaining the expression coefficient of the face in the target facial image;
determining the target particle emission parameter according to the expression coefficient;
controlling particles to be emitted in a target expression model according to the target particle emission parameter, where the target expression model is a model that simulates the facial expression according to the expression coefficient.
The above mainly describes the scheme of the embodiments of the present application from the perspective of the method-side execution process. It can be understood that, in order to realize the above functions, the terminal comprises corresponding hardware structures and/or software modules for executing each function. Those skilled in the art should readily appreciate that, in combination with the exemplary units and algorithm steps described in the embodiments presented herein, the present application can be realized in hardware or in a combination of hardware and computer software. Whether a given function is executed in hardware or in computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled persons may use different methods to realize the described functions for each specific application, but such realization should not be considered beyond the scope of the present application.
The embodiments of the present application can divide the terminal into functional units according to the above method examples; for example, each functional unit can correspond to one function, or two or more functions can be integrated into one processing unit. The integrated unit can be realized in the form of hardware or in the form of a software functional unit. It should be noted that the division into units in the embodiments of the present application is schematic and is only a logical function division; there may be other division manners in actual realization.
Consistent with the above, referring to Fig. 7, Fig. 7 is a structural schematic diagram of an expression model generating apparatus provided by an embodiment of the present application. As shown in Fig. 7, the expression model generating apparatus includes an acquiring unit 701, a determination unit 702, and a control unit 703, wherein:
the acquiring unit 701 is configured to obtain the expression coefficient of the face in the target facial image;
the determination unit 702 is configured to determine the target particle emission parameter according to the expression coefficient;
the control unit 703 is configured to control particles to be emitted in a target expression model according to the target particle emission parameter, where the target expression model is a model that simulates the facial expression according to the expression coefficient.
Optionally, in determining the target particle emission parameter according to the expression coefficient, the determination unit 702 is configured to:
when the expression coefficient is greater than a preset expression coefficient threshold, determine the semantic coefficient corresponding to the expression coefficient;
determine the target particle emission parameter according to the semantic coefficient.
Optionally, the target particle emission parameter includes at least one of: the number of particles emitted per period, the target particle emission starting point, the target particle emission rate, the target particle emission direction, and the target particle color.
Optionally, when the target particle emission parameter is the target particle emission rate, in determining the target particle emission parameter according to the semantic coefficient, the determination unit 702 is specifically configured to:
determine the target particle emission rate based on a sine function of the semantic coefficient.
Optionally, when the target particle emission parameter is the target particle emission direction, in determining the target particle emission parameter according to the semantic coefficient, the determination unit 702 is specifically configured to:
obtain the target particle emission position, which is the position in the target expression model at which particles are emitted;
determine the particle emission deflection angle according to the semantic coefficient, the emission deflection angle being the deflection of the particle emission direction relative to the center line of the emission area indicated by the target particle emission position, where the center line is the straight line through the center point of the emission area and perpendicular to the plane of the emission area;
determine the target particle emission direction according to the particle emission deflection angle.
Optionally, in obtaining the target particle emission position, the determination unit 702 is specifically configured to:
determine the expression region corresponding to the target expression indicated by the expression coefficient;
determine the target particle emission position according to the expression region.
Optionally, if the target particle emission parameter is the target particle color, in determining the target particle emission parameter according to the semantic coefficient, the determination unit 702 is configured to:
determine the texture color of the particle system according to the semantic coefficient;
determine the target particle color according to the texture color.
Optionally, in determining the texture color of the particle system according to the semantic coefficient, the determination unit 702 is specifically configured to:
when the semantic coefficient is greater than or equal to a preset semantic coefficient threshold, determine the texture color to be a first color;
when the semantic coefficient is less than the preset semantic coefficient threshold, determine the texture color to be a second color.
Optionally, in obtaining the expression coefficient of the face in the target facial image, the acquiring unit 701 is configured to:
determine the expression coefficient from the target facial image using a target prior model, where the target prior model is a bilinear principal component analysis (PCA) model.
Referring to Fig. 8, Fig. 8 is a structural schematic diagram of an expression model display apparatus provided by an embodiment of the present application. The display apparatus includes an acquiring unit 801 and a display unit 802, wherein:
the acquiring unit 801 is configured to obtain the facial image of the target user in real time through a camera;
the display unit 802 is configured to display the target expression model on a display device using the expression model generating apparatus of any of the above embodiments.
An embodiment of the present application also provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps of any of the expression model display methods recorded in the above method embodiments.
An embodiment of the present application also provides a computer program product, the computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program causing a computer to execute some or all of the steps of any of the expression model display methods recorded in the above method embodiments.
It should be noted that, for the sake of simple description, the foregoing method embodiments are all expressed as series of action combinations, but those skilled in the art should understand that the present application is not limited by the described action sequence, because according to the present application, some steps may be performed in other sequences or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference can be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus can be realized in other ways. For example, the apparatus embodiments described above are merely exemplary: the division of units is only a logical function division, and there may be other division manners in actual implementation; multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed can be indirect couplings or communication connections through some interfaces, apparatuses, or units, and can be electrical or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they can be located in one place or distributed over multiple network units. Some or all of the units can be selected according to actual needs to realize the purpose of the scheme of this embodiment.
In addition, the functional units in the embodiments of the present application can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit. The integrated unit can be realized in the form of hardware or in the form of a software program module.
If the integrated unit is realized in the form of a software program module and is sold or used as an independent product, it can be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The software product is stored in a memory and includes instructions for causing a computer device (which can be a personal computer, a server, a network device, etc.) to execute all or some of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media that can store program code, such as a USB flash disk, read-only memory (ROM), random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those of ordinary skill in the art can understand that all or some of the steps in the various methods of the above embodiments can be completed by a program instructing relevant hardware, and the program can be stored in a computer-readable memory, which may include a flash disk, read-only memory, random access memory, a magnetic disk, an optical disk, etc.
The embodiments of the present application have been described in detail above. Specific examples are used herein to expound the principles and implementations of the present application, and the description of the above embodiments is only used to help understand the method and core ideas of the present application. At the same time, those of ordinary skill in the art may make changes to the specific implementations and application scope according to the ideas of the present application. In summary, the contents of this specification should not be construed as limiting the present application.
Claims (10)
1. a kind of expression model generating method, which is characterized in that the described method includes:
Obtain the expression coefficient of face in target facial image;
Intended particle emission parameter is determined according to the expression coefficient;
According to the intended particle emission parameter, controls particle and emit in target expression model, the target expression model is
Intend the model of human face expression according to the expression coefficient module.
2. the method according to claim 1, wherein described determine that intended particle is sent out according to the expression coefficient
Penetrate parameter, comprising:
In the case where the expression coefficient is greater than preset table feelings coefficient threshold, semanteme corresponding with the expression coefficient is determined
Coefficient;
According to the semantic coefficient, the intended particle emission parameter is determined.
3. according to the method described in claim 2, it is characterized in that, the intended particle emission parameter includes each period hair
Number of particles, intended particle the transmitting starting point, intended particle emission rate, the intended particle direction of the launch, intended particle color penetrated
At least one of.
4. according to the method described in claim 3, it is characterized in that, being intended particle transmitting in the intended particle emission parameter
It is described according to the semantic coefficient in the case where rate, determine the intended particle emission parameter, comprising:
Based on the sine function of the semantic coefficient, the intended particle emission rate is determined.
5. The method according to claim 3, characterized in that, in a case where the target particle emission parameter is the target particle emission direction, determining the target particle emission parameter according to the semantic coefficient comprises:
obtaining a target particle emission position, the target particle emission position being a position in the target expression model at which particles are emitted;
determining a particle emission deflection angle according to the semantic coefficient, the emission deflection angle being a deflection angle of the particle emission direction relative to a center line of an emission area indicated by the target particle emission position, the center line being a straight line that passes through a center point of the emission area and is perpendicular to the plane in which the emission area lies;
determining the target particle emission direction according to the particle emission deflection angle.
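A sketch of claim 5's geometry: the center line is taken along the emission area's normal, and the emission direction is obtained by rotating that normal through the deflection angle. Scaling the deflection by the semantic coefficient, the maximum angle, and the choice of rotation axis are all hypothetical assumptions:

```python
import math

def emission_direction(semantic_coef: float,
                       normal=(0.0, 0.0, 1.0),
                       max_deflection_deg: float = 30.0):
    """Derive a particle emission direction from a deflection angle
    relative to the emission area's center line (its normal)."""
    # Hypothetical mapping: the semantic coefficient scales the
    # deflection angle up to max_deflection_deg.
    angle = math.radians(semantic_coef * max_deflection_deg)
    nx, ny, nz = normal
    # Rotate the normal about the y-axis by the deflection angle.
    dx = nx * math.cos(angle) + nz * math.sin(angle)
    dz = -nx * math.sin(angle) + nz * math.cos(angle)
    return (dx, ny, dz)
```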
6. An expression model display method, characterized in that the method comprises:
obtaining a facial image of a target user in real time through a camera;
displaying, on a display device, the target expression model generated by the method according to any one of claims 1 to 5.
7. An expression model generation apparatus, characterized in that the apparatus comprises an obtaining unit, a determining unit and a control unit, wherein:
the obtaining unit is configured to obtain an expression coefficient of a face in a target facial image;
the determining unit is configured to determine a target particle emission parameter according to the expression coefficient;
the control unit is configured to control particles to be emitted in a target expression model according to the target particle emission parameter, the target expression model being a model that simulates the facial expression according to the expression coefficient.
8. An expression model display apparatus, characterized in that the display apparatus comprises an obtaining unit and a display unit, wherein:
the obtaining unit is configured to obtain a facial image of a target user in real time through a camera;
the display unit is configured to display, on a display device, the target expression model by using the expression model generation apparatus according to claim 7.
9. A terminal, characterized in that it comprises a processor, an input device, an output device and a memory, the processor, the input device, the output device and the memory being connected to one another, wherein the memory is configured to store a computer program, the computer program comprises program instructions, and the processor is configured to call the program instructions to execute the method according to any one of claims 1 to 5.
10. A computer-readable storage medium, characterized in that the computer storage medium stores a computer program, the computer program comprises program instructions, and the program instructions, when executed by a processor, cause the processor to execute the method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910701501.5A CN110442237A (en) | 2019-07-31 | 2019-07-31 | Expression model generating method and Related product |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110442237A true CN110442237A (en) | 2019-11-12 |
Family
ID=68432441
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910701501.5A Pending CN110442237A (en) | 2019-07-31 | 2019-07-31 | Expression model generating method and Related product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110442237A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101452582A (en) * | 2008-12-18 | 2009-06-10 | 北京中星微电子有限公司 | Method and device for implementing three-dimensional video specific action |
CN109727303A (en) * | 2018-12-29 | 2019-05-07 | 广州华多网络科技有限公司 | Video display method, system, computer equipment, storage medium and terminal |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111710035A (en) * | 2020-07-16 | 2020-09-25 | 腾讯科技(深圳)有限公司 | Face reconstruction method and device, computer equipment and storage medium |
CN111710035B (en) * | 2020-07-16 | 2023-11-07 | 腾讯科技(深圳)有限公司 | Face reconstruction method, device, computer equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107796395B (en) | It is a kind of for the air navigation aid of indoor objects position, device and terminal device | |
CN108550190A (en) | Augmented reality data processing method, device, computer equipment and storage medium | |
CN109636919B (en) | Holographic technology-based virtual exhibition hall construction method, system and storage medium | |
JP2024074889A (en) | Mixed Reality Spatial Audio | |
WO2022066535A3 (en) | Methods for manipulating objects in an environment | |
CN108537110A (en) | Generate the device and method based on virtual reality of three-dimensional face model | |
CN105094335B (en) | Situation extracting method, object positioning method and its system | |
KR102491140B1 (en) | Method and apparatus for generating virtual avatar | |
CN111862333B (en) | Content processing method and device based on augmented reality, terminal equipment and storage medium | |
CN110339570A (en) | Exchange method, device, storage medium and the electronic device of information | |
CN107111427A (en) | Change video call data | |
CN107330978A (en) | The augmented reality modeling experiencing system and method mapped based on position | |
JP2015138345A (en) | Information processing device, information processing system, block system, and information processing method | |
CN109191593A (en) | Motion control method, device and the equipment of virtual three-dimensional model | |
CN110276774A (en) | Drawing practice, device, terminal and the computer readable storage medium of object | |
CN109035415A (en) | Processing method, device, equipment and the computer readable storage medium of dummy model | |
CN109395387A (en) | Display methods, device, storage medium and the electronic device of threedimensional model | |
CN110276804A (en) | Data processing method and device | |
CN106227327B (en) | A kind of display converting method, device and terminal device | |
CN110442237A (en) | Expression model generating method and Related product | |
CN109445596A (en) | A kind of integral type mixed reality wears display system | |
JP2015136453A (en) | Information processing device, information processing system, assembly type device, and information processing method | |
JP6695997B2 (en) | Information processing equipment | |
WO2021208432A1 (en) | Interaction method and apparatus, interaction system, electronic device, and storage medium | |
CN110378993A (en) | Modeling method and relevant apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20191112 |