CN113393561A - Method, device and storage medium for generating limb action expression packet of virtual character - Google Patents

Method, device and storage medium for generating limb action expression packet of virtual character Download PDF

Info

Publication number
CN113393561A
CN113393561A (application CN202110578668.4A)
Authority
CN
China
Prior art keywords
data
joint
model
virtual character
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110578668.4A
Other languages
Chinese (zh)
Inventor
金虓
汪成峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Perfect World Beijing Software Technology Development Co Ltd
Original Assignee
Perfect World Beijing Software Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Perfect World Beijing Software Technology Development Co Ltd filed Critical Perfect World Beijing Software Technology Development Co Ltd
Priority to CN202110578668.4A priority Critical patent/CN113393561A/en
Publication of CN113393561A publication Critical patent/CN113393561A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/80 2D [Two Dimensional] animation, e.g. using sprites

Abstract

The invention provides a method, a device and a storage medium for generating a limb action expression packet of a virtual character. The method comprises: acquiring limb skeleton data of a person; acquiring a pre-created virtual character model, and driving the limbs of the virtual character model to generate actions by utilizing the limb skeleton data; and generating a limb action expression packet of the virtual character based on the limb actions of the virtual character model. The embodiment of the invention can drive the virtual character model to make complex and varied movements and postures by utilizing the acquired limb skeleton data of the person, which not only increases the interaction between the user and the virtual character, but also allows more personalized limb action expression packages of the virtual character to be made.

Description

Method, device and storage medium for generating limb action expression packet of virtual character
Technical Field
The invention relates to the technical field of computers, in particular to a method and a device for generating a limb action expression packet of a virtual character and a storage medium.
Background
At present, when a virtual character model executes actions, it can only execute corresponding actions according to a pre-configured program. Such methods require the action content to be set in advance through the program and cannot effectively control the virtual character model to execute personalized actions according to the body actions of a real user; for example, the posture actions of a real person cannot be used to control the virtual character model to execute corresponding action content. As a result, the virtual character model cannot be flexibly controlled to execute complex and varied motions and postures, personalized limb action expression packages cannot be generated by controlling the virtual character model, and richer gameplay and a better user experience cannot be provided for the player.
Disclosure of Invention
In view of the above problems, the present invention is proposed to provide a method, an apparatus, and a storage medium for generating a limb action expression package of a virtual character, which overcome or at least partially solve the above problems, can increase the interaction between the user and the virtual character, and can also make more personalized limb action expression packages of the virtual character, thereby providing the player with richer gameplay and a better user experience.
According to an aspect of the embodiments of the present invention, there is provided a method for generating a limb action profile package of a virtual character, the method including:
acquiring limb skeleton data of a person;
acquiring a pre-created virtual character model, and driving limbs of the virtual character model to generate actions by utilizing the limb skeleton data;
and generating a limb action expression packet of the virtual character based on the limb action of the virtual character model.
Optionally, acquiring limb skeleton data of the person comprises:
calling an augmented reality platform interface to obtain limb skeleton data of a person;
wherein the limb skeleton data is obtained by the augmented reality platform converting the person's limb action data acquired by the terminal device.
Optionally, the limb skeletal data invoked comprises:
initializing a skeleton matrix data set obtained by binding a posture space through the augmented reality platform; and
and mapping the limb action data acquired by the terminal equipment to a human body model space through the augmented reality platform to obtain a skeleton matrix data set.
Optionally, obtaining a pre-created virtual character model, and driving a limb generation action of the virtual character model by using the limb skeleton data, including:
acquiring a pre-established virtual role model;
converting the limb skeleton data into binding posture space data of joints in an engine, wherein the binding posture space data comprises binding posture space position data and binding posture space rotation data;
calculating model space data corresponding to each joint of the virtual character model according to a preset algorithm based on the binding attitude space data;
and driving the limbs of the virtual character model to generate actions by utilizing the model space data.
Optionally, the model space data includes model space position data, and the model space data corresponding to each joint of the virtual character model is calculated according to a preset algorithm based on the binding posture space data, including:
traversing the relevant joint names of the virtual character model in sequence, starting from the skeleton root node of the virtual character model, and adopting formula 1: pos = rot_p * pos_local + pos_p to calculate the model space position data corresponding to each joint of the virtual character model;
wherein, if any joint is the root joint (i.e., it has no parent joint), rot_p denotes the unit quaternion, pos_local represents the binding posture space position data of that joint, and pos_p is (0, 0, 0); if any joint is a child joint, rot_p represents the model space rotation data of the parent joint of that joint, pos_local represents the relative position of that joint under the parent joint space in the binding posture, and pos_p represents the model space position data of the parent joint of that joint calculated according to formula 1.
Optionally, the model space data includes model space rotation data, and the model space data corresponding to each joint of the virtual character model is calculated according to a preset algorithm based on the binding posture space data, including:
sequentially traversing the joint names of the virtual character model from the skeleton root node of the virtual character model, and judging whether each joint name appears in the binding attitude space data;
if the joint name appears in the binding posture space data, calculating model space rotation data of the corresponding joint based on the binding posture space rotation data of the appearing joint name;
and if the joint name does not appear in the binding posture space data, calculating model space rotation data of the corresponding joint based on the binding posture space rotation data of the parent joint name corresponding to the joint name which does not appear.
Optionally, the binding posture space data further includes a binding posture matrix of the joint and a current frame posture matrix of the joint, and the model space rotation data of the corresponding joint is calculated based on the binding posture space rotation data of the appeared joint name, including:
obtaining a new current frame attitude matrix by multiplying the inverse matrix of the binding attitude matrix of the joint corresponding to the appeared joint name by the current frame attitude matrix of the corresponding joint;
reserving the rotation data in the new current frame attitude matrix, deleting the position data to obtain a rotation matrix, and calculating the rotation matrix according to a quaternion conversion mode to obtain intermediate rotation data;
and multiplying the intermediate rotation data and the binding posture space rotation data to obtain model space rotation data of the corresponding joint.
Optionally, calculating model spatial rotation data of the corresponding joint based on the bound posture spatial rotation data in which the joint name does not appear includes:
according to formula 2: rot = rot_p * rot_local, calculating the model space rotation data of the corresponding joint; wherein, if any joint is the root joint (i.e., it has no parent joint), rot_p represents the unit quaternion and rot_local represents the binding posture space rotation data of that joint; if any joint is a child joint, rot_p represents the model space rotation data of the parent joint of that joint calculated according to formula 2, and rot_local represents the relative rotation of that joint under the parent joint space in the binding posture.
Optionally, after calculating model space data corresponding to each joint of the virtual character model according to a preset algorithm based on the binding posture space data, the method further includes:
obtaining the previous frame of model space data corresponding to the joint after calculating the model space data;
and carrying out linear interpolation calculation on the current frame model space data and the previous frame model space data of the same joint.
Optionally, the driving, by using the model space data, a limb of the virtual character model to generate an action includes:
converting the model space data into skeletal animation data in a corresponding joint space of the virtual character model;
and driving the limbs of the virtual character model to generate actions by using the converted skeletal animation data.
Optionally, generating the body action profile package of the virtual character based on the body action of the virtual character model includes:
intercepting continuous partial body action content or at least one frame of body action picture from the body action of the virtual character model;
and generating a limb action expression packet of the virtual character according to the intercepted part of the limb action content or at least one frame of the limb action picture.
According to another aspect of the embodiments of the present invention, there is also provided a device for generating a limb action profile packet of a virtual character, including:
the acquisition module is suitable for acquiring limb skeleton data of a person;
the driving module is suitable for acquiring a pre-established virtual character model and driving limbs of the virtual character model to generate actions by utilizing the limb skeleton data;
and the generation module is suitable for generating the limb action expression packet of the virtual character based on the limb action of the virtual character model.
According to another aspect of embodiments of the present invention, there is also provided a computer storage medium storing computer program code which, when run on a computing device, causes the computing device to perform the limb movement expression package generation method of a virtual character of any of the above embodiments.
According to another aspect of the embodiments of the present invention, there is also provided a computing device, including: a processor; a memory storing computer program code; the computer program code, when executed by the processor, causes the computing device to perform the limb movement expression package generation method of the virtual character of any of the embodiments above.
In the embodiment of the invention, after the body skeleton data of the character is acquired, the body skeleton data can be used for driving the body of the virtual character model to generate the action, and the body action expression packet of the virtual character is generated. The embodiment of the invention can drive the virtual character model to make complex and various motions and postures by utilizing the body skeleton data, thereby not only increasing the interaction between the user and the virtual character, but also making more individualized body action expression packages of the virtual character, and providing richer playing methods and better user experience for the player.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
The above and other objects, advantages and features of the present invention will become more apparent to those skilled in the art from the following detailed description of specific embodiments thereof, taken in conjunction with the accompanying drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow diagram illustrating a method for generating a limb movement expression package of a virtual character according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a process for driving a limb generation action of the virtual character model according to an embodiment of the invention;
fig. 3 is a schematic structural diagram of a limb action profile packet generation apparatus for a virtual character according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a limb action expression packet generation apparatus of a virtual character according to another embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
In order to solve the above technical problem, an embodiment of the present invention provides a method for generating a limb action profile package of a virtual character. Fig. 1 is a flowchart illustrating a method for generating a limb movement expression package of a virtual character according to an embodiment of the present invention. Referring to fig. 1, the method includes steps S102 to S106.
Step S102, the body skeleton data of the person is obtained.
And step S104, acquiring a pre-created virtual character model, and driving limbs of the virtual character model to generate actions by utilizing limb skeleton data.
In the embodiment of the invention, before the pre-created virtual character model is obtained, the user can select a favorite virtual character, the user can interact with the virtual character model selected by the user, and limb action expression packets of different virtual characters can be made according to the user's preferences. The pre-created virtual character model in this embodiment is a custom model used in the engine and is created in advance by art designers.
And step S106, generating a body action expression packet of the virtual character based on the body action of the virtual character model.
The embodiment of the invention can drive the virtual character model to make complex and varied motions and gestures by utilizing the acquired limb skeleton data of the person, thereby not only increasing the interaction and fun between the user and the virtual character, but also making more personalized limb action expression packets of the virtual character, and providing richer gameplay and a better user experience for the player.
Referring to step S102, in the embodiment of the present invention, the limb skeleton data of the person may be obtained by calling an augmented reality platform interface, where the limb skeleton data may be obtained by the augmented reality platform converting the person's limb action data acquired by the terminal device. After the terminal device collects the limb action data of the human body, the limb action data is transmitted to the augmented reality platform, and the augmented reality platform converts the received limb action data into limb skeleton data. The engine of the game client on the mobile device may then call the limb skeleton data through an Application Programming Interface (API).
The augmented reality platform of the embodiment of the invention may adopt the ARKit platform, with which developers can create augmented reality applications. The augmented reality platform captures in real time the body actions of the human body collected by the terminal device, so that the body actions can be combined with virtual characters. Motion capture allows a user or technician to use movements and postures as the input of an AR (Augmented Reality) experience in real time: the body actions of the human body are captured in real time by the camera of the terminal device, body positions and movements are understood as a series of joint and skeleton actions, the movements and postures serve as input to the AR experience, and the limb skeleton data obtained by converting the limb action data becomes the central material of the AR experience.
This embodiment can configure the ARBodyTrackingConfiguration configuration item provided by the augmented reality platform and capture the body motion data of the human body through the camera of the terminal device. The terminal device in this embodiment may be a mobile terminal or a PC terminal, which is not specifically limited in the embodiment of the present invention. The limb movement may be a single simple limb movement or a set of consecutive limb movements, which is not specifically limited either. Moreover, the person making the limb movement includes, but is not limited to, game developers, testers, users, and the like.
In the embodiment of the present invention, the invoked limb skeleton data may include the bone matrix data set Data-Skeleton T obtained by initializing the binding posture space (the AR T-Pose space) through the augmented reality platform, and may further include the bone matrix data set Data-Skeleton L obtained by mapping the limb motion data acquired by the terminal device to the human body model space through the augmented reality platform. In addition, the engine of the embodiment of the invention may adopt the EAR engine; the interface of the augmented reality platform can be connected with the EAR1.5 engine interface of the EAR engine, and the bone matrix data sets Data-Skeleton L and Data-Skeleton T called from the augmented reality platform can be passed into the engine through the EAR1.5 engine interface.
In an embodiment of the invention, the binding pose refers to the initial pose of the skeleton in the engine, a general term in 3D modeling. The engine and the augmented reality platform each have their own human skeleton description, with different joint definitions, and one human skeleton has a plurality of joints. The human skeleton description of the engine is determined by art makers, while the augmented reality platform has a fixed set of skeleton descriptions. The data spaces of the engine and the augmented reality platform are also different: each joint of the engine has position data and rotation data, and these are in the model space; each joint of the augmented reality platform also has position data and rotation data, but these are in the user-defined space of the augmented reality platform, and the position data and rotation data of a frame of the binding posture T-Pose are likewise in the user-defined space of the augmented reality platform.
Because the data spaces defined by the engine and the augmented reality platform are different, and the augmented reality platform does not provide a conversion between its user-defined space and the model space, T-Pose needs to be adopted as the binding posture when the model is made, and this binding posture must be consistent with the T-Pose in the augmented reality platform; the subsequent algorithm depends on this precondition.
Referring to step S104 above, in an embodiment of the present invention, the specific process of obtaining the pre-created virtual character model and driving the limb generation action of the virtual character model by using the limb skeleton data may include steps S1041 to S1044, as shown in fig. 2.
Step S1041, acquiring a virtual character model created in advance.
Step S1042, the limb skeleton data are converted into binding posture space data of the joints in the engine, and the binding posture space data comprise binding posture space position data and binding posture space rotation data.
In this step, the limb skeleton data are the bone matrix data set Data-Skeleton T and the bone matrix data set Data-Skeleton L of the above embodiment. The embodiment of the invention can convert the bone matrix data set Data-Skeleton T into the binding posture space data M_T, and convert the bone matrix data set Data-Skeleton L into the binding posture space data M_m. The binding posture space data M_T can represent the T-Pose posture matrix (4 x 4) of a certain joint, and the binding posture space data M_m can represent the posture matrix of the current frame of that joint. Both M_T and M_m are posture matrices of a joint expressed in the user-defined space of the augmented reality platform, and each contains rotation data and position data.
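For illustration only, the two bone matrix data sets can be thought of as per-joint dictionaries of 4 x 4 matrices, as in the following sketch (the dictionary names are assumptions made for this illustration, not identifiers used by the patent or by the augmented reality platform):

```python
import numpy as np

# Data-Skeleton T: binding posture (T-Pose) matrix M_T, one per joint name.
bind_pose: dict = {}       # joint name -> 4 x 4 matrix in the AR platform's custom space
# Data-Skeleton L: current-frame matrix M_m, one per joint name, refreshed every frame.
current_frame: dict = {}   # joint name -> 4 x 4 matrix in the AR platform's custom space

# Placeholder example of one joint's entries:
bind_pose["hips"] = np.eye(4)
current_frame["hips"] = np.eye(4)
```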
And S1043, calculating model space data corresponding to each joint of the virtual character model according to a preset algorithm based on the binding posture space data.
In this step, the model space data may include model space position data and model space rotation data. In the process of calculating model space data corresponding to each joint of the virtual character model, model space position data and model space rotation data corresponding to each joint may be calculated, and a specific calculation process may be described later.
And step S1044, driving the limbs of the virtual character model to generate actions by utilizing the model space data.
In step S1043 of the embodiment of the present invention, when calculating the model space position data corresponding to each joint of the virtual character model, the related joint names of the virtual character model may be traversed in sequence starting from the skeleton root node of the virtual character model, and formula 1: pos = rot_p * pos_local + pos_p may be adopted to calculate the model space position data pos corresponding to each joint of the virtual character model.
Wherein, if any joint is the root joint (i.e., it has no parent joint), rot_p represents the unit quaternion, pos_local represents the binding posture space position data pos_m of that joint (i.e., the position of the joint in the model space in the binding posture), and pos_p is (0, 0, 0). If any joint is a child joint, rot_p represents the model space rotation data of the parent joint of that joint, pos_local represents the relative position of that joint under the parent joint space in the binding posture, and pos_p represents the model space position data of the parent joint of that joint calculated according to formula 1.
In this embodiment, the unit quaternion is (1, 0, 0, 0), corresponding to (w, x, y, z); it expresses the rotation state in which there is no rotation (i.e., a rotation of 0) and is generally used as the default initial rotation.
In one embodiment of the invention, the pos_local currently used to calculate the model space position data of any joint is the position offset of that joint relative to its parent joint. Assume that the position offset of the joint is denoted posOffset, the parent joint of the current joint is denoted parentBone, and the current joint is denoted bone. If the joint has a parent joint, posOffset is the inverse of the parentBone rotation matrix applied to (the bone displacement matrix minus the parentBone displacement matrix); if the joint has no parent joint (i.e., it is the root node), posOffset is the bone displacement matrix.
As can be seen from the above process of calculating the model space position data pos corresponding to each joint of the virtual character model, the model space position data pos calculated in the embodiment of the present invention is not only the final calculation result of the corresponding joint, but also needs to be passed as a parameter to the child joints of that joint and used as their pos_p in the calculation. That is, the calculation recurses over each child joint of any joint, and the model space position data corresponding to that joint is passed in, so that the model space position data corresponding to any joint can serve as the parent-joint model space position data of its child joints.
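As a concrete illustration of formula 1, the following sketch computes one joint's model space position from its parent's model space data (scipy rotations are used for brevity; the function and variable names are illustrative and not taken from the patent):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def model_space_position(pos_local, parent_rot, parent_pos):
    """Formula 1: pos = rot_p * pos_local + pos_p.
    parent_rot is the parent joint's model space rotation (a scipy Rotation) and
    parent_pos its model space position; for the root joint the unit quaternion
    and (0, 0, 0) are used instead."""
    return parent_rot.apply(pos_local) + parent_pos

# Root joint example: identity rotation and zero position stand in for rot_p / pos_p.
root_pos = model_space_position(np.array([0.0, 1.0, 0.0]), R.identity(), np.zeros(3))
# The returned pos is then passed down as pos_p when the joint's children are processed.
```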
Continuing to refer to step S1043, when calculating model space rotation data corresponding to each joint of the virtual character model, sequentially traversing the relevant joint names of the virtual character model from the skeleton root node of the virtual character model, determining whether each joint name appears in the binding posture space data, and calculating model space rotation data of the corresponding joint in different manners based on the determination result.
In an optional embodiment of the present invention, if a joint name does not appear in the binding posture space data, model space rotation data of a corresponding joint is calculated based on the binding posture space rotation data of a parent joint name corresponding to the absent joint name.
In this step, if the joint name does not appear in the binding posture space data, it indicates that the joint corresponding to the joint name does not have AR data from the augmented reality platform. In this case, formula 2: rot = rot_p * rot_local may be adopted to calculate the model space rotation data rot of the corresponding joint.
Wherein, if any joint is the root joint (i.e., it has no parent joint), rot_p represents the unit quaternion and rot_local represents the binding posture space rotation data rot_m of that joint (i.e., the rotation of the joint in the model space in the binding posture). If any joint is a child joint, rot_p represents the model space rotation data of the parent joint of that joint calculated according to formula 2, and rot_local represents the relative rotation of that joint under the parent joint space in the binding posture.
In another optional embodiment of the present invention, if a joint name appears in the binding posture space data, model space rotation data of the corresponding joint is calculated based on the binding posture space rotation data of the appearing joint name.
In this step, if a joint name appears in the binding posture space data, it indicates that the joint corresponding to the joint name has AR data from the augmented reality platform.
In the process of calculating the model space rotation data of the corresponding joint in this case, first, the inverse matrix of the binding posture matrix M_T of the joint corresponding to the appeared joint name is multiplied by the current frame posture matrix M_m of the corresponding joint to obtain a new current frame posture matrix M, that is, M = M_T^(-1) * M_m.
Then, the rotation data in the new current frame posture matrix M is retained and the position data is deleted to obtain a rotation matrix, and the rotation matrix is calculated according to a quaternion conversion manner to obtain the intermediate rotation data rot_ar. Finally, the intermediate rotation data rot_ar is multiplied by the binding posture space rotation data rot_m to obtain the model space rotation data rot of the corresponding joint, that is, rot = rot_ar * rot_m.
In one embodiment of the invention, the rot_local currently used to calculate the model space rotation data of any joint is the rotation offset of that joint relative to its parent joint. Assume that the rotation offset of the joint is denoted rotOffset, the parent joint of the current joint is denoted parentBone, and the current joint is denoted bone. If the joint has a parent joint, rotOffset is the inverse of the parentBone rotation matrix multiplied by the bone rotation matrix; if the joint has no parent joint (i.e., it is the root node), rotOffset is the bone rotation matrix.
As can be seen from the process of calculating the model space rotation data rot corresponding to each joint of the virtual character model, the model space rotation data rot calculated in the embodiment of the present invention is not only the final calculation result of the corresponding joint, but also needs to be passed as a parameter to the child joints of that joint and used as their rot_p in the calculation. That is, the calculation recurses over each child joint of any joint, and the model space rotation data corresponding to that joint is passed in, so that the model space rotation data corresponding to any joint can serve as the parent-joint model space rotation data of its child joints.
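The two rotation branches described above can be sketched as follows (bind_pose and current_frame are the per-joint matrix dictionaries assumed earlier; all other names are illustrative assumptions rather than the patent's identifiers):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def model_space_rotation(joint, parent_rot, rot_local, bind_pose, current_frame):
    """Return the joint's model space rotation as a scipy Rotation.
    parent_rot is the parent joint's model space rotation (unit quaternion for the root);
    rot_local stands for the joint's binding posture rotation data used by the branch
    (rot_m when AR data exists, the parent-relative binding rotation otherwise)."""
    if joint in current_frame:                                       # the joint has AR data
        m = np.linalg.inv(bind_pose[joint]) @ current_frame[joint]   # M = M_T^(-1) * M_m
        rot_ar = R.from_matrix(m[:3, :3])                            # keep rotation, drop position
        return rot_ar * rot_local                                    # rot = rot_ar * rot_m
    return parent_rot * rot_local                                    # formula 2: rot = rot_p * rot_local
```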
According to the embodiment of the invention, the matrix data (denoted Data-Skeleton S) of the root node in the human Scene space of the augmented reality platform can also be acquired, so that the Transform of the virtual character model can be adjusted by using Data-Skeleton S; that is, the root node is used to adjust the overall position and rotation of the virtual character model, while the other child nodes are used to control the position and rotation of each joint of the virtual character model.
By traversing all joints in the skeleton of the virtual character model in a pre-order traversal manner, the embodiment of the invention can effectively ensure that the model space position data pos and the model space rotation data rot of a parent node are calculated before those of its child nodes.
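Building on the model_space_position and model_space_rotation helpers sketched above, the pre-order traversal itself could look like the following (the Bone layout with pos_local, rot_local and children fields is an assumption made for this illustration):

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np
from scipy.spatial.transform import Rotation as R

@dataclass
class Bone:
    pos_local: np.ndarray              # binding posture position relative to the parent joint
    rot_local: R                       # binding posture rotation data used by the two branches
    children: List[str] = field(default_factory=list)

def traverse(joint, parent_rot, parent_pos, skeleton, bind_pose, current_frame, out):
    """Pre-order traversal: a joint's model space data is computed before its
    children and is then passed down as the children's rot_p / pos_p."""
    bone = skeleton[joint]
    rot = model_space_rotation(joint, parent_rot, bone.rot_local, bind_pose, current_frame)
    pos = model_space_position(bone.pos_local, parent_rot, parent_pos)
    out[joint] = (rot, pos)
    for child in bone.children:
        traverse(child, rot, pos, skeleton, bind_pose, current_frame, out)

# Starting from the skeleton root with the unit quaternion and (0, 0, 0):
# results = {}
# traverse("hips", R.identity(), np.zeros(3), skeleton, bind_pose, current_frame, results)
```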
Because the AR data acquired by the augmented reality platform has a certain degree of jitter, in order to obtain a smoother data result, in an embodiment of the invention, interpolation can be performed between the model space data of a joint in the current frame and the model space data of the same joint in the previous frame. Therefore, after the model space data corresponding to each joint of the virtual character model is calculated according to the preset algorithm based on the binding posture space data in step S1043, the previous-frame model space data corresponding to the joint whose model space data has just been calculated is obtained first. Then, linear interpolation is performed between the current-frame model space data and the previous-frame model space data of the same joint. The model space data comprises the model space position data pos and the model space rotation data rot; the embodiment of the invention performs linear interpolation between the current-frame data and the previous-frame data of the model space position data of the same joint, and between the current-frame data and the previous-frame data of the model space rotation data of the same joint, thereby effectively preventing the model space data from jittering and obtaining smoother model space data.
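A minimal smoothing sketch is shown below, assuming positions are numpy vectors and rotations are quaternion arrays (a scipy Rotation can be converted with .as_quat()); normalized linear interpolation of quaternions is used here as a simple stand-in for the linear interpolation of rotation data described above:

```python
import numpy as np

def lerp_position(prev_pos, curr_pos, t=0.5):
    # linear interpolation between the previous-frame and current-frame positions
    return (1.0 - t) * prev_pos + t * curr_pos

def lerp_rotation(prev_q, curr_q, t=0.5):
    # normalized linear interpolation of quaternions; flipping the sign keeps the
    # interpolation on the shorter arc between the two orientations
    if np.dot(prev_q, curr_q) < 0.0:
        curr_q = -curr_q
    q = (1.0 - t) * prev_q + t * curr_q
    return q / np.linalg.norm(q)
```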
In an embodiment of the present invention, when step S1044 is executed to drive the limbs of the virtual character model to generate the motion by using the model space data, the model space data may be converted into the corresponding joint space (bone space) of the virtual character model to obtain the bone animation data (animation data), and then the converted bone animation data is used to drive the limbs of the virtual character model to generate the motion.
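One common way to express a model space transform in the corresponding joint (bone) space is to make it relative to the parent joint's model space transform; the sketch below illustrates that idea and is an assumption, not the engine's exact conversion:

```python
from scipy.spatial.transform import Rotation as R

def to_joint_space(rot_model, pos_model, parent_rot_model, parent_pos_model):
    """Convert a joint's model space rotation/position into its local joint
    (bone) space, i.e. relative to the parent joint's model space transform."""
    inv_parent = parent_rot_model.inv()
    rot_local = inv_parent * rot_model
    pos_local = inv_parent.apply(pos_model - parent_pos_model)
    return rot_local, pos_local
```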
Referring to step S106 above, in an embodiment of the present invention, the limb action expression package of the virtual character generated based on the limb action of the virtual character model may be a dynamic expression package or a static expression package. For example, when generating the dynamic emoticon of the virtual character, the continuous partial body motion content may be intercepted from the body motion of the virtual character model, and then the dynamic body motion emoticon of the virtual character may be generated according to the intercepted partial body motion content. Of course, a dynamic limb action expression package can also be generated according to the complete limb action generated by the limb skeleton data driving virtual character model.
Assume that the body motion of the user collected by the terminal device is the coherent motion of raising both hands and clapping. After the collected body motion is converted into limb skeleton data by the augmented reality platform, the embodiment of the invention uses the obtained limb skeleton data to generate the limb action expression packet of the virtual character, which can be the intercepted clapping part of the motion or the complete motion of raising both hands and clapping.
For another example, when generating the static emoticon of the virtual character, at least one frame of the body motion picture may be captured from the body motion of the virtual character model, and then the static body motion emoticon of the virtual character may be generated according to the captured at least one frame of the body motion picture.
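As an illustration of packaging the intercepted content, the sketch below assumes the driven model has already been rendered into a list of PIL images and simply writes a dynamic package as an animated GIF or a static package as a single image; the file formats and function names are assumptions, not requirements of the method:

```python
from PIL import Image  # Pillow

def make_dynamic_package(frames, out_path="emote.gif", fps=15):
    """frames: rendered screenshots (PIL.Image) of the intercepted limb action."""
    frames[0].save(out_path, save_all=True, append_images=frames[1:],
                   duration=int(1000 / fps), loop=0)

def make_static_package(frame, out_path="emote.png"):
    """Save a single intercepted limb action picture as a static package."""
    frame.save(out_path)
```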
In the embodiment of the invention, the user can also share the generated limb action expression package to the friends through social software, so that the limb action expression package simulating the character can be shared with the friends, and the interaction among the friends is further increased.
In an embodiment of the invention, in a game scene, a user can invite friends to participate in the generation process of the emoticon.
The embodiment of the invention can set different station templates for different game scenes in combination with the contents of each game scene (such as characters, plants, buildings and other objects). Each station template is provided with corresponding action keywords, and each action keyword corresponds to a different character standing position, so that the positions of the virtual character models of the user and the friends can be arranged according to the action keywords selected by the user and the friends.
For example, including "cherry blossom tree" in a game scene, a station template that sets up to this game scene is each one individual position in "cherry blossom tree" the left and right sides, 2 people's station templates promptly, and the action keyword of this station template includes single leg action, the action of squatting half, and single leg action corresponds "cherry blossom tree" left side position, and the action of squatting half corresponds "cherry blossom tree" right side position. Another station template is the next individual position of "cherry blossom tree", and the individual position of the left and right sides of "cherry blossom tree", 3 people's station templates promptly, and the action keyword of this station template includes single leg action, the action of partly squatting, jumps the action, and single leg action corresponds "cherry blossom tree" left side position, jumps the action and corresponds "cherry blossom tree" left side position, and the action of partly squatting corresponds the position under "cherry blossom tree".
A game scene can have a plurality of station templates, and the user can select a suitable station template according to the number of friends to be invited. For example, if the user needs to invite two friends to participate in generating the expression package, a 2-person station template may be selected, and if the user needs to invite three friends, a 3-person station template may be selected. There may be a plurality of 2-person station templates and a plurality of 3-person station templates, respectively.
After the user selects a target station template in the game scene, a message requesting participation in generating the expression package can be sent directly to the friends. After a friend feeds back agreement, the virtual character model of the invited friend enters the game scene where the virtual character model of the user is located, and the friend can select an action keyword from the selected station template. After the friend selects an action keyword, the friend's virtual character model is displayed directly at the position corresponding to that action keyword. After the user and the friends respectively perform the limb actions corresponding to their selected keywords, the respective virtual character models are controlled to generate the limb actions, so as to generate a limb action expression package common to the user and the friends.
Based on the same inventive concept, the embodiment of the invention also provides a device for generating the limb action expression packet of the virtual character. Fig. 3 is a schematic structural diagram of a limb action expression packet generation apparatus for a virtual character according to an embodiment of the present invention. Referring to fig. 3, the limb action expression packet generating apparatus of the virtual character includes an obtaining module 310, a driving module 320, and a generating module 330.
The obtaining module 310 is adapted to obtain limb skeleton data of a person.
And the driving module 320 is adapted to acquire a pre-created virtual character model, and drive the limb of the virtual character model to generate a motion by using the limb skeleton data.
The generating module 330 is adapted to generate a limb action profile package of the virtual character based on the limb actions of the virtual character model.
In an embodiment of the present invention, the obtaining module 310 is adapted to call an augmented reality platform interface to obtain the limb skeleton data of the person, where the limb skeleton data is obtained by the augmented reality platform converting the person's limb action data acquired by the terminal device.
In one embodiment of the invention, the called limb skeleton data comprises a skeleton matrix data set obtained by initializing a binding posture space through an augmented reality platform; and mapping the limb action data acquired by the terminal equipment to a human body model space through the augmented reality platform to obtain a skeleton matrix data set.
In an embodiment of the present invention, the driving module 320 is further adapted to obtain a pre-created virtual character model; converting limb skeleton data into binding posture space data of joints in an engine, wherein the binding posture space data comprises binding posture space position data and binding posture space rotation data; calculating model space data corresponding to each joint of the virtual character model according to a preset algorithm based on the binding attitude space data; and driving the limbs of the virtual character model to generate actions by using the model space data.
In an embodiment of the present invention, the model space data includes model space position data, and the driving module 320 is further adapted to sequentially traverse the related joint names of the virtual character model starting from the skeleton root node of the virtual character model, and adopt formula 1: pos = rot_p * pos_local + pos_p to calculate the model space position data corresponding to each joint of the virtual character model. Wherein, if any joint is the root joint (i.e., it has no parent joint), rot_p denotes the unit quaternion, pos_local represents the binding posture space position data of that joint, and pos_p is (0, 0, 0); if any joint is a child joint, rot_p represents the model space rotation data of the parent joint of that joint, pos_local represents the relative position of that joint under the parent joint space in the binding posture, and pos_p represents the model space position data of the parent joint of that joint calculated according to formula 1.
In an embodiment of the present invention, the model space data includes model space rotation data, and the driving module 320 is further adapted to sequentially traverse the related joint names of the virtual character model from the skeleton root node of the virtual character model, and determine whether each joint name appears in the binding posture space data; if the joint name appears in the binding posture space data, calculating model space rotation data of the corresponding joint based on the binding posture space rotation data of the appearing joint name; and if the joint name does not appear in the binding posture space data, calculating model space rotation data of the corresponding joint based on the binding posture space rotation data of the parent joint name corresponding to the joint name which does not appear.
In an embodiment of the present invention, the binding posture space data further includes a binding posture matrix of the joint and a current frame posture matrix of the joint, and the driving module 320 is further adapted to obtain a new current frame posture matrix by multiplying an inverse matrix of the binding posture matrix of the joint corresponding to the appeared joint name by the current frame posture matrix of the corresponding joint; reserving the rotation data in the new current frame attitude matrix, deleting the position data to obtain a rotation matrix, and calculating the rotation matrix according to a quaternion conversion mode to obtain intermediate rotation data; and multiplying the intermediate rotation data and the binding posture space rotation data to obtain model space rotation data of the corresponding joint.
In an embodiment of the present invention, the driving module 320 is further adapted to calculate, according to formula 2: rot = rot_p * rot_local, the model space rotation data of the corresponding joint, wherein, if any joint is the root joint (i.e., it has no parent joint), rot_p represents the unit quaternion and rot_local represents the binding posture space rotation data of that joint; if any joint is a child joint, rot_p represents the model space rotation data of the parent joint of that joint calculated according to formula 2, and rot_local represents the relative rotation of that joint under the parent joint space in the binding posture.
Referring to fig. 4, in an embodiment of the present invention, the apparatus for generating the limb action expression packet of the virtual character shown in fig. 3 further includes an obtaining module 340 and a calculating module 350.
The obtaining module 340 is adapted to obtain the previous frame of model space data corresponding to the joint after the model space data is calculated.
And the calculating module 350 is adapted to perform linear interpolation calculation on the current frame model space data and the previous frame model space data of the same joint.
In one embodiment of the present invention, the driver module 320 is further adapted to convert the model space data into bone animation data in a corresponding joint space of the virtual character model; and driving the limbs of the virtual character model to generate actions by using the transformed skeletal animation data.
In an embodiment of the present invention, the generating module 330 is further adapted to intercept continuous partial body motion content or at least one frame of body motion picture from the body motion of the virtual character model; and generating a limb action expression packet of the virtual character according to the intercepted part of the limb action content or at least one frame of the limb action picture.
Based on the same inventive concept, an embodiment of the present invention further provides a computer storage medium storing computer program code which, when run on a computing device, causes the computing device to execute the limb action expression packet generation method of the virtual character of any of the above embodiments.
Based on the same inventive concept, an embodiment of the present invention further provides a computing device, including: a processor; a memory storing computer program code; the computer program code, when executed by the processor, causes the computing device to perform the limb movement expression package generation method of the virtual character of any of the embodiments above.
It is clear to those skilled in the art that the specific working processes of the above-described systems, devices, modules and units may refer to the corresponding processes in the foregoing method embodiments, and for the sake of brevity, further description is omitted here.
In addition, the functional units in the embodiments of the present invention may be physically independent of each other, two or more functional units may be integrated together, or all the functional units may be integrated in one processing unit. The integrated functional units may be implemented in the form of hardware, or in the form of software or firmware.
Those of ordinary skill in the art will understand that: the integrated functional units, if implemented in software and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions, so that a computing device (for example, a personal computer, a server, or a network device) executes all or part of the steps of the method according to the embodiments of the present invention when the instructions are executed. And the aforementioned storage medium includes: u disk, removable hard disk, Read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disk, and other various media capable of storing program code.
Alternatively, all or part of the steps of implementing the foregoing method embodiments may be implemented by hardware (such as a personal computer, a server, or a network device) associated with program instructions, which may be stored in a computer-readable storage medium, and when the program instructions are executed by a processor of the computing device, the computing device executes all or part of the steps of the method according to the embodiments of the present invention.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments can be modified or some or all of the technical features can be equivalently replaced within the spirit and principle of the present invention; such modifications or substitutions do not depart from the scope of the present invention.

Claims (14)

1. A method for generating a limb action expression package of a virtual character is characterized by comprising the following steps:
acquiring limb skeleton data of a person;
acquiring a pre-created virtual character model, and driving limbs of the virtual character model to generate actions by utilizing the limb skeleton data;
and generating a limb action expression packet of the virtual character based on the limb action of the virtual character model.
2. The method of claim 1, wherein obtaining extremity skeletal data of the person comprises:
calling an augmented reality platform interface to obtain limb skeleton data of a person;
wherein the limb skeleton data is obtained by the augmented reality platform converting the person's limb action data acquired by the terminal device.
3. The method of claim 2, wherein the limb skeletal data invoked comprises:
initializing a skeleton matrix data set obtained by binding a posture space through the augmented reality platform; and
and mapping the limb action data acquired by the terminal equipment to a human body model space through the augmented reality platform to obtain a skeleton matrix data set.
4. The method according to any one of claims 1-3, wherein obtaining a pre-created virtual character model, and using the limb skeletal data to drive limb generation actions of the virtual character model comprises:
acquiring a pre-established virtual role model;
converting the limb skeleton data into binding posture space data of joints in an engine, wherein the binding posture space data comprises binding posture space position data and binding posture space rotation data;
calculating model space data corresponding to each joint of the virtual character model according to a preset algorithm based on the binding attitude space data;
and driving the limbs of the virtual character model to generate actions by utilizing the model space data.
5. The method of claim 4, wherein the model space data includes model space position data, and the calculation of the model space data corresponding to each joint of the virtual character model based on the binding posture space data according to a preset algorithm includes:
traversing the relevant joint names of the virtual character model in sequence, starting from the skeleton root node of the virtual character model, and adopting formula 1:
pos = rot_p * pos_local + pos_p
to calculate the model space position data corresponding to each joint of the virtual character model;
wherein, if any joint is the root joint (i.e., it has no parent joint), rot_p denotes the unit quaternion, pos_local represents the binding posture space position data of that joint, and pos_p is (0, 0, 0); if any joint is a child joint, rot_p represents the model space rotation data of the parent joint of that joint, pos_local represents the relative position of that joint under the parent joint space in the binding posture, and pos_p represents the model space position data of the parent joint of that joint calculated according to formula 1.
6. The method of claim 4, wherein the model space data includes model space rotation data, and the calculating of the model space data corresponding to each joint of the virtual character model based on the binding posture space data according to a preset algorithm includes:
sequentially traversing the joint names of the virtual character model from the skeleton root node of the virtual character model, and judging whether each joint name appears in the binding attitude space data;
if the joint name appears in the binding posture space data, calculating model space rotation data of the corresponding joint based on the binding posture space rotation data of the appearing joint name;
and if the joint name does not appear in the binding posture space data, calculating model space rotation data of the corresponding joint based on the binding posture space rotation data of the parent joint name corresponding to the joint name which does not appear.
7. The method of claim 6, wherein the binding posture space data further comprises a binding posture matrix of a joint and a current frame posture matrix of the joint, and the calculating of the model space rotation data of the corresponding joint based on the binding posture space rotation data of the appeared joint name comprises:
obtaining a new current frame attitude matrix by multiplying the inverse matrix of the binding attitude matrix of the joint corresponding to the appeared joint name by the current frame attitude matrix of the corresponding joint;
reserving the rotation data in the new current frame attitude matrix, deleting the position data to obtain a rotation matrix, and calculating the rotation matrix according to a quaternion conversion mode to obtain intermediate rotation data;
and multiplying the intermediate rotation data and the binding posture space rotation data to obtain model space rotation data of the corresponding joint.
8. The method of claim 6, wherein computing model spatial rotation data for a corresponding joint based on binding pose spatial rotation data for which no joint name occurs comprises:
according to formula 2: rot = rot_p * rot_local, calculating the model space rotation data of the corresponding joint; wherein, if any joint is the root joint (i.e., it has no parent joint), rot_p represents the unit quaternion and rot_local represents the binding posture space rotation data of that joint; if any joint is a child joint, rot_p represents the model space rotation data of the parent joint of that joint calculated according to formula 2, and rot_local represents the relative rotation of that joint under the parent joint space in the binding posture.
9. The method of claim 4, wherein after calculating model space data corresponding to each joint of the virtual character model according to a preset algorithm based on the binding posture space data, the method further comprises:
obtaining the previous frame of model space data corresponding to the joint after calculating the model space data;
and carrying out linear interpolation calculation on the current frame model space data and the previous frame model space data of the same joint.
10. The method of claim 4, wherein using the model space data to drive the limb generation action of the virtual character model comprises:
converting the model space data into skeleton animation data in joint spaces corresponding to the virtual character models;
and driving the limbs of the virtual character model to generate actions by using the converted skeletal animation data.
11. The method of any of claims 1-3, wherein generating the limb action profile package for the virtual character based on the limb actions of the virtual character model comprises:
intercepting continuous partial body action content or at least one frame of body action picture from the body action of the virtual character model;
and generating a limb action expression packet of the virtual character according to the intercepted part of the limb action content or at least one frame of the limb action picture.
12. A limb action profile packet generation apparatus for a virtual character, comprising:
the acquisition module is suitable for acquiring limb skeleton data of a person;
the driving module is suitable for acquiring a pre-established virtual character model and driving limbs of the virtual character model to generate actions by utilizing the limb skeleton data;
and the generation module is suitable for generating the limb action expression packet of the virtual character based on the limb action of the virtual character model.
13. A computer storage medium storing computer program code which, when run on a computing device, causes the computing device to perform the limb action profile generation method of a virtual character of any of claims 1-11.
14. A computing device, comprising: a processor; a memory storing computer program code; the computer program code, when executed by the processor, causes the computing device to perform the limb action scenario package generation method of a virtual character of any of claims 1-11.
CN202110578668.4A 2021-05-26 2021-05-26 Method, device and storage medium for generating limb action expression packet of virtual character Pending CN113393561A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110578668.4A CN113393561A (en) 2021-05-26 2021-05-26 Method, device and storage medium for generating limb action expression packet of virtual character

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110578668.4A CN113393561A (en) 2021-05-26 2021-05-26 Method, device and storage medium for generating limb action expression packet of virtual character

Publications (1)

Publication Number Publication Date
CN113393561A (en) 2021-09-14

Family

ID=77619279

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110578668.4A Pending CN113393561A (en) 2021-05-26 2021-05-26 Method, device and storage medium for generating limb action expression packet of virtual character

Country Status (1)

Country Link
CN (1) CN113393561A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107274466A (en) * 2017-05-31 2017-10-20 珠海金山网络游戏科技有限公司 The methods, devices and systems that a kind of real-time double is caught
CN109000633A (en) * 2017-06-06 2018-12-14 大连理工大学 Human body attitude motion capture algorithm design based on isomeric data fusion
CN109671141A (en) * 2018-11-21 2019-04-23 深圳市腾讯信息技术有限公司 The rendering method and device of image, storage medium, electronic device
CN109816773A (en) * 2018-12-29 2019-05-28 深圳市瑞立视多媒体科技有限公司 A kind of driving method, plug-in unit and the terminal device of the skeleton model of virtual portrait
CN110472497A (en) * 2019-07-08 2019-11-19 西安工程大学 A kind of motion characteristic representation method merging rotation amount
CN112348933A (en) * 2020-11-18 2021-02-09 北京达佳互联信息技术有限公司 Animation generation method and device, electronic equipment and storage medium
US20210082179A1 (en) * 2019-08-13 2021-03-18 Texel Llc Method and System for Remote Clothing Selection

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107274466A (en) * 2017-05-31 2017-10-20 珠海金山网络游戏科技有限公司 The methods, devices and systems that a kind of real-time double is caught
CN109000633A (en) * 2017-06-06 2018-12-14 大连理工大学 Human body attitude motion capture algorithm design based on isomeric data fusion
CN109671141A (en) * 2018-11-21 2019-04-23 深圳市腾讯信息技术有限公司 The rendering method and device of image, storage medium, electronic device
CN109816773A (en) * 2018-12-29 2019-05-28 深圳市瑞立视多媒体科技有限公司 A kind of driving method, plug-in unit and the terminal device of the skeleton model of virtual portrait
CN110472497A (en) * 2019-07-08 2019-11-19 西安工程大学 A kind of motion characteristic representation method merging rotation amount
US20210082179A1 (en) * 2019-08-13 2021-03-18 Texel Llc Method and System for Remote Clothing Selection
CN112348933A (en) * 2020-11-18 2021-02-09 北京达佳互联信息技术有限公司 Animation generation method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
KR102055995B1 (en) Apparatus and method to generate realistic rigged three dimensional (3d) model animation for view-point transform
Chen et al. KinÊtre: animating the world with the human body
CN108961367A (en) The method, system and device of role image deformation in the live streaming of three-dimensional idol
JP7268071B2 (en) Virtual avatar generation method and generation device
EP1606698A2 (en) Apparatus and method for generating behaviour in an object
KR102374307B1 (en) Modification of animated characters
CN110178158A (en) Information processing unit, information processing method and program
CN111530088B (en) Method and device for generating real-time expression picture of game role
Lupetti et al. Phygital play HRI in a new gaming scenario
WO2023216646A1 (en) Driving processing method and apparatus for three-dimensional virtual model, device, and storage medium
KR101996973B1 (en) System and method for generating a video
CN115331265A (en) Training method of posture detection model and driving method and device of digital person
WO2017190213A1 (en) Dynamic motion simulation methods and systems
US9652879B2 (en) Animation of a virtual object
CN113393561A (en) Method, device and storage medium for generating limb action expression packet of virtual character
Lin et al. PuppetTalk: Conversation between glove puppetry and internet of things
US20230120883A1 (en) Inferred skeletal structure for practical 3d assets
CN115526967A (en) Animation generation method and device for virtual model, computer equipment and storage medium
CN109360274A (en) Immersive VR construction method, device, intelligent elevated table and storage medium
Roth et al. Avatar Embodiment, Behavior Replication, and Kinematics in Virtual Reality.
Lin et al. Temporal IK: Data-Driven Pose Estimation for Virtual Reality
Suguitan et al. What is it like to be a bot? Variable perspective embodied telepresence for crowdsourcing robot movements
CN111009022B (en) Model animation generation method and device
KR20240055025A (en) Inferred skeletal structures for practical 3D assets
CN113908553A (en) Game character expression generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210914

Assignee: Beijing Xuanguang Technology Co.,Ltd.

Assignor: Perfect world (Beijing) software technology development Co.,Ltd.

Contract record no.: X2022990000514

Denomination of invention: Method, device and storage medium for generating body action expression package of virtual character

License type: Exclusive License

Record date: 20220817

EE01 Entry into force of recordation of patent licensing contract