CN114519758A - Method and device for driving virtual image and server - Google Patents

Method and device for driving virtual image and server

Info

Publication number
CN114519758A
Authority
CN
China
Prior art keywords
data
topological
image
joint
collision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210188526.1A
Other languages
Chinese (zh)
Inventor
周凡
Current Assignee
Guangzhou Huya Technology Co Ltd
Original Assignee
Guangzhou Huya Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Huya Technology Co Ltd filed Critical Guangzhou Huya Technology Co Ltd
Priority to CN202210188526.1A
Publication of CN114519758A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G06T 13/20 - 3D [Three Dimensional] animation
    • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/01 - Social networking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Primary Health Care (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • General Health & Medical Sciences (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present application provide a method, an apparatus, and a server for driving an avatar, relating to the technical field of video image processing. A first topological image and a second topological image are constructed from the human-body-detection-algorithm side and the target-avatar side respectively, and through synchronous restoration of action posture data the target avatar corresponding to the second topological image performs the same action as the target person. Compared with prior-art schemes that drive the target avatar directly with human motion data obtained from the target person by a human body detection algorithm, the technical scheme of the present application overcomes the differences in driving effect that arise when the same human motion data drives target avatars with different body characteristics, optimizes the driving effect of the target avatar, and improves the interactive experience. In addition, the process involves no complex algorithms, occupies few resources, takes little time, and is compatible with different human body detection algorithms and different target avatars.

Description

Method and device for driving virtual image and server
Technical Field
The present application relates to the technical field of internet live streaming, and in particular to a method, a device, and a server for driving an avatar.
Background
With the continuous development of mobile internet and network communication technology, live webcasting has grown rapidly and is widely used in people's daily work and life. For example, a user may watch live content provided by the various anchors of a live-streaming platform online through a device such as a smartphone, computer, or tablet, or may provide live content on a platform at any time and place through such a device for others to watch.
Currently, a common live-streaming mode is real-person interactive streaming: live footage of a real anchor is captured and presented in a live room, where viewers can interact with the anchor by sending bullet comments, liking, sending gifts, co-hosting via mic-link, and so on. In addition, for some specific scenarios, virtual streaming based on an avatar is also widely used to provide a more diversified live experience. Compared with real-person streaming, virtual streaming does not require the anchor to appear on camera for live interaction; the anchor can interact with viewers from behind the scenes by controlling an avatar that simulates the anchor's behavior.
In avatar-based streaming, the anchor communicates and interacts with viewers through the avatar; for example, an avatar (e.g., a cartoon character) that follows the anchor's limb movements can play games with viewers to enhance the live effect. To achieve a better live effect and user experience, effectively driving the avatar to perform the same, or almost the same, interactive actions as the anchor is a matter of great concern to those skilled in the art.
Disclosure of Invention
In order to solve the above technical problem, the present application provides a method, an apparatus and a server for driving an avatar.
In a first aspect, an embodiment of the present application provides a method for driving an avatar, where the method includes:
acquiring human motion data of a target person based on a human body detection algorithm, and migrating the human motion data into a generic skeleton topology that has a joint mapping relationship with the human body detection algorithm, to obtain a first topological image;
acquiring body data of a target avatar, and obtaining a second topological image based on a mapping relationship between the body data and the generic skeleton topology, wherein the drive pose in the human body detection algorithm is semantically aligned with the drive pose of the target avatar;
synchronously restoring action posture data of the second topological image based on action posture data of the first topological image; and
driving the target avatar to move according to the restored action posture data of the second topological image.
In a possible implementation, before the step of acquiring human motion data of a target person based on a human body detection algorithm and migrating the human motion data into a generic skeleton topology that has a joint mapping relationship with the human body detection algorithm to obtain a first topological image, the method further includes:
creating the generic skeleton topology, wherein the generic skeleton topology includes a head, four limbs, fingers, and expression bases; and
defining the number of joints in the generic skeleton topology, the semantics of each joint, and the hierarchical relationships between different joints.
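The three definitions above (joint count, joint semantics, joint hierarchy) can be sketched as a parent map plus a chain query. This is an illustrative sketch only; all joint names are invented and not taken from the patent.

```python
# Hypothetical generic skeleton topology: joint name (semantics) -> parent
# joint (hierarchy); None marks the root. The set of keys fixes the count.
GENERIC_SKELETON = {
    "hips": None,
    "spine0": "hips", "spine1": "spine0", "spine2": "spine1", "spine3": "spine2",
    "neck": "spine3", "head": "neck",
    "left_shoulder": "spine3", "left_elbow": "left_shoulder",
    "left_wrist": "left_elbow", "left_finger": "left_wrist",
    "left_hip": "hips", "left_knee": "left_hip", "left_ankle": "left_knee",
}

def joint_chain(joint):
    """Walk from a joint up to the root and return the kinematic chain
    in root-to-tip order (the hierarchical relationship between joints)."""
    chain = []
    while joint is not None:
        chain.append(joint)
        joint = GENERIC_SKELETON[joint]
    return list(reversed(chain))
```

For example, `joint_chain("left_finger")` yields the arm chain from the root out to the fingertip.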
In a possible implementation, before the step of acquiring human motion data of a target person based on a human body detection algorithm and migrating the human motion data into a generic skeleton topology that has a joint mapping relationship with the human body detection algorithm to obtain a first topological image, the method further includes:
creating a first mapping relationship between joints in the human body detection algorithm and joints in the generic skeleton topology, to obtain a first configuration file;
creating a second mapping relationship between joints in the target avatar and joints in the generic skeleton topology, to obtain a second configuration file; and
creating, based on a drive pose common to the human body detection algorithm and the target avatar, drive-pose information and collider parameter information generated according to the characteristics of the target avatar, to obtain a third configuration file.
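The three configuration files above might look like the following minimal sketch. All keys, joint names, and parameter values are invented for illustration; the patent does not specify a file format.

```python
# First configuration file: detection-algorithm joint -> generic-topology joint.
first_config = {
    "Spine1": "spine0", "Spine2": "spine2", "Spine3": "spine3",
    "LArm": "left_shoulder",
}

# Second configuration file: avatar joint -> generic-topology joint.
second_config = {
    "Bip_Spine_A": "spine0", "Bip_Spine_B": "spine1",
}

# Third configuration file: the shared drive pose plus per-avatar collider
# parameters (shape, size, and the priority used later in collision restoration).
third_config = {
    "drive_pose": {"left_shoulder": (0.0, 0.0, 0.0)},  # local rotation, e.g. a T-pose
    "colliders": {"head": {"shape": "sphere", "radius": 0.12, "priority": 0}},
}
```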
In a possible implementation, the step of acquiring human motion data of a target person based on a human body detection algorithm and migrating the human motion data into a generic skeleton topology that has a joint mapping relationship with the human body detection algorithm to obtain a first topological image includes:
migrating, based on the first mapping relationship, the human motion data into the generic skeleton topology to obtain the first topological image, wherein the human motion data includes joint local rotation data and joint local translation data of the target person;
and the step of acquiring body data of the target avatar and obtaining a second topological image based on the mapping relationship between the body data and the generic skeleton topology includes:
migrating, based on the second mapping relationship, the body data into the generic skeleton topology to obtain the second topological image, wherein the body data includes joint local translation data of the avatar.
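The migration step can be sketched as copying per-joint data through a mapping table. This is a minimal sketch under the assumption that motion data is stored as per-joint dictionaries; the names and data shapes are illustrative, not the patent's.

```python
def build_first_topology(human_rotations, human_translations, first_mapping):
    """Migrate detection-algorithm motion data onto the generic topology:
    for every detector joint, copy its local rotation and local translation
    to the generic joint with the same semantics."""
    topo = {}
    for src_joint, generic_joint in first_mapping.items():
        topo[generic_joint] = {
            "rotation": human_rotations[src_joint],
            "translation": human_translations[src_joint],
        }
    return topo
```

The second topological image would be built analogously from the avatar's joint local translations through the second mapping relationship.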
In a possible implementation, the synchronous restoration includes a foot-slide restoration step and a collision restoration step, and the step of synchronously restoring the action posture data of the second topological image based on the action posture data of the first topological image includes:
obtaining, based on the human motion data, touchdown information indicating whether each foot of the first topological image is in contact with the ground, and performing foot-slide restoration on the second topological image according to the human motion data, root-joint translation data, joint local translation data of the target avatar, and the touchdown information; and
after the action posture data of the first topological image has been migrated to the second topological image, detecting collision relationships between different joint colliders in the second topological image, determining an obstacle and a collision-restoration object among the different joint colliders based on preset joint-collider priorities, and performing collision restoration on the collision-restoration object.
In a possible implementation, the step of obtaining, based on the human motion data, touchdown information indicating whether each foot of the first topological image is in contact with the ground, and performing foot-slide restoration on the second topological image according to the human motion data, root-joint translation data, joint local translation data of the target avatar, and the touchdown information includes:
obtaining, by forward kinematics based on the human motion data, the foot positions of the first topological image in each frame and the length of its lower body;
calculating the ratio of the distance between the position of the same foot in the current frame and in a preceding preset frame to the length of the lower body, and calculating the height difference between the two feet in the current frame;
comparing the ratio and the height difference with a first threshold and a second threshold respectively, and obtaining, from the comparison result, touchdown information indicating whether each foot of the first topological image is in contact with the ground;
scaling the root-joint translation data based on the ratio between the lower-body length of the first topological image and the lower-body length of the second topological image; and
modifying the joint kinematic chain of the lower body of the second topological image based on the touchdown information, so that the second topological image touches down or lifts off simultaneously with the first topological image.
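Two of the quantities in the steps above can be sketched directly: the displacement-to-lower-body-length ratio used against the first threshold, and the scaling of the root-joint translation by the lower-body-length ratio. A minimal sketch with invented function names:

```python
import math

def displacement_ratio(foot_now, foot_prev, lower_body_len):
    """Ratio of one foot's displacement between the current frame and a
    preceding frame to the lower-body length (compared with the first
    threshold when deciding touchdown)."""
    return math.dist(foot_now, foot_prev) / lower_body_len

def scale_root_translation(root_t, lower_len_first, lower_len_second):
    """Scale the root-joint translation by the ratio of the second topological
    image's lower-body length to the first's, so a shorter-legged avatar
    covers a proportionally shorter distance and its feet do not slide."""
    s = lower_len_second / lower_len_first
    return tuple(c * s for c in root_t)
```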
In a possible implementation, the step of comparing the ratio and the height difference with a first threshold and a second threshold respectively, and obtaining, from the comparison result, touchdown information indicating whether each foot of the first topological image is in contact with the ground includes:
if the ratio is smaller than the first threshold and the height difference is smaller than the second threshold, determining that both feet of the first topological image are in contact with the ground;
if the ratio is smaller than the first threshold and the height difference is not smaller than the second threshold, determining that the lower of the two feet of the first topological image is in contact with the ground; and
if the ratio is not smaller than the first threshold, determining that both feet of the first topological image are off the ground.
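The three rules above form a small decision function. A direct sketch (return labels are invented):

```python
def classify_touchdown(move_ratio, height_diff, first_threshold, second_threshold):
    """Decide which feet of the first topological image touch the ground,
    following the three comparison rules in the text."""
    if move_ratio < first_threshold and height_diff < second_threshold:
        return "both_down"        # both feet in contact with the ground
    if move_ratio < first_threshold:
        return "lower_foot_down"  # only the lower foot is in contact
    return "both_up"              # both feet off the ground
```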
In a possible implementation, the step of detecting, after the action posture data of the first topological image has been migrated to the second topological image, the collision relationships between different joint colliders in the second topological image, determining an obstacle and a collision-restoration object among the different joint colliders based on preset joint-collider priorities, and performing collision restoration on the collision-restoration object includes:
generating joint colliders on the joints of the second topological image;
calculating the collision depth and collision direction between different joint colliders of the second topological image after the action posture data of the first topological image has been migrated to the second topological image;
determining an obstacle and a collision-restoration object among the different joint colliders based on the preset joint-collider priorities, and calculating the post-restoration position of the collision-restoration object based on the collision depth and collision direction between the different joint colliders; and
adjusting, based on the post-restoration position of the collision-restoration object, the local rotation data of the joints in the joint kinematic chain where the collision-restoration object is located, to obtain the action posture data after collision restoration.
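For sphere-shaped joint colliders, collision depth, collision direction, and the priority-based push-out can be sketched as follows. This assumes sphere colliders and a smaller-number-means-higher-priority convention, neither of which the patent specifies; adjusting the joint kinematic chain (e.g., via inverse kinematics) to realize the new position is omitted.

```python
import math

def sphere_collision(center_a, radius_a, center_b, radius_b):
    """Collision depth and unit direction (from a toward b) between two
    sphere colliders; depth <= 0 means the colliders do not touch."""
    d = [b - a for a, b in zip(center_a, center_b)]
    dist = math.sqrt(sum(x * x for x in d))
    depth = (radius_a + radius_b) - dist
    direction = tuple(x / dist for x in d) if dist > 0 else (0.0, 0.0, 1.0)
    return depth, direction

def resolve(center_a, radius_a, prio_a, center_b, radius_b, prio_b):
    """The higher-priority collider is the obstacle; the lower-priority one
    is the collision-restoration object and is pushed out along the
    collision direction by the collision depth."""
    depth, direction = sphere_collision(center_a, radius_a, center_b, radius_b)
    if depth <= 0:
        return center_a, center_b
    if prio_a <= prio_b:  # a has higher priority: move b out
        center_b = tuple(c + depth * s for c, s in zip(center_b, direction))
    else:                 # b has higher priority: move a out
        center_a = tuple(c - depth * s for c, s in zip(center_a, direction))
    return center_a, center_b
```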
In a second aspect, an embodiment of the present application further provides an apparatus for driving an avatar, the apparatus including:
a first determination module, configured to acquire human motion data of a target person based on a human body detection algorithm, and migrate the human motion data into a generic skeleton topology that has a joint mapping relationship with the human body detection algorithm, to obtain a first topological image;
a second determination module, configured to acquire body data of a target avatar, and obtain a second topological image based on a mapping relationship between the body data and the generic skeleton topology, wherein the drive pose in the human body detection algorithm is semantically aligned with the drive pose of the target avatar;
a restoration module, configured to synchronously restore the action posture data of the second topological image based on the action posture data of the first topological image; and
a driving module, configured to drive the target avatar to move according to the restored action posture data of the second topological image.
In a third aspect, an embodiment of the present application further provides a server, the server including a processor, a communication unit, and a computer-readable storage medium connected through a bus system, where the communication unit is configured to connect to an electronic device to implement data interaction between the server and the electronic device, the computer-readable storage medium is configured to store programs, instructions, or code, and the processor is configured to execute the programs, instructions, or code in the computer-readable storage medium to implement the method for driving an avatar in any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing instructions that, when executed, cause a computer to perform the method for driving an avatar in the first aspect or any possible implementation of the first aspect.
Based on any of the above aspects, compared with prior-art schemes that drive the target avatar directly with human motion data obtained from the target person by a human body detection algorithm, the method, apparatus, and server for driving an avatar provided by the embodiments of the present application overcome the differences in driving effect that arise when the same human motion data drives target avatars with different body characteristics, optimize the driving effect of the target avatar, and improve the interactive experience.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be regarded as limiting the scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is an interaction scene schematic diagram of a driving system of an avatar provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for driving an avatar according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a driving posture and a schematic diagram of a collision volume in the driving posture according to an embodiment of the present disclosure;
FIG. 4 is a schematic flow chart illustrating the sub-steps of step S23 in FIG. 2;
FIG. 5 is a flowchart illustrating the sub-steps of step S231 in FIG. 4;
FIG. 6 is a flowchart illustrating the sub-steps of step S232 in FIG. 4;
FIG. 7 is a schematic diagram of a collision of a collider according to an embodiment of the present application;
FIG. 8 is a second schematic diagram of a collision of a collider according to the embodiment of the present application;
fig. 9 is a functional block diagram of a driving apparatus of an avatar provided in an embodiment of the present application;
fig. 10 is a schematic diagram of a possible structure of a server according to an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application. Additionally, it should be understood that the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some of the embodiments of the present application. It should be understood that the operations of the flow diagrams may be performed out of order, and steps without logical context may be performed in reverse order or simultaneously. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowchart.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
In order to solve the technical problems mentioned in the background, one possible prior-art implementation obtains human motion data (e.g., joint motion data) of the target person with a human body detection algorithm and then applies that data to the corresponding joints of the avatar, thereby driving the avatar.
In the above scheme, the inventors found that driving different avatars with the joint motion data of the same target person produces inconsistent driving effects, because the differences in body characteristics between avatars (e.g., a stockier or slimmer build, or different lengths of the same joint) are not taken into account; for example, the same joint motion data can cause mesh penetration, with the hands of a stout avatar clipping into its head or body more easily than those of a slim avatar. In addition, because the joint lengths defined in the human body detection algorithm differ from the lengths of the corresponding joints in the avatar, the positions of the end joints (e.g., the two feet) diverge after forward kinematics is applied, which causes the technical problem of foot sliding.
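The foot-sliding cause can be illustrated with a toy planar two-link leg: applying the same joint rotations to legs of different lengths under forward kinematics yields different foot positions. This is an illustrative sketch, not the patent's formulation.

```python
import math

def foot_x(hip_x, thigh_len, shin_len, hip_angle, knee_angle):
    """Planar 2-link forward kinematics: horizontal foot position of a leg,
    with angles in radians measured from the vertical."""
    knee_x = hip_x + thigh_len * math.sin(hip_angle)
    return knee_x + shin_len * math.sin(hip_angle + knee_angle)

# Same joint rotations, different leg lengths -> different foot positions:
# this mismatch between detector skeleton and avatar skeleton is what
# manifests as foot sliding.
```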
To solve the above problems in the prior art, and to facilitate understanding of the present application, a driving system for an avatar to which the present application may be applied is introduced first. It should be understood that the system introduced below merely illustrates one possible application scenario; the present application may also be applied in other scenarios.
Referring to fig. 1, fig. 1 is a schematic diagram of a possible interaction scenario of the avatar driving system provided by the present application. The avatar driving system 10 may include a server 100, an anchor terminal 200, and a viewer terminal 300 that are communicatively connected, where the server 100 may provide video image processing support for the anchor terminal 200; for example, a human body detection algorithm stored on the server 100 may process the live video to obtain human motion data.
In the embodiment of the present application, the anchor terminal 200 and the viewer terminal 300 may be, but are not limited to, a smartphone, a personal digital assistant, a tablet computer, a personal computer, a notebook computer, a virtual reality terminal device, an augmented reality terminal device, and the like. In a specific implementation, multiple anchor terminals 200 and viewer terminals 300 may access the server 100; only one anchor terminal 200 and two viewer terminals 300 are shown in fig. 1. The anchor terminal 200 and the viewer terminal 300 may have a live-streaming service program installed, for example an app or applet for internet live streaming used on a computer or smartphone.
In the embodiment of the present application, the server 100 may be a single physical server, or may be a server group composed of a plurality of physical servers for performing different data processing functions. The server group may be centralized or distributed (e.g., the server 100 may be a distributed system). In some possible embodiments, such as where the server 100 employs a single physical server, the physical server may be assigned different logical server components based on different business functions.
It will be appreciated that the live scene shown in fig. 1 is only one possible example, and in other possible embodiments, the live scene may include only a part of the components shown in fig. 1 or may also include other components.
The following describes, with reference to the application scenario shown in fig. 1, an exemplary method for driving an avatar according to an embodiment of the present application. Referring to fig. 2, the method may be executed by the server 100; the order of some steps may be interchanged according to actual needs, or some steps may be omitted. The detailed steps performed by the server 100 are described as follows.
Step S21: acquire human motion data of the target person based on the human body detection algorithm, and migrate the human motion data into a generic skeleton topology that has a joint mapping relationship with the human body detection algorithm, to obtain a first topological image.
The human body detection algorithm may be an optical motion-capture algorithm, an inertial motion-capture algorithm, a vision-based detection algorithm, and the like. In the scenario provided by this embodiment, it may be a detection algorithm based on RGB video images; for example, after the server 100 acquires a live video frame from the anchor terminal 200, it runs human body detection on the frame to obtain the human motion data of the target person. The algorithm defines each joint of the human body, and a mapping between the joints it defines and the joints of the generic skeleton topology can be established in advance so that joints with the same semantics correspond to each other. The generic skeleton topology can define a richer set of joints than the detection algorithm, which ensures that, during migration, the human motion data can be transferred completely onto the corresponding joints of the generic skeleton topology, avoiding any loss of motion data in the process.
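Because the generic topology is richer than the detector's joint set, lossless migration reduces to checking that every detector joint has a same-semantics counterpart. A hypothetical helper for that check:

```python
def unmapped_joints(detector_joints, joint_mapping):
    """Return the detector joints that lack a generic-topology counterpart;
    an empty result means no human motion data can be lost in migration."""
    return [j for j in detector_joints if j not in joint_mapping]
```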
Step S22: acquire body data of the target avatar, and obtain a second topological image based on the mapping relationship between the body data and the generic skeleton topology.
The target avatar may be a humanoid figure whose body structure is similar to a human body. The drive pose in the human body detection algorithm is semantically aligned with the drive pose of the target avatar, which ensures that, starting from the same static drive pose, the human motion data obtained by the detection algorithm can make the target avatar perform the same motion as the target person. The second topological image is derived from the mapping relationship between the joints in the target avatar's body data and the joints in the generic skeleton topology; again, the generic skeleton topology may define a richer set of joints than the target avatar, so that the second topological image has a body structure corresponding to the target avatar.
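Semantic alignment of drive poses means that detected joint rotations are expressed relative to the same static reference pose on both sides. A minimal sketch of this idea, assuming a toy one-angle-per-joint representation (a real system would use quaternions or rotation matrices):

```python
# Shared static drive pose (e.g. a T-pose), as reference angles per joint.
# Joint names and the scalar-angle representation are illustrative assumptions.
DRIVE_POSE = {"left_shoulder": 0.0, "left_elbow": 0.0}

def apply_motion(drive_pose, detected_rotation):
    """Final pose = shared drive pose + per-joint rotation offsets measured
    by the detector; because both skeletons start from the same drive pose,
    the same offsets produce the same motion on the avatar."""
    return {j: drive_pose[j] + detected_rotation.get(j, 0.0) for j in drive_pose}
```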
Step S23: synchronously restore the action posture data of the second topological image based on the action posture data of the first topological image.
In the embodiment of the present application, after the human motion data has been migrated to the generic skeleton topology, it forms a moving first topological image. The second topological image is then synchronously restored based on the action posture data of the first topological image, so that it performs the same action. Because the first and second topological images are obtained through different mapping relationships, directly migrating the action posture data of the first topological image onto the second would risk foot sliding and/or mesh penetration; synchronous restoration lets the second topological image perform the same action while avoiding these problems.
Step S24: drive the target avatar to move according to the restored action posture data of the second topological image.
In the embodiment of the present application, driving the target avatar based on the restored action posture data of the second topological image ensures that its action stays consistent with that of the target person and avoids the poor driving effect otherwise caused by differences in the avatar's body characteristics. After driving the target avatar, the server 100 may transmit the resulting video frames to the viewer terminal 300, where the avatar's action is displayed.
In the scheme provided by the embodiment of the present application, human motion data of the target person is first obtained by a human body detection algorithm and migrated into the generic skeleton topology to obtain a first topological image; a second topological image is then obtained from the mapping relationship between the target avatar's body data and the generic skeleton topology; the action posture data of the second topological image is synchronously restored using that of the first; finally, the target avatar is driven according to the restored action posture data. In this process, the first and second topological images are constructed from the detection-algorithm side and the target-avatar side respectively, and through synchronous restoration of action posture data the target avatar corresponding to the second topological image performs the same action as the target person without degraded driving effect. Compared with prior-art schemes that drive the target avatar directly with human motion data obtained from the target person by a human body detection algorithm, this overcomes the differences in driving effect when the same human motion data drives avatars with different body characteristics, optimizes the driving effect, and improves the interactive experience. Moreover, the process involves no complex algorithms (such as neural-network models), occupies few resources, takes little time, and is compatible with different human body detection algorithms and different target avatars.
Further, before step S21, the method for driving an avatar provided by the embodiment of the present application may further include a step of defining a general bone topology, which may be implemented as follows.
First, a generic bone topology is created.
In the embodiment of the present application, the skeleton topology is a human skeleton structure, and the general skeleton topology may include a head, four limbs, fingers, and an expression base, where the head and the four limbs are used for limb driving, the fingers are used for gesture driving, and the expression base is used for expression driving.
Then, the number of joints in the generic skeletal topology, the semantics of the individual joints and the hierarchical relationships between the different joints are defined.
In this embodiment of the present application, the number of joints defined in the general skeleton topology may be greater than the number of joints in the human body detection algorithm and in the avatar. Taking spinal joints as an example, the human body detection algorithm and the avatar may each define 3 spinal joints, while the general skeleton topology may define 4. Meanwhile, the same joint carries the same joint semantics in the human body detection algorithm, in the avatar, and in the general skeleton topology, which ensures that no joint data is lost when joint data from the human body detection algorithm or from the avatar is mapped into the general skeleton topology. Illustratively, the hierarchical relationship between different joints can be represented by a joint kinematic chain; taking the arm as an example, the joint kinematic chain of the arm can be represented as shoulder (shoulder)-elbow (elbow)-wrist (hand)-finger (finger).
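To make the joint-definition step concrete, the following minimal Python sketch shows one way a generic skeleton topology and its joint kinematic chains could be declared. The joint names, the four-spine-joint count, and the arm chain follow the examples above, but the exact data structure is an assumption, not something prescribed by the application.

```python
# Hypothetical declaration of a generic skeleton topology:
# each joint maps to its parent joint (None marks the root).
GENERIC_TOPOLOGY = {
    "hips": None,          # root joint: carries global translation/rotation
    "spine0": "hips",      # four spine joints, a superset of the three
    "spine1": "spine0",    # used by a typical detection algorithm
    "spine2": "spine1",
    "spine3": "spine2",
    "shoulder": "spine3",  # arm kinematic chain:
    "elbow": "shoulder",   # shoulder - elbow - hand - finger
    "hand": "elbow",
    "finger": "hand",
}

def kinematic_chain(joint):
    """Walk parent links from a joint up to the root and return
    the chain in root-to-tip order (the hierarchical relationship)."""
    chain = []
    while joint is not None:
        chain.append(joint)
        joint = GENERIC_TOPOLOGY[joint]
    return list(reversed(chain))
```

A mapping like this is enough to recover the hierarchy of any joint, e.g. `kinematic_chain("finger")` yields the full root-to-finger chain.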
Further, the driving method of the avatar provided by the embodiment of the present application may further include creating a mapping relationship between joints in the human body detection algorithm, joints in the general bone topology, and joints in the target avatar, and the specific process may be as follows.
Firstly, creating a first mapping relation between joints in a human body detection algorithm and joints in a universal skeleton topological structure to obtain a first configuration file;
then, creating a second mapping relation between the joints in the target virtual image and the joints in the universal skeleton topological structure to obtain a second configuration file;
and finally, based on a driving posture common to the human body detection algorithm and the target avatar, creating driving posture information and collision body parameter information generated according to the characteristics of the target avatar, to obtain a third configuration file.
In this process, the first configuration file and the second configuration file can be edited by a user. Illustratively, after the two files are edited manually, semantic alignment is performed according to joint names to detect whether any mapping between joints was established incorrectly during editing; when an incorrect mapping is found, an error reminder message is displayed to prompt the user to adjust the incorrectly mapped joints, thereby avoiding errors caused by manual editing. The third configuration file can be generated automatically from the driving posture common to the human body detection algorithm and the target avatar; its configuration information includes driving posture information and collision body parameter information generated according to the characteristics of the target avatar, where these characteristics include the driving posture and the body size of the target avatar. Referring to fig. 3, the driving posture may be an A-pose or a T-pose driving posture. Illustratively, the driving posture information may include posture data for the target avatar when switching between the A-pose and the T-pose driving postures, and the collision body parameter information may include parameters for generating collision bodies corresponding to the body of the target avatar.
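The first and second configuration files and the semantic-alignment check described above can be illustrated with a short sketch. All joint names below (including the "mixamorig:" avatar prefix) are hypothetical placeholders, not names taken from the application.

```python
# First configuration file (hypothetical): detection-algorithm joints -> generic joints.
ALGO_TO_GENERIC = {
    "Spine1": "spine0",
    "Spine2": "spine1",
    "Spine3": "spine3",
    "LeftShoulder": "shoulder",
}

# Second configuration file (hypothetical): target-avatar joints -> generic joints.
AVATAR_TO_GENERIC = {
    "mixamorig:Spine": "spine0",
    "mixamorig:Spine1": "spine1",
    "mixamorig:Spine2": "spine3",
    "mixamorig:LeftShoulder": "shoulder",
}

def check_semantic_alignment(mapping, generic_joints):
    """Semantic alignment by joint name: return every entry whose
    generic-side joint does not exist in the generic topology, i.e. an
    incorrectly established mapping that should trigger an error reminder."""
    return [(src, dst) for src, dst in mapping.items() if dst not in generic_joints]
```

In an editing tool, a non-empty return value would be surfaced as the error reminder message mentioned above.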
In the embodiment of the application, automatic analysis of an uploaded target avatar is supported to obtain animation data of the target avatar aligned with the driving posture of the human body detection algorithm. The animation data comprises joint local rotation data and joint local translation data, and the target avatar can perform the same action as the target person by superimposing the repaired action posture data of the second topological image onto this animation data.
As a possible implementation, step S21 in this embodiment of the application may migrate the human body action data into the general skeleton topology based on the first mapping relationship to obtain the first topological image, where the human body action data includes the joint local rotation data and joint local translation data of the target person. After the human body action data is migrated into the general skeleton topology, a first topological image consistent with the action of the target person is obtained.
As a possible implementation, step S22 in this embodiment of the present application may migrate the body data into the general skeleton topology based on the second mapping relationship to obtain the second topological image, where the body data includes the joint local translation data of the avatar.
Referring to fig. 4, in the embodiment of the present application, the synchronous repair includes a sliding step repair and a collision repair, and step S23 may be implemented as follows.
Step S231, obtaining touchdown information indicating whether the feet of the first topological image are in contact with the ground based on the human body action data, and performing sliding-step repair on the second topological image according to the human body action data, the root joint translation data, the joint local translation data of the target avatar, and the touchdown information.
The root joint translation data refers to translation data of the root joint in the general skeleton topology, where the root joint corresponds to the hip joint. In this embodiment, unlike other joints (such as finger joints), the translation data and rotation data of the root joint are global data. The touchdown information indicates one of three states: both feet grounded, a single foot grounded, or both feet off the ground.
Step S232, after the action posture data of the first topological image is migrated to the second topological image, detecting the collision relationships among different joint collision bodies in the second topological image, determining the obstacle and the collision-repair object among the different joint collision bodies based on preset joint collision body priorities, and performing collision repair on the collision-repair object.
Further, in the embodiment of the present application, please refer to fig. 5, step S231 may be implemented by the following sub-steps.
Sub-step S2311, obtaining the foot position and the length of the lower body of the first topological image in each frame image by using forward kinematics based on the human body action data.
In this sub-step, the foot position and the lower-body length of the first topological image in each frame image may be calculated by forward kinematics from the joint local rotation data and the joint local translation data.
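Sub-step S2311 can be sketched as a plain forward-kinematics pass down the leg chain. This is one illustrative reading of the sub-step; the joint names and data layout are assumed, not specified by the application.

```python
import numpy as np

def forward_kinematics(chain, local_rot, local_trans):
    """Compose local joint transforms down a kinematic chain and return the
    global position of the last joint (e.g. the toe).
    local_rot[j]:   3x3 local rotation matrix of joint j
    local_trans[j]: 3-vector local offset of joint j from its parent."""
    R = np.eye(3)
    p = np.zeros(3)
    for j in chain:
        p = p + R @ local_trans[j]   # child position in global space
        R = R @ local_rot[j]         # accumulate orientation
    return p

def lower_body_length(local_trans, chain=("hip", "knee", "ankle", "toe")):
    """Sum of bone lengths along the leg chain; rotation-invariant,
    so it can be computed once per skeleton."""
    return sum(np.linalg.norm(local_trans[j]) for j in chain[1:])
```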
Sub-step S2312, calculating the ratio of the distance moved by the same foot position between the current frame and a preceding preset frame to the length of the lower body, and calculating the height difference between the heights of the two feet in the current frame.
Illustratively, the ratio in this sub-step may be a ratio between a distance between the same foot position (e.g., the left ankle and the left toe) in the current frame and a previous preset frame (e.g., 5 frames ago) and a length of the lower body.
And a substep S2313 of comparing the ratio and the height difference value with a first threshold value and a second threshold value respectively and obtaining the touchdown information of whether the foot of the first topological image is in contact with the ground or not according to the comparison result.
Illustratively, if the ratio is smaller than the first threshold and the height difference is smaller than the second threshold, both feet of the first topological image are determined to be in contact with the ground; if the ratio is smaller than the first threshold and the height difference is not smaller than the second threshold, the lower foot of the first topological image is determined to be in contact with the ground; and if the ratio is not smaller than the first threshold, both feet of the first topological image are determined to be off the ground. The first threshold may be 0.05, and the second threshold may be 5 cm to 10 cm.
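The threshold comparison in sub-steps S2312 and S2313 might look as follows. The defaults (0.05 for the ratio, 0.07 m inside the stated 5-10 cm range) come from the example values above; the function name and return labels are invented for illustration.

```python
def touchdown_info(foot_move_dist, lower_body_len, left_h, right_h,
                   ratio_thresh=0.05, height_thresh=0.07):
    """Classify ground contact of the first topological image.
    foot_move_dist: distance the same foot position moved since the
                    preset earlier frame (e.g. 5 frames ago)
    left_h/right_h: heights of the two feet in the current frame."""
    ratio = foot_move_dist / lower_body_len
    if ratio >= ratio_thresh:
        return "both_off_ground"            # foot moved too far: airborne
    if abs(left_h - right_h) < height_thresh:
        return "both_feet_grounded"         # feet at roughly the same height
    return "left_grounded" if left_h < right_h else "right_grounded"
```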
In sub-step S2314, the root joint translation data is scaled based on a ratio between the length of the lower body of the first topology representation and the length of the lower body of the second topology representation.
Through this sub-step, the global motion of the first topology representation and the second topology representation can be matched.
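Sub-step S2314 reduces to a single scalar scaling. A minimal sketch, assuming the two lower-body lengths have already been computed as in sub-step S2311:

```python
def scale_root_translation(root_trans, src_leg_len, dst_leg_len):
    """Scale the global root-joint translation by the ratio of the second
    topological image's lower-body length to the first's, so that a stride
    of the source skeleton maps to a proportionate stride of the target."""
    s = dst_leg_len / src_leg_len
    return [c * s for c in root_trans]
```

A target with legs half as long thus receives half the root displacement per frame, which keeps its global motion matched to the source.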
Sub-step S2315, modifying the articulation chain of the lower body of the second topological representation based on the touchdown information such that the second topological representation touches down or off the ground simultaneously with the first topological representation.
In this sub-step, when a foot touchdown is detected, the position and orientation of that foot's ankle and foot are saved as an optimization target (Goal); in each subsequent frame, the foot position is moved to the optimization target by an inverse kinematics algorithm (such as the PBIK algorithm) until the foot is detected to leave the ground, and the optimization target is updated again at the next touchdown. Sliding-step repair mainly involves the joint kinematic chain of the lower body, such as crotch (leftUpperLeg)-knee (leftLowerLeg)-ankle (leftFoot)-toe (leftToe). During repair, the local rotation data of each joint on this chain is corrected so that the foot position of the target avatar stays fixed and does not slide, damage to the original animation is minimized, and the smoothness and naturalness of the motion are ensured.
Further, in the embodiment of the present application, referring to fig. 6, step S232 may be implemented by the following sub-steps.
In sub-step S2321, a joint collision volume is generated on the joint of the second topological representation.
In this sub-step, collision bodies for the respective joints, such as the T-pose collision bodies in fig. 3, may be generated on the joints of the second topological image based on the avatar's skinned mesh. A collision body may be a capsule or a sphere and is bound to its corresponding joint. The size of a collision body can be controlled by adjusting the skin-weight threshold: the collision body can be enlarged when the target avatar has a fat body shape and reduced when the target avatar has a thin body shape.
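As a hedged sketch of generating a joint collision body from the skinned mesh: vertices whose skin weight for a bone exceeds a threshold define the capsule radius, so raising or lowering the threshold shrinks or grows the collision body as described above. The specific fitting strategy below is an assumption; the application does not specify the algorithm.

```python
import numpy as np

def fit_capsule(vertices, weights, p1, p2, weight_thresh=0.5):
    """Fit a capsule collider to the vertices skinned to one bone.
    Only vertices with skin weight above weight_thresh contribute; p1, p2
    are the bone endpoints and become the capsule endpoints. The radius is
    the maximum perpendicular distance of a selected vertex to the bone axis."""
    sel = np.asarray(vertices)[np.asarray(weights) > weight_thresh]
    axis = p2 - p1
    axis = axis / np.linalg.norm(axis)
    rel = sel - p1
    along = rel @ axis                       # projection onto the bone axis
    perp = rel - np.outer(along, axis)       # perpendicular component
    radius = float(np.max(np.linalg.norm(perp, axis=1)))
    return p1, p2, radius
```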
And a substep S2322 of calculating the collision depth and the collision direction between different joint colliders of the second topological image after the motion posture data of the first topological image is transferred to the second topological image.
Since the collision bodies are bound to their corresponding joints, joint motion during driving also moves the collision bodies (for example, driving the hand also moves the hand collision body), so the collision state of each joint collision body can be computed while driving. In the embodiment of the present application, a collision body may be either a capsule or a sphere. Suppose a capsule is parameterized by endpoints P1, P2 and radius R, and a sphere by center v and radius r; let |·| denote the distance between two points, ||v-(P1,P2)|| the shortest distance from a point to a segment, and ||(P1,P2)-(P3,P4)|| the shortest distance between two segments. Two spheres collide when |v1-v2| ≤ r1+r2, with collision depth r1+r2-|v1-v2|; a sphere and a capsule collide when ||v-(P1,P2)|| ≤ R+r, with collision depth R+r-||v-(P1,P2)||; and two capsules collide when ||(P1,P2)-(P3,P4)|| ≤ R1+R2, with collision depth R1+R2-||(P1,P2)-(P3,P4)||.
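The three collision-depth formulas can be transcribed directly. For brevity the segment-to-segment distance is approximated here by sampling one segment; a production implementation would use the closed-form solution.

```python
import numpy as np

def point_segment_dist(v, p1, p2):
    """Shortest distance ||v - (P1,P2)|| from a point to a segment."""
    d = p2 - p1
    t = np.clip(np.dot(v - p1, d) / np.dot(d, d), 0.0, 1.0)
    return np.linalg.norm(v - (p1 + t * d))

def sphere_sphere_depth(v1, r1, v2, r2):
    """Collision depth r1 + r2 - |v1 - v2| (positive means overlap)."""
    return r1 + r2 - np.linalg.norm(v1 - v2)

def sphere_capsule_depth(v, r, p1, p2, R):
    """Collision depth R + r - ||v - (P1,P2)||."""
    return R + r - point_segment_dist(v, p1, p2)

def segment_segment_dist(p1, p2, p3, p4, samples=64):
    """Approximate shortest distance ||(P1,P2) - (P3,P4)|| by sampling
    points on the first segment (exact for parallel segments)."""
    ts = np.linspace(0.0, 1.0, samples)
    pts = p1 + np.outer(ts, p2 - p1)
    return min(point_segment_dist(q, p3, p4) for q in pts)

def capsule_capsule_depth(p1, p2, R1, p3, p4, R2):
    """Collision depth R1 + R2 - ||(P1,P2) - (P3,P4)||."""
    return R1 + R2 - segment_segment_dist(p1, p2, p3, p4)
```

A positive depth in any of the three cases signals a collision that the later sub-steps must repair.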
In sub-step S2323, the obstacle and the collision restoration are determined among different joint colliders based on the preset joint collider priorities, and the position of the collision restoration after collision restoration is calculated based on the collision depth and the collision direction between the different joint colliders.
And a substep S2324 of adjusting the local rotation data of the joint in the joint motion chain where the collision restoration object is located based on the position of the collision restoration object after collision restoration to obtain the motion attitude data after collision restoration.
Taking collisions between the limbs and other body parts as an example, the joint collision bodies of the other body parts (for example, the head and the torso) may be assigned a higher priority and those of the limbs a lower priority; in this embodiment, the other body parts then act as obstacles and the limbs act as collision-repair objects. Referring to fig. 7, when the hand collides with the head with collision depth and direction d1 and n1, and simultaneously with the chest with collision depth and direction d2 and n2, the final collision result is d = d1·n1 + d2·n2; if the hand position is p, the repaired position is p + d. The repaired position is used as the optimization target (Goal) for penetration repair, the hand position is then optimized toward the optimization target by an inverse kinematics algorithm, and the local rotation data of the kinematic chain shoulder (shoulder)-elbow (elbow)-hand (hand) is modified, so that penetration of the hand into other body parts is avoided. When limbs collide with each other, for example when the two hands intersect, referring to fig. 8, the collision direction may be either of the two directions; the collision results of the two hands are dm and -dm, their positions are p1 and p2, and their collision results against other body parts are d1 and d2 respectively, so the repaired positions of the two hands are p1 + d1 + dm/2 and p2 + d2 - dm/2. That is, both hands receive some position modification, and two-hand penetration is then optimized by the same flow as in fig. 7.
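The repair arithmetic in this paragraph can be sketched as follows; the function names are illustrative, not from the application.

```python
import numpy as np

def combined_repair(position, collisions):
    """Sum the per-obstacle corrections d_i * n_i (depth times unit normal)
    and apply them to the joint position, per the hand-vs-head/chest example."""
    d = sum(depth * np.asarray(n) for depth, n in collisions)
    return np.asarray(position) + d

def mutual_hand_repair(p1, p2, dm, d1, d2):
    """Split a hand-vs-hand collision result dm evenly between both hands,
    while also applying each hand's correction against the body (d1, d2):
    repaired positions are p1 + d1 + dm/2 and p2 + d2 - dm/2."""
    return p1 + d1 + dm / 2.0, p2 + d2 - dm / 2.0
```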
In the avatar driving method provided by the embodiment of the application, the first configuration file relates only to the human body detection algorithm, while the second and third configuration files relate only to the target avatar, so the human body detection algorithm side and the target avatar side are decoupled: provided the driving posture adopted by the human body detection algorithm is unchanged, replacing the human body detection algorithm requires updating only the first configuration file, and the second and third configuration files remain unchanged.
Further, referring to fig. 9, fig. 9 is a schematic diagram of the functional modules of a driving apparatus 400 of an avatar provided in the embodiment of the present application. The embodiment may divide the driving apparatus 400 into functional modules according to the method embodiment executed by the server; that is, the following functional modules may be used to execute the above method embodiments. The driving apparatus 400 may include a first determining module 410, a second determining module 420, a repair module 430, and a driving module 440, whose functions are described in detail below.
The first determining module 410 is configured to obtain human body motion data of a target person based on a human body detection algorithm, and migrate the human body motion data into a general bone topology structure having a joint mapping relationship with the human body detection algorithm to obtain a first topology image.
The human body detection algorithm may include an optical, inertial, or visual human body detection algorithm, among others. In the scenario provided in the embodiment of the present application, the human body detection algorithm may be one based on RGB video images; for example, after the server 100 acquires a live video frame image from the anchor terminal 200, it performs human body detection on the frame image to obtain the human body action data of the target person. The human body detection algorithm defines each joint of the human body. Mappings between the joints defined in the human body detection algorithm and the joints in the general skeleton topology can be pre-established to set up correspondences between joints with the same joint semantics, and the general skeleton topology can define a richer set of joints than the human body detection algorithm, ensuring that when the human body action data is migrated into the skeleton topology, it is completely transferred to the joints of the general skeleton topology corresponding to the joints of the human body detection algorithm, so no human body action data is lost during migration.
In this embodiment, the first determining module 410 may be configured to perform the step S21, and for the detailed implementation of the first determining module 410, reference may be made to the detailed description about the step S21.
The second determining module 420 is configured to obtain body data of the target avatar, and obtain a second topology avatar based on a mapping relationship between the body data and the general bone topology.
The target avatar may be a human-shaped avatar having a body structure similar to a human body, where the driving posture in the human body detection algorithm is semantically aligned with the driving posture of the target avatar at each detection, ensuring that the human body action data obtained by the human body detection algorithm can make the target avatar perform, starting from a static driving posture, the same action as the target person. A second topological image can be derived from the mapping relationship between the joints in the body data of the target avatar and the joints in the general skeleton topology. A richer set of joints than those of the target avatar may be defined in the general skeleton topology to ensure that the second topological image has a body structure corresponding to the target virtual object.
The second determining module 420 in this embodiment may be configured to perform the step S22 described above, and for the detailed implementation of the second determining module 420, reference may be made to the detailed description about the step S22 described above.
And the repairing module 430 is used for synchronously repairing the action posture data of the second topological image based on the action posture data of the first topological image.
In the embodiment of the application, after the human body action data is migrated into the general skeleton topology, a moving first topological image is formed, and the second topological image can be synchronously repaired based on the action posture data of the first topological image so that it performs the same action as the first topological image. Because the first and second topological images are obtained from different mapping relationships, directly migrating the action posture data of the first topological image to the second topological image may cause foot sliding and/or mesh penetration; synchronous repair enables the second topological image to perform the same action as the first topological image while avoiding this technical problem.
The repair module 430 in this embodiment may be configured to perform the step S23 described above, and for the detailed implementation of the repair module 430, reference may be made to the detailed description of the step S23 described above.
And the driving module 440 is configured to drive the target avatar to move according to the motion posture data of the repaired second topology avatar.
In this embodiment, the driving module 440 may be configured to perform the step S24, and reference may be made to the detailed description about the step S24 regarding the detailed implementation manner of the driving module 440.
It should be noted that the division of modules in the above apparatus or system is only a logical division; in an actual implementation, all or some modules may be integrated into one physical entity or may be physically separate. These modules may all be implemented in the form of software (e.g., open-source software) invoked by a processor, may all be implemented in hardware, or some may be implemented as processor-invoked software and others as hardware. As an example, the repair module 430 may be implemented by a single processor: it may be stored in a memory of the apparatus or system in the form of program code, and a processor of the apparatus or system calls and executes the functions of the repair module 430; the implementation of the other modules is similar and is not repeated here. In addition, the modules may be wholly or partially integrated together or implemented independently. The processor described here may be an integrated circuit with signal-processing capability; in implementation, each step or module of the above technical solutions may be realized as an integrated logic circuit in the processor or by executing a software program.
Referring to fig. 10, fig. 10 is a schematic diagram illustrating a hardware structure of a server 100 for implementing the driving method of the avatar according to an embodiment of the present disclosure. As shown in fig. 10, the server 100 may include a processor 110, a computer-readable storage medium 120, a bus 130, and a communication unit 140.
In a specific implementation, the processor 110 executes computer-executable instructions stored in the computer-readable storage medium 120 (for example, the respective modules of the driving apparatus 400 of the avatar shown in fig. 9), so that the processor 110 may execute the avatar driving method according to the above method embodiment, where the processor 110, the computer-readable storage medium 120, and the communication unit 140 may be connected through the bus 130.
For a specific implementation process of the processor 110, reference may be made to the above-mentioned method embodiments executed by the server 100, which implement the principle and the technical effect similarly, and no further description is given here in this embodiment of the application.
The computer-readable storage medium 120 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The computer-readable storage medium 120 is used to store programs or data.
The bus 130 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
In the interaction scenario provided by the embodiment of the present application, the communication unit 140 may be configured to communicate with the anchor terminal 200 and the viewer terminal 300, so as to implement data interaction between the server 100 and the anchor terminal 200 and the viewer terminal 300.
In addition, the embodiment of the present application further provides a readable storage medium, in which a computer executing instruction is stored, and when a processor executes the computer executing instruction, the method for driving the avatar as described above is implemented.
To sum up, according to the method, device, and server for driving an avatar provided by the embodiment of the application, human body action data of a target person is first obtained through a human body detection algorithm and migrated into a general skeleton topological structure to obtain a first topological image; a second topological image is then obtained based on the mapping relationship between the body data of the target avatar and the general skeleton topological structure; the action posture data of the second topological image is then synchronously repaired using the action posture data of the first topological image; and finally the target avatar is driven to move according to the repaired action posture data of the second topological image. In this process, the first topological image and the second topological image are constructed from the human body detection algorithm side and the target avatar side respectively, and by synchronously repairing the action posture data, the target virtual object corresponding to the second topological image can perform the same action as the target person without a degraded driving effect. Compared with prior-art schemes that drive the target avatar directly with the human body action data obtained from the target person by a human body detection algorithm, the technical scheme provided by the application overcomes the difference in driving effect when the same human body action data drives target avatars with different body characteristics, optimizes the driving effect of the target avatar, and improves the interactive experience.
In addition, the process involves no complex algorithms (such as neural network learning models), occupies few resources, takes little time, and is compatible with different human body detection algorithms and different target avatars.
The embodiments described above are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the detailed description of the embodiments of the present application provided in the accompanying drawings is not intended to limit the scope of the application, but is merely representative of selected embodiments of the application. Based on this, the protection scope of the present application shall be subject to the protection scope of the claims. Moreover, all other embodiments that can be made available by a person skilled in the art without making any inventive step based on the embodiments of the present application shall fall within the scope of protection of the present application.

Claims (10)

1. A method of driving an avatar, the method comprising:
acquiring human body action data of a target person based on a human body detection algorithm, and migrating the human body action data to a general skeleton topological structure having a joint mapping relation with the human body detection algorithm to obtain a first topological image;
acquiring body data of a target virtual image, and acquiring a second topological image based on the mapping relation between the body data and the universal skeleton topological structure, wherein the driving posture in the human body detection algorithm is aligned with the driving posture in the target virtual image semantically;
synchronously restoring the action posture data of the second topological image based on the action posture data of the first topological image;
and driving the target virtual image to move according to the repaired action posture data of the second topological image.
2. The avatar driving method of claim 1, wherein prior to said step of obtaining human motion data of a target person based on a human detection algorithm, migrating said human motion data into a common skeletal topology having a joint mapping relationship with said human detection algorithm, resulting in a first topological avatar, said method further comprises:
creating the universal bone topology, wherein the universal bone topology comprises a head, limbs, fingers and expression bases;
defining the number of joints in the generic skeletal topology, the semantics of each joint, and the hierarchical relationships between different joints.
3. The avatar driving method of claim 2, wherein prior to said step of obtaining human motion data of the target person based on a human detection algorithm, migrating said human motion data into a common skeletal topology having a joint mapping relationship with said human detection algorithm, resulting in a first topological avatar, said method further comprises:
creating a first mapping relation between joints in the human body detection algorithm and joints in the universal skeleton topological structure to obtain a first configuration file;
creating a second mapping relation between the joints in the target virtual image and the joints in the universal skeleton topological structure to obtain a second configuration file;
and based on the human body detection algorithm and the common driving posture of the target virtual image, creating driving posture information and collision body parameter information generated according to the characteristics of the target virtual image to obtain a third configuration file.
4. The avatar driving method of claim 3, wherein the step of obtaining human motion data of a target person based on a human detection algorithm and migrating the human motion data into a universal skeleton topology having a joint mapping relationship with the human detection algorithm to obtain a first topological avatar comprises:
migrating, based on the first mapping relationship, the human motion data into the universal skeleton topology to obtain the first topological avatar, wherein the human motion data comprises joint local rotation data and joint local translation data of the target person;
and the step of obtaining body data of the target avatar and obtaining a second topological avatar based on the mapping relationship between the body data and the universal skeleton topology comprises:
migrating, based on the second mapping relationship, the body data into the universal skeleton topology to obtain the second topological avatar, wherein the body data comprises joint local translation data of the avatar.
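The migration steps of claim 4 can be sketched as re-keying per-joint motion channels (local rotations and translations) from one naming scheme into the universal topology. The function and the channel layout below are assumptions for illustration, not the patent's implementation:

```python
def migrate(motion_data, mapping):
    """Re-key per-joint motion channels from source joint names to
    universal-topology joint names; joints absent from the mapping are dropped."""
    return {mapping[j]: channels
            for j, channels in motion_data.items()
            if j in mapping}

# One frame of detector output: a local rotation quaternion and a local translation.
frame = {"Hips": {"rot": (0.0, 0.0, 0.0, 1.0), "trans": (0.0, 0.9, 0.0)}}
mapping = {"Hips": "hips"}
universal_frame = migrate(frame, mapping)
```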
5. The avatar driving method according to any one of claims 1-4, wherein the synchronous restoration comprises a foot-slide restoration step and a collision restoration step, and the step of synchronously restoring the motion pose data of the second topological avatar based on the motion pose data of the first topological avatar comprises:
obtaining, based on the human motion data, ground-contact information indicating whether a foot of the first topological avatar is in contact with the ground, and performing foot-slide restoration on the second topological avatar according to the human motion data, root joint translation data, the joint local translation data of the target avatar and the ground-contact information;
and detecting collision relationships among different joint colliders in the second topological avatar after the motion pose data of the first topological avatar is migrated to the second topological avatar, determining an obstacle and a collision restoration object among the different joint colliders based on preset joint collider priorities, and performing collision restoration on the collision restoration object.
6. The avatar driving method of claim 5, wherein the step of obtaining, based on the human motion data, ground-contact information indicating whether a foot of the first topological avatar is in contact with the ground, and performing foot-slide restoration on the second topological avatar according to the human motion data, root joint translation data, the joint local translation data of the target avatar and the ground-contact information, comprises:
obtaining, by forward kinematics based on the human motion data, the foot positions of the first topological avatar in each frame and the length of its lower body;
calculating the ratio of the distance between the same foot position in the current frame and a preceding preset frame to the length of the lower body, and calculating the height difference between the heights of the two feet in the current frame;
comparing the ratio and the height difference with a first threshold and a second threshold respectively, and obtaining, according to the comparison result, the ground-contact information indicating whether a foot of the first topological avatar is in contact with the ground;
scaling the root joint translation data based on the ratio between the lower-body length of the first topological avatar and the lower-body length of the second topological avatar;
and modifying a joint chain of the lower body of the second topological avatar based on the ground-contact information, so that the second topological avatar touches or leaves the ground simultaneously with the first topological avatar.
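The root-translation scaling step of claim 6 keeps a short-legged avatar from overstepping the captured motion: the source translation is multiplied by the ratio of the two lower-body lengths. A sketch with hypothetical numbers:

```python
def scale_root_translation(root_trans, src_lower_body_len, dst_lower_body_len):
    """Scale the source root joint translation by the ratio of the target
    avatar's lower-body length to the source's, per the foot-slide
    restoration step of claim 6."""
    s = dst_lower_body_len / src_lower_body_len
    return tuple(c * s for c in root_trans)

# The person's lower body is 1.0 m, the avatar's is 0.5 m: translations halve,
# so the avatar's stride matches its own leg length instead of the person's.
scaled = scale_root_translation((0.2, 0.0, 0.4), 1.0, 0.5)
```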
7. The avatar driving method of claim 6, wherein the step of comparing the ratio and the height difference with a first threshold and a second threshold respectively, and obtaining, according to the comparison result, the ground-contact information indicating whether a foot of the first topological avatar is in contact with the ground, comprises:
if the ratio is smaller than the first threshold and the height difference is smaller than the second threshold, determining that both feet of the first topological avatar are in contact with the ground;
if the ratio is smaller than the first threshold and the height difference is not smaller than the second threshold, determining that the lower foot of the first topological avatar is in contact with the ground;
and if the ratio is not smaller than the first threshold, determining that the feet of the first topological avatar are off the ground.
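The three cases of claim 7 form a simple two-threshold classifier. A direct sketch; the function name, return labels, and threshold values are illustrative assumptions:

```python
def ground_contact(ratio, height_diff, t_ratio, t_height):
    """Classify foot ground contact per the three cases of claim 7.
    ratio: foot displacement over the preset frame window divided by
           lower-body length (small ratio = foot barely moved = planted).
    height_diff: height difference between the two feet in the current frame.
    """
    if ratio < t_ratio and height_diff < t_height:
        return "both_feet_grounded"       # foot still, feet level
    if ratio < t_ratio:
        return "lower_foot_grounded"      # foot still, one foot raised
    return "feet_off_ground"              # foot moving too fast to be planted

# Example with assumed thresholds t_ratio=0.05, t_height=0.02 (arbitrary units).
state = ground_contact(0.01, 0.005, 0.05, 0.02)
```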
8. The avatar driving method of claim 5, wherein the step of detecting collision relationships among different joint colliders in the second topological avatar after the motion pose data of the first topological avatar is migrated to the second topological avatar, determining an obstacle and a collision restoration object among the different joint colliders based on preset joint collider priorities, and performing collision restoration on the collision restoration object, comprises:
generating joint colliders on the joints of the second topological avatar;
calculating the collision depth and collision direction between different joint colliders of the second topological avatar after the motion pose data of the first topological avatar is migrated to the second topological avatar;
determining an obstacle and a collision restoration object among the different joint colliders based on the preset joint collider priorities, and calculating the restored position of the collision restoration object based on the collision depth and collision direction between the different joint colliders;
and adjusting, based on the restored position of the collision restoration object, the joint local rotation data in the joint motion chain where the collision restoration object is located, to obtain motion pose data after collision restoration.
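The priority rule of claim 8 treats the higher-priority collider as the fixed obstacle and pushes the lower-priority one out along the collision direction by the penetration depth. A sketch assuming sphere colliders (the patent does not specify the collider shape):

```python
import math

def resolve_sphere_collision(center_a, radius_a, prio_a, center_b, radius_b, prio_b):
    """Return updated centers for two sphere colliders: the higher-priority
    sphere is the obstacle and stays put; the other is pushed out along the
    collision direction by the penetration depth."""
    d = [b - a for a, b in zip(center_a, center_b)]
    dist = math.sqrt(sum(c * c for c in d))
    depth = radius_a + radius_b - dist       # collision depth (penetration)
    if depth <= 0 or dist == 0:
        return center_a, center_b            # no collision (or degenerate overlap)
    n = [c / dist for c in d]                # collision direction, from a toward b
    if prio_a >= prio_b:                     # a is the obstacle: move b outward
        moved = tuple(cb + ni * depth for cb, ni in zip(center_b, n))
        return center_a, moved
    moved = tuple(ca - ni * depth for ca, ni in zip(center_a, n))
    return moved, center_b
```

In the claimed method the restored position would then be reached by adjusting joint local rotations along the restoration object's motion chain (e.g. via inverse kinematics), rather than by teleporting the collider directly.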
9. An avatar driving apparatus, comprising:
a first determination module, configured to obtain human motion data of a target person based on a human detection algorithm, and migrate the human motion data into a universal skeleton topology having a joint mapping relationship with the human detection algorithm to obtain a first topological avatar;
a second determination module, configured to obtain body data of a target avatar and obtain a second topological avatar based on the mapping relationship between the body data and the universal skeleton topology, wherein the driving pose in the human detection algorithm is semantically aligned with the driving pose of the target avatar;
a restoration module, configured to synchronously restore the motion pose data of the second topological avatar based on the motion pose data of the first topological avatar;
and a driving module, configured to drive the target avatar to move according to the restored motion pose data of the second topological avatar.
10. A server, comprising a processor, a communication unit and a computer-readable storage medium connected through a bus system, wherein the communication unit is configured to connect to an electronic device to realize data interaction between the server and the electronic device, the computer-readable storage medium is configured to store programs, instructions or code, and the processor is configured to execute the programs, instructions or code in the computer-readable storage medium to implement the avatar driving method according to any one of claims 1-8.
CN202210188526.1A 2022-02-28 2022-02-28 Method and device for driving virtual image and server Pending CN114519758A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210188526.1A CN114519758A (en) 2022-02-28 2022-02-28 Method and device for driving virtual image and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210188526.1A CN114519758A (en) 2022-02-28 2022-02-28 Method and device for driving virtual image and server

Publications (1)

Publication Number Publication Date
CN114519758A true CN114519758A (en) 2022-05-20

Family

ID=81599289

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210188526.1A Pending CN114519758A (en) 2022-02-28 2022-02-28 Method and device for driving virtual image and server

Country Status (1)

Country Link
CN (1) CN114519758A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115359171A (en) * 2022-10-21 2022-11-18 北京百度网讯科技有限公司 Virtual image processing method and device, electronic equipment and storage medium
CN115359171B (en) * 2022-10-21 2023-04-07 北京百度网讯科技有限公司 Virtual image processing method and device, electronic equipment and storage medium
CN116228939A (en) * 2022-12-13 2023-06-06 北京百度网讯科技有限公司 Digital person driving method, digital person driving device, electronic equipment and storage medium
CN116051699A (en) * 2023-03-29 2023-05-02 腾讯科技(深圳)有限公司 Dynamic capture data processing method, device, equipment and storage medium
CN116051699B (en) * 2023-03-29 2023-06-02 腾讯科技(深圳)有限公司 Dynamic capture data processing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN114519758A (en) Method and device for driving virtual image and server
Gonzalez-Franco et al. The rocketbox library and the utility of freely available rigged avatars
US8514262B2 (en) Information processing apparatus and network conference system
KR20220025023A (en) Animation processing method and apparatus, computer storage medium, and electronic device
CN109126121B (en) AR terminal interconnection method, system, device and computer readable storage medium
CN112950751B (en) Gesture action display method and device, storage medium and system
CN110942501B (en) Virtual image switching method and device, electronic equipment and storage medium
WO2022218085A1 (en) Method and apparatus for obtaining virtual image, computer device, computer-readable storage medium, and computer program product
US11783523B2 (en) Animation control method and apparatus, storage medium, and electronic device
CN112381003A (en) Motion capture method, motion capture device, motion capture equipment and storage medium
Fechteler et al. A framework for realistic 3D tele-immersion
CN112596611A (en) Virtual reality role synchronous control method and control device based on somatosensory positioning
Fu et al. Real-time multimodal human–avatar interaction
WO2021258598A1 (en) Method for adjusting displayed picture, and smart terminal and readable storage medium
CN111739134B (en) Model processing method and device for virtual character and readable storage medium
CN117058284A (en) Image generation method, device and computer readable storage medium
US20230410398A1 (en) System and method for animating an avatar in a virtual world
CN116958344A (en) Animation generation method and device for virtual image, computer equipment and storage medium
WO2023103380A1 (en) Intelligent device control method and apparatus, and server and storage medium
CN115619484A (en) Method for displaying virtual commodity object, electronic equipment and computer storage medium
KR20220023005A (en) Realistic Interactive Edutainment System Using Tangible Elements
CN113222178A (en) Model training method, user interface generation method, device and storage medium
CN115937371B (en) Character model generation method and system
KR20200134623A (en) Apparatus and Method for providing facial motion retargeting of 3 dimensional virtual character
CN116805344B (en) Digital human action redirection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination