WO2023207477A1 - Animation data repair method and apparatus, device, storage medium and program product - Google Patents


Info

Publication number
WO2023207477A1
WO2023207477A1 · PCT/CN2023/084373 · CN2023084373W
Authority
WO
WIPO (PCT)
Prior art keywords
data
key frame
animation
repair
repaired
Prior art date
Application number
PCT/CN2023/084373
Other languages
English (en)
Chinese (zh)
Inventor
许广龙
李志豪
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Publication of WO2023207477A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00: Animation
    • G06T13/20: 3D [Three Dimensional] animation
    • G06T13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tessellation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Definitions

  • the embodiments of the present application relate to the field of data processing technology, and in particular to an animation data repair method, device, equipment, storage medium and program product.
  • animation can be produced based on the motion capture data obtained through motion capture.
  • When the height proportions of the motion capture object are inconsistent with those of the animated character, the animation data obtained by retargeting often exhibits interpenetration (clipping), for example between the limbs of animated characters, or between an animated character's limbs and objects in the animation.
  • Traditionally, animators repair this manually: they inspect the animation frame by frame against reference videos (such as the videos corresponding to the motion capture data) or the action director's requirements, find the animation data segments with interpenetration, and then repair those segments to obtain the repaired animation data.
  • This process usually requires a great deal of manpower and time, often hours or even days of waiting to repair animation lasting from tens of seconds to a few minutes.
  • the repair efficiency of animation data is not high.
  • Embodiments of the present application provide an animation data repair method, device, equipment, storage medium and program product, which can improve animation data repair efficiency.
  • the technical solution may include the following content.
  • a method for repairing animation data includes:
  • obtaining animation data to be repaired, where the animation data to be repaired includes multiple frames of animation data obtained by retargeting motion capture data;
  • performing collision detection on the objects in the multi-frame animation data to obtain at least one key frame data group, each key frame data group corresponding to one interpenetration process;
  • obtaining at least one repaired data segment, where a repaired data segment is a data segment in which the interpenetration has been repaired; and
  • superimposing the at least one repaired data segment onto the animation data to be repaired to obtain repaired animation data.
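The final superimposition step can be sketched as below. This is a minimal illustration, assuming animation data is a list of per-frame poses and each repaired segment records the start frame of the time period it covers; the application itself does not specify these data layouts.

```python
# Minimal sketch of superimposing repaired data segments onto the
# animation to be repaired: each segment's frames replace the original
# frames in its time period, while frames outside the repaired time
# periods are kept unchanged. The list-of-frames layout is an
# illustrative assumption.

def superimpose(animation, repair_segments):
    """animation: per-frame pose data; repair_segments: (start, frames) pairs."""
    result = list(animation)                 # keep originals outside the periods
    for start, frames in repair_segments:
        for offset, frame in enumerate(frames):
            result[start + offset] = frame   # overwrite within the time period
    return result

anim = ["f0", "f1", "f2", "f3", "f4"]
fixed = superimpose(anim, [(1, ["r1", "r2"])])  # frames 1-2 repaired
```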
  • an animation data repair device is provided, and the device includes:
  • an animation data acquisition module, configured to acquire animation data to be repaired, where the animation data to be repaired includes multiple frames of animation data obtained by retargeting motion capture data;
  • a collision body creation module, configured to create collision bodies for at least one object in the animation data to be repaired, obtaining a set of collision bodies corresponding to each of the at least one object, each set including at least one collision body;
  • a key frame acquisition module, configured to perform collision detection on the objects in the multi-frame animation data based on the sets of collision bodies corresponding to the at least one object, obtaining at least one key frame data group, each key frame data group corresponding to one interpenetration process;
  • a repair segment acquisition module, configured to obtain at least one repaired data segment based on the at least one key frame data group, where a repaired data segment is a data segment in which the interpenetration has been repaired; and
  • an animation data repair module, configured to superimpose the at least one repaired data segment onto the animation data to be repaired according to the time periods corresponding to the segments, obtaining repaired animation data.
  • a computer device includes a processor and a memory.
  • a computer program is stored in the memory.
  • the computer program is loaded and executed by the processor to implement the above animation data repair method.
  • a computer-readable storage medium is provided.
  • a computer program is stored in the readable storage medium, and the computer program is loaded and executed by a processor to implement the above animation data repair method.
  • a computer program product includes a computer program, and the computer program is stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer program from the computer-readable storage medium, and the processor executes the computer program, so that the computer device performs the above animation data repair method.
  • the technical solution provided by the embodiments of the present application can improve the efficiency of identifying interpenetration processes, thereby improving the repair efficiency of animation data.
  • Figure 1 is a schematic diagram of a solution implementation environment provided by an embodiment of the present application.
  • Figure 2 is a flow chart of an animation data repair method provided by an embodiment of the present application.
  • Figure 3 is a schematic diagram of a collision body creation method provided by an embodiment of the present application.
  • Figure 4 is a flow chart of a method for obtaining a key frame data group provided by an embodiment of the present application.
  • Figure 5 is a schematic diagram of interpenetration repair for interaction between an animated character and the ground provided by an embodiment of the present application.
  • Figure 6 is a schematic diagram of interpenetration repair for interaction between animated characters provided by an embodiment of the present application.
  • Figure 7 is a schematic diagram of collision body creation for interaction between animated characters provided by an embodiment of the present application.
  • Figure 8 is a schematic diagram of collision body creation for interaction between an animated character and props provided by an embodiment of the present application.
  • Figure 9 is a schematic diagram of collision detection with bounding spheres provided by an embodiment of the present application.
  • Figure 10 is a block diagram of an animation data repair device provided by an embodiment of the present application.
  • Figure 11 is a block diagram of an animation data repair device provided by another embodiment of the present application.
  • Figure 12 is a block diagram of a computer device provided by an embodiment of the present application.
  • Motion capture: the process of recording the motion data of motion capture objects (such as human bodies, animals, and other moving objects) with cameras or inertial devices.
  • Retargeting: a computer graphics term referring to the process of mapping motion capture data from the motion capture object onto an animated character (or robot, game character, etc.).
  • Collision detection: a computer graphics term referring to the process of determining whether two objects intersect.
  • Polygon mesh: a computer graphics term referring to a geometric model composed of a series of polygons (such as geometric models of human bodies, animals, or objects), containing geometric information such as mesh vertices, mesh edges, and mesh faces.
  • Skeleton: a computer graphics term. If an object in the geometric model influences a series of mesh vertices of the polygon mesh, causing those vertices to follow the object's movement or deformation, the object is called a bone. A sequence of bones forms a bone hierarchy.
  • Bounding sphere: a computer graphics term. The sphere of smallest radius such that all mesh vertices of a mesh lie inside it is called the bounding sphere of the mesh.
  • FK: Forward Kinematics.
  • IK: Inverse Kinematics.
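The FK and IK terms above can be illustrated with a minimal 2D forward-kinematics sketch; the two-bone chain and its angles are hypothetical values chosen for the example.

```python
import math

# Forward kinematics (FK): each bone stores (length, rotation relative to
# its parent); accumulating the rotations along the chain yields each
# joint's world position. Inverse kinematics (IK) solves the opposite
# problem: finding the rotations that reach a desired end position.

def forward_kinematics(bones):
    """bones: list of (length, relative_angle_in_radians) pairs."""
    x = y = angle = 0.0
    joints = [(x, y)]
    for length, rel_angle in bones:
        angle += rel_angle                   # accumulate parent rotations
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        joints.append((x, y))
    return joints

# Two unit-length bones, each rotated 90 degrees from its parent:
# the chain goes up, then left, ending near (-1, 1).
joints = forward_kinematics([(1.0, math.pi / 2), (1.0, math.pi / 2)])
```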
  • Figure 1 shows a schematic diagram of a solution implementation environment provided by an embodiment of the present application.
  • the implementation environment of this solution can be implemented as an animation data repair system architecture.
  • the implementation environment may include: a terminal 10 and a server 20 .
  • the terminal 10 may be an electronic device such as a mobile phone, a tablet computer, a PC (Personal Computer), a wearable device, an intelligent robot, a camera, an inertial device, and the like.
  • the terminal 10 can be used to obtain motion capture data, and provide the motion capture data to the server 20, so that the motion capture data can be converted, repaired, and other processes through the server 20 to obtain animation data.
  • a client running a target application can be installed in the terminal 10.
  • the client of the target application can be used to convert, repair, and other processing on the motion capture data to obtain animation data.
  • the above target applications may be animation production applications, animation restoration applications, video production applications, game production applications, entertainment interactive applications, simulation applications, simulation learning applications, etc.
  • the embodiments of the present application do not limit this.
  • the server 20 can be used to convert, repair, and other processing on the motion capture data to obtain animation data.
  • the server 20 may also be used to provide background services for clients of target applications in the terminal 10 .
  • the server 20 may be a backend server for clients of the target application.
  • the server 20 may be an independent physical server, a server cluster or a distributed system composed of multiple physical servers, or a cloud server that provides cloud computing services.
  • the terminal 10 and the server 20 can communicate through the network 30.
  • the network can be a wired network or a wireless network.
  • the execution subject of each step may be a computer device.
  • a computer device can be any electronic device capable of storing and processing data.
  • the computer device may be the server 20 in FIG. 1 , the terminal device 10 in FIG. 1 , or another device other than the terminal device 10 and the server 20 .
  • the technical solutions provided by the embodiments of this application are applicable to any scene requiring animation data repair, such as virtual object fitting scenes (e.g., virtual people, virtual animals), game production scenes (e.g., setting the actions of game characters), video production scenes, animation data repair scenes, etc.
  • the technical solution provided by the embodiments of the present application can improve the repair efficiency of animation data.
  • the terminal 10 sends the collected motion capture data to the server 20.
  • the server 20 redirects the motion capture data to obtain the animation data to be repaired.
  • The server 20 creates collision bodies for at least one object in the animation data to be repaired, performs collision detection based on the created collision bodies to automatically obtain key frame data (used to characterize interpenetration processes), automatically obtains repaired data segments (i.e., data segments in which the interpenetration has been repaired) based on the key frame data, and finally automatically repairs the animation data to be repaired based on the repaired data segments, obtaining the repaired animation data.
  • Figure 2 shows a flow chart of an animation data repair method provided by an embodiment of the present application.
  • the execution subject of each step of the method can be the terminal 10 or the server 20 in the solution implementation environment shown in Figure 1.
  • for ease of description, the execution subject of each step is introduced as a "computer device".
  • the method may include at least one of the following steps (201-205).
  • Step 201: Obtain animation data to be repaired.
  • the animation data to be repaired includes multi-frame animation data obtained by redirecting motion capture data.
  • Motion capture data refers to data obtained by capturing the motion of a motion capture object.
  • A motion capture device (such as a camera or inertial device) captures motion data of motion capture objects, such as standing, walking, running, and interaction.
  • The frame rate of motion capture data can be set according to actual usage requirements, such as 25 Hz or 30 Hz.
  • The embodiments of the present application do not limit the specific type of the motion capture device, nor the motion capture frequency of the device.
  • Motion capture object: an object whose movements change and whose movements are captured.
  • the motion capture objects in the embodiments of the present application are not limited to objects in the real world, but can also be objects in the virtual world.
  • the environment in which the objects are located is not limited.
  • the specific type of the motion capture object includes but is not limited to real people, real animals, real robots and other objects that can produce motion activities.
  • the specific type of the motion capture object includes but is not limited to virtual characters, virtual animals, virtual animation personnel and other objects that can generate motion activities.
  • The specific type of the motion capture object is not limited.
  • In the following, only a motion capture actor is used as an example for explanation; other types of motion capture objects can be handled by analogy with the motion capture actor.
  • Redirected object: another object that performs the same actions as the motion capture object.
  • the redirected object in the embodiment of the present application is not limited to objects in the real world, but may also be objects in the virtual world.
  • control parameters can be generated based on the motion capture data of the above motion capture actor, thereby controlling the real robot to produce the same actions as the motion capture actor.
  • the embodiments of the present application are mainly explained on the assumption that the redirected object is a virtual object in the virtual world; of course, those skilled in the art will understand that the redirected object is not limited to virtual objects.
  • animation data refers to the data obtained by redirecting the actions of the motion capture object in the motion capture data to the redirection object in the animation data.
  • the animation data of different frames correspond to the actions at different times.
  • For example, if the motion capture object performs action A at a given moment, the animation data of the redirected object in the corresponding Nth frame of the animation indicates that the redirected object also performs action A.
  • the animation data includes, but is not limited to, position data and motion data of the redirected object in each frame of the animation.
  • motion capture objects can include people, animals, plants, objects, etc.
  • Redirected objects are the objects in the animation data that need to reproduce the actions of the motion capture objects, such as virtual people, virtual animals, animated characters, game characters, and object models; the embodiments of this application do not limit this.
  • the above motion capture data may correspond to multiple motion capture objects, that is, the animation data includes redirection objects corresponding to the multiple motion capture objects.
  • animation data can be interaction data between animated characters
  • animation data can also be interaction data between animated characters and object models
  • animation data can also be interaction data between object models.
  • The embodiments of this application do not limit this.
  • The interaction data here can be understood as data describing physical collisions.
  • The animation data to be repaired refers to animation data of the redirected object that needs repair.
  • Due to differences between the skeletal data (such as height proportions) of the motion capture object (such as a motion capture actor) and the skeletal data of the redirected object (such as an animated character), interpenetration occurs in the animation data, for example limbs intersecting with limbs, or limbs intersecting with object models.
  • Animation data exhibiting such interpenetration can be determined as the animation data to be repaired.
  • the animation data to be repaired may be skeletal animation data.
  • the skeletal animation data refers to animation data that uses the bones of the redirected object to display actions.
  • the skeletal animation data may include position information, steering information, movement information, etc. of the bones of the redirected object in each frame.
  • The process of obtaining the animation data to be repaired can be as follows: obtain the motion capture data corresponding to the motion capture object; convert the motion capture data into skeletal motion capture data, i.e., data corresponding to the bones of the motion capture object; and retarget the skeletal motion capture data to obtain the animation data to be repaired.
  • skeletal motion capture data refers to animation data that uses the skeleton of a motion capture object to display actions.
  • Bone data can include bone size, number of bones, bone level association information, etc.
  • Specifically, the motion capture data corresponding to the motion capture object is first converted into skeletal motion capture data; then, based on the skeletal data of the redirected object and the skeletal data of the motion capture object, the skeletal motion capture data is retargeted to obtain the animation data to be repaired.
  • Forward kinematics or inverse kinematics can be used to obtain the animation data to be repaired.
  • the technical solution provided by the embodiment of this application determines the repair animation data based on the skeletal motion capture data.
  • the interaction between the motion capture objects is directly reflected in the interaction of the bones.
  • the interaction here can be understood as a collision.
  • Motion capture objects and redirected objects with different bone sizes may differ in whether their parts interact with each other.
  • Retargeting the skeletal motion capture data based on the skeletal data of the redirected object and the skeletal data of the motion capture object yields the animation data to be repaired, and the repair is determined on the basis of that skeletal data.
  • The interaction between motion capture objects does not necessarily correspond to interaction between the redirected objects; therefore, using the corresponding skeletal data as a medium to determine the animation that needs repair can improve both the accuracy and the efficiency of obtaining the animation data to be repaired.
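A naive version of the retargeting described above can be sketched as follows: per-bone rotations are copied from the motion capture skeleton while the root translation is scaled by the height ratio. The field names and the uniform scaling rule are illustrative assumptions, not the patent's actual mapping; the mismatch between skeletons in such simple mappings is exactly where interpenetration artifacts come from.

```python
# Sketch of naive skeletal retargeting: copy per-bone rotations from the
# motion capture skeleton and scale the root translation by the height
# ratio of the two skeletons. Field names are illustrative assumptions.

def retarget_frame(mocap_frame, mocap_height, target_height):
    scale = target_height / mocap_height
    return {
        # translation scaled so the character moves a proportional distance
        "root_position": tuple(c * scale for c in mocap_frame["root_position"]),
        # joint rotations copied unchanged; differing limb proportions are
        # why this simple mapping can produce interpenetration
        "rotations": dict(mocap_frame["rotations"]),
    }

frame = {"root_position": (0.0, 1.8, 0.0), "rotations": {"spine": 0.1}}
out = retarget_frame(frame, mocap_height=1.8, target_height=0.9)
```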
  • Step 202: Create collision bodies for at least one object in the animation data to be repaired, obtaining a set of collision bodies corresponding to the at least one object.
  • the collection of collision bodies includes at least one collision body.
  • At least one object in the animation data to be repaired is the above-mentioned redirection object.
  • the collider is a three-dimensional object with a specific volume or shape used for collision detection of redirected objects.
  • The collision body may be a bounding sphere, a capsule, a bounding box, etc.
  • The embodiment of the present application does not limit the volume or shape of the collision body.
  • since the redirection object can correspond to multiple bones, in order to ensure the accuracy of collision detection, a collision body can be created for each bone.
  • The creation process of the collision bodies can be as follows: for a first object among the at least one object, obtain the polygon mesh of the first object, which represents the shape of the object; for a target part corresponding to the first object, obtain the target local polygon mesh corresponding to the target part, i.e., the portion of the mesh influenced by the target part; and based on the mesh vertices of the target local polygon mesh, construct a collision body for the target part that encloses those vertices.
  • Constructing collision bodies in this way for each part corresponding to the first object yields the set of collision bodies corresponding to the first object.
  • the first object may refer to any object among the at least one object.
  • the target part may refer to any part of the first object, and the part may be represented by a bone.
  • the target local polygon mesh can be a local polygon mesh affected by the target bone.
  • The smallest collision body that can enclose all mesh vertices of the target local polygon mesh can be determined as the collision body of the target part.
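A simple way to build such an enclosing sphere is centroid plus maximum distance, sketched below. This encloses all mesh vertices but is not guaranteed to be the minimal sphere (an exact minimum requires, e.g., Welzl's algorithm); the vertex coordinates are hypothetical.

```python
import math

# Sketch: build a bounding-sphere collision body from the mesh vertices of
# a part's local polygon mesh. Centre = centroid of the vertices, radius =
# distance to the farthest vertex, so every vertex lies inside the sphere.

def bounding_sphere(points):
    n = len(points)
    center = tuple(sum(p[i] for p in points) / n for i in range(3))
    radius = max(math.dist(center, p) for p in points)
    return center, radius

# Hypothetical vertices of a small local mesh.
pts = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, -1.0, 0.0)]
center, radius = bounding_sphere(pts)   # centred at the origin, radius 1.0
```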
  • The target parts can be set adaptively according to actual usage requirements. For example, collision bodies can be created for only some parts of the redirected object, or for all parts.
  • Optionally, the target parts are the parts of the redirected object where interaction will occur.
  • That is, at least one part of the redirected object where interaction will occur is selected as a target part.
  • A collision body 302 corresponding to the head bone 301 can be constructed based on all mesh vertices corresponding to the head bone 301.
  • Similarly, a collision body corresponding to the hand bones of the animated character 300 can be constructed based on all mesh vertices corresponding to the hand bones.
  • A collision body corresponding to the foot bones can be constructed based on all mesh vertices corresponding to the foot bones of the animated character 300.
  • the collision body corresponding to the target part moves with the target part.
  • The target part and its corresponding collision body can be set in a parent-child relationship, with the target part as the parent and the collision body as the child; the collision body then moves following the movement of the target part. This avoids computing and creating collision bodies frame by frame, greatly reducing the amount and time of computation, which helps improve the efficiency of animation data repair.
  • the collision body 302 will move along with the head skeleton 301 .
  • The center position of the collision body is bound to the bone data of the redirected object, so that when the redirected object moves, the collision body moves accordingly.
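The parent-child binding can be sketched as follows, assuming translation-only transforms for simplicity (a real engine would compose full bone transforms): the collider stores a fixed local offset, so its world-space centre follows the bone with no per-frame rebuilding.

```python
# Sketch of the parent-child relationship: the collision body keeps a
# fixed offset relative to its parent bone, so when the bone moves the
# collider's world-space centre follows automatically, with no need to
# rebuild the collider from the mesh every frame. Translation-only
# transforms and the bone positions are illustrative simplifications.

class Bone:
    def __init__(self, position):
        self.position = position

class Collider:
    def __init__(self, parent_bone, local_offset, radius):
        self.parent = parent_bone        # bone the collider is attached to
        self.local_offset = local_offset
        self.radius = radius

    def world_center(self):
        return tuple(b + o for b, o in zip(self.parent.position, self.local_offset))

head = Bone((0.0, 1.6, 0.0))                     # hypothetical head bone
c = Collider(head, (0.0, 0.1, 0.0), radius=0.12)
before = c.world_center()                        # about (0.0, 1.7, 0.0)
head.position = (0.0, 1.5, 0.3)                  # bone moves in a later frame
after = c.world_center()                         # collider follows the bone
```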
  • Step 203: Based on the sets of collision bodies corresponding to the at least one object, perform collision detection on the objects in the multi-frame animation data to obtain at least one key frame data group, each key frame data group corresponding to one interpenetration process.
  • Each key frame data group can include multiple frames of key frame data.
  • Each frame of key frame data corresponds to one frame of animation data.
  • That frame of animation data contains a key interpenetration action of the interpenetration process.
  • A complete interpenetration process can be represented by multiple key interpenetration actions.
  • An interpenetration action can be understood as an overlap between the spaces occupied by the redirected object and other objects in the animation.
  • In the real world, object A and object B do not occupy the same space; the space occupied by object A and the space occupied by object B should be mutually independent and non-overlapping.
  • Likewise, a redirected object and the other objects in the animated world should occupy different, mutually independent spaces, unless the space occupied by another object can legitimately be affected by the redirected object.
  • For example, the redirected object may sink into a swamp; the redirected object and the swamp then overlap in space, but this is not considered an interpenetration action.
  • An interpenetration action in the embodiments of the present application is therefore understood as at least a partial overlap between the spaces occupied by at least two objects whose volumes do not change.
  • The embodiments of the present application do not limit the interpenetration depth of an interpenetration behavior, that is, the amount of overlap is not limited.
  • The above key frame data group includes start key frame data, peak key frame data, and end key frame data, where the start key frame data refers to the animation data corresponding to the starting moment of the interpenetration behavior, the peak key frame data refers to the animation data corresponding to the moment of maximum interpenetration, and the end key frame data refers to the animation data corresponding to the ending moment of the interpenetration behavior.
  • One interpenetration behavior corresponds to one interpenetration process. For example, if the limbs of animated character A and the limbs of animated character B interpenetrate, the process from the moment the limbs begin to interpenetrate to the moment they separate is one interpenetration process.
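Given a per-frame sequence of quantified interpenetration values, the start, peak, and end key frames defined above can be selected as sketched here; the frame values are hypothetical.

```python
# Sketch: select the start, peak and end key frames of one interpenetration
# behavior from a per-frame sequence of interpenetration degree quantified
# values (0 = no contact). Start = the frame before the first non-zero
# value, peak = the frame with the maximum value, end = the first zero
# after the peak.

def select_keyframes(values):
    first_nonzero = next(i for i, v in enumerate(values) if v > 0)
    start = max(first_nonzero - 1, 0)
    peak = max(range(first_nonzero, len(values)), key=lambda i: values[i])
    end = next(i for i in range(peak, len(values)) if values[i] == 0)
    return start, peak, end

# Hypothetical behavior: contact begins at frame 2, is deepest at frame 4,
# and the objects have separated again by frame 7.
vals = [0, 0, 0.2, 0.6, 1.0, 0.5, 0.1, 0, 0]
keys = select_keyframes(vals)   # (start, peak, end) = (1, 4, 7)
```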
  • step 203 may also include the following sub-steps:
  • Step 203a: For target frame animation data in the multi-frame animation data, perform collision detection on the objects in the target frame animation data to obtain at least one interpenetration degree quantified value corresponding to the target frame animation data.
  • The interpenetration degree quantified value is used to represent the degree of interpenetration between a pair of objects within the at least one object.
  • the target frame animation data may refer to any frame of animation data among multiple frames of animation data.
  • The interpenetration degree quantified value is positively correlated with the degree of interpenetration between the pair of objects. For example, suppose the limbs of animated character A and the limbs of animated character B interpenetrate: when the two limbs first come into contact, the quantified value can be set to 0; as the degree of interpenetration gradually increases (for example, the portion of the limb from its end to the intersection point takes up an ever larger share of the whole limb), the quantified value gradually increases.
  • At maximum interpenetration the quantified value can be set to 1. As the two limbs gradually separate, the quantified value gradually decreases, and when the two limbs have separated (i.e., the interpenetration has ended), the quantified value can be set to 0. Optionally, if no contact is detected between the two limbs, the quantified value can be set directly to 0.
  • Optionally, the interpenetration degree quantified value is determined based on the overlap between the space occupied by the limbs of animated character A and the space occupied by the limbs of animated character B.
  • For example, the quantified value is proportional to the size of the overlapping space.
  • Alternatively, for points distributed on the surface of the overlapping space, the maximum distance between two such points can be used as the interpenetration degree quantified value.
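For two bounding-sphere colliders, an overlap-based quantified value has a simple closed form, sketched below: the spheres overlap when the centre distance is less than the sum of the radii, and the overlap depth r1 + r2 - d grows with the degree of interpenetration. This sphere-specific depth is a simplification of the general overlap-region measure described above, and the radii used are hypothetical.

```python
import math

# Sketch: quantify interpenetration between two bounding-sphere collision
# bodies. The value is 0 at or before first contact and grows as the
# spheres push deeper into each other, matching the behaviour described
# above for the quantified value.

def penetration_depth(center1, r1, center2, r2):
    d = math.dist(center1, center2)
    return max(0.0, r1 + r2 - d)     # overlap depth; 0 means no interpenetration

shallow = penetration_depth((0.0, 0.0, 0.0), 1.0, (1.8, 0.0, 0.0), 1.0)  # ~0.2
none = penetration_depth((0.0, 0.0, 0.0), 1.0, (3.0, 0.0, 0.0), 1.0)     # 0.0
```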
  • For each object pair, at least one sequence of interpenetration degree quantified values can be obtained over time, from which at least one interpenetration degree curve can be generated.
  • Each peak segment in the interpenetration degree curve corresponds to one interpenetration process.
  • An object pair can correspond to multiple interpenetration processes in the same period; the sequence of quantified values corresponding to each action can be obtained per action, so as to determine whether a given action is an interpenetration behavior and to obtain the key frame data group corresponding to that action.
  • Since the target frame animation data can correspond to multiple actions or multiple object pairs, the target frame animation data can correspond to multiple interpenetration degree quantified values.
  • Step 203b: Determine at least one key frame data group from the multi-frame animation data based on the at least one interpenetration degree quantified value corresponding to the multi-frame animation data.
  • Optionally, for an object pair including a first object and a second object with an interpenetration behavior, based on the interpenetration degree quantified values corresponding to the object pair, at least one candidate frame data sequence corresponding to the object pair is obtained from the multi-frame animation data, where each candidate frame data sequence corresponds to one interpenetration behavior between the first object and the second object; for a target candidate frame data sequence among the at least one candidate frame data sequence, key frames are selected from the target candidate frame data sequence to obtain the key frame data group corresponding to it; based on the at least one candidate frame data sequence, at least one key frame data group corresponding to the object pair is determined.
  • the second object may refer to any object in the at least one object except the first object.
  • the candidate frame data sequence may include multiple frames of candidate frame data, and each frame of candidate frame data corresponds to an interleaving action of an interleaving behavior at a certain moment.
  • the candidate frame data sequences may correspond to interleaving behaviors of a certain object pair in different time periods, or to different interleaving behaviors of a certain object pair in the same time period, which is not limited in the embodiments of the present application.
  • the candidate frame data sequences may be obtained based on at least one interleaving degree quantified value sequence between the first object and the second object.
  • the multi-frame animation data corresponding to the peak segment in the interspersed degree quantized value curve can be determined as the candidate frame data sequence.
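Extracting candidate frame data sequences as the peak segments of the quantified value curve can be sketched as follows. This is a minimal illustrative helper, not code from the application; it simply returns the maximal runs of frames whose quantified value is non-zero.

```python
def candidate_segments(values, eps=0.0):
    """Split a per-frame interleaving-degree sequence into candidate
    frame ranges: maximal runs of frames whose quantified value exceeds
    eps. Returns (start, end) index pairs with end exclusive."""
    segments, start = [], None
    for i, v in enumerate(values):
        if v > eps and start is None:
            start = i                      # a peak segment begins
        elif v <= eps and start is not None:
            segments.append((start, i))    # the segment just ended
            start = None
    if start is not None:                  # segment runs to clip end
        segments.append((start, len(values)))
    return segments
```

Each returned range corresponds to one interleaving behavior between the object pair.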
  • the target candidate frame data sequence may refer to any candidate frame data sequence among the at least one candidate frame data sequence.
  • the method of selecting key frames for the target candidate frame data sequence is introduced below. The specific content may be as follows: for the target interleaving behavior corresponding to the target candidate frame data sequence, the frame of animation data immediately preceding the first frame whose interleaving degree quantified value for the target interleaving behavior is non-zero is determined as the starting key frame data corresponding to the target candidate frame data sequence; the animation data with the maximum quantified interleaving degree value for the target interleaving behavior is determined as the peak key frame data corresponding to the target candidate frame data sequence; and the first frame of animation data after the peak key frame data whose interleaving degree quantified value is zero is determined as the end key frame data corresponding to the target candidate frame data sequence.
  • Considering that animation data may sometimes contain deviations, when confirming the starting key frame data or the end key frame data, it is not sufficient to consider only the single frame preceding the first non-zero interleaving degree quantified value of the target interleaving behavior (or the single frame at which the value first returns to zero after the peak key frame data). Instead, whether a specific number of frames among the S frames before and the K frames after also have non-zero interleaving degree quantified values can be checked, and the starting key frame data, peak key frame data and end key frame data are determined accordingly, where K and S are positive integers.
  • the embodiment of the present application does not limit the timing of determining the starting key frame data, the peak key frame data, and the ending key frame data.
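The start/peak/end key frame selection described above can be sketched as follows. This is an illustrative simplification that omits the S-frame/K-frame noise check; the function name and data layout are assumptions, not from the application.

```python
def select_keyframes(values, start, end):
    """Pick the three key frames for one candidate run.

    values      per-frame interleaving-degree quantified values
    start, end  the candidate run values[start:end] of non-zero values
                (end exclusive, so values[end] is the first zero after
                the run unless the run touches the clip boundary)

    Returns (start_kf, peak_kf, end_kf):
      start_kf  the frame just before the first non-zero value
      peak_kf   the frame with the maximum quantified value
      end_kf    the first zero-valued frame after the peak
    Index clamping handles runs touching the clip boundaries.
    """
    start_kf = max(0, start - 1)
    peak_kf = max(range(start, end), key=lambda i: values[i])
    end_kf = min(end, len(values) - 1)
    return start_kf, peak_kf, end_kf
```

Applied to every candidate run, this yields one key frame data group per interleaving behavior.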
  • Step 204: Obtain at least one repaired data segment based on the at least one key frame data group.
  • the repaired data segment refers to a data segment in which the interleaving process has been repaired.
  • a repaired data segment may refer to a data segment in which no interleaving phenomenon remains between the redirected objects corresponding to that segment.
  • the repair data fragment can correspond to a complete action, that is, in addition to the key frame data corresponding to the action, it also includes transition frame data between the key frame data.
  • the transition frame data is used to achieve a smooth transition of the action.
  • The at least one key frame data group can be repaired respectively to obtain at least one repaired key frame data group, in which the interleaving behavior has been eliminated; transition frame interpolation can then be performed on each repaired key frame data group to obtain at least one repaired data segment.
  • The at least one object is redirected again so that no interleaving behavior remains between the objects. For example, based on the skeletal data corresponding to the at least one object, and guided by the actions corresponding to the key frame data group, the actions of the objects are recalculated and corrected so that they no longer intersect while staying as close as possible to the actions corresponding to the key frame data group, thereby obtaining the repaired key frame data group.
  • the repaired data fragments with complete movements can be obtained by performing transition frame interpolation between each frame of animation data in the repaired keyframe data group.
  • the transition frame can be constructed according to the set motion interpolation algorithm.
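A minimal sketch of transition-frame construction, assuming simple linear interpolation of per-bone parameters; a production motion interpolation algorithm would typically use quaternion slerp for rotations rather than this plain lerp, so treat this as an illustration only.

```python
def interpolate_transition(pose_a, pose_b, num_frames):
    """Build num_frames transition frames between two key poses.

    Each pose is a flat list of per-bone parameters (e.g. rotation
    angles in degrees). The returned frames exclude pose_a and pose_b
    themselves and are evenly spaced between them.
    """
    frames = []
    for k in range(1, num_frames + 1):
        t = k / (num_frames + 1)  # interpolation parameter in (0, 1)
        frames.append([a + (b - a) * t for a, b in zip(pose_a, pose_b)])
    return frames
```

Interpolating between the repaired start, peak and end key frames in this way yields a repaired data segment with a complete, smooth action.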
  • If the animation curve composed of the key frame data groups jumps at a very high frequency (that is, switches frequently back and forth between interleaved and non-interleaved states), the animation curve of the repaired animation data will also oscillate at a high frequency and be very unstable. Therefore, the embodiment of the present application can introduce a filter to remove the high-frequency signal and thereby obtain repair animation data with a stable animation curve.
  • the completion process of an action has a certain relationship with the skeletal data. For example, for the same distance, a larger stride can cover it in fewer steps, while a smaller stride needs more steps. Therefore, the time period corresponding to the key frame data can be further adjusted based on the difference between the skeletal data of the redirected object and the skeletal data of the motion capture object.
  • the above optimization process may include the following: for a first object pair in the at least one object, at least one key frame data group corresponding to the first object pair is filtered to obtain at least one filtered key frame data group corresponding to the first object pair, wherein the time period length corresponding to each filtered key frame data group is greater than a first duration threshold; post-processing is performed on the at least one filtered key frame data group corresponding to the first object pair to obtain at least one adjusted key frame data group corresponding to the first object pair, wherein the action span of the adjusted key frame data group conforms to the skeletal data of the object pair; and the adjusted key frame data groups corresponding to the at least one key frame data group are thereby obtained.
  • the first duration threshold is used for high-frequency filtering, to filter out interleaving behaviors that are too short (and thus do not affect the animation effect) and data noise (such as jitter).
  • the first duration threshold can be set according to actual usage requirements, such as 0.05s, 0.07s, 0.1s, etc., which is not limited in the embodiments of the present application.
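The duration-threshold filtering can be sketched as below. The group representation (start, peak, end frame indices) and the default threshold are illustrative assumptions, not from the application.

```python
def filter_short_segments(keyframe_groups, fps, min_duration=0.1):
    """Drop key frame data groups whose interleaving lasts no longer
    than the first duration threshold (min_duration, in seconds):
    such brief intersections are treated as invisible artifacts or
    data noise (e.g. jitter).

    Each group is a (start_frame, peak_frame, end_frame) tuple.
    """
    return [g for g in keyframe_groups
            if (g[2] - g[0]) / fps > min_duration]
```

At 30 fps with a 0.1 s threshold, for instance, any interleaving shorter than four frames is discarded before post-processing.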
  • the above post-processing process may be as follows: a first adjustment parameter is determined based on the skeletal data of the first object pair and the skeletal data of the motion capture objects corresponding to the first object pair; when the first adjustment parameter is greater than a first threshold, the time periods corresponding to the at least one filtered key frame data group are expanded backward to obtain at least one adjusted key frame data group; when the first adjustment parameter is less than the first threshold, the time periods corresponding to the at least one filtered key frame data group are compressed forward to obtain at least one adjusted key frame data group. In some embodiments, when the first adjustment parameter is equal to the first threshold, the time periods respectively corresponding to the filtered key frame data groups are neither expanded backward nor compressed forward.
  • the first adjustment parameter is used to adjust the timestamp of the key frame data to make the action process corresponding to the key frame data smoother and more reasonable.
  • the difference between the skeletal data of the first object pair and the skeletal data of the motion capture objects corresponding to the first object pair can be determined as the first adjustment parameter, or the ratio between the skeletal data of the first object pair and the skeletal data of the motion capture objects corresponding to the first object pair can be determined as the first adjustment parameter, which is not limited in the embodiments of the present application.
  • For example, the first adjustment parameter can be determined based on the heights of the two redirected objects in the first object pair and the heights of the two corresponding motion capture objects.
  • When one piece of key frame data corresponds to multiple object pairs, whether the key frame data is compressed forward or expanded backward can be determined based on the weighted sum of the first adjustment parameters corresponding to those object pairs.
  • the specific offset corresponding to the key frame data (that is, the degree of compression or expansion) is related to the height of the redirected object: the greater the height of the redirected object, the greater the offset.
  • the first threshold corresponds to the first adjustment parameter, and the first threshold can be set and adjusted based on experience values.
  • the first threshold can be set to 1, which indicates that the heights of the first object pair and the heights of the motion capture objects corresponding to the first object pair are generally the same; in this case the time period of the key frame data does not need to be adjusted.
  • If the first adjustment parameter is greater than 1, it indicates that the height of the first object pair is overall greater than the height of the motion capture objects corresponding to the first object pair, and the time period of the key frame data can be expanded backward (for example, the above peak key frame data and end key frame data are delayed).
  • If the first adjustment parameter is less than 1, it indicates that the height of the first object pair is overall smaller than the height of the motion capture objects corresponding to the first object pair, and the time period of the key frame data can be compressed forward (for example, the above peak key frame data and end key frame data are moved forward).
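The expansion or compression of a key frame group's time period by the first adjustment parameter might look like the following sketch, where the height ratio is the assumed adjustment parameter (one of the options the description allows) and timestamps are scaled relative to the fixed starting key frame.

```python
def adjust_time_period(start, peak, end, object_height, mocap_height):
    """Adjust a key frame group's timestamps using the first
    adjustment parameter, taken here as the height ratio between the
    redirected object and its motion capture actor.

    ratio > 1: period expanded backward (peak/end delayed)
    ratio < 1: period compressed forward (peak/end moved earlier)
    ratio == 1: no change
    """
    ratio = object_height / mocap_height
    peak = start + (peak - start) * ratio
    end = start + (end - start) * ratio
    return start, peak, end
```

A redirected character twice as tall as the actor, for example, has its peak and end key frames pushed twice as far from the start, matching the intuition that larger strides take longer to complete for the same motion shape.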
  • At least one repair data segment can be obtained based on the adjusted key frame data group corresponding to at least one key frame data group.
  • Step 205: Superimpose the at least one repair data segment and the animation data to be repaired according to the time periods corresponding to the at least one repair data segment, to obtain the repair animation data.
  • For a target repair data segment, the original data segment corresponding to the target time period can be obtained from the animation data to be repaired according to the target time period corresponding to the target repair data segment; based on the skeletal action parameters corresponding to the original data segment and the redirection repair parameters corresponding to the target repair data segment, the original data segment and the target repair data segment are superimposed to obtain the repair animation data corresponding to the target period.
  • the target repair data fragment may be any repair data fragment among the at least one repair data fragment.
  • Bone action parameters are used to indicate the action of the redirected object, such as the rotation angle, movement distance, rotation direction, etc. of each bone.
  • Redirection repair parameters are used to indicate the state of the redirected object, such as the position and posture of the redirected object.
  • the skeletal action parameters can be used to make the action of the redirected object match the action of the motion capture object as much as possible, and the redirection repair parameters can be used to repair the state of the redirected object so that there is no interleaving between the redirected objects.
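A highly simplified sketch of the splicing in Step 205, ignoring the parameter-based blending: within each repair time period the repaired frames replace the originals, and frames outside every repair period are kept untouched. The data layout is an illustrative assumption.

```python
def apply_repairs(frames, repair_segments):
    """Splice repaired data segments back into the clip.

    frames           the full clip as a list of per-frame data
    repair_segments  maps (start, end) frame ranges (end exclusive)
                     to the repaired frames for that time period
    """
    out = list(frames)  # copy; frames outside repair periods survive
    for (start, end), repaired in repair_segments.items():
        out[start:end] = repaired
    return out
```

In the full method the replacement would additionally blend skeletal action parameters with the redirection repair parameters rather than overwrite frames wholesale.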
  • As for the remaining animation data, the embodiment of the present application retains it without processing.
  • the animation data 501 is a certain frame of animation data in the animation data to be repaired.
  • the hand of the animated character intersperses with the ground.
  • the repair animation data 502 corresponding to the animation data 501 is obtained.
  • the interpenetration phenomenon between the animated character's hand and the ground has been repaired, that is, the animated character's hand is no longer interspersed with the ground.
  • the animation data 601 is a certain frame of animation data in the animation data to be repaired.
  • the hand of the animated character 603 is interspersed with the animated character 604.
  • the repaired animation data 602 corresponding to the animation data 601 is obtained.
  • the interpenetration phenomenon between the hand of the animated character 603 and the animated character 604 has been repaired, that is, the hand of the animated character 603 is no longer interspersed with the animated character 604.
  • In the technical solution provided by the embodiments of the present application, collision bodies are created for at least one object in the animation data to be repaired, and collision detection is then performed, based on the created collision bodies, on the multi-frame animation data corresponding to the animation data to be repaired to obtain the key frame data groups, realizing automatic acquisition of the interleaving process.
  • the technical solution provided by the embodiments of the present application can improve the acquisition efficiency of the interleaving process, thereby improving the repair efficiency of the animation data.
  • the quality of the keyframe data group can be improved, thereby improving the repair quality of the animation data.
  • the rationality of the distribution of the key frame data group can also be improved, thereby improving the rationality of the repair of the animation data.
  • the degree of interspersing behavior at different stages can be quickly and easily obtained, which is conducive to improving the extraction efficiency of key frame data, thereby further improving the efficiency of repairing animation data.
  • the animation data repair method provided by the embodiment of the present application is described.
  • the specific content may be as follows:
  • the animation data 601 is a certain frame in the animation data to be repaired.
  • in the animation data 601, the hands of the animated character are interspersed with each other.
  • the enclosing sphere is the sphere with the smallest diameter that can enclose the local polygon mesh.
  • a set of bounding spheres corresponding to the animated character 702 is obtained. A parent-child relationship can be set between each bounding sphere and its bone, so that the bounding sphere moves along with the bone.
  • collision detection is performed on each frame of animation data corresponding to the animation data to be repaired, and a collision detection result corresponding to each frame of animation data is obtained.
  • If the distance between the centers of two enclosing spheres is greater than the sum of their radii, it is determined that the parts corresponding to the two spheres are not interspersed; if the distance between the centers is equal to the sum of the radii, it is determined that the parts corresponding to the two spheres begin or end interleaving; and if the distance between the centers is less than the sum of the radii, it is determined that the parts corresponding to the two spheres are in the interspersed state.
  • If the distance between the center of the enclosing sphere 901 and the center of the enclosing sphere 902 (hereinafter referred to as the center distance) is greater than the sum of the radii of the enclosing sphere 901 and the enclosing sphere 902, it is determined that the part corresponding to the enclosing sphere 901 (such as a hand) and the part corresponding to the enclosing sphere 902 (such as the other hand) are not interspersed.
  • If the center distance between the enclosing sphere 901 and the enclosing sphere 902 becomes equal to the sum of their radii, it is determined that the part corresponding to the enclosing sphere 901 (such as a hand) and the part corresponding to the enclosing sphere 902 (such as the other hand) begin to intersect.
  • If the center distance between the enclosing sphere 901 and the enclosing sphere 902 becomes smaller than the sum of their radii, it is determined that the part corresponding to the enclosing sphere 901 (such as a hand) and the part corresponding to the enclosing sphere 902 (such as the other hand) are in the interspersed state.
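The three-way test on the enclosing spheres 901 and 902 can be sketched as follows; the function name and the returned labels are illustrative, and an exact equality check is replaced by a floating-point tolerance.

```python
import math

def classify_contact(center_a, radius_a, center_b, radius_b):
    """Classify two bounding spheres by comparing the center distance
    with the sum of the radii, mirroring the three cases above:
    'separate' (no interleaving), 'touching' (interleaving starts or
    ends), or 'interleaved'."""
    d = math.dist(center_a, center_b)
    if math.isclose(d, radius_a + radius_b):
        return 'touching'
    if d > radius_a + radius_b:
        return 'separate'
    return 'interleaved'
```

Running this test for every pair of spheres in every frame yields the per-frame collision detection results that are then solved into interleaving degree quantified values.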
  • the collision detection results corresponding to each frame of animation data are solved to obtain at least one interspersed degree quantified value corresponding to each frame of animation data.
  • the quantitative value of the degree of interleaving is positively correlated with the degree of interpenetration.
  • one quantified value of the degree of interspersion can be obtained based on the collision detection result 703
  • another quantified value of the degree of interspersion can be obtained based on the collision detection result 704 .
  • At least one key frame data group is determined from the multi-frame animation data based on at least one interleaving degree quantization value respectively corresponding to each frame of animation data.
  • For the hand interpenetration phenomenon, all the quantified interpenetration degree values corresponding to it are obtained and a quantified value curve is generated; based on this curve, the starting key frame data, peak key frame data and end key frame data corresponding to the hand interpenetration phenomenon are determined, thereby obtaining the key frame data group corresponding to the hand interpenetration phenomenon.
  • Filter and post-process at least one key frame data group respectively to obtain at least one adjusted key frame data group.
  • Repair at least one adjusted key frame data group respectively to obtain at least one repaired key frame data group; perform transition frame interpolation on at least one repaired key frame data group respectively to obtain at least one repaired data segment.
  • At least one repaired data segment is superimposed on the animation data to be repaired to obtain the repaired animation data.
  • In the technical solution provided by the embodiments of the present application, collision bodies are created for at least one object in the animation data to be repaired, and collision detection is then performed, based on the created collision bodies, on the multi-frame animation data corresponding to the animation data to be repaired to obtain the key frame data groups, realizing automatic acquisition of the interleaving process.
  • the technical solution provided by the embodiments of the present application can improve the acquisition efficiency of the interleaving process, thereby improving the repair efficiency of the animation data.
  • FIG 10 shows a block diagram of an animation data repair device provided by an embodiment of the present application.
  • the device has the function of implementing the above method example, and the function can be implemented by hardware, or can be implemented by hardware executing corresponding software.
  • the device may be the computer equipment introduced above, or may be provided in the computer equipment.
  • the device 1000 includes: an animation data acquisition module 1001, a collision body creation module 1002, a key frame acquisition module 1003, a repair fragment acquisition module 1004, and an animation data repair module 1005.
  • the animation data acquisition module 1001 is used to acquire animation data to be repaired, where the animation data to be repaired includes multi-frame animation data obtained by redirecting motion capture data.
  • the collision body creation module 1002 is used to create a collision body for at least one object in the animation data to be repaired, and obtain a collection of collision bodies respectively corresponding to the at least one object, and the collection of collision bodies includes at least one collision body.
  • the key frame acquisition module 1003 is configured to perform collision detection on the objects in the multi-frame animation data based on the sets of collision bodies respectively corresponding to the at least one object, and obtain at least one key frame data group, each key frame data group corresponding to one interleaving process.
  • the repair segment acquisition module 1004 is configured to obtain at least one repair data segment based on the at least one key frame data group, where the repair data segment refers to the data segment after the interleaving process is repaired.
  • the animation data repair module 1005 is configured to superimpose the at least one repair data segment and the animation data to be repaired according to the time periods corresponding to the at least one repair data segment to obtain repair animation data.
  • the key frame acquisition module 1003 is used to:
  • collision detection is performed on the objects in the target frame animation data to obtain at least one quantified interleaving degree value corresponding to the target frame animation data, where the quantified interleaving degree value is used to characterize the degree of interpenetration between pairs of objects in the at least one object;
  • the at least one key frame data group is determined from the multi-frame animation data based on at least one interleaving degree quantization value respectively corresponding to the multi-frame animation data.
  • the object pair includes a first object and a second object in which interleaving behavior exists.
  • the key frame acquisition module 1003 is also used to:
  • At least one candidate frame data sequence corresponding to the object pair is obtained from the multi-frame animation data based on the quantified value of the interleaving degree corresponding to the object pair; wherein, each of the The candidate frame data sequence corresponds to an interleaving behavior between the first object and the second object;
  • For the target candidate frame data sequence in the at least one candidate frame data sequence, perform key frame selection on the target candidate frame data sequence to obtain a key frame data group corresponding to the target candidate frame data sequence;
  • Based on the at least one candidate frame data sequence, at least one key frame data group corresponding to the object pair is determined.
  • the key frame data group includes starting key frame data, peak key frame data and end key frame data; wherein the starting key frame data refers to the animation data corresponding to the starting moment of the interspersed behavior, the peak key frame data refers to the animation data corresponding to the maximum extent of the interspersed behavior, and the end key frame data refers to the animation data corresponding to the end moment of the interspersed behavior.
  • the key frame acquisition module 1003 is also used to:
  • the frame of animation data immediately preceding the first frame whose interleaving degree quantified value for the target interleaving behavior is non-zero is determined as the starting key frame data corresponding to the target candidate frame data sequence;
  • the first frame of animation data after the peak key frame data whose interleaving degree quantified value is zero is determined as the end key frame data corresponding to the target candidate frame data sequence.
  • the repair fragment acquisition module 1004 is used to:
  • Repair the at least one key frame data group respectively to obtain at least one repaired key frame data group; perform transition frame interpolation on the at least one repaired key frame data group respectively to obtain the at least one repaired data segment.
  • the device 1000 further includes: a key frame filtering module 1006 and a key frame adjusting module 1007.
  • the key frame filtering module 1006 is configured to filter at least one key frame data group corresponding to a first object pair in the at least one object, to obtain at least one filtered key frame data group corresponding to the first object pair; wherein the length of the time period corresponding to each filtered key frame data group is greater than the first duration threshold.
  • the key frame adjustment module 1007 is configured to perform post-processing on at least one filtered key frame data group corresponding to the first object pair, respectively, to obtain at least one adjusted key frame data group corresponding to the first object pair; Wherein, the action span of the adjusted key frame data group conforms to the skeletal data of the object pair.
  • the key frame adjustment module 1007 is also used to obtain the adjusted key frame data group corresponding to the at least one key frame data group.
  • the repair segment acquisition module 1004 is also configured to obtain the at least one repair data segment based on the adjusted key frame data group corresponding to the at least one key frame data group.
  • the key frame adjustment module 1007 is used for:
  • the collision body creation module 1002 is used for:
  • Colliders are constructed for each part corresponding to the first object to obtain a set of colliders corresponding to the first object.
  • the collision body corresponding to the target part moves along with the target part.
  • the device 1000 further includes: a penetration state determination module 1008.
  • the collision body is an enclosing ball, and the penetration state determination module 1008 is used to:
  • If the distance between the centers of the two enclosing spheres is equal to the sum of the radii of the two enclosing spheres, it is determined that the corresponding parts of the two enclosing spheres begin or end interleaving;
  • the distance between the centers of the two surrounding spheres is less than the sum of the radii of the two surrounding spheres, it is determined that the corresponding parts of the two surrounding spheres are in an intersecting state.
  • the animation data repair module 1005 is used to:
  • For a target repair data segment in the at least one repair data segment, obtain the original data segment corresponding to the target time period from the animation data to be repaired according to the target time period corresponding to the target repair data segment;
  • based on the skeletal action parameters corresponding to the original data segment and the redirection repair parameters corresponding to the target repair data segment, the original data segment and the target repair data segment are superimposed to obtain the repair animation data corresponding to the target period.
  • the animation data acquisition module 1001 is used to:
  • the skeletal motion capture data is redirected to obtain the animation data to be repaired.
  • In the technical solution provided by the embodiments of the present application, collision bodies are created for at least one object in the animation data to be repaired, and collision detection is then performed, based on the created collision bodies, on the multi-frame animation data corresponding to the animation data to be repaired to obtain the key frame data groups, realizing automatic acquisition of the interleaving process.
  • the technical solution provided by the embodiments of the present application can improve the acquisition efficiency of the interleaving process, thereby improving the repair efficiency of the animation data.
  • Figure 12 shows a structural block diagram of a computer device provided by an embodiment of the present application.
  • the computer device can be used to implement the animation data repair method provided in the above embodiment. Specifically, it may include the following.
  • the computer device 1200 includes a central processing unit 1201 (such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit) or an FPGA (Field Programmable Gate Array)), a system memory 1204 including a RAM (Random-Access Memory) 1202 and a ROM (Read-Only Memory) 1203, and a system bus 1205 connecting the system memory 1204 and the central processing unit 1201.
  • the computer device 1200 also includes a basic input/output system (I/O system) 1206 that helps transfer information between various devices in the server, and a mass storage device 1207 that stores an operating system 1213, application programs 1214 and other program modules 1215.
  • the basic input/output system 1206 includes a display 1208 for displaying information and an input device 1209 such as a mouse and a keyboard for the user to input information.
  • the display 1208 and the input device 1209 are both connected to the central processing unit 1201 through the input and output controller 1210 connected to the system bus 1205.
  • the basic input/output system 1206 may also include an input/output controller 1210 for receiving and processing input from a plurality of other devices such as a keyboard, mouse, or electronic stylus.
  • input and output controller 1210 also provides output to a display screen, printer, or other type of output device.
  • the mass storage device 1207 is connected to the central processing unit 1201 through a mass storage controller (not shown) connected to the system bus 1205 .
  • the mass storage device 1207 and its associated computer-readable media provide non-volatile storage for the computer device 1200 . That is, the mass storage device 1207 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM (Compact Disc Read-Only Memory) drive.
  • the computer-readable media may include computer storage media and communication media.
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer programs, data structures, program modules or other data.
  • Computer storage media include RAM, ROM, EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory or other solid-state storage technologies, CD-ROM, DVD (Digital Video Disc) or other optical storage, tape cassettes, magnetic tapes, disk storage or other magnetic storage devices.
  • the computer device 1200 can also operate by being connected, through a network such as the Internet, to a remote computer on the network. That is, the computer device 1200 can be connected to the network 1212 through the network interface unit 1211 connected to the system bus 1205, or the network interface unit 1211 can be used to connect to other types of networks or remote computer systems (not shown).
  • the memory also includes a computer program, which is stored in the memory and configured to be executed by one or more processors to implement the above animation data repair method.
  • a computer-readable storage medium is also provided, and a computer program is stored in the storage medium. When executed by a processor, the computer program implements the above animation data repair method.
  • The computer-readable storage medium may include: ROM (Read-Only Memory), RAM (Random-Access Memory), SSD (Solid State Drive), an optical disk, and the like.
  • The random access memory may include ReRAM (Resistive Random Access Memory) and DRAM (Dynamic Random Access Memory).
  • A computer program product is provided, including a computer program stored in a computer-readable storage medium.
  • A processor of a computer device reads the computer program from the computer-readable storage medium, and the processor executes the computer program, so that the computer device performs the above animation data repair method.
  • This application can display a prompt interface or pop-up window, or output voice prompt information, before collecting user-related data and during the process of collecting user-related data.
  • The prompt interface, pop-up window or voice prompt information is used to inform the user that his/her relevant data is currently being collected. This application only starts to execute the relevant steps for obtaining the user's relevant data after obtaining the user's confirmation operation on the prompt interface or pop-up window; otherwise (that is, when the user's confirmation operation on the prompt interface or pop-up window is not obtained), the relevant steps for obtaining user-related data are terminated, and the user-related data is not obtained.
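The consent-gated collection flow described above can be sketched as follows. This is a minimal illustration, not the application's implementation; the function names, the prompt text, and the callback structure are assumptions made for the example.

```python
def collect_user_data(prompt_user, acquire_data):
    """Only acquire user-related data after an explicit confirmation.

    prompt_user: callable that shows a prompt interface / pop-up window
                 (or plays a voice prompt) and returns True on confirmation.
    acquire_data: callable that performs the actual data collection.
    """
    confirmed = prompt_user(
        "Your relevant data is about to be collected. Do you confirm?"
    )
    if not confirmed:
        # No confirmation operation received: finish the flow
        # without obtaining any user-related data.
        return None
    # Confirmation obtained: only now execute the collection steps.
    return acquire_data()
```

The key property is that `acquire_data` is never invoked on the non-confirmed path, matching the requirement that collection steps start only after the user's confirmation operation.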

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Disclosed are an animation data repair method and apparatus, a device, a storage medium, and a program product, relating to the technical field of data processing. The method comprises: acquiring to-be-repaired animation data obtained by retargeting motion capture data (201); creating colliders for at least one object in the to-be-repaired animation data, so as to obtain a collider set respectively corresponding to the at least one object (202); performing, on the basis of the collider set respectively corresponding to the at least one object, collision detection on the objects in multiple frames of the animation data, so as to obtain at least one key frame data group (203); obtaining at least one repaired data segment on the basis of the at least one key frame data group (204); and superimposing the at least one repaired data segment onto the to-be-repaired animation data, so as to obtain repaired animation data (205). The present application achieves automatic acquisition of key frame data groups (that is, an interpolation process) and improves the efficiency of animation data repair.
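The pipeline summarized in the abstract (collider creation, per-frame collision detection, key frame grouping, interpolated repair segments superimposed on the original animation) could be sketched as below. This is a simplified illustration under assumptions not stated in the application: spherical colliders, one scalar animation channel per frame, and linear interpolation between the clean frames bounding each colliding span.

```python
import math
from dataclasses import dataclass

@dataclass
class SphereCollider:
    x: float
    y: float
    z: float
    r: float

def collide(a, b):
    # Penetration depth between two sphere colliders (0.0 if not touching).
    d = math.dist((a.x, a.y, a.z), (b.x, b.y, b.z))
    return max(0.0, a.r + b.r - d)

def key_frame_groups(frames):
    # Group consecutive frame indices where any collider pair interpenetrates.
    groups, current = [], []
    for i, colliders in enumerate(frames):
        hit = any(collide(a, b) > 0
                  for j, a in enumerate(colliders)
                  for b in colliders[j + 1:])
        if hit:
            current.append(i)
        elif current:
            groups.append(current)
            current = []
    if current:
        groups.append(current)
    return groups

def repair_segment(anim, group):
    # Interpolate the channel across a colliding span, using the clean
    # frames just before and after the span as key frames.
    s, e = group[0] - 1, group[-1] + 1
    v0, v1 = anim[max(s, 0)], anim[min(e, len(anim) - 1)]
    n = e - s
    return {i: v0 + (v1 - v0) * (i - s) / n for i in group}

def superimpose(anim, segments):
    # Overwrite the original animation with the repaired segments.
    out = list(anim)
    for seg in segments:
        for i, v in seg.items():
            out[i] = v
    return out
```

Here the collision-detected spans play the role of the key frame data groups, and the interpolated dictionaries play the role of the repaired data segments that are superimposed onto the to-be-repaired animation data.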
PCT/CN2023/084373 2022-04-27 2023-03-28 Animation data repair method and apparatus, device, storage medium, and program product WO2023207477A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210454407.6A CN117011498A (zh) 2022-04-27 2022-04-27 Animation data repair method and apparatus, device, storage medium, and program product
CN202210454407.6 2022-04-27

Publications (1)

Publication Number Publication Date
WO2023207477A1 (fr) 2023-11-02

Family

ID=88517332

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/084373 WO2023207477A1 (fr) 2022-04-27 2023-03-28 Animation data repair method and apparatus, device, storage medium, and program product

Country Status (2)

Country Link
CN (1) CN117011498A (fr)
WO (1) WO2023207477A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102725038A (zh) * 2009-09-15 2012-10-10 Sony Corporation Combining multiple sensory inputs for digital animation
US9827496B1 * 2015-03-27 2017-11-28 Electronic Arts, Inc. System for example-based motion synthesis
CN112001989A (zh) * 2020-07-28 2020-11-27 Perfect World (Beijing) Software Technology Development Co., Ltd. Virtual object control method and apparatus, storage medium, and electronic apparatus
CN112270734A (zh) * 2020-10-19 2021-01-26 Beijing Dami Technology Co., Ltd. Animation generation method, readable storage medium, and electronic device
CN113724169A (zh) * 2021-09-08 2021-11-30 Guangzhou Huya Technology Co., Ltd. Skin interpenetration repair method, system, and computer device
CN113888680A (zh) * 2021-09-29 2022-01-04 Guangzhou Huya Technology Co., Ltd. Method, apparatus and device for repairing interpenetration in a three-dimensional model


Also Published As

Publication number Publication date
CN117011498A (zh) 2023-11-07

Similar Documents

Publication Publication Date Title
JP6898602B2 (ja) Robust mesh tracking and fusion using part-based key frames and a priori models
JP7125992B2 (ja) Building a virtual reality (VR) gaming environment using real-world virtual reality maps
US10860838B1 Universal facial expression translation and character rendering system
JP7457082B2 (ja) Reactive video generation method and generation program
CN111161395B (zh) Facial expression tracking method and apparatus, and electronic device
US20120021828A1 Graphical user interface for modification of animation data using preset animation samples
CN109189302B (zh) Control method and apparatus for AR virtual model
JP2015079502A (ja) Object tracking method, object tracking apparatus, and tracking feature selection method
JP7483979B2 (ja) Method and apparatus for playing multi-dimensional reactive video
TW202309834A (zh) Model reconstruction method, electronic device and computer-readable storage medium
CN113705520A (zh) Motion capture method and apparatus, and server
CN115331265A (zh) Training method for posture detection model, and driving method and apparatus for digital human
WO2023207477A1 (fr) Animation data repair method and apparatus, device, storage medium, and program product
CN114452646A (zh) Virtual object perspective processing method and apparatus, and computer device
CN108734774B (zh) Virtual limb construction method and apparatus, and human-computer interaction method
CN107820622A (zh) Virtual 3D scene production method and related device
WO2024027063A1 (fr) Live streaming method and apparatus, storage medium, electronic device, and product
de Castro et al. Interaction by Hand-Tracking in a Virtual Reality Environment
CN115953520B (zh) Recording and playback method and apparatus for virtual scene, electronic device, and medium
KR102349002B1 (ko) Method and apparatus for producing interactive media content with gesture recognition linked to dynamic projection mapping
CN116132653A (zh) Three-dimensional model processing method and apparatus, storage medium, and computer device
CN114296622B (zh) Image processing method and apparatus, electronic device, and storage medium
US11074738B1 System for creating animations using component stress indication
CN115984943B (zh) Facial expression capture and model training method, apparatus, device, medium, and product
US8896607B1 Inverse kinematics for rigged deformable characters

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23794912

Country of ref document: EP

Kind code of ref document: A1