WO2023207477A1 - Animation data repair method and apparatus, device, storage medium, and program product


Info

Publication number
WO2023207477A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
key frame
animation
repair
repaired
Prior art date
Application number
PCT/CN2023/084373
Other languages
French (fr)
Chinese (zh)
Inventor
许广龙
李志豪
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd. (腾讯科技(深圳)有限公司)
Publication of WO2023207477A1 publication Critical patent/WO2023207477A1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Definitions

  • the embodiments of the present application relate to the field of data processing technology, and in particular to an animation data repair method, device, equipment, storage medium and program product.
  • animation can be produced based on the motion capture data obtained through motion capture.
  • When the height proportions of the motion capture object are inconsistent with those of the animated character, interpenetration artifacts often appear in the animation data obtained by retargeting, for example an animated character's limbs interpenetrating each other, or a character's limbs interpenetrating objects in the animation.
  • In related approaches, animators repair these artifacts manually: they inspect the animation frame by frame against reference videos (such as the videos corresponding to the motion capture data) or the action director's requirements to find the animation data segments that contain interpenetration, and then repair those segments to obtain the repaired animation data.
  • This process usually requires substantial manpower and time; obtaining tens of seconds to a few minutes of repaired animation often takes hours or even days, so the repair efficiency of animation data is low.
  • Embodiments of the present application provide an animation data repair method, device, equipment, storage medium and program product, which can improve animation data repair efficiency.
  • the technical solution may include the following content.
  • According to one aspect of the embodiments of the present application, a method for repairing animation data is provided, the method including:
  • obtaining animation data to be repaired, where the animation data to be repaired includes multi-frame animation data obtained by retargeting motion capture data;
  • creating a collision body for at least one object in the animation data to be repaired to obtain a set of collision bodies corresponding to the at least one object, the set of collision bodies including at least one collision body;
  • performing collision detection on the objects in the multi-frame animation data based on the sets of collision bodies respectively corresponding to the at least one object, to obtain at least one key frame data group, each key frame data group corresponding to one interpenetration process;
  • obtaining at least one repaired data segment based on the at least one key frame data group, where a repaired data segment refers to a data segment in which the interpenetration has been repaired; and
  • superimposing the at least one repaired data segment and the animation data to be repaired according to the time periods corresponding to the at least one repaired data segment, to obtain repaired animation data.
  • an animation data repair device is provided, and the device includes:
  • An animation data acquisition module used to acquire animation data to be repaired, where the animation data to be repaired includes multi-frame animation data obtained by redirecting motion capture data;
  • a collision body creation module configured to create a collision body for at least one object in the animation data to be repaired, and obtain a collection of collision bodies corresponding to the at least one object, and the collection of collision bodies includes at least one collision body;
  • a key frame acquisition module configured to perform collision detection on the objects in the multi-frame animation data based on the sets of collision bodies respectively corresponding to the at least one object, to obtain at least one key frame data group, each key frame data group corresponding to one interpenetration process;
  • a repair fragment acquisition module configured to obtain at least one repaired data segment based on the at least one key frame data group, where a repaired data segment refers to a data segment in which the interpenetration has been repaired;
  • An animation data repair module is configured to superimpose the at least one repair data segment and the animation data to be repaired according to the time periods corresponding to the at least one repair data segment to obtain repair animation data.
  • a computer device includes a processor and a memory.
  • a computer program is stored in the memory.
  • the computer program is loaded and executed by the processor to implement the above animation data repair method.
  • a computer-readable storage medium is provided.
  • a computer program is stored in the readable storage medium, and the computer program is loaded and executed by a processor to implement the above animation data repair method.
  • a computer program product includes a computer program, and the computer program is stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer program from the computer-readable storage medium, and the processor executes the computer program, so that the computer device performs the above animation data repair method.
  • the technical solution provided by the embodiment of the present application can improve the acquisition efficiency of the interleaving process, thereby improving the repair efficiency of animation data.
  • Figure 1 is a schematic diagram of a solution implementation environment provided by an embodiment of the present application.
  • Figure 2 is a flow chart of an animation data repair method provided by an embodiment of the present application.
  • Figure 3 is a schematic diagram of a collision body creation method provided by an embodiment of the present application.
  • Figure 4 is a flow chart of a method for obtaining a key frame data group provided by an embodiment of the present application.
  • Figure 5 is a schematic diagram of interpenetration repair for interaction between an animated character and the ground provided by an embodiment of the present application.
  • Figure 6 is a schematic diagram of interpenetration repair for interaction between animated characters provided by an embodiment of the present application.
  • Figure 7 is a schematic diagram of the creation of collision bodies for interaction between animated characters provided by an embodiment of the present application.
  • Figure 8 is a schematic diagram of the creation of collision bodies for interaction between animated characters and props provided by an embodiment of the present application.
  • Figure 9 is a schematic diagram of collision detection using bounding spheres provided by an embodiment of the present application.
  • Figure 10 is a block diagram of an animation data repair device provided by an embodiment of the present application.
  • Figure 11 is a block diagram of an animation data repair device provided by another embodiment of the present application.
  • Figure 12 is a block diagram of a computer device provided by an embodiment of the present application.
  • Motion capture: the process of recording the motion data of motion capture objects (such as human bodies, animals, and other moving objects) with the help of cameras or inertial devices.
  • Retargeting: a computer graphics term referring to the process of mapping motion capture data obtained through motion capture from the motion capture object onto an animated character (or a robot, game character, etc.).
  • Collision detection: a computer graphics term referring to the process of determining, in a computer, whether two objects intersect.
  • Polygon mesh: a computer graphics term referring to a geometric model composed of a series of polygons (for example, geometric models of human bodies, animals, or objects), containing geometric information such as mesh vertices, mesh edges, and mesh faces.
  • Skeleton: a computer graphics term. If an object in the geometric model influences a series of vertices of the polygon mesh and can cause those vertices to follow the object's movement or deformation, the object is called a bone. A sequence of bones forms a bone hierarchy.
  • Bounding sphere: a computer graphics term. The sphere of smallest radius such that all mesh vertices corresponding to a mesh lie inside it is called the bounding sphere of that mesh.
  • FK: Forward Kinematics, in which the transforms of child bones are computed from the transforms of their parent bones along the bone hierarchy.
  • IK: Inverse Kinematics, in which the rotations of the bones along a chain are solved so that an end bone (for example, a hand or foot) reaches a target position.
  • Figure 1 shows a schematic diagram of a solution implementation environment provided by an embodiment of the present application.
  • The implementation environment of this solution can be implemented as an animation data repair system architecture.
  • the implementation environment may include: a terminal 10 and a server 20 .
  • the terminal 10 may be an electronic device such as a mobile phone, a tablet computer, a PC (Personal Computer), a wearable device, an intelligent robot, a camera, an inertial device, and the like.
  • the terminal 10 can be used to obtain motion capture data, and provide the motion capture data to the server 20, so that the motion capture data can be converted, repaired, and other processes through the server 20 to obtain animation data.
  • a client running a target application can be installed in the terminal 10.
  • the client of the target application can be used to convert, repair, and other processing on the motion capture data to obtain animation data.
  • the above target applications may be animation production applications, animation restoration applications, video production applications, game production applications, entertainment interactive applications, simulation applications, simulation learning applications, etc.
  • the embodiments of the present application do not limit this.
  • the server 20 can be used to convert, repair, and other processing on the motion capture data to obtain animation data.
  • the server 20 may also be used to provide background services for clients of target applications in the terminal 10 .
  • the server 20 may be a backend server for clients of the target application.
  • the server 20 may be an independent physical server, a server cluster or a distributed system composed of multiple physical servers, or a cloud server that provides cloud computing services.
  • the terminal 10 and the server 20 can communicate through the network 30.
  • the network can be a wired network or a wireless network.
  • the execution subject of each step may be a computer device.
  • a computer device can be any electronic device capable of storing and processing data.
  • the computer device may be the server 20 in FIG. 1 , the terminal device 10 in FIG. 1 , or another device other than the terminal device 10 and the server 20 .
  • The technical solutions provided by the embodiments of this application are applicable to any scene that requires animation data repair, such as virtual object fitting scenes (for example, virtual people or virtual animals), game production scenes (for example, configuring the actions of game characters), video production scenes, animation data repair scenes, etc.
  • the technical solution provided by the embodiments of the present application can improve the repair efficiency of animation data.
  • the terminal 10 sends the collected motion capture data to the server 20.
  • the server 20 redirects the motion capture data to obtain the animation data to be repaired.
  • The server 20 creates collision bodies for at least one object in the animation data to be repaired, then performs collision detection based on the created collision bodies to automatically obtain key frame data (used to characterize the interpenetration process), then automatically obtains repaired data segments based on the key frame data (that is, data segments in which the interpenetration has been repaired), and finally automatically repairs the animation data to be repaired based on the repaired data segments to obtain the repaired animation data.
  • Figure 2 shows a flow chart of an animation data repair method provided by an embodiment of the present application.
  • the execution subject of each step of the method can be the terminal 10 or the server 20 in the solution implementation environment shown in Figure 1.
  • For ease of introduction and explanation, the following description takes a "computer device" as the execution subject of each step.
  • the method may include at least one of the following steps (201-205).
  • Step 201 Obtain animation data to be repaired.
  • the animation data to be repaired includes multi-frame animation data obtained by redirecting motion capture data.
  • The motion capture data refers to the data obtained by capturing the motion of the motion capture object.
  • For example, a motion capture device (such as a camera or an inertial device) captures the motion data of the motion capture object, such as standing, walking, running, and interaction data.
  • The frame rate of the motion capture data can be set according to actual usage requirements, such as 25 Hz or 30 Hz.
  • the embodiments of the present application do not limit the specific type of the motion capture device, and do not limit the motion capture frequency of the motion capture device.
  • A motion capture object is an object whose movements change over time and whose movements are captured.
  • the motion capture objects in the embodiments of the present application are not limited to objects in the real world, but can also be objects in the virtual world.
  • the environment in which the objects are located is not limited.
  • the specific type of the motion capture object includes but is not limited to real people, real animals, real robots and other objects that can produce motion activities.
  • the specific type of the motion capture object includes but is not limited to virtual characters, virtual animals, virtual animation personnel and other objects that can generate motion activities.
  • The embodiments of the present application do not limit the specific object type of the motion capture object.
  • For ease of description, a motion capture actor is taken as an example of the motion capture object; other types of motion capture objects can be handled with reference to the motion capture actor.
  • A retargeting object is another object that is made to perform the same actions as the motion capture object.
  • the redirected object in the embodiment of the present application is not limited to objects in the real world, but may also be objects in the virtual world.
  • control parameters can be generated based on the motion capture data of the above motion capture actor, thereby controlling the real robot to produce the same actions as the motion capture actor.
  • the embodiments of the present application are mainly explained on the assumption that the redirection object is a virtual object in the virtual world. Of course, those skilled in the art should know that the redirection object is not limited to virtual objects in the virtual world.
  • animation data refers to the data obtained by redirecting the actions of the motion capture object in the motion capture data to the redirection object in the animation data.
  • the animation data of different frames correspond to the actions at different times.
  • For example, if the motion capture object performs action A at a certain moment, the animation data of the retargeted object in the Nth frame of the animation is determined accordingly, and that animation data indicates that the retargeted object is performing action A.
  • the animation data includes, but is not limited to, position data and motion data of the redirected object in each frame of the animation.
  • motion capture objects can include people, animals, plants, objects, etc.
  • Retargeting objects refer to objects in the animation data that need to fit the actions of the motion capture objects, such as virtual people, virtual animals, animated characters, game characters, object models, etc., which are not limited in the embodiments of this application.
  • the above motion capture data may correspond to multiple motion capture objects, that is, the animation data includes redirection objects corresponding to the multiple motion capture objects.
  • animation data can be interaction data between animated characters
  • animation data can also be interaction data between animated characters and object models
  • animation data can also be interaction data between object models.
  • The embodiments of this application do not limit this.
  • The interaction data here can be understood as data describing physical contact or collisions.
  • The animation data to be repaired refers to animation data of the retargeted object in which artifacts need to be repaired.
  • For example, due to differences between the skeletal data (such as the height proportions) of the motion capture object (such as the motion capture actor) and the skeletal data (such as the height proportions) of the retargeted object (such as the animated character), interpenetration occurs in the animation data, such as limbs intersecting other limbs or limbs intersecting object models.
  • the animation data with the intersecting phenomenon can be determined as the animation data to be repaired.
  • the animation data to be repaired may be skeletal animation data.
  • the skeletal animation data refers to animation data that uses the bones of the redirected object to display actions.
  • the skeletal animation data may include position information, steering information, movement information, etc. of the bones of the redirected object in each frame.
  • In some embodiments, the process of obtaining the animation data to be repaired can be as follows: obtain the motion capture data corresponding to the motion capture object; convert the motion capture data to obtain skeletal motion capture data, which corresponds to the skeleton of the motion capture object; and retarget the skeletal motion capture data to obtain the animation data to be repaired.
  • skeletal motion capture data refers to animation data that uses the skeleton of a motion capture object to display actions.
  • Bone data can include bone size, number of bones, bone level association information, etc.
  • In some embodiments, the motion capture data corresponding to the motion capture object is first converted into skeletal motion capture data; then, based on the skeletal data of the retargeted object and the skeletal data of the motion capture object, the skeletal motion capture data is retargeted to obtain the animation data to be repaired.
  • For example, forward kinematics (FK) or inverse kinematics (IK) can be used during retargeting to obtain the animation data to be repaired; a minimal forward-kinematics sketch is given below.
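  • The following is a minimal sketch of forward kinematics over a bone hierarchy, assuming each bone stores a local rotation matrix and a local offset from its parent; the data layout and function name are illustrative assumptions, not taken from this disclosure.

```python
import numpy as np

def forward_kinematics(order, parent, local_rot, local_offset):
    """Compute world-space bone rotations and positions.

    order        : bone indices ordered so that parents come before children
    parent       : parent[i] is the parent index of bone i (-1 for the root)
    local_rot    : local_rot[i] is the 3x3 rotation of bone i relative to its parent
    local_offset : local_offset[i] is bone i's offset from its parent, expressed
                   in the parent's local frame
    """
    n = len(order)
    world_rot = [np.eye(3) for _ in range(n)]
    world_pos = [np.zeros(3) for _ in range(n)]
    for i in order:
        p = parent[i]
        if p < 0:                                   # root bone
            world_rot[i] = np.asarray(local_rot[i], float)
            world_pos[i] = np.asarray(local_offset[i], float)
        else:                                       # accumulate the parent transform
            world_rot[i] = world_rot[p] @ np.asarray(local_rot[i], float)
            world_pos[i] = world_pos[p] + world_rot[p] @ np.asarray(local_offset[i], float)
    return world_rot, world_pos
```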
  • The technical solution provided by this embodiment determines the repaired animation data based on skeletal motion capture data.
  • The interaction between motion capture objects is directly reflected in the interaction of their bones, where interaction can be understood as collision or contact.
  • Because motion capture objects and retargeted objects may have different bone sizes, an interaction between the motion capture objects does not necessarily correspond to an interaction between the retargeted objects.
  • Therefore, the skeletal motion capture data is retargeted into the animation data to be repaired based on both the skeletal data of the retargeted object and the skeletal data of the motion capture object, and the animation data to be repaired is determined from the skeletal data; using the skeletal data as the medium in this way can improve both the accuracy and the efficiency of obtaining the animation data to be repaired.
  • Step 202 Create a collision body for at least one object in the animation data to be repaired, and obtain a collection of collision bodies corresponding to at least one object.
  • the collection of collision bodies includes at least one collision body.
  • At least one object in the animation data to be repaired is the above-mentioned redirection object.
  • the collider is a three-dimensional object with a specific volume or shape used for collision detection of redirected objects.
  • For example, the collider may be a bounding sphere, a capsule, a bounding box, etc.
  • The embodiments of the present application do not limit the volume or shape of the collider.
  • since the redirection object can correspond to multiple bones, in order to ensure the accuracy of collision detection, a collision body can be created for each bone.
  • In some embodiments, the creation process of the collision bodies can be as follows: for a first object among the at least one object, obtain the polygon mesh of the first object, which represents the shape of the object; for a target part corresponding to the first object, obtain the target local polygon mesh corresponding to the target part, that is, the local polygon mesh affected by the target part.
  • Based on the mesh vertices corresponding to the target local polygon mesh, a collision body for the target part is constructed such that the collision body surrounds all the vertices of the target local polygon mesh.
  • Collision bodies are constructed in the same way for each part corresponding to the first object, to obtain the set of collision bodies corresponding to the first object.
  • the first object may refer to any object among the at least one object.
  • the target part may refer to any part of the first object, and the part may be represented by a bone.
  • the target local polygon mesh can be a local polygon mesh affected by the target bone.
  • the smallest collision body that can surround all grid points corresponding to the target local polygon mesh can be determined as the collision body of the target part.
  • The target parts can be set adaptively according to actual usage requirements; for example, colliders can be created for only some parts of the retargeted object, or for all parts of the retargeted object.
  • In some embodiments, a target part is a part of the retargeted object where interaction will occur.
  • That is, at least one part of the retargeted object where interaction will occur is selected as a target part.
  • a collision body 302 corresponding to the head skeleton 301 can be constructed based on all grid points corresponding to the head skeleton 301 .
  • Similarly, a collision body corresponding to the hand bones of the animated character 300 can be constructed based on all mesh vertices corresponding to the hand bones, and a collision body corresponding to the foot bones can be constructed based on all mesh vertices corresponding to the foot bones of the animated character 300.
  • the collision body corresponding to the target part moves with the target part.
  • For example, the target part and its corresponding collider can be set in a parent-child relationship, with the target part as the parent and the collider as the child; the collider then follows the movement of the target part, which avoids computing and re-creating collision bodies frame by frame, greatly reducing the amount of computation and the computation time, and is therefore beneficial to improving the efficiency of animation data repair.
  • the collision body 302 will move along with the head skeleton 301 .
  • the center position of the collider is bound to the bone data of the redirected object, so that when the redirected object changes, the collider moves accordingly.
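  • As an illustration only, the following sketch builds one bounding-sphere collider per bone from the vertices of the local polygon mesh that the bone influences; a simple centroid-based sphere is used here rather than the true minimum-radius bounding sphere, and the data layout is assumed, not taken from this disclosure.

```python
import numpy as np

def bounding_sphere(vertices):
    """vertices: (N, 3) array of mesh vertices influenced by one bone.
    Returns (center, radius) of a sphere enclosing all of them."""
    vertices = np.asarray(vertices, dtype=float)
    center = vertices.mean(axis=0)                               # sphere center
    radius = float(np.linalg.norm(vertices - center, axis=1).max())
    return center, radius

def collider_set_for_object(local_meshes):
    """local_meshes: dict mapping a bone/part name to its local mesh vertices.
    Each resulting collider would then be parented to its bone so that it
    follows the bone's motion, as described above."""
    return {bone: bounding_sphere(verts) for bone, verts in local_meshes.items()}
```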
  • Step 203 Based on the set of colliders corresponding to at least one object, collision detection is performed on the objects in the multi-frame animation data to obtain at least one key frame data group, and each key frame data group corresponds to an interleaving process.
  • Each keyframe data group can include multiple frames of keyframe data.
  • Each frame of keyframe data corresponds to one frame of animation data.
  • Each such frame of animation data contains a key interpenetration pose of the interpenetration process; a complete interpenetration process can be represented by multiple key interpenetration poses.
  • the interspersing action can be considered as the overlap of the spaces occupied by the redirected object and other objects in the animation.
  • In the real world, object A does not occupy the same space as object B; that is, the space occupied by object A and the space occupied by object B should be independent of each other and non-overlapping.
  • Similarly, the retargeted object and other objects in the animated world should occupy separate, mutually independent spaces, unless the space occupied by the other object can itself be affected by the retargeted object.
  • For example, if the retargeted object sinks into a swamp, the retargeted object and the swamp overlap in space, but this is not considered an interpenetration.
  • In the embodiments of the present application, an interpenetration is considered to occur when the spaces occupied by at least two objects whose volumes do not change overlap at least partially.
  • The embodiments of the present application do not limit the extent of the interpenetration, that is, they do not limit the extent of the overlap.
  • In some embodiments, the above key frame data group includes start key frame data, peak key frame data and end key frame data, where the start key frame data refers to the animation data corresponding to the starting moment of the interpenetration behavior, the peak key frame data refers to the animation data corresponding to the maximum extent of the interpenetration behavior, and the end key frame data refers to the animation data corresponding to the ending moment of the interpenetration behavior.
  • One interpenetration behavior corresponds to one interpenetration process; for example, if a limb of animated character A interpenetrates a limb of animated character B, the process from the start of that limb interpenetration to its end is one interpenetration process.
  • step 203 may also include the following sub-steps:
  • Step 203a For the target frame animation data in the multi-frame animation data, perform collision detection on the objects in the target frame animation data, and obtain at least one interleaving degree quantified value corresponding to the target frame animation data.
  • The interpenetration degree value is used to represent the degree of interpenetration between object pairs within the at least one object.
  • the target frame animation data may refer to any frame of animation data among multiple frames of animation data.
  • The interpenetration degree value is positively correlated with the degree of interpenetration between the object pair.
  • For example, suppose a limb of animated character A interpenetrates a limb of animated character B. When the two limbs are first detected to come into contact, the interpenetration degree value can be set to 0; as the interpenetration deepens (for example, the portion of the limb from its end to the intersection point takes up an increasingly large fraction of the whole limb), the value gradually increases, and at the deepest interpenetration it can be set to 1. As the two limbs are then detected to separate, the value gradually decreases, and when they have fully separated (the interpenetration has ended) it can be set back to 0. Optionally, if no contact between the two limbs is detected, the interpenetration degree value can be set directly to 0.
  • In some embodiments, the interpenetration degree value is determined based on the overlap between the space occupied by the limb of animated character A and the space occupied by the limb of animated character B, and the value is positively correlated with the size of that overlapping space.
  • Alternatively, the maximum distance between any two points on the surface of the overlapping space can be used as the interpenetration degree value.
  • Each object pair can obtain at least one interleaving degree quantified value sequence in units of time, and then can generate at least one interleaving degree quantified value curve.
  • Each peak segment in the interleaving degree quantified value curve can correspond to an interleaving process.
  • An object pair can correspond to multiple interpenetration processes in the same period; the interpenetration degree value sequence corresponding to each action can be obtained per action, so as to determine whether a given action is an interpenetration behavior and to determine the key frame data group corresponding to that action.
  • the target frame animation data can correspond to multiple actions or multiple object pairs
  • the target frame animation data can correspond to multiple quantized values of the degree of interspersion.
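  • A minimal sketch of a per-frame interpenetration degree computation is given below, assuming sphere colliders and using the penetration depth (overlap along the line of centers) as the quantified value; any positive monotone measure of overlap could be substituted, and the data layout is illustrative.

```python
import numpy as np

def interpenetration_degrees(colliders_a, colliders_b):
    """colliders_a / colliders_b: lists of (center, radius) spheres belonging to
    the two objects of one object pair, for a single frame of animation data.
    Returns the positive penetration depths detected in this frame."""
    degrees = []
    for center_a, radius_a in colliders_a:
        for center_b, radius_b in colliders_b:
            dist = np.linalg.norm(np.asarray(center_a, float) - np.asarray(center_b, float))
            depth = (radius_a + radius_b) - dist   # > 0 only when the spheres overlap
            if depth > 0:
                degrees.append(depth)
    return degrees
```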
  • Step 203b Determine at least one key frame data group from the multi-frame animation data based on at least one interleaving degree quantization value corresponding to the multi-frame animation data.
  • In some embodiments, the object pair includes a first object and a second object between which an interpenetration behavior exists; based on the interpenetration degree values corresponding to the object pair, at least one candidate frame data sequence corresponding to the object pair is obtained from the multi-frame animation data, where each candidate frame data sequence corresponds to one interpenetration behavior between the first object and the second object.
  • For a target candidate frame data sequence among the at least one candidate frame data sequence, key frames are selected from the target candidate frame data sequence to obtain the key frame data group corresponding to it; based on the at least one candidate frame data sequence, at least one key frame data group corresponding to the object pair is determined.
  • the second object may refer to any object in the at least one object except the first object.
  • the candidate frame data sequence may include multiple frames of candidate frame data, and each frame of candidate frame data corresponds to an interleaving action of an interleaving behavior at a certain moment.
  • The candidate frame data sequences may correspond to interpenetration behaviors of a given object pair in different time periods, or to different interpenetration behaviors of a given object pair in the same time period; the embodiments of the present application do not limit this.
  • In some embodiments, the candidate frame data sequences can be obtained based on at least one interpenetration degree value sequence between the first object and the second object.
  • the multi-frame animation data corresponding to the peak segment in the interspersed degree quantized value curve can be determined as the candidate frame data sequence.
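  • The following sketch illustrates one way to split a per-frame interpenetration degree sequence into candidate frame data sequences by taking each contiguous run of non-zero values (a peak segment of the curve) as one interpenetration behavior; it is an illustrative assumption, not the exact procedure of this disclosure.

```python
def candidate_sequences(degree_per_frame):
    """degree_per_frame: one interpenetration degree value per frame.
    Returns (start_frame, end_frame) index ranges, inclusive, one per behavior."""
    sequences, start = [], None
    for i, degree in enumerate(degree_per_frame):
        if degree > 0 and start is None:
            start = i                              # interpenetration begins
        elif degree == 0 and start is not None:
            sequences.append((start, i - 1))       # interpenetration has ended
            start = None
    if start is not None:                          # still interpenetrating at the end
        sequences.append((start, len(degree_per_frame) - 1))
    return sequences
```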
  • the target candidate frame data sequence may refer to any candidate frame data sequence among the at least one candidate frame data sequence.
  • the method of selecting keyframes corresponding to the target candidate frame data sequence is introduced.
  • The specific procedure can be as follows: for the target interpenetration behavior corresponding to the target candidate frame data sequence, the frame of animation data immediately preceding the first frame whose interpenetration degree value is non-zero is determined as the start key frame data corresponding to the target candidate frame data sequence.
  • The frame of animation data with the maximum interpenetration degree value for the target interpenetration behavior is determined as the peak key frame data corresponding to the target candidate frame data sequence, and the first frame after the peak key frame data whose interpenetration degree value is zero is determined as the end key frame data corresponding to the target candidate frame data sequence.
  • In other embodiments, because the animation data may be noisy, the start key frame data and the end key frame data are not determined solely from the single frame preceding the first non-zero interpenetration degree value.
  • For example, whether frames among the S-th to K-th frames of animation data have non-zero interpenetration degree values, and the number of such frames, can be used to determine the start key frame data, peak key frame data and end key frame data, where K and S are positive integers.
  • the embodiment of the present application does not limit the timing of determining the starting key frame data, the peak key frame data, and the ending key frame data.
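  • A minimal sketch of the basic start/peak/end selection rule described above follows; indices refer to positions in the full animation sequence, and the noise-robust window variant is omitted for brevity.

```python
def key_frame_group(degree_per_frame, start, end):
    """degree_per_frame: interpenetration degree value per frame.
    (start, end): one candidate sequence of consecutive non-zero values, inclusive.
    Returns (start_key, peak_key, end_key) frame indices."""
    start_key = max(start - 1, 0)                     # frame just before contact begins
    peak_key = max(range(start, end + 1), key=lambda i: degree_per_frame[i])
    end_key = min(end + 1, len(degree_per_frame) - 1) # first zero-valued frame after the peak
    return start_key, peak_key, end_key
```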
  • Step 204 Obtain at least one repaired data segment based on at least one key frame data group.
  • The repaired data segment refers to a data segment in which the interpenetration process has been repaired.
  • In other words, a repaired data segment is a data segment in which there is no longer any interpenetration between the retargeted objects it involves.
  • the repair data fragment can correspond to a complete action, that is, in addition to the key frame data corresponding to the action, it also includes transition frame data between the key frame data.
  • the transition frame data is used to achieve a smooth transition of the action.
  • In some embodiments, the at least one key frame data group can each be repaired to obtain at least one repaired key frame data group, in which the interpenetration behavior has been repaired; transition frame interpolation is then performed on each repaired key frame data group to obtain the at least one repaired data segment.
  • For example, the at least one object can be retargeted again so that no interpenetration behavior remains between the objects: based on the skeletal data corresponding to each object, and guided by the poses corresponding to the key frame data group, the poses of the objects are recalculated and corrected so that they no longer interpenetrate while staying as close as possible to the poses corresponding to the key frame data group, thereby obtaining the repaired key frame data group.
  • the repaired data fragments with complete movements can be obtained by performing transition frame interpolation between each frame of animation data in the repaired keyframe data group.
  • The transition frames can be constructed according to a configured motion interpolation algorithm, as in the sketch below.
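  • A minimal interpolation sketch follows, using quaternion slerp for joint rotations and linear interpolation for the root position; this stands in for whatever motion interpolation algorithm is configured, and the frame layout is assumed for illustration.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = float(np.dot(q0, q1))
    if dot < 0.0:                      # take the shorter path
        q1, dot = -q1, -dot
    if dot > 0.9995:                   # nearly parallel: fall back to linear interpolation
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def transition_frames(key_a, key_b, count):
    """key_a / key_b: {'root': (3,) position, 'rotations': {joint: unit quaternion}}.
    Returns `count` transition frames evenly spaced between the two key frames."""
    frames = []
    for k in range(1, count + 1):
        t = k / (count + 1)
        frames.append({
            'root': (1 - t) * np.asarray(key_a['root'], float)
                    + t * np.asarray(key_b['root'], float),
            'rotations': {j: slerp(key_a['rotations'][j], key_b['rotations'][j], t)
                          for j in key_a['rotations']},
        })
    return frames
```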
  • If the animation curve formed by the key frame data groups oscillates at a very high frequency (that is, it switches back and forth frequently between interpenetrating and non-interpenetrating), the animation curve of the repaired animation data will also oscillate at a high frequency and be unstable; therefore, the embodiments of the present application can introduce a filter to remove the high-frequency signal and thereby obtain repaired animation data with a stable animation curve.
  • How an action is completed is related to the skeletal data; for example, to cover the same distance, a character with a larger stride needs fewer steps, while a character with a smaller stride needs more steps. For this reason, the time period corresponding to the key frame data can be further adjusted based on the difference between the skeletal data of the retargeted object and the skeletal data of the motion capture object.
  • In some embodiments, the above optimization process may include the following: for a first object pair among the at least one object, filter the at least one key frame data group corresponding to the first object pair to obtain at least one filtered key frame data group corresponding to the first object pair, where the length of the time period corresponding to each filtered key frame data group is greater than a first duration threshold.
  • Post-processing is then performed on the at least one filtered key frame data group corresponding to the first object pair to obtain at least one adjusted key frame data group corresponding to the first object pair, where the action span of the adjusted key frame data group conforms to the skeletal data of the object pair; in this way, the adjusted key frame data group corresponding to each of the at least one key frame data group is obtained.
  • The first duration threshold is used for high-frequency filtering, that is, to filter out interpenetration behaviors that are too short to affect the animation effect, as well as data noise (such as jitter).
  • the first duration threshold can be set according to actual usage requirements, such as 0.05s, 0.07s, 0.1s, etc., which is not limited in the embodiments of the present application.
  • In some embodiments, the post-processing can be as follows: determine a first adjustment parameter based on the skeletal data of the first object pair and the skeletal data of the motion capture objects corresponding to the first object pair; when the first adjustment parameter is greater than a first threshold, expand the time periods corresponding to the filtered key frame data groups backward to obtain the adjusted key frame data groups; when the first adjustment parameter is less than the first threshold, compress the time periods corresponding to the filtered key frame data groups forward to obtain the adjusted key frame data groups. In some embodiments, when the first adjustment parameter is equal to the first threshold, the time periods corresponding to the filtered key frame data groups are neither expanded backward nor compressed forward.
  • the first adjustment parameter is used to adjust the timestamp of the key frame data to make the action process corresponding to the key frame data smoother and more reasonable.
  • In some embodiments, the difference between the skeletal data of the first object pair and the skeletal data of the corresponding motion capture objects can be used as the first adjustment parameter, or the ratio between the skeletal data of the first object pair and the skeletal data of the corresponding motion capture objects can be used as the first adjustment parameter; the embodiments of the present application do not limit this.
  • For example, the first adjustment parameter can be determined from the heights of the two retargeted objects in the first object pair and the heights of the two corresponding motion capture objects.
  • When one frame of key frame data involves multiple object pairs, whether the key frame data is compressed forward or expanded backward can be determined based on a weighted sum of the first adjustment parameters corresponding to the object pairs.
  • The specific offset applied to the key frame data (that is, the degree of compression or expansion) is related to the height of the retargeted object: the taller the retargeted object, the greater the offset.
  • the first threshold corresponds to the first adjustment parameter, and the first threshold can be set and adjusted based on experience values.
  • For example, the first threshold can be set to 1. A first adjustment parameter equal to 1 indicates that the heights of the first object pair are generally the same as the heights of the corresponding motion capture objects, and the time period of the key frame data does not need to be adjusted.
  • If the first adjustment parameter is greater than 1, the heights of the first object pair are generally greater than the heights of the corresponding motion capture objects, and the time period of the key frame data can be expanded backward (for example, the peak key frame data and end key frame data are delayed).
  • If the first adjustment parameter is less than 1, the heights of the first object pair are generally smaller than the heights of the corresponding motion capture objects, and the time period of the key frame data can be compressed forward (for example, the peak key frame data and end key frame data are moved earlier).
  • At least one repair data segment can be obtained based on the adjusted key frame data group corresponding to at least one key frame data group.
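  • The sketch below illustrates the duration filtering and the height-based time-period adjustment just described, assuming the first adjustment parameter is a height ratio and that the peak/end times are shifted by a fixed offset; both assumptions are illustrative simplifications.

```python
def filter_and_adjust(groups, min_duration, height_ratio, offset):
    """groups      : list of (start_t, peak_t, end_t) times for one object pair
    min_duration   : first duration threshold, in seconds
    height_ratio   : first adjustment parameter (1.0 means equal heights)
    offset         : how far (seconds) the peak/end times are shifted"""
    adjusted = []
    for start_t, peak_t, end_t in groups:
        if end_t - start_t <= min_duration:
            continue                                  # too short: noise or negligible clipping
        if height_ratio > 1.0:                        # taller character: expand backward
            peak_t, end_t = peak_t + offset, end_t + offset
        elif height_ratio < 1.0:                      # shorter character: compress forward
            peak_t, end_t = peak_t - offset, end_t - offset
        adjusted.append((start_t, peak_t, end_t))
    return adjusted
```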
  • Step 205 Superimpose at least one repair data segment and the animation data to be repaired according to the time periods corresponding to the at least one repair data segment to obtain repair animation data.
  • In some embodiments, for a target repaired data segment, the original data segment corresponding to the target time period can be obtained from the animation data to be repaired according to the target time period corresponding to the target repaired data segment; then, based on the skeletal action parameters corresponding to the original data segment and the retargeting repair parameters corresponding to the target repaired data segment, the original data segment and the target repaired data segment are superimposed to obtain the repaired animation data corresponding to the target time period.
  • the target repair data fragment may be any repair data fragment among the at least one repair data fragment.
  • Bone action parameters are used to indicate the action of the redirected object, such as the rotation angle, movement distance, rotation direction, etc. of each bone.
  • Retargeting repair parameters are used to indicate the state of the retargeted object, such as the position and posture of the retargeted object.
  • the skeletal action parameters can be used to make the action of the redirected object match the action of the motion capture object as much as possible, and the redirection repair parameters can be used to repair the state of the redirected object so that there is no interleaving between the redirected objects.
  • For the portions of the animation data to be repaired that fall outside the time periods corresponding to the repaired data segments, the embodiments of the present application keep them unchanged without further processing.
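  • A minimal superimposition sketch follows, in which frames inside a repaired segment's time period are replaced by the repaired frames and all other frames are kept unchanged; the blending of skeletal action parameters and retargeting repair parameters is simplified away, and the frame layout is assumed.

```python
def superimpose(animation, repaired_segments):
    """animation         : list of frames (index = frame number)
    repaired_segments    : list of (start_frame, frames), where `frames` overlays
                           the original animation starting at start_frame"""
    result = list(animation)                          # frames outside repairs stay as-is
    for start_frame, frames in repaired_segments:
        for k, frame in enumerate(frames):
            index = start_frame + k
            if 0 <= index < len(result):
                result[index] = frame                 # overlay the repaired frame
    return result
```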
  • For example, as shown in Figure 5, the animation data 501 is a certain frame of the animation data to be repaired, in which the hand of the animated character interpenetrates the ground.
  • After repair, the repaired animation data 502 corresponding to the animation data 501 is obtained, in which the interpenetration between the animated character's hand and the ground has been repaired, that is, the hand no longer penetrates the ground.
  • As shown in Figure 6, the animation data 601 is a certain frame of the animation data to be repaired, in which the hand of the animated character 603 interpenetrates the animated character 604.
  • After repair, the repaired animation data 602 corresponding to the animation data 601 is obtained, in which the interpenetration between the hand of the animated character 603 and the animated character 604 has been repaired, that is, the hand of the animated character 603 no longer penetrates the animated character 604.
  • In summary, the technical solution provided by the embodiments of the present application creates collision bodies for at least one object in the animation data to be repaired and then, based on the created collision bodies, performs collision detection on the multiple frames of animation data corresponding to the animation data to be repaired to obtain the key frame data groups, thereby realizing automatic acquisition of the interpenetration processes.
  • The technical solution provided by the embodiments of the present application can therefore improve the efficiency of acquiring the interpenetration processes, thereby improving the repair efficiency of the animation data.
  • the quality of the keyframe data group can be improved, thereby improving the repair quality of the animation data.
  • the rationality of the distribution of the key frame data group can also be improved, thereby improving the rationality of the repair of the animation data.
  • the degree of interspersing behavior at different stages can be quickly and easily obtained, which is conducive to improving the extraction efficiency of key frame data, thereby further improving the efficiency of repairing animation data.
  • the animation data repair method provided by the embodiment of the present application is described.
  • the specific content may be as follows:
  • the animation data 601 is a certain frame in the animation data to be repaired.
  • In the animation data 601, the hands of the animated characters interpenetrate each other.
  • The bounding sphere is the sphere of smallest diameter that can enclose the local polygon mesh.
  • A set of bounding spheres corresponding to the animated character 702 is obtained; the bounding spheres and the bones can be set in a parent-child relationship, so that each bounding sphere follows its bone.
  • collision detection is performed on each frame of animation data corresponding to the animation data to be repaired, and a collision detection result corresponding to each frame of animation data is obtained.
  • If the distance between the centers of two bounding spheres is greater than the sum of their radii, it is determined that the parts corresponding to the two bounding spheres do not interpenetrate; if the center distance is equal to the sum of the radii, it is determined that the parts corresponding to the two bounding spheres are just beginning or just ending an interpenetration; if the center distance is less than the sum of the radii, it is determined that the parts corresponding to the two bounding spheres are in an interpenetrating state.
  • For example, as shown in Figure 9, if the distance between the center of the bounding sphere 901 and the center of the bounding sphere 902 (hereinafter, the center distance) is greater than the sum of the radii of the bounding sphere 901 and the bounding sphere 902, it is determined that the part corresponding to the bounding sphere 901 (such as a hand) and the part corresponding to the bounding sphere 902 (such as a hand) do not interpenetrate.
  • When the center distance becomes equal to the sum of the radii of the bounding sphere 901 and the bounding sphere 902, it is determined that the part corresponding to the bounding sphere 901 (such as a hand) and the part corresponding to the bounding sphere 902 (such as a hand) begin (or end) to interpenetrate.
  • When the center distance becomes smaller than the sum of the radii of the bounding sphere 901 and the bounding sphere 902, it is determined that the part corresponding to the bounding sphere 901 (such as a hand) and the part corresponding to the bounding sphere 902 (such as a hand) are in an interpenetrating state; a minimal sketch of this check is given below.
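  • The following is a minimal sketch of the bounding-sphere test described above, classifying the state of two parts by comparing the distance between the sphere centers with the sum of the radii; the tolerance value is an illustrative assumption.

```python
import math

def sphere_collision_state(center_a, radius_a, center_b, radius_b, eps=1e-6):
    """Returns 'separate', 'touching' (interpenetration begins or ends),
    or 'interpenetrating' for two bounding spheres."""
    center_dist = math.dist(center_a, center_b)
    radii_sum = radius_a + radius_b
    if center_dist > radii_sum + eps:
        return 'separate'              # the two parts do not interpenetrate
    if abs(center_dist - radii_sum) <= eps:
        return 'touching'              # interpenetration just begins or ends
    return 'interpenetrating'          # the two parts are in an interpenetrating state
```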
  • the collision detection results corresponding to each frame of animation data are solved to obtain at least one interspersed degree quantified value corresponding to each frame of animation data.
  • the quantitative value of the degree of interleaving is positively correlated with the degree of interpenetration.
  • one quantified value of the degree of interspersion can be obtained based on the collision detection result 703
  • another quantified value of the degree of interspersion can be obtained based on the collision detection result 704 .
  • At least one key frame data group is determined from the multi-frame animation data based on at least one interleaving degree quantization value respectively corresponding to each frame of animation data.
  • For the hand interpenetration phenomenon, all the interpenetration degree values corresponding to it are obtained and an interpenetration degree curve is generated; based on this curve, the start key frame data, peak key frame data and end key frame data corresponding to the hand interpenetration phenomenon are determined, thereby obtaining the key frame data group corresponding to the hand interpenetration phenomenon.
  • Filter and post-process at least one key frame data group respectively to obtain at least one adjusted key frame data group.
  • Repair at least one adjusted key frame data group respectively to obtain at least one repaired key frame data group; perform transition frame interpolation on at least one repaired key frame data group respectively to obtain at least one repaired data segment.
  • At least one repaired data segment is superimposed on the animation data to be repaired to obtain the repaired animation data.
  • In summary, the technical solution provided by the embodiments of the present application creates collision bodies for at least one object in the animation data to be repaired and then, based on the created collision bodies, performs collision detection on the multiple frames of animation data corresponding to the animation data to be repaired to obtain the key frame data groups, thereby realizing automatic acquisition of the interpenetration processes.
  • The technical solution provided by the embodiments of the present application can therefore improve the efficiency of acquiring the interpenetration processes, thereby improving the repair efficiency of the animation data.
  • FIG 10 shows a block diagram of an animation data repair device provided by an embodiment of the present application.
  • the device has the function of implementing the above method example, and the function can be implemented by hardware, or can be implemented by hardware executing corresponding software.
  • the device may be the computer equipment introduced above, or may be provided in the computer equipment.
  • the device 1000 includes: an animation data acquisition module 1001, a collision body creation module 1002, a key frame acquisition module 1003, a repair fragment acquisition module 1004, and an animation data repair module 1005.
  • the animation data acquisition module 1001 is used to acquire animation data to be repaired, where the animation data to be repaired includes multi-frame animation data obtained by redirecting motion capture data.
  • the collision body creation module 1002 is used to create a collision body for at least one object in the animation data to be repaired, and obtain a collection of collision bodies respectively corresponding to the at least one object, and the collection of collision bodies includes at least one collision body.
  • The key frame acquisition module 1003 is configured to perform collision detection on the objects in the multi-frame animation data based on the sets of collision bodies respectively corresponding to the at least one object, to obtain at least one key frame data group, each key frame data group corresponding to one interpenetration process.
  • the repair segment acquisition module 1004 is configured to obtain at least one repair data segment based on the at least one key frame data group, where the repair data segment refers to the data segment after the interleaving process is repaired.
  • the animation data repair module 1005 is configured to superimpose the at least one repair data segment and the animation data to be repaired according to the time periods corresponding to the at least one repair data segment to obtain repair animation data.
  • the key frame acquisition module 1003 is used to:
  • for the target frame animation data in the multi-frame animation data, perform collision detection on the objects in the target frame animation data to obtain at least one interpenetration degree value corresponding to the target frame animation data, where the interpenetration degree value is used to characterize the degree of interpenetration between object pairs in the at least one object;
  • the at least one key frame data group is determined from the multi-frame animation data based on at least one interleaving degree quantization value respectively corresponding to the multi-frame animation data.
  • the object pair includes a first object and a second object in which interleaving behavior exists.
  • the key frame acquisition module 1003 is also used to:
  • obtain at least one candidate frame data sequence corresponding to the object pair from the multi-frame animation data based on the interpenetration degree values corresponding to the object pair, where each candidate frame data sequence corresponds to one interpenetration behavior between the first object and the second object;
  • for a target candidate frame data sequence in the at least one candidate frame data sequence, perform key frame selection on the target candidate frame data sequence to obtain a key frame data group corresponding to the target candidate frame data sequence; and
  • based on the at least one candidate frame data sequence, determine at least one key frame data group corresponding to the object pair.
  • In some embodiments, the key frame data group includes start key frame data, peak key frame data and end key frame data, where the start key frame data refers to the animation data corresponding to the starting moment of the interpenetration behavior;
  • the peak key frame data refers to the animation data corresponding to the maximum extent of the interspersed behavior
  • the end key frame data refers to the animation data corresponding to the end moment of the interspersed behavior
  • In some embodiments, the key frame acquisition module 1003 is also used to:
  • determine the frame of animation data immediately preceding the first frame whose interpenetration degree value is non-zero for the target interpenetration behavior as the start key frame data corresponding to the target candidate frame data sequence;
  • determine the frame of animation data with the maximum interpenetration degree value for the target interpenetration behavior as the peak key frame data corresponding to the target candidate frame data sequence; and
  • determine the first frame after the peak key frame data whose interpenetration degree value is zero as the end key frame data corresponding to the target candidate frame data sequence.
  • the repair fragment acquisition module 1004 is used to:
  • Transition frame interpolation is performed on the at least one repaired key frame data group respectively to obtain the at least one repaired data segment.
  • the device 1000 further includes: a key frame filtering module 1006 and a key frame adjusting module 1007.
  • the key frame filtering module 1006 is configured to filter at least one key frame data group corresponding to the first object pair in the at least one object, and obtain at least one key frame data group corresponding to the first object pair. A filtered key frame data group; wherein the length of the time period corresponding to the filtered key frame data group is greater than the first duration threshold.
  • the key frame adjustment module 1007 is configured to perform post-processing on at least one filtered key frame data group corresponding to the first object pair, respectively, to obtain at least one adjusted key frame data group corresponding to the first object pair; Wherein, the action span of the adjusted key frame data group conforms to the skeletal data of the object pair.
  • the key frame adjustment module 1007 is also used to obtain the adjusted key frame data group corresponding to the at least one key frame data group.
  • the repair segment acquisition module 1004 is also configured to obtain the at least one repair data segment based on the adjusted key frame data group corresponding to the at least one key frame data group.
  • the key frame adjustment module 1007 is used for:
  • the collision body creation module 1002 is used for:
  • Colliders are constructed for each part corresponding to the first object to obtain a set of colliders corresponding to the first object.
  • the collision body corresponding to the target part moves along with the target part.
  • the device 1000 further includes: a penetration state determination module 1008.
  • the collision body is an enclosing ball, and the penetration state determination module 1008 is used to:
  • the distance between the centers of the two surrounding spheres is equal to the sum of the radii of the two surrounding spheres, then it is determined that the corresponding parts of the two surrounding spheres start to intersect or end the interleaving;
  • the distance between the centers of the two surrounding spheres is less than the sum of the radii of the two surrounding spheres, it is determined that the corresponding parts of the two surrounding spheres are in an intersecting state.
  • the animation data repair module 1005 is used to:
  • For the target repair data segment in the at least one repair data segment obtain the original data segment corresponding to the target time period from the animation data to be repaired according to the target time period corresponding to the target repair data segment;
  • the original data fragments and the target repair data fragments are superimposed to obtain the repair animation corresponding to the target period. data.
  • the animation data acquisition module 1001 is used to:
  • the skeletal motion capture data is redirected to obtain the animation data to be repaired.
  • the technical solution provided by the embodiments of the present application is to create a collision body for at least one object in the animation data to be repaired, and then based on the created collision body, perform collision detection on multiple frames of animation data corresponding to the animation data to be repaired. , the key frame data group is obtained, and the automatic acquisition of the interleaving process is realized.
  • the technical solution provided by the embodiment of the present application can improve the acquisition efficiency of the interleaving process, thereby improving the efficiency of the interleaving process. Repair efficiency of animation data.
  • Figure 12 shows a structural block diagram of a computer device provided by an embodiment of the present application.
  • the computer device can be used to implement the animation data repair method provided in the above embodiment. Specifically, it may include the following.
  • the computer device 1200 includes a central processing unit (such as CPU (Central Processing Unit, central processing unit), GPU (Graphics Processing Unit, graphics processor) and FPGA (Field Programmable Gate Array, field programmable logic gate array), etc.) 1201, A system memory 1204 including a RAM (Random-Access Memory) 1202 and a ROM (Read-Only Memory) 1203, and a system bus 1205 connecting the system memory 1204 and the central processing unit 1201.
  • the computer device 1200 also includes a basic input/output system (I/O system) 1206 that helps transfer information between various devices in the server, and a module 1215 that stores an operating system 1213, application programs 1214, and other programs.
  • the basic input/output system 1206 includes a display 1208 for displaying information and an input device 1209 such as a mouse and a keyboard for the user to input information.
  • the display 1208 and the input device 1209 are both connected to the central processing unit 1201 through the input and output controller 1210 connected to the system bus 1205.
  • the basic input/output system 1206 may also include an input/output controller 1210 for receiving and processing input from a plurality of other devices such as a keyboard, mouse, or electronic stylus.
  • input and output controller 1210 also provides output to a display screen, printer, or other type of output device.
  • the mass storage device 1207 is connected to the central processing unit 1201 through a mass storage controller (not shown) connected to the system bus 1205 .
  • the mass storage device 1207 and its associated computer-readable media provide non-volatile storage for the computer device 1200 . That is, the mass storage device 1207 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM (Compact Disc Read-Only Memory) drive.
  • the computer-readable media may include computer storage media and communication media.
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer programs, data structures, program modules or other data.
  • Computer storage media include RAM, ROM, EPROM (Erasable Programmable Read-Only Memory, Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory, Electrically Erasable Programmable Read-Only Memory), flash memory or Other solid-state storage technologies, CD-ROM, DVD (Digital Video Disc, high-density digital video disc) or other optical storage, tape cassettes, magnetic tapes, disk storage or other magnetic storage devices.
  • RAM random access memory
  • ROM read-only memory
  • EPROM Erasable Programmable Read-Only Memory
  • EEPROM Electrically Erasable Programmable Read-Only Memory
  • flash memory or Other solid-state storage technologies
  • the computer device 1200 can also be connected to a remote computer on the network through a network such as the Internet to run. That is, the computer device 1200 can be connected to the network 1212 through the network interface unit 1211 connected to the system bus 1205, or the network interface unit 1211 can also be used to connect to other types of networks or remote computer systems (not shown) .
  • the memory also includes a computer program, which is stored in the memory and configured to be executed by one or more processors to implement the above animation data repair method.
  • a computer-readable storage medium is also provided, and a computer program is stored in the storage medium. When executed by a processor, the computer program implements the above animation data repair method.
  • the computer-readable storage medium may include: ROM (Read-Only Memory), RAM (Random-Access Memory), SSD (Solid State Drives, solid state drive) or optical disk, etc.
  • random access memory can include ReRAM (Resistance Random Access Memory, resistive random access memory) and DRAM (Dynamic Random Access Memory, dynamic random access memory).
  • a computer program product including a computer program stored in a computer-readable storage medium.
  • a processor of a computer device from which the computer The computer program is read from a readable storage medium, and the processor executes the computer program, so that the computer device performs the above animation data repair method.
  • this application can display a prompt interface, pop-up window or output voice prompt information before collecting user-related data and during the process of collecting user-related data.
  • the prompt interface, pop-up window or voice prompt information is used In order to prompt the user that his/her relevant data is currently being collected, this application only starts to execute the relevant steps for obtaining the user's relevant data after obtaining the user's confirmation operation on the prompt interface or pop-up window, otherwise (that is, the user's confirmation operation is not obtained) When the prompt interface or pop-up window sends a confirmation operation), the relevant steps of obtaining user-related data are completed, that is, the user-related data is not obtained.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An animation data repair method and apparatus, a device, a storage medium and a program product, relating to the technical field of data processing. The method comprises: acquiring animation data to be repaired obtained by redirection of motion capture data (201); creating colliders for at least one object in the animation data to be repaired, so as to obtain a collider set respectively corresponding to the at least one object (202); respectively performing, on the basis of the collider set respectively corresponding to the at least one object, collision detection on objects in multiple frames of animation data, so as to obtain at least one key frame data group (203); obtaining, on the basis of the at least one key frame data group, at least one repaired data segment (204); and superimposing the at least one repaired data segment with the animation data to be repaired, so as to obtain repaired animation data (205). The present application realizes automatic acquisition of a key frame data group (i.e., an interpolation process), improves animation data repair efficiency.

Description

动画数据修复方法、装置、设备、存储介质及程序产品Animation data repair method, device, equipment, storage medium and program product
本申请要求于2022年04月27日提交的申请号为202210454407.6、发明名称为“动画数据修复方法、装置、设备、存储介质及程序产品”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。This application claims priority to the Chinese patent application with application number 202210454407.6 and the invention name "Animation Data Repair Method, Device, Equipment, Storage Media and Program Products" submitted on April 27, 2022, the entire content of which is incorporated by reference. in this application.
技术领域Technical field
本申请实施例涉及数据处理技术领域,特别涉及一种动画数据修复方法、装置、设备、存储介质及程序产品。The embodiments of the present application relate to the field of data processing technology, and in particular to an animation data repair method, device, equipment, storage medium and program product.
背景技术Background technique
目前,可以基于动作捕捉得到的动捕数据,进行动画的制作。然而,由于动捕对象的身高比例与动画角色的身高比例不一致,重定向得到的动画数据中,往往会存在穿插现象,例如,动画角色之间的肢体穿插、动画角色的肢体与动画中的物体之间的穿插等。Currently, animation can be produced based on the motion capture data obtained through motion capture. However, since the height ratio of the motion capture object is inconsistent with the height ratio of the animated character, there are often interspersed phenomena in the animation data obtained by redirection, for example, the limbs of animated characters are interspersed, and the limbs of animated characters and objects in the animation Interspersed between etc.
在相关技术中,动画师根据参考视频(如动捕数据对应的视频)或动作导演的需求,采用手工修复的方式逐帧检测,以发现存在穿插现象的动画数据片段,然后针对这些动画数据片段进行穿插修复,从而得到修复后动画数据。但此过程通常需要耗费大量的人力和时间,且往往需要等待数小时甚至数天才能得到几十秒到几分钟的动画数据,动画数据的修复效率不高。In related technologies, animators use manual repair methods to detect frame-by-frame based on reference videos (such as videos corresponding to motion capture data) or the needs of action directors to discover animation data segments with interspersed phenomena, and then target these animation data segments Perform interspersed repair to obtain the repaired animation data. However, this process usually requires a lot of manpower and time, and often requires waiting for hours or even days to obtain animation data ranging from tens of seconds to minutes. The repair efficiency of animation data is not high.
发明内容Contents of the invention
本申请实施例提供了一种动画数据修复方法、装置、设备、存储介质及程序产品,能够提高动画数据的修复效率,所述技术方案可以包括如下内容。Embodiments of the present application provide an animation data repair method, device, equipment, storage medium and program product, which can improve animation data repair efficiency. The technical solution may include the following content.
根据本申请实施例的一个方面,提供了一种动画数据修复方法,所述方法包括:According to one aspect of the embodiments of the present application, a method for repairing animation data is provided. The method includes:
获取待修复动画数据,所述待修复动画数据包括由动捕数据重定向得到的多帧动画数据;Obtain animation data to be repaired, where the animation data to be repaired includes multi-frame animation data obtained by redirecting motion capture data;
对所述待修复动画数据中的至少一个对象进行碰撞体创建,得到所述至少一个对象分别对应的碰撞体集合,所述碰撞体集合中包括至少一个碰撞体;Create a collision body for at least one object in the animation data to be repaired, and obtain a collection of collision bodies corresponding to the at least one object, and the collection of collision bodies includes at least one collision body;
基于所述至少一个对象分别对应的碰撞体集合,分别对所述多帧动画数据中的对象进行碰撞检测,得到至少一个关键帧数据组,每个所述关键帧数据组对应一个穿插过程;Based on the set of colliders respectively corresponding to the at least one object, collision detection is performed on the objects in the multi-frame animation data to obtain at least one key frame data group, each of the key frame data groups corresponding to an interleaving process;
基于所述至少一个关键帧数据组,得到至少一个修复数据片段,所述修复数据片段是指所述穿插过程被修复后的数据片段;Based on the at least one key frame data group, at least one repaired data segment is obtained, where the repaired data segment refers to the data segment that is repaired by the interleaving process;
按照所述至少一个修复数据片段分别对应的时间段,将所述至少一个修复数据片段与所述待修复动画数据进行叠加,得到修复动画数据。According to the time periods corresponding to the at least one repair data segment, the at least one repair data segment and the animation data to be repaired are superimposed to obtain repair animation data.
根据本申请实施例的一个方面,提供了一种动画数据修复装置,所述装置包括:According to one aspect of the embodiment of the present application, an animation data repair device is provided, and the device includes:
动画数据获取模块,用于获取待修复动画数据,所述待修复动画数据包括由动捕数据重定向得到的多帧动画数据;An animation data acquisition module, used to acquire animation data to be repaired, where the animation data to be repaired includes multi-frame animation data obtained by redirecting motion capture data;
碰撞体创建模块,用于对所述待修复动画数据中的至少一个对象进行碰撞体创建,得到所述至少一个对象分别对应的碰撞体集合,所述碰撞体集合中包括至少一个碰撞体;A collision body creation module, configured to create a collision body for at least one object in the animation data to be repaired, and obtain a collection of collision bodies corresponding to the at least one object, and the collection of collision bodies includes at least one collision body;
关键帧获取模块,用于基于所述至少一个对象分别对应的碰撞体集合,分别对所述多帧动画数据中的对象进行碰撞检测,得到至少一个关键帧数据组,每个所述关键帧数据组对应一个穿插过程;A key frame acquisition module, configured to perform collision detection on the objects in the multi-frame animation data based on the set of colliders respectively corresponding to the at least one object, and obtain at least one key frame data group, each of the key frame data The group corresponds to an interspersed process;
修复片段获取模块,用于基于所述至少一个关键帧数据组,得到至少一个修复数据片段,所述修复数据片段是指所述穿插过程被修复后的数据片段; A repair fragment acquisition module, configured to obtain at least one repair data fragment based on the at least one key frame data group, where the repair data fragment refers to the data fragment after the interleaving process is repaired;
动画数据修复模块,用于按照所述至少一个修复数据片段分别对应的时间段,将所述至少一个修复数据片段与所述待修复动画数据进行叠加,得到修复动画数据。An animation data repair module is configured to superimpose the at least one repair data segment and the animation data to be repaired according to the time periods corresponding to the at least one repair data segment to obtain repair animation data.
根据本申请实施例的一个方面,提供了一种计算机设备,所述计算机设备包括处理器和存储器,所述存储器中存储有计算机程序,所述计算机程序由所述处理器加载并执行以实现上述动画数据修复方法。According to an aspect of an embodiment of the present application, a computer device is provided. The computer device includes a processor and a memory. A computer program is stored in the memory. The computer program is loaded and executed by the processor to implement the above. Animation data repair method.
根据本申请实施例的一个方面,提供了一种计算机可读存储介质,所述可读存储介质中存储有计算机程序,所述计算机程序由处理器加载并执行以实现上述动画数据修复方法。According to one aspect of the embodiments of the present application, a computer-readable storage medium is provided. A computer program is stored in the readable storage medium, and the computer program is loaded and executed by a processor to implement the above animation data repair method.
根据本申请实施例的一个方面,提供了一种计算机程序产品,该计算机程序产品包括计算机程序,该计算机程序存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机程序,处理器执行该计算机程序,使得该计算机设备执行上述动画数据修复方法。According to one aspect of an embodiment of the present application, a computer program product is provided. The computer program product includes a computer program, and the computer program is stored in a computer-readable storage medium. The processor of the computer device reads the computer program from the computer-readable storage medium, and the processor executes the computer program, so that the computer device performs the above animation data repair method.
本申请实施例提供的技术方案可以包括如下有益效果:The technical solutions provided by the embodiments of this application may include the following beneficial effects:
通过对待修复动画数据中的至少一个对象进行碰撞体创建,再基于创建得到碰撞体,分别对待修复动画数据对应的多帧动画数据进行碰撞检测,得到关键帧数据组,实现了穿插过程的自动获取,相比于相关技术中通过手工方式逐帧检测来获取穿插过程,本申请实施例提供的技术方案可以提高穿插过程的获取效率,进而提高动画数据的修复效率。By creating a collision body for at least one object in the animation data to be repaired, and then obtaining the collision body based on the creation, collision detection is performed on the multi-frame animation data corresponding to the animation data to be repaired, and the key frame data group is obtained, realizing the automatic acquisition of the interleaving process. , Compared with the related art in which the interleaving process is obtained through manual frame-by-frame detection, the technical solution provided by the embodiment of the present application can improve the acquisition efficiency of the interleaving process, thereby improving the repair efficiency of animation data.
附图说明Description of drawings
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments will be briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application. For those of ordinary skill in the art, other drawings can also be obtained based on these drawings without exerting creative efforts.
图1是本申请一个实施例提供的方案实施环境的示意图;Figure 1 is a schematic diagram of a solution implementation environment provided by an embodiment of the present application;
图2是本申请一个实施例提供的动画数据修复方法的流程图;Figure 2 is a flow chart of an animation data repair method provided by an embodiment of the present application;
图3是本申请一个实施例提供的碰撞体创建方法的示意图;Figure 3 is a schematic diagram of a collision body creation method provided by an embodiment of the present application;
图4是本申请一个实施例提供的关键帧数据组的获取方法的流程图;Figure 4 is a flow chart of a method for obtaining a key frame data group provided by an embodiment of the present application;
图5是本申请一个实施例提供的动画角色与地面交互下的穿插修复的示意图;Figure 5 is a schematic diagram of interspersed repair under the interaction between animated characters and the ground provided by an embodiment of the present application;
图6是本申请一个实施例提供的动画角色之间交互下的穿插修复的示意图;Figure 6 is a schematic diagram of interspersed repair under interaction between animated characters provided by an embodiment of the present application;
图7是本申请一个实施例提供的动画角色之间交互下的碰撞体创建的示意图;Figure 7 is a schematic diagram of the creation of a collision body under the interaction between animated characters provided by an embodiment of the present application;
图8是本申请一个实施例提供的动画角色与道具之间交互下的碰撞体创建的示意图;Figure 8 is a schematic diagram of the creation of a collision body under the interaction between animated characters and props provided by an embodiment of the present application;
图9是本申请一个实施例提供的包围球下的碰撞检测的示意图;Figure 9 is a schematic diagram of collision detection under a surrounding ball provided by an embodiment of the present application;
图10是本申请一个实施例提供的动画数据修复装置的框图;Figure 10 is a block diagram of an animation data repair device provided by an embodiment of the present application;
图11是本申请另一个实施例提供的动画数据修复装置的框图;Figure 11 is a block diagram of an animation data repair device provided by another embodiment of the present application;
图12是本申请一个实施例提供的计算机设备的框图。Figure 12 is a block diagram of a computer device provided by an embodiment of the present application.
具体实施方式Detailed ways
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。In order to make the purpose, technical solutions and advantages of the present application clearer, the embodiments of the present application will be further described in detail below with reference to the accompanying drawings.
在对本申请实施例进行介绍说明之前,先对本申请涉及的一些名词进行定义说明。Before introducing the embodiments of this application, some terms involved in this application are first defined and explained.
1、动作捕捉(Motion Capture,Mocap):借助相机或者惯性设备,记录动捕对象(如人体、动物、以及其他运动物体)的运动数据的过程。1. Motion Capture (Mocap): The process of recording motion data of motion capture objects (such as human bodies, animals, and other moving objects) with the help of cameras or inertial devices.
2、重定向(Retargeting):计算机图形学术语,特指将通过将动作捕捉得到的动捕数据从动捕对象映射到动画角色(或机器人、游戏角色等)上的过程。2. Retargeting: A computer graphics term that specifically refers to the process of mapping motion capture data obtained through motion capture from motion capture objects to animated characters (or robots, game characters, etc.).
3、碰撞检测(Collision Detection):计算机图形学术语,特指计算机中判断两个物体是否相交的过程。3. Collision Detection: A computer graphics term, specifically referring to the process of determining whether two objects intersect in a computer.
4、多边形网格(Mesh):计算机图形学术语,特指由一系列多边形组成的几何模型(如 人体、动物、物体等几何模型),其包含网格点、网格边、网格面等几何信息。4. Polygon mesh (Mesh): A term in computer graphics, specifically referring to a geometric model composed of a series of polygons (such as Geometric models of human bodies, animals, objects, etc.), which contain geometric information such as grid points, grid edges, and grid surfaces.
5、骨骼(Skeleton):计算机图形学术语,若几何模型中的一个物体影响多边形网格的一系列网格点,且能够使得该一系列网格点跟随该物体运动或者变形,则该物体被称之为骨骼。一系列骨骼可以组成骨骼层级。5. Skeleton: Computer graphics term. If an object in the geometric model affects a series of grid points of the polygon mesh and can cause the series of grid points to follow the movement or deformation of the object, the object is Call it a skeleton. A sequence of bones can form a bone hierarchy.
6、包围球(Bounding Ball):计算机图形学术语,定义一个半径最小的球,使得某Mesh对应的所有网格点都在该球内部,则称该球为该Mesh的包围球。6. Bounding Ball: A computer graphics term that defines a ball with the smallest radius such that all grid points corresponding to a Mesh are inside the ball, then the ball is called the bounding ball of the Mesh.
7、FK(Forward Kinematic,正向动力学):根据骨骼层级,先计算父骨骼的旋转位置,然后计算次一级骨骼的位置和旋转,并逐级向下计算直到末端子骨骼计算完毕的过程。7. FK (Forward Kinematic): Based on the bone hierarchy, first calculate the rotation position of the parent bone, then calculate the position and rotation of the next-level bone, and calculate downward step by step until the end child bone is calculated. .
8、IK(Inverse Kinematic,反向动力学):根据子级骨骼的旋转和位置,反向求解父级骨骼的位置和旋转的过程。8. IK (Inverse Kinematic): The process of reversely solving the position and rotation of the parent bone based on the rotation and position of the child bone.
请参考图1,其示出了本申请一个实施例提供的方案实施环境的示意图。该方案实施环境可以实现成为动画数据修复系统的架构。该实施环境可以包括:终端10和服务器20。Please refer to Figure 1, which shows a schematic diagram of a solution implementation environment provided by an embodiment of the present application. The implementation environment of this solution can realize the architecture of animation data repair system. The implementation environment may include: a terminal 10 and a server 20 .
终端10可以是诸如手机、平板电脑、PC(Personal Computer,个人计算机)、可穿戴设备、智能机器人、相机、惯性设备等电子设备。终端10可用于获取动捕数据,并将动捕数据提供给服务器20,以通过服务器20对动捕数据进行转化、修复等处理,得到动画数据。在一些实施例中,终端10中可以安装运行目标应用程序的客户端,该目标应用程序的客户端可用于对动捕数据进行转化、修复等处理,得到动画数据。上述目标应用程序可以是诸如动画制作类应用程序、动画修复类应用程序、视频制作类应用程序、游戏制作类应用程序、娱乐交互类应用程序、模拟仿真类应用程序、模拟学习类应用程序等,本申请实施例对此不作限定。The terminal 10 may be an electronic device such as a mobile phone, a tablet computer, a PC (Personal Computer), a wearable device, an intelligent robot, a camera, an inertial device, and the like. The terminal 10 can be used to obtain motion capture data, and provide the motion capture data to the server 20, so that the motion capture data can be converted, repaired, and other processes through the server 20 to obtain animation data. In some embodiments, a client running a target application can be installed in the terminal 10. The client of the target application can be used to convert, repair, and other processing on the motion capture data to obtain animation data. The above target applications may be animation production applications, animation restoration applications, video production applications, game production applications, entertainment interactive applications, simulation applications, simulation learning applications, etc. The embodiments of the present application do not limit this.
服务器20可用于对动捕数据进行转化、修复等处理,得到动画数据。可选地,服务器20还可以用于为终端10中的目标应用程序的客户端提供后台服务。例如,服务器20可以是目标应用程序的客户端的后台服务器。服务器20可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或者分布式系统,还可以是提供云计算服务的云服务器。The server 20 can be used to convert, repair, and other processing on the motion capture data to obtain animation data. Optionally, the server 20 may also be used to provide background services for clients of target applications in the terminal 10 . For example, the server 20 may be a backend server for clients of the target application. The server 20 may be an independent physical server, a server cluster or a distributed system composed of multiple physical servers, or a cloud server that provides cloud computing services.
终端10和服务器20之间可以通过网络30进行通信。该网络可以是有线网络,也可以是无线网络。The terminal 10 and the server 20 can communicate through the network 30. The network can be a wired network or a wireless network.
本申请实施例提供的方法,各步骤的执行主体可以是计算机设备。计算机设备可以是任何具备数据的存储和处理能力的电子设备。例如,计算机设备可以是图1中的服务器20,可以是图1中的终端设备10,也可以是除终端设备10和服务器20以外的另一设备。In the method provided by the embodiments of the present application, the execution subject of each step may be a computer device. A computer device can be any electronic device capable of storing and processing data. For example, the computer device may be the server 20 in FIG. 1 , the terminal device 10 in FIG. 1 , or another device other than the terminal device 10 and the server 20 .
本申请实施例提供的技术方案适用于任何需要动画数据修复的场景中,诸如虚拟对象拟合场景(如虚拟人、虚拟动物等)、游戏制作场景(如游戏角色的动作设置)、视频制作场景、动画数据修复场景等。本申请实施例提供的技术方案能够提高动画数据的修复效率。The technical solutions provided by the embodiments of this application are applicable to any scene that requires animation data repair, such as virtual object fitting scenes (such as virtual people, virtual animals, etc.), game production scenes (such as action settings of game characters), and video production scenes. , animation data repair scenes, etc. The technical solution provided by the embodiments of the present application can improve the repair efficiency of animation data.
示例性地,参考图1,终端10将采集到的动捕数据发送给服务器20,服务器20基于动捕数据重定向得到待修复动画数据,服务器20对待修复动画数据中的至少一个对象进行碰撞体创建,再基于创建得到的碰撞体进行碰撞检测,自动获取关键帧数据(用于表征穿插过程),然后基于关键帧数据自动获取修复数据片段(即穿插过程被修复的数据片段),最后基于修复数据片段,自动对待修复动画数据进行修复,得到修复动画数据。For example, referring to Figure 1, the terminal 10 sends the collected motion capture data to the server 20. The server 20 redirects the motion capture data to obtain the animation data to be repaired. The server 20 performs a collision on at least one object in the animation data to be repaired. Create, then perform collision detection based on the created collision body, automatically obtain key frame data (used to characterize the interleaving process), then automatically obtain repair data fragments based on key frame data (i.e., data fragments that are repaired during the interleaving process), and finally based on repair The data fragment automatically repairs the animation data to be repaired and obtains the repaired animation data.
下文将对本申请实施例提供的动画数据修复方法进行详细介绍。The animation data repair method provided by the embodiment of the present application will be introduced in detail below.
请参考图2,其示出了本申请一个实施例提供的动画数据修复方法的流程图,该方法各步骤的执行主体可以是图1所示方案实施环境中的终端10或服务器20。在下文方法实施例中,为了便于描述,仅以各步骤的执行主体为“计算机设备”进行介绍说明,该方法可以包括如下几个步骤(201~205)中的至少一个步骤。Please refer to Figure 2, which shows a flow chart of an animation data repair method provided by an embodiment of the present application. The execution subject of each step of the method can be the terminal 10 or the server 20 in the solution implementation environment shown in Figure 1. In the following method embodiments, for convenience of description, only the execution subject of each step is a "computer device" for introduction and explanation. The method may include at least one of the following steps (201-205).
步骤201,获取待修复动画数据,该待修复动画数据包括由动捕数据重定向得到的多帧动画数据。Step 201: Obtain animation data to be repaired. The animation data to be repaired includes multi-frame animation data obtained by redirecting motion capture data.
其中,动捕数据是指对动捕对象的动作进行捕捉得到的数据。例如,可以通过动捕装置 (如相机、惯性设备等),对动捕对象进行动作数据(如站立、走、跑、交互等动作数据)捕捉,得到动捕数据。可选地,动捕数据的帧率可以根据实际使用需求进行设置,如25Hz、30Hz等。本申请实施例对于动捕装置的具体类型不作限定,对于动捕装置的动捕频率不作限定。Among them, the motion capture data refers to the data obtained by capturing the motion of the motion capture object. For example, through a motion capture device (such as cameras, inertial devices, etc.), capture motion data of motion capture objects (such as standing, walking, running, interaction, etc. motion data) to obtain motion capture data. Optionally, the frame rate of motion capture data can be set according to actual usage requirements, such as 25Hz, 30Hz, etc. The embodiments of the present application do not limit the specific type of the motion capture device, and do not limit the motion capture frequency of the motion capture device.
动捕对象:发生动作变化且被捕捉动作变化的对象。本申请实施例中动捕捉对象并不仅限于真实世界中的对象,还可以是虚拟世界中的对象,本申请实施例中对于对象所处的环境不作限定。可选地,动捕对象是真实世界中的对象时,动捕对象的具体类型包括但不限于真实人物、真实动物、真实机器人等可以产生动作活动的对象。可选地,动捕对象是虚拟世界中的对象时,动捕对象的具体类型包括但不限于虚拟人物、虚拟动物、虚拟动漫人员等可以产生动作活动的对象,本申请实施例中对于动捕对象的具体对象类型不作限定。当然,在以下实施例中,仅以动捕对象为动捕演员为例进行解释说明,其他类型的动捕对象参考动捕演员。Motion capture object: An object whose movements change and whose movements are captured. The motion capture objects in the embodiments of the present application are not limited to objects in the real world, but can also be objects in the virtual world. In the embodiments of the present application, the environment in which the objects are located is not limited. Optionally, when the motion capture object is an object in the real world, the specific type of the motion capture object includes but is not limited to real people, real animals, real robots and other objects that can produce motion activities. Optionally, when the motion capture object is an object in the virtual world, the specific type of the motion capture object includes but is not limited to virtual characters, virtual animals, virtual animation personnel and other objects that can generate motion activities. In the embodiment of the present application, for motion capture The specific object type of the object is not limited. Of course, in the following embodiments, only the motion capture object being a motion capture actor is used as an example for explanation. For other types of motion capture objects, refer to the motion capture actor.
重定向对象:与动捕对象具有相同动作的另一个对象。本申请实施例中重定向象并不仅限于真实世界中的对象,还可以是虚拟世界中的对象。以重定向对象是真实世界中真实机器人为例,基于上述动捕演员的动捕数据可以生成控制参数,从而控制真实机器人产生与动捕演员相同的动作。不过,本申请实施例主要以重定向对象是虚拟世界中的虚拟对象来进行解释说明,当然,本领域技术人员应当知悉的是,重定向对象并不仅限于虚拟世界中的虚拟对象。在本申请实施例中,动画数据是指将动捕数据中的动捕对象的动作,重定向到动画数据中的重定向对象上得到的数据,不同帧的动画数据对应于不同时刻下的动作。可选地,在第一时刻,动捕对象产生动作A,则根据动捕对象的动捕数据,确定出动画中第N帧中重定向对象的动画数据,这里的动画数据表示重定向对象在执行动作A。在一些实施例中,动画数据包括但不限于动画中每一帧中重定向对象的位置数据以及动作数据。其中,动捕对象可以包括人、动物、植物、物体等,重定向对象是指动画数据中的需要拟合动捕对象的动作的对象,诸如虚拟人、虚拟动物、动画角色、游戏角色、物体模型等,本申请实施例对此不作限定。可选地,上述动捕数据可以对应多个动捕对象,也即动画数据中包括与该多个动捕对象对应的重定向对象。Redirect object: Another object with the same actions as the mocap object. The redirected object in the embodiment of the present application is not limited to objects in the real world, but may also be objects in the virtual world. Taking the redirection object as a real robot in the real world as an example, control parameters can be generated based on the motion capture data of the above motion capture actor, thereby controlling the real robot to produce the same actions as the motion capture actor. However, the embodiments of the present application are mainly explained on the assumption that the redirection object is a virtual object in the virtual world. Of course, those skilled in the art should know that the redirection object is not limited to virtual objects in the virtual world. In the embodiment of this application, animation data refers to the data obtained by redirecting the actions of the motion capture object in the motion capture data to the redirection object in the animation data. The animation data of different frames correspond to the actions at different times. . Optionally, at the first moment, the motion capture object generates action A, then according to the motion capture data of the motion capture object, the animation data of the redirected object in the Nth frame of the animation is determined, where the animation data indicates that the redirected object is in Perform action A. In some embodiments, the animation data includes, but is not limited to, position data and motion data of the redirected object in each frame of the animation. Among them, motion capture objects can include people, animals, plants, objects, etc. Retargeting objects refer to objects in animation data that need to fit the actions of motion capture objects, such as virtual people, virtual animals, animated characters, game characters, objects Models, etc., are not limited in the embodiments of this application. Optionally, the above motion capture data may correspond to multiple motion capture objects, that is, the animation data includes redirection objects corresponding to the multiple motion capture objects.
示例性地,动画数据可以为动画角色之间的交互数据,动画数据也可以为动画角色与物体模型之间的交互数据,动画数据还可以为物体模型之间的交互数据,本申请实施例对此不作限定。这里的交互数据可以认为是发生肢体碰撞的数据。For example, animation data can be interaction data between animated characters, animation data can also be interaction data between animated characters and object models, and animation data can also be interaction data between object models. The embodiments of this application are useful for This is not a limitation. The interaction data here can be considered as the data of physical collisions.
可选地,待修复动画数据是指存在重定向对象的动作数据需要修复的动画数据。示例性地,由于动捕对象(如动捕演员)的骨骼数据(如身高比例)和重定向对象(如动画角色)的骨骼数据(如身高比例)之间存在差异,导致动画数据中存在穿插现象(如肢体与肢体穿插、肢体与物体模型穿插等),该存在穿插现象的动画数据即可被确定为待修复动画数据。Optionally, the animation data to be repaired refers to the animation data that needs to be repaired if there is action data of the redirected object. For example, due to differences between the skeletal data (such as height proportion) of the motion capture object (such as the motion capture actor) and the skeletal data (such as the height proportion) of the retargeted object (such as the animated character), there are interleavings in the animation data. phenomenon (such as limbs intersecting with limbs, limbs intersecting with object models, etc.), the animation data with the intersecting phenomenon can be determined as the animation data to be repaired.
待修复动画数据可以为骨骼动画数据,该骨骼动画数据是指以重定向对象的骨骼进行动作展示的动画数据。该骨骼动画数据可以包括重定向对象的骨骼在各帧下的位置信息、转向信息、移动信息等。The animation data to be repaired may be skeletal animation data. The skeletal animation data refers to animation data that uses the bones of the redirected object to display actions. The skeletal animation data may include position information, steering information, movement information, etc. of the bones of the redirected object in each frame.
在一个示例中,待修复动画数据的获取过程可以如下:获取动捕对象对应的动捕数据;对动捕数据进行转化,得到骨骼动捕数据,该骨骼动捕数据对应于动捕对象的骨骼数据;对骨骼动捕数据进行重定向,得到待修复动画数据。其中,骨骼动捕数据是指以动捕对象的骨骼进行动作展示的动画数据。骨骼数据可以包括骨骼尺寸、骨骼数量、骨骼层级关联信息等。In one example, the process of obtaining the animation data to be repaired can be as follows: obtain the motion capture data corresponding to the motion capture object; convert the motion capture data to obtain the skeletal motion capture data, which corresponds to the bones of the motion capture object Data; redirect the skeletal motion capture data to obtain the animation data to be repaired. Among them, skeletal motion capture data refers to animation data that uses the skeleton of a motion capture object to display actions. Bone data can include bone size, number of bones, bone level association information, etc.
示例性地,根据动捕对象的骨骼数据,将动捕对象对应的动捕数据转化为骨骼动捕数据,再基于重定向对象的骨骼数据和动捕对象的骨骼数据,将骨骼动捕数据重定向为待修复动画数据。可选地,可以利用正向动力学或者反向动力学,实现待修复动画数据的获取。For example, based on the skeletal data of the motion capture object, the motion capture data corresponding to the motion capture object is converted into skeletal motion capture data, and then based on the skeletal data of the redirected object and the skeletal data of the motion capture object, the skeletal motion capture data is reconstructed. Oriented to the animation data to be repaired. Optionally, forward dynamics or reverse dynamics can be used to obtain the animation data to be repaired.
本申请实施例提供的技术方案,基于骨骼动捕数据来确定修复动画数据,动捕对象之间的交互,直接表现在骨骼的交互上,此处交互可以理解为发生碰撞,针对相同的位置、相同的动作,不同骨骼大小的动捕对象支之间可能交互,也可能不发生交互。进一步,根据重定 向对象的骨骼数据和动捕对象的骨骼数据,将骨骼动捕数据重定向为待修复动画数据,根据骨骼数据来确定待修复动画数据,可以理解的是动捕对象之间发生交互,并不代表重定向对象之间发生交互,因此以各自对应的骨骼数据为媒介,确定需要修复的动画,可以提升获取到的待修复动画数据的准确性,同时提高了获取到的待修复动画数据的效率。The technical solution provided by the embodiment of this application determines the repair animation data based on the skeletal motion capture data. The interaction between the motion capture objects is directly reflected in the interaction of the bones. The interaction here can be understood as a collision. For the same position, For the same movement, motion capture object branches with different bone sizes may or may not interact with each other. Further, according to the redefinition Redirect the skeletal motion capture data to the skeletal data of the object and the skeletal data of the motion capture object into the animation data to be repaired, and determine the animation data to be repaired based on the skeletal data. It can be understood that the interaction between the motion capture objects does not Represents the interaction between redirected objects. Therefore, using the corresponding skeletal data as a medium to determine the animation that needs to be repaired can improve the accuracy of the obtained animation data to be repaired, and at the same time improve the efficiency of the obtained animation data to be repaired. .
步骤202,对待修复动画数据中的至少一个对象进行碰撞体创建,得到至少一个对象分别对应的碰撞体集合,碰撞体集合中包括至少一个碰撞体。Step 202: Create a collision body for at least one object in the animation data to be repaired, and obtain a collection of collision bodies corresponding to at least one object. The collection of collision bodies includes at least one collision body.
待修复动画数据中的至少一个对象即为上述重定向对象。碰撞体是用于重定向对象的碰撞检测的具有特定体积或者形状的三维对象,该碰撞体可以是指包围球、胶囊体、包围盒等,本申请实施例对碰撞体的体积或者形状不作限定。在本申请实施例中,由于重定向对象可以对应多个骨骼,为了保证碰撞检测的准确性,可以为各个骨骼分别创建一个碰撞体。At least one object in the animation data to be repaired is the above-mentioned redirection object. The collider is a three-dimensional object with a specific volume or shape used for collision detection of redirected objects. The collider may refer to a surrounding sphere, a capsule, a bounding box, etc. The embodiment of the present application does not limit the volume or shape of the collider. . In this embodiment of the present application, since the redirection object can correspond to multiple bones, in order to ensure the accuracy of collision detection, a collision body can be created for each bone.
在一个示例中,碰撞体的创建过程可以如下:对于至少一个对象中的第一对象,获取第一对象的多边形网格,该多边形网格用于表征对象的外形;对于第一对象对应的目标部位,获取目标部位对应的目标局部多边形网格,该目标局部多边形网格受目标部位影响;基于目标局部多边形网格对应的网格点,构建目标部位的碰撞体,目标部位的碰撞体包围目标局部多边形网格对应的网格点;对第一对象对应的各个部位进行碰撞体构建,得到第一对象对应的碰撞体集合。In one example, the creation process of the collision body can be as follows: for the first object in at least one object, obtain the polygon mesh of the first object, which is used to represent the shape of the object; for the target corresponding to the first object Part, obtain the target local polygon mesh corresponding to the target part. The target local polygon mesh is affected by the target part; based on the grid points corresponding to the target local polygon mesh, a collision body of the target part is constructed, and the collision body of the target part surrounds the target. Grid points corresponding to the local polygon mesh; construct collision bodies for each part corresponding to the first object to obtain a set of collision bodies corresponding to the first object.
其中,第一对象可以是指至少一个对象中的任一对象。目标部位可以是指第一对象的任一部位,该部位可以用骨骼进行指代。目标局部多边形网格可以为受目标骨骼所影响的局部多边形网格。可选地,可以将能够包围目标局部多边形网格对应的所有网格点的最小碰撞体,确定为目标部位的碰撞体。可选地,可以根据实际使用需求适应性设置目标部位。例如,可以只对重定向对象的部分部位进行碰撞体创建,也可以对重定向对象的所有部位进行碰撞体创建。在一些实施例中,目标部位是重定向对象中会发生交互的部位。可选地,重定向对象中会发生交互的至少一个部位确认为目标部位。Wherein, the first object may refer to any object among the at least one object. The target part may refer to any part of the first object, and the part may be represented by a bone. The target local polygon mesh can be a local polygon mesh affected by the target bone. Optionally, the smallest collision body that can surround all grid points corresponding to the target local polygon mesh can be determined as the collision body of the target part. Optionally, the target site can be adaptively set according to actual usage requirements. For example, you can create colliders for only part of the redirected object, or you can create colliders for all parts of the redirected object. In some embodiments, the target site is a site in the redirected object where the interaction will occur. Optionally, at least one part of the redirected object where interaction will occur is confirmed as the target part.
示例性地,参考图3,对于动画角色300,可以根据头部骨骼301对应的所有网格点,构建头部骨骼301对应的碰撞体302。同时,可以根据动画角色300的手部骨骼对应的所有网格点,构建手部骨骼对应的碰撞体,以及根据动画角色300的脚部骨骼对应的所有网格点,构建脚部骨骼对应的碰撞体。For example, referring to FIG. 3 , for the animated character 300 , a collision body 302 corresponding to the head skeleton 301 can be constructed based on all grid points corresponding to the head skeleton 301 . At the same time, a collision body corresponding to the hand bones of the animated character 300 can be constructed based on all grid points corresponding to the hand bones of the animated character 300, and a collision corresponding to the foot bones can be constructed based on all grid points corresponding to the foot bones of the animated character 300. body.
可选地,目标部位对应的碰撞体跟随目标部位移动。示例性地,可以将目标部位和目标部位对应的碰撞体设置为父子关系,目标部位为父,目标部位对应的碰撞体为子,则目标部位对应的碰撞体会跟随目标部位移动而移动,如此可以避免逐帧计算和创建碰撞体,从而大大减少了运算量和运算时间,有利于提高动画数据的修复效率。例如,参考图3,将头部骨骼301和碰撞体302设置为父子关系,则碰撞体302会跟随头部骨骼301移动。在一些实施例中,在构建目标部位的碰撞体时,将碰撞体的中心位置和重定向对象的骨骼数据绑定起来,实现重定向对象发生改变的同时,碰撞体跟着发生移动。Optionally, the collision body corresponding to the target part moves with the target part. For example, the target part and the collider corresponding to the target part can be set to a parent-child relationship, with the target part being the parent and the collider corresponding to the target part being the child. Then the collider corresponding to the target part will move following the movement of the target part, so that It avoids calculating and creating collision bodies frame by frame, thus greatly reducing the amount of calculation and calculation time, which is beneficial to improving the efficiency of animation data repair. For example, referring to FIG. 3 , if the head skeleton 301 and the collision body 302 are set to a parent-child relationship, the collision body 302 will move along with the head skeleton 301 . In some embodiments, when constructing the collider of the target part, the center position of the collider is bound to the bone data of the redirected object, so that when the redirected object changes, the collider moves accordingly.
步骤203,基于至少一个对象分别对应的碰撞体集合,分别对多帧动画数据中的对象进行碰撞检测,得到至少一个关键帧数据组,每个关键帧数据组对应一个穿插过程。Step 203: Based on the set of colliders corresponding to at least one object, collision detection is performed on the objects in the multi-frame animation data to obtain at least one key frame data group, and each key frame data group corresponds to an interleaving process.
每个关键帧数据组可以包括多帧关键帧数据,每帧关键帧数据对应一帧动画数据,该帧动画数据包括穿插过程中的关键穿插动作,一个完整的穿插过程可以用多个关键穿插动作表征。Each keyframe data group can include multiple frames of keyframe data. Each frame of keyframe data corresponds to one frame of animation data. This frame of animation data includes key interleaving actions in the interleaving process. A complete interleaving process can use multiple key interleaving actions. representation.
在一些实施例中,穿插动作可以认为是重定向对象和动画中的其他对象分别占据的空间发生了重叠。根据正常世界中的逻辑,不同对象之间一般不会产生叠加,例如,对象A在真实世界中不会和对象B存在一部分占据相同的空间,也即,对象A在真实世界中占据的空间和对象B在真实世界中占据的空间应当是彼此独立且不重合的。以此类推动画世界中,重定向对象和动画世界中的其他对象应当也是分别占据不同空间的,且重定向对象占据的空间和动画世界中的其他对象占据的空间应该是彼此独立的,除非这里的其他对象所占据的空间是 可以被重定向对象影响的,例如其他对象是沼泽,重定向对象可能掉进沼泽里,此时,重定向对象和其他对象在空间上发生重合,但此时不认为是穿插动作。本申请实施例中的穿插动作认为是不会发生体积改变的至少两个对象分别占据的空间产生了至少一部分的重合。当然,本申请实施例对于穿插行为的穿插幅度不作限定,也即对于重合幅度不作限定。In some embodiments, the interspersing action can be considered as the overlap of the spaces occupied by the redirected object and other objects in the animation. According to the logic in the normal world, there is generally no superposition between different objects. For example, object A does not occupy the same space as object B in the real world. That is, the space occupied by object A in the real world and The space occupied by object B in the real world should be independent of each other and non-overlapping. In this way, in the animated world, the redirected object and other objects in the animated world should also occupy different spaces, and the space occupied by the redirected object and the space occupied by other objects in the animated world should be independent of each other, unless here The space occupied by other objects is It can be affected by the redirected object. For example, if other objects are swamps, the redirected object may fall into the swamp. At this time, the redirected object and other objects overlap in space, but this is not considered a interleaving action. The interleaving action in the embodiment of the present application is considered to be that the spaces occupied by at least two objects that do not change in volume produce at least a partial overlap. Of course, the embodiment of the present application does not limit the interspersing width of the interspersing behavior, that is, it does not limit the overlapping width.
可选地,上述关键帧数据组包括起始关键帧数据、峰值关键帧数据和结束关键帧数据;其中,起始关键帧数据是指穿插行为的起始时刻对应的动画数据,峰值关键帧数据是指穿插行为达到最大程度对应的动画数据,结束关键帧数据是指穿插行为的结束时刻对应的动画数据。一个穿插行为对应一个穿插过程。例如,动画角色A的肢体和动画角色B的肢体之间存在穿插,则从肢体开始穿插到肢体结束穿插为一个穿插过程。Optionally, the above key frame data group includes start key frame data, peak key frame data and end key frame data; where the start key frame data refers to the animation data corresponding to the starting moment of the interspersed behavior, and the peak key frame data It refers to the animation data corresponding to the maximum extent of the interspersed behavior, and the end key frame data refers to the animation data corresponding to the end moment of the interspersed behavior. One interspersed behavior corresponds to one interspersed process. For example, if there is an interpenetration between the limbs of animated character A and the limbs of animated character B, then the interpenetration process from the beginning of the limb to the end of the limb is an interpenetration process.
在一个示例中,参考图4,步骤203还可以包括如下几个子步骤:In an example, referring to Figure 4, step 203 may also include the following sub-steps:
步骤203a,对于多帧动画数据中的目标帧动画数据,对目标帧动画数据中的对象进行碰撞检测,得到目标帧动画数据对应的至少一个穿插程度量化值,该穿插程度量化值用于表征至少一个对象中的对象对之间的穿插程度。Step 203a: For the target frame animation data in the multi-frame animation data, perform collision detection on the objects in the target frame animation data, and obtain at least one interleaving degree quantified value corresponding to the target frame animation data. The interleaving degree quantified value is used to represent at least The degree of interpenetration between pairs of objects within an object.
其中,目标帧动画数据可以是指多帧动画数据中的任一帧动画数据。穿插程度量化值与对象对之间的穿插程度呈正相关。示例性地,设动画角色A的肢体和动画角色B的肢体之间存在穿插,在检测到两个肢体刚开始接触的时候,可以将穿插程度量化值设置为0,检测到两个肢体的穿插程度的逐渐增加(如相对于两个肢体的交叉点,肢体末端到该交叉点占整个肢体的部分越来越大),则穿插程度量化值逐渐增大。在检测到两个肢体穿插到最大程度的时候,可以将穿插程度量化值设置为1。然后检测到两个肢体逐渐脱离,则穿插程度量化值逐渐减小。在检测到两个肢体脱离(即结束穿插)的时候,可以将穿插程度量化值设置为0。可选地,若检测到两个肢体之间不接触,则可以直接将穿插程度量化值设置为0。The target frame animation data may refer to any frame of animation data among multiple frames of animation data. The quantitative value of the degree of interleaving is positively correlated with the degree of interleaving between pairs of objects. For example, assuming that there is an intersection between the limbs of animated character A and the limbs of animated character B, when it is detected that the two limbs first come into contact, the quantified value of the degree of intersection can be set to 0, and the intersection of the two limbs is detected. As the degree gradually increases (for example, relative to the intersection point of two limbs, the end of the limb to the intersection point accounts for an increasingly larger portion of the entire limb), the quantified value of the degree of interpenetration gradually increases. When it is detected that two limbs are intersecting to the maximum extent, the quantified value of the interpenetration degree can be set to 1. Then it is detected that the two limbs are gradually separated, and the quantified value of the degree of interpenetration gradually decreases. When it is detected that the two limbs are separated (that is, the penetration is completed), the quantified value of the penetration degree can be set to 0. Optionally, if it is detected that there is no contact between the two limbs, the quantified value of the degree of penetration can be directly set to 0.
在一些实施例中,根据动画角色A的肢体所占据的空间和动画角色B的肢体之间所占据的空间的重合空间,确定穿插程度量化值,可选地,穿插程度量化值和重合空间的空间大小成正比。在一些实施例中,重合空间的表面上分布有至少两个点,将重合空间的表面上两个点的距离的最大值,作为穿插程度量化值。In some embodiments, the quantified value of the degree of interspersion is determined based on the overlapping space between the space occupied by the limbs of the animated character A and the space occupied by the limbs of the animated character B. Optionally, the quantified value of the interleaved degree and the overlapping space are determined. proportional to the size of the space. In some embodiments, at least two points are distributed on the surface of the overlapping space, and the maximum value of the distance between the two points on the surface of the overlapping space is used as the quantified value of the interleaving degree.
每个对象对都可以得到至少一个以时间为单位的穿插程度量化值序列,进而可以生成至少一个穿插程度量化值曲线,穿插程度量化值曲线中的每个波峰段可以对应一个穿插过程。可选地,一个对象对在同一时段内,可以对应多个穿插过程,则可以以动作为单位,获取各个动作分别对应的穿插程度量化值序列,从而确定某一动作是否为穿插行为,以及某一动作对应的关键帧数据组。Each object pair can obtain at least one interleaving degree quantified value sequence in units of time, and then can generate at least one interleaving degree quantified value curve. Each peak segment in the interleaving degree quantified value curve can correspond to an interleaving process. Optionally, an object pair can correspond to multiple interleaving processes in the same period, and the sequence of quantified interleaving degree values corresponding to each action can be obtained in units of actions, thereby determining whether a certain action is an interleaving behavior and whether a certain action is an interleaving behavior. Keyframe data group corresponding to an action.
由于目标帧动画数据可以对应有多个动作,或者多个对象对,因此,目标帧动画数据可以对应多个穿插程度量化值。Since the target frame animation data can correspond to multiple actions or multiple object pairs, the target frame animation data can correspond to multiple quantized values of the degree of interspersion.
步骤203b,基于多帧动画数据分别对应的至少一个穿插程度量化值,从多帧动画数据中确定至少一个关键帧数据组。Step 203b: Determine at least one key frame data group from the multi-frame animation data based on at least one interleaving degree quantization value corresponding to the multi-frame animation data.
可选地,对于存在穿插行为的对象对,对象对包括存在穿插行为的第一对象和第二对象,基于对象对对应的穿插程度量化值,从多帧动画数据中获取对象对对应的至少一个候选帧数据序列;其中,每个候选帧数据序列对应于第一对象和第二对象之间的一个穿插行为;对于至少一个候选帧数据序列中的目标候选帧数据序列,对目标候选帧数据序列进行关键帧选取,得到目标候选帧数据序列对应的关键帧数据组;基于至少一个候选帧数据序列,确定对象对对应的至少一个关键帧数据组。Optionally, for an object pair with an interleaving behavior, the object pair includes a first object and a second object with an interleaving behavior, and based on the quantified value of the interleaving degree corresponding to the object pair, obtain at least one corresponding object pair from the multi-frame animation data. Candidate frame data sequence; wherein each candidate frame data sequence corresponds to an interleaving behavior between the first object and the second object; for the target candidate frame data sequence in at least one candidate frame data sequence, for the target candidate frame data sequence Key frames are selected to obtain a key frame data group corresponding to the target candidate frame data sequence; based on at least one candidate frame data sequence, at least one key frame data group corresponding to the object pair is determined.
第二对象可以是指至少一个对象中除第一对象之外的任一对象。候选帧数据序列可以包括多帧候选帧数据,每帧候选帧数据对应于一个穿插行为在某一时刻下的穿插动作。可选地,候选帧数据序列可以对应于某一对象对在不同时段内的穿插行为,候选帧数据序列也可以对应于某一对象对在同一时段内的不同穿插行为,本申请实施例对此不作限定。The second object may refer to any object in the at least one object except the first object. The candidate frame data sequence may include multiple frames of candidate frame data, and each frame of candidate frame data corresponds to an interleaving action of an interleaving behavior at a certain moment. Optionally, the candidate frame data sequence may correspond to the interleaving behaviors of a certain object pair in different time periods, and the candidate frame data sequence may also correspond to the different interspersing behaviors of a certain object pair in the same time period. In this regard, the embodiment of the present application Not limited.
示例性地,可以根据第一对象和第二对象之间的至少一个穿插程度量化值序列,获取第 一对象和第二对象对应的至少一个候选帧数据序列。例如,可以将穿插程度量化值曲线中的波峰段对应的多帧动画数据,确定为候选帧数据序列。Exemplarily, the first object may be obtained based on at least one interleaved degree quantified value sequence between the first object and the second object. At least one candidate frame data sequence corresponding to an object and a second object. For example, the multi-frame animation data corresponding to the peak segment in the interspersed degree quantized value curve can be determined as the candidate frame data sequence.
The target candidate frame data sequence may refer to any candidate frame data sequence among the at least one candidate frame data sequence. In one example, taking a key frame data group that includes starting key frame data, peak key frame data and end key frame data as an example, the method of selecting the key frames corresponding to the target candidate frame data sequence is introduced, and may specifically be as follows: for the target interpenetration behavior corresponding to the target candidate frame data sequence, the frame of animation data immediately preceding the first frame whose interpenetration degree quantization value for the target interpenetration behavior is not zero is determined as the starting key frame data corresponding to the target candidate frame data sequence; the frame of animation data with the maximum interpenetration degree quantization value for the target interpenetration behavior is determined as the peak key frame data corresponding to the target candidate frame data sequence; and the first frame of animation data after the peak key frame data whose interpenetration degree quantization value is zero is determined as the end key frame data corresponding to the target candidate frame data sequence. Considering that the animation data may sometimes contain deviations, when confirming the starting key frame data or the end key frame data, it is possible to consider not only the frame preceding the first frame whose interpenetration degree quantization value is not zero and the first zero-valued frame after the peak key frame data, but also the S frames of animation data before the first non-zero frame and the K frames of animation data after the first zero-valued frame that follows the peak key frame data. When the interpenetration degree quantization values of these S frames and K frames are all zero, the frame preceding the first non-zero frame is determined as the starting key frame data corresponding to the target candidate frame data sequence, and the first zero-valued frame after the peak key frame data is determined as the end key frame data corresponding to the target candidate frame data sequence.
When any of the S frames or the K frames has a non-zero interpenetration degree quantization value, the starting key frame data, the peak key frame data and the end key frame data are determined again according to the specific frame numbers, within the S frames and the K frames, of the animation data whose interpenetration degree quantization value is not zero. Here, K and S are positive integers.
Optionally, the embodiments of the present application do not limit the order in which the starting key frame data, the peak key frame data and the end key frame data are determined.
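The following is a minimal sketch, not part of the original disclosure, of the start/peak/end key frame selection described above, assuming the quantized interpenetration degree of one interpenetration behavior is available as a plain per-frame list of floats and that frame indices stand in for the key frame data themselves; the optional S-frame/K-frame deviation check is omitted for brevity.

```python
from typing import List, Optional, Tuple

def select_key_frames(values: List[float]) -> Optional[Tuple[int, int, int]]:
    """Return (start, peak, end) frame indices, or None if no interpenetration occurs."""
    first_nonzero = next((i for i, v in enumerate(values) if v > 0.0), None)
    if first_nonzero is None:
        return None
    # Starting key frame: the frame immediately before the first non-zero value
    # (clamped to 0 if the sequence already starts interpenetrated).
    start = max(first_nonzero - 1, 0)
    # Peak key frame: the frame with the maximum quantized interpenetration degree.
    peak = max(range(first_nonzero, len(values)), key=lambda i: values[i])
    # End key frame: the first frame after the peak whose value returns to zero
    # (fall back to the last frame if the behavior never ends within this clip).
    end = next((i for i in range(peak + 1, len(values)) if values[i] == 0.0),
               len(values) - 1)
    return start, peak, end
```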
Step 204: obtain at least one repaired data segment based on the at least one key frame data group, where a repaired data segment refers to a data segment in which the interpenetration process has been repaired.
A repaired data segment may refer to a data segment in which no interpenetration exists between the redirected objects corresponding to the data segment. The repaired data segment may correspond to a complete action, that is, in addition to the key frame data corresponding to the action, it also includes transition frame data between the key frame data, and the transition frame data is used to achieve a smooth transition of the action.
Optionally, the at least one key frame data group may be repaired respectively to obtain at least one repaired key frame data group, where the interpenetration behavior in a repaired key frame data group has been repaired; transition frame interpolation is then performed on each of the at least one repaired key frame data group to obtain the at least one repaired data segment.
For each frame of animation data in a key frame data group, redirection is performed again on the at least one object, so that no interpenetration behavior exists between the at least one object. For example, based on the skeletal data respectively corresponding to the at least one object, and guided by the action corresponding to the key frame data group, the action of each of the at least one object is recalculated and corrected, so that, while no interpenetration occurs, the action of the at least one object stays as close as possible to the action corresponding to the key frame data group, thereby obtaining the repaired key frame data group.
A repaired data segment with a complete action can be obtained by performing transition frame interpolation between the frames of animation data in a repaired key frame data group. The transition frames can be constructed according to a set action interpolation algorithm.
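As a minimal sketch, not part of the original disclosure, the transition frame interpolation between two repaired key frames could look like the following, assuming a frame is a dictionary mapping bone names to a tuple of pose parameters (for example local rotation plus root translation) and that the "set action interpolation algorithm" is plain linear interpolation; a production system would more likely use quaternion slerp or spline curves per channel.

```python
from typing import Dict, List, Tuple

Pose = Dict[str, Tuple[float, ...]]

def interpolate_transition(start: Pose, end: Pose, num_transition: int) -> List[Pose]:
    """Return num_transition in-between poses, excluding the two key frames themselves."""
    frames: List[Pose] = []
    for step in range(1, num_transition + 1):
        t = step / (num_transition + 1)          # normalized position between the key frames
        pose: Pose = {}
        for bone, p0 in start.items():
            p1 = end[bone]
            pose[bone] = tuple((1.0 - t) * a + t * b for a, b in zip(p0, p1))
        frames.append(pose)
    return frames
```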
When the animation curve composed of the key frame data groups jumps at a very high frequency (that is, switches back and forth frequently between interpenetrating and not interpenetrating), the animation curve of the repaired animation data also jumps at a high frequency and is very unstable. Therefore, the embodiments of the present application may introduce a filter to remove the high-frequency signal, so that repaired animation data with a stable animation curve can be obtained. In addition, the process of completing an action is related to the skeletal data to some extent. For example, over the same distance, an object with a large stride can finish with fewer strides, while an object with a small stride needs more strides. For this reason, the time period corresponding to the key frame data can be further adjusted according to the difference between the skeletal data of the redirected object and the skeletal data of the motion capture object.
In one example, the above optimization process may include the following: for a first object pair in the at least one object, the at least one key frame data group corresponding to the first object pair is filtered respectively to obtain at least one filtered key frame data group corresponding to the first object pair, where the length of the time period corresponding to a filtered key frame data group is greater than a first duration threshold; post-processing is performed respectively on the at least one filtered key frame data group corresponding to the first object pair to obtain at least one adjusted key frame data group corresponding to the first object pair, where the action span of an adjusted key frame data group conforms to the skeletal data of the object pair; and the adjusted key frame data groups respectively corresponding to the at least one key frame data group are obtained.
The first duration threshold is used for high-frequency filtering, to filter out interpenetration behaviors that are too short (that is, those that do not affect the animation effect) and data noise (such as jitter). For example, the first duration threshold can be set according to actual usage requirements, such as 0.05 s, 0.07 s or 0.1 s, which is not limited in the embodiments of the present application.
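A minimal sketch of this duration-threshold filtering, not part of the original disclosure, assuming a key frame data group is represented only by the start and end timestamps (in seconds) of the interpenetration it covers:

```python
from typing import List, Tuple

KeyFrameGroup = Tuple[float, float]   # (start_time, end_time)

def filter_short_groups(groups: List[KeyFrameGroup],
                        min_duration: float = 0.1) -> List[KeyFrameGroup]:
    """Keep only the groups whose duration exceeds the first duration threshold."""
    return [(s, e) for (s, e) in groups if (e - s) > min_duration]
```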
Optionally, the above post-processing process may be as follows: a first adjustment parameter is determined based on the skeletal data of the first object pair and the skeletal data of the motion capture objects corresponding to the first object pair; when the first adjustment parameter is greater than a first threshold, the time periods respectively corresponding to the at least one filtered key frame data group are extended backward to obtain the at least one adjusted key frame data group; when the first adjustment parameter is less than the first threshold, the time periods respectively corresponding to the at least one filtered key frame data group are compressed forward to obtain the at least one adjusted key frame data group. In some embodiments, when the first adjustment parameter is equal to the first threshold, the time periods respectively corresponding to the filtered key frame data groups are neither extended backward nor compressed forward.
The first adjustment parameter is used to adjust the timestamps of the key frame data, so that the action process corresponding to the key frame data is smoother and more reasonable. Optionally, the difference between the skeletal data of the first object pair and the skeletal data of the motion capture objects corresponding to the first object pair may be determined as the first adjustment parameter, or the ratio between the skeletal data of the first object pair and the skeletal data of the motion capture objects corresponding to the first object pair may be determined as the first adjustment parameter, which is not limited in the embodiments of the present application.
Exemplarily, based on the skeletal data of the first object pair and the skeletal data of the motion capture objects corresponding to the first object pair, the heights of the two redirected objects in the first object pair and the heights of the two motion capture objects can be determined, and the first adjustment parameter is then determined based on the ratio between the heights of the two redirected objects (for example, their average) and the heights of the two motion capture objects (for example, their average).
Optionally, when multiple object pairs exist in one piece of key frame data, whether the key frame data is compressed forward or extended backward can be determined based on a weighted sum of the first adjustment parameters respectively corresponding to the at least one object pair. The specific offset corresponding to the key frame data (that is, the degree of compression or extension) is related to the height of the redirected object: the taller the redirected object, the larger the offset.
The first threshold corresponds to the first adjustment parameter, and the first threshold can be set and adjusted based on empirical values.
For example, when the height ratio is determined as the first adjustment parameter, the first threshold may be set to 1, where 1 indicates that the overall height of the first object pair is the same as the overall height of the motion capture objects corresponding to the first object pair, and in this case the time period of the key frame data need not be adjusted. When the first adjustment parameter is greater than 1, the overall height of the first object pair is greater than that of the corresponding motion capture objects, and the time period of the key frame data can be extended backward (for example, the above peak key frame data and end key frame data are delayed). When the first adjustment parameter is less than 1, the overall height of the first object pair is smaller than that of the corresponding motion capture objects, and the time period of the key frame data can be compressed forward (for example, the above peak key frame data and end key frame data are moved earlier).
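The following sketch, not part of the original disclosure, illustrates this timestamp adjustment under several assumptions: the first adjustment parameter is taken as the ratio of average redirected height to average motion capture height, the first threshold is 1.0, and the offset applied to the peak and end timestamps grows linearly with the redirected height; the scale factor of 0.1 s per meter of height is purely illustrative.

```python
from typing import Tuple

def adjust_key_frame_times(start_t: float, peak_t: float, end_t: float,
                           redirected_heights: Tuple[float, float],
                           mocap_heights: Tuple[float, float],
                           offset_per_meter: float = 0.1) -> Tuple[float, float, float]:
    ratio = (sum(redirected_heights) / 2.0) / (sum(mocap_heights) / 2.0)   # first adjustment parameter
    offset = offset_per_meter * (sum(redirected_heights) / 2.0)            # taller objects get a larger shift
    if ratio > 1.0:   # redirected objects taller: extend backward (delay peak and end)
        return start_t, peak_t + offset, end_t + offset
    if ratio < 1.0:   # redirected objects shorter: compress forward (move peak and end earlier)
        return start_t, max(start_t, peak_t - offset), max(start_t, end_t - offset)
    return start_t, peak_t, end_t   # equal heights: leave the time period unchanged
```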
By using the above method, the at least one repaired data segment can be obtained based on the adjusted key frame data groups respectively corresponding to the at least one key frame data group.
Step 205: superimpose the at least one repaired data segment onto the animation data to be repaired according to the time periods respectively corresponding to the at least one repaired data segment, to obtain the repaired animation data.
Optionally, for a target repaired data segment in the at least one repaired data segment, the original data segment corresponding to a target time period may be obtained from the animation data to be repaired according to the target time period corresponding to the target repaired data segment; based on the skeletal action parameters corresponding to the original data segment and the redirection repair parameters corresponding to the target repaired data segment, the original data segment and the target repaired data segment are superimposed to obtain the repaired animation data corresponding to the target time period.
The target repaired data segment may be any repaired data segment among the at least one repaired data segment. The skeletal action parameters are used to indicate the action of the redirected object, such as the rotation angle, movement distance and rotation direction of each bone. The redirection repair parameters are used to indicate the state of the redirected object, such as the position and posture of the redirected object.
The skeletal action parameters can be used to make the action of the redirected object fit the action of the motion capture object as closely as possible, and the redirection repair parameters can be used to repair the state of the redirected object so that no interpenetration exists between the redirected objects.
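A minimal sketch of the superimposition step, not part of the original disclosure, under the following assumptions: the animation is a list of per-frame poses indexed by frame number, the repaired segment carries its own start frame, and "superimposing" is simplified to writing the repaired frames over the original ones within the target time period; a production pipeline would additionally blend the skeletal action parameters and the redirection repair parameters per channel.

```python
from typing import Dict, List, Tuple

Frame = Dict[str, Tuple[float, ...]]           # bone name -> pose parameters

def superimpose_segment(animation: List[Frame],
                        repaired_segment: List[Frame],
                        start_frame: int) -> List[Frame]:
    result = list(animation)                   # frames outside the segment are kept as-is
    for offset, frame in enumerate(repaired_segment):
        index = start_frame + offset
        if 0 <= index < len(result):
            result[index] = frame
    return result
```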
Optionally, the remaining original animation data in the animation data to be repaired (that is, the data other than the original data segments corresponding to the repaired data segments) is retained in the embodiments of the present application and is not processed.
Exemplarily, referring to Figure 5, during the interaction between an animated character and the animation environment (such as objects, the ground and walls in the animation environment), interpenetration exists between the hand of the animated character and the ground. Animation data 501 is a certain frame of the animation data to be repaired, and in animation data 501 the hand of the animated character interpenetrates the ground. After the animation data to be repaired is repaired, repaired animation data 502 corresponding to animation data 501 is obtained; in repaired animation data 502, the interpenetration between the hand of the animated character and the ground has been repaired, that is, the hand of the animated character no longer interpenetrates the ground.
For another example, referring to Figure 6, during the interaction between animated characters, interpenetration exists between the hand of animated character 603 and animated character 604. Animation data 601 is a certain frame of the animation data to be repaired, and in animation data 601 the hand of animated character 603 interpenetrates animated character 604. After the animation data to be repaired is repaired, repaired animation data 602 corresponding to animation data 601 is obtained; in repaired animation data 602, the interpenetration between the hand of animated character 603 and animated character 604 has been repaired, that is, the hand of animated character 603 no longer interpenetrates animated character 604.
In summary, in the technical solution provided by the embodiments of the present application, collision bodies are created for at least one object in the animation data to be repaired, and collision detection is then performed, based on the created collision bodies, on the multiple frames of animation data corresponding to the animation data to be repaired, to obtain the key frame data groups, so that the interpenetration processes are obtained automatically. Compared with the related art, in which the interpenetration processes are found by manual frame-by-frame inspection, the technical solution provided by the embodiments of the present application can improve the efficiency of obtaining the interpenetration processes and thereby improve the efficiency of repairing the animation data.
In addition, by filtering and post-processing the key frame data groups, the quality of the key frame data groups can be improved, so that the repair quality of the animation data can be improved. At the same time, filtering and post-processing the key frame data groups can also make the distribution of the key frame data groups more reasonable, thereby making the repair of the animation data more reasonable.
In addition, by solving and quantizing the collision detection results, the degree of an interpenetration behavior at its different stages can be obtained quickly and conveniently, which helps improve the efficiency of extracting the key frame data and thus further improves the efficiency of repairing the animation data.
In an exemplary embodiment, the animation data repair method provided by the embodiments of the present application is described by taking the case where the collision body is a bounding sphere as an example, and the specific content may be as follows.
Obtain the animation data to be repaired. In this embodiment of the present application, interpenetration exists in the animation data to be repaired. For example, referring to Figure 6, animation data 601 is a certain frame of the animation data to be repaired, and in animation data 601 interpenetration exists between the hands of the animated characters.
Bounding spheres are created for at least one object in the animation data to be repaired, to obtain a bounding sphere set respectively corresponding to the at least one object. Exemplarily, referring to Figure 7, during the interaction between animated character 701 and animated character 702, the polygon mesh corresponding to animated character 701 is divided based on the bone distribution of animated character 701 to obtain the local polygon mesh respectively corresponding to each bone of animated character 701; a bounding sphere is then constructed for each bone based on its local polygon mesh, yielding the bounding sphere set corresponding to animated character 701. A bounding sphere is the sphere with the smallest diameter that can enclose the local polygon mesh. Using the same method, the bounding sphere set corresponding to animated character 702 is obtained based on the bone distribution of animated character 702. The bounding sphere and the bone can be set in a parent-child relationship, so that the bounding sphere moves along with the bone.
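The following is a minimal sketch, not part of the original disclosure, of constructing a bounding sphere for the mesh points influenced by one bone, assuming the mesh points are 3D tuples and approximating the sphere by the centroid plus the maximum distance to it; the text above calls for the smallest enclosing sphere, which would require an exact algorithm such as Welzl's algorithm.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float, float]

def bounding_sphere(points: List[Point]) -> Tuple[Point, float]:
    """Return (center, radius) of an approximate bounding sphere of the given points."""
    n = len(points)
    center = (sum(p[0] for p in points) / n,
              sum(p[1] for p in points) / n,
              sum(p[2] for p in points) / n)
    radius = max(math.dist(center, p) for p in points)
    return center, radius
```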
For another example, referring to Figure 8, during the interaction between animated character 801 and prop 802, since only the hand of animated character 801 is in contact with prop 802, bounding spheres may be created only for the hand of animated character 801.
Based on the bounding sphere sets respectively corresponding to the at least one object, collision detection is performed on each frame of animation data corresponding to the animation data to be repaired, to obtain the collision detection result respectively corresponding to each frame of animation data.
Exemplarily, if the distance between the centers of two bounding spheres is greater than the sum of the radii of the two bounding spheres, it is determined that the parts corresponding to the two bounding spheres do not interpenetrate; or, if the distance between the centers of the two bounding spheres is equal to the sum of their radii, it is determined that the parts corresponding to the two bounding spheres begin to interpenetrate or stop interpenetrating; or, if the distance between the centers of the two bounding spheres is less than the sum of their radii, it is determined that the parts corresponding to the two bounding spheres are in an interpenetrating state.
For example, referring to Figure 9, if the distance between the center of bounding sphere 901 and the center of bounding sphere 902 (hereinafter referred to as the center distance) is greater than the sum of the radii of bounding sphere 901 and bounding sphere 902, it is determined that the part corresponding to bounding sphere 901 (such as a hand) and the part corresponding to bounding sphere 902 (such as a hand) do not interpenetrate. During the movement of bounding sphere 901 and bounding sphere 902, if the center distance between them becomes equal to the sum of their radii, it is determined that the part corresponding to bounding sphere 901 (such as a hand) and the part corresponding to bounding sphere 902 (such as a hand) begin to interpenetrate. If the center distance between bounding sphere 901 and bounding sphere 902 becomes less than the sum of their radii, it is determined that the part corresponding to bounding sphere 901 (such as a hand) and the part corresponding to bounding sphere 902 (such as a hand) are in an interpenetrating state.
The collision detection result respectively corresponding to each frame of animation data is solved, to obtain at least one interpenetration degree quantization value respectively corresponding to each frame of animation data. The interpenetration degree quantization value is positively correlated with the degree of interpenetration. For example, referring to Figure 7, one interpenetration degree quantization value can be obtained based on collision detection result 703, and another interpenetration degree quantization value can be obtained based on collision detection result 704.
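A minimal sketch, not part of the original disclosure, of the per-frame collision test between two bounding spheres together with one possible interpenetration degree quantization value, assuming the quantization value is the penetration depth max(0, r1 + r2 - d); this is zero when the spheres merely touch or are apart and grows as they overlap, matching the stated positive correlation with the degree of interpenetration.

```python
import math
from typing import Tuple

Point = Tuple[float, float, float]

def interpenetration_value(c1: Point, r1: float, c2: Point, r2: float) -> float:
    d = math.dist(c1, c2)             # distance between the sphere centers
    if d >= r1 + r2:                  # apart or just touching: no interpenetration
        return 0.0
    return (r1 + r2) - d              # overlap depth, used as the quantization value
```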
Based on the at least one interpenetration degree quantization value respectively corresponding to each frame of animation data, at least one key frame data group is determined from the multiple frames of animation data. For example, referring to Figure 7, for the hand interpenetration phenomenon, all interpenetration degree quantization values corresponding to the hand interpenetration phenomenon are obtained and an interpenetration degree quantization value curve is generated; based on this curve, the starting key frame data, the peak key frame data and the end key frame data corresponding to the hand interpenetration phenomenon are determined, to obtain the key frame data group corresponding to the hand interpenetration phenomenon.
The at least one key frame data group is filtered and post-processed respectively, to obtain at least one adjusted key frame data group. The at least one adjusted key frame data group is repaired respectively, to obtain at least one repaired key frame data group; transition frame interpolation is performed respectively on the at least one repaired key frame data group, to obtain at least one repaired data segment.
The at least one repaired data segment is superimposed onto the animation data to be repaired according to the time periods respectively corresponding to the at least one repaired data segment, to obtain the repaired animation data.
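To show how the above steps could be chained, the following sketch, not part of the original disclosure, assumes that the helper sketches given earlier in this section (select_key_frames, interpolate_transition, superimpose_segment) are in scope; repair_key_frames is a hypothetical callback standing in for the re-redirection step that removes the interpenetration in the key frames, and the peak key frame is skipped during interpolation for brevity (a fuller version would interpolate start to peak and peak to end).

```python
def repair_animation(animation, values_per_behavior, repair_key_frames, fps=30):
    repaired = list(animation)
    for values in values_per_behavior:                 # one quantization sequence per interpenetration behavior
        selected = select_key_frames(values)           # (start, peak, end) frame indices
        if selected is None:
            continue
        start, peak, end = selected
        if (end - start) / fps <= 0.1:                 # first duration threshold: skip overly short behaviors
            continue
        fixed_frames = repair_key_frames(animation, (start, peak, end))   # repaired key frame poses
        transition = interpolate_transition(fixed_frames[0], fixed_frames[-1],
                                             max(end - start - 1, 0))
        segment = [fixed_frames[0]] + transition + [fixed_frames[-1]]
        repaired = superimpose_segment(repaired, segment, start)
    return repaired
```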
In summary, in the technical solution provided by the embodiments of the present application, collision bodies are created for at least one object in the animation data to be repaired, and collision detection is then performed, based on the created collision bodies, on the multiple frames of animation data corresponding to the animation data to be repaired, to obtain the key frame data groups, so that the interpenetration processes are obtained automatically. Compared with the related art, in which the interpenetration processes are found by manual frame-by-frame inspection, the technical solution provided by the embodiments of the present application can improve the efficiency of obtaining the interpenetration processes and thereby improve the efficiency of repairing the animation data.
The following are apparatus embodiments of the present application, which can be used to execute the method embodiments of the present application. For details not disclosed in the apparatus embodiments of the present application, refer to the method embodiments of the present application.
Referring to Figure 10, which shows a block diagram of an animation data repair apparatus provided by an embodiment of the present application. The apparatus has the function of implementing the above method examples, and the function may be implemented by hardware, or by hardware executing corresponding software. The apparatus may be the computer device introduced above, or may be provided in the computer device. As shown in Figure 10, the apparatus 1000 includes: an animation data acquisition module 1001, a collision body creation module 1002, a key frame acquisition module 1003, a repaired segment acquisition module 1004 and an animation data repair module 1005.
The animation data acquisition module 1001 is configured to acquire animation data to be repaired, where the animation data to be repaired includes multiple frames of animation data obtained by redirecting motion capture data.
The collision body creation module 1002 is configured to create collision bodies for at least one object in the animation data to be repaired, to obtain a collision body set respectively corresponding to the at least one object, where the collision body set includes at least one collision body.
The key frame acquisition module 1003 is configured to perform collision detection on the objects in the multiple frames of animation data respectively, based on the collision body sets respectively corresponding to the at least one object, to obtain at least one key frame data group, where each key frame data group corresponds to one interpenetration process.
The repaired segment acquisition module 1004 is configured to obtain at least one repaired data segment based on the at least one key frame data group, where a repaired data segment refers to a data segment in which the interpenetration process has been repaired.
The animation data repair module 1005 is configured to superimpose the at least one repaired data segment onto the animation data to be repaired according to the time periods respectively corresponding to the at least one repaired data segment, to obtain repaired animation data.
In an exemplary embodiment, the key frame acquisition module 1003 is configured to:
for target frame animation data in the multiple frames of animation data, perform collision detection on the objects in the target frame animation data to obtain at least one interpenetration degree quantization value corresponding to the target frame animation data, where the interpenetration degree quantization value is used to characterize the degree of interpenetration between an object pair in the at least one object;
determine the at least one key frame data group from the multiple frames of animation data based on the at least one interpenetration degree quantization value respectively corresponding to the multiple frames of animation data.
In some embodiments, the object pair includes a first object and a second object between which an interpenetration behavior exists.
In an exemplary embodiment, the key frame acquisition module 1003 is further configured to:
for the object pair between which the interpenetration behavior exists, obtain at least one candidate frame data sequence corresponding to the object pair from the multiple frames of animation data based on the interpenetration degree quantization values corresponding to the object pair, where each candidate frame data sequence corresponds to one interpenetration behavior between the first object and the second object;
for a target candidate frame data sequence in the at least one candidate frame data sequence, perform key frame selection on the target candidate frame data sequence to obtain the key frame data group corresponding to the target candidate frame data sequence;
determine the at least one key frame data group corresponding to the object pair based on the at least one candidate frame data sequence.
In an exemplary embodiment, the key frame data group includes starting key frame data, peak key frame data and end key frame data, where the starting key frame data refers to the animation data corresponding to the starting moment of the interpenetration behavior, the peak key frame data refers to the animation data corresponding to the moment when the interpenetration behavior reaches its maximum degree, and the end key frame data refers to the animation data corresponding to the end moment of the interpenetration behavior; the key frame acquisition module 1003 is further configured to:
for the target interpenetration behavior corresponding to the target candidate frame data sequence, determine the frame of animation data immediately preceding the first frame of animation data whose interpenetration degree quantization value for the target interpenetration behavior is not zero as the starting key frame data corresponding to the target candidate frame data sequence;
determine the frame of animation data with the maximum interpenetration degree quantization value for the target interpenetration behavior as the peak key frame data corresponding to the target candidate frame data sequence;
determine the first frame of animation data after the peak key frame data whose interpenetration degree quantization value is zero as the end key frame data corresponding to the target candidate frame data sequence.
In an exemplary embodiment, the repaired segment acquisition module 1004 is configured to:
repair the at least one key frame data group respectively to obtain at least one repaired key frame data group, where the interpenetration behavior in a repaired key frame data group has been repaired;
perform transition frame interpolation on the at least one repaired key frame data group respectively to obtain the at least one repaired data segment.
In an exemplary embodiment, as shown in Figure 11, the apparatus 1000 further includes: a key frame filtering module 1006 and a key frame adjustment module 1007.
The key frame filtering module 1006 is configured to, for a first object pair in the at least one object, filter the at least one key frame data group corresponding to the first object pair respectively, to obtain at least one filtered key frame data group corresponding to the first object pair, where the length of the time period corresponding to a filtered key frame data group is greater than a first duration threshold.
The key frame adjustment module 1007 is configured to perform post-processing on the at least one filtered key frame data group corresponding to the first object pair respectively, to obtain at least one adjusted key frame data group corresponding to the first object pair, where the action span of an adjusted key frame data group conforms to the skeletal data of the object pair.
The key frame adjustment module 1007 is further configured to obtain the adjusted key frame data groups respectively corresponding to the at least one key frame data group.
The repaired segment acquisition module 1004 is further configured to obtain the at least one repaired data segment based on the adjusted key frame data groups respectively corresponding to the at least one key frame data group.
In an exemplary embodiment, the key frame adjustment module 1007 is configured to:
determine a first adjustment parameter based on the skeletal data of the first object pair and the skeletal data of the motion capture objects corresponding to the first object pair;
when the first adjustment parameter is greater than a first threshold, extend the time periods respectively corresponding to the at least one filtered key frame data group backward, to obtain the at least one adjusted key frame data group;
when the first adjustment parameter is less than the first threshold, compress the time periods respectively corresponding to the at least one filtered key frame data group forward, to obtain the at least one adjusted key frame data group.
In an exemplary embodiment, the collision body creation module 1002 is configured to:
for a first object in the at least one object, obtain a polygon mesh of the first object, where the polygon mesh is used to represent the shape of the object;
for a target part corresponding to the first object, obtain a target local polygon mesh corresponding to the target part, where the target local polygon mesh is influenced by the target part;
construct the collision body of the target part based on the mesh points corresponding to the target local polygon mesh, where the collision body of the target part encloses the mesh points corresponding to the target local polygon mesh;
construct collision bodies for the respective parts corresponding to the first object, to obtain the collision body set corresponding to the first object.
In an exemplary embodiment, the collision body corresponding to the target part moves along with the target part.
In an exemplary embodiment, as shown in Figure 11, the apparatus 1000 further includes: an interpenetration state determination module 1008.
The collision body is a bounding sphere, and the interpenetration state determination module 1008 is configured to:
if the distance between the centers of two bounding spheres is greater than the sum of the radii of the two bounding spheres, determine that the parts corresponding to the two bounding spheres do not interpenetrate;
or, if the distance between the centers of two bounding spheres is equal to the sum of the radii of the two bounding spheres, determine that the parts corresponding to the two bounding spheres begin to interpenetrate or stop interpenetrating;
or, if the distance between the centers of two bounding spheres is less than the sum of the radii of the two bounding spheres, determine that the parts corresponding to the two bounding spheres are in an interpenetrating state.
In an exemplary embodiment, the animation data repair module 1005 is configured to:
for a target repaired data segment in the at least one repaired data segment, obtain, according to the target time period corresponding to the target repaired data segment, the original data segment corresponding to the target time period from the animation data to be repaired;
superimpose the original data segment and the target repaired data segment based on the skeletal action parameters corresponding to the original data segment and the redirection repair parameters corresponding to the target repaired data segment, to obtain the repaired animation data corresponding to the target time period.
In an exemplary embodiment, the animation data acquisition module 1001 is configured to:
acquire motion capture data corresponding to a motion capture object;
convert the motion capture data to obtain skeletal motion capture data, where the skeletal motion capture data corresponds to the skeletal data of the motion capture object;
redirect the skeletal motion capture data to obtain the animation data to be repaired.
In summary, in the technical solution provided by the embodiments of the present application, collision bodies are created for at least one object in the animation data to be repaired, and collision detection is then performed, based on the created collision bodies, on the multiple frames of animation data corresponding to the animation data to be repaired, to obtain the key frame data groups, so that the interpenetration processes are obtained automatically. Compared with the related art, in which the interpenetration processes are found by manual frame-by-frame inspection, the technical solution provided by the embodiments of the present application can improve the efficiency of obtaining the interpenetration processes and thereby improve the efficiency of repairing the animation data.
It should be noted that, when the apparatus provided by the above embodiments implements its functions, the division into the above functional modules is used only as an example for description. In practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus provided by the above embodiments and the method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, which will not be repeated here.
Please refer to Figure 12, which shows a structural block diagram of a computer device provided by an embodiment of the present application. The computer device can be used to implement the animation data repair method provided in the above embodiments. Specifically, it may include the following.
The computer device 1200 includes a central processing unit (such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit) and an FPGA (Field Programmable Gate Array)) 1201, a system memory 1204 including a RAM (Random-Access Memory) 1202 and a ROM (Read-Only Memory) 1203, and a system bus 1205 connecting the system memory 1204 and the central processing unit 1201. The computer device 1200 also includes a basic input/output system (I/O system) 1206 that helps transfer information between the components within the server, and a mass storage device 1207 for storing an operating system 1213, application programs 1214 and other program modules 1215.
The basic input/output system 1206 includes a display 1208 for displaying information and an input device 1209, such as a mouse or a keyboard, for a user to input information. Both the display 1208 and the input device 1209 are connected to the central processing unit 1201 through an input/output controller 1210 connected to the system bus 1205. The basic input/output system 1206 may also include the input/output controller 1210 for receiving and processing input from multiple other devices such as a keyboard, a mouse or an electronic stylus. Similarly, the input/output controller 1210 also provides output to a display screen, a printer or another type of output device.
The mass storage device 1207 is connected to the central processing unit 1201 through a mass storage controller (not shown) connected to the system bus 1205. The mass storage device 1207 and its associated computer-readable medium provide non-volatile storage for the computer device 1200. That is, the mass storage device 1207 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM (Compact Disc Read-Only Memory) drive.
Without loss of generality, the computer-readable medium may include a computer storage medium and a communication medium. The computer storage medium includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information such as computer programs, data structures, program modules or other data. The computer storage medium includes RAM, ROM, EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory or other solid-state storage technologies, CD-ROM, DVD (Digital Video Disc) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will know that the computer storage medium is not limited to the above. The system memory 1204 and the mass storage device 1207 described above may be collectively referred to as the memory.
According to the embodiments of the present application, the computer device 1200 may also be connected, through a network such as the Internet, to a remote computer on the network for operation. That is, the computer device 1200 may be connected to a network 1212 through a network interface unit 1211 connected to the system bus 1205, or the network interface unit 1211 may be used to connect to other types of networks or remote computer systems (not shown).
The memory further includes a computer program, which is stored in the memory and is configured to be executed by one or more processors, so as to implement the above animation data repair method.
In an exemplary embodiment, a computer-readable storage medium is also provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the above animation data repair method.
Optionally, the computer-readable storage medium may include: a ROM (Read-Only Memory), a RAM (Random-Access Memory), an SSD (Solid State Drive), an optical disc and the like. The random access memory may include ReRAM (Resistive Random Access Memory) and DRAM (Dynamic Random Access Memory).
In an exemplary embodiment, a computer program product is also provided. The computer program product includes a computer program, and the computer program is stored in a computer-readable storage medium. A processor of a computer device reads the computer program from the computer-readable storage medium, and the processor executes the computer program, so that the computer device performs the above animation data repair method.
It should be noted that, before collecting user-related data and during the process of collecting user-related data, the present application may display a prompt interface or a pop-up window, or output voice prompt information. The prompt interface, pop-up window or voice prompt information is used to inform the user that his or her related data is currently being collected, so that the present application starts to execute the relevant steps of obtaining the user-related data only after obtaining the user's confirmation operation on the prompt interface or the pop-up window; otherwise (that is, when the user's confirmation operation on the prompt interface or the pop-up window is not obtained), the relevant steps of obtaining the user-related data are ended, that is, the user-related data is not obtained. In other words, all user data collected in the present application (including motion capture data, animation data, skeletal data and the like) is processed strictly in accordance with the requirements of relevant national laws and regulations, the informed consent or separate consent of the personal information subject is obtained, and subsequent data use and processing are carried out within the scope authorized by laws and regulations and by the personal information subject.
It should be understood that the "multiple" mentioned herein refers to two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate three cases: A exists alone, both A and B exist, and B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. In addition, the step numbers described herein only exemplarily show one possible execution order of the steps. In some other embodiments, the above steps may also be performed out of the numbered order; for example, two steps with different numbers may be performed simultaneously, or two steps with different numbers may be performed in an order opposite to that shown in the figures, which is not limited in the embodiments of the present application.
The above are only exemplary embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (16)

  1. An animation data repair method, the method being executed by a computer device, the method comprising:
    acquiring animation data to be repaired, the animation data to be repaired comprising multiple frames of animation data obtained by redirecting motion capture data;
    creating collision bodies for at least one object in the animation data to be repaired, to obtain a collision body set respectively corresponding to the at least one object, the collision body set comprising at least one collision body;
    performing collision detection on the objects in the multiple frames of animation data respectively, based on the collision body sets respectively corresponding to the at least one object, to obtain at least one key frame data group, each key frame data group corresponding to one interpenetration process;
    obtaining at least one repaired data segment based on the at least one key frame data group, the repaired data segment referring to a data segment in which the interpenetration process has been repaired;
    superimposing the at least one repaired data segment onto the animation data to be repaired according to time periods respectively corresponding to the at least one repaired data segment, to obtain repaired animation data.
  2. The method according to claim 1, wherein performing collision detection on the objects in the multiple frames of animation data respectively, based on the collision body sets respectively corresponding to the at least one object, to obtain at least one key frame data group comprises:
    for target frame animation data in the multiple frames of animation data, performing collision detection on the objects in the target frame animation data to obtain at least one interpenetration degree quantization value corresponding to the target frame animation data, the interpenetration degree quantization value being used to characterize the degree of interpenetration between an object pair in the at least one object;
    determining the at least one key frame data group from the multiple frames of animation data based on the at least one interpenetration degree quantization value respectively corresponding to the multiple frames of animation data.
  3. The method according to claim 2, wherein the object pair comprises a first object and a second object between which an interpenetration behavior exists;
    determining the at least one key frame data group from the multiple frames of animation data based on the at least one interpenetration degree quantization value respectively corresponding to the multiple frames of animation data comprises:
    for the object pair between which the interpenetration behavior exists, obtaining at least one candidate frame data sequence corresponding to the object pair from the multiple frames of animation data based on the interpenetration degree quantization values corresponding to the object pair, wherein each candidate frame data sequence corresponds to one interpenetration behavior between the first object and the second object;
    for a target candidate frame data sequence in the at least one candidate frame data sequence, performing key frame selection on the target candidate frame data sequence to obtain a key frame data group corresponding to the target candidate frame data sequence;
    determining at least one key frame data group corresponding to the object pair based on the at least one candidate frame data sequence.
4. The method according to claim 3, wherein the key frame data group comprises start key frame data, peak key frame data, and end key frame data, the start key frame data being the animation data corresponding to a start moment of the interpenetration behavior, the peak key frame data being the animation data corresponding to the moment at which the interpenetration behavior reaches its maximum degree, and the end key frame data being the animation data corresponding to an end moment of the interpenetration behavior;
    the performing key frame selection on the target candidate frame data sequence to obtain the key frame data group corresponding to the target candidate frame data sequence comprises:
    for a target interpenetration behavior corresponding to the target candidate frame data sequence, determining the frame of animation data immediately preceding the first frame whose quantified interpenetration degree value for the target interpenetration behavior is non-zero as the start key frame data corresponding to the target candidate frame data sequence;
    determining the frame of animation data with the largest quantified interpenetration degree value for the target interpenetration behavior as the peak key frame data corresponding to the target candidate frame data sequence;
    determining the first frame of animation data after the peak key frame data whose quantified interpenetration degree value is zero as the end key frame data corresponding to the target candidate frame data sequence.
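A minimal sketch of the start/peak/end selection described above, assuming the per-frame quantified values and one candidate sequence are already available as a Python list and an index range. Frame indices stand in for the animation data itself.

def select_key_frames(values, start, end):
    # values: per-frame quantified interpenetration degree values for one object pair;
    # [start, end] is one candidate frame data sequence (all values non-zero).
    start_key = max(start - 1, 0)                   # frame just before the first non-zero value
    peak_key = max(range(start, end + 1), key=lambda i: values[i])
    end_key = min(end + 1, len(values) - 1)         # first zero-valued frame after the peak
    return start_key, peak_key, end_key

For example, with values [0, 0, 0.2, 0.8, 0.3, 0] and the candidate sequence (2, 4), this would return frames 1, 3 and 5 as start, peak and end key frames.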
5. The method according to claim 1, wherein the obtaining at least one repair data segment based on the at least one key frame data group comprises:
    repairing the at least one key frame data group respectively to obtain at least one repaired key frame data group, wherein the interpenetration behavior in the repaired key frame data group is repaired;
    performing transition frame interpolation on the at least one repaired key frame data group respectively to obtain the at least one repair data segment.
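Transition frame interpolation could, for instance, be a simple per-joint linear blend between the repaired key frames; a production pipeline would more likely use quaternion slerp for rotations. The sketch below assumes joint data stored as 3-tuples, identical joint sets across key frames, and frame indices lying between the start and end key frames; all of this is illustrative.

def interpolate_segment(repaired_keys, times):
    # repaired_keys: {frame_index: {joint: (x, y, z)}} for the repaired start,
    # peak and end key frames; times: frame indices the repair segment must cover.
    keys = sorted(repaired_keys)
    segment = {}
    for t in times:
        lo = max(k for k in keys if k <= t)
        hi = min(k for k in keys if k >= t)
        w = 0.0 if hi == lo else (t - lo) / (hi - lo)
        segment[t] = {
            joint: tuple(a + w * (b - a) for a, b in zip(repaired_keys[lo][joint],
                                                         repaired_keys[hi][joint]))
            for joint in repaired_keys[lo]
        }
    return segment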
6. The method according to claim 1, wherein before the obtaining at least one repair data segment based on the at least one key frame data group, the method further comprises:
    for a first object pair among the at least one object, filtering at least one key frame data group corresponding to the first object pair respectively to obtain at least one filtered key frame data group corresponding to the first object pair, wherein a length of the time period corresponding to the filtered key frame data group is greater than a first duration threshold;
    performing post-processing on the at least one filtered key frame data group corresponding to the first object pair respectively to obtain at least one adjusted key frame data group corresponding to the first object pair, wherein a motion span of the adjusted key frame data group conforms to skeleton data of the object pair;
    obtaining adjusted key frame data groups respectively corresponding to the at least one key frame data group;
    the obtaining at least one repair data segment based on the at least one key frame data group comprises:
    obtaining the at least one repair data segment based on the adjusted key frame data groups respectively corresponding to the at least one key frame data group.
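A possible form of the filtering step, assuming each key frame data group is a (start, peak, end) frame-index triple and the first duration threshold is given in seconds; the frame rate and names are assumptions for the example.

def filter_key_frame_groups(groups, fps, min_duration):
    # groups: list of (start_frame, peak_frame, end_frame) triples.
    # Keep only groups whose time span exceeds the first duration threshold.
    kept = []
    for start, peak, end in groups:
        if (end - start) / fps > min_duration:
            kept.append((start, peak, end))
    return kept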
7. The method according to claim 6, wherein the performing post-processing on the at least one filtered key frame data group corresponding to the first object pair respectively to obtain the at least one adjusted key frame data group corresponding to the first object pair comprises:
    determining a first adjustment parameter based on skeleton data of the first object pair and skeleton data of a motion capture object corresponding to the first object pair;
    in a case where the first adjustment parameter is greater than a first threshold, extending backward the time periods respectively corresponding to the at least one filtered key frame data group to obtain the at least one adjusted key frame data group;
    in a case where the first adjustment parameter is less than the first threshold, compressing forward the time periods respectively corresponding to the at least one filtered key frame data group to obtain the at least one adjusted key frame data group.
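The claim does not fix how the first adjustment parameter is computed. The sketch below assumes it is the ratio between the animated character's skeleton height and the motion capture object's height, and moves the end of each period later ("extend backward") or earlier ("compress forward") by a fixed fraction of its span. Both the parameter definition and the shift amount are assumptions made only for illustration.

def adjust_time_periods(groups, character_height, mocap_height,
                        total_frames, threshold=1.0, factor=0.2):
    # Hypothetical first adjustment parameter: skeleton height ratio.
    param = character_height / mocap_height
    adjusted = []
    for start, peak, end in groups:
        delta = int((end - start) * factor)
        if param > threshold:            # extend the period backward (later end frame)
            end = min(end + delta, total_frames - 1)
        elif param < threshold:          # compress the period forward (earlier end frame)
            end = max(end - delta, peak) # never compress past the peak key frame
        adjusted.append((start, peak, end))
    return adjusted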
8. The method according to claim 1, wherein the creating collision bodies for at least one object in the animation data to be repaired to obtain collision body sets respectively corresponding to the at least one object comprises:
    for a first object in the at least one object, obtaining a polygon mesh of the first object, the polygon mesh being used to represent an outer shape of the object;
    for a target part corresponding to the first object, obtaining a target local polygon mesh corresponding to the target part, the target local polygon mesh being influenced by the target part;
    constructing a collision body of the target part based on mesh points corresponding to the target local polygon mesh, the collision body of the target part enclosing the mesh points corresponding to the target local polygon mesh;
    constructing collision bodies for at least one part corresponding to the first object to obtain the collision body set corresponding to the first object.
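A sketch of building one collision body as a bounding sphere around the mesh points influenced by a target part. "Influenced by the target part" is read here as having a skin weight above a threshold for that part; that reading, the threshold, and all names are assumptions rather than the claimed construction.

def bounding_sphere_for_part(mesh_points, skin_weights, part, weight_threshold=0.5):
    # mesh_points: {vertex_id: (x, y, z)}; skin_weights: {vertex_id: {part: weight}}.
    pts = [mesh_points[v] for v, w in skin_weights.items()
           if w.get(part, 0.0) >= weight_threshold]
    if not pts:
        return None
    center = tuple(sum(c) / len(pts) for c in zip(*pts))
    radius = max(sum((a - b) ** 2 for a, b in zip(p, center)) ** 0.5 for p in pts)
    return center, radius            # sphere enclosing every influenced mesh point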
9. The method according to claim 8, wherein the collision body corresponding to the target part moves with the target part.
10. The method according to claim 1, wherein the collision body is a bounding sphere, and the method further comprises:
    if a distance between the centers of two bounding spheres is greater than a sum of the radii of the two bounding spheres, determining that the parts corresponding to the two bounding spheres do not interpenetrate;
    or,
    if the distance between the centers of the two bounding spheres is equal to the sum of the radii of the two bounding spheres, determining that the parts corresponding to the two bounding spheres start or end interpenetration;
    or,
    if the distance between the centers of the two bounding spheres is less than the sum of the radii of the two bounding spheres, determining that the parts corresponding to the two bounding spheres are in an interpenetrating state.
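The three cases map directly onto a distance test between sphere centers; a minimal sketch, with an epsilon tolerance added only because exact equality rarely occurs with floating-point data:

def classify_spheres(c1, r1, c2, r2, eps=1e-6):
    # Returns the interpenetration state of the two parts bounded by the spheres.
    dist = sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5
    if dist > r1 + r2 + eps:
        return "not interpenetrating"
    if abs(dist - (r1 + r2)) <= eps:
        return "interpenetration starts or ends"   # spheres exactly touching
    return "interpenetrating"

For example, classify_spheres((0, 0, 0), 1.0, (1.5, 0, 0), 1.0) returns "interpenetrating", since the center distance 1.5 is less than the radius sum 2.0.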
11. The method according to claim 1, wherein the superimposing the at least one repair data segment and the animation data to be repaired according to the time periods respectively corresponding to the at least one repair data segment to obtain repaired animation data comprises:
    for a target repair data segment in the at least one repair data segment, obtaining, according to a target time period corresponding to the target repair data segment, an original data segment corresponding to the target time period from the animation data to be repaired;
    superimposing the original data segment and the target repair data segment based on skeletal motion parameters corresponding to the original data segment and retargeting repair parameters corresponding to the target repair data segment, to obtain repaired animation data corresponding to the target time period.
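The claim describes a parameter-level superposition of the original segment and the repair segment. As a simplified stand-in, the sketch below only substitutes the repaired frames into the original animation over each repair time period; a faithful implementation would instead combine the skeletal motion parameters and retargeting repair parameters per frame.

def superimpose(original_frames, repair_segments):
    # original_frames: {frame_index: joint_data}; repair_segments: list of
    # (start_frame, end_frame, {frame_index: joint_data}) repair data segments.
    repaired = dict(original_frames)
    for start, end, segment in repair_segments:
        for t in range(start, end + 1):
            if t in segment:
                repaired[t] = segment[t]   # repaired frame replaces the original frame
    return repaired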
12. The method according to claim 1, wherein the obtaining animation data to be repaired comprises:
    obtaining motion capture data corresponding to a motion capture object;
    converting the motion capture data to obtain skeletal motion capture data, the skeletal motion capture data corresponding to skeleton data of the motion capture object;
    retargeting the skeletal motion capture data to obtain the animation data to be repaired.
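A deliberately naive sketch of the convert-then-retarget pipeline: local joint rotations are copied onto the target skeleton and the root translation is rescaled by an overall length ratio. Real retargeting, and the method claimed here, involves considerably more (for example joint mapping and constraint solving); the data structure and names below are assumptions for illustration only.

def retarget(skeletal_mocap, source_bone_lengths, target_bone_lengths):
    # skeletal_mocap: {frame: {"root_position": (x, y, z), "rotations": {joint: quat}}}.
    scale = target_bone_lengths["total"] / source_bone_lengths["total"]
    animation = {}
    for frame, pose in skeletal_mocap.items():
        animation[frame] = {
            "root_position": tuple(c * scale for c in pose["root_position"]),
            "rotations": dict(pose["rotations"]),   # local rotations copied as-is
        }
    return animation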
13. An animation data repair apparatus, the apparatus comprising:
    an animation data acquisition module, configured to acquire animation data to be repaired, the animation data to be repaired comprising multi-frame animation data obtained by retargeting motion capture data;
    a collision body creation module, configured to create collision bodies for at least one object in the animation data to be repaired to obtain collision body sets respectively corresponding to the at least one object, each collision body set comprising at least one collision body;
    a key frame acquisition module, configured to perform collision detection on the objects in the multi-frame animation data respectively, based on the collision body sets respectively corresponding to the at least one object, to obtain at least one key frame data group, each key frame data group corresponding to one interpenetration process;
    a repair segment acquisition module, configured to obtain at least one repair data segment based on the at least one key frame data group, the repair data segment being a data segment in which the interpenetration process has been repaired;
    an animation data repair module, configured to superimpose the at least one repair data segment and the animation data to be repaired according to the time periods respectively corresponding to the at least one repair data segment, to obtain repaired animation data.
14. A computer device, comprising a processor and a memory, the memory storing a computer program, the computer program being loaded and executed by the processor to implement the animation data repair method according to any one of claims 1 to 12.
15. A computer-readable storage medium, storing a computer program, the computer program being loaded and executed by a processor to implement the animation data repair method according to any one of claims 1 to 12.
16. A computer program product, comprising computer instructions stored in a computer-readable storage medium, a processor reading the computer instructions from the computer-readable storage medium and executing the computer instructions to implement the animation data repair method according to any one of claims 1 to 12.
PCT/CN2023/084373 2022-04-27 2023-03-28 Animation data repair method and apparatus, device, storage medium, and program product WO2023207477A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210454407.6 2022-04-27
CN202210454407.6A CN117011498A (en) 2022-04-27 2022-04-27 Animation data restoration method, device, apparatus, storage medium, and program product

Publications (1)

Publication Number Publication Date
WO2023207477A1 true WO2023207477A1 (en) 2023-11-02

Family

ID=88517332

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/084373 WO2023207477A1 (en) 2022-04-27 2023-03-28 Animation data repair method and apparatus, device, storage medium, and program product

Country Status (2)

Country Link
CN (1) CN117011498A (en)
WO (1) WO2023207477A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102725038A (en) * 2009-09-15 2012-10-10 索尼公司 Combining multi-sensory inputs for digital animation
US9827496B1 (en) * 2015-03-27 2017-11-28 Electronic Arts Inc. System for example-based motion synthesis
CN112001989A (en) * 2020-07-28 2020-11-27 完美世界(北京)软件科技发展有限公司 Virtual object control method and device, storage medium and electronic device
CN112270734A (en) * 2020-10-19 2021-01-26 北京大米科技有限公司 Animation generation method, readable storage medium and electronic device
CN113724169A (en) * 2021-09-08 2021-11-30 广州虎牙科技有限公司 Skin penetration repairing method and system and computer equipment
CN113888680A (en) * 2021-09-29 2022-01-04 广州虎牙科技有限公司 Method, device and equipment for three-dimensional model interpenetration repair

Also Published As

Publication number Publication date
CN117011498A (en) 2023-11-07

Similar Documents

Publication Publication Date Title
JP6898602B2 (en) Robust mesh tracking and fusion with parts-based keyframes and a priori models
JP7125992B2 (en) Building a virtual reality (VR) game environment using a virtual reality map of the real world
US10860838B1 (en) Universal facial expression translation and character rendering system
JP7457082B2 (en) Reactive video generation method and generation program
CN111161395B (en) Facial expression tracking method and device and electronic equipment
US20120021828A1 (en) Graphical user interface for modification of animation data using preset animation samples
CN109189302B (en) Control method and device of AR virtual model
JP2015079502A (en) Object tracking method, object tracking device, and tracking feature selection method
CN116977522A (en) Rendering method and device of three-dimensional model, computer equipment and storage medium
JP7483979B2 (en) Method and apparatus for playing multi-dimensional responsive images
CN113705520A (en) Motion capture method and device and server
TW202309834A (en) Model reconstruction method, electronic device and computer-readable storage medium
CN115331265A (en) Training method of posture detection model and driving method and device of digital person
CN110415322B (en) Method and device for generating action command of virtual object model
WO2023207477A1 (en) Animation data repair method and apparatus, device, storage medium, and program product
CN108734774B (en) Virtual limb construction method and device and human-computer interaction method
CN107820622A (en) A kind of virtual 3D setting works method and relevant device
WO2024027063A1 (en) Livestream method and apparatus, storage medium, electronic device and product
de Castro et al. Interaction by Hand-Tracking in a Virtual Reality Environment
CN114452646A (en) Virtual object perspective processing method and device and computer equipment
CN115953520B (en) Recording and playback method and device for virtual scene, electronic equipment and medium
CN116132653A (en) Processing method and device of three-dimensional model, storage medium and computer equipment
CN114296622B (en) Image processing method, device, electronic equipment and storage medium
CN115984943B (en) Facial expression capturing and model training method, device, equipment, medium and product
US8896607B1 (en) Inverse kinematics for rigged deformable characters

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23794912

Country of ref document: EP

Kind code of ref document: A1