WO2021258544A1 - Animation blend space partitioning method and apparatus, and device and readable medium - Google Patents

Animation blend space partitioning method and apparatus, and device and readable medium

Info

Publication number
WO2021258544A1
Authority
WO
WIPO (PCT)
Prior art keywords
animation
mixing space
space
reference point
target
Prior art date
Application number
PCT/CN2020/112542
Other languages
French (fr)
Chinese (zh)
Inventor
张迎凯
何文峰
Original Assignee
完美世界(北京)软件科技发展有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 完美世界(北京)软件科技发展有限公司 filed Critical 完美世界(北京)软件科技发展有限公司
Publication of WO2021258544A1 publication Critical patent/WO2021258544A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G06T 13/20 - 3D [Three Dimensional] animation
    • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G06T 13/80 - 2D [Two Dimensional] animation, e.g. using sprites

Definitions

  • The present invention relates to the technical field of data processing, and in particular to a method, apparatus, device and readable medium for partitioning an animation blend space.
  • In some applications, the skeletal animation played in a game in response to the user's control operation is obtained through blending.
  • The key to the animation blending process is how to correctly calculate the required animations and weight information according to the user's needs; based on the calculated animations and weights, the blending can then be performed.
  • When designing a game, an animator can produce several representative sample skeletal animations. After the sample skeletal animations are obtained, game designers rely on personal experience and repeated trials to set corresponding motion parameters for the target object in each sample skeletal animation, and then construct an animation blend space based on these motion parameters. While the game runs, when the user performs a control operation on the target object, the sample skeletal animations can be blended through the animation blend space to produce the skeletal animation corresponding to the control operation.
  • Because the motion parameters are specified manually, the set motion parameters do not match the actual motion parameters of the sample skeletal animations.
  • The animation quality of the blended skeletal animation is directly related to the motion parameters set by the game designer.
  • When the set parameters do not match the actual ones, the animation quality of the blended skeletal animation is low.
  • In addition, a human cannot intervene in the blending process while animation blending is performed.
  • A high-dimensional blend space is constructed by simply compositing several low-dimensional blend spaces together.
  • A high-dimensional blend space obtained by such simple composition cannot achieve coupling within the blend space.
  • The embodiments of the present invention provide a method, apparatus, device and storage medium for partitioning an animation blend space, which are used to improve the animation quality of the blended skeletal animation.
  • In a first aspect, an embodiment of the present invention provides a method for partitioning an animation blend space, the method including:
  • acquiring an animation set, the animation set including a plurality of sample skeletal animations;
  • In a second aspect, an embodiment of the present invention provides an animation blend space partitioning apparatus, including:
  • an acquisition module, configured to acquire an animation set, the animation set containing a plurality of sample skeletal animations to be blended;
  • a motion analysis module, configured to perform motion analysis on the target object in each sample skeletal animation to obtain multiple types of motion parameters corresponding to the target object in each sample skeletal animation;
  • a construction module, configured to construct an animation blend space with the multiple types of motion parameters as dimensions;
  • a partitioning module, configured to determine first reference points corresponding to the motion parameters in the animation blend space, and partition the animation blend space based on the first reference points.
  • In a third aspect, an embodiment of the present invention provides an electronic device including a processor and a memory.
  • The memory stores at least one instruction, at least one program, code set or instruction set, which is loaded and executed by the processor to implement the animation blend space partitioning method of the first aspect.
  • In a fourth aspect, an embodiment of the present invention provides a computer-readable medium on which at least one instruction, at least one program, code set or instruction set is stored, and the at least one instruction, at least one program, code set or instruction set is loaded and executed by a processor to implement the animation blend space partitioning method of the first aspect.
  • Before the sample skeletal animations are blended, motion analysis can be performed on the target object in each sample skeletal animation; in this way, the different types of motion parameters corresponding to the target object in each sample skeletal animation are obtained. The motion parameters calculated automatically through motion analysis fit the actual motion parameters of the sample skeletal animations, so the motion parameters obtained through motion analysis are more accurate.
  • When the calculated parameters better fit the actual motion parameters of the sample skeletal animations, the animation quality of the skeletal animation obtained by blending based on these parameters is higher. This solves the problem that, when building an animation blend space, game designers have to manually enter the motion parameters of the animations based on personal experience, resulting in inaccurate motion parameters.
  • With the present invention, motion parameters can be computed automatically through motion analysis, the computed parameters fit the actual motion parameters of the sample skeletal animations, and the animation quality of the skeletal animation obtained by the blending process can be improved.
  • FIG. 1 is a flowchart of an animation blend space partitioning method provided by an embodiment of the present invention;
  • FIG. 2 is a schematic diagram of a result of partitioning a two-dimensional space into triangles according to an embodiment of the present invention;
  • FIG. 3 is a schematic diagram of a partitioning result provided by an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of an interpolation principle provided by an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of the effect of conversion into regular geometric bodies provided by an embodiment of the present invention;
  • FIG. 6 is a schematic structural diagram of an animation blend space partitioning apparatus provided by an embodiment of the present invention;
  • FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
  • Depending on the context, the word "if" as used herein can be interpreted as "when", "upon", "in response to determining" or "in response to detecting".
  • Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" can be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)".
  • FIG. 1 is a flowchart of an animation blend space partitioning method provided by an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
  • The sample skeletal animations may be animations drawn by an animator, and multiple sample skeletal animations with related content can be stored in one animation set.
  • Animation blending can subsequently be performed on the basis of the sample skeletal animations, and blending is a relatively critical technique.
  • Animation blending refers to a technique that allows more than one sample skeletal animation to contribute to the final pose of a character.
  • For animation blending, the key is how to correctly calculate the required animations and weight information according to the user's needs; based on the calculated animations and weights, the blending can then be performed.
  • After the animation set is obtained, motion analysis can be performed on the target object in each sample skeletal animation to obtain the multiple types of motion parameters corresponding to the target object in each sample skeletal animation.
  • The target object may be an object manipulated by a user acting as a player in the game, for example a game character.
  • Optionally, performing motion analysis on the target object in each sample skeletal animation to obtain the multiple types of motion parameters can be implemented as follows: during playback of each sample skeletal animation, sample the animation to obtain sampling information; then, according to the sampling information, calculate the multiple types of motion parameters corresponding to the target object in that sample skeletal animation.
  • In practical applications, each sample skeletal animation can be played back offline and sampled during playback. At least one of the root motion trajectory of the target object in each sample skeletal animation and the joint angles of the target object can then be calculated. Finally, based on at least one of the motion trajectory and the joint angles, the multiple types of motion parameters corresponding to the target object in each sample skeletal animation can be determined. It should be noted that, in practical applications, motion analysis can also be performed while playing the sample skeletal animations to derive other usable types of motion parameters besides the motion trajectory and the joint angles. The different types of motion parameters that can be analysed from the sample skeletal animations, and the ways in which these parameters are used to construct the animation blend space, all fall within the protection scope of the present invention.
  • The types of motion parameters include:
  • movement speed (Move-speed), in m/s;
  • angular speed (Turn-speed), in rad/s;
  • ground slope (Slope-Angle), in rad;
  • direction (Strafe), in rad.
  • The movement speed, angular speed and ground slope can be calculated from the root motion of the animation (a motion analysis method), and Strafe can be calculated from the angle between the neck joint and the trajectory direction.
  • The motion parameters calculated through motion analysis are more accurate and better represent the inherent motion characteristics of the sample skeletal animations.
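  • Purely as an illustration of such motion analysis (the `RootSample` structure, axis conventions and sampling step are assumptions of this sketch, not taken from the patent), the parameters could be estimated from sampled root-motion data as follows:

```python
import math
from dataclasses import dataclass
from typing import List

@dataclass
class RootSample:
    """One sample taken while a sample skeletal animation plays back (assumed layout)."""
    position: tuple   # root position (x, y, z) in metres; y is assumed to be the vertical axis
    yaw: float        # root facing angle, in radians
    neck_yaw: float   # neck joint facing angle, in radians

def analyze_motion(samples: List[RootSample], dt: float) -> dict:
    """Estimate Move-speed, Turn-speed, Slope-Angle and Strafe from root-motion samples."""
    if len(samples) < 2:
        raise ValueError("need at least two samples")
    speeds, turn_rates, slopes, strafes = [], [], [], []
    for prev, cur in zip(samples, samples[1:]):
        dx = cur.position[0] - prev.position[0]
        dy = cur.position[1] - prev.position[1]
        dz = cur.position[2] - prev.position[2]
        horizontal = math.hypot(dx, dz)
        speeds.append(horizontal / dt)                 # Move-speed, m/s
        turn_rates.append((cur.yaw - prev.yaw) / dt)   # Turn-speed, rad/s (angle wrapping omitted)
        slopes.append(math.atan2(dy, horizontal) if horizontal > 1e-6 else 0.0)  # Slope-Angle, rad
        trajectory_dir = math.atan2(dx, dz)            # direction of the root trajectory
        strafes.append(cur.neck_yaw - trajectory_dir)  # Strafe: neck vs. trajectory direction, rad
    n = len(speeds)
    return {
        "move_speed": sum(speeds) / n,
        "turn_speed": sum(turn_rates) / n,
        "slope_angle": sum(slopes) / n,
        "strafe": sum(strafes) / n,
    }
```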
  • For example, if a game designer chooses to use the Move-speed, Turn-speed and Strafe motion parameters to construct the animation blend space, a three-dimensional space can be constructed whose three dimensions correspond to Move-speed, Turn-speed and Strafe respectively.
  • After the animation blend space has been constructed, it can be partitioned.
  • The animation blend space can be partitioned automatically, manually, or by a combination of automatic and manual partitioning.
  • When the animation blend space is partitioned automatically, the first reference points corresponding to the motion parameters in the animation blend space are determined first.
  • For example, suppose the animation blend space is composed of the three dimensions Move-speed, Turn-speed and Strafe.
  • The number of selected motion parameter types determines the dimensionality of the animation blend space: two types of motion parameters form a two-dimensional space, and three types form a three-dimensional space. Blend spaces of different dimensionality are partitioned in different ways.
  • When the animation blend space is a two-dimensional space, partitioning it based on the first reference points can be implemented as follows: with the first reference points as the vertices of geometric faces, the animation blend space is partitioned into multiple geometric faces of a preset type.
  • The preset types of geometric faces include triangles or planar convex quadrilaterals.
  • In practice, when the preset geometric face is a triangle, partitioning the animation blend space into multiple faces of the preset type with the first reference points as vertices can be implemented as follows:
  • with the first reference points as vertices, the animation blend space is partitioned into multiple triangles using the Delaunay triangulation technique.
  • FIG. 2 is a schematic diagram of the result of partitioning a two-dimensional space into triangles.
  • When the animation blend space is a three-dimensional space, partitioning it based on the first reference points can be implemented as follows: with the first reference points as the vertices of geometric bodies, the animation blend space is partitioned into multiple geometric bodies of a preset type, the preset types including tetrahedra or convex hulls.
  • FIG. 3 is a schematic diagram of the result of partitioning a two-dimensional space with preset geometric faces and of partitioning a three-dimensional space with preset geometric bodies.
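  • By way of illustration only (not the patent's own implementation), this kind of Delaunay partitioning can be reproduced with SciPy, which returns triangles for a two-dimensional blend space and tetrahedra for a three-dimensional one; the reference-point values below are made up:

```python
import numpy as np
from scipy.spatial import Delaunay

# First reference points: one row per sample skeletal animation, one column per
# motion-parameter dimension (here a 2-D blend space, e.g. Move-speed and Turn-speed).
reference_points_2d = np.array([
    [0.0, 0.0], [2.0, 0.0], [5.0, 0.0],
    [0.0, 1.0], [2.5, 1.5], [5.0, 2.0],
])
triangulation = Delaunay(reference_points_2d)
print(triangulation.simplices)        # each row holds the 3 reference-point indices of one triangle

# The same call partitions a 3-D blend space (e.g. Move-speed, Turn-speed, Strafe)
# into tetrahedra, each simplex then having 4 vertex indices.
reference_points_3d = np.random.default_rng(0).uniform(size=(8, 3))
tetrahedralisation = Delaunay(reference_points_3d)
print(tetrahedralisation.simplices.shape)   # (number_of_tetrahedra, 4)
```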
  • After the animation blend space has been partitioned automatically, if the game designer is not satisfied with the result, appropriate modifications and additions can still be made. Designers can modify the automatic partitioning result, or add some second reference points on top of the first reference points. After second reference points are added, the animation blend space can be re-partitioned, or the partitioning result adjusted, based on the first reference points and the second reference points.
  • Each reference point in the animation blend space corresponds to a skeletal animation.
  • The sample skeletal animations corresponding to the first reference points come from the animation set drawn by the animator, whereas the skeletal animation corresponding to a second reference point needs to be generated from the sample skeletal animations.
  • Generating the skeletal animation corresponding to a second reference point can be implemented as follows: based on the relative positional relationship between the second reference point and the first reference points, determine, among the first reference points, the target reference points that match the second reference point;
  • then generate the skeletal animation based on the sample skeletal animations corresponding to the target reference points.
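  • The patent leaves the exact matching rule open; as one possible, purely illustrative reading, the sketch below matches a manually added second reference point to its nearest first reference points and blends their sample poses with inverse-distance weights. The function and variable names are assumptions of this sketch.

```python
import numpy as np

def match_target_reference_points(second_point, first_points, k=3):
    """Pick the k nearest first reference points as the 'target reference points'
    for a manually added second reference point (one possible matching rule),
    together with inverse-distance blend weights."""
    first_points = np.asarray(first_points, dtype=float)
    second_point = np.asarray(second_point, dtype=float)
    distances = np.linalg.norm(first_points - second_point, axis=1)
    nearest = np.argsort(distances)[:k]
    weights = 1.0 / np.maximum(distances[nearest], 1e-6)   # closer points weigh more
    return nearest, weights / weights.sum()

def generate_second_point_animation(second_point, first_points, sample_poses):
    """Synthesise the skeletal animation attached to the second reference point by
    blending the pose data of the matched sample skeletal animations."""
    nearest, weights = match_target_reference_points(second_point, first_points)
    poses = np.asarray([sample_poses[i] for i in nearest], dtype=float)
    return np.tensordot(weights, poses, axes=1)             # weighted sum of pose data
```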
  • After the animation blend space has been partitioned, the partitioned blend space can be used for animation blending.
  • The blending process can be realized as follows: when an animation blending demand event is detected, obtain the control parameters for the target object input by the user; based on the control parameters, construct a blend point in the partitioned animation blend space; and perform the animation blending based on the blend point.
  • That is: detect the control operation triggered by the user on the target object and determine the target motion parameters corresponding to the control operation; among the multiple subspaces contained in the partitioned animation blend space, determine the target subspace to which the control point corresponding to the target motion parameters belongs.
  • Each subspace is one of the preset geometric faces or preset geometric bodies; obtain the sample skeletal animations corresponding to the first reference points that make up the target subspace, and blend the obtained sample skeletal animations to obtain the target skeletal animation corresponding to the control operation.
  • Optionally, performing the blending based on the blend point can be implemented as follows: traverse all geometric bodies in the partitioned animation blend space to determine the target geometric body to which the blend point belongs; calculate the vertex information and weight information of the target geometric body through a geometric interpolation algorithm; and perform the animation blending based on the vertex information and the weight information.
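  • As an illustrative sketch of the two-dimensional triangle case (assumed data layout, not the patent's own code), the blend point can be located by traversing the triangles and its weights computed from barycentric coordinates:

```python
import numpy as np

def barycentric_weights(point, a, b, c):
    """Barycentric coordinates of `point` with respect to triangle (a, b, c)."""
    point, a, b, c = (np.asarray(v, dtype=float) for v in (point, a, b, c))
    m = np.column_stack((b - a, c - a))        # 2x2 edge matrix
    wb, wc = np.linalg.solve(m, point - a)     # coordinates along the two edges
    return np.array([1.0 - wb - wc, wb, wc])   # weights of vertices a, b, c

def find_target_triangle(blend_point, reference_points, triangles, eps=1e-9):
    """Traverse the triangles of the partitioned blend space and return the one that
    contains the blend point, together with its interpolation weights."""
    for tri in triangles:                      # tri: the 3 reference-point indices of a triangle
        a, b, c = (reference_points[i] for i in tri)
        weights = barycentric_weights(blend_point, a, b, c)
        if np.all(weights >= -eps):            # inside the triangle (or on its boundary)
            return tri, weights
    return None, None                          # blend point lies outside every triangle
```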
  • For example, while a game runs, the user may control the target object in the game, for instance moving it forward and to the right; the corresponding target motion parameters (move, turn) can then be determined from the control operation.
  • Assume the animation blend space is a two-dimensional space.
  • The control point corresponding to the target motion parameters (move, turn) can then be determined in the animation blend space.
  • Since the animation blend space has already been partitioned, each subspace can be traversed to determine the target subspace to which the control point belongs.
  • The vertices of the target subspace are reference points, and each reference point corresponds to a skeletal animation, so the skeletal animations corresponding to the reference points that make up the target subspace can be obtained. The obtained skeletal animations are then blended to obtain the target skeletal animation corresponding to the control operation.
  • Optionally, blending the skeletal animations can be realized by interpolation.
  • When the partitioned animation blend space is a two-dimensional space and the geometric face is a triangle, the geometric interpolation algorithm is a barycentric coordinate interpolation algorithm;
  • when the partitioned animation blend space is a two-dimensional space and the geometric face is a planar convex quadrilateral, the geometric interpolation algorithm is an inverse bilinear interpolation algorithm (see the sketch after this list);
  • when the partitioned animation blend space is a three-dimensional space and the geometric body is a tetrahedron, the geometric interpolation algorithm is a barycentric coordinate interpolation algorithm;
  • when the partitioned animation blend space is a three-dimensional space and the geometric body is a convex hull, the geometric interpolation algorithm converts the convex hull into Delaunay tetrahedra and interpolates within the tetrahedra.
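  • For the planar-convex-quadrilateral case, the following sketch shows one common formulation of inverse bilinear interpolation; it assumes the four corners are supplied in order around the quadrilateral and is only an illustration under those assumptions, not the patent's implementation:

```python
import math

def _cross(u, v):
    return u[0] * v[1] - u[1] * v[0]

def inverse_bilinear_weights(p, a, b, c, d):
    """Recover the bilinear coordinates (u, v) of p inside the convex quadrilateral
    a-b-c-d (corners given in order), then return the four corner weights."""
    e = (b[0] - a[0], b[1] - a[1])
    f = (d[0] - a[0], d[1] - a[1])
    g = (a[0] - b[0] + c[0] - d[0], a[1] - b[1] + c[1] - d[1])
    h = (p[0] - a[0], p[1] - a[1])

    k2 = _cross(g, f)
    k1 = _cross(e, f) + _cross(h, g)
    k0 = _cross(h, e)

    if abs(k2) < 1e-9:                      # opposite edges parallel: the equation is linear in v
        v = -k0 / k1
    else:                                   # general case: quadratic in v
        disc = math.sqrt(max(k1 * k1 - 4.0 * k0 * k2, 0.0))
        v = (-k1 - disc) / (2.0 * k2)
        if not 0.0 <= v <= 1.0:             # pick the root that lies inside the quadrilateral
            v = (-k1 + disc) / (2.0 * k2)

    denom = e[0] + g[0] * v
    if abs(denom) > 1e-9:
        u = (h[0] - f[0] * v) / denom
    else:                                   # fall back to the other coordinate axis
        u = (h[1] - f[1] * v) / (e[1] + g[1] * v)

    # Bilinear weights of corners a, b, c, d
    return ((1 - u) * (1 - v), u * (1 - v), u * v, (1 - u) * v)
```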
  • The interpolation method used thus differs from case to case.
  • Specifically, blending the skeletal animations can be realized as follows: through the interpolation method, calculate the weight information corresponding to each reference point of the target subspace to which the control point belongs, and blend the skeletal animations based on the weight information of each reference point.
  • The weight information reflects the distance relationship between the control point and the corresponding reference point: the closer the control point is to a reference point, the greater that reference point's weight; the farther away it is, the smaller the weight.
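  • For the three-dimensional cases, a tetrahedron's vertex weights can be obtained from barycentric coordinates, and a convex-hull subspace can first be tetrahedralised with Delaunay as described above; the sketch below illustrates this with SciPy and is not part of the patent:

```python
import numpy as np
from scipy.spatial import Delaunay

def tetrahedron_weights(p, v0, v1, v2, v3):
    """Barycentric weights of point p with respect to tetrahedron (v0, v1, v2, v3)."""
    v0, v1, v2, v3 = (np.asarray(v, dtype=float) for v in (v0, v1, v2, v3))
    m = np.column_stack((v1 - v0, v2 - v0, v3 - v0))   # 3x3 edge matrix
    w1, w2, w3 = np.linalg.solve(m, np.asarray(p, dtype=float) - v0)
    return np.array([1.0 - w1 - w2 - w3, w1, w2, w3])

def convex_hull_weights(p, hull_vertex_points):
    """Interpolate inside a convex-hull subspace by first tetrahedralising its vertices
    with Delaunay and then using barycentric weights inside the containing tetrahedron."""
    points = np.asarray(hull_vertex_points, dtype=float)
    tets = Delaunay(points)
    simplex = int(tets.find_simplex(np.asarray(p, dtype=float)))
    if simplex < 0:
        return None, None                              # p lies outside the convex hull
    vertex_indices = tets.simplices[simplex]           # the 4 vertices of that tetrahedron
    weights = tetrahedron_weights(p, *points[vertex_indices])
    return vertex_indices, weights
```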
  • Anim_i represents the pose data corresponding to reference point i in the target subspace,
  • and weight_i represents the weight information corresponding to reference point i.
  • The animation blending of the skeletal animations can then be expressed as blend_motion = Σ_i weight_i · Anim_i,
  • where blend_motion represents the pose data of the target object in the target skeletal animation obtained after blending.
  • The target object in the game can be composed of bones and joints, and the pose of the bones, among other factors, determines the posture of the target object.
  • With different bone pose data, the posture of the target object differs, which is what makes the target object appear dynamic.
  • The above Anim_i may therefore include the pose data of the bones of the target object.
  • The target skeletal animation may consist of multiple video frames, and the skeletal animations used for blending into the target skeletal animation also consist of multiple video frames.
  • For example, if the target skeletal animation includes 24 video frames, each skeletal animation used for blending into it also includes 24 video frames.
  • The n-th video frame of the target skeletal animation can be blended from the n-th video frames of the skeletal animations used for blending.
  • For instance, the bone pose data D1 corresponding to the first video frame of skeletal animation A1 used for blending, and the bone pose data D2 corresponding to the first video frame of skeletal animation A2 used for blending, can be obtained; multiplying D1 by its weight Q1 and adding D2 multiplied by its weight Q2 gives the bone pose data D3 corresponding to the first video frame of the target skeletal animation.
  • Once D3 is known, the pose of the target object in the first video frame of the target skeletal animation is known.
  • By analogy, the bone pose data corresponding to all video frames of the target skeletal animation can be calculated, and the target skeletal animation can be generated from the bone pose data of all its video frames.
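  • The frame-by-frame weighted sum described above (for example D3 = Q1 · D1 + Q2 · D2) can be sketched as follows; the array layout for pose data is an assumption of this illustration, and in practice bone rotations are usually blended with quaternion interpolation rather than a plain weighted sum:

```python
import numpy as np

def blend_skeletal_animations(animations, weights):
    """Blend sample skeletal animations frame by frame: the n-th frame of the target
    animation is the weighted sum of the n-th frames of the input animations
    (e.g. D3 = Q1 * D1 + Q2 * D2 for two animations).

    `animations` is a list of arrays of shape (num_frames, num_bones, pose_dim),
    all with the same frame count; the pose layout is an assumption of this sketch."""
    weights = np.asarray(weights, dtype=float)
    blended = np.zeros_like(np.asarray(animations[0], dtype=float))
    for animation, weight in zip(animations, weights):
        blended += weight * np.asarray(animation, dtype=float)
    return blended

# Two 24-frame animations, 30 bones, 7 pose values per bone, blended with weights 0.7 / 0.3
animation_a1 = np.random.default_rng(1).normal(size=(24, 30, 7))
animation_a2 = np.random.default_rng(2).normal(size=(24, 30, 7))
target_animation = blend_skeletal_animations([animation_a1, animation_a2], [0.7, 0.3])
print(target_animation.shape)   # (24, 30, 7)
```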
  • Optionally, determining the target subspace to which the control point corresponding to the target motion parameters belongs can be implemented as follows: obtain multiple grid spaces corresponding to the subspaces; traverse each grid space to determine whether the control point corresponding to the target motion parameters falls into any of them; if the control point falls into one of the grid spaces, the subspace containing that grid space is determined as the target subspace to which the control point belongs.
  • In practice, a convex-hull-shaped subspace can be converted into several tetrahedra, or the multiple grid spaces corresponding to the subspaces can be obtained directly, and each grid space traversed to determine whether the control point corresponding to the target motion parameters falls into any of them. Such a grid space may also be referred to as a virtual grid (Virtual Example Grid, VEG).
  • The grid spaces are used for weight calculation during blending. Because the grid spaces are arranged regularly, the grid space in which the control point lies, and the weights of the vertices corresponding to that grid space, can be computed quickly, which achieves fast animation blending.
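  • As one possible reading of the virtual example grid (the class name, cell-centre approximation and resolution below are assumptions of this illustration), a regular grid can be laid over a two-dimensional blend space and each cell can cache the triangle containing its centre, so that a control point is resolved to a triangle and weights with a constant-time cell lookup instead of a full traversal:

```python
import numpy as np
from scipy.spatial import Delaunay

class VirtualExampleGrid:
    """Regular grid over a 2-D blend space: each cell caches the triangle containing
    its centre, so a control point lookup is a constant-time cell index rather than
    a traversal of every subspace (an approximation near triangle borders)."""

    def __init__(self, reference_points, resolution=32):
        self.points = np.asarray(reference_points, dtype=float)
        self.triangulation = Delaunay(self.points)
        self.low = self.points.min(axis=0)
        self.high = self.points.max(axis=0)
        self.resolution = resolution
        xs = np.linspace(self.low[0], self.high[0], resolution)
        ys = np.linspace(self.low[1], self.high[1], resolution)
        centres = np.stack(np.meshgrid(xs, ys, indexing="ij"), axis=-1).reshape(-1, 2)
        self.cell_to_simplex = self.triangulation.find_simplex(centres).reshape(resolution, resolution)

    def lookup(self, control_point):
        """Return (reference-point indices, barycentric weights) for the control point."""
        cp = np.asarray(control_point, dtype=float)
        cell = np.clip(((cp - self.low) / (self.high - self.low) * (self.resolution - 1)).astype(int),
                       0, self.resolution - 1)
        simplex = int(self.cell_to_simplex[cell[0], cell[1]])
        if simplex < 0:
            simplex = int(self.triangulation.find_simplex(cp))   # exact query as a fallback
        if simplex < 0:
            return None, None                                    # outside the triangulated region
        transform = self.triangulation.transform[simplex]        # SciPy's barycentric transform
        partial = transform[:2].dot(cp - transform[2])
        weights = np.append(partial, 1.0 - partial.sum())
        return self.triangulation.simplices[simplex], weights
```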
  • In this way, motion analysis is performed on the target object in each sample skeletal animation, the motion parameters obtained through motion analysis are more accurate, and the animation quality of the skeletal animation obtained by blending based on these parameters is higher.
  • The embodiments of the present invention cover the entire process of editing, blend generation and optimization for a high-dimensional parameterized character animation blending scheme. Compared with existing blending schemes, the embodiments analyse the animation resources to obtain the corresponding blend space parameters, additionally provide interactive editing and partitioning functions for the user, and can handle high-dimensional blending, so the user can easily and quickly blend and generate better animations.
  • The solutions provided by the embodiments of the present invention can be used in technical fields such as games and animation to enhance the motion effect and expressiveness of animated characters.
  • The animation blend space partitioning apparatus of one or more embodiments of the present invention is described in detail below. Those skilled in the art will understand that these apparatuses can all be configured using commercially available hardware components through the steps taught in this solution.
  • FIG. 6 is a schematic structural diagram of an animation blend space partitioning apparatus provided by an embodiment of the present invention. As shown in FIG. 6, the apparatus includes: an acquisition module 11, a motion analysis module 12, a construction module 13 and a partitioning module 14.
  • The acquisition module 11 is configured to acquire an animation set, the animation set containing a plurality of sample skeletal animations to be blended;
  • the motion analysis module 12 is configured to perform motion analysis on the target object in each sample skeletal animation to obtain multiple types of motion parameters corresponding to the target object in each sample skeletal animation;
  • the construction module 13 is configured to construct an animation blend space with the multiple types of motion parameters as dimensions;
  • the partitioning module 14 is configured to determine first reference points corresponding to the motion parameters in the animation blend space, and partition the animation blend space based on the first reference points.
  • Optionally, the motion analysis module 12 is configured to:
  • sample each sample skeletal animation to obtain sampling information;
  • according to the sampling information, calculate the multiple types of motion parameters corresponding to the target object in each sample skeletal animation.
  • Optionally, the motion analysis module 12 is configured to:
  • Optionally, the apparatus further includes a modification module;
  • the modification module is configured to modify the partitioning result based on a modification operation on the partitioning result triggered by the user.
  • Optionally, the modification operation is adding a second reference point in the animation blend space;
  • the modification module is configured to:
  • Optionally, the animation blend space includes a two-dimensional space or a three-dimensional space.
  • When the animation blend space is a two-dimensional space,
  • the partitioning module 14 is configured to:
  • partition the animation blend space into a plurality of geometric faces of a preset type, the preset types of geometric faces including triangles or planar convex quadrilaterals.
  • When the preset type of geometric face is a triangle,
  • the partitioning module 14 is configured to:
  • partition the animation blend space into a plurality of geometric faces of the preset type through the Delaunay triangulation technique.
  • When the animation blend space is a three-dimensional space,
  • the partitioning module 14 is configured to:
  • partition the animation blend space into a plurality of geometric bodies of a preset type, the preset types of geometric bodies including a tetrahedron or a convex hull.
  • Optionally, the apparatus further includes a blending module, and the blending module is configured to:
  • perform the animation blending process.
  • Optionally, the blending module is configured to:
  • Optionally, the blending module is configured to:
  • perform the animation blending process.
  • When the partitioned animation blend space is a two-dimensional space
  • and the geometric face is a planar convex quadrilateral,
  • the geometric interpolation algorithm is an inverse bilinear interpolation algorithm;
  • when the partitioned animation blend space is a three-dimensional space, the geometric body is a tetrahedron, and the geometric interpolation algorithm is a barycentric coordinate interpolation algorithm;
  • when the partitioned animation blend space is a three-dimensional space
  • and the geometric body is a convex hull,
  • the geometric interpolation algorithm converts the convex hull into Delaunay tetrahedra and interpolates within the tetrahedra.
  • Optionally, the blending module is configured to:
  • Optionally, the blending module is configured to:
  • determine the subspace containing that grid space as the target subspace to which the control point belongs.
  • The apparatus shown in FIG. 6 can execute the animation blend space partitioning method provided in the embodiments shown in FIGS. 1 to 5.
  • The systems, methods and apparatuses of the embodiments of the present invention can be implemented as pure software (for example, a software program written in Java), as pure hardware (for example, a dedicated ASIC chip or FPGA chip), or as a system combining software and hardware (for example, a firmware system storing fixed code, or a system with general-purpose memory and a processor), as required.
  • The structure of the animation blend space partitioning apparatus shown in FIG. 6 may be implemented as an electronic device.
  • As shown in FIG. 7, the electronic device may include a processor 91 and a memory 92.
  • Executable code is stored in the memory 92, and when the executable code is executed by the processor 91, the processor 91 can at least implement the animation blend space partitioning method described in detail in the embodiments shown in FIGS. 1 to 5 above.
  • Optionally, the electronic device may also include a communication interface 93 for communicating with other devices.
  • Another aspect of the present invention is a computer-readable medium on which computer-readable instructions are stored; when the instructions are executed, the method of each embodiment of the present invention can be implemented.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An animation blend space partitioning method and apparatus, and a device and a readable medium. The method comprises: obtaining an animation set, the animation set comprising multiple sample skeletal animations to be subjected to animation blending (101); performing motion analysis on a target object in each sample skeletal animation to obtain multiple types of motion parameters corresponding to the target object in each sample skeletal animation (102); constructing an animation blend space by using the multiple types of the motion parameters as dimensions (103); and determining first reference points corresponding to the motion parameters in the animation blend space, and partitioning the animation blend space on the basis of the first reference points (104). By means of the present method, motion analysis is performed on the target object in each sample skeletal animation; the motion parameters obtained by motion analysis are more accurate; and the animation quality of the skeletal animation obtained by animation blending processing on the basis of the motion parameters is high.

Description

Animation blend space partitioning method and apparatus, and device and readable medium
This application claims priority to the Chinese patent application No. 202010575600.6, filed on June 22, 2020 and entitled "Animation blend space partitioning method, apparatus, device and readable medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the technical field of data processing, and in particular to a method, apparatus, device and readable medium for partitioning an animation blend space.
Background
In some applications, the skeletal animation played in a game in response to the user's control operation is obtained through blending. The key to the animation blending process is how to correctly calculate the required animations and weight information according to the user's needs; based on the calculated animations and weights, the blending can then be performed.
When designing a game, an animator can produce several representative sample skeletal animations. After the sample skeletal animations are obtained, game designers rely on personal experience and repeated trials to set corresponding motion parameters for the target object in each sample skeletal animation, and then construct an animation blend space based on these motion parameters. While the game runs, when the user performs a control operation on the target object, the sample skeletal animations can be blended through the animation blend space to produce the skeletal animation corresponding to the control operation.
In the above process, because the motion parameters corresponding to the sample skeletal animations are specified manually by game designers based on personal experience, the set motion parameters do not match the actual motion parameters of the sample skeletal animations. The animation quality of the blended skeletal animation is directly related to the motion parameters set by the designer; when the set parameters do not match the actual ones, the animation quality of the blended skeletal animation is low. In addition, a human cannot intervene in the blending process while animation blending is performed. A high-dimensional blend space is constructed by simply compositing several low-dimensional blend spaces together, and a high-dimensional blend space obtained in this way cannot achieve coupling within the blend space.
Summary of the Invention
The embodiments of the present invention provide a method, apparatus, device and storage medium for partitioning an animation blend space, which are used to improve the animation quality of the blended skeletal animation.
In a first aspect, an embodiment of the present invention provides a method for partitioning an animation blend space, the method including:
acquiring an animation set, the animation set including a plurality of sample skeletal animations;
performing motion analysis on a target object in each sample skeletal animation to obtain multiple types of motion parameters corresponding to the target object in each sample skeletal animation;
constructing an animation blend space with the multiple types of motion parameters as dimensions;
determining first reference points corresponding to the motion parameters in the animation blend space, and partitioning the animation blend space based on the first reference points;
performing animation blending based on the partitioned animation blend space.
In a second aspect, an embodiment of the present invention provides an animation blend space partitioning apparatus, including:
an acquisition module, configured to acquire an animation set, the animation set containing a plurality of sample skeletal animations to be blended;
a motion analysis module, configured to perform motion analysis on the target object in each sample skeletal animation to obtain multiple types of motion parameters corresponding to the target object in each sample skeletal animation;
a construction module, configured to construct an animation blend space with the multiple types of motion parameters as dimensions;
a partitioning module, configured to determine first reference points corresponding to the motion parameters in the animation blend space, and partition the animation blend space based on the first reference points.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor and a memory, the memory storing at least one instruction, at least one program, code set or instruction set, which is loaded and executed by the processor to implement the animation blend space partitioning method of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable medium on which at least one instruction, at least one program, code set or instruction set is stored, and the at least one instruction, at least one program, code set or instruction set is loaded and executed by a processor to implement the animation blend space partitioning method of the first aspect.
With the method provided by the embodiments of the present invention, before the sample skeletal animations are blended, motion analysis can be performed on the target object in each sample skeletal animation; in this way, the different types of motion parameters corresponding to the target object in each sample skeletal animation are obtained. The motion parameters calculated automatically through motion analysis fit the actual motion parameters of the sample skeletal animations, so the motion parameters obtained through motion analysis are more accurate. When the calculated parameters better fit the actual motion parameters of the sample skeletal animations, the animation quality of the skeletal animation obtained by blending based on these parameters is higher. This solves the problem that, when building an animation blend space, game designers have to manually enter the motion parameters of the animations based on personal experience, resulting in inaccurate motion parameters. With the present invention, motion parameters can be computed automatically through motion analysis, the computed parameters fit the actual motion parameters of the sample skeletal animations, and the animation quality of the skeletal animation obtained by the blending process can be improved.
Description of the Drawings
In order to describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below illustrate some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of an animation blend space partitioning method provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of a result of partitioning a two-dimensional space into triangles according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a partitioning result provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of an interpolation principle provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of the effect of conversion into regular geometric bodies provided by an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an animation blend space partitioning apparatus provided by an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
Detailed Description
In order to make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
The terms used in the embodiments of the present invention are only for the purpose of describing specific embodiments and are not intended to limit the present invention. The singular forms "a", "said" and "the" used in the embodiments of the present invention and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise; "multiple" generally means at least two.
Depending on the context, the word "if" as used herein can be interpreted as "when", "upon", "in response to determining" or "in response to detecting". Similarly, depending on the context, the phrase "if it is determined" or "if (a stated condition or event) is detected" can be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)".
In addition, the sequence of steps in the following method embodiments is only an example and is not strictly limited.
FIG. 1 is a flowchart of an animation blend space partitioning method provided by an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
101. Acquire an animation set, the animation set containing a plurality of sample skeletal animations to be blended.
102. Perform motion analysis on the target object in each sample skeletal animation to obtain multiple types of motion parameters corresponding to the target object in each sample skeletal animation.
103. Construct an animation blend space with the multiple types of motion parameters as dimensions.
104. Determine first reference points corresponding to the motion parameters in the animation blend space, and partition the animation blend space based on the first reference points.
The sample skeletal animations may be animations drawn by an animator, and multiple sample skeletal animations with related content can be stored in one animation set.
Animation blending can subsequently be performed on the basis of the sample skeletal animations, and blending is a relatively critical technique. Animation blending refers to a technique that allows more than one sample skeletal animation to contribute to the final pose of a character. For animation blending, the key is how to correctly calculate the required animations and weight information according to the user's needs; based on the calculated animations and weights, the blending can then be performed.
After the animation set is obtained, motion analysis can be performed on the target object in each sample skeletal animation to obtain the multiple types of motion parameters corresponding to the target object in each sample skeletal animation.
The target object may be an object manipulated by a user acting as a player in the game, for example a game character.
Optionally, performing motion analysis on the target object in each sample skeletal animation to obtain the multiple types of motion parameters can be implemented as follows: during playback of each sample skeletal animation, sample the animation to obtain sampling information; then, according to the sampling information, calculate the multiple types of motion parameters corresponding to the target object in that sample skeletal animation.
In practical applications, each sample skeletal animation can be played back offline and sampled during playback. At least one of the root motion trajectory of the target object in each sample skeletal animation and the joint angles of the target object can then be calculated. Finally, based on at least one of the motion trajectory and the joint angles, the multiple types of motion parameters corresponding to the target object in each sample skeletal animation can be determined. It should be noted that, in practical applications, motion analysis can also be performed while playing the sample skeletal animations to derive other usable types of motion parameters besides the motion trajectory and the joint angles. The different types of motion parameters that can be analysed from the sample skeletal animations, and the ways in which these parameters are used to construct the animation blend space, all fall within the protection scope of the present invention.
The corresponding motion parameters of the target object in each sample skeletal animation can be obtained through the above calculation. The types of motion parameters include:
movement speed (Move-speed), in m/s;
angular speed (Turn-speed), in rad/s;
ground slope (Slope-Angle), in rad;
direction (Strafe), in rad.
Of course, besides the types of motion parameters listed above, other types of motion parameters can also be introduced as needed in practical applications, which is not limited by the embodiments of the present invention.
In the embodiments of the present invention, the movement speed, angular speed and ground slope can be calculated from the root motion of the animation (a motion analysis method), and Strafe can be calculated from the angle between the neck joint and the trajectory direction.
The motion parameters calculated through motion analysis are more accurate and better represent the inherent motion characteristics of the sample skeletal animations.
After the different types of motion parameters have been obtained, game designers can select, from the different types, the motion parameter types they need, and construct an animation blend space (Blend Space) with the selected types as dimensions.
For example, suppose a game designer chooses to construct the animation blend space from the three parameter types Move-speed, Turn-speed and Strafe; a three-dimensional space can then be constructed whose three dimensions correspond to Move-speed, Turn-speed and Strafe respectively.
After the animation blend space has been constructed, it can be partitioned.
The animation blend space can be partitioned automatically, manually, or by a combination of automatic and manual partitioning.
When the animation blend space is partitioned automatically, the first reference points corresponding to the motion parameters in the animation blend space are determined first. Suppose the animation blend space is composed of the three dimensions Move-speed, Turn-speed and Strafe, and the motion parameters of the target object in a sample skeletal animation V are Move-speed = 5 m/s, Turn-speed = 10 rad/s and Strafe = 6 rad; the first reference point corresponding to Move-speed = 5 m/s, Turn-speed = 10 rad/s and Strafe = 6 rad can be determined in the animation blend space. Since the animation set includes multiple sample skeletal animations, the first reference point corresponding to each sample skeletal animation in the animation blend space can be obtained in the same way. After multiple first reference points have been obtained, the animation blend space can be partitioned automatically based on these reference points.
The number of selected motion parameter types determines the dimensionality of the animation blend space: two types of motion parameters form a two-dimensional space, and three types form a three-dimensional space. Blend spaces of different dimensionality are partitioned in different ways.
Optionally, when the animation blend space is a two-dimensional space, partitioning it based on the first reference points can be implemented as follows: with the first reference points as the vertices of geometric faces, the animation blend space is partitioned into multiple geometric faces of a preset type, the preset types including triangles or planar convex quadrilaterals.
In practice, when the preset geometric face is a triangle, partitioning the animation blend space into multiple faces of the preset type with the first reference points as vertices can be implemented as follows: with the first reference points as vertices, the animation blend space is partitioned into multiple triangles using the Delaunay triangulation technique. FIG. 2 is a schematic diagram of the result of partitioning a two-dimensional space into triangles.
Optionally, when the animation blend space is a three-dimensional space, partitioning it based on the first reference points can be implemented as follows: with the first reference points as the vertices of geometric bodies, the animation blend space is partitioned into multiple geometric bodies of a preset type, the preset types including tetrahedra or convex hulls. FIG. 3 is a schematic diagram of the result of partitioning a two-dimensional space with preset geometric faces and of partitioning a three-dimensional space with preset geometric bodies.
After the animation blend space has been partitioned automatically, if the game designer is not satisfied with the result, appropriate modifications and additions can still be made. Designers can modify the automatic partitioning result, or add some second reference points on top of the first reference points. After second reference points are added, the animation blend space can be re-partitioned, or the partitioning result adjusted, based on the first and second reference points.
It will be appreciated that each reference point in the animation blend space corresponds to a skeletal animation; the sample skeletal animations corresponding to the first reference points come from the animation set drawn by the animator, whereas the skeletal animation corresponding to a second reference point needs to be generated from the sample skeletal animations. Optionally, generating the skeletal animation corresponding to a second reference point can be implemented as follows: based on the relative positional relationship between the second reference point and the first reference points, determine, among the first reference points, the target reference points that match the second reference point; then generate the skeletal animation based on the sample skeletal animations corresponding to the target reference points.
It should be noted that, when the animation blend space is partitioned manually, more and different types of geometric faces or geometric bodies can be used, for example planar convex quadrilaterals or convex hulls. The partitioning process is then freer, and the final partitioning result better matches the game designer's design needs.
在对动画混合空间进行剖分处理之后,可以使用剖分后的动画混合空间,进行动画混合处理。基于剖分后的动画混合空间,进行动画混合处理的过程可以实现为:当检测到动画混合需求事件时,获取用户输入的对目标对象的控制参数;基于控制参数,在剖分后的动画混合空间中构造混合点;基于混合点,进行动画混合处理。After the animation mixing space is divided, the divided animation mixing space can be used to perform animation mixing processing. Based on the split animation mixing space, the process of animation mixing processing can be realized as follows: when an animation mixing demand event is detected, the control parameters of the target object input by the user are obtained; based on the control parameters, the animation mixing after the split Construct a blending point in the space; based on the blending point, perform animation blending processing.
检测用户触发的对目标对象的控制操作,确定控制操作对应的目标运动参数;在剖分后的动画混合空间包含的多个子空间中,确定目标运动参数对应的控制点所属的目标子空间,多个子空间包括多个预设类型的几何面或者多个预设类型的几何体;获取构成目标子空间的第一基准点对应的样本骨骼动画;对获取到的样本骨骼动画进行动画混合处理,得到与控制操作对应的目标骨骼动画。Detect the user-triggered control operation on the target object, determine the target motion parameter corresponding to the control operation; among the multiple subspaces contained in the split animation mixing space, determine the target subspace to which the control point corresponding to the target motion parameter belongs. Each subspace includes multiple preset types of geometric surfaces or multiple preset types of geometric bodies; obtains the sample skeleton animation corresponding to the first reference point constituting the target subspace; performs animation blending processing on the obtained sample skeleton animation to obtain The target bone animation corresponding to the control operation.
Optionally, the above process of performing animation blending based on the blend point can be implemented as follows: traverse all the geometric bodies in the partitioned animation mixing space to determine the target geometric body to which the blend point belongs; calculate the vertex information and weight information of the target geometric body through a geometric interpolation algorithm; and perform animation blending based on the vertex information and the weight information.
For example, while a game is running, a demand for animation blending arises when, say, the user performs a control operation on a target object in the game, specifically an operation that moves the target object forward and to the right. The target motion parameters (move, turn) corresponding to this control operation can then be determined. Assuming the animation mixing space is a two-dimensional space, the control point corresponding to the target motion parameters (move, turn) can be located in that space. Because the animation mixing space has already been partitioned, each subspace can be traversed to determine the target subspace to which the control point belongs. The vertices of the target subspace are reference points, and each reference point corresponds to a skeletal animation, so the skeletal animations corresponding to the reference points that constitute the target subspace can be obtained. Animation blending is then performed on the obtained skeletal animations to obtain the target skeletal animation corresponding to the control operation.
The animation blending of the skeletal animations can be performed by interpolation. When the partitioned animation mixing space is a two-dimensional space and the geometric surface is a triangle, the geometric interpolation algorithm is a barycentric-coordinate interpolation algorithm; when the partitioned animation mixing space is a two-dimensional space and the geometric surface is a planar convex quadrilateral, the geometric interpolation algorithm is an inverse bilinear interpolation algorithm; when the partitioned animation mixing space is a three-dimensional space and the geometric body is a tetrahedron, the geometric interpolation algorithm is a barycentric-coordinate interpolation algorithm; and when the partitioned animation mixing space is a three-dimensional space and the geometric body is a convex hull, the geometric interpolation algorithm converts the Delaunay triangulation into tetrahedra for interpolation. For animation mixing spaces of different dimensions, and for partitions that use different geometric surfaces or geometric bodies, the interpolation method therefore differs; the interpolation methods used in the different cases are shown in Table 1 below:
Table 1
Animation mixing space | Partition geometry          | Geometric interpolation algorithm
two-dimensional        | triangle                    | barycentric-coordinate interpolation
two-dimensional        | planar convex quadrilateral | inverse bilinear interpolation
three-dimensional      | tetrahedron                 | barycentric-coordinate interpolation
three-dimensional      | convex hull                 | Delaunay triangulation converted into tetrahedra for interpolation
Figure 4 is a schematic diagram illustrating the principles of barycentric-coordinate interpolation when the geometric surface is a triangle, of inverse bilinear interpolation, of barycentric-coordinate interpolation when the geometric body is a tetrahedron, and of converting a Delaunay triangulation into tetrahedra for interpolation.
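As a purely illustrative example of the barycentric-coordinate row of Table 1, and not code taken from the patent, the weight information of a control point inside one triangle of the partitioned two-dimensional space could be computed directly from the triangle's vertices as follows:

```python
# Illustrative sketch: barycentric-coordinate interpolation for one triangle.
import numpy as np

def barycentric_weights(a, b, c, p):
    """Weights (wa, wb, wc) of point p with respect to triangle (a, b, c).
    The weights sum to 1 and are all non-negative exactly when p lies inside
    the triangle, which also serves as the containment test when traversing
    the partitioned space."""
    a, b, c, p = (np.asarray(x, dtype=float) for x in (a, b, c, p))
    # Solve p = wa*a + wb*b + wc*c with wa + wb + wc = 1, i.e. a 2x2 system
    # in (wa, wb) after eliminating wc.
    m = np.column_stack((a - c, b - c))
    wa, wb = np.linalg.solve(m, p - c)
    return wa, wb, 1.0 - wa - wb

# Hypothetical control point inside a triangle of the (move, turn) space.
print(barycentric_weights((0, 0), (1, 0), (0, 1), (0.25, 0.25)))  # (0.5, 0.25, 0.25)
```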
The process of blending the skeletal animations by interpolation can be implemented as follows: the weight information corresponding to each reference point of the target subspace to which the control point belongs is calculated by the interpolation method, and the skeletal animations are blended based on the weight information corresponding to each reference point. The weight information reflects how near or far the control point is from the corresponding reference point: the closer the control point is to a reference point, the larger that reference point's weight, and the farther away it is, the smaller the weight. By calculating the weight information, the following parameters can be obtained:
[Anim_0, weight_0], [Anim_1, weight_1], …, [Anim_i, weight_i],
where Anim_i denotes the pose data corresponding to reference point i in the target subspace, and weight_i denotes the weight information corresponding to reference point i.
Based on the weight information corresponding to each reference point, the blending of the skeletal animations can then be implemented as:
blend motion = Σ weight_i × Anim_i,
where blend motion denotes the pose data of the target object in the target skeletal animation obtained after the blending.
It should be understood that the target object in a game can be composed of bones and joints, and factors such as the poses of the bones determine the posture of the target object. When the bone poses change across multiple consecutive video frames, the postures shown by the target object differ and the target object appears to move. On this basis, the above Anim_i can include the pose data of the bones of the target object.
It should be noted that the target skeletal animation can be composed of multiple video frames, and the skeletal animations used to blend into the target skeletal animation are also composed of multiple video frames.
Suppose the target skeletal animation includes 24 video frames and the skeletal animations used to blend into it also include 24 video frames. The n-th video frame of the target skeletal animation can then be blended from the n-th video frames of the skeletal animations used to blend into it.
Now suppose the first video frame of the target skeletal animation needs to be blended. The bone pose data D1 corresponding to the first video frame of skeletal animation A1, and the bone pose data D2 corresponding to the first video frame of skeletal animation A2, can be obtained, A1 and A2 being the animations used to blend into the target skeletal animation. Multiplying the bone pose data D1 by its corresponding weight Q1 and adding the bone pose data D2 multiplied by its corresponding weight Q2 gives the bone pose data D3 corresponding to the first video frame of the target skeletal animation. Once the bone pose data D3 has been calculated, the posture of the target object in the first video frame of the target skeletal animation is known.
The bone pose data corresponding to all the video frames of the target skeletal animation can be calculated in the same way, and the target skeletal animation is generated from the bone pose data corresponding to all of its video frames.
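A minimal sketch of this frame-by-frame blending, assuming the pose data of each animation is stored as a NumPy array of shape (24, pose_dim) and that the weights have already been obtained as described above; the names D1, D2, Q1 and Q2 follow the example in the text, while the array layout and values are hypothetical.

```python
# Illustrative sketch: blend the target skeletal animation frame by frame.
import numpy as np

def blend_skeletal_animations(pose_arrays, weights):
    """pose_arrays: list of arrays, each of shape (num_frames, pose_dim).
    weights: matching scalar weights that sum to 1.
    Frame n of the result is blended only from frame n of every input."""
    blended = np.zeros_like(pose_arrays[0])
    for poses, weight in zip(pose_arrays, weights):
        blended += weight * poses        # blend motion = sum(weight_i * Anim_i)
    return blended

# Two 24-frame animations D1 and D2 with weights Q1 and Q2 (hypothetical data).
rng = np.random.default_rng(0)
D1, D2 = rng.random((24, 6)), rng.random((24, 6))
Q1, Q2 = 0.7, 0.3
D3 = blend_skeletal_animations([D1, D2], [Q1, Q2])
# D3[0] is the bone pose data of the first video frame of the target animation.
```

Note that in practice bone rotations are often blended by quaternion interpolation rather than a plain weighted sum; the linear sum above simply mirrors the blend motion formula given earlier.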
While the game is running, if the animation mixing space has been partitioned with convex hulls, or with other complicated and irregularly shaped geometric bodies, the traversal needed to find the target subspace to which a control point belongs involves a very large amount of computation, which is unfavourable for running the game. If the device running the game has a low configuration, this causes stuttering while the game runs. In this case the irregular geometric bodies can be converted into regular ones, and the amount of computation needed to find the regular geometric body containing the control point drops greatly. On this basis, optionally, the process of determining, among the multiple subspaces contained in the partitioned animation mixing space, the target subspace to which the control point corresponding to the target motion parameters belongs can be implemented as follows: for each subspace contained in the partitioned animation mixing space, acquire the multiple grid spaces corresponding to that subspace; traverse each grid space and determine whether the control point corresponding to the target motion parameters belongs to any one of the multiple grid spaces; if the control point belongs to any one of the grid spaces, determine the subspace containing that grid space as the target subspace to which the control point belongs.
In practical applications, a subspace shaped as a convex hull can be converted into several tetrahedra, or the multiple grid spaces corresponding to the subspace can be acquired directly and each grid space traversed to determine whether the control point corresponding to the target motion parameters belongs to any one of them. The grid spaces can also be called virtual grids (Virtual Example Grids, abbreviated VEG). Figure 5 is a schematic diagram, provided by an embodiment of the present invention, of the effect of converting irregular geometric surfaces into regular geometric surfaces in a two-dimensional space and of converting irregular geometric bodies into regular geometric bodies in a three-dimensional space.
Because the vertices of a VEG do not store real skeletal animations, the corresponding virtual blended skeletal animations need to be constructed. Specifically, information of the form [Anim_0, weight_0], [Anim_1, weight_1], …, [Anim_i, weight_i] can be stored; this information can be computed by resampling the original partition geometry.
While the game is running, the grid spaces are used for weight calculation during animation blending. Because the grid spaces are arranged regularly and uniformly, the grid space in which the control point lies, and the weights of the vertices corresponding to that grid space, can be computed quickly, so fast animation blending can be achieved.
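The following sketch illustrates, under assumptions of our own, how such a virtual grid (VEG) lookup could replace the per-subspace traversal at run time: a uniform grid is laid over a two-dimensional blend space, and each cell stores precomputed [Anim_i, weight_i] pairs obtained by offline resampling. None of the class or parameter names come from the patent.

```python
# Illustrative sketch: constant-time cell lookup in a Virtual Example Grid.
import numpy as np

class VirtualExampleGrid:
    def __init__(self, lower, upper, resolution, cell_blend_data):
        """lower, upper: bounds of the 2D blend space.
        resolution: number of cells along each axis.
        cell_blend_data[ix][iy]: precomputed list of (anim_index, weight)
        pairs for one cell, produced by resampling the partition geometry."""
        self.lower = np.asarray(lower, dtype=float)
        self.upper = np.asarray(upper, dtype=float)
        self.resolution = resolution
        self.cell_blend_data = cell_blend_data

    def lookup(self, control_point):
        # Map the control point straight to a cell index; no traversal of
        # irregular geometric bodies is needed at run time.
        t = (np.asarray(control_point, dtype=float) - self.lower) / (self.upper - self.lower)
        ix, iy = np.clip((t * self.resolution).astype(int), 0, self.resolution - 1)
        return self.cell_blend_data[ix][iy]
```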
With the method provided by the embodiments of the present invention, before the sample skeletal animations are blended, motion analysis can be performed on the target object in each sample skeletal animation, and in this way the different types of motion parameters corresponding to the target object in each sample skeletal animation are obtained. The motion parameters computed automatically through motion analysis fit the actual motion parameters of the sample skeletal animations, so they are more accurate. When the computed motion parameters fit the actual motion parameters of the sample skeletal animations more closely, the skeletal animation obtained by blending based on such motion parameters has higher animation quality.
The embodiments of the present invention cover the whole workflow of editing, blend generation and optimization for a high-dimensional parameterized character-animation blending scheme. Compared with existing blending schemes, the embodiments of the present invention analyze the animation resources to obtain the corresponding animation mixing space parameters, additionally provide an interactive editing and partitioning function for the user, and can handle high-dimensional blending, so the user can conveniently and quickly blend and generate animations with better results. The solutions provided by the embodiments of the present invention can be used in technical fields such as games and animation to enhance the motion effect and expressiveness of animated characters.
The animation mixing space partitioning apparatus of one or more embodiments of the present invention will be described in detail below. Those skilled in the art will understand that these animation mixing space partitioning apparatuses can all be constructed by configuring commercially available hardware components through the steps taught in this solution.
FIG. 6 is a schematic structural diagram of an animation mixing space partitioning apparatus provided by an embodiment of the present invention. As shown in FIG. 6, the apparatus includes: an acquisition module 11, a motion analysis module 12, a construction module 13 and a partitioning module 14.
The acquisition module 11 is configured to acquire an animation set, the animation set containing multiple sample skeletal animations to be blended.
The motion analysis module 12 is configured to perform motion analysis on the target object in each sample skeletal animation to obtain multiple types of motion parameters corresponding to the target object in each sample skeletal animation.
The construction module 13 is configured to construct an animation mixing space with the multiple types of the motion parameters as dimensions.
The partitioning module 14 is configured to determine the first reference points corresponding to the motion parameters in the animation mixing space and to partition the animation mixing space based on the first reference points.
Optionally, the motion analysis module 12 is configured to:
sample each sample skeletal animation while it is playing, to obtain sampling information;
calculate, according to the sampling information, the multiple types of motion parameters corresponding to the target object in each sample skeletal animation.
Optionally, the motion analysis module 12 is configured to:
determine, according to the sampling information, the motion trajectory of the target object in each sample skeletal animation and the joint angles contained in the target object;
determine, based on the motion trajectory and the joint angles, the multiple types of motion parameters corresponding to the target object in each sample skeletal animation.
Optionally, the apparatus further includes a modification module.
The modification module is configured to modify the partition result based on a modification operation on the partition result triggered by the user.
Optionally, the modification operation is adding a second reference point in the animation mixing space, and the modification module is configured to:
re-partition the animation mixing space based on the first reference points and the second reference point;
determine, among the first reference points and based on the relative positional relationship between the second reference point and the first reference points, a target reference point that matches the second reference point;
generate a skeletal animation based on the sample skeletal animation corresponding to the target reference point, the skeletal animation corresponding to the second reference point in the animation mixing space.
Optionally, the animation mixing space includes a two-dimensional space or a three-dimensional space.
Optionally, the animation mixing space is a two-dimensional space, and the partitioning module 14 is configured to:
partition the animation mixing space, with the first reference points as the vertices of geometric surfaces, into multiple geometric surfaces of a preset type, the preset type of geometric surface including a triangle or a planar convex quadrilateral.
Optionally, the preset type of geometric surface is a triangle, and the partitioning module 14 is configured to:
partition the animation mixing space, with the first reference points as the vertices of the geometric surfaces, into the multiple geometric surfaces of the preset type through Delaunay triangulation.
Optionally, the animation mixing space is a three-dimensional space, and the partitioning module 14 is configured to:
partition the animation mixing space, with the first reference points as the vertices of geometric bodies, into multiple geometric bodies of a preset type, the preset type of geometric body including a tetrahedron or a convex hull.
Optionally, the apparatus further includes a blending module, and the blending module is configured to:
perform animation blending based on the partitioned animation mixing space.
Optionally, the blending module is configured to:
acquire, when an animation blending demand event is detected, the control parameters for the target object input by the user;
construct a blend point in the partitioned animation mixing space based on the control parameters;
perform animation blending based on the blend point.
Optionally, the blending module is configured to:
traverse all the geometric bodies in the partitioned animation mixing space and determine the target geometric body to which the blend point belongs;
calculate the vertex information and weight information of the target geometric body through a geometric interpolation algorithm;
perform animation blending based on the vertex information and the weight information.
Optionally, the partitioned animation mixing space is a two-dimensional space, the geometric body is a triangle, and the geometric interpolation algorithm is a barycentric-coordinate interpolation algorithm;
the partitioned animation mixing space is a two-dimensional space, the geometric body is a planar convex quadrilateral, and the geometric interpolation algorithm is an inverse bilinear interpolation algorithm;
the partitioned animation mixing space is a three-dimensional space, the geometric body is a tetrahedron, and the geometric interpolation algorithm is a barycentric-coordinate interpolation algorithm;
the partitioned animation mixing space is a three-dimensional space, the geometric body is a convex hull, and the geometric interpolation algorithm is an algorithm in which a Delaunay triangulation is converted into tetrahedra for interpolation.
Optionally, the blending module is configured to:
detect the control operation on the target object triggered by the user and determine the target motion parameters corresponding to the control operation;
determine, among the multiple subspaces contained in the partitioned animation mixing space, the target subspace to which the control point corresponding to the target motion parameters belongs, the multiple subspaces including multiple geometric surfaces of a preset type or multiple geometric bodies of a preset type;
acquire the sample skeletal animations corresponding to the first reference points that constitute the target subspace;
perform animation blending on the acquired sample skeletal animations to obtain the target skeletal animation corresponding to the control operation.
Optionally, the blending module is configured to:
acquire, for each subspace contained in the partitioned animation mixing space, the multiple grid spaces corresponding to that subspace;
traverse each grid space and determine whether the control point corresponding to the target motion parameters belongs to any one of the multiple grid spaces;
determine, if the control point belongs to any one of the multiple grid spaces, the subspace containing that grid space as the target subspace to which the control point belongs.
The apparatus shown in FIG. 6 can execute the animation mixing space partitioning method provided in the embodiments shown in FIG. 1 to FIG. 5 above; for the detailed execution process and technical effects, reference is made to the description of the foregoing embodiments, which is not repeated here.
As required, the systems, methods and apparatuses of the embodiments of the present invention can be implemented as pure software (for example, a software program written in Java), as pure hardware (for example, a dedicated ASIC chip or an FPGA chip), or as a system combining software and hardware (for example, a firmware system storing fixed code, or a system with a general-purpose memory and a processor).
In a possible design, the structure of the animation mixing space partitioning apparatus shown in FIG. 6 can be implemented as an electronic device. As shown in FIG. 7, the electronic device can include a processor 91 and a memory 92, where the memory 92 stores executable code, and when the executable code is executed by the processor 91, the processor 91 can at least implement the animation mixing space partitioning method provided in the embodiments shown in FIG. 1 to FIG. 5 above.
Optionally, the electronic device can further include a communication interface 93 for communicating with other devices.
Another aspect of the present invention is a computer-readable medium on which computer-readable instructions are stored; when the instructions are executed, the methods of the embodiments of the present invention can be implemented.
The embodiments of the present invention have been described above. The above description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The scope of the claimed subject matter is limited only by the appended claims.

Claims (18)

1. An animation mixing space partitioning method, characterized in that it comprises:
  acquiring an animation set, the animation set containing multiple sample skeletal animations to be blended;
  performing motion analysis on a target object in each sample skeletal animation to obtain multiple types of motion parameters corresponding to the target object in each sample skeletal animation;
  constructing an animation mixing space with the multiple types of the motion parameters as dimensions;
  determining first reference points corresponding to the motion parameters in the animation mixing space, and partitioning the animation mixing space based on the first reference points.
2. The method according to claim 1, characterized in that performing motion analysis on the target object in each sample skeletal animation to obtain the multiple types of motion parameters corresponding to the target object in each sample skeletal animation comprises:
  sampling each sample skeletal animation while the sample skeletal animation is playing, to obtain sampling information;
  calculating, according to the sampling information, the multiple types of motion parameters corresponding to the target object in each sample skeletal animation.
3. The method according to claim 2, characterized in that calculating, according to the sampling information, the multiple types of motion parameters corresponding to the target object in each sample skeletal animation comprises:
  determining, according to the sampling information, a motion trajectory of the target object in each sample skeletal animation and joint angles contained in the target object;
  determining, based on the motion trajectory and the joint angles, the multiple types of motion parameters corresponding to the target object in each sample skeletal animation.
4. The method according to claim 1, characterized in that, after partitioning the animation mixing space based on the first reference points, the method further comprises:
  modifying a partition result based on a modification operation on the partition result triggered by a user.
5. The method according to claim 1, characterized in that the modification operation is adding a second reference point in the animation mixing space, and modifying the partition result based on the modification operation on the partition result triggered by the user comprises:
  re-partitioning the animation mixing space based on the first reference points and the second reference point;
  the method further comprising:
  determining, among the first reference points and based on a relative positional relationship between the second reference point and the first reference points, a target reference point that matches the second reference point;
  generating a skeletal animation based on the sample skeletal animation corresponding to the target reference point, the skeletal animation corresponding to the second reference point in the animation mixing space.
6. The method according to claim 1, characterized in that the animation mixing space comprises a two-dimensional space or a three-dimensional space.
7. The method according to claim 6, characterized in that the animation mixing space is a two-dimensional space, and partitioning the animation mixing space based on the first reference points comprises:
  partitioning the animation mixing space, with the first reference points as vertices of geometric surfaces, into multiple geometric surfaces of a preset type, the preset type of geometric surface comprising a triangle or a planar convex quadrilateral.
8. The method according to claim 7, characterized in that the preset type of geometric surface is a triangle, and partitioning the animation mixing space, with the first reference points as the vertices of the geometric surfaces, into the multiple geometric surfaces of the preset type comprises:
  partitioning the animation mixing space, with the first reference points as the vertices of the geometric surfaces, into the multiple geometric surfaces of the preset type through Delaunay triangulation.
9. The method according to claim 6, characterized in that the animation mixing space is a three-dimensional space, and partitioning the animation mixing space based on the first reference points comprises:
  partitioning the animation mixing space, with the first reference points as vertices of geometric bodies, into multiple geometric bodies of a preset type, the preset type of geometric body comprising a tetrahedron or a convex hull.
10. The method according to claim 7 or 9, characterized in that the method further comprises:
  performing animation blending based on the partitioned animation mixing space.
11. The method according to claim 10, characterized in that performing animation blending based on the partitioned animation mixing space comprises:
  acquiring, when an animation blending demand event is detected, control parameters for the target object input by the user;
  constructing a blend point in the partitioned animation mixing space based on the control parameters;
  performing animation blending based on the blend point.
12. The method according to claim 11, characterized in that performing animation blending based on the blend point comprises:
  traversing all geometric bodies in the partitioned animation mixing space and determining a target geometric body to which the blend point belongs;
  calculating vertex information and weight information of the target geometric body through a geometric interpolation algorithm;
  performing animation blending based on the vertex information and the weight information.
13. The method according to claim 12, characterized in that the partitioned animation mixing space is a two-dimensional space, the geometric body is a triangle, and the geometric interpolation algorithm is a barycentric-coordinate interpolation algorithm;
  the partitioned animation mixing space is a two-dimensional space, the geometric body is a planar convex quadrilateral, and the geometric interpolation algorithm is an inverse bilinear interpolation algorithm;
  the partitioned animation mixing space is a three-dimensional space, the geometric body is a tetrahedron, and the geometric interpolation algorithm is a barycentric-coordinate interpolation algorithm;
  the partitioned animation mixing space is a three-dimensional space, the geometric body is a convex hull, and the geometric interpolation algorithm is an algorithm in which a Delaunay triangulation is converted into tetrahedra for interpolation.
14. The method according to claim 10, characterized in that performing animation blending based on the partitioned animation mixing space comprises:
  detecting a control operation on the target object triggered by the user and determining target motion parameters corresponding to the control operation;
  determining, among multiple subspaces contained in the partitioned animation mixing space, a target subspace to which a control point corresponding to the target motion parameters belongs, the multiple subspaces comprising multiple geometric surfaces of a preset type or multiple geometric bodies of a preset type;
  acquiring the sample skeletal animations corresponding to the first reference points that constitute the target subspace;
  performing animation blending on the acquired sample skeletal animations to obtain a target skeletal animation corresponding to the control operation.
15. The method according to claim 14, characterized in that determining, among the multiple subspaces contained in the partitioned animation mixing space, the target subspace to which the control point corresponding to the target motion parameters belongs comprises:
  acquiring, for each subspace contained in the partitioned animation mixing space, multiple grid spaces corresponding to the subspace;
  traversing each grid space and determining whether the control point corresponding to the target motion parameters belongs to any one of the multiple grid spaces;
  determining, if the control point belongs to any one of the multiple grid spaces, the subspace containing that grid space as the target subspace to which the control point belongs.
16. An animation mixing space partitioning apparatus, characterized in that it comprises:
  an acquisition module, configured to acquire an animation set, the animation set containing multiple sample skeletal animations to be blended;
  a motion analysis module, configured to perform motion analysis on a target object in each sample skeletal animation to obtain multiple types of motion parameters corresponding to the target object in each sample skeletal animation;
  a construction module, configured to construct an animation mixing space with the multiple types of the motion parameters as dimensions;
  a partitioning module, configured to determine first reference points corresponding to the motion parameters in the animation mixing space and to partition the animation mixing space based on the first reference points.
17. An electronic device, comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set or an instruction set, the at least one instruction, at least one program, code set or instruction set being loaded and executed by the processor to implement the method according to any one of claims 1 to 15.
18. A computer-readable medium on which at least one instruction, at least one program, a code set or an instruction set is stored, the at least one instruction, at least one program, code set or instruction set being loaded and executed by a processor to implement the method according to any one of claims 1 to 15.
PCT/CN2020/112542 2020-06-22 2020-08-31 Animation blend space partitioning method and apparatus, and device and readable medium WO2021258544A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010575600.6A CN111798547B (en) 2020-06-22 2020-06-22 Animation mixed space subdivision method, device, equipment and readable medium
CN202010575600.6 2020-06-22

Publications (1)

Publication Number Publication Date
WO2021258544A1 true WO2021258544A1 (en) 2021-12-30

Family

ID=72803584

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/112542 WO2021258544A1 (en) 2020-06-22 2020-08-31 Animation blend space partitioning method and apparatus, and device and readable medium

Country Status (2)

Country Link
CN (3) CN113129416A (en)
WO (1) WO2021258544A1 (en)


Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7859538B2 (en) * 2006-07-31 2010-12-28 Autodesk, Inc Converting deformation data for a mesh to animation data for a skeleton, skinning and shading in a runtime computer graphics animation engine
CN1916969A (en) * 2006-08-07 2007-02-21 浙江大学 Method for generating reaction accompany movement based on hybrid control
US8988437B2 (en) * 2009-03-20 2015-03-24 Microsoft Technology Licensing, Llc Chaining animations
CN101958007B (en) * 2010-09-20 2012-05-23 南京大学 Three-dimensional animation posture modeling method by adopting sketch
CN102254335B (en) * 2011-07-01 2013-01-02 厦门吉比特网络技术股份有限公司 System and method for editing game characters
CN103606180B (en) * 2013-11-29 2017-06-16 广州菲动软件科技有限公司 The rendering intent and device of 3D skeleton cartoons
CN104077797B (en) * 2014-05-19 2017-05-10 无锡梵天信息技术股份有限公司 three-dimensional game animation system
CN105894555B (en) * 2016-03-30 2020-02-11 腾讯科技(深圳)有限公司 Method and device for simulating limb actions of animation model
CN111079071A (en) * 2016-08-16 2020-04-28 完美鲲鹏(北京)动漫科技有限公司 Inverse dynamics calculation method and device applied to human-like skeleton
CN106600626B (en) * 2016-11-01 2020-07-31 中国科学院计算技术研究所 Three-dimensional human motion capture method and system
CN106780683A (en) * 2017-02-23 2017-05-31 网易(杭州)网络有限公司 The processing method and processing device of bone animation data
CN109145788B (en) * 2018-08-08 2020-07-07 北京云舶在线科技有限公司 Video-based attitude data capturing method and system
CN110102050B (en) * 2019-04-30 2022-02-18 腾讯科技(深圳)有限公司 Virtual object display method and device, electronic equipment and storage medium
CN111080755B (en) * 2019-12-31 2023-11-14 上海米哈游天命科技有限公司 Motion calculation method and device, storage medium and electronic equipment
CN111298433B (en) * 2020-02-10 2022-07-29 腾讯科技(深圳)有限公司 Animation video processing method and device, electronic equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090195544A1 (en) * 2008-02-05 2009-08-06 Disney Enterprises, Inc. System and method for blended animation enabling an animated character to aim at any arbitrary point in a virtual space
US20140285513A1 (en) * 2013-03-25 2014-09-25 Naturalmotion Limited Animation of a virtual object
CN109801350A (en) * 2019-01-24 2019-05-24 湖南深度体验智能技术有限公司 A kind of personage's motion simulation method based on example animation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "Unity - Manual: Skinned Mesh Renderer", UNITY USER MANUAL (2019.4 LTS), UNITY TECHNOLOGIES, 30 April 2019 (2019-04-30), pages 1 - 3, XP055882618, Retrieved from the Internet <URL:https://docs.unity3d.com/2019.4/Documentation/Manual/class-SkinnedMeshRenderer.html> [retrieved on 20220124] *
CHINEDUFN CHINEDU FRANCIS NWAFILI: "skeletal-animation-system: A standalone, stateless, dual quaternion based skeletal animation system built with interactive applications in mind", GITHUB, 23 October 2018 (2018-10-23), pages 1 - 10, XP055882616, Retrieved from the Internet <URL:https://github.com/chinedufn/skeletal-animation-system> [retrieved on 20220124] *

Also Published As

Publication number Publication date
CN113129415A (en) 2021-07-16
CN111798547A (en) 2020-10-20
CN113129416A (en) 2021-07-16
CN111798547B (en) 2021-05-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20941666; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20941666; Country of ref document: EP; Kind code of ref document: A1)