CN117021098A - Method for generating world-place action based on in-place action - Google Patents


Info

Publication number
CN117021098A
CN117021098A (application CN202311057670.2A)
Authority
CN
China
Prior art keywords
frame
joint
acceleration
pelvis
global
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311057670.2A
Other languages
Chinese (zh)
Other versions
CN117021098B (en)
Inventor
米扬
施鉴泓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhongke Shenzhi Technology Co ltd
Original Assignee
Beijing Zhongke Shenzhi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhongke Shenzhi Technology Co ltd filed Critical Beijing Zhongke Shenzhi Technology Co ltd
Priority to CN202311057670.2A priority Critical patent/CN117021098B/en
Publication of CN117021098A publication Critical patent/CN117021098A/en
Application granted granted Critical
Publication of CN117021098B publication Critical patent/CN117021098B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method for generating a world-place action based on an in-place action comprises the following steps. Step one: an in-place frame sequence is input, and a fixed-delay buffer is set. Step two: pelvis displacements are calculated for the motion of all buffered frames based on the ground-first assumption, and the acceleration and acceleration derivative of the pelvis joint motion are calculated. Step three: it is judged whether the current frame is a jump start frame; if not, the result is output; if so, the state is changed from this frame, the buffer is traversed to identify the jump landing frame, and the pelvis joint displacement of the jump frame sequence is calculated from the duration between the start and stop frames and the acceleration and velocity at the start frame. The invention provides a method for generating world-place motion from in-place motion. The method solves the bottleneck that physically accurate motion displacement must be produced manually: world-place data obeying physical laws are generated from in-place actions in real time.

Description

Method for generating world-place action based on in-place action
Technical Field
The invention belongs to the field of world-place motion generation, and particularly relates to a method for generating world-place motion based on in-place motion.
Background
In-place motion refers to motion performed at a single position, i.e., motion of an object or human body that stays in one place; for example, a walking animation whose body remains at the same location is an in-place motion. In contrast, world-place motion refers to movement of an object or body across different positions. Generating world-place motion from in-place motion is a very promising topic. In virtual reality (VR) and augmented reality (AR), human motion is critical to an immersive experience; converting in-place human actions into world-place human actions helps develop more realistic virtual characters and thereby improves the user experience. In robot control, converting in-place motion data into world-place motion data can be used for motion planning and control, so that a robot adapts to different environments and task requirements. Such conversion also lowers the threshold of human motion analysis and health monitoring: accurate displacement information can be obtained with a simple pose-estimation sensor, and the health and functional state of a human body can be assessed by analyzing its motion trajectory and pose information at different positions. In the field of games, the method can greatly reduce the cost of motion production, simplify motion control logic, speed up physical simulation and animation production in game development, and improve the realism and playability of games.
In the conventional world-place motion production process, the displacement often has to be produced manually. Although motion capture and similar techniques can conveniently produce motion containing displacement data, directly captured displacement data often contain much noise and even errors and require extensive manual post-processing, which is still not intelligent enough. Especially under the current AI trend, the demand for large-scale action data sets is urgent, but the bottleneck of manual production seriously limits production efficiency.
Disclosure of Invention
The invention provides a method for generating a world-place action based on an in-place action, so as to overcome the above drawbacks of the prior art.
The invention is realized by the following technical scheme:
a method for generating a world-place action based on an in-place action, comprising the steps of:
step one: preprocessing frame data: first, joint global motion data is calculated from joint local motion data.
The method of calculating global data from local data is as follows:
$$P_{global} = R_{father} \cdot P_{local} + P_{father}$$
$$R_{global} = R_{father} \cdot R_{local}$$
where $P_{global}$ is the global displacement of the joint being calculated, $R_{global}$ is the global attitude rotation value of that joint, $R_{father}$ is the global attitude rotation value of the parent joint, $P_{father}$ is the global displacement of the parent joint, $P_{local}$ is the local displacement of the joint relative to its parent, and $R_{local}$ is the local attitude rotation value of the joint relative to its parent. The global data of all joints are calculated recursively.
Step two: based on the ground-first assumption that the motion of every frame satisfies the ground constraint, displacement data are generated for all frames, and the acceleration and acceleration derivative of the generated pelvis joint motion are calculated by discretization according to the following formulas:
$$v_n = (P_n - P_{n-1}) \cdot frameRate$$
$$a_n = (v_n - v_{n-1}) \cdot frameRate$$
$$\dot{a}_n = (a_n - a_{n-1}) \cdot frameRate$$
where $P$ denotes pelvis displacement, $v$ velocity, $a$ acceleration, $\dot{a}$ the derivative of acceleration, and $frameRate$ the frame rate of the animation data;
step three: judging whether the current frame is a jump starting frame, if not, outputting a result, if so, starting to change a state from the frame, traversing a cache to identify a jump landing frame, and calculating pelvis joint displacement of a jump frame sequence according to the duration of the start and stop frame and the acceleration and speed of the start frame;
The displacement during the no-ground-constraint process is calculated from the acceleration and velocity at the start and stop frames:
$$P_n = P_n^{h} + P_n^{v}$$
$$P_n^{h} = P_0^{h} + v_0^{h} \cdot t$$
$$P_n^{v} = P_0^{v} + v_0^{v} \cdot t - \tfrac{1}{2} g t^2$$
where $P_n$ is the displacement of the pelvis joint at the $n$-th frame during the no-ground-constraint process, $P_n^{v}$ and $P_n^{h}$ are the resolved displacements of the pelvis joint in the vertical direction and the horizontal plane, $P_0^{h}$ and $v_0^{h}$ are the horizontal displacement and velocity of the pelvis joint at the start frame of the no-ground-constraint process, $t$ is the offset time from the start frame of the no-ground-constraint process to the $n$-th frame, $P_0^{v}$ and $v_0^{v}$ are the vertical displacement and velocity of the pelvis joint at the start frame, and $g$ is the standard gravitational acceleration.
In the above method for generating a world-place action based on an in-place action, the buffering principle in step one is that a 30 frames/second animation requires an 18-frame buffer and 60 frames/second animation data require a 36-frame buffer.
In the above method for generating a world-place action based on an in-place action, the pelvis displacement of each frame in step two may be expressed as:
$$P_n^{pelvis} = P_n^{c} + V_n^{c}$$
where $P_n^{c}$ is the world position of the ground-constrained foot-end joint (held fixed while the constraint lasts) and $V_n^{c}$ is the vector from that joint to the pelvis in frame $n$.
in the third step, the jump start and stop frame is judged by using the acceleration value and the acceleration derivative of the pelvis joint, and the judgment formula is as follows:
where $a_n$ is the acceleration of the pelvis joint at the $n$-th frame generated in step one, and $\dot{a}_n$ is the acceleration derivative at the $n$-th frame. $a_0$ and $d_0$ are the statistically obtained acceleration detection threshold and acceleration-derivative detection threshold; the best detection effect is achieved when $a_0 = 32.5$ and $d_0 = 15$.
If $f(a_n) > 0$, the frame is ground constrained; if $f(a_n) \le 0$, the frame is in an unconstrained state.
The invention has the following advantages: it provides a method for generating world-place motion from in-place motion, solving the bottleneck that physically accurate motion displacement must be produced manually: world-place data obeying physical laws are generated from in-place actions in real time.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. It is obvious that the drawings in the following description show some embodiments of the present invention, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic flow chart of the present invention;
Fig. 2 is a schematic view of the sole joints of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments are clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
As shown in fig. 1, a method for generating a world-place action based on an in-place action includes the following steps:
step one: preprocessing frame data: first, joint global motion data is calculated from joint local motion data.
The method of calculating global data from local data is as follows:
$$P_{global} = R_{father} \cdot P_{local} + P_{father}$$
$$R_{global} = R_{father} \cdot R_{local}$$
where $P_{global}$ is the global displacement of the joint being calculated, $R_{global}$ is the global attitude rotation value of that joint, $R_{father}$ is the global attitude rotation value of the parent joint, $P_{father}$ is the global displacement of the parent joint, $P_{local}$ is the local displacement of the joint relative to its parent, and $R_{local}$ is the local attitude rotation value of the joint relative to its parent. The global data of all joints are calculated recursively.
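The recursion can be illustrated with a short sketch (a minimal illustration only; the parent-before-child joint ordering, the `parents` array, and the use of SciPy rotation objects are assumptions not taken from the patent):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def local_to_global(parents, p_local, r_local):
    """parents[j] is the parent index of joint j (-1 for the root).
    p_local: (J, 3) local offsets; r_local: list of J scipy Rotations."""
    n_joints = len(parents)
    p_global = np.zeros((n_joints, 3))
    r_global = [None] * n_joints
    for j in range(n_joints):  # assumes joints are ordered parent-before-child
        f = parents[j]
        if f < 0:  # root joint: local data is already global
            p_global[j] = p_local[j]
            r_global[j] = r_local[j]
        else:
            # P_global = R_father * P_local + P_father
            p_global[j] = r_global[f].apply(p_local[j]) + p_global[f]
            # R_global = R_father * R_local
            r_global[j] = r_global[f] * r_local[j]
    return p_global, r_global
```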
Step two: based on the ground-first assumption that the motion of every frame satisfies the ground constraint, displacement data are generated for all frames, and the acceleration and acceleration derivative of the generated pelvis joint motion are calculated by discretization according to the following formulas:
$$v_n = (P_n - P_{n-1}) \cdot frameRate$$
$$a_n = (v_n - v_{n-1}) \cdot frameRate$$
$$\dot{a}_n = (a_n - a_{n-1}) \cdot frameRate$$
where $P$ denotes pelvis displacement, $v$ velocity, $a$ acceleration, $\dot{a}$ the derivative of acceleration, and $frameRate$ the frame rate of the animation data;
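For illustration, the discretization in step two can be sketched as follows (the NumPy array formulation is an assumption; only the finite-difference formulas come from the text):

```python
import numpy as np

def discretize(p, frame_rate):
    """p: (N, 3) per-frame pelvis displacement under the ground-first assumption."""
    v = np.diff(p, axis=0) * frame_rate        # v_n = (P_n - P_{n-1}) * frameRate
    a = np.diff(v, axis=0) * frame_rate        # a_n = (v_n - v_{n-1}) * frameRate
    a_dot = np.diff(a, axis=0) * frame_rate    # da_n = (a_n - a_{n-1}) * frameRate
    return v, a, a_dot
```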
step three: judging whether the current frame is a jump starting frame, if not, outputting a result, if so, starting to change a state from the frame, traversing a cache to identify a jump landing frame, and calculating pelvis joint displacement of a jump frame sequence according to the duration of the start and stop frame and the acceleration and speed of the start frame;
The displacement during the no-ground-constraint process is calculated from the acceleration and velocity at the start and stop frames:
$$P_n = P_n^{h} + P_n^{v}$$
$$P_n^{h} = P_0^{h} + v_0^{h} \cdot t$$
$$P_n^{v} = P_0^{v} + v_0^{v} \cdot t - \tfrac{1}{2} g t^2$$
where $P_n$ is the displacement of the pelvis joint at the $n$-th frame during the no-ground-constraint process, $P_n^{v}$ and $P_n^{h}$ are the resolved displacements of the pelvis joint in the vertical direction and the horizontal plane, $P_0^{h}$ and $v_0^{h}$ are the horizontal displacement and velocity of the pelvis joint at the start frame of the no-ground-constraint process, $t$ is the offset time from the start frame of the no-ground-constraint process to the $n$-th frame, $P_0^{v}$ and $v_0^{v}$ are the vertical displacement and velocity of the pelvis joint at the start frame, and $g$ is the standard gravitational acceleration.
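A minimal sketch of the ballistic displacement during the airborne frames might look like this (the y-up axis convention and SI units with $g = 9.81\,\mathrm{m/s^2}$ are assumptions):

```python
import numpy as np

G = 9.81  # standard gravitational acceleration, m/s^2 (assumed units)

def airborne_displacement(p0, v0, t):
    """p0, v0: pelvis displacement and velocity at the take-off frame as
    3-vectors with y as the vertical axis (an assumed convention);
    t: offset time in seconds from the take-off frame."""
    p0 = np.asarray(p0, dtype=float)
    v0 = np.asarray(v0, dtype=float)
    p = p0 + v0 * t            # linear terms for both horizontal and vertical axes
    p[1] -= 0.5 * G * t * t    # gravity acts on the vertical axis only
    return p
```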
Specifically, the buffering principle in step one of this embodiment is that a 30 frames/second animation requires an 18-frame buffer and 60 frames/second animation data require a 36-frame buffer. Human motion is limited by natural attributes; data analysis shows that the hang time (time off the ground) of most human motions is less than 0.6 s. The frame buffer length is designed with this value as the standard: a 30 frames/second animation needs an 18-frame buffer, a 60 frames/second animation needs a 36-frame buffer, and so on as the frames per second increase.
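The buffer-length rule can be sketched as follows (the `math.ceil` rounding is an assumption; the 0.6 s hang-time standard comes from the text):

```python
import math

def buffer_length(frame_rate, hang_time=0.6):
    """Frames to buffer so the whole airborne phase fits in the cache."""
    return math.ceil(frame_rate * hang_time)  # 30 fps -> 18 frames, 60 fps -> 36
```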
Specifically, the pelvis displacement of each frame in step two of this embodiment may be expressed as:
$$P_n^{pelvis} = P_n^{c} + V_n^{c}$$
where $P_n^{c}$ is the world position of the ground-constrained foot-end joint and $V_n^{c}$ is the vector from that joint to the pelvis in frame $n$.
Preferably, the above formula for determining the pelvis displacement is derived as follows:
the invention proposes the assumption based on the observation of data and the analysis of realistic physical actions: all movements supported by the lower limbs are produced by the foot as a point of force. According to this assumption, there are three situations in which the ground is in contact, namely, no ground constraint, one-foot ground constraint, and two-foot ground constraint. Several studies have also proposed similar assumptions and used machine learning methods to classify frames of in-place motion to identify several constraints. However, this classification is limited by statistical methods, which make it difficult to achieve near 100% accuracy, and thus the final effect is limited to specific actions such as walking. Through analysis, the invention can obtain a new conclusion, greatly reduce the workload of the problem, and remarkably improve the calculation speed to realize real-time operation.
First ignore the "seesaw" switching problem of the sole. The animation segments all start in a standing pose, so frame 1 is under the two-foot ground constraint. Let the coordinates of the left and right foot joints be $P^{lfoot}$ and $P^{rfoot}$. In frame 2, all three constraint scenarios are possible. If frame 2 is still a two-foot constraint, $P^{lfoot}$ and $P^{rfoot}$ remain unchanged. If frame 2 is a single-foot constraint, there are two possibilities: if the left foot keeps the constraint, the left foot position remains $P^{lfoot}$ and the right foot position is obtained from the left-foot constraint by other calculation; if the right foot keeps the constraint position $P^{rfoot}$, the left foot position can be calculated in the same way. If frame 2 has no ground constraint, the position is calculated from the frame before take-off: given the pelvis joint velocity calculated from the preceding frame sequence, the position of the pelvis joint in frame 2 follows from that velocity. Synthesizing these cases, the pelvis position can be expressed as:
$$P^{pelvis} = \alpha \, (P^{lfoot} + V^{l}) + \beta \, (P^{rfoot} + V^{r}) \qquad (1)$$
where $V^{l}$ and $V^{r}$ denote the vectors from the left and right foot joints to the pelvis joint, and $\alpha$ and $\beta$ are weighting coefficients whose sum is 1. Because the in-place motion is trusted to be correct, $P^{lfoot} + V^{l}$ and $P^{rfoot} + V^{r}$ are equal; in this case the two-foot constraint may be expressed in the form of a single-foot constraint. Thus equation (1) can be expressed as:
$$P^{pelvis} = P^{c} + V^{c} \qquad (2)$$
where $P^{c}$ is the position of the constrained foot joint and $V^{c}$ the vector from it to the pelvis.
each frame depends on the position of the previous frame, the position of the N frame depends on the N-1 frame, and the frame always falls over
Push to frame 0. Derived from this precondition to all frames:
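For a segment in which one foot-end joint stays ground constrained, the accumulation can be sketched as follows (the array-based formulation and the re-anchoring note are assumptions, not the patent's wording):

```python
import numpy as np

def pelvis_world(p_pelvis, p_foot, anchor):
    """p_pelvis, p_foot: (N, 3) in-place global positions of the pelvis and of
    the constrained foot-end joint; anchor: the held world position of that
    joint. Returns the world pelvis positions: P_pelvis = P_c + V_c per frame."""
    return anchor + (p_pelvis - p_foot)

# When the constraint switches to the other foot, a new anchor would be
# re-derived from the current world pelvis position minus the new foot's
# foot-to-pelvis vector, keeping the recursion continuous back to frame 0.
```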
in addition, regarding the "rocker" problem of the sole, the "rocker" is a metaphor description of the sole of a human body, and as shown in fig. 2, the human foot is often abstract in a virtual space to be a connection relationship as illustrated.
The foot joint is not in direct contact with the ground; its displacement is obtained indirectly through the constraint of the heel or toe joint with the ground. There are likewise three possibilities for the ground constraint: toe-joint-only ground constraint, heel-joint-only ground constraint, and simultaneous toe-and-heel ground constraint. This can be expressed as:
$$P^{foot} = \gamma \, (P^{toe} + V^{t}) + \delta \, (P^{heel} + V^{h}), \qquad \gamma + \delta = 1$$
where $P^{toe} + V^{t}$ and $P^{heel} + V^{h}$ are consistent, so the expression likewise reduces to the single-joint constraint form.
in the process of contacting the sole of the foot with the ground, most of the foot and heel are simultaneously contacted with the ground constraint, and the toe joint single joint constraint can be used for replacing calculation. Through statistics, the situation of individual ground constraint of the heel joint is often in the conversion process of two types of constraint, and the constraint is unstable, so that the joint can be quickly transited to the toe and heel joint common ground constraint or the toe individual ground constraint. The ground constraint of the heel joint can be ignored and treated as either unconstrained or the toe joint alone ground constraint. The formula is:
the treatment effect is good through experiments, the generated displacement is not adversely affected, and
the calculation process and the judgment of state switching are greatly simplified.
The preprocessing of in-place motion data in step two of the invention is based on one assumption: every frame's action belongs to a ground constraint. From the above derivation, every frame then belongs to a single-foot constraint. Thus, the displacement of each frame can be expressed as:
$$P_n^{pelvis} = P_n^{c} + V_n^{c}$$
further, in the third step described in this embodiment, the jump start and stop frame is determined by using the acceleration value and the derivative of the acceleration of the pelvis joint, and the determination formula is as follows:
if f (a) 0 )>0, then indicates that the frame is ground constrained, if f (a 0 ) And less than or equal to 0 indicates that the frame is in an unconstrained state.
Preferably, note that human motion must conform to physical laws, and the motion of the pelvis joint derives from the interaction of the feet with the ground. A human jump requires force accumulation, and landing requires a buffering phase to unload the force, so by Newton's laws the acceleration of the pelvis motion should be smooth. The step-one preprocessing assumes that all motion belongs to the ground constraint; if a frame's motion data do not in fact satisfy the ground constraint, the pelvis joint acceleration will necessarily not be smooth. The jump frame sequence can therefore be determined by analyzing the pelvis joint acceleration of the data after step-one preprocessing. From extensive statistics and mathematical calculation, a method was found that determines the jump start and stop frames from the acceleration value and acceleration derivative of the pelvis joint, expressed by the formula:
where $a_n$ is the acceleration of the pelvis joint at the $n$-th frame generated in step one, and $\dot{a}_n$ is the acceleration derivative at the $n$-th frame. $a_0$ and $d_0$ are the statistically obtained acceleration detection threshold and acceleration-derivative detection threshold; the best detection effect is achieved when $a_0 = 32.5$ and $d_0 = 15$.
If $f(a_n) > 0$, the frame is ground constrained; if $f(a_n) \le 0$, the frame is in an unconstrained state.
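The exact form of $f(a_n)$ is not reproduced in this text, so the following sketch is an assumption: it treats a frame as ground constrained only when both the acceleration magnitude and the acceleration-derivative magnitude stay under their respective thresholds.

```python
import numpy as np

A0, D0 = 32.5, 15.0  # statistically obtained detection thresholds from the text

def is_ground_constrained(a_n, a_dot_n, a0=A0, d0=D0):
    """a_n, a_dot_n: pelvis-joint acceleration and its derivative at frame n.
    Returns True for a ground-constrained frame under the assumed test."""
    return float(np.linalg.norm(a_n)) < a0 and float(np.linalg.norm(a_dot_n)) < d0
```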
Finally, it should be noted that the above embodiments are merely illustrative of the technical solution of the present invention, and not limiting thereof; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (4)

1. A method for generating a world-place action based on an in-place action, characterized by comprising the following steps:
step one: preprocessing frame data: first, joint global motion data is calculated from joint local motion data.
The method of calculating global data from local data is as follows:
$$P_{global} = R_{father} \cdot P_{local} + P_{father}$$
$$R_{global} = R_{father} \cdot R_{local}$$
where $P_{global}$ is the global displacement of the joint being calculated, $R_{global}$ is the global attitude rotation value of that joint, $R_{father}$ is the global attitude rotation value of the parent joint, $P_{father}$ is the global displacement of the parent joint, $P_{local}$ is the local displacement of the joint relative to its parent, and $R_{local}$ is the local attitude rotation value of the joint relative to its parent. The global data of all joints are calculated recursively.
Step two: based on the ground-first assumption that the motion of every frame satisfies the ground constraint, displacement data are generated for all frames, and the acceleration and acceleration derivative of the generated pelvis joint motion are calculated by discretization according to the following formulas:
$$v_n = (P_n - P_{n-1}) \cdot frameRate$$
$$a_n = (v_n - v_{n-1}) \cdot frameRate$$
$$\dot{a}_n = (a_n - a_{n-1}) \cdot frameRate$$
where $P$ denotes pelvis displacement, $v$ velocity, $a$ acceleration, $\dot{a}$ the derivative of acceleration, and $frameRate$ the frame rate of the animation data;
step three: judging whether the current frame is a jump starting frame, if not, outputting a result, if so, starting to change a state from the frame, traversing a cache to identify a jump landing frame, and calculating pelvis joint displacement of a jump frame sequence according to the duration of the start and stop frame and the acceleration and speed of the start frame;
The displacement during the no-ground-constraint process is calculated from the acceleration and velocity at the start and stop frames:
$$P_n = P_n^{h} + P_n^{v}$$
$$P_n^{h} = P_0^{h} + v_0^{h} \cdot t$$
$$P_n^{v} = P_0^{v} + v_0^{v} \cdot t - \tfrac{1}{2} g t^2$$
where $P_n$ is the displacement of the pelvis joint at the $n$-th frame during the no-ground-constraint process, $P_n^{v}$ and $P_n^{h}$ are the resolved displacements of the pelvis joint in the vertical direction and the horizontal plane, $P_0^{h}$ and $v_0^{h}$ are the horizontal displacement and velocity of the pelvis joint at the start frame of the no-ground-constraint process, $t$ is the offset time from the start frame of the no-ground-constraint process to the $n$-th frame, $P_0^{v}$ and $v_0^{v}$ are the vertical displacement and velocity of the pelvis joint at the start frame, and $g$ is the standard gravitational acceleration.
2. The method for generating a world-place action based on an in-place action according to claim 1, wherein: in step one, the buffering principle is that a 30 frames/second animation requires an 18-frame buffer and 60 frames/second animation data require a 36-frame buffer.
3. The method for generating a world-place action based on an in-place action according to claim 1, wherein: the pelvis displacement of each frame in step two can be expressed as:
$$P_n^{pelvis} = P_n^{c} + V_n^{c}$$
4. The method for generating a world-place action based on an in-place action according to claim 1, wherein: in step three, the jump start and stop frames are judged using the acceleration value and the acceleration derivative of the pelvis joint; the judgment formula is as follows:
where $a_n$ is the acceleration of the pelvis joint at the $n$-th frame generated in step one, and $\dot{a}_n$ is the acceleration derivative at the $n$-th frame. $a_0$ and $d_0$ are the statistically obtained acceleration detection threshold and acceleration-derivative detection threshold; the best detection effect is achieved when $a_0 = 32.5$ and $d_0 = 15$.
If $f(a_n) > 0$, the frame is ground constrained; if $f(a_n) \le 0$, the frame is in an unconstrained state.
CN202311057670.2A 2023-08-22 2023-08-22 Method for generating world-place action based on in-place action Active CN117021098B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311057670.2A CN117021098B (en) 2023-08-22 2023-08-22 Method for generating world-place action based on in-place action

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311057670.2A CN117021098B (en) 2023-08-22 2023-08-22 Method for generating world-place action based on in-place action

Publications (2)

Publication Number Publication Date
CN117021098A true CN117021098A (en) 2023-11-10
CN117021098B CN117021098B (en) 2024-01-23

Family

ID=88644768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311057670.2A Active CN117021098B (en) 2023-08-22 2023-08-22 Method for generating world-place action based on in-place action

Country Status (1)

Country Link
CN (1) CN117021098B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6144385A (en) * 1994-08-25 2000-11-07 Michael J. Girard Step-driven character animation derived from animation data without footstep information
KR20010057880A (en) * 1999-12-23 2001-07-05 오길록 Animation method for walking motion variation
KR101519775B1 (en) * 2014-01-13 2015-05-12 인천대학교 산학협력단 Method and apparatus for generating animation based on object motion
US20190362529A1 (en) * 2018-05-22 2019-11-28 Magic Leap, Inc. Skeletal systems for animating virtual avatars
CN112891947A (en) * 2021-04-02 2021-06-04 网易(杭州)网络有限公司 Jumping animation processing method and device, electronic equipment and computer readable medium
CN114998487A (en) * 2022-05-07 2022-09-02 广州虎牙科技有限公司 Animation generation method, device, equipment and readable medium
CN115546247A (en) * 2022-10-21 2022-12-30 北京中科深智科技有限公司 Motion capture method based on LightHouse positioning system
WO2023000916A1 (en) * 2021-07-23 2023-01-26 网易(杭州)网络有限公司 Jump control method and apparatus for characters in game, terminal device, and medium
CN115888101A (en) * 2022-12-02 2023-04-04 网易(杭州)网络有限公司 Virtual role state switching method and device, storage medium and electronic equipment
CN116543080A (en) * 2023-05-06 2023-08-04 广州时秤信息技术有限公司 Animation processing method and device based on root bones


Also Published As

Publication number Publication date
CN117021098B (en) 2024-01-23

Similar Documents

Publication Publication Date Title
US11232621B2 (en) Enhanced animation generation based on conditional modeling
US8953844B2 (en) System for fast, probabilistic skeletal tracking
US11992768B2 (en) Enhanced pose generation based on generative modeling
WO2021143289A1 (en) Animation processing method and apparatus, and computer storage medium and electronic device
US9058663B2 (en) Modeling human-human interactions for monocular 3D pose estimation
Ye et al. Synthesis of detailed hand manipulations using contact sampling
CN111488824A (en) Motion prompting method and device, electronic equipment and storage medium
US11995754B2 (en) Enhanced animation generation based on motion matching using local bone phases
US11100314B2 (en) Device, system and method for improving motion estimation using a human motion model
JP2011170856A (en) System and method for motion recognition using a plurality of sensing streams
US11670030B2 (en) Enhanced animation generation based on video with local phase
US20240257429A1 (en) Neural animation layering for synthesizing movement
CN116051699A (en) Dynamic capture data processing method, device, equipment and storage medium
Schreiner et al. Global position prediction for interactive motion capture
CN117021098B (en) Method for generating world-place action based on in-place action
JP2011170857A (en) System and method for performing motion recognition with minimum delay
Huang et al. CoMo: Controllable Motion Generation through Language Guided Pose Code Editing
CN101952879A (en) Methods and apparatus for designing animatronics units from articulated computer generated characters
CN115294228A (en) Multi-graph human body posture generation method and device based on modal guidance
Häfliger et al. Dynamic motion matching: design and implementation of a context-aware animation system for games
Qammaz et al. Towards Holistic Real-time Human 3D Pose Estimation using MocapNETs.
US20230083619A1 (en) Method and system of global position prediction for imu motion capture
US20230310998A1 (en) Learning character motion alignment with periodic autoencoders
Jin A Three‐Dimensional Animation Character Dance Movement Model Based on the Edge Distance Random Matrix
KR20150065303A (en) Apparatus and method for reconstructing whole-body motion using wrist trajectories

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 911, 9th Floor, Block B, Xingdi Center, Building 2, No.10, Jiuxianqiao North Road, Jiangtai Township, Chaoyang District, Beijing, 100000

Patentee after: Beijing Zhongke Shenzhi Technology Co.,Ltd.

Country or region after: China

Address before: 100000 room 311a, floor 3, building 4, courtyard 4, Yongchang Middle Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Patentee before: Beijing Zhongke Shenzhi Technology Co.,Ltd.

Country or region before: China
