CN103279971B - Motion style retargeting method and system - Google Patents

Motion style retargeting method and system

Publication number: CN103279971B
Application number: CN201310222692.XA
Authority: CN (China)
Prior art keywords: motion, style, coordinate, parameter, space
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN103279971A (en)
Inventors: 夏时洪, 马万里, 王兆其, 王从艺
Current/Original Assignee: Institute of Computing Technology of CAS
Application filed by Institute of Computing Technology of CAS
Priority to CN201310222692.XA (CN103279971B/en)
Publication of CN103279971A (CN103279971A/en)
Application granted
Publication of CN103279971B (CN103279971B/en)

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The present invention provides a motion style retargeting method and system. The method comprises: representing the motion of each joint group of a human skeleton with low-dimensional coordinates that are independent of individual features; and processing the low-dimensional coordinates according to the spatial scaling and translation parameters of a motion style so as to transform the style of the motion. The method further comprises reconstructing human motion data from the resulting low-dimensional coordinates. The motion style retargeting method provided by the invention is simple and efficient, and can synthesize stylized human motion data meeting user requirements by reusing style information across different individuals and different motion data.

Description

Motion style retargeting method and system
Technical field
The present invention relates to the field of computer graphics, and in particular to a motion style retargeting method and system.
Background art
Virtual human animation has long been a research focus of the international graphics community and is widely used in many areas, such as film and television production, games and entertainment, simulation and training, and safety preview. As the quality requirements for virtual human animation keep rising in all of these fields, it is no longer enough to merely "display" a virtual human; the virtual human must also be given a "soul". However, owing to constraints such as production cost, conventional virtual human animation often reuses a fixed set of action templates, so that different characters behave identically except for their appearance. Such "stereotyped" virtual human animation is increasingly rejected by audiences, who expect the virtual world to exhibit the same variety of poses and expressions as the real world. Virtual human stylized motion synthesis techniques have therefore emerged.
To synthesize virtual human animation "in different poses and with different expressions", the following key issues must be solved: (1) How should the "style" of a motion be defined? People usually define style intuitively with descriptive words (such as "dejected"), but how to make a computer understand style in the same way and perform stylized motion synthesis is still an open question. (2) How can style information be reused between different motion data? For example, by retargeting the style of "dejected walking" onto "normal dribbling" data, we can obtain "dejected dribbling" data. Given the great variety of human motion, the technique should be able to synthesize motion data of a specified style for different behaviors of different individuals. Moreover, in terms of visual effect, the synthesized motion data must not only exhibit the behavioral characteristics of the specified style, but also preserve the motion content of the original data (i.e., "dribbling" must not turn into some other action).
Existing virtual human stylized motion synthesis techniques fall into three classes: spatio-temporal feature transformation methods, implicit mapping methods, and multilinear analysis methods. Spatio-temporal feature transformation methods can generate stylized human motion extremely fast, but the transformations are usually strongly tied to the motion content, so style information cannot be reused between heterogeneous motions (e.g., between "dribbling" and "walking"). Implicit mapping methods can reuse style information between behaviors whose motion content is fairly similar, but they are not suitable for more complex motor behaviors (such as "dancing"). Multilinear analysis methods are likewise unsuitable for complex human motion beyond common behaviors such as walking, running, and jumping, because they require a motion capture database consisting of a large number of motions that are structurally similar yet semantically distinguishable.
It is therefore necessary to study the problem of retargeting motion style between different behaviors of different individuals. The significance of this work is: (1) improving the reusability of existing human motion capture data; (2) providing a highly controllable method for generating human animation; (3) enhancing the artistic expressiveness of human animation in virtual environments.
Summary of the invention
According to one embodiment of the present invention, a motion style retargeting method is provided, comprising:
Step 1) representing the motion of each joint group of a human skeleton with low-dimensional coordinates, wherein the low-dimensional coordinates comprise a one-dimensional length coordinate and a three-dimensional rotation coordinate;
Step 2) processing the low-dimensional coordinates obtained in step 1) according to the spatial scaling and translation parameters of a motion style, so as to transform the style of the motion;
Step 3) reconstructing human motion data from the low-dimensional coordinates obtained in step 2).
In one embodiment, step 2) further comprises: transforming the interframe interval according to a time scaling parameter.
In one embodiment, the human skeleton comprises 6 joint groups.
In one embodiment, step 1) comprises:
Step 11) approximating the motion of each joint group of the human skeleton with an IK chain, obtaining the one-dimensional length coordinate and the three-dimensional rotation coordinate;
Step 12) normalizing the one-dimensional length coordinate according to the skeleton size.
In a further embodiment, step 11) comprises:
Step 111) representing, with the one-dimensional length coordinate, the stretching of the joint group to a target length;
Step 112) representing, with the three-dimensional rotation coordinate, the rotation of the joint group to a target orientation.
In a further embodiment, normalizing the one-dimensional length coordinate according to the skeleton size in step 12) comprises:
projecting the one-dimensional length coordinate from Euclidean space onto a standard interval according to the skeleton size.
In one embodiment, step 3) comprises:
Step 31) projecting the one-dimensional length coordinate back from the standard interval to Euclidean space;
Step 32) solving the local motion pose of each joint group with IK techniques;
Step 33) applying the three-dimensional rotation coordinate back to each joint group.
In one embodiment, step 2) comprises:
Step 21) transforming the style of the low-dimensional coordinates according to the following formula:

M_y = F_{x→y}(M_x) = (a_y / a_x)(M_x - M̄_x O) + (M̄_x + b_y - b_x) O

where M̄_x is the temporal mean of M_x over its T frames, T is the time span of the motion, a_x and b_x denote the spatial scaling parameter and translation parameter of style x respectively, a_y and b_y denote the spatial scaling parameter and translation parameter of style y respectively, and the matrix O is an alignment matrix;
Step 22) transforming the interframe interval from 1 to τ_y/τ_x, where τ_x and τ_y are the time scaling parameters of style x and style y respectively.
In one embodiment, before step 2) the method further comprises:
obtaining the spatial scaling and translation parameters of the motion style.
In another embodiment, before step 2) the method further comprises:
obtaining the time scaling parameter of the motion style.
In one embodiment, obtaining the spatial scaling and translation parameters of a motion style comprises:
learning them from stylized motion data samples with the following formula:

m_kj(t) = a_k c_j(t) + b_kj + e_kj(t)

where m_kj(t) denotes the stylized motion data of the j-th joint group in the k-th motion style at frame t, c_j(t) denotes the motion content of the j-th joint group at frame t, a_k denotes the spatial scaling parameter of the k-th motion style, b_kj denotes the translation parameter of the j-th joint group in the k-th motion style, and e_kj(t) denotes a noise term.
In another embodiment, obtaining the time scaling parameter of a motion style comprises:
learning it from stylized motion data samples with the following formula:

τ_k = T_k / T_normal

where T_k denotes the timing information of the k-th style and T_normal denotes the timing information of the normal style.
According to one embodiment of the present invention, a motion style retargeting system is provided, comprising:
a data processing module for representing the motion of each joint group of a human skeleton with low-dimensional coordinates, wherein the low-dimensional coordinates comprise a one-dimensional length coordinate and a three-dimensional rotation coordinate; and
a style transformation module for processing the low-dimensional coordinates obtained from the data processing module according to the spatial scaling and translation parameters of a motion style, so as to transform the style of the motion;
wherein the data processing module is further configured to reconstruct human motion data from the low-dimensional coordinates obtained from the style transformation module.
The beneficial effects of the present invention are as follows:
(1) the low-dimensional coordinate representation allows motion data from different individuals to be pooled for processing and analysis, so that motion style can be retargeted between different individuals;
(2) by means of relative spatio-temporal transformations, style information can be reused between different behaviors (including heterogeneous motions);
(3) the motion style retargeting process is simple and efficient, and is suitable for a variety of interactive applications.
Brief description of the drawings
Fig. 1 is a flowchart of a motion style retargeting method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the kinematic decomposition of an IK chain;
Fig. 3 is a flowchart of learning the relative spatio-temporal transformation according to an embodiment of the present invention;
Fig. 4A-4B are schematic diagrams of a group of stylized data and its reconstruction according to an embodiment of the present invention; and
Fig. 5 is a schematic diagram of stylized human motion data synthesized with the motion style retargeting method proposed by the present invention.
Detailed description
The present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
According to one embodiment of the present invention, a motion style retargeting method for virtual human animation synthesis is provided. As shown in Fig. 1, the method comprises the following steps.
In the first step, human motion data is represented with low-dimensional coordinates that are independent of individual features. Here, individual features refer to the skeleton size and body proportions of the human body.
Those skilled in the art will appreciate that, according to the correlations between joints in human motion, the human skeleton can be divided into 6 joint groups: torso, left leg, right leg, left arm, right arm, and head. The motion of each joint group can be approximated with an IK (inverse kinematics) chain and finally parameterized with a 3-dimensional rotation vector and a 1-dimensional normalized length scalar.
Specifically, any motion of an IK chain can be completed in two steps. As shown in Fig. 2, the joint group is first stretched to a target length and then rotated to a target orientation. The former can be parameterized with a 1-dimensional length coordinate, and the latter can be represented with a 3-dimensional rotation coordinate. The 1-dimensional length coordinate is then normalized according to the skeleton size by a linear transformation that projects it from Euclidean space onto a standard interval. This yields a low-dimensional coordinate representation of human motion that is independent of individual features, (s_{1..6}(t), r_{1..6}(t)), where s_{1..6}(t) are the normalized 1-dimensional length coordinates, r_{1..6}(t) are the 3-dimensional rotation coordinates, the subscript 1..6 indexes the 6 joint groups described above, and t is the current time (i.e., a frame in the motion sequence).
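The patent contains no source code; as an illustrative sketch only, the following computes the normalized 1-dimensional length coordinate and the 3-dimensional axis-angle rotation coordinate for one joint group. The helper name `low_dim_coords` and the assumption that a joint group is summarized by its root position, end-effector position, rest direction, and rest chain length are ours:

```python
import numpy as np

def low_dim_coords(root, end, rest_dir, chain_len):
    """Approximate one joint group by an IK chain: a 1-D length
    coordinate (normalized by the skeleton's chain length) plus a
    3-D axis-angle rotation coordinate (sketch; the degenerate
    antiparallel case is not handled)."""
    v = end - root
    length = np.linalg.norm(v)
    s = length / chain_len            # normalized length in [0, 1]
    d = v / length                    # current chain direction
    axis = np.cross(rest_dir, d)      # rotation axis (unnormalized)
    sin_a = np.linalg.norm(axis)
    cos_a = np.dot(rest_dir, d)
    angle = np.arctan2(sin_a, cos_a)
    r = np.zeros(3) if sin_a < 1e-9 else axis / sin_a * angle
    return s, r
```

Because the length coordinate is divided by the individual's own chain length, two skeletons of different sizes performing the same motion map to the same (s, r) pair, which is exactly the individual-feature independence the representation aims for.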
With traditional motion representations (such as Euler angles, quaternions, or rotation matrices), the same motion data applied to different virtual characters produces problems such as foot sliding and self-penetration, mainly caused by differing skeleton sizes and proportions. As described above, the motion style retargeting method of the present invention uses a low-dimensional coordinate representation that reduces the originally 114-dimensional human motion data to a unified 24-dimensional space, i.e., (1 dimension (length coordinate) + 3 dimensions (rotation coordinate)) × 6 (joint groups), so the data volume is only 21.05% of the original. In addition, this low-dimensional representation keeps identical end-effector constraint information for different individuals (e.g., different people) and is independent of individual features, which makes it possible to reuse style information between the motion data of different individuals. Moreover, it is applicable to human motion data of various complexity.
In the second step, based on the low-dimensional coordinate representation obtained in the first step, motion style retargeting is performed by means of a relative spatio-temporal transformation.
The relative spatio-temporal transformation comprises a transformation in the spatial domain and a transformation in the time domain. During style retargeting, the relative spatio-temporal transformation parameters are applied to the low-dimensional coordinates obtained in the first step to transform the motion style; these parameters can be learned from stylized sample data. In one embodiment, the stylized motion data used as learning samples can be acquired by motion capture, and samples with the same motion content but different motion styles can be used, because the relative transformation between different motion styles is independent of the motion content. Fig. 3 shows the flow of learning the relative spatio-temporal transformation.
In the spatial domain, the warping function from motion deformation techniques is extended to describe the relation between the motion content and the stylized motion data, as shown in formula (1):

m_kj(t) = a_k c_j(t) + b_kj + e_kj(t)    (1)

Formula (1) states that different stylized motion data m_kj(t) can be obtained by applying different spatial scaling parameters (also called spatial scaling factors) a_k and translation parameters b_kj to the same motion content c_j(t). Here e_kj(t) denotes a noise term (or error term); t denotes the time (i.e., a frame in the motion sequence); k indexes the k-th motion style; and j indexes the j-th joint group, with j ∈ [1, 6] as described above. In a further embodiment, the spatial scaling parameter a_k and the translation parameter b_kj can be estimated from the above formula by least squares. Fig. 4A shows a group of stylized data, and Fig. 4B shows the reconstruction obtained with the learned spatial scaling and translation parameters.
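A minimal sketch of the least-squares estimation described above, under the assumption that the stylized data and the motion content are given as (T, J) arrays of scalar coordinates; the function name `fit_style_params` is ours, not from the patent:

```python
import numpy as np

def fit_style_params(M, C):
    """Least-squares fit of m_kj(t) = a_k * c_j(t) + b_kj for one
    style k.  M: (T, J) stylized data, C: (T, J) motion content.
    Returns the scalar scaling a_k and per-joint-group offsets b_k."""
    T, J = M.shape
    # Design matrix: column 0 multiplies a_k, the next J columns
    # are indicators selecting b_kj for the corresponding joint group.
    A = np.zeros((T * J, 1 + J))
    A[:, 0] = C.T.ravel()                 # c_j(t), grouped by joint
    for j in range(J):
        A[j * T:(j + 1) * T, 1 + j] = 1.0
    y = M.T.ravel()
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return theta[0], theta[1:]
```

The noise term e_kj(t) of formula (1) is absorbed by the least-squares residual, so no explicit noise model is needed for the fit.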
In the time domain, the temporal characteristics of a style can be captured by its time scaling parameter τ_k relative to the "normal" style, as shown in formula (2):

τ_k = T_k / T_normal    (2)

where T_k denotes the timing information of the k-th style, such as the duration of one cycle of motion in that style, and T_normal denotes the timing information of the normal style.
From formula (2), the time scaling parameters of different styles can be learned directly from periodic motion data (such as "walking"), since only their cycle lengths (the duration of one cycle) need to be compared with the cycle length of the "normal" style. Moreover, a learned time scaling parameter τ_k can be reused directly on heterogeneous motion data, no matter how complex its motion content is.
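As a concrete illustration of the time-domain step, the sketch below (ours, not from the patent; the helper names `learn_time_scale` and `retime` are assumptions) learns τ from a pair of cycle lengths and retimes an arbitrary motion by resampling, which realizes a change of the effective interframe interval from 1 to τ:

```python
import numpy as np

def learn_time_scale(style_cycle, normal_cycle):
    """Formula (2): tau_k = T_k / T_normal, from one cycle of each
    style; units cancel, so frames or seconds both work."""
    return style_cycle / normal_cycle

def retime(frames, tau):
    """Resample a motion of shape (T, D) so it lasts tau times as
    long.  Linear interpolation per coordinate; real rotation data
    would need proper rotation interpolation, omitted in this sketch."""
    T, D = frames.shape
    new_T = int(round((T - 1) * tau)) + 1
    old_t = np.arange(T)
    new_t = np.linspace(0.0, T - 1, new_T)
    return np.stack([np.interp(new_t, old_t, frames[:, d])
                     for d in range(D)], axis=1)
```

Because only a ratio of cycle lengths is learned, the same τ can be applied to a motion whose content never appeared in the training samples, which is the reuse property claimed above.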
According to one embodiment of the present invention, motion style retargeting based on the learned relative spatio-temporal transformation parameters comprises the following sub-steps:
Step a) Using the relative spatio-temporal transformation parameters (the spatial scaling parameter a_k, the translation parameter b_kj, and the time scaling parameter τ_k), the low-dimensional coordinates M_x of the input motion (i.e., the motion data whose style is to be transformed) are transformed from style x to style y with the simple linear operation F_{x→y} of the following formula:

M_y = F_{x→y}(M_x) = (a_y / a_x)(M_x - M̄_x O) + (M̄_x + b_y - b_x) O    (3)

In formula (3), M̄_x is the temporal mean of the input motion and T is its time span; a_x and b_x are the relative spatio-temporal transformation parameters (the spatial scaling and translation parameters) of style x, and a_y and b_y are those of style y. The matrix O is an alignment matrix formed by arranging T identity matrices side by side; it aligns M̄_x with M_x so that their elements can be processed uniformly. The transformation function F_{x→y} has the following two important properties:
F_{x→x}(M_x) = M_x
F_{y→z}(F_{x→y}(M_x)) = F_{x→z}(M_x)
In this way, a group of low-dimensional coordinates M_x of style x (comprising, as described above, 6 joint groups) is transformed into a group of low-dimensional coordinates M_y of style y.
Step b) The interframe interval of the original motion data is transformed from 1 to τ_y/τ_x, where τ_x and τ_y are the time scaling parameters of style x and style y respectively.
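A minimal sketch of step a) under the assumption that the low-dimensional coordinates are stacked in a (T, D) array, so that the alignment matrix O of formula (3) reduces to broadcasting the temporal mean over the T frames; the function name `restyle` is ours:

```python
import numpy as np

def restyle(Mx, ax, bx, ay, by):
    """F_{x->y} of formula (3).  Mx: (T, D) low-dim coordinates;
    ax, ay: scalar spatial scaling parameters; bx, by: (D,)
    translation parameters.  Broadcasting the temporal mean over all
    frames plays the role of the alignment matrix O."""
    Mx = np.asarray(Mx, dtype=float)
    mean = Mx.mean(axis=0, keepdims=True)      # temporal mean of Mx
    return (ay / ax) * (Mx - mean) + (mean + by - bx)
```

One can check numerically that this map is the identity when the two styles coincide and that chaining x→y and y→z equals x→z, the two properties stated above.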
In the third step, human motion data is reconstructed from the low-dimensional coordinates (e.g., M_y) obtained in the second step.
In one embodiment, for each joint group, the length coordinate is first projected back from the standard interval to Euclidean space according to the current skeleton size; IK techniques are then used to solve the local motion pose of the joint group; finally, the 3-dimensional rotation coordinate is applied back to each joint group to obtain the final motion data.
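As a sketch of the reconstruction step for a single joint group, the code below (the name `reconstruct_end` and the chain summary by root, rest direction, and chain length are our assumptions) un-normalizes the length coordinate and applies the axis-angle rotation via Rodrigues' formula; a full implementation would additionally run an IK solver for the intermediate joints:

```python
import numpy as np

def reconstruct_end(root, s, r, rest_dir, chain_len):
    """Invert the low-dim representation for one joint group:
    un-normalize the length with the current skeleton's chain length,
    rotate the rest direction by the axis-angle coordinate r
    (Rodrigues' formula), and place the chain's end effector."""
    length = s * chain_len
    angle = np.linalg.norm(r)
    if angle < 1e-9:
        d = rest_dir
    else:
        k = r / angle                  # unit rotation axis
        d = (rest_dir * np.cos(angle)
             + np.cross(k, rest_dir) * np.sin(angle)
             + k * np.dot(k, rest_dir) * (1.0 - np.cos(angle)))
    return root + length * d
```

Because the stored length coordinate is normalized, the same (s, r) pair reconstructs correctly on a target skeleton of any size simply by supplying that skeleton's own chain length.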
According to one embodiment of the present invention, a motion style retargeting system is also provided, comprising a data processing module and a style transformation module.
The data processing module represents the motion of each joint group of the human skeleton with low-dimensional coordinates that are independent of individual features, the low-dimensional coordinates comprising a one-dimensional length coordinate and a three-dimensional rotation coordinate.
The style transformation module processes the low-dimensional coordinates obtained from the data processing module according to the spatial scaling and translation parameters of a motion style, so as to transform the style of the motion; it also transforms the interframe interval of the original motion data according to the time scaling parameter.
In addition, the data processing module also reconstructs human motion data from the low-dimensional coordinates obtained from the style transformation module.
To verify the effectiveness of the motion style retargeting method of the present invention, the inventors learned the relative spatio-temporal transformation parameters of two different styles from collected stylized data samples (whose motion content is "walking"), and then applied these parameters to the motion content "playing football", synthesizing human motion data in two different styles (e.g., "crippled" and "baby"), as shown in Fig. 5. As can be seen from Fig. 5, with the motion style retargeting method of the present invention, style information is reused between different motion contents (i.e., the relative spatio-temporal transformation parameters learned from one motion content transform the style of another motion content), and the synthesized stylized motion data is natural and lifelike. The inventors also surveyed a number of people on the content of Fig. 5 and found that it matched the visual perception of most participants (more than 86%).
In addition, the inventors tested the method on a PC equipped with a 2.83 GHz Core 2 Quad processor and 4 GB of memory. The experimental results show that the motion style retargeting method of the present invention has a small computational cost: stylized human motion for a single virtual character is generated at about 240 frames per second, far faster than required for real-time applications. The method can therefore process the input motion data in streaming form (i.e., frame by frame, without delay), which makes many interactive applications possible.
It should be noted and understood that various modifications and improvements can be made to the invention described in detail above without departing from the spirit and scope of the invention as claimed in the appended claims. Accordingly, the scope of the claimed technical solution is not limited by any particular exemplary teaching given herein.

Claims (13)

1. A motion style retargeting method, comprising:
step 1) representing the motion of each joint group of a human skeleton with low-dimensional coordinates, wherein the low-dimensional coordinates comprise a one-dimensional length coordinate and a three-dimensional rotation coordinate;
step 2) processing the low-dimensional coordinates obtained in step 1) according to the spatial scaling and translation parameters of a motion style, so as to transform the style of the motion;
step 3) reconstructing human motion data from the low-dimensional coordinates obtained in step 2).
2. The method according to claim 1, wherein step 2) further comprises:
transforming the interframe interval according to a time scaling parameter.
3. The method according to claim 1 or 2, wherein the human skeleton comprises 6 joint groups.
4. The method according to claim 1 or 2, wherein step 1) comprises:
step 11) approximating the motion of each joint group of the human skeleton with an IK chain, obtaining the one-dimensional length coordinate and the three-dimensional rotation coordinate, wherein the IK chain is an inverse kinematics chain;
step 12) normalizing the one-dimensional length coordinate according to the skeleton size.
5. The method according to claim 4, wherein step 11) comprises:
step 111) representing, with the one-dimensional length coordinate, the stretching of the joint group to a target length;
step 112) representing, with the three-dimensional rotation coordinate, the rotation of the joint group to a target orientation.
6. The method according to claim 4, wherein normalizing the one-dimensional length coordinate according to the skeleton size in step 12) comprises:
projecting the one-dimensional length coordinate from Euclidean space onto a standard interval according to the skeleton size.
7. The method according to claim 6, wherein step 3) comprises:
step 31) projecting the one-dimensional length coordinate back from the standard interval to Euclidean space;
step 32) solving the local motion pose of each joint group with IK techniques, wherein the IK techniques are inverse kinematics techniques;
step 33) applying the three-dimensional rotation coordinate back to each joint group.
8. The method according to claim 2, wherein step 2) comprises:
step 21) transforming the style of the low-dimensional coordinates according to the following formula:

M_y = F_{x→y}(M_x) = (a_y / a_x)(M_x - M̄_x O) + (M̄_x + b_y - b_x) O

where M̄_x is the temporal mean of M_x over its T frames, T is the time span of the motion, a_x and b_x denote the spatial scaling parameter and translation parameter of style x respectively, a_y and b_y denote the spatial scaling parameter and translation parameter of style y respectively, and the matrix O is an alignment matrix;
step 22) transforming the interframe interval from 1 to τ_y/τ_x, where τ_x and τ_y are the time scaling parameters of style x and style y respectively.
9. The method according to claim 1 or 2, wherein before step 2) the method further comprises:
obtaining the spatial scaling and translation parameters of the motion style.
10. The method according to claim 9, wherein before step 2) the method further comprises:
obtaining the time scaling parameter of the motion style.
11. The method according to claim 9, wherein obtaining the spatial scaling and translation parameters of the motion style comprises:
learning them from stylized motion data samples with the following formula:

m_kj(t) = a_k c_j(t) + b_kj + e_kj(t)

where m_kj(t) denotes the stylized motion data of the j-th joint group in the k-th motion style at frame t, c_j(t) denotes the motion content of the j-th joint group at frame t, a_k denotes the spatial scaling parameter of the k-th motion style, b_kj denotes the translation parameter of the j-th joint group in the k-th motion style, and e_kj(t) denotes a noise term.
12. The method according to claim 10, wherein obtaining the time scaling parameter of the motion style comprises:
learning it from stylized motion data samples with the following formula:

τ_k = T_k / T_normal

where T_k denotes the timing information of the k-th style and T_normal denotes the timing information of the normal style.
13. A motion style retargeting system, comprising:
a data processing module for representing the motion of each joint group of a human skeleton with low-dimensional coordinates, wherein the low-dimensional coordinates comprise a one-dimensional length coordinate and a three-dimensional rotation coordinate; and
a style transformation module for processing the low-dimensional coordinates obtained from the data processing module according to the spatial scaling and translation parameters of a motion style, so as to transform the style of the motion;
wherein the data processing module is further configured to reconstruct human motion data from the low-dimensional coordinates obtained from the style transformation module.
CN201310222692.XA 2013-06-06 2013-06-06 Motion style retargeting method and system Active CN103279971B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310222692.XA CN103279971B (en) 2013-06-06 2013-06-06 Motion style retargeting method and system


Publications (2)

Publication Number Publication Date
CN103279971A 2013-09-04
CN103279971B 2016-06-01

Family

ID=49062478

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310222692.XA Active CN103279971B (en) 2013-06-06 2013-06-06 Motion style retargeting method and system

Country Status (1)

Country Link
CN (1) CN103279971B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679752A (en) * 2013-12-03 2014-03-26 中国科学院计算技术研究所 Method and system for redirecting movement styles of virtual human animation
CN105045373B (en) * 2015-03-26 2018-01-09 济南大学 A kind of three-dimension gesture exchange method of user oriented mental model expression
CN105608727B (en) * 2016-03-01 2018-08-10 中国科学院计算技术研究所 A kind of offshore of data-driven is surged animation synthesizing method and system
CN108961428B (en) * 2018-05-23 2023-05-26 杭州易现先进科技有限公司 Style migration method, medium, device and computing equipment for three-dimensional actions
CN109238302A (en) * 2018-09-26 2019-01-18 天津理工大学 A kind of human body three-dimensional motion capture system based on inertia sensing
CN111127588B (en) * 2019-12-26 2020-10-09 中国人民解放军海军航空大学青岛校区 DirectX-based large data volume parameter curve playback method
CN116012497B (en) * 2023-03-29 2023-05-30 腾讯科技(深圳)有限公司 Animation redirection method, device, equipment and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008015713A (en) * 2006-07-04 2008-01-24 Kyushu Institute Of Technology Motion deformation system and method for it
CN101655990A (en) * 2009-06-25 2010-02-24 浙江大学 Method for synthesizing three-dimensional human body movement based on non-linearity manifold study


Also Published As

Publication number Publication date
CN103279971A (en) 2013-09-04

Similar Documents

Publication Publication Date Title
CN103279971B (en) Motion style retargeting method and system
Sutil Motion and representation: The language of human movement
De Amicis et al. Augmented Reality for virtual user manual
CN104376309B (en) A kind of gesture motion basic-element model structural method based on gesture identification
Stefanidi et al. TooltY: An approach for the combination of motion capture and 3D reconstruction to present tool usage in 3D environments
Nainggolan et al. Anatomy learning system on human skeleton using Leap Motion Controller
Feng et al. Fast, automatic character animation pipelines
Huang et al. Expressive body animation pipeline for virtual agent
Hu et al. Research on the application of virtual reality technology in 3D animation creation
Calvert Approaches to the representation of human movement: notation, animation and motion capture
CN110853131A (en) Virtual video data generation method for behavior recognition
Uzunova et al. Virtual reality system for motion capture analysis and visualization for folk dance training
Rose III Verbs and adverbs: Multidimensional motion interpolation using radial basis functions
Barron-Estrada et al. A natural user interface implementation for an interactive learning environment
Weiss Cohen et al. Generating 3D cad art from human gestures using kinect depth sensor
Calvert Animating dance
Sun et al. Web3D-based online military boxing learning system
Tasoren et al. NOVAction23: Addressing the data diversity gap by uniquely generated synthetic sequences for real-world human action recognition
Friedrich Animation in relational information visualization
KR101526049B1 (en) Virtual ecology park visualization system
Zhao et al. Research on the agricultural skills training based on the motion-sensing technology of the leap motion
Zhu et al. Follow the smoke: Immersive display of motion data with synthesized smoke
Jie Research on motion model for technique movements of competitive swimming in virtual interactive environment
CN103679752A (en) Method and system for redirecting movement styles of virtual human animation
Hui Visualization system of martial arts training action based on artificial intelligence algorithm

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant