CN1996395A - Automatic generation method for 3D human body animation based on moving script - Google Patents

Automatic generation method for 3D human body animation based on moving script

Info

Publication number
CN1996395A
Authority
CN
China
Prior art keywords
motion
path
point
human body
behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2006100533961A
Other languages
Chinese (zh)
Other versions
CN100428281C (en)
Inventor
庄越挺
肖俊
吴飞
吴伊自
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CNB2006100533961A priority Critical patent/CN100428281C/en
Publication of CN1996395A publication Critical patent/CN1996395A/en
Application granted granted Critical
Publication of CN100428281C publication Critical patent/CN100428281C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a method for generating three-dimensional human body animation from a motion script. Based on the scene information and preset motion preference parameters, the start and end points of a three-dimensional character's motion are chosen in the virtual scene; a motion path between them is generated and a corresponding character behavior is assigned to each path segment. The path and behavior selection information is converted into a motion script that is encoded and stored in the Extensible Markup Language (XML) format; matching human motion capture data are then retrieved and assembled, and a motion stitching algorithm produces the final three-dimensional human body animation sequence. With this method and system, three-dimensional human animation sequences can be generated automatically, flexibly, intelligently and efficiently, greatly improving the efficiency of three-dimensional human body animation production.

Description

Automatic generation method for 3D human body animation based on moving script
Technical field
The present invention relates to the technical field of computer three-dimensional animation, and in particular to an automatic generation method for 3D human body animation based on a motion script.
Background technology
3D human body animation generation has long received wide attention and has developed rapidly over the past decade. With the spread of motion capture systems, increasingly realistic human motion capture data and commercial motion databases have become available and are widely used in fields such as motion simulation, video games and film special effects. Research and system development on generating 3D human body animation from motion capture data have therefore made progress in recent years, but as practical systems they still have shortcomings.
In 3D human body animation production, the software systems in common use include Maya, 3ds Max, Kaydara MotionBuilder and SoftImage. When creating 3D human body animation in a virtual scene with these tools, the animator must splice human motion capture data by hand, or even author every frame of human motion data manually, and must then adjust the resulting 3D human motion sequence for each scene before combining the 3D character, the motion data sequence and the virtual scene into the final 3D human body animation sequence. This process demands considerable experience from the animator, and the animation production workflow is tedious and inefficient.
Several methods have also been proposed in academia for 3D human body animation generation. The ACM SIGGRAPH '02 Conference Proceedings (pp. 473-482) describe a motion-graph-based technique for 3D human motion generation: given a human motion capture database and a motion graph model, the user can interactively generate the final human motion sequence. However, this method does not take the information and constraints of the virtual scene into account, so it cannot be used directly to generate 3D human body animation within a virtual scene.
ACM Transactions on Graphics (2003, 2:182-203) published a method for generating 3D human body animation sequences in a virtual scene: a probabilistic model generates, on the scene surface, a footprint map that the 3D character can use, and human motion capture data are used to produce the final 3D human body animation sequence. However, this method can only handle planar terrain models, and the user cannot edit the generated 3D human body animation sequence.
Summary of the invention
To overcome the above shortcomings and defects of the prior art, the present invention provides an intelligent, efficient and flexible 3D human body animation generation method that produces 3D human body animation sequences in several modes (automatic, semi-automatic and interactive) and improves 3D human body animation production efficiency.
The automatic generation method for 3D human body animation based on a motion script comprises the following steps:
(1) first, a motion footprint map generation technique is used to generate the motion roadmap that the digital character can use in the virtual scene;
(2) based on the motion footprint map and a user-defined motion preference table, the system automatically plans the digital character's motion path and selects its behaviors;
(3) starting from the automatic path planning and behavior selection results provided by the system, the user modifies and edits them through an interactive interface, or interactively creates the digital character's final motion path;
(4) the path planning and behavior selection results obtained in steps (2) and (3) are converted into motion script data in XML format, which can be modified and edited;
(5) according to requirements such as the motion types and physical constraints in the motion script, the required human motion data segments are retrieved from an existing motion capture database;
(6) based on the retrieved human motion data segments, and under the guidance of a motion state machine, a motion stitching technique is used to generate the final human animation sequence.
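Purely as an illustration (not part of the patent), the planning result that steps (4)-(6) pass between them could be held in a small structure such as the following; all names and types are assumptions introduced here:

from dataclasses import dataclass
from typing import List, Tuple

Point3 = Tuple[float, float, float]      # one path point (X, Y, Z)

@dataclass
class MotionUnit:
    behaviour: str                       # e.g. "walk", "run", "stepover"
    path: List[Point3]                   # ordered path points for this unit

@dataclass
class MotionScript:
    units: List[MotionUnit]              # serialised as <Motion> elements in step (4)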
The motion path map generation method connects the adjacent triangular facets of the virtual scene whose normal vectors are within a threshold T into continuous motion planes, uses the average human step length parameter L from the motion capture database to randomly sample human footprint points on the continuous motion planes, and connects path points whose mutual distance is at most L to one another to form the path map. In addition, where sampling degenerates in narrow regions of the scene, it is corrected with the probability model P(v|V) = (1/(i_v + 1)) / Σ_{u∈V} (1/(i_u + 1)).
The automatic digital character path planning and behavior selection method mainly comprises the following steps:
(1) a motion preference table is defined for the digital character, assigning a motion preference coefficient to each motion type; the system also defines a default motion preference coefficient table for the basic motion types, as follows:
Motion type      Preference coefficient
Walk             0.1
Run              0.2
Stride down      0.3
Jump off         0.4
Step up          0.5
Jump onto        0.6
Cross over       0.7
Jump             0.9
Leap             0.8
Stand            0.0
(2) for each edge in the motion roadmap, the motion cost of the edge is computed with the formula c = α·w_d + β·w_b, and the behavior with the minimum motion cost is selected for that edge, where w_d is the length of the edge in the path map, w_b is the preference coefficient, and α and β are normalization coefficients;
(3) once a behavior has been selected and the motion cost c computed for every edge in the path map as described above, the motion roadmap becomes a weighted motion planning graph. The classical Dijkstra algorithm is then used to search the whole motion planning graph to obtain a motion path between the start and end points, with each segment of the path (an edge between path points) assigned its corresponding behavior.
The XML motion script generation method merges, after the motion path and behavior selection results have been obtained, the adjacent edges of the motion path that share the same behavior, and generates a motion script represented in XML, as follows:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE MotionSequences SYSTEM "ms.dtd">
<MotionSequences>
  <Motion type="walk">
    <Path>
      <Point>X1,Y1,Z1</Point>
      <Point>X2,Y2,Z2</Point>
      ......
      <Point>Xn,Yn,Zn</Point>
    </Path>
  </Motion>
  ......
  <Motion type="stepover">
    <Path>
      <Point>XX1,YY1,ZZ1</Point>
      <Point>XX2,YY2,ZZ2</Point>
      <Point>XX3,YY3,ZZ3</Point>
      <Point>XX3,YY3,ZZ3</Point>
    </Path>
  </Motion>
</MotionSequences>
The corresponding DTD file is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<!ELEMENT MotionSequences (Motion+)>
<!ELEMENT Motion (Path)>
<!ATTLIST Motion type (walk|run|leapover|stepup|stepdown|stepover|jumpup|jumpdown|jumpover) #REQUIRED>
<!ELEMENT Path (Point, Point+)>
<!ELEMENT Point (#PCDATA)>
The motion-script-based motion data retrieval method comprises the following steps: (1) each motion unit in the motion script file (the part between the <Motion> and </Motion> tags) is fed to the motion data retrieval engine as a query sample; (2) the motion type information is obtained by extracting the type attribute of the <Motion> tag, which determines the motion data retrieval subset; (3) the physical constraints and features that the motion must satisfy are extracted from the motion path information (the part between the <Path> and </Path> tags), and the human motion data segment that best satisfies these requirements is matched from the motion data retrieval subset.
The motion stitching method based on the motion state machine comprises the following steps:
(1) a motion state machine is used to guide how motion data of different types are connected. When motion segments of different types are joined, some can transition directly while others cannot and require motion data of an intermediate transition type to be inserted before they can be connected; the present invention uses a motion state machine to guide these transitions between different types of motion data;
(2) the transition time used when stitching two adjacent motion segments is computed with the formula
T_trans = [(t_e^{m1} - t_s^{m1}) + (t_e^{m2} - t_s^{m2})] / 2
where t_s^{m1}, t_e^{m1}, t_s^{m2} and t_e^{m2} are the start and end times of motion segments m1 and m2 respectively;
(3) within the transition time, the monotonically decreasing blending function α = 0.5·cos(β·π) + 0.5 is used to blend (i.e. take a weighted average of) the two motion segments; the parameter β varies linearly from 0 to 1, yielding the corresponding value of the weighting parameter α.
Description of drawings
Fig. 1 shows the generation of the continuous motion plane;
Fig. 2 shows the generated footprint maps, where (a) is a footprint map with sample-point degeneration (too few sample points in the narrow region) and (b) is the footprint map finally obtained;
Fig. 3 shows the path planning and behavior selection results in a scene; the yellow beads are motion path points and the colored lines between them represent different behaviors;
Fig. 4 shows interactive motion path editing, where (a) is the original motion path generated by the system, with the black dot marking the new path point specified by the user, and (b) is the new motion path, passing through the user-specified path point, that the system generates automatically;
Fig. 5 shows interactive motion path generation, where (a) is a group of path points specified by the user in order and (b) is the motion path generated automatically by the system;
Fig. 6 shows the architecture of the script-based motion data retrieval engine;
Fig. 7 shows the motion state machine;
Fig. 8 is a schematic diagram of the system workflow;
Fig. 9 shows an example of generating a 3D human body animation sequence in a virtual scene, where (a) is the 3D virtual scene input to the system together with the user-specified start and end points of the human motion, (b) is the result of the system's path planning and behavior selection, with paths of different colors representing different behaviors, and (c) is the 3D human body animation sequence generated by the system in the virtual scene.
Embodiment
The concrete technical scheme and implementation steps of the automatic generation method for 3D human body animation based on a motion script according to the present invention are as follows:
1. Motion roadmap generation
The virtual scene is composed of small triangular facets. So that the virtual character can move in the scene, the triangular facets whose normal vectors are within a threshold T are first labeled as available facets, and adjacent available facets are connected to form continuous motion planes (Fig. 1). Triangular facets whose normal vectors exceed the threshold T have a larger tilt, making it difficult for the virtual character to move naturally on them, so they are treated as unavailable motion planes.
After the set of continuous motion planes in the scene has been obtained, a random sampling method is used to generate footprint points on the continuous motion planes. The average human step length parameter L from the motion capture database (the mean distance covered by one step of human motion) controls the sampling density during random footprint sampling, avoiding oversampling or overly sparse sampling. Path points whose mutual distance is at most L are then connected to one another to form the path map.
Practical use shows that sample points can degenerate (become too sparse) in narrow regions of the scene; in that case the probability model of formula (1) is used to adjust the sampling density and obtain the final path map (Fig. 2).
P(v|V) = (1/(i_v + 1)) / Σ_{u∈V} (1/(i_u + 1))    (1)
In formula (1), V is the set of footprint sample points, v is a footprint sample point, and i_v is the number of edges connected to footprint sample point v.
In the resulting footprint map, the virtual character can move along the sample points and the connecting edges between them.
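The correction of formula (1) can be sketched in a few lines. The dictionary-based encoding below is an assumption for illustration, not the patent's implementation: footprint points with few incident edges receive a proportionally higher probability of being chosen for densification.

import random

def resampling_probabilities(degree):
    """degree: {footprint point id: number of incident edges i_v}.
    Returns P(v|V) = (1/(i_v+1)) / sum over u of 1/(i_u+1), i.e. formula (1)."""
    weights = {v: 1.0 / (d + 1) for v, d in degree.items()}
    total = sum(weights.values())
    return {v: w / total for v, w in weights.items()}

def pick_point_to_densify(degree, rng=random):
    """Draw one footprint point according to P(v|V); new samples would then be
    placed within the step length L of it to repair the sparse region."""
    probs = resampling_probabilities(degree)
    points = list(probs)
    return rng.choices(points, weights=[probs[v] for v in points], k=1)[0]

# A corridor point with a single incident edge is chosen three times as often
# as an open-area point with five: {'corridor': 0.75, 'open_area': 0.25}
print(resampling_probabilities({"corridor": 1, "open_area": 5}))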
2. Automatic motion path planning and behavior selection
After the motion footprint map has been generated, a behavior (such as walk, run or jump) is selected for every edge so that the virtual character can move along it. In the method and system of the present invention, behavior selection is influenced by three factors:
(1) the physical constraints of human motion: for a specific physical obstacle, a corresponding behavior must be selected. For example, in this method, when the height of an obstacle is greater than a threshold H or its width is greater than a threshold W, the "jump" behavior should be adopted; when the obstacle's height or width is small, the "leap" behavior is adopted, and so on;
(2) the user's motion preferences: different users may select different behaviors for the same motion path. This method specifies the user's motion preferences with a configurable motion preference table, so that the system can automatically select behaviors that satisfy the user's requirements. The system's default motion preference table is as follows:
Motion type      Preference coefficient
Walk             0.1
Run              0.2
Stride down      0.3
Jump off         0.4
Step up          0.5
Jump onto        0.6
Cross over       0.7
Jump             0.9
Leap             0.8
Stand            0.0
(3) the path: for the same motion between two points, choosing different motion paths leads to different routes and therefore to different motion costs.
For each edge in the motion roadmap, the system combines the three principles above and selects a behavior according to formula (2), with the goal of minimizing the motion cost c:
c = α·w_d + β·w_b    (2)
where w_d is the length of the edge in the path map, w_b is the preference coefficient, and α and β are normalization coefficients.
Once a behavior has been selected and the motion cost c computed for every edge in the path map as described above, the motion roadmap becomes a weighted motion planning graph. The classical Dijkstra algorithm is then used to search the whole motion planning graph to obtain a motion path between the start and end points, with each segment of the path (an edge between path points) assigned its corresponding behavior (Fig. 3).
It can be seen that, if the user wants different behavior animation sequences within the same virtual scene, this can be achieved simply by reconfiguring the motion preference table.
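A compact sketch of this planning step, under the assumption that each roadmap edge already stores its length and the behaviors physically feasible on it; the graph encoding and function names are illustrative, not the patent's code:

import heapq

PREFERENCE = {"walk": 0.1, "run": 0.2, "stride down": 0.3, "jump off": 0.4,
              "step up": 0.5, "jump onto": 0.6, "cross over": 0.7,
              "leap": 0.8, "jump": 0.9, "stand": 0.0}

def edge_cost(length, feasible_behaviours, alpha=1.0, beta=1.0):
    """Formula (2): pick the feasible behaviour minimising c = alpha*w_d + beta*w_b."""
    best = min(feasible_behaviours, key=lambda b: alpha * length + beta * PREFERENCE[b])
    return best, alpha * length + beta * PREFERENCE[best]

def dijkstra(graph, start, goal):
    """graph: {node: [(neighbour, cost, behaviour), ...]} built with edge_cost().
    Returns [(node, behaviour), ...] along the cheapest start-to-goal path."""
    dist, prev, heap = {start: 0.0}, {}, [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, c, b in graph.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, (u, b)
                heapq.heappush(heap, (nd, v))
    if goal not in prev:
        return []                        # unreachable, or goal == start
    path, node = [], goal
    while node != start:
        u, b = prev[node]
        path.append((node, b))
        node = u
    return list(reversed(path))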
3. Interactive editing and generation of motion paths and behavior selections
After the automatic motion path planning results have been obtained, the system can interactively edit the existing motion path and its behavior selection results, or interactively generate new ones.
(1) Interactive motion path and behavior selection editing
The user drags a path point P_i on the existing motion path to a new position with the mouse; following the path planning principles introduced above, the system regenerates new routes from path point P_{i-1} to P_i and from P_i to path point P_{i+1} (Fig. 4). The user can also interactively replace the behavior selections on the path in the system's visualization interface.
(2) Interactive motion path generation
The user clicks a group of path points in the scene in order with the mouse; following the path planning principles introduced above, the system generates a path that passes through the user-specified path points in order, and the user either specifies the behaviors on this path or lets the system select them automatically (Fig. 5).
4. Motion script generation and editing
After the motion path and behavior selection results have been obtained, adjacent edges of the motion path that share the same behavior are merged and a motion script represented in XML is generated, as follows:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE MotionSequences SYSTEM "ms.dtd">
<MotionSequences>
  <Motion type="walk">
    <Path>
      <Point>X1,Y1,Z1</Point>
      <Point>X2,Y2,Z2</Point>
      <Point>Xn,Yn,Zn</Point>
    </Path>
  </Motion>
  ......
  <Motion type="stepover">
    <Path>
      <Point>XX1,YY1,ZZ1</Point>
      <Point>XX2,YY2,ZZ2</Point>
      <Point>XX3,YY3,ZZ3</Point>
      <Point>XX3,YY3,ZZ3</Point>
    </Path>
  </Motion>
</MotionSequences>
The corresponding DTD file is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<!ELEMENT MotionSequences (Motion+)>
<!ELEMENT Motion (Path)>
<!ATTLIST Motion type (walk|run|leapover|stepup|stepdown|stepover|jumpup|jumpdown|jumpover) #REQUIRED>
<!ELEMENT Path (Point, Point+)>
<!ELEMENT Point (#PCDATA)>
In the system, the motion script file is in fact a textual representation of the 3D human motion path and behavior selection results, and it corresponds strictly to the visualized 3D human motion path and behavior selection results in the system. The system therefore offers the user two ways of editing the motion script, as follows:
(1) the user edits the motion path and behavior selection results through the visualization interface provided by the system, for example with the interactive motion path and behavior selection editing introduced above, and the edits are immediately reflected in the generated motion script;
(2) the user edits the motion script XML file directly, and the edits are immediately reflected in the system's visualization interface and used to generate the new motion path and behavior selection results.
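As an illustration of how a planning result could be serialised into the script format above (a sketch with assumed helper names, not the system's own serialiser), consecutive path segments that share a behavior are merged into one <Motion> element:

import xml.etree.ElementTree as ET

def merge_segments(segments):
    """segments: [(behaviour, point), ...] along the planned path, in order.
    Consecutive entries with the same behaviour are merged into one motion unit."""
    units = []
    for behaviour, point in segments:
        if units and units[-1][0] == behaviour:
            units[-1][1].append(point)
        else:
            units.append((behaviour, [point]))
    return units

def to_motion_script(segments):
    """Serialise the merged units as the <MotionSequences> format shown above."""
    root = ET.Element("MotionSequences")
    for behaviour, points in merge_segments(segments):
        motion = ET.SubElement(root, "Motion", type=behaviour)
        path = ET.SubElement(motion, "Path")
        for x, y, z in points:
            ET.SubElement(path, "Point").text = f"{x},{y},{z}"
    return ET.tostring(root, encoding="unicode")

print(to_motion_script([("walk", (0, 0, 0)), ("walk", (1, 0, 0)),
                        ("stepover", (2, 0, 1))]))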
5. Script-based motion data retrieval
Based on the motion script file, the required 3D human motion data are obtained from a previously prepared human motion capture database and used to generate the final human animation sequence.
First, each motion unit in the motion script file (the part between the <Motion> and </Motion> tags) is fed to the motion data retrieval engine as a query sample. The motion type information is obtained by extracting the type attribute of the <Motion> tag, which determines the motion data retrieval subset. The physical constraints and features that the motion must satisfy are extracted from the motion path information (the part between the <Path> and </Path> tags), and the human motion data segment that best satisfies these requirements is matched from the motion data retrieval subset (Fig. 6).
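A minimal sketch of this retrieval step, assuming a clip library keyed by motion type and using the total path length as a stand-in for the physical constraints; the feature set and database layout of the actual engine are richer than this:

import math
import xml.etree.ElementTree as ET

def path_length(points):
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def retrieve_clips(script_xml, clip_db):
    """clip_db: {motion type: [(clip id, distance covered by the clip), ...]}.
    Returns one best-matching clip id per <Motion> unit, in script order."""
    chosen = []
    for motion in ET.fromstring(script_xml).iter("Motion"):
        subset = clip_db.get(motion.get("type"), [])   # step (2): narrow by type
        points = [tuple(float(c) for c in p.text.split(","))
                  for p in motion.iter("Point")]
        need = path_length(points)                     # step (3): a simple path feature
        if subset:
            chosen.append(min(subset, key=lambda clip: abs(clip[1] - need))[0])
    return chosen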
6. Motion stitching based on the motion state machine
After the motion data segments corresponding to the motion path and behavior selection results have been obtained, they are stitched together to produce the final behavior animation sequence.
(1) A motion state machine guides how motion data of different types are connected. When motion segments of different types are joined, some can transition directly while others cannot and require motion data of an intermediate transition type to be inserted before they can be connected. The present invention uses a motion state machine to guide these transitions between different types of motion data (Fig. 7).
(2) The transition time used when stitching two adjacent motion segments is computed with formula (3):
T_trans = [(t_e^{m1} - t_s^{m1}) + (t_e^{m2} - t_s^{m2})] / 2    (3)
where t_s^{m1}, t_e^{m1}, t_s^{m2} and t_e^{m2} are the start and end times of motion segments m1 and m2 respectively.
(3) The monotonically decreasing blending function of formula (4) is used to blend (i.e. take a weighted average of) the two motion segments within the transition time:
α = 0.5·cos(β·π) + 0.5    (4)
The parameter β varies linearly from 0 to 1, yielding the corresponding value of the weighting parameter α.
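Formulas (3) and (4) translate directly into code; the per-frame pose blending below is an illustrative simplification of the stitching algorithm, assuming each frame is a flat list of pose values:

import math

def transition_time(t_s1, t_e1, t_s2, t_e2):
    """Formula (3): mean duration of the two adjacent motion segments."""
    return ((t_e1 - t_s1) + (t_e2 - t_s2)) / 2.0

def blend_weight(beta):
    """Formula (4): weight of the outgoing segment; 1 at beta = 0, 0 at beta = 1."""
    return 0.5 * math.cos(beta * math.pi) + 0.5

def blend_frames(frames_a, frames_b):
    """Weighted average of two equally long lists of pose vectors over the window."""
    n = len(frames_a)
    out = []
    for k in range(n):
        beta = k / (n - 1) if n > 1 else 1.0
        a = blend_weight(beta)
        out.append([a * pa + (1.0 - a) * pb
                    for pa, pb in zip(frames_a[k], frames_b[k])])
    return out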
Fig. 8 shows the workflow structure of the automatic generation method for 3D human body animation based on a motion script according to the present invention. The automatic human body animation generation system comprises a human motion capture database 10, a user visual interaction module 20, an automatic motion planning module 30, an interactive motion planning module 40, a script generation and editing module 50, a script-based motion retrieval module 60, and an animation sequence generation module 70.
The human motion capture database 10 is built with the optical motion capture equipment mentioned in the background section; in this example the Hawk capture system produced by the US company Motion Analysis is used. The ten common human motion types involved in the motion preference table and the motion state machine are captured, and for each motion type multiple samples with different physical parameter values are collected and stored in the human motion capture database. This example uses the SQL Server 2000 database management system (DBMS) to store the human motion capture data.
The user visual interaction module 20 uses the computer graphics system to provide virtual scene loading and browsing functions and accepts user input from the mouse and keyboard. In this example an interactive virtual scene browser is implemented with the OpenGL interface; it accepts the user's mouse and keyboard input, for example to select the motion start and end points in the scene and to specify paths interactively.
The automatic motion planning module 30 comprises a path map generation module 31, a motion planning graph generation module 32, and a path generation and behavior selection module 33. It reads the motion start and end point information entered interactively by the user from the user visual interaction module 20 and automatically generates the human character's motion path and corresponding behaviors in the 3D virtual scene.
The path map generation module 31 constructs the continuous motion surfaces in the 3D virtual scene, randomly samples virtual character motion path points on the continuous motion planes according to the average human step length parameter, and assembles them into the path map.
The motion planning graph generation module 32 selects a behavior and computes the motion cost for every connecting edge of the path map, based on the path map and the user-defined motion preference table, to form the motion planning graph.
The path generation and behavior selection module 33 searches the motion planning graph with the Dijkstra algorithm to obtain the optimal path for the human character's motion and the behavior corresponding to each segment of that path.
The interactive motion planning module 40 comprises an interactive path editing and generation module 41 and an interactive behavior selection module 42. It reads the user's keyboard and mouse input from the user visual interaction module 20 and interactively edits or generates the path and behavior selection results.
The interactive path editing and generation module 41 reads keyboard and mouse input from the user visual interaction module 20 and edits or generates the human character's motion path. For example, the user can locally modify an existing motion path, or the module can regenerate a usable motion path from the user's mouse input.
The interactive behavior selection module 42 reads user instructions from the user visual interaction module 20 and assigns the corresponding behaviors to the generated human character motion path.
The script generation and editing module 50 reads the human character's motion path in the 3D virtual scene and its corresponding behaviors from the automatic motion planning module 30 or the interactive motion planning module 40, and automatically generates the XML motion script file. It can also edit the generated motion script file.
The core of the script-based motion retrieval module 60 is a script-based motion data retrieval engine (see Fig. 6). It reads the motion types and path information from the motion script file, extracts motion data features from them, and retrieves from the existing human motion capture database 10 the human motion data segments needed for animation generation.
The animation sequence generation module 70 reads the human motion data segments required for the final animation sequence from the script-based motion retrieval module 60 and connects them according to the motion state machine and the motion stitching algorithm, thereby generating the final 3D human body animation sequence.
Example
When human body animation is to be generated by the above automatic human body animation generation method based on a motion script, taking the virtual scene shown in Fig. 9(a) as an example, the steps are as follows:
(1) First, the user loads and browses the virtual scene through the visual interaction module 20 and specifies the motion start and end points for the virtual character with the mouse;
(2) Next, the automatic motion planning module 30 obtains the 3D virtual scene data and reads the user-specified motion start and end point information from the visual interaction module 20;
(3) The method in the path map generation module 31 of the automatic motion planning module 30 is invoked to generate the path map G usable by the virtual character in the virtual scene;
(4) The motion planning graph generation module 32 reads the path map G and, based on the user-configured or default motion preference table, generates the motion planning graph M carrying weights and behavior information;
(5) The path generation and behavior selection module 33 reads the motion planning graph M and searches it with the classical Dijkstra algorithm to obtain the final virtual character motion path P together with its behaviors (see Fig. 9(b));
(6) The execution of steps (3)-(5) (the workflow of the automatic motion planning module 30) is fully automatic; during this process the user may also interactively invoke the interactive motion planning module 40 to generate the motion planning result, or interactively edit the motion planning result generated automatically by module 30;
(7) The user invokes the interactive path editing and generation module 41 through a menu or tool button in the system and uses input devices such as the mouse and keyboard to specify a motion path for the virtual character in the virtual scene, or to interactively modify the path automatically generated by module 33;
(8) The user invokes the interactive behavior selection module 42 through a menu or tool button in the system and uses input devices such as the mouse and keyboard to select different behaviors for the virtual character along the motion path, or to modify the behavior selection results automatically generated by module 33;
(9) The script generation and editing module 50 then reads the virtual scene data, obtains the human motion planning results (including the virtual character's motion path and the behavior information corresponding to the path) from the automatic motion planning module 30 or the interactive motion planning module 40, and generates the XML motion script file; the system also provides a dedicated window for editing the generated motion script file, and the edits are reflected in real time through the visual interaction module 20;
(10) The script-based motion retrieval module 60 then obtains the generated motion script file, extracts retrieval features from the motion types and path information contained in the motion script, and retrieves the required 3D human motion data from the human motion capture database 10;
(11) Finally, the animation sequence generation module 70 obtains the ordered group of 3D human motion data from the script-based motion retrieval module 60; guided by the motion state machine, it connects them with the motion stitching algorithm to form the final 3D human body animation sequence, which is played through the visual interaction module 20 (see Fig. 9(c)).

Claims (6)

1. An automatic generation method for 3D human body animation based on a motion script, characterized by comprising the following steps:
(1) first, a motion footprint map generation technique is used to generate the motion roadmap that the digital character can use in the virtual scene;
(2) based on the motion footprint map and a user-defined motion preference table, the system automatically plans the digital character's motion path and selects its behaviors;
(3) starting from the automatic path planning and behavior selection results provided by the system, the user modifies and edits them through an interactive interface, or interactively creates the digital character's final motion path;
(4) the path planning and behavior selection results obtained in steps (2) and (3) are converted into motion script data in XML format, which can be modified and edited;
(5) according to requirements such as the motion types and physical constraints in the motion script, the required human motion data segments are retrieved from an existing motion capture database;
(6) based on the retrieved human motion data segments, and under the guidance of a motion state machine, a motion stitching technique is used to generate the final human animation sequence.
2. The automatic generation method for 3D human body animation based on a motion script according to claim 1, characterized in that the motion path map generation method connects the adjacent triangular facets of the virtual scene whose normal vectors are within a threshold T into continuous motion planes, uses the average human step length parameter L from the motion capture database to randomly sample human footprint points on the continuous motion planes, and connects path points whose mutual distance is at most L to one another to form the path map; in addition, where sampling degenerates in narrow regions of the scene, it is corrected with the following probability model:
P(v|V) = (1/(i_v + 1)) / Σ_{u∈V} (1/(i_u + 1)).
3. The automatic generation method for 3D human body animation based on a motion script according to claim 1, characterized in that the automatic digital character path planning and behavior selection method mainly comprises the following steps:
(1) a motion preference table is defined for the digital character, assigning a motion preference coefficient to each motion type, and the system defines a default motion preference coefficient table for the basic motion types, as follows:
Motion type      Preference coefficient
Walk             0.1
Run              0.2
Stride down      0.3
Jump off         0.4
Step up          0.5
Jump onto        0.6
Cross over       0.7
Jump             0.9
Leap             0.8
Stand            0.0
(2) for each edge in the motion roadmap, the motion cost of the edge is computed with the formula c = α·w_d + β·w_b and the behavior with the minimum motion cost is selected for that edge, where w_d is the length of the edge in the path map, w_b is the preference coefficient, and α and β are normalization coefficients;
(3) once a behavior has been selected and the motion cost c computed for every edge in the path map as described above, the motion roadmap becomes a weighted motion planning graph; the classical Dijkstra algorithm is then used to search the whole motion planning graph to obtain a motion path between the start and end points, with each segment of the path (an edge between path points) assigned its corresponding behavior.
4. The automatic generation method for 3D human body animation based on a motion script according to claim 1, characterized in that the XML motion script generation method merges, after the motion path and behavior selection results have been obtained, the adjacent edges of the motion path that share the same behavior, and generates the motion script represented in the Extensible Markup Language (XML) format, as follows:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE MotionSequences SYSTEM "ms.dtd">
<MotionSequences>
  <Motion type="walk">
    <Path>
      <Point>X1,Y1,Z1</Point>
      <Point>X2,Y2,Z2</Point>
      ......
      <Point>Xn,Yn,Zn</Point>
    </Path>
  </Motion>
  <Motion type="stepover">
    <Path>
      <Point>XX1,YY1,ZZ1</Point>
      <Point>XX2,YY2,ZZ2</Point>
      <Point>XX3,YY3,ZZ3</Point>
      <Point>XX3,YY3,ZZ3</Point>
    </Path>
  </Motion>
</MotionSequences>
The corresponding Document Type Definition (DTD) file is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<!ELEMENT MotionSequences (Motion+)>
<!ELEMENT Motion (Path)>
<!ATTLIST Motion type (walk|run|leapover|stepup|stepdown|stepover|jumpup|jumpdown|jumpover) #REQUIRED>
<!ELEMENT Path (Point, Point+)>
<!ELEMENT Point (#PCDATA)>
5. The automatic generation method for 3D human body animation based on a motion script according to claim 1, characterized in that the motion-script-based motion data retrieval method comprises the following steps: (1) each motion unit in the motion script file, i.e. the part between the <Motion> and </Motion> tags, is fed to the motion data retrieval engine as a query sample; (2) the motion type information is obtained by extracting the type attribute of the <Motion> tag, determining the motion data retrieval subset; (3) the physical constraints and features that the motion must satisfy are extracted from the motion path information, i.e. the part between the <Path> and </Path> tags, and the human motion data segment that best satisfies these requirements is matched from the motion data retrieval subset.
6. The automatic generation method for 3D human body animation based on a motion script according to claim 1, characterized in that the motion stitching method based on the motion state machine comprises the following steps:
(1) a motion state machine is used to guide how motion data of different types are connected: when motion segments of different types are joined, some can transition directly while others cannot and require motion data of an intermediate transition type to be inserted before they can be connected, and the motion state machine guides these transitions between different types of motion data;
(2) the transition time used when stitching two adjacent motion segments is computed with the formula
T_trans = [(t_e^{m1} - t_s^{m1}) + (t_e^{m2} - t_s^{m2})] / 2
where t_s^{m1}, t_e^{m1}, t_s^{m2} and t_e^{m2} are the start and end times of motion segments m1 and m2 respectively;
(3) within the transition time, the monotonically decreasing blending function α = 0.5·cos(β·π) + 0.5 is used to blend, i.e. take a weighted average of, the two motion segments; the parameter β varies linearly from 0 to 1, yielding the corresponding value of the weighting parameter α.
CNB2006100533961A 2006-09-14 2006-09-14 Automatic generation method for 3D human body animation based on moving script Expired - Fee Related CN100428281C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006100533961A CN100428281C (en) 2006-09-14 2006-09-14 Automatic generation method for 3D human body animation based on moving script

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2006100533961A CN100428281C (en) 2006-09-14 2006-09-14 Automatic generation method for 3D human body animation based on moving script

Publications (2)

Publication Number Publication Date
CN1996395A true CN1996395A (en) 2007-07-11
CN100428281C CN100428281C (en) 2008-10-22

Family

ID=38251470

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006100533961A Expired - Fee Related CN100428281C (en) 2006-09-14 2006-09-14 Automatic generation method for 3D human body animation based on moving script

Country Status (1)

Country Link
CN (1) CN100428281C (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101840586A (en) * 2010-04-02 2010-09-22 中国科学院计算技术研究所 Method and system for planning motion of virtual human
CN101515373B (en) * 2009-03-26 2011-01-19 浙江大学 Sports interactive animation producing method
CN102117179A (en) * 2010-12-31 2011-07-06 杭州乐港科技有限公司 Method for controlling role jump and movement through single key of mouse
CN102136154A (en) * 2010-11-18 2011-07-27 彭浩明 Cartoon manufacture method and device
CN102222425A (en) * 2011-07-15 2011-10-19 西安电子科技大学 Automatic demonstration system and realizing method for 3-dimensioinal (3D) visual emergency plan
CN102289835A (en) * 2011-08-30 2011-12-21 北京瑞信在线系统技术有限公司 Micro-animation effect checking method and device
CN103793934A (en) * 2014-01-21 2014-05-14 北京工业大学 Nonlinear splicing method and device for digital shadow figure animation script file based on Xml
CN116630486A (en) * 2023-07-19 2023-08-22 山东锋士信息技术有限公司 Semi-automatic animation production method based on Unity3D rendering

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106447748B (en) * 2016-09-14 2019-09-24 厦门黑镜科技有限公司 A kind of method and apparatus for generating animation data

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI235962B (en) * 2003-06-06 2005-07-11 Ind Tech Res Inst Method for converting high level motion scripts to computer animations

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101515373B (en) * 2009-03-26 2011-01-19 浙江大学 Sports interactive animation producing method
CN101840586A (en) * 2010-04-02 2010-09-22 中国科学院计算技术研究所 Method and system for planning motion of virtual human
CN102136154A (en) * 2010-11-18 2011-07-27 彭浩明 Cartoon manufacture method and device
CN102136154B (en) * 2010-11-18 2012-12-12 彭浩明 Cartoon manufacture method and device
CN102117179A (en) * 2010-12-31 2011-07-06 杭州乐港科技有限公司 Method for controlling role jump and movement through single key of mouse
CN102222425A (en) * 2011-07-15 2011-10-19 西安电子科技大学 Automatic demonstration system and realizing method for 3-dimensioinal (3D) visual emergency plan
CN102289835A (en) * 2011-08-30 2011-12-21 北京瑞信在线系统技术有限公司 Micro-animation effect checking method and device
CN103793934A (en) * 2014-01-21 2014-05-14 北京工业大学 Nonlinear splicing method and device for digital shadow figure animation script file based on Xml
CN103793934B (en) * 2014-01-21 2016-10-19 北京工业大学 The non-linear joining method of digital figure for shadow-play animation script file based on Xml and device
CN116630486A (en) * 2023-07-19 2023-08-22 山东锋士信息技术有限公司 Semi-automatic animation production method based on Unity3D rendering
CN116630486B (en) * 2023-07-19 2023-11-07 山东锋士信息技术有限公司 Semi-automatic animation production method based on Unity3D rendering

Also Published As

Publication number Publication date
CN100428281C (en) 2008-10-22


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20081022

Termination date: 20120914