CN1949274A - 3-D visualising method for virtual crowd motion - Google Patents

3-D visualising method for virtual crowd motion

Info

Publication number
CN1949274A
CN1949274A CNA200610114106XA CN200610114106A
Authority
CN
China
Prior art keywords
motion
virtual
data
individuality
crowd
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA200610114106XA
Other languages
Chinese (zh)
Other versions
CN100440257C (en)
Inventor
毛天露
束搏
徐文彬
夏时洪
王兆其
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
YANTAI HUITONG NETWORK TECHNOLOGY CO., LTD.
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CNB200610114106XA priority Critical patent/CN100440257C/en
Publication of CN1949274A publication Critical patent/CN1949274A/en
Application granted granted Critical
Publication of CN100440257C publication Critical patent/CN100440257C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a three-dimensional visualization method for virtual crowd motion. It comprises the following steps: analyzing the virtual crowd that will appear, and building a template model library and a materials library; capturing or manually generating the required human motion data and storing it; synthesizing the motion data; performing rendering preprocessing on the template model library and the motion data; inputting simple script data describing the crowd motion; generating virtual individuals; driving their motion to generate a dynamic virtual crowd; and rendering the crowd. The method is efficient, general, and low in modeling cost, and achieves realistic real-time visualization of virtual crowd motion at a scale of 30,000 people on an ordinary PC.

Description

A three-dimensional visualization method for virtual crowd motion
Technical field
The present invention relates to the field of virtual reality, and in particular to a three-dimensional visualization method for virtual crowd motion.
Background technology
Visualization of virtual crowd motion means presenting the motion data of a crowd on screen: after the motion data is assigned to individuals, the macroscopic or microscopic effects the data produces can be understood by observing the individuals' motion on screen. Three-dimensional visualization of large-scale crowd motion can generate realistic virtual crowds in cyberspace and allows crowd behavior to be observed from any angle, in any place, and under any scenario, so it has broad application prospects. For example, in the design of large buildings (such as railway stations and gymnasiums), designers must consider how large crowds behave in emergencies (fire, explosion, stampede, etc.); with three-dimensional visualization of large-scale virtual crowd motion, a building's public safety can be conveniently evaluated at the design stage. Film production also frequently needs realistic large-scale dynamic crowd scenes, such as the spectacular massed paratroopers in the Allied landing scenes of "Band of Brothers". In game production, animators often need to render the group motion of large numbers of game characters in a scene, such as the vast herds of monsters in "World of Warcraft". Fields such as game and film production, architectural design, and public safety assessment therefore all need a three-dimensional visualization method for virtual crowd motion.
A three-dimensional visualization method for large-scale virtual crowd motion must satisfy both realism and real-time requirements. On one hand, rich virtual character models are needed, and each character model must be independently controllable and able to perform specified actions, to satisfy the user's demand for realism; on the other hand, the memory and computation costs of loading, driving, and rendering the models must be kept within limits so that the user obtains visualization results in real time. Ease of use, generality, and modeling cost must also be considered. Owing to these technical difficulties, no general three-dimensional visualization method for large-scale virtual crowd motion exists at present.
Summary of the invention
The object of the present invention is to overcome the inability of the prior art to perform three-dimensional visualization of large-scale virtual crowd motion, and thereby to provide a method that is efficient, general, and low in modeling cost.
To achieve this object, the invention provides a three-dimensional visualization method of virtual crowd motion, comprising the following steps:
Step 10): analyze the virtual crowd that will appear in the system and classify the individuals; for each class, build one three-dimensional human model, based on its shape features, as a template model, obtaining a template model library; and, according to the features of each template model, select a group of derivation parts, define the materials each derivation part may use, and add them to a materials library;
Step 20): capture or manually generate human motion data, and store the motion data in a motion database;
Step 30): synthesize the motion data;
Step 40): perform rendering preprocessing on the template models and the synthesized motion data;
Step 50): input simple script data describing the crowd motion; the information in the script comprises: the type of each individual in the crowd, each individual's current position, and each individual's current motion state;
Step 60): according to the data input in step 50), select template models from the template model library and, combined with the materials library, perform model derivation to generate virtual individuals;
Step 70): for each virtual individual generated in step 60), select motion data from the motion database according to the individual's motion state input in step 50), and compute the change of that motion data over time to drive the individual's motion; traverse all virtual individuals in the crowd to generate a dynamic virtual crowd;
Step 80): render the virtual crowd.
In the above technical scheme, in step 10), the classification of individuals divides the crowd according to age and sex.
In the above technical scheme, in step 10), the derivation parts are selected from the clothes, hair style, shoes, and hats worn by the human body.
In the above technical scheme, step 20) comprises:
Step 21): enumerate the motion states the virtual individuals in the system may exhibit;
Step 22): use motion capture equipment to acquire, or use motion generation and editing software to manually create, human motion data for each motion state enumerated in step 21).
In step 22), the motion capture uses electromagnetic, optical, or acoustic motion capture methods, and the motion generation and editing software includes dedicated software such as 3ds Max, Maya, SOFTIMAGE|XSI, or Blender.
In the above technical scheme, in step 30), the motion data is synthesized by manual adjustment by artists, or by methods such as motion blending, motion stitching, and motion retargeting.
In the above technical scheme, in step 40), the rendering preprocessing of template models and synthesized motion data comprises:
Step 41): for each template model, for each action data executable by that template model, and for each frame of that action data, compute the three-dimensional model corresponding to that frame;
Step 42): perform rendering preprocessing on the three-dimensional models obtained in step 41); the applicable rendering preprocessing methods include image-based rendering preprocessing, point-sampling-based rendering preprocessing, mesh-animation-based preprocessing, and level-of-detail-based preprocessing.
In the above technical scheme, step 60) comprises:
Step 61): for each individual in the virtual crowd, select the corresponding template model from the template model library according to the individual's type information in the data input in step 50);
Step 62): judge whether the current individual appears for the first time; if so, proceed to the next step; otherwise go to step 64);
Step 63): the individual appears for the first time and has no material assigned yet; randomly select a set of materials from the materials library, store the material index in memory, then go to step 70);
Step 64): the individual already has a material assigned; read the material index of its template model directly from memory and render with the corresponding material.
In the above technical scheme, step 70) comprises:
Step 71): for each individual in the crowd, select the corresponding motion data from the motion database according to the individual's type information and current motion state in the data input in step 50);
Step 72): compute the individual's current movement speed and direction from its positions at the current, previous, and next moments in the input data;
Step 73): from the motion vector of each moment of the motion data selected from the motion database, the motion vector of each moment of the individual's simulation data, and the motion data frame index the individual selected at the previous moment, compute the individual's motion data at the current moment; the motion data comprises the individual's current position and the pose of each part of its body;
Step 74): drive the virtual individual generated in step 60) with the current-moment motion data obtained in step 73) and the movement direction obtained in step 72);
Step 75): traverse all individuals in the crowd to generate the dynamic virtual crowd.
In the above technical scheme, the crowd rendering methods adopted in step 80) include image-based rendering, point-sampling-based rendering, mesh-animation-based rendering, and level-of-detail-based rendering. The advantages of the invention are mainly as follows:
1. The system's input is quite simple, consisting mainly of script data such as the states and positions of individuals at successive moments, so the system can easily be embedded in or called by other application systems and has good generality;
2. By using techniques such as skeletal skinning and motion capture, the system can control each individual in the crowd freely and flexibly, satisfying the user's requirement for realistic human motion;
3. By using model derivation, a large number of personalized models can be derived from a few template models, minimizing the modeling cost of the three-dimensional crowd models and saving the memory and computation costs of loading and storing the models;
4. By using motion deformation, realistic motion effects can be produced from a small amount of motion data, saving the memory and computation costs of model driving;
5. By using techniques such as point-sampling-based rendering, real-time three-dimensional rendering of large-scale crowd data is achieved.
Description of drawings
Fig. 1 is a flow chart of the three-dimensional visualization method for virtual crowd motion of the present invention;
Fig. 2 shows the six classes of template models built in one embodiment of the present invention.
Embodiment
The present invention is further illustrated below in conjunction with the drawings and specific embodiments.
As shown in Fig. 1, the three-dimensional visualization method for virtual crowd motion of the present invention comprises the following steps:
I. Preprocessing
The main purpose of preprocessing is to generate the template model library, materials library, motion database, and rendering library needed for real-time processing. It therefore comprises four sub-steps: template model modeling, motion data capture, motion data synthesis, and rendering preprocessing of template models and motion data.
Step 10: template model modeling; build the template model library.
Step 11: analyze the virtual crowd appearing in the application system, and classify its individuals according to some standard. For example, by age and sex the individuals in a crowd can be divided into six classes: elderly men, elderly women, young men, young women, boys, and girls.
Step 12: for the shape features of each class of individual, build one three-dimensional human model as its template model. For example, if in step 11 the virtual crowd was divided into the six classes above, then in this step a three-dimensional human model is built for elderly men to serve as the elderly-man template model; the same is done for each of the other classes, finally yielding template models for all six classes. Fig. 2 shows these six template models.
Step 13: according to the features of each template model, select a group of derivation parts, define the possible materials for each derivation part, and add them to the materials library. Typically the clothes, hair style, shoes, and hats worn by the body are chosen as derivation parts. Taking the elderly-man template model as an example, if the jacket and trousers are designated as derivation parts, the materials library stores the possible materials for the jacket and trousers; these materials are specified manually by the user according to the needs of the application system.
Step 20: capture human motion data, or generate it manually, and store the motion data in the motion database.
Step 21: enumerate the motion states the virtual individuals in the application system may exhibit. For example, in a level-ground emergency evacuation the motion states an individual may exhibit include: walking, running, stopping to look around, and falling to the ground dead.
Step 22: use motion capture equipment, or motion generation and editing software, to obtain human motion data for each motion state enumerated in step 21. Existing motion capture methods, including electromagnetic, optical, and acoustic methods, can be used; the motion capture equipment may be, for example, a VICON system. The motion generation and editing software includes dedicated software such as 3ds Max, Maya, SOFTIMAGE|XSI, or Blender.
Step 30: motion data synthesis. Because different template models have different anthropometric parameters such as height and leg length, driving them all with identical motion data produces artifacts such as foot sliding and limb penetration. The captured motion data therefore needs processing to synthesize, for each template model, a set of motion data matching that model's build. In addition, to save storage space each motion clip needs to be made loopable, i.e. cyclic motion data whose first and last frames transition smoothly. The motion data can be synthesized by manual adjustment by artists, or by techniques such as motion blending, motion stitching, and motion retargeting; any method suffices as long as the generated data is realistic, matches the template model's physical characteristics, and can be played in a loop.
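The loopability requirement above (first and last frames transitioning smoothly) can be sketched as a simple ease toward the first frame. This is an illustrative stand-in for the blending and stitching techniques named above, not the patent's method; `make_loopable` and its parameters are assumed names:

```python
import numpy as np

def make_loopable(clip: np.ndarray, blend_frames: int = 5) -> np.ndarray:
    """Make a motion clip loopable by easing its last `blend_frames`
    frames toward the first frame, so the final frame matches the first
    and playback can cycle without a visible jump.
    `clip` has shape (num_frames, pose_dim)."""
    out = clip.astype(float).copy()
    n = len(clip)
    for k in range(blend_frames):
        w = (k + 1) / blend_frames        # blend weight ramps up to 1
        i = n - blend_frames + k          # index into the clip's tail
        out[i] = (1.0 - w) * clip[i] + w * clip[0]
    return out
```

Only the tail of the clip is modified; the rest of the data passes through unchanged, which keeps the captured motion intact everywhere except near the loop point.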
Step 40: rendering preprocessing of template models and motion data. To increase rendering speed, efficient rendering algorithms usually require the template models and motion data to be preprocessed. For example, a point-sampling-based rendering algorithm needs to point-sample the template model after it has been driven by the motion data and to generate multi-resolution models; an image-based rendering algorithm needs to generate images of the driven model from different viewing angles. The specific steps of the rendering preprocessing are:
Step 41: for each template model, for each action data executable by that model, and for each frame of that action data, compute the three-dimensional model corresponding to that frame;
Step 42: perform rendering preprocessing on the three-dimensional models obtained in step 41.
The rendering preprocessing methods applicable in step 42 include image-based rendering preprocessing, point-sampling-based rendering preprocessing, mesh-animation-based preprocessing, and level-of-detail-based preprocessing.
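Step 41 bakes a posed three-dimensional model for every frame of every action. Assuming the template models are skinned to a skeleton (the skeletal skinning mentioned among the advantages) and taking linear blend skinning as an illustrative scheme — the patent does not specify one — the per-frame baking can be sketched as:

```python
import numpy as np

def bake_frames(rest_verts, weights, frame_bone_mats):
    """For each animation frame, compute the posed mesh by linear blend
    skinning: each vertex is a weighted blend of its bone transforms.
    rest_verts:      (V, 3) rest-pose vertex positions
    weights:         (V, B) per-vertex bone weights, rows sum to 1
    frame_bone_mats: (F, B, 4, 4) per-frame bone transform matrices
    Returns (F, V, 3) baked vertex positions, one mesh per frame."""
    V = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((V, 1))])      # homogeneous coords
    baked = []
    for mats in frame_bone_mats:                         # one frame at a time
        # transform every vertex by every bone: result shape (B, V, 4)
        per_bone = np.einsum('bij,vj->bvi', mats, homo)
        # weighted blend across bones, keep xyz: result shape (V, 3)
        posed = np.einsum('vb,bvi->vi', weights, per_bone)[:, :3]
        baked.append(posed)
    return np.stack(baked)
```

The baked meshes are what the point-sampling or image-based preprocessors of step 42 would then consume, so the per-frame skinning cost is paid once offline rather than at render time.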
II. Input data
Step 50: input simple script data describing the crowd motion. The information in the script mainly comprises: the type of each individual in the crowd, each individual's current position, and each individual's current motion state.
Below is an example fragment of the input data:
Individual number: 015;
Individual type: elderly man;
Time (s): 0.2;
Position (m): 10, 11, 11;
Motion state: walking;
Time (s): 0.3;
Position (m): 10.1, 11, 11;
Motion state: walking;
Time (s): 0.4;
Position (m): 10.2, 11, 11;
Motion state: walking;
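For illustration, a fragment in this format can be read into a structured record along these lines; the parser, field matching, and record layout are assumptions for the sketch, not part of the patent:

```python
def parse_script(text: str) -> dict:
    """Parse a key: value script fragment into one individual's record
    with a list of time/position/state samples. Field labels are
    matched loosely; this parser is illustrative only."""
    record = {"samples": []}
    sample = {}
    for raw in text.splitlines():
        line = raw.strip().rstrip(";")
        if not line or ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip().lower(), value.strip()
        if "number" in key:
            record["id"] = value
        elif "type" in key:
            record["type"] = value
        elif key.startswith("time"):
            sample = {"time": float(value)}   # a Time line opens a new sample
            record["samples"].append(sample)
        elif key.startswith("position"):
            sample["position"] = tuple(float(x) for x in value.split(","))
        elif "state" in key:
            sample["state"] = value
    return record
```

A record in this shape supplies exactly what the real-time steps below consume: the type for template selection, and the per-moment positions and states for motion selection and driving.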
III. Real-time processing
The real-time processing steps comprise model derivation, model driving, and crowd rendering, and realize the three-dimensional visualization of the virtual crowd motion.
Step 60: according to the input data, select template models from the template model library and, combined with the materials library, perform model derivation to generate virtual individuals.
Step 61: for each individual in the crowd, select the corresponding template model from the template model library according to the individual's type information in the input data;
Step 62: judge whether the current individual appears for the first time; if so, proceed to the next step; otherwise go to step 64;
Step 63: the individual appears for the first time and has no material assigned yet; randomly select a set of materials from the materials library, store the chosen material index in memory, then go to step 70;
Step 64: the individual already has a material assigned; read the material index of its template model directly from memory and render with the corresponding material.
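Steps 62 to 64 amount to a per-individual cache of randomly assigned materials: pick once on first appearance, reuse the cached index afterwards. A minimal sketch, with `MaterialAssigner` and its structure being assumed names rather than the patent's implementation:

```python
import random

class MaterialAssigner:
    """Assign each individual a material set on first appearance and
    cache the chosen index in memory for later frames.
    `materials_library` maps a template type to its material sets."""

    def __init__(self, materials_library, seed=None):
        self.library = materials_library
        self.cache = {}                      # individual id -> material index
        self.rng = random.Random(seed)

    def material_for(self, individual_id, template_type):
        if individual_id not in self.cache:  # first appearance: pick randomly
            n = len(self.library[template_type])
            self.cache[individual_id] = self.rng.randrange(n)
        idx = self.cache[individual_id]      # later frames: reuse cached index
        return self.library[template_type][idx]
```

Caching only the index, not a copy of the material data, is what lets many derived individuals share one template model's assets.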
Step 70: drive the models to obtain each individual's final pose at the current moment.
Step 71: for each individual in the crowd, select the corresponding motion data from the motion database according to the individual's type information and current motion state in the input data. Taking the data input in step 50 as an example, for individual number 015, an elderly man, the elderly-man walking motion data is selected from the motion database.
Step 72: compute the individual's current movement speed and direction from the positions at the current, previous, and next moments in the input data. Again taking the data input in step 50 as an example: for individual number 015, an elderly man, the previous moment is 0.2 s, the current moment 0.3 s, and the next moment 0.4 s; the position at 0.2 s is (10, 11, 11), at 0.3 s (10.1, 11, 11), and at 0.4 s (10.2, 11, 11). From this, the individual's current movement speed is computed as 1 m/s and its current movement direction as (1, 0, 0).
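The worked example amounts to a central difference over the previous and next sampled moments. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def speed_and_direction(p_prev, p_next, t_prev, t_next):
    """Current speed and unit direction of one individual, estimated by
    a central difference over the previous and next sampled moments."""
    disp = np.asarray(p_next, dtype=float) - np.asarray(p_prev, dtype=float)
    dist = np.linalg.norm(disp)
    speed = dist / (t_next - t_prev)
    direction = disp / dist                  # unit vector along the motion
    return speed, direction
```

Applied to the example data, positions (10, 11, 11) at 0.2 s and (10.2, 11, 11) at 0.4 s give a speed of 1 m/s and direction (1, 0, 0), matching the values stated above.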
Step 73: from the motion vector of each moment of the motion data selected from the motion database, the motion vector of each moment of the individual's simulation data, and the motion data frame index the individual selected at the previous moment, compute the individual's motion data at the current moment. This motion data comprises the individual's current position and the pose of each part of its body.
The concrete computation is as follows.
Let {F_i | i = 1, ..., I} be the motion vectors of the motion data, where F_i is the motion vector of the i-th frame and I is the total number of frames, and let {S_j} be the simulation data motion vectors at the time points t_j. What must be found is {N_j}, the index of the motion data frame selected at time point t_j. Proceeding by induction with initial value N_0 = 1, the problem becomes: given N_j, find N_{j+1}, where subscript j denotes the previous moment and j+1 the current moment.
A trial-and-increment method is used: starting from N_j, the candidate N_{j+1} is incremented step by step, and the search stops, yielding N_{j+1}, as soon as
    F_{N_{j+1}} - F_{N_j} > S_{j+1} - S_j.
Here the ">" between the motion data vector and the simulation data vector uses the largest-component criterion: the whole vector F_{N_{j+1}} - F_{N_j} counts as greater as soon as any one of its components exceeds the corresponding component of S_{j+1} - S_j. Because the number of motion data frames is finite, the search cannot be incremented indefinitely; when N_{j+1} > I, the search wraps around and restarts from 1, and at the same time the motion vector is offset by one full cycle: F_{N_{j+1}} = F_{N_{j+1}} + F_I - F_1.
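Assuming 0-based frame indexing and NumPy arrays for the motion vectors (the text above numbers frames from 1), the trial-and-increment search can be sketched as follows; `next_frame_index` is an illustrative name, not from the patent:

```python
import numpy as np

def next_frame_index(F, N_j, S_j, S_j1):
    """Trial-and-increment search for the current frame index N_{j+1}.
    F:    (I, D) motion vectors of the motion data, frames 0-indexed here
    N_j:  frame index selected at the previous moment
    S_j, S_j1: simulation motion vectors at the previous / current moment
    The '>' test is the largest-component criterion: stop as soon as any
    component of F[N] - F[N_j] exceeds the matching component of
    S_{j+1} - S_j.  Running past the last frame wraps the search to the
    start and offsets F by one full cycle (F[-1] - F[0])."""
    F = np.asarray(F, dtype=float)
    I = len(F)
    target = np.asarray(S_j1, dtype=float) - np.asarray(S_j, dtype=float)
    base = F[N_j]
    offset = np.zeros_like(base)
    N = N_j
    while True:
        N += 1
        if N >= I:                          # wrapped past the last frame
            N = 0
            offset = offset + F[-1] - F[0]  # carry one full cycle of motion
        if np.any(F[N] + offset - base > target):
            return N
```

For instance, with one-dimensional frame vectors [0], [1], [2], [3], [4] and a target step of 2.5, searching from frame 0 stops at frame 3, the first frame whose accumulated motion exceeds the target.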
Step 74: drive the virtual individual generated in step 60 with the current-moment motion data obtained in step 73 and the movement direction obtained in step 72.
Step 75: traverse all individuals in the crowd to generate the dynamic virtual crowd.
Step 80: render the virtual crowd. Crowd rendering in this step is mature prior art; the rendering method disclosed in Chinese patent application No. 200610089057.9, entitled "A real-time rendering method for virtual humans", or other efficient rendering methods, may be adopted to render the dynamic virtual crowd in real time, including image-based rendering, point-sampling-based rendering, mesh-animation-based rendering, and level-of-detail-based rendering.

Claims (10)

1. A three-dimensional visualization method for virtual crowd motion, comprising the following steps:
Step 10): analyze the virtual crowd that will appear in the system and classify the individuals; for each class, build one three-dimensional human model, based on its shape features, as a template model, obtaining a template model library; and, according to the features of each template model, select a group of derivation parts, define the materials each derivation part may use, and add them to a materials library;
Step 20): capture or manually generate human motion data, and store the motion data in a motion database;
Step 30): synthesize the motion data;
Step 40): perform rendering preprocessing on the template models and the synthesized motion data;
Step 50): input simple script data describing the crowd motion, the information in the script comprising: the type of each individual in the crowd, each individual's current position, and each individual's current motion state;
Step 60): according to the data input in step 50), select template models from the template model library and, combined with the materials library, perform model derivation to generate virtual individuals;
Step 70): for each virtual individual generated in step 60), select motion data from the motion database according to the individual's motion state input in step 50), and compute the change of that motion data over time to drive the individual's motion; traverse all virtual individuals in the crowd to generate a dynamic virtual crowd;
Step 80): render the virtual crowd.
2. The three-dimensional visualization method for virtual crowd motion of claim 1, characterized in that in step 10), the classification of individuals divides the crowd according to age and sex.
3. The three-dimensional visualization method for virtual crowd motion of claim 1, characterized in that in step 10), the derivation parts are selected from the clothes, hair style, shoes, and hats worn by the human body.
4. The three-dimensional visualization method for virtual crowd motion of claim 1, characterized in that step 20) comprises:
Step 21): enumerate the motion states the virtual individuals in the system may exhibit;
Step 22): use motion capture equipment to acquire, or use motion generation and editing software to manually create, human motion data for each motion state enumerated in step 21).
5. The three-dimensional visualization method for virtual crowd motion of claim 4, characterized in that in step 22), the motion capture uses electromagnetic, optical, or acoustic motion capture methods, and the motion generation and editing software comprises dedicated software such as 3ds Max, Maya, SOFTIMAGE|XSI, or Blender.
6. The three-dimensional visualization method for virtual crowd motion of claim 1, characterized in that in step 30), the motion data is synthesized by manual adjustment by artists, or by methods such as motion blending, motion stitching, and motion retargeting.
7. The three-dimensional visualization method for virtual crowd motion of claim 1, characterized in that in step 40), the rendering preprocessing of template models and synthesized motion data comprises:
Step 41): for each template model, for each action data executable by that template model, and for each frame of that action data, compute the three-dimensional model corresponding to that frame;
Step 42): perform rendering preprocessing on the three-dimensional models obtained in step 41), the applicable rendering preprocessing methods comprising image-based rendering preprocessing, point-sampling-based rendering preprocessing, mesh-animation-based preprocessing, and level-of-detail-based preprocessing.
8. The three-dimensional visualization method for virtual crowd motion of claim 1, characterized in that step 60) comprises:
Step 61): for each individual in the virtual crowd, select the corresponding template model from the template model library according to the individual's type information in the data input in step 50);
Step 62): judge whether the current individual appears for the first time; if so, proceed to the next step; otherwise go to step 64);
Step 63): the individual appears for the first time and has no material assigned yet; randomly select a set of materials from the materials library, store the material index in memory, then go to step 70);
Step 64): the individual already has a material assigned; read the material index of its template model directly from memory and render with the corresponding material.
9. The three-dimensional visualization method for virtual crowd motion of claim 1, characterized in that step 70) comprises:
Step 71): for each individual in the crowd, select the corresponding motion data from the motion database according to the individual's type information and current motion state in the data input in step 50);
Step 72): compute the individual's current movement speed and direction from its positions at the current, previous, and next moments in the input data;
Step 73): from the motion vector of each moment of the motion data selected from the motion database, the motion vector of each moment of the individual's simulation data, and the motion data frame index the individual selected at the previous moment, compute the individual's motion data at the current moment, the motion data comprising the individual's current position and the pose of each part of its body;
Step 74): drive the virtual individual generated in step 60) with the current-moment motion data obtained in step 73) and the movement direction obtained in step 72);
Step 75): traverse all individuals in the crowd to generate the dynamic virtual crowd.
10. The three-dimensional visualization method for virtual crowd motion of claim 1, characterized in that the crowd rendering methods adopted in step 80) comprise image-based rendering, point-sampling-based rendering, mesh-animation-based rendering, and level-of-detail-based rendering.
CNB200610114106XA 2006-10-27 2006-10-27 3-D visualising method for virtual crowd motion Expired - Fee Related CN100440257C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB200610114106XA CN100440257C (en) 2006-10-27 2006-10-27 3-D visualising method for virtual crowd motion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB200610114106XA CN100440257C (en) 2006-10-27 2006-10-27 3-D visualising method for virtual crowd motion

Publications (2)

Publication Number Publication Date
CN1949274A true CN1949274A (en) 2007-04-18
CN100440257C CN100440257C (en) 2008-12-03

Family

ID=38018784

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB200610114106XA Expired - Fee Related CN100440257C (en) 2006-10-27 2006-10-27 3-D visualising method for virtual crowd motion

Country Status (1)

Country Link
CN (1) CN100440257C (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1332360C (en) * 2004-10-26 2007-08-15 中国科学院计算技术研究所 Method for establishing three-dimensional motion using computer
US7464010B2 (en) * 2004-12-21 2008-12-09 Electronics And Telecommunications Research Institute User interface design and evaluation system and hand interaction based user interface design and evaluation system
WO2006099589A2 (en) * 2005-03-16 2006-09-21 Lucasfilm Entertainment Company Ltd. Three- dimensional motion capture

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101140663B (en) * 2007-10-16 2010-06-02 中国科学院计算技术研究所 Clothing cartoon computation method
CN101515371B (en) * 2009-03-26 2011-01-19 浙江大学 Human body movement data fragment extracting method
CN101877142B (en) * 2009-11-18 2012-05-30 胡晓峰 Simulation method based on multi-scale hierarchical details
CN102201032A (en) * 2010-03-26 2011-09-28 微软公司 Personalized appareal and accessories inventory and display
CN102157008A (en) * 2011-04-12 2011-08-17 电子科技大学 Large-scale virtual crowd real-time rendering method
CN102157008B (en) * 2011-04-12 2014-08-06 电子科技大学 Large-scale virtual crowd real-time rendering method
CN102768766A (en) * 2012-06-11 2012-11-07 天津大学 Three-dimensional group animation modeling method
CN102800121B (en) * 2012-06-18 2014-08-06 浙江大学 Method for interactively editing virtual individuals in virtual crowd scene
CN102800121A (en) * 2012-06-18 2012-11-28 浙江大学 Method for interactively editing virtual individuals in virtual crowd scene
CN102831631A (en) * 2012-08-23 2012-12-19 上海创图网络科技发展有限公司 Rendering method and rendering device for large-scale three-dimensional animations
CN102831631B (en) * 2012-08-23 2015-03-11 上海创图网络科技发展有限公司 Rendering method and rendering device for large-scale three-dimensional animations
CN104765959A (en) * 2015-03-30 2015-07-08 燕山大学 Computer vision based evaluation method for general movement of baby
CN106362400A (en) * 2016-10-12 2017-02-01 大连文森特软件科技有限公司 VR game system with online editing function
CN106582005A (en) * 2016-11-14 2017-04-26 深圳市豆娱科技有限公司 Data synchronous interaction method and device in virtual games
CN111210504A (en) * 2019-12-26 2020-05-29 北京邮电大学 Method and device for constructing crowd movement simulation framework
WO2021174898A1 (en) * 2020-03-04 2021-09-10 腾讯科技(深圳)有限公司 Method and device for compositing action sequence of virtual object
US11978142B2 (en) 2020-03-04 2024-05-07 Tencent America LLC Method and device for synthesizing motion sequence of virtual object

Also Published As

Publication number Publication date
CN100440257C (en) 2008-12-03

Similar Documents

Publication Publication Date Title
CN1949274A (en) 3-D visualising method for virtual crowd motion
CN107170030A (en) A kind of virtual newscaster's live broadcasting method and system
CN104915978B (en) Realistic animation generation method based on body-sensing camera Kinect
Shum et al. Interaction patches for multi-character animation
CN107248185A (en) A kind of virtual emulation idol real-time live broadcast method and system
CN102576466A (en) Systems and methods for tracking a model
CN103207667B (en) A kind of control method of human-computer interaction and its utilization
CN1425171A (en) Method and system for coordination and combination of video sequences with spatial and temporal normalization
CN109345614A (en) The animation simulation method of AR augmented reality large-size screen monitors interaction based on deeply study
CN113344777A (en) Face changing and replaying method and device based on three-dimensional face decomposition
Zhang et al. Chinese shadow puppetry with an interactive interface using the Kinect sensor
CN1920880A (en) Video flow based people face expression fantasy method
CN102693549A (en) Three-dimensional visualization method of virtual crowd motion
CN108908353A (en) Robot expression based on the reverse mechanical model of smoothness constraint imitates method and device
CN110853131A (en) Virtual video data generation method for behavior recognition
Zeng et al. Research status of data application based on optical motion capture technology
CN109903360A (en) 3 D human face animation control system and its control method
Cai et al. Immersive interactive virtual fish swarm simulation based on infrared sensors
Burić et al. Object detection using synthesized data
Xia et al. Recent advances on virtual human synthesis
Bera et al. Interactive and adaptive data-driven crowd simulation: User study
Leite et al. Inter-Acting: Understanding interaction with performance-driven puppets using low-cost optical motion capture device
Panev et al. Exploring the impact of rendering method and motion quality on model performance when using multi-view synthetic data for action recognition
US20240135618A1 (en) Generating artificial agents for realistic motion simulation using broadcast videos
Liang et al. A motion-based user interface for the control of virtual humans performing sports

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: YANTAI HUITONG NETWORK TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: INSTITUTE OF COMPUTING TECHNOLOGY, CHINESE ACADEMY OF SCIENCES

Effective date: 20121225

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 100080 HAIDIAN, BEIJING TO: 264003 YANTAI, SHANDONG PROVINCE

TR01 Transfer of patent right

Effective date of registration: 20121225

Address after: 264003 Shandong Province, Yantai city Laishan District Yingchun Street No. 133

Patentee after: YANTAI HUITONG NETWORK TECHNOLOGY CO., LTD.

Address before: 100080 Haidian District, Zhongguancun Academy of Sciences, South Road, No. 6, No.

Patentee before: Institute of Computing Technology, Chinese Academy of Sciences

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20081203

Termination date: 20161027

CF01 Termination of patent right due to non-payment of annual fee