CN102693549A - Three-dimensional visualization method of virtual crowd motion - Google Patents
- Publication number
- CN102693549A, CN2011100733649A, CN201110073364A
- Authority
- CN (China)
- Prior art keywords
- motion
- individual
- virtual
- data
- crowd
- Prior art date: 2011-03-25
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000007794 visualization technique Methods 0.000 title claims abstract description 12
- 238000000034 method Methods 0.000 claims abstract description 57
- 239000000463 material Substances 0.000 claims abstract description 38
- 238000009877 rendering Methods 0.000 claims abstract description 30
- 230000002194 synthesizing effect Effects 0.000 claims abstract description 5
- 238000007781 pre-processing Methods 0.000 claims description 25
- 238000012800 visualization Methods 0.000 claims description 14
- 238000005516 engineering process Methods 0.000 claims description 11
- 239000013598 vector Substances 0.000 claims description 11
- 238000005070 sampling Methods 0.000 claims description 9
- 238000006073 displacement reaction Methods 0.000 claims description 7
- 238000009795 derivation Methods 0.000 claims description 6
- 238000004088 simulation Methods 0.000 claims description 5
- 238000013461 design Methods 0.000 claims description 4
- 230000004927 fusion Effects 0.000 claims description 3
- 230000003287 optical effect Effects 0.000 claims description 3
- 238000004364 calculation method Methods 0.000 description 4
- 238000004422 calculation algorithm Methods 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 238000004519 manufacturing process Methods 0.000 description 3
- 238000012545 processing Methods 0.000 description 3
- 230000037237 body shape Effects 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 238000004880 explosion Methods 0.000 description 1
- 230000006698 induction Effects 0.000 description 1
- 230000035515 penetration Effects 0.000 description 1
- 230000000737 periodic effect Effects 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a three-dimensional visualization method of virtual crowd motion. The method comprises the following steps: analyzing the virtual crowd that will appear in the system, classifying the individuals, and establishing a template model library and a material library; capturing or manually generating the needed human body motion data and entering the motion data into a motion database; synthesizing the motion data; performing rendering preprocessing on the template models and the synthesized motion data; inputting simple script data describing the crowd motion; generating the virtual individuals; driving the motion of the virtual individuals; generating the dynamic virtual crowd; and performing group rendering of the virtual crowd. The method is efficient and general and has low modeling cost; with it, virtual crowd motion at a scale of 30,000 people can be vividly displayed in real time on an ordinary PC.
Description
Technical Field
The invention relates to the field of virtual reality, in particular to a three-dimensional visualization method for virtual crowd movement.
Background
Visualization of virtual crowd motion means displaying crowd motion data on screen: once motion data is assigned to an individual, the macroscopic or microscopic effects it produces can be understood by observing that individual's motion on screen. Three-dimensional visualization of large-scale crowd motion can generate lifelike virtual crowds in a computer space and allows crowd behavior in various places and under various conditions to be observed from all directions and angles, so it has very broad application prospects. For example, when designing large buildings (such as railway stations and gymnasiums), designers must consider how large crowds behave in various emergencies (such as fire, explosion, or trampling), and the three-dimensional visualization of large-scale virtual crowd motion makes it convenient to perform a public-safety evaluation of a building at the design stage. Film production often needs to vividly display large-scale dynamic crowd scenes; for example, the series Band of Brothers shows masses of paratroopers dropping from the sky during the Allied landing. In game production, artists likewise often need to render large-scale group motion of game characters in a scene, such as the overwhelming swarms of monsters in World of Warcraft. A three-dimensional visualization method for virtual crowd motion is therefore needed in fields such as game and film production, building design, and public-safety assessment.
A three-dimensional visualization method for large-scale virtual crowd motion must satisfy realism and real-time requirements simultaneously. On one hand, a rich set of virtual character models is needed, each of which can be controlled independently and can perform specified actions, to satisfy the user's demand for realism; on the other hand, the memory and computation overhead of model loading, driving, and rendering must be kept within bounds, so that the user obtains the visualization result in real time. Requirements such as ease of use, generality, and modeling cost must also be considered. Owing to these technical difficulties, no general three-dimensional visualization method for large-scale virtual crowd motion exists at present.
Summary of the Invention
The invention aims to overcome the inability of the prior art to visualize large-scale virtual crowd motion in three dimensions, and accordingly provides a method that is efficient, general, and low in modeling cost.
In order to achieve the above object, the present invention provides a three-dimensional visualization method of virtual crowd movement, comprising the following steps:
step 10), analyzing the virtual crowd that will appear in the system, classifying the individuals, and establishing a three-dimensional human body model for each class according to its appearance characteristics as a template model, thereby obtaining a template model library; and, according to the characteristics of each template model, selecting a group of derived parts, defining the materials usable by each derived part, and adding the materials to a material library;
step 20), capturing or manually generating human body movement data, and inputting the movement data into a movement database;
step 30), synthesizing motion data;
step 40), performing rendering preprocessing on the template model and the synthesized motion data;
step 50), inputting simple script data describing the crowd movement, wherein the information in the script comprises: the type of each individual in the group, the current position of each individual, and the current motion state of each individual;
step 60), selecting a template model from the template model library according to the data input in step 50), and deriving a model from it with the material library to generate a virtual individual;
step 70), selecting motion data from the motion database for the virtual individual generated in the step 60) according to the motion state information input by the individual in the step 50), and calculating the change of the motion data along with time to drive the motion of the virtual individual; traversing all virtual individuals in the group to generate a dynamic virtual crowd;
step 80), realizing group rendering of the virtual crowd.
In the above technical solution, in the step 10), the classifying the individuals is to classify the population according to the ages and the sexes of the individuals.
In the above technical solution, in step 10), the derived parts are selected from the clothes, hairstyle, shoes, and hat worn by the human body.
In the above technical solution, the step 20) includes:
step 21), enumerating the possible motion states of the virtual individuals in the system;
step 22), collecting the human body motion data of each motion state counted in the step 21) by utilizing motion capture equipment or manually generating the human body motion data by using motion generation and editing software;
in said step 22), the motion data is captured using an electromagnetic-based, optical-based, or acoustic-based motion capture method, and said motion generation and editing software comprises dedicated software such as 3ds Max, Maya, Softimage|XSI, or Blender.
In the above technical solution, in step 30), the motion data is synthesized either by manual adjustment by an artist, or by methods such as motion fusion, motion stitching, and motion retargeting.
In the above technical solution, in step 40), the rendering preprocessing of the template model and the synthesized motion data comprises:
41) for each template model and each piece of action data it can execute, and for each frame of that action data, calculating the three-dimensional model corresponding to that frame;
42) performing rendering preprocessing on the three-dimensional models obtained in step 41); applicable preprocessing methods include an image-based rendering preprocessing method, a point-sampling-based rendering preprocessing method, a mesh-animation-based preprocessing method, and a level-of-detail (LOD) based preprocessing method.
In the above technical solution, the step 60) includes:
step 61), aiming at each individual in the system virtual crowd, selecting a corresponding template model from the template model library according to the type information of the individual in the data input in the step 50);
step 62), judging whether the current individual appears for the first time, if so, executing the next step, otherwise, executing step 64);
step 63), the individual appears for the first time and has not yet been assigned materials: a set of materials is randomly selected from the material library and the material serial number is stored in memory; then step 70) is executed;
step 64), the individual has already been assigned materials: the material serial number of its template model is read directly from memory and the corresponding materials are used for rendering.
In the above technical solution, the step 70) includes:
step 71), aiming at each individual in the group, selecting corresponding motion data from the motion database according to the type information of the individual in the input data of step 50) and the motion state information of the individual at the current moment;
step 72), obtaining the individual's current movement speed and movement direction according to the position information at the current, previous, and next moments of the individual in the input data;
step 73), calculating the individual's motion data at the current moment from the displacement vectors of the motion data selected from the motion database at each moment, the displacement vectors of the individual's simulation data at each moment, and the motion data frame index selected at the individual's previous moment, wherein the motion data comprises the current position of the individual and the motion posture of each part of the body;
step 74), driving the virtual individual generated in step 60) by using the current-time motion data obtained in step 73) and the motion direction obtained in step 72);
step 75), traversing all the individuals in the group to generate a dynamic virtual group.
In the above technical solution, the group rendering described in step 80) can adopt an image-based rendering method, a point-sampling-based rendering method, a mesh-animation-based rendering method, or a level-of-detail (LOD) based rendering method.

The advantages of the invention are mainly as follows:
the input of the system is simple, mainly script data such as individual states, positions and the like at different times can be conveniently embedded into or called by other application systems, and the system has better universality;
2. by using techniques such as skeletal skinning and motion capture, the system can freely and flexibly control each individual in a group and satisfy the user's demand for realistic human motion;
3. by using the model derivation technique, a large number of personalized models can be derived from a few model templates, which minimizes the modeling cost of the three-dimensional group models and saves the memory and computation overhead of model loading and storage;
4. by using the motion deformation technique, vivid motion effects can be generated from less motion data, saving the memory and computation overhead of model driving;
5. by using technologies such as rendering based on point sampling, three-dimensional real-time rendering of large-scale crowd data is achieved.
Drawings
Fig. 1 is a flowchart of the three-dimensional visualization method of virtual crowd motion of the present invention;
FIG. 2 is a diagram of six classes of template models created in one embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following figures and detailed description.
As shown in fig. 1, the three-dimensional visualization method for virtual crowd movement of the present invention comprises the following steps:
I. Preprocessing steps
The main purpose of the preprocessing steps is to generate the template model library, material library, motion database, and rendering library required for real-time processing. They comprise four sub-steps: template model modeling, motion data capture, motion data synthesis, and rendering preprocessing of the template models and motion data.
Step 10, modeling the template models and establishing the template model library.
Step 11, analyzing the virtual crowd that will appear in the application system and classifying the individuals according to some standard. For example, individuals in the crowd can be classified by age and gender as: elderly male, elderly female, young male, young female, male child, and female child.
Step 12, establishing a three-dimensional human body model as a template model according to the appearance characteristics of each class of individuals. For example, if in step 11 the virtual crowd was divided into the six classes above, then in this step a three-dimensional human body model is established for the elderly male class and used as its template model; the same is done for each of the other classes, finally yielding template models for all six classes of individuals. FIG. 2 shows the six template models.
Step 13, selecting a group of derived parts according to the characteristics of each template model, defining the possible materials of each derived part, and adding them to the material library. In general, the clothes, hairstyle, shoes, and hat worn by the human body can be selected as derived parts. Taking the elderly male template model as an example, if the jacket and trousers are designated as derived parts, the possible materials of the jacket and the trousers are stored in the material library; the possible materials are specified manually by the user according to the needs of the application system.
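For illustration, the template model library and material library can be organized as simple keyed collections. The following Python sketch shows one minimal, hypothetical layout; the names and the example material entries are assumptions for illustration, not data from the patent.

```python
import random

# Hypothetical sketch of the material library (step 13).
# Template types follow the six classes of step 11.
material_library = {
    "elderly_male": {
        # derived part -> materials registered by the user for that part
        "jacket":   ["jacket_gray", "jacket_brown"],
        "trousers": ["trousers_black", "trousers_blue"],
    },
    # ... analogous entries for the other five template types
}

def derive_material_set(template_type):
    """Pick one material per derived part at random, personalizing an
    individual derived from the given template model."""
    parts = material_library[template_type]
    return {part: random.choice(options) for part, options in parts.items()}
```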
Step 20, capturing human body motion data, or generating it manually, and entering the motion data into the motion database.
Step 21, enumerating the possible motion states of the virtual individuals in the application system. For example, for emergency evacuation of people on level ground, the possible motion states of an individual include: walking, running, stopping to watch, dying, and falling to the ground.
Step 22, collecting human body motion data for each motion state enumerated in step 21 using motion capture equipment, or generating the data with motion generation and editing software. Existing motion capture methods can be used, including electromagnetic-based, optical-based, and acoustic-based methods, and a VICON system, for example, can serve as the capture device. The motion generation and editing software includes dedicated software such as 3ds Max, Maya, Softimage|XSI, or Blender.
Step 30, synthesizing the motion data. Different template models differ in anthropometric measurements such as body height and leg length, and driving them with the same motion data would produce problems such as foot sliding and limb penetration. The captured motion data therefore needs to be processed so that a set of motion data matching each template model's body shape is synthesized for it. Furthermore, to save storage space, each motion clip needs to be processed into periodic, loopable motion data, i.e. its first and last frames must transition smoothly. The motion data can be synthesized by manual adjustment by an artist, or with techniques such as motion fusion, motion stitching, and motion retargeting, as long as the generated data is lifelike, matches the body shape characteristics of the template model, and can be played cyclically.
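One simple way to make a clip loopable, sketched below under the assumption that each frame is a flat vector of joint parameters that may be blended linearly (real joint rotations would normally need quaternion interpolation), is to cross-fade the tail of the clip toward its first frame:

```python
import numpy as np

def make_loopable(frames, blend=10):
    """Cross-fade the last `blend` frames toward the first frame so the
    clip's last frame matches its first and playback can cycle smoothly.
    `frames` has shape (num_frames, dof)."""
    out = np.array(frames, dtype=float, copy=True)
    n = len(out)
    for k in range(blend):
        w = (k + 1) / blend                    # blend weight rises to 1
        i = n - blend + k
        out[i] = (1 - w) * out[i] + w * out[0]
    return out
```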
Step 40, performing rendering preprocessing on the template models and motion data. To increase rendering speed, an efficient rendering algorithm generally requires preprocessing of the template models and motion data. For example, a point-sampling-based rendering algorithm requires point-sampling each template model as driven by its motion data to generate a multi-resolution model; an image-based rendering algorithm requires generating images of the driven model from different viewing angles. The concrete steps of the rendering preprocessing of the template models and synthesized motion data are:
Step 41, for each template model and each piece of motion data it can execute, and for each frame of that motion data, calculating the three-dimensional model corresponding to that frame;
and 42, performing drawing preprocessing on the three-dimensional model obtained in the step 41.
The rendering preprocessing method in step 42 can be an image-based rendering preprocessing method, a point-sampling-based rendering preprocessing method, a mesh-animation-based preprocessing method, or a level-of-detail (LOD) based preprocessing method.
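Steps 41 and 42 amount to a nested bake loop over templates, clips, and frames. The sketch below is schematic: `skin` (posing the mesh from a skeleton pose, e.g. by linear-blend skinning) and `preprocess` (the chosen back end: image-based, point-sampling, mesh-animation, or LOD preprocessing) are passed in as callbacks, since the patent leaves the concrete method open; all structure names here are assumptions.

```python
def bake_rendering_data(templates, motion_db, skin, preprocess):
    """Rendering preprocessing (steps 41-42).

    templates  : {template_type: (mesh, skeleton)}
    motion_db  : {template_type: [clip]}, each clip a list of per-frame poses
    skin       : callback posing the mesh, e.g. linear-blend skinning
    preprocess : callback implementing the chosen preprocessing method
    Returns baked rendering data keyed by (type, clip index, frame index).
    """
    baked = {}
    for ttype, (mesh, skeleton) in templates.items():
        for c, clip in enumerate(motion_db[ttype]):
            for f, pose in enumerate(clip):
                posed = skin(mesh, skeleton, pose)        # step 41: per-frame 3D model
                baked[(ttype, c, f)] = preprocess(posed)  # step 42: back-end bake
    return baked
```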
II. Data input
Step 50, inputting simple script data describing the crowd motion. The information in the script mainly comprises: the type of each individual in the crowd, the current position of each individual, and the current motion state of each individual.
The following is an example segment of input data:
…
individual serial number: 015;
individual type: elderly male;
…
time (seconds): 0.2;
position (m): 10, 11, 11;
motion state: walking;
time (seconds): 0.3;
position (m): 10.1, 11, 11;
motion state: walking;
time (seconds): 0.4;
position (m): 10.2, 11, 11;
motion state: walking;
…
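For concreteness, the example segment above can be held in a small record structure like the hypothetical one below (field names assumed from the labels in the example):

```python
from dataclasses import dataclass, field

@dataclass
class IndividualTrack:
    """One individual's entry in the input script (illustrative layout)."""
    serial: str
    individual_type: str
    samples: list = field(default_factory=list)   # (time_s, (x, y, z), state)

# The segment for individual 015, transcribed by hand:
track = IndividualTrack(serial="015", individual_type="elderly male")
track.samples += [
    (0.2, (10.0, 11.0, 11.0), "walking"),
    (0.3, (10.1, 11.0, 11.0), "walking"),
    (0.4, (10.2, 11.0, 11.0), "walking"),
]
```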
III. Real-time processing steps
The real-time processing steps comprise model derivation, model driving, and group rendering, and realize the three-dimensional visualization of virtual crowd motion.
Step 60, selecting a template model from the template model library according to the input data and deriving a model from it with the material library, thereby generating the virtual individual.
Step 61, for each individual in the group, selecting the corresponding template model from the template model library according to the individual's type information in the input data;
step 62, judging whether the current individual appears for the first time, if so, executing the next step, otherwise, executing step 64;
Step 63, when the individual appears for the first time and has not yet been assigned materials, randomly selecting a set of materials from the material library, storing the material serial number in memory, and then executing step 70;
Step 64, when the individual has already been assigned materials, directly reading the material serial number of its template model from memory and rendering with the corresponding materials.
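Steps 61-64 boil down to a cache keyed by the individual's serial number: a random material set is drawn on first appearance and reused ever after, so an individual's appearance stays stable across frames. A minimal sketch, reusing the hypothetical `derive_material_set` helper from the earlier sketch:

```python
material_cache = {}   # individual serial number -> assigned material set

def materials_for(serial, template_type):
    """Step 63 on first appearance, step 64 afterwards."""
    if serial not in material_cache:                       # first appearance
        material_cache[serial] = derive_material_set(template_type)
    return material_cache[serial]                          # reuse stored set
```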
Step 70, driving the model to obtain the final posture of each individual at the current moment.
Step 71, for each individual in the group, selecting corresponding motion data from the motion database according to the individual's type information in the input data and its motion state at the current moment. Taking the data input in step 50 as an example, for the elderly male with serial number 015, the walking motion data of the elderly male template is selected from the motion database.
Step 72, obtaining the individual's current movement speed and direction from its positions at the current, previous, and next moments in the input data. Still taking the data input in step 50 as an example, for the elderly male with serial number 015, the previous moment is 0.2 seconds, the current moment is 0.3 seconds, and the next moment is 0.4 seconds; the position is (10, 11, 11) at 0.2 seconds, (10.1, 11, 11) at 0.3 seconds, and (10.2, 11, 11) at 0.4 seconds. From this, the individual's current movement speed is calculated to be 1 m/s and its current movement direction is (1, 0, 0).
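The arithmetic of step 72 is a central difference over the neighboring samples: a displacement of (10.2 - 10) m over (0.4 - 0.2) s gives 1 m/s along (1, 0, 0). A small sketch of that computation:

```python
import math

def speed_and_direction(p_prev, p_next, t_prev, t_next):
    """Central difference over the previous and next samples (step 72)."""
    disp = [b - a for a, b in zip(p_prev, p_next)]
    dt = t_next - t_prev
    dist = math.sqrt(sum(c * c for c in disp))
    speed = dist / dt
    direction = [c / dist for c in disp] if dist > 0 else [0.0, 0.0, 0.0]
    return speed, direction

# Individual 015 at the 0.3 s time point:
speed, direction = speed_and_direction(
    (10.0, 11.0, 11.0), (10.2, 11.0, 11.0), 0.2, 0.4)
# speed ~= 1.0 m/s, direction ~= (1, 0, 0), up to float rounding
```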
Step 73, calculating the individual's motion data at the current moment from the displacement vectors of the motion data selected from the motion database at each moment, the displacement vectors of the individual's simulation data at each moment, and the motion data frame index selected at the individual's previous moment. The motion data comprises the individual's current position and the motion posture of each part of the body.
The specific calculation method is as follows:
suppose aF i ;i=1,……,I And the motion vector of the ith frame is used as the motion data, wherein I is the total frame number of the motion data. { SjAt a time point tjThe displacement vector of the upper simulation data needs to obtain { Nj }, namely at the time point tjAnd the selected motion data frame index. Using mathematical induction to initialize the value N0=1, the problem translates into: in NjUnder the known premise, the N is calculatedj+1. Where the subscript j represents the previous time and the subscript j +1 represents the current time.
Using a heuristic solution method, Nj+1From NjSuccessively increase the trial whenF Nj+1-F Nj>SJ+1-SJThen the probing is stopped, namely N is obtainedj+1. Wherein,>the sign is the comparison of two vectors of motion data and simulation data, using maximum component comparison, i.e. when the vectors areF Nj+1-F NjAny one component is greater than SJ+1-SJWhen the corresponding component is in, the whole vector is larger than. Since the number of frames of motion data is limited, it is impossible to increment the heuristic indefinitely, so when N isj+1>1, is, Nj+1The heuristic is incremented again, starting with 1, and at the same time,F Nj+1=F Nj+1+F l -F 1 。
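A minimal sketch of this incremental search is given below, using 0-based indices instead of the 1-based ones in the text; the safety bound on the loop is an added assumption (the text assumes the clip always advances):

```python
import numpy as np

def next_frame_index(F, n_prev, s_step):
    """Find N_{j+1} from N_j (step 73).

    F      : (I, 3) array; F[i] is the displacement vector of motion frame i
    n_prev : frame index N_j chosen at the previous time point
    s_step : simulation displacement S_{j+1} - S_j
    The search advances frame by frame until the motion displacement since
    frame n_prev exceeds s_step in ANY component (maximum-component
    comparison); past the last frame it wraps to frame 0 and offsets the
    displacement by one full cycle, F[-1] - F[0].
    """
    I = len(F)
    base = F[n_prev]
    offset = np.zeros_like(base)
    n = n_prev
    for _ in range(10 * I):                  # safety bound (assumption)
        n += 1
        if n >= I:                           # wrap: restart the cyclic clip
            n = 0
            offset = offset + (F[-1] - F[0])
        if np.any(F[n] + offset - base > s_step):
            return n
    raise ValueError("clip displacement never exceeds the simulation step")
```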
Step 74, driving the virtual individual generated in step 60 using the current-moment motion data obtained in step 73 and the movement direction obtained in step 72.
Step 75, traversing all individuals in the group to generate the dynamic virtual crowd.
Step 80, realizing group rendering of the virtual crowd. Group rendering in this step is mature prior art: the rendering method disclosed in Chinese patent application No. 200610089057.9, entitled "Virtual human real-time rendering method", or other efficient rendering methods can be used to render the dynamic virtual crowd in real time. Such methods include image-based rendering, point-sampling-based rendering, mesh-animation-based rendering, and level-of-detail (LOD) based rendering.
Claims (10)
1. A three-dimensional visualization method for virtual crowd movement comprises the following steps:
step 10), analyzing the virtual crowd that will appear in the system, classifying the individuals, and establishing a three-dimensional human body model for each class according to its appearance characteristics as a template model, thereby obtaining a template model library; and, according to the characteristics of each template model, selecting a group of derived parts, defining the materials used by each derived part, and adding the materials to a material library;
step 20), capturing or manually generating human body movement data, and inputting the movement data into a movement database;
step 30), synthesizing motion data;
step 40), performing rendering preprocessing on the template model and the synthesized motion data;
step 50), inputting simple script data describing the crowd movement, wherein the information in the script comprises: the type of each individual in the group, the current position of each individual, and the current motion state of each individual;
step 60), selecting a template model from the template model library according to the data input in step 50), and deriving a model from it with the material library to generate a virtual individual;
step 70), selecting motion data from the motion database for the virtual individual generated in the step 60) according to the motion state information input by the individual in the step 50), and calculating the change of the motion data along with time to drive the motion of the virtual individual; traversing all virtual individuals in the group to generate a dynamic virtual crowd;
step 80), realizing group rendering of the virtual crowd.
2. The method for three-dimensional visualization of the motion of the virtual crowd as claimed in claim 1, wherein in the step 10), the classification of the individuals is performed by dividing the crowd according to the ages and the sexes of the individuals.
3. The method for three-dimensional visualization of the movement of the virtual crowd as recited in claim 1, wherein in step 10), the derived parts are selected from the group consisting of clothes, hairstyles, shoes and hats worn by the human body.
4. The method for three-dimensional visualization of the movement of a virtual crowd as recited in claim 1, wherein said step 20) comprises:
step 21), enumerating the possible motion states of the virtual individuals in the system;
step 22), collecting the human body motion data of each motion state counted in the step 21) by utilizing a motion capture device or manually generating the human body motion data by using motion generation and editing software.
5. The method for three-dimensional visualization of the movement of a virtual crowd as claimed in claim 4, wherein in step 22), the movement data is captured by an electromagnetic-based, optical-based, or acoustic-based motion capture method, and the motion generation and editing software comprises dedicated software such as 3ds Max, Maya, Softimage|XSI, or Blender.
6. The method as claimed in claim 1, wherein in step 30), the motion data is synthesized by manual adjustment by an artist, or by a method of motion fusion, motion stitching, or motion retargeting.
7. The method for three-dimensional visualization of the motion of a virtual crowd as recited in claim 1, wherein in step 40), the rendering preprocessing of the template model and the synthesized motion data comprises:
41) for each template model and each piece of action data it can execute, and for each frame of that action data, calculating the three-dimensional model corresponding to that frame;
42) performing rendering preprocessing on the three-dimensional model obtained in step 41); the applicable preprocessing methods include an image-based rendering preprocessing method, a point-sampling-based rendering preprocessing method, a mesh-animation-based preprocessing method, and a level-of-detail (LOD) based preprocessing method.
8. The method for three-dimensional visualization of the movement of a virtual crowd as recited in claim 1, wherein said step 60) comprises:
step 61), aiming at each individual in the system virtual crowd, selecting a corresponding template model from the template model library according to the type information of the individual in the data input in the step 50);
step 62), judging whether the current individual appears for the first time, if so, executing the next step, otherwise, executing step 64);
step 63), the individual appears for the first time and has not yet been assigned materials: a set of materials is randomly selected from the material library, the material serial number is stored in memory, and then step 70) is executed;
step 64), the individual has already been assigned materials: the material serial number of its template model is read directly from memory and the corresponding materials are used for rendering.
9. The method for three-dimensional visualization of the motion of a virtual crowd as recited in claim 1, wherein said step 70) comprises:
step 71), aiming at each individual in the group, selecting corresponding motion data from a motion database according to the type information of the individual in the input data of the step 50) and the motion state information of the individual at the current moment;
step 72), according to the position information of the current time, the previous time and the next time of the individual in the input data, the current movement speed and the movement direction of the individual are obtained;
step 73), calculating the individual's motion data at the current moment from the displacement vectors of the motion data selected from the motion database at each moment, the displacement vectors of the individual's simulation data at each moment, and the motion data frame index selected at the individual's previous moment, wherein the motion data comprises the current position of the individual and the motion posture of each part of the body;
step 74), driving the virtual individual generated in step 60) by using the current-time motion data obtained in step 73) and the motion direction obtained in step 72);
step 75), traversing all the individuals in the group to generate a dynamic virtual group.
10. The method for three-dimensional visualization of the movement of the virtual crowd as recited in claim 1, wherein the group rendering recited in step 80) comprises an image-based rendering method, a point-sampling-based rendering method, a mesh-animation-based rendering method, or a level-of-detail (LOD) based rendering method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011100733649A CN102693549A (en) | 2011-03-25 | 2011-03-25 | Three-dimensional visualization method of virtual crowd motion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102693549A true CN102693549A (en) | 2012-09-26 |
Family
ID=46858949
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2011100733649A Pending CN102693549A (en) | 2011-03-25 | 2011-03-25 | Three-dimensional visualization method of virtual crowd motion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102693549A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103310478A (en) * | 2013-06-04 | 2013-09-18 | 合肥工业大学 | Generation method of diversified virtual crowds |
CN103310478B (en) * | 2013-06-04 | 2016-02-03 | 合肥工业大学 | A kind of method that diversified virtual crowd generates |
CN104183000A (en) * | 2014-08-14 | 2014-12-03 | 合肥工业大学 | Full-automatic multi-source heterogeneous motion redirecting method of quasi-man character |
CN104183000B (en) * | 2014-08-14 | 2017-04-19 | 合肥工业大学 | Full-automatic multi-source heterogeneous motion redirecting method of quasi-man character |
CN104391937A (en) * | 2014-11-24 | 2015-03-04 | 宜瞰(上海)健康管理咨询有限公司 | Visualization of human behavior characteristics and data processing method and system |
CN105635806A (en) * | 2015-12-28 | 2016-06-01 | 北京像素软件科技股份有限公司 | Rendering method of group motion scene |
CN105635806B (en) * | 2015-12-28 | 2018-12-28 | 北京像素软件科技股份有限公司 | The rendering method of group movement scene |
CN107316343A (en) * | 2016-04-26 | 2017-11-03 | 腾讯科技(深圳)有限公司 | A kind of model treatment method and apparatus based on data-driven |
CN107316343B (en) * | 2016-04-26 | 2020-04-07 | 腾讯科技(深圳)有限公司 | Model processing method and device based on data driving |
CN108376198A (en) * | 2018-02-27 | 2018-08-07 | 山东师范大学 | A kind of crowd simulation method and system based on virtual reality |
CN108376198B (en) * | 2018-02-27 | 2022-03-04 | 山东师范大学 | Crowd simulation method and system based on virtual reality |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20120926 |