CN103310478B - Method for generating a diversified virtual crowd - Google Patents

Method for generating a diversified virtual crowd

Info

Publication number
CN103310478B
CN103310478B (application CN201310219496.7A)
Authority
CN
China
Prior art keywords
model
texture
crowd
different
variation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310219496.7A
Other languages
Chinese (zh)
Other versions
CN103310478A (en)
Inventor
郑利平
刘晓平
周乘龙
张娟
李琳
徐本柱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Huizhong Intellectual Property Management Co ltd
Original Assignee
Hefei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei University of Technology
Priority to CN201310219496.7A
Publication of CN103310478A
Application granted
Publication of CN103310478B
Legal status: Active
Anticipated expiration


Abstract

The invention discloses a method for generating a diversified virtual crowd. Diversity is designed along three axes: character model variation, texture variation, and pose/action variation. Body-shape differences are produced by varying the height and girth of each person model, appearance differences are produced by assigning different textures to each character, and diverse actions are given to the models through skeleton-embedding technology, so that a crowd with varied appearances and actions is finally generated from only a few given mesh models. The invention effectively reduces the complexity of manual modeling in crowd simulation, avoids the model-similarity problem that arises with conventional modeling methods, and improves the realism of the generated virtual crowd.

Description

Method for generating a diversified virtual crowd
Technical field
The present invention relates to a method for generating a diversified virtual crowd and belongs to the field of simulation technology.
Background art
In recent years, crowd simulation has found applications in many fields: urban planning, game development, traffic-intersection analysis, and so on. In film production, scenes of tremendous scale are indispensable to high-end films; realistic group-animation simulation can be applied in the film and animation industry to produce truly vast scene effects. In addition, crowd simulation can be used to verify the safety and rationality of designs such as emergency passages, and can be applied to the planning and design of public places such as shopping malls, gymnasiums, and entertainment venues. All of this places demands on the efficiency with which virtual crowds can be generated.
Under current research conditions, the characters used in group animation are generally produced by manual modeling or by simply copying a few models, and their actions are rather monotonous. The present invention uses models from a scanned-model library, deforms them programmatically, and applies an automatic skeleton-embedding algorithm to embed skeletons into all of the original and deformed models; different poses and actions are then defined to realize a diversified crowd. The method significantly reduces the complexity of crowd modeling and improves the realism of group animation.
Summary of the invention
The object of this invention is to provide a method for generating a diversified virtual crowd, so as to reduce the complexity of manual modeling, avoid the model similarity produced by conventional modeling methods, and improve the realism of crowds in large-scale crowd simulation.
To solve this technical problem, the present invention adopts the following technical scheme:
A method for generating a diversified virtual crowd, characterized in that diversity is designed from the aspects of character model variation, texture variation, and pose/action variation respectively: body-shape differentiation is realized by changing the height and girth of each person model, appearance variation is realized by assigning different textures to each character, and diverse actions are given to the models through skeleton-embedding technology, so that a crowd with varied appearances and actions is finally generated from a few given mesh models. The concrete method is carried out as follows:
A. Scan body data
Collect a large number of standard models: randomly select people of different sexes and age groups as scanning subjects, and use a 3D body scanner to capture data such as the height and weight of real people;
B. Realize human-model variation
Build a PCA (Principal Component Analysis) space over all the scanned standard models and compute its feature space; then define a template model for encoding the scanned model data, so that stored model data can be reconstructed and displayed. By changing the eigenvector weights of the feature space, computing the PCA weights, and projecting into the PCA space, the model's height, weight, girth, and other attributes are varied;
C. Realize model texture variation
In 3DS MAX, unwrap the model's UVs with UVLayout (Headus UVLayout, a tool dedicated to UV unwrapping). After the UV unwrap is complete, check the mapped texture for UV stretching and seams. Once no UV stretching or seams are detected, apply an image-segmentation algorithm to the UV layout to divide it into differently colored parts such as the front of the jacket, the back of the jacket, the front of the trousers, and the back of the trousers, and assign a different color to each region. Because the front and back faces of the jacket (or trousers) then carry different colors, their colors are synchronized so that the front and back of the jacket (or trousers) are treated as one part, which makes texture filling convenient. Small textures are then assigned to the jacket and trouser regions of the layout and filled in to add texture to the model. To make model textures differ, varied combinations are used to produce diversified texture-template mappings: in the system, different clothes and accessories are selected from a texture library for each current virtual character and swapped onto its model, thereby producing a diversified crowd;
D. Realize model action variation
First define the skeleton structure and embed the skeleton by scaling it to fit each model; the skeleton embedding follows the Baran method for embedding bones into human models. The detailed process is: first determine the skeleton size and the joint positions through four steps, namely building a distance field (Distance Field), approximating the medial surface (Approximate Medial Surface), sphere packing, and building the skeleton graph (Graph Construction); then locate the joints precisely with a discretized penalty function; and finally perform the skeleton-embedding optimization. In this way skeletons are embedded into the models generated for diversity. Local non-rigid deformation is then described and controlled through skeleton motion, and by collecting new motion data, new animations can easily be generated.
Compared with the prior art, the beneficial effects of the present invention are as follows:
1. The diversified crowd-modeling method provided by the invention avoids the complexity of modeling character bodies from scratch by scanning standard human-body data, making it easy to generate natural crowds that match reality. Compared with having an animator create these models by hand, it reduces the complexity and time cost of the task.
2. The present invention can transform the height and girth of scanned models to produce a population with different body shapes, and the encoding process is practical.
3. Through texture and action templates, together with the automatic skeleton-embedding algorithm, the present invention produces rich and varied clothing and accessories as well as a variety of different poses and actions.
Brief description of the drawings
Fig. 1(a) is a schematic diagram of some of the collected standard models.
Fig. 1(b) is a schematic diagram of the models in Fig. 1(a) after height and girth changes.
Fig. 2 is a schematic diagram of a model after skeleton embedding.
Fig. 3 is a schematic diagram of models with various corresponding textures.
Fig. 4(a) is a schematic diagram of different model poses and actions.
Fig. 4(b) is a schematic diagram of model poses and actions that differ more widely.
Detailed description of the embodiments
This embodiment realizes variation among crowd characters, addressing character model variation, texture variation, and pose/action variation in turn: a library of scanned models is used to differentiate character body shapes, texture templates are used to vary character appearance, and skeleton-embedding technology is used to give the models different actions. The method can generate a crowd with varied appearances and actions from a few mesh models. The inventive method is carried out as follows:
A. Scan body data
Collect a large number of standard models: randomly select people of different sexes and age groups as scanning subjects, and use a 3D body scanner to capture data such as the height and weight of real people;
B. Realize human-model variation
Build a PCA (Principal Component Analysis) space over all the scanned standard models and compute its feature space; then define a template model for encoding the scanned model data, so that stored model data can be reconstructed and displayed. By changing the eigenvector weights of the feature space, computing the PCA weights, and projecting into the PCA space, the model's height, weight, girth, and other attributes are varied;
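For illustration only, the following sketch shows one possible realization of the PCA body-shape space of step B, under the assumption that all scanned meshes are registered to a common template (same vertex count and ordering); the function and variable names are illustrative and are not part of the invention.

```python
# Minimal sketch of a PCA body-shape space (assumption: registered scan meshes).
import numpy as np

def build_pca_space(scans):
    """scans: (n_models, n_vertices, 3) array of registered scan meshes."""
    X = scans.reshape(len(scans), -1)            # one flattened row per model
    mean = X.mean(axis=0)
    # SVD of the centered data gives the eigenvectors (rows of Vt) of the feature space.
    _, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt, S

def synthesize_body(mean, Vt, S, weights):
    """Project PCA weights back to vertex space to obtain a new body shape."""
    offsets = (weights * S) @ Vt                 # weighted sum of principal directions
    return (mean + offsets).reshape(-1, 3)

# Example: perturb the leading components to obtain a taller / heavier variant.
# scans = np.stack(list_of_registered_meshes)    # e.g. shape (50, 6449, 3)
# mean, Vt, S = build_pca_space(scans)
# w = np.zeros(len(S)); w[0], w[1] = 1.2, -0.6   # tweak the first two modes
# variant = synthesize_body(mean, Vt, S, w)
```

In this sketch, adjusting the leading PCA weights and projecting back to vertex space corresponds to changing a model's height, weight, and girth.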
C. Realize model texture variation
In 3DS MAX, unwrap the model's UVs with UVLayout (Headus UVLayout, a tool dedicated to UV unwrapping). After the UV unwrap is complete, check the mapped texture for UV stretching and seams. Once no UV stretching or seams are detected, apply an image-segmentation algorithm to the UV layout to divide it into differently colored parts such as the front of the jacket, the back of the jacket, the front of the trousers, and the back of the trousers, and assign a different color to each region. Because the front and back faces of the jacket (or trousers) then carry different colors, their colors are synchronized so that the front and back of the jacket (or trousers) are treated as one part, which makes texture filling convenient. Small textures are then assigned to the jacket and trouser regions of the layout and filled in to add texture to the model. To make model textures differ, varied combinations are used to produce diversified texture-template mappings: in the system, different clothes and accessories are selected from a texture library for each current virtual character and swapped onto its model, thereby producing a diversified crowd;
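As a hedged illustration of the combinatorial texture assignment in step C (the region names and texture file names are assumptions, not part of the invention), the sketch below draws one texture per segmented UV region for each character, so a small texture library yields many distinct outfits.

```python
# Sketch of combinatorial texture selection per UV region (illustrative names).
import random

TEXTURE_LIBRARY = {
    "jacket":    ["jacket_red.png", "jacket_blue.png", "jacket_plaid.png"],
    "trousers":  ["trousers_jeans.png", "trousers_black.png", "trousers_khaki.png"],
    "accessory": ["none.png", "scarf.png", "backpack.png"],
}

def random_outfit(rng=random):
    """Pick one texture per UV region; front and back of a garment share the texture."""
    return {region: rng.choice(options) for region, options in TEXTURE_LIBRARY.items()}

# With 3 x 3 x 3 = 27 combinations from only 9 small textures, each agent draws
# independently, e.g.: crowd_outfits = [random_outfit() for _ in range(num_agents)]
```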
D. Realize model action variation
First define the skeleton structure and embed the skeleton by scaling it to fit each model; the skeleton embedding follows the Baran method for embedding bones into human models. The detailed process is: first determine the skeleton size and the joint positions through four steps, namely building a distance field (Distance Field), approximating the medial surface (Approximate Medial Surface), sphere packing, and building the skeleton graph (Graph Construction); then locate the joints precisely with a discretized penalty function; and finally perform the skeleton-embedding optimization. In this way skeletons are embedded into the models generated for diversity. Local non-rigid deformation is then described and controlled through skeleton motion, and by collecting new motion data, new animations can easily be generated.
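The following simplified sketch illustrates only the first part of step D, scaling a reference skeleton to fit a body variant before embedding; the reference skeleton and joint names are assumptions, and the precise joint placement in the patent relies on the distance field / medial surface / sphere packing / graph construction pipeline with a discretized penalty optimization, which is not reproduced here.

```python
# Sketch: uniformly scale and translate a reference skeleton into a model's bounding
# box ("deforming the skeleton size to adapt to the model"). Joint names are assumed.
import numpy as np

REFERENCE_SKELETON = {                 # joint -> position for a 1.80 m reference body
    "pelvis": (0.0, 1.00, 0.0), "spine": (0.0, 1.25, 0.0), "head": (0.0, 1.70, 0.0),
    "l_knee": (-0.12, 0.55, 0.0), "r_knee": (0.12, 0.55, 0.0),
    "l_foot": (-0.12, 0.05, 0.0), "r_foot": (0.12, 0.05, 0.0),
}
REFERENCE_HEIGHT = 1.80

def fit_skeleton_to_mesh(vertices):
    """Scale and translate the reference joints into the mesh's bounding box."""
    v = np.asarray(vertices)
    lo, hi = v.min(axis=0), v.max(axis=0)
    scale = (hi[1] - lo[1]) / REFERENCE_HEIGHT       # uniform scale from body height
    centre_x, centre_z = (lo[0] + hi[0]) / 2, (lo[2] + hi[2]) / 2
    return {
        name: np.array([centre_x + x * scale, lo[1] + y * scale, centre_z + z * scale])
        for name, (x, y, z) in REFERENCE_SKELETON.items()
    }
```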
Specific embodiment:
The experimental environment for this embodiment is a 64-bit Windows 7 SP1 operating system with an Intel(R) Core Q8400 CPU and 6 GB of memory. The program is built with Visual Studio 2008, which directly calls the MATLAB engine to access the model-library data encapsulated in MATLAB; OpenGL is configured as the display platform, and interaction is implemented with MFC + OpenGL.
Following the steps of the method, the feasibility of each step was tested: Fig. 1(a) and (b) show a model before and after height and girth changes; Fig. 2 shows a model with an embedded skeleton; Fig. 3 shows models with various corresponding textures; Fig. 4(a) and (b) show different model poses and actions.
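To make the overall pipeline of this embodiment concrete, the sketch below shows how the outputs of steps B, C, and D might be combined per agent, with diversity multiplying across the three axes; the Agent structure and motion-clip names are illustrative assumptions rather than part of the patent.

```python
# Sketch: assemble a crowd by drawing an independent (body, outfit, clip) triple per agent.
import random
from dataclasses import dataclass

MOTION_CLIPS = ["walk_slow", "walk_fast", "idle", "wave", "run"]

@dataclass
class Agent:
    body: object        # shape variant from step B (PCA-synthesized mesh)
    outfit: dict        # texture assignment from step C
    clip: str           # animation retargeted onto the embedded skeleton (step D)

def populate_crowd(body_variants, outfits, n_agents, rng=random):
    """Draw an independent (body, outfit, clip) triple for every agent."""
    return [Agent(rng.choice(body_variants), rng.choice(outfits), rng.choice(MOTION_CLIPS))
            for _ in range(n_agents)]
```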

Claims (1)

1. A method for generating a diversified virtual crowd, characterized in that: diversity is designed from the aspects of character model variation, texture variation, and pose/action variation respectively; body-shape differentiation is realized by changing the height and girth of each person model; appearance variation is realized by assigning different textures to each character; and diverse actions are given to the models through skeleton-embedding technology, so that a crowd with varied appearances and actions is finally generated from a few given mesh models;
The concrete method is carried out as follows:
A. Scan body data
Collect a large number of standard models: randomly select people of different sexes and age groups as scanning subjects, and use a 3D body scanner to capture the height and weight data of real people;
B. Realize human-model variation
Build a PCA space over all the scanned standard models and compute its feature space; then define a template model for encoding the scanned model data, so that stored model data can be reconstructed and displayed. By changing the eigenvector weights of the feature space, computing the PCA weights, and projecting into the PCA space, the model's height, weight, girth, and other attributes are varied;
C. Realize model texture variation
In 3DS MAX, unwrap the model's UVs with UVLayout. After the UV unwrap is complete, check the mapped texture for UV stretching and seams. Once no UV stretching or seams are detected, apply an image-segmentation algorithm to the UV layout to divide it into differently colored parts, namely the front of the jacket, the back of the jacket, the front of the trousers, and the back of the trousers, and assign a different color to each region. Because the front and back faces of the jacket or trousers then carry different colors, their colors are synchronized so that the front and back of the jacket or trousers are treated as one part, which makes texture filling convenient. Small textures are then assigned to the jacket and trouser regions of the layout and filled in to add texture to the model. To make model textures differ, varied combinations are used to produce diversified texture-template mappings: in the system, different clothes and accessories are selected from a texture library for each current virtual character and swapped onto its model, thereby producing a diversified crowd;
D. Realize model action variation
First define the skeleton structure and embed the skeleton by scaling it to fit each model; the skeleton embedding follows the Baran method for embedding bones into human models; the detailed process is: first determine the skeleton size and the joint positions through four steps, namely building a distance field, approximating the medial surface, sphere packing, and building the skeleton graph; then locate the joints precisely with a discretized penalty function; finally perform the skeleton-embedding optimization, so that skeletons are embedded into the models generated for diversity; local non-rigid deformation is then described and controlled through skeleton motion, and by collecting new motion data, new animations can be generated.
CN201310219496.7A 2013-06-04 2013-06-04 Method for generating a diversified virtual crowd Active CN103310478B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310219496.7A CN103310478B (en) 2013-06-04 2013-06-04 Method for generating a diversified virtual crowd

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310219496.7A CN103310478B (en) 2013-06-04 2013-06-04 Method for generating a diversified virtual crowd

Publications (2)

Publication Number Publication Date
CN103310478A CN103310478A (en) 2013-09-18
CN103310478B true CN103310478B (en) 2016-02-03

Family

ID=49135654

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310219496.7A Active CN103310478B (en) 2013-06-04 2013-06-04 Method for generating a diversified virtual crowd

Country Status (1)

Country Link
CN (1) CN103310478B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9898858B2 (en) * 2016-05-18 2018-02-20 Siemens Healthcare Gmbh Human body representation with non-rigid parts in an imaging system
CN107170030A (en) * 2017-05-31 2017-09-15 珠海金山网络游戏科技有限公司 A kind of virtual newscaster's live broadcasting method and system
CN108376198B (en) * 2018-02-27 2022-03-04 山东师范大学 Crowd simulation method and system based on virtual reality
CN113012042B (en) * 2019-12-20 2023-01-20 海信集团有限公司 Display device, virtual photo generation method, and storage medium
WO2022026367A1 (en) 2020-07-25 2022-02-03 Silver Spoon Animation Inc. System and method for populating a virtual crowd in real-time using augmented and virtual reality
CN112017295B (en) * 2020-08-28 2024-02-09 重庆灵翎互娱科技有限公司 Adjustable dynamic head model generation method, terminal and computer storage medium
CN117392330B (en) * 2023-12-11 2024-03-08 江西省映尚科技有限公司 Method and system for manufacturing metauniverse virtual digital person


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5002103B2 (en) * 2001-09-28 2012-08-15 株式会社バンダイナムコゲームス Image generation system, image generation method, and program
WO2010060113A1 (en) * 2008-11-24 2010-05-27 Mixamo, Inc. Real time generation of animation-ready 3d character models

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102693549A (en) * 2011-03-25 2012-09-26 上海日浦信息技术有限公司 Three-dimensional visualization method of virtual crowd motion
CN102157008A (en) * 2011-04-12 2011-08-17 电子科技大学 Large-scale virtual crowd real-time rendering method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Automatic Rigging and Animation of 3D Characters; Ilya Baran et al.; ACM Transactions on Graphics; July 2007; Vol. 26, No. 3; pp. 1-8 *
The space of human body shapes: reconstruction and parameterization from range scans; Brett Allen et al.; ACM Transactions on Graphics; July 2003; Vol. 22, No. 3; pp. 587-594 *
A fast generation method for large-scale virtual crowds based on real scenes; Jiang Lijun et al.; Journal of Engineering Graphics; 2009-04-15, No. 2; pp. 69-75 *
Research on character modeling and real-time rendering of large-scale crowds; Chen Jing; China Master's Theses Full-text Database; 2009-03-01; pp. 1-72 *

Also Published As

Publication number Publication date
CN103310478A (en) 2013-09-18


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201231

Address after: 245000 No.50, Meilin Avenue, Huangshan Economic Development Zone, Huangshan City, Anhui Province

Patentee after: Huangshan Development Investment Group Co.,Ltd.

Address before: 230009 No. 193, Tunxi Road, Hefei, Anhui

Patentee before: Hefei University of Technology

TR01 Transfer of patent right

Effective date of registration: 20220914

Address after: Huangshan Future Science and Technology City, No. 59, Meilin Avenue, Huangshan High-tech Industrial Development Zone, Huangshan City, Anhui Province, 245000

Patentee after: Huangshan Science and Technology Innovation Center Co.,Ltd.

Address before: 245000 No.50, Meilin Avenue, Huangshan Economic Development Zone, Huangshan City, Anhui Province

Patentee before: Huangshan Development Investment Group Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20230323

Address after: 230001 Gujing Baihua mansion, 156 Shou Chun Road, Hefei, Anhui

Patentee after: HEFEI HUIZHONG INTELLECTUAL PROPERTY MANAGEMENT Co.,Ltd.

Address before: Huangshan Future Science and Technology City, No. 59, Meilin Avenue, Huangshan High-tech Industrial Development Zone, Huangshan City, Anhui Province, 245000

Patentee before: Huangshan Science and Technology Innovation Center Co.,Ltd.