CN101281657A - Method for synthesizing crowd action based on video data - Google Patents


Info

Publication number
CN101281657A
Authority
CN
China
Prior art keywords
pedestrian
crowd
velocity field
video data
field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA200810062016XA
Other languages
Chinese (zh)
Inventor
孙守迁
王鑫
侯易
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CNA200810062016XA priority Critical patent/CN101281657A/en
Publication of CN101281657A publication Critical patent/CN101281657A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a crowd behavior synthesis method based on video data, which proposes extracting typical pedestrian velocity fields from pedestrian video data and then synthesizing crowd behavior animation on the basis of those velocity fields. The method comprises the following steps: pedestrian video acquisition; extraction of typical pedestrian velocity fields; interactive crowd velocity field design; real-time synthesis of crowd behavior animation. The invention has the advantages of little interaction effort, lifelike simulation results, and a wide range of applicable situations.

Description

Crowd behavior synthesis method based on video data
Technical field
The present invention relates to the field of computer animation, and in particular to a crowd behavior synthesis method based on video data.
Background technology
Research on virtual crowds has been a hot topic in recent years, and scholars in different fields have different concerns. The film and game industries focus on simulating the intelligent behavior and visual appearance of real crowds; virtual environment research focuses on adding virtual crowds to increase the realism and immersion of virtual environments; and the fields of fire safety, urban planning, and architectural design focus on modeling specific crowd behaviors, with applications such as fire-fighting equipment placement and the evaluation of building safety and comfort.
A common problem runs through all research directions on virtual crowds: how to plan the motion paths of a virtual crowd. Because crowd motion is complex and subtle, crowd path planning must consider environmental constraints on the one hand and mutual interactions within the crowd on the other. The crowd path planning methods proposed so far can be roughly divided into two classes: crowd representations based on multiple agents [1] and crowd representations based on dynamic potential fields [2].
In a multiple-agent representation, each virtual human has independent internal state, decision parameters, and simulation parameters; the path planning process of every individual in a real crowd is modeled directly, and complex, lifelike group behavior can be produced by tuning the various parameters. However, multiple-agent representations also have some intrinsic defects: behavior rules are difficult to specify, the high computational cost of global path planning limits the number of agents that can be simulated, and local path planning can yield unrealistic virtual-human motion paths. Representations based on dynamic potential fields draw their ideas from fluid mechanics: the crowd flow is expressed as a continuous flow field whose behavior is controlled by partial differential equations, so that the global path planning and local collision avoidance of the virtual crowd are unified and the resulting crowd motion is more realistic; but such representations lose the flexibility of specifying goals and attributes for each individual virtual human.
Although both the multiple-agent and the dynamic-potential-field methods can produce lifelike crowd motion, both require setting a considerable number of empirical parameters. In 2007, Musse et al. [3] proposed extracting motion paths from video data of crowd flow and then generating virtual crowd motion paths from these example paths. This method has the advantages of requiring no user interaction to set simulation parameters and of synthesizing realistic crowd motion paths, but it has a limitation: the scene of the simulated virtual crowd must be identical to the scene in which the original crowd flow video was captured.
[1]. Funge, J., X. Tu, and D. Terzopoulos. Cognitive modeling: knowledge, reasoning and planning for intelligent characters. In: Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, Louisiana State University, Baton Rouge. ACM Press, 1999: 29-38.
[2]. Colombo, R.M. and M.D. Rosini. Pedestrian flows and nonclassical shocks. Mathematical Methods in the Applied Sciences, 2005, 28(13): 1553-1567.
[3]. Musse, S.R., et al. Using computer vision to simulate the motion of virtual agents. Computer Animation and Virtual Worlds, 2007, 18(2): 83-93.
Summary of the invention
The object of the present invention is to provide a crowd behavior synthesis method based on video data.
The technical solution adopted by the present invention comprises the following steps:
(1) pedestrian video acquisition;
(2) extraction of typical pedestrian velocity fields;
(3) interactive crowd velocity field design;
(4) real-time synthesis of crowd behavior animation.
Said pedestrian video acquisition: based on analysis and classification of pedestrian flow scenes, eight groups of crowd flow video acquisition experiments were designed. Experimental conditions include camera equipment, camera position, site conditions, personnel conditions, experiment duration, and walking speed.
Said extraction of typical pedestrian velocity fields: color detection is performed on the caps worn by pedestrians in the pedestrian flow experiments; the images are binarized, thresholding and morphological operations are used to detect each pedestrian's current position, a pedestrian motion path tracking algorithm obtains each pedestrian's route, and from these routes the velocity field information for the various situations is further obtained.
Said interactive crowd velocity field design: any one of the 60 crowd velocity fields can be loaded into the list of available velocity fields via "add template field". After the velocity field to use is selected, the user can designate any rectangular region on the two-dimensional map of the virtual environment to be influenced by this base velocity field; the size of the base velocity field is fixed, while the size of the user-specified rectangular region can be changed arbitrarily.
Said real-time synthesis of crowd behavior animation: the motion behavior of a virtual human is divided into three levels: an action selection layer, a steering layer, and a locomotion layer. A virtual human's coarse path finding is performed at the topmost action selection layer; how the virtual crowd avoids various static and dynamic obstacles while moving along the planned coarse path is handled at the steering layer; finally, at the locomotion layer, looping skeleton-skin animation is used to synthesize a lifelike walking animation for each virtual human.
The beneficial effects of the present invention are: little interaction effort, lifelike simulation results, and a wide range of applicable situations.
Description of drawings
Fig. 1 is a flow chart of the interactive crowd behavior synthesis method based on pedestrian video data.
Fig. 2 is a diagram of a derived velocity field.
Fig. 3 is a schematic diagram of the execution of the eight groups of experiments.
Embodiment
The present invention provides a crowd behavior synthesis method based on video data; its flow chart is shown in Fig. 1, and it comprises the following steps:
(1) pedestrian video acquisition;
(2) extraction of typical pedestrian velocity fields;
(3) interactive crowd velocity field design;
(4) real-time synthesis of crowd behavior animation.
Said pedestrian video acquisition: based on analysis and classification of pedestrian flow scenes, eight groups of crowd flow video acquisition experiments were designed, as shown in Fig. 3. The experimental conditions were as follows:
Camera equipment: Sony DSR-PDX10P, wide-angle lens, image resolution 720×576.
Camera position: 10 meters directly above the area where the crowd walks.
Site conditions: site 1 is a rectangular area of 10 m × 4 m, as shown in Fig. 1(a); site 2 is a square area of 8 m × 8 m, as shown in Fig. 1(b).
Personnel conditions: 40 people divided into 4 groups numbered 1, 2, 3, 4. Group 1 wears red caps, group 2 blue caps, group 3 purple caps, and group 4 green caps. Everyone must wear a white or black coat.
Experiment duration: 3 to 5 minutes per group.
Walking speed: normal speed is two steps per second, fast is three steps per second, and slow is one step per second.
The execution of each experiment, as shown, can be divided into three stages: (1) each group enters the site in turn and keeps circulating to join the experiment, filmed for 15 seconds; (2) the groups enter and leave the site at a fixed speed, forming a stable recurrent state that lasts 3 to 4 minutes; (3) each group leaves the site in turn without re-entering, filmed for 15 seconds.
Said extraction of typical pedestrian velocity fields: each pedestrian in the pedestrian flow experiments wears a cap of a specific color, and pedestrian clothing must not contain these colors, so detecting a pedestrian's position in an image frame only requires processing the grayscale images Ir, Ig, Ib formed by the r, g, b components of the image I to be detected. Each is binarized to obtain Ir', Ig', Ib', where the binarization thresholds depend on the cap color of the pedestrians to be detected: for red caps, Ir is binarized with the condition Ir > 150 to obtain Ir', Ig with Ig < 100 to obtain Ig', and Ib with Ib < 100 to obtain Ib'. After obtaining the binarized images Ir', Ig', Ib', the three images are ANDed bitwise to obtain IA; a morphological opening is applied to IA to remove small noise regions, and then a morphological dilation enlarges the effective regions shrunk by the opening, giving the final binary image IB. Finding the centers of the 8-connected regions in IB yields the center of each pedestrian's region in the frame. Fig. 3 shows the flow chart of the pedestrian detection algorithm (detecting only pedestrians wearing red caps). The basic procedure of the pedestrian motion path tracking algorithm is: 1) convert the video data into a sequence of image frames; 2) find each pedestrian's location point in every frame (the center of that pedestrian's image region), and treat each location point as a pedestrian object; 3) assign a unique object number to every pedestrian object in the first frame, then number objects frame by frame from the second frame onward: for each unnumbered pedestrian object A, examine the circular neighborhood centered on A with radius r, where r is the maximum pixel displacement of a pedestrian between two consecutive frames; if the previous frame contains a pedestrian object B at minimum distance within this neighborhood, assign B's object number to A; if no pedestrian object lies in the neighborhood, assign A a new unique object number; 4) after all frames are processed, collect the set of position coordinates of each object number over the frame sequence, yielding each person's motion path from entering to leaving the camera view.
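The detection pipeline just described — per-channel thresholding, bitwise AND, morphological opening and dilation, then centers of 8-connected regions — can be sketched with SciPy. The helper below is an illustrative reimplementation using the patent's red-cap thresholds, not the inventors' code:

```python
import numpy as np
from scipy import ndimage

def detect_red_caps(frame, r_thresh=150, gb_thresh=100):
    """Detect red-cap pedestrians in an RGB frame (H x W x 3, uint8).

    Binarize each channel (Ir' = r > 150, Ig' = g < 100, Ib' = b < 100),
    AND the masks to get IA, clean with morphological opening followed by
    dilation to get IB, then return the centroid of each 8-connected
    region, as described in the patent."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    ia = (r > r_thresh) & (g < gb_thresh) & (b < gb_thresh)  # IA
    ib = ndimage.binary_opening(ia)       # remove small noise regions
    ib = ndimage.binary_dilation(ib)      # restore regions shrunk by opening -> IB
    # label 8-connected regions and take their centers (row, col)
    labels, n = ndimage.label(ib, structure=np.ones((3, 3)))
    return ndimage.center_of_mass(ib, labels, list(range(1, n + 1)))
```

Feeding each frame through such a detector gives the per-frame location points that the tracking algorithm then links into paths.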
Each pedestrian path obtained by the above path tracking algorithm is an ordered set of discrete pixels in the image coordinate space, which can be expressed as:
T = {p_i = (x_i, y_i) | i = 1, 2, …, n}
where n indicates that the motion path is distributed over n image frames, and x_i, y_i are the x and y coordinates of the person in the image coordinate system in frame i. For a video segment D, all motion paths contained in it can be expressed as:
D = {T_j | j = 1, 2, …, m}
where m is the number of motion paths detected in video D. Since the time interval between two consecutive frames of the video is constant, and any two adjacent points of a path T must lie on two consecutive frames, we can obtain the set of speeds of each pedestrian at the discrete points along a path:
V = {v_i = (p_{i+1} − p_i) · fps / dI | p_i ∈ T, p_{i+1} ∈ T, i = 1, 2, …, n−1}
where i indexes the discrete points on path T, fps is the frame rate of the video, and dI is the number of pixels in the image coordinate space corresponding to 1 m of length on the real site.
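As an illustration of the speed formula above (not the patent's own code), the set V can be computed from a tracked path in a few lines of Python:

```python
import numpy as np

def path_velocities(path, fps, dI):
    """Convert an ordered pixel path T = [(x1, y1), ..., (xn, yn)] into
    per-point velocities in meters/second, following the patent's formula
    v_i = (p_{i+1} - p_i) * fps / dI, where fps is the video frame rate
    and dI is the number of image pixels per meter on the real site."""
    p = np.asarray(path, dtype=float)
    return (p[1:] - p[:-1]) * fps / dI  # one velocity per consecutive pair
```

For example, a path advancing 5 pixels per frame at 25 fps with 50 pixels per meter yields a speed of 2.5 m/s at every sample.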
The more frames video data D has, the more pedestrian motion paths are detected, and the more points in the image space have determined velocity values. There are many methods for extrapolating a full velocity field (in which every point has a velocity value) from a sparse one; nearest-neighbor extrapolation is adopted here to obtain the full velocity field. The set of base velocity fields covers the typical crowd flow behaviors. Based on this set of base velocity fields and the interactive crowd velocity field design tool, we can conveniently specify the flow behavior of the virtual crowd in a virtual environment and automatically synthesize the corresponding behavior animation for the virtual crowd in that environment.
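Nearest-neighbor extrapolation of the sparse velocity samples to a dense field can be sketched with SciPy; this is an illustration of the named technique, not the inventors' implementation:

```python
import numpy as np
from scipy.interpolate import NearestNDInterpolator

def extrapolate_velocity_field(points, velocities, width, height):
    """Fill a dense (height x width x 2) velocity field from sparse
    samples by nearest-neighbor extrapolation: every grid cell takes
    the velocity of the closest sampled path point.

    points     : (N, 2) array of sampled (x, y) positions
    velocities : (N, 2) array of the velocity at each sample"""
    interp = NearestNDInterpolator(points, velocities)
    ys, xs = np.mgrid[0:height, 0:width]
    queries = np.column_stack([xs.ravel(), ys.ravel()])  # (x, y) per cell
    return interp(queries).reshape(height, width, 2)
```

The same nearest-neighbor lookup also serves when a fixed-size base field must be stretched onto a user-specified rectangle, as described in the design-tool section below.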
Said interactive crowd velocity field design: we developed an interactive crowd velocity field design tool that generates a two-dimensional view of a three-dimensional virtual environment, showing its main topology. In this tool, any one of the 60 crowd velocity fields can easily be loaded into the list of available velocity fields via the "add template field" button. After the velocity field to use is selected, the user can designate any rectangular region on the two-dimensional map of the virtual environment to be influenced by this base velocity field. The size of the base velocity field is fixed, while the size of the user-specified rectangular region can change arbitrarily, so interpolating the base velocity field onto a rectangular region of arbitrary size again uses the nearest-neighbor method.
Since a position in a velocity field can hold only one velocity value, while a user specifying crowd behavior often needs different groups to have different velocities at the same position (for example, a corridor carrying one stream of people moving left and another moving right), the interactive crowd velocity field design tool can produce velocity fields in multiple layers. The field of each layer is the combination of the extrapolated fields the user designed on that layer, with the velocity defaulting to zero at every point outside the user-specified rectangular regions. During real-time simulation, each virtual human is influenced by only one layer of the velocity field at any moment, but the same virtual human can be influenced by different layers at different moments, which improves the flexibility of crowd control. Through the multi-layer velocity field map we can quickly obtain the velocity value at any position on any layer of the virtual environment, and through the obstacle map we can quickly determine whether any position in the virtual environment lies outside, inside, or on the boundary of its nearest obstacle.
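As an illustrative sketch of the multi-layer velocity field map (the patent does not specify a data structure or API; the class and method names below are assumptions), a lookup that returns a layer's velocity inside any user-placed rectangle and zero elsewhere might look like:

```python
import numpy as np

class LayeredVelocityField:
    """Hypothetical structure for the multi-layer velocity field map:
    each layer holds rectangular regions, each with its own dense
    velocity grid; velocity defaults to zero outside every region,
    as the patent specifies."""

    def __init__(self):
        self.layers = {}  # layer id -> list of (x0, y0, field)

    def add_region(self, layer, x0, y0, field):
        """Place a (h x w x 2) velocity grid with its corner at (x0, y0)."""
        self.layers.setdefault(layer, []).append((x0, y0, field))

    def velocity(self, layer, x, y):
        """Velocity at (x, y) on the given layer; zero outside all regions."""
        for x0, y0, field in self.layers.get(layer, []):
            h, w = field.shape[:2]
            if x0 <= x < x0 + w and y0 <= y < y0 + h:
                return field[int(y - y0), int(x - x0)]
        return np.zeros(2)
```

A virtual human bound to layer 0 would query `velocity(0, x, y)` each simulation step, while another agent at the same position but bound to layer 1 can receive an opposite flow direction.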
Said real-time synthesis of crowd behavior animation: the motion behavior of a virtual human is divided into three levels: an action selection layer, a steering layer, and a locomotion layer. A virtual human's coarse path finding is performed at the topmost action selection layer; how the virtual crowd avoids various static and dynamic obstacles while moving along the planned coarse path is handled at the steering layer; finally, at the locomotion layer, we use looping skeleton-skin animation to synthesize a lifelike walking animation for each virtual human. Five kinds of steering behavior are implemented at the steering layer: path following, obstacle avoidance, wander, unaligned collision avoidance, and flow following. Path following keeps the virtual human moving along the planned coarse motion path; obstacle avoidance keeps the virtual human away from all obstacles recorded in the obstacle map; wander provides a limited random motion direction for the virtual human; unaligned collision avoidance makes virtual humans dodge one another; and flow following subjects the virtual human to the influence of the velocity field at its location. The steering forces generated by the five behaviors act on the virtual human simultaneously to produce lifelike motion. The most intuitive strategy for combining the five steering behaviors in real time is to express each steering force as a vector and compute their weighted linear sum directly. Practice shows that this intuitive method achieves a good blending of multiple behaviors.
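The weighted vector-sum combination described above can be sketched in a few lines; the behavior names mirror the five steering behaviors, while the weight values are hypothetical (the patent does not publish its weights):

```python
import numpy as np

# Hypothetical per-behavior weights; the patent does not publish its values.
WEIGHTS = {
    "path_following": 1.0,
    "obstacle_avoidance": 2.0,
    "wander": 0.3,
    "unaligned_collision_avoidance": 1.5,
    "flow_following": 1.0,
}

def blend_steering(forces, weights=WEIGHTS):
    """Combine per-behavior steering forces by a weighted vector sum,
    the direct combination strategy described in the patent.

    forces: dict mapping behavior name -> 2D force vector."""
    total = np.zeros(2)
    for name, force in forces.items():
        total += weights.get(name, 0.0) * np.asarray(force, dtype=float)
    return total
```

Each simulation step, every behavior contributes its force vector and the blended result drives the virtual human's acceleration for that step.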
After a period of real-time animation simulation, every group of virtual crowds reached its planned end position; the crowds' movement trends were clearly influenced by the velocity fields, and virtual crowds on the same velocity field layer in the same time period showed movement patterns consistent with the raw data of the video acquisition experiments. In addition, collision avoidance between people, and between people and the scene environment, was quite lifelike throughout the simulation, demonstrating the correctness and validity of the interactive crowd behavior synthesis method based on pedestrian video data proposed herein.

Claims (5)

1. A crowd behavior synthesis method based on video data, characterized by comprising the following steps:
(1) pedestrian video acquisition;
(2) extraction of typical pedestrian velocity fields;
(3) interactive crowd velocity field design;
(4) real-time synthesis of crowd behavior animation.
2. The crowd behavior synthesis method based on video data according to claim 1, characterized in that said pedestrian video acquisition comprises: based on analysis and classification of pedestrian flow scenes, designing eight groups of crowd flow video acquisition experiments, the experimental conditions including camera equipment, camera position, site conditions, personnel conditions, experiment duration, and walking speed.
3. The crowd behavior synthesis method based on video data according to claim 1, characterized in that said extraction of typical pedestrian velocity fields comprises: performing color detection on the caps of pedestrians in the pedestrian flow experiments, binarizing the images, using thresholding and morphological operations to detect each pedestrian's current position, obtaining each pedestrian's route with a pedestrian motion path tracking algorithm, and further obtaining the velocity field information for the various situations.
4. The crowd behavior synthesis method based on video data according to claim 1, characterized in that said interactive crowd velocity field design comprises: loading any one of the 60 crowd velocity fields into the list of available velocity fields via "add template field"; after the velocity field to use is selected, the user designates any rectangular region on the two-dimensional map of the virtual environment to be influenced by this base velocity field, the size of the base velocity field being fixed while the size of the user-specified rectangular region can be changed arbitrarily.
5. The crowd behavior synthesis method based on video data according to claim 1, characterized in that said real-time synthesis of crowd behavior animation comprises: dividing the motion behavior of a virtual human into three levels: an action selection layer, a steering layer, and a locomotion layer; performing the virtual human's coarse path finding at the topmost action selection layer; handling at the steering layer how the virtual crowd avoids various static and dynamic obstacles while moving along the planned coarse path; and finally, at the locomotion layer, using looping skeleton-skin animation to synthesize a lifelike walking animation for each virtual human.
CNA200810062016XA 2008-05-23 2008-05-23 Method for synthesizing crowd action based on video data Pending CN101281657A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA200810062016XA CN101281657A (en) 2008-05-23 2008-05-23 Method for synthesizing crowd action based on video data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNA200810062016XA CN101281657A (en) 2008-05-23 2008-05-23 Method for synthesizing crowd action based on video data

Publications (1)

Publication Number Publication Date
CN101281657A true CN101281657A (en) 2008-10-08

Family

ID=40014101

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA200810062016XA Pending CN101281657A (en) 2008-05-23 2008-05-23 Method for synthesizing crowd action based on video data

Country Status (1)

Country Link
CN (1) CN101281657A (en)


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510314B (en) * 2009-03-27 2012-11-21 腾讯科技(深圳)有限公司 Method and apparatus for synthesizing cartoon video
CN104040289A (en) * 2012-01-20 2014-09-10 西门子公司 Use of the occupancy rate of areas or buildings to simulate the flow of persons
US9513131B2 (en) 2012-01-20 2016-12-06 Siemens Aktiengesellschaft Use of the occupancy rate of areas or buildings to simulate the flow of persons
CN104040289B (en) * 2012-01-20 2017-05-10 西门子公司 Use of the occupancy rate of areas or buildings to simulate the flow of persons
CN106525113A (en) * 2016-11-02 2017-03-22 百奥森(江苏)食品安全科技有限公司 Forage feed detection method
CN107481305A (en) * 2017-08-18 2017-12-15 苏州歌者网络科技有限公司 Game movie preparation method
CN107481305B (en) * 2017-08-18 2021-02-09 苏州歌者网络科技有限公司 Game cartoon making method
CN112569598A (en) * 2020-12-22 2021-03-30 上海幻电信息科技有限公司 Target object control method and device

Similar Documents

Publication Publication Date Title
Rio et al. Local interactions underlying collective motion in human crowds
Karnan et al. Socially compliant navigation dataset (scand): A large-scale dataset of demonstrations for social navigation
Karamouzas et al. Simulating and evaluating the local behavior of small pedestrian groups
Chu et al. A fast ground segmentation method for 3D point cloud
Li et al. Paralleleye pipeline: An effective method to synthesize images for improving the visual intelligence of intelligent vehicles
Berton et al. Eye-gaze activity in crowds: impact of virtual reality and density
CN102509092A (en) Spatial gesture identification method
Kulpa et al. Imperceptible relaxation of collision avoidance constraints in virtual crowds
CN101281657A (en) Method for synthesizing crowd action based on video data
Wang et al. Group split and merge prediction with 3D convolutional networks
Kim et al. An open-source low-cost mobile robot system with an RGB-D camera and efficient real-time navigation algorithm
Ahmed et al. Multi-scale pedestrian intent prediction using 3D joint information as spatio-temporal representation
Oshita Agent navigation using deep learning with agent space heat map for crowd simulation
Chung et al. VRCAT: VR collision alarming technique for user safety
Nguyen et al. Deep learning-based multiple objects detection and tracking system for socially aware mobile robot navigation framework
Zheng et al. Simulating heterogeneous crowds from a physiological perspective
Li et al. A multiple targets appearance tracker based on object interaction models
Bonneaud et al. Accounting for patterns of collective behavior in crowd locomotor dynamics for realistic simulations
Kim et al. Learning-based human segmentation and velocity estimation using automatic labeled lidar sequence for training
CN102467751A (en) Rubber band algorithm of three-dimensional virtual scene rapid path planning
Wu et al. Simulating the local behavior of small pedestrian groups using synthetic-vision based steering approach
Hommaru et al. Walking support for visually impaired using AR/MR and virtual braille block
Amaoka et al. Personal space-based simulation of non-verbal communications
Komatsuzaki et al. Generation of datasets for semantic segmentation from 3D scanned data to train a classifier for visual navigation
Nevalainen et al. Image analysis and development of graphical user interface for pole vault action

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20081008