CN101751549B - Method for tracking moving object - Google Patents

Method for tracking moving object

Info

Publication number
CN101751549B
CN101751549B (application CN200810179785.8A)
Authority
CN
China
Prior art keywords
mobile object
mobile
appearance model
images
tracing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN200810179785.8A
Other languages
Chinese (zh)
Other versions
CN101751549A (en)
Inventor
黄钟贤
石明于
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial Technology Research Institute ITRI
Original Assignee
Industrial Technology Research Institute ITRI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial Technology Research Institute ITRI filed Critical Industrial Technology Research Institute ITRI
Priority to CN200810179785.8A priority Critical patent/CN101751549B/en
Publication of CN101751549A publication Critical patent/CN101751549A/en
Application granted granted Critical
Publication of CN101751549B publication Critical patent/CN101751549B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a method for tracking a moving object, which comprises the following steps: detecting the moving object in a plurality of consecutive images to obtain spatial information of the moving object in each image; extracting appearance features of the moving object in the images to establish an appearance model of the moving object; and finally combining the spatial information with the appearance model of the moving object to track a moving path of the moving object in the images. Thereby, even if the moving object leaves the monitored frame, the method can still continue to track it when it enters the frame again, so as to assist monitoring personnel to notice abnormal behaviors in time and react accordingly.

Description

Method for tracking a moving object
Technical field
The present invention relates to an image processing method, and more particularly to a method for tracking a moving object.
Background of the invention
Visual surveillance technology has become increasingly important in recent years, especially after the September 11 attacks, and more and more surveillance cameras have been installed in all kinds of places. However, traditional monitoring relies on manpower for supervision, or the footage is merely stored in a storage device as a tool for later review. As more and more cameras are set up, the required manpower also grows, so automatic monitoring systems assisted by computer vision techniques have played an increasingly important role in recent years.
A visual monitoring system detects the occurrence of abnormal events by analyzing the behavior of moving objects in the monitored frame, which may be their trajectories, postures or other features, and effectively notifies security personnel to handle them. Basic topics of visual monitoring, such as background subtraction, moving object detection and tracking, and shadow removal, have already been studied extensively. In recent years the focus has shifted to higher-order event detection, such as behavior analysis, abandoned object detection, loitering detection and crowding detection. Under the currently strong demand of the surveillance market, automated and intelligent behavior analysis is expected to see great demand and business opportunities.
So-called loitering detection refers to one or more moving objects that stay and repeatedly appear in a certain monitored region within a specific time. For instance, a prostitute or a beggar may hang around a street corner, a graffiti painter may linger by a wall, a person with suicidal intent may loiter in a subway station, or a drug dealer may wait on a railway platform to meet a client.
However, because the field of view of a camera of the visual monitoring system is limited and cannot completely cover the path along which a loiterer moves, once the loiterer leaves the monitored region, the visual monitoring system loses the monitoring target and cannot continue to detect the loiterer's movement. In particular, when the loiterer returns after leaving, how to re-identify the loiterer and associate him or her with the previous behavior is currently a bottleneck faced by loitering detection techniques.
Summary of the invention
In view of this, the invention provides a method for tracking a moving object, which combines the spatial information and the appearance model of the moving object in a plurality of images to continuously track the moving path of the moving object in the images.
The present invention proposes a method for tracking a moving object, which comprises detecting a moving object in a plurality of consecutive images to obtain spatial information of the moving object in each image, extracting appearance features of the moving object in each image to establish an appearance model of the moving object, and finally combining the spatial information and the appearance model of the moving object to track the moving path of the moving object in the images.
In an embodiment of the invention, the step of detecting the moving object in the consecutive images further comprises judging whether the moving object is a tracking target, and filtering out moving objects that are not tracking targets. One way to make this judgment is to check whether the area of the enclosing rectangular area is greater than a first preset value; when the area is greater than the first preset value, the moving object enclosed by the rectangular area is judged to be a tracking target. Another way is to check whether the aspect ratio of the rectangular area is greater than a second preset value; when the aspect ratio is greater than the second preset value, the moving object enclosed by the rectangular area is judged to be a tracking target.
In an embodiment of the invention, the step of extracting the appearance features of the moving object in each image to establish the appearance model of the moving object comprises first dividing the rectangular area into a plurality of blocks and extracting the color distribution of each block, then recursively taking the median of the color distribution of each block to build a binary tree describing the color distribution, and finally choosing the color distributions of the branches of the binary tree as the feature vector of the appearance model of the moving object.
In an embodiment of the invention, the step of dividing the rectangular area into blocks comprises dividing the rectangular area in a certain proportion into a head block, a body block and a leg block, and when the color distribution of each block is extracted, the color distribution of the head block is ignored. The color distribution comprises color features in the RGB (red-green-blue) color space or the HSI (hue-saturation-intensity) color space.
In an embodiment of the invention, after the step of detecting the moving object in the consecutive images to obtain its spatial information in each image, the method further comprises using the spatial information to track the moving path of the moving object, and accumulating the dwell time during which the moving object stays in the images.
In an embodiment of the invention, after the step of accumulating the dwell time during which the moving object stays in the images, the method further comprises judging whether the dwell time exceeds a first preset time, and when the dwell time exceeds the first preset time, starting to extract the appearance features of the moving object to establish the appearance model of the moving object, and combining the spatial information and the appearance model of the moving object to track the moving path of the moving object in the images.
In an embodiment of the invention, the step of combining the spatial information and the appearance model of the moving object to track the moving path of the moving object in the images comprises first using the spatial information to calculate a spatially correlated prior probability of the corresponding moving object in two adjacent images, using the appearance information to calculate a similarity of the corresponding moving object in the two adjacent images, and then combining the prior probability and the similarity in a Bayesian tracker to judge the moving path of the moving object in the adjacent images.
In an embodiment of the invention, when the dwell time is judged to exceed the first preset time, the method further comprises recording the dwell time and the appearance model of the moving object in a database, which comprises associating the appearance model of the moving object with a plurality of appearance models in the database to judge whether the appearance model of the moving object has already been recorded in the database. If the appearance model of the moving object has already been recorded in the database, only the dwell time of the moving object is recorded in the database; otherwise, if the appearance model of the moving object has not been recorded in the database, both the dwell time and the appearance model of the moving object are recorded in the database.
In an embodiment of the invention, the step of associating the appearance model of the moving object with the appearance models in the database comprises calculating first distances between appearance models established for the same moving object at two different time points to build a first distance distribution, calculating second distances between the appearance models of two moving objects in each image to build a second distance distribution, and then using the first distance distribution and the second distance distribution to find their boundary, which serves as the criterion for distinguishing appearance models.
In an embodiment of the invention, after the step of recording the dwell time and the appearance model of the moving object in the database, the method further comprises analyzing the time series of the moving object in the database to judge whether the moving object meets a loitering event. One way to make this judgment is to check whether the time during which the moving object has continuously appeared in the images exceeds a second preset time; when the moving object has continuously appeared in the images for more than the second preset time, the moving object is judged to meet the loitering event. Another way is to check whether the time interval during which the moving object leaves the images is less than a third preset time; when the time interval during which the moving object leaves the images is less than the third preset time, the moving object is judged to meet the loitering event.
Based on the above, the present invention establishes the visitor's appearance model and, in combination with Bayesian tracking, database management and adaptive threshold learning techniques, constantly monitors the moving objects entering the frame, so that the problem of being unable to continue detection after a moving object leaves the frame and returns can be solved. In addition, the present invention can automatically detect a visitor's loitering event according to the time conditions under which the visitor appears in the frame.
In order to make the above features and advantages of the present invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a schematic diagram of a system architecture for tracking a moving object according to an embodiment of the invention.
Fig. 2 is a flowchart of a method for tracking a moving object according to an embodiment of the invention.
Fig. 3(a), (b) and (c) are schematic diagrams of the appearance model of a moving object according to an embodiment of the invention.
Fig. 4 is a schematic diagram of the binary tree of the color distribution according to an embodiment of the invention.
Fig. 5 is a flowchart of the Bayesian object tracking method according to an embodiment of the invention.
Fig. 6 is a flowchart of the management method of the visitor database according to an embodiment of the invention.
Fig. 7(a), (b) and (c) are schematic diagrams of the adaptive threshold updating method according to an embodiment of the invention.
Fig. 8 is an illustration of the adaptive threshold calculation according to an embodiment of the invention.
[Description of main element symbols]
100: tracking system
110: background subtraction
120: moving object extraction
130: appearance feature calculation
140: Bayesian object tracking
150: visitor database management
160: visitor database
170: adaptive threshold updating
180: loitering event detection
S210~S230: steps of the method for tracking a moving object according to an embodiment of the invention
S510~S570: steps of the Bayesian object tracking method according to an embodiment of the invention
S610~S670: steps of the management method of the visitor database according to an embodiment of the invention
Embodiment
The present invention establishes an unsupervised loitering detection technique: the system automatically learns specific parameters from the events in the monitored frame, builds an appearance model for each visitor entering the frame, and analyzes and associates it with a visitor database. By comparing with the historical records, tracking can remain uninterrupted even if a visitor enters the monitored scene again after leaving the frame. Finally, loitering events can be further detected using predefined loitering rules. In order to make the content of the present invention clearer, embodiments are given below as examples according to which the present invention can indeed be implemented.
Fig. 1 is a schematic diagram of a system architecture for tracking a moving object according to an embodiment of the invention. Referring to Fig. 1, the tracking system 100 of the present embodiment first performs background subtraction 110 to continuously detect moving objects in a plurality of images. Since the objects tracked in the present embodiment are moving objects with a complete shape (for example, pedestrians), the next step filters out moving objects that are not tracking targets by simple conditions, which is the moving object extraction 120.
On the other hand, for each extracted moving object, the present embodiment first calculates its appearance features 130, continuously tracks the moving object 140 with a tracker based on Bayesian decision (Bayesian Decision), and establishes its appearance model from the appearance features of the same moving object obtained from multiple images. Meanwhile, the tracking system 100 maintains a visitor database 150 in the memory. The visitor database management 160 compares the appearance features of the currently extracted moving object with the appearance models in the visitor database 150 and associates them according to the result of the adaptive threshold updating 170. If the moving object can be associated with some person in the visitor database 150, the moving object has visited this scene before; otherwise, the moving object is newly added to the visitor database 150. Finally, loitering events can be detected 180 according to the time conditions under which the visitor appears in the frame. While tracking moving objects, the tracking system 100 takes the distribution of moving objects in the frame as samples and automatically learns how to distinguish different visitors, which serves as the basis for associating appearance models. The detailed flow of the method for tracking a moving object of the present invention is described below with reference to an embodiment.
Fig. 2 is a flowchart of the method for tracking a moving object according to an embodiment of the invention. Referring to Fig. 2, the present embodiment tracks a moving object entering the monitored frame by establishing its appearance model and comparing it with the data in the visitor database configured in the system memory, so as to judge whether the moving object has appeared before and to continue tracking it. The detailed steps are as follows:
First, moving objects in a plurality of consecutive images are detected to obtain the spatial information of each moving object in each image (step S210). The moving object detection technique mainly establishes a background image first, and the foreground is acquired by subtracting the background image from the current image. After the foreground is obtained by background subtraction, each connected region is marked using a connected-component labeling method and recorded with the rectangular area b = {r_left, r_top, r_right, r_bottom} that encloses the connected region, where r_left, r_top, r_right and r_bottom respectively denote the left, top, right and bottom boundaries of the rectangular area in the image.
It is worth mentioning that many factors can produce foreground, while the objects of interest here are foreground objects containing a single moving object. The present embodiment therefore takes pedestrians as an example and further judges whether each moving object is a pedestrian, filtering out moving objects that are not pedestrians, for example by the following two conditions. The first condition is whether the area of the rectangular area is greater than a first preset value; when the area is greater than the first preset value, the moving object enclosed by the rectangular area is judged to be a pedestrian, which filters out noise and fragmented objects. The second condition is whether the aspect ratio of the rectangular area is greater than a second preset value; when the aspect ratio is greater than the second preset value, the moving object enclosed by the rectangular area is judged to be a pedestrian, which filters out blocks where several persons overlap or large-scale noise.
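As an illustration of step S210 and the two filtering conditions above, the following sketch uses OpenCV; the helper name detect_pedestrian_candidates and the concrete values MIN_AREA and MIN_ASPECT are assumptions for the example, not values given in the patent.
```python
# A minimal sketch of background subtraction, connected-component labeling and
# pedestrian-candidate filtering, under assumed threshold values.
import cv2
import numpy as np

MIN_AREA = 800      # first preset value: rejects noise and fragments (assumed)
MIN_ASPECT = 1.5    # second preset value: height/width ratio of a pedestrian (assumed)

def detect_pedestrian_candidates(frame, background):
    """Return bounding boxes b = (r_left, r_top, r_right, r_bottom) of pedestrian candidates."""
    # background subtraction: |frame - background| thresholded into a foreground mask
    diff = cv2.absdiff(frame, background)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, fg = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)

    # connected-component labeling of the foreground
    n, labels, stats, _ = cv2.connectedComponentsWithStats(fg)

    boxes = []
    for i in range(1, n):                      # label 0 is the background
        x, y, w, h, area = stats[i]
        if area <= MIN_AREA:                   # condition 1: area filter
            continue
        if h / float(w) <= MIN_ASPECT:         # condition 2: aspect-ratio filter
            continue
        boxes.append((x, y, x + w, y + h))     # (r_left, r_top, r_right, r_bottom)
    return boxes
```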
The next step is to extract the appearance features of the moving object in each image to establish the appearance model of the moving object (step S220). In detail, the present invention proposes a new appearance description which considers the color structure and derives more meaningful appearance features through a loose body segmentation. The so-called loose body segmentation divides the rectangular area enclosing a pedestrian into a plurality of blocks and extracts the color distribution of each block; for example, the rectangular area can be divided into a head block, a body block and a leg block according to the ratio 2:4:4, corresponding to the pedestrian's head, body and legs respectively. Since the color feature of the head is affected by the facing direction and its discriminability is not significant, the information of the head block can be ignored.
For instance, Fig. 3(a), 3(b) and 3(c) are schematic diagrams of the appearance model of a moving object according to an embodiment of the invention. Fig. 3(a) shows a pedestrian image whose rectangular area has passed the above two filtering conditions and can therefore be called a pedestrian candidate P, and Fig. 3(b) is its corresponding connected object. After connected-object labeling, the median of the color distribution is recursively taken from the body block and the leg block in Fig. 3(c), and a binary tree describing the color distribution is then built.
Fig. 4 is a schematic diagram of the binary tree of the color distribution according to an embodiment of the invention. Referring to Fig. 4, M denotes the median of a certain color distribution in the body block or the leg block, and ML and MH are the medians of the two color distributions obtained after the distribution is divided by M; the branches MLL, MLH, MHL and MHH can be deduced by analogy. The color distribution can be any color feature in the RGB (red-green-blue) color space or the HSI (hue-saturation-intensity) color space, or even in other color spaces, without limitation. For convenience of description, the present embodiment selects the RGB color space and builds a binary tree containing three levels of color distributions, which forms a 24-dimensional feature vector f = [R_MLL^body, G_MLL^body, B_MLL^body, R_MLH^body, ..., R_MHH^legs, G_MHH^legs, B_MHH^legs] to describe the pedestrian's appearance features. After the feature vector is obtained, each pedestrian candidate can be represented by its spatial information and appearance model in the image.
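The appearance feature of step S220 can be sketched as follows, assuming the recursive median split is applied independently to each RGB channel of the body and leg blocks (one plausible reading of Fig. 4); appearance_feature and median_tree_leaves are illustrative helper names.
```python
# A minimal sketch of the 24-dimensional appearance feature: a 2:4:4 head/body/leg
# split, with the head block ignored, and a two-level median binary tree per channel.
import numpy as np

def median_tree_leaves(values, depth=2):
    """Recursively split values at their median and return the medians of the
    deepest sub-ranges (4 leaves for depth=2, i.e. MLL, MLH, MHL, MHH)."""
    m = float(np.median(values))
    if depth == 0:
        return [m]
    low, high = values[values <= m], values[values > m]
    if low.size == 0 or high.size == 0:          # degenerate split (uniform block)
        return [m] * (2 ** depth)
    return (median_tree_leaves(low, depth - 1)
            + median_tree_leaves(high, depth - 1))

def appearance_feature(patch):
    """patch: HxWx3 RGB crop of a pedestrian candidate -> 24-dim feature vector f."""
    h = patch.shape[0]
    body = patch[int(0.2 * h):int(0.6 * h)]      # 2:4:4 split; head block is ignored
    legs = patch[int(0.6 * h):]
    f = []
    for block in (body, legs):
        for c in range(3):                        # R, G, B channels
            f.extend(median_tree_leaves(block[:, :, c].ravel().astype(float)))
    return np.asarray(f)                          # 2 blocks x 3 channels x 4 leaves = 24 dims
```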
After the spatial information and the appearance model of the moving object are obtained, the present embodiment further combines these two kinds of information to track the moving path of the moving object in the images (step S230). The present embodiment achieves moving object tracking with a tracking method based on Bayesian decision, which considers the appearance and position of the moving object in two adjacent images and uses Bayesian decision to make the best association, thereby tracking the moving object.
In detail, suppose that at time t the list of rectangles and appearance models obtained by object detection for n pedestrian candidates is denoted C = {P_j^t | j = 1, 2, ..., n}, and the history maintained by the Bayesian tracker before time t-1 is a list of m visitor hypotheses M = {H_i^t | i = 1, 2, ..., m}. A visitor hypothesis refers to τ consecutive, mutually associated pedestrian candidate images tracked over continuous time, i.e. H = {P^{t-τ}, P^{t-τ+1}, ..., P^t, ρ}, where P^{t-τ} is the pedestrian candidate rectangle in which the visitor appears for the first time, and so on for the rest. In addition, ρ is called the confidence index, which increases or decreases with the success or failure of object tracking. When the confidence index is greater than an upper-bound threshold, the visitor hypothesis is considered to have a sufficient confidence level and is turned into an entity visitor; conversely, if the confidence index falls below zero, the moving object is considered to have left the monitored scene, and the visitor hypothesis is removed from the list M maintained by the Bayesian tracker. The above Bayesian object tracking can be divided into three stages, learning, association and updating, which are described in detail below with reference to an embodiment.
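Before those stages are described, the bookkeeping above can be summarized with a small sketch; the class names Candidate and Hypothesis are assumptions, and the constants RHO_MAX and DELTA_RHO use the example values 1 and 0.1 given in the embodiment below.
```python
# A minimal sketch of the tracker's data structures: a pedestrian candidate P_j^t
# and a visitor hypothesis H = {P^{t-tau}, ..., P^t, rho} with its confidence index.
from dataclasses import dataclass, field
from typing import List, Tuple
import numpy as np

RHO_MAX = 1.0       # preset maximum of the confidence index (example value)
DELTA_RHO = 0.1     # increment / decrement of the confidence index (example value)

@dataclass
class Candidate:                               # one P_j^t
    box: Tuple[int, int, int, int]             # b = (r_left, r_top, r_right, r_bottom)
    feature: np.ndarray                        # 24-dim appearance vector f

@dataclass
class Hypothesis:                              # one visitor hypothesis H_i
    candidates: List[Candidate] = field(default_factory=list)
    rho: float = 0.0                           # confidence index

    def on_match(self, cand: Candidate) -> None:
        """A candidate was associated: append it and raise the confidence index."""
        self.candidates.append(cand)
        self.rho = min(self.rho + DELTA_RHO, RHO_MAX)

    def on_miss(self) -> bool:
        """No candidate matched: lower the confidence; True means remove from M."""
        self.rho -= DELTA_RHO
        return self.rho < 0
```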
Fig. 5 is a flowchart of the Bayesian object tracking method according to an embodiment of the invention. Referring to Fig. 5, first, in the learning stage, the present embodiment provides a visitor hypothesis list M (step S510), which contains a plurality of visitor hypotheses that have been tracked and associated over continuous time.
Then, for each visitor hypothesis H_i in the list M, it is checked whether its length (the time it has been tracked) in the images exceeds a first preset time L_1 (step S520). If its length is shorter than L_1, the hypothesis is considered to be still in the learning stage, and the pedestrian candidates of adjacent frames are associated only by spatial correlation (step S530). For instance, if the rectangular area b_i^{t-1} belonging to visitor hypothesis H_i^{t-1} and the rectangular area b_j^t of pedestrian candidate P_j^t in the current frame have a spatial overlapping relation, the visitor hypothesis is updated to H_i^t by adding the pedestrian candidate P_j^t.
Then, in the association stage, i.e. when the length of a visitor hypothesis H_i^{t-1} is greater than the first preset time L_1, its state is being stably tracked. Now not only the spatial correlation but also the appearance features of the object are considered, and the visitor hypothesis is associated with a pedestrian candidate by Bayesian decision (step S540). In detail, this step uses the above spatial information to calculate a spatially correlated prior probability of the corresponding moving object in two adjacent images, uses the above appearance information to calculate a similarity of the corresponding moving object in the two adjacent images, and then combines the prior probability and the similarity in the Bayesian tracker to judge whether the visitor hypothesis and the pedestrian candidate are associated. For instance, formula (1) is the discriminant function of the Bayesian decision:
BD(H_i^{t-1}, P_j^t) = P(C_H | P_j^t) / P(C̄_H | P_j^t)
                     = (p(C_H) · p(P_j^t | C_H)) / (p(C̄_H) · p(P_j^t | C̄_H))    (1)
where the likelihood function p(P_j^t | C_H) expresses the probability that P_j^t belongs to H_i^{t-1}, and conversely p(P_j^t | C̄_H) is the probability that P_j^t does not belong to H_i^{t-1}. Therefore, if BD is greater than 1, the decision favors P_j^t belonging to H_i^{t-1}, and the two are associated. The likelihood function p(P_j^t | C_H) in formula (1) is represented by a multivariate normal distribution N(μ, Σ), as shown in formula (2):
p(P_j^t | C_H) = (1 / sqrt(det Σ · (2π)^d)) · exp(-(1/2) · (f_j^t - μ)^T Σ^{-1} (f_j^t - μ))    (2)
where μ and Σ are the mean and covariance matrix of the past L_1 feature vectors (from f^{t-L_1} to f^{t-1}), computed as follows:
μ = (1/L_1) · Σ_{k=t-L_1}^{t-1} f_k    (3)
Σ(x, y) = (1/L_1) · Σ_{k=t-L_1}^{t-1} σ_xy    (4)
where
σ_xy = (f_x - μ_x)(f_y - μ_y)    (5)
The likelihood function p(P_j^t | C̄_H) is represented here by a uniform distribution. On the other hand, the prior probabilities p(C_H) and p(C̄_H) reflect prior knowledge about whether the event occurs, and this prior knowledge here corresponds to the spatial correlation. In other words, the nearer the distance between b_j^t and b_i^{t-1}, the larger the prior probability; p(C_H) and p(C̄_H) are represented here by a distance-based exponential function, as shown in (6) and (7):
p(C_H) = exp(-D(b_j^t, b_i^{t-1}) / σ_D^2)    (6)
p(C̄_H) = 1 - p(C_H)    (7)
where σ_D is a parameter controlled by the user, which can be adjusted according to the speed of moving objects in the frame. Considering the above spatial correlation and appearance features, it can be judged whether a visitor hypothesis is associated with a pedestrian candidate (step S550). In the updating stage, if H_i^{t-1} and P_j^t have been judged as associated by the above stages, P_j^t is added to H_i^{t-1} and the visitor hypothesis is updated to H_i^t = {P_i^1, P_i^2, ..., P_i^{t-1}, P_j^t, ρ_i} (step S560). Meanwhile, a constant Δρ is added to raise the confidence index ρ_i of this hypothesis, until ρ_i reaches a preset maximum value ρ_max. Conversely, if H_i^{t-1} cannot be associated with any pedestrian candidate in the frame, its confidence index ρ_i is reduced by the constant Δρ; if its value falls below zero, the visitor hypothesis is removed from the hypothesis list M, indicating that the visitor has left the monitored frame. On the other hand, if a pedestrian candidate in frame t cannot be associated with any visitor hypothesis, it represents a newly entering visitor, so a new visitor hypothesis H_{m+1}^t is added to the list M (step S570) and given ρ_{m+1} = 0. In the present embodiment, ρ_max is set to 1 and Δρ is set to 0.1, but the invention is not limited to this.
In order to identify the appearance of visitors entering and leaving the scene, and to analyze the behavior and the time points at which the same pedestrian enters and leaves the scene, the present invention configures a visitor database to record each visitor's appearance model and access times. The management flow of the visitor database, shown in Fig. 6, is described as follows:
First, a visitor hypothesis is newly added to the tracker (step S610), and it is judged whether the length of the visitor hypothesis reaches an integer multiple of a second preset time L_2 (step S620). When it reaches L_2, the average feature vector and covariance matrix are calculated from its past L_2 appearance features, and a Gaussian function is used to describe the current appearance model V = {N(μ, Σ), {s}} (step S630), where {s} is a sequence of constants recording the times at which the appearance model is established. Then it is judged whether the hypothesis length equals L_2 (step S640), in which case its appearance model is associated with the visitor appearance models recorded in the visitor database (step S650). If one or more appearance models in the database are similar to this visitor (their distance is less than a threshold T), the visitor has visited this scene before, and the visitor's appearance model is associated with the closest model V_k in the visitor database, which is updated to Ṽ_k through formulas (8), (9) and (10) (step S660):
Ṽ_k = {N(μ̃_k, Σ̃_k), {s_k^1, s_k^2, ..., s_k^u, s_i^1, s_i^2, ..., s_i^v}}    (8)
μ̃_k = (u·μ_k + v·μ_i) / (u + v)    (9)
σ̃_k^2(x, y) = (u·σ_k^2(x, y) + v·σ_i^2(x, y)) / (u + v)    (10)
where σ^2(x, y) denotes the element (x, y) of the covariance matrix, and u and v are the lengths of the time series of the two appearance models, used as the weights for updating; since V_i is a newly established appearance model, its v value is 1. Otherwise, if V_i cannot be associated with any appearance model in the visitor database, it represents a newly observed visitor, and its appearance model and time stamps are added to the visitor database (step S670). Here the distance between two appearance models (each a d-dimensional Gaussian distribution, N_1(μ_1, Σ_1) and N_2(μ_2, Σ_2)) is calculated by the following distance formula:
D(V(N_1), V(N_2)) = (D_KL(N_1||N_2) + D_KL(N_2||N_1)) / 2    (11)
where
D_KL(N_1||N_2) = (1/2) · ( ln(det Σ_2 / det Σ_1) + tr(Σ_2^{-1} Σ_1) + (μ_2 - μ_1)^T Σ_2^{-1} (μ_2 - μ_1) - d )    (12)
It is worth mentioning that if the length of the visitor hypothesis is two or more times L_2, it has already been associated with the visitor database, and only its corresponding appearance model in the visitor database needs to be continuously updated through formula (8).
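The database association of steps S640-S670 can be sketched as follows, using the symmetric KL distance of formulas (11)-(12) and the weighted merge of formulas (9)-(10); the dict-based record layout and the helper names are assumptions for the example.
```python
# A minimal sketch of visitor-database association: compute the symmetric KL
# distance to every stored model and either merge into the closest one or add a
# new record.
import numpy as np

def kl_divergence(mu1, cov1, mu2, cov2):
    """Formula (12): KL divergence between two d-dimensional Gaussians."""
    d = len(mu1)
    inv2 = np.linalg.inv(cov2)
    diff = mu2 - mu1
    return 0.5 * (np.log(np.linalg.det(cov2) / np.linalg.det(cov1))
                  + np.trace(inv2 @ cov1) + diff @ inv2 @ diff - d)

def model_distance(m1, m2):
    """Formula (11): symmetrized KL distance between appearance models (mu, cov)."""
    return 0.5 * (kl_divergence(*m1, *m2) + kl_divergence(*m2, *m1))

def merge_models(mu_k, cov_k, u, mu_i, cov_i, v=1):
    """Formulas (9)-(10): merge a new model (weight v) into a stored one (weight u)."""
    mu = (u * mu_k + v * mu_i) / (u + v)
    cov = (u * cov_k + v * cov_i) / (u + v)
    return mu, cov

def associate(database, new_model, new_times, T):
    """database: list of dicts {'mu', 'cov', 'times'}; returns the updated database."""
    dists = [model_distance((rec['mu'], rec['cov']), new_model) for rec in database]
    if dists and min(dists) < T:                       # visitor seen before
        k = int(np.argmin(dists))
        rec = database[k]
        u, v = len(rec['times']), len(new_times)
        rec['mu'], rec['cov'] = merge_models(rec['mu'], rec['cov'], u,
                                             new_model[0], new_model[1], v)
        rec['times'].extend(new_times)                 # formula (8): concatenate time marks
    else:                                              # newly observed visitor
        database.append({'mu': new_model[0], 'cov': new_model[1],
                         'times': list(new_times)})
    return database
```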
Regarding the threshold T mentioned above as the basis for judging whether two appearance models are related: when the distance between two appearance models is greater than the threshold T, the two appearance models are judged to come from different visitors; conversely, if the distance is less than the threshold T, the two are judged to be associated, and hence to belong to the same visitor.
In order to calculate an optimal threshold T, the present invention proposes an unsupervised learning strategy in which the system automatically learns and updates from the video to obtain the best appearance discriminability. It considers the following two kinds of events. Event A occurs when the same visitor has been tracked continuously and stably: as shown in Fig. 7(a) and Fig. 7(b), when a visitor has been stably tracked by the system over a time span of 2·L_2, there is enough confidence to believe that the two appearance models V_1' = AM(P^t, P^{t-1}, ..., P^{t-L_2+1}) and V_1 = AM(P^{t-L_2}, P^{t-L_2-1}, ..., P^{t-2L_2+1}) come from the same visitor, so the distance D(V_1', V_1) between these two appearance models is calculated and used as a feature value of event A. Event B occurs when two visitors appear in the frame at the same time and are both stably tracked: as shown in Fig. 7(c), when two visitors appear simultaneously in the same frame, there is enough confidence to believe that their two appearance models come from different visitors, so their distance D(V_2, V_3) is calculated and used as a feature value of event B.
When the quantities of event A and event B collected by the system reach a certain amount, a statistical analysis is performed on them, as shown in Fig. 8. Since the feature values of event A are distances between appearance models established for the same visitor at two different time points, their values are concentrated near zero; the feature values of event B are distances between the appearance models of two different objects, so their values lie farther from zero and are more dispersed. By calculating the mean and standard deviation of each of these two kinds of events and representing their distance data with normal distributions, the first distance distribution and the second distance distribution can be established. The best boundary between the first distance distribution and the second distance distribution can then be found by the following equation (13) and used as the threshold T for distinguishing appearance models:
(1 / (σ_A·sqrt(2π))) · e^(-(1/2)·((μ_A - T)/σ_A)^2) = (1 / (σ_B·sqrt(2π))) · e^(-(1/2)·((μ_B - T)/σ_B)^2)    (13)
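One way to obtain T from equation (13) is to take logarithms on both sides, which turns it into a quadratic in T whose root between μ_A and μ_B is the decision boundary; the sketch below assumes this reading.
```python
# A minimal sketch of solving equation (13) for the threshold T by equating the
# two Gaussian log-densities and solving the resulting quadratic.
import numpy as np

def optimal_threshold(mu_A, sigma_A, mu_B, sigma_B):
    # log form of (13): -ln(sA) - (mu_A-T)^2/(2 sA^2) = -ln(sB) - (mu_B-T)^2/(2 sB^2)
    a = 1.0 / (2 * sigma_B ** 2) - 1.0 / (2 * sigma_A ** 2)
    b = mu_A / sigma_A ** 2 - mu_B / sigma_B ** 2
    c = (mu_B ** 2 / (2 * sigma_B ** 2) - mu_A ** 2 / (2 * sigma_A ** 2)
         + np.log(sigma_B / sigma_A))
    if abs(a) < 1e-12:                       # equal variances: single crossing point
        return -c / b
    roots = np.roots([a, b, c])
    lo, hi = sorted((mu_A, mu_B))
    for r in roots:                          # prefer the root between the two means
        if np.isreal(r) and lo <= r.real <= hi:
            return float(r.real)
    return float(roots[np.argmin(np.abs(roots - (mu_A + mu_B) / 2))].real)
```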
Finally, for the appearance model and the dwell time of each visitor recorded in the above visitor database, the present invention further applies them to loitering detection. By analyzing the time series {s} recorded for each visitor's appearance model in the visitor database, loitering can be judged with the conditions of the following formulas (14) and (15):
s_t - s_1 > α    (14)
s_i - s_{i-1} < β, 1 < i ≤ t    (15)
Formula (14) means that, from the first detection until now, the visitor has appeared in the frame for more than a preset time α, and formula (15) means that the intervals between consecutive detections of the visitor are all less than a preset time β.
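The two loitering conditions can be checked on a visitor's recorded time series as in the following sketch; is_loitering is an illustrative helper name, and alpha and beta correspond to the preset times α and β.
```python
# A minimal sketch of the loitering rules (14) and (15) applied to the time
# series {s} of one visitor record.
def is_loitering(s, alpha, beta):
    """s: sorted list of time stamps at which the visitor's appearance model was updated."""
    if len(s) < 2:
        return False
    appeared_long_enough = (s[-1] - s[0]) > alpha                 # formula (14)
    gaps_short_enough = all((s[i] - s[i - 1]) < beta              # formula (15)
                            for i in range(1, len(s)))
    return appeared_long_enough and gaps_short_enough
```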
In summary, the method for tracking a moving object of the present invention combines techniques such as moving object tracking, visitor database management and adaptive threshold learning: an appearance model is established from the appearance features of the moving object in a plurality of images and compared with the data in the visitor database configured in the system memory, so that the tracking of a visitor can remain uninterrupted. Even if a visitor leaves the monitored frame and then enters it again, the visitor can still be successfully associated with his or her previous behavior, thereby assisting the monitoring personnel to notice abnormal behaviors early and react accordingly.
Although the present invention has been disclosed above by way of embodiments, they are not intended to limit the present invention. Those skilled in the art may make slight changes and modifications without departing from the spirit and scope of the present invention, and therefore the protection scope of the present invention shall be determined by the appended claims.

Claims (21)

1. A method for tracking a moving object, comprising the following steps:
detecting a moving object in a plurality of consecutive images, so as to detect the moving object and obtain spatial information of the moving object in each of the images;
accumulating a dwell time during which the moving object stays in the images;
extracting appearance features of the moving object in each of the images, so as to establish an appearance model of the moving object;
combining the spatial information and the appearance model of the moving object, so as to track a moving path of the moving object in the images;
recording the dwell time and the appearance model of the moving object in a database; and
analyzing a time series of the moving object in the database, so as to judge whether the moving object meets a loitering event.
2. The method for tracking a moving object as claimed in claim 1, wherein the step of detecting the moving object comprises:
subtracting a background image from the images.
3. The method for tracking a moving object as claimed in claim 2, wherein after the step of subtracting the background image from the images, the method further comprises:
marking a plurality of connected regions in the images;
estimating a rectangular area enclosing the connected regions, the rectangular area enclosing the moving object; and
judging whether the moving object enclosed by the rectangular area is a tracking target.
4. The method for tracking a moving object as claimed in claim 3, wherein in the step of detecting the moving object in the plurality of consecutive images, after the moving object is detected, the method further comprises:
filtering out moving objects that are not the tracking target.
5. The method for tracking a moving object as claimed in claim 4, wherein the step of judging whether the moving object is the tracking target comprises:
judging whether an area of the rectangular area is greater than a first preset value; and
when the area is greater than the first preset value, judging that the moving object enclosed by the rectangular area is the tracking target.
6. The method for tracking a moving object as claimed in claim 4, wherein the step of judging whether the moving object is the tracking target comprises:
judging whether an aspect ratio of the rectangular area is greater than a second preset value; and
when the aspect ratio is greater than the second preset value, judging that the moving object enclosed by the rectangular area is the tracking target.
7. The method for tracking a moving object as claimed in claim 3, wherein the step of extracting the appearance features of the moving object in each of the images to establish the appearance model of the moving object comprises:
dividing the rectangular area into a plurality of blocks, and extracting a color distribution of each of the blocks;
recursively taking a median of the color distribution in each of the blocks, so as to build a binary tree describing the color distribution; and
choosing the color distributions of the branches of the binary tree as a feature vector of the appearance model of the moving object.
8. The method for tracking a moving object as claimed in claim 3, wherein the step of judging whether the moving object enclosed by the rectangular area is the tracking target comprises:
judging whether the moving object enclosed by the rectangular area is a pedestrian.
9. The method for tracking a moving object as claimed in claim 8, wherein the step of dividing the rectangular area into the plurality of blocks comprises dividing the rectangular area into a head block, a body block and a leg block in a ratio of 2:4:4.
10. The method for tracking a moving object as claimed in claim 9, wherein the step of extracting the color distribution of each of the blocks comprises ignoring the color distribution of the head block.
11. The method for tracking a moving object as claimed in claim 10, wherein the color distribution comprises color features in an RGB (red-green-blue) color space or an HSI (hue-saturation-intensity) color space.
12. The method for tracking a moving object as claimed in claim 3, wherein after the step of accumulating the dwell time during which the moving object stays in the images, the method further comprises:
judging whether the dwell time during which the moving object stays in the images exceeds a first preset time; and
extracting the appearance features of the moving object when the dwell time exceeds the first preset time.
13. The method for tracking a moving object as claimed in claim 12, wherein the step of combining the spatial information and the appearance model of the moving object to track the moving path of the moving object in the images comprises:
using the spatial information to calculate a spatially correlated prior probability of the corresponding moving object in two adjacent images;
using the appearance information to calculate a similarity of the corresponding moving object in the two adjacent images; and
combining the prior probability and the similarity in a tracker based on Bayesian decision, so as to judge the moving path of the moving object in the adjacent images.
14. The method for tracking a moving object as claimed in claim 12, wherein the step of recording the dwell time and the appearance model of the moving object in the database is performed when the dwell time is judged to exceed the first preset time.
15. The method for tracking a moving object as claimed in claim 14, wherein the step of recording the dwell time and the appearance model of the moving object in the database comprises:
associating the appearance model of the moving object with a plurality of appearance models in the database, so as to judge whether the appearance model of the moving object has already been recorded in the database;
if the appearance model of the moving object has already been recorded in the database, recording only the dwell time of the moving object in the database; and
if the appearance model of the moving object has not been recorded in the database, recording the dwell time and the appearance model of the moving object in the database.
16. The method for tracking a moving object as claimed in claim 15, wherein the step of associating the appearance model of the moving object with the appearance models in the database to judge whether the appearance model of the moving object has already been recorded in the database comprises:
calculating a distance between the appearance model of the moving object and each of the appearance models in the database, and judging whether the distance is less than a threshold; and
if the distance of one of the appearance models is less than the threshold, updating that appearance model in the database with the appearance model of the moving object.
17. The method for tracking a moving object as claimed in claim 16, wherein the step of updating the appearance model in the database with the appearance model of the moving object comprises:
selecting, among the appearance models whose distances are less than the threshold, the appearance model most similar to the appearance model of the moving object, and updating that appearance model in the database.
18. The method for tracking a moving object as claimed in claim 17, wherein the step of calculating the distance between the appearance model of the moving object and each of the appearance models in the database comprises:
calculating first distances between appearance models established for the same moving object at two different time points in the images, so as to build a first distance distribution;
calculating second distances between the appearance models of two moving objects in each of the images, so as to build a second distance distribution; and
finding a boundary between the first distance distribution and the second distance distribution, to be used as the criterion for distinguishing the appearance models.
19. The method for tracking a moving object as claimed in claim 18, wherein the step of building the first distance distribution and the second distance distribution comprises:
respectively calculating a mean and a standard deviation of the first distances and of the second distances; and
representing the data of the first distances and the second distances as normal distributions according to the means and the standard deviations, so as to build the first distance distribution and the second distance distribution.
20. The method for tracking a moving object as claimed in claim 14, wherein the step of analyzing the time series of the moving object in the database to judge whether the moving object meets the loitering event comprises:
judging whether the time during which the moving object has continuously appeared in the images exceeds a second preset time; and
when the moving object has continuously appeared in the images for more than the second preset time, judging that the moving object meets the loitering event.
21. The method for tracking a moving object as claimed in claim 14, wherein the step of analyzing the time series of the moving object in the database to judge whether the moving object meets the loitering event comprises:
judging whether a time interval during which the moving object leaves the images is less than a third preset time; and
when the time interval during which the moving object leaves the images is less than the third preset time, judging that the moving object meets the loitering event.
CN200810179785.8A 2008-12-03 2008-12-03 Method for tracking moving object Active CN101751549B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200810179785.8A CN101751549B (en) 2008-12-03 2008-12-03 Method for tracking moving object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200810179785.8A CN101751549B (en) 2008-12-03 2008-12-03 Method for tracking moving object

Publications (2)

Publication Number Publication Date
CN101751549A CN101751549A (en) 2010-06-23
CN101751549B true CN101751549B (en) 2014-03-26

Family

ID=42478515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200810179785.8A Active CN101751549B (en) 2008-12-03 2008-12-03 Method for tracking moving object

Country Status (1)

Country Link
CN (1) CN101751549B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103324906B (en) * 2012-03-21 2016-09-14 日电(中国)有限公司 A kind of method and apparatus of legacy detection
CN103970262B (en) * 2013-02-06 2018-01-16 原相科技股份有限公司 Optical profile type pointing system
CN103824299B (en) * 2014-03-11 2016-08-17 武汉大学 A kind of method for tracking target based on significance
CN105574511B (en) * 2015-12-18 2019-01-08 财团法人车辆研究测试中心 Have the adaptability object sorter and its method of parallel framework
CN107305378A (en) * 2016-04-20 2017-10-31 上海慧流云计算科技有限公司 A kind of method that image procossing follows the trail of the robot of object and follows the trail of object
CN108205643B (en) * 2016-12-16 2020-05-15 同方威视技术股份有限公司 Image matching method and device
CN110032917A (en) * 2018-01-12 2019-07-19 杭州海康威视数字技术股份有限公司 A kind of accident detection method, apparatus and electronic equipment
CN109117721A (en) * 2018-07-06 2019-01-01 江西洪都航空工业集团有限责任公司 A kind of pedestrian hovers detection method
TWI697868B (en) * 2018-07-12 2020-07-01 廣達電腦股份有限公司 Image object tracking systems and methods
CN109102669A (en) * 2018-09-06 2018-12-28 广东电网有限责任公司 A kind of transformer substation auxiliary facility detection control method and its device
CN111815671B (en) * 2019-04-10 2023-09-15 曜科智能科技(上海)有限公司 Target quantity counting method, system, computer device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1141427A (en) * 1995-12-29 1997-01-29 西安交通大学 Method for measuring moving articles based on pattern recognition
CN1766928A (en) * 2004-10-29 2006-05-03 中国科学院计算技术研究所 A kind of motion object center of gravity track extraction method based on the dynamic background sport video
CN101170683A (en) * 2006-10-27 2008-04-30 松下电工株式会社 Target moving object tracking device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6295367B1 (en) * 1997-06-19 2001-09-25 Emtera Corporation System and method for tracking movement of objects in a scene using correspondence graphs
JP3880759B2 (en) * 1999-12-20 2007-02-14 富士通株式会社 Moving object detection method

Also Published As

Publication number Publication date
CN101751549A (en) 2010-06-23

Similar Documents

Publication Publication Date Title
CN101751549B (en) Method for tracking moving object
US8243990B2 (en) Method for tracking moving object
CN102831439B (en) Gesture tracking method and system
US8213679B2 (en) Method for moving targets tracking and number counting
Chen et al. A people counting system based on face-detection
CN107230267B (en) Intelligence In Baogang Kindergarten based on face recognition algorithms is registered method
CN101561928B (en) Multi-human body tracking method based on attribute relational graph appearance model
CN101799968B (en) Detection method and device for oil well intrusion based on video image intelligent analysis
CN106023244A (en) Pedestrian tracking method based on least square locus prediction and intelligent obstacle avoidance model
CN103854027A (en) Crowd behavior identification method
CN102214359B (en) Target tracking device and method based on hierarchic type feature matching
Tan et al. Dynamic hand gesture recognition using motion trajectories and key frames
CN105654139A (en) Real-time online multi-target tracking method adopting temporal dynamic appearance model
CN106355604A (en) Target image tracking method and system
Hsu et al. Passenger flow counting in buses based on deep learning using surveillance video
Wong et al. Recognition of pedestrian trajectories and attributes with computer vision and deep learning techniques
Chen et al. Multimedia data mining for traffic video sequences
CN113763427B (en) Multi-target tracking method based on coarse-to-fine shielding processing
CN106778637B (en) Statistical method for man and woman passenger flow
CN103390151A (en) Face detection method and device
CN113378675A (en) Face recognition method for simultaneous detection and feature extraction
CN106127798B (en) Dense space-time contextual target tracking based on adaptive model
CN109508657B (en) Crowd gathering analysis method, system, computer readable storage medium and device
Lin et al. Face occlusion detection for automated teller machine surveillance
CN109977796A (en) Trail current detection method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant