CN102999918B - Multi-target object tracking system of panorama video sequence image - Google Patents


Info

Publication number
CN102999918B
Authority
CN
China
Prior art keywords
image
formula
coordinate
destination object
pix
Prior art date
Legal status
Active
Application number
CN201210116956.9A
Other languages
Chinese (zh)
Other versions
CN102999918A (en)
Inventor
汤一平
严杭晨
田旭园
马宝庆
孟焱
叶良波
俞立
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201210116956.9A
Publication of CN102999918A
Application granted
Publication of CN102999918B
Legal status: Active
Anticipated expiration


Landscapes

  • Image Analysis (AREA)

Abstract

A multi-target object tracking system for panoramic video sequence images comprises a panoramic camera device for acquiring a large-area scene and a microprocessor for the video analysis and processing of the panoramic images captured by the panoramic camera device. For extracting moving target objects and tracking simple multi-target cases, an MHoEI (Motion History or Energy Images) algorithm is used, which has low computational complexity, simple parameter and threshold selection, and is convenient to realize in a system-on-chip. To track multiple target objects effectively through occlusion, separation and merging in the scene, the targets are matched in turn by their motion, color and shape features according to the matching results, which improves tracking efficiency while also improving the robustness of multi-target tracking.

Description

Multi-target object tracking system for panoramic video sequence images
Technical field
The present invention relates to the application of computer vision and omnidirectional vision technologies to panoramic intelligent video analysis, and in particular to a multi-target object tracking system for panoramic video sequence images.
Background technology
Target object tracking is the prerequisite for analyzing and recognizing target behavior, and it plays an extremely important role in intelligent video analysis. Target tracking analyzes the dynamic image sequence captured by a camera and solves the target matching problem between consecutive image frames based on attribute features such as shape, size, position, motion direction, speed, color and texture.
A target detection system with high robustness needs image processing functions such as motion detection and shadow and noise elimination, and realizing these functions currently requires tuning various parameters in advance. Existing intelligent video analysis systems usually adopt fixed parameter values, which easily leads to poor detection results under different scenes and in different applications. Moreover, parameter tuning depends on expert experience, which has become a bottleneck for large-scale commercial application. How to design a tracking algorithm that is efficient, robust, computationally light and convenient to implement in hardware is therefore an urgent problem in intelligent video analysis.
There is no authoritative taxonomy of target tracking algorithms, and different classification criteria yield different results. They can generally be divided into the following classes: tracking based on model matching, on deformable templates, on region matching, on feature matching, on motion characteristics, and on probability statistics. All of the above track according to some attribute of the object; among them, tracking based on motion characteristics is the most efficient and the closest to human vision.
The frame difference method is a main method among motion-based tracking algorithms and the most straightforward way to detect moving targets in an image sequence; it requires no background model, has a small computational load and is easy to implement in hardware. It can be divided into two-frame difference and symmetric difference.
Two-frame difference: the region where the gray value of the difference image is non-zero is the range of change caused by target motion. However, this range only represents the relative position change of the moving object between the two frames; it cannot recover the concrete shape of the moving object and is insensitive to slowly moving objects, so it has certain limitations.
Symmetric difference: the symmetric difference of three consecutive image frames compensates for the limitations of the two-frame difference. Differencing adjacent frames quickly detects the range of target motion from the significant differences between the two frames, and AND-ing the differences over three consecutive frames recovers the shape contour of the moving target in the intermediate frame fairly well.
Adopting different tracking strategies for targets in different states (moving or stationary) in the video scene, handling the handover between strategies well, and achieving real-time, continuous and stable tracking of multiple targets that enter the panoramic video scene, whether moving or stationary, is the problem this invention sets out to solve.
Summary of the invention
To overcome the shortcomings of existing target tracking in video sequence images, namely heavy computation, limited tracking range, tracking parameters that are hard to set, poor algorithm robustness, difficulty in fusing tracking strategies and difficulty of hardware implementation, the invention provides a panoramic video image target tracking system that is efficient, robust, computationally light and convenient to implement in hardware.
The technical solution adopted for the present invention to solve the technical problems is:
A multi-target object tracking system for panoramic video sequence images comprises an omnidirectional camera device, denoted ODVS, that captures panoramic video images of the target objects in the whole scene. The ODVS is placed above the middle of the monitored scene and is connected to a microprocessor through a USB interface; the microprocessor is connected to a PC through a computer network. The microprocessor comprises:
a video image reading unit, for reading the panoramic image captured by the ODVS through the USB interface and submitting the read panoramic image to the video image unfolding unit and the video image storage unit;
a video image unfolding unit, for unfolding the panoramic image into a cylindrical image and submitting the unfolded panoramic cylindrical image to the target object detection unit;
a target object detecting and tracking unit, for detecting the moving target objects present in the panoramic cylindrical unfolded image and enclosing each target with a rectangle, using a Motion History or Energy Images algorithm, hereinafter the MHoEI algorithm, to extract and track target objects by means of their motion history and energy images;
the PC carries out the formalization and behavior semantization of the panoramic video images, and comprises a multi-target object tracking unit for tracking multiple targets effectively through occlusion, separation and merging in the scene.
Further, in the target object detecting and tracking unit, the MHoEI algorithm extracts and tracks target objects by means of their motion history and energy images, expressed by formula (3):

$$
H_\tau(x,y,t)=\begin{cases}
\tau & \text{if } D(x,y,t)=1\\
\max\bigl(0,\,H_\tau(x,y,t-1)\bigr) & \text{if } S\le\delta\\
\max\bigl(0,\,H_\tau(x,y,t-1)-1\bigr) & \text{otherwise}
\end{cases}\tag{3}
$$

In the formula, S is the moving speed of the target object, τ is the duration, D(x,y,t) is the binary image sequence of the moving regions, and H_τ(x,y,t−1) is the history image of the previous frame (covering the non-moving regions); the duration τ is adjusted dynamically according to the target speed S.
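For illustration, a minimal Java sketch of the per-pixel MHoEI update of formula (3), assuming the history image and motion mask are held as plain 2-D arrays; all names here are illustrative, not the patent's implementation:

```java
final class MHoEI {
    // Applies formula (3) to every pixel: h is the history image H_tau,
    // d the binary motion mask D, tau the duration, speedS the target
    // speed S, delta the speed threshold of the patent's P10 test.
    static void update(int[][] h, boolean[][] d, int tau, double speedS, double delta) {
        for (int y = 0; y < h.length; y++) {
            for (int x = 0; x < h[y].length; x++) {
                if (d[y][x]) {
                    h[y][x] = tau;                      // moving pixel: reset history to tau
                } else if (speedS <= delta) {
                    h[y][x] = Math.max(0, h[y][x]);     // slow or paused target: hold the value
                } else {
                    h[y][x] = Math.max(0, h[y][x] - 1); // otherwise: decay by one per frame
                }
            }
        }
    }
}
```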
Further again, in the target object detecting and tracking unit, based on the observation that the chromaticity of pixels in a shadow region is almost equal to the chromaticity of the background pixels, the RGB color space of the original image is converted to the HSI color space before the frame difference operation, which eliminates the shadows. The conversion from RGB to HSI is computed as in formula (7):

$$
H=\begin{cases}\theta & G\ge B\\ 2\pi-\theta & G<B\end{cases}
\qquad
S=1-\frac{3}{R+G+B}\,\min(R,G,B)\tag{7}
$$

$$
\theta=\arccos\left\{\frac{\tfrac{1}{2}\bigl[(R-G)+(R-B)\bigr]}{\bigl[(R-G)^2+(R-B)(G-B)\bigr]^{1/2}}\right\}
$$

In the formula, R, G and B are the red, green and blue components of the RGB color space; H is the hue in the HSI color space, expressed as an angle, reflecting which spectral wavelength the color is closest to; S is the saturation, characterizing the depth (purity) of the color; hue H and saturation S together are called the chromaticity;
For target objects far from the ODVS, frame differencing is performed separately on the H component and the S component, as in formula (8):

$$
IP_{L,H}Image(i,j)=\begin{cases}1 & \text{if } |Pix_{H,t}(i,j)-Pix_{H,t-3}(i,j)|>Threshold1\\ 0 & \text{else}\end{cases}\tag{8}
$$

$$
IP_{L,S}Image(i,j)=\begin{cases}1 & \text{if } |Pix_{S,t}(i,j)-Pix_{S,t-3}(i,j)|>Threshold1\\ 0 & \text{else}\end{cases}
$$

In the formula, IP_{L,H}Image(i,j) and IP_{L,S}Image(i,j) are the detection results at coordinate (i,j) of the upper-band H and S components of the current panoramic input frame, expressed as binary maps in which 1 denotes a foreground moving object and 0 the background; Pix_{H,t}(i,j) and Pix_{H,t-3}(i,j) are the pixel values at (i,j) of the upper-band H component in frames t and t−3 of the panoramic video, and Pix_{S,t}(i,j) and Pix_{S,t-3}(i,j) the corresponding S-component values; Threshold1 is the corresponding judgment threshold, set here to 45;
For target objects at a middle distance from the ODVS, frame differencing is performed separately on the H component and the S component, as in formula (9):

$$
IP_{M,H}Image(i,j)=\begin{cases}1 & \text{if } |Pix_{H,t}(i,j)-Pix_{H,t-2}(i,j)|>Threshold2\\ 0 & \text{else}\end{cases}\tag{9}
$$

$$
IP_{M,S}Image(i,j)=\begin{cases}1 & \text{if } |Pix_{S,t}(i,j)-Pix_{S,t-2}(i,j)|>Threshold2\\ 0 & \text{else}\end{cases}
$$

In the formula, IP_{M,H}Image(i,j) and IP_{M,S}Image(i,j) are the detection results at coordinate (i,j) of the middle-band H and S components of the current panoramic input frame, expressed as binary maps in which 1 denotes a foreground moving object and 0 the background; Pix_{H,t}(i,j) and Pix_{H,t-2}(i,j) are the pixel values at (i,j) of the middle-band H component in frames t and t−2 of the panoramic video, and Pix_{S,t}(i,j) and Pix_{S,t-2}(i,j) the corresponding S-component values; Threshold2 is the corresponding judgment threshold, set here to 45;
For target objects near the ODVS, frame differencing is performed separately on the H component and the S component, as in formula (10):

$$
IP_{N,H}Image(i,j)=\begin{cases}1 & \text{if } |Pix_{H,t}(i,j)-Pix_{H,t-1}(i,j)|>Threshold3\\ 0 & \text{else}\end{cases}\tag{10}
$$

$$
IP_{N,S}Image(i,j)=\begin{cases}1 & \text{if } |Pix_{S,t}(i,j)-Pix_{S,t-1}(i,j)|>Threshold3\\ 0 & \text{else}\end{cases}
$$

In the formula, IP_{N,H}Image(i,j) and IP_{N,S}Image(i,j) are the detection results at coordinate (i,j) of the lower-band H and S components of the current panoramic input frame, expressed as binary maps in which 1 denotes a foreground moving object and 0 the background; Pix_{H,t}(i,j) and Pix_{H,t-1}(i,j) are the pixel values at (i,j) of the lower-band H component in frames t and t−1 of the panoramic video, and Pix_{S,t}(i,j) and Pix_{S,t-1}(i,j) the corresponding S-component values; Threshold3 is the corresponding judgment threshold, set here to 45;
Finally, the segmented target objects are merged. On the one hand, since chromaticity consists of the two parts hue H and saturation S, an OR operation is needed; on the other hand, since in the processing of P4, P5 and P6 the whole panoramic image is divided into upper, middle and lower bands, an OR operation over the bands is also needed. This yields the segmentation image of the moving target objects over the whole panoramic image, as in formula (11):

$$
D(x,y,t)=IP_{L,H}Image(i,j)\lor IP_{L,S}Image(i,j)\lor IP_{M,H}Image(i,j)\lor IP_{M,S}Image(i,j)\lor IP_{N,H}Image(i,j)\lor IP_{N,S}Image(i,j)\tag{11}
$$

In the formula, D(x,y,t) is the detection result at coordinate (i,j) of the current panoramic input frame, and IP_{L,H}Image(i,j) through IP_{N,S}Image(i,j) are the upper-, middle- and lower-band H and S component detection results defined above.
In the video image unfolding unit, according to the center coordinates of the panoramic image and the inner and outer circle radii computed during initialization, the center of the panoramic image is taken as the origin O*(0,0) of a plane coordinate system with axes X* and Y*; the inner radius of the panoramic image is r, the outer radius is R, the middle circle radius is set by r1 = (r+R)/2, and the azimuth is β = tan⁻¹(y*/x*). The panoramic cylindrical unfolded image has its own coordinate origin O**(0,0) and axes X** and Y**, taking the intersection (r,0) of the inner circle with the X* axis as the origin and unfolding counterclockwise with azimuth β. The correspondence between any pixel P**(x**,y**) in the unfolded image and the pixel Q*(x*,y*) in the panoramic image is established by:

$$x^*=y^*\big/\tan\!\left(\frac{360\,x^{**}}{\pi(R+r)}\right)\tag{4}$$

$$y^*=(y^{**}+r)\cos\beta\tag{5}$$

In the above formulas, x**, y** are the pixel coordinates of the panoramic cylindrical unfolded image, x*, y* are the pixel coordinates of the panoramic image, R is the outer radius of the circular panoramic image, r is the inner radius, and β is the azimuth of the circular panoramic image coordinates.
In the target object detecting and tracking unit, when the MHoEI algorithm extracts and tracks targets, the ROI of each moving target is obtained; the centroid coordinate ROI_{i,m}(x,y,t) of the i-th ROI_i is computed, and together with the centroid ROI_{i,m}(x,y,t−1) obtained in the previous processing cycle it yields the speed of the i-th moving target, as in formula (15):

$$
S_i(t)=\frac{\bigl|ROI_{i,m}(x,y,t)-ROI_{i,m}(x,y,t-1)\bigr|}{\Delta t}\tag{15}
$$

In the formula, ROI_{i,m}(x,y,t) is the centroid of the i-th ROI_i in the frame being processed, ROI_{i,m}(x,y,t−1) its centroid in the previous processed frame, Δt the interval between the two frames, and S_i(t) the moving speed of the i-th ROI_i.
In the target object detecting and tracking unit, the target speed S_i(t) computed by formula (15) serves as the basis for computing the duration τ_{i,M}, as in formula (12):

$$\tau_{i,M}=k/S_i(t)\tag{12}$$

In the formula, τ_{i,M} is the duration of the i-th target object, S_i(t) its moving speed, and k a constant.
For targets at different distances from the ODVS, τ_{i,M} also needs suitable adjustment: for the same physical speed, a target near the ODVS appears to move faster on the unfolded panorama, while a target far from the ODVS appears to move slower, and formula (12) gives the τ_{i,M} for the middle-distance case. The duration is therefore normalized: the near-band duration is set to τ_{i,M} − α, the middle-band duration to τ_{i,M}, and the far-band duration to τ_{i,M} + α, where α = 2 to 4.
In the target object detecting and tracking unit, each cycle yields the i ROI regions of interest, the centroid coordinates of each region and the size of each region frame; these target extraction and tracking data, together with the panoramic video image, are supplied through a software interface for the higher-level video sequence image processing to call.
In the multi-target object tracking unit, a software object is created automatically for each target object entering the scene; the object records the target's centroid position, bounding-box size, color histogram, Hu invariant moment feature vector and trajectory timestamps.
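The patent does not reproduce the object description itself here; the following Java class is a hypothetical sketch whose field set is inferred from the attributes the tracking unit is said to store (centroid, ROI size, H/S color histogram, 7-element Hu moment vector, HHMMSS trajectory entries). All names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical per-target software object; not the patent's literal definition.
class TargetObject {
    double cx, cy;             // centroid of the ROI in the unfolded image
    double width, height;      // size of the bounding rectangle
    double[] colorHistogram;   // H/S color histogram, saved before a merge
    double[] huMoments = new double[7]; // Hu invariant moment feature vector
    final List<String> trajectory = new ArrayList<>(); // "HHMMSS x y" entries

    // Appends the current position with a system timestamp, as in step S3.
    void recordPosition(String hhmmss) {
        trajectory.add(hhmmss + " " + cx + " " + cy);
    }
}
```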
In the multi-target object tracking unit, a color histogram is used as the color feature model and as the first matching criterion when multiple targets split. Specifically, before multiple objects merge, the chromaticity information of each object is saved in its software object; when the multi-target group splits, the stored color histograms of the relevant targets are read, color histograms are computed for the ROIs involved in the split, and the Bhattacharyya distance is used as the similarity measure between two color histograms. The discrete Bhattacharyya coefficient is computed as in formula (16):

$$
\rho[P_i,Q_j]=\sum_{\mu=1}^{m}\sqrt{p_i(\mu)\,q_j(\mu)}\tag{16}
$$

In the formula, ρ[P_i,Q_j] ∈ [0,1] is the Bhattacharyya coefficient; the Bhattacharyya distance is computed by formula (17):

$$
d=\min\{d_{i,j}\}=\min\Bigl\{\sqrt{1-\rho[P_i,Q_j]}\Bigr\}\tag{17}
$$

In the formula, d is the Bhattacharyya distance; if this value is below a prescribed threshold Threshold4 the match succeeds, and targets that fail to match proceed to the second matching criterion.
In the multi-target object tracking unit, the Hu invariant moments serve as the target shape feature and as the second matching criterion when the multi-target group splits. Specifically, before multiple objects merge, the 7-element Hu invariant moment feature vector of each object is saved in its software object; when the group splits, the stored Hu moment vectors of the relevant targets are read, the Hu moment vectors of the ROIs involved in the split are computed, and the Euclidean distance is used as the similarity measure between two Hu moment vectors. The 7 Hu invariant moment features are computed as in formula (18):

$$
\begin{aligned}
\phi_1&=\eta_{20}+\eta_{02}\\
\phi_2&=(\eta_{20}-\eta_{02})^2+4\eta_{11}^2\\
\phi_3&=(\eta_{30}-3\eta_{12})^2+(3\eta_{21}-\eta_{03})^2\\
\phi_4&=(\eta_{30}+\eta_{12})^2+(\eta_{21}+\eta_{03})^2\\
\phi_5&=(\eta_{30}-3\eta_{12})(\eta_{30}+\eta_{12})\bigl[(\eta_{30}+\eta_{12})^2-3(\eta_{21}+\eta_{03})^2\bigr]\\
&\quad+(3\eta_{21}-\eta_{03})(\eta_{21}+\eta_{03})\bigl[3(\eta_{30}+\eta_{12})^2-(\eta_{21}+\eta_{03})^2\bigr]\\
\phi_6&=(\eta_{20}-\eta_{02})\bigl[(\eta_{30}+\eta_{12})^2-(\eta_{21}+\eta_{03})^2\bigr]+4\eta_{11}(\eta_{30}+\eta_{12})(\eta_{21}+\eta_{03})\\
\phi_7&=(3\eta_{21}-\eta_{03})(\eta_{30}+\eta_{12})\bigl[(\eta_{30}+\eta_{12})^2-3(\eta_{21}+\eta_{03})^2\bigr]\\
&\quad-(\eta_{30}-3\eta_{12})(\eta_{21}+\eta_{03})\bigl[3(\eta_{30}+\eta_{12})^2-(\eta_{21}+\eta_{03})^2\bigr]
\end{aligned}\tag{18}
$$

In the formula, η_pq is the normalized central moment, given by formula (19):

$$
\eta_{pq}=\mu_{pq}/\mu_{00}^{\,r}\,,\qquad r=(p+q)/2\,,\qquad p+q=2,3,\dots\tag{19}
$$

In the formula, μ_pq is the central moment, given by formula (20):

$$
\mu_{pq}=\sum_{x=1}^{M}\sum_{y=1}^{N}(x-\bar{x})^p(y-\bar{y})^q f(x,y)\tag{20}
$$

In the formula, f(x,y) is the binary map of the target object and (x̄, ȳ) are its centroid coordinates, given by formula (21):

$$
\bar{x}=m_{10}/m_{00}\,,\quad \bar{y}=m_{01}/m_{00}\,,\qquad
m_{pq}=\sum_{x=1}^{M}\sum_{y=1}^{N}x^p y^q f(x,y)\tag{21}
$$

The 7 Hu invariant moments of each target object can thus be computed via formulas (18) to (21), and the Euclidean distance of formula (22) judges the similarity of two vectors:

$$
d_{ms}=\sqrt{\sum_{i=1}^{7}(\phi_{mi}-\phi_{si})^2}\,,\qquad s\in S\tag{22}
$$

In the formula, d_ms is the Euclidean distance between the Hu invariant moment feature vector φ_si stored in an object and the Hu invariant moment feature vector φ_mi of the target being matched, and S is the number of targets that merged; if d_mj = min{d_ms}, s ∈ S, then the j-th object is judged to be the matched target.
The beneficial effects of the invention are mainly: 1. the shadows produced by moving targets are eliminated automatically in the extraction process; 2. the algorithm is simple and recursive, storing only the latest information, so the computation is fast and efficient; 3. the common low-level vision problems of intelligent video analysis, namely extracting foreground objects effectively and tracking them effectively, are solved simply, and the solution can be realized on a system-on-chip; 4. the software-object description greatly improves the real-time performance and robustness of multi-target tracking.
Accompanying drawing explanation
Fig. 1 is the processing flow chart of the multi-target object tracking system for panoramic video sequence images;
Fig. 2 is the hardware architecture diagram of the multi-target object tracking system for panoramic video sequence images;
Fig. 3 is the structural drawing of an ODVS;
Fig. 4 is a schematic diagram of the panoramic imaging of the ODVS;
Fig. 5 is a schematic diagram of ODVS imaging;
Fig. 6 is an explanatory diagram of the panoramic perspective unfolding of single-viewpoint panoramic imaging;
Fig. 7 is a schematic diagram of foreground object modeling and band division in the far, middle and near bands of the panoramic unfolded image;
Fig. 8 is the multi-target object extraction algorithm flow chart for the panoramic cylindrical unfolded video image;
Fig. 9 is the multi-target object tracking algorithm flow chart for the panoramic cylindrical unfolded video image;
Fig. 10 is an explanatory schematic diagram of simple multi-target tracking;
Fig. 11 is an explanatory schematic diagram of complex multi-target tracking;
Fig. 12 is a flow chart of foreground target object extraction.
Embodiment
The invention is further described below with reference to the accompanying drawings.
With reference to Figs. 1 to 12, a multi-target object tracking system for panoramic video sequence images comprises an omnidirectional camera device for acquiring a large-area scene, a microprocessor for the low-level processing of the panoramic images captured by the omnidirectional camera device, and a PC for the formalization and behavior semantization of the low-level results. The panoramic vision processing flow is shown in Fig. 1 and the system hardware architecture in Fig. 2. The omnidirectional camera device, referred to as ODVS and shown in Fig. 3, is placed above the middle of the monitored scene so that it captures panoramic video images of the target objects in the whole scene; a captured circular panoramic image is shown in Fig. 4. The ODVS is connected to the microprocessor through a USB interface, and the microprocessor is connected to the PC through a computer network. The microprocessor comprises hardware and software; the hardware adopts a commercially available DaVinci platform. The application software on the DaVinci platform comprises: a video image reading unit, for reading the panoramic image captured by the ODVS through the USB interface and submitting it to the video image unfolding unit and the video image storage unit; a video image unfolding unit, for unfolding the panoramic image into a cylindrical image and submitting the unfolded panoramic cylindrical image to the target object detection unit; and a target object detecting and tracking unit, for detecting the moving targets present in the unfolded image, enclosing each target with a rectangle, and extracting and tracking targets by means of their motion history and energy images using the Motion History or Energy Images algorithm, hereinafter the MHoEI algorithm. The PC mainly carries out the formalization and behavior semantization of the panoramic video images, and in this invention mainly realizes complex multi-target tracking; its multi-target object tracking unit tracks multiple targets effectively through occlusion, separation and merging in the scene.
Fig. 5 is the structural drawing of the ODVS. The ODVS comprises a hyperboloid mirror 2, an upper cover 1, a transparent housing 3, a lower fixed seat 4, a camera unit holder 5, a camera unit 6, a connection unit 7 and a cover 8. The hyperboloid mirror 2 is fixed on the upper cover 1; the lower fixed seat 4 and the transparent housing 3 are joined into one body by the connection unit 7; the transparent housing 3 is fixed by screws to the upper cover 1 and the cover 8; the camera unit 6 is fixed by screws on the camera unit holder 5, and the holder 5 is fixed by screws on the lower fixed seat 4; the output port of the camera unit 6 is a USB interface.
The main processing flow realized on the DaVinci platform is shown in Fig. 8; the process of extracting and tracking target objects from the panoramic video sequence images is explained below following the flow chart of Fig. 8.
In P1, the microprocessor reads the panoramic video image from the ODVS through the USB interface and submits the read panoramic image to P2.
P2 unfolds the panoramic video image into a cylindrical image. According to the center coordinates of the panoramic image and the inner and outer circle radii computed during initialization, as shown in Fig. 6, the center of the panoramic image is taken as the origin O*(0,0) of a plane coordinate system with axes X* and Y*; the inner radius of the panoramic image is r, the outer radius is R, the middle circle radius is set by r1 = (r+R)/2, and the azimuth is β = tan⁻¹(y*/x*). The panoramic cylindrical unfolded image has its own coordinate origin O**(0,0) and axes X** and Y**, taking the intersection (r,0) of the inner circle with the X* axis as the origin and unfolding counterclockwise with azimuth β. The correspondence between any pixel P**(x**,y**) in the unfolded image and the pixel Q*(x*,y*) in the panoramic image is established by:

$$x^*=y^*\big/\tan\!\left(\frac{360\,x^{**}}{\pi(R+r)}\right)\tag{4}$$

$$y^*=(y^{**}+r)\cos\beta\tag{5}$$

In the above formulas, x**, y** are the pixel coordinates of the panoramic cylindrical unfolded image, x*, y* are the pixel coordinates of the panoramic image, R is the outer radius of the circular panoramic image, r is the inner radius, and β is the azimuth of the circular panoramic image coordinates;
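As a concrete illustration, a minimal Java sketch of the unfolding step: it samples the circular panorama along the mid-circle geometry underlying formulas (4) and (5), using the standard polar mapping (ρ cos β, ρ sin β). The method name, array representation, nearest-neighbour sampling and omitted bounds checks are assumptions, not the patent's implementation:

```java
final class PanoramaUnwrap {
    // pano: circular panorama indexed [y][x]; (cx, cy): its center;
    // r, R: inner and outer radii. Returns the cylindrical unfolded image.
    static int[][] unwrap(int[][] pano, int cx, int cy, int r, int R) {
        int width = (int) Math.round(Math.PI * (R + r)); // circumference at mid radius (r+R)/2
        int height = R - r;
        int[][] out = new int[height][width];
        for (int yu = 0; yu < height; yu++) {
            for (int xu = 0; xu < width; xu++) {
                double beta = 2.0 * xu / (R + r); // azimuth in radians for column xu
                double rho = yu + r;              // radial distance from the image center
                int xp = cx + (int) Math.round(rho * Math.cos(beta));
                int yp = cy + (int) Math.round(rho * Math.sin(beta));
                out[yu][xu] = pano[yp][xp];       // nearest-neighbour sample
            }
        }
        return out;
    }
}
```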
Since the range of the unfolded cylindrical image is 0 to 360°, a tracked target at the 0°/360° edge can be mistaken for two targets. For this reason, the range of the unfolded image is set in this invention to 0 to 380°, i.e. with an overlap region of about 20°, as shown in Fig. 7.
According to the imaging principle of the ODVS, shown in Fig. 5, targets far from the ODVS image in the upper part of the unfolded cylindrical image, targets at a middle distance image in the middle part, and targets near the ODVS image in the lower part. In this invention the unfolded image is therefore divided vertically into three regions, as shown in Fig. 7: the far region, the middle-distance region and the near region.
P3 applies gray-value conversion and HSI color space conversion to the unfolded panorama. The gray-value conversion serves to obtain the moving target regions when computing the frame difference; the HSI conversion serves to eliminate the shadows of moving targets during the frame difference.
The frame difference method is a direct and simple time-series-based moving target detection method, computed as in formula (6):

$$
IPImage(i,j)=\begin{cases}1 & \text{if } |Pix_t(i,j)-Pix_{t-n}(i,j)|>Threshold\\ 0 & \text{else}\end{cases}\tag{6}
$$

In the formula, IPImage(i,j) is the detection result at coordinate (i,j) of the current input frame, expressed as a binary map in which 1 denotes a foreground moving object and 0 the background; Pix_t(i,j) and Pix_{t-n}(i,j) are the pixel values at (i,j) in frames t and t−n, and Threshold is the corresponding judgment threshold. When the pixel difference exceeds this threshold, the pixel at (i,j) in frame t is considered to belong to the foreground moving region set; otherwise it is judged to belong to the background set.
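A minimal Java sketch of the basic frame difference of formula (6) on one single-channel image pair; the method and parameter names are illustrative:

```java
final class FrameDifference {
    // curr, prev: frames t and t-n of one channel; returns the binary mask
    // of formula (6): 1 where the change exceeds the threshold, else 0.
    static int[][] difference(int[][] curr, int[][] prev, int threshold) {
        int rows = curr.length, cols = curr[0].length;
        int[][] mask = new int[rows][cols];
        for (int y = 0; y < rows; y++)
            for (int x = 0; x < cols; x++)
                mask[y][x] = Math.abs(curr[y][x] - prev[y][x]) > threshold ? 1 : 0;
        return mask;
    }
}
```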
The conversion from RGB to HSI color space is computed as in formula (7):

$$
H=\begin{cases}\theta & G\ge B\\ 2\pi-\theta & G<B\end{cases}
\qquad
S=1-\frac{3}{R+G+B}\,\min(R,G,B)\tag{7}
$$

$$
\theta=\arccos\left\{\frac{\tfrac{1}{2}\bigl[(R-G)+(R-B)\bigr]}{\bigl[(R-G)^2+(R-B)(G-B)\bigr]^{1/2}}\right\}
$$

In the formula, R, G and B are the red, green and blue components of the RGB color space; H is the hue in the HSI color space, expressed as an angle, reflecting which spectral wavelength the color is closest to; S is the saturation, characterizing the depth (purity) of the color; hue H and saturation S together are called the chromaticity.
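A Java sketch of the chromaticity part of formula (7); only H and S are computed, since intensity is not needed by the shadow-insensitive frame difference. The guards against division by zero and out-of-range acos arguments are added for numerical safety and are not in the patent:

```java
final class HsiConversion {
    // Returns {H, S} for one pixel; H in radians, S in [0, 1].
    static double[] rgbToHS(double r, double g, double b) {
        double num = 0.5 * ((r - g) + (r - b));
        double den = Math.sqrt((r - g) * (r - g) + (r - b) * (g - b));
        double ratio = (den < 1e-12) ? 1.0 : Math.max(-1.0, Math.min(1.0, num / den));
        double theta = Math.acos(ratio);
        double h = (g >= b) ? theta : 2.0 * Math.PI - theta;     // hue per formula (7)
        double s = 1.0 - 3.0 * Math.min(r, Math.min(g, b)) / Math.max(r + g + b, 1e-12);
        return new double[] { h, s };
    }
}
```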
In P3, the invention converts the color space of the unfolded panorama to obtain an H-component unfolded panorama and an S-component unfolded panorama, and submits the results to P4, P5 and P6 for frame differencing according to the distance from the ODVS.
P4 processes target objects far from the ODVS, separately on the H component and the S component, as in formula (8):

$$
IP_{L,H}Image(i,j)=\begin{cases}1 & \text{if } |Pix_{H,t}(i,j)-Pix_{H,t-3}(i,j)|>Threshold1\\ 0 & \text{else}\end{cases}\tag{8}
$$

$$
IP_{L,S}Image(i,j)=\begin{cases}1 & \text{if } |Pix_{S,t}(i,j)-Pix_{S,t-3}(i,j)|>Threshold1\\ 0 & \text{else}\end{cases}
$$

In the formula, IP_{L,H}Image(i,j) and IP_{L,S}Image(i,j) are the detection results at coordinate (i,j) of the upper-band H and S components of the current panoramic input frame, expressed as binary maps in which 1 denotes a foreground moving object and 0 the background; Pix_{H,t}(i,j) and Pix_{H,t-3}(i,j) are the pixel values at (i,j) of the upper-band H component in frames t and t−3 of the panoramic video, and Pix_{S,t}(i,j) and Pix_{S,t-3}(i,j) the corresponding S-component values; Threshold1 is the corresponding judgment threshold, set here to 45;
P5 processes target objects at a middle distance from the ODVS, separately on the H component and the S component, as in formula (9):

$$
IP_{M,H}Image(i,j)=\begin{cases}1 & \text{if } |Pix_{H,t}(i,j)-Pix_{H,t-2}(i,j)|>Threshold2\\ 0 & \text{else}\end{cases}\tag{9}
$$

$$
IP_{M,S}Image(i,j)=\begin{cases}1 & \text{if } |Pix_{S,t}(i,j)-Pix_{S,t-2}(i,j)|>Threshold2\\ 0 & \text{else}\end{cases}
$$

In the formula, IP_{M,H}Image(i,j) and IP_{M,S}Image(i,j) are the detection results at coordinate (i,j) of the middle-band H and S components of the current panoramic input frame, expressed as binary maps in which 1 denotes a foreground moving object and 0 the background; Pix_{H,t}(i,j) and Pix_{H,t-2}(i,j) are the pixel values at (i,j) of the middle-band H component in frames t and t−2 of the panoramic video, and Pix_{S,t}(i,j) and Pix_{S,t-2}(i,j) the corresponding S-component values; Threshold2 is the corresponding judgment threshold, set here to 45;
P6 processes target objects near the ODVS, separately on the H component and the S component, as in formula (10):

$$
IP_{N,H}Image(i,j)=\begin{cases}1 & \text{if } |Pix_{H,t}(i,j)-Pix_{H,t-1}(i,j)|>Threshold3\\ 0 & \text{else}\end{cases}\tag{10}
$$

$$
IP_{N,S}Image(i,j)=\begin{cases}1 & \text{if } |Pix_{S,t}(i,j)-Pix_{S,t-1}(i,j)|>Threshold3\\ 0 & \text{else}\end{cases}
$$

In the formula, IP_{N,H}Image(i,j) and IP_{N,S}Image(i,j) are the detection results at coordinate (i,j) of the lower-band H and S components of the current panoramic input frame, expressed as binary maps in which 1 denotes a foreground moving object and 0 the background; Pix_{H,t}(i,j) and Pix_{H,t-1}(i,j) are the pixel values at (i,j) of the lower-band H component in frames t and t−1 of the panoramic video, and Pix_{S,t}(i,j) and Pix_{S,t-1}(i,j) the corresponding S-component values; Threshold3 is the corresponding judgment threshold, set here to 45;
P8 merges the segmented target objects. On the one hand, since chromaticity consists of the two parts hue H and saturation S, an OR operation is needed; on the other hand, since in the processing of P4, P5 and P6 the whole panoramic image is divided into upper, middle and lower bands, an OR operation over the bands is also needed. This yields the segmentation image of the moving targets over the whole panoramic image, as in formula (11):

$$
D(x,y,t)=IP_{L,H}Image(i,j)\lor IP_{L,S}Image(i,j)\lor IP_{M,H}Image(i,j)\lor IP_{M,S}Image(i,j)\lor IP_{N,H}Image(i,j)\lor IP_{N,S}Image(i,j)\tag{11}
$$

In the formula, D(x,y,t) is the detection result at coordinate (i,j) of the current panoramic input frame, and IP_{L,H}Image(i,j) through IP_{N,S}Image(i,j) are the upper-, middle- and lower-band H and S component detection results defined above; the detection result of P8 is submitted to P9.
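A Java sketch combining P4 to P8, i.e. formulas (8) to (11): each horizontal band of the unfolded image uses its own frame lag (3, 2 or 1 frames), and the H and S results are OR-ed. The frame buffers indexed by time, the band boundary parameters and the single shared threshold (all three thresholds are 45 in the patent) are assumptions of this sketch:

```java
final class BandedFrameDifference {
    // hFrames, sFrames: H and S component frames indexed [t][y][x]; requires t >= 3.
    // bandTop / bandMid: row indices separating far, middle and near bands.
    static int[][] detect(int[][][] hFrames, int[][][] sFrames, int t,
                          int bandTop, int bandMid, int threshold) {
        int rows = hFrames[t].length, cols = hFrames[t][0].length;
        int[][] d = new int[rows][cols];
        for (int y = 0; y < rows; y++) {
            int lag = (y < bandTop) ? 3 : (y < bandMid) ? 2 : 1; // far / middle / near band
            for (int x = 0; x < cols; x++) {
                boolean hHit = Math.abs(hFrames[t][y][x] - hFrames[t - lag][y][x]) > threshold;
                boolean sHit = Math.abs(sFrames[t][y][x] - sFrames[t - lag][y][x]) > threshold;
                d[y][x] = (hHit || sHit) ? 1 : 0; // formula (11): OR over H and S
            }
        }
        return d;
    }
}
```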
P9 judges whether there are new moving pixels; new moving pixels are submitted to P11, the remainder to P10.
P10 checks whether the moving speed of the target object exceeds the prescribed threshold δ; pixels that satisfy the condition are submitted to P16, the others to P15.
P11 selects the τ of formula (3) according to the moving speed of the target: pixels of fast-moving targets use a small τ, pixels of slow-moving targets a large τ, computed as in formula (12):

$$\tau_{i,M}=k/S_i(t)\tag{12}$$

In the formula, τ_{i,M} is the duration of the i-th target object, S_i(t) its moving speed, and k a constant.
For targets at different distances from the ODVS, τ_{i,M} also needs suitable adjustment: for the same physical speed, a target near the ODVS appears to move faster on the unfolded panorama, while a target far away appears to move slower, and formula (12) gives the τ_{i,M} for the middle-distance case. In this invention the duration is therefore normalized, handled separately in P12, P13 and P14: the near-band duration is set to τ_{i,M} − α, the middle-band duration to τ_{i,M}, and the far-band duration to τ_{i,M} + α, where α = 2 to 4; the far, middle and near division is shown in Fig. 7.
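A small Java sketch of this duration selection, formula (12) plus the band normalisation; the band encoding (0 = far, 1 = middle, 2 = near) and the rounding are assumptions:

```java
final class DurationSelection {
    // k: constant of formula (12); speed: S_i(t); alpha: 2 to 4 per the patent.
    static int durationTau(double k, double speed, int band, int alpha) {
        int tau = (int) Math.round(k / Math.max(speed, 1e-6)); // formula (12)
        if (band == 0) return tau + alpha;  // far band: tau_i,M + alpha
        if (band == 1) return tau;          // middle band: tau_i,M
        return tau - alpha;                 // near band: tau_i,M - alpha
    }
}
```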
P15 holds the gray values of the pixels formed by targets that have temporarily stopped moving, in order to keep tracking and lock onto such targets, as in formula (13):

$$H_\tau(x,y,t)=\max\bigl(0,\,H_\tau(x,y,t-1)\bigr)\tag{13}$$

P16 decrements the gray values of the pixels formed by still-moving targets, in order to gradually remove the pixels whose motion is older, as in formula (14):

$$H_\tau(x,y,t)=\max\bigl(0,\,H_\tau(x,y,t-1)-1\bigr)\tag{14}$$
P17 obtains the ROI of each moving target from the results of P12, P13, P14, P15 and P16, computes the centroid coordinate ROI_{i,m}(x,y,t) of the i-th ROI_i, and uses the centroid ROI_{i,m}(x,y,t−1) obtained in the previous cycle to compute the speed of the i-th moving target, as in formula (15):

$$
S_i(t)=\frac{\bigl|ROI_{i,m}(x,y,t)-ROI_{i,m}(x,y,t-1)\bigr|}{\Delta t}\tag{15}
$$

In the formula, ROI_{i,m}(x,y,t) is the centroid of the i-th ROI_i in the frame being processed, ROI_{i,m}(x,y,t−1) its centroid in the previous processed frame, Δt the interval between the two frames, and S_i(t) the moving speed of the i-th ROI_i. This speed S_i(t) serves as the detection condition in P10 and as the basis for computing the duration τ_{i,M} in P11.
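A one-method Java sketch of formula (15), assuming the displacement magnitude is Euclidean (the patent only writes an absolute value over the centroid difference):

```java
final class RoiSpeed {
    // Centroid of ROI_i at frames t and t-1, and the frame interval dt.
    static double speed(double cxNow, double cyNow, double cxPrev, double cyPrev, double dt) {
        double dx = cxNow - cxPrev, dy = cyNow - cyPrev;
        return Math.sqrt(dx * dx + dy * dy) / dt; // |ROI_i,m(t) - ROI_i,m(t-1)| / dt
    }
}
```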
In the moving target extraction and tracking process of Fig. 8, the computational complexity is low and the choice of parameters and thresholds is fairly simple, so it is convenient to realize in a system-on-chip; it basically satisfies single-target tracking and simple multi-target tracking, as shown in Fig. 10, where the targets do not occlude, merge or separate. For complex multi-target tracking, as shown in Fig. 11, where targets merge and separate, processing in another thread, the multi-target tracking of Fig. 9 in the application software of the PC, is needed to distinguish the different targets reliably and to record the trajectory of each target in the video sequence. After P17 finishes, the i ROI regions of interest, the centroid coordinates of each region and the size of each region frame are obtained; these target extraction and tracking data, together with the panoramic video image, are submitted through a software interface over the network to the PC of Fig. 9 for processing.
The multi-target object tracking unit tracks multiple targets effectively through occlusion, separation and merging in the scene; multi-target tracking is described below following the processing flow of Fig. 9.
Step S1 reads the number i of ROIs in the current frame, the center p_i(x,y) of each binary region and the ROI size s_i(Δx,Δy), and uses p_i(x,y) and s_i(Δx,Δy) to create the i ROI software objects object_ROI(i).
Step S2 compares the number i of ROIs in the current frame with the number j of already existing target objects object_presence(j) in the previous frame. With two or more targets present, if the number i of ROIs equals the number j of existing objects object_presence(j), the number of targets in the scene has not changed and no merge or split has occurred, and the flow proceeds to the S3 pairing.
In step S3, the ROIs of the current frame are paired with the already existing target objects object_presence(j) under the criterion that the spatial extent changes very little; the pairing is computed as in formula (23):

$$d_{t,j}=\bigl|ROI_i(x,y,t)-object_{presence(j)}(x,y,t-1)\bigr|\tag{23}$$

In the formula, d_{t,j} is the city-block distance between the i-th ROI_i(x,y,t) of the current frame and the j-th existing object_presence(j)(x,y,t−1); if the condition d_{t,j} ≤ D is met, the pairing succeeds.
For a successfully paired ROI_i(x,y,t) and object_presence(j)(x,y,t−1): first, the system time in HHMMSS format together with object_presence(j)(x,y,t−1) is added to the enumeration (trajectory list) of object_presence(j); then ROI_i(x,y,t) replaces object_presence(j)(x,y,t−1).
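A Java sketch of the S3 pairing under formula (23), reusing the hypothetical TargetObject class sketched earlier; the greedy nearest-neighbour choice among candidates within D is an assumption (the patent only requires d_{t,j} ≤ D):

```java
final class RoiPairing {
    // Returns the index of the matched existing object, or -1 if none
    // lies within the city-block distance bound dMax (the threshold D).
    static int pair(double roiX, double roiY, TargetObject[] objects, double dMax) {
        int best = -1;
        double bestDist = dMax;
        for (int j = 0; j < objects.length; j++) {
            double d = Math.abs(roiX - objects[j].cx)
                     + Math.abs(roiY - objects[j].cy); // city-block distance, formula (23)
            if (d <= bestDist) { bestDist = d; best = j; }
        }
        return best;
    }
}
```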
If the number i of ROI is greater than already present destination object object presence (j)number j show in scene, have new destination object to enter or occurred situation about being separated; Processing procedure is as follows: first adopt S3 treatment step to carry out pairing and calculate, and adopt the very little criterion of spatial dimension change to match, pairing computing method are as shown in formula (23); Then S7 treatment step is proceeded to;
Calculate the ROI not having successful matching by formula (24) in S7 treatment step i(x, y, t) and group jthe city distance of (x, y),
In formula, ROI i(x, y, t) be not for having the ROI software object of successful matching, group j(x, y) group of objects for existing in scene, then proceeds to the judgement process of S8;
The S8 judgment uses formula (25):

$$dR_{t,j}\le D_R\tag{25}$$

In the formula, dR_{t,j} is the city-block distance between the unmatched ROI_i(x,y,t) and group_j(x,y), and D_R is the prescribed threshold. When formula (25) holds, the object group has split and the flow proceeds to step S9; otherwise it proceeds to step S10.
Step S9 applies the first matching criterion: the color histograms of each related target object object_presence(j)(x,y,t−1) stored in the object group group_j(x,y) are read; at the same time a color histogram is computed for each unmatched ROI_i(x,y,t), and the Bhattacharyya distance is used as the similarity measure between two color histograms. The discrete Bhattacharyya coefficient is computed as in formula (16):

$$
\rho[P_i,Q_j]=\sum_{\mu=1}^{m}\sqrt{p_i(\mu)\,q_j(\mu)}\tag{16}
$$

In the formula, ρ[P_i,Q_j] ∈ [0,1] is the Bhattacharyya coefficient; the Bhattacharyya distance is computed by formula (17):

$$
d=\min\{d_{i,j}\}=\min\Bigl\{\sqrt{1-\rho[P_i,Q_j]}\Bigr\}\tag{17}
$$

In the formula, d is the Bhattacharyya distance; if this value is below the prescribed threshold Threshold4 the match succeeds. Successful matches proceed to step S12, the others to step S11.
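A Java sketch of the color matching of formulas (16) and (17), assuming both histograms are normalised to sum to 1; the clamp on the coefficient guards against rounding above 1:

```java
final class ColorMatching {
    // Bhattacharyya distance between normalised histograms p and q.
    static double bhattacharyyaDistance(double[] p, double[] q) {
        double rho = 0.0;
        for (int u = 0; u < p.length; u++)
            rho += Math.sqrt(p[u] * q[u]);           // coefficient, formula (16)
        return Math.sqrt(1.0 - Math.min(rho, 1.0));  // distance, formula (17)
    }

    // The first matching criterion: success when the distance is below Threshold4.
    static boolean matches(double[] p, double[] q, double threshold4) {
        return bhattacharyyaDistance(p, q) < threshold4;
    }
}
```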
Step S11 applies a second matching to the targets for which the first matching did not succeed; the first matching uses the color feature of the target, the second uses the Hu invariant moments. Specifically, the Hu invariant moment feature vectors of each related target object object_presence(j)(x,y,t−1) stored in the object group group_j(x,y) are read; at the same time the Hu invariant moment vector of each unmatched ROI_i(x,y,t) is computed, and the Euclidean distance is used as the similarity measure between two Hu moment vectors. The 7 Hu invariant moment features are computed as in formula (18):

$$
\begin{aligned}
\phi_1&=\eta_{20}+\eta_{02}\\
\phi_2&=(\eta_{20}-\eta_{02})^2+4\eta_{11}^2\\
\phi_3&=(\eta_{30}-3\eta_{12})^2+(3\eta_{21}-\eta_{03})^2\\
\phi_4&=(\eta_{30}+\eta_{12})^2+(\eta_{21}+\eta_{03})^2\\
\phi_5&=(\eta_{30}-3\eta_{12})(\eta_{30}+\eta_{12})\bigl[(\eta_{30}+\eta_{12})^2-3(\eta_{21}+\eta_{03})^2\bigr]\\
&\quad+(3\eta_{21}-\eta_{03})(\eta_{21}+\eta_{03})\bigl[3(\eta_{30}+\eta_{12})^2-(\eta_{21}+\eta_{03})^2\bigr]\\
\phi_6&=(\eta_{20}-\eta_{02})\bigl[(\eta_{30}+\eta_{12})^2-(\eta_{21}+\eta_{03})^2\bigr]+4\eta_{11}(\eta_{30}+\eta_{12})(\eta_{21}+\eta_{03})\\
\phi_7&=(3\eta_{21}-\eta_{03})(\eta_{30}+\eta_{12})\bigl[(\eta_{30}+\eta_{12})^2-3(\eta_{21}+\eta_{03})^2\bigr]\\
&\quad-(\eta_{30}-3\eta_{12})(\eta_{21}+\eta_{03})\bigl[3(\eta_{30}+\eta_{12})^2-(\eta_{21}+\eta_{03})^2\bigr]
\end{aligned}\tag{18}
$$

In the formula, η_pq is the normalized central moment, given by formula (19):

$$
\eta_{pq}=\mu_{pq}/\mu_{00}^{\,r}\,,\qquad r=(p+q)/2\,,\qquad p+q=2,3,\dots\tag{19}
$$

In the formula, μ_pq is the central moment, given by formula (20):

$$
\mu_{pq}=\sum_{x=1}^{M}\sum_{y=1}^{N}(x-\bar{x})^p(y-\bar{y})^q f(x,y)\tag{20}
$$

In the formula, f(x,y) is the binary map of the target object ROI_i(x,y,t), and (x̄, ȳ) are the centroid coordinates of ROI_i(x,y,t), given by formula (21):

$$
\bar{x}=m_{10}/m_{00}\,,\quad \bar{y}=m_{01}/m_{00}\,,\qquad
m_{pq}=\sum_{x=1}^{M}\sum_{y=1}^{N}x^p y^q f(x,y)\tag{21}
$$

The 7 Hu invariant moments of each target can thus be computed via formulas (18) to (21), and the Euclidean distance of formula (22) judges the similarity:

$$
d_{ms}=\sqrt{\sum_{i=1}^{7}(\phi_{mi}-\phi_{si})^2}\,,\qquad s\in S\tag{22}
$$

In the formula, d_ms is the Euclidean distance between the Hu invariant moment feature vector φ_si of a related target object object_presence(j)(x,y,t−1) stored in the object group group_j(x,y) and the Hu invariant moment feature vector φ_mi of the target ROI_i(x,y,t) being matched, and S is the number of targets that merged; if d_mj = min{d_ms}, s ∈ S, then the j-th object object_presence(j)(x,y,t−1) is judged to be the matched target; the flow then proceeds to step S12.
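A Java sketch of the second matching criterion, formula (22): the Euclidean distance between the candidate's 7-element Hu vector and each stored pre-merge vector, picking the minimum; the array representation is an assumption:

```java
final class ShapeMatching {
    // candidateHu: Hu vector of the unmatched ROI; storedHu: vectors saved
    // before the merge. Returns the index j with d_mj = min_s d_ms.
    static int bestMatch(double[] candidateHu, double[][] storedHu) {
        int best = -1;
        double bestDist = Double.MAX_VALUE;
        for (int s = 0; s < storedHu.length; s++) {
            double sum = 0.0;
            for (int i = 0; i < 7; i++) {
                double diff = candidateHu[i] - storedHu[s][i];
                sum += diff * diff;
            }
            double d = Math.sqrt(sum); // Euclidean distance, formula (22)
            if (d < bestDist) { bestDist = d; best = s; }
        }
        return best;
    }
}
```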
Step S12 first checks the number N of related target objects object_presence(j)(x,y,t−1) in the object group group_j(x,y). If N = 2, the object reference variable of group_j(x,y) is set to Null and the center of the successfully matched ROI_i(x,y,t) replaces the center of object_presence(j)(x,y,t−1). If N > 2, the object reference variable of group_j(x,y) is set to Null, the remaining related target objects object_presence(j)(x,y,t−1) in the group are used to re-create a new object group group_j(x,y), and the center of the matched ROI_i(x,y,t) replaces the center of object_presence(j)(x,y,t−1).
Step S10 handles the case of a new target entering the scene: a new target object object_presence(j)(x,y,t−1) is created from ROI_i(x,y,t), its color histogram and Hu invariant moment data are computed by the formulas above, and these are saved in the target object object_presence(j)(x,y,t−1) together with its spatial position.
If the number i of ROIs is less than the number j of existing target objects object_presence(j), an existing target has left the scene or a merge has occurred. The processing is as follows: first the S3 pairing is carried out under the criterion that the spatial extent changes very little, as in formula (23); then the flow proceeds to step S4.
In step S4, according to the result of step S3, if two or more distance values d_{t,j} computed by the pairing formula (23) are less than D, i.e. formula (26) is satisfied,

$$(d_{t,n}<D)\land(d_{t,m}<D)\tag{26}$$

then the n-th target object object_presence(n) and the m-th target object object_presence(m) are considered to have merged and the flow proceeds to S5; otherwise it proceeds to S6.
Step S5, according to the judgment of formula (26), merges the n-th target object object_presence(n) and the m-th target object object_presence(m) and creates a new object group group_j(x,y).
Step S6, according to the judgment of formula (26), concludes that a target has left the scene: the object reference variable of the unmatched object_presence(j)(x,y,t−1) is set to Null, and the garbage collector of the Java language automatically collects the target objects that have left the scene.
The panoramic sequence image unfolding and the target extraction and tracking algorithms of Fig. 8 are implemented mainly in C and run on the DaVinci platform; the multi-target tracking algorithm of Fig. 9 and the formalization and behavior semantization algorithms for the images are written mainly in Java and run on the PC.

Claims (9)

1. A multi-target object tracking system for panoramic video sequence images, characterized in that: the multi-target object tracking system of the panoramic video sequence images comprises an omnidirectional camera device, denoted ODVS, which captures panoramic video images of all target objects in the scene; the ODVS is placed above the middle of the monitored scene and is connected to a microprocessor through a USB interface; the microprocessor is connected to a PC through a computer network; the microprocessor comprises:
a video image reading unit for reading, through the USB interface, the panoramic image captured by the ODVS and submitting the read panoramic image to the video image unfolding unit and the video image storage unit;
a video image unfolding unit for cylindrically unfolding the panoramic image and submitting the unfolded cylindrical panorama to the target object detection unit;
a target object detection and tracking unit for detecting the moving target objects present in the cylindrical unfolded panorama and enclosing each target object in a rectangle, extracting and tracking target objects from their motion history and energy images with a Motion History or Energy Images algorithm, hereinafter the MHoEI algorithm; the PC mainly carries out the formalization and behavior-semantization processing of the panoramic video images, and the multi-target object tracking unit is used to track effectively when multiple target objects occlude, separate, or merge in the scene;
the video image unfolding unit cylindrically unfolds the panoramic image and submits the unfolded cylindrical panorama to the target object detection and tracking unit;
the target object detection and tracking unit detects and tracks the moving target objects present in the cylindrical unfolded panorama, applying the MHoEI algorithm to the motion history and energy images of the target objects to extract and track them and obtaining the regions of interest ROI, the centroid coordinate of each region, and the size of each region frame; these target-object extraction and tracking data, together with the panoramic video images, are submitted over the network, through a software interface, to the multi-target object tracking unit for processing;
the multi-target object tracking unit tracks effectively when multiple target objects occlude, separate, or merge in the scene;
the target object detection and tracking unit, in order to segment moving target objects effectively while eliminating the shadows they produce, exploits during frame differencing certain features that distinguish shadowed from non-shadowed regions, relying mainly on the fact that the chrominance of pixels in a shadowed region is almost equal to the chrominance of the corresponding background pixels; the RGB color space of the original image is therefore converted to the HSI color space before the frame-difference operation, which eliminates the shadows; the conversion from RGB to HSI is computed as shown in formula (7),
H = \begin{cases} \theta & G \ge B \\ 2\pi - \theta & G < B \end{cases}

S = 1 - \frac{3}{R+G+B}\,\min(R, G, B) \qquad (7)

\theta = \arccos\left\{ \frac{[(R-G)+(R-B)]/2}{\left[(R-G)^2 + (R-B)(G-B)\right]^{1/2}} \right\}
In the formula, R, G and B are the red, green and blue components of the RGB color space; H is the hue in the HSI color space, expressed as an angle and reflecting which spectral wavelength the color is closest to; S is the saturation in the HSI color space, characterizing the depth of the color; hue H and saturation S together constitute the chrominance;
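A minimal Java sketch of this RGB-to-HSI conversion (formula (7)) follows, assuming components normalized to [0, 1]; only H and S are computed, since intensity is not used by the shadow-suppressing frame difference:

// Sketch of formula (7): per-pixel hue and saturation from normalized RGB.
final class HsiConverter {
    /** Returns {H in radians, S in [0,1]} for one pixel. */
    static double[] rgbToHS(double r, double g, double b) {
        double num = ((r - g) + (r - b)) / 2.0;
        double den = Math.sqrt((r - g) * (r - g) + (r - b) * (g - b));
        // clamp to avoid NaN from rounding when den is ~0 (gray pixels)
        double theta = Math.acos(den == 0 ? 0 : Math.max(-1, Math.min(1, num / den)));
        double h = (g >= b) ? theta : 2 * Math.PI - theta;
        double sum = r + g + b;
        double s = (sum == 0) ? 0 : 1 - 3 * Math.min(r, Math.min(g, b)) / sum;
        return new double[] { h, s };
    }
}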
for target objects far from the ODVS, frame differencing is carried out on the H component and the S component separately, computed as shown in formula (8),
IP_{L,H}Image(i,j) = \begin{cases} 1 & \text{if } |Pix_{H,t}(i,j) - Pix_{H,t-3}(i,j)| > Threshold1 \\ 0 & \text{otherwise} \end{cases} \qquad (8)

IP_{L,S}Image(i,j) = \begin{cases} 1 & \text{if } |Pix_{S,t}(i,j) - Pix_{S,t-3}(i,j)| > Threshold1 \\ 0 & \text{otherwise} \end{cases}
in the formula, IP_{L,H}Image(i, j) is the detection result at point (i, j) of the H color component in the top part of the current input panoramic image frame, and IP_{L,S}Image(i, j) is the corresponding detection result for the S color component; both are binary maps in which 1 denotes a foreground moving object and 0 denotes background; Pix_{H,t}(i, j) and Pix_{H,t−3}(i, j) are the pixel values at point (i, j) of the top H color component in the image frames at times t and t−3 of the panoramic video, respectively; Pix_{S,t}(i, j) and Pix_{S,t−3}(i, j) are the corresponding pixel values of the top S color component; Threshold1 is the associated judgment threshold, taken here as 45;
for target objects at a medium distance from the ODVS, frame differencing is carried out on the H component and the S component separately, computed as shown in formula (9),
IP_{M,H}Image(i,j) = \begin{cases} 1 & \text{if } |Pix_{H,t}(i,j) - Pix_{H,t-2}(i,j)| > Threshold2 \\ 0 & \text{otherwise} \end{cases} \qquad (9)

IP_{M,S}Image(i,j) = \begin{cases} 1 & \text{if } |Pix_{S,t}(i,j) - Pix_{S,t-2}(i,j)| > Threshold2 \\ 0 & \text{otherwise} \end{cases}
in the formula, IP_{M,H}Image(i, j) is the detection result at point (i, j) of the H color component in the middle part of the current input panoramic image frame, and IP_{M,S}Image(i, j) is the corresponding detection result for the S color component; both are binary maps in which 1 denotes a foreground moving object and 0 denotes background; Pix_{H,t}(i, j) and Pix_{H,t−2}(i, j) are the pixel values at point (i, j) of the middle H color component in the image frames at times t and t−2 of the panoramic video, respectively; Pix_{S,t}(i, j) and Pix_{S,t−2}(i, j) are the corresponding pixel values of the middle S color component; Threshold2 is the associated judgment threshold, taken here as 45;
for target objects near the ODVS, frame differencing is carried out on the H component and the S component separately, computed as shown in formula (10),
IP_{N,H}Image(i,j) = \begin{cases} 1 & \text{if } |Pix_{H,t}(i,j) - Pix_{H,t-1}(i,j)| > Threshold3 \\ 0 & \text{otherwise} \end{cases} \qquad (10)

IP_{N,S}Image(i,j) = \begin{cases} 1 & \text{if } |Pix_{S,t}(i,j) - Pix_{S,t-1}(i,j)| > Threshold3 \\ 0 & \text{otherwise} \end{cases}
in the formula, IP_{N,H}Image(i, j) is the detection result at point (i, j) of the H color component in the bottom part of the current input panoramic image frame, and IP_{N,S}Image(i, j) is the corresponding detection result for the S color component; both are binary maps in which 1 denotes a foreground moving object and 0 denotes background; Pix_{H,t}(i, j) and Pix_{H,t−1}(i, j) are the pixel values at point (i, j) of the bottom H color component in the image frames at times t and t−1 of the panoramic video, respectively; Pix_{S,t}(i, j) and Pix_{S,t−1}(i, j) are the corresponding pixel values of the bottom S color component; Threshold3 is the associated judgment threshold, taken here as 45;
finally, the segmented target objects are assembled after segmentation; on the one hand, because chrominance consists of the two parts hue H and saturation S, an OR operation is needed here; on the other hand, in processes P4, P5 and P6 the whole panoramic image is divided into top, middle and bottom parts, so an OR operation is also needed there; after such processing the segmentation image of the moving target objects over the whole panoramic image is obtained, by the method shown in formula (11),
D(x,y,t) = IP_{L,H}Image(i,j) \vee IP_{L,S}Image(i,j) \vee IP_{M,H}Image(i,j) \vee IP_{M,S}Image(i,j) \vee IP_{N,H}Image(i,j) \vee IP_{N,S}Image(i,j) \qquad (11)
in the formula, D(x, y, t) is the detection result at point (i, j) in the current input panoramic image frame; IP_{N,H}Image(i, j) and IP_{N,S}Image(i, j) are the detection results at point (i, j) of the H and S color components in the bottom part of the current input panoramic image frame; IP_{M,H}Image(i, j) and IP_{M,S}Image(i, j) are the corresponding results for the middle part; and IP_{L,H}Image(i, j) and IP_{L,S}Image(i, j) are the corresponding results for the top part.
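Putting formulas (8)–(11) together, a hedged Java sketch of the banded frame difference with OR fusion might read as follows; the band boundaries and array layout are illustrative assumptions:

// Sketch of formulas (8)-(11): frame differencing on the H and S planes with a
// band-dependent frame lag (3 frames for the top/far band, 2 for the middle,
// 1 for the bottom/near band), then an OR fusion into one binary foreground map.
// The per-band threshold of 45 follows the claim.
final class BandedFrameDiff {
    static final int THRESHOLD = 45;

    /** history[k][y][x] holds the H (or S) plane k frames ago; k = 0 is current. */
    static boolean[][] detect(int[][][] hHist, int[][][] sHist, int bandTop, int bandMid) {
        int rows = hHist[0].length, cols = hHist[0][0].length;
        boolean[][] fg = new boolean[rows][cols];
        for (int y = 0; y < rows; y++) {
            int lag = (y < bandTop) ? 3 : (y < bandMid) ? 2 : 1;  // far/mid/near
            for (int x = 0; x < cols; x++) {
                boolean dH = Math.abs(hHist[0][y][x] - hHist[lag][y][x]) > THRESHOLD;
                boolean dS = Math.abs(sHist[0][y][x] - sHist[lag][y][x]) > THRESHOLD;
                fg[y][x] = dH || dS;    // OR fusion of formula (11)
            }
        }
        return fg;
    }
}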
2. The multi-target object tracking system for panoramic video sequence images as claimed in claim 1, characterized in that: the target object detection and tracking unit, for detecting and tracking the moving target objects present in the cylindrical unfolded panorama, applies the MHoEI algorithm to the motion history and energy images of the target objects to extract and track them, expressed by formula (3):
H_\tau(x,y,t) = \begin{cases} \tau & \text{if } D(x,y,t) = 1 \\ \max(0,\, H_\tau(x,y,t-1)) & \text{if } S \le \delta \\ \max(0,\, H_\tau(x,y,t-1) - 1) & \text{otherwise} \end{cases} \qquad (3)
In the formula, S is the movement speed of the target object, τ is the duration, D(x, y, t) is the binary image sequence of moving regions, H_τ(x, y, t−1) is the binary image sequence of non-moving regions, the duration τ is adjusted dynamically according to the target object speed S, and δ is the speed threshold.
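A minimal Java sketch of this per-pixel MHoEI update (formula (3)) follows; the surrounding bookkeeping (per-object τ, speed estimation) is assumed to be handled elsewhere:

// Sketch of formula (3): a pixel flagged as moving is reset to the duration tau;
// otherwise it is frozen when the object's speed S is at or below the threshold
// delta (energy-image behavior) or decays by 1 per frame (history-image behavior).
final class Mhoei {
    static void update(int[][] hTau, boolean[][] moving, int tau, double speed, double delta) {
        for (int y = 0; y < hTau.length; y++) {
            for (int x = 0; x < hTau[y].length; x++) {
                if (moving[y][x]) {
                    hTau[y][x] = tau;
                } else if (speed <= delta) {
                    hTau[y][x] = Math.max(0, hTau[y][x]);       // hold (energy image)
                } else {
                    hTau[y][x] = Math.max(0, hTau[y][x] - 1);   // decay (history image)
                }
            }
        }
    }
}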
3. The multi-target object tracking system for panoramic video sequence images as claimed in claim 1, characterized in that: the video image unfolding unit cylindrically unfolds the panoramic image; using the center coordinates of the panoramic image and the inner and outer radii of the image computed during initialization, the center coordinates of the panoramic image define the origin O*(0, 0) and the X* and Y* axes of a plane coordinate system; the inner radius of the panoramic image is r and the outer radius is R; the middle circle radius is set as r1 = (r + R)/2, and the azimuth is β = tan⁻¹(y*/x*); the cylindrical unfolded panorama uses a plane coordinate system with origin O**(0, 0) and axes X** and Y**, taking the intersection (r, 0) of the inner circle of radius r with the X* axis as the origin O**(0, 0) and unfolding counterclockwise with azimuth β; the correspondence between any pixel coordinates P**(x**, y**) in the cylindrical unfolded panorama and the pixel coordinates Q*(x*, y*) in the panoramic image is established by the formulas:
x^* = y^* / \tan\!\big(360\, x^{**} / \pi (R + r)\big) \qquad (4)

y^* = (y^{**} + r) \cos \beta \qquad (5)
in the above formulas, x**, y** are the pixel coordinate values of the cylindrical unfolded panorama, x*, y* are the pixel coordinate values of the panoramic image, R is the outer radius of the circular panoramic image, r is its inner radius, and β is the azimuth of the circular panoramic image coordinates;
because the range of the above cylindrical unfolded panorama is 0–360°, the same tracked target object can be judged as two target objects when it lies at the 0°/360° edge; the range of the cylindrical unfolded panorama is therefore set to 0–380°, i.e. with an overlap region of about 20°.
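A hedged Java sketch of this unfolding lookup follows; it uses the conventional polar form that formulas (4)–(5) parameterize, with azimuth proportional to x** and radius r + y**, and widens the output to 380° so the seam overlap is produced by wrapping the azimuth (names and rounding choices are illustrative):

// Sketch of the cylindrical unfolding: each unfolded pixel (xU, yU) maps back to
// a panoramic pixel; the unfolded width covers 380 degrees (~20 degree overlap)
// so a target straddling the 0/360 seam is not split into two objects.
final class Unwarper {
    static int[][] unfold(int[][] pano, double cx, double cy, double r, double R) {
        int height = (int) Math.round(R - r);
        double fullWidth = Math.PI * (R + r);           // pixels spanning 360 degrees
        int width = (int) Math.round(fullWidth * 380.0 / 360.0);
        int[][] out = new int[height][width];
        for (int yU = 0; yU < height; yU++) {
            for (int xU = 0; xU < width; xU++) {
                double beta = 2 * Math.PI * (xU % fullWidth) / fullWidth;  // wrap seam
                double rad = r + yU;
                int xP = (int) Math.round(cx + rad * Math.cos(beta));
                int yP = (int) Math.round(cy + rad * Math.sin(beta));
                if (yP >= 0 && yP < pano.length && xP >= 0 && xP < pano[0].length)
                    out[yU][xU] = pano[yP][xP];
            }
        }
        return out;
    }
}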
4. The multi-target object tracking system for panoramic video sequence images as claimed in claim 1 or 2, characterized in that: the target object detection and tracking unit, when extracting and tracking target objects with the MHoEI algorithm, obtains the ROI of each moving target object and computes the centroid coordinate ROI_{i,m}(x, y, t) of the i-th ROI_i; then, using the centroid coordinate ROI_{i,m}(x, y, t−1) of the i-th ROI_i obtained in the previous loop, it computes the speed of the i-th moving target object as shown in formula (15),
S_i(t) = \frac{\big|ROI_{i,m}(x,y,t) - ROI_{i,m}(x,y,t-1)\big|}{\Delta t} \qquad (15)
In the formula, ROI_{i,m}(x, y, t) is the centroid coordinate of the i-th ROI_i in the frame being processed, ROI_{i,m}(x, y, t−1) is the centroid coordinate of the i-th ROI_i in the previously processed frame, Δt is the interval between the two frames, and S_i(t) is the movement speed of the i-th ROI_i in the frame being processed.
5. The multi-target object tracking system for panoramic video sequence images as claimed in claim 1 or 2, characterized in that: the target object detection and tracking unit uses the target object movement speed S_i(t) computed by formula (15) as the basis for computing the duration τ_{i,M}, as shown in formula (12),
\tau_{i,M} = k / S_i(t) \qquad (12)
in the formula, τ_{i,M} is the duration of the i-th target object, S_i(t) is the movement speed of the i-th target object, and k is a constant;
for target objects at different distances from the ODVS, the value of τ_{i,M} also needs suitable adjustment: for target objects with the same movement speed, an object near the ODVS appears to move faster on the unfolded panorama, while an object far from the ODVS appears to move slower; formula (12) gives the τ_{i,M} value for a medium distance from the ODVS, so the duration is normalized here; specifically, the duration for nearby objects is set to H_τ(x, y, t) = τ_{i,M} − α, the duration for medium distance to H_τ(x, y, t) = τ_{i,M}, and the duration for far distance to H_τ(x, y, t) = τ_{i,M} + α, where α = 2–4.
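A minimal Java sketch combining formulas (15) and (12) with this normalization follows; the constant k and the band encoding are illustrative assumptions:

// Sketch of formulas (15) and (12): per-object speed from successive centroids,
// duration tau = k / S, and the distance-band normalization (tau - alpha near
// the ODVS, tau in the middle band, tau + alpha far away).
final class DurationEstimator {
    static final double K = 100.0;    // illustrative constant k
    static final int ALPHA = 3;       // alpha in the stated 2..4 range

    static double speed(double x0, double y0, double x1, double y1, double dt) {
        return Math.hypot(x1 - x0, y1 - y0) / dt;              // formula (15)
    }

    /** band: 0 = near, 1 = middle distance, 2 = far from the ODVS. */
    static int duration(double speed, int band) {
        int tau = (int) Math.round(K / Math.max(speed, 1e-6)); // formula (12)
        return tau + (band - 1) * ALPHA;                       // -alpha, 0, +alpha
    }
}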
6. The multi-target object tracking system for panoramic video sequence images as claimed in claim 1 or 2, characterized in that: the target object detection and tracking unit obtains, in each loop, the i ROI regions of interest, the centroid coordinate of each region, and the size of each region frame; these target-object extraction and tracking data, together with the panoramic video images, are provided through a software interface to be called by the mid- and high-level video sequence image processing.
7. The multi-target object tracking system for panoramic video sequence images as claimed in claim 1, characterized in that: the multi-target object tracking unit tracks effectively when multiple target objects occlude, separate, or merge in the scene; to achieve effective tracking of multiple target objects, the various attribute data in the target objects need to be fused, matched and updated, so a software object is created automatically for each target object entering the scene; the object is described as follows (a Java rendering is sketched after the listing):
object{
// morphological state variables
spatial position (distance and azimuth centered on the observer, expressed in Gauss coordinates)
size (normalized with respect to distance, centered on the observer, expressed in mm²/pixel)
shape, posture (expressed by compactness, solidity, eccentricity, irregularity, minimum bounding rectangle, aspect ratio, Hu invariant moments, etc.; saved as the 7 vector values of the Hu invariant moments)
chrominance (expressed by the H component and S component of the HSI color space; saved as color histogram data)
// motion state variables
movement speed (expressed in mm/s)
movement direction (azimuth reference centered on the observer)
posture change rate
dwell time (expressed in s)
disappearance time (expressed in s)
// motion history data
trajectory record (recorded with time and spatial position)
}
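Rendered as a Java class, the descriptor above might look like the following sketch; the attribute list comes from the claim, while every type and field name is an illustrative assumption:

import java.util.ArrayList;
import java.util.List;

// Illustrative Java rendering of the per-target descriptor; only the attribute
// list is specified by the claim, the types and names are assumptions.
class TargetObject {
    // morphological state
    double distance, azimuth;             // Gauss coordinates, observer-centered
    double sizeMm2PerPixel;               // size normalized by distance
    double[] huMoments = new double[7];   // shape/posture: 7 Hu invariant moments
    float[] colorHistogram;               // chrominance: H-S histogram in HSI space

    // motion state
    double speedMmPerS;
    double motionAzimuth;                 // observer-centered direction
    double postureChangeRate;
    double dwellTimeS, disappearTimeS;

    // motion history
    static class TrackPoint { long timeMs; double distance, azimuth; }
    final List<TrackPoint> trajectory = new ArrayList<>();
}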
8. The multi-target object tracking system for panoramic video sequence images as claimed in claim 1, characterized in that: the multi-target object tracking unit uses the color histogram as the color feature model and as the first matching criterion when multiple target objects split; the specific practice is to save the chrominance information of each object in its object before the multiple objects merge; when the multiple target objects split, the color histograms of the relevant target objects stored in the objects are read out, the color histogram of the ROI of each relevant target object at splitting time is computed, and the Bhattacharyya distance is used as the measure of similarity between two color histograms; the discrete Bhattacharyya coefficient is computed as shown in formula (16),
\rho[P_i, Q_j] = \sum_{\mu=1}^{m} \sqrt{p_i(\mu)\, q_j(\mu)} \qquad (16)
In the formula, ρ[P_i, Q_j] ∈ [0, 1] is the Bhattacharyya coefficient; the Bhattacharyya distance is computed by formula (17),

d = \min\{d_{i,j}\} = \min\left\{ \sqrt{1 - \rho[P_i, Q_j]} \right\} \qquad (17)
In the formula, d is the Bhattacharyya distance; if this value is less than a specified threshold Threshold4, the match is successful; target objects without a successful match continue to be matched by the second criterion.
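A minimal Java sketch of this color match (formulas (16)–(17)) follows; the histogram arrays are assumed normalized, and the Threshold4 value shown is illustrative since the claim leaves it as a specified constant:

// Sketch of formulas (16)-(17): discrete Bhattacharyya coefficient between two
// normalized color histograms and the derived distance; a distance below
// Threshold4 counts as a first-stage (color) match.
final class ColorMatcher {
    static final double THRESHOLD4 = 0.3;   // illustrative value

    static double bhattacharyyaDistance(double[] p, double[] q) {
        double rho = 0.0;                    // formula (16)
        for (int u = 0; u < p.length; u++) rho += Math.sqrt(p[u] * q[u]);
        return Math.sqrt(Math.max(0.0, 1.0 - rho));   // formula (17)
    }

    static boolean matches(double[] p, double[] q) {
        return bhattacharyyaDistance(p, q) < THRESHOLD4;
    }
}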
9. The multi-target object tracking system for panoramic video sequence images as claimed in claim 1, characterized in that: the multi-target object tracking unit uses the Hu invariant moments as the target object shape feature and as the second matching criterion when multiple target objects split; the specific practice is to save the 7 vector values of the Hu invariant-moment features of each object in its object before the multiple objects merge; when the multiple target objects split, the Hu invariant-moment feature vectors stored in the objects of the relevant target objects are read out, the Hu invariant-moment feature vectors of the relevant target objects at splitting time are computed, and the Euclidean distance is used as the measure of similarity between two Hu invariant-moment features; the 7 feature vectors of the Hu invariant moments are computed as shown in formula (18),
\phi_1 = \eta_{20} + \eta_{02}

\phi_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2

\phi_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2

\phi_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2

\phi_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})\big[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\big] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})\big[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\big] \qquad (18)

\phi_6 = (\eta_{20} - \eta_{02})\big[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\big] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})

\phi_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})\big[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\big] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})\big[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\big]
In the formula, η_pq is the normalized central moment, computed by formula (19),

\eta_{pq} = \mu_{pq} / \mu_{00}^{\,r}\,, \qquad r = (p+q)/2 + 1\,, \quad p + q = 2, 3, \ldots \qquad (19)
In the formula, μ_pq is the central moment, computed by formula (20),

\mu_{pq} = \sum_{x=1}^{M} \sum_{y=1}^{N} (x - \bar{x})^p (y - \bar{y})^q f(x, y) \qquad (20)
In the formula, f(x, y) is the binary map of the target object, and (x̄, ȳ) are the centroid coordinates of the target object, computed by formula (21),

\bar{x} = m_{10}/m_{00}\,, \quad \bar{y} = m_{01}/m_{00}\,, \qquad m_{pq} = \sum_{x=1}^{M} \sum_{y=1}^{N} x^p y^q f(x, y) \qquad (21)
In the formula, f(x, y) is the binary map of the target object; for each target object, the 7 vectors of its Hu invariant moments can be computed by formulas (18)–(21), and the Euclidean distance of formula (22) is then used to judge the similarity of two objects,
d_{ms} = \sqrt{ \sum_{i=1}^{7} (\phi_{mi} - \phi_{si})^2 }\,, \quad s \in S \qquad (22)
In the formula, d_{ms} is the Euclidean distance between the Hu invariant-moment feature vector value φ_{si} stored in an object and the Hu invariant-moment feature vector value φ_{mi} of the matched target object; S is the number of target objects at the time the multiple target objects merged; if d_{mj} = min{d_{ms}}, s ∈ S, is satisfied, the j-th object is judged to be the matched target object.
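A hedged Java sketch of the full shape-matching chain (formulas (18)–(22)) follows, computing raw moments, central moments, normalized moments, the 7 Hu invariants and the Euclidean distance from a binary mask; it is an illustration of the standard Hu construction the claim relies on, not the patent's code:

// Sketch of formulas (18)-(22); f is a binary mask (nonzero = object pixel);
// assumes a non-empty mask.
final class HuMoments {
    static double rawMoment(int[][] f, int p, int q) {        // m_pq, formula (21)
        double m = 0;
        for (int y = 0; y < f.length; y++)
            for (int x = 0; x < f[y].length; x++)
                if (f[y][x] != 0) m += Math.pow(x, p) * Math.pow(y, q);
        return m;
    }

    static double[] compute(int[][] f) {
        double m00 = rawMoment(f, 0, 0);
        double xc = rawMoment(f, 1, 0) / m00, yc = rawMoment(f, 0, 1) / m00;
        double[][] mu = new double[4][4];                     // mu_pq, formula (20)
        for (int p = 0; p <= 3; p++)
            for (int q = 0; q <= 3 - p; q++)
                for (int y = 0; y < f.length; y++)
                    for (int x = 0; x < f[y].length; x++)
                        if (f[y][x] != 0)
                            mu[p][q] += Math.pow(x - xc, p) * Math.pow(y - yc, q);
        double[][] n = new double[4][4];                      // eta_pq, formula (19)
        for (int p = 0; p <= 3; p++)
            for (int q = 0; q <= 3 - p; q++)
                if (p + q >= 2) n[p][q] = mu[p][q] / Math.pow(m00, (p + q) / 2.0 + 1);
        double a = n[3][0] + n[1][2], b = n[2][1] + n[0][3];
        double[] phi = new double[7];                         // formula (18)
        phi[0] = n[2][0] + n[0][2];
        phi[1] = Math.pow(n[2][0] - n[0][2], 2) + 4 * n[1][1] * n[1][1];
        phi[2] = Math.pow(n[3][0] - 3 * n[1][2], 2) + Math.pow(3 * n[2][1] - n[0][3], 2);
        phi[3] = a * a + b * b;
        phi[4] = (n[3][0] - 3 * n[1][2]) * a * (a * a - 3 * b * b)
               + (3 * n[2][1] - n[0][3]) * b * (3 * a * a - b * b);
        phi[5] = (n[2][0] - n[0][2]) * (a * a - b * b) + 4 * n[1][1] * a * b;
        phi[6] = (3 * n[2][1] - n[0][3]) * a * (a * a - 3 * b * b)
               - (n[3][0] - 3 * n[1][2]) * b * (3 * a * a - b * b);
        return phi;
    }

    static double distance(double[] phiM, double[] phiS) {    // formula (22)
        double s = 0;
        for (int i = 0; i < 7; i++) s += (phiM[i] - phiS[i]) * (phiM[i] - phiS[i]);
        return Math.sqrt(s);
    }
}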
CN201210116956.9A 2012-04-19 2012-04-19 Multi-target object tracking system of panorama video sequence image Active CN102999918B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210116956.9A CN102999918B (en) 2012-04-19 2012-04-19 Multi-target object tracking system of panorama video sequence image

Publications (2)

Publication Number Publication Date
CN102999918A CN102999918A (en) 2013-03-27
CN102999918B true CN102999918B (en) 2015-04-22

Family

ID=47928452

Country Status (1)

Country Link
CN (1) CN102999918B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101051223A (en) * 2007-04-29 2007-10-10 浙江工业大学 Air conditioner energy saving controller based on omnibearing computer vision
CN101064837A (en) * 2007-05-29 2007-10-31 王海燕 Method for tracking plurality of targets in video image
CN101145200A (en) * 2007-10-26 2008-03-19 浙江工业大学 Inner river ship automatic identification system of multiple vision sensor information fusion
CN101251928A (en) * 2008-03-13 2008-08-27 上海交通大学 Object tracking method based on core
CN101729872A (en) * 2009-12-11 2010-06-09 南京城际在线信息技术有限公司 Video monitoring image based method for automatically distinguishing traffic states of roads

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ahad, M.A.R. et al., "Action recognition by employing combined directional motion history and energy images", Computer Vision and Pattern Recognition Workshops (CVPRW), 2010-12-31, page 2, left column, paragraphs 3-4 *

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant