CN101739686B - Moving object tracking method and system thereof - Google Patents


Info

Publication number
CN101739686B
CN101739686B (application CN2009100774355A)
Authority
CN
China
Prior art keywords
target
area
threshold value
image
filtering
Prior art date
Legal status
Active
Application number
CN2009100774355A
Other languages
Chinese (zh)
Other versions
CN101739686A (en)
Inventor
王�华
曾建平
黄建
王正
菅云峰
Current Assignee
Netposa Technologies Ltd
Original Assignee
Beijing Zanb Science & Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zanb Science & Technology Co Ltd filed Critical Beijing Zanb Science & Technology Co Ltd
Priority to CN2009100774355A priority Critical patent/CN101739686B/en
Publication of CN101739686A publication Critical patent/CN101739686A/en
Application granted granted Critical
Publication of CN101739686B publication Critical patent/CN101739686B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a moving object tracking method and a corresponding system. The method comprises the following steps: target detection, in which moving target regions in a video scene are segmented from the background; target prediction, in which each target's motion in the next frame is estimated; target matching, in which stably matched targets are tracked and false targets are filtered out; and target updating, in which the templates of the stable targets are updated in the current frame. The method and system achieve accurate tracking of multiple targets against complex backgrounds, handle problems such as occlusion and swaying leaves, and are simple to operate, making them highly practical.

Description

Moving object tracking method and system thereof
Technical field
The present invention relates to video surveillance technology, and more particularly to a moving object tracking method and system for use in an intelligent video surveillance system.
Background art
With rising crime levels and growing threats, security has become a worldwide concern, and video surveillance is one way of addressing it. Beyond public safety, video surveillance also helps solve other problems, such as regulating urban traffic volume and pedestrian flow. Large-scale surveillance systems have for many years been widely deployed in major locations such as airports, banks, highways, and downtown areas.
Traditional video surveillance generally relies on human monitoring, which is prone to fatigue and oversight, reacts slowly, and carries high labor costs. In recent years, intelligent video surveillance technology that is digital, standardized, intelligent, and IP-networked has therefore been studied increasingly.
Conventional intelligent video surveillance includes a moving object tracking component. The purpose of moving object tracking is, on the basis of correctly detected moving targets, to determine the position of the same target across different scene images.
Tracking can be implemented with motion-analysis methods such as inter-frame differencing and optical-flow segmentation. Inter-frame differencing subtracts consecutive frames and thresholds the resulting image to extract moving targets. Its drawback is that it can only tell whether a target in the scene is moving from the intensity changes of pixels between frames; since both the frame-to-frame correlation of the moving-target signal and that of the noise are very weak, the two are difficult to distinguish. Optical-flow segmentation detects moving targets through the velocity difference between target and background. Its drawbacks are that it cannot handle problems such as partial occlusion of the target by the background caused by motion, or the aperture problem, and that its heavy computation requires special hardware support.
Tracking can also be implemented with image-matching methods such as region matching and model matching. Region matching superimposes a reference template on every possible position of the real-time image and computes an image-similarity measure at each position; the position of maximum similarity is the position of the target. Its drawback is a computational load too large for real-time operation. Model matching matches the targets in the scene image against a template. Its drawbacks are complex analysis, slow computation, a rather involved model-update procedure, and poor real-time performance.
In summary, a simpler, more effective, real-time moving object tracking scheme is urgently needed.
Summary of the invention
In view of this, the main purpose of the present invention is to provide a moving object tracking method and system that obtains a correct foreground image and reduces target detection errors; further, based on the detection results, it performs prediction, matching, and updating operations to filter out false moving targets and achieve accurate tracking of moving targets.
To achieve the above object, the technical scheme of the present invention is realized as follows:
The invention provides a moving object tracking method comprising:
detecting targets: segmenting the moving target regions in a video scene from the background;
predicting targets: estimating each target's motion in the next frame;
matching targets: tracking stably matched targets and filtering out false targets; and
updating targets: updating the templates of the stable targets in the current frame.
According to the present invention, detecting targets comprises the steps of:
acquiring video: obtaining video content to produce scene images, and building a background model;
preprocessing images: eliminating the influence of the scene image on the background model;
marking regions: performing foreground segmentation on the scene image according to the background model, and labeling connected regions;
maintaining state: determining the current state of the detection module, handling it accordingly, and performing anomaly detection when necessary;
enhancing regions: applying shadow detection, highlight detection, and tree filtering to reject false regions caused by shadow, highlights, and swaying leaves; and
splitting and merging regions: merging and splitting regions using the constraints provided by the background model and prior knowledge of person and vehicle models, to solve target over-segmentation and mutual occlusion between targets.
Preprocessing images comprises filtering and global motion compensation; wherein
filtering comprises noise filtering and smoothing of the image;
global motion compensation compensates for the global motion caused by slight swaying of the camera, the motion model covering translation, rotation, and zoom.
The regional luminance difference IDS within plus or minus 5 pixels around the rectangular region containing the foreground is computed with the conventional formula below, yielding the translation distances Δx, Δy for global motion compensation:

$$IDS = \frac{\sum_{x=s_x}^{m} \sum_{y=s_y}^{n} \left( I_{(x,y)}(t) - I_{(x,y)}(t-1) \right)}{s_x \cdot s_y}$$

where $s_x$ and $s_y$ are the x and y coordinates of the region's starting point, $I_{(x,y)}(t)$ is the gray level of the current frame, and $I_{(x,y)}(t-1)$ is the gray level of the previous frame. The displacements of the other regions are computed in the same way, and the averages Δx, Δy are finally taken; the image is then translated by Δx, Δy to obtain the compensated image.
Marking regions comprises the steps of:
foreground segmentation, which segments the scene image based on the background model to obtain a binary foreground image;
morphological processing, which processes the binary image with mathematical morphology to remove false regions of small area and fill regions of larger area; and
connected-region labeling, which labels the different regions in the same scene with a connected-domain method to distinguish different target regions.
Maintaining state comprises state judgment and anomaly detection.
State judgment determines the current state of the detection module and handles it accordingly: when the scene has been stable for longer than threshold 1, the system enters the working state from the initialization state; when the scene has been changing for longer than threshold 2, the system enters the initialization state from the working state. Threshold 1 is preferably between 0.5 and 2 seconds, and threshold 2 between 5 and 20 seconds.
Anomaly detection is performed when the video signal is severely disturbed or the camera is deliberately blocked. It judges from the edge matching value between two backgrounds and from the shortest time for successful background initialization: if the edge matching value between the current frame's background and the background model falls below threshold 3, or the shortest time for successful background initialization exceeds threshold 4, an anomaly is declared. Threshold 3 is preferably between 30 and 50, and threshold 4 between 6 and 20 seconds.
Enhancing regions comprises shadow detection, highlight detection, and tree filtering.
Shadow detection computes, for each connected region, the mean of the pixel values within that region, uses this mean as a threshold to identify the region's shadow area, and filters the shadow area out: a pixel whose value is below the threshold is judged to be shadow.
Highlight detection determines whether the image is in a highlight state; if so, brightness compensation is applied so that the mean pixel value of the image becomes 128.
Tree detection finds the swaying leaves and swaying-leaf shadows in the image and filters them out of the foreground image.
Swaying leaves are detected by either of the following two criteria: (1) trajectory tracking: if, at a trajectory point, the part of the target's corresponding region that belongs to the moving region is smaller than threshold 5 of the moving-region area, the target is considered a swaying leaf; (2) centroid-motion amplitude: if the displacement of the target centroid between adjacent trajectory points exceeds threshold 6 of the target width, the target is considered a swaying leaf. Threshold 5 is preferably between 5% and 15%; threshold 6 is preferably between 1.5 and 2.5.
Swaying-leaf shadow is detected by counting, before and after a dilation operation, the number of points in the region whose pixel value is '1', and computing their ratio; if the ratio is below threshold 7, the region is considered a swaying-leaf shadow region. Threshold 7 is preferably between 40% and 60%.
Splitting and merging regions builds on the result of region enhancement and judges whether two adjacent regions belong to the same target; if they do, the two regions are merged, otherwise they are split. Two regions are adjacent if the distance between their edges is below threshold 8, preferably between 3 and 7 pixels.
According to the present invention, predicting targets computes the average velocity of a target's motion from its accumulated displacement and the corresponding accumulated time, and predicts the target's next displacement from this velocity; wherein
the relation between the accumulated displacement, the accumulated time, and the average velocity is:
v = s/t
where s is the displacement of the target centroid after multiple frames of stable motion, t is the time taken by those frames, and v is the average velocity of the target's stable motion;
the next displacement predicted from the average velocity v is:
s′ = v·Δt
where Δt is the prediction horizon and s′ is the displacement of the target centroid after stable motion for the time Δt.
According to the present invention, matching targets comprises tracking stably matched targets and filtering out false targets; wherein
tracking a stably matched target judges whether a detected region matches a tracked target, the match being decided by the matching coefficient D of the detected region and the target in the following formula:
D = Da·A_Da + Db·A_Db + Dc·A_Dc
where Da is the area matching coefficient, Db the histogram matching coefficient, and Dc the distance matching coefficient, and A_Da, A_Db, A_Dc are the corresponding weight coefficients. When the matching coefficient D of a detected region and a target exceeds threshold 9, the detected region is judged to match the target. Threshold 9 is preferably between 0.7 and 0.8.
Area matching coefficient Da: when the area of the intersection of the detected region and the target exceeds threshold 10 of the target's area, the detected region satisfies the area match and Da is 1; otherwise Da is 0. Threshold 10 is preferably between 40% and 60%.
Histogram matching coefficient Db: when the histogram of the intersection of the detected region and the target exceeds threshold 11 of the target's histogram, the detected region satisfies the histogram match and Db is 1; otherwise Db is 0. Threshold 11 is preferably between 40% and 60%.
Distance matching coefficient Dc is evaluated according to whether the detected region is moving or static: if, in the difference image of the detected region between the current frame and the previous frame, the number of foreground points exceeds threshold 12 of the number of background points, the detected region is considered moving; otherwise it is considered static.
When the detected region is moving, the distance between the centers of the detected region in the previous frame and in the current frame is computed; if this distance is below threshold 13 of the diagonal length of the target's bounding rectangle, the distance match is satisfied and Dc is 1; otherwise Dc is 0.
When the detected region is static, the distance between the centers of the detected region in the previous frame and in the current frame is computed; if this distance is below threshold 14, the distance match is satisfied and Dc is 1; otherwise Dc is 0.
Threshold 12 is preferably between 65% and 75%, threshold 13 between 1.5 and 2, and threshold 14 between 8 and 12 pixels.
Filtering false targets analyzes the trajectory of the target's motion to filter out false target regions; trajectory analysis uses the target's trajectory information to measure the smoothness of the area change and the stationarity of the centroid change.
The smoothness of the area change is measured over the set of areas at the target's trajectory points, {area_1, area_2, …, area_n}, where n is the number of trajectory points. The mean area is

$$\overline{area} = \frac{1}{n} \sum_{i=1}^{n} area_i$$

and the area variance is

$$area_{sd} = \frac{1}{n} \sum_{i=1}^{n} \left( area_i - \overline{area} \right)^2$$

When $area_{sd} / \overline{area} > 0.5$, the area change is considered unsmooth and the target region is filtered out.
The stationarity of the centroid change relies on the fact that a normal target's motion does not produce regular abrupt changes in direction: the fraction of direction changes between adjacent trajectory points is computed, and if this fraction exceeds threshold 15, the centroid change is considered unsteady and the target region is filtered out. Threshold 15 is preferably between 40% and 60%.
According to a further aspect, the present invention also provides a moving object tracking system comprising:
a target detection module for segmenting the moving target regions in the video scene image from the background;
a target prediction module for estimating the position of the moving target in the next frame's scene image;
a target matching module for tracking stably matched targets and filtering out false targets; and
a target update module for updating the templates of the stable targets in the current frame.
The target detection module comprises:
a video acquisition module for obtaining video content to produce scene images and building a background model;
an image preprocessing module for eliminating the influence of the scene image on the background model;
a region marking module for performing foreground segmentation on the scene image according to the background model and labeling connected regions;
a state maintenance module for determining the current state of the detection module, handling it accordingly, and performing anomaly detection when necessary;
a region enhancement module for applying shadow detection, highlight detection, and tree filtering to reject false regions caused by shadow, highlights, and swaying leaves; and
a region splitting and merging module for merging and splitting regions using the constraints provided by the background model and prior knowledge of person and vehicle models, to solve target over-segmentation and mutual occlusion between targets.
The target matching module comprises a stable-target tracking module for judging whether a detected region matches a tracked target, and a false-target filtering module for filtering out false regions.
A great advantage of the present invention is that it achieves accurate tracking of multiple targets against complex backgrounds, solves problems such as occlusion and swaying leaves, and is computationally simple, giving it strong practicality.
The present invention can accurately detect the moving targets in a scene image, including people and vehicles, while ignoring disturbances such as image jitter, swaying trees, brightness changes, shadow, rain, and snow.
The present invention can also be used in intelligent video surveillance systems to realize functions such as target classification and recognition, moving-target alarms, moving object tracking, PTZ tracking, automatic close-up capture, target behavior detection, flow detection, congestion detection, abandoned-object detection, stolen-object detection, smoke detection, and flame detection.
Description of drawings
Fig. 1 is a structural schematic of the moving object tracking method of the present invention;
Fig. 2 is a schematic flowchart of target detection in the moving object tracking method of the present invention;
Fig. 3 is a schematic flowchart of region marking in the moving object tracking method of the present invention;
Fig. 4 is a schematic flowchart of target matching in the moving object tracking method of the present invention;
Fig. 5 is a structural schematic of the moving object tracking system of the present invention;
Fig. 6 is a structural schematic of the target detection module in the moving object tracking system of the present invention;
Fig. 7 is a structural schematic of the target matching module in the moving object tracking system of the present invention.
Embodiment
Fig. 1 shows the structure of the moving object tracking method of the present invention. As shown in Fig. 1, the method comprises:
target detection 10: segmenting the moving target regions in the video scene from the background;
target prediction 20: estimating each target's motion in the next frame;
target matching 30: tracking stably matched targets and filtering out false targets;
target updating 40: updating the templates of the stable targets in the current frame.
The first step is target detection, i.e., segmenting the moving target regions in the video scene from the background. Fig. 2 shows the framework of target detection. As shown in Fig. 2, target detection 10 comprises: video acquisition 11, which obtains video content to produce scene images and builds a background model; image preprocessing 12, which eliminates the influence of the scene image on the background model; region marking 13, which performs foreground segmentation on the scene image according to the background model and labels connected regions; state maintenance 14, which determines the current state of the detection module, handles it accordingly, and performs anomaly detection when necessary; region enhancement 15, which applies shadow detection, highlight detection, and tree filtering to reject false regions caused by shadow, highlights, and swaying leaves; and region splitting and merging 16, which merges and splits regions using the constraints provided by the background model and prior knowledge of person and vehicle models, to solve target over-segmentation and mutual occlusion between targets.
Video acquisition 11 is realized through a video capture device, which may be a visible-spectrum, near-infrared, or thermal camera; near-infrared and thermal cameras allow use in low light without additional illumination. The background model is initialized with the first frame's scene image and is subsequently updated in state maintenance 14.
Image preprocessing 12 then comprises filtering and global motion compensation. Filtering applies conventional noise filtering and smoothing to the image to remove noise points, and can be implemented following, for example: 'A hybrid filtering method for image denoising [J]. Journal of Image and Graphics, 2005, 10(3)' and 'An improved adaptive center-weighted mean filtering algorithm [J]. Journal of Tsinghua University (Science and Technology), 1999, 39(9)'.
Global motion compensation compensates for the global motion caused by slight swaying of the camera. In global motion compensation the motion model essentially reflects the various motions of the camera, including translation, rotation, and zoom. The method is motion compensation based on region-block matching: several region blocks are drawn in the image, each 32 to 64 pixels in width and height, and each block is required to cover a relatively fixed background, such as a building or another fixed backdrop.
The conventional global motion compensation method is as follows. Suppose the rectangular region containing the foreground has size m × n; the regional luminance difference IDS within plus or minus 5 pixels around this region is computed as:

$$IDS = \frac{\sum_{x=s_x}^{m} \sum_{y=s_y}^{n} \left( I_{(x,y)}(t) - I_{(x,y)}(t-1) \right)}{s_x \cdot s_y}$$

where $s_x$ and $s_y$ are the x and y coordinates of the region's starting point, $I_{(x,y)}(t)$ is the gray level of the current frame, and $I_{(x,y)}(t-1)$ is the gray level of the previous frame.
The position of the region with the minimum luminance difference is thereby obtained, and that region's displacement Δx, Δy is computed. The displacements of the other regions are computed in the same way, and the averages Δx, Δy are finally taken; the image is then translated by Δx, Δy to obtain the compensated image.
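For illustration only, the block-matching translation estimate described above can be sketched in Python/NumPy as follows. This is a minimal sketch, not the patented implementation: it assumes grayscale frames stored as 2-D NumPy arrays and blocks placed at least 5 pixels from the image border, it uses the absolute luminance difference as the matching score, and all function and parameter names are illustrative.

```python
import numpy as np

def block_displacement(prev, curr, x0, y0, w, h, radius=5):
    """Estimate one background block's (dx, dy) by minimizing the inter-frame
    luminance difference over a +/-radius pixel search window."""
    ref = prev[y0:y0 + h, x0:x0 + w].astype(np.float64)
    best_score, best_dxy = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cand = curr[y0 + dy:y0 + dy + h,
                        x0 + dx:x0 + dx + w].astype(np.float64)
            score = np.abs(cand - ref).sum()  # IDS-style luminance difference
            if score < best_score:
                best_score, best_dxy = score, (dx, dy)
    return best_dxy

def global_translation(prev, curr, blocks):
    """Average the per-block displacements of several fixed-background blocks
    (each 32-64 px on a side, given as (x0, y0, w, h)) into a global (dx, dy)."""
    shifts = np.array([block_displacement(prev, curr, *b) for b in blocks])
    return shifts.mean(axis=0)
```

The compensated frame is then obtained by translating the image by the averaged Δx, Δy.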
Fig. 3 shows the flow of region marking 13. As shown in Fig. 3, it proceeds through foreground segmentation 131, morphological processing 132, and connected-region labeling 133.
Foreground segmentation 131 segments the scene image based on the background model to obtain a binary foreground image. Specifically, the corresponding pixel values of the scene image and the background model are subtracted; if the result exceeds a preset threshold, the pixel is marked '1' as a foreground point, otherwise it is marked '0' as a background point, yielding the binary foreground image.
Morphological processing 132 processes the binary image with mathematical morphology, namely a dilation followed by an erosion, to remove false regions of small area and fill regions of larger area. Both the erosion and the dilation use 3 × 3 templates.
Connected-region labeling 133 labels the different regions in the same scene with a connected-domain method to distinguish different target regions, and can be realized with four-connectivity or eight-connectivity. The labeling method is: first, scan the image produced by morphological processing 132 line by line, find the first point of an unlabeled region, and label it; then check the point's eight-connected (or four-connected) neighborhood, label the points that satisfy connectivity and are not yet labeled, and record the newly labeled points as seed points for region growing. In the subsequent labeling process, seeds are repeatedly taken from the seed-point array and the above operation is applied, looping until the seed array is empty, at which point one connected region is fully labeled. The next unlabeled region is then labeled in the same way, until all connected regions of the image produced by morphological processing 132 are labeled.
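A compact sketch of this marking step using OpenCV is given below; it is illustrative rather than the claimed implementation. The background model is taken to be a single grayscale reference image, and `diff_thresh` and `min_area` are assumed example values, not thresholds from the patent.

```python
import cv2
import numpy as np

def mark_regions(frame_gray, background, diff_thresh=30, min_area=50):
    """Foreground segmentation -> 3x3 morphology -> connected components."""
    # Foreground segmentation: pixels differing strongly from the model -> "1".
    diff = cv2.absdiff(frame_gray, background)
    _, fg = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)

    # Morphological processing with 3x3 templates: dilation then erosion.
    kernel = np.ones((3, 3), np.uint8)
    fg = cv2.erode(cv2.dilate(fg, kernel), kernel)

    # Connected-region labeling (8-connectivity) replaces the seed-growing scan.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(
        fg, connectivity=8)
    return [(labels == i, stats[i], centroids[i])
            for i in range(1, n)                        # label 0 is background
            if stats[i, cv2.CC_STAT_AREA] >= min_area]  # drop tiny false areas
```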
In region marking 13, single regions and single targets do not correspond one to one. Owing to occlusion, one region may contain several people or vehicles; where the foreground resembles the background, one target may be over-segmented into several regions; owing to illumination, a region may contain shadow and highlight areas; and some uninteresting motions, such as swaying leaves and water ripples, also produce false foreground regions. These problems are inherent to background-model methods and must be solved in the subsequent steps.
State maintenance 14 in Fig. 2 comprises state judgment and anomaly detection.
State judgment determines the current state of the detection module and handles it accordingly, mainly from the scene-stable time and the scene-change time: when the scene has been stable for longer than threshold 1, the system enters the working state from the initialization state; when the scene has been changing for longer than threshold 2, the system enters the initialization state from the working state.
Threshold 1 is preferably between 0.5 and 2 seconds; threshold 2 is preferably between 5 and 20 seconds.
In the working state, processing continues to the next operation and the background model is left unchanged. In the initialization state, the background model is rebuilt, and anomaly detection is performed when necessary. Rebuilding the background model can be realized by region detection with inter-frame differencing, which subtracts two frame images and takes the absolute value.
Anomaly detection is performed when necessary, i.e., when the video signal is severely disturbed or the camera is deliberately blocked. It judges from the edge matching value between two backgrounds and from the shortest time for successful background initialization: if the edge matching value between the current frame's background and the background model falls below threshold 3, or the shortest time for successful background initialization exceeds threshold 4, an anomaly is declared.
Threshold 3 is preferably between 30 and 50; threshold 4 is preferably between 6 and 20 seconds.
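This state logic can be pictured with the small state machine below. It is a sketch under the thresholds quoted above (the default values sit inside the preferred ranges for thresholds 1 to 4); the class and method names are illustrative.

```python
class DetectorState:
    """Two states: "INIT" (building the background) and "WORKING"."""

    def __init__(self, stable_t=1.0, change_t=10.0, edge_t=40, init_t=10.0):
        self.state = "INIT"
        self.stable_t = stable_t   # threshold 1: 0.5-2 s of scene stability
        self.change_t = change_t   # threshold 2: 5-20 s of scene change
        self.edge_t = edge_t       # threshold 3: background edge match, 30-50
        self.init_t = init_t       # threshold 4: 6-20 s to initialize

    def update(self, scene_stable_s, scene_change_s):
        if self.state == "INIT" and scene_stable_s > self.stable_t:
            self.state = "WORKING"     # scene stable long enough
        elif self.state == "WORKING" and scene_change_s > self.change_t:
            self.state = "INIT"        # scene changed; rebuild the background

    def is_abnormal(self, edge_match, init_elapsed_s):
        # Poor background edge match or overlong initialization => anomaly.
        return edge_match < self.edge_t or init_elapsed_s > self.init_t
```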
Region enhancement 15 in Fig. 2 applies shadow detection, highlight detection, and tree filtering to reject the false regions caused by shadow, highlights, and swaying leaves.
Shadow detection finds the shadow areas in the foreground image, including the shadows of people and vehicles, and filters them out. For each connected region, the mean of the pixel values within the region is computed and used as a threshold to identify the region's shadow area, which is then filtered out. The shadow decision rule is: a pixel whose value is below the threshold is judged to be shadow.
Highlight detection determines whether the image is in a highlight state (i.e., its pixel values are generally too high); if so, brightness compensation is applied. Brightness compensation is realized by luminance scaling such that the mean pixel value of the image becomes 128.
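The per-region shadow test and the brightness compensation can be sketched as follows. This is an illustrative reading of the description, with `high_thresh` an assumed trigger for the highlight state rather than a value from the patent.

```python
import numpy as np

def remove_shadow(gray, region_mask):
    """Keep only the non-shadow pixels of a connected region: the region's
    mean pixel value serves as the threshold, and darker pixels are shadow."""
    thresh = gray[region_mask].mean()
    return region_mask & (gray >= thresh)

def compensate_brightness(gray, high_thresh=180):
    """If the image is in a highlight state (overall too bright), scale it
    so that its mean pixel value becomes 128."""
    m = gray.mean()
    if m > high_thresh:
        scaled = np.clip(gray.astype(np.float64) * (128.0 / m), 0, 255)
        return scaled.astype(np.uint8)
    return gray
```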
Tree filtering detects the swaying leaves and their shadows in the image and filters them out of the foreground image.
Swaying leaves are detected by either of the following two criteria. (1) Trajectory tracking: if, at a trajectory point, the part of the target's corresponding region that belongs to the moving region is smaller than threshold 5 of the moving-region area, the target is considered a swaying leaf; for example, if a target has ten trajectory points and only one of them corresponds to a moving region, the target is treated as a swaying leaf and filtered out. (2) Centroid-motion amplitude: if the amplitude of a target's centroid motion changes abruptly, i.e., the displacement of the target centroid between adjacent trajectory points exceeds threshold 6 of the target width, the target is considered a swaying leaf and filtered out.
Threshold 5 is preferably between 5% and 15%; threshold 6 is preferably between 1.5 and 2.5.
Swaying-leaf shadow is detected through the density of points in the detected region: the numbers of points in the region whose pixel value is '1' before and after a dilation operation are counted and their ratio is computed; if the ratio is below threshold 7, the region is considered a swaying-leaf shadow region and is filtered out.
Threshold 7 is preferably between 40% and 60%.
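The density test for swaying-leaf shadow can be sketched as below: a solid region barely grows under dilation, while scattered leaf-shadow points grow strongly, so the before/after point-count ratio stays small. The 3 × 3 structuring element and the default ratio are assumptions within the quoted range for threshold 7.

```python
import cv2
import numpy as np

def is_leaf_shadow(region_mask, ratio_thresh=0.5):
    """True when the region's "1"-point count before dilation, divided by the
    count after dilation, falls below threshold 7 (sparse, scattered points)."""
    before = int(np.count_nonzero(region_mask))
    dilated = cv2.dilate(region_mask.astype(np.uint8),
                         np.ones((3, 3), np.uint8))
    after = int(np.count_nonzero(dilated))
    return after > 0 and before / after < ratio_thresh
```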
Region splitting and merging 16 in Fig. 2 merges and splits regions using the constraints provided by the background model and priors such as person and vehicle models, to solve target over-segmentation and mutual occlusion between targets. Splitting and merging builds on the result of region enhancement 15 and judges whether two adjacent regions belong to the same target or to different targets: if they belong to the same target, the two regions are merged; otherwise they are split. Two regions are adjacent if the distance between their edges is below threshold 8; regions of the same target carry consistent region index labels, while regions of different targets carry inconsistent labels.
Threshold 8 is preferably between 3 and 7 pixels.
The second step of the present invention is target prediction 20: from the accumulated displacement of the target's motion and the corresponding accumulated time, the average velocity of the motion is computed, and from this velocity the target's next displacement is predicted. The accumulated displacement is the sum of the target's displacements, and the accumulated time is the sum of the corresponding motion times. Their relation to the average velocity is:
v = s/t
where s is the displacement of the target centroid after multiple frames of stable motion, t is the time taken by those frames, and v is the average velocity of the target's stable motion; the average velocity follows directly from this formula.
The next displacement predicted from the average velocity v is:
s′ = v·Δt
where Δt is the prediction horizon and s′ is the displacement of the target centroid after stable motion for the time Δt; the predicted next displacement follows directly from this formula.
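The two formulas amount to a constant-velocity predictor over the stable part of the track. A minimal sketch, assuming the track is a list of (time, x, y) centroid samples with distinct first and last timestamps:

```python
def predict_displacement(track, dt):
    """Average velocity v = s/t over the accumulated track, then s' = v * dt."""
    (t0, x0, y0) = track[0]
    (t1, x1, y1) = track[-1]
    vx = (x1 - x0) / (t1 - t0)   # average velocity over the accumulated time
    vy = (y1 - y0) / (t1 - t0)
    return vx * dt, vy * dt      # predicted displacement after dt
```

For a target whose centroid moved from (100, 50) at t = 0 s to (120, 50) at t = 2 s, the sketch predicts a further (2, 0) displacement over the next 0.2 s.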
The third step of the present invention is target matching 30, which tracks stably matched targets and filters out false targets. Fig. 4 shows the flow of target matching; as shown in Fig. 4, target matching 30 comprises tracking stably matched targets 301 and filtering false targets 302.
Tracking a stably matched target 301 judges whether a detected region matches a tracked target. The matching condition is based on the matching coefficient D of the detected region and the target, computed as:
D = Da·A_Da + Db·A_Db + Dc·A_Dc
where Da is the area matching coefficient, Db the histogram matching coefficient, and Dc the distance matching coefficient, and A_Da, A_Db, A_Dc are the corresponding weight coefficients. When the matching coefficient D of a detected region and a target exceeds threshold 9, the detected region is judged to match the target. Threshold 9 is preferably between 0.7 and 0.8.
A_Da, A_Db, A_Dc each take a value between 0 and 1, and the three sum to 1; their preferred values are 0.2, 0.3, and 0.5, respectively.
1) Area matching coefficient Da: when the area of the intersection of the detected region and the target exceeds threshold 10 of the target's area, the detected region satisfies the area match and Da is 1; otherwise Da is 0.
Threshold 10 is preferably between 40% and 60%.
2) Histogram matching coefficient Db: when the histogram of the intersection of the detected region and the target exceeds threshold 11 of the target's histogram, the detected region satisfies the histogram match and Db is 1; otherwise Db is 0.
Threshold 11 is preferably between 40% and 60%.
3) Distance matching coefficient Dc is evaluated in two cases, according to whether the detected region is moving or static. If, in the difference image of the detected region between the current frame and the previous frame, the number of foreground points exceeds threshold 12 of the number of background points, the detected region is considered moving; otherwise it is considered static. When the detected region is moving, the distance between the centers of the detected region in the previous frame and in the current frame is computed; if it is below threshold 13 of the diagonal length of the target's bounding rectangle, the distance match is satisfied and Dc is 1; otherwise Dc is 0. When the detected region is static, the distance between the centers of the detected region in the previous frame and in the current frame is computed; if it is below threshold 14, the distance match is satisfied and Dc is 1; otherwise Dc is 0.
Threshold 12 is preferably between 65% and 75%; threshold 13 is preferably between 1.5 and 2; threshold 14 is preferably between 8 and 12 pixels.
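A self-contained sketch of the matching coefficient follows, using the preferred weights 0.2/0.3/0.5 and mid-range values for thresholds 9 to 13. The dictionary fields (`box`, `hist`, `center`) and the histogram-intersection reading of the histogram match are assumptions made for illustration, and the static case (threshold 14) is omitted for brevity.

```python
import numpy as np

def match_score(det, tgt, weights=(0.2, 0.3, 0.5)):
    """D = Da*A_Da + Db*A_Db + Dc*A_Dc for a detected region and a target.
    det/tgt: {"box": (x, y, w, h), "hist": normalized histogram,
    "center": (cx, cy)}."""
    dx0, dy0, dw, dh = det["box"]
    tx0, ty0, tw, th = tgt["box"]

    # Da: intersection area above ~50% of the target area (threshold 10).
    ix = max(0, min(dx0 + dw, tx0 + tw) - max(dx0, tx0))
    iy = max(0, min(dy0 + dh, ty0 + th) - max(dy0, ty0))
    da = 1.0 if ix * iy > 0.5 * tw * th else 0.0

    # Db: histogram intersection above ~50% (threshold 11).
    db = 1.0 if np.minimum(det["hist"], tgt["hist"]).sum() > 0.5 else 0.0

    # Dc (moving case): center distance below ~1.75x the target box diagonal
    # (threshold 13).
    dist = np.hypot(det["center"][0] - tgt["center"][0],
                    det["center"][1] - tgt["center"][1])
    dc = 1.0 if dist < 1.75 * np.hypot(tw, th) else 0.0

    return weights[0] * da + weights[1] * db + weights[2] * dc

def is_match(det, tgt, d_thresh=0.75):   # threshold 9: preferably 0.7-0.8
    return match_score(det, tgt) > d_thresh
```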
Filtering false targets analyzes the trajectory of the target's motion to filter out false target regions. Trajectory analysis uses the target's trajectory information (including area information and centroid information) to measure the smoothness of the area change and the stationarity of the centroid change.
The smoothness of the area change is measured as follows: the areas at the target's trajectory points form a set {area_1, area_2, …, area_n}, where n is the number of trajectory points. The mean area is

$$\overline{area} = \frac{1}{n} \sum_{i=1}^{n} area_i$$

and the area variance is

$$area_{sd} = \frac{1}{n} \sum_{i=1}^{n} \left( area_i - \overline{area} \right)^2$$

When $area_{sd} / \overline{area} > 0.5$, the area change is considered unsmooth and the target region is filtered out.
The stationarity of the centroid change relies on the fact that a normal target's motion does not produce regular abrupt changes in direction: the fraction of direction changes between adjacent trajectory points is computed, and if this fraction exceeds threshold 15, the centroid change is considered unsteady and the target region is filtered out.
Threshold 15 is preferably between 40% and 60%.
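The two trajectory tests can be sketched together as follows. The variance-over-mean ratio follows the formulas as given above; the 90-degree criterion for what counts as a direction change is an assumption, since the patent does not quantify it.

```python
import numpy as np

def is_false_target(areas, centers, turn_thresh=0.5):
    """Filter a track whose area changes unsmoothly (variance/mean > 0.5) or
    whose centroid direction changes too often (fraction > threshold 15)."""
    areas = np.asarray(areas, dtype=float)
    if areas.var() / areas.mean() > 0.5:     # area_sd / mean(area) > 0.5
        return True

    # Motion direction between consecutive track points.
    steps = np.diff(np.asarray(centers, dtype=float), axis=0)
    ang = np.arctan2(steps[:, 1], steps[:, 0])
    d = np.abs((np.diff(ang) + np.pi) % (2 * np.pi) - np.pi)  # wrapped delta
    turns = d > np.pi / 2            # assumed: > 90 degrees = direction change
    return turns.size > 0 and turns.mean() > turn_thresh
```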
The final step is target updating 40: based on the stable targets produced by target matching 30, the model of the tracked target is updated in real time.
The present invention also provides a moving object tracking system; Fig. 5 shows its structure. As shown in Fig. 5, the system comprises a target detection module 71, a target prediction module 72, a target matching module 73, and a target update module 74. The target detection module 71 segments the moving target regions in the video scene image from the background; the target prediction module 72 estimates the position of the moving target in the next frame's scene image; the target matching module 73 tracks stably matched targets and filters out false targets; and the target update module 74 updates the templates of the stable targets in the current frame.
Fig. 6 shows the structure of the target detection module in the system. As shown in Fig. 6, the target detection module 71 comprises a video acquisition module 711, an image preprocessing module 712, a region marking module 713, a state maintenance module 714, a region enhancement module 715, and a region splitting and merging module 716. The video acquisition module 711 obtains video content to produce scene images and builds a background model; the image preprocessing module 712 eliminates the influence of the scene image on the background model; the region marking module 713 performs foreground segmentation on the scene image according to the background model and labels connected regions; the state maintenance module 714 determines the current state of the detection module, handles it accordingly, and performs anomaly detection when necessary; the region enhancement module 715 applies shadow detection, highlight detection, and tree filtering to reject false regions caused by shadow, highlights, and swaying leaves; and the region splitting and merging module 716 merges and splits regions using the constraints provided by the background model and prior knowledge of person and vehicle models, to solve target over-segmentation and mutual occlusion between targets.
Fig. 7 shows the structure of the target matching module in the system. As shown in Fig. 7, the target matching module 73 comprises a stable-target tracking module 731 and a false-target filtering module 732: the stable-target tracking module 731 judges whether a detected region matches a tracked target, and the false-target filtering module 732 filters out false regions.
A great advantage of the present invention is that it achieves accurate tracking of multiple targets against complex backgrounds, solves problems such as occlusion and swaying leaves, and is computationally simple, giving it strong practicality.
The present invention can accurately detect the moving targets in a scene image, including people and vehicles, while ignoring disturbances such as image jitter, swaying trees, brightness changes, shadow, rain, and snow.
The present invention can also be used in intelligent video surveillance systems to realize functions such as target classification and recognition, moving-target alarms, moving object tracking, PTZ tracking, automatic close-up capture, target behavior detection, flow detection, congestion detection, abandoned-object detection, stolen-object detection, smoke detection, and flame detection.
The above are merely preferred embodiments of the present invention and are not intended to limit its scope of protection. It should be understood that the invention is not limited to the implementations described here, which are described to help those skilled in the art practice the invention. Any person skilled in the art may readily make further improvements and refinements without departing from the spirit and scope of the invention; the invention is therefore limited only by the content and scope of its claims, which are intended to cover all alternatives and equivalents falling within the spirit and scope of the invention as defined by the appended claims.

Claims (2)

1. A moving object tracking method, characterized in that the method comprises the steps of:
(1) detecting targets: segmenting the moving target regions in a video scene from the background;
(2) predicting targets: estimating each target's motion in the next frame;
(3) matching targets: tracking stably matched targets and filtering out false targets; and
(4) updating targets: updating the templates of the stable targets in the current frame;
wherein detecting targets comprises the steps of:
acquiring video: obtaining video content to produce scene images, and building a background model;
preprocessing images: eliminating the influence of the scene image on the background model; preprocessing comprises filtering and global motion compensation, wherein filtering comprises noise filtering and smoothing of the image, and global motion compensation compensates for the global motion caused by slight swaying of the camera, the motion model covering translation, rotation, and zoom;
marking regions: performing foreground segmentation on the scene image according to the background model, and labeling connected regions; marking regions comprises the steps of: foreground segmentation, which segments the scene image based on the background model to obtain a binary foreground image; morphological processing, which processes the binary image with mathematical morphology to remove false regions of small area and fill regions of larger area; and connected-region labeling, which labels the different regions in the same scene with a connected-domain method to distinguish different target regions;
maintaining state, comprising state judgment and anomaly detection; state judgment determines the current state of the module performing target detection and handles it accordingly: when the scene has been stable for longer than a first threshold, the system enters the working state from the initialization state; when the scene has been changing for longer than a second threshold, the system enters the initialization state from the working state; anomaly detection is performed when the video signal is severely disturbed or the camera is deliberately blocked, and judges from the edge matching value between two backgrounds and from the shortest time for successful background initialization: if the edge matching value between the current frame's background and the background model falls below a third threshold, or the shortest time for successful background initialization exceeds a fourth threshold, an anomaly is declared;
enhancing regions: applying shadow detection, highlight detection, and tree filtering to reject false regions caused by shadow, highlights, and swaying leaves; enhancing regions comprises: shadow detection, which computes, for each connected region, the mean of the pixel values within that region, uses this mean as a threshold to identify the region's shadow area, and filters the shadow area out, a pixel whose value is below the threshold being judged to be shadow; highlight detection, which determines whether the image is in a highlight state and, if so, applies brightness compensation so that the mean pixel value of the image becomes 128; and tree filtering, which detects the swaying leaves and swaying-leaf shadows in the image and filters them out of the foreground image, wherein swaying leaves are detected by either of the following two criteria: (1) trajectory tracking: if, at a trajectory point, the part of the target's corresponding region that belongs to the moving region is smaller than a fifth threshold of the moving-region area, the target is considered a swaying leaf; (2) centroid-motion amplitude: if the displacement of the target centroid between adjacent trajectory points exceeds a sixth threshold of the target width, the target is considered a swaying leaf; and wherein swaying-leaf shadow is detected by counting, before and after a dilation operation, the number of points in the connected region whose pixel value is '1', and computing their ratio, the connected region being considered a swaying-leaf shadow region if the ratio is below a seventh threshold; and
splitting and merging regions: merging and splitting regions using the constraints provided by the background model and prior knowledge of person and vehicle models, to solve target over-segmentation and mutual occlusion between targets; splitting and merging builds on the result of region enhancement and judges whether two adjacent regions belong to the same target: if they do, the two regions are merged, otherwise they are split, two regions being adjacent if the distance between their edges is below an eighth threshold;
predicting targets computes the average velocity of a target's motion from its accumulated displacement and the corresponding accumulated time, and predicts the target's next displacement from this velocity; wherein the relation between the accumulated displacement, the accumulated time, and the average velocity is:
v = s/t
where s is the displacement of the target centroid after multiple frames of stable motion, t is the time taken by those frames, and v is the average velocity of the target's stable motion;
the next displacement predicted from the average velocity v is:
s′ = v·Δt
where Δt is the prediction horizon and s′ is the displacement of the target centroid after stable motion for the time Δt;
matching targets comprises tracking stably matched targets and filtering out false targets; wherein tracking a stably matched target judges whether a detected region matches a tracked target, the match being decided by the matching coefficient D of the detected region and the target in the following formula:
D = Da·A_Da + Db·A_Db + Dc·A_Dc
where Da is the area matching coefficient, Db the histogram matching coefficient, and Dc the distance matching coefficient, and A_Da, A_Db, A_Dc are the corresponding weight coefficients; when the matching coefficient D of a detected region and a target exceeds a ninth threshold, the detected region is judged to match the target; the area matching coefficient Da is 1 when the area of the intersection of the detected region and the target exceeds a tenth threshold of the target's area, the detected region then satisfying the area match, and 0 otherwise; the histogram matching coefficient Db is 1 when the histogram of the intersection of the detected region and the target exceeds an eleventh threshold of the target's histogram, the detected region then satisfying the histogram match, and 0 otherwise; the distance matching coefficient Dc is evaluated according to whether the detected region is moving or static: if, in the difference image of the detected region between the current frame and the previous frame, the number of foreground points exceeds a twelfth threshold of the number of background points, the detected region is considered moving, and otherwise static; when the detected region is moving, the distance between the centers of the detected region in the previous frame and in the current frame is computed, and if it is below a thirteenth threshold of the diagonal length of the target's bounding rectangle, the distance match is satisfied and Dc is 1, otherwise Dc is 0; when the detected region is static, the distance between the centers of the detected region in the previous frame and in the current frame is computed, and if it is below a fourteenth threshold, the distance match is satisfied and Dc is 1, otherwise Dc is 0; filtering false targets analyzes the trajectory of the target's motion to filter out false target regions, wherein trajectory analysis uses the target's trajectory information to measure the smoothness of the area change and the stationarity of the centroid change; the smoothness of the area change is measured over the set of areas at the target's trajectory points, {area_1, area_2, …, area_n}, where n is the number of trajectory points, with mean area

$$\overline{area} = \frac{1}{n} \sum_{i=1}^{n} area_i$$

and area variance

$$area_{sd} = \frac{1}{n} \sum_{i=1}^{n} \left( area_i - \overline{area} \right)^2$$

when $area_{sd} / \overline{area} > 0.5$, the area change is considered unsmooth and the target region is filtered out;
the stationarity of the centroid change relies on the fact that a normal target's motion does not produce regular abrupt changes in direction: the fraction of direction changes between adjacent trajectory points is computed, and if this fraction exceeds a fifteenth threshold, the centroid change is considered unsteady and the target region is filtered out.
2. A moving object tracking system, characterized in that the system comprises:
a target detection module for segmenting the moving target regions in the video scene image from the background;
a target prediction module for estimating the position of the moving target in the next frame's scene image;
a target matching module for tracking stably matched targets and filtering out false targets; and
a target update module for updating the templates of the stable targets in the current frame;
wherein the target detection module comprises:
a video acquisition module for obtaining video content to produce scene images and building a background model;
an image preprocessing module for eliminating the influence of the scene image on the background model;
a region marking module for performing foreground segmentation on the scene image according to the background model and labeling connected regions;
a state maintenance module for determining the current state of the target detection module, handling it accordingly, and performing anomaly detection when the video signal is severely disturbed or the camera is deliberately blocked;
a region enhancement module for applying shadow detection, highlight detection, and tree filtering to reject false regions caused by shadow, highlights, and swaying leaves; wherein shadow detection computes, for each connected region, the mean of the pixel values within that region, uses this mean as a threshold to identify the region's shadow area, and filters the shadow area out, a pixel whose value is below the threshold being judged to be shadow; highlight detection determines whether the image is in a highlight state and, if so, applies brightness compensation so that the mean pixel value of the image becomes 128; and tree filtering detects the swaying leaves and swaying-leaf shadows in the image and filters them out of the foreground image, swaying leaves being detected by either of the following two criteria: (1) trajectory tracking: if, at a trajectory point, the part of the target's corresponding region that belongs to the moving region is smaller than a fifth threshold of the moving-region area, the target is considered a swaying leaf; (2) centroid-motion amplitude: if the displacement of the target centroid between adjacent trajectory points exceeds a sixth threshold of the target width, the target is considered a swaying leaf; and swaying-leaf shadow being detected by counting, before and after a dilation operation, the number of points in the connected region whose pixel value is '1', and computing their ratio, the connected region being considered a swaying-leaf shadow region if the ratio is below a seventh threshold; and
a region splitting and merging module for merging and splitting regions using the constraints provided by the background model and prior knowledge of person and vehicle models, to solve target over-segmentation and mutual occlusion between targets;
wherein the target matching module comprises:
a stable-target tracking module for judging whether a detected region matches a tracked target; and
a false-target filtering module for filtering out false regions.
CN2009100774355A 2009-02-11 2009-02-11 Moving object tracking method and system thereof Active CN101739686B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100774355A CN101739686B (en) 2009-02-11 2009-02-11 Moving object tracking method and system thereof

Publications (2)

Publication Number Publication Date
CN101739686A CN101739686A (en) 2010-06-16
CN101739686B true CN101739686B (en) 2012-05-30

Family

ID=42463137

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100774355A Active CN101739686B (en) 2009-02-11 2009-02-11 Moving object tracking method and system thereof

Country Status (1)

Country Link
CN (1) CN101739686B (en)

Families Citing this family (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101950424B (en) * 2010-09-09 2012-06-20 西安电子科技大学 Feature associated cell tracking method based on centroid tracking frame
CN101996317B (en) * 2010-11-01 2012-11-21 中国科学院深圳先进技术研究院 Method and device for identifying markers in human body
US9615064B2 (en) 2010-12-30 2017-04-04 Pelco, Inc. Tracking moving objects using a camera network
US9171075B2 (en) * 2010-12-30 2015-10-27 Pelco, Inc. Searching recorded video
CN102831378B (en) * 2011-06-14 2015-10-21 株式会社理光 The detection and tracking method and system of people
CN102270347B (en) * 2011-08-05 2013-02-27 上海交通大学 Target detection method based on linear regression model
JP5830373B2 (en) * 2011-12-22 2015-12-09 オリンパス株式会社 Imaging device
CN102760230B (en) * 2012-06-19 2014-07-23 华中科技大学 Flame detection method based on multi-dimensional time domain characteristics
CN102779348B (en) * 2012-06-20 2015-01-07 中国农业大学 Method for tracking and measuring moving targets without marks
CN103516956B (en) * 2012-06-26 2016-12-21 郑州大学 Pan/Tilt/Zoom camera monitoring intrusion detection method
CN102982559B (en) * 2012-11-28 2015-04-29 大唐移动通信设备有限公司 Vehicle tracking method and system
CN103226697A (en) * 2013-04-07 2013-07-31 布法罗机器人科技(苏州)有限公司 Quick vehicle tracking method and device
CN104083146B (en) * 2013-06-25 2016-03-16 北京大学 A kind of biological neural loop living imaging system
KR102161212B1 (en) 2013-11-25 2020-09-29 한화테크윈 주식회사 System and method for motion detecting
KR102247596B1 (en) 2014-01-24 2021-05-03 한화파워시스템 주식회사 Compressor system and method for controlling thereof
CN103941752B (en) * 2014-03-27 2016-10-12 北京大学 A kind of nematicide real-time automatic tracing imaging system
CN103971381A (en) * 2014-05-16 2014-08-06 江苏新瑞峰信息科技有限公司 Multi-target tracking system and method
JP6434507B2 (en) * 2014-06-03 2018-12-05 住友重機械工業株式会社 Construction machine human detection system and excavator
CN104754296A (en) * 2014-07-21 2015-07-01 广西电网公司钦州供电局 Time sequence tracking-based target judging and filtering method applied to transformer substation operation security control
CN105447431B (en) * 2014-08-01 2018-11-27 深圳中集天达空港设备有限公司 A kind of docking aircraft method for tracking and positioning and system based on machine vision
CN104778474B (en) * 2015-03-23 2019-06-07 四川九洲电器集团有限责任公司 A kind of classifier construction method and object detection method for target detection
KR102457617B1 (en) * 2015-09-16 2022-10-21 한화테크윈 주식회사 Method and apparatus of estimating a motion of an image, method and apparatus of image stabilization and computer-readable recording medium for executing the method
CN105761245B (en) * 2016-01-29 2018-03-06 速感科技(北京)有限公司 A kind of automatic tracking method and device of view-based access control model characteristic point
CN106096496A (en) * 2016-05-28 2016-11-09 张维秀 A kind of fire monitoring method and system
CN106204653B (en) * 2016-07-13 2019-04-30 浙江宇视科技有限公司 A kind of monitoring tracking and device
CN106251388A (en) * 2016-08-01 2016-12-21 乐视控股(北京)有限公司 Photo processing method and device
CN106447685B (en) * 2016-09-06 2019-04-02 电子科技大学 A kind of infrared track method
CN106530325A (en) * 2016-10-26 2017-03-22 合肥润客软件科技有限公司 Multi-target visual detection and tracking method
CN106910203B (en) * 2016-11-28 2018-02-13 江苏东大金智信息系统有限公司 The quick determination method of moving target in a kind of video surveillance
CN107202980B (en) * 2017-06-13 2019-12-10 西安电子科技大学 multi-frame combined sea surface small target detection method based on direction ratio
CN108010055B (en) * 2017-11-23 2022-07-12 塔普翊海(上海)智能科技有限公司 Tracking system and tracking method for three-dimensional object
CN108154119B (en) * 2017-12-25 2021-09-28 成都全景智能科技有限公司 Automatic driving processing method and device based on self-adaptive tracking frame segmentation
CN108280408B (en) * 2018-01-08 2021-11-02 北京联合大学 Crowd abnormal event detection method based on hybrid tracking and generalized linear model
CN108596946A (en) * 2018-03-21 2018-09-28 中国航空工业集团公司洛阳电光设备研究所 A kind of moving target real-time detection method and system
CN108960253A (en) * 2018-06-27 2018-12-07 魏巧萍 A kind of object detection system
CN109389031B (en) * 2018-08-27 2021-12-03 浙江大丰实业股份有限公司 Automatic positioning mechanism for performance personnel
CN110909579A (en) * 2018-09-18 2020-03-24 杭州海康威视数字技术股份有限公司 Video image processing method and device, electronic equipment and storage medium
CN110929597A (en) * 2019-11-06 2020-03-27 普联技术有限公司 Image-based leaf filtering method and device and storage medium
CN111654700B (en) * 2020-06-19 2022-12-06 杭州海康威视数字技术股份有限公司 Privacy mask processing method and device, electronic equipment and monitoring system
CN111767875B (en) * 2020-07-06 2024-05-10 中兴飞流信息科技有限公司 Tunnel smoke detection method based on instance segmentation
CN112270657A (en) * 2020-11-04 2021-01-26 成都寰蓉光电科技有限公司 Sky background-based target detection and tracking algorithm
CN112967316B (en) * 2021-03-05 2022-09-06 中国科学技术大学 Motion compensation optimization method and system for 3D multi-target tracking
CN114155255B (en) * 2021-12-14 2023-07-28 成都索贝数码科技股份有限公司 Video horizontal screen-to-vertical screen method based on space-time track of specific person

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101029824A (en) * 2006-02-28 2007-09-05 沈阳东软软件股份有限公司 Method and apparatus for positioning vehicle based on characteristics
CN101236606A (en) * 2008-03-07 2008-08-06 北京中星微电子有限公司 Shadow cancelling method and system in vision frequency monitoring

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Mingxiu. Research on Key Technologies in Real-time Visual Analysis of Moving Targets. China Master's Theses Full-text Database, 2008, No. 5, pp. 12-13, 19-22, 24, 27, 40-42. *
Zeng Ruili et al. Multi-object Tracking Algorithm in Intelligent Traffic Surveillance Systems. Chinese Journal of Electron Devices, 2007, Vol. 30, No. 6, pp. 2160-2161. *

Also Published As

Publication number Publication date
CN101739686A (en) 2010-06-16

Similar Documents

Publication Publication Date Title
CN101739686B (en) Moving object tracking method and system thereof
CN101739551B (en) Method and system for identifying moving objects
CN101739550B (en) Method and system for detecting moving objects
CN106910203B (en) The quick determination method of moving target in a kind of video surveillance
CN101794385B (en) Multi-angle multi-target fast human face tracking method used in video sequence
CN100545867C (en) Aerial shooting traffic video frequency vehicle rapid checking method
CN100589561C (en) Dubious static object detecting method based on video content analysis
CN100544446C (en) The real time movement detection method that is used for video monitoring
CN103971380B (en) Pedestrian based on RGB-D trails detection method
CN101872546B (en) Video-based method for rapidly detecting transit vehicles
CN108596129A (en) A kind of vehicle based on intelligent video analysis technology gets over line detecting method
CN106127807A (en) A kind of real-time video multiclass multi-object tracking method
CN101325690A (en) Method and system for detecting human flow analysis and crowd accumulation process of monitoring video flow
CN103049787A (en) People counting method and system based on head and shoulder features
Pan et al. Traffic surveillance system for vehicle flow detection
CN103150549A (en) Highway tunnel fire detecting method based on smog early-stage motion features
CN102222214A (en) Fast object recognition algorithm
CN101739694B (en) Image analysis-based method and device for ultrahigh detection of high voltage transmission line
CN102867416A (en) Vehicle part feature-based vehicle detection and tracking method
CN105160297A (en) Masked man event automatic detection method based on skin color characteristics
CN103077423A (en) Crowd quantity estimating, local crowd clustering state and crowd running state detection method based on video stream
CN105893962A (en) Method for counting passenger flow at airport security check counter
CN104616006A (en) Surveillance video oriented bearded face detection method
CN101996307A (en) Intelligent video human body identification method
CN104866827A (en) Method for detecting people crossing behavior based on video monitoring platform

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: NETPOSA TECHNOLOGIES, LTD.

Free format text: FORMER OWNER: BEIJING ZANB SCIENCE + TECHNOLOGY CO., LTD.

Effective date: 20150716

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20150716

Address after: 100102, Beijing, Chaoyang District, Tong Tung Street, No. 1, Wangjing SOHO tower, two, C, 26 floor

Patentee after: NETPOSA TECHNOLOGIES, Ltd.

Address before: 100048 Beijing city Haidian District Road No. 9, building 4, 5 layers of international subject

Patentee before: Beijing ZANB Technology Co.,Ltd.

PP01 Preservation of patent right
PP01 Preservation of patent right

Effective date of registration: 20220726

Granted publication date: 20120530