CN101739686B - Moving object tracking method and system thereof - Google Patents

Moving object tracking method and system thereof

Info

Publication number: CN101739686B
Authority: CN (China)
Prior art keywords: target, area, detection, image, threshold value
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN2009100774355A
Other languages: Chinese (zh)
Other versions: CN101739686A
Inventors: 王�华, 曾建平, 黄建, 王正, 菅云峰
Current Assignee: Netposa Technologies Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original Assignee: Beijing Zanb Science & Technology Co Ltd
Application filed by Beijing Zanb Science & Technology Co Ltd
Priority to CN2009100774355A
Publication of application CN101739686A; application granted; publication of grant CN101739686B
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a moving object tracking method and a system thereof. The moving object tracking method comprises the following steps: detecting objects, and segmenting a moving object area in a video scene from the background; predicting objects, and estimating the motion of the objects in the next frame; matching objects, tracking the matched stable objects, and filtering out false objects; and updating objects, and updating the templates of the stable objects in the current frame. The method and the system realize accurate tracking of multiple objects against a complex background and solve problems such as occlusion and swaying leaves; moreover, the operation is simple and convenient, so the method and the system are highly practical.

Description

Moving target tracking method and system thereof
Technical Field
The invention relates to a video monitoring technology, in particular to a moving target tracking method and a moving target tracking system in an intelligent video monitoring system.
Background
With increasing crime levels and threats, security has become a common concern in the world. Video surveillance is one of the methods to solve this problem. Besides public safety, video monitoring can also effectively solve other problems, such as adjustment of traffic flow and people flow in crowded cities. Large monitoring systems have been widely used for many years in major locations such as airports, banks, highways or city centres.
Because traditional video surveillance generally relies on manual monitoring, it suffers from drawbacks such as operator fatigue, lapses of attention, slow reaction and high labor cost. Therefore, digital, standardized, intelligent and IP-networked video surveillance technology has been increasingly researched in recent years.
Intelligent video surveillance techniques conventionally include a moving object tracking technique. The purpose of moving object tracking is to determine the position of the same object in successive scene images, on the basis of the moving object having been correctly detected.
To achieve tracking, methods based on motion analysis, such as the inter-frame difference method and optical flow segmentation, may be used. The inter-frame difference method subtracts adjacent frame images, thresholds and segments the resulting difference image, and then extracts the moving target. Its disadvantage is that it can only detect whether a target moves in the scene from the inter-frame intensity change of pixels; the inter-frame correlation of the moving-target signal and that of noise are both weak and hard to distinguish. Optical flow segmentation detects moving objects from the velocity difference between the object and the background. Its disadvantages are that it cannot effectively handle the background occlusion and uncovering caused by target motion or the aperture problem, its computation cost is large, and it requires special hardware support.
To achieve tracking, image matching methods such as region matching and model matching may also be used. Region matching superposes a block of the reference image on all possible positions of the real-time image and computes an image similarity measure at each position; the position with the maximum similarity is the position of the target. Its disadvantages are a large computation cost and difficulty in meeting real-time requirements. Model matching matches objects in the scene image against a template. Its disadvantages are complex computation and analysis, low speed, complicated model updating and poor real-time performance.
In summary, there is an urgent need to provide a simpler, more effective and more real-time moving target tracking scheme.
Disclosure of Invention
In view of the above, the main objective of the present invention is to provide a moving target tracking method and a system thereof which can obtain correct foreground images and reduce target detection errors, and which can further perform prediction, matching and updating on the detection results so as to filter out false moving targets and track moving targets accurately.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
the invention provides a moving target tracking method, which comprises the following steps:
detecting a target, and segmenting a moving target area in a video scene from a background;
predicting a target, and estimating the next frame motion of the target;
matching targets, tracking the matched stable targets, and filtering false targets;
and updating the target, and updating the template of the stable target in the current frame.
According to the invention, the detection of the target comprises the following steps:
acquiring a video, acquiring video content to obtain a scene image, and establishing a background model;
preprocessing the image, and eliminating the influence of the scene image on the background model;
marking a region, performing foreground segmentation on the scene image according to the background model, and marking a connected region;
maintaining the state, judging the current state of the detection target module, performing corresponding processing, and performing abnormal detection if necessary;
enhancing the area, and removing false areas of shadow, highlight and leaf swing by using shadow detection, highlight detection and tree filtering;
splitting and merging the regions, merging and splitting the regions by using the constraints provided by the background model and the prior knowledge of the human and vehicle models, so as to solve target over-segmentation and mutual occlusion of targets.
Wherein preprocessing the image comprises: filtering and global motion compensation; wherein,
the filtering process includes: carrying out noise filtering processing and image smoothing processing on the image;
the global motion compensation compensates the image-wide motion caused by slight camera sway; in global motion compensation, the motion model comprises translation, rotation and zooming.
The region brightness difference IDS over shifts of plus or minus 5 pixels around the rectangular region where the foreground is located is calculated by the following conventional formula to obtain the image translation distances Δx and Δy in global motion compensation:
$$\mathrm{IDS} = \sum_{x=s_x}^{m} \sum_{y=s_y}^{n} \left( I_{(x,y)}(t) - I_{(x,y)}(t-1) \right)$$
where $s_x$ denotes the x coordinate of the region starting point, $s_y$ denotes the y coordinate of the region starting point, $I_{(x,y)}(t)$ denotes the gray level of the current frame image, and $I_{(x,y)}(t-1)$ denotes the gray level of the previous frame image. The position changes of the remaining region blocks are calculated in the same way, and finally the average Δx and Δy are obtained; the image is translated by Δx and Δy to obtain the compensated image.
Wherein the marking region comprises the steps of:
foreground segmentation, namely segmenting a scene image based on a background model to obtain a binary image of a foreground;
morphological processing, namely processing the binary image by using a mathematical morphology method to remove false regions with small areas and fill regions with large areas; and
connected region marking, namely marking different areas in the same scene by using a connected-domain method to distinguish different target regions.
Wherein the maintenance state includes state determination and anomaly detection.
The state judgment judges the current state of the detection target module and performs corresponding processing: when the scene stabilization time exceeds threshold 1, the system enters the working state from the initialization state; when the scene change time exceeds threshold 2, the system enters the initialization state from the working state. Threshold 1 is preferably between 0.5 and 2 seconds, and threshold 2 is preferably between 5 and 20 seconds.
The abnormality detection is executed when the video signal is seriously disturbed or the camera is deliberately blocked. The judgment is made according to the edge matching value between two backgrounds and the shortest time for successful background initialization: if the edge matching value between the current frame background and the background model is less than threshold 3, or the shortest time for successful background initialization exceeds threshold 4, an abnormality is declared. Threshold 3 is preferably between 30 and 50, and threshold 4 is preferably between 6 and 20 seconds.
Wherein the enhancement region comprises: shadow detection, highlight detection, tree filtering.
Shadow detection calculates the mean of the pixel values within each connected region, takes the mean as a threshold, identifies the shadow part of the region and filters it out; a pixel is judged to be shadow if its value is smaller than the threshold.
Highlight detection detects whether the image is in a highlight state; if so, brightness compensation is performed so that the mean pixel value of the image becomes 128.
Tree filtering detects the swaying tree leaves and their shadows in the image and filters them out of the foreground image.
Detection of swaying leaves is realized according to either of the following two criteria: (1) motion track tracking: when the proportion of track points at which the target's corresponding motion region is actually moving is smaller than threshold 5, the target is considered a swaying leaf; (2) centroid motion amplitude: when the displacement of the target centroid between adjacent track points exceeds threshold 6 times the target width, the target is considered a swaying leaf. Threshold 5 is preferably between 5% and 15%; threshold 6 is preferably between 1.5 and 2.5.
The swaying-leaf shadow is detected as follows: count the number of points with pixel value 1 in the region before and after a dilation operation, and calculate their ratio; if the ratio is less than threshold 7, the region is considered a swaying-leaf shadow region. Threshold 7 is preferably between 40% and 60%.
Splitting and merging regions judges, on the basis of the enhanced-region processing, whether two adjacent regions belong to the same target region; if they do, they are merged, otherwise they are split. Two adjacent regions are regions whose edge distance is smaller than threshold 8, preferably between 3 and 7 pixels.
According to the invention, the target is predicted by calculating the average speed of the target movement according to the accumulated displacement of the target movement and the corresponding accumulated time, and predicting the next displacement of the target according to the speed; wherein,
the relationship among the accumulated displacement, the accumulated time and the average movement speed is as follows:
v=s/t
where s is the displacement of the target centroid after moving stably for multiple frames, t is the time required for the target to move those frames, and v is the average speed of the target's stable motion;
the next displacement predicted from the average velocity v is:
s′=v·Δt
where Δt is the prediction time and s′ is the displacement of the target centroid after moving stably for time Δt.
According to the invention, the matching target comprises: tracking the matched stable target and filtering out false target; wherein,
tracking the matched stable target judges whether the detection area matches the tracked target; the match is judged by the matching coefficient D between the detection area and the target in the following formula:
$$D = D_a A_{Da} + D_b A_{Db} + D_c A_{Dc}$$
where $D_a$ is the area matching coefficient, $D_b$ is the histogram matching coefficient, and $D_c$ is the distance matching coefficient; $A_{Da}$, $A_{Db}$ and $A_{Dc}$ are the weight coefficients corresponding to $D_a$, $D_b$ and $D_c$ respectively. When the matching coefficient D between the detection area and the target is greater than threshold 9, the detection area is judged to match the target. Threshold 9 is preferably between 0.7 and 0.8.
The area matching coefficient $D_a$: when the area of the intersection of the detection area and the target is larger than threshold 10 times the area of the target, the detection area is considered to satisfy the area match and $D_a$ is taken as 1; otherwise $D_a$ is taken as 0. Threshold 10 is preferably between 40% and 60%.
The histogram matching coefficient $D_b$: when the histogram of the intersection of the detection area and the target is larger than threshold 11 times the histogram of the target, the detection area is considered to satisfy the histogram match and $D_b$ is taken as 1; otherwise $D_b$ is taken as 0. Threshold 11 is preferably between 40% and 60%.
The distance matching coefficient $D_c$ is evaluated according to whether the detection region is moving or stationary: if the number of foreground points in the difference image between the detection area in the current frame and in the previous frame is greater than threshold 12 times the number of background points, the detection area is considered moving; otherwise it is considered stationary.
When the detection area is moving, the distance between the predicted center of the target in the current frame image and the center of the detection area in the current frame image is calculated; if the distance is smaller than threshold 13 times the diagonal length of the rectangular frame where the target is located, the distance match is considered satisfied and $D_c$ is taken as 1; otherwise $D_c$ is taken as 0.
When the detection area is stationary, the distance between the center of the detection area in the previous frame image and its center in the current frame image is calculated; if the distance is smaller than threshold 14, the distance match is considered satisfied and $D_c$ is taken as 1; otherwise $D_c$ is taken as 0.
Wherein, the threshold value 12 is preferably between 65% and 75%. The threshold value 13 is preferably between 1.5 and 2. The threshold value 14 is preferably between 8 and 12 pixels.
Filtering out false targets filters false target areas by analyzing the motion tracks of targets; the track analysis uses target track information to measure the smoothness of area change and the stationarity of centroid change.
The smoothness of area change is measured over the set of areas at the target track points, $\{area_1, area_2, \ldots, area_n\}$, where n denotes the number of track points; the area mean is:
$$\overline{area} = \frac{1}{n}\sum_{i=1}^{n} area_i$$
and the area variance is: $$area_{sd} = \frac{1}{n}\sum_{i=1}^{n}\left(area_i - \overline{area}\right)^2$$
When $area_{sd} > 0.5\,\overline{area}$, the area change is considered not smooth, and the target region is filtered out;
the method is characterized in that the stationarity of the change of the centroid points is calculated according to the fact that frequent sudden changes can not be generated in the direction of the movement of a normal target, the ratio of the direction change in the adjacent track points is calculated, if the ratio exceeds a threshold value 15, the change of the centroid points is considered to be unstable, and the target area is filtered. Wherein, the threshold value 15 is preferably between 40% and 60%.
According to another aspect of the present invention, there is also provided a moving object tracking system including:
the detection target module is used for segmenting a moving target area in the video scene image from a background;
a predicted target module, configured to estimate a position of the moving target in a next scene image;
the matching target module is used for tracking the matched stable target and filtering out a false target; and
and the target updating module is used for updating the template of the stable target in the current frame.
Wherein the detection target module comprises:
the video acquisition module is used for acquiring video content to obtain a scene image and establishing a background model;
the image preprocessing module is used for eliminating the influence of the scene image on the background model;
the marking region module is used for carrying out foreground segmentation on the scene image according to the background model and marking a connected region;
the maintenance state module is used for judging the current state of the detection target module, performing corresponding processing and performing abnormal detection when necessary;
the enhancement region module is used for removing false regions of shadow, highlight and leaf swing by using shadow detection, highlight detection and tree filtering; and
and the splitting and combining region module is used for combining and splitting the regions by using the constraint provided by the background model and the prior knowledge of the human and vehicle models so as to solve the problems of target over-segmentation and target mutual occlusion.
The matching target module comprises: the tracking matching stable target module is used for judging whether the detection area is matched with the tracking target or not; and a false object filtering module for filtering the false region.
The greatest advantage of the method is that it realizes accurate tracking of multiple targets against a complex background, solves problems such as occlusion and swaying leaves, is simple and convenient to operate, and is highly practical.
A further advantage of the invention is that it can accurately detect moving objects in the scene image, including people and vehicles, while ignoring interference factors such as image shake, swaying trees, brightness changes, shadow, rain and snow.
The invention can also be used in an intelligent video monitoring system to realize the functions of target classification identification, moving target warning, moving target tracking, PTZ tracking, automatic close-up shooting, target behavior detection, flow detection, congestion detection, carry-over detection, stolen object detection, smoke detection, flame detection and the like.
Drawings
FIG. 1 is a schematic structural diagram of a moving object tracking method according to the present invention;
FIG. 2 is a schematic diagram illustrating a process of detecting a target in the moving target tracking method according to the present invention;
FIG. 3 is a schematic flow chart of a labeling area in the moving object tracking method according to the present invention;
FIG. 4 is a schematic flow chart of a matching target in the moving target tracking method according to the present invention;
FIG. 5 is a schematic diagram of a moving object tracking system according to the present invention;
FIG. 6 is a schematic diagram of a target detection module in the moving target tracking system according to the present invention;
fig. 7 is a schematic structural diagram of a matching target module in the moving target tracking system of the present invention.
Detailed Description
Fig. 1 is a schematic structural diagram of a moving object tracking method in the present invention, and as shown in fig. 1, the moving object tracking method includes:
detecting a target 10, and segmenting a moving target area in a video scene from a background;
a predicted target 20 that estimates a next frame motion of the target;
matching the target 30, tracking the matched stable target, and filtering out false target;
the target 40 is updated, and the template of the stable target in the current frame is updated.
First, the step of detecting the target is carried out, i.e., the moving target area in the video scene is segmented from the background. Fig. 2 is a schematic diagram of detecting the target 10, which includes: acquiring the video 11: acquiring video content to obtain a scene image and establishing a background model; preprocessing the image 12: eliminating the influence of the scene image on the background model; marking regions 13: performing foreground segmentation on the scene image according to the background model and marking connected regions; maintaining state 14: judging the current state of the detection target module, performing corresponding processing, and performing abnormality detection if necessary; enhancing regions 15: removing false regions caused by shadow, highlight and swaying leaves using shadow detection, highlight detection and tree filtering; and splitting and merging regions 16: merging and splitting regions using constraints provided by the background model and prior knowledge of human and vehicle models, to solve target over-segmentation and mutual occlusion.
Acquiring the video 11 is first performed by a video acquisition device, which may be a visible-spectrum, near-infrared or infrared camera. Near-infrared and infrared cameras allow application in low light without additional lighting. The background model is initially created from the first frame of the scene image and is then updated in the maintenance state 14.
The preprocessed image 12 then undergoes filtering and global motion compensation. The filtering process performs conventional operations such as noise filtering and smoothing on the image to remove noise points. The filtering process can be implemented following, for example: "Image denoising hybrid filtering method [J]. Journal of Image and Graphics, 2005, 10(3)" and "Adaptive center-weighted improved mean filtering algorithm [J]. Journal of Tsinghua University (Natural Science Edition), 1999, 39(9)".
Global motion compensation compensates image-wide motion caused by slight camera sway. In global motion compensation, the motion model essentially reflects the various motions of the camera, including translation, rotation and zooming. The method used is motion compensation based on region-block matching: four region blocks are drawn in the image, each between 32 and 64 pixels in length and width, and each block is required to cover a relatively fixed background, such as a building or other stationary background.
The conventional computation proceeds as follows: assuming the rectangular region where the foreground is located has size m×n, the region brightness difference IDS over shifts of plus or minus 5 pixels around the region is calculated by the formula:
$$\mathrm{IDS} = \sum_{x=s_x}^{m} \sum_{y=s_y}^{n} \left( I_{(x,y)}(t) - I_{(x,y)}(t-1) \right)$$
where $s_x$ denotes the x coordinate of the region starting point, $s_y$ denotes the y coordinate of the region starting point, $I_{(x,y)}(t)$ denotes the gray level of the current frame image, and $I_{(x,y)}(t-1)$ denotes the gray level of the previous frame image.
The position of the shifted region yielding the minimum brightness difference is thus obtained, giving the position change Δx, Δy of that region. The position changes of the remaining region blocks are calculated in the same way, and finally the average Δx and Δy are obtained. The image is translated by Δx and Δy to obtain the compensated image.
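As an illustration of the region-block matching just described, the sketch below searches the ±5-pixel window around each background block for the shift that minimizes IDS and averages the per-block shifts. It assumes grayscale numpy frames and blocks inset at least 5 pixels from the border; the function names and the absolute-difference form of IDS are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def block_shift(prev, curr, x0, y0, w, h, search=5):
    """Return the (dx, dy) within +/-search pixels that minimizes the
    brightness difference IDS between a block of the previous frame and
    the shifted block of the current frame (blocks must be inset by at
    least `search` pixels from the image border)."""
    ref = prev[y0:y0 + h, x0:x0 + w].astype(np.int32)
    best_ids, best = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[y0 + dy:y0 + dy + h, x0 + dx:x0 + dx + w].astype(np.int32)
            ids = np.abs(cand - ref).sum()          # IDS for this shift
            if best_ids is None or ids < best_ids:
                best_ids, best = ids, (dx, dy)
    return best

def global_shift(prev, curr, blocks):
    """Average the shifts of several background blocks (the patent draws
    blocks of 32-64 pixels over relatively fixed background)."""
    shifts = [block_shift(prev, curr, *b) for b in blocks]
    dx = int(round(sum(s[0] for s in shifts) / len(shifts)))
    dy = int(round(sum(s[1] for s in shifts) / len(shifts)))
    return dx, dy
```

The compensated image is then obtained by translating the current frame by the averaged (Δx, Δy), e.g. with np.roll or an affine warp.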
Fig. 3 is a schematic flow chart of the marking region 13 of the present invention. The flow is specifically: foreground segmentation 131, morphological processing 132, and connected region labeling 133.
The foreground segmentation 131 segments the scene image based on the background model to obtain a binary image of the foreground. Specifically, the corresponding pixel values of the scene image and the background model are subtracted; if the result is greater than a set threshold, the pixel is marked "1" to represent a foreground point, otherwise it is marked "0" to represent a background point, thereby obtaining the binary foreground image.
The morphological processing 132 processes the binary image with mathematical morphology, i.e., erosion followed by dilation, to remove false regions of small area and fill regions of large area. A 3×3 template is selected for the erosion, and a 3×3 template for the dilation.
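A minimal sketch of this segmentation-plus-morphology chain with OpenCV, assuming grayscale numpy images of equal size; the difference threshold of 25 is an assumed value, since the patent speaks only of "a set threshold value".

```python
import cv2
import numpy as np

def segment_foreground(frame_gray, background, diff_thresh=25):
    """Background subtraction followed by erode-then-dilate cleanup."""
    diff = cv2.absdiff(frame_gray, background)
    # pixels whose difference from the background model exceeds the
    # threshold become foreground points ("1"), the rest background ("0")
    _, binary = cv2.threshold(diff, diff_thresh, 1, cv2.THRESH_BINARY)
    kernel = np.ones((3, 3), np.uint8)   # 3x3 templates, as in the text
    binary = cv2.erode(binary, kernel)   # remove small false regions
    binary = cv2.dilate(binary, kernel)  # fill larger regions back out
    return binary
```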
The connected region labeling 133 marks different regions in the same scene using a connected-domain method to distinguish different target regions, implemented with either a four-connected or an eight-connected domain. The labeling method is as follows: the image obtained by the morphological processing 132 is scanned line by line to find the first point of an unlabeled region, and that point is labeled; its eight-connected (or four-connected) neighbors are then examined, points that satisfy connectivity but are not yet labeled are labeled, and the newly labeled points are recorded as seed points for region growing. The process repeatedly takes a seed from the recorded seed-point array and applies the same operation until the array is empty, at which point one connected region has been labeled. The next unlabeled region is then processed until all connected regions of the image are labeled.
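The seed-fill labeling described above can be sketched as follows; names are illustrative, and using only the first four neighbour offsets gives the four-connected variant.

```python
import numpy as np

# 8-connected neighbour offsets; the first four give 4-connectivity
NEIGHBOURS = [(-1, 0), (1, 0), (0, -1), (0, 1),
              (-1, -1), (-1, 1), (1, -1), (1, 1)]

def label_regions(binary):
    """Scan line by line for an unlabeled foreground point, then grow the
    region from recorded seed points until the seed array is empty."""
    h, w = binary.shape
    labels = np.zeros((h, w), np.int32)
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] == 1 and labels[y, x] == 0:
                count += 1
                labels[y, x] = count
                seeds = [(y, x)]                 # seed points of the region
                while seeds:
                    sy, sx = seeds.pop()
                    for dy, dx in NEIGHBOURS:
                        ny, nx = sy + dy, sx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] == 1
                                and labels[ny, nx] == 0):
                            labels[ny, nx] = count
                            seeds.append((ny, nx))  # newly added seed point
    return labels, count
```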
Within the marking region 13, individual regions do not correspond one-to-one to individual targets. Due to occlusion, one region may contain multiple people or vehicles; because the foreground may resemble the background, one object may be over-segmented into multiple regions; due to illumination, a region may contain shadow or highlight areas; and uninteresting motion, such as swaying leaves or rippling water, may produce false foreground regions. These problems are inherent to the background-model approach and are solved in subsequent steps.
The maintenance state 14 in fig. 2 includes: status determination and anomaly detection.
The state judgment means that the current state of the detection target module is judged and corresponding processing is performed. The current state of the detection target module is mainly determined according to the scene stable time and the scene change time. When the scene stability time exceeds a threshold value 1, the system enters a working state from an initialization state; when the scene change time exceeds the threshold value 2, the system enters an initialization state from an operating state.
The threshold value 1 is preferably between 0.5 and 2 seconds. The threshold value 2 is preferably between 5 and 20 seconds.
When in the working state, the system continues with the next operation and keeps the background model unchanged. When in the initialization state, the background model is re-established and abnormality detection is performed if necessary. During re-establishment of the background model, region detection can be realized by the inter-frame difference method, which subtracts two frame images and takes the absolute value.
The abnormality detection is performed when necessary, covering cases where the video signal is seriously disturbed or the camera is deliberately blocked. The judgment is made from the edge matching value between two backgrounds and from the shortest time for successful background initialization: if the edge matching value between the current frame background and the background model is less than threshold 3, or the shortest time for successful background initialization exceeds threshold 4, an abnormality is declared.
The threshold value 3 is preferably between 30 and 50. The threshold 4 is preferably between 6 and 20 seconds.
The enhanced region 15 in fig. 2 removes the false regions caused by shadow, highlight and swaying leaves; it comprises shadow detection, highlight detection and tree filtering.
Shadow detection detects shadow areas in the foreground image, including the shadows of people and vehicles, and filters them out. It calculates the mean of the pixel values within each connected region, takes the mean as a threshold, and identifies the shadow part of the region, which is then filtered out. The shadow rule is: a pixel is judged to be shadow if its value is smaller than the threshold.
Highlight detection detects whether the image is in a highlight state (pixel values in the image being generally too high); if so, brightness compensation is performed. The compensation is achieved by brightness equalization so that the mean pixel value of the image becomes 128.
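Both tests can be sketched as below, assuming grayscale numpy arrays and a label image from the previous step; the highlight trigger of 200 is an assumed value, since the patent specifies only the compensated mean of 128.

```python
import numpy as np

def filter_shadow(frame_gray, labels, region_id, foreground):
    """Per-region shadow test: pixels darker than the region's mean
    value are treated as shadow and removed from the foreground mask."""
    mask = labels == region_id
    mean_val = frame_gray[mask].mean()        # mean as the shadow threshold
    foreground[mask & (frame_gray < mean_val)] = 0
    return foreground

def compensate_highlight(frame_gray, trigger=200):
    """If pixel values are generally too high, shift the image so its
    mean pixel value becomes 128."""
    if frame_gray.mean() > trigger:
        shifted = frame_gray.astype(np.int32) + (128 - int(frame_gray.mean()))
        return np.clip(shifted, 0, 255).astype(np.uint8)
    return frame_gray
```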
Tree filtering detects swaying leaves and their shadows in the image and filters them out of the foreground image.
Detection of swaying leaves is realized according to either of the following two criteria: (1) motion track tracking: when the proportion of track points at which the target's corresponding motion region is actually moving is smaller than threshold 5, the target is considered a swaying leaf; for example, if the target has 10 track points and the corresponding region is moving at only one of them, the target is regarded as a swaying leaf and filtered out. (2) Centroid motion amplitude: if the centroid motion of a target changes abruptly, the target is considered a swaying leaf; i.e., when the displacement of the target centroid between adjacent track points exceeds threshold 6 times the target width, the target is considered a swaying leaf and filtered out.
The threshold value 5 is preferably between 5% and 15%; the threshold 6 is preferably between 1.5 and 2.5.
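A sketch of the two leaf criteria applied to a stored track. The track layout, a (cx, cy, moving) tuple per trace point, and the default thresholds, taken from the middle of the preferred ranges, are illustrative assumptions.

```python
def is_swaying_leaf(track, target_width, thresh5=0.10, thresh6=2.0):
    """track: list of (cx, cy, moving) tuples, one per trace point."""
    if len(track) < 2:
        return False
    # (1) motion-track test: too few trace points are actually moving
    moving_ratio = sum(1 for _, _, m in track if m) / len(track)
    if moving_ratio < thresh5:
        return True
    # (2) centroid-amplitude test: centroid jump between adjacent trace
    # points exceeds thresh6 times the target width
    for (x0, y0, _), (x1, y1, _) in zip(track, track[1:]):
        if ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 > thresh6 * target_width:
            return True
    return False
```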
Detection of the swaying-leaf shadow is realized by measuring the density of points in a region: count the number of points in the region before and after a dilation operation (i.e., the number of points with pixel value 1 before and after the dilation), and calculate their ratio; if the ratio is less than threshold 7, the region is considered a swaying-leaf shadow region and filtered out.
The threshold value 7 is preferably between 40% and 60%.
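The point-density test can be sketched as follows, on a 0/1 uint8 mask of one connected region; the 3×3 template is an assumption carried over from the morphological processing step.

```python
import cv2
import numpy as np

def is_leaf_shadow(region_mask, thresh7=0.5):
    """Compare the number of foreground points before and after one
    dilation; a sparse region grows a lot, so a low ratio marks it
    as swaying-leaf shadow."""
    kernel = np.ones((3, 3), np.uint8)
    dilated = cv2.dilate(region_mask, kernel)
    before = int(region_mask.sum())   # points with value 1 before dilation
    after = int(dilated.sum())        # and after it
    return after > 0 and before / after < thresh7
```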
The splitting and merging region 16 in fig. 2 merges and splits regions using the constraints provided by the background model and prior knowledge such as human and vehicle models, to solve target over-segmentation and mutual occlusion. The method judges, on the basis of the enhanced region 15 processing, whether two adjacent regions are the same target region or different target regions; if they belong to the same target region they are merged, otherwise they are split. Two adjacent regions are regions whose edge distance is smaller than threshold 8; regions of the same target carry consistent index marks, and regions of different targets carry inconsistent index marks.
The threshold value 8 is preferably between 3 and 7 pixels.
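A sketch of the edge-distance adjacency test on bounding boxes (x, y, w, h); the box representation and the default threshold, the midpoint of the preferred range, are illustrative.

```python
def regions_adjacent(b1, b2, thresh8=5):
    """Two regions are adjacent when the gap between their bounding-box
    edges is smaller than threshold 8 (in pixels); overlapping boxes
    have gap 0 and always count as adjacent."""
    gap_x = max(b1[0], b2[0]) - min(b1[0] + b1[2], b2[0] + b2[2])
    gap_y = max(b1[1], b2[1]) - min(b1[1] + b1[3], b2[1] + b2[3])
    return max(gap_x, gap_y, 0) < thresh8
```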
The second step of the present invention is to predict the target 20, calculate the average velocity of the target motion based on the accumulated displacement of the target motion and its corresponding accumulated time, and predict the next displacement of the target based on the velocity. Wherein the accumulated displacement is an accumulated sum of displacements of the target motion, and the accumulated time is an accumulated sum of times of the target motion. The relationship among the accumulated displacement, the accumulated time and the average movement speed is as follows:
v=s/t
wherein s is the displacement of the target mass center after the target mass center stably moves for multiple frames, t is the time required by the target to move for multiple frames, and v is the average speed of the target stably moving. The average speed can be calculated by the formula.
The next displacement predicted from the average velocity v is:
s′=v·Δt
where Δt is the prediction time and s′ is the displacement of the target centroid after moving stably for time Δt. The next displacement can thus be predicted from the formula.
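A minimal sketch of the predictor, wrapping the accumulation and the two formulas in a small class; the interface is illustrative.

```python
class TargetPredictor:
    """Accumulates displacement s and time t of a stably tracked target
    and predicts the next displacement from the average speed v = s / t."""

    def __init__(self):
        self.s = 0.0   # accumulated centroid displacement over stable frames
        self.t = 0.0   # accumulated time over those frames

    def update(self, displacement, elapsed):
        self.s += displacement
        self.t += elapsed

    def predict(self, dt):
        v = self.s / self.t if self.t > 0 else 0.0  # average speed
        return v * dt                               # s' = v * dt
```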
The third step of the present invention is the matching target 30, which tracks the matched stable target and filters out false targets. Fig. 4 is a schematic flow chart of the matching target 30, which includes: tracking the matched stable target 301 and filtering out false targets 302.
Tracking the matched stable target 301 judges whether the detection area matches the tracked target. The matching condition is based on the matching coefficient D between the detection area and the target, computed as:
$$D = D_a A_{Da} + D_b A_{Db} + D_c A_{Dc}$$
where $D_a$ is the area matching coefficient, $D_b$ is the histogram matching coefficient, and $D_c$ is the distance matching coefficient; $A_{Da}$, $A_{Db}$ and $A_{Dc}$ are the weight coefficients corresponding to $D_a$, $D_b$ and $D_c$ respectively. When the matching coefficient D between the detection area and the target is greater than threshold 9, the detection area is judged to match the target. Threshold 9 is preferably between 0.7 and 0.8.
The values of $A_{Da}$, $A_{Db}$ and $A_{Dc}$ all lie between 0 and 1, and their sum is 1. Their preferred values are 0.2, 0.3 and 0.5, respectively.
1) Area matching coefficient $D_a$: when the area of the intersection of the detection area and the target is larger than threshold 10 times the area of the target, the detection area is considered to satisfy the area match and $D_a$ is taken as 1; otherwise $D_a$ is taken as 0.
The threshold value 10 is preferably between 40% and 60%.
2) Histogram matching coefficient $D_b$: when the histogram of the intersection of the detection area and the target is larger than threshold 11 times the histogram of the target, the detection area is considered to satisfy the histogram match and $D_b$ is taken as 1; otherwise $D_b$ is taken as 0.
The threshold 11 is preferably between 40% and 60%.
3) Distance matching coefficient $D_c$, evaluated according to whether the detection area is moving or stationary. If the number of foreground points in the difference image between the detection area in the current frame and in the previous frame is greater than threshold 12 times the number of background points, the detection area is considered moving; otherwise it is considered stationary. When the detection area is moving, the distance between the predicted center of the target in the current frame image and the center of the detection area in the current frame image is calculated; if the distance is smaller than threshold 13 times the diagonal length of the rectangular frame where the target is located, the distance match is satisfied and $D_c$ is taken as 1; otherwise $D_c$ is taken as 0. When the detection area is stationary, the distance between the center of the detection area in the previous frame image and its center in the current frame image is calculated; if the distance is smaller than threshold 14, the distance match is satisfied and $D_c$ is taken as 1; otherwise $D_c$ is taken as 0.
The threshold 12 is preferably between 65% and 75%. The threshold value 13 is preferably between 1.5 and 2. The threshold value 14 is preferably between 8 and 12 pixels.
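Putting the three coefficients together, a sketch of the matching score under the preferred weights. The dict layout of the detection and target, the histogram-intersection reading of the histogram match, and the use of the predicted target center in the moving case are illustrative assumptions.

```python
import numpy as np

def box_area(b):
    return b[2] * b[3]

def box_diagonal(b):
    return (b[2] ** 2 + b[3] ** 2) ** 0.5

def intersection_area(b1, b2):
    x0 = max(b1[0], b2[0]); y0 = max(b1[1], b2[1])
    x1 = min(b1[0] + b1[2], b2[0] + b2[2]); y1 = min(b1[1] + b1[3], b2[1] + b2[3])
    return max(0, x1 - x0) * max(0, y1 - y0)

def match_coefficient(det, tgt, a_da=0.2, a_db=0.3, a_dc=0.5,
                      t10=0.5, t11=0.5, t13=1.75, t14=10.0):
    """det/tgt are dicts with 'box' (x, y, w, h), 'hist', 'center';
    det additionally carries 'moving' and tgt 'predicted_center'."""
    da = 1.0 if intersection_area(det['box'], tgt['box']) > t10 * box_area(tgt['box']) else 0.0
    # histogram match read as histogram intersection vs. target histogram
    db = 1.0 if np.minimum(det['hist'], tgt['hist']).sum() > t11 * tgt['hist'].sum() else 0.0
    if det['moving']:
        ref, limit = tgt['predicted_center'], t13 * box_diagonal(tgt['box'])
    else:
        ref, limit = tgt['center'], t14          # t14 in pixels, static case
    dc = 1.0 if np.hypot(det['center'][0] - ref[0], det['center'][1] - ref[1]) < limit else 0.0
    return da * a_da + db * a_db + dc * a_dc     # match when D > threshold 9
```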
Filtering out false targets 302 filters false target areas through trajectory analysis of the target motion. The trajectory analysis uses target track information (including area information and centroid information) to measure the smoothness of area change and the stationarity of centroid change.
The smoothness of area change is counted as follows: collect the set of areas at the target track points, $\{area_1, area_2, \ldots, area_n\}$, where n denotes the number of track points, and compute the area mean:
$$\overline{area} = \frac{1}{n}\sum_{i=1}^{n} area_i$$
and the area variance: $$area_{sd} = \frac{1}{n}\sum_{i=1}^{n}\left(area_i - \overline{area}\right)^2$$
when areasdWhen area > 0.5, the area change is considered to be not smooth, and the target region is filtered out.
The stationarity of centroid change is counted as follows: based on the fact that a normal target does not change its motion direction abruptly and frequently, the ratio of direction changes between adjacent track points is counted; if the ratio exceeds threshold 15, the centroid change is considered unstable and the target region is filtered out.
The threshold 15 is preferably between 40% and 60%.
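A sketch of the two trajectory tests. Reading a "direction change" as a sign reversal of the dot product between successive displacement vectors is an assumption, since the patent does not define the change measure.

```python
import numpy as np

def is_false_target(areas, centroids, thresh15=0.5):
    """areas: region area per trace point; centroids: (x, y) per point."""
    areas = np.asarray(areas, dtype=np.float64)
    # area smoothness: variance (ddof=0, as in the formula) vs. 0.5 * mean
    if len(areas) > 0 and areas.var() > 0.5 * areas.mean():
        return True
    # centroid stationarity: ratio of direction changes between adjacent
    # trace points must stay below threshold 15
    vecs = np.diff(np.asarray(centroids, dtype=np.float64), axis=0)
    if len(vecs) > 1:
        changes = sum(1 for a, b in zip(vecs, vecs[1:]) if np.dot(a, b) < 0)
        if changes / (len(vecs) - 1) > thresh15:
            return True
    return False
```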
The last step is updating the target 40: the template of the tracked target is updated in real time according to the stable target obtained after the matching target 30.
The invention also provides a moving target tracking system, and fig. 5 is a schematic structural diagram of the moving target tracking system of the invention, as shown in fig. 5. The moving object tracking system includes a detected object module 71, a predicted object module 72, a matched object module 73, and an updated object module 74. The detection target module 71 is configured to segment a moving target region in a video scene image from a background, the prediction target module 72 is configured to estimate a position of the moving target in a next frame of the scene image, the matching target module 73 is configured to track a matched stable target and filter a false target, and the update target module 74 is configured to update a template of the stable target in a current frame.
Fig. 6 is a schematic structural diagram of a target detection module in the moving target tracking system of the present invention. As shown in FIG. 6, the detection object module 71 includes an acquire video module 711, a pre-process image module 712, a mark region module 713, a maintenance status module 714, an enhanced region module 715, and a split and merge region module 716. The acquiring video module 711 is configured to acquire video content to obtain a scene image and establish a background model; a pre-processing image module 712, configured to eliminate an influence of the scene image on the background model; a marking region module 713, configured to perform foreground segmentation on the scene image according to the background model and mark a connected region; a maintenance state module 714, configured to determine a current state of the detection target module, perform corresponding processing, and perform anomaly detection if necessary; an enhanced region module 715, configured to remove false regions of shadows, highlights, and leaf flapping using shadow detection, highlight detection, and tree filtering; a split and merge region module 716, configured to merge and split the regions using the constraints provided by the background model and the a priori knowledge of the human and vehicle models to solve the problems of object over-segmentation and mutual occlusion.
Fig. 7 is a schematic structural diagram of a matching target module in the moving target tracking system of the present invention. As shown in fig. 7, the match target module 73 includes a stable target module 731 that tracks matches and a false target module 732 that filters out false targets. The track-matching stable target module 731 is configured to determine whether the detected region matches the tracked target, and the filter false target module 732 is configured to filter the false region.
The greatest advantage of the method is that it realizes accurate tracking of multiple targets against a complex background, solves problems such as occlusion and swaying leaves, is simple and convenient to operate, and is highly practical.
A further advantage of the invention is that it can accurately detect moving objects in the scene image, including people and vehicles, while ignoring interference factors such as image shake, swaying trees, brightness changes, shadow, rain and snow.
The invention can also be used in an intelligent video monitoring system to realize the functions of target classification identification, moving target warning, moving target tracking, PTZ tracking, automatic close-up shooting, target behavior detection, flow detection, congestion detection, carry-over detection, stolen object detection, smoke detection, flame detection and the like.
While the foregoing describes the preferred embodiments of the present invention, it is not intended to limit the scope of the invention; the embodiments are described to assist those skilled in the art in practicing the invention. Further modifications and improvements may readily occur to those skilled in the art without departing from the spirit and scope of the invention, and the scope of the invention is therefore defined by the appended claims, including all alternatives and equivalents falling within their spirit and scope.

Claims (2)

1. A moving object tracking method is characterized by comprising the following steps:
(1) detecting a target, and segmenting a moving target area in a video scene from a background;
(2) predicting a target, and estimating the next frame motion of the target;
(3) matching targets, tracking the matched stable targets, and filtering false targets; and
(4) updating the target, and updating the template of the stable target in the current frame;
wherein, the detection of the target comprises the following steps:
acquiring a video, acquiring video content to obtain a scene image, and establishing a background model;
preprocessing the image, and eliminating the influence of the scene image on the background model; the pre-processing the image includes: filtering processing and global motion compensation; wherein the filtering process includes: carrying out noise filtering processing and image smoothing processing on the image; the global motion compensation is used for compensating the image global motion caused by slight swing of a camera, and in the global motion compensation, a motion model comprises translation, rotation and zooming;
marking a region, performing foreground segmentation on the scene image according to the background model, and marking a connected region; the marking area comprises the following steps: foreground segmentation, namely segmenting a scene image based on a background model to obtain a binary image of a foreground; morphological processing, namely processing the binary image by using a mathematical morphology method to remove false regions with small areas and fill regions with large areas; the connected region mark is used for marking different regions in the same scene by using a connected domain method so as to distinguish different target regions;
maintenance status, including status determination and anomaly detection; the state judgment is to judge the current state of a module executing the detection target and carry out corresponding processing; when the scene stabilization time exceeds a first threshold value, the system enters a working state from an initialization state; when the scene change time exceeds a second threshold value, the system enters an initialization state from a working state; the abnormal detection is executed under the conditions that the video signal interference is serious and the camera is artificially shielded; judging according to the edge matching values of the background twice and the shortest time of successful background initialization, and if the value of the background of the current frame matched with the edge of the background model is smaller than a third threshold or the shortest time of successful background initialization exceeds a fourth threshold, determining that the phenomenon is abnormal;
enhancing the area, and removing false areas of shadow, highlight and leaf swing by using shadow detection, highlight detection and tree filtering; the enhancement region includes: shadow detection, namely respectively calculating the mean value of pixel values in each connected region, taking the mean value as a threshold value, judging the shadow region of the connected region, filtering the shadow region, and judging the shadow if the pixel value is smaller than the threshold value; detecting the highlight, namely detecting whether the image is in a highlight state, if so, performing brightness compensation, wherein the average value of pixel values of the image is 128 through the brightness compensation; tree filtering, namely detecting the leaves of the swinging tree and the shadows of the leaves of the swinging tree in the image and filtering the leaves of the swinging tree from the foreground image; the detection of the swing leaves is determined and realized according to one of the following two characteristics: (1) tracking a motion track, and when the part of the area of the motion area corresponding to the target in the motion track point is smaller than a fifth threshold value of the area of the motion area, considering that the target is a swing leaf; (2) the amplitude of the mass center motion is determined, and when the displacement change of the mass center of the target in the adjacent track points exceeds a sixth threshold value of the width of the target, the target is a swinging leaf; the method for detecting the shadow of the swinging leaves comprises the following steps: respectively counting the number of points with pixel values of 1 before and after the expansion operation in the connected region before and after the expansion operation, and calculating the ratio of the points, wherein if the ratio is less than a seventh threshold value, the connected region is considered to be a region with a shadow of the swinging leaf; and
splitting and merging the regions, merging and splitting the regions by using the constraint provided by the background model and the prior knowledge of the human and vehicle models so as to solve the problems of over-segmentation of the target and mutual shielding of the target; the splitting and merging area is based on the processing process of the enhancement area, and whether two adjacent areas are the same target area is judged; if the two regions belong to the same target region, merging the two regions; otherwise, splitting the same; the two adjacent areas are areas with the edge distance smaller than an eighth threshold value;
the target is predicted by calculating the average speed of the target motion according to the accumulated displacement of the target motion and the corresponding accumulated time thereof and predicting the next displacement of the target according to the speed; wherein, the relationship among the accumulated displacement, the accumulated time and the average movement speed is as follows:
v=s/t
wherein s is the displacement of a target mass center after stably moving for multiple frames, t is the time required by the target moving for multiple frames, and v is the average speed of the target stably moving;
the next displacement predicted from the average velocity v is:
s′=v·Δt
wherein, Δ t is predicted target time, s' is displacement of the target mass center after stable movement for Δ t time;
the matching target includes: tracking the matched stable target and filtering out false target; wherein, the stable target of the tracking matching is to determine whether the detection area is matched with the tracking target, and the matching is determined according to a matching coefficient D of the detection area and the target in the following formula:
$$D = D_a A_{Da} + D_b A_{Db} + D_c A_{Dc}$$
wherein $D_a$ is an area matching coefficient, $D_b$ is a histogram matching coefficient, and $D_c$ is a distance matching coefficient; $A_{Da}$, $A_{Db}$ and $A_{Dc}$ are the weight coefficients corresponding to $D_a$, $D_b$ and $D_c$ respectively, and when the matching coefficient D of the detection area and the target is greater than a ninth threshold, the detection area is judged to match the target; the area matching coefficient $D_a$ is such that when the area of the intersection of the detection area and the target is larger than the tenth threshold of the area of the target, the detection area is considered to satisfy the area match and $D_a$ is 1; otherwise $D_a$ is 0; the histogram matching coefficient $D_b$ is such that when the histogram of the intersection of the detection area and the target is larger than the eleventh threshold of the histogram of the target, the detection area is considered to satisfy the histogram match and $D_b$ is 1; otherwise $D_b$ is 0; the distance matching coefficient $D_c$ is considered according to the two cases of the detection region moving or stationary: if the number of foreground points in the difference image of the detection area between the current frame image and the previous frame image is greater than the twelfth threshold of the number of background points, the detection area is considered moving, otherwise stationary; when the detection area is moving, the distance between the predicted center of the target in the current frame image and the center of the detection area in the current frame image is calculated, and if the distance is smaller than the thirteenth threshold of the diagonal length of the rectangular frame where the target is located, the distance match is considered satisfied and $D_c$ is 1; otherwise $D_c$ is 0; when the detection area is stationary, the distance between the center of the detection area in the previous frame image and the center of the detection area in the current frame image is calculated, and if the distance is smaller than a fourteenth threshold, the distance match is considered satisfied and $D_c$ is 1; otherwise $D_c$ is 0; false targets are filtered out by analyzing the motion tracks of the targets to filter false target areas; the track analysis uses target track information to count the smoothness of the area change and the stationarity of the centroid change; the smoothness of the area change is counted over the set of areas at the target track points, $\{area_1, area_2, \ldots, area_n\}$, where n denotes the number of track points, and the area mean is counted:
area_mean = (1/n) * Σ area_i (i = 1, ..., n)
and computing the area variance:
area_var = (1/n) * Σ (area_i − area_mean)² (i = 1, ..., n)
when the area variance area_var exceeds a given threshold value, the area change is considered not smooth, and the target area is filtered out;
evaluating the stationarity of the centroid change: since a normal target does not undergo frequent abrupt changes in its direction of motion, the ratio of direction changes among adjacent track points is counted; if this ratio exceeds a fifteenth threshold value, the centroid change is considered unstable, and the target area is filtered out.
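To make the matching and track-filtering clauses above concrete, here is a minimal Python sketch; the weight values, both thresholds, and the negative-dot-product test used to detect a direction change are illustrative assumptions, since the claims leave the ninth through fifteenth threshold values unspecified.

    import numpy as np

    def match_score(area_ok, hist_ok, dist_ok,
                    w_area=0.4, w_hist=0.3, w_dist=0.3):
        # D = Da*ADa + Db*ADb + Dc*ADc with binary Da, Db, Dc; the weight
        # values here are assumed, not taken from the patent.
        return (float(area_ok) * w_area
                + float(hist_ok) * w_hist
                + float(dist_ok) * w_dist)

    def is_false_target(areas, centroids, var_thresh, turn_thresh):
        # Track-based false-target test over n track points.
        areas = np.asarray(areas, dtype=float)
        area_mean = areas.mean()                      # area mean over the track
        area_var = ((areas - area_mean) ** 2).mean()  # area variance
        if area_var > var_thresh:                     # area change not smooth
            return True
        # Stationarity of the centroid change: count abrupt direction changes
        # between consecutive displacement vectors (negative dot product).
        deltas = np.diff(np.asarray(centroids, dtype=float), axis=0)
        turns = sum(float(np.dot(d0, d1)) < 0.0
                    for d0, d1 in zip(deltas[:-1], deltas[1:]))
        ratio = turns / max(len(deltas) - 1, 1)
        return ratio > turn_thresh                    # centroid varies unstably

A detection area would then be accepted when match_score(...) exceeds the ninth threshold, and a tracked target would be dropped when is_false_target(...) returns True.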
2. A moving object tracking system, characterized in that the moving object tracking system comprises:
the detection target module is used for segmenting a moving target area in the video scene image from a background;
the predicted target module is used for estimating the position of the moving target in the next scene image;
the matching target module is used for tracking the matched stable target and filtering out a false target; and
the updating target module is used for updating a template of a stable target in the current frame;
wherein the detection target module comprises:
the video acquisition module is used for acquiring video content to obtain a scene image and establishing a background model;
the image preprocessing module is used for eliminating disturbances in the scene image that would corrupt the background model;
the marking region module is used for carrying out foreground segmentation on the scene image according to the background model and marking a connected region;
the maintenance state module is used for judging the current state of the detection target module, performing the corresponding processing, and performing anomaly detection when the video signal is severely disturbed or the camera is deliberately blocked;
the enhancement region module is used for removing false regions caused by shadow, highlight and swinging leaves by using shadow detection, highlight detection and tree filtering (a sketch of these detection steps is given after this claim); the shadow detection comprises: for each connected region, calculating the mean of the pixel values in that region, taking the mean as a threshold value, judging the shadow area of the connected region, and filtering out the shadow area, a pixel being judged as shadow if its value is smaller than the threshold value; the highlight detection comprises detecting whether the image is in a highlight state and, if so, performing brightness compensation, the brightness compensation adjusting the mean pixel value of the image to 128; the tree filtering comprises detecting swinging leaves and their shadows in the image and filtering them out of the foreground image; a swinging leaf is identified by either of the following two characteristics: (1) the motion track: when the portion of the moving area corresponding to the target in the motion track points is smaller than a fifth threshold value of the area of the moving area, the target is considered a swinging leaf; (2) the amplitude of the centroid motion: when the displacement change of the target centroid between adjacent track points exceeds a sixth threshold value of the width of the target, the target is considered a swinging leaf; the shadows of swinging leaves are detected as follows: for a connected region, count the number of points with pixel value 1 before and after a dilation operation, and compute their ratio; if the ratio is smaller than a seventh threshold value, the connected region is considered a shadow region of swinging leaves; and
the splitting and combining region module is used for combining and splitting regions by using the constraints provided by the background model and prior knowledge of the human and vehicle models, so as to solve the problems of target over-segmentation and mutual occlusion between targets;
the matching target module comprises:
the tracking matching stable target module is used for judging whether the detection area is matched with the tracking target or not; and
the false target filtering module is used for filtering out false regions.
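Following the enhancement-region clause of claim 2, a minimal Python/OpenCV sketch of the three detection steps is given below; the highlight criterion, the 3×3 dilation kernel, and both default thresholds are assumptions, as the claims only name the fifth through seventh threshold values without giving them.

    import numpy as np
    import cv2

    def shadow_mask(gray, region_mask):
        # Per-region shadow test: the mean pixel value of the connected region
        # serves as the threshold; pixels below the mean are judged shadow.
        mean_val = gray[region_mask].mean()
        return region_mask & (gray < mean_val)

    def compensate_highlight(gray, highlight_mean=180):
        # Assumed highlight criterion: if the image mean is very high, shift
        # the pixel values so that the image mean becomes 128.
        m = gray.mean()
        if m > highlight_mean:
            shifted = gray.astype(np.int16) + int(round(128 - m))
            gray = np.clip(shifted, 0, 255).astype(np.uint8)
        return gray

    def is_leaf_shadow(region_mask, ratio_thresh=0.6):
        # Dilation-ratio test for swinging-leaf shadows: count foreground
        # points before and after a morphological dilation and compare their
        # ratio; thin, scattered leaf shadows grow strongly under dilation.
        kernel = np.ones((3, 3), np.uint8)
        dilated = cv2.dilate(region_mask.astype(np.uint8), kernel)
        ratio = float(region_mask.sum()) / max(int(dilated.sum()), 1)
        return ratio < ratio_thresh   # analogue of the seventh threshold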
CN2009100774355A 2009-02-11 2009-02-11 Moving object tracking method and system thereof Active CN101739686B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100774355A CN101739686B (en) 2009-02-11 2009-02-11 Moving object tracking method and system thereof

Publications (2)

Publication Number Publication Date
CN101739686A CN101739686A (en) 2010-06-16
CN101739686B true CN101739686B (en) 2012-05-30

Family

ID=42463137

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100774355A Active CN101739686B (en) 2009-02-11 2009-02-11 Moving object tracking method and system thereof

Country Status (1)

Country Link
CN (1) CN101739686B (en)

Families Citing this family (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101950424B (en) * 2010-09-09 2012-06-20 西安电子科技大学 Feature associated cell tracking method based on centroid tracking frame
CN101996317B (en) * 2010-11-01 2012-11-21 中国科学院深圳先进技术研究院 Method and device for identifying markers in human body
US9171075B2 (en) * 2010-12-30 2015-10-27 Pelco, Inc. Searching recorded video
US9615064B2 (en) 2010-12-30 2017-04-04 Pelco, Inc. Tracking moving objects using a camera network
CN102831378B (en) * 2011-06-14 2015-10-21 株式会社理光 The detection and tracking method and system of people
CN102270347B (en) * 2011-08-05 2013-02-27 上海交通大学 Target detection method based on linear regression model
JP5830373B2 (en) * 2011-12-22 2015-12-09 オリンパス株式会社 Imaging device
CN102760230B (en) * 2012-06-19 2014-07-23 华中科技大学 Flame detection method based on multi-dimensional time domain characteristics
CN102779348B (en) * 2012-06-20 2015-01-07 中国农业大学 Method for tracking and measuring moving targets without marks
CN103516956B (en) * 2012-06-26 2016-12-21 郑州大学 Pan/Tilt/Zoom camera monitoring intrusion detection method
CN102982559B (en) * 2012-11-28 2015-04-29 大唐移动通信设备有限公司 Vehicle tracking method and system
CN103226697A (en) * 2013-04-07 2013-07-31 布法罗机器人科技(苏州)有限公司 Quick vehicle tracking method and device
CN104083146B (en) * 2013-06-25 2016-03-16 北京大学 A kind of biological neural loop living imaging system
KR102161212B1 (en) * 2013-11-25 2020-09-29 한화테크윈 주식회사 System and method for motion detecting
KR102247596B1 (en) 2014-01-24 2021-05-03 한화파워시스템 주식회사 Compressor system and method for controlling thereof
CN103941752B (en) * 2014-03-27 2016-10-12 北京大学 A kind of nematicide real-time automatic tracing imaging system
CN103971381A (en) * 2014-05-16 2014-08-06 江苏新瑞峰信息科技有限公司 Multi-target tracking system and method
WO2015186588A1 (en) * 2014-06-03 2015-12-10 住友重機械工業株式会社 Human detection system for construction machine
CN104754296A (en) * 2014-07-21 2015-07-01 广西电网公司钦州供电局 Time sequence tracking-based target judging and filtering method applied to transformer substation operation security control
CN105447431B (en) * 2014-08-01 2018-11-27 深圳中集天达空港设备有限公司 A kind of docking aircraft method for tracking and positioning and system based on machine vision
CN104778474B (en) * 2015-03-23 2019-06-07 四川九洲电器集团有限责任公司 A kind of classifier construction method and object detection method for target detection
KR102457617B1 (en) * 2015-09-16 2022-10-21 한화테크윈 주식회사 Method and apparatus of estimating a motion of an image, method and apparatus of image stabilization and computer-readable recording medium for executing the method
CN105761245B (en) * 2016-01-29 2018-03-06 速感科技(北京)有限公司 A kind of automatic tracking method and device of view-based access control model characteristic point
CN106096496A (en) * 2016-05-28 2016-11-09 张维秀 A kind of fire monitoring method and system
CN106204653B (en) * 2016-07-13 2019-04-30 浙江宇视科技有限公司 A kind of monitoring tracking and device
CN106251388A (en) * 2016-08-01 2016-12-21 乐视控股(北京)有限公司 Photo processing method and device
CN106447685B (en) * 2016-09-06 2019-04-02 电子科技大学 A kind of infrared track method
CN106530325A (en) * 2016-10-26 2017-03-22 合肥润客软件科技有限公司 Multi-target visual detection and tracking method
CN106910203B (en) * 2016-11-28 2018-02-13 江苏东大金智信息系统有限公司 The quick determination method of moving target in a kind of video surveillance
CN107202980B (en) * 2017-06-13 2019-12-10 西安电子科技大学 multi-frame combined sea surface small target detection method based on direction ratio
CN108010055B (en) * 2017-11-23 2022-07-12 塔普翊海(上海)智能科技有限公司 Tracking system and tracking method for three-dimensional object
CN108154119B (en) * 2017-12-25 2021-09-28 成都全景智能科技有限公司 Automatic driving processing method and device based on self-adaptive tracking frame segmentation
CN108280408B (en) * 2018-01-08 2021-11-02 北京联合大学 Crowd abnormal event detection method based on hybrid tracking and generalized linear model
CN108596946A (en) * 2018-03-21 2018-09-28 中国航空工业集团公司洛阳电光设备研究所 A kind of moving target real-time detection method and system
CN108960253A (en) * 2018-06-27 2018-12-07 魏巧萍 A kind of object detection system
CN109389031B (en) * 2018-08-27 2021-12-03 浙江大丰实业股份有限公司 Automatic positioning mechanism for performance personnel
CN110909579A (en) * 2018-09-18 2020-03-24 杭州海康威视数字技术股份有限公司 Video image processing method and device, electronic equipment and storage medium
CN110929597A (en) * 2019-11-06 2020-03-27 普联技术有限公司 Image-based leaf filtering method and device and storage medium
CN111654700B (en) * 2020-06-19 2022-12-06 杭州海康威视数字技术股份有限公司 Privacy mask processing method and device, electronic equipment and monitoring system
CN111767875B (en) * 2020-07-06 2024-05-10 中兴飞流信息科技有限公司 Tunnel smoke detection method based on instance segmentation
CN112270657A (en) * 2020-11-04 2021-01-26 成都寰蓉光电科技有限公司 Sky background-based target detection and tracking algorithm
CN112967316B (en) * 2021-03-05 2022-09-06 中国科学技术大学 Motion compensation optimization method and system for 3D multi-target tracking
CN114155255B (en) * 2021-12-14 2023-07-28 成都索贝数码科技股份有限公司 Video horizontal screen-to-vertical screen method based on space-time track of specific person

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101029824A (en) * 2006-02-28 2007-09-05 沈阳东软软件股份有限公司 Method and apparatus for positioning vehicle based on characteristics
CN101236606A (en) * 2008-03-07 2008-08-06 北京中星微电子有限公司 Shadow cancelling method and system in vision frequency monitoring

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Mingxiu. Research on Key Technologies in Real-Time Visual Analysis of Moving Targets. China Master's Theses Full-text Database, 2008, No. 5, pp. 12-13, 19-22, 24, 27, 40-42. *
Zeng Ruili et al. Multi-Target Tracking Algorithm for Intelligent Traffic Surveillance Systems. Chinese Journal of Electron Devices, 2007, Vol. 30, No. 6, pp. 2160-2161. *

Also Published As

Publication number Publication date
CN101739686A (en) 2010-06-16

Similar Documents

Publication Publication Date Title
CN101739686B (en) Moving object tracking method and system thereof
CN101739550B (en) Method and system for detecting moving objects
CN101739551B (en) Method and system for identifying moving objects
CN110517288B (en) Real-time target detection tracking method based on panoramic multi-path 4k video images
CN112036254B (en) Moving vehicle foreground detection method based on video image
CN101872546B (en) Video-based method for rapidly detecting transit vehicles
CN104751491B (en) A kind of crowd's tracking and people flow rate statistical method and device
CN100545867C (en) Aerial shooting traffic video frequency vehicle rapid checking method
CN106600625A (en) Image processing method and device for detecting small-sized living thing
CN108596129A (en) A kind of vehicle based on intelligent video analysis technology gets over line detecting method
CN102222214A (en) Fast object recognition algorithm
CN105046719B (en) A kind of video frequency monitoring method and system
CN110619651B (en) Driving road segmentation method based on monitoring video
CN109711256B (en) Low-altitude complex background unmanned aerial vehicle target detection method
WO2001084844A1 (en) System for tracking and monitoring multiple moving objects
CN108765453B (en) Expressway agglomerate fog identification method based on video stream data
CN111860120A (en) Automatic shielding detection method and device for vehicle-mounted camera
CN105678213A (en) Dual-mode masked man event automatic detection method based on video characteristic statistics
CN107122732B (en) High-robustness rapid license plate positioning method in monitoring scene
Furuya et al. Road intersection monitoring from video with large perspective deformation
CN111161308A (en) Dual-band fusion target extraction method based on key point matching
CN111815556A (en) Vehicle-mounted fisheye camera self-diagnosis method based on texture extraction and wavelet transformation
FAN et al. Robust lane detection and tracking based on machine vision
CN109102520A (en) The moving target detecting method combined based on fuzzy means clustering with Kalman filter tracking
CN112215109B (en) Vehicle detection method and system based on scene analysis

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: NETPOSA TECHNOLOGIES, LTD.

Free format text: FORMER OWNER: BEIJING ZANB SCIENCE + TECHNOLOGY CO., LTD.

Effective date: 20150716

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20150716

Address after: 26th Floor, Block C, Tower 2, Wangjing SOHO, No. 1 Tong Tung Street, Chaoyang District, Beijing 100102

Patentee after: NETPOSA TECHNOLOGIES, Ltd.

Address before: 5th Floor, Building 4, International Subject, No. 9 Road, Haidian District, Beijing 100048

Patentee before: Beijing ZANB Technology Co.,Ltd.

PP01 Preservation of patent right
PP01 Preservation of patent right

Effective date of registration: 20220726

Granted publication date: 20120530