CN103577833A - Abnormal intrusion detection method based on motion template - Google Patents

Abnormal intrusion detection method based on motion template

Info

Publication number
CN103577833A
Authority
CN
China
Prior art keywords
motion
image
target
profile
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201210271736.3A
Other languages
Chinese (zh)
Inventor
董文彧
蒋龙泉
郭跃飞
冯瑞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN201210271736.3A priority Critical patent/CN103577833A/en
Publication of CN103577833A publication Critical patent/CN103577833A/en
Pending legal-status Critical Current

Abstract

The invention belongs to the technical fields of digital image processing and pattern recognition, and specifically relates to an abnormal intrusion detection method based on a motion template. The method comprises: selecting, on the surveillance video, the region of interest and the size parameters of the abnormal object; binarizing the difference image and updating the motion history image, with timed-out contours removed; then computing the gradient direction of the motion history image; segmenting the overall motion into independent motion parts; and finally computing the global motion direction of the selected region, thereby obtaining the motion direction of the moving target. The method exploits the overlap of the moving target between two adjacent frames, so the trajectory, speed and direction of the target can be displayed clearly without extrapolation, correlation analysis or track post-processing. Compared with existing tracking methods, the method offers better real-time performance and robustness, and copes well with disturbances such as illumination changes and camera shake that otherwise degrade abnormality detection and tracking.

Description

Abnormal intrusion detection method based on motion template
Technical field
The invention belongs to the technical fields of digital image processing and pattern recognition, and specifically relates to a method for abnormal intrusion detection in intelligent video surveillance.
Technical background
Moving object detection and tracking, as an interdisciplinary frontier technology, draws on the theory of several fields, including image processing, pattern recognition, artificial intelligence and automatic control. It has broad application prospects in military guidance, visual navigation, security surveillance, intelligent transportation, video coding, medical diagnosis, meteorological analysis, astronomical observation and other fields, so research on tracking algorithms has important practical significance and theoretical value. Moving target tracking means finding, in real time, the moving target of interest in every image of a sequence, including kinematic parameters such as position, velocity and acceleration. Research on the tracking problem follows two broad lines of thought: a) without relying on prior knowledge, detect moving targets directly from the image sequence, identify them, and finally track the target of interest; b) relying on prior knowledge of the target, first model the target, and then find the matching moving target in the image sequence in real time. Around these two ideas a large number of effective motion detection and tracking algorithms have been produced, but to date the goal of combining robustness, accuracy and real-time performance in a single algorithm has not been well achieved. Moving object detection can be divided, according to the background, into static-background and dynamic-background cases; for a static background the main methods are the following:
1. Background subtraction
Background subtraction detects moving regions from the difference between the current image and a background image. It generally provides the most complete object features, but it is especially sensitive to changes in a dynamic scene, such as weather, illumination, background perturbation and background objects moving in or out, and the shadow of the moving target also affects the accuracy of detection and tracking. Its basic idea is to first obtain a background model and then subtract it from the current frame: if the pixel difference is greater than a threshold, the pixel is judged to belong to the moving target, otherwise to the background. The construction and updating of the background model and the removal of shadows are critical to the quality of the tracking results. A minimal sketch of this idea is given below.
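A minimal background-subtraction sketch in Python with OpenCV, illustrating the basic idea above rather than the patent's method; the threshold of 25 gray levels and the learning rate alpha are illustrative values, and background is assumed to be a float32 running-average image of the same size as the frame (e.g. initialized as np.float32(first_gray_frame)):

    import cv2
    import numpy as np

    def foreground_mask(frame_gray, background, thresh=25, alpha=0.05):
        """Return a binary foreground mask and update the running-average background."""
        diff = cv2.absdiff(frame_gray, cv2.convertScaleAbs(background))
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        # Update the background model only where no motion was detected.
        cv2.accumulateWeighted(frame_gray, background, alpha,
                               mask=cv2.bitwise_not(mask))
        return mask, background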
2. Frame differencing
The adjacent-frame differencing method computes the difference between two consecutive frames to obtain information such as the position and shape of the moving object. It adapts well to the environment, in particular to illumination changes, but because the texture and gray levels of pixels on the moving target are similar, it cannot detect the complete target: it obtains only partial information about the moving object and is insensitive to slowly moving objects, so it has some limitations. He Guiming et al. proposed a symmetric frame difference on top of adjacent-frame differencing: every three consecutive frames in the sequence are differenced symmetrically to detect the motion range of the target, while the template segmented from the previous frame is used to refine the detected motion range, which recovers the shape contour of the moving target in the middle frame well (see the sketch below).
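A sketch of the symmetric (three-frame) difference just described, assuming three consecutive grayscale frames of equal size; the fixed threshold is an illustrative value, not one given in the text:

    import cv2

    def symmetric_difference(prev, curr, nxt, thresh=25):
        """Approximate the motion region of the middle frame by intersecting
        the |curr - prev| and |nxt - curr| difference masks."""
        d1 = cv2.absdiff(curr, prev)
        d2 = cv2.absdiff(nxt, curr)
        _, m1 = cv2.threshold(d1, thresh, 255, cv2.THRESH_BINARY)
        _, m2 = cv2.threshold(d2, thresh, 255, cv2.THRESH_BINARY)
        return cv2.bitwise_and(m1, m2)  # motion present in both differences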
3. Optical flow
Motion in space can be described by a motion field, and on the image plane the motion of an object is reflected by differences in the gray-level distribution across the image sequence, so the motion field in space projects onto the image as an optical flow field. The optical flow field reflects the gray-level change trend at every point of the image; it can be regarded as the instantaneous velocity field produced by pixels with gray values moving on the image plane, and it is an approximate estimate of the real motion field. Under ideal conditions it can detect independently moving objects without any prior knowledge of the scene, can compute the speed of moving objects very accurately, and can be used with dynamic scenes. However, most optical flow methods are computationally complex and demanding on hardware, unsuitable for real-time processing, sensitive to noise, and poor in noise immunity.
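For comparison, a dense optical-flow sketch using OpenCV's Farneback implementation; the parameter values are commonly used defaults, not values from this patent:

    import cv2

    def dense_flow(prev_gray, curr_gray):
        """Per-pixel apparent motion (speed and direction) between two grayscale frames."""
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        return magnitude, angle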
For dynamic backgrounds the main methods are as follows:
Because of the complicated relative motion between the target and the camera, moving object detection under a dynamic background is much more complex than under a static background. It is not applicable here, so it is not considered further.
According to how the moving target is represented and how similarity is measured, tracking algorithms can be divided into four classes: active-contour-based, feature-based, region-based and model-based tracking. The precision and robustness of a tracking algorithm depend largely on the representation of the moving target and the similarity measure, while its real-time performance depends on the matching search strategy and the filtering/prediction algorithm.
1. Active-contour-based tracking
The active contour model proposed by Kass et al., the snake model, is a deformable curve defined in the image domain; by minimizing its energy function, the active contour gradually adjusts its own shape until it coincides with the target contour, which is why the deformable curve is also called a snake curve. The snake technique can handle arbitrary deformation of an arbitrarily shaped object: the object boundary obtained by segmentation is first used as the initial tracking template, an objective function characterizing the true object boundary is then defined, and by reducing the objective function value the initial contour is gradually moved toward the true boundary of the object. The advantage of active-contour tracking is that it considers not only the gray-level information of the image but also the geometric information of the overall contour, which strengthens the reliability of tracking. Because the tracking process is in effect a search for a solution, the computational cost is large, and owing to the blindness of the snake model the tracking result is not ideal for fast-moving or strongly deforming objects.
2. Feature-based tracking
Feature-matching tracking does not consider the overall appearance of the moving target; it tracks only by some salient features of the target image, under the assumption that the moving target can be expressed by a unique feature set, so finding the corresponding feature set is taken as having located the moving target. Besides tracking with a single feature, several features can also be fused together as the tracking feature. Feature-based tracking mainly involves two aspects: feature extraction and feature matching. Its advantage is insensitivity to changes in scale, deformation and brightness of the moving target; even if part of the target is occluded, the tracking task can still be completed as long as some features remain visible, and combined with a Kalman filter it also gives good tracking results. However, it is rather sensitive to image blur and noise, the quality of feature extraction depends on the choice of extraction operators and their parameters, and the feature correspondences between successive frames are difficult to determine, especially when the number of features per frame is inconsistent and features are missed, appear or disappear.
3. Region-based tracking
The basic idea of region-based tracking is: a) obtain a template containing the target, either by image segmentation or defined manually in advance; the template is usually a rectangle slightly larger than the target, but it may also be irregular; b) track the target in the image sequence with a correlation algorithm. The advantage of this algorithm is that when the target is not occluded, the tracking precision is very high and tracking is highly stable. Its drawbacks are, first, that it is time-consuming, especially when the search region is large, and second, that it requires the target deformation to be small and the occlusion not too severe, otherwise the drop in correlation precision causes loss of the target. In recent years, work on region-based tracking has focused on how to handle template changes, which are caused by changes of the moving object's pose; if the pose change of the target can be predicted correctly, stable tracking can be achieved.
4. Model-based tracking
Model-based tracking builds a model of the tracked target from prior knowledge, and then tracks the target by matching while updating the model in real time. For a rigid object, whose motion is mainly translation and rotation, this method can achieve target tracking. In practice, however, the tracked objects are not only rigid bodies; most are non-rigid, and an exact geometric model of the target is not easy to obtain. The method is not affected by the viewing angle, has strong robustness, high model-matching tracking precision, suitability for the various motion changes of maneuvering targets and strong anti-interference ability, but its computational analysis is complex, its execution is slow, updating the model is complicated, and real-time performance is poor. Accurately establishing the motion model is the key to successful model matching.
The tracking algorithms above are relatively common and effective methods, but none is truly general. Their effectiveness varies with the environment, illumination, foreground and background changes, and scene complexity, and in actual tracking applications they must be weighed and selected carefully.
Summary of the invention
The object of the invention is to provide an abnormal intrusion detection method based on a motion template that achieves fast and accurate detection with limited hardware under the harsh outdoor environments of agricultural scenes such as greenhouses, fields and warehouses.
The abnormal intrusion detection method based on a motion template proposed by the invention, after the region of interest is chosen, comprises the following steps:
(1) Select the region of interest and the size parameters of the abnormal object
First, according to the actual requirements, select with the mouse the regions of interest on the surveillance video, such as corridors, doors, windows and other key positions. Then, according to the requirements, determine the approximate size parameters of the target object so that the program can track with specificity.
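One possible way to implement this interactive selection with OpenCV's built-in ROI helper; the video file name and the minimum object area are hypothetical example values:

    import cv2

    cap = cv2.VideoCapture("greenhouse.avi")     # hypothetical video file
    ok, first_frame = cap.read()
    # Drag a rectangle with the mouse; returns (x, y, w, h).
    roi = cv2.selectROI("select region of interest", first_frame,
                        showCrosshair=True, fromCenter=False)
    min_object_area = 400                        # assumed minimum target size in pixels
    cv2.destroyAllWindows()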
(2) Binarize the difference image and update the motion history image
For the surveillance video, subtract the previous frame from the current frame to obtain the difference image, set a threshold, for example greater than the mean plus two standard deviations, and binarize the difference image to obtain the contour of the corresponding moving object, represented by a bounding rectangle. Then, as the rectangle moves, the new contour is captured and overlaid on the current contour, and contours older than a set time threshold, typically 30 frames, are deleted, producing a continuous contour motion trace, namely the motion history image (MHI).
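A sketch of this step, assuming OpenCV 4's two-value findContours return; the mean-plus-two-standard-deviations threshold follows the rule stated above:

    import cv2
    import numpy as np

    def motion_silhouette(prev_gray, curr_gray):
        """Binarize the frame difference with the mean + 2*std rule and
        return the silhouette plus one bounding rectangle per moving blob."""
        diff = cv2.absdiff(curr_gray, prev_gray)
        thresh = float(diff.mean() + 2.0 * diff.std())
        _, silhouette = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(silhouette, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours]  # (x, y, w, h)
        return silhouette, boxes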
(3) Segment the motion history image into independent sub-regions, then segment and extract objects
In the MHI, find the current contour area; starting from the most recent contour, search the immediate neighborhood of its boundary for the most recent motion, and if such motion is found, segment the object of interest out of the current region.
(4) Compute the gradient direction of the motion of each segmented region and the corresponding direction mask
The motion template records the object contours at different times; the Sobel gradient method is used to compute the gradient of the MHI to obtain the motion information, and after rejecting excessively large gradient outliers (defined here as greater than the mean by more than two standard deviations), the measured value of the global motion is obtained. Repeating the operations of step (3) and step (4) yields the motion information of the whole image. The Sobel computation proceeds as follows:
For a digital image, the first-order derivative can be replaced by the first-order difference:
Δxf(x,y) = f(x,y) − f(x−1,y);
Δyf(x,y) = f(x,y) − f(x,y−1)
When computing the gradient, the square-sum-and-square-root operation can be approximated by the sum of the absolute values of the two components, i.e.:
G[f(x,y)] = {[Δxf(x,y)]² + [Δyf(x,y)]²}^(1/2) ≈ |Δxf(x,y)| + |Δyf(x,y)|;
The Sobel gradient operator first takes a weighted average, then differences, then the gradient, i.e.:
Δxf(x,y) = f(x−1,y+1) + 2f(x,y+1) + f(x+1,y+1) − f(x−1,y−1) − 2f(x,y−1) − f(x+1,y−1);
Δyf(x,y) = f(x−1,y−1) + 2f(x−1,y) + f(x−1,y+1) − f(x+1,y−1) − 2f(x+1,y) − f(x+1,y+1);
G[f(x,y)] = |Δxf(x,y)| + |Δyf(x,y)|;
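The same computation expressed with OpenCV's Sobel operator; a sketch in which the |Gx| + |Gy| sum stands in for the exact gradient magnitude, as in the approximation above:

    import cv2
    import numpy as np

    def sobel_gradient(img):
        """3x3 Sobel responses and the |Gx| + |Gy| magnitude approximation."""
        img32 = np.float32(img)
        gx = cv2.Sobel(img32, cv2.CV_32F, 1, 0, ksize=3)  # Δx f
        gy = cv2.Sobel(img32, cv2.CV_32F, 0, 1, ksize=3)  # Δy f
        magnitude = np.abs(gx) + np.abs(gy)               # |Δx f| + |Δy f|
        return gx, gy, magnitude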
(5) Compute the motion direction within the selected region, compare it with the preset size parameters, and decide whether an abnormal intrusion has occurred
The above steps yield the motion information of the whole video. Each detected contour is compared with the preset parameter thresholds, and when the size matches and the motion occurs inside the region of interest, an abnormal intrusion is confirmed.
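A minimal sketch of this decision rule, assuming the detected contour and the region of interest are both given as (x, y, w, h) rectangles and the preset parameter is a minimum area:

    def is_intrusion(box, roi, min_area):
        """True when the moving box is at least min_area pixels and overlaps the ROI."""
        bx, by, bw, bh = box
        rx, ry, rw, rh = roi
        ox = max(0, min(bx + bw, rx + rw) - max(bx, rx))   # horizontal overlap
        oy = max(0, min(by + bh, ry + rh) - max(by, ry))   # vertical overlap
        return bw * bh >= min_area and ox > 0 and oy > 0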
(6) Generate the corresponding video summaries from the captured motion information
Once an abnormal intrusion has been confirmed, a timeout threshold is used to splice and extract the video segments in which abnormal intrusions occur, forming a number of video summaries that serve as pre-processing for subsequent abnormality recognition.
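One possible rendering of this splicing step: intrusion frame indices closer together than the timeout are merged into segments and each segment is written out as a clip; the file names and codec are illustrative:

    import cv2

    def write_summaries(video_path, intrusion_frames, timeout=30):
        """Merge intrusion frames into segments and save each as a summary clip."""
        segments = []
        for idx in sorted(intrusion_frames):
            if segments and idx - segments[-1][1] <= timeout:
                segments[-1][1] = idx           # extend the current segment
            else:
                segments.append([idx, idx])     # start a new segment

        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        fourcc = cv2.VideoWriter_fourcc(*"XVID")
        for i, (start, end) in enumerate(segments):
            out = cv2.VideoWriter("summary_%d.avi" % i, fourcc, fps, (w, h))
            cap.set(cv2.CAP_PROP_POS_FRAMES, start)
            for _ in range(start, end + 1):
                ok, frame = cap.read()
                if not ok:
                    break
                out.write(frame)
            out.release()
        cap.release()
        return segments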
The beneficial effects of the invention are:
(1) High efficiency. In video surveillance applications, real-time performance is one of the important requirements; the invention guarantees continuous computation on the live monitoring picture and detects abnormal intrusion events in real time.
(2) Small influence of illumination. Because surveillance scenes differ, their illumination conditions differ; the invention adapts to abnormal intrusion detection under a variety of illumination conditions.
(3) High detection rate. The invention is sensitive to slight changes in the monitored scene: even the intrusion of a very small animal can be detected, and detection precision and accuracy are high.
Brief description of the drawings
Fig. 1 is the flow chart of the abnormal intrusion detection method based on a motion template of the invention.
Fig. 2 is a monitoring example from a real working scene; the red-framed region is the user-defined region of interest.
Fig. 3 is a schematic diagram of computing the MHI.
Fig. 4 is a schematic diagram of the motion templates of two objects.
Fig. 5 is the flow chart of the method for segmenting the image into independent sub-regions and computing their gradients separately.
Fig. 6 is the flow chart of the gradient computation.
Embodiments
The embodiments of the abnormal intrusion detection method based on a motion template of the invention are explained below with reference to the drawings, but it should be noted that the implementation of the invention is not limited to the following embodiments.
The concrete operation steps of the method of the invention are shown in Fig. 1.
1. Select the region of interest and the size parameters of the abnormal object
As shown in Fig. 2, this is the monitoring picture of an agricultural greenhouse. In the picture, click with the mouse to choose the region of interest for abnormal intrusion detection; then set the minimum size of the abnormal object.
2. Binarize the difference image and update the motion history image
As shown in Fig. 3:
The currently segmented object is marked in white. At the next time point the object moves and is marked with the new current time, leaving the previous segmentation boundary behind; at the time point after that the object continues to move and the earlier segmentation marks become progressively darker rectangles. These sequences containing the motion form the motion history image, computed according to the formula:

    mhi(x,y) = timestamp   if silhouette(x,y) ≠ 0
    mhi(x,y) = 0           if silhouette(x,y) = 0 and mhi(x,y) < timestamp − duration
    mhi(x,y) = mhi(x,y)    otherwise

where mhi is a floating-point image representing the motion template; silhouette is a byte image whose non-zero pixels mark the most recent segmented contour of the foreground object; timestamp is the current system time; and duration is the timeout threshold, 30 frames in the earlier example. Any pixel of mhi whose value is earlier (smaller) than timestamp minus duration is therefore set to 0.
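A direct NumPy rendering of this update rule, using mhi, silhouette, timestamp and duration exactly as defined above; a sketch of the standard motion-history update rather than code taken from the patent:

    import numpy as np

    def update_mhi(mhi, silhouette, timestamp, duration):
        """Apply the update rule above to a float32 MHI."""
        mhi = mhi.copy()
        mhi[silhouette != 0] = timestamp                       # stamp moving pixels
        stale = (silhouette == 0) & (mhi < timestamp - duration)
        mhi[stale] = 0                                         # erase timed-out motion
        return mhi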
3. Segment the motion history image into independent sub-regions, then segment and extract objects
There are often multiple targets in the scene; as shown in Fig. 3, two moving targets appear side by side in the same scene. The MHI image can therefore be divided into independent sub-regions so that each target can be computed and tracked more easily. The computation process is shown in Fig. 4. First, scan the MHI image to find a current contour (a). When a current contour has been found, search along its boundary for other nearby, older contours (b); if one is found, segment out the local motion (c) by exhaustive filling. Then, from the timestamp and the known time step, compute the gradient of the segmented local-motion region and use this gradient to describe the local motion, until the computation is complete. Then remove that sub-region and search for the next current contour (d), search along it (e), and fill it step by step by exhaustive filling (f). Then compute the motion of the newly segmented region, and repeat the above until no current contours remain.
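A simplified sketch of the segmentation idea: instead of the exhaustive filling over contour boundaries described above, the connected components of a "recent motion" mask are labelled and each component is treated as one independently moving sub-region; recent_window is an assumed parameter, not a value from the patent:

    import cv2
    import numpy as np

    def segment_motion_regions(mhi, timestamp, recent_window):
        """Label connected regions of recent motion and return their bounding boxes."""
        recent = ((mhi > 0) & (mhi > timestamp - recent_window)).astype(np.uint8)
        num_labels, labels = cv2.connectedComponents(recent)
        regions = []
        for lbl in range(1, num_labels):          # label 0 is the background
            ys, xs = np.nonzero(labels == lbl)
            x, y = int(xs.min()), int(ys.min())
            w, h = int(xs.max() - x + 1), int(ys.max() - y + 1)
            regions.append((x, y, w, h))
        return regions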
4. Compute the motion direction within the selected region, compare it with the preset parameters, and decide whether it is abnormal
Repeating the operations of steps (3) and (4) yields the motion information of the whole image. As shown in Fig. 5, because region boundaries easily produce artificially large gradient outliers, we limit the gradient magnitude to filter out the outliers, obtain the gradient of the selected region, and thus the motion direction of the object. The gradient is computed as shown below: first compute the differences Dx and Dy of the MHI, then compute the gradient direction as:
orientation(x,y) = arctan(Dy(x,y)/Dx(x,y))
where orientation(x,y) is the direction at point (x,y); the signs of Dx and Dy are taken into account in the computation, and a mask is then filled to indicate which directions are valid.
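A sketch of this orientation computation: NumPy's arctan2 takes the signs of Dx and Dy into account, and the mask keeps only pixels whose gradient magnitude is large enough to define a meaningful direction (the cut-off value is illustrative):

    import numpy as np

    def gradient_orientation(dx, dy, min_magnitude=1.0):
        """Per-pixel direction in degrees plus a validity mask."""
        orientation = np.degrees(np.arctan2(dy, dx)) % 360.0
        valid = (np.abs(dx) + np.abs(dy)) > min_magnitude   # direction mask
        return orientation, valid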
5. Generate the corresponding video summaries from the captured motion information
The moving-object information obtained in the previous steps is processed further to obtain the video segments containing the objects we are interested in. Two steps are needed (see the sketch below):
Feature extraction: extract features from the object's size, contour, color, shape, and so on.
Classification by features: a classifier is designed to separate the objects of interest from other objects.
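An illustrative sketch of these two steps: a few simple per-box features followed by a placeholder size rule standing in for a trained classifier; the feature set and the rule are examples, not the patent's:

    import numpy as np

    def box_features(frame_bgr, box):
        """Simple per-box features: area, aspect ratio and mean colour."""
        x, y, w, h = box
        patch = frame_bgr[y:y + h, x:x + w]
        return {
            "area": w * h,
            "aspect": w / float(h) if h else 0.0,
            "mean_color": patch.reshape(-1, 3).mean(axis=0),
        }

    def is_interesting(features, min_area=400):
        """Placeholder size-based rule standing in for a trained classifier."""
        return features["area"] >= min_area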

Claims (1)

  1. An abnormal intrusion detection method based on a motion template, characterized in that it comprises the steps of:
    (1) selecting the region of interest and the size parameters of the abnormal object:
    first, according to the actual requirements, selecting the region of interest on the surveillance video with the mouse; then, according to the requirements, determining the approximate size parameters of the target object so that the program can track with specificity;
    (2) binarizing the difference image and updating the motion history image:
    for the surveillance video, subtracting the previous frame from the current frame to obtain the difference image of the video, setting a threshold, and binarizing the difference image to obtain the contour of the corresponding moving object, represented by a bounding rectangle; then, as the rectangle moves, capturing the new contour and overlaying it on the current contour, deleting contours older than a set time threshold, and obtaining a continuous contour motion trace, namely the motion history image (MHI);
    (3) segmenting the motion history image into independent sub-regions, then segmenting and extracting objects:
    in the motion history image, finding the current contour area, and starting from the most recent contour, searching the immediate neighborhood of its boundary for the most recent motion; if the most recent motion is found, segmenting the object of interest out of the current region;
    (4) computing the gradient direction of the motion of each segmented region and the corresponding direction mask:
    the motion template having recorded the object contours at different times, computing accordingly the gradient of the motion history image with the Sobel gradient method to obtain the motion information; then rejecting excessively large gradient outliers to obtain the measured value of the global motion,
    "excessively large" being defined as greater than the mean by more than two standard deviations;
    repeating the operations of step (3) and step (4) to obtain the motion information of the whole image;
    (5) computing the motion direction within the selected region, comparing it with the preset parameter thresholds, and deciding whether it is abnormal:
    obtaining through the above steps the motion information of the whole video; comparing each detected contour with the preset parameter thresholds, and when the size matches and the motion occurs in the region of interest, confirming the occurrence of an abnormal intrusion;
    (6) generating the corresponding video summaries from the captured motion information:
    once an abnormal intrusion has been confirmed, splicing and extracting, by means of a timeout threshold, the video segments in which abnormal intrusions occur, forming a plurality of video summaries that provide pre-processing for subsequent abnormality recognition.
CN201210271736.3A 2012-08-01 2012-08-01 Abnormal intrusion detection method based on motion template Pending CN103577833A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210271736.3A CN103577833A (en) 2012-08-01 2012-08-01 Abnormal intrusion detection method based on motion template


Publications (1)

Publication Number Publication Date
CN103577833A true CN103577833A (en) 2014-02-12

Family

ID=50049582

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210271736.3A Pending CN103577833A (en) 2012-08-01 2012-08-01 Abnormal intrusion detection method based on motion template

Country Status (1)

Country Link
CN (1) CN103577833A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101189645A (en) * 2005-05-18 2008-05-28 艾迪泰克股份有限公司 System and method for intrusion detection
CN101719216A (en) * 2009-12-21 2010-06-02 西安电子科技大学 Movement human abnormal behavior identification method based on template matching
CN101950425A (en) * 2010-09-26 2011-01-19 新太科技股份有限公司 Motion behavior detection-based intelligent tracking arithmetic
CN102509306A (en) * 2011-10-08 2012-06-20 西安理工大学 Specific target tracking method based on video

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wu Gang et al., "Research on Detection and Tracking of Motion Templates in Strong-Noise Scenes", Computer Engineering and Applications (《计算机工程与应用》), vol. 46, no. 26, 11 September 2010 (2010-09-11) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105575027A (en) * 2014-10-09 2016-05-11 北京君正集成电路股份有限公司 Invasion and perimeter defense method and invasion and perimeter defense device
CN105575027B (en) * 2014-10-09 2019-02-05 北京君正集成电路股份有限公司 It is a kind of to invade and boundary defence method and device
CN106204633A (en) * 2016-06-22 2016-12-07 广州市保伦电子有限公司 A kind of student trace method and apparatus based on computer vision
CN106204633B (en) * 2016-06-22 2020-02-07 广州市保伦电子有限公司 Student tracking method and device based on computer vision
CN111226226A (en) * 2018-06-29 2020-06-02 杭州眼云智家科技有限公司 Motion-based object detection method, object detection device and electronic equipment
CN110278414A (en) * 2019-06-28 2019-09-24 Oppo广东移动通信有限公司 Image processing method, device, server and storage medium
CN111860192A (en) * 2020-06-24 2020-10-30 国网宁夏电力有限公司检修公司 Moving object identification method and system
CN111756602A (en) * 2020-06-29 2020-10-09 上海商汤智能科技有限公司 Communication timeout detection method in neural network model training and related product
RU2796096C1 (en) * 2022-05-13 2023-05-17 Акционерное общество "Научно-Производственный Комплекс "Альфа-М" Method of tracking objects

Similar Documents

Publication Publication Date Title
CN110084272B (en) Cluster map creation method and repositioning method based on cluster map and position descriptor matching
CN109949375B (en) Mobile robot target tracking method based on depth map region of interest
Lieb et al. Adaptive Road Following using Self-Supervised Learning and Reverse Optical Flow.
Rakibe et al. Background subtraction algorithm based human motion detection
CN105405154B (en) Target object tracking based on color-structure feature
CN101141633B (en) Moving object detecting and tracing method in complex scene
CN103077521B (en) A kind of area-of-interest exacting method for video monitoring
CN102542289B (en) Pedestrian volume statistical method based on plurality of Gaussian counting models
CN104091348A (en) Multi-target tracking method integrating obvious characteristics and block division templates
WO2015010451A1 (en) Method for road detection from one image
CN103577833A (en) Abnormal intrusion detection method based on motion template
CN109919944B (en) Combined superpixel graph-cut optimization method for complex scene building change detection
CN102447835A (en) Non-blind-area multi-target cooperative tracking method and system
CN103164858A (en) Adhered crowd segmenting and tracking methods based on superpixel and graph model
CN108038415B (en) Unmanned aerial vehicle automatic detection and tracking method based on machine vision
CN106447680A (en) Method for radar and vision fused target detecting and tracking in dynamic background environment
CN103268480A (en) System and method for visual tracking
CN103077539A (en) Moving object tracking method under complicated background and sheltering condition
CN104036524A (en) Fast target tracking method with improved SIFT algorithm
CN102222346A (en) Vehicle detecting and tracking method
CN106709938B (en) Based on the multi-target tracking method for improving TLD
CN105160649A (en) Multi-target tracking method and system based on kernel function unsupervised clustering
Rodríguez et al. An adaptive, real-time, traffic monitoring system
CN105760846A (en) Object detection and location method and system based on depth data
CN104036483A (en) Image processing system and image processing method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20140212