CN106778484A - Moving vehicle tracking under traffic scene - Google Patents


Info

Publication number
CN106778484A
CN106778484A (application CN201611030765.5A)
Authority
CN
China
Prior art keywords
target
tracking
vehicle
moving
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201611030765.5A
Other languages
Chinese (zh)
Inventor
陈锡清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanning Haofa Technology Co Ltd
Original Assignee
Nanning Haofa Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanning Haofa Technology Co Ltd filed Critical Nanning Haofa Technology Co Ltd
Priority to CN201611030765.5A priority Critical patent/CN106778484A/en
Publication of CN106778484A publication Critical patent/CN106778484A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133 Distances to prototypes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a method for tracking a moving vehicle in a traffic scene, comprising the following steps. S1: acquire the front-end video image, pre-process it, and detect the moving vehicle target to be used as the tracking object. S2: perform motion estimation on the tracked target with a Kalman filter: a motion state model is established, and the target's position in the current frame is predicted from its historical motion information. S3: according to the color histogram of the tracked target saved in the previous frame, compute the back projection within the prediction region given by the Kalman filter, and search for the moving target with the Camshift algorithm. S4: after marking the moving target, judge whether target positions overlap; if occlusion occurs, update only the target's position information and leave the histogram unchanged; if there is no occlusion, update both the motion state and the corresponding histogram. S5: take the updated target as the tracking object of the next frame, and repeat the above process.

Description

Moving vehicle tracking method in a traffic scene
Technical Field
The invention relates to a method for tracking a moving vehicle in a traffic scene.
Background
With the continuing acceleration of urbanization, the growth of the transportation industry and the rising number of automobiles have brought great convenience to people's work and travel. Problems have followed, however: urban road construction lags badly, experience in urban traffic management is insufficient, and the capacity of the road network cannot keep pace with the growth in traffic volume. Traffic congestion is worsening and traffic accidents occur frequently; these are common problems faced by countries throughout the world.
To solve the various problems of urban traffic and meet ever-growing traffic demand, the traditional approach has been to strengthen urban traffic infrastructure by continuously building more roads. Although this relieves the problems to some extent and helps keep urban traffic flowing, the road resources available for expansion are limited, so the approach cannot fundamentally solve the problems encountered today, and traffic accidents continue to occur. With the continuing development of science and technology, people have begun to consider using technologies such as computer vision to improve existing urban road traffic and to build a more convenient, efficient, safe and unobstructed traffic management system, thereby markedly improving the transport and management capacity of the traffic network. Intelligent transportation systems emerged in this context.
The intelligent transportation system (ITS) is a hot spot and leading edge of transportation development worldwide. Research in intelligent transportation mainly covers vehicle detection, vehicle tracking, vehicle information extraction and vehicle behavior analysis; vehicle detection and tracking are core links of an ITS and provide an important foundation for subsequent information extraction and behavior analysis.
The main purpose of vehicle detection and tracking is to accurately extract vehicle targets from the video image, match them using the vehicles' feature information, determine each target's position in every frame, and provide motion trajectories as the basis for vehicle behavior analysis. Real traffic scenes, however, are very complex, with interference such as mixed pedestrian and vehicle traffic, congestion and lighting changes, which makes vehicle detection and tracking difficult. The main difficulties are the following:
1. Mixed pedestrian and vehicle traffic. On road sections with heavy pedestrian flow, such as city centers and residential districts, vehicle detection and tracking are strongly disturbed by pedestrians, and during rush hours pedestrians and vehicles often mix at signalized intersections. Effectively distinguishing pedestrians from vehicles and avoiding interference from pedestrian flow is one of the main problems faced at present.
2. Vehicle occlusion. Traffic on highways moves fast and vehicles are widely spaced, so detection and tracking there are relatively easy. On urban roads, however, vehicles generally move slowly; congestion arises easily, particularly at peak times, and vehicles visibly occlude one another, which poses a great challenge to detection and tracking.
3. Illumination changes. Lighting in a traffic scene changes markedly over time; vehicle shadows caused by illumination changes strongly affect detection, and a vehicle's feature information differs greatly between daytime and night-time lighting. Handling lighting changes effectively and working stably around the clock is a basic requirement for vehicle detection and tracking in traffic scenes.
4. Algorithm complexity. In practical applications, an electronic police system places high demands on real-time performance, so the algorithm cannot be too complex.
Disclosure of Invention
The invention aims to provide a moving vehicle tracking method in a traffic scene.
The method for tracking a moving vehicle in a traffic scene comprises the following steps:
S1: acquiring a front-end video image through a camera, preprocessing the image, and detecting the moving vehicle target to be used as the tracking object;
S2: performing motion estimation on the tracked target with a Kalman filter: a motion state model is established, and the target's position in the current frame is predicted from its historical motion information;
S3: Camshift target tracking: according to the color histogram of the tracked target saved in the previous frame, the back projection is computed within the prediction range given by the Kalman filter, and the moving target is searched for with the Camshift algorithm;
S4: after the moving target is marked, judging whether target positions overlap; if occlusion exists, only the target's position information is updated and the histogram is left unchanged; if there is no occlusion, both the motion state and the corresponding histogram are updated;
S5: taking the updated target as the tracking object of the next frame, and repeating the above process.
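The occlusion rule of step S4 reduces to a rectangle-intersection test between tracked targets. A minimal sketch in Python; the `(x, y, w, h)` box convention and the `track` dictionary are illustrative assumptions, not part of the patent:

```python
def boxes_overlap(a, b):
    """True if axis-aligned boxes a and b, given as (x, y, w, h), intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def update_track(track, new_box, new_hist, other_boxes):
    """Step S4: always update the position; refresh the color histogram only
    when the target is not occluded by any other tracked box."""
    occluded = any(boxes_overlap(new_box, o) for o in other_boxes)
    track["box"] = new_box
    if not occluded:
        track["hist"] = new_hist
    return track
```

Freezing the histogram during overlap keeps the color model from absorbing pixels of the occluding vehicle.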
Further, the specific method for detecting the moving vehicle target is as follows:
S1-1: extract a large number of vehicle images from the video images as positive samples and non-vehicle images as negative samples, and extract Haar-like rectangular features from the training samples as the training feature set;
S1-2: assume the sample space is X and the label set is Y = {0, 1}, where 0 denotes non-vehicle and 1 denotes vehicle; assume the total number of Haar-like features is N, and let w_{t,i} denote the weight of the i-th sample in round t;
S1-3: the strong classifier is trained as follows:
(1) for a series of training samples (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), assume the n samples in the sample library are uniformly distributed, so the initial sample weights are w_{1,i} = 1/n;
(2) for t = 1 to T:
1) normalize the sample weight distribution: w_{t,i} ← w_{t,i} / Σ_j w_{t,j};
2) for each feature j, train a weak classifier h_{t,j}(x) under the current weights and compute its weighted classification error rate ε_{t,j} = Σ_i w_{t,i} |h_{t,j}(x_i) − y_i|;
3) select the best weak classifier h_t(x) from the weak classifiers: let k = argmin_j ε_{t,j}; then h_t(x) = f_{t,k}(x), and the classification error rate on the sample set is ε_t = ε_{t,k};
4) update the sample weights according to this round's classification error rate: w_{t+1,i} = w_{t,i} β_t^{1−e_i}, where β_t = ε_t / (1 − ε_t), e_i = 0 if sample i is classified correctly and e_i = 1 if it is misclassified; the final strong classifier is H(x) = 1 if Σ_{t=1}^{T} α_t h_t(x) ≥ (1/2) Σ_{t=1}^{T} α_t and 0 otherwise, where α_t = log(1/β_t);
S1-4: scan windows of different scales over the image to be detected, and finally output all detected vehicle targets.
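One round of the boosting loop in step S1-3 (weight normalization, the weighted error rate, and the β-based update) can be sketched in Python. `adaboost_round` is a hypothetical helper; the search over all features for the best weak classifier is omitted for brevity:

```python
import math

def adaboost_round(weights, preds, labels):
    """One boosting round for an already-chosen weak classifier.

    weights: current sample weights w_{t,i}
    preds:   outputs of the weak classifier h_t on each sample (0 or 1)
    labels:  ground-truth labels y_i in {0, 1}
    Returns the updated (unnormalized) weights and the vote alpha_t.
    """
    # 1) normalize the weight distribution
    total = sum(weights)
    w = [wi / total for wi in weights]
    # weighted classification error rate eps_t of the weak classifier
    eps = sum(wi for wi, p, y in zip(w, preds, labels) if p != y)
    # 4) beta_t = eps/(1-eps); correctly classified samples (e_i = 0) are
    # down-weighted by beta_t, misclassified ones (e_i = 1) keep their weight
    beta = eps / (1.0 - eps)
    w_next = [wi * (beta if p == y else 1.0) for wi, p, y in zip(w, preds, labels)]
    alpha = math.log(1.0 / beta)  # this weak classifier's weight in the strong vote
    return w_next, alpha
```

With four uniform samples and one misclassification, ε_t = 0.25, so β_t = 1/3: the three correct samples are down-weighted and the hard one dominates the next round.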
Further, the specific method for performing motion estimation on the tracked target with the Kalman filter is as follows:
S2-1: the Kalman filtering algorithm model comprises a state equation and an observation equation:
S(n) = A(n)S(n-1) + W(n-1),
X(n) = C(n)S(n) + V(n),
where S(n) and X(n) are the state vector and observation vector at time n, A(n) is the state transition matrix, C(n) is the observation matrix, and W(n) and V(n) are the state noise and observation noise, both uncorrelated zero-mean white Gaussian noise;
S2-2: the center point of the vehicle target rectangle is taken as the prediction object, and the motion state vectors X_x and X_y of the moving target's center point are established:
X_x = (s_x, v_x, a_x)^T, X_y = (s_y, v_y, a_y)^T,
where s_x, s_y, v_x, v_y, a_x, a_y denote the position, velocity and acceleration of the vehicle target in the horizontal and vertical directions, respectively;
S2-3: the equations of motion of the tracked target's center in the horizontal direction are:
s_x(n) = s_x(n-1) + v_x(n-1)T + (1/2)a_x(n-1)T²,
v_x(n) = v_x(n-1) + a_x(n-1)T,
a_x(n) = a_x(n-1) + o_x(n-1)T,
where s_x(n), v_x(n), a_x(n) denote the position, velocity and acceleration of the target's center point at time n, o_x(n-1) is white noise, and T is the sampling interval;
rewriting the above equations in matrix form:
[s_x(n); v_x(n); a_x(n)] = [1 T T²/2; 0 1 T; 0 0 1] [s_x(n-1); v_x(n-1); a_x(n-1)] + [0; 0; o_x(n-1)T];
the only motion state component that can be observed is the position of the moving target:
s_x(n) = [1 0 0] [s_x(n); v_x(n); a_x(n)];
S2-4: comparing with the state and observation equations of the Kalman filter in S2-1, the state equation and observation equation of the tracked target's center point are:
[s_x(n); v_x(n); a_x(n)] = A(n) [s_x(n-1); v_x(n-1); a_x(n-1)] + W(n-1),
s_x(n) = C(n) [s_x(n); v_x(n); a_x(n)],
where A(n) = [1 T T²/2; 0 1 T; 0 0 1] and C(n) = [1 0 0].
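The constant-acceleration model of steps S2-2 to S2-4 can be sketched with NumPy. `make_transition` and `kalman_step` are illustrative names, and the noise covariances Q and R are left as parameters, since the patent does not specify them:

```python
import numpy as np

def make_transition(T):
    """A(n) from step S2-3: constant-acceleration transition for one axis."""
    return np.array([[1.0, T, 0.5 * T * T],
                     [0.0, 1.0, T],
                     [0.0, 0.0, 1.0]])

def kalman_step(x, P, z, A, C, Q, R):
    """One predict/update cycle for the state x = (s, v, a) of one axis."""
    # predict with the motion model
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # correct with the observed position z (C = [1 0 0])
    S = C @ P_pred @ C.T + R                # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

In the tracker, `x_pred[0]` gives the predicted center position around which Camshift searches; the Camshift result is then fed back as the observation z.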
Further, the specific flow of the Camshift algorithm is as follows:
S3-1: initialize the search window so that the target to be tracked lies within it;
S3-2: extract the H component of the corresponding window region in HSV space to obtain an H-component histogram, and from it compute the color probability distribution map (back-projection map) of the whole tracking area;
S3-3: select a search window of the same size as the initial window in the back-projection map;
S3-4: adjust the window size according to the pixel sum S within the search window, and move the window center to the centroid;
S3-5: judge whether the search has converged; if so, output the centroid (x, y); otherwise repeat steps S3-3 and S3-4 until convergence or the maximum number of iterations is reached;
S3-6: take the position and size of the final search window as the initial window of the next frame, and continue the loop.
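The window-shifting loop of steps S3-3 to S3-5 is essentially a mean-shift iteration over the back-projection map. A minimal fixed-window sketch follows; Camshift additionally adapts the window size from the pixel sum, which is omitted here, and the function name is illustrative:

```python
import numpy as np

def mean_shift(prob, window, max_iter=20, eps=1.0):
    """prob: 2-D back-projection map; window: (x, y, w, h).
    Shifts the window toward the centroid until it moves less than eps."""
    x, y, w, h = window
    for _ in range(max_iter):
        roi = prob[y:y + h, x:x + w]
        total = roi.sum()
        if total == 0:
            break
        ys, xs = np.mgrid[0:h, 0:w]
        cx = (roi * xs).sum() / total   # centroid inside the window
        cy = (roi * ys).sum() / total
        dx = int(round(cx - (w - 1) / 2.0))
        dy = int(round(cy - (h - 1) / 2.0))
        x = min(max(x + dx, 0), prob.shape[1] - w)   # move center to centroid
        y = min(max(y + dy, 0), prob.shape[0] - h)
        if abs(dx) < eps and abs(dy) < eps:          # convergence test (S3-5)
            break
    return (x, y, w, h)
```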
The invention has the beneficial effects that:
1) the vehicle detection algorithm based on Haar-like features and an Adaboost classifier can obtain a reliable vehicle classifier by enriching the training samples; it adapts well to the complex changes in traffic scenes, achieves a very high detection rate with a low false-alarm rate, and can meet the practical working requirements of an electronic police system;
2) the invention tracks the vehicle target by combining the Camshift tracking method, which is based on the target's color information, with Kalman tracking, which is based on motion-information prediction, and achieves a good tracking effect.
Detailed Description
The following specific examples further illustrate the invention but are not intended to limit the invention thereto.
The method for tracking a moving vehicle in a traffic scene comprises the following steps:
S1: acquiring a front-end video image through a camera, preprocessing the image, and detecting the moving vehicle target to be used as the tracking object;
S2: performing motion estimation on the tracked target with a Kalman filter: a motion state model is established, and the target's position in the current frame is predicted from its historical motion information;
S3: Camshift target tracking: according to the color histogram of the tracked target saved in the previous frame, the back projection is computed within the prediction range given by the Kalman filter, and the moving target is searched for with the Camshift algorithm;
S4: after the moving target is marked, judging whether target positions overlap; if occlusion exists, only the target's position information is updated and the histogram is left unchanged; if there is no occlusion, both the motion state and the corresponding histogram are updated;
S5: taking the updated target as the tracking object of the next frame, and repeating the above process.
The specific method for detecting the moving vehicle target is as follows:
S1-1: extract a large number of vehicle images from the video images as positive samples and non-vehicle images as negative samples, and extract Haar-like rectangular features from the training samples as the training feature set;
S1-2: assume the sample space is X and the label set is Y = {0, 1}, where 0 denotes non-vehicle and 1 denotes vehicle; assume the total number of Haar-like features is N, and let w_{t,i} denote the weight of the i-th sample in round t;
S1-3: the strong classifier is trained as follows:
(1) for a series of training samples (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), assume the n samples in the sample library are uniformly distributed, so the initial sample weights are w_{1,i} = 1/n;
(2) for t = 1 to T:
1) normalize the sample weight distribution: w_{t,i} ← w_{t,i} / Σ_j w_{t,j};
2) for each feature j, train a weak classifier h_{t,j}(x) under the current weights and compute its weighted classification error rate ε_{t,j} = Σ_i w_{t,i} |h_{t,j}(x_i) − y_i|;
3) select the best weak classifier h_t(x) from the weak classifiers: let k = argmin_j ε_{t,j}; then h_t(x) = f_{t,k}(x), and the classification error rate on the sample set is ε_t = ε_{t,k};
4) update the sample weights according to this round's classification error rate: w_{t+1,i} = w_{t,i} β_t^{1−e_i}, where β_t = ε_t / (1 − ε_t), e_i = 0 if sample i is classified correctly and e_i = 1 if it is misclassified; the final strong classifier is H(x) = 1 if Σ_{t=1}^{T} α_t h_t(x) ≥ (1/2) Σ_{t=1}^{T} α_t and 0 otherwise, where α_t = log(1/β_t);
S1-4: scan windows of different scales over the image to be detected, and finally output all detected vehicle targets.
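The Haar-like rectangular features of step S1-1 are conventionally evaluated in constant time with an integral image (summed-area table). A sketch under that standard construction; the function names are illustrative:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle [x, x+w) x [y, y+h) in four lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_horizontal_edge(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left half-window minus right half."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

Because each rectangle sum needs only four table lookups, scanning windows at many scales (step S1-4) stays cheap even with thousands of features.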
The specific method for performing motion estimation on the tracked target with the Kalman filter is as follows:
S2-1: the Kalman filtering algorithm model comprises a state equation and an observation equation:
S(n) = A(n)S(n-1) + W(n-1),
X(n) = C(n)S(n) + V(n),
where S(n) and X(n) are the state vector and observation vector at time n, A(n) is the state transition matrix, C(n) is the observation matrix, and W(n) and V(n) are the state noise and observation noise, both uncorrelated zero-mean white Gaussian noise;
S2-2: the center point of the vehicle target rectangle is taken as the prediction object, and the motion state vectors X_x and X_y of the moving target's center point are established:
X_x = (s_x, v_x, a_x)^T, X_y = (s_y, v_y, a_y)^T,
where s_x, s_y, v_x, v_y, a_x, a_y denote the position, velocity and acceleration of the vehicle target in the horizontal and vertical directions, respectively;
S2-3: the equations of motion of the tracked target's center in the horizontal direction are:
s_x(n) = s_x(n-1) + v_x(n-1)T + (1/2)a_x(n-1)T²,
v_x(n) = v_x(n-1) + a_x(n-1)T,
a_x(n) = a_x(n-1) + o_x(n-1)T,
where s_x(n), v_x(n), a_x(n) denote the position, velocity and acceleration of the target's center point at time n, o_x(n-1) is white noise, and T is the sampling interval;
rewriting the above equations in matrix form:
[s_x(n); v_x(n); a_x(n)] = [1 T T²/2; 0 1 T; 0 0 1] [s_x(n-1); v_x(n-1); a_x(n-1)] + [0; 0; o_x(n-1)T];
the only motion state component that can be observed is the position of the moving target:
s_x(n) = [1 0 0] [s_x(n); v_x(n); a_x(n)];
S2-4: comparing with the state and observation equations of the Kalman filter in S2-1, the state equation and observation equation of the tracked target's center point are:
[s_x(n); v_x(n); a_x(n)] = A(n) [s_x(n-1); v_x(n-1); a_x(n-1)] + W(n-1),
s_x(n) = C(n) [s_x(n); v_x(n); a_x(n)],
where A(n) = [1 T T²/2; 0 1 T; 0 0 1] and C(n) = [1 0 0].
The specific flow of the Camshift algorithm is as follows:
S3-1: initialize the search window so that the target to be tracked lies within it;
S3-2: extract the H component of the corresponding window region in HSV space to obtain an H-component histogram, and from it compute the color probability distribution map (back-projection map) of the whole tracking area;
S3-3: select a search window of the same size as the initial window in the back-projection map;
S3-4: adjust the window size according to the pixel sum S within the search window, and move the window center to the centroid;
S3-5: judge whether the search has converged; if so, output the centroid (x, y); otherwise repeat steps S3-3 and S3-4 until convergence or the maximum number of iterations is reached;
S3-6: take the position and size of the final search window as the initial window of the next frame, and continue the loop.
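The H-component histogram and back-projection of step S3-2 can be sketched with NumPy, assuming OpenCV's 8-bit hue convention of values in [0, 180); both function names are illustrative:

```python
import numpy as np

def hue_histogram(hue_roi, bins=16):
    """H-component histogram of the target region, scaled to [0, 1]."""
    hist, _ = np.histogram(hue_roi, bins=bins, range=(0, 180))
    return hist / max(hist.max(), 1)

def back_project(hue_img, hist, bins=16):
    """Back-projection map: per-pixel probability that a pixel's hue
    belongs to the target's color model."""
    idx = (hue_img.astype(int) * bins // 180).clip(0, bins - 1)
    return hist[idx]
```

The resulting probability map is exactly what the mean-shift window of steps S3-3 to S3-5 climbs.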

Claims (4)

1. A method for tracking a moving vehicle in a traffic scene, characterized by comprising the following steps:
S1: acquiring a front-end video image through a camera, preprocessing the image, detecting the moving vehicle target, and segmenting it to be used as the tracking object;
S2: performing motion estimation on the tracked target with a Kalman filter: establishing a motion state model and predicting the target's position in the current frame from its historical motion information;
S3: Camshift target tracking: according to the color histogram of the tracked target saved in the previous frame, computing the back projection within the prediction range given by the Kalman filter, and searching for the moving target with the Camshift algorithm;
S4: after the moving target is marked, judging whether target positions overlap; if occlusion exists, updating only the target's position information without updating the histogram; if there is no occlusion, updating both the motion state and the corresponding histogram;
S5: taking the updated target as the tracking object of the next frame, and repeating the above process.
2. The moving vehicle tracking method according to claim 1, characterized in that the moving vehicle target detection is as follows:
S1-1: extract a large number of vehicle images from the video images as positive samples and non-vehicle images as negative samples, and extract Haar-like rectangular features from the training samples as the training feature set;
S1-2: assume the sample space is X and the label set is Y = {0, 1}, where 0 denotes non-vehicle and 1 denotes vehicle; assume the total number of Haar-like features is N, and let w_{t,i} denote the weight of the i-th sample in round t;
S1-3: the strong classifier is trained as follows:
(1) for a series of training samples (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), assume the n samples in the sample library are uniformly distributed, so the initial sample weights are w_{1,i} = 1/n;
(2) for t = 1 to T:
1) normalize the sample weight distribution: w_{t,i} ← w_{t,i} / Σ_j w_{t,j};
2) for each feature j, train a weak classifier h_{t,j}(x) under the current weights and compute its weighted classification error rate ε_{t,j} = Σ_i w_{t,i} |h_{t,j}(x_i) − y_i|;
3) select the best weak classifier h_t(x) from the weak classifiers: let k = argmin_j ε_{t,j}; then h_t(x) = f_{t,k}(x), and the classification error rate on the sample set is ε_t = ε_{t,k};
4) update the sample weights according to this round's classification error rate: w_{t+1,i} = w_{t,i} β_t^{1−e_i}, where β_t = ε_t / (1 − ε_t), e_i = 0 if sample i is classified correctly and e_i = 1 if it is misclassified; the final strong classifier is H(x) = 1 if Σ_{t=1}^{T} α_t h_t(x) ≥ (1/2) Σ_{t=1}^{T} α_t and 0 otherwise, where α_t = log(1/β_t);
S1-4: scan windows of different scales over the image to be detected, and finally output all detected vehicle targets.
3. The moving vehicle tracking method according to claim 1, characterized in that the specific method of motion estimation of the tracked target using the Kalman filter is as follows:
S2-1: the Kalman filtering algorithm model comprises a state equation and an observation equation:
S(n) = A(n)S(n-1) + W(n-1),
X(n) = C(n)S(n) + V(n),
where S(n) and X(n) are the state vector and observation vector at time n, A(n) is the state transition matrix, C(n) is the observation matrix, and W(n) and V(n) are the state noise and observation noise, both uncorrelated zero-mean white Gaussian noise;
S2-2: the center point of the vehicle target rectangle is taken as the prediction object, and the motion state vectors X_x and X_y of the moving target's center point are established:
X_x = (s_x, v_x, a_x)^T, X_y = (s_y, v_y, a_y)^T,
where s_x, s_y, v_x, v_y, a_x, a_y denote the position, velocity and acceleration of the vehicle target in the horizontal and vertical directions, respectively;
S2-3: the equations of motion of the tracked target's center in the horizontal direction are:
s_x(n) = s_x(n-1) + v_x(n-1)T + (1/2)a_x(n-1)T²,
v_x(n) = v_x(n-1) + a_x(n-1)T,
a_x(n) = a_x(n-1) + o_x(n-1)T,
where s_x(n), v_x(n), a_x(n) denote the position, velocity and acceleration of the target's center point at time n, o_x(n-1) is white noise, and T is the sampling interval;
rewriting the above equations in matrix form:
[s_x(n); v_x(n); a_x(n)] = [1 T T²/2; 0 1 T; 0 0 1] [s_x(n-1); v_x(n-1); a_x(n-1)] + [0; 0; o_x(n-1)T];
the only motion state component that can be observed is the position of the moving target:
s_x(n) = [1 0 0] [s_x(n); v_x(n); a_x(n)];
S2-4: comparing with the state and observation equations of the Kalman filter in S2-1, the state equation and observation equation of the tracked target's center point are:
[s_x(n); v_x(n); a_x(n)] = A(n) [s_x(n-1); v_x(n-1); a_x(n-1)] + W(n-1),
s_x(n) = C(n) [s_x(n); v_x(n); a_x(n)],
where A(n) = [1 T T²/2; 0 1 T; 0 0 1] and C(n) = [1 0 0].
4. The moving vehicle tracking method according to claim 1, characterized in that the specific flow of the Camshift algorithm is as follows:
S3-1: initialize the search window so that the target to be tracked lies within it;
S3-2: extract the H component of the corresponding window region in HSV space to obtain an H-component histogram, and from it compute the color probability distribution map (back-projection map) of the whole tracking area;
S3-3: select a search window of the same size as the initial window in the back-projection map;
S3-4: adjust the window size according to the pixel sum S within the search window, and move the window center to the centroid;
S3-5: judge whether the search has converged; if so, output the centroid (x, y); otherwise repeat steps S3-3 and S3-4 until convergence or the maximum number of iterations is reached;
S3-6: take the position and size of the final search window as the initial window of the next frame, and continue the loop.
CN201611030765.5A 2016-11-16 2016-11-16 Moving vehicle tracking under traffic scene Withdrawn CN106778484A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611030765.5A CN106778484A (en) 2016-11-16 2016-11-16 Moving vehicle tracking under traffic scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611030765.5A CN106778484A (en) 2016-11-16 2016-11-16 Moving vehicle tracking under traffic scene

Publications (1)

Publication Number Publication Date
CN106778484A true CN106778484A (en) 2017-05-31

Family

ID=58971792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611030765.5A Withdrawn CN106778484A (en) 2016-11-16 2016-11-16 Moving vehicle tracking under traffic scene

Country Status (1)

Country Link
CN (1) CN106778484A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102737385A (en) * 2012-04-24 2012-10-17 中山大学 Video target tracking method based on CAMSHIFT and Kalman filtering
CN104866823A (en) * 2015-05-11 2015-08-26 重庆邮电大学 Vehicle detection and tracking method based on monocular vision


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张志鹏 (Zhang Zhipeng): "Vehicle Detection and Tracking in an Embedded Electronic Police System", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI675580B (en) * 2017-07-27 2019-10-21 香港商阿里巴巴集團服務有限公司 Method and device for user authentication based on feature information
CN108022589A (en) * 2017-10-31 2018-05-11 努比亚技术有限公司 Aiming field classifier training method, specimen discerning method, terminal and storage medium
CN108776974A (en) * 2018-05-24 2018-11-09 南京行者易智能交通科技有限公司 A kind of real-time modeling method method suitable for public transport scene
CN109934162A (en) * 2019-03-12 2019-06-25 哈尔滨理工大学 Facial image identification and video clip intercept method based on Struck track algorithm
CN110032978A (en) * 2019-04-18 2019-07-19 北京字节跳动网络技术有限公司 Method and apparatus for handling video
CN114419106A (en) * 2022-03-30 2022-04-29 深圳市海清视讯科技有限公司 Vehicle violation detection method, device and storage medium
CN115174861A (en) * 2022-07-07 2022-10-11 广州后为科技有限公司 Method and device for automatically tracking moving target by pan-tilt camera
CN115174861B (en) * 2022-07-07 2023-09-22 广州后为科技有限公司 Method and device for automatically tracking moving target by holder camera

Similar Documents

Publication Publication Date Title
Wei et al. Multi-vehicle detection algorithm through combining Harr and HOG features
CN106778484A (en) Moving vehicle tracking under traffic scene
CN106875424B (en) A kind of urban environment driving vehicle Activity recognition method based on machine vision
Hadi et al. Vehicle detection and tracking techniques: a concise review
CN102819764B (en) Method for counting pedestrian flow from multiple views under complex scene of traffic junction
CN102768804B (en) Video-based traffic information acquisition method
Dai et al. Multi-task faster R-CNN for nighttime pedestrian detection and distance estimation
Mahaur et al. Road object detection: a comparative study of deep learning-based algorithms
CN102354457B (en) General Hough transformation-based method for detecting position of traffic signal lamp
CN106934374B (en) Method and system for identifying traffic signboard in haze scene
CN103914688A (en) Urban road obstacle recognition system
CN103871079A (en) Vehicle tracking method based on machine learning and optical flow
Feng et al. Mixed road user trajectory extraction from moving aerial videos based on convolution neural network detection
Chiu et al. Automatic Traffic Surveillance System for Vision-Based Vehicle Recognition and Tracking.
CN109272482A (en) A kind of urban road crossing vehicle queue detection system based on sequence image
Liu et al. Effective road lane detection and tracking method using line segment detector
Cao et al. Application of convolutional neural networks and image processing algorithms based on traffic video in vehicle taillight detection
CN105405297A (en) Traffic accident automatic detection method based on monitoring video
CN103679156A (en) Automatic identification and tracking method for various kinds of moving objects
CN105160324A (en) Vehicle detection method based on part spatial relation
Zhou et al. Real-time traffic light recognition based on c-hog features
Li et al. CrackTinyNet: A novel deep learning model specifically designed for superior performance in tiny road surface crack detection
Zhang et al. Traffic sign detection algorithm based on improved YOLOv7
Stubbs et al. A real-time collision warning system for intersections
Yang et al. A robust vehicle queuing and dissipation detection method based on two cameras

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20170531