CN108734103B - Detection and Tracking Method of Moving Objects in Satellite Video - Google Patents
- Publication number
- CN108734103B (application CN201810358177.7A)
- Authority
- CN
- China
- Prior art keywords
- target
- vehicle
- tracking
- image
- targets
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/207—Analysis of motion for motion estimation over a hierarchy of resolutions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention belongs to the technical field of image and video processing, and specifically provides a method for detecting and tracking moving objects in satellite video. The method comprises: detecting moving targets (vehicles) in urban scenes in the satellite video, using a detection technique based on motion and local clutter modeling; discriminating the detected moving targets, using a discrimination technique based on region growing and multiple cues; and tracking the ground moving targets, using Kalman-filter-based target tracking and associating the multiple detected vehicle targets with the multiple trajectories being tracked. The invention is highly practical, can be applied to traffic monitoring, urban surveillance, military surveillance, and other fields, and has broad application prospects.
Description
Field of the Invention
The present invention belongs to the technical field of image processing and video processing, and in particular relates to a method specifically designed for detecting and tracking moving objects in satellite video.
Background
Earth observation (EO) offers a broad field of view that can cover an area the size of a city; it is a current research hotspot and a future research trend. It plays a vital role in civil and military applications such as urban transportation systems, maritime surveillance, aerial reconnaissance, and battlefield surveillance. Many observation technologies, including optical satellite imagery, spaceborne synthetic aperture radar (SAR), and aerial imagery, have gradually matured and become important. Both optical satellite imagery and spaceborne SAR can observe a wide field of view at high resolution. However, optical satellite images and SAR images are static pictures that cannot capture the dynamic changes of objects, which limits their range of applications.
With the development of unmanned aerial vehicles (UAVs), more and more aerial imagery has become available. However, aerial imagery must rely on complex stabilization processing because of platform instability. For object tracking, image registration is needed to separate camera motion from object motion. The altitude of aerial photography is limited, and under new civil aviation regulations the operating range of UAVs has been restricted, greatly reducing the coverage of video obtained by aerial photography.
Satellite video is an advanced monitoring technology that uses an optical sensor on a moving satellite platform to acquire continuous images. Compared with UAV aerial imagery, satellite video offers a wider field of view and, owing to the stability of satellite motion, is steadier than aerial video. Another advantage of satellite video is that the satellite can keep the camera aimed at the region of interest (ROI) throughout its flight.
Summary of the Invention
The object of the present invention is to provide a method for detecting and tracking moving objects in satellite imaging video (hereinafter "satellite video").
The present invention is the first algorithm designed specifically for satellite video processing and can automatically detect and track vehicle targets in satellite video. In particular, the invention proposes a new moving-target detection algorithm and a new false-alarm rejection algorithm.
The method proposed by the present invention for detecting and tracking moving objects in satellite video (the moving objects here are mainly vehicles) comprises the following steps:
(1) Detect moving targets (vehicles) in urban scenes in the satellite video, using a detection technique based on motion and local clutter modeling;
(2) Discriminate the detected moving targets, using a discrimination technique based on region growing and multiple cues;
(3) Track the ground moving targets, using Kalman-filter-based multi-target tracking and associating the states of the multiple detected vehicle targets with the multiple trajectories being tracked. Here, a target is a vehicle detected in any frame; a target state is the 6-dimensional state vector composed of its position, velocity, and acceleration; a trajectory is the set of state vectors of a given vehicle target over a continuous period of time; and each trajectory has a number that is unique within the entire video.
In step (1), the detection technique based on motion and local clutter modeling assumes that the original image is the sum of an image signal and a noise signal, so the inter-frame difference image (the image obtained by taking the absolute value of the difference of two frames) represents the noise signal, and anomalous pixels in the difference image indicate positions where a vehicle target may exist. An exponential distribution is chosen to fit the noise clutter distribution, a binarization threshold is computed, and the inter-frame difference image is binarized. The foreground of the binary image represents vehicle targets, the background represents the image background region, and the intersection of two consecutive binary images determines the current position of a target.
In step (2), the discrimination technique based on region growing and multiple cues reconstructs the geometry of a moving target and uses several morphological cues to distinguish moving targets from false-alarm targets.
In step (3), the Kalman-filter-based target tracking technique describes vehicle motion with a linear model, combines observations and predictions to estimate the current state of a moving target, and updates the system state.
In step (3), the association between the multiple detected vehicle targets and the multiple trajectories being tracked uses the Hungarian algorithm to find the optimal matching over all target-trajectory pairs. After matching, some targets are matched to some trajectories; a new trajectory is initialized for each target not matched to any trajectory, and for each trajectory not matched to any target, nearest-neighbor search is used to look for a matching target again.
The algorithms of the individual steps are described in further detail below:
1. Detection based on motion and local clutter modeling. The specific steps are:
(1) Rasterization: in each frame, divide the image into a two-dimensional grid along the vertical and horizontal directions, each grid cell containing several pixels; the detection algorithm is executed separately in each cell;
(1a) Regard the image as the sum of the original image signal and a noise signal, i.e.:
G_i(x, y) = g_i(x, y) + n_i(x, y) (1)
where G_i(x, y) is the amplitude of pixel (x, y) in frame i, g_i(x, y) is the amplitude of point (x, y) in the original image of frame i, and n_i(x, y) is the amplitude of the corresponding noise signal;
(1b) Choose an exponential distribution to fit the distribution of the noise signal. The cumulative distribution function (CDF) of the exponential distribution is:
c_E(x; λ) = 1 - exp(-λx) (2)
where c_E(*) is the CDF of the exponential distribution and λ is its parameter.
(1c) Identify regions of large grayscale change as targets. With a preset false-alarm rate, the binarization threshold is computed as:
th = c_E^{-1}(1 - p_fa; λ) (3)
where c_E^{-1}(*) is the inverse of the CDF and p_fa is the preset probability value;
(1d) Classify the pixels with the binarization threshold computed from the predefined probability value: pixels with amplitude greater than th are foreground, and pixels with amplitude less than th are background. In the binary image, '1' denotes foreground and '0' denotes background;
(1e) Take the intersection of the two resulting binary images, i.e., perform a pixel-wise Boolean AND. This eliminates the scene content of the preceding and following moments, so that only the current position of the moving vehicle target is retained.
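Steps (1a)-(1e) can be sketched as follows. This is a minimal illustration, not the patented implementation: the maximum-likelihood estimate of λ, the per-image (rather than per-grid-cell) thresholding, and the synthetic data are assumptions of this example.

```python
import numpy as np

def binarize_diff(diff, p_fa=1e-2):
    """Binarize an absolute frame-difference image by fitting an
    exponential distribution to the clutter amplitudes (eqs. (2)-(3))."""
    lam = 1.0 / max(float(diff.mean()), 1e-9)  # ML estimate of the rate parameter
    th = -np.log(p_fa) / lam                   # inverse CDF evaluated at 1 - p_fa
    return diff > th                           # True = foreground ('1'), False = background ('0')

def detect_moving(prev_frame, cur_frame, next_frame, p_fa=1e-2):
    """Intersect two binarized difference images (pixel-wise AND) so that
    only pixels moving at the current moment survive (step (1e))."""
    d1 = np.abs(cur_frame - prev_frame)
    d2 = np.abs(next_frame - cur_frame)
    return binarize_diff(d1, p_fa) & binarize_diff(d2, p_fa)
```

For a bright object moving one pixel per frame, only its current position survives the AND: the previous and next positions each appear in only one of the two difference images and are removed.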
2. Discrimination based on region growing and multiple cues. The specific steps are:
(2a) Use a region-growing algorithm based on the Gaussian distribution, estimating the parameters of the Gaussian distribution from all foreground pixels of a connected region produced by the detector. The PDF of the Gaussian distribution is:
f(x; μ, σ) = (1 / (σ·sqrt(2π))) · exp(-(x - μ)^2 / (2σ^2)) (4)
where μ and σ denote the mean and standard deviation, respectively. The CDF of the Gaussian distribution is:
c_G(x; μ, σ) = (1/2)·[1 + erf((x - μ) / (σ·sqrt(2)))] (5)
where erf(*) denotes the error function;
(2b) From a predefined lower-boundary probability p_low and upper-boundary probability p_up, compute the lower and upper boundaries of the amplitude value:
th_low = c_G^{-1}(p_low; μ, σ) (6)
th_up = c_G^{-1}(p_up; μ, σ) (7)
where th_low and th_up denote the lower and upper bounds of the threshold, respectively;
(2c) Re-evaluate the foreground of this region: pixels whose amplitude lies within the range [th_low, th_up] are foreground;
(2d) Retain all pixels connected to the original foreground, constructing a new connected region, which is the region-growing result;
(2e) Finally, use area, duty cycle, length, and eccentricity as morphological cues to perform morphological discrimination on the region-growing results, in order to reject non-vehicle targets among the detections. Here the duty cycle is defined as the ratio of the number of pixels in the connected region to the area of the connected region's bounding box.
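A compact sketch of steps (2a)-(2e), assuming Python with numpy and the standard library's `statistics.NormalDist` for the inverse Gaussian CDF. The 4-connectivity, the boundary probabilities, and the cue thresholds in `looks_like_vehicle` are illustrative assumptions, not the patent's parameters.

```python
import numpy as np
from statistics import NormalDist

def grow_region(image, seed_mask, p_low=1e-2, p_up=1 - 1e-2):
    """Re-threshold a detected region: fit a Gaussian to the seed foreground
    pixels, keep amplitudes inside [c_G^-1(p_low), c_G^-1(p_up)] (eqs. (6)-(7)),
    then keep only pixels 4-connected to the seed."""
    vals = image[seed_mask]
    mu, sigma = float(vals.mean()), float(vals.std()) + 1e-9
    dist = NormalDist(mu, sigma)
    lo, up = dist.inv_cdf(p_low), dist.inv_cdf(p_up)
    candidate = (image >= lo) & (image <= up)
    grown = np.zeros_like(candidate)
    # flood fill (DFS) from the seed, restricted to candidate pixels
    stack = list(zip(*np.nonzero(seed_mask & candidate)))
    while stack:
        r, c = stack.pop()
        if grown[r, c]:
            continue
        grown[r, c] = True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < image.shape[0] and 0 <= cc < image.shape[1]
                    and candidate[rr, cc] and not grown[rr, cc]):
                stack.append((rr, cc))
    return grown

def looks_like_vehicle(region_mask, max_area=400, min_duty=0.4):
    """Illustrative morphological check using two of the cues:
    area and duty cycle (region pixels / bounding-box area)."""
    rs, cs = np.nonzero(region_mask)
    if rs.size == 0:
        return False
    area = rs.size
    bbox = (rs.max() - rs.min() + 1) * (cs.max() - cs.min() + 1)
    return area <= max_area and area / bbox >= min_duty
```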
3. Target tracking based on the Kalman filter. The specific steps are:
(3a) The Kalman filter (KF) is generally suited to tracking targets in linear motion, and nonlinear motion can usually be decomposed into a series of linear motions. First consider the KF tracking problem for a single moving target. The dynamic equation of the prediction system is:
x_i = F_i·x_{i-1} + v_i (8)
where F_i is the evolution matrix at frame i of the satellite video, v_i is the noise vector, x_{i-1} is the state vector at frame i-1, and x_i is the state vector at the current moment;
The state vector x_i is defined as:
x_i = [x, y, v_x, v_y, a_x, a_y]^T (9)
where x, y, v_x, v_y, a_x, a_y denote the position along the X axis, the position along the Y axis, the velocity along the X axis, the velocity along the Y axis, the acceleration along the X axis, and the acceleration along the Y axis, respectively;
Assuming the vehicle target undergoes uniformly accelerated linear motion between two frames, the evolution matrix F_i is:
F_i =
[ 1 0 τ 0 τ^2/2 0 ]
[ 0 1 0 τ 0 τ^2/2 ]
[ 0 0 1 0 τ 0 ]
[ 0 0 0 1 0 τ ]
[ 0 0 0 0 1 0 ]
[ 0 0 0 0 0 1 ] (10)
where τ is the time interval between two frames of the video, in seconds, i.e., the reciprocal of the frame rate.
(3b) Using the area and position of the detected vehicle as observations, the observation equation is:
y_i = H_i·x_i + n_i (11)
where y_i, H_i, and n_i denote the measurement vector, the measurement matrix, and the measurement noise, respectively;
The observation matrix H_i is:
H_i =
[ 1 0 0 0 0 0 ]
[ 0 1 0 0 0 0 ] (12)
(3c) Derive the posterior probability density function (PDF) of the state vector x_i via Bayes' theorem, then simplify it with the chain rule:
p(x_i | y_1:i) ∝ p(y_i | x_i)·p(x_i | y_1:i-1) (13)
where the probability p(x_i | y_1:i-1) is derived from the Chapman-Kolmogorov equation:
p(x_i | y_1:i-1) = ∫ p(x_i | x_{i-1})·p(x_{i-1} | y_1:i-1) dx_{i-1} (14)
(3d) Combining equations (13) and (14), the posterior probability density function of the state vector x_i is obtained:
p(x_i | y_1:i) ∝ p(y_i | x_i)·∫ p(x_i | x_{i-1})·p(x_{i-1} | y_1:i-1) dx_{i-1} (15)
(3e) Combining observations and predictions, use equation (15) to estimate the current state and continually update the state of the system.
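The predict/update cycle of steps (3a)-(3e) can be sketched with a standard constant-acceleration Kalman filter, following the state equation (8) and the state vector of equation (9). The process and measurement noise covariances (q, r) and the position-only observation are assumptions of this example.

```python
import numpy as np

def make_F(tau):
    """Constant-acceleration evolution matrix for state [x, y, vx, vy, ax, ay]."""
    F = np.eye(6)
    F[0, 2] = F[1, 3] = F[2, 4] = F[3, 5] = tau
    F[0, 4] = F[1, 5] = 0.5 * tau**2
    return F

H = np.zeros((2, 6)); H[0, 0] = H[1, 1] = 1.0   # observe position only (assumption)

def kf_step(x, P, z, tau, q=1e-2, r=1.0):
    """One Kalman predict/update cycle: predict via the dynamic model,
    then blend the prediction with the measurement z."""
    F = make_F(tau)
    Q, R = q * np.eye(6), r * np.eye(2)
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(6) - K @ H) @ P_pred
    return x_new, P_new
```

When the measurement agrees exactly with the prediction, the innovation is zero and the estimate equals the predicted state; otherwise the gain K pulls the estimate toward the measurement.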
4. Association of multiple vehicle detections with tracking. The specific steps are:
(4a) In the Kalman-filter-based target tracking, associate trajectories with moving targets via the Hungarian algorithm, and track the associated targets;
(4b) For each trajectory with no associated target, use nearest-neighbor search based on SSIM (structural similarity index) to look again for an associated moving target. If the association succeeds, tracking continues after correction; if the trajectory still has no associated target, the trajectory is terminated.
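The association step can be illustrated with a tiny optimal-assignment routine. The patent specifies the Hungarian algorithm; for a handful of targets, the exhaustive search below finds the same minimum-cost one-to-one matching while keeping the sketch dependency-free (in practice one would use, e.g., scipy.optimize.linear_sum_assignment). The Euclidean-distance cost is an assumption.

```python
from itertools import permutations
import math

def associate(track_positions, detections):
    """Minimum-cost one-to-one matching between predicted track positions
    and detected positions (brute force over injective assignments).
    Returns (matches, unmatched_detections, unmatched_tracks)."""
    n, m = len(track_positions), len(detections)
    best_cost, best_pairs = math.inf, []
    if n <= m:
        # all injective maps tracks -> detections
        candidates = (list(zip(range(n), perm)) for perm in permutations(range(m), n))
    else:
        # all injective maps detections -> tracks
        candidates = ([(t, d) for d, t in enumerate(perm)] for perm in permutations(range(n), m))
    for pairs in candidates:
        cost = sum(math.dist(track_positions[t], detections[d]) for t, d in pairs)
        if cost < best_cost:
            best_cost, best_pairs = cost, pairs
    unmatched_dets = sorted(set(range(m)) - {d for _, d in best_pairs})  # would start new trajectories
    unmatched_trks = sorted(set(range(n)) - {t for t, _ in best_pairs})  # would trigger the SSIM re-search
    return best_pairs, unmatched_dets, unmatched_trks
```

An unmatched detection initializes a new trajectory; an unmatched track falls back to the SSIM-based nearest-neighbor search described in step (4b).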
The algorithm of the present invention has high accuracy and practical value; it can be applied to traffic monitoring, urban surveillance, military surveillance, and other fields, and has broad application prospects.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of an example of the satellite video used by the present invention: (a) the video location, Valencia, Spain; (b) the corresponding optical map downloaded from Google Earth.
Fig. 2 is a flowchart of the detection algorithm of the present invention, which fits the noise clutter distribution with an exponential distribution.
Fig. 3 shows foreground segmentation results and region-growing results for moving targets and false alarms: (a), (b), (c), and (d) show four hypotheses; the first, second, and third rows show the color image, the foreground segmentation result, and the region-growing result, respectively. For ease of viewing, the original grayscale images were converted to the images shown in the first row.
Fig. 4 shows samples of moving vehicles and non-moving vehicles: (a), (b), and (c) are real moving targets, although the black vehicle in (b) shows little difference from the background; (d) is a false-alarm target, possibly a stationary vehicle that resembles a real moving target.
Fig. 5 is a schematic diagram of the annotated regions of the satellite video used in the experiments of the present invention.
Fig. 6 shows the annotation of the satellite video used in the experiments: (a) shows the positions of the vehicles, and (b) shows their corresponding names. The annotated regions are representative scenes.
Fig. 7 shows vehicle detection and tracking results in four frames of the satellite video (frames 50, 100, 150, and 200); each vehicle has its own number.
Fig. 8 is a flowchart of the present invention.
Detailed Description
First, moving targets in the satellite video are detected using the method based on motion and local clutter modeling. The specific implementation is as follows:
Step 1: in each frame, divide the image into a two-dimensional grid along the vertical and horizontal directions, each grid cell containing several pixels;
Step 2: regard the image as the sum of the original image signal and a noise signal, and choose an exponential distribution to fit the distribution of the noise signal;
Step 3: identify regions of large grayscale change as targets. With the false-alarm rate preset to 1×10^-2, compute the binarization threshold and use it to classify the pixels: pixels with amplitude greater than the threshold are foreground, otherwise background (in the binary image, '1' is foreground and '0' is background);
Step 4: take the intersection of the two resulting binary images, i.e., perform a pixel-wise Boolean AND, eliminating the scene content of the preceding and following moments so that only the current position of the moving vehicle target is retained; this yields the detection result.
Then, the detected moving targets in the satellite video are discriminated using the method based on region growing and multiple cues. The specific implementation is as follows:
Step 1: use the Gaussian-distribution-based region-growing algorithm, estimating the parameters of the Gaussian distribution from all foreground pixels of a connected region produced by the detector, thereby obtaining the PDF of the Gaussian distribution;
Step 2: predefine the values of the lower- and upper-boundary probabilities, setting the lower-boundary probability to 1×10^-2 and the upper-boundary probability to 1 - 1×10^-2; compute the lower and upper boundaries of the amplitude value, and re-evaluate the foreground of this region using the new amplitude boundaries;
Step 3: retain all pixels connected to the original foreground, constructing a new connected region and obtaining the region-growing result;
Step 4: use area, duty cycle, length, and eccentricity as morphological cues (the cue parameters can be designed according to the actual situation), perform morphological discrimination on the region-growing result, reject non-vehicle targets among the detections, and obtain the discrimination result.
Finally, the detected and discriminated moving targets in the satellite video are tracked and associated using the Kalman-filter target tracking method. The specific implementation is as follows:
Step 1: first consider the KF tracking problem for a single moving target only; predict the dynamic equation of the system and obtain the evolution matrix from the state vector;
Step 2: use the area and position of the detected vehicle as observations to obtain the observation equation, and obtain the observation matrix from the observation vector;
Step 3: derive the posterior probability density function of the state vector via Bayes' rule, and simplify it with the chain rule;
Step 4: combine observations and predictions to estimate the current state and update the system state;
Step 5: associate the detected vehicle targets with the trajectories being tracked via the Hungarian algorithm. If a target is associated, update the state of the corresponding trajectory with the target state; if a target is not associated with any trajectory, initialize a new trajectory and track it; if a trajectory has no associated target, search for the corresponding target again with the SSIM algorithm: if one is found, update the trajectory state with the state of the found target; if not, terminate the tracking of this trajectory.
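The bookkeeping of Step 5 can be sketched as a small trajectory manager. The `Track` fields and the `max_misses` grace period are assumptions for illustration (the patent terminates a trajectory as soon as the SSIM re-search also fails); the miss counter simply marks where that re-search would run.

```python
import itertools

class Track:
    """One trajectory: a video-wide unique number plus its state history."""
    _ids = itertools.count(1)

    def __init__(self, state):
        self.id = next(Track._ids)
        self.states = [state]
        self.misses = 0

def manage_tracks(tracks, matches, unmatched_dets, unmatched_trks,
                  det_states, max_misses=3):
    """Update matched trajectories, start new ones for unmatched detections,
    and drop trajectories that stay unmatched too long."""
    for t_idx, d_idx in matches:
        tracks[t_idx].states.append(det_states[d_idx])  # update with the target state
        tracks[t_idx].misses = 0
    for t_idx in unmatched_trks:
        tracks[t_idx].misses += 1                       # the SSIM re-search would go here
    for d_idx in unmatched_dets:
        tracks.append(Track(det_states[d_idx]))         # initialize a new trajectory
    return [tr for tr in tracks if tr.misses <= max_misses]
```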
The algorithm was tested on the video shown in Fig. 1, which has a frame rate of 20 frames/s, a ground resolution of 1 meter, a duration of 29 seconds, an image height of 3072 pixels, and an image width of 4096 pixels. Some regions were manually annotated, as shown in Fig. 6; the manually annotated data in these regions were compared with the algorithm output to quantitatively evaluate performance. The evaluation shows a recall of 63.06% and a precision of 81.04%; the algorithm is accurate and has practical value.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810358177.7A CN108734103B (en) | 2018-04-20 | 2018-04-20 | Detection and Tracking Method of Moving Objects in Satellite Video |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810358177.7A CN108734103B (en) | 2018-04-20 | 2018-04-20 | Detection and Tracking Method of Moving Objects in Satellite Video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108734103A (en) | 2018-11-02 |
CN108734103B (en) | 2021-08-20 |
Family
ID=63939136
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810358177.7A Active CN108734103B (en) | 2018-04-20 | 2018-04-20 | Detection and Tracking Method of Moving Objects in Satellite Video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108734103B (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109708627B (en) * | 2018-11-15 | 2022-10-18 | 上海航天控制技术研究所 | Method for rapidly detecting space dynamic point target under moving platform |
CN109712169A (en) * | 2018-11-15 | 2019-05-03 | 上海卫星工程研究所 | Moving-target motion profile prediction technique and method for autonomous tracking based on EO-1 hyperion |
CN109816606B (en) * | 2019-01-18 | 2022-01-04 | 中国科学院空天信息创新研究院 | Method for tracking target by using optical remote sensing satellite |
CN110057449B (en) * | 2019-04-29 | 2021-06-18 | 上海卫星工程研究所 | Air target height and speed calculation method based on field-of-view spectroscopic hyperspectral detection |
CN110969643B (en) * | 2019-12-18 | 2023-04-25 | 中国人民解放军国防科技大学 | On-board Autonomous Prediction Method of Ground Target's Trajectory |
CN111369570B (en) * | 2020-02-24 | 2023-08-18 | 成都空御科技有限公司 | Multi-target detection tracking method for video image |
CN111462488B (en) * | 2020-04-01 | 2021-09-10 | 北京工业大学 | Intersection safety risk assessment method based on deep convolutional neural network and intersection behavior characteristic model |
CN112182133B (en) * | 2020-09-29 | 2022-02-15 | 南京北斗创新应用科技研究院有限公司 | A Ship Loitering Detection Method Based on AIS Data |
EP4064120B1 (en) | 2021-03-25 | 2023-07-26 | Axis AB | Method for determining images plausible to have a false negative object detection |
CN112990124B (en) * | 2021-04-26 | 2021-08-06 | 湖北亿咖通科技有限公司 | Vehicle tracking method and device, electronic equipment and storage medium |
CN113283330B (en) * | 2021-05-20 | 2024-10-29 | 南京航空航天大学 | Video SAR moving target detection method based on deep learning and multi-target tracking algorithm |
CN115480273B (en) * | 2022-10-21 | 2025-02-07 | 北京汇通天下物联科技有限公司 | Vehicle positioning system detection method and device |
CN116030633B (en) * | 2023-02-21 | 2023-06-02 | 天津汉云工业互联网有限公司 | Vehicle tunnel early warning method and device |
CN117315498B (en) * | 2023-10-10 | 2024-05-24 | 中国人民解放军战略支援部队航天工程大学 | False alarm discrimination method based on space target detection result |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105374026A (en) * | 2015-10-09 | 2016-03-02 | 上海海事大学 | A maritime infrared small target detection method suitable for coastal defense monitoring |
WO2016085552A1 (en) * | 2014-10-31 | 2016-06-02 | Raytheon Company | System for real-time moving target detection using vision based image segmentation |
CN106846374A (en) * | 2016-12-21 | 2017-06-13 | 大连海事大学 | The track calculating method of vehicle under multi-cam scene |
CN107292911A (en) * | 2017-05-23 | 2017-10-24 | 南京邮电大学 | A kind of multi-object tracking method merged based on multi-model with data correlation |
CN107730515A (en) * | 2017-10-12 | 2018-02-23 | 北京大学深圳研究生院 | Panoramic picture conspicuousness detection method with eye movement model is increased based on region |
- 2018-04-20 CN CN201810358177.7A patent/CN108734103B/en active Active
Non-Patent Citations (3)
Title |
---|
"Infrared Moving Target Detection Based on Frame Difference and Adaptive Threshold Region Growing"; Li Bing et al.; Electro-Optic Technology Application; 2017-02-28; pp. 156-160 *
"Frame Difference and Kalman Filter Techniques for Detection of Moving Vehicles in Video Surveillance"; C. Srinivas Rao et al.; International Journal of Engineering Research and Applications; 2012-12-31; pp. 1168-1170 *
"Research on Multi-Target Detection and Tracking Technology Based on O2O Learning"; Che Shaoshuai; China Master's Theses Full-Text Database, Information Science and Technology; 2016-05-15; I136-599 *
Also Published As
Publication number | Publication date |
---|---|
CN108734103A (en) | 2018-11-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108734103B (en) | Detection and Tracking Method of Moving Objects in Satellite Video | |
Zhang et al. | Eye in the sky: Drone-based object tracking and 3d localization | |
Du et al. | Object tracking in satellite videos based on a multiframe optical flow tracker | |
Minaeian et al. | Effective and efficient detection of moving targets from a UAV’s camera | |
TWI393074B (en) | Apparatus and method for moving object detection | |
CN110472553B (en) | Target tracking method, computing device and medium for fusion of image and laser point cloud | |
CN110866887A (en) | Target situation fusion sensing method and system based on multiple sensors | |
CN106845364B (en) | Rapid automatic target detection method | |
CN113223059A (en) | Weak and small airspace target detection method based on super-resolution feature enhancement | |
CN103149939A (en) | Dynamic target tracking and positioning method of unmanned plane based on vision | |
CN114325634B (en) | A highly robust method for extracting traversable areas in wild environments based on LiDAR | |
El Bouziady et al. | Vehicle speed estimation using extracted SURF features from stereo images | |
Ahmadi et al. | A simple method for detecting and tracking vehicles and vessels from high resolution spaceborne videos | |
Murugan et al. | Automatic moving vehicle detection and classification based on artificial neural fuzzy inference system | |
Yuan et al. | Independent moving object detection based on a vehicle mounted binocular camera | |
Liu et al. | Tilt correction toward building detection of remote sensing images | |
Cheng et al. | Moving target detection technology based on UAV Vision | |
Vikram et al. | Deep Learning Based Vehicle Detection and Counting System for Intelligent Transportation. | |
Pollard et al. | GM-PHD filters for multi-object tracking in uncalibrated aerial videos | |
Ersoy et al. | Interactive target tracking for persistent wide-area surveillance | |
Cigla et al. | Image-based visual perception and representation for collision avoidance | |
Molloy et al. | Looming aircraft threats: shape-based passive ranging of aircraft from monocular vision | |
Farooq et al. | Efficient object proposals extraction for target detection in VHR remote sensing images | |
Ao et al. | Detecting tiny moving vehicles in satellite videos | |
Castellano et al. | Crowd flow detection from drones with fully convolutional networks and clustering |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||