CN116385495A: Moving target closed-loop detection method of infrared video under dynamic background


Info

Publication number
CN116385495A
Authority
CN
China
Prior art keywords
image
tracking
frame
corner
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310428567.8A
Other languages
Chinese (zh)
Inventor
王勇
霍礼乐
范云生
刘婷
王国峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Maritime University
Priority to CN202310428567.8A
Publication of CN116385495A
Legal status: Pending

Classifications

    • G06T: Image data processing or generation, in general (Section G: Physics; Class G06: Computing; Calculating or Counting)
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 5/70: Denoising; Smoothing
    • G06T 7/269: Analysis of motion using gradient-based methods
    • G06T 2207/10048: Infrared image
    • G06T 2207/20024: Filtering details
    • G06T 2207/20164: Salient point detection; Corner detection
    • Y02T 10/40: Engine management systems (Class Y02T: Climate change mitigation technologies related to transportation)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a closed-loop detection method for moving targets in infrared video under a dynamic background, which includes: analyzing the noise types of the infrared image and removing noise from the image with filtering algorithms; applying different interference-corner filtering methods to different frames, thereby performing coarse and fine elimination of corners, and storing and recording the finally detected corners; tracking these corners with the sparse optical flow method and filtering the tracked points with bidirectional tracking, computing the homography matrix from the relationship between the corner sets of the two consecutive frames, and performing background compensation on the current frame image with the homography transformation matrix; differencing the previous infrared frame with the background-compensated current frame, binarizing the difference result with an adaptive gray threshold, and applying morphological operations to the binary image to obtain the final target position; and forming a mask from the target positions detected in the previous frame and feeding it back to the corner detection of the next frame, forming a complete closed-loop detection loop.

Description

A closed-loop detection method for moving targets in infrared video under a dynamic background

Technical Field

The invention relates to the field of infrared moving-target detection, and in particular to a closed-loop detection method for moving targets in infrared video under a dynamic background.

Background Art

Moving-target detection is one of the core research topics in computer vision. It is the basis of target tracking, target recognition, and target behavior understanding, and has broad application prospects in military, security monitoring, industrial automation, intelligent transportation, and other fields. Depending on whether the shooting platform or camera moves, moving-target detection can be divided into detection under a static background and detection under a dynamic background. Under a static background, the camera is stationary during shooting, so the resulting video sequence contains only the motion of the target. Under a dynamic background, the shooting platform or camera and the moving target change at the same time; the platform and camera motion includes translation, rotation, and zooming, so the resulting video sequence contains both the motion of the target and the motion of the background, which makes detection much more difficult than under a static background. The main approach to moving-target detection under a dynamic background is background compensation: the transformation matrix between the previous frame and the current frame is computed, the current frame is compensated with it, and the moving target is then detected by the frame-difference method. The accuracy of the background compensation directly affects the detection accuracy.

Compared with visible-light images, infrared images have lower contrast, poorer resolution, and more noise. As a result, during background compensation fewer corners or feature points are extracted from infrared images than from visible-light images, and they are more concentrated, which reduces the accuracy of the subsequent transformation-matrix computation and of the background compensation. Moreover, corners or feature points in the background contribute positively to the transformation-matrix computation, whereas feature points in the moving-target region hinder accurate registration of the background. In infrared images the feature points in the moving-target region make up a larger proportion of the whole than in visible-light images, so when the PROSAC algorithm screens feature points the probability that the selected inliers lie in the moving-target region increases, leading to inaccurate registration and degrading the final detection result.

Summary of the Invention

In view of the problems in the prior art, the invention discloses a closed-loop detection method for moving targets in infrared video under a dynamic background, which specifically includes the following steps:

analyzing the noise types of the infrared image and removing noise from the image with filtering algorithms;

applying different interference-corner filtering methods to different frames, thereby performing coarse elimination and fine elimination of corners, and storing and recording the finally detected corners;

tracking the above corners with the sparse optical flow method and filtering the tracked points with bidirectional tracking: the positions in the current frame corresponding to the previous frame's corners are determined, the current tracked points are then tracked backwards, and the two corner sets are screened to remove corners for which the backward tracking fails;

computing the homography matrix from the relationship between the corner sets of the two consecutive frames, and performing background compensation on the current frame image with the homography transformation matrix;

differencing the previous infrared frame with the background-compensated current frame, binarizing the difference result with an adaptive gray threshold, and applying morphological operations to the binary image to obtain the final target position;

forming a mask from the target positions detected in the previous frame and feeding it back to the corner detection of the next frame, so that a complete closed-loop detection loop is formed.
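To show how these steps fit together, the following Python sketch outlines the closed-loop structure. It is an illustration rather than part of the original disclosure: the helper functions it calls (preprocess, coarse_eliminate, fine_eliminate, bidirectional_track, compensate_background, detect_targets, build_mask) are hypothetical names for steps S1 to S6, and individual sketches for each of them are given alongside the detailed description below.

```python
import cv2

def closed_loop_detection(video_path):
    """Hypothetical driver loop for the closed-loop detection pipeline (S1-S6)."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    prev = preprocess(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))   # S1 on the first frame
    mask = None                                                  # no feedback mask yet
    frame_idx = 1
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        curr = preprocess(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))        # S1
        if frame_idx <= 5 or mask is None:
            corners = coarse_eliminate(prev)                              # S2: coarse elimination
        else:
            corners = fine_eliminate(prev, mask)                          # S2: fine elimination
        prev_pts, curr_pts = bidirectional_track(prev, curr, corners)     # S3
        compensated = compensate_background(curr, prev_pts, curr_pts)     # S4
        boxes = detect_targets(prev, compensated)                         # S5
        mask = build_mask(curr.shape, boxes)                              # S6: feedback mask
        prev, frame_idx = curr, frame_idx + 1
```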

Further, a Gaussian filter is used to filter noise from the current infrared frame and remove Gaussian white noise from the infrared image, and a median filter is used to filter out random point noise in the infrared image.

Further, it is judged whether the current frame is one of the first five infrared frames; if so, coarse elimination is performed, otherwise fine elimination is performed;

the infrared image is divided into sub-blocks of equal size, forming a sub-block sequence i1 to i25;

starting from i1, corner detection is performed on each sub-block with the Shi-Tomasi algorithm, and after detection the sub-blocks are sorted by their number of corners from smallest to largest;

the 5 sub-blocks with the largest number of corners and the 5 sub-blocks with the smallest corner density are removed, and the remaining corners are stored;

if the current frame is not one of the first five frames, the detection result of the previous frame is fed back to the current frame during detection: using the mask generated from the previous frame's moving-target regions, corner detection is not performed on the current frame in the regions where the mask value is zero;

Figure BDA0004189551520000021

Shi-Tomasi corner detection is performed on the other regions to complete the fine elimination, so that the whole detection method realizes closed-loop detection;

the final corner detection results are stored.

Further, the LK pyramidal optical flow algorithm is used to track each obtained corner into the current frame, yielding the position Pi(x1, y1) in the current frame of the corner Pi(x0, y0) from the previous frame; this step is repeated until all corners have been processed, and all tracked points are stored;

the LK pyramidal optical flow algorithm is then applied again to track the above set of tracked points backwards, yielding the position Pi(x2, y2) in the previous frame of the tracked point Pi(x1, y1) from the current frame; this step is repeated until all tracked points have been processed, and all backward-tracked point pairs are stored;

the forward-tracked point pairs are pruned according to the output status vector;

if the output status vector is 1, the previous-frame corner and the corresponding current-frame tracked point are stored; if the output status vector is 0, the corresponding point pair is removed;

the same removal strategy is applied to the backward-tracking set;

the forward-tracking point-pair set and the backward-tracking point-pair set are screened: a point Pi(x1, y1) is taken out and the corresponding Pi(x0, y0) and Pi(x2, y2) in its sets are compared; if the x and y coordinates of the two points are the same, the bidirectional tracking succeeds and Pi(x0, y0) and Pi(x1, y1) are added to the successful-tracking set;

the above operation is repeated until all point pairs have been screened.

Further, for the obtained corner sets of the previous frame and the current frame, the PROSAC algorithm is used to compute the optimal homography transformation matrix H between the two infrared frames; a corner Pi(x0, y0) of the previous frame and its corresponding tracked point Pi(x1, y1) in the current frame should satisfy the following relationship:

s · [x1, y1, 1]^T = H · [x0, y0, 1]^T, where s is a non-zero scale factor and H is the 3×3 homography transformation matrix.

The optimal homography transformation matrix H is used to perform background compensation on the current frame, and bilinear interpolation is applied in the x and y directions of the pixels to correct the image.

The previous infrared frame and the compensated infrared image are differenced, and the difference image is Gaussian-filtered to remove noise;

the difference image is threshold-segmented with the Otsu algorithm to obtain a binary image of suspected moving targets;

an erosion operation is applied to the binary image of suspected moving targets to remove discrete noise and line-shaped noise, followed by a dilation operation; after dilation, small regions are labeled and filtered out, and a further dilation is applied to obtain the final moving-target binary image;

the moving-target contours are computed from the final moving-target binary image and stored;

each target contour is traversed, its bounding rectangle is drawn on the current infrared frame, and the position, length, and width of the rectangle are stored;

the above steps are repeated until all contours have been traversed and the final moving-target detection result image is obtained.

Further, a single-channel mask image of the same size and type as the current infrared frame is created and its color is set to white;

an unprocessed bounding rectangle is taken from the stored rectangle set, its position and size are obtained, its length and width are expanded outward by m pixels, the expanded rectangle is mapped into the mask image, the gray values of the pixels inside the rectangle in the mask image are set to 0, the processed rectangle is added to the processed set, and the next rectangle is processed; these steps are repeated until all rectangles have been processed;

a mask image of the moving targets is obtained and fed back as initial information to the detection of the next frame, realizing closed-loop detection.

Owing to the above technical solution, the closed-loop detection method for moving targets in infrared video under a dynamic background provided by the invention uses corner homogenization at the initial stage of detection and performs optical flow tracking on the homogenized corners, improving the accuracy of the transformation-matrix computation. In the optical flow tracking stage, a bidirectional tracking algorithm removes the influence of wrongly tracked points on background compensation, improving the accuracy of moving-target detection. Using the moving-object region mask of the previous frame eliminates the influence of corners in the moving-object regions of the current frame, so that only background points are used to iteratively compute the optimal homography transformation matrix between the two infrared frames; this not only improves the accuracy of background compensation but also raises the running efficiency and real-time performance of the algorithm.

Brief Description of the Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments recorded in the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is a flowchart of the method disclosed in the invention;

Fig. 2 is a schematic diagram of an infrared image sequence in the invention;

Fig. 3 is a schematic diagram of the image partition used for corner homogenization in the invention;

Fig. 4 is a schematic diagram of bidirectional sparse optical flow tracking in the invention;

Fig. 5 is a schematic diagram of the difference result without background compensation in the invention;

Fig. 6 is a schematic diagram of the difference result without bidirectional tracking and 'fine elimination' in the invention;

Fig. 7 is a schematic diagram of the difference result with bidirectional tracking and 'fine elimination' in the invention;

Fig. 8 is a schematic diagram of the threshold segmentation of the difference result without bidirectional tracking and 'fine elimination' in the invention;

Fig. 9 is a schematic diagram of the threshold segmentation of the difference result with bidirectional tracking and 'fine elimination' in the invention;

Fig. 10 is a schematic diagram of the final motion-region detection result;

Fig. 11 is a schematic diagram of the moving-object region mask in the invention.

Detailed Description of the Embodiments

To make the technical solution and advantages of the invention clearer, the technical solutions in the embodiments of the invention are described clearly and completely below with reference to the drawings of the embodiments of the invention:

Fig. 1 shows a closed-loop detection method for moving targets in infrared video under a dynamic background. In implementation, the infrared video images captured by the device are as shown in Fig. 2; corner homogenization with 'coarse elimination' or 'fine elimination' is selected according to the frame number, bidirectional optical flow tracking is then used to remove wrong point pairs, background compensation and image differencing are performed next, and finally a mask is generated from the moving-object regions and fed back into the detection of the next frame, forming closed-loop detection of the infrared moving target. The specific steps of the method disclosed by the invention are as follows:

S1: preprocess the current frame of the infrared video: according to the analysis of the noise types of the infrared image, process the image with filtering algorithms, specifically as follows:

S11: apply a 7×7 Gaussian filter to the current infrared frame to filter out noise and remove Gaussian white noise from the infrared image.

S12: then apply a 3×3 median filter to remove random point noise from the infrared image. After processing, the noise of the infrared image is clearly reduced while the detail information is well preserved, so the image quality is improved to a certain extent, providing a good basis for subsequent processing.
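As an illustration of S11 and S12, a minimal OpenCV sketch is given below. It is not part of the patent; the 7×7 and 3×3 kernel sizes follow the text, while passing sigma 0 so that OpenCV derives it from the kernel size is an assumption.

```python
import cv2

def preprocess(frame_gray):
    """S1: suppress Gaussian white noise and random point noise in an infrared frame."""
    # S11: 7x7 Gaussian filter; sigma=0 lets OpenCV derive sigma from the kernel size (assumed)
    smoothed = cv2.GaussianBlur(frame_gray, (7, 7), 0)
    # S12: 3x3 median filter against random point (salt-and-pepper style) noise
    return cv2.medianBlur(smoothed, 3)
```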

S2: apply different interference-corner filtering strategies to different frames to realize 'coarse elimination' or 'fine elimination' of corners, and store and record the finally detected corners, specifically as follows:

S21: judge whether the current frame is one of the first five frames; if it is one of the first five infrared frames, perform 'coarse elimination', otherwise perform 'fine elimination'.

S22: if S21 judges that the frame is one of the first five frames, divide the infrared image into 5×5 sub-blocks of equal size, forming a sub-block sequence i1 to i25; the partition of the infrared image is shown in Fig. 3.

S23: starting from i1, perform corner detection on each sub-block with the Shi-Tomasi algorithm, and after detection sort the sub-blocks by the number of corners they contain, from smallest to largest.

S24: remove the 5 sub-blocks with the largest number of corners and the 5 sub-blocks with the smallest corner density, and store the remaining corners, so that the detected corners are relatively uniform; this completes the 'coarse elimination'.
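A possible sketch of S22 to S24 follows. It is illustrative only; the Shi-Tomasi parameters maxCorners, qualityLevel, and minDistance are assumptions, since the patent does not fix them.

```python
import cv2
import numpy as np

def coarse_eliminate(gray, grid=5, drop=5):
    """S22-S24: per-block Shi-Tomasi detection, then drop the densest and sparsest blocks."""
    h, w = gray.shape
    bh, bw = h // grid, w // grid
    blocks = []
    for r in range(grid):
        for c in range(grid):
            roi = gray[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            pts = cv2.goodFeaturesToTrack(roi, maxCorners=50,
                                          qualityLevel=0.01, minDistance=5)
            if pts is None:
                pts = np.empty((0, 1, 2), np.float32)
            pts[:, 0, 0] += c * bw          # shift back to full-image coordinates
            pts[:, 0, 1] += r * bh
            blocks.append(pts)
    # sort sub-blocks by corner count, discard the 5 with the most and the 5 with the fewest corners
    order = sorted(range(len(blocks)), key=lambda i: len(blocks[i]))
    keep = order[drop:len(order) - drop]
    return np.concatenate([blocks[i] for i in keep]) if keep else np.empty((0, 1, 2), np.float32)
```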

S25: if S21 judges that the frame is not one of the first five frames, the detection result of the previous frame is fed back to the current frame during detection: using the mask generated from the previous frame's moving-target regions, corner detection is not performed on the current frame where the mask value is zero, and the black regions of the mask are the moving-target regions.

Figure BDA0004189551520000051

S26: perform Shi-Tomasi corner detection on the other regions to complete the 'fine elimination', so that the whole detection method realizes closed-loop detection.

S27: store the final corner detection results.
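S25 to S27 can be sketched as follows. This is illustrative; cv2.goodFeaturesToTrack skips detection at pixels where the supplied mask is zero, which matches the feedback described here, and the detector parameters are assumptions.

```python
import cv2

def fine_eliminate(gray, feedback_mask):
    """S25-S26: detect corners only where the previous frame's feedback mask is non-zero."""
    # feedback_mask is the single-channel mask from S6: 0 inside previous target boxes, 255 elsewhere.
    # Pixels with mask value 0 are excluded from detection, realizing the 'fine elimination'.
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=300, qualityLevel=0.01,
                                  minDistance=5, mask=feedback_mask)
    return pts  # S27: the caller stores the detected corners
```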

S3: track the corners computed in the previous frame with the sparse optical flow method and filter the tracked points with bidirectional tracking: determine the positions in the current frame corresponding to the previous frame's corners, then track the current tracked points backwards, screen the two corner sets, and remove corners for which the backward tracking fails, specifically as follows:

S31: use the LK pyramidal optical flow algorithm to track each corner obtained in S27 into the current frame, obtaining the position Pi(x1, y1) in the current frame of the corner Pi(x0, y0) from the previous frame. In infrared video, since the video changes continuously over time, it is reasonable to assume that many points of the previous frame can be found again in the next frame.

S32: repeat S31 until all corners have been processed, and store all tracked points.

S33: use the LK pyramidal optical flow algorithm again to track the set of tracked points from S31 backwards, obtaining the position Pi(x2, y2) in the previous frame of the tracked point Pi(x1, y1) from the current frame.

S34: repeat S33 until all tracked points have been processed, and store all backward-tracked point pairs.

S35: prune the forward-tracked point pairs according to the output status vector.

S36: if the output status vector from S32 is 1, store the previous-frame corner and the corresponding current-frame tracked point; if the output status vector from S32 is 0, remove the corresponding point pair.

S37: apply the same removal strategy to the backward-tracking set.

S38: screen the forward-tracking point-pair set and the backward-tracking point-pair set: take out a point Pi(x1, y1) and compare the corresponding Pi(x0, y0) and Pi(x2, y2) in its sets; if the x and y coordinates of the two points are the same, the bidirectional tracking succeeds (the bidirectional tracking scheme is shown in Fig. 4), and Pi(x0, y0) and Pi(x1, y1) are added to the successful-tracking set.

S39: repeat the above operations until all point pairs have been screened.
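The forward and backward consistency check of S31 to S39 could look like the sketch below. It is illustrative; the pyramid parameters are assumptions, and a small pixel tolerance approximates the exact coordinate equality required in S38, since optical flow returns floating-point positions.

```python
import cv2
import numpy as np

def bidirectional_track(prev_gray, curr_gray, prev_pts, tol=0.5):
    """S31-S39: forward LK tracking, backward LK tracking, consistency screening."""
    if prev_pts is None or len(prev_pts) == 0:
        return prev_pts, prev_pts
    lk = dict(winSize=(21, 21), maxLevel=3)          # pyramid parameters (assumed)
    # S31-S32: forward tracking, previous frame -> current frame
    curr_pts, st_f, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None, **lk)
    # S33-S34: backward tracking, current frame -> previous frame
    back_pts, st_b, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, curr_pts, None, **lk)
    # S35-S37: drop pairs whose status vector is 0 in either direction
    ok = (st_f.reshape(-1) == 1) & (st_b.reshape(-1) == 1)
    # S38: keep only pairs whose back-tracked point returns to the original corner
    dist = np.abs(prev_pts - back_pts).reshape(-1, 2).max(axis=1)
    ok &= dist < tol
    return prev_pts[ok], curr_pts[ok]
```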

S4: compute the homography matrix from the relationship between the corner sets of the two consecutive frames, and perform background compensation on the current frame image with the homography transformation matrix, specifically as follows:

S41: for the two corner sets of the previous frame and the current frame obtained in S36, use the PROSAC algorithm to compute the optimal homography transformation matrix H between the two infrared frames; a corner Pi(x0, y0) of the previous frame and its corresponding tracked point Pi(x1, y1) in the current frame should satisfy the following relationship:

s · [x1, y1, 1]^T = H · [x0, y0, 1]^T, where s is a non-zero scale factor and H is the 3×3 homography transformation matrix.

S42: use the optimal homography transformation matrix H computed in S41 to perform background compensation on the current frame (the difference image obtained without background compensation is shown in Fig. 5). During compensation, to eliminate as far as possible the error caused by pixel offsets, bilinear interpolation is applied in the x and y directions of the pixels to correct the image.
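A sketch of S41 and S42 with OpenCV is shown below. It is illustrative; cv2.RHO is OpenCV's PROSAC-based robust estimator, the reprojection threshold is an assumption, and cv2.INTER_LINEAR provides the bilinear interpolation mentioned in the text.

```python
import cv2

def compensate_background(curr_gray, prev_pts, curr_pts):
    """S41-S42: PROSAC-style homography estimation and bilinear background compensation."""
    # cv2.RHO selects OpenCV's PROSAC-based estimator; the 3.0 px reprojection threshold is assumed.
    H, inliers = cv2.findHomography(prev_pts, curr_pts, cv2.RHO, 3.0)
    h, w = curr_gray.shape
    # H maps previous-frame points onto current-frame points, so warping the current frame with
    # the inverse map aligns its background with the previous frame; INTER_LINEAR performs the
    # bilinear interpolation used for image correction.
    return cv2.warpPerspective(curr_gray, H, (w, h),
                               flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```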

S5: difference the previous infrared frame with the background-compensated current frame, binarize the difference result with an adaptive gray threshold, apply morphological operations to the binary image, and obtain the final target position, specifically as follows:

S51: perform a difference operation between the previous infrared frame and the compensated infrared image, and apply Gaussian filtering to the difference image to remove noise. The background-compensated difference image obtained without bidirectional tracking and 'fine elimination' is shown in Fig. 6, and the one obtained with bidirectional tracking and 'fine elimination' is shown in Fig. 7.

S52: apply Otsu threshold segmentation to the difference image to obtain a binary image of suspected moving targets. The threshold-segmented difference image obtained without bidirectional tracking and 'fine elimination' is shown in Fig. 8, and the one obtained with bidirectional tracking and 'fine elimination' is shown in Fig. 9.

S53: apply an erosion operation to the binary image of suspected moving targets to remove discrete noise and line-shaped noise, then apply a dilation operation; after dilation, label small regions and filter them out, and apply dilation again to obtain the final moving-target binary image. The final motion-region detection result is shown in Fig. 10.

S54: compute the moving-target contours from the final moving-target binary image and store the contours.

S55: traverse each target contour, draw its bounding rectangle on the current infrared frame, and store the position of the rectangle and its length and width.

S56: repeat the above steps until all contours have been traversed, obtaining the final moving-target detection result image.
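S51 to S56 can be sketched as follows. This is illustrative; the Gaussian kernel, the structuring element size, and the small-region area threshold are assumptions that the text leaves open.

```python
import cv2

def detect_targets(prev_gray, compensated_curr, min_area=20):
    """S51-S56: frame difference, Otsu threshold, morphology, contours, bounding boxes."""
    diff = cv2.absdiff(prev_gray, compensated_curr)                       # S51: frame difference
    diff = cv2.GaussianBlur(diff, (5, 5), 0)                              # S51: denoise (kernel assumed)
    _, binary = cv2.threshold(diff, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)        # S52: Otsu segmentation
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    binary = cv2.erode(binary, kernel)                                    # S53: remove point/line noise
    binary = cv2.dilate(binary, kernel)
    # S53: label connected components and drop small regions (area threshold assumed)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < min_area:
            binary[labels == i] = 0
    binary = cv2.dilate(binary, kernel)                                   # S53: second dilation
    # S54-S56: contours and their bounding rectangles
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]
```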

S6: form a mask from the target positions detected in the previous frame and feed it back to the corner detection of the next frame, forming a complete closed-loop detection system, specifically as follows:

S61: create a single-channel mask image of the same size and type as the current infrared frame, with its color set to white.

S62: take an unprocessed bounding rectangle from the rectangle set stored in S55, obtain its position and size, expand its length and width outward by m pixels (m can be adjusted according to the actual situation), map the expanded rectangle into the mask image, set the gray values of the pixels inside the rectangle in the mask image to 0, add the processed rectangle to the processed set, and continue with the next rectangle.

S63: repeat the above steps until all rectangles have been processed.

S64: after completing the above steps, a mask image of the moving targets is obtained; this mask image is fed back as initial information to the detection of the next frame, realizing closed-loop detection. The mask image is shown in Fig. 11.
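Finally, S61 to S64 could be implemented along the following lines. This is illustrative; the expansion margin m is the adjustable parameter mentioned in S62.

```python
import numpy as np

def build_mask(frame_shape, boxes, m=10):
    """S61-S64: build the feedback mask (white background, zeroed target rectangles)."""
    h, w = frame_shape[:2]
    mask = np.full((h, w), 255, dtype=np.uint8)    # S61: white single-channel mask
    for (x, y, bw, bh) in boxes:
        # S62: expand each bounding rectangle outward by m pixels, clipped to the image
        x0, y0 = max(x - m, 0), max(y - m, 0)
        x1, y1 = min(x + bw + m, w), min(y + bh + m, h)
        mask[y0:y1, x0:x1] = 0                     # zero the target region in the mask
    return mask                                     # S64: fed back to next-frame corner detection
```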

The above is only a preferred specific embodiment of the invention, but the scope of protection of the invention is not limited thereto. Any equivalent replacement or change that a person skilled in the art can make within the technical scope disclosed by the invention, according to the technical solution of the invention and its inventive concept, shall fall within the scope of protection of the invention.

Claims (7)

1. A method for detecting a moving target in a closed loop of an infrared video under a dynamic background is characterized by comprising the following steps:
analyzing the noise type of the infrared image, and removing noise from the image by using a filtering algorithm;
different interference corner filtering methods are adopted for different frames, so that coarse elimination and fine elimination of corners are carried out, and final detection corners are stored and recorded;
tracking the angular points by adopting a sparse optical flow method, filtering the tracking points by adopting a bidirectional tracking mode, determining the corresponding position relation of the angular points of the previous frame in the current frame, carrying out backward tracking by using the current tracking points, screening the two groups of angular points, and eliminating angular points with backward tracking failure;
carrying out homography matrix calculation according to the relation between the angular point sets of the front frame and the rear frame, and carrying out background compensation on the current frame image by using the homography transformation matrix;
differencing the infrared image of the previous frame with the current frame image after background compensation, performing self-adaptive gray threshold binarization processing on the difference result, and performing morphological operation on the binary image to obtain a final target position;
and forming a mask according to the detection target position of the previous frame, and feeding back to the corner detection of the next frame to form a complete closed loop detection circuit.
2. The method for detecting the moving object in the closed loop of the infrared video under the dynamic background according to claim 1, wherein the method comprises the following steps: carrying out noise filtering on the infrared image of the current frame by using a Gaussian filter to remove Gaussian white noise in the infrared image; a median filter is used to filter out random punctiform noise in the infrared image.
3. The method for detecting the moving object in the closed loop of the infrared video under the dynamic background according to claim 1, wherein the method comprises the following steps: judging whether the current frame is the infrared image of the previous five frames, if so, performing coarse elimination, otherwise, performing fine elimination;
dividing the infrared image into sub-blocks of the same size, forming a sub-block sequence i1 to i25;
starting from i1, performing corner detection on each small block by using the Shi-Tomasi algorithm, and sorting the sub-blocks by the number of corners in each small block from small to large after the detection is completed;
removing the 5 sub-blocks with the largest number of corners and the 5 sub-blocks with the smallest concentration, and storing the final corner points,
if the image is not the previous five frames, the detection result of the previous frame is fed back to the current frame during detection, and the mask generated by the moving target area of the previous frame is used for not detecting the corner point of the area with the mask area of zero on the current frame;
Figure FDA0004189551510000021
carrying out Shi-Tomasi corner detection on other areas to finish fine elimination, so that the whole detection method realizes closed-loop detection;
and storing the final corner detection result.
4. The method for detecting the moving object in the closed loop of the infrared video under the dynamic background according to claim 2, wherein the method comprises the following steps:
tracking each obtained corner in the current frame by using the LK optical flow pyramid algorithm to obtain the position Pi(x1, y1) in the current frame of the corner Pi(x0, y0) of the previous frame, repeating the step until all the corner points are calculated, and storing all the tracking points;
performing backward tracking on the tracking point set by using the LK optical flow pyramid algorithm again to obtain the position Pi(x2, y2) in the previous frame of the tracking point Pi(x1, y1) of the current frame, repeating the step until all tracking points are calculated, and storing all backward-tracked tracking point pairs;
removing the forward tracking point pairs according to the output state vector;
if the output state vector is judged to be 1, the corner point of the previous frame and the corresponding tracking point of the current frame are stored, and if the output state vector is judged to be 0, the corresponding point pair is removed;
performing the same removal strategy on the back tracking set;
screening the forward tracking point pair set and the backward tracking point pair set, taking out a point Pi(x1, y1) and comparing Pi(x0, y0) and Pi(x2, y2) in its corresponding sets; if the x coordinate and the y coordinate of the two points are the same, the bidirectional tracking is successful, and Pi(x0, y0) and Pi(x1, y1) are added to the tracking success set;
repeating the above operation until the screening of all the point pairs is completed.
5. The method for detecting the moving object in the closed loop of the infrared video under the dynamic background according to claim 3, wherein the method comprises the following steps: for the two corner sets corresponding to the obtained previous frame and the current frame, the PROSAC algorithm is adopted to calculate the optimal homography transformation matrix H of the two frames of infrared images, and the corner Pi(x0, y0) of the previous frame and the tracking point Pi(x1, y1) corresponding to the current frame should satisfy the following relationship:
s · [x1, y1, 1]^T = H · [x0, y0, 1]^T, where s is a non-zero scale factor;
and performing background compensation on the current frame by using the optimal homography transformation matrix H, and respectively performing interpolation in the x direction and the y direction of the pixel point by adopting a bilinear interpolation mode so as to perform image correction.
6. The method for closed loop detection of a moving object in an infrared video under a dynamic background according to claim 4, wherein the method comprises the following steps:
performing differential operation on the infrared image of the previous frame and the compensated infrared image, and performing Gaussian filtering on the differential image to remove noise;
threshold segmentation is carried out on the differential image by using an Otsu algorithm, and a binary image suspected to be a moving target is obtained;
performing corrosion operation on the binary image suspected to be the moving target, removing discrete noise and linear noise interference, performing expansion operation, marking and filtering small areas after expansion, and performing expansion operation again to obtain a final moving target binary image;
calculating the outline of the moving object according to the final moving object binary image, and storing the outline;
traversing each target contour, drawing an external rectangle on the infrared image of the current frame according to the contour, and storing the position of the rectangle and the length and width of the rectangle;
repeating the steps until all the outlines are traversed to obtain a final moving target detection result diagram.
7. The method for detecting the moving object in the closed loop of the infrared video under the dynamic background according to claim 1, wherein the method comprises the following steps: creating a single-channel mask image with the same size and type as the infrared image of the current frame and set to be white;
taking out an unprocessed external rectangular frame from the stored rectangular frame set, acquiring the position and the size of the rectangular frame, expanding the length and the width of the rectangular frame outwards by m pixel points, mapping the expanded rectangular frame into a mask image, setting the gray value of the pixel points in the mask image rectangular frame to be 0, adding the processed rectangular frame into the processed set, continuing to process the next rectangular frame, and repeating the steps until all the rectangular frames are processed;
and obtaining a mask image of the moving object, and feeding back the mask image as initial information to the next frame detection to realize closed loop detection.
CN202310428567.8A (priority date 2023-04-20, filing date 2023-04-20): Moving target closed-loop detection method of infrared video under dynamic background; status: Pending; published as CN116385495A

Priority Applications (1)

CN202310428567.8A (published as CN116385495A): Moving target closed-loop detection method of infrared video under dynamic background

Applications Claiming Priority (1)

CN202310428567.8A (published as CN116385495A): Moving target closed-loop detection method of infrared video under dynamic background

Publications (1)

Publication Number: CN116385495A; Publication Date: 2023-07-04

Family

ID=86970988

Family Applications (1)

CN202310428567.8A (CN116385495A, pending): Moving target closed-loop detection method of infrared video under dynamic background

Country Status (1)

Country Link
CN (1) CN116385495A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116999044A (en) * 2023-09-07 2023-11-07 南京云思创智信息科技有限公司 Real-time motion full-connection bidirectional consistent optical flow field heart rate signal extraction method
CN116999044B (en) * 2023-09-07 2024-04-16 南京云思创智信息科技有限公司 Real-time motion full-connection bidirectional consistent optical flow field heart rate signal extraction method
CN117671801A (en) * 2024-02-02 2024-03-08 中科方寸知微(南京)科技有限公司 Real-time target detection method and system based on binary reduction
CN117671801B (en) * 2024-02-02 2024-04-23 中科方寸知微(南京)科技有限公司 Real-time target detection method and system based on binary reduction

Similar Documents

Publication Publication Date Title
CN110866924B (en) Line structured light center line extraction method and storage medium
CN109785385B (en) Visual target tracking method and system
CN116385495A (en) Moving target closed-loop detection method of infrared video under dynamic background
CN111199556B (en) Camera-based indoor pedestrian detection and tracking method
CN112560538B (en) Method for quickly positioning damaged QR (quick response) code according to image redundant information
CN110363196B (en) Method for accurately recognizing characters of inclined text
CN110309765B (en) An efficient method for detecting moving objects in video
CN107369159A (en) Threshold segmentation method based on multifactor two-dimensional gray histogram
CN108229475A (en) Wireless vehicle tracking, system, computer equipment and readable storage medium storing program for executing
CN115331245A (en) A table structure recognition method based on image instance segmentation
CN116664478A (en) A deep learning-based steel surface defect detection algorithm
CN117115415B (en) Image marking processing method and system based on big data analysis
CN116012579A (en) Method for detecting abnormal states of parts based on photographed images of intelligent inspection robot of train
CN114529555A (en) Image recognition-based efficient cigarette box in-and-out detection method
CN113920168A (en) Image tracking method in audio and video control equipment
CN114419006B (en) A method and system for removing text watermarks from grayscale video that changes with background
CN117911419A (en) Method and device for detecting steel rotation angle enhancement of medium plate, medium and equipment
CN112446851A (en) Endpoint detection algorithm based on high-speed pulse type image sensor
CN113096090B (en) End face gap visual measurement method with chamfer, device, equipment and storage medium
CN118506338A (en) Electronic device printed character recognition and detection method based on deep learning
CN102917224A (en) Mobile background video object extraction method based on novel crossed diamond search and five-frame background alignment
CN114708179A (en) Wheel tread splicing method based on multi-linear array camera
CN114820718A (en) Visual dynamic positioning and tracking algorithm
CN118379315B (en) 8-Direction Sobel edge detection system based on FPGA
Liu et al. Lane line detection based on OpenCV

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination