CN109902578B - Infrared target detection and tracking method

Infrared target detection and tracking method

Info

Publication number
CN109902578B
Authority
CN
China
Prior art keywords
target
image
detected
roi
static
Prior art date
Legal status
Expired - Fee Related
Application number
CN201910073794.7A
Other languages
Chinese (zh)
Other versions
CN109902578A (en)
Inventor
刘辉 (Liu Hui)
何博侠 (He Boxia)
焦浩 (Jiao Hao)
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201910073794.7A priority Critical patent/CN109902578B/en
Publication of CN109902578A publication Critical patent/CN109902578A/en
Application granted granted Critical
Publication of CN109902578B publication Critical patent/CN109902578B/en

Abstract

The invention provides an infrared target detection and tracking method comprising the following steps: selecting compactness, rectangularity and geometric invariant moments as static variables and establishing a static decision criterion; selecting the target contour area, perimeter, speed, adaptive segmentation threshold and ROI position as dynamic variables and establishing a dynamic decision criterion. A first-frame target detection algorithm calculates the target's static variables and part of its dynamic feature values; each subsequent frame first segments the image with an improved locally adaptive threshold segmentation algorithm, then screens the real target out of the segmentation results using the static and dynamic decision criteria, and finally calculates and updates the feature parameter set of the target to be detected. The method is robust to complex background updates, scene changes, and changes in the target's own scale, gray level and contour features.

Description

Infrared target detection and tracking method
Technical Field
The invention belongs to the technical field of target detection and tracking, and particularly relates to an infrared target detection and tracking method.
Background
Target detection and tracking based on infrared imaging is widely applied in precision guidance, battlefield surveillance, unmanned reconnaissance, visual navigation, intelligent monitoring and similar areas. In such applications, the complexity of the dynamic scene and of the target image background is the main factor limiting the robustness of a detection and tracking algorithm, and overcoming this interference is a key consideration in algorithm design. In infrared precision guidance, as the target's image on the infrared detector grows from a few pixels until it fills the entire field of view, the scene updates rapidly and the target couples strongly with the background; at the terminal guidance stage in particular, the target's scale, gray level and contour features change violently, seriously degrading algorithm robustness and guidance accuracy. This is another key problem to be considered in algorithm design.
The accuracy of target detection and tracking and the robustness of the algorithm depend to a large extent on accurate appearance modeling. An accurate target appearance model addresses the impact on tracking performance of background change and of changes in target scale, gray level and contour features during target motion. Current detection and tracking methods based on visual features fall into two categories. The first uses hand-crafted features and is implemented by establishing constraint criteria. The second uses autonomously learned features: the algorithm learns the target's appearance model on its own. The first approach serves detection and tracking tasks well, is easier to implement and offers stronger real-time performance, but its design is difficult and it cannot establish a deep, stable feature appearance model of the target. The second approach improves tracking accuracy and robustness, but building the appearance model relies on large training and test data sets, and running the algorithm demands substantial hardware. The embedded processors of existing infrared precision-guided munitions are mainly ARM, DSP, ARM+DSP and FPGA+DSP; their performance in floating-point operations, matrix multiplication, convolution and memory bandwidth is insufficient to support porting and real-time execution of the complex model algorithms of the second approach.
Disclosure of Invention
The invention provides an infrared target detection and tracking method. On the basis of analyzing infrared targets and background features in different scenes, a fast detection and tracking algorithm for infrared targets in complex unknown scenes is proposed, based on multi-feature fusion and ROI (region of interest) prediction and following the idea of processing the first frame and the subsequent frames of the target infrared sequence images independently.
The technical scheme for realizing the aim of the invention is as follows: an infrared target detection and tracking method comprises the following steps:
step S001, selecting an input source of the infrared target detection and tracking algorithm, wherein the supported input source formats comprise offline infrared sequence images, offline video files and real-time video streams of the scene to be detected and tracked, acquired by an infrared imaging device;
step S002, reading and displaying an input image; reading a frame of image from the selected input source and displaying the image on the display;
step S003, judging whether the target to be detected has been manually selected with the mouse; if the target to be tracked appears, manually selecting it with the mouse, generating an ROI image, and entering step S004; otherwise, jumping to step S011;
step S004, judging whether the image of the current frame is the first frame image after the target to be detected is selected, and if the current frame is the first frame image after the target is selected, entering the step S005; otherwise, executing S011;
step S005, coarse extraction of suspected targets: segmenting the ROI image with a watershed segmentation algorithm and the maximum between-class variance method, filtering the background in the ROI image, and obtaining a set of suspected targets containing the target to be detected;
step S006, fine extraction of suspected targets: calculating the contour area of each suspected target, sorting the contour areas, and taking the suspected target with the largest area as the real target to be detected;
step S007, calculating the set of real static feature parameters of the target to be detected and constructing a static decision criterion, wherein the set of static feature parameters comprises the compactness, rectangularity and geometric invariant moments of the target;
step S008, predicting the coordinates of the ROI image of the next frame;
step S009, displaying a result image, using a tracking frame to circle out the target to be detected according to the real target position to be detected screened out by the detection and tracking algorithm, and displaying the result image on a display device;
step S010, judging whether the reading of the input image is finished, and finishing the infrared target detection and tracking task if the reading of the input image is finished; otherwise, jumping to step S002 to continue execution;
step S011, acquiring and generating a predicted ROI image, taking a current frame image as a parent image, and generating a sub-image according to the ROI coordinates obtained by prediction of a previous frame, wherein the sub-image is the ROI image of the current frame image;
step S012, processing the ROI image with an improved adaptive threshold segmentation algorithm, whose threshold is computed from the gray mean and variance inside the circumscribed rectangle of the real target's contour in the previous frame image, together with the minimum deviation between that rectangle's gray mean and the gray means of its eight neighborhood sub-regions;
step S013, calculating the multi-feature set of each suspected target to be detected, wherein the multi-feature set comprises the static feature set together with the contour area, perimeter, speed, adaptive segmentation threshold and predicted ROI image coordinates of each suspected target;
step S014, roughly extracting each suspected target to be detected by using a static decision criterion;
step S015, screening out the target to be detected: calculating the contour perimeter, area and motion speed of each suspected target remaining after the coarse extraction of the static decision criterion, rejecting false candidates according to the dynamic decision criterion, and screening out the real target to be detected. Note that if the current frame is the second frame after target selection, the static criterion alone screens out the target accurately, because the target occupies few pixels in the first frame, the manually selected target segments cleanly, and the time interval between adjacent frames is short.
Step S016, judging whether the current frame image is the second frame image after target selection, if so, executing step S017; otherwise, jumping to step S018 for execution;
step S017, establishing a dynamic decision criterion;
step S018, updating the real target feature set to be detected; after the update is finished, executing step S009.
Compared with the prior art, the invention has the following advantages:
(1) selecting target compactness, rectangularity and geometric invariant moments as static variables, and target contour area, perimeter, speed, adaptive segmentation threshold and ROI position as dynamic variables, establishing static and dynamic decision criteria for target identification;
(2) the idea of processing the first frame and subsequent frames of the target infrared sequence images independently is introduced; the first-frame target detection algorithm runs about 8 times faster than a template matching algorithm and about 2.5 times faster than a feature matching algorithm;
(3) the subsequent frame adopts an improved local self-adaptive segmentation algorithm to segment the image, and has better segmentation effect compared with a fixed threshold segmentation method, a maximum inter-class variance method (Otsu) and a local self-adaptive threshold segmentation algorithm;
(4) the method is more robust to complex background updates, scene changes, and changes in the target's own scale, gray level and contour features.
The invention is further described below with reference to the accompanying drawings.
Drawings
FIG. 1 is a flow chart of an infrared target detection and tracking method of the present invention.
Fig. 2 is a flow chart of first frame infrared target detection.
Fig. 3 is a schematic diagram of ROI image gray distribution.
Detailed Description
With reference to fig. 1, fig. 2 and fig. 3, an infrared target detection and tracking method includes the following specific steps:
step S001, selecting an input source of the infrared target detection and tracking algorithm, wherein the supported input source formats comprise offline infrared sequence images, offline video files and real-time video streams of the scene to be detected and tracked, acquired by an infrared imaging device (such as an infrared camera);
step S002, reading and displaying an input image; reading a frame of image from the selected input source and displaying the image on the display;
step S003, judging whether the target to be detected has been manually selected with the mouse; if it has been selected, entering step S004; otherwise, the process jumps to step S009. The process of manually selecting the ROI region of the image is as follows: let image I be the first frame image with resolution R_row × C_col pixels, and let f(i, j) be the gray value at any point of the image, i ∈ [0, R_row], j ∈ [0, C_col]. Sroi(i, j) denotes the ROI in the image, where i ∈ (0, R_row], j ∈ (0, C_col]. Using human a priori knowledge, the rectangular region containing the target is manually selected as Srect(i, j), where i ∈ (0, R_row], j ∈ (0, C_col] and Sroi(i, j) ⊆ Srect(i, j);
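For illustration only, this mouse-driven rectangle selection maps directly onto OpenCV's built-in ROI selector; the Python/OpenCV toolchain and the file name are assumptions, not part of the patent:

```python
import cv2

# Read the first frame (hypothetical file name) and let the operator drag a
# rectangle around the target; the selection comes back as (x, y, w, h) in
# parent-image coordinates.
frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
x, y, w, h = cv2.selectROI("select target", frame)
srect = frame[y:y + h, x:x + w]   # the manually selected sub-image Srect(i, j)
cv2.destroyAllWindows()
```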
step S004, judging whether the current frame is the first frame image after target selection, and if the current frame is the first frame image, entering step S005 to execute; otherwise, S011 is executed. Manually selecting a first frame image, determining the type of a target, and establishing a target characteristic parameter set through calculation;
Step S005, coarse extraction of candidate targets. The manually selected Srect(i, j) contains the real Sroi(i, j) together with other interfering background whose gray features are strongly coupled with the target, so candidate targets need to be further coarsely extracted. With reference to fig. 2, step S005 is implemented as follows: step S0051, processing Srect(i, j) with a watershed segmentation algorithm; step S0052, filling the foreground and the background of the segmented binary image with 0 and 1 respectively, recording the filled image as Imask(i, j), whose resolution is the same as that of Srect(i, j); restoring the segmented foreground image Ifront(i, j) with Imask(i, j) as the mask and Srect(i, j) as the original image, Ifront(i, j), Imask(i, j) and Srect(i, j) having the same resolution; step S0053, further segmenting Ifront(i, j) with the maximum between-class variance method (Otsu), filling the processed binary image, and recording the result as Ifill(i, j);
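The patent does not disclose how the watershed markers are constructed, so the sketch below (Python with OpenCV, an assumed toolchain) seeds them with the common distance-transform recipe; the Imask/Ifront/Ifill names mirror steps S0051-S0053, with the 0/1 fill convention simplified to a standard binary mask:

```python
import cv2
import numpy as np

def coarse_extract(srect_gray):
    """Steps S0051-S0053 sketch: watershed segmentation of Srect, foreground
    restoration (Ifront), then a second Otsu pass to obtain Ifill."""
    # Otsu pre-binarization to seed the watershed markers.
    _, binary = cv2.threshold(srect_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
    sure_fg = sure_fg.astype(np.uint8)
    sure_bg = cv2.dilate(binary, np.ones((3, 3), np.uint8), iterations=3)
    unknown = cv2.subtract(sure_bg, sure_fg)
    _, markers = cv2.connectedComponents(sure_fg)
    markers += 1                      # background label becomes 1
    markers[unknown == 255] = 0       # unknown band decided by the watershed
    color = cv2.cvtColor(srect_gray, cv2.COLOR_GRAY2BGR)  # watershed wants BGR
    markers = cv2.watershed(color, markers)
    imask = (markers > 1).astype(np.uint8)   # Imask: foreground regions = 1
    ifront = srect_gray * imask              # Ifront: restored foreground
    _, ifill = cv2.threshold(ifront, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return ifill                             # Ifill: refined binary target map
```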
Step S006, fine extraction of candidate targets. With reference to fig. 2, step S006 is implemented as follows: step S0061, extracting all contours in Ifill(i, j) and calculating the area, perimeter, compactness and rectangularity of each contour; step S0062, sorting the contour areas of all candidate targets, taking the contour with the largest area as the first-frame target to be detected, and calculating the gray mean μ_tar and variance δ²_tar within the target's contour region;
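A matching sketch of this fine-extraction step, again assuming OpenCV; the helper name is mine:

```python
import cv2
import numpy as np

def extract_target(ifill, srect_gray):
    """Step S006 sketch: keep the largest-area contour as the real target and
    compute the gray mean/variance (mu_tar, delta^2_tar) inside it."""
    contours, _ = cv2.findContours(ifill, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    target = max(contours, key=cv2.contourArea)      # largest contour area
    mask = np.zeros_like(srect_gray)
    cv2.drawContours(mask, [target], -1, 255, thickness=cv2.FILLED)
    values = srect_gray[mask == 255]                 # pixels inside the contour
    return target, float(values.mean()), float(values.var())
```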
Step S007, calculating the real static feature set of the target to be detected and constructing the static decision criterion. Although the gray, shape and contour features of the target all change as its image grows from a few pixels to filling the whole field of view, the shape, the contour and the local features of the target's parts are relatively stable. Therefore, the three indexes compactness, rectangularity and geometric invariant moment are selected by principal component analysis to form the static feature set, and the static decision criterion is established as follows:
The compactness, rectangularity and geometric invariant moment indexes are calculated as follows:
J = L²/(4πA),  R = A/S_mer,  Hu = (M_1, M_2, …, M_7)   (1)
where L is the perimeter of the target contour, A is the area of the target contour region, and S_mer is the area of the circumscribed rectangle of the target contour boundary. The M_i are calculated as follows:
M_1 = η_20 + η_02
M_2 = (η_20 − η_02)² + 4η_11²
M_3 = (η_30 − 3η_12)² + (3η_21 − η_03)²
M_4 = (η_30 + η_12)² + (η_21 + η_03)²
M_5 = (η_30 − 3η_12)(η_30 + η_12)[(η_30 + η_12)² − 3(η_21 + η_03)²] + (3η_21 − η_03)(η_21 + η_03)[3(η_30 + η_12)² − (η_21 + η_03)²]
M_6 = (η_20 − η_02)[(η_30 + η_12)² − (η_21 + η_03)²] + 4η_11(η_30 + η_12)(η_21 + η_03)
M_7 = (3η_21 − η_03)(η_30 + η_12)[(η_30 + η_12)² − 3(η_21 + η_03)²] − (η_30 − 3η_12)(η_21 + η_03)[3(η_30 + η_12)² − (η_21 + η_03)²]   (2)
the calculation formula of each variable in the formula (2) is as follows:
m_jk = Σ_{x=1}^{R_row} Σ_{y=1}^{C_col} x^j · y^k · I(x, y)
μ_jk = Σ_{x=1}^{R_row} Σ_{y=1}^{C_col} (x − x̄)^j · (y − ȳ)^k · I(x, y),  with x̄ = m_10/m_00, ȳ = m_01/m_00
η_jk = μ_jk / μ_00^(1+(j+k)/2)   (3)
where m_jk, μ_jk and η_jk are respectively the geometric moments, central moments and normalized central moments of order j + k of image I(x, y), R_row and C_col are the numbers of rows and columns of image I(x, y), and j, k = 0, 1, 2, 3, …,
the target static feature set is calculated by the above formula, and accordingly, a static decision criterion can be constructed as follows:
[Equation (4), the static decision criterion SP, is reproduced only as an image in the original document.]
where SP is the static decision criterion, J is the compactness, R is the rectangularity, Hu is the geometric invariant moment, the parameters α, β and γ are the weights of the three indexes, and λ_i indicates the tolerance to the static feature parameters; empirically, λ_1 = λ_3 = λ_5 = 0.5 and λ_2 = λ_4 = λ_6 = 1.5.
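A sketch of the static feature computation, with Python/OpenCV again assumed. Because equation (4) survives only as an image, static_criterion implements one plausible tolerance-band reading of SP: it compares only the first Hu moment and omits the α, β, γ weighting:

```python
import cv2
import numpy as np

def static_features(contour):
    """Compactness J, rectangularity R and Hu moments of a contour (step S007);
    assumes a non-degenerate contour (area and perimeter > 0)."""
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, closed=True)
    _, _, w, h = cv2.boundingRect(contour)              # circumscribed rectangle
    j_compact = perimeter ** 2 / (4.0 * np.pi * area)   # J = L^2 / (4*pi*A)
    r_rect = area / float(w * h)                        # R = A / S_mer
    hu = cv2.HuMoments(cv2.moments(contour)).flatten()  # M_1 .. M_7
    return j_compact, r_rect, hu

def static_criterion(candidate, reference, lam_lo=0.5, lam_hi=1.5):
    """Assumed reading of SP: accept a candidate when each static feature
    stays within [0.5, 1.5] times the reference target's value."""
    j, r, hu = candidate
    j0, r0, hu0 = reference
    ratios = (j / j0, r / r0, abs(hu[0]) / abs(hu0[0]))
    return all(lam_lo <= q <= lam_hi for q in ratios)
```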
Step S008, predicting the next-frame ROI image coordinates. The ROI position of the next frame is predicted from the target motion speed, centroid position and contour information calculated for the current frame, as follows:
(1) calculating the circumscribed rectangle of the current-frame target contour, recording the upper-left and lower-right corner coordinates of the rectangle as P_reclu(x, y) and P_recrd(x, y), its length as H_rec and its width as W_rec;
(2) according to the direction and speed of the motion track and the current-frame target centroid position (x_cur, y_cur), calculating the predicted ROI centroid position of the next frame (x̂_next, ŷ_next);
(3) determining the ROI position taking the target's own scale information into account:

P_roilu(x, y) = (x̂_next − λ_7·W_rec, ŷ_next − λ_8·H_rec)   (5)

P_roird(x, y) = (x̂_next + λ_7·W_rec, ŷ_next + λ_8·H_rec)   (6)

where P_roilu is the upper-left corner coordinate of the ROI, P_roird is the lower-right corner coordinate of the ROI, and λ_7 and λ_8 are set dynamically according to the target's own scale information but must be greater than 0.5;
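A sketch of this prediction step; the corner construction mirrors the assumed reading of equations (5)-(6) above, with λ_7, λ_8 > 0.5 making the ROI larger than the target's bounding rectangle:

```python
def predict_roi(centroid, velocity, dt, h_rec, w_rec, lam7=0.8, lam8=0.8):
    """Step S008 sketch: advance the centroid along the estimated motion and
    size the next-frame ROI from the target's own scale (lam7, lam8 > 0.5)."""
    cx, cy = centroid                     # current-frame target centroid
    vx, vy = velocity                     # per-axis speed, pixels per second
    px, py = cx + vx * dt, cy + vy * dt   # predicted next-frame centroid
    top_left = (int(px - lam7 * w_rec), int(py - lam8 * h_rec))
    bottom_right = (int(px + lam7 * w_rec), int(py + lam8 * h_rec))
    return top_left, bottom_right         # P_roilu, P_roird
```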
step S009, displaying a result image, using a tracking frame to circle out the target to be detected according to the real target position to be detected screened out by the detection and tracking algorithm, and displaying the result image on a display device;
step S010, judging whether the reading of the input image is finished, and finishing the infrared target detection and tracking task if the reading of the input image is finished; otherwise, jumping to step S002 to continue execution;
step S011, acquiring and generating a predicted ROI image, taking a current frame image as a parent image, and generating a sub-image according to the coordinates of the ROI image predicted from the previous frame (namely in step S008), wherein the sub-image is the ROI image of the current frame image;
Step S012, processing the ROI image with the improved adaptive threshold segmentation algorithm. As shown in fig. 3, the connected region outlined in black is the current-frame target contour region, and rectangle ABCD is the circumscribed rectangle of the target contour. Extending sides AB, CD, AD and BC divides the ROI into nine sub-regions; the middle sub-region is the target's minimum circumscribed rectangle and the remaining sub-regions are its eight neighborhoods. The contribution of the eight neighborhoods to the adaptive segmentation threshold is obtained as follows:
(1) determining the range of each neighborhood subregion:
as shown in fig. 3, the eight neighborhoods of rectangle ABCD, starting from the upper-left corner and proceeding clockwise for one turn, are named R_i, i ∈ [1, 8]. The edge transition band is chosen as 3 pixels, and the coordinate range of each neighborhood sub-region is calculated (if the distance from the rectangle boundary to the ROI image boundary is less than 3 pixels, the whole remaining boundary distance is used);
(2) calculating a rough segmentation threshold value:
considering that the circumscribed rectangle ABCD of the current-frame target contour contains non-target area, that within ABCD the target gray values are generally higher than the background gray values, and that the target's edge pixels are higher than the average pixel value of the rectangular region, the rough segmentation threshold T_temp is approximated by the mean and variance of the pixel gray levels within the rectangular region:

T_temp = μ_tar + ε·δ²_tar   (7)
where μ_tar and δ²_tar respectively represent the gray mean and variance within the rectangular region, and ε is an adjustable factor, generally a positive number;
(3) calculating the average gray scale of each neighborhood subregion:
Ḡ_i = (1/S_i) · Σ_{m=1}^{S_i} G_im   (8)
where i ∈ [1, 8], G_im is the gray value of the m-th pixel in sub-region R_i, and S_i is the total number of pixels in sub-region R_i;
(4) solving the gray difference between each neighborhood subregion and the circumscribed rectangle region:
D_i = |Ḡ_i − μ_tar|,  i ∈ [1, 8]   (9)
D_min = min(D_1, D_2, D_3, D_4, D_5, D_6, D_7, D_8)   (10)
(5) determining the threshold:
step (4) yields the minimum deviation D_min between the gray mean of the circumscribed rectangle of the target contour and those of the eight neighborhood sub-regions; taking this deviation into account allows the rough segmentation threshold to locate the target boundary more accurately, so the local adaptive segmentation threshold T_next of the next frame is calculated as:

T_next = T_temp + D_min = μ_tar + ε·δ²_tar + D_min   (11)
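A sketch of the whole threshold computation (NumPy assumed; the function name and the (x, y, w, h) rectangle layout are my own, and eps stands for the adjustable positive factor ε of equation (7)):

```python
import numpy as np

def next_threshold(roi_gray, rect, eps=0.05, border=3):
    """Step S012 sketch: rough threshold from the gray mean/variance inside
    the target's circumscribed rectangle (eq. 7), corrected by the minimum
    mean-gray deviation of the eight neighborhood sub-regions (eqs. 8-11).
    Assumes the rectangle does not fill the entire ROI."""
    x, y, w, h = rect                                  # rectangle ABCD
    inner = roi_gray[y:y + h, x:x + w].astype(np.float64)
    mu_tar, var_tar = inner.mean(), inner.var()
    t_temp = mu_tar + eps * var_tar                    # equation (7)
    rows, cols = roi_gray.shape
    # 3-pixel transition band, clipped at the ROI border where necessary.
    x0, y0 = max(x - border, 0), max(y - border, 0)
    x1, y1 = min(x + w + border, cols), min(y + h + border, rows)
    xs, xe = [x0, x, x + w], [x, x + w, x1]
    ys, ye = [y0, y, y + h], [y, y + h, y1]
    deviations = []
    for gy in range(3):
        for gx in range(3):
            if gx == 1 and gy == 1:
                continue                               # skip the target cell
            sub = roi_gray[ys[gy]:ye[gy], xs[gx]:xe[gx]]
            if sub.size:                               # eq. (8): mean of R_i
                deviations.append(abs(float(sub.mean()) - mu_tar))  # eq. (9)
    return t_temp + min(deviations)                    # eqs. (10)-(11): T_next
```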
Step S013, calculating the multi-feature set of each suspected target to be detected. The feature set consists of a static feature set and a dynamic feature set. The static feature set is obtained as in step S007; the dynamic feature set consists of each suspected target's contour area, perimeter, speed, adaptive segmentation threshold and predicted ROI image coordinates. The method for determining the adaptive segmentation threshold is detailed in step S012. Solving the motion speed requires the centroid positions of the infrared target in adjacent frames, which are obtained from the geometric moments:

x̄_pre = m′_10/m′_00,  ȳ_pre = m′_01/m′_00,  x̄_cur = m_10/m_00,  ȳ_cur = m_01/m_00

V_cur = √((x̄_cur − x̄_pre)² + (ȳ_cur − ȳ_pre)²) / Δt   (12)

where (x̄_pre, ȳ_pre) are respectively the abscissa and ordinate of the previous-frame infrared target centroid, (x̄_cur, ȳ_cur) are respectively the abscissa and ordinate of the current-frame infrared target centroid, (m′_00, m′_10, m′_01) and (m_00, m_10, m_01) are respectively the geometric moments of order 0+0, 1+0 and 0+1 of the previous frame and of the current frame of image I(x, y), V_cur represents the motion speed of the target in the image coordinate system, and Δt represents the time interval between adjacent frames;
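Equation (12) maps directly onto image moments; a minimal sketch (OpenCV assumed, contours assumed non-degenerate):

```python
import cv2
import numpy as np

def centroid(contour):
    """Centroid from geometric moments: (m10/m00, m01/m00), as in eq. (12)."""
    m = cv2.moments(contour)
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

def speed(prev_contour, cur_contour, dt):
    """V_cur: centroid displacement between adjacent frames divided by the
    frame interval dt, in image coordinates."""
    x0, y0 = centroid(prev_contour)
    x1, y1 = centroid(cur_contour)
    return float(np.hypot(x1 - x0, y1 - y0)) / dt
```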
step S014, coarsely extracting the candidate targets using the static decision criterion;
step S015, screening out the target to be detected: calculating the contour perimeter, area and motion speed of each suspected target remaining after the coarse extraction of the static decision criterion, rejecting false candidates according to the dynamic decision criterion, and screening out the real target. Note that if the current frame is the second frame image after target selection, the static decision criterion alone screens out the target accurately, because the target occupies few pixels in the first frame, the manually selected target segments cleanly, and the time interval between adjacent frames is short;
step S016, judging whether the current frame image is the second frame image after target selection, if so, executing step S017; otherwise, jumping to step S018 for execution;
Step S017, establishing the dynamic decision criterion. From the target dynamic feature set, the dynamic decision criterion is constructed as:

[Equation (13), the dynamic decision criterion DP, is reproduced only as an image in the original document.]

where DP is the dynamic decision criterion; S_cur and C_cur are respectively the contour area and perimeter of each suspected target in the current frame image; V_cur is the motion speed of each suspected target in the current frame image; Th_cur is the adaptive segmentation threshold to be updated for the current frame; P_cur is the ROI position to be updated for the current frame image; further parameters indicate the degree of tolerance to changes in the dynamic feature variables; and S_pre, C_pre, V_pre, Th_pre and P_pre are the corresponding variables of the previous frame;
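Since equation (13) survives only as an image, the check below is one plausible tolerance-band reading of DP; it is pure Python, the dictionary keys are my own naming, and the ROI-position term is omitted:

```python
def dynamic_criterion(cur, pre, tol=0.5):
    """Assumed reading of DP: each dynamic feature of the current frame must
    stay within a relative tolerance band around its previous-frame value.
    cur/pre map "S" (area), "C" (perimeter), "V" (speed) and "Th"
    (segmentation threshold) to floats."""
    for key in ("S", "C", "V", "Th"):
        if pre[key] != 0 and abs(cur[key] / pre[key] - 1.0) > tol:
            return False
    return True
```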
Step S018, updating the feature parameter set of the target to be detected; after the real target feature set is updated, executing step S009 to display the result image.
The foregoing illustrates and describes the principles, general features and advantages of the present invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the embodiments and the description only illustrate the principle of the invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, all of which fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.

Claims (1)

1. An infrared target detection and tracking method is characterized by comprising the following steps:
step S001, selecting an input source of an infrared target detection and tracking algorithm: the input source format comprises an offline infrared sequence image, an offline video file and a real-time video stream under a scene to be detected and tracked, which are acquired by an infrared imaging device;
step S002, reading and displaying an input image: reading a frame of image from the selected input source and displaying the image on the display;
step S003, judging whether the target to be detected has been manually selected with the mouse; if the target to be tracked appears, manually selecting it with the mouse, generating an ROI image, and entering step S004; otherwise, jumping to step S011;
step S004, judging whether the image of the current frame is the first frame image after the target to be detected is selected, and if the current frame is the first frame image after the target is selected, entering the step S005; otherwise, executing S011;
step S005, coarse extraction of suspected targets to be detected: segmenting the ROI image with a watershed segmentation algorithm and the maximum between-class variance method, filtering the background in the ROI image, and obtaining a set of suspected targets containing the target to be detected, with the following specific steps:
1) processing the generated ROI image by using a watershed segmentation algorithm, then filling the foreground and the background in the segmented binary image into 0 and 1 respectively, and recording the filled image as Imask;
2) restoring a segmented foreground image Ifront by taking Imask as a mask and the generated ROI as an original image, wherein the Ifront, the Imask and the generated ROI have the same resolution;
3) further segmenting Ifront by using a maximum inter-class variance method, filling the binarized image which is further segmented by using the maximum inter-class variance method, and recording a processing result as Ifill;
step S006, precisely extracting a suspected target to be detected: calculating the outline area of each suspected target to be detected, sequencing the outline areas, and determining the suspected target to be detected corresponding to the maximum area value as a real target to be detected;
step S007, calculating the set of real static feature parameters of the target to be detected and constructing a static decision criterion, wherein the set of static feature parameters comprises the compactness, rectangularity and geometric invariant moments of the target; the calculation and the construction of the static decision criterion proceed as follows:
calculating the compactness, rectangularity and geometric invariant moment of the real target to be detected to form the static feature set, and establishing the static decision criterion; each index is calculated as follows:
J = L²/(4πA),  R = A/S_mer,  Hu = (M_1, M_2, …, M_7)   (1)
where L is the perimeter of the target contour, A is the area of the target contour, and S_mer is the area of the circumscribed rectangle of the target contour boundary; M_i is calculated as follows:
M_1 = η_20 + η_02
M_2 = (η_20 − η_02)² + 4η_11²
M_3 = (η_30 − 3η_12)² + (3η_21 − η_03)²
M_4 = (η_30 + η_12)² + (η_21 + η_03)²
M_5 = (η_30 − 3η_12)(η_30 + η_12)[(η_30 + η_12)² − 3(η_21 + η_03)²] + (3η_21 − η_03)(η_21 + η_03)[3(η_30 + η_12)² − (η_21 + η_03)²]
M_6 = (η_20 − η_02)[(η_30 + η_12)² − (η_21 + η_03)²] + 4η_11(η_30 + η_12)(η_21 + η_03)
M_7 = (3η_21 − η_03)(η_30 + η_12)[(η_30 + η_12)² − 3(η_21 + η_03)²] − (η_30 − 3η_12)(η_21 + η_03)[3(η_30 + η_12)² − (η_21 + η_03)²]   (2)
the calculation formula of each variable in the formula (2) is as follows:
m_jk = Σ_{x=1}^{R_row} Σ_{y=1}^{C_col} x^j · y^k · I(x, y)
μ_jk = Σ_{x=1}^{R_row} Σ_{y=1}^{C_col} (x − x̄)^j · (y − ȳ)^k · I(x, y),  with x̄ = m_10/m_00, ȳ = m_01/m_00
η_jk = μ_jk / μ_00^(1+(j+k)/2)   (3)
where m_jk, μ_jk and η_jk are respectively the geometric moments, central moments and normalized central moments of order j + k of image I(x, y), R_row and C_col are the numbers of rows and columns of image I(x, y), and j, k = 0, 1, 2, 3, …,
calculating by using the formula (3) to obtain a target static feature set, and constructing a static decision criterion according to the target static feature set as follows:
[Equation (4), the static decision criterion SP, is reproduced only as an image in the original document.]
where SP is the static decision criterion, J is the compactness, R is the rectangularity, Hu is the geometric invariant moment, α, β and γ are the weights of the three indexes, and λ_i represents the tolerance to the static feature parameters, setting λ_1 = λ_3 = λ_5 = 0.5 and λ_2 = λ_4 = λ_6 = 1.5;
step S008, predicting the ROI coordinates of the next frame image, with the following specific process:
(1) calculating the circumscribed rectangle of the target contour in the current frame image, recording the upper-left and lower-right corner coordinates of the rectangle as P_reclu(x, y) and P_recrd(x, y), its length as H_rec and its width as W_rec;
(2) according to the direction and speed of the target motion track and the current-frame target centroid position (x_cur, y_cur), calculating the predicted ROI centroid position of the next frame (x̂_next, ŷ_next);
(3) determining the ROI position taking the target's own scale information into account:

P_roilu(x, y) = (x̂_next − λ_7·W_rec, ŷ_next − λ_8·H_rec)   (5)

P_roird(x, y) = (x̂_next + λ_7·W_rec, ŷ_next + λ_8·H_rec)   (6)

where P_roilu is the upper-left corner coordinate of the ROI, P_roird is the lower-right corner coordinate of the ROI, and λ_7 and λ_8 are greater than 0.5;
step S009, displaying the real target image to be detected: according to the position of the real target to be detected, a rectangular tracking frame is used for circling out the real target to be detected, and a real target image to be detected is displayed on a display device;
step S010, judging whether the reading of the image in the input source is finished, and if the reading is finished, finishing the infrared target detection and tracking task; otherwise, jumping to step S002 to continue execution;
step S011, acquiring and generating a predicted ROI image: taking the current frame image as a parent image, and generating a sub-image according to the ROI image coordinate obtained by predicting the previous frame, wherein the sub-image is the ROI image of the current frame image;
step S012, processing the ROI image with an improved adaptive threshold segmentation algorithm: the threshold of the improved adaptive threshold segmentation algorithm is computed from the gray mean and variance inside the circumscribed rectangle of the real target's contour in the previous frame image, together with the minimum deviation between that rectangle's gray mean and the gray means of its eight neighborhood sub-regions, specifically as follows:
setting rectangle ABCD as the circumscribed rectangle of the target contour and extending sides AB, CD, AD and BC divides the ROI into nine sub-regions, the middle sub-region being the target's minimum circumscribed rectangle and the other sub-regions its eight neighborhoods; the contribution of the eight neighborhoods to the adaptive segmentation threshold is obtained as follows:
(1) determining the range of each neighborhood subregion:
the eight neighborhoods of rectangle ABCD, starting from the upper-left corner and proceeding clockwise for one turn, are named R_i, i ∈ [1, 8]; the edge transition band is chosen as 3 pixels, and the coordinate range of each neighborhood sub-region is calculated;
(2) calculating a rough segmentation threshold value:
the rough segmentation threshold T_temp is approximated by the mean and variance of the pixel gray levels within the rectangular area:

T_temp = μ_tar + ε·δ²_tar   (7)
where μ_tar and δ²_tar respectively represent the gray mean and variance within the rectangular area, and ε takes a positive number;
(3) calculating the average gray scale of each neighborhood subregion:
Ḡ_i = (1/S_i) · Σ_{m=1}^{S_i} G_im   (8)
where i ∈ [1, 8], G_im is the gray value of the m-th pixel in sub-region R_i, and S_i is the total number of pixels in sub-region R_i;
(4) solving the gray difference between each neighborhood subregion and the circumscribed rectangle region:
D_i = |Ḡ_i − μ_tar|,  i ∈ [1, 8]   (9)
D_min = min(D_1, D_2, D_3, D_4, D_5, D_6, D_7, D_8)   (10)
(5) determining the threshold:
step (4) yields the minimum deviation D_min between the gray mean of the circumscribed rectangle of the target contour and those of the eight neighborhood sub-regions; the local adaptive segmentation threshold T_next of the next frame is then calculated as:

T_next = T_temp + D_min = μ_tar + ε·δ²_tar + D_min   (11);
step S013, calculating the multi-feature set of each suspected target to be detected: the multi-feature set comprises the static feature set together with the contour area, perimeter, speed, adaptive segmentation threshold and predicted ROI image coordinates of each suspected target; the specific process of calculating the feature set of each suspected target is as follows:
the feature set consists of a static feature set and a dynamic feature set; the dynamic feature set consists of each suspected target's contour area, perimeter, speed, adaptive segmentation threshold and predicted ROI image coordinates; solving the motion speed requires the centroid positions of the infrared target in adjacent frames, which are obtained from the geometric moments:

x̄_pre = m′_10/m′_00,  ȳ_pre = m′_01/m′_00,  x̄_cur = m_10/m_00,  ȳ_cur = m_01/m_00

V_cur = √((x̄_cur − x̄_pre)² + (ȳ_cur − ȳ_pre)²) / Δt   (12)

where (x̄_pre, ȳ_pre) are respectively the abscissa and ordinate of the previous-frame target centroid, (x̄_cur, ȳ_cur) are respectively the abscissa and ordinate of the current-frame target centroid, (m′_00, m′_10, m′_01) and (m_00, m_10, m_01) are respectively the geometric moments of order 0+0, 1+0 and 0+1 of the previous frame and of the current frame of image I(x, y), V_cur represents the motion speed of the target in the current-frame image coordinate system, and Δt represents the time interval between adjacent frames;
step S014, roughly extracting each suspected target to be detected by using a static decision criterion;
step S015, screening out the target to be detected: calculating the contour perimeter, area and motion speed of each suspected target remaining after coarse extraction by the static decision criterion, rejecting false candidates according to the dynamic decision criterion, and screening out the real target to be detected; if the current frame is the second frame image after target selection, the real target to be detected is screened out using the static decision criterion alone;
step S016, judging whether the current frame image is the second frame image after target selection, if so, executing step S017; otherwise, jumping to step S018 for execution;
step S017, establishing the dynamic decision criterion, specifically as follows:

[Equation (13), the dynamic decision criterion DP, is reproduced only as an image in the original document.]

where DP is the dynamic decision criterion; S_cur and C_cur are respectively the contour area and perimeter of each suspected target in the current frame; V_cur is the motion speed of each suspected target; Th_cur is the adaptive segmentation threshold to be updated for the current frame; P_cur is the ROI position to be updated for the current frame; further parameters indicate the degree of tolerance to changes in the dynamic feature variables; and S_pre, C_pre, V_pre, Th_pre and P_pre are the corresponding variables of the previous frame;
and S018, updating the real target feature set to be detected, and executing the step S009 after the real target feature set to be detected is updated.
CN201910073794.7A 2019-01-25 2019-01-25 Infrared target detection and tracking method Expired - Fee Related CN109902578B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910073794.7A CN109902578B (en) 2019-01-25 2019-01-25 Infrared target detection and tracking method

Publications (2)

Publication Number Publication Date
CN109902578A CN109902578A (en) 2019-06-18
CN109902578B (en) 2021-01-08

Family

ID=66944190

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910073794.7A Expired - Fee Related CN109902578B (en) 2019-01-25 2019-01-25 Infrared target detection and tracking method

Country Status (1)

Country Link
CN (1) CN109902578B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443247A (en) * 2019-08-22 2019-11-12 中国科学院国家空间科学中心 A kind of unmanned aerial vehicle moving small target real-time detecting system and method
CN111784743B (en) * 2020-07-03 2022-03-29 电子科技大学 Infrared weak and small target detection method
CN112288767A (en) * 2020-11-04 2021-01-29 成都寰蓉光电科技有限公司 Automatic detection and tracking method based on target adaptive projection
CN113537237B (en) * 2021-06-25 2024-01-16 西安交通大学 Multi-feature part quality information intelligent sensing method, system and device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105784713A (en) * 2016-03-11 2016-07-20 南京理工大学 Sealing ring surface defect detection method based on machine vision

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101359372B (en) * 2008-09-26 2011-05-11 腾讯科技(深圳)有限公司 Training method and device of classifier, method and apparatus for recognising sensitization picture
CN102275723A (en) * 2011-05-16 2011-12-14 天津工业大学 Machine-vision-based online monitoring system and method for conveyer belt
CN102930558B (en) * 2012-10-18 2015-04-01 中国电子科技集团公司第二十八研究所 Real-time tracking method for infrared image target with multi-feature fusion
CN104361352A (en) * 2014-11-13 2015-02-18 东北林业大学 Solid wood panel defect separation method based on compressed sensing
CN104766334B (en) * 2015-04-21 2017-12-29 西安电子科技大学 Small IR targets detection tracking and its device
CN105354842B (en) * 2015-10-22 2017-12-29 武汉康美华医疗投资管理有限公司 A kind of profile key point registration and identification method based on stability region
CN105976403B (en) * 2016-07-25 2018-09-21 中国电子科技集团公司第二十八研究所 A kind of IR imaging target tracking method based on the drift of kernel function barycenter
CN108052942B (en) * 2017-12-28 2021-07-06 南京理工大学 Visual image recognition method for aircraft flight attitude

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105784713A (en) * 2016-03-11 2016-07-20 南京理工大学 Sealing ring surface defect detection method based on machine vision

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A scene classification method based on multi-feature fusion; Li Zhixin et al.; Journal of Chinese Computer Systems (小型微型计算机系统); Vol. 39, No. 5, May 2018; pp. 1085-1091 *
Infrared target tracking algorithm based on multi-feature fusion and ROI prediction; Liu Hui et al.; Acta Photonica Sinica (光子学报); July 2019; pp. 0710004-1 to 0710004-16 *
Three-dimensional surface shape measurement by fiber-optic interference fringe projection based on the spatio-temporal fringe method; Li Haoyu et al.; Laser & Optoelectronics Progress (激光与光电子学进展); No. 10, October 2018; pp. 185-191 *

Also Published As

Publication number Publication date
CN109902578A (en) 2019-06-18

Similar Documents

Publication Publication Date Title
CN109902578B (en) Infrared target detection and tracking method
CN107292911B (en) Multi-target tracking method based on multi-model fusion and data association
CN108805897B (en) Improved moving target detection VIBE method
CN107633226B (en) Human body motion tracking feature processing method
CN107273905B (en) Target active contour tracking method combined with motion information
CN110490907B (en) Moving target tracking method based on multi-target feature and improved correlation filter
CN106709472A (en) Video target detecting and tracking method based on optical flow features
CN110766676B (en) Target detection method based on multi-source sensor fusion
CN107590427B (en) Method for detecting abnormal events of surveillance video based on space-time interest point noise reduction
CN113449606B (en) Target object identification method and device, computer equipment and storage medium
CN111886600A (en) Device and method for instance level segmentation of image
CN110555868A (en) method for detecting small moving target under complex ground background
CN113312973B (en) Gesture recognition key point feature extraction method and system
KR101690050B1 (en) Intelligent video security system
CN113379789B (en) Moving target tracking method in complex environment
Xu et al. BgCut: automatic ship detection from UAV images
CN109493370B (en) Target tracking method based on space offset learning
CN109241981B (en) Feature detection method based on sparse coding
Wan et al. Automatic moving object segmentation for freely moving cameras
CN112613565A (en) Anti-occlusion tracking method based on multi-feature fusion and adaptive learning rate updating
CN110751671B (en) Target tracking method based on kernel correlation filtering and motion estimation
CN110322474B (en) Image moving target real-time detection method based on unmanned aerial vehicle platform
CN113920168A (en) Image tracking method in audio and video control equipment
Sujatha et al. An innovative moving object detection and tracking system by using modified region growing algorithm
Jebelli et al. Efficient robot vision system for underwater object tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210108