CN115205655B - Infrared dim point target detection system under dynamic background and detection method thereof - Google Patents
- Publication number: CN115205655B (application CN202211118428.7A)
- Authority: CN (China)
- Prior art keywords: target, characteristic, infrared, point, image
- Legal status: Active (assumed; not a legal conclusion)
Classifications
- G06V10/82 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
- G06V2201/07 — Indexing scheme relating to image or video recognition or understanding: target detection
Landscapes
- Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the technical field of image processing, and in particular to an infrared dim point target detection system and detection method under a dynamic background. The detection method comprises the following steps: S1, splitting an infrared sequence image under dynamic background change into subsequences, each with a stage-wise static background; S2, suppressing background clutter in the subsequence images and enhancing the targets in them through a fully convolutional spatio-temporal fusion background suppression network; S3, detecting targets in the background-suppressed images with an infrared point target detection network based on an improved YOLOv5; and S4, confirming the targets through a track association matching algorithm and removing false alarms to obtain the real targets. The invention can accurately distinguish targets from noise, background residue and interfering objects.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to an infrared dim point target detection system and detection method under a dynamic background.
Background
When an infrared detection system observes air-to-air or air-to-ground targets at long range (usually tens or even hundreds of kilometers), atmospheric disturbance, optical scattering and diffraction leave the spectral irradiance that the focal plane receives from the target very small. The target therefore has a low signal-to-noise ratio and a small imaging footprint (a point or spot occupying only a few pixels of the whole scene) with no shape or texture information, and it is easily submerged in background clutter and noise. Moreover, in practical applications detection is often performed from a moving platform, and the resulting dynamic update of the background greatly increases the detection difficulty. How to detect dim infrared targets effectively under a dynamic background has therefore become a research hotspot in the detection field worldwide.
Disclosure of Invention
In view of the above problems, an object of the present invention is to provide an infrared dim point target detection system and detection method under a dynamic background that combine conventional methods with deep learning in a jointly "knowledge and data" driven manner, so that effective target information can be extracted from real clutter and interference at all times.
Compared with the prior art, the method has the following advantages:
(1) The invention provides a fully convolutional spatio-temporal fusion background suppression network that simultaneously copes with targets that remain still for a long time and with reduced target-background gray-level contrast;
(2) The invention provides an infrared point target detection network based on an improved YOLOv5 to replace conventional threshold segmentation. Instead of using the (background-suppressed) gray value as the criterion for discriminating targets, it exploits the strong feature extraction capability of a neural network to fully learn the shape, structure, texture and gray-level attributes of the target, so that targets can be accurately separated from noise points, background residue and interfering objects;
(3) The fully convolutional spatio-temporal fusion background suppression network and the improved-YOLOv5 infrared point target detection network are trained and run end to end. Because background suppression and detection are correlated, their objective (loss) functions constrain and supervise each other, which avoids parameter overfitting and improves network robustness.
Drawings
Fig. 1 is a logic framework diagram of the method for detecting infrared dim point targets under a dynamic background according to an embodiment of the present invention.
Fig. 2 is a schematic flowchart of the method for detecting infrared dim point targets under a dynamic background according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of the backbone structure of the spatio-temporal fusion background suppression network in the method according to an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of the feature extraction network of the infrared point target detection network in the method according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of a target detection result of the method according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the following description, the same reference numerals are used for the same blocks. In the case of the same reference numerals, their names and functions are also the same. Therefore, detailed description thereof will not be repeated.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
Fig. 1 shows the logic framework of the method for detecting infrared dim point targets under a dynamic background according to an embodiment of the present invention.
Fig. 2 shows the flow of this method.
As shown in fig. 1 and fig. 2, the method for detecting infrared dim point targets under a dynamic background provided by the present invention includes the following steps:
S1, splitting the infrared sequence image under dynamic background change into subsequences, each with a stage-wise static background.
When the target is about to leave the field of view, the infrared image sensor follows it so that the target stays inside the observation area at all times; the background or scene therefore changes in stages (as distinguished from slow, continuous updating). During such a move the target may disappear briefly, reappear, and undergo a large displacement. The method therefore combines the perceptual hash and the difference hash to measure the similarity between adjacent frames and judge whether the infrared image sensor has moved, and splits the dynamically changing sequence into subsequences whose background or scene is static within each stage according to the moments of movement. This improves the adaptability of the subsequent enhancement and reduces the difficulty of detection and tracking.
Step S1 includes the following substeps:
S11, calculating the similarity between adjacent frames of the infrared sequence image to judge whether the infrared image sensor has moved, and recording the moment of movement if it has.
Step S11 includes the following substeps:
S110, a preprocessing step: set the following parameters: perceptual hash threshold pth, difference hash threshold dth, continuous change threshold sth (e.g. 5), and split threshold ssth (e.g. 100).
S111, calculating the perceptual hash value and the difference hash value of the current frame and the previous frame of the infrared sequence image F_N(i, j), and obtaining the perceptual hash difference p and the difference hash difference d between the two frames.
Here N is the original length of the infrared image sequence.
S112, when the perceptual hash difference p is greater than the perceptual hash threshold pth and the difference hash difference d is greater than the difference hash threshold dth, it is judged that the scene has changed at the current frame, i.e. the infrared image sensor has moved, and the frame number of the current frame is stored in a scene change set B.
S113, sorting the scene change set B by time (frame number).
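The similarity check of steps S110-S113 can be sketched as follows. This is a minimal sketch in Python, assuming a conventional 8 × 8 DCT perceptual hash, an 8 × 8 difference hash, and the Hamming distance between hashes as the "difference"; the threshold values pth and dth are illustrative, since the text only gives example values for sth and ssth.

```python
import cv2
import numpy as np

def phash(img, hash_size=8):
    """Perceptual hash: DCT of a 32x32 thumbnail, thresholded at the
    median of its top-left (low-frequency) block."""
    small = cv2.resize(img, (32, 32)).astype(np.float32)
    block = cv2.dct(small)[:hash_size, :hash_size]
    return (block > np.median(block)).flatten()

def dhash(img, hash_size=8):
    """Difference hash: sign of horizontal gradients on a 9x8 thumbnail."""
    small = cv2.resize(img, (hash_size + 1, hash_size)).astype(np.float32)
    return (small[:, 1:] > small[:, :-1]).flatten()

def scene_change_set(frames, pth=10, dth=12):
    """Step S112: collect the frame numbers where both hash distances to
    the previous frame exceed their thresholds, i.e. the infrared image
    sensor is judged to have moved."""
    p_hashes = [phash(f) for f in frames]
    d_hashes = [dhash(f) for f in frames]
    B = []
    for k in range(1, len(frames)):
        p = np.count_nonzero(p_hashes[k] != p_hashes[k - 1])
        d = np.count_nonzero(d_hashes[k] != d_hashes[k - 1])
        if p > pth and d > dth:
            B.append(k)
    return B  # already sorted by time, as step S113 requires
```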
S12, removing the continuously changing frames produced while the infrared image sensor moves, and determining the start frame and end frame of each subsequence.
If the difference between the current frame number and the previous frame number in the scene change set B is greater than the continuous change threshold sth, the current frame number is stored in a subsequence end set C, and C is sorted by time (frame number).
S13, when the length of a subsequence is less than ssth, it is not split off but kept in the previous subsequence, which yields the subsequence split set D.
A subsequence shorter than ssth usually stems from poor similarity caused by illumination change rather than genuine sensor movement. S14, splitting the infrared sequence image F_N(i, j) according to the subsequence split set D into infrared subsequences F_N1(i, j), F_N2(i, j), F_N3(i, j), …, F_Nn(i, j); the boundary logic is sketched below.
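A minimal sketch of the boundary logic of steps S12-S14, under two assumptions the text leaves open: the first frame of each burst of consecutive changes in B is taken as a subsequence boundary, and removal of the motion-blurred frames themselves is omitted for brevity.

```python
def split_subsequences(B, n_frames, sth=5, ssth=100):
    """Turn the scene change set B into (start, end) frame ranges.
    S12: a change whose gap to the previous change exceeds sth starts a
    new subsequence; S13: ranges shorter than ssth are merged into the
    previous subsequence instead of being split off."""
    # keep the first change of each burst of consecutive changes
    C = [B[i] for i in range(len(B)) if i == 0 or B[i] - B[i - 1] > sth]
    D, start = [], 0
    for end in C + [n_frames]:
        if end - start < ssth and D:   # too short: keep in previous subsequence
            D[-1] = (D[-1][0], end)
        else:
            D.append((start, end))
        start = end
    return D  # subsequence split set D, e.g. [(0, 412), (412, 980), ...]
```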
S2, suppressing background clutter in the subsequence images and enhancing the targets in them through the fully convolutional spatio-temporal fusion background suppression network.
The basic idea of background prediction and subtraction is that background pixels are spatially correlated, unlike target pixels, so the predicted background can be subtracted from the original image. The invention proposes a fully convolutional spatio-temporal fusion background suppression network whose main idea is to use the temporal and spatial information of the infrared image sequence simultaneously, letting the neural network fully mine the morphological, gray-level and motion characteristics of the target.
Step S2 includes the following substeps:
S20, splitting the subsequence image into T segments of t frames each.
S21, feeding each t-frame segment into the fully convolutional spatio-temporal fusion background suppression network as a single t-channel image for training, obtaining a background prediction model.
Long-range detection easily reduces the target-background gray contrast until the target is submerged in the background and too dim to see. To counter this, the invention exploits the temporal information of the sequence: the i-th stage-wise static subsequence is split into T segments of t frames, and the t infrared frames are input as one t-channel image into the network for training. The resulting background prediction model computes a background suppression component for every pixel of the input image; ideally the output image has the background and noise completely removed while the real target is enhanced with a higher signal-to-noise ratio.
S22, using the background prediction model to compute the background suppression component of each pixel of the subsequence images, obtaining output images with background and noise removed and the target enhanced.
Fig. 3 shows the backbone structure of the spatio-temporal fusion background suppression network in the method for detecting infrared dim point targets under a dynamic background according to an embodiment of the present invention.
As shown in fig. 3, the network consists of six convolutional layers. Because a point target occupies only a few pixels, no pooling layers or downsampling are used; all convolution kernels are 3 × 3, and the numbers of kernels in the six layers are 8, 16, 32, 64, 128 and t (so that the output feature map matches the number of input channels). The feature maps of the n-th and (n-2)-th layers are fused directly, taking both target detail and strong semantic information into account. The network is fully convolutional, so the sizes of the input and output images are unconstrained.
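A minimal PyTorch sketch of this backbone is given below. The text does not specify the activation functions or exactly how the n-th and (n-2)-th feature maps are "directly fused"; LeakyReLU activations and fusion by a 1 × 1 projection plus addition are assumptions.

```python
import torch
import torch.nn as nn

class STBackgroundSuppressionNet(nn.Module):
    """Fully convolutional spatio-temporal background prediction backbone:
    six 3x3 conv layers with 8/16/32/64/128/t kernels, no pooling or
    downsampling, so input size is unconstrained (Fig. 3)."""
    def __init__(self, t=5):
        super().__init__()
        widths = [8, 16, 32, 64, 128, t]
        in_ch = [t] + widths[:-1]
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch[n], widths[n], 3, padding=1) for n in range(6))
        # 1x1 projections so the (n-2)-th map matches the n-th map's width
        self.proj = nn.ModuleList(
            nn.Conv2d(widths[n - 2], widths[n], 1) for n in range(2, 6))
        self.act = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x):              # x: (B, t, H, W), t frames as channels
        feats = []
        for n, conv in enumerate(self.convs):
            y = conv(x)
            if n >= 2:                 # fuse the n-th and (n-2)-th feature maps
                y = y + self.proj[n - 2](feats[n - 2])
            if n < 5:                  # last layer outputs the raw prediction
                y = self.act(y)
            feats.append(y)
            x = y
        return y                       # predicted t-channel background

# usage: the background-suppressed segment is the residual
# clip = torch.randn(1, 5, 256, 320)  # one t-frame segment
# suppressed = clip - STBackgroundSuppressionNet(t=5)(clip)
```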
S3, detecting targets in the background-suppressed images with the infrared point target detection network based on the improved YOLOv5.
In the conventional background-suppression-based detection pipeline, a gray threshold is set after background subtraction and target enhancement to screen and segment the true targets. However, when the target signal-to-noise ratio is low, a high gray threshold may lose dim targets, while a low threshold may produce a large number of false alarms; even adaptive threshold segmentation (i.e. deriving the segmentation threshold from the image's gray mean and variance) only mitigates this to a certain extent. The invention therefore replaces threshold segmentation with a neural network: rather than judging targets by gray value, it uses the network's strong feature extraction capability to fully learn the shape, structure, texture and gray-level attributes of the target, aiming to separate targets from noise points, background residue and interfering objects.
Although the classical YOLOv5 detection network performs well in robustness and accuracy, it still hits a bottleneck on infrared small targets whose features are inconspicuous.
Fig. 4 shows the structure of the feature extraction network of the infrared point target detection network in the method for detecting infrared dim point targets under a dynamic background according to an embodiment of the present invention.
As shown in fig. 4, the invention proposes an infrared point target detection network based on an improved YOLOv5.
Step S3 includes the following substeps:
S301, establishing an infrared point and spot target data set;
S302, directly reducing the number of layers (convolutional and pooling), the number of residual structures, and the number of receptive-field scales in the SPP of the YOLOv5 feature extraction network; introducing a self-attention mechanism by embedding a cascaded channel attention module and spatial attention module into the network (before each downsampling); this yields the improved YOLOv5 infrared point target feature extraction network.
S303, inputting the data set images of step S301 into the improved YOLOv5 feature extraction network to obtain feature maps at different scales;
S304, classifying the feature maps obtained in step S303 and performing bounding box regression to compute the loss;
S305, after training the improved YOLOv5 infrared point target detection network, testing it on the test split of the data set to detect infrared point targets and evaluate the detection performance.
Step S302 is described in detail as follows:
the invention redesigns the feature extraction part in the Yolov5 network (the network structure design is shown in detail in fig. 4). Because the point/spot target information occupies few pixels and is difficult to transmit to the high layer of the network, even though the Yolov5 adopts compensation fusion strategies such as a characteristic pyramid (FPN) and the like, the problems can not be effectively solved: (1) The spatial hierarchy and data structure information which are lost in the down-sampling process are difficult to effectively recover through simple up-sampling; (2) Coarse fusion does not necessarily significantly improve feature characterization capability. According to the method, a Yolov5 feature extraction network is used as a blueprint, the number of network layers (convolution layers and pooling layers), the number of residual network structures and the number of different scale receptive fields in the SPP are directly reduced, a self-Attention mechanism (Attention) is introduced, a channel Attention module and a space Attention module are cascaded and embedded into the network (before downsampling each time), information flow in the network is regulated and controlled through correlation of modeling feature channels and spaces, the weight and representation of useful information are enhanced, and a strong semantic feature map capable of fully representing small targets is generated, so that high-quality detection is realized.
Reducing the number of layers, residual structures and SPP receptive-field scales of the YOLOv5 feature extraction network ensures that point/spot target information, which occupies few pixels, reaches the deep layers of the network, and avoids information loss, coarse fusion, and skip connections that confuse the feature representation.
The self-attention mechanism, embedded as a cascaded channel attention module and spatial attention module before each downsampling, regulates the information flow by modeling the correlations of feature channels and spatial locations, strengthens the weight and representation of useful information, and generates strongly semantic feature maps that adequately characterize small targets.
Step S302 includes the following substeps:
S30201: an image A of size H × W × 1 (height, width, channels) is input and processed by a Focus layer (slicing, concatenation, convolution) to output a feature Q1 of size H × W × 32;
S30202: Q1 is processed by a Conv layer (convolution, batch normalization, SiLU activation) to output a feature Q2 of size H/2 × W/2 × 64;
S30203: Q2 is processed by a Bottleneck (two convolutions) to output a feature Q3 of size H/2 × W/2 × 64;
S30204: Q3 is processed by an Attention module (described below) to output a feature Q4 of size H/2 × W/2 × 64;
S30205: Q4 is processed by a Conv layer to output a feature Q5 of size H/4 × W/4 × 128;
S30206: Q5 is processed by 3 BottleneckCSP layers (convolution, Bottleneck, Conv, concatenation, batch normalization, LeakyReLU activation) to output a feature Q6 of size H/4 × W/4 × 128;
S30207: Q6 is processed by an Attention module to output a feature Q7 of size H/4 × W/4 × 128;
S30208: Q7 is processed by a Conv layer to output a feature Q8 of size H/8 × W/8 × 256;
S30209: Q8 is processed by an SPP layer (Conv, parallel max pooling at different scales, concatenation) to output a feature Q9 of size H/8 × W/8 × 256;
S30210: Q9 is processed by 1 BottleneckCSP layer to output a feature Q10 of size H/8 × W/8 × 256;
S30211: Q10 is processed by an Attention module to output a feature Q11 of size H/8 × W/8 × 256;
S30212: Q11 is processed by 1 BottleneckCSP layer to output a feature Q12 of size H/8 × W/8 × 256, used as the final feature map for 30 × 30 pixel targets;
S30213: Q12 is upsampled, concatenated with Q7, and processed by a Conv layer to output a feature Q13 of size H/4 × W/4 × 128;
S30214: Q13 is processed by 1 BottleneckCSP layer to output a feature Q14 of size H/4 × W/4 × 128, used as the final feature map for 10 × 10 pixel targets;
S30215: Q14 is upsampled, concatenated with Q4, and processed by a Conv layer to output a feature Q15 of size H/2 × W/2 × 64;
S30216: Q15 is processed by 1 BottleneckCSP layer to output a feature Q16 of size H/2 × W/2 × 64, used as the final feature map for 5 × 5 pixel targets. The dataflow is sketched below.
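The dataflow of steps S30201-S30216 can be rendered as the following PyTorch sketch. The internals of Conv, Bottleneck, BottleneckCSP and SPP follow common YOLOv5 definitions; because S30201 states that the Focus output keeps the full H × W resolution (stock YOLOv5 Focus halves it), a PixelShuffle is assumed here to restore resolution after the space-to-depth slicing. The Attention stand-in should be replaced by the CBAM-style module sketched after the attention steps below.

```python
import torch
import torch.nn as nn

class ConvBNSiLU(nn.Sequential):
    """The standard YOLOv5 'Conv' block: convolution + BN + SiLU."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__(nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False),
                         nn.BatchNorm2d(c_out), nn.SiLU(inplace=True))

class Focus(nn.Module):
    """Slice + concat + conv; PixelShuffle restores H x W (assumption)."""
    def __init__(self, c_in=1, c_out=32):
        super().__init__()
        self.conv = ConvBNSiLU(4 * c_in, 4 * c_out, k=3)
        self.shuffle = nn.PixelShuffle(2)
    def forward(self, x):
        x = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                       x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)
        return self.shuffle(self.conv(x))        # Q1: (B, 32, H, W)

class Bottleneck(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.m = nn.Sequential(ConvBNSiLU(c, c, 1), ConvBNSiLU(c, c, 3))
    def forward(self, x):
        return x + self.m(x)

class BottleneckCSP(nn.Module):
    def __init__(self, c, n=1):
        super().__init__()
        self.a = ConvBNSiLU(c, c // 2, 1)
        self.b = nn.Conv2d(c, c // 2, 1, bias=False)
        self.m = nn.Sequential(*[Bottleneck(c // 2) for _ in range(n)])
        self.out = nn.Sequential(nn.BatchNorm2d(c), nn.LeakyReLU(0.1),
                                 ConvBNSiLU(c, c, 1))
    def forward(self, x):
        return self.out(torch.cat([self.m(self.a(x)), self.b(x)], dim=1))

class SPP(nn.Module):
    def __init__(self, c, ks=(5, 9, 13)):
        super().__init__()
        self.a = ConvBNSiLU(c, c // 2, 1)
        self.pools = nn.ModuleList(nn.MaxPool2d(k, 1, k // 2) for k in ks)
        self.out = ConvBNSiLU(c // 2 * (len(ks) + 1), c, 1)
    def forward(self, x):
        x = self.a(x)
        return self.out(torch.cat([x] + [p(x) for p in self.pools], dim=1))

Attention = nn.Identity   # stand-in: swap in the CBAM module sketched below

class PointTargetBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.focus = Focus(1, 32)               # Q1: H x W x 32
        self.conv1 = ConvBNSiLU(32, 64, s=2)    # Q2: H/2 x W/2 x 64
        self.btn = Bottleneck(64)               # Q3
        self.att1 = Attention(64)               # Q4
        self.conv2 = ConvBNSiLU(64, 128, s=2)   # Q5: H/4 x W/4 x 128
        self.csp1 = BottleneckCSP(128, n=3)     # Q6
        self.att2 = Attention(128)              # Q7
        self.conv3 = ConvBNSiLU(128, 256, s=2)  # Q8: H/8 x W/8 x 256
        self.spp = SPP(256)                     # Q9
        self.csp2 = BottleneckCSP(256, n=1)     # Q10
        self.att3 = Attention(256)              # Q11
        self.csp3 = BottleneckCSP(256, n=1)     # Q12
        self.up = nn.Upsample(scale_factor=2, mode='nearest')
        self.conv4 = ConvBNSiLU(256 + 128, 128) # Q13
        self.csp4 = BottleneckCSP(128, n=1)     # Q14
        self.conv5 = ConvBNSiLU(128 + 64, 64)   # Q15
        self.csp5 = BottleneckCSP(64, n=1)      # Q16

    def forward(self, x):
        q4 = self.att1(self.btn(self.conv1(self.focus(x))))
        q7 = self.att2(self.csp1(self.conv2(q4)))
        q12 = self.csp3(self.att3(self.csp2(self.spp(self.conv3(q7)))))
        q13 = self.conv4(torch.cat([self.up(q12), q7], dim=1))
        q14 = self.csp4(q13)
        q16 = self.csp5(self.conv5(torch.cat([self.up(q14), q4], dim=1)))
        return q12, q14, q16                    # 30x30 / 10x10 / 5x5 maps
```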
the Attention module comprises a channel Attention operation and a space Attention operation;
channel attention operations were performed first:
s3001: inputting a characteristic E with the size of H multiplied by W multiplied by C (height, width and channel number), performing average pooling processing on spatial dimensions to output the characteristic E1, wherein the size of 1 multiplied by C is 1 multiplied by 1, and performing maximum pooling processing on the spatial dimensions to output the characteristic E2;
s3002: e1 is subjected to full connection and Relu activation function, then the size of output characteristic E1-1 is 1 × 1 × C1, C1= H × W × C/16, E2 is subjected to full connection and Relu activation function, then the size of output characteristic E2-1 of a layer is 1 × 1 × C1, and C1= H × W × C/16;
s3003: e1-1 outputs the characteristic E1-2 with the size of 1 multiplied by C after full connection, E2-1 outputs the characteristic E2-2 with the size of 1 multiplied by C after full connection;
s3004: e1-2 and E2-2 are added and the feature E3 is output through a Sigmoid function, and the size is 1 multiplied by C. Carrying out Hadamard product processing along the channel dimension with the input original characteristic E, and outputting a characteristic graph E4 with the size H multiplied by W multiplied by C;
and then, carrying out spatial attention operation on the feature map E4:
s3005: inputting a characteristic E4 with the size of H multiplied by W multiplied by C (height, width and channel number), performing average pooling processing on channel dimensions to output a characteristic E5, with the size of H multiplied by W multiplied by 1, performing maximum pooling processing on the channel dimensions to output a characteristic E6 with the size of H multiplied by W multiplied by 1;
s3006: e5 and E6 are spliced, and a characteristic E7 is output after passing through a convolution layer and a Sigmoid function, wherein the size of the characteristic E7 is H multiplied by W multiplied by 1;
s3007: e7 and E4 carry out Hadamard product processing along the space dimension, and output a final characteristic diagram E8 with the size of H multiplied by W multiplied by C.
S4, confirming the targets through the track association matching algorithm and removing false alarms to obtain the real targets.
After the above steps, the detection result still contains false alarms such as cloud edges, noise and debris in addition to the true targets. What distinguishes a target from false alarms is a stable, regular track: a long-range target forms a stable track across consecutive frames, debris mainly follows a free-fall trend, and noise and cloud-wisp interference are relatively random without a continuous motion trajectory. The method therefore first predicts the track from the continuity and regularity of the target motion, then judges the target from the positional relation of candidate points in adjacent frames, using nearest-neighbour association to associate the candidate points of the current frame with the predictions and confirmed detections of previous frames and thereby decide whether each candidate is a true target.
Step S4 includes the following substeps:
S41, predicting the track of the candidate target points of the current frame to obtain the predicted track position of each candidate point in the next frame.
Step S41 includes the following substeps:
S411, obtaining the centroid position (x, y) of each candidate target point in each frame's detection result.
S412, estimating the candidate target points with a probabilistic data association algorithm to obtain the predicted track value of each candidate point in the next frame.
S413, computing the centroid position (x', y') of each candidate point in the next frame from the predicted value with an unscented Kalman particle filter.
S42, associating the candidate target point of the current frame with the track prediction result of the historical frame and the confirmation detection result of the historical frame by a nearest neighbor association method;
if the association is successful, the candidate target point is a real target; and obtaining the detection result of each frame.
And if the association fails, the candidate target point is a false alarm.
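A minimal sketch of the gated nearest-neighbour association of step S42; the gating radius (in pixels) is an assumed tuning parameter, and the track predictions and confirmed detections are taken as lists of (x, y) centroids produced by step S41.

```python
import numpy as np

def associate(candidates, predicted, confirmed, gate=3.0):
    """A candidate point of the current frame is accepted as a true
    target if its nearest track prediction (or recently confirmed
    detection) lies within the gating radius; otherwise it is treated
    as a false alarm."""
    references = np.asarray(list(predicted) + list(confirmed), dtype=float)
    true_targets, false_alarms = [], []
    for pt in candidates:
        if len(references) == 0:
            false_alarms.append(pt)
            continue
        dists = np.linalg.norm(references - np.asarray(pt, dtype=float), axis=1)
        (true_targets if dists.min() <= gate else false_alarms).append(pt)
    return true_targets, false_alarms
```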
The method for detecting infrared dim point targets under a dynamic background further includes a preprocessing step S0 in which dead bright pixels in the infrared sequence images are removed by an automatic detection algorithm.
Detector units in an infrared camera that cannot sense light normally are called dead pixels, divided into dead dark pixels and dead bright pixels. A dead bright pixel is generally only one pixel in size, its brightness is unaffected by the surrounding pixels, and it stays essentially unchanged from frame to frame. The existing way of handling dead bright pixels is to know the model of the infrared image sensor, record and store the positions of its dead bright pixels in advance, and remove them according to the recorded positions. This is limiting in engineering practice: the data to be processed may come from a sensor of unknown model, and the number of dead bright pixels can grow over time as the sensor ages or is damaged during installation or use.
In step S0 the dead bright pixels in the infrared sequence images can be removed in an offline mode or an online mode.
The invention provides a robust automatic dead bright pixel detection algorithm with an offline mode and an online mode.
The dead bright pixel removal step in the offline mode is performed before step S1;
the dead bright pixel removal step in the online mode is performed between step S3 and step S4.
The offline mode is time-consuming and suits tasks with low real-time requirements. The automatic dead pixel detection algorithm in the offline mode comprises the following steps.
S01, for the infrared sequence image F_N(i, j), sorting the pixels of each frame by gray value from large to small and recording the coordinates of the first P (P > 100) pixels as the suspected dead bright pixel set.
S02, randomly selecting M (M > 1000) frames from F_N(i, j) and taking the intersection S(i, j) of their suspected sets as the dead bright pixel set;
if the target track in the infrared sequence image is known, the target positions are removed from S(i, j) accordingly;
if no target track is known a priori, the intersection must be taken again over several (> 3) infrared sequence images F_N(i, j) to avoid mistaking a (bright) target position for a dead bright pixel.
S03, computing the gray mean of the neighborhood of each dead bright pixel in every frame of F_N(i, j) and replacing the pixel with it, updating F_N(i, j) to f(i, j).
The neighborhood consists of the pixels directly above, below, left and right of the dead bright pixel.
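A minimal NumPy sketch of the offline steps S01-S03; the concrete values of P and M are illustrative, and frames are assumed to be 2-D grayscale arrays.

```python
import numpy as np

def offline_dead_pixel_set(frames, P=200, M=1000, rng=None):
    """S01-S02: in each frame the coordinates of the P brightest pixels
    form a suspect set; the intersection over M randomly chosen frames
    is taken as the dead bright pixel set S(i, j)."""
    rng = rng or np.random.default_rng(0)
    idx = rng.choice(len(frames), size=min(M, len(frames)), replace=False)
    dead = None
    for k in idx:
        f = frames[k]
        flat = np.argpartition(f.ravel(), -P)[-P:]       # top-P gray values
        coords = np.column_stack(np.unravel_index(flat, f.shape))
        suspects = set(map(tuple, coords))
        dead = suspects if dead is None else dead & suspects
    return dead

def replace_dead_pixels(frame, dead):
    """S03: replace each dead bright pixel with the gray mean of its
    4-neighbourhood (up/down/left/right)."""
    out = frame.astype(np.float32).copy()
    h, w = out.shape
    for i, j in dead:
        neigh = [frame[i + di, j + dj]
                 for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                 if 0 <= i + di < h and 0 <= j + dj < w]
        out[i, j] = np.mean(neigh)
    return out
```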
In the online mode, the dead bright pixel removal module is embedded in the target confirmation module and distinguishes noise, dead bright pixels and captured targets in the infrared sequence. Dead bright pixels are removed from the other segments based on the confirmation result of the first subsequence segment; the accuracy is slightly lower than the offline version, but the added time cost is negligible. The automatic dead pixel detection algorithm in the online mode comprises the following steps:
S011, applying adaptive threshold segmentation to the M (M > 1000) frames of the first subsequence segment and taking the intersection to obtain the dead bright pixel set S1(i, j).
That is, the threshold of each pixel is determined from a neighborhood window centered on it: a Gaussian-weighted convolution of the window (plus a constant offset) is taken as the threshold, pixels above the threshold are judged to be dead bright pixels, and their intersection over the frames forms S1(i, j).
S022, according to the track-association-based target confirmation result (of the first subsequence segment), deleting from S1(i, j) any dead bright pixels within a preset neighborhood radius of a target.
After track association and target confirmation on the detection result of the first segment, the target position coordinates are obtained; the neighborhood radius is 2 or 5 pixels.
S033, before track association of the n-th subsequence segment (n ≠ 1), checking against S1(i, j) whether a detection lies near the neighborhood of a dead bright pixel (neighborhood radius 0.5 or 1 pixel); if so, the point is considered a dead bright pixel and removed from the detection result.
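A minimal sketch of the online steps S011 and S022; the Gaussian window size and constant offset of the adaptive threshold are assumed tuning values, as is the default neighbourhood radius.

```python
import cv2
import numpy as np

def online_dead_pixel_set(frames, block=11, offset=10, M=1000):
    """S011: each frame of the first subsequence segment is adaptively
    thresholded (Gaussian-weighted neighbourhood mean plus a constant
    offset); pixels above threshold are suspects, and the intersection
    over the frames gives the dead bright pixel set S1(i, j)."""
    dead = None
    for f in frames[:M]:
        f = f.astype(np.float32)
        mask = f > (cv2.GaussianBlur(f, (block, block), 0) + offset)
        suspects = set(map(tuple, np.argwhere(mask)))
        dead = suspects if dead is None else dead & suspects
    return dead

def prune_near_targets(dead, targets, radius=2.0):
    """S022: drop suspected dead bright pixels within the preset
    neighbourhood radius of any confirmed target position."""
    return {p for p in dead
            if all(np.hypot(p[0] - t[0], p[1] - t[1]) > radius
                   for t in targets)}
```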
Fig. 5 shows a target detection result of the method for detecting infrared dim point targets under a dynamic background according to an embodiment of the present invention.
As shown in fig. 5, the invention detects well on infrared images of low resolution and on images whose feature and semantic information is inconspicuous after downsampling.
The invention also provides an infrared dim point target detection system under a dynamic background, which comprises an image subsequence splitting module, a background suppression module, a target detection module and a target confirmation module;
the image subsequence splitting module is used for performing subsequence splitting on the infrared sequence image under the dynamic background change;
the background suppression module is used for performing background clutter suppression on the subsequence image and enhancing a target in the subsequence image through a space-time fusion background suppression network based on full convolution;
the target detection module is used for detecting targets in the background-suppressed images with the infrared point target detection network based on the improved YOLOv5;
the target confirmation module confirms the targets through the track association matching algorithm and removes false alarms to obtain the real targets.
The image subsequence splitting module comprises an infrared image sensor monitoring unit, a continuous transformation frame removing unit and a subsequence splitting unit.
The infrared image sensor monitoring unit is used for judging whether the infrared image sensor moves or not by calculating the similarity between adjacent frame images in the infrared sequence image, and recording the moving moment if the infrared image sensor moves;
the infrared image sensor monitoring unit comprises a parameter setting subunit, a numerical value calculating subunit and an infrared image sensor moving moment recording subunit;
the parameter setting subunit is configured to set the following parameters: perceptual hash threshold pth, difference hash threshold dth, continuous change threshold sth and split threshold ssth;
the numerical calculation subunit is used to compute the perceptual hash value and difference hash value of the current frame and the previous frame of the infrared sequence image F_N(i, j), obtaining the perceptual hash difference p and difference hash difference d between the two frames;
where N is the sequence length of the infrared image sequence;
the infrared image sensor movement moment recording subunit is used for recording the frame number of the infrared image sensor in the scene change set B when the infrared image sensor moves;
in the infrared image sensor movement time recording subunit:
when the perceptual hash difference p is larger than the perceptual hash threshold pth and the difference hash difference d is larger than the difference hash threshold dth, judging that the infrared image sensor moves at the moment, and storing the frame number of the current frame into a scene change set B; sequencing the scene change set B according to time;
the continuous transformation frame removing unit is used for removing continuous transformation frames in the moving process of the infrared image sensor according to the scene change set B and determining a starting frame and an ending frame of each sub-sequence;
the subsequence splitting unit is used for splitting the infrared sequence image to obtain a subsequence;
in the sub-sequence splitting unit, when the length of the sub-sequence is smaller than a splitting threshold value ssth, the sub-sequence is not split, and the sub-sequence is reserved in the previous sub-sequence, so that a sub-sequence splitting set D is obtained;
the infrared sequence image F_N(i, j) is split according to the subsequence split set D into subsequences F_N1(i, j), F_N2(i, j), F_N3(i, j), …, F_Nn(i, j).
The background suppression module comprises a subsequence image dividing unit, a background prediction model construction unit and a detection result output unit;
the subsequence image division unit is used to divide the subsequence image into T segments of t frames each;
the background prediction model construction unit is used to feed each t-frame segment as a t-channel image into the fully convolutional spatio-temporal fusion background suppression network for training, obtaining a background prediction model;
the detection result output unit is used to compute, with the background prediction model, the background suppression component of each pixel of the subsequence images, obtaining output images with background and noise removed and the target enhanced.
The target confirmation module comprises a candidate target point prediction unit and a real target judgment unit;
the candidate target point prediction unit is used for predicting the track of the candidate target point in the current frame to obtain the predicted value of the candidate target point of the next frame;
the candidate target point prediction unit comprises a candidate target point centroid position calculation subunit, a candidate target point prediction value operator unit and a candidate target point next frame centroid position prediction subunit;
the candidate target point centroid position calculation subunit is used for acquiring a centroid position (x, y) of the candidate target point in each frame of detection results;
the candidate target point prediction value operator unit is used for estimating a candidate target point through a probability data association algorithm to obtain a track prediction value of the next frame of candidate target point;
the candidate target point next-frame centroid position prediction subunit computes the centroid position (x', y') of each candidate point in the next frame from the trajectory prediction value with an unscented Kalman particle filter, obtaining the prediction result.
The real target judgment unit respectively associates the current frame candidate target point with the track prediction result of the historical frame and the confirmation detection result of the historical frame by a nearest neighbor association method;
if the association fails, the candidate target point is a false alarm; if the association succeeds, the candidate target point is a real target, and the per-frame detection results are obtained.
The infrared dim point target detection system under a dynamic background provided by the invention further comprises a dead bright pixel removal module;
the dead bright pixel removal module removes the dead bright pixels in the infrared sequence images by the automatic detection algorithm.
The dead bright pixel removal module includes an offline-mode removal module, specifically comprising a suspected dead bright pixel set acquisition unit, a dead bright pixel set acquisition unit and a dead bright pixel update unit;
the suspected dead bright pixel set acquisition unit sorts the pixels of each frame of the infrared sequence image F_N(i, j) by gray value from large to small and records the coordinates of the first P (P > 100) pixels as the suspected dead bright pixel set;
the dead bright pixel set acquisition unit randomly selects M (M > 1000) frames from F_N(i, j) and takes the intersection S(i, j) of their suspected sets as the dead bright pixel set;
if the target track in the infrared sequence image is known, the target positions in the dead bright pixel set S(i, j) are removed accordingly;
if no target track is known a priori, the intersection of the dead bright pixel sets S(i, j) of several (> 3) infrared sequence images F_N(i, j) is taken to avoid mistaking a (bright) target position for a dead bright pixel;
the dead bright pixel update unit computes the gray mean of the neighborhood of each dead bright pixel in every frame of F_N(i, j), replaces the pixel with it, and updates F_N(i, j) to f(i, j).
The dead bright pixel removal module also includes an online-mode removal module, specifically comprising a threshold segmentation unit and a dead bright pixel deletion unit;
the threshold segmentation unit applies adaptive threshold segmentation to the M (M > 1000) frames of the first subsequence segment and takes the intersection to obtain the dead bright pixel set S1(i, j);
the dead bright pixel deletion unit deletes the dead bright pixels from the infrared sequence images;
in the dead bright pixel deletion unit:
according to the track-association-based target confirmation result (of the first subsequence segment), the dead bright pixels within a preset neighborhood radius of a target are deleted from the set S1(i, j); and, judged against S1(i, j), before track association of the n-th subsequence segment it is checked whether a detection lies near the neighborhood of a dead bright pixel; if so, the point is judged a dead bright pixel and removed from the detection result; where n ≠ 1.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
The above embodiments of the present invention should not be construed as limiting the scope of the present invention. Any other corresponding changes and modifications made according to the technical idea of the present invention should be included in the protection scope of the claims of the present invention.
Claims (10)
1. A method for detecting infrared dim point targets under a dynamic background, characterized by comprising the following steps:
S1, splitting an infrared sequence image under dynamic background change into subsequences, each with a stage-wise static background;
S2, suppressing background clutter in the subsequence images and enhancing the targets in them through a fully convolutional spatio-temporal fusion background suppression network;
S3, detecting targets in the background-suppressed images with an infrared point target detection network based on an improved YOLOv5;
S4, confirming the targets through a track association matching algorithm and removing false alarms to obtain the real targets;
the step S4 includes the following substeps:
S41, predicting the track of the candidate target points of the current frame to obtain the predicted track value of each candidate point in the next frame;
S42, associating the candidate target points of the current frame with the prediction results of the historical frames and the confirmed detection results of the historical frames by a nearest-neighbour association method;
if the association fails, the candidate target point is a false alarm;
if the association succeeds, the candidate target point is a real target, and the per-frame detection results are obtained.
2. The method for detecting infrared dim point targets under a dynamic background according to claim 1, wherein step S1 includes the following substeps:
S11, calculating the similarity between adjacent frames of the infrared sequence image to judge whether the infrared image sensor has moved, and recording the moment of movement if it has;
S12, removing the continuously changing frames produced while the infrared image sensor moves, and determining the start frame and end frame of each subsequence image;
S13, when the length of a subsequence is less than the split threshold ssth, keeping it in the previous subsequence instead of splitting it off, obtaining the subsequence split set D;
S14, splitting the infrared sequence image F_N(i, j) according to the subsequence split set D into subsequences F_N1(i, j), F_N2(i, j), F_N3(i, j), …, F_Nn(i, j).
3. The method for detecting infrared dim point targets under a dynamic background according to claim 2, wherein
step S11 includes the following substeps:
S110, a preprocessing step, setting the parameters: perceptual hash threshold pth, difference hash threshold dth and continuous change threshold sth;
S111, calculating the perceptual hash value and difference hash value of the current frame and the previous frame of the infrared sequence image F_N(i, j), and obtaining the perceptual hash difference p and difference hash difference d between the two frames;
where N is the sequence length of the infrared image sequence;
when the perceptual hash difference p is greater than the perceptual hash threshold pth and the difference hash difference d is greater than the difference hash threshold dth, it is judged that the infrared image sensor has moved, the frame number of the current frame is stored in a scene change set B, and B is sorted by time.
4. The method for detecting infrared dim point targets under a dynamic background according to claim 3, wherein step S2 includes the following substeps:
S20, splitting the subsequence image into T segments of t frames each;
S21, feeding each t-frame segment into the fully convolutional spatio-temporal fusion background suppression network as a t-channel image for training, obtaining a background prediction model;
S22, using the background prediction model to compute the background suppression component of each pixel of the subsequence images, obtaining output images with background and noise removed and the target enhanced.
5. The method for detecting infrared dim point targets under a dynamic background according to claim 4, wherein step S3 includes the following substeps:
S301, establishing an infrared point and spot target data set;
S302, directly reducing the number of layers, the number of residual structures and the number of receptive-field scales in the SPP of the YOLOv5 feature extraction network; introducing a self-attention mechanism by embedding a cascaded channel attention module and spatial attention module into the network; obtaining the improved YOLOv5 infrared point target feature extraction network;
S303, inputting the data set images of step S301 into the improved YOLOv5 feature extraction network to obtain feature maps at different scales;
S304, classifying the feature maps obtained in step S303 and performing bounding box regression to compute the loss;
S305, after training the improved YOLOv5 infrared point target detection network, testing it on the test split of the data set to detect infrared point targets and evaluate its detection performance.
6. The method for detecting infrared dim point targets under a dynamic background according to claim 5, wherein step S302 includes the following substeps:
S30201: inputting an image A of size H × W × 1 and outputting a feature Q1 of size H × W × 32 after Focus layer processing;
S30202: processing Q1 with a Conv layer to output a feature Q2 of size H/2 × W/2 × 64;
S30203: processing Q2 with a Bottleneck to output a feature Q3 of size H/2 × W/2 × 64;
S30204: processing Q3 with an Attention module to output a feature Q4 of size H/2 × W/2 × 64;
S30205: processing Q4 with a Conv layer to output a feature Q5 of size H/4 × W/4 × 128;
S30206: processing Q5 with 3 BottleneckCSP layers to output a feature Q6 of size H/4 × W/4 × 128;
S30207: processing Q6 with an Attention module to output a feature Q7 of size H/4 × W/4 × 128;
S30208: processing Q7 with a Conv layer to output a feature Q8 of size H/8 × W/8 × 256;
S30209: processing Q8 with an SPP layer to output a feature Q9 of size H/8 × W/8 × 256;
S30210: processing Q9 with 1 BottleneckCSP layer to output a feature Q10 of size H/8 × W/8 × 256;
S30211: processing Q10 with an Attention module to output a feature Q11 of size H/8 × W/8 × 256;
S30212: processing Q11 with 1 BottleneckCSP layer to output a feature Q12 of size H/8 × W/8 × 256, used as the final feature map for 30 × 30 pixel targets;
S30213: upsampling Q12, concatenating it with Q7, and processing with a Conv layer to output a feature Q13 of size H/4 × W/4 × 128;
S30214: processing Q13 with 1 BottleneckCSP layer to output a feature Q14 of size H/4 × W/4 × 128, used as the final feature map for 10 × 10 pixel targets;
S30215: upsampling Q14, concatenating it with Q4, and processing with a Conv layer to output a feature Q15 of size H/2 × W/2 × 64;
S30216: processing Q15 with 1 BottleneckCSP layer to output a feature Q16 of size H/2 × W/2 × 64, used as the final feature map for 5 × 5 pixel targets.
7. The method for detecting infrared dim point targets under a dynamic background according to claim 6, wherein the Attention module in step S3 includes a channel attention operation and a spatial attention operation;
the channel attention operation is performed first:
S3001: inputting a feature E of size H × W × C, performing average pooling over the spatial dimensions to output a feature E1 of size 1 × 1 × C, and performing max pooling over the spatial dimensions to output a feature E2 of size 1 × 1 × C;
S3002: passing the feature E1 through a fully connected layer and a ReLU activation to output a feature E1-1 of size 1 × 1 × C1, with C1 = H × W × C/16, and likewise passing the feature E2 to output a feature E2-1 of size 1 × 1 × C1, with C1 = H × W × C/16;
S3003: passing the feature E1-1 through a fully connected layer to output a feature E1-2 of size 1 × 1 × C, and likewise E2-1 to output a feature E2-2 of size 1 × 1 × C;
S3004: adding the features E1-2 and E2-2 and applying a Sigmoid function to output a feature E3 of size 1 × 1 × C; taking the Hadamard product with the feature E along the channel dimension to output a feature E4 of size H × W × C;
then performing the spatial attention operation on the feature E4:
S3005: inputting the feature E4, performing average pooling over the channel dimension to output a feature E5 of size H × W × 1, and max pooling over the channel dimension to output a feature E6 of size H × W × 1;
S3006: concatenating the features E5 and E6 and passing them through a convolutional layer and a Sigmoid function to output a feature E7 of size H × W × 1;
S3007: taking the Hadamard product of the features E7 and E4 along the spatial dimensions to output the final feature map E8 of size H × W × C.
8. The method for detecting infrared dim point targets under a dynamic background according to claim 7, wherein step S41 includes the following substeps:
S411, obtaining the centroid position (x, y) of each candidate target point in each frame's detection result;
S412, estimating the candidate target points with a probabilistic data association algorithm to obtain the predicted track value of each candidate point in the next frame;
S413, computing the centroid position (x', y') of each candidate point in the next frame from the predicted value with an unscented Kalman particle filter, obtaining the prediction result.
9. The method for detecting infrared dark and weak point targets under a dynamic background according to claim 8, further comprising a step of removing bright spots: removing bad bright spots in the infrared sequence images through an automatic detection algorithm;
the bright spot removing step includes a bright spot removing step in an off-line mode, and the bright spot removing step in the off-line mode is performed before the step S1, and specifically includes the following sub-steps:
for the infrared sequence image F N And (i, j) sorting the pixel points of each frame according to the gray values from large to small, and recording the coordinate positions of the first P pixel points as a suspected bad bright point set, wherein P is>100;
In the infrared sequence image F N Randomly selecting M frames of images from (i, j), and solving an intersection S (i, j) of the M frames of images and the suspected bad light point set as a bad light point set, M>1000;
If the target track in the infrared sequence image is known, the target positions in the bad bright point set S (i, j) are removed accordingly;
if no target track is priori available, L infrared sequence images F are needed N Taking the intersection set S (i, j) of the bad lighting point set (i, j) and L>3;
Calculating each frame of infrared sequence image F N Replacing the bad bright points by the gray average value of the neighborhood of the bad bright points in (i, j), and carrying out image F on the infrared sequence N (i, j) is updated to f (i, j);
the bright spot removing step further includes a bright spot removing step in an online mode, where the bright spot removing step in the online mode is performed between the step S3 and the step S4, and specifically includes the following sub-steps:
performing adaptive threshold segmentation on M frames of the first subsequence image, and taking the intersection to obtain a bad bright point set S_1(i, j), where M > 1000;
according to the target confirmation result based on track association matching, deleting from the bad bright point set S_1(i, j) the bad bright points within a preset neighborhood radius of the target;
using the bad bright point set S_1(i, j), before track association of the n-th subsequence, judging whether a detection result of the sequence lies within the neighborhood of a bad bright point; if so, judging that point to be a bad bright point and removing it from the detection result;
wherein n ≠ 1.
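The online mode can be sketched in the same vein; here "adaptive threshold segmentation" is realized as a mean-plus-k-sigma threshold per frame, where the factor k and the neighborhood radius are assumptions:

```python
import numpy as np

def online_bright_point_set(subseq: np.ndarray, k: float = 3.0) -> np.ndarray:
    """subseq: (M, H, W) frames of the first subsequence image -> S_1 mask."""
    mu = subseq.mean(axis=(1, 2), keepdims=True)
    sigma = subseq.std(axis=(1, 2), keepdims=True)
    masks = subseq > mu + k * sigma          # per-frame segmentation
    return masks.all(axis=0)                 # intersection over the M frames

def drop_near_bright_points(detections, bad_mask: np.ndarray, radius: int = 2):
    """Remove detections lying within `radius` of any bad bright point."""
    bad = np.argwhere(bad_mask)              # (K, 2) bad point coordinates
    keep = []
    for (i, j) in detections:
        if bad.size == 0 or np.abs(bad - (i, j)).max(axis=1).min() > radius:
            keep.append((i, j))
    return keep
```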
10. An infrared dark and weak point target detection system under a dynamic background is characterized by comprising an image subsequence splitting module, a background suppression module, a target detection module and a target confirmation module;
the image subsequence splitting module is used for performing subsequence splitting on the infrared sequence image under the dynamic background change;
the background suppression module is used for performing background clutter suppression on the subsequence image and enhancing a target in the subsequence image through a fully convolutional space-time fusion background suppression network;
the target detection module is used for detecting the target from the background-suppressed image according to an infrared point target detection network based on improved Yolov5;
the target confirmation module is used for confirming the target through a track association matching algorithm and removing false alarms among the targets to obtain the real targets;
the target validation module comprises: a candidate target point prediction unit and a real target determination unit;
the candidate target point prediction unit is used for predicting the track of the candidate target point in the current frame to obtain the predicted value of the candidate target point of the next frame;
the real target judgment unit associates the current-frame candidate target point with the track prediction results of historical frames and with the confirmed detection results of historical frames, respectively, by a nearest-neighbor association method;
if the association fails, the candidate target point is a false alarm; if the association succeeds, the candidate target point is a real target; the detection result of each frame is thereby obtained.
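A minimal sketch of the real target judgment unit follows; for brevity it pools the historical track predictions and confirmed detections into one reference set, whereas the claim associates against each separately, and the gate distance is an assumption:

```python
import numpy as np

def confirm_targets(candidates, predictions, confirmed, gate: float = 3.0):
    """Nearest-neighbor association; failed associations are false alarms."""
    refs = np.array(list(predictions) + list(confirmed))  # (K, 2) references
    real = []
    for c in candidates:
        if refs.size and np.linalg.norm(refs - np.asarray(c), axis=1).min() <= gate:
            real.append(tuple(c))     # association succeeded: real target
        # else: association failed -> false alarm, dropped
    return real

# One candidate near a predicted track is kept; the isolated one is dropped.
print(confirm_targets([(10.5, 20.2), (80.0, 5.0)],
                      predictions=[(10.0, 20.0)], confirmed=[(9.5, 19.8)]))
```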
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211118428.7A CN115205655B (en) | 2022-09-15 | 2022-09-15 | Infrared dark spot target detection system under dynamic background and detection method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115205655A CN115205655A (en) | 2022-10-18 |
CN115205655B true CN115205655B (en) | 2022-12-09 |
Family
ID=83572049
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211118428.7A Active CN115205655B (en) | 2022-09-15 | 2022-09-15 | Infrared dark spot target detection system under dynamic background and detection method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115205655B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116665015B (en) * | 2023-06-26 | 2024-04-02 | 中国科学院长春光学精密机械与物理研究所 | Method for detecting dim and small targets in infrared sequence image based on YOLOv5 |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117557789B (en) * | 2024-01-12 | 2024-04-09 | 国研软件股份有限公司 | Intelligent detection method and system for offshore targets |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106952286A (en) * | 2017-03-21 | 2017-07-14 | 中国人民解放军火箭军工程大学 | Dynamic background Target Segmentation method based on motion notable figure and light stream vector analysis |
CN107392885A (en) * | 2017-06-08 | 2017-11-24 | 江苏科技大学 | A kind of method for detecting infrared puniness target of view-based access control model contrast mechanism |
CN108182690A (en) * | 2017-12-29 | 2018-06-19 | 中国人民解放军63861部队 | A kind of infrared Weak target detecting method based on prospect weighting local contrast |
CN109003277A (en) * | 2017-06-07 | 2018-12-14 | 中国航空工业集团公司洛阳电光设备研究所 | A kind of infrared small target in complex background detection method and device |
CN114972423A (en) * | 2022-05-17 | 2022-08-30 | 中国电子科技集团公司第十研究所 | Aerial video moving target detection method and system |
CN114998566A (en) * | 2022-05-09 | 2022-09-02 | 中北大学 | Interpretable multi-scale infrared small and weak target detection network design method |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105894532A (en) * | 2015-07-27 | 2016-08-24 | 广东东软学院 | Sea surface monitoring image dim target detection method and device |
CN108470350B (en) * | 2018-02-26 | 2021-08-24 | 阿博茨德(北京)科技有限公司 | Broken line dividing method and device in broken line graph |
CN109272509B (en) * | 2018-09-06 | 2021-10-29 | 郑州云海信息技术有限公司 | Target detection method, device and equipment for continuous images and storage medium |
CN110400294B (en) * | 2019-07-18 | 2023-02-07 | 湖南宏动光电有限公司 | Infrared target detection system and detection method |
CN112418200B (en) * | 2021-01-25 | 2021-04-02 | 成都点泽智能科技有限公司 | Object detection method and device based on thermal imaging and server |
CN113963421B (en) * | 2021-11-16 | 2023-04-07 | 南京工程学院 | Dynamic sequence unconstrained expression recognition method based on hybrid feature enhanced network |
CN114648547B (en) * | 2022-03-09 | 2023-06-27 | 中国空气动力研究与发展中心计算空气动力研究所 | Weak and small target detection method and device for anti-unmanned aerial vehicle infrared detection system |
CN114882237A (en) * | 2022-04-11 | 2022-08-09 | 石河子大学 | Target detection method based on space attention and channel attention |
CN114998736B (en) * | 2022-06-07 | 2024-08-13 | 中国人民解放军国防科技大学 | Infrared dim target detection method, device, computer equipment and storage medium |
CN114998711A (en) * | 2022-06-21 | 2022-09-02 | 西安中科立德红外科技有限公司 | Method and system for detecting aerial infrared small and weak target and computer storage medium |
CN115035378A (en) * | 2022-08-09 | 2022-09-09 | 中国空气动力研究与发展中心计算空气动力研究所 | Method and device for detecting infrared dim target based on time-space domain feature fusion |
Also Published As
Publication number | Publication date |
---|---|
CN115205655A (en) | 2022-10-18 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||