CN114463619A - Infrared dim target detection method based on integrated fusion features - Google Patents

Infrared dim target detection method based on integrated fusion features

Info

Publication number
CN114463619A
Authority
CN
China
Legal status
Granted
Application number
CN202210377446.0A
Other languages
Chinese (zh)
Other versions
CN114463619B (en)
Inventor
陈振国
聂青凤
掲斐然
李国强
翟正军
万锦锦
Current Assignee
Northwestern Polytechnical University
Luoyang Institute of Electro Optical Equipment AVIC
Original Assignee
Northwestern Polytechnical University
Luoyang Institute of Electro Optical Equipment AVIC
Priority date: 2022-04-12
Filing date: 2022-04-12
Publication date: 2022-05-10
Application filed by Northwestern Polytechnical University and Luoyang Institute of Electro Optical Equipment AVIC
Priority to CN202210377446.0A
Publication of CN114463619A
Application granted
Publication of CN114463619B
Expired - Fee Related
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/28 Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an infrared dim and small target detection method based on integrated fusion features, relating to the field of machine learning. The method acquires initial images of infrared dim and small targets as a training set and trains a classifier to obtain a trained model; acquires an image to be detected and filters it with a high-pass filter; performs constant-false-alarm threshold segmentation on the filtered image; labels candidate target regions in the segmented binary image and computes their center coordinates; extracts an image block from the image to be detected at each candidate center; extracts feature parameters from each image block; and classifies those feature parameters with the trained model to obtain and output the target's center coordinates, completing detection. The fused features have stronger discriminative power, improving classification accuracy and accelerating convergence, which in turn reduces the number of classifier parameters; the method also adapts well to complex application scenes and is convenient for engineering use.

Description

Infrared dim target detection method based on integrated fusion features
Technical Field
The invention relates to the field of machine learning, in particular to an infrared dim and small target detection method based on integrated fusion features.
Background
Infrared dim and small target detection is one of the core technologies of airborne electro-optical systems and a basic prerequisite for target surveillance, reconnaissance, and precision strike. As the technology advances, requirements on photoelectric detection range grow ever more demanding. At long range the infrared target images at a very small size, can be hard to distinguish even by the human eye, and is corrupted by clutter in complex air, ground, and sea scenes, making small targets difficult to detect accurately.
Existing infrared dim and small target detection techniques divide, by technical route, into methods based on multi-frame and on single-frame images. Multi-frame methods exploit the temporal and spatial characteristics of the input video sequence to extract the target's motion features and thereby achieve high-precision detection; they require joint processing of many frames, but in practice the search and scan motion of the electro-optical system causes severe scene changes between consecutive frames, so inter-frame correlation is hard to exploit and detection degrades. Among single-frame methods: saliency-based target detection struggles to adapt to the variety of complex scenes met in open environments, since one parameter set fits only individual scenes and switching to another application background requires re-tuning; and most current machine-learning single-frame detectors need complex feature extraction or detection models, which are hard to run in real time in engineering applications.
Disclosure of Invention
Aiming at the above shortcomings of the prior art, the infrared dim and small target detection method based on integrated fusion features solves the problems that existing methods either adapt poorly to complex application scenes or are too computationally complex for convenient engineering application.
In order to achieve the above purpose, the invention adopts the following technical scheme:
The infrared dim and small target detection method based on integrated fusion features comprises the following steps:
S1, acquiring initial images of infrared dim and small targets as a training set, and constructing dictionary filters to perform multi-scale central dictionary feature extraction on the training set;
S2, establishing a classifier based on the multi-scale central dictionary features to obtain a trained model;
S3, acquiring an image to be detected and filtering it with a high-pass filter to obtain a filtered image;
S4, performing constant-false-alarm threshold segmentation on the filtered image to obtain a segmented binary image;
S5, labeling candidate target regions in the segmented binary image and computing the center coordinates of the candidate target regions;
S6, extracting image blocks from the image to be detected at the center coordinates of each candidate target;
S7, extracting feature parameters from each image block to be detected;
S8, classifying the feature parameters of the image blocks to be detected with the trained model to obtain and output the target's center coordinates, completing target detection.
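Steps S3 to S8 form the inference chain. The sketch below is one rough, hypothetical arrangement of that chain in Python: the high-pass kernel, the default false-alarm value, and all helper names (detect_targets, the classify callback, and extract_features, which is sketched after step S1-3 below) are illustrative assumptions, not taken from the patent.

```python
import numpy as np
from scipy import ndimage
from scipy.stats import norm

def detect_targets(image, classify, filters, p_fa=1e-3, patch=19):
    """Hypothetical sketch of steps S3-S8; classify maps a feature vector
    to +1/-1, filters is the 15-filter dictionary bank of step S1-3."""
    img = image.astype(float)
    # S3: high-pass filtering (center minus local mean; kernel is assumed)
    k = -np.ones((5, 5)) / 25.0
    k[2, 2] += 1.0
    filtered = ndimage.convolve(img, k, mode="reflect")
    # S4: constant-false-alarm threshold segmentation (assumed Gaussian rule)
    K = filtered.mean() + filtered.std() * norm.ppf(1.0 - p_fa)
    binary = (filtered > K).astype(np.uint8)
    # S5: connected candidate regions and their center coordinates
    labels, n = ndimage.label(binary)
    centres = ndimage.center_of_mass(binary, labels, range(1, n + 1))
    targets = []
    h = patch // 2
    for cy, cx in centres:
        r, c = int(round(cy)), int(round(cx))
        # S6: take a patch x patch image block around each candidate center
        block = img[r - h:r + h + 1, c - h:c + h + 1]
        if block.shape != (patch, patch):
            continue  # candidate too close to the image border
        # S7-S8: extract the feature column vector and classify it
        if classify(extract_features(block, filters)) > 0:
            targets.append((r, c))
    return targets
```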
Further, the specific method of step S1 is:
S1-1, acquiring initial images of infrared dim and small targets as a training set, labeling candidate target regions in the training images, and computing the center coordinates of the candidate target regions;
S1-2, extracting a 19 × 19 sub-image block centered at each candidate target's center coordinates;
S1-3, constructing dictionary filters, convolving the sub-image blocks to obtain their feature maps, stretching all feature maps into vectors, and concatenating them into a feature column vector, which completes multi-scale central dictionary feature extraction.
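Concretely, S1-3 amounts to running each 19 × 19 block through the filter bank and concatenating the flattened response maps. A minimal sketch, assuming 'valid' convolution (the choice that matches the 2335-dimensional feature reported in the embodiment below):

```python
import numpy as np
from scipy.signal import correlate2d

def extract_features(block, filters):
    """S1-3: convolve a 19x19 block with each dictionary filter and stack
    the flattened 'valid' response maps into one feature column vector."""
    maps = [correlate2d(block.astype(float), f, mode="valid") for f in filters]
    return np.concatenate([m.ravel() for m in maps])
```

With the 15 filters of the next step this yields 3 × 17² + 3 × 15² + 3 × 13² + 11² + 9² + 7² + 5² + 3² + 1² = 2335 dimensions.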
Further, the specific process of constructing the dictionary filters in step S1-3 is:
S1-3-1, for each 19 × 19 image block, taking 9 sub-image blocks of different sizes centered at pixel coordinates (10, 10);
the sizes of the 9 sub-image blocks are 3 × 3, 5 × 5, 7 × 7, 9 × 9, 11 × 11, 13 × 13, 15 × 15, 17 × 17, and 19 × 19;
S1-3-2, clustering the 3 × 3 sub-image blocks to obtain 3 dictionary filters;
S1-3-3, clustering the 5 × 5 sub-image blocks to obtain 3 dictionary filters;
S1-3-4, clustering the 7 × 7 sub-image blocks to obtain 3 dictionary filters;
S1-3-5, clustering the 9 × 9, 11 × 11, 13 × 13, 15 × 15, 17 × 17, and 19 × 19 sub-image blocks to obtain 1 dictionary filter each, giving 15 dictionary filters in total.
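The patent does not name the clustering algorithm; assuming k-means over the centered sub-blocks of all training blocks, steps S1-3-1 to S1-3-5 could be sketched as follows (build_dictionary and the scale table are illustrative names):

```python
import numpy as np
from sklearn.cluster import KMeans

# (sub-block size, number of cluster centers) per S1-3-2 .. S1-3-5
SCALES = [(3, 3), (5, 3), (7, 3), (9, 1), (11, 1), (13, 1),
          (15, 1), (17, 1), (19, 1)]

def build_dictionary(blocks):
    """blocks: (N, 19, 19) training image blocks. Returns the 15 dictionary
    filters as cluster centers of sub-blocks centered at pixel (10, 10)."""
    filters = []
    for size, n_clusters in SCALES:
        h = size // 2
        subs = blocks[:, 9 - h:10 + h, 9 - h:10 + h]  # (10,10) 1-based center
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
        km.fit(subs.reshape(len(subs), -1))
        filters.extend(c.reshape(size, size) for c in km.cluster_centers_)
    return filters
```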
Further, the specific process of step S2 is (formulas in this text are rendered only as images in the source and are marked [formula image] below):
S2-1, denoting the multi-scale central dictionary features of all sub-image blocks in the training set as [formula image], where [formula image] is the feature column vector of the i-th sub-image block, [formula image] is the label of the i-th sub-image block (+1 denotes a positive sample, -1 a negative sample), and m is the total number of sub-image blocks in the training set;
S2-2, initializing the weight of the i-th sub-image block according to the calculation formula [formula image];
S2-3, normalizing the weights;
S2-4, computing from the normalized weights the feature row vector of the i-th sub-image block, and computing the feature parameters from the feature column vector and feature row vector of the i-th sub-image block;
S2-5, constructing a weak classifier and computing from it the scores of the feature parameters;
S2-6, constructing a classifier based on the ratio of the number of negative samples to the number of positive samples and on the sign function of the features;
S2-7, updating the weights of the sub-image blocks according to the scores;
S2-8, repeating steps S2-3 to S2-7 with the updated weights for T iterations to obtain a strong classifier integrating the weak classifiers, i.e. the trained model.
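The update formulas of S2-2 to S2-8 are images in this text, but the loop structure is that of a boosted ensemble. The skeleton below is a schematic reading of it: the initialization and reweighting shown are generic AdaBoost-style placeholders, not the patent's exact expressions, and fit_row_vector / fit_stump stand in for the S2-4 and S2-5 solvers.

```python
import numpy as np

def train_model(X, y, T, fit_row_vector, fit_stump):
    """X: (m, d) feature column vectors, one row per sub-image block;
    y: labels in {+1, -1}. Returns the (row vector, weak classifier)
    pairs that the strong classifier of S2-8 aggregates."""
    m = len(y)
    w = np.full(m, 1.0 / m)              # S2-2: initial weights (form assumed)
    ensemble = []
    for t in range(T):
        w /= w.sum()                     # S2-3: normalize the weights
        beta = fit_row_vector(X, y, w)   # S2-4-1: sparse feature row vector
        theta = X @ beta                 # S2-4-2: fused feature parameters
        h = fit_stump(theta, y, w)       # S2-5-1: weak classifier on theta
        scores = h(theta)                # S2-5-2: per-block scores
        w *= np.exp(-y * scores)         # S2-7: reweight (generic form)
        ensemble.append((beta, h))
    return ensemble
```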
Further, the specific process of step S2-4 is:
S2-4-1, obtaining the feature row vector [formula image] of the i-th sub-image block in the t-th round of iteration according to the formula [formula image]; the formula involves a minimum-seeking function, the normalized weights, an intercept, a hyper-parameter controlling the norm constraint, an intermediate expression, a hyper-parameter balancing the weights of the two different norm constraints, a two-norm, and a zero-norm;
S2-4-2, obtaining the feature parameter [formula image] of the i-th sub-image block according to the formula [formula image].
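The S2-4-1 objective mixes a weighted data-fit term with two-norm and zero-norm regularization. The zero-norm is non-smooth, so one plausible solver (and one consistent with the gradient-descent iteration in the embodiment below) is gradient descent on the smooth part with hard thresholding as the zero-norm surrogate, i.e. iterative hard thresholding. Everything here (objective form, step size, thresholds) is an assumption built from the described ingredients, not the patent's exact formula:

```python
import numpy as np

def fit_row_vector(X, y, w, lam=0.1, gamma=0.6, lr=0.01, iters=500):
    """Hypothetical S2-4-1 solver: weighted least squares with an l2 penalty
    and hard thresholding as a zero-norm surrogate. beta is the feature row
    vector, b the intercept; gamma balances the two norm constraints."""
    beta, b = np.zeros(X.shape[1]), 0.0
    for _ in range(iters):
        r = X @ beta + b - y                         # weighted-fit residuals
        beta -= lr * (X.T @ (w * r) + lam * (1 - gamma) * beta)
        b -= lr * np.sum(w * r)
        beta[np.abs(beta) < lr * lam * gamma] = 0.0  # zero-norm surrogate
    return beta
```

The feature parameter of S2-4-2 is then the scalar product of this row vector with each block's feature column vector, i.e. `theta = X @ beta`.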
Further, the specific process of step S2-5 is:
S2-5-1, obtaining the weak classifier [formula image] of the feature row vector in the t-th round of iteration according to the formula [formula image]; the formula involves two parameters to be solved and a sign function;
S2-5-2, obtaining the score [formula image] of the i-th sub-image block according to the formula [formula image], where [formula image] is the u-th weak classifier.
Further, the specific process of updating the weights in step S2-7 is: obtaining the updated weights [formula image] according to the formula [formula image], where e is a constant.
Further, the specific process of obtaining the strong classifier integrating the weak classifiers in step S2-8 is: obtaining the strong classifier [formula image] according to the formula [formula image], where x is the feature column vector of the image to be detected, composed of the feature column vectors of the sub-image blocks; the formula further involves an intermediate parameter, T is the number of iterative updates, r is the ratio of the number of negative samples to the number of positive samples, and ln is the logarithm to the natural base e.
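The r and ln ingredients suggest a class-imbalance offset of the kind used in asymmetric boosting. Purely as a schematic reading (the actual expression is an image), the decision function might take this shape:

```python
import numpy as np

def strong_classify(x, ensemble, r):
    """Schematic S2-8 strong classifier: sign of the accumulated weak scores
    offset by ln(r), r being the negative-to-positive sample ratio. The
    patent's exact formula is rendered only as an image."""
    total = sum(h(np.dot(beta, x)) for beta, h in ensemble)
    return np.sign(total + np.log(r))
```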
Further, the specific method of step S4 is:
S4-1, setting the false-alarm parameter and computing the mean and variance of the filtered image;
S4-2, obtaining the segmentation threshold K from the false-alarm parameter, the mean, and the variance according to the formula [formula image], where [formula image] is the mean of the filtered image, [formula image] is the variance of the filtered image, [formula image] is a normal distribution function, and [formula image] is the false-alarm parameter;
S4-3, setting pixels of the filtered image above the segmentation threshold to 1 and pixels below it to 0, obtaining the segmented binary image.
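The described ingredients of the threshold formula (mean, variance, a normal distribution function, and a false-alarm parameter) match the standard Gaussian CFAR rule K = μ + σ · Φ⁻¹(1 − P_fa). A sketch under that assumption:

```python
import numpy as np
from scipy.stats import norm

def cfar_segment(filtered, p_fa):
    """S4: threshold at K = mean + std * Phi^{-1}(1 - p_fa) (assumed form)
    and binarize the filtered image."""
    K = filtered.mean() + filtered.std() * norm.ppf(1.0 - p_fa)
    return (filtered > K).astype(np.uint8)
```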
Further, the specific method for labeling candidate target regions in the segmented binary image in step S5 is: searching the segmented binary image for pixels equal to 1 and labeling each connected component formed by such pixels as a candidate target region, where the pixel sum of a candidate target region is greater than or equal to 3.
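With SciPy's labeling utilities, step S5 reduces to a few lines (the connectivity choice is not specified in the patent; ndimage.label defaults to 4-connectivity):

```python
import numpy as np
from scipy import ndimage

def candidate_centres(binary):
    """S5: label connected regions of 1-pixels, discard regions whose pixel
    sum is below 3, and return the center coordinates of the survivors."""
    labels, n = ndimage.label(binary)
    idx = range(1, n + 1)
    sizes = ndimage.sum(binary, labels, idx)
    centres = ndimage.center_of_mass(binary, labels, idx)
    return [(round(r), round(c))
            for (r, c), s in zip(centres, sizes) if s >= 3]
```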
The invention has the beneficial effects that: the method designs multi-scale central dictionary features that cover targets of various sizes, with a purpose-built central dictionary that improves the descriptive power of the target features. When training the classifier, a feature row vector [formula image] is introduced to fuse the features linearly, and the fused features are assembled into a simple classifier by ensemble learning. The fused features have stronger discriminative power, improving classification accuracy and accelerating convergence, thereby reducing the number of classifier parameters. The method is also sufficiently adaptable to complex application scenes and convenient for engineering application.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 shows the sub-image blocks;
FIG. 3 shows an initial image;
fig. 4 shows a binary image.
Detailed Description
The following description of the embodiments of the invention is provided to facilitate understanding by those skilled in the art. It should be understood, however, that the invention is not limited to the scope of the embodiments: to those skilled in the art, various changes within the spirit and scope of the invention as defined by the appended claims are apparent, and all inventions making use of the inventive concept are protected.
As shown in fig. 1, the infrared dim and small target detection method based on integrated fusion features comprises steps S1 to S8 as set forth above; the specific implementation of each step and sub-step is identical to that given in the Disclosure of Invention and is not repeated here.
Fig. 2 shows the sub-image blocks corresponding to step S1-2; fig. 3 shows the initial image corresponding to step S1; fig. 4 shows the binary image corresponding to step S4.
In one embodiment of the present invention, the multi-scale central dictionary feature [formula image] over the 9 sub-image-block scales has a corresponding feature row vector of dimension 2335.
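This dimension is consistent with valid convolutions of the 15 dictionary filters over a 19 × 19 block, as a two-line check confirms:

```python
# 3 filters each at sizes 3/5/7, 1 each at 9..19; valid output is (19-k+1)^2
sizes = [3] * 3 + [5] * 3 + [7] * 3 + [9, 11, 13, 15, 17, 19]
print(sum((19 - k + 1) ** 2 for k in sizes))  # -> 2335
```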
For the classifier parameter [formula image] of [formula image], the solution is converted into a general optimization problem, and the iterative solution process is as follows:
1) setting [formula image] so that the zero-norm constraint carries a slightly higher proportion than the two-norm constraint; the solved vector [formula image] then contains fewer non-zero elements, which helps reduce the computational complexity;
2) setting [formula image], where [formula image] denotes the j-th component of the feature column vector of the i-th sub-image block, as the average of the maximum and minimum values, so that each solved [formula image], even if not optimal, does not perform poorly;
3) solving by gradient descent, with iterations [formula images] running over the j-th and k-th components of the vectors involved;
4) when the total error of the gradient descent has converged, the solution [formula image] is obtained.
For the classifier sign parameters [formula image] and [formula image] of [formula image], the expressions are [formula images].
For [formula image], the number of iterative updates T of the classifier is determined as follows: denote the classification error of the R-th round as [formula image], where [formula image]; if [formula image] does not change over 20 consecutive rounds, the model has converged and training can be stopped, setting T = R, the round index at the current cutoff. In practical application, a larger value can be set first and training stopped early according to the convergence behavior.
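As a small illustration of this stopping rule (the helper name and tolerance handling are assumptions):

```python
def choose_T(errors, window=20, tol=0.0):
    """Return the cutoff round T: the first round R at which the recorded
    classification error has not changed over the previous `window` rounds."""
    for R in range(window, len(errors) + 1):
        recent = errors[R - window:R]
        if max(recent) - min(recent) <= tol:
            return R
    return len(errors)  # no convergence within the recorded rounds
```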
The method was tested on a data set of 16492 candidate targets (4039 targets and 12453 non-targets) extracted from twenty thousand aerial infrared images, reaching a classification accuracy of 97.6%. In the invention, the number of filters per scale in the dictionary set may be set slightly differently depending on the target sizes of the application scene: if the targets are larger, the number of large-size filters is increased and the number of small-size filters is reduced.
The method designs multi-scale central dictionary features that cover targets of various sizes, with a purpose-built central dictionary that improves the descriptive power of the target features. When training the classifier, a feature row vector [formula image] is introduced to fuse the features linearly, and the fused features are assembled into a simple classifier through ensemble learning. The fused features have stronger discriminative power, improving classification accuracy and accelerating convergence, thereby reducing the number of classifier parameters.

Claims (10)

1. An infrared dim and small target detection method based on integrated fusion features, characterized by comprising the following steps:
S1, acquiring initial images of infrared dim and small targets as a training set, and constructing dictionary filters to perform multi-scale central dictionary feature extraction on the training set;
S2, establishing a classifier based on the multi-scale central dictionary features to obtain a trained model;
S3, acquiring an image to be detected and filtering it with a high-pass filter to obtain a filtered image;
S4, performing constant-false-alarm threshold segmentation on the filtered image to obtain a segmented binary image;
S5, labeling candidate target regions in the segmented binary image and computing the center coordinates of the candidate target regions;
S6, extracting image blocks from the image to be detected at the center coordinates of each candidate target;
S7, extracting feature parameters from each image block to be detected;
S8, classifying the feature parameters of the image blocks to be detected with the trained model to obtain and output the target's center coordinates, completing target detection.
2. The infrared dim and small target detection method based on integrated fusion features according to claim 1, characterized in that the specific method of step S1 is:
S1-1, acquiring initial images of infrared dim and small targets as a training set, labeling candidate target regions in the training images, and computing the center coordinates of the candidate target regions;
S1-2, extracting a 19 × 19 sub-image block centered at each candidate target's center coordinates;
S1-3, constructing dictionary filters, convolving the sub-image blocks to obtain their feature maps, stretching all feature maps into vectors, and concatenating them into a feature column vector, which completes multi-scale central dictionary feature extraction.
3. The infrared dim and small target detection method based on integrated fusion features according to claim 2, characterized in that the specific process of constructing the dictionary filters in step S1-3 is:
S1-3-1, for each 19 × 19 image block, taking 9 sub-image blocks of different sizes centered at pixel coordinates (10, 10);
the sizes of the 9 sub-image blocks are 3 × 3, 5 × 5, 7 × 7, 9 × 9, 11 × 11, 13 × 13, 15 × 15, 17 × 17, and 19 × 19;
S1-3-2, clustering the 3 × 3 sub-image blocks to obtain 3 dictionary filters;
S1-3-3, clustering the 5 × 5 sub-image blocks to obtain 3 dictionary filters;
S1-3-4, clustering the 7 × 7 sub-image blocks to obtain 3 dictionary filters;
S1-3-5, clustering the 9 × 9, 11 × 11, 13 × 13, 15 × 15, 17 × 17, and 19 × 19 sub-image blocks to obtain 1 dictionary filter each, giving 15 dictionary filters in total.
4. The infrared dim and small target detection method based on integrated fusion features according to claim 2, characterized in that the specific process of step S2 is:
S2-1, denoting the multi-scale central dictionary features of all sub-image blocks in the training set as [formula image], where [formula image] is the feature column vector of the i-th sub-image block, [formula image] is the label of the i-th sub-image block (+1 denotes a positive sample, -1 a negative sample), and m is the total number of sub-image blocks in the training set;
S2-2, initializing the weight of the i-th sub-image block according to the calculation formula [formula image];
S2-3, normalizing the weights;
S2-4, computing from the normalized weights the feature row vector of the i-th sub-image block, and computing the feature parameters from the feature column vector and feature row vector of the i-th sub-image block;
S2-5, constructing a weak classifier and computing from it the scores of the feature parameters;
S2-6, constructing a classifier based on the ratio of the number of negative samples to the number of positive samples and on the sign function of the features;
S2-7, updating the weights of the sub-image blocks according to the scores;
S2-8, repeating steps S2-3 to S2-7 with the updated weights for T iterations to obtain a strong classifier integrating the weak classifiers, i.e. the trained model.
5. The infrared dim and small target detection method based on integrated fusion features according to claim 4, characterized in that the specific process of step S2-4 is:
S2-4-1, obtaining the feature row vector [formula image] of the i-th sub-image block in the t-th round of iteration according to the formula [formula image]; the formula involves a minimum-seeking function, the normalized weights, an intercept, a hyper-parameter controlling the norm constraint, an intermediate expression, a hyper-parameter balancing the weights of the two different norm constraints, a two-norm, and a zero-norm;
S2-4-2, obtaining the feature parameter [formula image] of the i-th sub-image block according to the formula [formula image].
6. The infrared dim and small target detection method based on integrated fusion features according to claim 5, characterized in that the specific process of step S2-5 is:
S2-5-1, obtaining the weak classifier [formula image] of the feature row vector in the t-th round of iteration according to the formula [formula image]; the formula involves two parameters to be solved and a sign function;
S2-5-2, obtaining the score [formula image] of the i-th sub-image block according to the formula [formula image], where [formula image] is the u-th weak classifier.
7. The infrared dim and small target detection method based on integrated fusion features according to claim 6, characterized in that the specific process of updating the weights in step S2-7 is: obtaining the updated weights [formula image] according to the formula [formula image], where e is a constant.
8. The infrared dim and small target detection method based on integrated fusion features according to claim 7, characterized in that the specific process of obtaining the strong classifier integrating the weak classifiers in step S2-8 is: obtaining the strong classifier [formula image] according to the formula [formula image], where x is the feature column vector of the image to be detected, composed of the feature column vectors of the sub-image blocks; the formula further involves an intermediate parameter, T is the number of iterative updates, r is the ratio of the number of negative samples to the number of positive samples, and ln is the logarithm to the natural base e.
9. The infrared dim and small target detection method based on integrated fusion features according to claim 1, characterized in that the specific method of step S4 is:
S4-1, setting the false-alarm parameter and computing the mean and variance of the filtered image;
S4-2, obtaining the segmentation threshold K from the false-alarm parameter, the mean, and the variance according to the formula [formula image], where [formula image] is the mean of the filtered image, [formula image] is the variance of the filtered image, [formula image] is a normal distribution function, and [formula image] is the false-alarm parameter;
S4-3, setting pixels of the filtered image above the segmentation threshold to 1 and pixels below it to 0, obtaining the segmented binary image.
10. The infrared dim and small target detection method based on integrated fusion features according to claim 2, characterized in that the specific method for labeling candidate target regions in the segmented binary image in step S5 is: searching the segmented binary image for pixels equal to 1 and labeling each connected component formed by such pixels as a candidate target region, where the pixel sum of a candidate target region is greater than or equal to 3.
CN202210377446.0A (priority date 2022-04-12, filed 2022-04-12): Infrared dim target detection method based on integrated fusion features; granted as CN114463619B (Expired - Fee Related)

Priority Applications (1)

CN202210377446.0A, priority date 2022-04-12, filed 2022-04-12: Infrared dim target detection method based on integrated fusion features

Publications (2)

CN114463619A, published 2022-05-10
CN114463619B, granted 2022-07-08

Family ID: 81417687 (CN)



Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004133629A (en) * 2002-10-09 2004-04-30 Ricoh Co Ltd Dictionary preparation device for detecting specific mark, specific mark detection device, specific mark recognition device, and program and recording medium
CN102842047A (en) * 2012-09-10 2012-12-26 重庆大学 Infrared small and weak target detection method based on multi-scale sparse dictionary
CN104899567A (en) * 2015-06-05 2015-09-09 重庆大学 Small weak moving target tracking method based on sparse representation
CN105513076A (en) * 2015-12-10 2016-04-20 南京理工大学 Weak object constant false alarm detection method based on object coordinate distribution features
CN106709512A (en) * 2016-12-09 2017-05-24 河海大学 Infrared target detection method based on local sparse representation and contrast
CN107274410A (en) * 2017-07-02 2017-10-20 中国航空工业集团公司雷华电子技术研究所 Adaptive man-made target constant false alarm rate detection method
US20190095739A1 (en) * 2017-09-27 2019-03-28 Harbin Institute Of Technology Adaptive Auto Meter Detection Method based on Character Segmentation and Cascade Classifier
CN108304873A (en) * 2018-01-30 2018-07-20 深圳市国脉畅行科技股份有限公司 Object detection method based on high-resolution optical satellite remote-sensing image and its system
CN109102003A (en) * 2018-07-18 2018-12-28 华中科技大学 A kind of small target detecting method and system based on Infrared Physics Fusion Features
CN109902715A (en) * 2019-01-18 2019-06-18 南京理工大学 A kind of method for detecting infrared puniness target based on context converging network
CN112749714A (en) * 2019-10-29 2021-05-04 中国科学院长春光学精密机械与物理研究所 Method for detecting polymorphic dark and weak small target in single-frame infrared image
CN111539428A (en) * 2020-05-06 2020-08-14 中国科学院自动化研究所 Rotating target detection method based on multi-scale feature integration and attention mechanism
CN112001257A (en) * 2020-07-27 2020-11-27 南京信息职业技术学院 SAR image target recognition method and device based on sparse representation and cascade dictionary
CN113935984A (en) * 2021-11-01 2022-01-14 中国电子科技集团公司第三十八研究所 Multi-feature fusion method and system for detecting infrared dim small target in complex background

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
BOAZ OPHIR 等: "Multi-scale dictionary learning using wavelets", 《IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING》 *
GEN LI 等: "The Research on Classification of Small Sample Data Set Image Based on Convolutional Neural Network", 《2021 33RD CHINESE CONTROL AND DECISION CONFERENCE (CCDC)》 *
XUEQI LI 等: "Research on Feature Analysis and Detection of Infrared Small Target under Complex Ground Background", 《2019 IEEE 8TH JOINT INTERNATIONAL INFORMATION TECHNOLOGY AND ARTIFICIAL INTELLIGENCE CONFERENCE (ITAIC 2019)》 *
杨帆 et al.: "Mastering Classic Image Processing Algorithms (MATLAB Edition)", Beihang University Press, 30 April 2014 *
王会改 et al.: "Small and weak target detection method based on multi-scale adaptive sparse dictionary", Infrared and Laser Engineering *
蒋昕昊 et al.: "Infrared dim and small target detection based on the YOLO-IDSTD algorithm", Infrared and Laser Engineering *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116228819A (en) * 2023-04-27 2023-06-06 中国科学院空天信息创新研究院 Infrared moving target detection method and device
CN116228819B (en) * 2023-04-27 2023-08-08 中国科学院空天信息创新研究院 Infrared moving target detection method and device
CN117011196A (en) * 2023-08-10 2023-11-07 哈尔滨工业大学 Infrared small target detection method and system based on combined filtering optimization
CN117011196B (en) * 2023-08-10 2024-04-19 哈尔滨工业大学 Infrared small target detection method and system based on combined filtering optimization



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee (granted publication date: 20220708)