CN114463619A - Infrared dim target detection method based on integrated fusion features - Google Patents
- Publication number: CN114463619A
- Application number: CN202210377446.0A
- Authority
- CN
- China
- Prior art keywords
- image
- sub
- multiplied
- characteristic
- dictionary
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/28—Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
Abstract
The invention discloses an infrared dim target detection method based on integrated fusion features, relating to the field of machine learning. The method comprises: acquiring initial images of infrared dim targets as a training set, establishing a classifier, and obtaining a trained model; acquiring an image to be detected and filtering it with a high-pass filter; performing constant false alarm threshold segmentation on the filtered image; marking candidate target regions in the segmented binary image and calculating their center coordinates; extracting image blocks from the image to be detected according to the center coordinates of each candidate target; extracting the feature parameters of each image block; and classifying the feature parameters of the image blocks with the trained model to obtain and output the center coordinates of the target, completing the detection. The fused features have stronger classification capability, improve classification accuracy, and accelerate convergence, thereby reducing the number of classifier parameters; the method is sufficiently adaptable to complex application scenarios and convenient for engineering application.
Description
Technical Field
The invention relates to the field of machine learning, and in particular to an infrared dim small target detection method based on integrated fusion features.
Background
Infrared dim small target detection is one of the core technologies of airborne optoelectronic systems and a basic prerequisite for target surveillance, reconnaissance, and precision strike. As technology advances, the required photodetection distance becomes increasingly demanding. At long range the infrared target has a small imaging size, is difficult even for the human eye to distinguish, and is subject to clutter interference in various complex air, ground, and sea scenes, making small targets hard to detect accurately.
Existing infrared dim small target detection techniques can be divided, by technical route, into methods based on multi-frame and on single-frame images. Multi-frame methods extract the motion characteristics of the target using the temporal and spatial features of the input video sequence, achieving high-precision detection; however, they require joint processing of multiple frames, and in practical applications the search and scan motion of the optoelectronic system causes severe scene changes between successive frames, making inter-frame correlation information hard to exploit and degrading detection performance. Single-frame methods have their own limitations: saliency-based target detection struggles to adapt to the variety of complex application scenarios encountered in open environments, since one set of parameters fits only individual scenes and must be re-tuned whenever the application background changes; and most current machine-learning-based single-frame detection methods require complex feature extraction or detection models, which are difficult to run in real time in engineering applications.
Disclosure of Invention
To address the defects of the prior art, the infrared dim small target detection method based on integrated fusion features solves the problems that existing methods either adapt poorly to complex application scenarios or have high computational complexity and are difficult to apply in engineering.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that:
the method for detecting the infrared dim small target based on the integrated fusion features comprises the following steps:
s1, acquiring an initial image of the infrared dim target as a training set, and constructing a dictionary filter to perform multi-scale central dictionary feature extraction on the training set;
s2, establishing a classifier based on the multi-scale central dictionary features to obtain a trained model;
s3, obtaining an image to be detected and filtering the image through a high-pass filter to obtain a filtered image;
s4, performing constant false alarm threshold segmentation on the filtered image to obtain a segmented binary image;
s5, marking candidate target areas of the segmented binary image, and calculating to obtain the center coordinates of the candidate target areas;
s6, taking image blocks from the image to be detected according to the center coordinates of each candidate target;
s7, extracting the characteristic parameters of each image block to be detected;
and S8, classifying the characteristic parameters of the image block to be detected through the trained model to obtain and output the center coordinates of the target, and completing target detection.
Further, the specific method of step S1 is:
s1-1, acquiring an initial image of the infrared dim target as a training set, marking a candidate target area on the image in the training set, and calculating to obtain the center coordinate of the candidate target area;
s1-2, extracting 19 × 19 sub-image blocks according to the center coordinates of the candidate targets;
and S1-3, constructing a dictionary filter, performing convolution on the sub-image blocks to obtain feature maps of the sub-image blocks, stretching all the feature maps into vectors, and combining to form a feature column vector, namely completing multi-scale central dictionary feature extraction.
Further, the specific process of constructing the dictionary filter in step S1-3 is as follows:
s1-3-1, for each 19 × 19 image block, taking 9 sub-image blocks of different sizes centered at pixel coordinates (10, 10);
the sizes of the 9 sub-image blocks are 3 × 3, 5 × 5, 7 × 7, 9 × 9, 11 × 11, 13 × 13, 15 × 15, 17 × 17 and 19 × 19;
s1-3-2, clustering the 3 × 3 sub-image blocks to obtain 3 dictionary filters;
s1-3-3, clustering the 5 × 5 sub-image blocks to obtain 3 dictionary filters;
s1-3-4, clustering the 7 × 7 sub-image blocks to obtain 3 dictionary filters;
s1-3-5, clustering the sub-image blocks of sizes 9 × 9, 11 × 11, 13 × 13, 15 × 15, 17 × 17 and 19 × 19 to obtain 1 dictionary filter each; 15 dictionary filters are obtained in total.
Further, the specific process in step S2 is:
s2-1, representing the multi-scale central dictionary features of all sub-image blocks in the training set as a labeled sample set;
wherein each sample consists of the feature column vector of the i-th sub-image block and the label of the i-th sub-image block, with +1 representing a positive sample and -1 a negative sample; m is the total number of image blocks in the training set;
s2-3, normalizing the weight;
s2-4, calculating the feature row vector of the i-th sub-image block using the normalized weights, and calculating the feature parameter from the feature column vector and the feature row vector of the i-th sub-image block;
s2-5, constructing a weak classifier, and calculating according to the weak classifier to obtain a score of the characteristic parameter;
s2-6, constructing a classifier based on the ratio of the number of negative samples to the number of positive samples and on the sign function of the features;
s2-7, updating the weight of the image block according to the score;
and S2-8, repeating the steps S2-3 to S2-7 based on the updated weight of the image block, and performing T iterations to obtain a strong classifier integrated with a weak classifier, namely the trained model.
Further, the specific process in step S2-4 is:
s2-4-1, according to the formula:
to obtain the feature row vector of the i-th sub-image block at the t-th iteration; wherein the formula involves the arg-min (minimum-finding) function, the normalized weights, an intercept, a hyper-parameter controlling the norm constraint, an intermediate expression, a hyper-parameter adjusting the relative weight of the two different norm constraints, the two-norm, and the zero-norm;
s2-4-2, according to the formula:
Further, the specific process in step S2-5 is:
s2-5-1, according to the formula:
to obtain the weak classifier for the feature row vector of the t-th iteration; wherein the two coefficients are the parameters to be solved, and sign(·) is the sign function;
s2-5-2, according to the formula:
Further, the specific process of updating the weight in step S2-7 is as follows:
according to the formula:
Further, the specific process of obtaining the strong classifier integrated with the weak classifier in step S2-8 is as follows:
according to the formula:
to obtain the strong classifier; wherein x is the feature column vector of the image to be detected, composed of the feature column vectors of the sub-image blocks, together with an intermediate parameter from the iterations; T is the number of iterative updates; r is the ratio of the number of negative samples to the number of positive samples; and ln is the logarithmic function with the natural constant e as its base.
Further, the specific method of step S4 is:
s4-1, setting the false alarm parameter, and calculating the mean and variance of the filtered image;
s4-2, based on the false alarm parameter, the mean and the variance, according to the formula:
obtaining the segmentation threshold K; wherein the formula involves the mean of the filtered image, the variance of the filtered image, the normal distribution function, and the false alarm parameter;
and s4-3, setting pixel values in the filtered image that are greater than the segmentation threshold to 1 and those less than the threshold to 0, obtaining the segmented binary image.
Further, the specific method for marking candidate target regions in the segmented binary image in step S5 is: searching the segmented binary image for regions whose pixels equal 1, and marking each connected domain among them as a candidate target region; wherein the pixel sum of a candidate target region is greater than or equal to 3.
The beneficial effects of the invention are: the method designs multi-scale central dictionary features that cover targets of various sizes, with the central dictionary designed in a targeted manner, improving the descriptive power of the target features; when training the classifier, a feature row vector is introduced to linearly fuse the features, and the fused features are formed into a simple classifier through ensemble learning. The fused features have stronger classification capability, improve classification accuracy, and accelerate convergence, thereby reducing the number of classifier parameters. The method is sufficiently adaptable to complex application scenarios and convenient for engineering application.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a block diagram of a sub-image;
FIG. 3 is an initial image;
fig. 4 is a binary image.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding by those skilled in the art; however, the invention is not limited to the scope of the embodiments. Various changes apparent to those skilled in the art that do not depart from the spirit and scope of the invention as defined in the appended claims are protected, as is everything produced using the inventive concept.
As shown in fig. 1, the method for detecting infrared dim targets based on integrated fusion features comprises the following steps:
s1, acquiring an initial image of the infrared dim target as a training set, and constructing a dictionary filter to perform multi-scale central dictionary feature extraction on the training set;
s2, establishing a classifier based on the multi-scale central dictionary features to obtain a trained model;
s3, obtaining an image to be detected and filtering the image through a high-pass filter to obtain a filtered image;
s4, performing constant false alarm threshold segmentation on the filtered image to obtain a segmented binary image;
s5, marking candidate target areas of the segmented binary image, and calculating to obtain the center coordinates of the candidate target areas;
s6, taking image blocks from the image to be detected according to the center coordinates of each candidate target;
s7, extracting the characteristic parameters of each image block to be detected;
and S8, classifying the characteristic parameters of the image block to be detected through the trained model to obtain and output the center coordinates of the target, and completing target detection.
The specific method of step S1 is:
s1-1, acquiring an initial image of the infrared dim target as a training set, marking a candidate target area on the image in the training set, and calculating to obtain the center coordinate of the candidate target area;
s1-2, extracting 19 × 19 sub-image blocks according to the center coordinates of the candidate targets;
and S1-3, constructing a dictionary filter, performing convolution on the sub-image blocks to obtain feature maps of the sub-image blocks, stretching all the feature maps into vectors, and combining to form a feature column vector, namely completing multi-scale central dictionary feature extraction.
The specific process of constructing the dictionary filter in the step S1-3 is as follows:
s1-3-1, for each 19 × 19 image block, taking 9 sub-image blocks of different sizes centered at pixel coordinates (10, 10);
the sizes of the 9 sub-image blocks are 3 × 3, 5 × 5, 7 × 7, 9 × 9, 11 × 11, 13 × 13, 15 × 15, 17 × 17 and 19 × 19;
s1-3-2, clustering the 3 × 3 sub-image blocks to obtain 3 dictionary filters;
s1-3-3, clustering the 5 × 5 sub-image blocks to obtain 3 dictionary filters;
s1-3-4, clustering the 7 × 7 sub-image blocks to obtain 3 dictionary filters;
s1-3-5, clustering the sub-image blocks of sizes 9 × 9, 11 × 11, 13 × 13, 15 × 15, 17 × 17 and 19 × 19 to obtain 1 dictionary filter each; 15 dictionary filters are obtained in total.
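The filter construction in steps S1-3-1 to S1-3-5 amounts to clustering the sub-image blocks at each scale and keeping the cluster centers as filters. The patent does not name the clustering algorithm, so the sketch below assumes plain k-means; the function names (`kmeans_filters`, `build_dictionary`) are illustrative, not from the source:

```python
import numpy as np

def kmeans_filters(patches, k, iters=20, seed=0):
    """Cluster flattened patches into k centroids (assumed k-means)."""
    rng = np.random.default_rng(seed)
    X = patches.reshape(len(patches), -1).astype(float)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each patch to its nearest centroid, then recompute centroids
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def build_dictionary(blocks19):
    """blocks19: array of N 19x19 image blocks -> list of 15 filters."""
    filters = []
    sizes = [3, 5, 7, 9, 11, 13, 15, 17, 19]
    for s in sizes:
        half = s // 2
        # s x s sub-blocks centred at pixel (10, 10), i.e. 0-based index 9
        subs = blocks19[:, 9 - half:9 + half + 1, 9 - half:9 + half + 1]
        k = 3 if s <= 7 else 1        # 3 filters for 3/5/7, 1 for larger scales
        centers = kmeans_filters(subs, k)
        filters += [c.reshape(s, s) for c in centers]
    return filters                    # 3*3 + 6*1 = 15 filters in total
```

Any clustering that yields representative patch prototypes would serve the same role; k-means is simply the most common choice for dictionary construction of this kind.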
The specific process in step S2 is:
s2-1, representing the multi-scale central dictionary features of all sub-image blocks in the training set as a labeled sample set;
wherein each sample consists of the feature column vector of the i-th sub-image block and the label of the i-th sub-image block, with +1 representing a positive sample and -1 a negative sample; m is the total number of image blocks in the training set;
s2-3, normalizing the weight;
s2-4, calculating the feature row vector of the i-th sub-image block using the normalized weights, and calculating the feature parameter from the feature column vector and the feature row vector of the i-th sub-image block;
s2-5, constructing a weak classifier, and calculating according to the weak classifier to obtain a score of the characteristic parameter;
s2-6, constructing a classifier based on the ratio of the number of negative samples to the number of positive samples and on the sign function of the features;
s2-7, updating the weight of the image block according to the score;
and S2-8, repeating the steps S2-3 to S2-7 based on the updated weight of the image block, and performing T iterations to obtain a strong classifier integrated with a weak classifier, namely the trained model.
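Steps S2-1 to S2-8 have the shape of a boosting loop: normalize the sample weights, fit a weak classifier, score it, re-weight the samples, and iterate T times. Since the patent's formulas are published only as images, the skeleton below substitutes standard AdaBoost updates and one-dimensional threshold stumps purely for illustration — it is not the patent's exact classifier, and the ln(r) offset for the negative/positive sample ratio is a guess at where that term enters:

```python
import numpy as np

def train_boosted(scores, labels, T=10):
    """scores: (m,) real-valued feature parameter per image block;
    labels in {-1, +1}. Standard AdaBoost with threshold stumps,
    standing in for steps S2-3..S2-8."""
    m = len(scores)
    w = np.full(m, 1.0 / m)                  # initial sample weights
    stumps = []
    for _ in range(T):
        w = w / w.sum()                      # S2-3: normalise the weights
        best = None
        for thr in np.unique(scores):        # S2-5: pick the best weak stump
            for sgn in (1, -1):
                pred = sgn * np.sign(scores - thr + 1e-12)
                err = w[pred != labels].sum()
                if best is None or err < best[0]:
                    best = (err, thr, sgn)
        err, thr, sgn = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)    # weak-classifier weight
        pred = sgn * np.sign(scores - thr + 1e-12)
        w = w * np.exp(-alpha * labels * pred)   # S2-7: re-weight by score
        stumps.append((alpha, thr, sgn))
    return stumps

def strong_classify(score, stumps, r=1.0):
    """S2-8: sign of the weighted vote; the 0.5*ln(r) offset is an assumed
    placement of the negative/positive sample-ratio term from the patent."""
    vote = sum(a * s * np.sign(score - t + 1e-12) for a, t, s in stumps)
    return 1 if vote - 0.5 * np.log(r) > 0 else -1
```

The point of the sketch is the control flow (normalize, fit, score, re-weight, integrate), which matches the enumerated steps even though the patent's exact update rules are not reproducible from the published text.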
The specific process in step S2-4 is:
s2-4-1, according to the formula:
to obtain the feature row vector of the i-th sub-image block at the t-th iteration; wherein the formula involves the arg-min (minimum-finding) function, the normalized weights, an intercept, a hyper-parameter controlling the norm constraint, an intermediate expression, a hyper-parameter adjusting the relative weight of the two different norm constraints, the two-norm, and the zero-norm;
s2-4-2, according to the formula:
The specific process in step S2-5 is:
s2-5-1, according to the formula:
to obtain the weak classifier for the feature row vector of the t-th iteration; wherein the two coefficients are the parameters to be solved, and sign(·) is the sign function;
s2-5-2, according to the formula:
The specific process of updating the weight in step S2-7 is as follows:
according to the formula:
The specific process of obtaining the strong classifier integrated with the weak classifier in the step S2-8 is as follows:
according to the formula:
to obtain the strong classifier; wherein x is the feature column vector of the image to be detected, composed of the feature column vectors of the sub-image blocks, together with an intermediate parameter from the iterations; T is the number of iterative updates; r is the ratio of the number of negative samples to the number of positive samples; and ln is the logarithmic function with the natural constant e as its base.
The specific method of step S4 is:
s4-1, setting the false alarm parameter, and calculating the mean and variance of the filtered image;
s4-2, based on the false alarm parameter, the mean and the variance, according to the formula:
obtaining the segmentation threshold K; wherein the formula involves the mean of the filtered image, the variance of the filtered image, the normal distribution function, and the false alarm parameter;
and s4-3, setting pixel values in the filtered image that are greater than the segmentation threshold to 1 and those less than the threshold to 0, obtaining the segmented binary image.
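A common form of the constant-false-alarm threshold in step S4, under a Gaussian clutter assumption, is K = mean + std · Φ⁻¹(1 − Pfa), where Φ is the normal distribution function; the patent's own formula is published as an image, so this standard form is an assumption, and the function names below are illustrative:

```python
import numpy as np
from statistics import NormalDist

def cfar_threshold(img, p_fa):
    """Assumed constant-false-alarm-rate threshold for Gaussian clutter:
    K = mean + std * Phi^{-1}(1 - p_fa)."""
    mu = float(np.mean(img))
    sigma = float(np.std(img))
    return mu + sigma * NormalDist().inv_cdf(1.0 - p_fa)

def segment(img, p_fa):
    """Step S4-3 (as numbered here): binarise the filtered image."""
    K = cfar_threshold(img, p_fa)
    return (img > K).astype(np.uint8)
```

Under this form, the fraction of clutter pixels exceeding K is approximately the chosen false-alarm rate, which is what "constant false alarm" segmentation is meant to guarantee.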
The specific method for marking candidate target regions in the segmented binary image in step S5 is: searching the segmented binary image for regions whose pixels equal 1, and marking each connected domain among them as a candidate target region; wherein the pixel sum of a candidate target region is greater than or equal to 3.
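Step S5's marking of connected domains can be sketched with a flood fill; the patent does not state the connectivity, so 8-connectivity is assumed here, and only regions whose pixel sum is at least 3 are kept, per the text:

```python
import numpy as np
from collections import deque

def candidate_centers(binary, min_pixels=3):
    """Label 8-connected regions of 1-pixels; keep regions with at least
    min_pixels pixels and return the centre (row, col) of each candidate."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    centers = []
    for r in range(h):
        for c in range(w):
            if binary[r, c] == 1 and not seen[r, c]:
                # flood-fill one connected domain
                q, region = deque([(r, c)]), []
                seen[r, c] = True
                while q:
                    y, x = q.popleft()
                    region.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w and \
                               binary[ny, nx] == 1 and not seen[ny, nx]:
                                seen[ny, nx] = True
                                q.append((ny, nx))
                if len(region) >= min_pixels:      # pixel sum >= 3
                    ys, xs = zip(*region)
                    centers.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centers
```

The returned centers are what step S6 uses to cut image blocks out of the image under test; isolated single pixels (likely noise) are discarded by the size filter.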
As shown in fig. 2, the sub-image blocks correspond to step S1-2;
as shown in fig. 3, the initial image corresponds to step S1;
as shown in fig. 4, the binary image corresponds to step S4.
In one embodiment of the present invention, for the multi-scale central dictionary feature built from the 9 sub-image-block scales, the dimension of the corresponding feature row vector is 2335.
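The 2335-dimensional figure is consistent with valid (no-padding) convolution of a 19 × 19 block by the 15 dictionary filters above: summing the flattened feature-map sizes over all scales reproduces it. This arithmetic check assumes valid convolution, which the patent does not state explicitly:

```python
# Valid convolution of a 19x19 block with an s x s filter yields a
# (19 - s + 1) x (19 - s + 1) feature map; 3 filters each for s = 3, 5, 7
# and 1 filter each for s = 9, 11, ..., 19 give the total feature dimension.
sizes = [3, 5, 7, 9, 11, 13, 15, 17, 19]
counts = [3 if s <= 7 else 1 for s in sizes]
dim = sum(n * (19 - s + 1) ** 2 for s, n in zip(sizes, counts))
print(dim)  # -> 2335
```

That the count comes out exactly to the embodiment's 2335 supports the valid-convolution reading of step S1-3.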
The solution of the classifier parameters is converted into a general optimization problem, and the iterative solution process is as follows:
1) set the hyper-parameter so that the zero-norm constraint carries slightly more weight than the two-norm constraint, so that the solved row vector contains fewer non-zero elements, which helps reduce the computational complexity;
2) initialize each component as the average of the maximum and minimum values of the corresponding component of the feature column vectors over the sub-image blocks, so that each solved parameter, even if not optimal, does not degrade performance;
3) compute the classification error of the R-th round; if the error shows no change over 20 consecutive rounds, the model has converged and training can be stopped, setting T = R, the iteration round at which training stopped. In practical applications, a larger value of T can be set first, and training stopped early according to the convergence behavior.
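The stopping rule in 3) — halt once the classification error has not changed for 20 consecutive rounds and set T = R — can be sketched as follows (the function name and error callback are illustrative, not from the source):

```python
def train_with_early_stop(run_round, max_T=500, patience=20):
    """Call run_round(t) -> classification error of round t; stop when the
    error has not changed for `patience` consecutive rounds, returning T = R."""
    last_err, unchanged = None, 0
    for t in range(1, max_T + 1):
        err = run_round(t)
        unchanged = unchanged + 1 if err == last_err else 0
        last_err = err
        if unchanged >= patience:
            return t            # T = R, the round at which training stopped
    return max_T
```

Comparing floating-point errors for exact equality is taken literally from the text; in practice one might instead stop when the change falls below a small tolerance.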
The method was tested on a data set of 16492 candidate targets (4039 targets and 12453 non-targets) extracted from twenty thousand aerial infrared images, and the classification accuracy reached 97.6%. In the invention, the number of filters at each scale in the dictionary set can be set slightly differently, determined by the target size in the application scenario: if the targets are larger, the number of large-size filters is increased and the number of small-size filters is reduced.
The method designs multi-scale central dictionary features that cover targets of various sizes, with the central dictionary designed in a targeted manner, improving the descriptive power of the target features; when training the classifier, a feature row vector is introduced to linearly fuse the features, and the fused features are formed into a simple classifier through ensemble learning. The fused features have stronger classification capability, improve classification accuracy, and accelerate convergence, thereby reducing the number of classifier parameters.
Claims (10)
1. An infrared small and weak target detection method based on integrated fusion features is characterized by comprising the following steps:
s1, acquiring an initial image of the infrared dim target as a training set, and constructing a dictionary filter to perform multi-scale central dictionary feature extraction on the training set;
s2, establishing a classifier based on the multi-scale central dictionary features to obtain a trained model;
s3, obtaining an image to be detected and filtering the image through a high-pass filter to obtain a filtered image;
s4, performing constant false alarm threshold segmentation on the filtered image to obtain a segmented binary image;
s5, marking candidate target areas of the segmented binary image, and calculating to obtain the center coordinates of the candidate target areas;
s6, taking image blocks from the image to be detected according to the center coordinates of each candidate target;
s7, extracting the characteristic parameters of each image block to be detected;
and S8, classifying the characteristic parameters of the image block to be detected through the trained model to obtain and output the center coordinates of the target, and completing target detection.
2. The infrared dim target detection method based on integrated fusion features as claimed in claim 1, wherein the specific method of step S1 is:
s1-1, acquiring an initial image of the infrared dim target as a training set, marking a candidate target area on the image in the training set, and calculating to obtain the center coordinate of the candidate target area;
s1-2, extracting 19 × 19 sub-image blocks according to the center coordinates of the candidate targets;
and S1-3, constructing a dictionary filter, performing convolution on the sub-image blocks to obtain feature maps of the sub-image blocks, stretching all the feature maps into vectors, and combining to form a feature column vector, namely completing multi-scale central dictionary feature extraction.
3. The infrared weak and small target detection method based on integrated fusion features as claimed in claim 2, wherein the specific process of constructing the dictionary filter in step S1-3 is as follows:
s1-3-1, for each 19 × 19 image block, taking 9 sub-image blocks of different sizes centered at pixel coordinates (10, 10);
the sizes of the 9 sub-image blocks are 3 × 3, 5 × 5, 7 × 7, 9 × 9, 11 × 11, 13 × 13, 15 × 15, 17 × 17 and 19 × 19;
s1-3-2, clustering the 3 × 3 sub-image blocks to obtain 3 dictionary filters;
s1-3-3, clustering the 5 × 5 sub-image blocks to obtain 3 dictionary filters;
s1-3-4, clustering the 7 × 7 sub-image blocks to obtain 3 dictionary filters;
s1-3-5, clustering the sub-image blocks of sizes 9 × 9, 11 × 11, 13 × 13, 15 × 15, 17 × 17 and 19 × 19 to obtain 1 dictionary filter each; 15 dictionary filters are obtained in total.
4. The method for detecting infrared dim targets based on integrated fusion features according to claim 2, characterized in that the specific process in step S2 is:
s2-1, representing the multi-scale central dictionary features of all sub-image blocks in the training set as a labeled sample set;
wherein each sample consists of the feature column vector of the i-th sub-image block and the label of the i-th sub-image block, with +1 representing a positive sample and -1 a negative sample; m is the total number of image blocks in the training set;
s2-3, normalizing the weight;
s2-4, calculating the feature row vector of the i-th sub-image block using the normalized weights, and calculating the feature parameter from the feature column vector and the feature row vector of the i-th sub-image block;
s2-5, constructing a weak classifier, and calculating according to the weak classifier to obtain a score of the characteristic parameter;
s2-6, constructing a classifier based on the ratio of the number of negative samples to the number of positive samples and on the sign function of the features;
s2-7, updating the weight of the image block according to the score;
and S2-8, repeating the steps S2-3 to S2-7 based on the updated weight of the image block, and performing T iterations to obtain a strong classifier integrated with a weak classifier, namely the trained model.
5. The infrared weak and small target detection method based on integrated fusion features as claimed in claim 4, wherein the specific process in step S2-4 is as follows:
s2-4-1, according to the formula:
to obtain the feature row vector of the i-th sub-image block at the t-th iteration; wherein the formula involves the arg-min (minimum-finding) function, the normalized weights, an intercept, a hyper-parameter controlling the norm constraint, an intermediate expression, a hyper-parameter adjusting the relative weight of the two different norm constraints, the two-norm, and the zero-norm;
s2-4-2, according to the formula:
6. The infrared dim target detection method based on integrated fusion features as claimed in claim 5, wherein the specific process in step S2-5 is:
s2-5-1, according to the formula:
to obtain the weak classifier for the feature row vector of the t-th iteration; wherein the two coefficients are the parameters to be solved, and sign(·) is the sign function;
s2-5-2, according to the formula:
8. The integrated fusion feature-based infrared weak and small target detection method according to claim 7, wherein the specific process of obtaining the strong classifier of the integrated weak classifier in step S2-8 is as follows:
according to the formula:
to obtain the strong classifier; wherein x is the feature column vector of the image to be detected, composed of the feature column vectors of the sub-image blocks, together with an intermediate parameter from the iterations; T is the number of iterative updates; r is the ratio of the number of negative samples to the number of positive samples; and ln is the logarithmic function with the natural constant e as its base.
9. The integrated fusion feature-based infrared small and weak target detection method according to claim 1, wherein the specific method of step S4 is as follows:
S2-1, set the false alarm parameter and calculate the mean and variance of the filtered image;
S2-2, based on the false alarm parameter, the mean, and the variance, according to the formula:
to obtain the segmentation threshold K; wherein the formula involves the mean of the filtered image, the variance of the filtered image, the normal distribution function, and the false alarm parameter;
and S2-3, set pixel values in the filtered image greater than the segmentation threshold to 1 and those less than the segmentation threshold to 0, so as to obtain the segmented binary image.
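Assuming Gaussian background clutter, a threshold of the standard constant-false-alarm-rate form K = μ + σ·Φ⁻¹(1 − P_fa) keeps the per-pixel false-alarm probability at P_fa. The patent's exact formula is not reproduced in the source, so this sketch uses that conventional form:

```python
import numpy as np
from statistics import NormalDist

def segment(img, p_fa):
    """Binarise at a false-alarm-controlled threshold K = mu + sigma * Phi^{-1}(1 - p_fa)."""
    mu = img.mean()                                   # mean of the filtered image
    sigma = img.std()                                 # standard deviation (sqrt of variance)
    K = mu + sigma * NormalDist().inv_cdf(1.0 - p_fa) # segmentation threshold
    return (img > K).astype(np.uint8)                 # 1 above K, 0 otherwise

rng = np.random.default_rng(1)
img = rng.normal(100.0, 5.0, (64, 64))                # synthetic filtered background
img[32, 32] = 200.0                                   # one bright point target
mask = segment(img, p_fa=1e-4)
print(mask[32, 32])                                   # 1
```

A smaller false alarm parameter pushes K higher, suppressing clutter at the cost of missing dimmer targets.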
10. The infrared dim target detection method based on integrated fusion features as claimed in claim 2, wherein the specific method for marking the target region of the segmented binary image in step S5 is as follows: search the segmented binary image for regions whose pixels equal 1, and mark each such region that forms a connected domain as a candidate target region; wherein the pixel sum of a candidate target region is greater than or equal to 3.
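The candidate-region marking in step S5 amounts to connected-component labelling followed by an area test (pixel sum ≥ 3). A minimal sketch with 4-connectivity (the claim does not state the connectivity, so that choice is an assumption):

```python
import numpy as np

def candidate_regions(mask, min_area=3):
    """Label connected regions of 1-pixels; keep those with at least min_area pixels."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    regions = []
    for i in range(h):
        for j in range(w):
            if mask[i, j] == 1 and not seen[i, j]:
                stack, comp = [(i, j)], []            # flood fill from an unvisited 1-pixel
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-neighbourhood
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] == 1 and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(comp) >= min_area:             # area test: keep plausible targets only
                    regions.append(comp)
    return regions

mask = np.zeros((8, 8), dtype=np.uint8)
mask[2, 2:5] = 1                                      # 3-pixel connected region: kept
mask[6, 6] = 1                                        # isolated pixel: discarded as noise
print(len(candidate_regions(mask)))                   # 1
```

The area threshold of 3 pixels discards isolated noise responses while retaining small targets, matching the claim's pixel-sum condition.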
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210377446.0A CN114463619B (en) | 2022-04-12 | 2022-04-12 | Infrared dim target detection method based on integrated fusion features |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114463619A true CN114463619A (en) | 2022-05-10 |
CN114463619B CN114463619B (en) | 2022-07-08 |
Family
ID=81417687
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210377446.0A Expired - Fee Related CN114463619B (en) | 2022-04-12 | 2022-04-12 | Infrared dim target detection method based on integrated fusion features |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114463619B (en) |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004133629A (en) * | 2002-10-09 | 2004-04-30 | Ricoh Co Ltd | Dictionary preparation device for detecting specific mark, specific mark detection device, specific mark recognition device, and program and recording medium |
CN102842047A (en) * | 2012-09-10 | 2012-12-26 | 重庆大学 | Infrared small and weak target detection method based on multi-scale sparse dictionary |
CN104899567A (en) * | 2015-06-05 | 2015-09-09 | 重庆大学 | Small weak moving target tracking method based on sparse representation |
CN105513076A (en) * | 2015-12-10 | 2016-04-20 | 南京理工大学 | Weak object constant false alarm detection method based on object coordinate distribution features |
CN106709512A (en) * | 2016-12-09 | 2017-05-24 | 河海大学 | Infrared target detection method based on local sparse representation and contrast |
CN107274410A (en) * | 2017-07-02 | 2017-10-20 | 中国航空工业集团公司雷华电子技术研究所 | Adaptive man-made target constant false alarm rate detection method |
CN108304873A (en) * | 2018-01-30 | 2018-07-20 | 深圳市国脉畅行科技股份有限公司 | Object detection method based on high-resolution optical satellite remote-sensing image and its system |
CN109102003A (en) * | 2018-07-18 | 2018-12-28 | 华中科技大学 | A kind of small target detecting method and system based on Infrared Physics Fusion Features |
US20190095739A1 (en) * | 2017-09-27 | 2019-03-28 | Harbin Institute Of Technology | Adaptive Auto Meter Detection Method based on Character Segmentation and Cascade Classifier |
CN109902715A (en) * | 2019-01-18 | 2019-06-18 | 南京理工大学 | A kind of method for detecting infrared puniness target based on context converging network |
CN111539428A (en) * | 2020-05-06 | 2020-08-14 | 中国科学院自动化研究所 | Rotating target detection method based on multi-scale feature integration and attention mechanism |
CN112001257A (en) * | 2020-07-27 | 2020-11-27 | 南京信息职业技术学院 | SAR image target recognition method and device based on sparse representation and cascade dictionary |
CN112749714A (en) * | 2019-10-29 | 2021-05-04 | 中国科学院长春光学精密机械与物理研究所 | Method for detecting polymorphic dark and weak small target in single-frame infrared image |
CN113935984A (en) * | 2021-11-01 | 2022-01-14 | 中国电子科技集团公司第三十八研究所 | Multi-feature fusion method and system for detecting infrared dim small target in complex background |
Non-Patent Citations (6)
Title |
---|
BOAZ OPHIR et al.: "Multi-scale dictionary learning using wavelets", IEEE Journal of Selected Topics in Signal Processing *
GEN LI et al.: "The Research on Classification of Small Sample Data Set Image Based on Convolutional Neural Network", 2021 33rd Chinese Control and Decision Conference (CCDC) *
XUEQI LI et al.: "Research on Feature Analysis and Detection of Infrared Small Target under Complex Ground Background", 2019 IEEE 8th Joint International Information Technology and Artificial Intelligence Conference (ITAIC 2019) *
YANG Fan et al.: "Mastering Classical Image Processing Algorithms (MATLAB Edition)", Beihang University Press, 30 April 2014 *
WANG Huigai et al.: "Small and dim target detection method based on a multi-scale adaptive sparse dictionary", Infrared and Laser Engineering *
JIANG Xinhao et al.: "Infrared dim and small target detection based on the YOLO-IDSTD algorithm", Infrared and Laser Engineering *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116228819A (en) * | 2023-04-27 | 2023-06-06 | 中国科学院空天信息创新研究院 | Infrared moving target detection method and device |
CN116228819B (en) * | 2023-04-27 | 2023-08-08 | 中国科学院空天信息创新研究院 | Infrared moving target detection method and device |
CN117011196A (en) * | 2023-08-10 | 2023-11-07 | 哈尔滨工业大学 | Infrared small target detection method and system based on combined filtering optimization |
CN117011196B (en) * | 2023-08-10 | 2024-04-19 | 哈尔滨工业大学 | Infrared small target detection method and system based on combined filtering optimization |
Also Published As
Publication number | Publication date |
---|---|
CN114463619B (en) | 2022-07-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114463619B (en) | Infrared dim target detection method based on integrated fusion features | |
CN105956539B (en) | A kind of Human Height measurement method of application background modeling and Binocular Vision Principle | |
CN109636771B (en) | Flight target detection method and system based on image processing | |
CN106548169B (en) | Fuzzy literal Enhancement Method and device based on deep neural network | |
CN109949361A (en) | A kind of rotor wing unmanned aerial vehicle Attitude estimation method based on monocular vision positioning | |
CN103455797A (en) | Detection and tracking method of moving small target in aerial shot video | |
CN105160310A (en) | 3D (three-dimensional) convolutional neural network based human body behavior recognition method | |
CN105279771B (en) | A kind of moving target detecting method based on the modeling of online dynamic background in video | |
CN106023257A (en) | Target tracking method based on rotor UAV platform | |
CN109447082B (en) | Scene moving object segmentation method, system, storage medium and equipment | |
CN111209920B (en) | Airplane detection method under complex dynamic background | |
CN110334703B (en) | Ship detection and identification method in day and night image | |
US11361534B2 (en) | Method for glass detection in real scenes | |
CN112308883A (en) | Multi-ship fusion tracking method based on visible light and infrared images | |
CN109165602A (en) | A kind of black smoke vehicle detection method based on video analysis | |
CN103942786B (en) | The self adaptation block objects detection method of unmanned plane visible ray and infrared image | |
CN113312973A (en) | Method and system for extracting features of gesture recognition key points | |
CN110516527B (en) | Visual SLAM loop detection improvement method based on instance segmentation | |
CN110910497B (en) | Method and system for realizing augmented reality map | |
CN112465863A (en) | Unmanned aerial vehicle video target tracking method based on deep learning | |
CN111339824A (en) | Road surface sprinkled object detection method based on machine vision | |
CN113902044B (en) | Image target extraction method based on lightweight YOLOV3 | |
CN110232314A (en) | A kind of image pedestrian's detection method based on improved Hog feature combination neural network | |
CN115482257A (en) | Motion estimation method integrating deep learning characteristic optical flow and binocular vision | |
CN108876849B (en) | Deep learning target identification and positioning method based on auxiliary identification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20220708 |