CN115240055A - Space-based infrared air target multi-frame detection method, system, medium and equipment - Google Patents
Space-based infrared air target multi-frame detection method, system, medium and equipment
- Publication number: CN115240055A (application CN202210251163.1A)
- Authority: CN (China)
- Prior art keywords: local, space, target, background, slice
- Legal status: Pending (assumed; not a legal conclusion)
Abstract
The invention belongs to the technical field of infrared remote sensing and infrared space, and discloses a space-based infrared air target multi-frame detection method, system, medium and device, comprising the following steps: determining a base frame I_b and reference frames I_{b+l} and I_{b-l}, extracting local slices from the three frames, and estimating the moving direction of the background over the interval [b-l, b+l] through local similarity matching; then respectively constructing a bidirectional spatio-temporal joint model and a spatial background suppression model, realizing enhancement of the infrared aerial target and suppression of strong clutter, and obtaining a saliency map; and setting an adaptive segmentation threshold, where any object whose gray value in the saliency map exceeds the threshold is an aerial target. The invention realizes space-based infrared aerial target detection by constructing a local bidirectional inter-frame matching differential model; the structure is simple, the processing complexity of infrared aerial target detection and the resource requirements of a hardware implementation are reduced, and target detection efficiency is effectively improved, which is of great significance in the field of space-based target detection.
Description
Technical Field
The invention belongs to the technical field of infrared remote sensing and infrared space, and particularly relates to a space-based infrared air target multi-frame detection method, system, medium and device.
Background
At present, most existing infrared small-target detection methods are aimed at ground-based or airborne detection rather than space-based detection, because the conditions of a space-based platform differ greatly from those of ground and air platforms. The distance between an aerial target and the space-based infrared detector exceeds 300 km, and owing to the low resolution of infrared remote sensing and the influence of factors such as atmospheric interference, optical scattering and diffraction, the aerial target mostly appears small and weak in the infrared image, meeting the criteria of an infrared dim target. Weak energy means that the contrast between the target and the background is not sufficiently apparent, while small size means that the target lacks definite shape and texture features. In addition, in practical application scenarios the background of a space-based infrared image consists of cloud, land, ocean and various random noises; strong background clutter exists whose gray value is far greater than that of the aerial target. Moreover, the computing resources of a space-based platform are limited, making it difficult to fully preprocess the original infrared image and narrow the intensity gap between target and clutter; the performance of the space-based detection algorithm is likewise limited by these resources, which reduces detection efficiency. For the above reasons, effective space-based detection of aerial targets remains a difficult task.
In recent years, various space-based infrared target detection methods have been proposed. Due to the limitation of computing resources, these space-based detection methods are all based on local contrast measures (LCM). Single-frame detection methods, such as multi-direction filtering fusion and neighborhood saliency mapping, are suitable for infrared images generated in both staring and scanning modes. However, strong clutter is difficult to suppress with a single-frame method, so multi-frame detection has become the mainstream of the space-based detection field; examples include spatio-temporal local contrast filter methods, space-based spatio-temporal local contrast methods, TDLMS-based detection methods, neighborhood gray difference with connected-domain processing, and spatio-temporal joint processing models. However, these multi-frame methods are only applicable to the staring mode, which produces a stable or slightly moving background.
However, the infrared images used in the above methods are preprocessed by contrast stretching and histogram equalization, which means that these images are not original images from the space-based platform but secondary images produced by post-processing. In space-based images that have only undergone background subtraction and non-uniformity correction, the clutter intensity is far greater than that of the target; existing methods suppress most of the background well, but easily enhance clutter and ignore the real target, causing many missed detections and false alarms. Therefore, it is desirable to design a new space-based infrared air target multi-frame detection method, system, medium and device.
Through the above analysis, the problems and defects of the prior art are as follows:
(1) The existing infrared small target detection method is mainly aimed at ground or air detection, but not space-based detection, because the conditions of a space-based platform are greatly different from those of the ground and air platforms; the background of the space-based infrared image has strong background clutter, and the gray value of the background clutter is far larger than that of an aerial target.
(2) The existing space-based platform has limited computing resources, so that the space-based platform is difficult to fully preprocess an original infrared image and narrow the strength difference between a target and a clutter; meanwhile, the performance of the space-based detection algorithm is limited by limited computing resources, and the detection efficiency is reduced.
(3) Existing single-frame detection methods can hardly suppress strong clutter, while multi-frame detection methods are only suitable for the staring mode, which produces a stable or slightly moving background; existing methods suppress most of the background well, but easily enhance clutter and ignore the real target, causing many missed detections and false alarms.
The difficulties in solving the above problems and drawbacks are as follows. Unlike traditional ground-based infrared detection, space-based infrared images are characterized by aerial-target intensities far lower than strong-clutter intensities. Faced with these characteristics, existing infrared small-target detection algorithms tend to enhance strong clutter and ignore weak targets, finally yielding detection results with a high false-alarm rate and a low detection rate.
The significance for solving the problems and the defects is as follows: the algorithm can adapt to the special characteristics of the space-based image, can enhance the weak target while inhibiting the strong clutter, and realizes the effective detection of the space-based aerial target.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a space-based infrared air target multi-frame detection method, a system, a medium and equipment.
The invention is realized in this way: a space-based infrared aerial target multi-frame detection method, comprising the following steps: determining a base frame I_b and reference frames I_{b+l} and I_{b-l}, extracting local slices from the three frames, and estimating the moving direction of the background over the interval [b-l, b+l] through local similarity matching; then respectively constructing a bidirectional spatio-temporal joint model and a spatial background suppression model, realizing enhancement of the infrared aerial target and suppression of strong clutter, and obtaining a saliency map; and setting an adaptive segmentation threshold, where any object whose gray value in the saliency map exceeds the threshold is an aerial target.
Further, the space-based infrared aerial target multi-frame detection method comprises the following steps:
step one, local slice extraction and local slice group extraction are respectively carried out, and data contained in the subsequent steps are all carried out in the extracted local neighborhood;
step two, performing local normalization and similarity matching respectively; local normalization confines the weak target and the strong clutter to the same value range, preventing the weak target from being missed, and similarity matching provides a reference for background moving-direction estimation over the interval [b-l, b+l];
step three, constructing a bidirectional spatio-temporal joint model and estimating the moving direction of the background, thereby effectively suppressing strong clutter and other background while enhancing dim and weak targets through the dipole extraction step;
step four, performing spatial background suppression and acquiring the saliency map value at pixel (x, y); spatial background suppression serves to suppress the non-uniformity stripes and noise present in space-based images;
step five, calculating the adaptive segmentation threshold T, performing binarization segmentation on I_map, and determining the target position.
Further, the step of performing the local slice extraction and the local slice group extraction in the first step respectively includes:
(1) Local slice extraction
In I_b, at position (x, y), extract a local slice of I_b according to the target size, denoted R_11; the local slice moves from left to right and from top to bottom, and b denotes that the base frame is the b-th frame of the image sequence. The area of the local slice is enlarged so that the weight of background-area pixels is increased in the subsequent similarity matching step. The position of each element of R_11 lies within the following neighborhood of (x, y):

Ω_R11 = {(i, j) | max(|i - x|, |j - y|) ≤ 3s + 4};

wherein (i, j) is the position of a local slice R_11 element in I_b, and s is the target-size radius, which depends on the actual target size in the original image.
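As a concrete illustration, the slice extraction above can be sketched as follows. The half-width 3s + 4 (giving a slice of size 6s + 9, matching the slice coordinate range stated for the normalization step) and the edge-replication border handling are assumptions, not details fixed by the patent text.

```python
import numpy as np

def extract_local_slice(img, x, y, s):
    """Extract the enlarged local slice R_11 centered at (x, y).

    The slice half-width 3*s + 4 (slice size 6*s + 9) and the
    edge-replication border handling are assumptions, chosen to match
    the slice coordinate range {1, ..., 6*s + 9} stated in the text.
    """
    r = 3 * s + 4
    padded = np.pad(img, r, mode="edge")   # replicate border pixels
    # (x, y) in the padded image is shifted by r along each axis
    return padded[x : x + 2 * r + 1, y : y + 2 * r + 1]
```

For a target-size radius s = 1 the slice is 15 × 15, centered on the pixel of interest.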
(2) Local slice group extraction
In reference frame I_{b+l}, extract the 3 × 3 neighborhood of I_{b+l} at (x, y), denoted Ω_local and defined as follows:

Ω_local = {(p, q) | max(|p - x|, |q - y|) ≤ 1};

wherein (p, q) is a pixel coordinate in reference frame I_{b+l}. A local slice centered on each pixel of Ω_local is extracted and recorded as R_2m, m = 1, 2, 3, ..., 9; the sizes of the 9 local slices are the same as that of R_11.
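A minimal sketch of the slice-group extraction, under the same assumed half-width 3s + 4; the row-major ordering of the nine centers of Ω_local is also an assumption, since the text does not fix the numbering of R_2m.

```python
import numpy as np

def extract_slice_group(img, x, y, s):
    """Extract the nine slices R_2m centered on the 3x3 neighborhood of (x, y).

    The half-width 3*s + 4 and the row-major ordering of the nine
    centers are assumptions; the text fixes neither.
    """
    r = 3 * s + 4
    padded = np.pad(img, r, mode="edge")
    group = []
    for p in (x - 1, x, x + 1):        # Omega_local: |p - x| <= 1
        for q in (y - 1, y, y + 1):    # and |q - y| <= 1
            group.append(padded[p : p + 2 * r + 1, q : q + 2 * r + 1])
    return group
```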
(3) Extract the local slice R_31 of I_{b-l}, whose elements lie within the following neighborhood of (x, y):

Ω_R31 = {(i, j) | max(|i - x|, |j - y|) ≤ 3s + 4};

wherein (i, j) is the position of a local slice R_31 element in I_{b-l}.

Obtain the locally normalized slice R_nor31 of R_31 from the local normalization function, and calculate the matching coefficient r_2(x, y) between R_11 and R_31 according to the similarity function.
Further, performing local normalization and similarity matching respectively in step two comprises:
(1) Local normalization
Normalize the gray values of the local slice pixels to [0, 1] through a local normalization function. Since R_11 and R_2m have the same size, the coordinates within both types of local slices are represented by the coordinate system (g, h), g, h ∈ {1, 2, ..., 6s + 9}. The local normalization function is defined as follows:

R_nor1(g, h) = (R_11(g, h) - min(R_11)) / (max(R_11) - min(R_11)), g, h ∈ {1, 2, ..., 6s + 9};

wherein R_nor1(g, h) represents the normalized value of the pixel at (g, h) within R_11; R_nor2m is obtained in the same way.
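The normalization step can be sketched as a standard min-max mapping to [0, 1]; the exact formula is reconstructed here from the stated value range and is an assumption, as is the handling of a flat patch.

```python
import numpy as np

def local_normalize(patch):
    """Min-max normalization of a local slice to [0, 1].

    The exact formula is an assumption reconstructed from the stated
    target range [0, 1]; flat patches are mapped to all zeros.
    """
    lo, hi = patch.min(), patch.max()
    if hi == lo:                       # avoid division by zero
        return np.zeros_like(patch, dtype=float)
    return (patch - lo) / (hi - lo)
```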
(2) Similarity matching
Define a local similarity function and calculate the similarity r_m between R_11 and R_2m; the maximum of r_m is defined as the matching coefficient r_1(x, y) of the base frame and the reference frame at position (x, y), and the serial number m_max of the local slice in R_2m with the greatest similarity to R_11 is determined. The relevant operations are as follows:

r_1(x, y) = max(r_m);

wherein |·| represents the absolute-value operation, and the overbar represents taking the mean of the local slice gray values.
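Since the patent's similarity function is not reproduced in this text, the sketch below assumes the simple form r_m = 1 - mean(|R_nor1 - R_nor2m|), built from the absolute-value and mean operations the text mentions; it returns the matching coefficient r_1(x, y) and the best-match serial number m_max.

```python
import numpy as np

def match_coefficients(ref_slice, candidate_slices):
    """Return the matching coefficient r_1(x, y) and best-match index m_max.

    The similarity form r_m = 1 - mean(|R_nor1 - R_nor2m|) is an
    assumption; the patent's exact function is not reproduced here.
    """
    def norm(a):                       # min-max normalization to [0, 1]
        lo, hi = a.min(), a.max()
        return np.zeros_like(a, dtype=float) if hi == lo else (a - lo) / (hi - lo)

    ref_n = norm(ref_slice)
    r = np.array([1.0 - np.mean(np.abs(ref_n - norm(c)))
                  for c in candidate_slices])
    m_max = int(np.argmax(r)) + 1      # slice serial numbers start at m = 1
    return float(r.max()), m_max
```

Note that a candidate differing from the reference only by a constant offset normalizes to the same slice and therefore matches perfectly, which is consistent with the purpose of local normalization stated above.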
Further, constructing the bidirectional spatio-temporal joint model and performing background moving-direction estimation in step three comprises:
(1) Construction of I b And I b+l Bidirectional space-time combined model
Perform local inter-frame differencing to suppress the background and strong clutter:
wherein R_dif is the differential slice obtained after local differencing. Next, suppress the non-uniformity stripes. In FIG. 2(a), the R_11 neighborhood is divided into an inner region and an outer region; the neighborhood of R_dif is likewise divided into an inner region Ω_int and an outer region Ω_ext, the two regions being related as follows:

Ω_int = {(g, h) | max(|g - x|, |h - y|) ≤ s + 1}, s = 1, 2, 3, 4;
Since the residual non-uniformity stripes exhibit small-range fluctuation in gray value, suppression of the residual stripes is achieved by the following formula:

d_dif2(x, y) = max(R_int) - max(R_ext);

wherein R_int represents the matrix of pixels within Ω_int, and R_ext the matrix of pixels within Ω_ext.
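The stripe-suppression value d_dif2 = max(R_int) - max(R_ext) can be sketched as below; treating the inner region as the centered square of half-width s + 1 and the outer region as the remainder of the slice is an assumption consistent with the region definitions given in the text.

```python
import numpy as np

def stripe_suppression(dif_slice, s):
    """Compute d_dif2 = max(R_int) - max(R_ext) on a differential slice.

    Taking the inner region as the centered square of half-width s + 1
    and the outer region as the rest of the slice is an assumption
    consistent with the region definitions in the text.
    """
    c = dif_slice.shape[0] // 2                      # slice center index
    inner = np.zeros(dif_slice.shape, dtype=bool)
    inner[c - (s + 1) : c + s + 2, c - (s + 1) : c + s + 2] = True
    return float(dif_slice[inner].max() - dif_slice[~inner].max())
```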
Extracting a target dipole, and enhancing a target:
d_dipole1(x, y) = [max(R_int) - min(R_int)]^2;

wherein d_dipole1(x, y) represents the dipole strength at (x, y).

If no target exists at point (x, y), the background and strong clutter are suppressed; when a target exists, a dipole remains after differencing owing to the target's motion, and the dipole is then extracted and the target enhanced.
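A sketch of the dipole-enhancement value d_dipole1(x, y) = [max(R_int) - min(R_int)]^2, using the same assumed centered inner region of half-width s + 1 as the stripe-suppression step:

```python
import numpy as np

def dipole_strength(dif_slice, s):
    """Compute d_dipole1 = [max(R_int) - min(R_int)]^2.

    Uses the same assumed centered inner region of half-width s + 1
    as the stripe-suppression step.
    """
    c = dif_slice.shape[0] // 2
    inner = dif_slice[c - (s + 1) : c + s + 2, c - (s + 1) : c + s + 2]
    return float((inner.max() - inner.min()) ** 2)
```

Squaring the positive-negative pair left by the moving target amplifies it relative to residual background, which is the enhancement effect described above.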
(2) Background moving direction estimation
Considering that the background moves uniformly in a straight line over a short time, once m_max is determined, the moving direction of the background over [b-l, b+l] can be estimated. The displacement of the background within [b-l, b] is:
wherein dx and dy represent the displacement of the background in the horizontal and vertical directions respectively, mod(·, 3) represents taking the remainder modulo 3, and fix(·) represents rounding down.
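Assuming the nine neighborhood slices are numbered row-major (the text does not give the numbering convention), the mod/fix decomposition of m_max into horizontal and vertical offsets can be sketched as:

```python
def background_shift(m_max):
    """Map the best-match slice number m_max (1..9) to a unit shift (dx, dy).

    Assumes row-major numbering of the nine neighborhood centers, so a
    mod-3 remainder and an integer division (the mod/fix operations of
    the text) recover offsets in {-1, 0, 1}; the patent's numbering
    convention is not given, so this mapping is illustrative.
    """
    dx = (m_max - 1) % 3 - 1           # horizontal offset
    dy = (m_max - 1) // 3 - 1          # vertical offset
    return dx, dy
```

Under this assumed numbering, m_max = 9 yields (1, 1), i.e. motion towards the lower-right corner, which agrees with the example given in the detailed description.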
(3) Construction of I b And I b-l Bidirectional space-time combined model
After obtaining the non-uniformity row-stripe suppression value d_dif3(x, y) at (x, y), obtain the dipole-enhanced value d_dipole2(x, y) of I_b and I_{b-l} at position (x, y).
Further, performing spatial background suppression in step four and obtaining the saliency map value at pixel (x, y) comprises:
(1) Spatial background suppression
Spatial background suppression at position (x, y) is achieved by local background subtraction:
(2) Obtaining the saliency map value I_map(x, y) at pixel (x, y)

The intermediate result matrix I_map is obtained after traversing the whole image with the formula.
In step five, the adaptive segmentation threshold T is calculated according to the following formula, and binarization segmentation is performed on I_map to determine the target position:

wherein k is the segmentation coefficient, taken as 20 to 30; when the value of an element of I_map is greater than T, it is set to 1, otherwise to 0, and the points set to 1 are the aerial target positions.
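The adaptive segmentation can be sketched as below; the threshold form T = mean(I_map) + k · std(I_map) is an assumption, since the text gives only the segmentation coefficient range k = 20 to 30.

```python
import numpy as np

def segment_targets(i_map, k=25):
    """Binarize the saliency map I_map with an adaptive threshold T.

    The threshold form T = mean(I_map) + k * std(I_map) is an
    assumption; the text gives only the coefficient range k = 20 to 30.
    """
    t = i_map.mean() + k * i_map.std()
    return (i_map > t).astype(np.uint8)   # 1 marks aerial target pixels
```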
Another objective of the present invention is to provide a space-based infrared aerial target multi-frame detection system implementing the space-based infrared aerial target multi-frame detection method, where the space-based infrared aerial target multi-frame detection system includes:
the slice extraction module is used for respectively carrying out local slice extraction and local slice group extraction;
the local normalization module is used for respectively carrying out normalization processing on the gray values of the local slice pixels through a local normalization function;
the similarity matching module is used for defining a local similarity function and calculating a matching coefficient and a slice serial number;
the bidirectional space-time combined model building module is used for building a bidirectional space-time combined model and estimating the background moving direction;
the spatial background suppression module is used for performing spatial background suppression and acquiring a significance mapping value of a pixel;
a target position determination module for calculating the adaptive segmentation threshold T, performing binarization segmentation on I_map, and determining the target position.
It is a further object of the invention to provide a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of:
determining a base frame I_b and reference frames I_{b+l} and I_{b-l}, extracting local slices from the three frames, and estimating the moving direction of the background over the interval [b-l, b+l] through local similarity matching; then respectively constructing a bidirectional spatio-temporal joint model and a spatial background suppression model, realizing enhancement of the infrared aerial target and suppression of strong clutter, and obtaining a saliency map; and setting an adaptive segmentation threshold, where any object whose gray value in the saliency map exceeds the threshold is an aerial target.
It is another object of the present invention to provide a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
determining a base frame I_b and reference frames I_{b+l} and I_{b-l}, extracting local slices from the three frames, and estimating the moving direction of the background over the interval [b-l, b+l] through local similarity matching; then respectively constructing a bidirectional spatio-temporal joint model and a spatial background suppression model, realizing enhancement of the infrared aerial target and suppression of strong clutter, and obtaining a saliency map; and setting an adaptive segmentation threshold, where any object whose gray value in the saliency map exceeds the threshold is an aerial target.
The invention also aims to provide an information data processing terminal, which is used for realizing the space-based infrared air target multi-frame detection system.
Combining all the above technical schemes, the advantages and positive effects of the invention are as follows: existing infrared target detection methods cannot overcome the characteristic that the clutter intensity in space-based infrared images is far greater than that of the aerial target, which leads to a high false-alarm rate and a low detection rate in space-based infrared target detection. The space-based infrared aerial target multi-frame detection method of the invention is suitable for infrared image sequences obtained in the space-based staring imaging mode; it is particularly suitable for suppressing the background and strong clutter in infrared images in this mode and enhancing the aerial target, finally realizing effective detection of space-based infrared aerial targets.
The method realizes space-based infrared aerial target detection by constructing a local bidirectional interframe matching model, has a simple structure, reduces the processing complexity of infrared aerial target detection and the resource requirement of hardware realization, and effectively improves the efficiency of target detection. The space-based infrared aerial target detection method based on interframe bidirectional matching has important significance in the field of space-based target detection.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments of the present invention will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a space-based infrared air target multi-frame detection method provided by an embodiment of the present invention.
Fig. 2 is a schematic diagram of a space-based infrared air target multi-frame detection method provided by an embodiment of the present invention.
Fig. 3 is a block diagram of a structure of a space-based infrared aerial target multi-frame detection system according to an embodiment of the present invention;
in the figure: 1. a slice extraction module; 2. a local normalization module; 3. a similarity matching module; 4. a bidirectional space-time combined model building module; 5. a spatial background suppression module; 6. a target location determination module.
FIG. 4 is a schematic diagram of the positional relationship between the center point of the local slice in base frame I_b and the center points of the reference neighborhoods in reference frames I_{b+j} and I_{b-j}, provided by an embodiment of the present invention.

FIG. 5 is a schematic diagram of the positional relationship of the nine local slices at position (x, y) in reference frame I_{b+j}, provided by an embodiment of the present invention.
Fig. 6 is the original image and its three-dimensional view, provided by an embodiment of the present invention.
FIG. 6 (a) is a gray scale image provided by an embodiment of the present invention, wherein the target is located in the middle box of the image, and the left bottom box is an enlarged image of the target and its neighborhood.
Fig. 6 (b) is a three-dimensional view of the original provided by the embodiment of the present invention.
Fig. 7 is a saliency map and its three-dimensional view after being processed by the method according to the embodiment of the present invention.
FIG. 7 (a) is a gray scale image provided by an embodiment of the present invention, wherein the target is located in the middle box of the image, and the left bottom box is an enlarged image of the target and its neighborhood.
Fig. 7 (b) is a three-dimensional view of an original provided by an embodiment of the present invention.
Fig. 8 is a diagram of the detection result provided by the embodiment of the present invention and a three-dimensional view thereof.
FIG. 8 (a) is a grayscale image provided by an embodiment of the present invention, where the target is located in the middle box and the bottom left box is an enlarged view of the target and its neighborhood.
Fig. 8 (b) is a three-dimensional view of the original provided by the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
In order to solve the problems in the prior art, the invention provides a space-based infrared air target multi-frame detection method, a system, a medium and equipment, and the invention is described in detail with reference to the accompanying drawings.
As shown in fig. 1, the space-based infrared air target multi-frame detection method provided by the embodiment of the present invention includes the following steps:
s101, respectively extracting local slices and local slice groups;
s102, respectively carrying out local normalization and similarity matching;
s103, constructing a bidirectional space-time combined model, and estimating the background moving direction;
s104, performing spatial background suppression, and acquiring a significance mapping value at a pixel (x, y);
s105, calculating an adaptive segmentation threshold T pair I map And carrying out binarization segmentation and determining the target position.
A schematic diagram of a space-based infrared aerial target multi-frame detection method provided by the embodiment of the invention is shown in fig. 2.
As shown in fig. 3, the space-based infrared air target multi-frame detection system provided by the embodiment of the present invention includes:
a slice extraction module 1 for performing local slice extraction and local slice group extraction, respectively;
the local normalization module 2 is used for respectively normalizing the gray values of the local slice pixels through a local normalization function;
the similarity matching module 3 is used for defining a local similarity function and calculating a matching coefficient and a slice serial number;
the bidirectional space-time combined model building module 4 is used for building a bidirectional space-time combined model and estimating the background moving direction;
the spatial background suppression module 5 is used for performing spatial background suppression and acquiring a saliency mapping value of a pixel;
a target position determination module 6 for calculating the adaptive segmentation threshold T, performing binarization segmentation on I_map, and determining the target position.
The technical solution of the present invention is further described with reference to the following specific examples.
The invention mainly comprises the following steps:
1. Local slice extraction. In I_b, at position (x, y), extract a local slice of I_b according to the target size, denoted R_11; the local slice moves from left to right and from top to bottom, and b denotes that the base frame is the b-th frame of the image sequence, as shown in fig. 4. In order to increase the weight of background-area pixels in the subsequent similarity matching step, the invention enlarges the area of the local slice. The position of each element of R_11 lies within the following neighborhood of (x, y):

Ω_R11 = {(i, j) | max(|i - x|, |j - y|) ≤ 3s + 4} (1)

wherein (i, j) is the position of a local slice R_11 element in I_b, and s is the target-size radius, which depends on the actual target size in the original image: when the actual target size is 3 × 3, s = 1; when the target size is 5 × 5, s = 2; when the target size is 7 × 7, s = 3.
2. Local slice group extraction. In reference frame I_{b+l}, extract the 3 × 3 neighborhood of I_{b+l} at (x, y), denoted Ω_local, which is defined as follows:

Ω_local = {(p, q) | max(|p - x|, |q - y|) ≤ 1} (2)

wherein (p, q) is a pixel coordinate in reference frame I_{b+l}. A local slice centered on each pixel of Ω_local is extracted and recorded as R_2m, m = 1, 2, 3, ..., 9. The sizes of the 9 local slices are the same as that of R_11, and the covered range is shown as area (a) in FIG. 5.
3. Local normalization. Normalize the gray values of the local slice pixels to [0, 1] through a local normalization function. Since R_11 and R_2m have the same size, the coordinates within both types of local slices are represented by the coordinate system (g, h), g, h ∈ {1, 2, ..., 6s + 9}. The local normalization function is defined as follows:

R_nor1(g, h) = (R_11(g, h) - min(R_11)) / (max(R_11) - min(R_11)), g, h ∈ {1, 2, ..., 6s + 9} (3)

wherein R_nor1(g, h) represents the normalized value of the pixel at (g, h) within R_11; R_nor2m is obtained in the same way.
4. Similarity matching. Define a local similarity function and calculate the similarity r_m between R_11 and R_2m; the maximum of r_m is defined as the matching coefficient r_1(x, y) of the base frame and the reference frame at position (x, y), and the serial number m_max of the local slice in R_2m with the greatest similarity to R_11 is determined. The relevant operations are as follows:

r_1(x, y) = max(r_m) (6)

wherein |·| represents the absolute-value operation, and the overbar represents taking the mean of the local slice gray values.
5. Construct the bidirectional spatio-temporal joint model of I_b and I_{b+l}. First, perform local inter-frame differencing to suppress the background and strong clutter:

wherein R_dif is the differential slice obtained after local differencing. Next, suppress the non-uniformity stripes. In FIG. 4(a), the R_11 neighborhood is divided into an inner region and an outer region; the neighborhood of R_dif is likewise divided into an inner region Ω_int and an outer region Ω_ext, the two regions being related as follows:

Ω_int = {(g, h) | max(|g - x|, |h - y|) ≤ s + 1}, s = 1, 2, 3, 4 (9)

wherein ∅ indicates the empty set. Since the residual non-uniformity stripes exhibit small-range fluctuation in gray value, suppression of the residual stripes can be achieved by the following formula:

d_dif2(x, y) = max(R_int) - max(R_ext) (12)
wherein R_int represents the matrix of pixels within Ω_int, and R_ext the matrix of pixels within Ω_ext. Finally, extract the target dipole and enhance the target:

d_dipole1(x, y) = [max(R_int) - min(R_int)]^2 (13)

wherein d_dipole1(x, y) represents the dipole strength at (x, y). If no target exists at point (x, y), the background and strong clutter are suppressed by the above formula; when a target exists, a dipole remains after differencing owing to the target's motion, and the dipole can be extracted and the target enhanced by the above formula.
6. Background moving-direction estimation. Over a short time, the background can be considered to move uniformly in a straight line, so once m_max is determined, the moving direction of the background over [b-l, b+l] can be estimated. As shown in FIG. 4(b), assume m_max = 9; then within [b, b+l] the background moves towards the lower-right corner, as indicated by the arrow in the figure. Likewise, over the small inter-frame interval [b-l, b], the background has the same moving direction, as indicated by the arrow in FIG. 4(c). The displacement of the background within [b-l, b] is:

wherein dx and dy represent the displacement of the background in the horizontal and vertical directions respectively, mod(·, 3) represents taking the remainder modulo 3, and fix(·) represents rounding down.
7. Extract the local slice R_31 of I_{b-l}; the position of each element lies within the following neighborhood of (x, y):

Ω_R31 = {(i, j) | max(|i - x|, |j - y|) ≤ 3s + 4}

wherein (i, j) is the position of a local slice R_31 element in I_{b-l}. Obtain the locally normalized slice R_nor31 of R_31 according to equation (3), and calculate the matching coefficient r_2(x, y) between R_11 and R_31 according to the similarity function:
8. Construct the bidirectional spatio-temporal joint model of I_b and I_(b−l). The non-uniform row-stripe suppression value d_dif3(x, y) at (x, y) is obtained according to equations (8)–(12), and the dipole-enhanced value d_dipole2(x, y) of I_b and I_(b−l) at position (x, y) is obtained according to equation (13).
9. Spatial background suppression, realized by locally subtracting the background at position (x, y):
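A minimal sketch of local background subtraction is shown below. The patent's exact suppression formula is not reproduced in this passage, so the background estimate used here (the mean of the neighborhood ring around the center pixel) is an assumption for illustration only.

```python
import numpy as np

def local_background_subtract(I, x, y, r=3):
    """Hedged sketch of step 9: spatial background suppression by
    local background subtraction. The ring-mean background estimate
    and the radius r are assumptions, not the patent's formula."""
    patch = I[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    center = patch[r, r]
    # Estimate the local background as the mean of the neighbors.
    ring = (patch.sum() - center) / (patch.size - 1)
    return center - ring
```

On a flat background the response is zero; a pixel brighter than its surroundings (a candidate target) yields a positive residual, which is what feeds the saliency map in the next step.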
10. Obtain the saliency map value I_map(x, y) at pixel (x, y):
Traversing the whole image with this formula yields the intermediate result matrix I_map.
11. Calculate the adaptive segmentation threshold T according to the following formula, binarize I_map with T, and determine the target position.
where k is the segmentation coefficient, empirically 20–30. Elements of I_map whose value exceeds T are set to 1, the rest to 0; the points set to 1 are the aerial target positions.
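The segmentation step can be sketched as below. The patent's threshold formula is not shown in this passage; the common adaptive form T = mean + k · std over the saliency map is assumed here, with only the empirical range k = 20–30 taken from the text.

```python
import numpy as np

def segment_targets(I_map, k=25):
    """Hedged sketch of step 11. T = mean + k * std is an assumed
    form of the adaptive threshold (the patent's formula is elided);
    pixels above T are set to 1 (target), others to 0."""
    T = I_map.mean() + k * I_map.std()
    return (I_map > T).astype(np.uint8)
```

Because the saliency map is near-zero everywhere except at enhanced targets, the mean and standard deviation are small and even a large k leaves genuine target peaks above T while rejecting residual clutter.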
FIG. 6 shows the original image and its three-dimensional view provided by an embodiment of the present invention; FIG. 6(a) is the grayscale image, where the target is located in the middle box and the bottom-left box is an enlarged view of the target and its neighborhood; FIG. 6(b) is a three-dimensional view of the original image.
FIG. 7 shows the saliency map after processing by the present method and its three-dimensional view, provided by an embodiment of the present invention; FIG. 7(a) is the saliency map as a grayscale image, where the target is located in the middle box and the bottom-left box is an enlarged view of the target and its neighborhood; FIG. 7(b) is a three-dimensional view of the saliency map.
FIG. 8 shows the detection result and its three-dimensional view, provided by an embodiment of the present invention; FIG. 8(a) is the grayscale image, where the target is located in the middle box and the bottom-left box is an enlarged view of the target and its neighborhood; FIG. 8(b) is a three-dimensional view of the detection result.
The technical solution of the present invention is further described below with reference to simulation experiments.
Simulation environment: matlab2020b;
test input: a space-based medium-wave infrared image sequence of size 320 × 320; the background is a sea-land background, the target is an airplane, and the target size is 7 × 7.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented wholly or partially in software, it may take the form of a computer program product comprising one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present invention are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid-state disk (SSD)), among others.
The above description is only a specific embodiment of the present invention and is not intended to limit the scope of protection defined by the appended claims; any modifications, equivalent substitutions, and improvements made within the spirit and principle of the present invention are intended to be covered by the scope of the invention.
Claims (10)
1. A space-based infrared aerial target multi-frame detection method, characterized by comprising: determining a base frame I_b and reference frames I_(b+l) and I_(b−l), extracting local slices from the three frames, and estimating the moving direction of the background over the interval [b − l, b + l] by local similarity matching; then respectively constructing a bidirectional spatio-temporal joint model and a spatial background suppression model to realize enhancement of the infrared aerial target and suppression of strong clutter, and obtaining a saliency map; and setting an adaptive segmentation threshold, objects in the saliency map whose gray value exceeds the threshold being the aerial targets.
2. The space-based infrared aerial target multi-frame detection method as claimed in claim 1, wherein the space-based infrared aerial target multi-frame detection method comprises the following steps:
step one, local slice extraction and local slice group extraction are respectively carried out;
step two, local normalization and similarity matching are respectively carried out;
constructing a bidirectional space-time combined model, and estimating the background moving direction;
step four, performing spatial background suppression, and acquiring a significance mapping value at a pixel (x, y);
step five, calculating an adaptive segmentation threshold T, binarizing I_map with T, and determining the target position.
3. The space-based infrared aerial target multiframe detection method as claimed in claim 2, wherein the respectively performing of the local slice extraction and the local slice group extraction in the first step comprises:
(1) Local slice extraction
At position (x, y) in I_b, extracting a local slice I_b(x, y) according to the target size, denoted R_11; the local slice moves from left to right and from top to bottom, and b denotes the base frame, i.e., the b-th frame of the image sequence; the region of the local slice is enlarged so that the weight of pixels in the background region is increased in the subsequent similarity matching step; the position of each element of R_11 lies within the neighborhood of (x, y):
wherein (i, j) is the position in I_b of an element inside the local slice R_11, and s is the target size radius, determined by the actual target size in the original image;
(2) Local slice group extraction
In the reference frame I_(b+l), extracting the 3 × 3 neighborhood of I_(b+l) at (x, y), denoted Ω_local and defined as follows:
Ω_local = {(p, q) | max(|p − x|, |q − y|) ≤ 1};
wherein (p, q) is a pixel coordinate in the reference frame I_(b+l); a local slice centered on each pixel of Ω_local is extracted and denoted R_2m, m = 1, 2, 3, ..., 9, the 9 local slices having the same size as R_11;
defining a local similarity function, calculating the similarity r_m between R_11 and R_2m, defining the maximum of r_m as the matching coefficient r_1(x, y) between the base frame and the reference frame at position (x, y), and determining the index m_max of the local slice in R_2m having the greatest similarity to R_11; the relevant operations are as follows:
r_1(x, y) = max(r_m);
where |·| denotes the absolute value operation and an overbar denotes the mean gray value of a local slice.
(3) Extracting the local slice R_31 of I_(b−l), each element of which lies within the neighborhood of (x, y):
wherein (i, j) is the position in I_(b−l) of an element inside the local slice R_31;
obtaining the locally normalized slice of R_31 from the local normalization function, and calculating the matching coefficient r_2(x, y) between R_11 and R_31 according to the similarity function:
4. The space-based infrared aerial target multi-frame detection method as claimed in claim 2, wherein the respectively performing local normalization and similarity matching in the second step comprises:
(1) Local normalization
Normalizing the gray values of the local slice pixels to [0, 1] by a local normalization function; since R_11 and R_2m have the same size, coordinates within the two types of local slices are represented in a coordinate system (g, h), g, h ∈ {1, 2, ..., 6 × s + 9}; the local normalization function is defined as follows:
wherein R_nor1(g, h) denotes the normalized value of the pixel at (g, h) in R_11, and the normalized slices of R_2m are obtained in the same way;
(2) Similarity matching
Defining a local similarity function, calculating the similarity r_m between R_11 and R_2m, defining the maximum of r_m as the matching coefficient r_1(x, y) between the base frame and the reference frame at position (x, y), and determining the index m_max of the local slice in R_2m having the greatest similarity to R_11; the relevant operations are as follows:
r_1(x, y) = max(r_m);
5. The space-based infrared air target multi-frame detection method as claimed in claim 2, wherein the step three of constructing a bidirectional spatiotemporal joint model and performing background moving direction estimation comprises:
(1) Construction of I b And I b+l Spatio-temporal union model of
Performing local inter-frame differencing to suppress the background and strong clutter:
wherein R_dif is the differential slice obtained after local differencing, in which non-uniform stripes are suppressed; the neighborhood of R_11 is divided into an inner region and an outer region, and likewise the neighborhood of R_dif is divided into an inner region Ω_int and an outer region Ω_ext, the two regions being related as follows:
Ω_int = {(g, h) | max(|g − x|, |h − y|) ≤ s + 1}, s = 1, 2, 3, 4;
since the residual non-uniformity fringes exhibit a characteristic of small-range fluctuation in the gray-scale value, suppression of the residual non-uniformity fringes is achieved by the following formula:
d dif2 (x,y)=max(R int )-max(R ext );
wherein R is int Is represented by omega int Matrix of inner pixels, R ext From Ω ext A matrix of pixels;
extracting a target dipole, and enhancing a target:
d dipole1 (x,y)=[max(R int )-min(R int )] 2 ;
wherein, d dipole1 (x, y) represents the dipole strength at (x, y);
if the target does not exist at the point (x, y), suppressing a background and strong clutter; when the target exists, the dipole exists after the difference due to the motion characteristic of the target, and then the dipole is extracted and the target is enhanced;
(2) Background moving direction estimation
Considering that the background moves uniformly in a straight line over a short time, once m_max is determined, the moving direction of the background over [b − l, b + l] can be estimated; the displacement of the background over [b − l, b] is:
where dx and dy denote the horizontal and vertical displacements of the background, mod(·, 3) denotes the remainder modulo 3, and fix(·) denotes rounding down.
(3) Construction of the inverse spatio-temporal joint model of I_b and I_(b−l)
After the non-uniform row-stripe suppression value d_dif3(x, y) at (x, y) is obtained, the dipole-enhanced value d_dipole2(x, y) of I_b and I_(b−l) at position (x, y) is obtained.
6. The space-based infrared air target multi-frame detection method according to claim 2, wherein the spatial background suppression in the fourth step and obtaining the significance map value at the pixel (x, y) comprises:
(1) Spatial background suppression
Spatial background suppression at location (x, y) is achieved by local background subtraction:
(2) Obtaining the saliency map value I_map(x, y) at pixel (x, y)
Traversing the whole image with the formula yields the intermediate result matrix I_map;
in the fifth step, the adaptive segmentation threshold T is calculated according to the following formula, I_map is binarized, and the target position is determined:
wherein k is the segmentation coefficient, taking a value of 20–30; when the value of an element of I_map exceeds T, it is set to 1, otherwise to 0, and the points set to 1 are the aerial target positions.
7. A space-based infrared aerial target multi-frame detection system for implementing the space-based infrared aerial target multi-frame detection method according to any one of claims 1 to 6, wherein the space-based infrared aerial target multi-frame detection system comprises:
the slice extraction module is used for respectively carrying out local slice extraction and local slice group extraction;
the local normalization module is used for respectively carrying out normalization processing on the gray values of the local slice pixels through a local normalization function;
the similarity matching module is used for defining a local similarity function and calculating a matching coefficient and a slice serial number;
the bidirectional space-time combined model building module is used for building a bidirectional space-time combined model and estimating the background moving direction;
the spatial background suppression module is used for performing spatial background suppression and acquiring a significance mapping value of a pixel;
a target position determination module for calculating an adaptive segmentation threshold T, binarizing I_map, and determining the target position.
8. A computer device, comprising a memory and a processor, characterized in that the memory stores a computer program which, when executed by the processor, causes the processor to perform the steps of: determining a base frame I_b and reference frames I_(b+l) and I_(b−l), extracting local slices from the three frames, and estimating the moving direction of the background over the interval [b − l, b + l] by local similarity matching; then respectively constructing a bidirectional spatio-temporal joint model and a spatial background suppression model to realize enhancement of the infrared aerial target and suppression of strong clutter, and obtaining a saliency map; and setting an adaptive segmentation threshold, objects in the saliency map whose gray value exceeds the threshold being the aerial targets.
9. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of: determining a base frame I_b and reference frames I_(b+l) and I_(b−l), extracting local slices from the three frames, and estimating the moving direction of the background over the interval [b − l, b + l] by local similarity matching; then respectively constructing a bidirectional spatio-temporal joint model and a spatial background suppression model to realize enhancement of the infrared aerial target and suppression of strong clutter, and obtaining a saliency map; and setting an adaptive segmentation threshold, objects in the saliency map whose gray value exceeds the threshold being the aerial targets.
10. An information data processing terminal, characterized in that the information data processing terminal is used for implementing the space-based infrared air target multi-frame detection system as claimed in claim 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210251163.1A CN115240055A (en) | 2022-03-15 | 2022-03-15 | Space-based infrared air target multi-frame detection method, system, medium and equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115240055A true CN115240055A (en) | 2022-10-25 |
Family
ID=83668442
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210251163.1A Pending CN115240055A (en) | 2022-03-15 | 2022-03-15 | Space-based infrared air target multi-frame detection method, system, medium and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115240055A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117689879A (en) * | 2024-01-05 | 2024-03-12 | 哈尔滨工业大学 | Space target detection method in space-based patrol telescope image |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |