CN109584192B - Target feature enhancement method and device based on multispectral fusion and electronic equipment - Google Patents


Info

Publication number
CN109584192B
Authority
CN
China
Prior art keywords
fusion
frequency
coefficient
low
level
Prior art date
Legal status
Active
Application number
CN201811220592.2A
Other languages
Chinese (zh)
Other versions
CN109584192A (en)
Inventor
王昕晔
朱敏
詹维
郭明
马敬伟
李彪
谢鑫
黄桂
黄璜
Current Assignee
Naval University of Engineering PLA
Original Assignee
Naval University of Engineering PLA
Priority date
Filing date
Publication date
Application filed by Naval University of Engineering PLA filed Critical Naval University of Engineering PLA
Priority to CN201811220592.2A priority Critical patent/CN109584192B/en
Publication of CN109584192A publication Critical patent/CN109584192A/en
Application granted granted Critical
Publication of CN109584192B publication Critical patent/CN109584192B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The embodiment of the invention provides a target feature enhancement method and device based on multispectral fusion, and electronic equipment. The method comprises the following steps: performing level division on the target multispectral infrared image based on the different spectral band information of the target multispectral infrared image; starting from the lowest level in the level division, performing coefficient decomposition by non-subsampled Contourlet transform on the target multispectral infrared images of each level, and performing stepwise coefficient fusion based on the coefficient decomposition to obtain a low-frequency fusion coefficient and a high-frequency fusion coefficient of the highest level; and performing multi-resolution image reconstruction by non-subsampled Contourlet transform based on the low-frequency fusion coefficient and the high-frequency fusion coefficient of the highest level to obtain a target-feature-enhanced fusion image. The embodiment of the invention can effectively retain detail information such as edges and textures of the image during fusion, has a good suppression effect on clutter, and ensures that the target in the fused image has moderate brightness and a clear outline.

Description

Target feature enhancement method and device based on multispectral fusion and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to a target feature enhancement method and device based on multispectral fusion and electronic equipment.
Background
In the field of imaging guidance, infrared detectors perform passive imaging and offer advantages such as all-weather operation, strong penetration, long operating range, anti-stealth capability, camouflage identification and strong anti-jamming capability, so they are widely applied. However, as the operating environment of precision-guided weapons deteriorates and anti-stealth technology advances further, a single-spectral-band infrared guidance mode can hardly adapt to the increasingly complex battlefield environment.
At present, in the field of infrared guidance, data from multiple sensor sources or multiple spectral bands are generally fused to enhance targets and suppress background. Image fusion is an advanced image processing technique that synthesizes several images of the same scene obtained by multiple sensors into one image; its aim is to integrate the information of multiple source images, increase the comprehensiveness of the image, obtain a more accurate and reliable description of the scene, and lay a foundation for subsequent target detection and identification.
However, whether multiple sensor sources or multiple spectral bands are used, several sensors must be mounted on the detector for data acquisition, which undoubtedly increases the detector's load and is not conducive to its flexible operation.
Disclosure of Invention
In order to overcome the above problems or at least partially solve the above problems, embodiments of the present invention provide a method, an apparatus, and an electronic device for enhancing target features based on multispectral fusion, so as to more effectively retain target detail information and increase information amount, thereby being more beneficial to improving the success rate of target detection.
In a first aspect, an embodiment of the present invention provides a target feature enhancing method based on multispectral fusion, including:
grading the target multi-spectral infrared image based on different spectral band information of the target multi-spectral infrared image;
starting from the lowest level in the level division, performing coefficient decomposition by non-subsampled Contourlet transform on the target multispectral infrared images of each level, and performing stepwise coefficient fusion based on the coefficient decomposition to obtain a low-frequency fusion coefficient and a high-frequency fusion coefficient of the highest level;
and performing multi-resolution image reconstruction by non-subsampled Contourlet transform based on the low-frequency fusion coefficient and the high-frequency fusion coefficient of the highest level to obtain a target-feature-enhanced fusion image.
In a second aspect, an embodiment of the present invention provides a target feature enhancement device based on multispectral fusion, including:
the level division module is used for carrying out level division on the target multi-spectral band infrared image based on different spectral band information of the target multi-spectral band infrared image;
the coefficient fusion module is used for carrying out nonsubsampled Contourlet transform coefficient decomposition on the target multispectral infrared image of each level from the lowest level in the level division, and carrying out progressive coefficient fusion based on the coefficient decomposition to obtain a low-frequency fusion coefficient and a high-frequency fusion coefficient of the highest level;
and the reconstruction module is used for carrying out multi-resolution image reconstruction of non-subsampled Contourlet transformation based on the low-frequency fusion coefficient and the high-frequency fusion coefficient of the highest level to obtain a fusion image with enhanced target characteristics.
In a third aspect, an embodiment of the present invention provides an electronic device, including: at least one memory, at least one processor, a communication interface, and a bus; the memory, the processor and the communication interface complete mutual communication through the bus, and the communication interface is used for information transmission between the electronic equipment and a target multi-spectral-band infrared image; the memory has stored therein a computer program operable on the processor, which when executed by the processor, implements a method of target feature enhancement based on multi-spectral fusion as described above in relation to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method for enhancing target features based on multispectral fusion as described in the first aspect above.
According to the target feature enhancement method and device based on multispectral fusion and the electronic equipment, fusion classification is carried out according to the mutual relation of different spectral bands, and then NSCT domain image fusion based on local energy is carried out in the same stage, so that the detailed information such as edges and textures of the image can be effectively retained in the fusion process, a good clutter suppression effect is achieved, the brightness of the target in the fused image is moderate, the outline is clear, and the subsequent target detection and identification are facilitated.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of a target feature enhancement method based on multispectral fusion according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating spectral band level division in a target feature enhancement method based on multispectral fusion according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of multi-stage fusion in a target feature enhancement method based on multispectral fusion according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a structure of a non-subsampled Contourlet transform in the target feature enhancement method based on multi-spectral fusion according to an embodiment of the present invention;
fig. 5 is a schematic flowchart illustrating NSCT domain data fusion in a target feature enhancement method based on multispectral fusion according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a target feature enhancement device based on multispectral fusion according to an embodiment of the present invention;
fig. 7 is a schematic physical structure diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without any creative efforts belong to the protection scope of the embodiments of the present invention.
The fusion of multispectral infrared images is homogeneous image fusion; the fusion in the embodiment of the invention aims to fuse the complementary information of different spectral bands and increase the information content of the image. Infrared imaging differs between spectral bands, and this difference is the basis of multispectral infrared image fusion. Specifically, the spectral radiation and reflection characteristics of targets, backgrounds and interferents differ markedly between infrared images of different spectral bands. For a target region, adjacent sub-spectral bands of the multispectral image are highly correlated; for interferents, the distribution in adjacent sub-spectral bands differs obviously and the information correlation is low.
Aiming at the problem of fusing multispectral infrared ship target images under a complex background, the embodiment of the invention provides a target feature enhancement idea based on multispectral fusion, according to the differences of target, background and clutter spectral characteristics in different spectral bands and combining the randomness of clutter with the correlation of the information of different spectral bands. First, fusion levels are assigned according to the interrelation of the different spectral band information; then the images within the same level are decomposed by NSCT, the low-frequency coefficients are fused by a method based on the local energy ratio and local energy weighting, the high-frequency coefficients are fused by a method based on selecting the larger scale variance, and finally the fusion image is obtained through NSCT reconstruction. The embodiment of the invention can effectively retain target detail information, increase the amount of information, and facilitate target detection. Embodiments of the present invention will be described and illustrated below with reference to various embodiments.
Fig. 1 is a schematic flowchart of a target feature enhancement method based on multispectral fusion according to an embodiment of the present invention, as shown in fig. 1, the method is used for implementing target feature enhancement based on multispectral fusion, and the method includes:
s101, grading the target multi-spectral infrared image based on different spectral band information of the target multi-spectral infrared image.
It is understood that, before the processing of the embodiment of the present invention, images of the target object may be collected by a multispectral infrared imager or similar equipment to form the target multispectral infrared image. Because a multispectral image of the target object is acquired, the target multispectral infrared image contains a plurality of different spectral bands. For example, taking a ship target at sea as an example, when a multispectral infrared imager is used to collect the infrared image, a target multispectral infrared image with a resolution of 320 × 256 and a diagonal field angle of 7.8 degrees can be obtained, in which the spectral bands comprise five bands: 3.7-4.8 μm, 3.7-4.1 μm, 4.4-4.8 μm, 3.7-3.9 μm and 4.65-4.75 μm.
The target multispectral infrared image is then divided into levels according to its spectral band information, for example according to the length of each spectral band or the range it covers. Optionally, the step of performing level division on the target multispectral infrared image based on the different spectral band information of the target multispectral infrared image further includes: based on the spectral ranges of the different spectral bands of the target multispectral infrared image, dividing the target multispectral infrared image into different levels, up to a preset total number of levels, according to the principle that the wider the spectral range, the higher the level.
For example, for the above five spectral bands, the level division may be performed as shown in fig. 2, which is a schematic diagram of spectral band level division in the target feature enhancement method based on multispectral fusion according to an embodiment of the present invention. As can be seen from fig. 2, the 3.7 μm to 4.8 μm band, which has the widest spectral range, is assigned to the first level, i.e. the highest level; the 3.7 μm to 4.1 μm and 4.4 μm to 4.8 μm bands, whose spectral ranges are the second widest, are assigned to the second level; and the 3.7 μm to 3.9 μm and 4.65 μm to 4.75 μm bands, whose spectral ranges are the narrowest, are assigned to the third level, i.e. the lowest level.
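A minimal sketch of this level division, assuming the five example bands above and hypothetical width thresholds for the preset levels, is given below; the function name and the threshold values are illustrative only and are not taken from the patent.

```python
# Minimal sketch of the level division in Fig. 2: bands are grouped into a
# preset number of levels, wider spectral ranges being assigned to higher
# (smaller-numbered) levels. Band values and thresholds are illustrative.

def divide_into_levels(bands, level_min_width):
    """Assign each (low_um, high_um) band to a level by its spectral width.

    level_min_width maps a level number to the minimum width (in um) required
    for that level; levels are checked from the highest level (1) downward.
    """
    levels = {lvl: [] for lvl in level_min_width}
    for band in bands:
        width = band[1] - band[0]
        for lvl in sorted(level_min_width):        # level 1 is the highest
            if width >= level_min_width[lvl]:
                levels[lvl].append(band)
                break
    return levels

if __name__ == "__main__":
    bands = [(3.7, 4.8), (3.7, 4.1), (4.4, 4.8), (3.7, 3.9), (4.65, 4.75)]
    # assumed thresholds: >= 1.0 um -> level 1, >= 0.3 um -> level 2, else level 3
    print(divide_into_levels(bands, {1: 1.0, 2: 0.3, 3: 0.0}))
    # -> {1: [(3.7, 4.8)], 2: [(3.7, 4.1), (4.4, 4.8)], 3: [(3.7, 3.9), (4.65, 4.75)]}
```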
And S102, starting from the lowest level in the level division, performing coefficient decomposition by non-subsampled Contourlet transform on the target multispectral infrared images of each level, and performing stepwise coefficient fusion based on the coefficient decomposition to obtain the low-frequency fusion coefficient and the high-frequency fusion coefficient of the highest level.
It can be understood that, after the target multispectral infrared image has been divided into levels according to the above step, the target multispectral infrared images of the same level are subjected to coefficient decomposition by non-subsampled Contourlet transform, proceeding step by step from the lowest level upward, to obtain decomposition coefficients. Then, according to the decomposition coefficients so obtained, coefficient fusion is performed among the several target multispectral infrared images of the current level and the fusion coefficients passed up from the previous, lower level, until the coefficient fusion of the highest level is completed and the low-frequency fusion coefficient and the high-frequency fusion coefficient of the highest level are obtained.
Specifically, fig. 3 may be referred to as a schematic flowchart of multi-level fusion in the multispectral fusion-based target feature enhancement method according to an embodiment of the present invention. As shown in fig. 3, first, a spectral band 2 and a spectral band 3 belonging to the same lower level are subjected to coefficient fusion to obtain a fusion coefficient 1, a spectral band 4 and a spectral band 5 belonging to the same lower level are subjected to coefficient fusion to obtain a fusion coefficient 2, then the fusion coefficient 1 and the fusion coefficient 2 are subjected to coefficient fusion to obtain a fusion coefficient 3, and then, at the next higher level, the spectral band 1 and the fusion coefficient 3 at the level are subjected to similar coefficient fusion to obtain a fusion coefficient 4, that is, a fusion coefficient at the highest level, including a low-frequency fusion coefficient and a high-frequency fusion coefficient.
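The stepwise fusion order of fig. 3 can be sketched as follows; the fuse_pair function here is a crude stand-in (an element-wise average) for the NSCT-domain fusion described further below, and the array sizes and names are assumptions.

```python
# Illustration of the stepwise fusion order shown in Fig. 3. fuse_pair is a
# crude stand-in (element-wise average); the actual method fuses NSCT
# low-/high-frequency coefficients with the rules described further below.
import numpy as np

def fuse_pair(a, b):
    return (a + b) / 2.0            # placeholder for the NSCT-domain fusion

rng = np.random.default_rng(0)
band = {i: rng.random((256, 320)) for i in range(1, 6)}   # bands 1..5 (toy data)

fusion1 = fuse_pair(band[2], band[3])      # same lower level: bands 2 and 3
fusion2 = fuse_pair(band[4], band[5])      # same lower level: bands 4 and 5
fusion3 = fuse_pair(fusion1, fusion2)      # combine the two lower-level results
fusion4 = fuse_pair(band[1], fusion3)      # next higher level: band 1 + fusion 3
print(fusion4.shape)                       # (256, 320), the highest-level result
```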
It will be appreciated that the non-subsampled Contourlet transform used here is an improvement of the Contourlet transform. The Contourlet transform is an efficient two-dimensional representation of images which, compared with the wavelet transform, has the advantages of anisotropy and directionality. To overcome the Contourlet transform's lack of translation invariance, the non-subsampled Contourlet transform (NSCT), which is translation invariant and has good directionality, was developed.
Fig. 4 is a schematic diagram of the non-subsampled Contourlet transform in the target feature enhancement method based on multispectral fusion according to an embodiment of the present invention; it comprises a non-subsampled pyramid (NSP) and a non-subsampled directional filter bank (NSDFB). The non-subsampled pyramid (NSP) decomposes the image into high-frequency and low-frequency coefficients, and the non-subsampled directional filter bank (NSDFB) decomposes the high-frequency subband coefficients in multiple directions. The non-subsampled Contourlet transform mainly consists of two steps:
performing scale decomposition of the image with the NSP, where N levels of decomposition yield 1 low-frequency subband image and N high-frequency subband images;
and performing band-pass filtering of each level's high-frequency subband image with the NSDFB, yielding 2N directional subband images of the same size as the source image; a sketch of this two-step structure follows.
S103, based on the low-frequency fusion coefficient and the high-frequency fusion coefficient of the highest level, multi-resolution image reconstruction of non-subsampled Contourlet transformation is carried out, and a fusion image with enhanced target features is obtained.
It can be understood that, having obtained the low-frequency fusion coefficient and the high-frequency fusion coefficient of the highest level according to the above steps, a corresponding image still needs to be obtained from these coefficients so that the target object with enhanced features can be presented visually. Therefore, in this step, the multi-resolution image reconstruction of the non-subsampled Contourlet transform is correspondingly used to reconstruct the image of the target object, and the reconstructed image is the target-feature-enhanced fusion image.
According to the target feature enhancement method based on multispectral fusion provided by the embodiment of the invention, fusion levels are assigned according to the interrelation of the different spectral bands, and NSCT-domain image fusion based on local energy is then performed within the same level, so that detail information such as edges and textures of the image can be effectively retained in the fusion process, clutter is well suppressed, and the target in the fused image has moderate brightness and a clear outline, which facilitates subsequent target detection and identification.
Optionally, according to the above embodiments, the step of performing non-downsampling Contourlet transform coefficient decomposition on the target multispectral infrared image of each level, and performing progressive coefficient fusion based on the coefficient decomposition further includes:
for any level:
decomposing the target multispectral infrared images at the level by adopting non-subsampled Contourlet transformation respectively to obtain a low-frequency decomposition coefficient and a high-frequency decomposition coefficient which correspond to each target multispectral infrared image respectively;
performing low-frequency data fusion on each low-frequency decomposition coefficient and a low-frequency fusion coefficient obtained by performing low-frequency decomposition coefficient fusion on the previous low level by adopting a mode based on the combination of a local energy ratio and a local energy weighting, and performing high-frequency data fusion on each high-frequency decomposition coefficient and a high-frequency fusion coefficient obtained by performing high-frequency decomposition coefficient fusion on the previous low level by adopting a mode of taking a large local scale variance;
and respectively transmitting the low-frequency fusion coefficient obtained by fusing the low-frequency data of the level and the high-frequency fusion coefficient obtained by fusing the high-frequency data to the next high level for carrying out low-frequency data fusion and high-frequency data fusion of the next high level.
It is understood that, when the coefficient fusion is performed stepwise upward from the lowest level, the coefficient fusion is performed separately for each level. Then, for the target multispectral infrared images belonging to the same level, coefficient decomposition of non-downsampling Contourlet transform is performed on the images respectively, so that a low-frequency decomposition coefficient and a high-frequency decomposition coefficient corresponding to each target multispectral infrared image can be obtained.
And then, performing data fusion on the low-frequency coefficient and the high-frequency coefficient of the level respectively. The low-frequency coefficients include a low-frequency decomposition coefficient obtained by performing non-subsampled Contourlet transform decomposition on each target multispectral infrared image of the current level and a low-frequency fusion coefficient obtained by performing low-frequency decomposition coefficient fusion on the previous low level. Similarly, the high-frequency coefficients include a high-frequency decomposition coefficient obtained by performing non-downsampling Contourlet transform decomposition on each target multispectral infrared image at the current level and a high-frequency fusion coefficient obtained by performing high-frequency decomposition coefficient fusion on the previous low level. When low-frequency data fusion is performed on the low-frequency coefficient, a method based on combination of local energy ratio and local energy weighting is specifically adopted for fusion, and when high-frequency data fusion is performed on the high-frequency coefficient, a method of taking a large local scale variance is adopted for fusion.
After the low-frequency data fusion of the low-frequency coefficients, the low-frequency fusion coefficient of this level is obtained; similarly, after the high-frequency data fusion of the high-frequency coefficients, the high-frequency fusion coefficient of this level is obtained. The low-frequency fusion coefficient and high-frequency fusion coefficient of this level are then passed to the adjacent next higher level, so that fusion proceeds level by level. The next higher level, after receiving the low-frequency and high-frequency fusion coefficients, performs the same non-subsampled Contourlet transform decomposition, low-frequency and high-frequency data fusion, and transfer of its own low-frequency and high-frequency fusion coefficients.
Specifically, reference may be made to fig. 5, a schematic flow chart of NSCT-domain data fusion in the multispectral fusion-based target feature enhancement method according to an embodiment of the present invention. In the figure, taking the target multispectral infrared images of two spectral bands contained in a certain level, namely a spectral band 1 image and a spectral band 2 image, as an example, NSCT decomposition is first performed on each of them to obtain the low-frequency and high-frequency decomposition coefficients of the spectral band 1 image and of the spectral band 2 image, respectively. Then, data fusion based on the local energy ratio and local energy weighting is performed on the spectral band 1 and spectral band 2 low-frequency decomposition coefficients, and data fusion based on taking the larger scale variance is performed on the spectral band 1 and spectral band 2 high-frequency decomposition coefficients, yielding the fusion coefficients of the current level, which comprise a low-frequency fusion coefficient and a high-frequency fusion coefficient.
Optionally, according to the above embodiments, the step of performing low-frequency data fusion on each low-frequency decomposition coefficient and a low-frequency fusion coefficient obtained by performing low-frequency decomposition coefficient fusion on a previous low level by using a method based on a combination of a local energy ratio and a local energy weighting further includes:
for any level:
calculating the local energy ratio of the multispectral infrared images of the targets with different spectral bands of the level according to the local area energy of the multispectral infrared images of the targets with different spectral bands of the level;
if the energy of the local area of any target multispectral infrared image is larger than a first set threshold value, and the local energy ratio corresponding to the target multispectral infrared image is larger than a second set threshold value, taking the low-frequency decomposition coefficient corresponding to the target multispectral infrared image as a low-frequency fusion coefficient obtained by low-frequency data fusion, and otherwise, performing low-frequency data fusion on each low-frequency decomposition coefficient and the low-frequency fusion coefficient obtained by performing low-frequency decomposition coefficient fusion on the previous low level by adopting a local energy weighting method.
It can be understood that, when the low-frequency coefficients are fused, most of the image information after NSCT decomposition of the source image is contained in the low-frequency coefficients, which reflect an approximation of the source image. If only weighted averaging were applied, the details of the fused image would be blurred, the target weakened and the scene contrast low. A fusion rule that uses only the local characteristic energy ratio as the measure can preserve the target brightness well, but greatly reduces the scene contrast and thus produces false contours in local regions of the image. The main purpose of multispectral image fusion is to suppress the randomly generated clutter and background information in the multispectral images while fusing the salient features of the targets and the clarity of the scene.
Therefore, the embodiment of the invention fuses the low-frequency coefficients by local-energy-based weighting, and adds a judgment based on the local characteristic energy ratio before the weighting. That is, for the low-frequency data fusion of any level, the embodiment of the present invention first calculates the local region energy of each target multispectral infrared image of that level. For example, for the target multispectral infrared images of any two spectral bands, the local region energy is calculated by the following formula:
Ei(x, y) = Σm Σn w(m, n)·[Ci(x + m, y + n)]²,   i = 1, 2
in the formula, i = 1, 2 indexes the target multispectral infrared images of the different spectral bands fused at the same level, (m, n) runs over the local window, w(m, n) denotes the low-frequency window weighting coefficient, and Ci(x, y) denotes the low-frequency decomposition coefficient of the ith spectral band image at coordinate (x, y). The window weighting coefficients w(m, n) may take the form of a matrix such as the one shown below:
[window weighting coefficient matrix w]
then, from the local region energies, the embodiment of the present invention calculates the local energy ratio of the target multispectral infrared images of the different spectral bands at the same level as follows:
R(x,y)=E1(x,y)/E2(x,y);
in the formula, R(x, y) denotes the local energy ratio of the target multispectral infrared images of the two spectral bands.
R(x, y) expresses the energy difference between the two spectral bands at the same level. In general, the larger the ratio, the more likely the region contains the target, and the low-frequency coefficient of the region with the larger local energy should be selected as the fusion coefficient. At the same time, however, the magnitude of R(x, y) depends on E1(x, y) and E2(x, y): when E2(x, y) is small, R(x, y) can be large even if E1(x, y) is also small. To make the coefficient selection more effective, the low-frequency coefficient of the local image should be selected as the fusion coefficient only when E1(x, y) is greater than a first set threshold T1 and R(x, y) is greater than a second set threshold T2; otherwise an adaptive weighted fusion strategy is adopted.
That is, to preserve the target brightness of the fused image while suppressing background and clutter, where the local region characteristic energy ratio is large the low-frequency coefficient of the corresponding infrared image is selected directly; to maintain the clarity and contrast of the fused image, where the characteristic energy ratio is small the multispectral low-frequency coefficients are fused by weighting.
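A minimal sketch of the local region energy and local energy ratio computation described above is given below; the 3 × 3 Gaussian-like window is an assumed example, since the patent's weighting matrix is not reproduced here, and the function names are illustrative.

```python
# Local region energy E_i(x, y) as a window-weighted sum of squared
# low-frequency coefficients, and the local energy ratio R(x, y) = E1 / E2.
# The 3x3 window below is an assumed example of a weighting matrix.
import numpy as np
from scipy.signal import convolve2d

W = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]], float) / 16.0          # assumed window weighting matrix

def local_energy(low_coeffs):
    return convolve2d(low_coeffs ** 2, W, mode="same", boundary="symm")

def energy_ratio(e1, e2, eps=1e-12):
    return e1 / (e2 + eps)                       # eps guards against division by zero

rng = np.random.default_rng(2)
c1, c2 = rng.random((64, 64)), rng.random((64, 64))   # toy low-frequency coefficients
E1, E2 = local_energy(c1), local_energy(c2)
R = energy_ratio(E1, E2)
print(E1.shape, float(R.mean()))
```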
Optionally, according to the above embodiments, the step of performing low-frequency data fusion on each low-frequency decomposition coefficient and a low-frequency fusion coefficient obtained by performing low-frequency decomposition coefficient fusion on a previous low level by using a method based on a combination of a local energy ratio and a local energy weighting further includes:
and for any two low-frequency decomposition coefficients or a low-frequency fusion coefficient obtained by fusing the low-frequency decomposition coefficients of the previous low level, carrying out low-frequency data fusion by adopting the following fusion formula:
CF(x, y) = C1(x, y),                       if E1(x, y) > T1 and R(x, y) > T2
CF(x, y) = u·C1(x, y) + v·C2(x, y),        otherwise
in the formula, CF(x, y) denotes the low-frequency fusion coefficient obtained by the low-frequency data fusion, u and v denote the adaptive weighting coefficients, and T1 and T2 denote the first and second set thresholds, respectively, wherein:
[expressions for the adaptive weighting coefficients u and v]
in the formulas, ave[·] and std[·] denote the mean operation and the standard deviation operation, respectively; k1, k2 and k3 denote weighting parameters; E1(x, y) and E2(x, y) denote the local region energies of the different multispectral infrared images to be fused; and R(x, y) denotes the local energy ratio of the different multispectral infrared images to be fused.
Specifically, when the actual low frequency coefficient fusion operation is performed, the calculation may be performed in such a manner that the local energy ratio and the local energy weighting are combined.
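A minimal sketch of this combined rule follows; the threshold values T1, T2 and the fixed weights u, v used here are assumptions, whereas the patent derives u and v adaptively from local statistics and the parameters k1, k2, k3, whose exact expressions are not reproduced above.

```python
# Sketch of the low-frequency fusion: direct selection where E1 > T1 and
# R > T2, weighted combination elsewhere. T1, T2 and the weights u, v are
# placeholders; the patent derives u, v adaptively from local statistics.
import numpy as np
from scipy.signal import convolve2d

W = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 16.0   # assumed window

def local_energy(c):
    return convolve2d(c ** 2, W, mode="same", boundary="symm")

def fuse_low(c1, c2, T1=0.5, T2=1.5, u=0.6, v=0.4):
    E1, E2 = local_energy(c1), local_energy(c2)
    R = E1 / (E2 + 1e-12)
    select = (E1 > T1) & (R > T2)                 # region likely to contain the target
    return np.where(select, c1, u * c1 + v * c2)  # else weighted combination

rng = np.random.default_rng(3)
c1, c2 = rng.random((64, 64)), rng.random((64, 64))
print(fuse_low(c1, c2).shape)                     # (64, 64) fused low-frequency band
```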
Optionally, according to the above embodiments, the step of performing high-frequency data fusion, by taking the larger local scale variance, on each high-frequency decomposition coefficient and the high-frequency fusion coefficient obtained from the high-frequency decomposition coefficient fusion of the previous, lower level further includes:
and for any two high-frequency decomposition coefficients or a high-frequency fusion coefficient obtained by fusing the high-frequency decomposition coefficients of the previous low level, performing high-frequency data fusion by adopting the following fusion formula:
CF,l,d(x, y) = C1,l,d(x, y),   if σ1,l(x, y) ≥ σ2,l(x, y)
CF,l,d(x, y) = C2,l,d(x, y),   otherwise
in the formula, CF,l,d(x, y), C1,l,d(x, y) and C2,l,d(x, y) are the high-frequency decomposition coefficients of the fused image, the first spectral band image and the second spectral band image of the same level, respectively, at position (x, y) in direction d of decomposition level l, and σ1,l(x, y) and σ2,l(x, y) are the directional variances defined below.
It can be understood that the high-frequency coefficients represent information such as edges and abrupt detail changes in the image, and the local region variance reflects the dispersion of the image's pixel grey values: the larger the region variance, the more information the region contains and the more likely it is to contain content of interest such as a target. Therefore, the embodiment of the invention uses the above fusion formula, and in the fusion of spectral bands of the same level the high-frequency coefficients are fused by taking the larger local scale variance.
On the basis of the above embodiments, before the step of performing high-frequency data fusion, the method of the embodiment of the present invention further includes:
for the multi-directional decomposition of the high-frequency decomposition coefficient, taking the maximum variance value in each direction as the direction variance of all target multi-spectral infrared images at the level:
σi,l(x, y) = max_d { σi,l,d(x, y) },   i = 1, 2
in the formula, σi,l,d(x, y) denotes the local variance of the high-frequency decomposition coefficient of the ith spectral band image at position (x, y) in direction d of level l, so that σ1,l(x, y) and σ2,l(x, y) denote, for the first and second spectral band images of the same level, the maximum of that local variance over all directions at (x, y).
It can be understood that, since the high-frequency decomposition coefficients are decomposed in multiple directions, an edge or detail of the image produces a large value only in individual directions of the high-frequency coefficients, whereas noise differs little in value from one direction to another, its values generally being lower than the decomposition values at the edges and details of the image but higher than the values in the other directions. Therefore, taking the maximum variance value over the directions as the variance of all directions at that scale, as in the above formula, suppresses noise and prevents the distortion that would be caused by inconsistent selection of high-frequency coefficients across directions.
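A minimal sketch of the high-frequency fusion by taking the larger local scale variance follows; the 3 × 3 variance window, the direction labels and the function names are assumptions.

```python
# High-frequency fusion by taking the larger local scale variance: compute a
# local variance map per direction, take the per-pixel maximum over directions
# at one scale, then select coefficients from whichever image wins. The 3x3
# variance window is an assumed choice.
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(c, size=3):
    mean = uniform_filter(c, size)
    return uniform_filter(c ** 2, size) - mean ** 2

def fuse_high(high1, high2):
    """high1, high2: dicts mapping direction d -> coefficient array at one scale."""
    sigma1 = np.maximum.reduce([local_variance(high1[d]) for d in high1])
    sigma2 = np.maximum.reduce([local_variance(high2[d]) for d in high2])
    keep_first = sigma1 >= sigma2                 # same mask for every direction
    return {d: np.where(keep_first, high1[d], high2[d]) for d in high1}

rng = np.random.default_rng(4)
h1 = {d: rng.random((64, 64)) for d in ("d0", "d1", "d2", "d3")}
h2 = {d: rng.random((64, 64)) for d in ("d0", "d1", "d2", "d3")}
fused = fuse_high(h1, h2)
print(sorted(fused), fused["d0"].shape)           # ['d0', 'd1', 'd2', 'd3'] (64, 64)
```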
To further illustrate the effectiveness of the technical solution of the embodiment of the present invention, the embodiment of the present invention performs a simulation test according to the above embodiments, but does not limit the protection scope of the embodiment of the present invention.
The embodiment of the invention is compared with the single-pixel maximum-absolute-value selection (MASP) method, the low-frequency local-energy weighted-average (LEBW) method, the high-frequency subband coefficient fusion rule combining coefficient value selection with the local mean square deviation (MLC + MLV), and the local energy maximum (MLE) method. The algorithm experiments were carried out on a Windows 10 and MATLAB 2015b platform; the experimental image data are in 14-bit Raw format, and 5-spectral-band infrared ship target images under a shore-island background and under a sea-sky background were selected, respectively. In addition, for experimental comparison, all images were normalized, NSCT was used for the multiresolution decomposition, a "9-7" filter was used as the Laplacian filter, and a "pkva" filter was used as the directional filter.
When infrared ship images are used for ship target detection, the purpose of image fusion is to make the ship target easy to detect and its contour as clear and complete as possible. The main interference in an infrared ship image under a shore-island background comes from the complex shore-island background, and under a sea-sky background from sea and sky clutter. Therefore, the goal of multispectral infrared ship target fusion is mainly to eliminate the interference of background or clutter with target detection, and to use the complementary information of the multispectral images to enhance the target feature information.
In order to evaluate the effectiveness of the technical scheme of the embodiment of the invention more objectively, five objective evaluation indexes are adopted: the average gradient (QAG), mutual information (QMI), the fused-image evaluation index based on edge information (QG), the evaluation index based on structural similarity (QSSIM), and the evaluation index based on human visual attention (QHPI). They are used to compare the experimental results of the five fusion methods. Because of the multi-level fusion, the final evaluation index result is the average of the evaluation indexes over the 4-level fusion process. The experimental results are shown in Table 1.
Table 1 shows a performance evaluation table of multi-spectral infrared image fusion results of simulation tests according to an embodiment of the invention
As shown in table 1, from the subjective visual effect, the MASP and MLE method fused images generate large ghost images, the integrity of the target edge cannot be well maintained, and the LEBW + MLV and MLC + MLV methods have poor suppression effect on local region clutter.
In terms of objective evaluation, in the fusion evaluation of the two groups of images the algorithm proposed by the embodiment of the invention is the best on the statistics-based index QAG, which reflects changes in image detail and texture, and on the edge-retention index QG, and is best or second best on the information-theory-based index QMI, which reflects how much source-image information the fused image retains, on the structural-similarity index QSSIM, and on the human-visual-system index QHPI. The infrared multispectral image multi-level fusion algorithm proposed by the embodiment of the invention therefore achieves a better fusion effect.
As another aspect of the embodiments of the present invention, the embodiments of the present invention provide an apparatus for enhancing target features based on multispectral fusion according to the above embodiments, and the apparatus is used for implementing enhancement of target features based on multispectral fusion in the above embodiments. Therefore, the description and definition in the target feature enhancement method based on multispectral fusion in each embodiment described above may be used for understanding each execution module in the embodiment of the present invention, and reference may be specifically made to the above embodiments, which are not described herein again.
According to an embodiment of the present invention, the structure of the target feature enhancement device based on multispectral fusion is shown in fig. 6, which is a schematic structural diagram of a target feature enhancement device based on multispectral fusion provided by an embodiment of the present invention, and the device may be used to implement target feature enhancement based on multispectral fusion in the above method embodiments, and the device includes a level dividing module 601, a coefficient fusion module 602, and a reconstruction module 603. Wherein:
the level division module 601 is configured to perform level division on the target multispectral infrared image based on different spectral band information of the target multispectral infrared image; the coefficient fusion module 602 is configured to perform coefficient decomposition of non-subsampled Contourlet transform on the target multispectral infrared image of each level from the lowest level in the level division, and perform stepwise coefficient fusion based on the coefficient decomposition to obtain a low-frequency fusion coefficient and a high-frequency fusion coefficient of the highest level; the reconstruction module 603 is configured to perform multi-resolution image reconstruction based on the low-frequency fusion coefficient and the high-frequency fusion coefficient at the highest level through non-downsampling Contourlet transform, and obtain a fusion image with enhanced target features.
It can be understood that the apparatus according to the embodiment of the present invention may first utilize the level dividing module 601 to acquire an image of a target object through a multispectral infrared imager and other devices to form a target multispectral infrared image. By acquiring a multispectral image of the target object, the target multispectral infrared image is formed to contain a plurality of different spectral bands. Then, the level dividing module 601 grades the target multispectral infrared image according to the spectral band information of the target multispectral infrared image. For example, the classification may be performed according to the length of each spectrum band, the range including the spectrum length, and the like.
Then, the coefficient fusion module 602 performs non-downsampling Contourlet transform on the target multispectral infrared image of the same level in the level division from the lowest level to the top level, so as to obtain a decomposition coefficient. Then, the coefficient fusion module 602 performs coefficient fusion on the multiple target multispectral infrared images at the current level and the fusion coefficient obtained by fusing the previous low-level coefficient according to the decomposition coefficient obtained by coefficient decomposition until the coefficient fusion at the highest level is realized, so as to obtain the low-frequency fusion coefficient and the high-frequency fusion coefficient at the highest level.
Finally, on the basis of the above-mentioned execution module operation, the reconstruction module 603 needs to obtain a corresponding image according to the low-frequency fusion coefficient and the high-frequency fusion coefficient, so as to intuitively represent the target object with enhanced features. Therefore, the reconstruction module 603 performs multi-resolution image reconstruction of the target object by using non-downsampling Contourlet transform, and obtains a reconstructed image, i.e., a fused image with enhanced target features.
According to the target feature enhancement device based on multispectral fusion, provided by the embodiment of the invention, through arranging the corresponding execution module, fusion classification is carried out according to the mutual relation of different spectral bands, and then NSCT domain image fusion based on local energy is carried out in the same stage, so that the detailed information such as the edge, texture and the like of an image can be effectively retained in the fusion process, a good inhibition effect on clutter is achieved, the brightness of a target in the fused image is moderate, the outline is clear, and the subsequent target detection and identification are facilitated.
It is understood that, in the embodiment of the present invention, each relevant program module in the apparatus of each of the above embodiments may be implemented by a hardware processor (hardware processor). In addition, when the target feature enhancement device based on multispectral fusion in the embodiments of the present invention implements target feature enhancement based on multispectral fusion in the embodiments of the methods described above, the beneficial effects produced by the device are the same as those of the embodiments of the methods described above, and reference may be made to the embodiments of the methods described above, which are not described herein again.
As another aspect of the embodiment of the present invention, in this embodiment, an electronic device is provided according to the foregoing embodiment, and with reference to fig. 7, a schematic physical structure diagram of the electronic device provided in the embodiment of the present invention includes: at least one memory 701, at least one processor 702, a communications interface 703, and a bus 704.
The memory 701, the processor 702 and the communication interface 703 complete mutual communication through the bus 704, and the communication interface 703 is used for information transmission between the electronic device and the target multispectral infrared image; the memory 701 stores a computer program that can be executed on the processor 702, and when the computer program is executed by the processor 702, the target feature enhancement method based on multispectral fusion as the above embodiment is implemented.
It is understood that the electronic device at least includes a memory 701, a processor 702, a communication interface 703 and a bus 704, and the memory 701, the processor 702 and the communication interface 703 are communicatively connected to each other through the bus 704, and can complete mutual communication, for example, the processor 702 reads program instructions of the target feature enhancement method based on multispectral fusion from the memory 701, and the like. In addition, the communication interface 703 can also implement communication connection between the electronic device and the target multispectral infrared image, and can complete mutual information transmission, such as implementing target feature enhancement based on multispectral fusion through the communication interface 703.
When the electronic device is running, the processor 702 calls the program instructions in the memory 701 to execute the methods provided by the above method embodiments, for example including: performing level division on the target multispectral infrared image based on the different spectral band information of the target multispectral infrared image; starting from the lowest level in the level division, performing coefficient decomposition by non-subsampled Contourlet transform on the target multispectral infrared images of each level, and performing stepwise coefficient fusion based on the coefficient decomposition to obtain the low-frequency fusion coefficient and the high-frequency fusion coefficient of the highest level; and performing multi-resolution image reconstruction by non-subsampled Contourlet transform based on the low-frequency fusion coefficient and the high-frequency fusion coefficient of the highest level to obtain the target-feature-enhanced fusion image.
The program instructions in the memory 701 may be implemented in the form of software functional units and stored in a computer readable storage medium when sold or used as a stand-alone product. Alternatively, all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, where the program may be stored in a computer-readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Embodiments of the present invention also provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the target feature enhancement method based on multispectral fusion as described in the above embodiments, for example including: performing level division on the target multispectral infrared image based on the different spectral band information of the target multispectral infrared image; starting from the lowest level in the level division, performing coefficient decomposition by non-subsampled Contourlet transform on the target multispectral infrared images of each level, and performing stepwise coefficient fusion based on the coefficient decomposition to obtain the low-frequency fusion coefficient and the high-frequency fusion coefficient of the highest level; and performing multi-resolution image reconstruction by non-subsampled Contourlet transform based on the low-frequency fusion coefficient and the high-frequency fusion coefficient of the highest level to obtain the target-feature-enhanced fusion image.
According to the electronic equipment and the non-transitory computer readable storage medium provided by the embodiment of the invention, fusion grading is carried out according to the mutual relation of different spectral bands, and then NSCT domain image fusion based on local energy is carried out in the same stage, so that the detailed information such as the edge, texture and the like of an image can be effectively retained in the fusion process, a clutter suppression effect is better, the brightness of a target in the fused image is moderate, the outline is clear, and the subsequent target detection and identification are facilitated.
It is to be understood that the above-described embodiments of the apparatus, the electronic device and the storage medium are merely illustrative, and that elements described as separate components may or may not be physically separate, may be located in one place, or may be distributed on different network elements. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. Based on such understanding, the technical solutions mentioned above may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a usb disk, a removable hard disk, a ROM, a RAM, a magnetic or optical disk, etc., and includes several instructions for causing a computer device (such as a personal computer, a server, or a network device, etc.) to execute the methods described in the method embodiments or some parts of the method embodiments.
In addition, it should be understood by those skilled in the art that in the specification of the embodiments of the present invention, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
In the description of the embodiments of the invention, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description. Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects.
However, the disclosed method should not be interpreted as reflecting an intention that: that is, the claimed embodiments of the invention require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of an embodiment of this invention.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the embodiments of the present invention, and not to limit the same; although embodiments of the present invention have been described in detail with reference to the foregoing embodiments, it should be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (7)

1. A target feature enhancement method based on multispectral fusion is characterized by comprising the following steps:
grading the target multi-spectral infrared image based on different spectral band information of the target multi-spectral infrared image;
starting from the lowest level in the level division, performing coefficient decomposition by non-subsampled Contourlet transform on the target multispectral infrared images of each level, and performing stepwise coefficient fusion based on the coefficient decomposition to obtain a low-frequency fusion coefficient and a high-frequency fusion coefficient of the highest level;
performing multi-resolution image reconstruction by non-subsampled Contourlet transform based on the low-frequency fusion coefficient and the high-frequency fusion coefficient of the highest level to obtain a target-feature-enhanced fusion image;
the step of performing non-subsampled Contourlet transform coefficient decomposition on the target multispectral infrared image of each level and performing progressive coefficient fusion based on the coefficient decomposition further comprises:
for any of the levels:
decomposing the target multispectral infrared image at the level by adopting non-subsampled Contourlet transformation to obtain a low-frequency decomposition coefficient and a high-frequency decomposition coefficient which correspond to each target multispectral infrared image;
performing low-frequency data fusion on each low-frequency decomposition coefficient and a low-frequency fusion coefficient obtained by performing low-frequency decomposition coefficient fusion on a previous low level by adopting a mode based on the combination of a local energy ratio and local energy weighting, and performing high-frequency data fusion on each high-frequency decomposition coefficient and a high-frequency fusion coefficient obtained by performing high-frequency decomposition coefficient fusion on the previous low level by adopting a mode of taking the larger local variance;
respectively transmitting a low-frequency fusion coefficient obtained by fusing the low-frequency data of the level and a high-frequency fusion coefficient obtained by fusing the high-frequency data to a next high level for carrying out the low-frequency data fusion and the high-frequency data fusion of the next high level;
the step of performing low-frequency data fusion on each low-frequency decomposition coefficient and a low-frequency fusion coefficient obtained by performing low-frequency decomposition coefficient fusion on a previous low level by using a mode of combining local energy ratio and local energy weighting further comprises:
for any of the levels:
calculating the local energy ratio of the target multispectral infrared images of different spectral bands of the level according to the local area energy of each target multispectral infrared image of the level;
if the energy of the local area of any target multispectral infrared image is larger than a first set threshold value, and the local energy ratio corresponding to the target multispectral infrared image is larger than a second set threshold value, taking the low-frequency decomposition coefficients corresponding to the target multispectral infrared image as low-frequency fusion coefficients obtained by low-frequency data fusion, and otherwise, performing low-frequency data fusion on each low-frequency decomposition coefficient and the low-frequency fusion coefficients obtained by performing low-frequency decomposition coefficient fusion on the previous low level by adopting a local energy weighting method;
further, evaluating the multispectral infrared image fusion result of the target feature enhancement method based on multispectral fusion by adopting 5 objective evaluation indexes, namely the average gradient, mutual information, a fusion image evaluation index based on edge information, an evaluation index based on structural similarity, and an evaluation index based on human visual attention;
the step of performing low-frequency data fusion on each low-frequency decomposition coefficient and a low-frequency fusion coefficient obtained by performing low-frequency decomposition coefficient fusion on a previous low level by using a mode of combining local energy ratio and local energy weighting further comprises:
and for any two low-frequency decomposition coefficients or the low-frequency fusion coefficient obtained by fusing the low-frequency decomposition coefficients of the previous low level, carrying out low-frequency data fusion by adopting the following fusion formula:
[Formula image FDA0002950356860000021: the piecewise low-frequency fusion rule, which takes C_1(x, y) or C_2(x, y) directly when the corresponding local region energy and local energy ratio conditions against T_1 and T_2 are met, and otherwise takes the adaptively weighted combination u·C_1(x, y) + v·C_2(x, y)]
in the formula, C_F(x, y) represents the low-frequency fusion coefficient obtained by performing the low-frequency data fusion, C_1(x, y) and C_2(x, y) respectively represent the low-frequency decomposition coefficients at the (x, y) coordinates of the images of different spectral bands, u and v represent adaptive weighting coefficients, and T_1 and T_2 respectively represent the first set threshold and the second set threshold, wherein,
[Formula image FDA0002950356860000022]
[Formula image FDA0002950356860000023: the two formulas above are expressed in terms of ave[·], std[·], the weighting parameters k_1, k_2, k_3, the local region energies E_1(x, y), E_2(x, y) and the local energy ratio R(x, y)]
in the formula, ave[·] and std[·] respectively represent the mean operation and the standard deviation operation, k_1, k_2 and k_3 represent weighting parameters, E_1(x, y) and E_2(x, y) respectively represent the local region energy of the different multispectral infrared images to be fused, and R(x, y) represents the local energy ratio of the different multispectral infrared images to be fused.
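To make the low-frequency rule of claim 1 concrete, the following is a minimal NumPy sketch. It assumes a square averaging window for the local region energy, an energy-proportional choice for the adaptive weights u and v, and illustrative stand-ins for the thresholds T_1 and T_2, since the exact expressions are given only in the formula images (FDA0002950356860000021 to 23) and are not reproduced here; the function and variable names are hypothetical, not the patent's notation.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def fuse_lowpass(c1, c2, win=3, t1=None, t2=1.5):
        # Local region energy: windowed sum of squared low-frequency coefficients.
        e1 = uniform_filter(c1.astype(np.float64) ** 2, size=win) * win * win
        e2 = uniform_filter(c2.astype(np.float64) ** 2, size=win) * win * win
        eps = 1e-12
        r = e1 / (e2 + eps)                        # local energy ratio R(x, y)

        if t1 is None:                             # assumed stand-in threshold: mean plus standard deviation of the energies
            t1 = np.mean(e1 + e2) / 2 + np.std(e1 + e2) / 2

        # Assumed adaptive weights u, v: proportional to local energy.
        u = e1 / (e1 + e2 + eps)
        fused = u * c1 + (1.0 - u) * c2            # default: local-energy-weighted fusion

        pick1 = (e1 > t1) & (r > t2)               # band 1 clearly dominant: keep its coefficient
        pick2 = (e2 > t1) & (1.0 / (r + eps) > t2) # band 2 clearly dominant
        fused[pick1] = c1[pick1]
        fused[pick2] = c2[pick2]
        return fused

The inputs c1 and c2 may be the low-frequency coefficients of two spectral bands of one level, or one band's coefficients and the previous level's fused low-frequency coefficients, matching the progressive fusion described above.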
2. The method according to claim 1, wherein the step of performing high-frequency data fusion on each of the high-frequency decomposition coefficients and the high-frequency fusion coefficient obtained by performing high-frequency decomposition coefficient fusion on the previous low level, in a manner of taking the larger local variance, further comprises:
and for any two high-frequency decomposition coefficients or the high-frequency fusion coefficient obtained by fusing the high-frequency decomposition coefficients of the previous low level, performing high-frequency data fusion by adopting the following fusion formula:
[Formula image FDA0002950356860000031: C_{F,l,d}(x, y) equals C_{1,l,d}(x, y) when σ_{1,l}(x, y) ≥ σ_{2,l}(x, y), and equals C_{2,l,d}(x, y) otherwise]
in the formula, C_{F,l,d}(x, y), C_{1,l,d}(x, y) and C_{2,l,d}(x, y) respectively denote the high-frequency decomposition coefficients of the fused image, the first spectral band image and the second spectral band image of the same level at the l-th level in direction d at (x, y), and σ_{1,l}(x, y) and σ_{2,l}(x, y) respectively represent the maximum, over all directions, of the local variance of the high-frequency decomposition coefficients at (x, y) of the first spectral band image and the second spectral band image of the same level.
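As a companion to the sketch after claim 1, a minimal sketch of this take-the-larger-variance rule could look as follows; the list-of-subbands representation and the function name are assumptions for illustration, not the patent's notation.

    import numpy as np

    def fuse_highpass(c1_dirs, c2_dirs, sigma1, sigma2):
        # c1_dirs, c2_dirs: directional high-frequency subbands C_{1,l,d}, C_{2,l,d} of one level.
        # sigma1, sigma2: per-pixel maxima of the local variance over all directions.
        choose1 = sigma1 >= sigma2            # pixels where band 1 carries more local detail
        return [np.where(choose1, c1, c2) for c1, c2 in zip(c1_dirs, c2_dirs)]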
3. The method of claim 2, further comprising, prior to the step of performing high frequency data fusion:
for the multi-directional decomposition of the high-frequency decomposition coefficients, taking the maximum of the local variances over all directions as the directional variance of each target multispectral infrared image at the level:
[Formula image FDA0002950356860000032: σ_{1,l}(x, y) and σ_{2,l}(x, y) taken as the maxima of σ_{1,l,d}(x, y) and σ_{2,l,d}(x, y) over the directions d]
in the formula, σ_{1,l}(x, y) and σ_{2,l}(x, y) respectively represent the maximum, over all directions, of the local variance of the high-frequency decomposition coefficients at (x, y) at the l-th level of the first spectral band image and the second spectral band image of the same level, σ_{1,l,d}(x, y) and σ_{2,l,d}(x, y) respectively represent the local variance of the high-frequency decomposition coefficients in the l-th level d direction of the first spectral band image and the second spectral band image of the same level, and
[Formula image FDA0002950356860000033: the maximum operator over the directions]
represents taking the maximum value over each direction.
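The directional reduction of claim 3 can be sketched as below, again as an illustrative NumPy implementation assuming a square local window; the helper name directional_variance_max is made up for this sketch.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def directional_variance_max(subbands, win=3):
        # Per-pixel local variance of each directional subband, sigma_{i,l,d}(x, y) ...
        variances = []
        for band in subbands:
            band = np.asarray(band, dtype=np.float64)
            mean = uniform_filter(band, size=win)
            mean_sq = uniform_filter(band ** 2, size=win)
            variances.append(mean_sq - mean ** 2)    # Var = E[X^2] - (E[X])^2 over the window
        # ... reduced by a maximum over the directions d, giving sigma_{i,l}(x, y).
        return np.maximum.reduce(variances)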
4. The method of claim 1, wherein the step of performing level division on the target multispectral infrared image based on the different spectral band information of the target multispectral infrared image further comprises:
dividing the target multispectral infrared image into different levels based on the wavelength ranges of the different spectral bands of the target multispectral infrared image, according to the principle that the wider the wavelength range, the higher the level, and according to a preset total number of levels.
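As a small illustration of this level-division principle, the sketch below assigns levels by wavelength-range width; the band names, wavelength ranges, spreading rule and the helper name assign_levels are all made up for illustration and are not taken from the patent.

    def assign_levels(bands, total_levels):
        # bands: mapping from band name to its (low, high) wavelength range in micrometres.
        widths = {name: hi - lo for name, (lo, hi) in bands.items()}
        ordered = sorted(widths, key=widths.get)          # narrowest range first -> lowest level
        levels = {}
        for i, name in enumerate(ordered):
            # Spread the bands evenly over the preset total number of levels.
            levels[name] = 1 + i * total_levels // len(ordered)
        return levels

    # Illustrative usage with hypothetical bands:
    print(assign_levels({"MWIR": (3.0, 5.0), "LWIR": (8.0, 12.0)}, total_levels=2))
    # -> {'MWIR': 1, 'LWIR': 2}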
5. A target feature enhancement device based on multispectral fusion, comprising:
the level division module is used for carrying out level division on the target multi-spectral band infrared image based on different spectral band information of the target multi-spectral band infrared image;
the coefficient fusion module is used for carrying out non-subsampled Contourlet transform coefficient decomposition on the target multispectral infrared image of each level, starting from the lowest level in the level division, and carrying out progressive coefficient fusion based on the coefficient decomposition to obtain a low-frequency fusion coefficient and a high-frequency fusion coefficient of the highest level;
the reconstruction module is used for carrying out non-subsampled Contourlet transform multi-resolution image reconstruction based on the low-frequency fusion coefficient and the high-frequency fusion coefficient of the highest level to obtain a target feature enhanced fusion image;
the step of performing non-subsampled Contourlet transform coefficient decomposition on the target multispectral infrared image of each level and performing progressive coefficient fusion based on the coefficient decomposition further comprises:
for any of the levels:
decomposing the target multispectral infrared image at the level by adopting non-subsampled Contourlet transformation to obtain a low-frequency decomposition coefficient and a high-frequency decomposition coefficient which correspond to each target multispectral infrared image;
performing low-frequency data fusion on each low-frequency decomposition coefficient and a low-frequency fusion coefficient obtained by performing low-frequency decomposition coefficient fusion on a previous low level by adopting a mode based on the combination of a local energy ratio and local energy weighting, and performing high-frequency data fusion on each high-frequency decomposition coefficient and a high-frequency fusion coefficient obtained by performing high-frequency decomposition coefficient fusion on the previous low level by adopting a mode of taking the larger local variance;
respectively transmitting a low-frequency fusion coefficient obtained by fusing the low-frequency data of the level and a high-frequency fusion coefficient obtained by fusing the high-frequency data to a next high level for carrying out the low-frequency data fusion and the high-frequency data fusion of the next high level;
the step of performing low-frequency data fusion on each low-frequency decomposition coefficient and a low-frequency fusion coefficient obtained by performing low-frequency decomposition coefficient fusion on a previous low level by using a mode of combining local energy ratio and local energy weighting further comprises:
for any of the levels:
calculating the local energy ratio of the target multispectral infrared images of different spectral bands of the level according to the local area energy of each target multispectral infrared image of the level;
if the energy of the local area of any target multispectral infrared image is larger than a first set threshold value, and the local energy ratio corresponding to the target multispectral infrared image is larger than a second set threshold value, taking the low-frequency decomposition coefficients corresponding to the target multispectral infrared image as low-frequency fusion coefficients obtained by low-frequency data fusion, and otherwise, performing low-frequency data fusion on each low-frequency decomposition coefficient and the low-frequency fusion coefficients obtained by performing low-frequency decomposition coefficient fusion on the previous low level by adopting a local energy weighting method;
the step of performing low-frequency data fusion on each low-frequency decomposition coefficient and a low-frequency fusion coefficient obtained by performing low-frequency decomposition coefficient fusion on a previous low level by using a mode of combining local energy ratio and local energy weighting further comprises:
and for any two low-frequency decomposition coefficients or the low-frequency fusion coefficient obtained by fusing the low-frequency decomposition coefficients of the previous low level, carrying out low-frequency data fusion by adopting the following fusion formula:
[Formula image FDA0002950356860000051: the piecewise low-frequency fusion rule, which takes C_1(x, y) or C_2(x, y) directly when the corresponding local region energy and local energy ratio conditions against T_1 and T_2 are met, and otherwise takes the adaptively weighted combination u·C_1(x, y) + v·C_2(x, y)]
in the formula, C_F(x, y) represents the low-frequency fusion coefficient obtained by performing the low-frequency data fusion, C_1(x, y) and C_2(x, y) respectively represent the low-frequency decomposition coefficients at the (x, y) coordinates of the images of different spectral bands, u and v represent adaptive weighting coefficients, and T_1 and T_2 respectively represent the first set threshold and the second set threshold, wherein,
[Formula image FDA0002950356860000052]
[Formula image FDA0002950356860000053: the two formulas above are expressed in terms of ave[·], std[·], the weighting parameters k_1, k_2, k_3, the local region energies E_1(x, y), E_2(x, y) and the local energy ratio R(x, y)]
in the formula, ave[·] and std[·] respectively represent the mean operation and the standard deviation operation, k_1, k_2 and k_3 represent weighting parameters, E_1(x, y) and E_2(x, y) respectively represent the local region energy of the different multispectral infrared images to be fused, and R(x, y) represents the local energy ratio of the different multispectral infrared images to be fused;
further, the multi-spectral infrared image fusion result of the target feature enhancement method based on the multi-spectral fusion is evaluated by adopting 5 objective evaluation indexes, namely an average gradient, mutual information, a fusion image evaluation index based on edge information, an evaluation index based on structural similarity and an evaluation index based on human visual attention.
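For the evaluation indexes listed above, two of the five (average gradient and mutual information) have widely used standard definitions and can be sketched as follows; the edge-information, structural-similarity and visual-attention indexes are omitted here because their exact formulations are not given in the claims.

    import numpy as np

    def average_gradient(img):
        # Mean magnitude of the image gradient, a common sharpness/detail measure.
        img = img.astype(np.float64)
        gy, gx = np.gradient(img)
        return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

    def mutual_information(a, b, bins=64):
        # Mutual information between a source band and the fused image, from a joint histogram.
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = hist / hist.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))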
6. An electronic device, comprising: at least one memory, at least one processor, a communication interface, and a bus;
the memory, the processor and the communication interface complete mutual communication through the bus, and the communication interface is used for information transmission between the electronic equipment and target multi-spectral infrared image equipment;
the memory has stored therein a computer program operable on the processor, which when executed by the processor, implements the method of any of claims 1 to 4.
7. A non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the method of any one of claims 1-4.
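Tying the pieces together, a level-by-level driver in the spirit of claims 1 and 5 might look like the sketch below. It reuses the hypothetical helpers sketched after claims 1 to 3 and takes the NSCT forward and inverse transforms as user-supplied callables, since no particular NSCT implementation is assumed here.

    def hierarchical_fusion(levels, nsct_decompose, nsct_reconstruct):
        # levels: lowest level first; each entry is a list of co-registered band images.
        # nsct_decompose(img) -> (low, [directional high subbands]); nsct_reconstruct is its inverse.
        fused_low, fused_high = None, None
        for images in levels:
            coeffs = [nsct_decompose(img) for img in images]
            lows = [c[0] for c in coeffs]
            highs = [c[1] for c in coeffs]
            if fused_low is not None:                 # carry the previous level's fused coefficients upward
                lows.append(fused_low)
                highs.append(fused_high)
            fused_low, fused_high = lows[0], highs[0]
            for lo, hi in zip(lows[1:], highs[1:]):
                fused_low = fuse_lowpass(fused_low, lo)
                s_f = directional_variance_max(fused_high)
                s_o = directional_variance_max(hi)
                fused_high = fuse_highpass(fused_high, hi, s_f, s_o)
        return nsct_reconstruct(fused_low, fused_high)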
CN201811220592.2A 2018-10-19 2018-10-19 Target feature enhancement method and device based on multispectral fusion and electronic equipment Active CN109584192B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811220592.2A CN109584192B (en) 2018-10-19 2018-10-19 Target feature enhancement method and device based on multispectral fusion and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811220592.2A CN109584192B (en) 2018-10-19 2018-10-19 Target feature enhancement method and device based on multispectral fusion and electronic equipment

Publications (2)

Publication Number Publication Date
CN109584192A CN109584192A (en) 2019-04-05
CN109584192B true CN109584192B (en) 2021-05-25

Family

ID=65920503

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811220592.2A Active CN109584192B (en) 2018-10-19 2018-10-19 Target feature enhancement method and device based on multispectral fusion and electronic equipment

Country Status (1)

Country Link
CN (1) CN109584192B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111861929A (en) * 2020-07-24 2020-10-30 深圳开立生物医疗科技股份有限公司 Ultrasonic image optimization processing method, system and device
CN111948215B (en) * 2020-08-11 2021-07-16 水利部交通运输部国家能源局南京水利科学研究院 Underwater structure flaw detection method based on optical imaging

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1897035A (en) * 2006-05-26 2007-01-17 上海大学 Visible-light and infrared imaging merging method based on Contourlet conversion
CN107230196B (en) * 2017-04-17 2020-08-28 江南大学 Infrared and visible light image fusion method based on non-downsampling contourlet and target reliability
CN107194904B (en) * 2017-05-09 2019-07-19 西北工业大学 NSCT area image fusion method based on supplement mechanism and PCNN
CN108364277A (en) * 2017-12-20 2018-08-03 南昌航空大学 A kind of infrared small target detection method of two-hand infrared image fusion

Also Published As

Publication number Publication date
CN109584192A (en) 2019-04-05

Similar Documents

Publication Publication Date Title
Xiao et al. Removing stripe noise from infrared cloud images via deep convolutional networks
CN109919870B (en) SAR image speckle suppression method based on BM3D
Zhang et al. Visibility enhancement using an image filtering approach
CN109583378A (en) A kind of vegetation coverage extracting method and system
CN103295204B (en) A kind of image self-adapting enhancement method based on non-down sampling contourlet transform
Bhandari et al. Improved feature extraction scheme for satellite images using NDVI and NDWI technique based on DWT and SVD
CN110660065B (en) Infrared fault detection and identification algorithm
CN109584192B (en) Target feature enhancement method and device based on multispectral fusion and electronic equipment
CN112669249A (en) Infrared and visible light image fusion method combining improved NSCT (non-subsampled Contourlet transform) transformation and deep learning
CN106886747A (en) Ship Detection under a kind of complex background based on extension wavelet transformation
CN111179208A (en) Infrared-visible light image fusion method based on saliency map and convolutional neural network
CN111861905B (en) SAR image speckle noise suppression method based on Gamma-Lee filtering
CN104036461A (en) Infrared complicated background inhibiting method based on combined filtering
CN114049566B (en) Method and device for detecting cloud and cloud shadow of land satellite image in step-by-step refinement manner
CN111311503A (en) Night low-brightness image enhancement system
CN113298147A (en) Image fusion method and device based on regional energy and intuitionistic fuzzy set
Aghababaee et al. Contextual PolSAR image classification using fractal dimension and support vector machines
CN111461999B (en) SAR image speckle suppression method based on super-pixel similarity measurement
CN117292250A (en) Water body extraction method, device and equipment based on multisource satellite remote sensing data fusion
CN110298807A (en) Based on the domain the NSCT infrared image enhancing method for improving Retinex and quantum flora algorithm
Jiang et al. Perceptual-based fusion of ir and visual images for human detection
CN108717689B (en) Medium-long wave infrared image fusion method and device for ship detection under sea-sky background
CN112950500B (en) Hyperspectral denoising method based on edge detection low-rank total variation model
Xu et al. Underwater Image Enhancement of Improved Retinex Base on Statistical Learning
CN109285127B (en) Improved PolSAR image non-local mean filtering method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant