CN103839243A - Multi-channel satellite cloud picture fusion method based on Shearlet conversion - Google Patents
- Publication number
- CN103839243A (application CN201410056917.3A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention relates to a multi-channel satellite cloud image fusion method based on the Shearlet transform, belonging to the field of weather forecasting. First, two registered satellite cloud images are Shearlet-transformed to obtain low-frequency and high-frequency coefficients. The low-frequency Shearlet-domain part is then decomposed again with a Laplacian pyramid: the top pyramid level is fused by averaging, the remaining levels by taking the coefficient with the larger gray-level absolute value, after which the pyramid is reconstructed. In the high-frequency Shearlet-domain part, the information entropy, average gradient and standard deviation of each high-frequency sub-image are computed and normalized; the product of the three normalized values is formed for each candidate, and the sub-image with the larger product is taken as the fused sub-image, whose details are then enhanced with a nonlinear operator. Finally, the inverse Shearlet transform yields the fused image. The method generalizes to the fusion of three or more satellite cloud images, realizing multi-channel satellite cloud image fusion and yielding high-accuracy typhoon center positioning results.
Description
Technical Field
The invention belongs to the field of meteorological forecasting, and in particular relates to a multi-channel satellite cloud image fusion method based on the Shearlet transform for improving typhoon center positioning accuracy.
Background
Meteorological satellite cloud images play an extremely important role in weather monitoring and forecasting and in atmospheric environment detection, and are particularly critical for monitoring extreme meteorological disasters. Subsequent analysis and processing of satellite cloud images therefore yields better information on the atmosphere, land, ocean and cloud layers, provides reliable data support for monitoring and forecasting, and improves the automation and accuracy of forecasts; it has important practical significance.
China's Fengyun-2C (FY-2C) geostationary meteorological satellite receives visible, infrared and water vapor radiation from the earth through a scanning radiometer with one visible light channel, three infrared channels and one water vapor channel; the five channels each deliver a panoramic cloud image covering one third of the earth every half hour, and such frequent observation is particularly suited to detecting the genesis and development of short-lived, highly damaging disastrous weather such as rainstorms, typhoons and sandstorms. However, the imaging principles of the channels differ, so the data they provide differ as well, and the information obtained from a single-channel satellite cloud image is limited and insufficient to reflect the characteristics of the observed target. Image fusion combines satellite cloud image information from different channels, provides more comprehensive cloud information, helps acquire more reliable data, and improves forecasting and monitoring accuracy. Scholars at home and abroad have therefore continued to explore fusion techniques for multi-channel satellite cloud images.
Wavelet analysis theory has been widely applied to image fusion over years of development. Abd-Elrahman et al. proposed an enhancement method that improves cloud-related shadow regions in wavelet-based satellite cloud image fusion while retaining detail information, effectively improving cloud image quality. Lee Y. et al. proposed a wavelet-domain satellite image fusion algorithm that accounts for the intensity and spectral range of each source image and the associated spectral response: the spectral response of each channel is represented as a sum of Gaussian functions, and the spatial and spectral resolution of the image is then adjusted via Gaussian modeling; the PSNR (Peak Signal-to-Noise Ratio), root mean square error and correlation coefficient of its fusion results are better than those of traditional methods. Yang W. et al. introduced Compressive Sensing (CS) into satellite image fusion, proposing a fusion algorithm, CS-FWT-PCA, based on symmetric B-spline wavelets; the method uses a Hadamard matrix as the measurement matrix and Sparsity Adaptive Matching Pursuit (SAMP) as the reconstruction algorithm, and adopts an improved local-variance fusion rule, obtaining fusion effects superior to traditional methods.
The wavelet transform, however, has limitations: with only a small number of directions (horizontal, vertical and diagonal), it captures information in a limited set of orientations, which easily causes information loss. To address these shortcomings of the wavelet transform, multi-resolution analysis theory has been developed further in recent years, producing multi-scale geometric analysis tools. These tools retain the multi-resolution property of wavelets while adding multi-scale behavior, good time-frequency localization, high directionality and anisotropy. Common multi-scale geometric analysis tools include Bandelet, Ridgelet, Curvelet, Contourlet, NSCT (Non-Subsampled Contourlet Transform) and Shearlet. The Curvelet transform is better suited than wavelets to analyzing curved or straight edges in two-dimensional images and has higher approximation accuracy and sparse representation capability; introducing it into image fusion extracts the features of the source images better and provides more information to the fused image. Shutao Li et al. proposed a multi-focus image fusion algorithm combining the Curvelet and wavelet transforms whose results surpass any single multi-scale fusion method. However, the Curvelet transform is highly redundant, lacks a critically sampled filter bank, and suffers from the Gibbs phenomenon. The Contourlet transform, implemented strictly with fan filters and resampling, typically consists of a Laplacian pyramid transform and directional filters; its sub-bands at different scales and frequencies capture piecewise smooth (C2-continuous) curves in an image more accurately, concentrating the edge energy of the image. Miao Qiguang et al. proposed a Contourlet-based fusion method that compares regional energy in the high-frequency part and applies a consistency check, outperforming wavelet and Laplacian pyramid fusion in edge preservation and texture information. Juan Lu et al. proposed a fusion algorithm based on NSCT and energy entropy whose results carry richer directional information and strong noise robustness. However, applying the Contourlet transform to image fusion easily introduces a pseudo-Gibbs phenomenon, and overcoming it costs too much time and data.
The Shearlet transform introduces a shear filter that is unconstrained in the number of directions and can also be represented by a window function. It can detect all singular points and adaptively track the direction of singular curves, and no inverse fan filter bank is needed in the inverse transform. The Shearlet transform thus overcomes the information loss of wavelet-based image fusion and removes the restriction on the number of filtering directions. With the development of image processing technology it has received increasing attention from researchers and has become a research hotspot. Qi-guang Miao et al. applied the Shearlet transform to image fusion, exploiting its directionality, localization, anisotropy and multi-scale properties; their fusion results contain more detail and less distortion than other methods. Cheng S. et al. proposed a fusion algorithm based on the Shearlet transform and PCNN (Pulse Coupled Neural Network): gradient features of the shear matrix are extracted in each direction, multi-scale decomposition is performed with wavelets, and the high-frequency coefficients are fused with the PCNN, obtaining a good fusion effect. Guorong G. et al. proposed a multi-focus image fusion algorithm based on NSST (Non-Subsampled Shearlet Transform), with a fusion rule that concentrates regional information, tailored to the small-variance characteristic of multi-focus images; experiments show its visual quality and objective evaluation clearly surpass discrete wavelet fusion results.
Disclosure of Invention
The invention provides a multi-channel satellite cloud image fusion method based on the Shearlet transform. Its aim is a fusion method that jointly considers each evaluation parameter of an image so as to retain the high information content of the cloud image, achieves better typhoon center positioning, and generalizes to the fusion of multi-channel satellite cloud images.
The technical scheme adopted by the invention comprises the following steps:
Step 1: perform the Shearlet transform on the registered source images A and B, each of size M × N, with W decomposition levels and T decomposition directions, where T = 2^r, r ∈ Z*, to obtain the high-frequency coefficients SH_A and SH_B and the low-frequency coefficients SL_A and SL_B;
Step 2: perform Laplacian pyramid decomposition on the low-frequency coefficients SL_A and SL_B with Q decomposition levels, obtaining decomposed images LA and LB whose q-th level sub-images are LA_q and LB_q, 1 ≤ q ≤ Q;
Step 3: fuse the top-level Laplacian pyramid sub-images LA_Q and LB_Q by averaging, obtaining the fusion result LF_Q:
LF_Q(i,j) = ( LA_Q(i,j) + LB_Q(i,j) ) / 2,
where 1 ≤ i ≤ CL_Q and 1 ≤ j ≤ RL_Q, CL_Q being the number of rows and RL_Q the number of columns of the Q-th level sub-image;
Step 4: fuse the other Laplacian pyramid levels LA_q and LB_q, 1 ≤ q ≤ Q−1, with the larger-absolute-gray-value rule, obtaining the fusion result LF_q:
LF_q(i,j) = LA_q(i,j) if |LA_q(i,j)| ≥ |LB_q(i,j)|, and LF_q(i,j) = LB_q(i,j) otherwise;
Step 5: reconstruct the fused Laplacian pyramid LF to obtain the fusion result TL_F of the low-frequency part;
Step 6: compute the information entropy, average gradient and standard deviation of each sub-image of each level and direction in the high-frequency part of the Shearlet transform domain; denote the w-th level, t-th direction high-frequency sub-images by SH_A^(w,t) and SH_B^(w,t), with 1 ≤ w ≤ W and 1 ≤ t ≤ T, each of the original image size M × N; the information entropy E is:
E = − Σ_{i=0}^{L−1} P_i · log2 P_i,
where P_i is the probability that a pixel in the sub-image has gray value i and L is the number of gray levels in the image; the average gradient G of a high-frequency sub-image is expressed as:
G = ( 1 / ((M−1)·(N−1)) ) · Σ_{ii=1}^{M−1} Σ_{jj=1}^{N−1} sqrt( ( Δx f(x_ii, y_jj)² + Δy f(x_ii, y_jj)² ) / 2 ),
where f(x_ii, y_jj) denotes the pixel of the high-frequency sub-image SH_A^(w,t) or SH_B^(w,t) at row x_ii and column y_jj, with 1 ≤ ii ≤ M and 1 ≤ jj ≤ N, and Δx f and Δy f are the gray-level first differences in the row and column directions; the standard deviation σ of the high-frequency sub-image is expressed as:
σ = sqrt( ( 1 / (M·N) ) · Σ_{ii=1}^{M} Σ_{jj=1}^{N} ( f(x_ii, y_jj) − μ )² ),
where μ denotes the gray-level mean of the high-frequency sub-image;
Step 7: normalize the information entropy E, the average gradient G and the standard deviation σ of the high-frequency sub-images SH_A^(w,t) and SH_B^(w,t) to obtain the normalized entropy E_g, average gradient G_g and standard deviation σ_g, and select as the fused high-frequency sub-image SH_F^(w,t) the candidate whose product E_g · G_g · σ_g is larger;
Step 8: apply nonlinear enhancement based on the discrete stationary wavelet transform to the fused high-frequency sub-image; let maxh be the maximum absolute gray value over all pixels of the high-frequency sub-image SH_F^(w,t), so that the enhanced high-frequency sub-image is:
SH_F'^(w,t) = a · maxh · [ sigm( c · ( SH_F^(w,t)/maxh + b ) ) − sigm( −c · ( SH_F^(w,t)/maxh − b ) ) ],
where b = 0.35, c = 20, a = 1/(d_1 − d_2), d_1 = sigm( c × (1 + b) ), d_2 = sigm( −c × (1 − b) ), and sigm(x) = 1/(1 + e^(−x)) is the sigmoid function;
Step 9: perform the inverse Shearlet transform on the fused Shearlet coefficient values to obtain the final fused image F.
In step 1 of the invention, the registered source images A and B are satellite cloud images returned by China's meteorological satellite FY-2C, which has 5 channels: infrared 1 channel, infrared 2 channel, water vapor channel, infrared 4 channel and visible light channel; any two registered channel images may be chosen.
The invention also includes: adding cloud images of other channels to the fusion result, thereby fusing three or more cloud images and achieving multi-channel satellite cloud image fusion.
The invention applies the Shearlet transform to the fusion of multi-channel satellite cloud images. Starting from two registered cloud images and combining a Laplacian pyramid decomposition fusion method, it proposes a fusion rule that jointly considers each evaluation parameter of the image so as to retain the high information content of the cloud image, achieves better typhoon center positioning, and generalizes to the fusion of multi-channel satellite cloud images.
The technical scheme of the invention fully fuses the useful information of each channel, realizes the fusion of multi-channel typhoon cloud images well, retains the details of each channel's cloud image to the greatest extent, and preserves the clarity of the fused image. Center positioning of both eyed and eyeless typhoons on the fused cloud images yields typhoon center positioning results of higher accuracy, so the fusion effect has good practical value.
Drawings
FIG. 1 is a flow chart of a multichannel satellite cloud image fusion method based on Shearlet transformation according to the present invention;
FIG. 2(a) is the infrared 1 channel cloud image among the 5-channel satellite cloud images returned by China's meteorological satellite FY-2C;
FIG. 2(b) is the infrared 2 channel cloud image among the 5-channel satellite cloud images returned by FY-2C;
FIG. 2(c) is the water vapor channel cloud image among the 5-channel satellite cloud images returned by FY-2C;
FIG. 2(d) is the infrared 4 channel cloud image among the 5-channel satellite cloud images returned by FY-2C;
FIG. 2(e) is the visible light channel cloud image among the 5-channel satellite cloud images returned by FY-2C;
FIG. 3(a) is the infrared 2 channel cloud image in the multi-channel satellite cloud image fusion experiment on the infrared 2 channel and water vapor channel cloud images (eyed typhoon) of typhoon "Taili" at 12:00 on August 31, 2005;
FIG. 3(b) is the water vapor channel cloud image in the same experiment;
FIG. 3(c) is the Laplacian pyramid fusion result in the same experiment;
FIG. 3(d) is the classical discrete orthogonal wavelet fusion result in the same experiment;
FIG. 3(e) is the Curvelet fusion result in the same experiment;
FIG. 3(f) is the Contourlet fusion result in the same experiment;
FIG. 3(g) is the NSCT fusion result in the same experiment;
FIG. 3(h) is the fusion result of the algorithm of the present invention in the same experiment;
FIG. 4(a) is a partially enlarged image of the fusion result in FIG. 3 (c);
FIG. 4(b) is a partially enlarged image of the fusion result in FIG. 3 (d);
FIG. 4(c) is a partially enlarged image of the fusion result in FIG. 3 (e);
FIG. 4(d) is a partially enlarged image of the fusion result in FIG. 3 (f);
FIG. 4(e) is a partially enlarged image of the fusion result in FIG. 3 (g);
FIG. 4(f) is a partially enlarged image of the fusion result in FIG. 3 (h);
FIG. 5(a) is a cloud image cropped from the "Taili" fusion result of FIG. 3(c) for typhoon center positioning;
FIG. 5(b) is a cloud image cropped from the "Taili" fusion result of FIG. 3(d) for typhoon center positioning;
FIG. 5(c) is a cloud image cropped from the "Taili" fusion result of FIG. 3(e) for typhoon center positioning;
FIG. 5(d) is a cloud image cropped from the "Taili" fusion result of FIG. 3(f) for typhoon center positioning;
FIG. 5(e) is a cloud image cropped from the "Taili" fusion result of FIG. 3(g) for typhoon center positioning;
FIG. 5(f) is a cloud image cropped from the "Taili" fusion result of FIG. 3(h) for typhoon center positioning;
FIG. 6(a) is a schematic diagram of the typhoon center positioning result using the infrared 2 channel "Taili" cloud image of FIG. 3(a);
FIG. 6(b) is a schematic diagram of the typhoon center positioning result using the water vapor channel "Taili" cloud image of FIG. 3(b);
FIG. 6(c) is a schematic diagram of the typhoon center positioning result using the fusion result image of FIG. 3(c);
FIG. 6(d) is a schematic diagram of the typhoon center positioning result using the fusion result image of FIG. 3(d);
FIG. 6(e) is a schematic diagram of the typhoon center positioning result using the fusion result image of FIG. 3(e);
FIG. 6(f) is a schematic diagram of the typhoon center positioning result using the fusion result image of FIG. 3(f);
FIG. 6(g) is a schematic diagram of the typhoon center positioning result using the fusion result image of FIG. 3(g);
FIG. 6(h) is a schematic diagram of the typhoon center positioning result using the fusion result image of FIG. 3(h);
FIG. 7(a) is the infrared 1 channel cloud image of typhoon "Pearl" (eyeless typhoon) at 00:00 on May 11, 2006;
FIG. 7(b) is the water vapor channel cloud image of typhoon "Pearl" (eyeless typhoon) at 00:00 on May 11, 2006;
FIG. 7(c) is the Laplacian pyramid fusion result in the multi-channel satellite cloud image fusion experiment on these infrared 1 channel and water vapor channel cloud images;
FIG. 7(d) is the classical discrete orthogonal wavelet fusion result in the same experiment;
FIG. 7(e) is the Curvelet fusion result in the same experiment;
FIG. 7(f) is the Contourlet fusion result in the same experiment;
FIG. 7(g) is the NSCT fusion result in the same experiment;
FIG. 7(h) is the fusion result of the present invention in the same experiment;
FIG. 8(a) is a partially enlarged image of the fusion result in FIG. 7 (c);
FIG. 8(b) is a partially enlarged image of the fusion result in FIG. 7 (d);
FIG. 8(c) is a partially enlarged image of the fusion result in FIG. 7 (e);
FIG. 8(d) is a partially enlarged image of the fusion result in FIG. 7 (f);
FIG. 8(e) is a partially enlarged image of the fusion result in FIG. 7 (g);
FIG. 8(f) is a partially enlarged image of the fusion result in FIG. 7 (h);
FIG. 9(a) is a cloud image cropped from the "Pearl" fusion result of FIG. 7(c) for typhoon center positioning;
FIG. 9(b) is a cloud image cropped from the "Pearl" fusion result of FIG. 7(d) for typhoon center positioning;
FIG. 9(c) is a cloud image cropped from the "Pearl" fusion result of FIG. 7(e) for typhoon center positioning;
FIG. 9(d) is a cloud image cropped from the "Pearl" fusion result of FIG. 7(f) for typhoon center positioning;
FIG. 9(e) is a cloud image cropped from the "Pearl" fusion result of FIG. 7(g) for typhoon center positioning;
FIG. 9(f) is a cloud image cropped from the "Pearl" fusion result of FIG. 7(h) for typhoon center positioning;
FIG. 10(a) is a schematic diagram of the typhoon center positioning result using the infrared 1 channel "Pearl" cloud image of FIG. 7(a);
FIG. 10(b) is a schematic diagram of the typhoon center positioning result using the water vapor channel "Pearl" cloud image of FIG. 7(b);
FIG. 10(c) is a schematic diagram of the typhoon center positioning result using the Laplacian pyramid fusion result image of FIG. 7(c);
FIG. 10(d) is a schematic diagram of the typhoon center positioning result using the classical discrete orthogonal wavelet fusion result image of FIG. 7(d);
FIG. 10(e) is a schematic diagram of the typhoon center positioning result using the Curvelet fusion result image of FIG. 7(e);
FIG. 10(f) is a schematic diagram of the typhoon center positioning result using the Contourlet fusion result image of FIG. 7(f);
FIG. 10(g) is a schematic diagram of the typhoon center positioning result using the NSCT fusion result image of FIG. 7(g);
FIG. 10(h) is a schematic diagram of the typhoon center positioning result using the fusion result image of the present invention of FIG. 7(h).
Detailed Description
The invention provides a multi-channel satellite cloud image fusion method based on the Shearlet transform. Starting from two registered satellite cloud images, the Shearlet transform is first applied to obtain low-frequency and high-frequency coefficients. The low-frequency Shearlet-domain part is then decomposed again with a Laplacian pyramid; its top level is fused by averaging and the other levels by taking the coefficient with the larger gray absolute value, after which the pyramid is reconstructed. In the high-frequency Shearlet-domain part, the information entropy, average gradient and standard deviation of each high-frequency sub-image are computed and individually normalized; the product of the three normalized values is formed, and the sub-image with the larger product is taken as the fused sub-image. The details of the fused high-frequency sub-image are then enhanced with a nonlinear operator. Finally, the inverse Shearlet transform yields the fused image. The method can be extended to the fusion of three or more satellite cloud images, realizing multi-channel satellite cloud image fusion.
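For illustration only, the overall flow admits a short sketch in Python. The callables `shearlet_decompose` and `shearlet_reconstruct` stand for whatever Shearlet implementation is available and are assumed interfaces, not a real library API; the helpers `fuse_low_frequency`, `select_by_product` and `enhance` are sketched after the corresponding steps below.

```python
# Minimal pipeline sketch of the method (steps 1-9). The Shearlet
# transform itself is injected as a pair of ASSUMED callables.

def fuse_cloud_images(img_a, img_b, shearlet_decompose, shearlet_reconstruct):
    """Fuse two registered satellite cloud images given as 2-D float arrays."""
    # Step 1: Shearlet decomposition -> one low-frequency band and a list
    # of W x T high-frequency sub-images per source image.
    low_a, highs_a = shearlet_decompose(img_a)
    low_b, highs_b = shearlet_decompose(img_b)

    # Steps 2-5: Laplacian-pyramid fusion of the low-frequency bands.
    low_f = fuse_low_frequency(low_a, low_b)

    # Steps 6-8: per sub-image "larger product wins" selection, then
    # nonlinear detail enhancement.
    highs_f = [enhance(select_by_product(ha, hb))
               for ha, hb in zip(highs_a, highs_b)]

    # Step 9: the inverse Shearlet transform yields the fused image F.
    return shearlet_reconstruct(low_f, highs_f)
```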
FIG. 1 is a schematic flow chart of the multi-channel satellite cloud image fusion method based on the Shearlet transform according to the present invention. The method comprises the following steps:
Step 1: perform the Shearlet transform on the registered source images A and B, each of size M × N, with W decomposition levels and T decomposition directions (T = 2^r, r ∈ Z*), to obtain the high-frequency coefficients SH_A and SH_B and the low-frequency coefficients SL_A and SL_B;
Step 2: perform Laplacian pyramid decomposition on the low-frequency coefficients SL_A and SL_B with Q decomposition levels, obtaining decomposed images LA and LB whose q-th level sub-images (1 ≤ q ≤ Q) are LA_q and LB_q;
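A minimal sketch of this Q-level Laplacian pyramid decomposition, assuming OpenCV's pyrDown/pyrUp as the resampling operators (an implementation choice, not prescribed by the invention):

```python
import cv2
import numpy as np

def laplacian_pyramid(img, q_levels):
    """Q-level Laplacian pyramid of a 2-D array (finest detail first)."""
    pyramid, cur = [], img.astype(np.float64)
    for _ in range(q_levels - 1):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyramid.append(cur - up)   # band-pass detail at this level
        cur = down
    pyramid.append(cur)            # low-pass residual = top level (LA_Q)
    return pyramid
```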
Step 3: fuse the top-level Laplacian pyramid sub-images LA_Q and LB_Q by averaging, obtaining the fusion result LF_Q:
LF_Q(i,j) = ( LA_Q(i,j) + LB_Q(i,j) ) / 2,
where 1 ≤ i ≤ CL_Q and 1 ≤ j ≤ RL_Q, CL_Q being the number of rows and RL_Q the number of columns of the Q-th level sub-image;
Step 4: fuse the other Laplacian pyramid levels LA_q and LB_q (1 ≤ q ≤ Q−1) with the larger-absolute-gray-value rule, obtaining the fusion result LF_q:
LF_q(i,j) = LA_q(i,j) if |LA_q(i,j)| ≥ |LB_q(i,j)|, and LF_q(i,j) = LB_q(i,j) otherwise;
Step 5: reconstruct the fused Laplacian pyramid LF to obtain the fusion result TL_F of the low-frequency part;
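Steps 3 to 5 can then be sketched as follows, continuing the previous sketch; the default Q = 4 is only an assumed example value, the invention leaves Q free:

```python
import cv2
import numpy as np

def fuse_low_frequency(low_a, low_b, q_levels=4):
    """Steps 3-5: averaged top level, max-abs elsewhere, then collapse."""
    pyr_a = laplacian_pyramid(low_a, q_levels)
    pyr_b = laplacian_pyramid(low_b, q_levels)
    fused = []
    for q, (la, lb) in enumerate(zip(pyr_a, pyr_b)):
        if q == q_levels - 1:                  # top level: mean (step 3)
            fused.append(0.5 * (la + lb))
        else:                                  # other levels: max-abs (step 4)
            fused.append(np.where(np.abs(la) >= np.abs(lb), la, lb))
    # Step 5: reconstruct by upsampling and adding the detail levels.
    out = fused[-1]
    for detail in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(detail.shape[1], detail.shape[0])) + detail
    return out
```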
Step 6: compute the information entropy, average gradient and standard deviation of each sub-image of each level and direction in the high-frequency part of the Shearlet transform domain; denote the w-th level (1 ≤ w ≤ W), t-th direction (1 ≤ t ≤ T) high-frequency sub-images by SH_A^(w,t) and SH_B^(w,t), each of the original image size M × N; the information entropy E is:
E = − Σ_{i=0}^{L−1} P_i · log2 P_i,
where P_i is the probability that a pixel in the sub-image has gray value i and L is the number of gray levels in the image; the average gradient G of a high-frequency sub-image is expressed as:
G = ( 1 / ((M−1)·(N−1)) ) · Σ_{ii=1}^{M−1} Σ_{jj=1}^{N−1} sqrt( ( Δx f(x_ii, y_jj)² + Δy f(x_ii, y_jj)² ) / 2 ),
where f(x_ii, y_jj) denotes the pixel of the high-frequency sub-image SH_A^(w,t) or SH_B^(w,t) at row x_ii and column y_jj (1 ≤ ii ≤ M, 1 ≤ jj ≤ N), and Δx f and Δy f are the gray-level first differences in the row and column directions. The standard deviation σ of the high-frequency sub-image is expressed as:
σ = sqrt( ( 1 / (M·N) ) · Σ_{ii=1}^{M} Σ_{jj=1}^{N} ( f(x_ii, y_jj) − μ )² ),
where μ denotes the gray-level mean of the high-frequency sub-image;
Step 7: normalize the information entropy E, the average gradient G and the standard deviation σ of the high-frequency sub-images SH_A^(w,t) and SH_B^(w,t) to obtain the normalized entropy E_g, average gradient G_g and standard deviation σ_g, and select as the fused high-frequency sub-image SH_F^(w,t) the candidate whose product E_g · G_g · σ_g is larger;
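Steps 6 and 7 admit the compact sketch below. Two details are assumptions of this sketch rather than the patent's text: the high-frequency coefficients are quantized into 256 histogram bins for the entropy estimate, and the normalization divides each parameter by its sum over the two candidates (one plausible reading of the normalization step):

```python
import numpy as np

def entropy(sub, levels=256):
    # Histogram the coefficients into `levels` bins (quantization assumed).
    hist, _ = np.histogram(sub, bins=levels)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))            # E = -sum_i P_i log2 P_i

def avg_gradient(sub):
    dx = np.diff(sub, axis=1)[:-1, :]         # row-direction first differences
    dy = np.diff(sub, axis=0)[:, :-1]         # column-direction first differences
    return np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0))

def select_by_product(sub_a, sub_b):
    feats = np.array([[entropy(s), avg_gradient(s), np.std(s)]
                      for s in (sub_a, sub_b)])
    norm = feats / feats.sum(axis=0, keepdims=True)  # pair-sum normalization
    prod = norm.prod(axis=1)                  # E_g * G_g * sigma_g per candidate
    return sub_a if prod[0] >= prod[1] else sub_b
```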
Step 8: apply nonlinear enhancement based on the discrete stationary wavelet transform to the fused high-frequency sub-image; let maxh be the maximum absolute gray value over all pixels of the high-frequency sub-image SH_F^(w,t), so that the enhanced high-frequency sub-image is:
SH_F'^(w,t) = a · maxh · [ sigm( c · ( SH_F^(w,t)/maxh + b ) ) − sigm( −c · ( SH_F^(w,t)/maxh − b ) ) ],
where b = 0.35, c = 20, a = 1/(d_1 − d_2), d_1 = sigm( c × (1 + b) ), d_2 = sigm( −c × (1 − b) ), and sigm(x) = 1/(1 + e^(−x)) is the sigmoid function;
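Step 8 then reduces to a few lines. The operator form below is reconstructed from the stationary-wavelet enhancement literature so as to match the constants d_1 and d_2 above, and should be read as an assumption rather than a verbatim restatement:

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))           # sigmoid function

def enhance(sub, b=0.35, c=20.0):
    maxh = np.max(np.abs(sub))                # largest absolute gray value
    if maxh == 0:
        return sub
    a = 1.0 / (sigm(c * (1 + b)) - sigm(-c * (1 - b)))   # a = 1/(d1 - d2)
    y = sub / maxh                            # normalize coefficients to [-1, 1]
    return a * maxh * (sigm(c * (y + b)) - sigm(-c * (y - b)))
```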
Step 9: perform the inverse Shearlet transform on the fused Shearlet coefficient values to obtain the final fused image F.
The invention also includes: adding cloud images of other channels to the fusion result, thereby fusing three or more cloud images and achieving multi-channel satellite cloud image fusion.
The Shearlet decomposition coefficients are fused according to their respective fusion rules. The low-frequency Shearlet-domain part is decomposed again with a Laplacian pyramid, whose top level is averaged and whose other levels take the coefficients with the larger gray absolute value before reconstruction; in the high-frequency Shearlet-domain part, the information entropy, average gradient and standard deviation of each high-frequency sub-image are computed, and the sub-image with the larger product of the three is taken as the fused high-frequency sub-image.
The fusion rule for the Shearlet low-frequency coefficients is thus: first perform Laplacian pyramid decomposition; fuse the top-level pyramid sub-images by averaging and the other levels with the larger-absolute-gray-value rule; finally reconstruct the fused Laplacian pyramid to obtain the new Shearlet low-frequency coefficients.
The following experimental examples further illustrate the effects of the invention.
Experimental example 1:
As shown in FIGS. 3(a)-(h), the infrared 2 channel and water vapor channel cloud images of typhoon "Taili" at 12:00 on August 31, 2005 are selected as the source images for fusion. The experimental images are 512 × 512-pixel grayscale images processed in MATLAB 7.0, each cropped from a 2288 × 2288 satellite cloud image of the type shown in FIG. 2. Each pixel is represented numerically by its brightness; a larger value indicates a brighter pixel.
The Shearlet transform is applied to each of the two cloud images to be fused; in this decomposition step, one decomposition level and eight directions are used. To verify the effectiveness of the proposed fusion algorithm, its results are compared with those of 5 methods: the Laplacian pyramid image fusion method; the classical discrete orthogonal wavelet image fusion method; the Contourlet image fusion method (fusion rule: average the low-frequency coefficients and take the high-frequency coefficients with larger regional energy; decomposition directions set to [0,2]); the Curvelet image fusion method (a separate Curvelet fusion in which the low-frequency coefficients are averaged and the high-frequency coefficients with larger energy in a window region, window size 3 × 3, are taken); and the NSCT image fusion method (an NSCT combined-energy fusion algorithm with the NSCT decomposition directions set to [3,3]). The Laplacian pyramid method uses the same fusion rule as the classical discrete orthogonal wavelet method: the low-frequency part is averaged and the high-frequency part takes the coefficients with the larger gray absolute value.
FIGS. 3(a) and 3(b) show the infrared 2 channel and water vapor channel cloud images (512 × 512) of typhoon "Taili" at 12:00 on August 31, 2005. FIG. 3(c) shows the Laplacian pyramid fusion result, FIG. 3(d) the classical discrete orthogonal wavelet result, FIG. 3(e) the Curvelet result, FIG. 3(f) the Contourlet result, FIG. 3(g) the NSCT result, and FIG. 3(h) the result of the proposed fusion algorithm.
As seen from FIGS. 3(a)-(h), the Laplacian pyramid fusion image of FIG. 3(c) is clearer than the classical discrete orthogonal wavelet result of FIG. 3(d); the Curvelet fusion image of FIG. 3(e) and the NSCT fusion image of FIG. 3(g) are closer to the water vapor channel source image of FIG. 3(b), with slightly larger gray values and less contrast between the typhoon eye and the surrounding clouds; the Contourlet fusion image of FIG. 3(f) shows a fine gridding artifact. The fusion image of the proposed algorithm in FIG. 3(h) is similar to the Laplacian pyramid result of FIG. 3(c): the image is clear and the information near the eye is prominent. For a clearer comparison of details, partial views of the fusion results are cropped, as shown in FIG. 4.
As seen from FIGS. 4(a)-(f), the Curvelet fusion result in FIG. 4(c) and the NSCT fusion result in FIG. 4(e) show little gray-level difference between the cloud clusters and the eye, unlike the other groups, whose renderings of the typhoon spiral are fairly similar to one another. The fusion result of the proposed algorithm effectively highlights the typhoon eye information, and the main typhoon cloud system is smooth overall, which helps improve the accuracy of satellite-cloud-image-based typhoon center positioning.
To evaluate the fusion effect objectively, the information entropy E, average gradient G, standard deviation σ and average correlation coefficient Average_Corr of the fused images are computed, together with the product of these four evaluation parameters. Since the fusion algorithm targets multi-channel satellite cloud images with the aim of improving typhoon center positioning accuracy, it focuses on indices such as the information content, spatial resolution and clarity of the fused image, ensuring good detail and texture characteristics. The information content can be evaluated with the information entropy, which objectively measures the amount of information before and after fusion: the larger E is, the larger the average information content of the fused image, the richer the information, and the better the fusion effect. Spatial resolution can be evaluated with the correlation coefficient and the standard deviation. The correlation coefficient measures the degree of correlation between two images: the closer the correlation coefficient between the fusion result and a source image is to 1, the higher the correlation and the better the fusion. For source image A and fused image F the correlation coefficient is Corr(A,F); for source image B and fused image F it is Corr(B,F); the average correlation coefficient is:
Average_Corr = ( Corr(A,F) + Corr(B,F) ) / 2.
The closer Average_Corr is to 1, the better the fusion result. The standard deviation σ reflects the dispersion of the gray values about the gray-level mean: the larger σ is, the greater the contrast of the fused image and the more easily information stands out; conversely, a small σ indicates a concentrated gray distribution, low contrast, and detail that is hard to discern. Image clarity can be evaluated with the average gradient, which sensitively reflects the image's ability to render fine detail contrast: in general, the larger the average gradient, the faster the gray levels change and the clearer the image. These four evaluation parameters are considered jointly, and the fused image is assessed comprehensively by their product: the larger the product, the better the fusion effect, the richer the information, the clearer the image, and the more favorable it is for typhoon center positioning.
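As a sketch, this composite index (reusing `entropy` and `avg_gradient` from the earlier sketch) is simply:

```python
import numpy as np

def corr(x, y):
    # Pearson correlation coefficient between two images.
    return np.corrcoef(x.ravel(), y.ravel())[0, 1]

def composite_score(src_a, src_b, fused):
    avg_corr = 0.5 * (corr(src_a, fused) + corr(src_b, fused))
    # Product of the four evaluation parameters: E * G * sigma * Average_Corr.
    return entropy(fused) * avg_gradient(fused) * np.std(fused) * avg_corr
```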
The performance indices of the fusion results of the infrared 2 channel and water vapor channel cloud images of typhoon "Taili" are shown in Table 1.
TABLE 1. Comparison of performance parameters of the fusion results of the infrared 2 channel and water vapor channel cloud images of typhoon "Taili" in FIGS. 3(c)-(h)
As Table 1 shows, the average gradient and standard deviation of the proposed algorithm's fusion result are better than those of the other fusion algorithms, and the product of the four evaluation parameters is also the best, indicating the best overall fusion performance. The information entropy and average correlation coefficient are not optimal but differ little from the other algorithms' results: the information entropy differs by at most 0.008 and the average correlation coefficient by at most 0.003, so these two parameters can be considered on a par with the other fusion methods.
As shown in FIGS. 5(a)-(f), a 39 × 39 image is cropped from each fusion result in FIGS. 3(c)-(h), and the typhoon center is then located with the typhoon center positioning algorithm. This algorithm first determines the typhoon's closed cloud region; then, based on the fact that the gradient information of the typhoon center area is richest within the closed cloud, it traverses the closed cloud region with a 9 × 9 window, selects the window containing the most intersections of texture lines as the typhoon center area, and takes the geometric center of that area as the typhoon center. After the typhoon center is found, its position is marked with a "+" in the 512 × 512 fusion result image, as shown in FIGS. 6(a)-(h).
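The window search can be sketched loosely as below. The patent scores each 9 × 9 window by the number of texture-line intersections; as a simplification, and purely as an assumption of this sketch, the score here is the total gradient magnitude inside the window, restricted to a binary mask of the closed cloud region:

```python
import cv2
import numpy as np

def locate_center(img, cloud_mask, win=9):
    """Return (column, row) of the gradient-richest window center."""
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1)
    grad = np.sqrt(gx ** 2 + gy ** 2) * (cloud_mask > 0)
    # Unnormalized box filter = per-window sum of gradient magnitudes.
    score = cv2.boxFilter(grad, cv2.CV_64F, (win, win), normalize=False)
    y, x = np.unravel_index(np.argmax(score), score.shape)
    return x, y   # geometric center of the best-scoring window
```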
As seen from FIGS. 6(c)-(h), the center positioning results for the Laplacian pyramid fusion (FIG. 6(c)), the classical discrete orthogonal wavelet fusion (FIG. 6(d)), the Contourlet fusion (FIG. 6(f)) and the proposed algorithm's fusion (FIG. 6(h)) are all fairly close to the typhoon center, and slight differences are hard to observe with the naked eye. The distance error of the typhoon center is therefore computed from the longitude and latitude errors of the positioning; the center positioning errors for the fusion results of the infrared 2 channel and water vapor channel cloud images of typhoon "Taili" at 12:00 on August 31, 2005 are shown in Table 2.
TABLE 2. Comparison of typhoon center positioning errors of the various fusion method results for the infrared 2 channel and water vapor channel cloud images of typhoon "Taili" at 12:00 on August 31, 2005
As Table 2 shows, the typhoon center error of the proposed algorithm is 39.37 km, the smallest center positioning error, which is superior to the results of the single infrared 2 channel, the water vapor channel and the other fusion methods.
Experimental example 2:
As shown in FIGS. 7(a)-(h), the infrared 1 channel and water vapor channel cloud images of typhoon "Pearl" at 00:00 on May 11, 2006 are selected as the source images for fusion. The infrared 1 channel and water vapor channel cloud images are shown in FIGS. 7(a) and 7(b). FIG. 7(c) shows the Laplacian pyramid fusion result, FIG. 7(d) the classical discrete orthogonal wavelet result, FIG. 7(e) the Curvelet result, FIG. 7(f) the Contourlet result, FIG. 7(g) the NSCT result, and FIG. 7(h) the result of the proposed fusion algorithm. Since FIG. 7 shows an eyeless typhoon, judging from the cloud detail around the cyclone periphery, the Curvelet result of FIG. 7(e) and the NSCT result of FIG. 7(g) have overly large gray values and blurred details; among the other groups, the Laplacian pyramid result of FIG. 7(c) is slightly better than the classical orthogonal wavelet result of FIG. 7(d), the proposed algorithm's result of FIG. 7(h) is close to the Laplacian pyramid result of FIG. 7(c), and the Contourlet result of FIG. 7(f) is also good. For a clearer comparison of details, partial views of the fusion results are cropped, as shown in FIGS. 8(a)-(f).
As seen in FIG. 8, the central clouds of this eyeless typhoon remain bright in every result, and the differences there are slight. Judging from the peripheral cloud information, the Laplacian pyramid result in FIG. 8(a), the classical orthogonal wavelet result in FIG. 8(b) and the proposed algorithm's result in FIG. 8(f) are slightly better; the Curvelet result in FIG. 8(c) and the NSCT result in FIG. 8(e) are over-bright, so the cloud image is not very clear and cannot highlight the cloud cluster information; and the detail and edge parts of the Contourlet result in FIG. 8(d) are not sharp enough.
The various fusion algorithms are applied to the infrared 1 channel and water vapor channel cloud images of typhoon "Pearl" in FIG. 7; the performance indices of the fusion results are shown in Table 3.
TABLE 3. Comparison of performance parameters of the fusion results of the infrared 1 channel and water vapor channel cloud images of typhoon "Pearl" in FIGS. 7(c)-(h)
As Table 3 shows, the average gradient and standard deviation of the proposed algorithm's fusion result are better than those of the other fusion algorithms, and the product of the four evaluation parameters is also the best, indicating the best fusion effect. The information entropy and average correlation coefficient are not optimal but differ little from the other algorithms' results: the information entropy differs by at most 0.025 and the average correlation coefficient by at most 0.005, so these two parameters are essentially on a par with the other fusion methods.
Then 39 × 39 images are cropped from the fusion results of the various methods, as shown in FIGS. 9(a)-(f), and typhoon center positioning is performed on them with the typhoon center positioning algorithm to verify the validity of the fusion algorithms. Because this group of typhoon cloud images is eyeless, the gray values are larger, but the crops of the various fusion results differ little. The center position found is marked with a "+" in the 512 × 512 fusion result images, as shown in FIGS. 10(a)-(h). The positioning results of the various fusion methods in FIGS. 10(a)-(h) differ: among the source images, the infrared 1 channel positioning of FIG. 10(a) is far from the center while the water vapor channel positioning of FIG. 10(b) is relatively close. The positioning results of the Laplacian pyramid fusion (FIG. 10(c)) and the NSCT fusion (FIG. 10(g)), as well as those of the classical discrete orthogonal wavelet (FIG. 10(d)), Curvelet (FIG. 10(e)) and Contourlet (FIG. 10(f)) fusions, are slightly closer to the center. The positioning result of the invention's fusion result in FIG. 10(h) is closest to the center and gives the best effect. The distance error of the typhoon center is computed from the longitude and latitude errors of the positioning; the center positioning errors for the fusion results of the infrared 1 channel and water vapor channel cloud images of typhoon "Pearl" at 00:00 on May 11, 2006 are shown in Table 4.
TABLE 4. Comparison of typhoon center positioning errors of the various fusion method results for the infrared 1 channel and water vapor channel cloud images of typhoon "Pearl" at 00:00 on May 11, 2006
As Table 4 shows, the typhoon center error of the proposed algorithm is 76.21 km, clearly superior to the center positioning results of the single infrared 1 channel and the other fusion methods, giving the best effect.
Experimental example 3:
To further illustrate the effectiveness of the proposed fusion algorithm, its computational complexity is analyzed below. The image fusion algorithms are run in MATLAB R2009a on a Dell OptiPlex 780 desktop computer with an Intel Core 2 Quad Q9400 2.66 GHz processor, 2 GB of memory (Kingston DDR3 1333 MHz) and the Windows XP Professional 32-bit SP3 operating system (DirectX 9.0c). The running times of the various fusion methods are measured using the second set of experimental images and are shown in Table 5.
TABLE 5. Running times of the various fusion algorithms
As Table 5 shows, apart from the Laplacian pyramid and classical orthogonal discrete wavelet image fusion algorithms, which run faster, the fusion algorithm provided by the invention uses less time than the remaining methods. The proposed fusion algorithm therefore has low computational complexity while obtaining a better fusion effect.
The three groups of experiments show that the proposed algorithm fuses two satellite cloud images well. Compared with the fusion results of 5 methods (the Laplacian pyramid, classical discrete orthogonal wavelet, Contourlet, Curvelet and NSCT image fusion methods), the algorithm achieves the best standard deviation and average gradient, an information entropy and average correlation coefficient on a par with the others, and the best comprehensive evaluation index. The fused images have good visual quality and clearly preserve the detail of the typhoon eye and cloud systems, so typhoon center positioning on the fusion results is more accurate; the method also suits eyeless typhoons, and the overall effect of the satellite cloud image fusion results is the best. Fusing three or more satellite cloud images by this method realizes multi-channel satellite cloud image fusion, helps combine more cloud image information, and improves the accuracy of typhoon center positioning.
Claims (3)
1. A multi-channel satellite cloud image fusion method based on the Shearlet transform, characterized by comprising the following steps:
step 1, performing the Shearlet transform on the registered source images A and B, each of size M × N, with W decomposition levels and T decomposition directions, where T = 2^r, r ∈ Z*, to obtain the high-frequency coefficients SH_A and SH_B and the low-frequency coefficients SL_A and SL_B;
step 2, performing Laplacian pyramid decomposition on the low-frequency coefficients SL_A and SL_B with Q decomposition levels, obtaining decomposed images LA and LB whose q-th level sub-images are LA_q and LB_q, 1 ≤ q ≤ Q;
step 3, fusing the top-level Laplacian pyramid sub-images LA_Q and LB_Q by averaging, obtaining the fusion result LF_Q:
LF_Q(i,j) = ( LA_Q(i,j) + LB_Q(i,j) ) / 2,
where 1 ≤ i ≤ CL_Q and 1 ≤ j ≤ RL_Q, CL_Q being the number of rows and RL_Q the number of columns of the Q-th level sub-image;
step 4, fusing the other Laplacian pyramid levels LA_q and LB_q, 1 ≤ q ≤ Q−1, with the larger-absolute-gray-value rule, obtaining the fusion result LF_q:
LF_q(i,j) = LA_q(i,j) if |LA_q(i,j)| ≥ |LB_q(i,j)|, and LF_q(i,j) = LB_q(i,j) otherwise;
step 5, reconstructing the fused Laplacian pyramid LF to obtain the fusion result TL_F of the low-frequency part;
step 6, computing the information entropy, average gradient and standard deviation of each sub-image of each level and direction in the high-frequency part of the Shearlet transform domain, the w-th level, t-th direction high-frequency sub-images being denoted SH_A^(w,t) and SH_B^(w,t), with 1 ≤ w ≤ W and 1 ≤ t ≤ T, each of the original image size M × N, the information entropy E being:
E = − Σ_{i=0}^{L−1} P_i · log2 P_i,
where P_i is the probability that a pixel in the sub-image has gray value i and L is the number of gray levels in the image; the average gradient G of a high-frequency sub-image is expressed as:
G = ( 1 / ((M−1)·(N−1)) ) · Σ_{ii=1}^{M−1} Σ_{jj=1}^{N−1} sqrt( ( Δx f(x_ii, y_jj)² + Δy f(x_ii, y_jj)² ) / 2 ),
Wherein,representing high frequency subgraphsOrIn xiiLine yjjThe pixel points of the columns have the following characteristics that ii is more than or equal to 1 and less than or equal to M, jj is more than or equal to 1 and less than or equal to N, and the standard deviation sigma of the high-frequency subgraph is expressed as:
where μ denotes the gray-level mean of the high-frequency sub-image;
step 7, normalizing the information entropy E, the average gradient G and the standard deviation σ of the high-frequency sub-images SH_A^(w,t) and SH_B^(w,t) to obtain the normalized entropy E_g, average gradient G_g and standard deviation σ_g, and selecting as the fused high-frequency sub-image SH_F^(w,t) the candidate whose product E_g · G_g · σ_g is larger;
step 8, applying nonlinear enhancement based on the discrete stationary wavelet transform to the fused high-frequency sub-image, maxh being the maximum absolute gray value over all pixels of the high-frequency sub-image SH_F^(w,t), so that the enhanced high-frequency sub-image is:
SH_F'^(w,t) = a · maxh · [ sigm( c · ( SH_F^(w,t)/maxh + b ) ) − sigm( −c · ( SH_F^(w,t)/maxh − b ) ) ],
wherein b =0.35, c = 20; a = 1/(d)1-d2),d1=sigm(c×(1+b)),d2=sigm(-c×(1-b));
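The gain function of step 8 follows directly from the stated constants. The sketch below assumes the standard sigmoid-based enhancement form consistent with the given a, d_1 and d_2 (it maps ±maxh to ±maxh); the surrounding discrete stationary wavelet transform of the sub-image is omitted.

```python
import numpy as np

def sigm(x):
    # Logistic sigmoid used by the gain function of step 8.
    return 1.0 / (1.0 + np.exp(-x))

def enhance_high_frequency(sh, b=0.35, c=20.0):
    # Point-wise nonlinear gain with the constants of step 8: it keeps
    # +/-maxh fixed, suppresses small coefficients and steeply amplifies
    # those with magnitude around b * maxh.
    maxh = np.max(np.abs(sh))
    if maxh == 0:
        return sh
    d1 = sigm(c * (1 + b))
    d2 = sigm(-c * (1 - b))
    a = 1.0 / (d1 - d2)
    x = sh / maxh                      # normalize gray values to [-1, 1]
    return maxh * a * (sigm(c * (x - b)) - sigm(-c * (x + b)))
```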
Step 9: perform the inverse Shearlet transformation on the fused Shearlet coefficient values (the low-frequency result TL_F and the enhanced high-frequency sub-images) to obtain the final fused image F.
2. The Shearlet transformation-based multi-channel satellite cloud image fusion method according to claim 1, wherein in step 1 the registered source images A and B are satellite cloud images returned by the meteorological satellite FY-2C, which has 5 channels (infrared channel 1, infrared channel 2, the water vapor channel, infrared channel 4 and the visible light channel); images from any of these channels may be selected and registered.
3. The Shearlet transformation-based multi-channel satellite cloud image fusion method according to claim 1 or 2, further comprising: fusing the cloud pictures of other channels with the fusion result in the same way, thereby realizing the fusion of three or more cloud pictures and achieving multi-channel satellite cloud picture fusion.
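Schematically, claim 3 is a pairwise fold of the two-image method; in the sketch below, `fuse_pair` is a hypothetical stand-in for the complete claim 1 procedure, not a function defined by the patent.

```python
def fuse_multichannel(images, fuse_pair):
    # Fold additional channel images into the running fusion result,
    # two at a time, as described in claim 3.
    fused = images[0]
    for img in images[1:]:
        fused = fuse_pair(fused, img)
    return fused
```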
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410056917.3A CN103839243B (en) | 2014-02-19 | 2014-02-19 | Multi-channel satellite cloud picture fusion method based on Shearlet conversion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103839243A true CN103839243A (en) | 2014-06-04 |
CN103839243B CN103839243B (en) | 2017-01-11 |
Family
ID=50802713
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410056917.3A Active CN103839243B (en) | 2014-02-19 | 2014-02-19 | Multi-channel satellite cloud picture fusion method based on Shearlet conversion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103839243B (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006017233A1 (en) * | 2004-07-12 | 2006-02-16 | Lehigh University | Image fusion methods and apparatus |
CN103116881A (en) * | 2013-01-27 | 2013-05-22 | 西安电子科技大学 | Remote sensing image fusion method based on PCA (principal component analysis) and Shearlet conversion |
Non-Patent Citations (3)
Title |
---|
QI-GUANG MIAO ET AL.: "A novel algorithm of image fusion using shearlets", 《OPTICS COMMUNICATIONS》 * |
WANG-Q LIM: "The Discrete Shearlet Transform: A New Directional Transform and Compactly Supported Shearlet Frames", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》 * |
YANG Hequn et al.: "Research progress in objective positioning methods for tropical cyclones based on satellite remote sensing", Journal of Tropical Oceanography (《热带海洋学报》) *
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104318532A (en) * | 2014-10-23 | 2015-01-28 | 湘潭大学 | Secondary image fusion method combined with compressed sensing |
CN104318532B (en) * | 2014-10-23 | 2017-04-26 | 湘潭大学 | Secondary image fusion method combined with compressed sensing |
CN108073865B (en) * | 2016-11-18 | 2021-10-19 | 南京信息工程大学 | Aircraft trail cloud identification method based on satellite data |
CN108073865A (en) * | 2016-11-18 | 2018-05-25 | 南京信息工程大学 | A kind of aircraft trail cloud recognition methods based on satellite data |
CN107230197A (en) * | 2017-05-27 | 2017-10-03 | 浙江师范大学 | Tropical cyclone based on satellite cloud picture and RVM is objective to determine strong method |
CN107230197B (en) * | 2017-05-27 | 2023-05-12 | 浙江师范大学 | Tropical cyclone objective strength determination method based on satellite cloud image and RVM |
CN109215008A (en) * | 2018-08-02 | 2019-01-15 | 上海海洋大学 | A kind of multispectral and panchromatic image fusion method of entirety two generations Bandelet transformation |
CN109272477A (en) * | 2018-09-11 | 2019-01-25 | 中国科学院长春光学精密机械与物理研究所 | A kind of fusion method and fusion treatment device based on NSST Yu adaptive binary channels PCNN |
CN109740629A (en) * | 2018-12-05 | 2019-05-10 | 电子科技大学 | A kind of non-down sampling contourlet decomposition transform system and its implementation based on FPGA |
CN109740629B (en) * | 2018-12-05 | 2022-03-15 | 电子科技大学 | Non-downsampling contourlet decomposition transformation system based on FPGA and implementation method thereof |
CN113284079A (en) * | 2021-05-27 | 2021-08-20 | 山东第一医科大学(山东省医学科学院) | Multi-modal medical image fusion method |
CN113284079B (en) * | 2021-05-27 | 2023-02-28 | 山东第一医科大学(山东省医学科学院) | Multi-modal medical image fusion method |
CN113487529A (en) * | 2021-07-12 | 2021-10-08 | 吉林大学 | Meteorological satellite cloud picture target detection method based on yolk |
Also Published As
Publication number | Publication date |
---|---|
CN103839243B (en) | 2017-01-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103839243B (en) | Multi-channel satellite cloud picture fusion method based on Shearlet conversion | |
Coburn et al. | A multiscale texture analysis procedure for improved forest stand classification | |
Karvonen | Baltic sea ice concentration estimation using SENTINEL-1 SAR and AMSR2 microwave radiometer data | |
CN102800074B (en) | Synthetic aperture radar (SAR) image change detection difference chart generation method based on contourlet transform | |
CN109447089B (en) | High-resolution arctic sea ice type extraction method based on super-resolution technology | |
CN111008664B (en) | Hyperspectral sea ice detection method based on space-spectrum combined characteristics | |
CN105930772A (en) | City impervious surface extraction method based on fusion of SAR image and optical remote sensing image | |
CN103226826B (en) | Based on the method for detecting change of remote sensing image of local entropy visual attention model | |
CN107230197B (en) | Tropical cyclone objective strength determination method based on satellite cloud image and RVM | |
CN103456018A (en) | Remote sensing image change detection method based on fusion and PCA kernel fuzzy clustering | |
CN103700075A (en) | Tetrolet transform-based multichannel satellite cloud picture fusing method | |
CN109584284B (en) | Hierarchical decision-making coastal wetland ground object sample extraction method | |
CN104331698A (en) | Remote sensing type urban image extracting method | |
CN104200471A (en) | SAR image change detection method based on adaptive weight image fusion | |
Xiao et al. | Segmentation of multispectral high-resolution satellite imagery using log Gabor filters | |
CN104217426A (en) | Object-oriented water-body extracting method based on ENVISAT ASAR and Landsat TM remote sensing data | |
CN113657324A (en) | Urban functional area identification method based on remote sensing image ground object classification | |
Zhang et al. | Remote sensing of impervious surfaces in tropical and subtropical areas | |
CN105513060A (en) | Visual perception enlightening high-resolution remote-sensing image segmentation method | |
CN118661207A (en) | Method for large-scale near real-time flood detection in geographical areas covering urban and rural areas and related computer program product | |
Venkatakrishnamoorthy et al. | Cloud enhancement of NOAA multispectral images by using independent component analysis and principal component analysis for sustainable systems | |
CN113837123A (en) | Mid-resolution remote sensing image offshore culture area extraction method based on spectral-spatial information combination | |
CN112434590A (en) | SAR image wind stripe identification method based on wavelet transformation | |
CN117115671A (en) | Soil quality analysis method and device based on remote sensing and electronic equipment | |
CN101908211A (en) | High spectral image fusion method based on variational method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||