CN115170406A - High-precision fusion method for optical image and SAR (synthetic aperture radar) intensity image
- Publication number
- CN115170406A (application number CN202210651992.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- sar
- fusion
- images
- optical
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4007—Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/10—Image enhancement or restoration using non-spatial domain filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration using histogram techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
- G06T2207/20064—Wavelet transform [DWT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Description
Technical Field
The invention relates to the technical field of high-precision fusion of heterogeneous satellite remote-sensing images, and in particular to a method for high-precision fusion of optical images and SAR intensity images.
Background Art
High-spatial-resolution optical satellite remote-sensing imagery can be used to monitor the operating environment of power transmission and transformation facilities, capturing vegetation cover, terrain and landform features, newly built houses, and large-scale third-party construction within a given range of those facilities. The spectral characteristics of optical imagery make it possible to monitor many kinds of surface change in this operating environment, but high-spatial-resolution optical images cannot penetrate cloud, have long revisit periods and are difficult to acquire, so that obtaining a single high-quality scene can take months; medium-resolution optical imagery, in turn, lacks the spatial resolution needed for safety monitoring of transmission and transformation equipment. SAR (Synthetic Aperture Radar) operates day and night in all weather and is unaffected by cloud and rain, so high-spatial-resolution SAR imagery can capture surface dynamics in a timely manner; however, SAR imagery contains no spectral information about ground objects and must be combined with optical imagery to identify surface objects reliably. Against this background, techniques that fuse high-spatial-resolution SAR with medium-spatial-resolution optical imagery to monitor power transmission and transformation equipment have emerged.
Summary of the Invention
The object of the present invention is to provide a method for fusing optical images with SAR intensity images at high precision, one that retains both the ability of SAR imagery to monitor surface changes in a timely manner and the advantage of optical imagery in identifying surface objects.
To achieve this object, the present invention adopts the following technical solution.
A method for high-precision fusion of an optical image and a SAR intensity image comprises the following steps:
acquiring a high-spatial-resolution X-band SAR image S1 of the study area and a medium-spatial-resolution optical image L1 of the same swath, taken over the same area on the same day;
pre-processing images S1 and L1 to obtain pre-processed images S2 and L2, respectively;
performing sub-pixel registration of images S2 and L2: with L2 as the base map, registering S2 onto L2 using a frequency-spectrum grading method, thereby reducing fusion blur;
fusing the registered high-resolution SAR image with the medium-resolution optical image: both data sources are normalised to grey-scale, the two RGB images are transformed to HSI space to obtain their respective intensity components, a wavelet transform is applied to the intensity components, a fusion rule is selected to obtain a new intensity component, and finally the inverse transform yields the fused RGB image.
Further, the pre-processing of S1 comprises orbit correction, image cropping, thermal-noise removal, and terrain correction using DEM data.
Further, the pre-processing of L1 comprises radiometric calibration, atmospheric correction, orthorectification, image fusion, image enhancement, and cropping to the study area.
Further, the sub-pixel registration of images S2 and L2 comprises the following steps:
pixel resampling: using bilinear interpolation over the four neighbouring pixel values, the L2 image is resampled to the same pixel spacing as S2, giving the resampled image L3;
registering S2 with L3: a multi-constraint GAN network is used to learn the mapping between visible-light and SAR remote-sensing images, the trained model is used to enlarge the number and diversity of the training samples, and a neural network extracts features for image-patch matching prediction; after coarse registration an offset parameter file is generated and the offsets are estimated, and finally the trained model removes the offsets, yielding sub-pixel registration; the new SAR image and optical image are exported as S3 and L4, respectively.
Further, the image fusion adopts a texture-weighted enhancement method to make the texture features of the SAR image clearer, comprising the following steps:
after noise reduction is complete, the image is reconstructed: the texture of strongly backscattering regions is strengthened and given a larger weight, while the texture of weakly backscattering regions is weakened, thereby highlighting ground-object texture; the texture-enhanced SAR image is denoted S4.
In the weighting formula, the weight coefficient is a user-defined constant, S3w is the image of the strongly backscattering regions, and S3v is the image of the weakly backscattering regions.
The advantages of the present invention are as follows:
Based on heterogeneous satellite remote-sensing imagery, the invention extracts feature information from the two kinds of image and fuses them, overcoming the difficulty of acquiring cloud-free optical imagery when optical data are used alone, compensating for the lack of ground-object spectral information in SAR imagery, and improving the accuracy of safety and environment monitoring for power transmission and transformation equipment; it provides first-hand imagery for the timely detection of third-party damage, large-scale construction, landslides and other hazards along transmission lines and their surroundings.
Registering the SAR and optical images with the frequency-spectrum grading method reduces the fusion blur caused by low registration accuracy.
The texture-weighted fusion algorithm enhances the texture information of the SAR image, so that the fused image contains both strong texture and strong spectral information and can effectively identify environmental changes around transmission lines.
Detailed Description of Embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely. Obviously, the described embodiments are only some, and not all, of the embodiments of the invention.
A method for high-precision fusion of an optical image and a SAR intensity image comprises the following steps.
S1. Acquire a high-spatial-resolution X-band SAR image S1 of the study area and a medium-spatial-resolution optical image L1 of the same swath, taken over the same area on the same day, and pre-process images S1 and L1:
The pre-processing of image S1 mainly comprises orbit correction, image cropping, thermal-noise removal, and terrain correction using DEM data, in preparation for registration with the medium-resolution optical image, i.e. the L1 image. Thermal noise is noise inherent in the SAR satellite system, produced by energy generated during the satellite's own operation, and its influence has to be removed from the image in several repeated passes. Because SAR is a side-looking imaging system, terrain relief distorts SAR images and causes foreshortening, layover and shadowing, so terrain correction is required. The processed image is denoted S2.
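The patent gives no code for these steps, and thermal-noise removal and terrain correction normally rely on a dedicated SAR toolbox and sensor metadata. Purely as an illustration of the kind of intensity-domain noise reduction applied to SAR data elsewhere in this method, a minimal Lee-type speckle filter sketch is shown below (numpy/scipy; the names are illustrative, and this is not the thermal-noise procedure of the patent):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(intensity: np.ndarray, window: int = 7) -> np.ndarray:
    """Minimal Lee speckle filter: blend each pixel between the local mean and
    its original value according to the local variance (illustrative only)."""
    img = intensity.astype(np.float64)
    local_mean = uniform_filter(img, size=window)
    local_sq_mean = uniform_filter(img * img, size=window)
    local_var = np.maximum(local_sq_mean - local_mean ** 2, 0.0)
    noise_var = img.var()                                  # crude global noise estimate
    weight = local_var / (local_var + noise_var + 1e-12)
    return local_mean + weight * (img - local_mean)
```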
The pre-processing of the medium-resolution optical image L1 mainly comprises radiometric calibration, atmospheric correction, orthorectification, image fusion, image enhancement, and cropping to the study area. Radiometric calibration converts the grey value (DN value) of the image into a radiance value or apparent reflectance, eliminating the error of the sensor itself and determining the accurate radiance at the sensor entrance. Note that the fusion here is only a fusion of resolution and spectral information; after fusion the study area is cropped, with the crop boundary and size taken from S1, finally giving the medium-resolution optical satellite image L2.
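The DN-to-radiance conversion described here follows the standard linear calibration model; the sketch below shows that model with placeholder gain, offset and solar parameters rather than values from the patent:

```python
import numpy as np

def dn_to_toa_reflectance(dn: np.ndarray, gain: float, offset: float,
                          esun: float, sun_elev_deg: float,
                          earth_sun_dist: float = 1.0) -> np.ndarray:
    """Linear radiometric calibration (DN -> at-sensor radiance) followed by
    conversion to top-of-atmosphere reflectance; gain/offset/esun come from
    the sensor metadata (placeholders here)."""
    radiance = gain * dn.astype(np.float64) + offset
    sun_zenith = np.deg2rad(90.0 - sun_elev_deg)
    return np.pi * radiance * earth_sun_dist ** 2 / (esun * np.cos(sun_zenith))

# hypothetical per-band metadata, for illustration only:
# toa = dn_to_toa_reflectance(dn_band, gain=0.05, offset=0.0, esun=1550.0, sun_elev_deg=55.0)
```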
S2. Perform sub-pixel registration of S2 and L2: with L2 as the base map, register S2 onto L2 using the frequency-spectrum grading method. The registration accuracy reaches 0.1 pixel, preparing the data for fusion and reducing fusion blur.
To reach sub-pixel registration accuracy, ordinary geo-referencing is far from sufficient, and special processing is required.
First, pixel resampling is carried out. Because the resolution of the L2 image is lower, the L2 image must be resampled to the same pixel spacing as S2. Bilinear interpolation is used: the pixel values of the four neighbouring points are weighted according to their distance from the interpolation point and combined linearly; in effect, the pixel value of a target point is derived from the pixel values of the four surrounding points (upper-left, upper-right, lower-left, lower-right). After resampling, image L3 is obtained. The interpolation coefficients are computed as follows:
a = (1 − t)(1 − u), b = (1 − t)·u, c = t·u, d = t·(1 − u)
where a, b, c and d are the four coordinate coefficients, and t and u are the fractional offsets of the target point from the neighbouring grid points along the two coordinate axes.
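With these coefficients, the resampled pixel value is the weighted sum of the four neighbouring pixels. A minimal numpy sketch of such bilinear resampling follows (the function and variable names are illustrative, not taken from the patent):

```python
import numpy as np

def bilinear_resample(src: np.ndarray, out_shape: tuple) -> np.ndarray:
    """Resample a 2-D image to out_shape with bilinear interpolation,
    weighting the four neighbouring pixels by a, b, c, d as defined above."""
    h, w = src.shape
    oh, ow = out_shape
    ys = np.linspace(0, h - 1, oh)
    xs = np.linspace(0, w - 1, ow)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    t = (ys - y0)[:, None]                     # vertical fraction
    u = (xs - x0)[None, :]                     # horizontal fraction
    a = (1 - t) * (1 - u); b = (1 - t) * u; c = t * u; d = t * (1 - u)
    return (a * src[np.ix_(y0, x0)] + b * src[np.ix_(y0, x1)]
            + c * src[np.ix_(y1, x1)] + d * src[np.ix_(y1, x0)])
```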
A multi-constraint GAN network is used to learn the mapping between visible-light and SAR remote-sensing images; the trained model is then used to enlarge the number and diversity of the training samples, after which a neural network extracts features for image-patch matching prediction. After coarse registration, an offset parameter file is generated and the offsets are estimated; finally, the trained model removes the offsets, giving sub-pixel registration. Once registration is complete, a new SAR image and a new optical image are exported, defined as S3 and L4 respectively.
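The patent does not spell out the internals of the frequency-spectrum grading method or of the GAN architecture. Purely as a generic illustration of frequency-domain offset estimation between two images to be registered, and not the patent's method, a phase-correlation sketch might look like this:

```python
import numpy as np

def phase_correlation_shift(ref: np.ndarray, mov: np.ndarray):
    """Estimate the integer (row, col) translation of `mov` relative to `ref`
    by phase correlation; sub-pixel accuracy would require an extra refinement
    step (e.g. interpolation around the correlation peak)."""
    cross = np.fft.fft2(mov) * np.conj(np.fft.fft2(ref))
    cross /= np.maximum(np.abs(cross), 1e-12)              # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    size = np.array(corr.shape, dtype=float)
    peak[peak > size / 2] -= size[peak > size / 2]          # wrap to signed offsets
    return tuple(peak)                                      # (dy, dx)
```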
S3. Fuse the registered high-resolution SAR image S3 with the registered medium-resolution optical image L4:
S31. Texture-weighted enhancement, which makes the texture of the SAR image stand out. The principle is a second round of noise reduction; once the noise reduction is complete, the image is reconstructed: the texture of strongly backscattering regions is strengthened and given a larger weight, while the texture of weakly backscattering regions is weakened, thereby highlighting ground-object texture. The SAR image with enhanced texture information is denoted S4; the weighting combines the strongly and weakly backscattering region images S3w and S3v with a user-defined constant weight coefficient, as sketched below.
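The formula itself appears in the original filing only as a figure and is not reproduced in the text; one plausible form of such a weighted recombination, writing the weight coefficient as α (an assumed symbol and form, not confirmed by the patent), is

S4 = α · S3w + (1 − α) · S3v, with α chosen so that the strongly backscattering regions S3w receive the larger weight.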
S32. Fuse the ground-object texture information of S4 with the spectral information of L4. First, an HSI transform is applied to the optical image, converting it from RGB to HSI space and separating the intensity component I (texture information) from the spectral components H and S. The separated intensity component I is then used to histogram-match the radar image, so that the histogram distribution of the SAR image follows that of the optical image and the spectral information is preserved effectively. A wavelet transform is applied to both the optical intensity component and the histogram-matched SAR image, giving high-frequency and low-frequency components for each; the two low-frequency components are fused with each other, as are the two high-frequency components. The inverse wavelet transform is applied to the fused low-frequency and high-frequency results to obtain a new intensity component I'. Finally, an inverse HSI transform is applied to the new intensity component I' together with the hue component H and saturation component S separated earlier from the optical image by the HSI transform, yielding the optical-radar fusion result in RGB space, i.e. the fused RGB image.
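As a compact illustration of this fusion step, the sketch below uses numpy and PyWavelets. It substitutes a simple mean-based intensity component and per-channel rescaling for the full HSI forward and inverse transforms, and the fusion rule shown (average the low-frequency sub-bands, keep the larger-magnitude high-frequency coefficients) is one common choice rather than the rule fixed by the patent; all function names are illustrative.

```python
import numpy as np
import pywt

def hist_match(source: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Match the grey-level histogram of `source` to that of `template`."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True, return_counts=True)
    t_vals, t_counts = np.unique(template.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    t_cdf = np.cumsum(t_counts) / template.size
    return np.interp(s_cdf, t_cdf, t_vals)[s_idx].reshape(source.shape)

def fuse_optical_sar(rgb: np.ndarray, sar: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    """Intensity-substitution fusion of an 8-bit RGB optical image (H, W, 3)
    with a co-registered SAR intensity image (H, W)."""
    rgb = rgb.astype(np.float64)
    intensity = rgb.mean(axis=2)                        # stand-in for the HSI intensity I
    sar_matched = hist_match(sar.astype(np.float64), intensity)
    cA_i, (cH_i, cV_i, cD_i) = pywt.dwt2(intensity, wavelet)
    cA_s, (cH_s, cV_s, cD_s) = pywt.dwt2(sar_matched, wavelet)
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)   # keep stronger detail
    fused = pywt.idwt2((0.5 * (cA_i + cA_s),                     # average approximations
                        (pick(cH_i, cH_s), pick(cV_i, cV_s), pick(cD_i, cD_s))),
                       wavelet)
    fused = fused[:intensity.shape[0], :intensity.shape[1]]      # trim possible padding
    ratio = fused / np.maximum(intensity, 1e-6)                  # inject new intensity I'
    return np.clip(rgb * ratio[..., None], 0, 255).astype(np.uint8)
```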
Finally, it should be noted that the foregoing are only preferred embodiments of the present invention and are not intended to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in those embodiments or replace some of their technical features by equivalents. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210651992.9A CN115170406A (en) | 2022-06-10 | 2022-06-10 | High-precision fusion method for optical image and SAR (synthetic aperture radar) intensity image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115170406A true CN115170406A (en) | 2022-10-11 |
Family
ID=83486069
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210651992.9A Pending CN115170406A (en) | 2022-06-10 | 2022-06-10 | High-precision fusion method for optical image and SAR (synthetic aperture radar) intensity image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115170406A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4386430A1 (en) * | 2022-12-16 | 2024-06-19 | Iceye Oy | Geolocation error detection method and system for synthetic aperture radar images |
WO2024126698A1 (en) * | 2022-12-16 | 2024-06-20 | Iceye Oy | Geolocation error detection method and system for synthetic aperture radar images |
CN116452483A (en) * | 2023-05-10 | 2023-07-18 | 北京道达天际科技股份有限公司 | An Image Fusion Method Based on Wavelet Transform and HSI Color Space |
CN117475310A (en) * | 2023-11-09 | 2024-01-30 | 生态环境部卫星环境应用中心 | SAR image-based human activity change detection method and system |
CN118366059A (en) * | 2024-06-20 | 2024-07-19 | 山东锋士信息技术有限公司 | Crop water demand calculating method based on optical and SAR data fusion |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115170406A (en) | High-precision fusion method for optical image and SAR (synthetic aperture radar) intensity image | |
Sirguey et al. | Improving MODIS spatial resolution for snow mapping using wavelet fusion and ARSIS concept | |
CN111337434A (en) | A method and system for estimating biomass of reclaimed vegetation in mining area | |
CN110703244B (en) | Method and device for identifying urban water body based on remote sensing data | |
Zhang et al. | Preprocessing and fusion analysis of GF-2 satellite Remote-sensed spatial data | |
CN108764326A (en) | Urban impervious surface extracting method based on depth confidence network | |
CN114998365A (en) | Ground feature classification method based on polarimetric interference SAR | |
CN112734683B (en) | Multi-scale SAR and infrared image fusion method based on target enhancement | |
CN113744249A (en) | Marine ecological environment damage investigation method | |
Xia et al. | Quality assessment for remote sensing images: approaches and applications | |
CN114332085B (en) | Optical satellite remote sensing image detection method | |
Shawal et al. | Fundamentals of digital image processing and basic concept of classification | |
CN113469104B (en) | Method and equipment for detecting surface water changes in radar remote sensing images based on deep learning | |
CN114509758A (en) | A method for selecting flat regions on the lunar surface based on fractal dimension and polarization decomposition | |
CN109978982B (en) | Point cloud rapid coloring method based on oblique image | |
Zhang et al. | A general thin cloud correction method combining statistical information and a scattering model for visible and near-infrared satellite images | |
Flemming | Design of semi-automatic algorithm for shoreline extraction using Synthetic Aperture Radar (SAR) images | |
Hessel et al. | Relative radiometric normalization using several automatically chosen reference images for multi-sensor, multi-temporal series | |
Chen et al. | A polarization-spectrum fusion framework based on multiscale transform and generative adversarial network for improving water and different vegetation distinguishability | |
CN110136128B (en) | SAR image change detection method based on Rao test | |
CN109740468B (en) | An adaptive Gaussian low-pass filtering method for extracting organic matter information from black soil | |
Wang et al. | Framework to create cloud-free remote sensing data using passenger aircraft as the platform | |
Mishra et al. | Spatial enhancement of SWIR band from Resourcesat-2A by preserving spectral details for accurate mapping of water bodies | |
CN118506207B (en) | Urban impervious surface extraction method based on SAR image and visible light image | |
CN114708514B (en) | Method and device for detecting forest felling change based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |