CN103646399A - Registering method for optical and radar images - Google Patents

Registering method for optical and radar images

Info

Publication number
CN103646399A
Authority
CN
China
Prior art keywords
image
registered
optical
radar
registration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310648087.9A
Other languages
Chinese (zh)
Inventor
吕江安
王峰
郝雪涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Center for Resource Satellite Data and Applications CRESDA
Original Assignee
China Center for Resource Satellite Data and Applications CRESDA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Center for Resource Satellite Data and Applications CRESDA filed Critical China Center for Resource Satellite Data and Applications CRESDA
Priority to CN201310648087.9A priority Critical patent/CN103646399A/en
Publication of CN103646399A publication Critical patent/CN103646399A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

A registration method for optical and radar images: (1) take the radar image as the reference image and the optical image as the image to be registered, and down-sample both to generate no fewer than 3 layers of images at different resolutions; (2) starting from the first, lowest-resolution layer, perform an image transformation on each layer using the negative mutual information value; (3) extract from the optical image and the radar image, respectively, the sets of coordinate feature points whose gradient magnitude exceeds a preset threshold; (4) using the translation parameters applied to the last-layer image transformation in step (2), migrate the optical-image coordinate feature point set extracted in step (3) onto the radar-image coordinate frame; (5) within the range of the migrated point set, optimize the objective function and take the translation parameters corresponding to its maximum as the fine registration parameters; (6) transform and resample the image to be registered with these fine registration parameters to obtain the registered image.

Description

A Registration Method for Optical and Radar Images

Technical Field

The invention relates to a processing method for registering optical and radar images.

Background Art

Remote sensing technology has moved into practical applications at both the social and scientific levels, including natural disaster management, climate change assessment, natural resource management and environmental protection, all of which involve long-term monitoring of the Earth's surface. In recent years, image registration has become very important in remote sensing applications. Image registration is a basic task in image processing: matching two or more images of the same object or scene acquired at different times, by different sensors, or from different viewpoints. In digital image processing it is used to align two or more digital images accurately for analysis and comparison, and it draws on knowledge from physiology, computer vision, pattern recognition, image understanding and other fields. Accurate registration algorithms are essential for mosaicking remote sensing satellite images, tracking environmental change on the Earth's surface, and basic scientific research.

Image registration, i.e. aligning two images by computing a set of transformation parameters, looks like a simply defined problem for which a clear, general-purpose method should already exist, but in practice this is far from the case. Because of the multitude of applications involving widely varying data, image registration has developed into a complex and challenging task encompassing many methodological strategies. The growing ability to acquire images in remote sensing, medicine and other fields has driven a great deal of research on registration techniques over the past 20 years. So far, however, no single registration method can solve all registration problems, and algorithms have to be designed for the specific data type and application. Registration algorithms are commonly divided into region-based and feature-based methods, but these are generally only suitable for images with small grayscale differences and a linear relationship between the data, and are not suitable for registering images with large grayscale differences.

Summary of the Invention

The technical problem solved by the present invention is: because different sensors have different imaging principles, there are large grayscale differences between the resulting images (such as optical images and radar images), the same scene presents different features in the different image types, and common features are hard to extract, so registration methods based on grayscale correlation or image features are no longer applicable and perform poorly. The invention therefore proposes a new method to solve this problem and applies it to the registration of optical and radar images.

The technical solution of the present invention is a registration method for optical and radar images with the following steps:

(1) Take the radar image as the reference image and the optical image as the image to be registered, and down-sample both to generate no fewer than 3 layers of images at different resolutions.

(2) Starting from the first, lowest-resolution layer, process each layer as follows:

(2.1) Use a kernel density function to compute the marginal and joint probability distributions of the reference image and the image to be registered, and compute the negative mutual information between them; perform iterative optimization with the negative mutual information as the objective function, and obtain the translation parameters at the minimum similarity-measure value or when the prescribed number of iterations is reached.

(2.2) Transform the image to be registered on the next layer with these translation parameters, and repeat step (2.1) for the transformed image and the reference image until the image on the last, highest-resolution layer has been transformed.

(3) Extract from the optical image and the radar image, respectively, the sets of coordinate feature points whose gradient magnitude exceeds a preset threshold.

(4) Using the translation parameters applied to the last layer in step (2), migrate the optical-image coordinate feature point set extracted in step (3) onto the radar-image coordinate frame, giving the point set S_1(P).

(5) Within the range of the point set migrated in step (4), optimize the objective function and take the translation parameters corresponding to its maximum as the fine registration parameters.

(6) Transform and resample the image to be registered with the fine registration parameters to obtain the registered image.

The objective function in step (5) is:

F(S_1(p)) = \sum_{(x_g, y_g) \in S_1(p)} |\nabla U_1(x_g, y_g)|^2

where:

|\nabla U_1(x_g, y_g)| = \sqrt{U_x^2(x_g, y_g) + U_y^2(x_g, y_g)}

U_x(x_g, y_g) = 0.5 (U(x_g, y_g+1) - U(x_g, y_g-1))

U_y(x_g, y_g) = 0.5 (U(x_g+1, y_g) - U(x_g-1, y_g))

U(x_g, y_g+1) denotes the gray value at (x_g, y_g+1).

Compared with the prior art, the present invention has the following beneficial effects:

(1) Targeting the characteristics of multi-source remote sensing data, the invention makes a breakthrough in the similarity measure, a key aspect of registration, by adopting new similarity criteria: one based on the statistical distribution of the images and one based on their inherent structural features. This effectively removes the limitation of traditional registration methods, whose similarity measures require the images to have similar, linearly related gray levels. The method needs no preprocessing such as segmentation or feature extraction, is widely applicable, and offers high accuracy and good robustness.

(2) The invention uses a kernel density function to estimate the probability density of the variables. The earliest approach, approximating the density with a histogram, has a large estimation error, and the space needed to store the histogram grows exponentially with the number of feature variables of the samples.

(3) The invention adopts a multi-resolution registration strategy, solving the registration problem in a coarse-to-fine manner. This avoids local extrema of the mutual information and improves registration accuracy, while also increasing the speed and robustness of the registration algorithm.

(4) Because the similarity criterion is built from the inherent similarity of images of the same scene, the algorithm concentrates on optimizing the objective function, and feature extraction is greatly simplified: a set of high-gradient-magnitude points only needs to be extracted from the optical image, whose structure is relatively well defined, effectively avoiding the difficulty of accurately extracting and matching corresponding features in both images.

(5) Strong noise resistance and good robustness. When a local feature exists in one image but has no counterpart in the other, this only affects the magnitude of the optimum of the criterion function, not the location of the optimum, so a registration solution can still be obtained.

Brief Description of the Drawings

Fig. 1 is a flowchart of the present invention;

Fig. 2 is a schematic diagram of the multi-resolution pyramid layers.

Detailed Description of the Embodiments

The invention is described in detail below with reference to the accompanying drawings. As shown in Fig. 1, a registration method for optical and radar images comprises the following steps:

(1) Take the radar image as the reference image and the optical image as the image to be registered, and down-sample both to generate no fewer than 3 layers of images at different resolutions; this embodiment uses 3 layers as an example.

The images are layered by building a Gaussian image pyramid, as shown in Fig. 2. The Gaussian pyramid is computed as follows:

g_L(i, j) = \sum_{m=-2}^{2} \sum_{n=-2}^{2} w(m, n)\, g_{L-1}(2i + m, 2j + n), \quad 0 < L \le N,\; 0 \le i < C_L,\; 0 \le j < R_L

The Gaussian pyramid is an image sequence in which each layer is a low-pass-filtered copy of the previous layer. In the formula above, g_L(i, j) is the image of layer L, C_L and R_L are the numbers of columns and rows of layer L, N is the total number of layers, and w(m, n) is the window function, usually a 5x5 Gaussian template.
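As a concrete illustration of this pyramid construction, the following is a minimal Python sketch; the separable 5x5 binomial kernel used to approximate the Gaussian template, the SciPy convolution, and the factor-of-2 subsampling are illustrative assumptions rather than details specified by the patent.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_pyramid(image, levels=3):
    """Return [g_0, g_1, ..., g_{levels-1}], with g_0 the full-resolution image."""
    # Separable 5x5 binomial approximation of the Gaussian window w(m, n).
    w1d = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    w = np.outer(w1d, w1d)                      # 5x5 template
    pyramid = [np.asarray(image, dtype=np.float64)]
    for _ in range(1, levels):
        smoothed = convolve(pyramid[-1], w, mode='nearest')
        # g_L(i, j) = sum_{m,n} w(m, n) g_{L-1}(2i+m, 2j+n)
        pyramid.append(smoothed[::2, ::2])
    return pyramid
```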

(2) On the first, lowest-resolution layer, use the kernel density function to compute the marginal and joint probability distributions of the reference image and the image to be registered, and compute the negative mutual information between them; perform iterative optimization with the negative mutual information as the objective function, and obtain the translation parameters at the minimum similarity-measure value or when the prescribed number of iterations is reached. On the intermediate-resolution layer, first transform the image to be registered with the translation parameters obtained on the previous layer, then again compute the marginal and joint probability distributions with the kernel density function and the negative mutual information between the two images, and iterate in the same way; the resulting translation parameters are recorded as the coarse registration parameters. On the high-resolution layer, transform the image to be registered with the translation parameters obtained on the previous layer.
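A simplified sketch of this coarse-to-fine loop is given below. It replaces the patent's iterative optimization with an exhaustive integer-pixel search around the shift propagated from the coarser level, and it re-optimizes on every level; `negative_mutual_information` is the similarity measure sketched in a later snippet. All of these choices are assumptions made for illustration.

```python
import numpy as np

def coarse_register(ref_pyramid, mov_pyramid, search_radius=8):
    """Return the coarse translation (mu1, mu2) at full resolution.

    ref_pyramid and mov_pyramid must be ordered coarse -> fine.
    """
    mu = np.zeros(2)
    for ref, mov in zip(ref_pyramid, mov_pyramid):
        mu *= 2.0                                # propagate the shift to the finer level
        best_s, best_mu = np.inf, mu.copy()
        for dy in range(-search_radius, search_radius + 1):
            for dx in range(-search_radius, search_radius + 1):
                cand = mu + np.array([dy, dx], dtype=np.float64)
                s = negative_mutual_information(ref, mov, cand)
                if s < best_s:                   # minimise the negative mutual information
                    best_s, best_mu = s, cand
        mu = best_mu
    return mu                                    # coarse registration parameters P0
```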

For two images X and Y that need to be registered to each other, choose X as the reference image and Y as the image to be registered. Ideally, every pixel in image Y is matched, via mutual information, to its corresponding pixel position in image X.

The mutual information of images X and Y is defined as:

I(X; Y) = H(X) + H(Y) - H(X, Y)

where H(X) and H(Y) are the marginal entropies of images X and Y, and H(X, Y) is their joint entropy:

H(X) = -\sum_x P_x(x) \log P_x(x)

H(Y) = -\sum_y P_y(y) \log P_y(y)

H(X, Y) = -\sum_{x, y} P_{x,y}(x, y) \log P_{x,y}(x, y)

P_x(x) and P_y(y) are the marginal probability densities of images X and Y respectively, and P_{x,y}(x, y) is their joint probability density.

To estimate the probability density function P(x), samples close to x should carry more weight than samples far from x. The kernel density method is an accurate non-parametric method that estimates the probability density of a random variable directly from a measurement sample X of n samples. The probability density at a point x estimated from the measurement sample X is defined as:

P(x) = \frac{1}{n} \sum_{x_l \in X} W\!\left( \frac{x - x_l}{h} \right)

where W(x) is the window function, x_l is a sample point in the neighbourhood of x, and h is the window-width parameter. The window function must satisfy the following two conditions:

W(x) \ge 0

\int W(x)\, dx = 1
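A minimal Parzen-window sketch of this density estimate is shown below, using a Gaussian window W (which is non-negative and integrates to 1). The Gaussian choice, the window width h, and the discretisation onto an intensity grid are assumptions for illustration only.

```python
import numpy as np

def parzen_density(samples, grid, h=8.0):
    """Estimate P(x) on `grid` from 1-D intensity samples x_l (Parzen window)."""
    samples = np.asarray(samples, dtype=np.float64)
    grid = np.asarray(grid, dtype=np.float64)
    u = (grid[:, None] - samples[None, :]) / h            # (x - x_l) / h
    w = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)      # Gaussian window W
    return w.mean(axis=1) / h                             # (1/n) * sum_l W((x - x_l)/h) / h

def parzen_joint_density(samples_x, samples_y, grid, h=8.0):
    """Joint density P(x, y) of paired intensity samples, on grid x grid."""
    sx = np.asarray(samples_x, dtype=np.float64)
    sy = np.asarray(samples_y, dtype=np.float64)
    grid = np.asarray(grid, dtype=np.float64)
    wx = np.exp(-0.5 * ((grid[:, None] - sx[None, :]) / h) ** 2)
    wy = np.exp(-0.5 * ((grid[:, None] - sy[None, :]) / h) ** 2)
    joint = wx @ wy.T / (2.0 * np.pi * h ** 2 * len(sx))  # product-kernel estimate
    return joint / max(joint.sum(), 1e-12)                # normalise to a discrete distribution
```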

Image registration is then a function-optimization problem: find the set of transformation parameters μ that maximizes the similarity measure value S, denoted μ_opt:

\mu_{opt} = \arg\max_{\mu} S(\mu)

The invention takes the negative of the mutual information as the similarity measure function S. With X and Y denoting the reference image and the image to be registered, the negative mutual information between them, expressed as a function of the transformation parameters μ, is:

S(\mu) = -\sum_{y \in Y} \sum_{x \in X} P_{x,y}(x, y; \mu) \log_2 \frac{P_{x,y}(x, y; \mu)}{P_x(x; \mu)\, P_y(y; \mu)}

P_{x,y}(x, y; μ) is the joint probability density under the transformation parameters μ; P_y(y; μ) is the marginal probability density of the image to be registered under μ; and P_x(x; μ) is the marginal probability density of the reference image under μ.
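The following sketch evaluates S(μ) for a candidate translation μ. For brevity it estimates the joint distribution with a joint histogram instead of the kernel density estimate described above, and applies the translation with a bilinear image shift; both simplifications are assumptions of this illustration.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def negative_mutual_information(ref, mov, mu, bins=64):
    """S(mu) = -I(X; Y(mu)), estimated from a joint histogram of intensities."""
    ref = np.asarray(ref, dtype=np.float64)
    moved = nd_shift(np.asarray(mov, dtype=np.float64), shift=mu,
                     order=1, mode='nearest')              # apply the translation mu to Y
    hist, _, _ = np.histogram2d(ref.ravel(), moved.ravel(), bins=bins)
    pxy = hist / hist.sum()                                # joint distribution P_xy
    px = pxy.sum(axis=1, keepdims=True)                    # marginal P_x
    py = pxy.sum(axis=0, keepdims=True)                    # marginal P_y
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))   # I(X; Y)
    return -mi                                             # negative mutual information
```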

The gradient of the mutual information is:

\nabla S = \left[ \frac{\partial S}{\partial \mu_1} \;\; \frac{\partial S}{\partial \mu_2} \;\; \cdots \;\; \frac{\partial S}{\partial \mu_k} \right]^T

\frac{\partial S}{\partial \mu_k} = -\sum_{y \in Y} \sum_{x \in X} \frac{\partial p(x, y; \mu)}{\partial \mu_k} \log \frac{p(x, y; \mu)}{p_y(y; \mu)}

where ∂p(x, y; μ)/∂μ_k is the k-th partial derivative of the joint probability distribution, and k ∈ Z, i.e. k is an integer.

(3) Extract from the optical image and the radar image, respectively, the sets of coordinate feature points whose gradient magnitude exceeds a preset threshold.

The extraction method is the same for both image types; taking the optical image as an example, first compute the gradient magnitude of the image:

|\nabla U_1(i, j)| = \sqrt{U_x^2(i, j) + U_y^2(i, j)}

U_x(i, j) = 0.5 (U(i, j+1) - U(i, j-1))

U_y(i, j) = 0.5 (U(i+1, j) - U(i-1, j))

Take the points whose gradient magnitude lies in the top 25% as the feature point set S_1(p); the points in the set are denoted (x_g, y_g).
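A sketch of this extraction step, using the central-difference gradients defined above and keeping the top 25% of gradient magnitudes; the (row, column) coordinate convention is an assumption of the sketch.

```python
import numpy as np

def high_gradient_points(U, keep_fraction=0.25):
    """Return an (N, 2) array of (x_g, y_g) coordinates with the largest gradient magnitude."""
    U = np.asarray(U, dtype=np.float64)
    Ux = np.zeros_like(U)
    Uy = np.zeros_like(U)
    Ux[:, 1:-1] = 0.5 * (U[:, 2:] - U[:, :-2])    # U_x(i, j) = 0.5*(U(i, j+1) - U(i, j-1))
    Uy[1:-1, :] = 0.5 * (U[2:, :] - U[:-2, :])    # U_y(i, j) = 0.5*(U(i+1, j) - U(i-1, j))
    mag = np.sqrt(Ux ** 2 + Uy ** 2)              # |grad U(i, j)|
    threshold = np.quantile(mag, 1.0 - keep_fraction)
    rows, cols = np.nonzero(mag >= threshold)     # points in the top 25% of magnitudes
    return np.stack([rows, cols], axis=1)         # each row is one feature point (x_g, y_g)
```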

(4) Using the translation parameters applied to the last layer in step (2), migrate the optical-image coordinate feature point set extracted in step (3) onto the radar-image coordinate frame, giving the point set S_1(P).

With the coarse-registration translation parameters P_0 = (μ_1, μ_2) obtained in step (2), add P_0 to the coordinates of every point in the point set S_1(x, y) to complete the migration.

For (X, Y) ∈ S_1(P):

X = x + \mu_1

Y = y + \mu_2
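This migration is a plain translation of the point coordinates; a one-line sketch:

```python
import numpy as np

def migrate_points(points, p0):
    """points: (N, 2) array of (x_g, y_g); p0: coarse translation (mu1, mu2)."""
    # X = x + mu1, Y = y + mu2
    return np.asarray(points, dtype=np.float64) + np.asarray(p0, dtype=np.float64)
```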

(5) Within the range of the point set migrated in step (4), optimize the objective function and take the translation parameters corresponding to its maximum as the fine registration parameters.

The objective function F(S_1(p)) to be optimized, abbreviated F below, is:

F(S_1(p)) \triangleq \sum_{(x_g, y_g) \in S_1(p)} |\nabla U_1(x_g, y_g)|^2

where ∇U_1(x_g, y_g) is the gradient value at the point (x_g, y_g).

Parameter iteration: p_{n+1} = p_n + (H_p^n)^{-1} \nabla_p F_n

∇_p F_n is the gradient of F at iteration n with parameters p, and H_p^n is the Hessian matrix of F, computed as follows:

\nabla_p F_n = \left[ \frac{\partial F}{\partial P_1} \;\; \frac{\partial F}{\partial P_2} \;\; \cdots \;\; \frac{\partial F}{\partial P_m} \right]^T

H_p^n = \left[ \frac{\partial^2 F}{\partial P_i\, \partial P_j} \right], \quad i, j = 1, \ldots, m
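A sketch of this fine-registration step is given below. It assumes that U_1 is the radar (reference) image, so that F(p) sums the squared gradient magnitude of the radar image at the migrated point locations; bilinear sampling and finite-difference derivatives of F are further illustrative assumptions, and the Newton step is written in the standard ascent form (the patent's p_{n+1} = p_n + (H_p^n)^{-1} ∇_p F_n differs only in the sign convention of the Hessian).

```python
import numpy as np

def gradient_magnitude_squared(U):
    """|grad U|^2 with the central differences used above."""
    U = np.asarray(U, dtype=np.float64)
    Ux = np.zeros_like(U)
    Uy = np.zeros_like(U)
    Ux[:, 1:-1] = 0.5 * (U[:, 2:] - U[:, :-2])
    Uy[1:-1, :] = 0.5 * (U[2:, :] - U[:-2, :])
    return Ux ** 2 + Uy ** 2

def sample_bilinear(img, pts):
    """Bilinearly sample img at fractional (row, col) points."""
    r = np.clip(pts[:, 0], 0, img.shape[0] - 2)
    c = np.clip(pts[:, 1], 0, img.shape[1] - 2)
    r0, c0 = np.floor(r).astype(int), np.floor(c).astype(int)
    fr, fc = r - r0, c - c0
    return ((1 - fr) * (1 - fc) * img[r0, c0] + (1 - fr) * fc * img[r0, c0 + 1] +
            fr * (1 - fc) * img[r0 + 1, c0] + fr * fc * img[r0 + 1, c0 + 1])

def fine_register(U1, points, p0, iters=20, eps=0.5):
    """Refine the translation p, starting from the coarse estimate p0, by maximising F."""
    G = gradient_magnitude_squared(U1)
    F = lambda p: sample_bilinear(G, points + p).sum()    # F(S1(p)) over the migrated points
    p = np.asarray(p0, dtype=np.float64)
    steps = (np.array([eps, 0.0]), np.array([0.0, eps]))
    for _ in range(iters):
        # Finite-difference gradient and Hessian of F with respect to p.
        g = np.array([(F(p + d) - F(p - d)) / (2.0 * eps) for d in steps])
        H = np.empty((2, 2))
        for i, di in enumerate(steps):
            for j, dj in enumerate(steps):
                H[i, j] = (F(p + di + dj) - F(p + di - dj) -
                           F(p - di + dj) + F(p - di - dj)) / (4.0 * eps ** 2)
        try:
            p = p - np.linalg.solve(H, g)                  # Newton step towards the maximum of F
        except np.linalg.LinAlgError:
            break
    return p                                               # fine registration parameters
```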

(6) Transform and resample the image to be registered with the fine registration parameters above to obtain the registered image.
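A sketch of this final step, using SciPy's bilinear shift as an assumed, convenient choice for the resampling:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def apply_registration(image_to_register, p_fine):
    """Translate and resample the image to be registered with the fine parameters."""
    return nd_shift(np.asarray(image_to_register, dtype=np.float64),
                    shift=p_fine, order=1, mode='nearest')
```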

Parts of the present invention that are not described in detail belong to the common knowledge of those skilled in the art.
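Tying the sketches above together, a hypothetical end-to-end driver might look as follows; the function names refer to the earlier snippets and are assumptions of this illustration, not names used in the patent.

```python
import numpy as np

def register_optical_to_radar(radar, optical, levels=3):
    # Steps (1)-(2): Gaussian pyramids and coarse translation from negative mutual information.
    ref_pyr = gaussian_pyramid(radar, levels)[::-1]        # reorder coarse -> fine
    mov_pyr = gaussian_pyramid(optical, levels)[::-1]
    p0 = coarse_register(ref_pyr, mov_pyr)
    # Step (3): high-gradient feature points of the optical image.
    pts = high_gradient_points(optical)
    # Steps (4)-(5): the migration by p0 is folded into fine_register, which starts its
    # search at p0 (equivalently, at the migrated point set migrate_points(pts, p0)).
    p_fine = fine_register(radar, pts, p0)
    # Step (6): transform and resample the image to be registered.
    return apply_registration(optical, p_fine)
```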

Claims (2)

1. A registration method for optical and radar images, characterized by the following steps:

(1) taking the radar image as the reference image and the optical image as the image to be registered, and down-sampling both to generate no fewer than 3 layers of images at different resolutions;

(2) starting from the first, lowest-resolution layer, processing each layer as follows:

(2.1) using a kernel density function to compute the marginal and joint probability distributions of the reference image and the image to be registered, and computing the negative mutual information between them; performing iterative optimization with the negative mutual information as the objective function, and obtaining the translation parameters at the minimum similarity-measure value or when the prescribed number of iterations is reached;

(2.2) transforming the image to be registered on the next layer with the translation parameters, and repeating step (2.1) for the transformed image and the reference image until the image on the last, highest-resolution layer has been transformed;

(3) extracting from the optical image and the radar image, respectively, the sets of coordinate feature points whose gradient magnitude exceeds a preset threshold;

(4) using the translation parameters applied to the last layer in step (2), migrating the optical-image coordinate feature point set extracted in step (3) onto the radar-image coordinate frame to obtain the point set S_1(P);

(5) within the range of the point set migrated in step (4), optimizing the objective function and taking the translation parameters corresponding to its maximum as the fine registration parameters;

(6) transforming and resampling the image to be registered with the fine registration parameters to obtain the registered image.

2. The registration method for optical and radar images according to claim 1, characterized in that the objective function in step (5) is:

F(S_1(p)) = \sum_{(x_g, y_g) \in S_1(p)} |\nabla U_1(x_g, y_g)|^2

where:

|\nabla U_1(x_g, y_g)| = \sqrt{U_x^2(x_g, y_g) + U_y^2(x_g, y_g)}

U_x(x_g, y_g) = 0.5 (U(x_g, y_g+1) - U(x_g, y_g-1))

U_y(x_g, y_g) = 0.5 (U(x_g+1, y_g) - U(x_g-1, y_g))

U(x_g, y_g+1) denotes the gray value at (x_g, y_g+1).
CN201310648087.9A 2013-12-04 2013-12-04 Registering method for optical and radar images Pending CN103646399A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310648087.9A CN103646399A (en) 2013-12-04 2013-12-04 Registering method for optical and radar images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310648087.9A CN103646399A (en) 2013-12-04 2013-12-04 Registering method for optical and radar images

Publications (1)

Publication Number Publication Date
CN103646399A true CN103646399A (en) 2014-03-19

Family

ID=50251609

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310648087.9A Pending CN103646399A (en) 2013-12-04 2013-12-04 Registering method for optical and radar images

Country Status (1)

Country Link
CN (1) CN103646399A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574877A (en) * 2015-12-21 2016-05-11 中国资源卫星应用中心 Thermal infrared image registering method based on multiple dimensioned characteristic
CN109190651A (en) * 2018-07-06 2019-01-11 同济大学 Optical imagery and radar image matching process based on multichannel convolutive neural network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6681057B1 (en) * 2000-02-22 2004-01-20 National Instruments Corporation Image registration system and method implementing PID control techniques
US20050094898A1 (en) * 2003-09-22 2005-05-05 Chenyang Xu Method and system for hybrid rigid registration of 2D/3D medical images
CN101071505A (en) * 2007-06-18 2007-11-14 华中科技大学 Multi likeness measure image registration method
CN101667293A (en) * 2009-09-24 2010-03-10 哈尔滨工业大学 Method for conducting high-precision and steady registration on diversified sensor remote sensing images
CN103020945A (en) * 2011-09-21 2013-04-03 中国科学院电子学研究所 Remote sensing image registration method of multi-source sensor
CN103218811A (en) * 2013-03-29 2013-07-24 中国资源卫星应用中心 Statistical distribution-based satellite multi-spectral image waveband registration method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6681057B1 (en) * 2000-02-22 2004-01-20 National Instruments Corporation Image registration system and method implementing PID control techniques
US20050094898A1 (en) * 2003-09-22 2005-05-05 Chenyang Xu Method and system for hybrid rigid registration of 2D/3D medical images
CN101071505A (en) * 2007-06-18 2007-11-14 华中科技大学 Multi likeness measure image registration method
CN101667293A (en) * 2009-09-24 2010-03-10 哈尔滨工业大学 Method for conducting high-precision and steady registration on diversified sensor remote sensing images
CN103020945A (en) * 2011-09-21 2013-04-03 中国科学院电子学研究所 Remote sensing image registration method of multi-source sensor
CN103218811A (en) * 2013-03-29 2013-07-24 中国资源卫星应用中心 Statistical distribution-based satellite multi-spectral image waveband registration method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CHEN TIANZE et al.: "Edge feature matching of remote sensing images via parameter decomposition of affine transformation model", ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 17, 1 September 2012 (2012-09-01) *
ULEEN: "Gradient" (梯度), Sina Blog, HTTP://BLOG.SINA.COM.CN/S/BLOG_6F57A7150100OOIO.HTML, 10 January 2011 (2011-01-10) *
Y. KELLER et al.: "Robust multi-sensor image registration using pixel migration", 2002 IEEE Sensor Array and Multichannel Signal Processing Workshop Proceedings, 6 August 2008 (2008-08-06) *
YOSI KELLER et al.: "Implicit similarity: a new approach to multi-sensor image registration", CVPR 2003, 20 June 2003 (2003-06-20) *
李孟君 et al.: "Registration method for optical and SAR images based on implicit similarity" (基于隐含相似性的光学和SAR图像配准方法), Journal of Image and Graphics (中国图象图形学报), vol. 14, no. 11, 15 November 2009 (2009-11-15) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574877A (en) * 2015-12-21 2016-05-11 中国资源卫星应用中心 Thermal infrared image registering method based on multiple dimensioned characteristic
CN109190651A (en) * 2018-07-06 2019-01-11 同济大学 Optical imagery and radar image matching process based on multichannel convolutive neural network

Similar Documents

Publication Publication Date Title
CN103218811B (en) A kind of satellite multispectral image waveband registration method of Corpus--based Method distribution
CN103839265B (en) SAR image registration method based on SIFT and normalized mutual information
CN103455797B (en) Detection and tracking method of moving small target in aerial shot video
CN106228129B (en) A kind of human face in-vivo detection method based on MATV feature
Yu et al. A fast and fully automatic registration approach based on point features for multi-source remote-sensing images
CN113838191A (en) A 3D reconstruction method based on attention mechanism and monocular multi-view
CN102800098B (en) Multi-characteristic multi-level visible light full-color and multi-spectrum high-precision registering method
CN112254656B (en) A Stereo Vision 3D Displacement Measurement Method Based on Structural Surface Point Features
CN107516322B (en) Image object size and rotation estimation calculation method based on log polar space
CN102722887A (en) Image registration method and device
CN102169581A (en) Feature vector-based fast and high-precision robustness matching method
CN102855621A (en) Infrared and visible remote sensing image registration method based on salient region analysis
CN113160287A (en) Complex component point cloud splicing method and system based on feature fusion
CN102446356A (en) Parallel self-adaptive matching method for obtaining remote sensing images with uniformly distributed matching points
CN103761768A (en) Stereo matching method of three-dimensional reconstruction
CN102881012B (en) Visual target tracking method for target scale change
CN106056625A (en) Airborne infrared moving target detection method based on geographical homologous point registration
CN110390338A (en) A High Precision Matching Method for SAR Based on Nonlinear Guided Filtering and Ratio Gradient
CN103035004A (en) Circular target centralized positioning method under large visual field
CN103646399A (en) Registering method for optical and radar images
CN115601569A (en) A method and system for optimal matching of heterogeneous images based on improved PIIFD
CN104484647B (en) A kind of high-resolution remote sensing image cloud height detection method
CN114581864A (en) Transformer-based dynamic densely aligned vehicle re-identification technology
CN105205825B (en) Multiresolution based on NSCT domains is infrared with visible ray Scene matching method
CN103559722B (en) Based on the sequence image amount of jitter computing method of gray scale linear modelling

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (Application publication date: 20140319)