CN103646399A - Registering method for optical and radar images - Google Patents

Registering method for optical and radar images

Info

Publication number
CN103646399A
Authority
CN
China
Prior art keywords
image
registered
optical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310648087.9A
Other languages
Chinese (zh)
Inventor
吕江安
王峰
郝雪涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Center for Resource Satellite Data and Applications CRESDA
Original Assignee
China Center for Resource Satellite Data and Applications CRESDA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Center for Resource Satellite Data and Applications CRESDA filed Critical China Center for Resource Satellite Data and Applications CRESDA
Priority to CN201310648087.9A priority Critical patent/CN103646399A/en
Publication of CN103646399A publication Critical patent/CN103646399A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

Disclosed is a registration method for optical and radar images: (1) with the radar image as the reference image and the optical image as the image to be registered, both images are down-sampled to generate no fewer than three layers of images at different resolutions; (2) starting from the first, lowest-resolution layer, the image of each layer is transformed using negative mutual information values; (3) coordinate feature point sets whose gradient magnitudes exceed a preset threshold are extracted from the optical and radar images respectively; (4) the translation transformation parameters used for the last layer's transformation in step (2) transfer the optical-image feature point set extracted in step (3) into radar-image coordinates; (5) within the range of the transferred point set, an objective function is optimized and the translation transformation parameter at its maximum is selected as the fine registration parameter; (6) the image to be registered is transformed and resampled with the fine registration parameter to obtain the registered image.

Description

Optical and radar image registration method
Technical Field
The invention relates to a processing method for registering optical and radar images.
Background
Remote sensing technology has moved into practical applications at both the social and scientific levels, including natural disaster response, climate change assessment, natural resource management, and environmental protection, all of which involve long-term monitoring of the earth's surface. In recent years, image registration has become very important in remote sensing applications. Image registration is a fundamental task in image processing: matching two or more images of the same object or scene acquired at different times, by different remote sensors, or from different perspectives. In digital image processing it is used to accurately align two or more digital images for analysis and comparison, and it draws on knowledge from physiology, computer vision, pattern recognition, image understanding, and related fields. Accurate registration algorithms are essential for mosaicking remote sensing satellite images, tracking changes in the surface environment, and supporting basic scientific research.
Image registration, i.e. the alignment of two images by computing a set of transformation parameters, appears well defined and seems to admit a clear, general approach, but in practice it is far from that. Because its many applications involve a variety of different data, image registration has evolved into a complex, highly challenging task involving many processing strategies. With the growing ability to acquire images in remote sensing, medicine, and other fields, image registration techniques have been studied extensively over the past 20 years. However, no single registration method has been able to solve all registration problems; algorithms must be developed for specific data types and applications. Registration algorithms are commonly divided into region-based and feature-based methods, but these are generally suitable only for images with small gray-level differences that reflect a linear relationship between the data, and they are not suitable for registration between images with large gray-level differences.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: because different sensors have different imaging principles, the images they produce exhibit large gray-scale differences (as between optical and radar images), and the same scene presents different characteristics in different image types, so common features are difficult to extract; as a result, the various registration methods based on gray-scale correlation or image features are inapplicable or perform poorly.
The technical solution of the invention is as follows: an optical and radar image registration method comprises the following steps:
(1) taking the radar image as the reference image and the optical image as the image to be registered, down-sampling each to generate no fewer than 3 layers of images at different resolutions;
(2) starting from the first, lowest-resolution image layer, processing each layer as follows:
(2.1) calculating the marginal probability distributions and the joint probability distribution of the reference image and the image to be registered with a kernel density function, and computing the negative mutual information value between them; performing iterative optimization with the negative mutual information as the objective function, and taking the translation transformation parameters at the minimum similarity measure value or when the specified iteration count is reached;
(2.2) transforming the next layer's image to be registered with those translation parameters, and repeating step (2.1) on the transformed image and the reference image until the last layer, i.e. the highest-resolution layer, has been transformed;
(3) extracting from the optical image and the radar image, respectively, the coordinate feature point sets whose gradient magnitudes exceed a preset threshold;
(4) transferring the optical-image feature point set extracted in step (3) into radar-image coordinates using the translation parameters applied to the last image layer in step (2), to obtain the point set $S_1(p)$;
(5) optimizing the objective function over the range of the point set transferred in step (4), and selecting the translation transformation parameters at the objective function's maximum as the fine registration parameters;
(6) transforming and resampling the image to be registered with the fine registration parameters to obtain the registered image.
The objective function in step (5) is:

$$F(S_1(p)) = \sum_{(x_g, y_g) \in S_1(p)} \left| \nabla U_1(x_g, y_g) \right|^2$$

where

$$\left| \nabla U_1(x_g, y_g) \right| = \sqrt{U_x^2(x_g, y_g) + U_y^2(x_g, y_g)}$$
$$U_x(x_g, y_g) = 0.5\,(U(x_g, y_g + 1) - U(x_g, y_g - 1))$$
$$U_y(x_g, y_g) = 0.5\,(U(x_g + 1, y_g) - U(x_g - 1, y_g))$$

and $U(x_g, y_g + 1)$ denotes the gray value at $(x_g, y_g + 1)$.
Compared with the prior art, the invention has the beneficial effects that:
(1) The method adopts a new similarity measure criterion that combines image statistical distribution information with the image's inherent structural features. This effectively overcomes the limitation that similarity measures in traditional registration methods require gray-level similarity and a linear relationship between images; no preprocessing such as segmentation or feature extraction is needed, the range of application is wide, and the method achieves high precision and good robustness.
(2) The earliest mutual-information methods approximated the probability densities of the variables with histograms, which carry a large estimation error, while the space required to store the histogram grows exponentially with the number of characteristic variables of the sample; the invention instead estimates the densities with a kernel density function, as in step (2.1).
(3) The invention adopts a multi-resolution registration strategy and solves the registration problem in a coarse-to-fine manner; this avoids local extrema of the mutual information, improving registration precision, while also improving the speed and robustness of the registration algorithm.
(4) Because the similarity measure criterion is constructed from the inherent similarity within images of the same scene, the whole algorithm centers on optimizing the objective function. The feature extraction process is greatly simplified: only a set of high-gradient-magnitude points needs to be extracted from the optical image, whose structure is comparatively good, and the difficulty of accurately extracting and matching corresponding features in both images is effectively avoided.
(5) Noise resistance is strong and the algorithm is robust. When a local feature exists in one image but the corresponding feature is absent from the other, only the optimal value of the criterion function is affected, not the position of the optimum, so a registration solution can still be obtained.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a multi-resolution pyramid layering diagram.
Detailed Description
The present invention is described in detail below with reference to the accompanying drawings. As shown in Fig. 1, the optical and radar image registration method comprises the following steps:
(1) Take the radar image as the reference image and the optical image as the image to be registered, and down-sample each to generate no fewer than 3 layers of images at different resolutions; this embodiment is described with 3 layers as an example.
As shown in Fig. 2, the layered images are obtained by building a Gaussian image pyramid, whose calculation formula is as follows:
$$g_L(i, j) = \sum_{m=-2}^{2} \sum_{n=-2}^{2} w(m, n)\, g_{L-1}(2i + m,\ 2j + n), \quad 0 < L \le N,\ 0 \le i < C_L,\ 0 \le j < R_L$$
The Gaussian pyramid is a sequence of images in which each layer is a low-pass filtered copy of the previous layer. In the formula above, $g_L(i, j)$ denotes the image of the $L$-th layer, $C_L$ the number of columns and $R_L$ the number of rows of the $L$-th layer image, $N$ the total number of layers, and $w(m, n)$ a window function, usually a 5 × 5 Gaussian template.
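As a concrete illustration, a minimal NumPy sketch of this construction follows. It assumes a single-band image stored as a 2-D array; the 5 × 5 template is built from the 1-D generating kernel [0.05, 0.25, 0.4, 0.25, 0.05], a common choice that the patent does not prescribe, and all names are illustrative.

```python
import numpy as np

def gaussian_pyramid(img, num_layers=3):
    """Build a Gaussian pyramid: each layer is a low-pass filtered,
    2x down-sampled copy of the previous one (g_0 is the input image)."""
    w1 = np.array([0.05, 0.25, 0.40, 0.25, 0.05])  # 1-D generating kernel
    w = np.outer(w1, w1)                           # separable 5x5 template w(m, n)
    layers = [np.asarray(img, dtype=np.float64)]
    for _ in range(num_layers - 1):
        g = layers[-1]
        pad = np.pad(g, 2, mode='reflect')         # handle the image border
        h, wd = g.shape[0] // 2, g.shape[1] // 2
        nxt = np.zeros((h, wd))
        # g_L(i, j) = sum_{m,n=-2..2} w(m, n) * g_{L-1}(2i + m, 2j + n)
        for m in range(-2, 3):
            for n in range(-2, 3):
                nxt += w[m + 2, n + 2] * pad[2 + m:2 + m + 2 * h:2,
                                             2 + n:2 + n + 2 * wd:2]
        layers.append(nxt)
    return layers  # layers[-1] is the coarsest (lowest-resolution) layer
```

The strided slice picks every second filtered pixel, which combines the low-pass filtering and the 2x down-sampling of the formula in one pass.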
(2) In the 1st, lowest-resolution image layer, compute the marginal probability distributions and the joint probability distribution of the reference image and the image to be registered with a kernel density function, and compute the negative mutual information value between them; iteratively optimize with the negative mutual information as the objective function, and take the translation transformation parameters at the minimum similarity measure value or when the specified iteration count is reached. In the middle-resolution layer, first transform the current layer's image to be registered with the translation parameters obtained at the previous layer, then compute the marginal and joint probability distributions with the kernel density function and the negative mutual information value between the two images; again iterate on the negative mutual information objective, and record the translation transformation parameters at the minimum similarity measure value or the specified iteration count as the coarse registration parameters. In the high-resolution layer, transform the current layer's image to be registered with the translation parameters obtained at the previous layer.
For two images X and Y to be registered with each other, X is selected as the reference image and Y as the image to be registered; ideally, mutual information matches each pixel in Y to its corresponding pixel position in X.
The mutual information of the images X and Y is defined as:
$$I(X; Y) = H(X) + H(Y) - H(X, Y)$$
wherein: h (X), H (Y) is the edge entropy of X and Y, and H (X, Y) is the joint entropy of X and Y.
$$H(X) = -\sum_{x} P_x(x) \log P_x(x)$$
$$H(Y) = -\sum_{y} P_y(y) \log P_y(y)$$
$$H(X, Y) = -\sum_{x, y} P_{x,y}(x, y) \log P_{x,y}(x, y)$$
$P_x(x)$ and $P_y(y)$ are the marginal probability densities of images X and Y, respectively, and $P_{x,y}(x, y)$ is the joint probability density of X and Y.
To estimate the probability density function P(x), samples closer to x should contribute more to the estimate than samples farther away. The kernel density method is an accurate nonparametric estimation method that estimates the probability density of a random variable directly from a measurement sample set X of n samples. The probability density at point x estimated from X is defined as:
$$P(x) = \frac{1}{n} \sum_{x_l \in X} W\!\left( \frac{x - x_l}{h} \right)$$
where $W(x)$ is a window function, $x_l$ denotes a randomly selected sample point near $x$, and $h$ is a window-width parameter. The window function must satisfy the following two conditions:
$$W(x) \ge 0$$
$$\int W(x)\, dx = 1$$
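A minimal sketch of this estimator for a one-dimensional sample set follows, assuming a Gaussian window function, which satisfies both conditions above; the window choice and the bandwidth $h$ are illustrative, not prescribed by the patent.

```python
import numpy as np

def parzen_density(x, samples, h=1.0):
    """P(x) = (1/n) * sum_{x_l in X} W((x - x_l) / h) for a 1-D sample set."""
    u = (x - np.asarray(samples, dtype=np.float64)) / h
    w = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)  # Gaussian window W
    # The standard Parzen estimator also divides by h so the estimate
    # integrates to 1; the formula above omits that factor.
    return w.sum() / len(samples)
```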
Image registration is thus a function optimization problem: solve for the set of transformation parameters $\mu$ that maximizes the similarity measure value $S$, recorded as $\mu_{opt}$:

$$\mu_{opt} = \arg\max_{\mu} S(\mu)$$
The invention takes the negative of the mutual information as the similarity measure function $S$, so that registration becomes a minimization. With $X$ and $Y$ denoting the reference image and the image to be registered, the negative mutual information between them, expressed as a function of the transformation parameter $\mu$, is:
$$S(\mu) = -\sum_{y \in Y} \sum_{x \in X} P_{x,y}(x, y; \mu) \log_2 \frac{P_{x,y}(x, y; \mu)}{P_x(x; \mu)\, P_y(y; \mu)}$$
where $P_{x,y}(x, y; \mu)$ is the joint probability density under the transformation parameter $\mu$, $P_y(y; \mu)$ is the marginal probability density of the image to be registered under $\mu$, and $P_x(x; \mu)$ is the marginal probability density of the reference image under $\mu$.
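As a concrete illustration, a minimal sketch of this similarity measure follows. For brevity it estimates the joint and marginal distributions with a normalized joint histogram rather than the kernel density estimate the invention specifies, and the bin count is an arbitrary assumption; function and variable names are illustrative.

```python
import numpy as np

def negative_mutual_information(ref, flt, bins=64):
    """S = -I(X; Y) for a reference image and a (transformed) floating image."""
    joint, _, _ = np.histogram2d(ref.ravel(), flt.ravel(), bins=bins)
    p_xy = joint / joint.sum()               # joint distribution P_{x,y}
    p_x = p_xy.sum(axis=1, keepdims=True)    # marginal of the reference image
    p_y = p_xy.sum(axis=0, keepdims=True)    # marginal of the floating image
    nz = p_xy > 0                            # skip empty bins to avoid log(0)
    mi = np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz]))
    return -mi                               # minimized during registration
```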
The gradient of the mutual information is

$$\nabla S = \left[ \frac{\partial S}{\partial \mu_1}\ \ \frac{\partial S}{\partial \mu_2}\ \ \cdots\ \ \frac{\partial S}{\partial \mu_k} \right]^T$$

$$\frac{\partial S}{\partial \mu_k} = -\sum_{y \in Y} \sum_{x \in X} \frac{\partial p(x, y; \mu)}{\partial \mu_k} \log \frac{p(x, y; \mu)}{p_y(y; \mu)}$$

where $\partial p(x, y; \mu) / \partial \mu_k$ is the $k$-th partial derivative of the joint probability distribution, $k \in \mathbb{Z}$, i.e. $k$ is an integer.
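The analytic gradient above requires differentiating the estimated joint density with respect to each $\mu_k$. As a simple numerical stand-in (not the patent's derivation), the partial derivatives can be approximated by central differences on $S(\mu)$ itself; `S` here is any callable similarity measure, such as the histogram sketch above, and the step size is an assumption.

```python
import numpy as np

def numerical_gradient(S, mu, eps=1e-3):
    """Central-difference approximation of dS/dmu_k for each parameter k."""
    mu = np.asarray(mu, dtype=np.float64)
    grad = np.zeros_like(mu)
    for k in range(mu.size):
        e = np.zeros_like(mu)
        e[k] = eps
        grad[k] = (S(mu + e) - S(mu - e)) / (2.0 * eps)
    return grad
```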
(3) Extract from the optical image and the radar image, respectively, the coordinate feature point sets whose gradient magnitudes exceed a preset threshold.
Both image types are processed in the same way; taking the optical image as an example, first compute the gradient magnitude of the image:
$$\left| \nabla U_1(i, j) \right| = \sqrt{U_x^2(i, j) + U_y^2(i, j)}$$
$$U_x(i, j) = 0.5\,(U(i, j+1) - U(i, j-1))$$
$$U_y(i, j) = 0.5\,(U(i+1, j) - U(i-1, j))$$
The points whose gradient magnitudes fall in the top 25% are taken as the feature point set $S_1(p)$; points in the set are denoted $(x_g, y_g)$.
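A minimal sketch of this extraction step, using the central-difference gradients defined above and a top-25% quantile cut-off; the array layout (rows as $y$, columns as $x$) and all names are assumptions.

```python
import numpy as np

def extract_feature_points(U, keep=0.25):
    """Return the coordinates whose gradient magnitude lies in the top 25%."""
    U = np.asarray(U, dtype=np.float64)
    Ux = np.zeros_like(U)
    Uy = np.zeros_like(U)
    Ux[:, 1:-1] = 0.5 * (U[:, 2:] - U[:, :-2])  # U_x(i,j) = 0.5(U(i,j+1) - U(i,j-1))
    Uy[1:-1, :] = 0.5 * (U[2:, :] - U[:-2, :])  # U_y(i,j) = 0.5(U(i+1,j) - U(i-1,j))
    mag = np.sqrt(Ux ** 2 + Uy ** 2)
    thresh = np.quantile(mag, 1.0 - keep)       # cut-off for the top 25%
    ys, xs = np.nonzero(mag >= thresh)
    return list(zip(xs, ys))                    # the point set S_1(p)
```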
(4) Transfer the optical-image feature point set extracted in step (3) into radar-image coordinates using the translation parameters applied to the last image layer in step (2), obtaining the point set $S_1(p)$.
With the coarse-registration translation parameters $P_0 = (\mu_1, \mu_2)$ obtained in step (2), the transfer is completed by adding $P_0$ to the coordinates $(x, y)$ of each point in $S_1$.
For $(X, Y) \in S_1(p)$:

$$X = x + \mu_1$$
$$Y = y + \mu_2$$
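The transfer itself is plain coordinate arithmetic; a one-function sketch, assuming the coarse parameters are scalars:

```python
def transfer_points(points, mu1, mu2):
    """Map optical-image points (x, y) to radar-image coordinates (X, Y)
    by adding the coarse translation P_0 = (mu1, mu2)."""
    return [(x + mu1, y + mu2) for (x, y) in points]
```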
(5) Optimize the objective function over the range of the point set transferred in step (4), and select the translation transformation parameters at the objective function's maximum as the fine registration parameters.
Optimize the objective function $F(S_1(p))$, abbreviated $F$:
$$F(S_1(p)) \triangleq \sum_{(x_g, y_g) \in S_1(p)} \left| \nabla U_1(x_g, y_g) \right|^2$$

where $\left| \nabla U_1(x_g, y_g) \right|$ is the gradient value at the point $(x_g, y_g)$.
The parameter iteration is

$$p_{n+1} = p_n + (H_p^n)^{-1} \nabla_p F_n$$

where $\nabla_p F_n$ is the gradient of $F$ at iteration $n$ and parameter $p$, and $H_p^n$ is the Hessian matrix of $F$; they are given by

$$\nabla_p F_n = \left[ \frac{\partial F}{\partial p_1}\ \ \frac{\partial F}{\partial p_2}\ \ \cdots\ \ \frac{\partial F}{\partial p_m} \right]^T$$

$$H_p^n = \left[ \frac{\partial^2 F}{\partial p_i\, \partial p_j} \right]_{m \times m}$$
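As a hedged alternative to the Newton-type iteration above, the sketch below maximizes $F$ by exhaustive search over integer translations in a small window around the coarse estimate. It assumes, following the implicit-similarity formulation the description echoes, that $|\nabla U_1|^2$ is the squared gradient magnitude of the radar (reference) image sampled at the transferred optical feature points; the search radius and all names are illustrative.

```python
import numpy as np

def fine_registration(grad_mag_sq, points, mu0, radius=3):
    """Return the integer translation near mu0 = (mu1, mu2) that maximizes
    F = sum of |grad U_1|^2 over the transferred point set; grad_mag_sq is
    the squared gradient magnitude of the radar (reference) image."""
    h, w = grad_mag_sq.shape
    best_score, best_mu = -np.inf, tuple(mu0)
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            mu = (mu0[0] + dx, mu0[1] + dy)
            score = sum(grad_mag_sq[int(y + mu[1]), int(x + mu[0])]
                        for (x, y) in points
                        if 0 <= x + mu[0] < w and 0 <= y + mu[1] < h)
            if score > best_score:
                best_score, best_mu = score, mu
    return best_mu  # the fine registration parameter
```

The array `grad_mag_sq` can be computed from the radar image with the same central differences used in `extract_feature_points`.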
(6) Transform and resample the image to be registered with the fine registration parameters to obtain the registered image.
Details not described herein belong to the common general knowledge of those skilled in the art.

Claims (2)

1. An optical and radar image registration method, characterized by comprising the following steps:
(1) taking the radar image as the reference image and the optical image as the image to be registered, down-sampling each to generate no fewer than 3 layers of images at different resolutions;
(2) starting from the first, lowest-resolution image layer, processing each layer as follows:
(2.1) calculating the marginal probability distributions and the joint probability distribution of the reference image and the image to be registered with a kernel density function, and computing the negative mutual information value between them; performing iterative optimization with the negative mutual information as the objective function, and taking the translation transformation parameters at the minimum similarity measure value or when the specified iteration count is reached;
(2.2) transforming the next layer's image to be registered with those translation parameters, and repeating step (2.1) on the transformed image and the reference image until the last layer, i.e. the highest-resolution layer, has been transformed;
(3) extracting from the optical image and the radar image, respectively, the coordinate feature point sets whose gradient magnitudes exceed a preset threshold;
(4) transferring the optical-image feature point set extracted in step (3) into radar-image coordinates using the translation parameters applied to the last image layer in step (2), to obtain the point set $S_1(p)$;
(5) optimizing the objective function over the range of the point set transferred in step (4), and selecting the translation transformation parameters at the objective function's maximum as the fine registration parameters;
(6) transforming and resampling the image to be registered with the fine registration parameters to obtain the registered image.
2. The optical and radar image registration method according to claim 1, wherein the objective function in step (5) is:
$$F(S_1(p)) = \sum_{(x_g, y_g) \in S_1(p)} \left| \nabla U_1(x_g, y_g) \right|^2$$

where

$$\left| \nabla U_1(x_g, y_g) \right| = \sqrt{U_x^2(x_g, y_g) + U_y^2(x_g, y_g)}$$
$$U_x(x_g, y_g) = 0.5\,(U(x_g, y_g + 1) - U(x_g, y_g - 1))$$
$$U_y(x_g, y_g) = 0.5\,(U(x_g + 1, y_g) - U(x_g - 1, y_g))$$

and $U(x_g, y_g + 1)$ denotes the gray value at $(x_g, y_g + 1)$.
CN201310648087.9A 2013-12-04 2013-12-04 Registering method for optical and radar images Pending CN103646399A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310648087.9A CN103646399A (en) 2013-12-04 2013-12-04 Registering method for optical and radar images


Publications (1)

Publication Number Publication Date
CN103646399A true CN103646399A (en) 2014-03-19

Family

ID=50251609

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310648087.9A Pending CN103646399A (en) 2013-12-04 2013-12-04 Registering method for optical and radar images

Country Status (1)

Country Link
CN (1) CN103646399A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574877A (en) * 2015-12-21 2016-05-11 中国资源卫星应用中心 Thermal infrared image registering method based on multiple dimensioned characteristic
CN109190651A (en) * 2018-07-06 2019-01-11 同济大学 Optical imagery and radar image matching process based on multichannel convolutive neural network

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6681057B1 (en) * 2000-02-22 2004-01-20 National Instruments Corporation Image registration system and method implementing PID control techniques
US20050094898A1 (en) * 2003-09-22 2005-05-05 Chenyang Xu Method and system for hybrid rigid registration of 2D/3D medical images
CN101071505A (en) * 2007-06-18 2007-11-14 华中科技大学 Multi likeness measure image registration method
CN101667293A (en) * 2009-09-24 2010-03-10 哈尔滨工业大学 Method for conducting high-precision and steady registration on diversified sensor remote sensing images
CN103020945A (en) * 2011-09-21 2013-04-03 中国科学院电子学研究所 Remote sensing image registration method of multi-source sensor
CN103218811A (en) * 2013-03-29 2013-07-24 中国资源卫星应用中心 Statistical distribution-based satellite multi-spectral image waveband registration method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CHEN TIANZE et al.: "Edge feature matching of remote sensing images via parameter decomposition of affine transformation model", ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 17, 1 September 2012 (2012-09-01) *
ULEEN: "Gradient" (梯度), Sina Blog, HTTP://BLOG.SINA.COM.CN/S/BLOG_6F57A7150100OOIO.HTML, 10 January 2011 (2011-01-10) *
Y. KELLER et al.: "Robust multi-sensor image registration using pixel migration", 2002 IEEE Sensor Array and Multichannel Signal Processing Workshop Proceedings, 6 August 2008 (2008-08-06) *
YOSI KELLER et al.: "Implicit similarity: a new approach to multi-sensor image registration", CVPR 2003, 20 June 2003 (2003-06-20) *
李孟君 et al.: "Optical and SAR image registration method based on implicit similarity" (基于隐含相似性的光学和SAR图像配准方法), Journal of Image and Graphics (中国图象图形学报), vol. 14, no. 11, 15 November 2009 (2009-11-15) *


Similar Documents

Publication Publication Date Title
CN104574347B Satellite in-orbit image geometric positioning accuracy evaluation method based on multi-source remote sensing data
Yu et al. A fast and fully automatic registration approach based on point features for multi-source remote-sensing images
CN103310453B (en) A kind of fast image registration method based on subimage Corner Feature
CN103455797B (en) Detection and tracking method of moving small target in aerial shot video
CN102800097B (en) The visible ray of multi-feature multi-level and infrared image high registration accuracy method
CN103456022B (en) A kind of high-resolution remote sensing image feature matching method
CN109509164B (en) Multi-sensor image fusion method and system based on GDGF
CN103218811B Statistical distribution-based satellite multi-spectral image waveband registration method
CN109859256B (en) Three-dimensional point cloud registration method based on automatic corresponding point matching
CN102629374B (en) Image super resolution (SR) reconstruction method based on subspace projection and neighborhood embedding
CN102800098B (en) Multi-characteristic multi-level visible light full-color and multi-spectrum high-precision registering method
CN102800099B (en) Multi-feature multi-level visible light and high-spectrum image high-precision registering method
CN113838191A (en) Three-dimensional reconstruction method based on attention mechanism and monocular multi-view
CN108765476B (en) Polarized image registration method
CN105427298A (en) Remote sensing image registration method based on anisotropic gradient dimension space
CN102819839B (en) High-precision registration method for multi-characteristic and multilevel infrared and hyperspectral images
CN102122359B (en) Image registration method and device
CN102169581A (en) Feature vector-based fast and high-precision robustness matching method
CN102722887A (en) Image registration method and device
CN109635726B (en) Landslide identification method based on combination of symmetric deep network and multi-scale pooling
CN104318583B (en) Visible light broadband spectrum image registration method
CN114494371B (en) Optical image and SAR image registration method based on multi-scale phase consistency
CN106097256A (en) A kind of video image fuzziness detection method based on Image Blind deblurring
CN102982556B (en) Based on the video target tracking method of particle filter algorithm in manifold
Han et al. An improved corner detection algorithm based on harris

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140319