CN113269817A - Real-time remote sensing map splicing method and device combining spatial domain and frequency domain

Info

Publication number: CN113269817A
Authority: CN (China)
Prior art keywords: image, registered, points, point, relation
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202110627355.3A
Other languages: Chinese (zh)
Inventors: 付慧倩, 杜春虎
Current Assignee: Beijing Avic Shike Electronic Technology Co ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Beijing Avic Shike Electronic Technology Co ltd
Application filed by Beijing Avic Shike Electronic Technology Co ltd
Priority date and filing date: 2021-06-04
Publication: CN113269817A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/136 - Segmentation; Edge detection involving thresholding
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10032 - Satellite or aerial image; Remote sensing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a real-time remote sensing map splicing method and device combining a spatial domain and a frequency domain. The method comprises the following steps: calculating the translation relation between a first image to be registered and a second image to be registered by a phase correlation method; performing corner detection on the first image to be registered and the second image to be registered according to the translation relation by an improved SUSAN detection method, and determining corresponding corner points in the two images; and registering with the corner points as feature points, solving the transformation relation between the first image to be registered and the second image to be registered, and splicing the two images through the transformation relation. By combining the phase correlation method with the improved SUSAN detection method, the invention improves both the efficiency and the accuracy of map splicing.

Description

Real-time remote sensing map splicing method and device combining spatial domain and frequency domain
Technical Field
The invention relates to the technical field of computers, in particular to a method and a device for real-time remote sensing map splicing combining a spatial domain and a frequency domain.
Background
In a real-time flight data transmission system, the position of an aircraft is simulated and reproduced in real time from the position information it transmits. Some special background maps are derived from a specific aerial photography map, and an aerial camera can only capture a local area of a region; differences in imaging time, imaging viewpoint, sensor, and so on cause geometric deformation among different images of the same scene. To display the real-time map background continuously, maps within a specific range must therefore be stitched according to the position information: the geometric correspondence among the sequence maps is determined, the geometric deformation among different maps is eliminated, and the sequence maps can then be displayed continuously and intuitively in a common reference coordinate system.
At present, common remote sensing map splicing technology mainly comprises image registration and image fusion, of which image registration is the core. Common image registration approaches fall into spatial domain methods and frequency domain methods: spatial domain methods include registration based on feature blocks and registration based on feature points, while the main frequency domain method is phase correlation. Feature block based algorithms register accurately but are computationally heavy and slow, and cannot register well when the map is rotated or zoomed. Feature point based registration handles rotation and zooming accurately, but when the images have a large offset the registration accuracy drops and the computation remains relatively large. The frequency domain phase correlation algorithm is fast, resistant to interference, and insensitive to brightness change, but its robustness is poor: image noise and similar effects disturb the extraction of the extreme value.
Disclosure of Invention
The invention aims to provide a method and a device for splicing a real-time remote sensing map by combining a spatial domain and a frequency domain, and aims to solve the problems in the prior art.
The invention provides a real-time remote sensing map splicing method combining a spatial domain and a frequency domain, which specifically comprises the following steps:
calculating the translation relation between the first image to be registered and the second image to be registered by a phase correlation method;
performing corner detection on the first image to be registered and the second image to be registered according to the translation relation by an improved SUSAN detection method, and determining corresponding corners in the first image to be registered and the second image to be registered;
and registering with the corner points as feature points, solving a transformation relation between the first image to be registered and the second image to be registered, and splicing the first image to be registered and the second image to be registered through the transformation relation.
The invention provides a real-time remote sensing map splicing device combining a spatial domain and a frequency domain, which specifically comprises:
the translation relation calculation module is used for calculating the translation relation between the first image to be registered and the second image to be registered through a phase correlation method;
the corner determining module is used for performing corner detection on the first image to be registered and the second image to be registered according to the translation relation by an improved SUSAN detection method, and determining corresponding corners in the first image to be registered and the second image to be registered;
and the splicing module is used for registering with the corner points as feature points, solving the transformation relation between the first image to be registered and the second image to be registered, and splicing the first image to be registered and the second image to be registered through the transformation relation.
Further, the corner determination module is specifically configured to:
performing binarization segmentation on the image according to equation 8, with the threshold set in a normalized manner:

f(x, y) = \begin{cases} 1, & f(x, y) \ge T \\ 0, & f(x, y) < T \end{cases}    (equation 8)

where f(x, y) denotes the gray level at point (x, y), T denotes the threshold, and T = (f_max(x, y) + f_min(x, y)) / 2;

differentiating the gray level at point (x, y) in the horizontal direction according to equation 9 and in the vertical direction according to equation 10:

f_x(x, y) = f(x, y) - f(x-1, y)    (equation 9)

f_y(x, y) = f(x, y) - f(x, y-1)    (equation 10)

where f(x, y) is the gray level at point (x, y), f(x-1, y) is the gray level at point (x-1, y), and f(x, y-1) is the gray level at point (x, y-1);

finding the local maximum of the difference within each block, and performing corner detection at the local maximum points by the SUSAN detection method using the similarity comparison function of equation 11:

C(x, y) = \begin{cases} 1, & f(x, y) = f(x_0, y_0) \\ 0, & f(x, y) \ne f(x_0, y_0) \end{cases}    (equation 11)

where C(x, y) is the similarity comparison function, f(x, y) denotes the gray level at point (x, y) in the template, and f(x_0, y_0) denotes the gray level at the template center (x_0, y_0); points whose gray value equals that of the center, i.e. points with C(x, y) = 1, are classified into the USAN region;

computing, according to equation 12, the total of the similarity comparison function C(x, y) over the template centered at (x_0, y_0):

n(x_0, y_0) = \sum_{(x, y) \in c(x_0, y_0)} C(x, y)    (equation 12)

where c(x_0, y_0) is the template centered at (x_0, y_0), n(x_0, y_0) is the size of the USAN region of the template, and C(x, y) marks the points within the template that belong to the USAN region;

selecting corner points according to equation 13, taking points with n(x, y) < g as corner points:

R(x, y) = \begin{cases} g - n(x, y), & n(x, y) < g \\ 0, & n(x, y) \ge g \end{cases}    (equation 13)

where R(x, y) is the corner response function and g is a geometric threshold.
Further, the corner determination module is specifically configured to:
the improved SUSAN detection method is used for selecting three angular points which are not on the same straight line in a first image to be registered according to the translation relation, extracting a template, selecting a registration template in a second image to be registered according to the translation relation, and finally determining the corresponding angular points in the second image to be registered.
Further, the splicing module is specifically configured to:
establishing a mapping relation between the first image to be registered and the second image to be registered according to equation 14, and calculating the parameters of the position transformation relation with three pairs of corner points as feature points, the corresponding position transformation relation being:

x_2 = \frac{m_0 x_1 + m_1 y_1 + m_2}{m_6 x_1 + m_7 y_1 + 1}, \quad y_2 = \frac{m_3 x_1 + m_4 y_1 + m_5}{m_6 x_1 + m_7 y_1 + 1}    (equation 14)

where (x_1, y_1) and (x_2, y_2) are the coordinates of corresponding corner points in the first image to be registered and the second image to be registered, respectively, and m_0 through m_7 are the eight model parameters.
Further, the splicing module is specifically configured to:
and searching a specific number of corresponding points through the position transformation relation, solving transformation parameters by using the specific number of corresponding points through a least square method, determining the transformation relation between the first image to be registered and the second image to be registered through the transformation parameters, and splicing the first image to be registered and the second image to be registered through the transformation relation.
By adopting the embodiment of the invention, the SUSAN algorithm is improved in two respects: binarizing the image and setting the threshold in a normalized manner effectively reduces the influence of the threshold on detection accuracy; and differentiating the image in the horizontal and vertical directions and performing template comparison only at the point with the largest difference in each neighborhood improves accuracy while avoiding point-by-point comparison, which effectively reduces the computational load of the algorithm. The translation relation between the two images to be spliced is solved by the frequency domain phase correlation method, which provides a strong prior for corner matching in the spatial domain: a template for locating corner points is extracted from the first image according to the translation relation, and the search area for the registration template in the second image is narrowed according to the same relation, so the accuracy of the registration method is improved and so is its efficiency.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a real-time remote sensing map splicing method combining a space domain and a frequency domain according to an embodiment of the invention;
FIG. 2 is a structural diagram of a real-time remote sensing map splicing device combining a spatial domain and a frequency domain according to an embodiment of the invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Method embodiment
According to an embodiment of the present invention, a real-time remote sensing map splicing method combining a spatial domain and a frequency domain is provided. FIG. 1 is a flowchart of the method, and as shown in FIG. 1, the method specifically comprises the following steps:
step S101, calculating a translation relationship between the first image to be registered and the second image to be registered by a phase correlation method, wherein the step S101 specifically includes:
f_2(x, y) = f_1(x - x_0, y - y_0)    (equation 15)

where f_2(x, y) is the image obtained by translating f_1(x, y) by x_0 and y_0 in the x and y directions;

let F_1(u, v) and F_2(u, v) be the Fourier transforms of f_1(x, y) and f_2(x, y); by the shift property of the Fourier transform they satisfy:

F_2(u, v) = F_1(u, v) \, e^{-j 2\pi (u x_0 + v y_0)}    (equation 16)

the cross power spectrum of f_1(x, y) and f_2(x, y) is then:

\frac{F_1(u, v) \, F_2^*(u, v)}{\left| F_1(u, v) \, F_2^*(u, v) \right|} = e^{j 2\pi (u x_0 + v y_0)}    (equation 17)

where F_2^* is the complex conjugate of F_2, the denominator takes the modulus, and e^{j 2\pi (u x_0 + v y_0)} is the phase correlation function;

the inverse Fourier transform of the phase correlation function is:

\mathcal{F}^{-1}\{ e^{j 2\pi (u x_0 + v y_0)} \} = \delta(x - x_0, y - y_0)    (equation 18)

where \delta(x - x_0, y - y_0) is an impulse at (x_0, y_0) in (x, y) space, and x_0 and y_0 determine the relative translation of the two images to be registered.
Step S102, performing corner detection on the first image to be registered and the second image to be registered according to the translation relationship by using an improved SUSAN detection method, and determining corresponding corners in the first image to be registered and the second image to be registered, wherein the step S102 specifically includes:
selecting, by the improved SUSAN detection method, three corner points that are not collinear in the first image to be registered according to the translation relation, extracting a template, selecting a registration template in the second image to be registered according to the translation relation, and finally determining the corresponding corner points in the second image to be registered;
to address the drawbacks that the SUSAN operator threshold is hard to determine and its accuracy is limited, the SUSAN algorithm is improved in two respects:
1. the image is binarized according to equation 1, with the threshold set in a normalized manner, which effectively reduces the influence of the threshold on detection accuracy;
2. derivatives of the image in the horizontal and vertical directions are computed, and template comparison is performed only at the point with the largest difference in each neighborhood, which improves accuracy; since the algorithm no longer compares point by point, its computational load is effectively reduced.
The improved SUSAN algorithm comprises the following steps:
performing binarization segmentation on the image according to equation 1, with the threshold set in a normalized manner:

f(x, y) = \begin{cases} 1, & f(x, y) \ge T \\ 0, & f(x, y) < T \end{cases}    (equation 1)

where f(x, y) denotes the gray level at point (x, y), T denotes the threshold, and T = (f_max(x, y) + f_min(x, y)) / 2;

differentiating the gray level at point (x, y) in the horizontal direction according to equation 2 and in the vertical direction according to equation 3:

f_x(x, y) = f(x, y) - f(x-1, y)    (equation 2)

f_y(x, y) = f(x, y) - f(x, y-1)    (equation 3)

where f(x, y) is the gray level at point (x, y), f(x-1, y) is the gray level at point (x-1, y), and f(x, y-1) is the gray level at point (x, y-1);

finding the local maximum of the difference within each block, and performing corner detection at the local maximum points by the SUSAN detection method using the similarity comparison function of equation 4:

C(x, y) = \begin{cases} 1, & f(x, y) = f(x_0, y_0) \\ 0, & f(x, y) \ne f(x_0, y_0) \end{cases}    (equation 4)

where C(x, y) is the similarity comparison function, f(x, y) denotes the gray level at point (x, y) in the template, and f(x_0, y_0) denotes the gray level at the template center (x_0, y_0); points whose gray value equals that of the center, i.e. points with C(x, y) = 1, are classified into the USAN region;

computing, according to equation 5, the total of the similarity comparison function C(x, y) over the template centered at (x_0, y_0):

n(x_0, y_0) = \sum_{(x, y) \in c(x_0, y_0)} C(x, y)    (equation 5)

where c(x_0, y_0) is the template centered at (x_0, y_0), n(x_0, y_0) is the size of the USAN region of the template, and C(x, y) marks the points within the template that belong to the USAN region;

selecting corner points according to equation 6, taking points with n(x, y) < g as corner points:

R(x, y) = \begin{cases} g - n(x, y), & n(x, y) < g \\ 0, & n(x, y) \ge g \end{cases}    (equation 6)

where R(x, y) is the corner response function and g is a geometric threshold; in the embodiment of the invention, g = n_max / 2.
Step S103, registering by taking the corner points as feature points, solving a transformation relation between the first image to be registered and the second image to be registered, and splicing the first image to be registered and the second image to be registered by the transformation relation, wherein the step S103 specifically comprises the following steps:
the registration based on the feature points firstly extracts feature points which are kept unchanged in two images respectively, then carries out matching correspondence on a set formed by the two groups of feature points to generate a group of corresponding feature pair sets, and finally estimates global transformation parameters by utilizing the corresponding relation between the group of feature pairs. In the image feature-based method, the number of feature points obtained after feature extraction is greatly reduced, so that the registration speed can be improved, but the registration effect of the method also depends on the extraction precision of the feature points and the matching accuracy of the feature points to a great extent. The method has two important steps, wherein the first step is to extract feature points which are kept unchanged from two images, and the second step is to align the feature points and find out the mapping relation. For the selection of the feature points, the feature points of the image are selected as the feature points of the image, because the feature points have invariance under the condition of space geometric transformation and are easy to observe manually.
For two images to be registered, scaling, rotation and/or translation relations exist between the two images, and an 8-parameter model is adopted to establish a mapping relation;
the transformation relation of corresponding point positions is:

x_2 = \frac{m_0 x_1 + m_1 y_1 + m_2}{m_6 x_1 + m_7 y_1 + 1}, \quad y_2 = \frac{m_3 x_1 + m_4 y_1 + m_5}{m_6 x_1 + m_7 y_1 + 1}    (equation 7)

where (x_1, y_1) and (x_2, y_2) are the coordinates of corresponding corner points in the first image to be registered and the second image to be registered, respectively, and m_0 through m_7 are the eight model parameters.
The corner points obtained in step S102 are used as feature points, and the parameters of the transformation relation are calculated from three pairs of feature points to serve as the initial parameters of the transformation between the two images to be registered. According to the initial parameters, all points in a region near a given feature point of the first image to be registered are transformed into the second image to be registered; the difference between the pixel values of the points in the region before transformation and the pixel values at the corresponding positions after transformation is computed with a defined difference formula, and if the difference is smaller than a specific threshold (0.02 in this embodiment) the points are taken as corresponding points. This process is repeated until the number of corresponding points is sufficient, at which point the transformation is considered acceptable and the transformation parameters are re-solved by least squares using all corresponding points determined by the transformation. If the number of corresponding points cannot meet the requirement, a new matching block is selected near the current matching block and its transformation parameters are calculated. The transformation relation between the two images is determined from the transformation parameters, and the two images to be registered are spliced and fused accordingly. During real-time display, the size of the background image is computed from the size of the display window and only the map data within that range is processed, which reduces the computation and improves real-time display efficiency.
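A compact way to realize the least-squares re-solve is to linearize the 8-parameter model by multiplying through by its denominator and solving the resulting linear system. The sketch below is an assumption-laden illustration, not the patent's exact procedure: the function name is invented, and at least four point pairs are required because three pairs supply only six equations for the eight parameters (the three initial pairs of the embodiment can be read as fixing a reduced model before refinement with more correspondences).

import numpy as np

def estimate_projective(p1: np.ndarray, p2: np.ndarray) -> np.ndarray:
    """Least-squares fit of the 8-parameter model of equation 7.
    p1 and p2 are (N, 2) arrays of corresponding points, N >= 4."""
    A, b = [], []
    for (x1, y1), (x2, y2) in zip(p1, p2):
        # Multiplying equation 7 by its denominator gives two equations
        # linear in the parameters m0..m7:
        #   x2 = m0*x1 + m1*y1 + m2 - m6*x1*x2 - m7*y1*x2
        #   y2 = m3*x1 + m4*y1 + m5 - m6*x1*y2 - m7*y1*y2
        A.append([x1, y1, 1, 0, 0, 0, -x1 * x2, -y1 * x2])
        b.append(x2)
        A.append([0, 0, 0, x1, y1, 1, -x1 * y2, -y1 * y2])
        b.append(y2)
    m, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    # The ninth entry of the homography is fixed to 1 by the model.
    return np.append(m, 1.0).reshape(3, 3)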
The phase correlation algorithm is a nonlinear frequency domain correlation algorithm based on the Fourier transform power spectrum. It extracts only the phase information of the cross power spectrum, which reduces the dependence on image content, and the correlation peak it produces is sharp and prominent, so the displacement detection range is large and the matching precision is high.
The translation relation between the two images to be registered is solved by the frequency domain phase correlation algorithm, which provides a strong prior for corner matching in the spatial domain. Three corner points that are not collinear are selected in the first image to be registered according to the translation relation obtained by phase correlation, the template is extracted, and the search area for the registration template in the second image to be registered is narrowed according to the same relation; the corner points of the two images to be registered are thereby finally determined, so the accuracy of the registration method is improved and so is its efficiency.
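Tying the three sketches above together, a purely synthetic smoke test (the image, the offset, and the point picks are illustrative assumptions, not data from the patent) exercises the whole pipeline:

import numpy as np

rng = np.random.default_rng(42)
tile1 = (rng.random((256, 256)) * 255.0).astype(np.float32)
tile2 = np.roll(tile1, shift=(5, -9), axis=(0, 1))     # known shift

# Frequency domain: coarse translation between the two tiles.
print(phase_correlation(tile1, tile2))                 # expect (-9, 5)
# Spatial domain: corner candidates for template matching.
print(len(improved_susan_corners(tile1)))
# A pure translation is a degenerate case of the 8-parameter model, so the
# fitted matrix should be close to [[1, 0, -9], [0, 1, 5], [0, 0, 1]].
p1 = np.asarray([[30, 40], [200, 50], [60, 210], [220, 190]], dtype=float)
p2 = p1 + np.asarray([-9.0, 5.0])
print(np.round(estimate_projective(p1, p2), 3))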
Device embodiment
According to an embodiment of the present invention, a real-time remote sensing map splicing device combining a spatial domain and a frequency domain is provided. FIG. 2 is a structural diagram of the device, and as shown in FIG. 2, the device specifically comprises: a translation relation calculation module 20, a corner determining module 22, and a splicing module 24.
A translation relation calculation module 20, configured to calculate a translation relation between the first image to be registered and the second image to be registered by using a phase correlation method;
a corner determining module 22, configured to perform corner detection on the first image to be registered and the second image to be registered according to the translation relationship by using an improved SUSAN detection method, determine corresponding corners in the first image to be registered and the second image to be registered, where the corner determining module 22 is specifically configured to:
selecting, by the improved SUSAN detection method, three corner points that are not collinear in the first image to be registered according to the translation relation, extracting a template, selecting a registration template in the second image to be registered according to the translation relation, and finally determining the corresponding corner points in the second image to be registered;
the improved SUSAN algorithm comprises the following steps:
performing binarization segmentation on the image according to equation 8, with the threshold set in a normalized manner:

f(x, y) = \begin{cases} 1, & f(x, y) \ge T \\ 0, & f(x, y) < T \end{cases}    (equation 8)

where f(x, y) denotes the gray level at point (x, y), T denotes the threshold, and T = (f_max(x, y) + f_min(x, y)) / 2;

differentiating the gray level at point (x, y) in the horizontal direction according to equation 9 and in the vertical direction according to equation 10:

f_x(x, y) = f(x, y) - f(x-1, y)    (equation 9)

f_y(x, y) = f(x, y) - f(x, y-1)    (equation 10)

where f(x, y) is the gray level at point (x, y), f(x-1, y) is the gray level at point (x-1, y), and f(x, y-1) is the gray level at point (x, y-1);

finding the local maximum of the difference within each block, and performing corner detection at the local maximum points by the SUSAN detection method using the similarity comparison function of equation 11:

C(x, y) = \begin{cases} 1, & f(x, y) = f(x_0, y_0) \\ 0, & f(x, y) \ne f(x_0, y_0) \end{cases}    (equation 11)

where C(x, y) is the similarity comparison function, f(x, y) denotes the gray level at point (x, y) in the template, and f(x_0, y_0) denotes the gray level at the template center (x_0, y_0); points whose gray value equals that of the center, i.e. points with C(x, y) = 1, are classified into the USAN region;

computing, according to equation 12, the total of the similarity comparison function C(x, y) over the template centered at (x_0, y_0):

n(x_0, y_0) = \sum_{(x, y) \in c(x_0, y_0)} C(x, y)    (equation 12)

where c(x_0, y_0) is the template centered at (x_0, y_0), n(x_0, y_0) is the size of the USAN region of the template, and C(x, y) marks the points within the template that belong to the USAN region;

selecting corner points according to equation 13, taking points with n(x, y) < g as corner points:

R(x, y) = \begin{cases} g - n(x, y), & n(x, y) < g \\ 0, & n(x, y) \ge g \end{cases}    (equation 13)

where R(x, y) is the corner response function and g is a geometric threshold; in the embodiment of the invention, g = n_max / 2.
The stitching module 24 is configured to perform registration by using the corner points as feature points, solve a transformation relationship between the first image to be registered and the second image to be registered, and stitch the first image to be registered and the second image to be registered by using the transformation relationship, where the stitching module 24 is specifically configured to:
the registration based on the feature points firstly extracts feature points which are kept unchanged in two images respectively, then carries out matching correspondence on a set formed by the two groups of feature points to generate a group of corresponding feature pair sets, and finally estimates global transformation parameters by utilizing the corresponding relation between the group of feature pairs. In the image feature-based method, the number of feature points obtained after feature extraction is greatly reduced, so that the registration speed can be improved, but the registration effect of the method also depends on the extraction precision of the feature points and the matching accuracy of the feature points to a great extent. The method has two important steps, wherein the first step is to extract feature points which are kept unchanged from two images, and the second step is to align the feature points and find out the mapping relation. For the selection of the feature points, the feature points of the image are selected as the feature points of the image, because the feature points have invariance under the condition of space geometric transformation and are easy to observe manually.
For two images to be registered, scaling, rotation and/or translation relations exist between the two images, and an 8-parameter model is adopted to establish a mapping relation;
the transformation relation of corresponding point positions is:

x_2 = \frac{m_0 x_1 + m_1 y_1 + m_2}{m_6 x_1 + m_7 y_1 + 1}, \quad y_2 = \frac{m_3 x_1 + m_4 y_1 + m_5}{m_6 x_1 + m_7 y_1 + 1}

where (x_1, y_1) and (x_2, y_2) are the coordinates of corresponding corner points in the first image to be registered and the second image to be registered, respectively, and m_0 through m_7 are the eight model parameters.
The corner points obtained by the corner determining module are used as feature points, and the parameters of the transformation relation are calculated from three pairs of feature points to serve as the initial parameters of the transformation between the two images to be registered. According to the initial parameters, all points in a region near a given feature point of the first image to be registered are transformed into the second image to be registered; the difference between the pixel values of the points in the region before transformation and the pixel values at the corresponding positions after transformation is computed with a defined difference formula, and if the difference is smaller than a specific threshold (0.02 in this embodiment) the points are taken as corresponding points. This process is repeated until the number of corresponding points is sufficient, at which point the transformation is considered acceptable and the transformation parameters are re-solved by least squares using all corresponding points determined by the transformation. If the number of corresponding points cannot meet the requirement, a new matching block is selected near the current matching block and its transformation parameters are calculated. The transformation relation between the two images is determined from the transformation parameters, and the two images to be registered are spliced and fused accordingly. During real-time display, the size of the background image is computed from the size of the display window and only the map data within that range is processed, which reduces the computation and improves real-time display efficiency.
The phase correlation algorithm is a nonlinear frequency domain correlation algorithm based on the Fourier transform power spectrum. It extracts only the phase information of the cross power spectrum, which reduces the dependence on image content, and the correlation peak it produces is sharp and prominent, so the displacement detection range is large and the matching precision is high.
The translation relation between the two images to be registered is solved by the frequency domain phase correlation algorithm, which provides a strong prior for corner matching in the spatial domain. Three corner points that are not collinear are selected in the first image to be registered according to the translation relation obtained by phase correlation, the template is extracted, and the search area for the registration template in the second image to be registered is narrowed according to the same relation; the corner points of the two images to be registered are thereby finally determined, so the accuracy of the registration method is improved and so is its efficiency.
The above description is only an example of the present disclosure and is not intended to limit it. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principle of the disclosure shall be included within the scope of the claims.

Claims (10)

1. A real-time remote sensing map splicing method combining a spatial domain and a frequency domain, characterized by comprising the following steps:
calculating the translation relation between the first image to be registered and the second image to be registered by a phase correlation method;
performing corner detection on the first image to be registered and the second image to be registered according to the translation relation by an improved SUSAN detection method, and determining corresponding corners in the first image to be registered and the second image to be registered;
and registering with the corner points as feature points, solving a transformation relation between the first image to be registered and the second image to be registered, and splicing the first image to be registered and the second image to be registered through the transformation relation.
2. The method according to claim 1, wherein the determining of the corresponding corner points in the first image to be registered and the second image to be registered by performing corner point detection on the images according to the translation relationship by using an improved SUSAN detection method specifically comprises:
the process of detecting the corner of the image by the improved SUSAN detection method comprises the following steps:
performing binarization segmentation on the image according to equation 1, with the threshold set in a normalized manner:

f(x, y) = \begin{cases} 1, & f(x, y) \ge T \\ 0, & f(x, y) < T \end{cases}    (equation 1)

where f(x, y) denotes the gray level at point (x, y), T denotes the threshold, and T = (f_max(x, y) + f_min(x, y)) / 2;

differentiating the gray level at point (x, y) in the horizontal direction according to equation 2 and in the vertical direction according to equation 3:

f_x(x, y) = f(x, y) - f(x-1, y)    (equation 2)

f_y(x, y) = f(x, y) - f(x, y-1)    (equation 3)

where f(x, y) is the gray level at point (x, y), f(x-1, y) is the gray level at point (x-1, y), and f(x, y-1) is the gray level at point (x, y-1);

finding the local maximum of the difference within each block, and performing corner detection at the local maximum points by the SUSAN detection method using the similarity comparison function of equation 4:

C(x, y) = \begin{cases} 1, & f(x, y) = f(x_0, y_0) \\ 0, & f(x, y) \ne f(x_0, y_0) \end{cases}    (equation 4)

where C(x, y) is the similarity comparison function, f(x, y) denotes the gray level at point (x, y) in the template, and f(x_0, y_0) denotes the gray level at the template center (x_0, y_0); points whose gray value equals that of the center, i.e. points with C(x, y) = 1, are classified into the USAN region;

computing, according to equation 5, the total of the similarity comparison function C(x, y) over the template centered at (x_0, y_0):

n(x_0, y_0) = \sum_{(x, y) \in c(x_0, y_0)} C(x, y)    (equation 5)

where c(x_0, y_0) is the template centered at (x_0, y_0), n(x_0, y_0) is the size of the USAN region of the template, and C(x, y) marks the points within the template that belong to the USAN region;

selecting corner points according to equation 6, taking points with n(x, y) < g as corner points:

R(x, y) = \begin{cases} g - n(x, y), & n(x, y) < g \\ 0, & n(x, y) \ge g \end{cases}    (equation 6)

where R(x, y) is the corner response function and g is a geometric threshold.
3. The method according to claim 1, wherein the determining of the corresponding corner points in the first image to be registered and the second image to be registered by performing corner point detection on the images according to the translation relationship by using an improved SUSAN detection method specifically comprises:
and selecting, by the improved SUSAN detection method, three corner points that are not collinear in the first image to be registered according to the translation relation, extracting a template, selecting a registration template in the second image to be registered according to the translation relation, and finally determining the corresponding corner points in the second image to be registered.
4. The method according to claim 1, wherein the registration is performed by using the corner points as feature points, a transformation relationship between the first image to be registered and the second image to be registered is solved, and the first image to be registered and the second image to be registered are spliced by the transformation relationship, specifically comprising:
establishing a mapping relation between the first image to be registered and the second image to be registered according to equation 7, and calculating the parameters of the position transformation relation with three pairs of corner points as feature points, the corresponding position transformation relation being:

x_2 = \frac{m_0 x_1 + m_1 y_1 + m_2}{m_6 x_1 + m_7 y_1 + 1}, \quad y_2 = \frac{m_3 x_1 + m_4 y_1 + m_5}{m_6 x_1 + m_7 y_1 + 1}    (equation 7)

where (x_1, y_1) and (x_2, y_2) are the coordinates of corresponding corner points in the first image to be registered and the second image to be registered, respectively, and m_0 through m_7 are the eight model parameters.
5. The method according to claim 4, wherein the registration is performed by using the corner points as feature points, a transformation relationship between the first image to be registered and the second image to be registered is solved, and the first image to be registered and the second image to be registered are spliced by the transformation relationship, specifically comprising:
and finding a specific number of corresponding points according to the position transformation relation, solving transformation parameters by using the specific number of corresponding points through a least square method, determining the transformation relation between the first image to be registered and the second image to be registered according to the transformation parameters, and splicing the first image to be registered and the second image to be registered according to the transformation relation.
6. A real-time remote sensing map splicing device combining a spatial domain and a frequency domain, characterized by specifically comprising:
the translation relation calculation module is used for calculating the translation relation between the first image to be registered and the second image to be registered through a phase correlation method;
the corner determining module is used for performing corner detection on the first image to be registered and the second image to be registered according to the translation relation by an improved SUSAN detection method, and determining corresponding corners in the first image to be registered and the second image to be registered;
and the splicing module is used for registering with the corner points as feature points, solving the transformation relation between the first image to be registered and the second image to be registered, and splicing the first image to be registered and the second image to be registered through the transformation relation.
7. The apparatus according to claim 6, wherein the corner point determining module is specifically configured to:
performing binarization segmentation on the image according to equation 8, with the threshold set in a normalized manner:

f(x, y) = \begin{cases} 1, & f(x, y) \ge T \\ 0, & f(x, y) < T \end{cases}    (equation 8)

where f(x, y) denotes the gray level at point (x, y), T denotes the threshold, and T = (f_max(x, y) + f_min(x, y)) / 2;

differentiating the gray level at point (x, y) in the horizontal direction according to equation 9 and in the vertical direction according to equation 10:

f_x(x, y) = f(x, y) - f(x-1, y)    (equation 9)

f_y(x, y) = f(x, y) - f(x, y-1)    (equation 10)

where f(x, y) is the gray level at point (x, y), f(x-1, y) is the gray level at point (x-1, y), and f(x, y-1) is the gray level at point (x, y-1);

finding the local maximum of the difference within each block, and performing corner detection at the local maximum points by the SUSAN detection method using the similarity comparison function of equation 11:

C(x, y) = \begin{cases} 1, & f(x, y) = f(x_0, y_0) \\ 0, & f(x, y) \ne f(x_0, y_0) \end{cases}    (equation 11)

where C(x, y) is the similarity comparison function, f(x, y) denotes the gray level at point (x, y) in the template, and f(x_0, y_0) denotes the gray level at the template center (x_0, y_0); points whose gray value equals that of the center, i.e. points with C(x, y) = 1, are classified into the USAN region;

computing, according to equation 12, the total of the similarity comparison function C(x, y) over the template centered at (x_0, y_0):

n(x_0, y_0) = \sum_{(x, y) \in c(x_0, y_0)} C(x, y)    (equation 12)

where c(x_0, y_0) is the template centered at (x_0, y_0), n(x_0, y_0) is the size of the USAN region of the template, and C(x, y) marks the points within the template that belong to the USAN region;

selecting corner points according to equation 13, taking points with n(x, y) < g as corner points:

R(x, y) = \begin{cases} g - n(x, y), & n(x, y) < g \\ 0, & n(x, y) \ge g \end{cases}    (equation 13)

where R(x, y) is the corner response function and g is a geometric threshold.
8. The apparatus according to claim 6, wherein the corner point determining module is specifically configured to:
and selecting, by the improved SUSAN detection method, three corner points that are not collinear in the first image to be registered according to the translation relation, extracting a template, selecting a registration template in the second image to be registered according to the translation relation, and finally determining the corresponding corner points in the second image to be registered.
9. The apparatus of claim 6, wherein the splicing module is specifically configured to:
establishing a mapping relation between the first image to be registered and the second image to be registered according to equation 14, and calculating the parameters of the position transformation relation with three pairs of corner points as feature points, the corresponding position transformation relation being:

x_2 = \frac{m_0 x_1 + m_1 y_1 + m_2}{m_6 x_1 + m_7 y_1 + 1}, \quad y_2 = \frac{m_3 x_1 + m_4 y_1 + m_5}{m_6 x_1 + m_7 y_1 + 1}    (equation 14)

where (x_1, y_1) and (x_2, y_2) are the coordinates of corresponding corner points in the first image to be registered and the second image to be registered, respectively, and m_0 through m_7 are the eight model parameters.
10. The apparatus of claim 6, wherein the splicing module is specifically configured to:
and finding a specific number of corresponding points according to the position transformation relation, solving transformation parameters by using the specific number of corresponding points through a least square method, determining the transformation relation between the first image to be registered and the second image to be registered according to the transformation parameters, and splicing the first image to be registered and the second image to be registered according to the transformation relation.
Application CN202110627355.3A, filed 2021-06-04 (priority date 2021-06-04): Real-time remote sensing map splicing method and device combining spatial domain and frequency domain

Priority Applications (1)

Application Number: CN202110627355.3A; Priority Date: 2021-06-04; Filing Date: 2021-06-04; Title: Real-time remote sensing map splicing method and device combining spatial domain and frequency domain (published as CN113269817A)

Publications (1)

Publication Number: CN113269817A; Publication Date: 2021-08-17

Family

ID=77234366

Family Applications (1)

Application Number: CN202110627355.3A; Priority Date: 2021-06-04; Filing Date: 2021-06-04; Title: Real-time remote sensing map splicing method and device combining spatial domain and frequency domain

Country Status (1)

CN: CN113269817A (en)


Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5901246A (en) * 1995-06-06 1999-05-04 Hoffberg; Steven M. Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
CN202025360U (en) * 2010-11-16 2011-11-02 上海中和软件有限公司 Automatic mosaic system for medical images
CN102156995A (en) * 2011-04-21 2011-08-17 北京理工大学 Video movement foreground dividing method in moving camera
US9247129B1 (en) * 2013-08-30 2016-01-26 A9.Com, Inc. Self-portrait enhancement techniques
CN104156968A (en) * 2014-08-19 2014-11-19 山东临沂烟草有限公司 Large-area complex-terrain-region unmanned plane sequence image rapid seamless splicing method
CN104167003A (en) * 2014-08-29 2014-11-26 福州大学 Method for fast registering remote-sensing image
CN106709894A (en) * 2015-08-17 2017-05-24 北京亿羽舜海科技有限公司 Real-time image splicing method and system
US20170262986A1 (en) * 2016-03-14 2017-09-14 Sensors Unlimited, Inc. Image-based signal detection for object metrology
CN106127690A (en) * 2016-07-06 2016-11-16 李长春 A kind of quick joining method of unmanned aerial vehicle remote sensing image
WO2018145508A1 (en) * 2017-02-13 2018-08-16 中兴通讯股份有限公司 Image processing method and device
CN108429887A (en) * 2017-02-13 2018-08-21 中兴通讯股份有限公司 A kind of image processing method and device
CN109829853A (en) * 2019-01-18 2019-05-31 电子科技大学 A kind of unmanned plane image split-joint method
CN111583110A (en) * 2020-04-24 2020-08-25 华南理工大学 Splicing method of aerial images

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
CHENG PENG JIANG: "Image Mosaic Method Based on SUSAN Algorithm and Log-Polar Transformation", Advanced Materials Research *
余道明: "Research and Application of Image Registration Technology", China Master's Theses Full-text Database, Information Science and Technology *
冯宇平; 戴明; 孙立悦; 张威: "Optimized Design of Automatic Image Mosaicking and Fusion", Optics and Precision Engineering, no. 02 *
孙振华; 陈东; 沈振康: "A Remote Sensing Image Registration Method Based on an Improved SUSAN Operator", Image Technology, no. 03 *
巫远征: "Video Image Stitching of Low-Altitude Aerial Photographs", China Master's Theses Full-text Database, Information Science and Technology *
方壮: "Research on Image Registration Technology Based on Point Features", China Master's Theses Full-text Database, Information Science and Technology *
方壮: "An Image Registration Algorithm Based on the SUSAN Operator and Phase Correlation", Journal of Hubei Minzu University (Natural Science Edition), no. 01 *
纪利娥; 石继升: "An Image Registration Algorithm Based on SUSAN Feature Points", Sensor World, no. 04 *
蒋虎: "Aerial Photograph Mosaicking and Its Visual Integration with Vector Maps", China Master's Theses Full-text Database, Information Science and Technology, pages 2 *

Similar Documents

Publication Publication Date Title
CN107993258B (en) Image registration method and device
US20150262346A1 (en) Image processing apparatus, image processing method, and image processing program
US20140233827A1 (en) Identification method for valuable file and identification device thereof
CN105279372A (en) Building height computing method and apparatus
CN107516322B (en) Image object size and rotation estimation calculation method based on log polar space
CN110956661B (en) Method for calculating dynamic pose of visible light and infrared camera based on bidirectional homography matrix
CN111160291B (en) Human eye detection method based on depth information and CNN
CN106803262A (en) The method that car speed is independently resolved using binocular vision
CN110310331A (en) A kind of position and orientation estimation method based on linear feature in conjunction with point cloud feature
CN110120013A (en) A kind of cloud method and device
CN114331879A (en) Visible light and infrared image registration method for equalized second-order gradient histogram descriptor
CN103700082B (en) Image split-joint method based on dual quaterion relative orientation
CN113393439A (en) Forging defect detection method based on deep learning
CN112348869A (en) Method for recovering monocular SLAM scale through detection and calibration
Chen et al. Image stitching algorithm research based on OpenCV
CN113362467B (en) Point cloud preprocessing and ShuffleNet-based mobile terminal three-dimensional pose estimation method
CN110197104B (en) Distance measurement method and device based on vehicle
CN110675442A (en) Local stereo matching method and system combined with target identification technology
KR20120020711A (en) Object recognition system and method the same
CN109658523A (en) The method for realizing each function operation instruction of vehicle using the application of AR augmented reality
WO2021167910A1 (en) A method for generating a dataset, a method for generating a neural network, and a method for constructing a model of a scene
Wang et al. A real-time correction and stitching algorithm for underwater fisheye images
CN113269817A (en) Real-time remote sensing map splicing method and device combining spatial domain and frequency domain
CN110674817B (en) License plate anti-counterfeiting method and device based on binocular camera
Gao et al. Image matching method based on multi-scale corner detection

Legal Events

Code PB01: Publication
Code SE01: Entry into force of request for substantive examination