CN117761695B - Multi-angle SAR three-dimensional imaging method based on self-adaptive partition SIFT

Info

Publication number: CN117761695B
Application number: CN202410195654.8A
Authority: CN (China)
Language: Chinese (zh)
Prior art keywords: image, SAR, images, SIFT, point cloud
Legal status: Active (granted)
Inventors: 向卫, 罗焱高, 邓云凯, 张衡, 杨从瑞, 王龙祥
Assignee: Aerospace Information Research Institute of CAS
Filing date: 2024-02-22
Publication of CN117761695A: 2024-03-26
Publication of CN117761695B (grant): 2024-04-30

Abstract

The invention provides a multi-angle SAR three-dimensional imaging method based on self-adaptive partition SIFT, belonging to the technical field of SAR image three-dimensional imaging. Targets in an observation scene are imaged from multiple angles to obtain SAR images of the same target area at different angles, and multi-scale SIFT is used to detect homonymous (same-name) points region by region in the large-scene SAR images, realizing efficient and rapid registration of the homonymous points of the images. Finally, high-precision three-dimensional imaging of the target area is realized according to the F.Leberl conformational equations and the platform parameters. The invention improves matching detection efficiency and accuracy, and the partitioned imaging scheme preserves the reconstruction accuracy of key regions.

Description

Multi-angle SAR three-dimensional imaging method based on self-adaptive partition SIFT
Technical Field
The invention belongs to the technical field of SAR image three-dimensional imaging, and particularly relates to a multi-angle SAR three-dimensional imaging method based on self-adaptive partition SIFT.
Background
Synthetic aperture radar (SAR) is an active microwave imaging system. Compared with other imaging sensors such as optical or photoelectric sensors, it offers day-and-night, all-weather, high-resolution imaging, and is an important technical means for current long-range earth observation. Obtaining high-quality, high-resolution three-dimensional scene images with SAR imaging technology is of great significance for military applications and national economic construction, and is widely used in academic research, rescue and disaster relief, commercial applications and other fields. With the rapid development of SAR theory and technology, various SAR imaging techniques have been proposed to acquire three-dimensional information of an observation scene, such as interferometric SAR (InSAR), tomographic SAR (TomoSAR), and radar photogrammetry (radargrammetry). Radar photogrammetry, also called radar stereo measurement, uses two images with a certain parallax and calculates the height of corresponding ground points by substituting the parallax information of homonymous (same-name) image points into the equation set of a conformational model. Because the images can be acquired at different times and positions, the constraints on the platform and the images are lower than those of the commonly used InSAR and TomoSAR; the technique complements optical photogrammetry and has achieved substantial results in digital photogrammetry, surface elevation inversion and other fields.
For SAR image homonymous point measurement, detection methods based on statistics of gray level and the like and vision-based multi-feature matching methods are commonly used, and highly robust image matching can be realized by using ground control points or corner reflectors together with affine transformation. However, high-precision homonymous point measurement is one of the application technologies that still need to be developed for radargrammetric imaging, since it directly affects the model precision obtained by the subsequent solution of the conformational equations. Therefore, high-precision homonymous point detection in complex SAR image scenes requires deep research into the SAR imaging principle and multi-scale feature extraction algorithms for SAR images. Statistics-based detection methods generally use the gray level or gradient information of the images and match windows of the two images in a feature space through correlation, mutual information and similar methods; they are suitable for SAR images with simple backgrounds. Features extracted by feature-based matching algorithms are largely unaffected by heterogeneous gray level changes, but due to the complex distortion and noise of SAR images, the overall matching effect obtained when such features are extracted is not ideal: a SAR target cannot be completely described at a single scale, and the imaging precision and efficiency of directly matching complex scenes are unsatisfactory.
In the field of analytic stereo positioning, the commonly used conformational equation sets include the mathematical models proposed by F.Leberl and G.Konecny and a mathematical model adopting line-center projection. The F.Leberl model considers the change of the exterior orientation elements of the sensor and has fewer correction parameters to be solved, but it does not consider the change of the angle elements, and the imaging result has a larger vertical parallax. The G.Konecny model considers not only the change of the exterior orientation elements of the sensor but also the change of terrain relief; its formula is similar in form to the common collinearity equation in photogrammetry, but taking the attitude angle parameters into account does not bring better correction precision, so its adaptability is poor when higher-precision initial parameter values are lacking. The mathematical model of line-center projection treats the radar image as a linear-array CCD scanning image, so the geometric correction of the radar image must be approximate.
The existing SAR image registration algorithms mainly include ground control point correction-based methods, statistics-based detection methods, vision-based multi-feature matching methods and the like, as well as a number of special matching methods based on neural networks, genetic algorithms and so on. When homonymous point detection is carried out on large-area SAR images, the main defects are as follows:
Highly robust image matching can be realized by using ground control points or corner reflectors together with affine transformation; although higher accuracy can be obtained, this method places high requirements on the distribution of ground control points in the scene and has poor applicability.
Among the statistics-based detection methods, gray-level-based registration is commonly used: information from a neighborhood of the image, such as its gray level or gradient, is used directly for matching, windows of the two images are matched in a feature space through correlation, mutual information and similar methods, and similarity is usually judged by a traversal search. This approach is suitable for SAR images with simple backgrounds, but the matching effect on heterologous images can be poor, and the traversal window matching causes a large amount of computation and low efficiency.
Feature-based matching algorithms extract common features on the reference image and the image to be matched as matching primitives, such as point features, line features and surface features; these features are little affected by gray level differences between images. The matching task is accomplished by solving the transformation parameters between the images from the matching primitives, and combining different feature values with different feature matching methods improves the image matching accuracy. A classical point feature extraction algorithm is the Scale-Invariant Feature Transform (SIFT): feature points are described with feature descriptors, and matching points are determined by the Euclidean distance. Because SAR images are complexly distorted and affected by noise, the overall matching effect obtained directly with the SIFT algorithm is not ideal, the operations on the high-dimensional descriptors are time-consuming, and mismatched point pairs still need to be removed. Therefore, for complex scenes, a SAR target cannot be completely described at a single scale, and the imaging precision and efficiency of results obtained directly by SIFT matching are not ideal.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a multi-angle SAR three-dimensional imaging method based on self-adaptive partition SIFT. Multi-angle imaging of the targets in an observation scene yields SAR images of the same target area at different angles, and multi-scale SIFT is used to detect homonymous points region by region in the large-scene SAR images, realizing efficient and rapid registration of the homonymous points of the images. Finally, high-precision three-dimensional imaging of the target area is realized according to the F.Leberl conformational equations and the platform parameters. The method is oriented to the requirement of efficient three-dimensional imaging of large scenes of a certain complexity, and adopts the radar stereo measurement method of adaptive partition SIFT to perform three-dimensional imaging of the target scene.
In order to achieve the above purpose, the invention adopts the following technical scheme:
A multi-angle SAR three-dimensional imaging method based on self-adaptive partition SIFT comprises the following steps:
S1, acquiring a multi-angle SAR image sequence and corresponding platform information of the same area, and screening and matching all SAR images to obtain an SAR image sequence containing n SAR images;
S2, selecting two SAR images from the SAR image sequence to serve as the main image and the auxiliary image respectively and form an initial image pair, registering the initial image pair by adopting an image pyramid model according to the adaptive partition SIFT to obtain a rough registration result based on SIFT and a fine registration result based on cross-correlation, and judging, according to the rough registration result, whether an adjacent SAR image is added as a new auxiliary image for the same operation, so as to obtain the image pair group of the main image, containing at most n-1 image pairs, and the corresponding registration results; replacing the main image until the SAR image sequence is traversed, and obtaining the registration results of the n main images;
S3, performing three-dimensional imaging on the registration result of each main image: performing three-dimensional imaging on the cross-correlation fine registration result of each image pair in the image pair group to obtain the sparse point cloud set corresponding to the main image, and performing repeated homonymous point correction to obtain the final point cloud of the main image, so as to obtain a point cloud sequence containing n dense point clouds corresponding to the SAR image sequence;
S4, dividing the target area to obtain partitions, dividing the three-dimensional point cloud of each SAR image according to the partition in which its registration result is located, and obtaining a digital elevation model of each area according to the point cloud sequence, so that a more complete and accurate scene point cloud reconstruction result is obtained through combination.
The beneficial effects are that:
(1) Current SAR stereo measurement generally uses the ground control point method to realize SAR image homonymous point measurement and then performs three-dimensional imaging through a conformational model, but the effect on unknown areas is poor; alternatively, three-dimensional imaging is realized in the InSAR or TomoSAR mode, but the requirements on the platform are high and the applicability is poor. The present method performs target recognition according to features common to the multi-angle SAR images, carries out imaging through radar photogrammetry, and adopts an incremental adaptive partition SIFT radar stereo measurement method to process the multi-angle SAR image sequence, in view of the characteristics of the data, such as the large number of SAR images and the low target recognition rate. Aiming at the problem that conventional methods cannot realize batch multi-angle matching of SAR images of complex large scenes, the invention provides a SIFT-based adaptive partition matching method operating at different scales to realize accurate two-dimensional homonymous point extraction of SAR image feature points and high-precision three-dimensional imaging of the target area.
(2) Aiming at the large range, high resolution and large data volume of the multi-angle SAR image sequence of the target scene, the image pyramid is used with multi-scale SIFT to extract and screen feature points of the large-scene SAR images at different resolutions, which improves matching detection efficiency and accuracy, while the partitioned imaging scheme preserves the reconstruction accuracy of key regions.
(3) The method is based on the adaptive partition SIFT method: partitioning and affine matrix estimation are carried out from the SIFT results in the multi-scale space, and the homonymous points of the multi-angle SAR image sequence can then be extracted efficiently and accurately through fine registration.
(4) The method is used for carrying out three-dimensional imaging on the homonymous points based on radar photogrammetry, and obtaining a high-precision three-dimensional model of the target scene through point cloud fusion based on the homonymous points.
Drawings
FIG. 1 is a flow chart of a multi-angle SAR three-dimensional imaging method based on adaptive zoned SIFT according to the present invention;
FIG. 2 is a partial region division result of a sub-aperture SAR image based on SIFT;
FIG. 3 is a partial scene reconstruction result based on adaptive partition SIFT;
Fig. 4 shows the sub-aperture DEM fusion results.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding of the invention by those skilled in the art. It should be understood, however, that the invention is not limited to the scope of the specific embodiments; for those skilled in the art, all inventions that make use of the inventive concept fall within the spirit and scope of the invention as defined by the appended claims.
The multi-angle SAR image sequence of the target area can be obtained by crossing multiple flight routes and the like, and the platform parameters corresponding to each SAR image, such as the coordinates, velocity and antenna beam view angle, are recorded, giving a circular imaging mode similar to CSAR (circular SAR). The angle interval of the SAR image sequence can be appropriately reduced according to the complexity of the target; the view angle difference between two SAR images can be controlled at about 5 degrees, which ensures high-resolution images covering a large imaging range while keeping the matching computation redundancy low, so that a trade-off among efficiency, error and matching success rate can be realized.
As shown in fig. 1, the multi-angle SAR three-dimensional imaging method based on adaptive partition SIFT in the embodiment of the present invention mainly includes the following steps:
In step S1, SAR imaging is performed on the same observation area by platforms at different times and angles to obtain SAR raw data, and the platform parameters are compensated in the same way according to the motion compensation method adopted in the imaging process, so that a number of SAR images with certain view angle differences and their platform parameters are obtained; the platform parameters include the platform distance and the view angle. The SAR images are sorted in space and time according to the acquired platform distances and view angles, and then all SAR images are matched and screened sequentially according to the spatio-temporal correlation, the feature value check results and the similarity requirements to obtain a sequence containing n SAR images.
In step S2, mainly in order to improve the accuracy of the subsequent three-dimensional imaging, two adjacent SAR images are selected from the SAR image sequence as the main image and the auxiliary image of the initial image pair, multi-scale partition SIFT pairing (i.e. multi-scale SIFT in fig. 1) is performed with the image pyramid model, and the image registration is then completed using the partitioned affine transformation and the normalized cross-correlation algorithm. The image pair formed by the auxiliary image and the main image is then continuously replaced and the same operation is performed, giving the image pair group formed by all image pairs of the main image and all corresponding registration results. Finally, a new main image is selected, until all n SAR images have corresponding registration results.
In step S3, based on the image pair group of each main image obtained in step S2, the image pairs are selected in turn and three-dimensional imaging is performed from the registration results through the F.Leberl conformational equations, obtaining the three-dimensional point cloud of each image pair. At most n-1 sparse point clouds are obtained for each main image; the same points are connected through the matching pairs of the homonymous point detection results of the image pairs to obtain the conformational equations, the different imaging results of the same point of the main image are fitted to complete the point cloud correction, the final sparse point cloud of each main image is obtained, and the final point cloud set formed by the point cloud sequence corresponding to the SAR image sequence is formed.
Step S4 mainly serves to improve the integration efficiency of the DEM (digital elevation model). The target scene is divided into areas according to the partitioning result of S2 to obtain several divided areas, the three-dimensional point cloud sequence corresponding to the SAR image sequence obtained in step S3 is clustered, and the points are assigned to the geometric areas. Then, for the same pixel of each area, the three-dimensional coordinates of the points in the point cloud sequence are corrected by the least squares method, i.e. regional elevation adjustment; finally, the elevation results of the pixel points of all areas are integrated, interpolation is performed as required, and the DEMs are merged, giving the final three-dimensional reconstruction result of the target area.
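As an illustration of this region-wise DEM fusion, the following is a minimal sketch, assuming each point cloud is given as an (N, 3) array of (X, Y, Z) coordinates; the per-cell least-squares estimate of a constant elevation reduces to the mean of the Z values falling in the cell. The function name and the cell size are illustrative, not taken from the patent.

```python
# Hypothetical sketch of the region-wise DEM fusion of step S4.
# Assumes each point cloud is an (N, 3) array of (X, Y, Z) coordinates.
import numpy as np

def fuse_point_clouds_to_dem(point_clouds, cell_size=1.0):
    """Merge a sequence of 3D point clouds into one DEM grid.

    Within each grid cell the elevation is the least-squares estimate of a
    constant height, i.e. the mean of all Z values falling in that cell.
    """
    pts = np.vstack(point_clouds)                        # pool all clouds
    x0, y0 = pts[:, 0].min(), pts[:, 1].min()
    ix = np.floor((pts[:, 0] - x0) / cell_size).astype(int)
    iy = np.floor((pts[:, 1] - y0) / cell_size).astype(int)

    dem = np.full((ix.max() + 1, iy.max() + 1), np.nan)
    sums = np.zeros_like(dem)
    counts = np.zeros_like(dem)
    np.add.at(sums, (ix, iy), pts[:, 2])                 # accumulate Z per cell
    np.add.at(counts, (ix, iy), 1.0)
    mask = counts > 0
    dem[mask] = sums[mask] / counts[mask]                 # per-cell least-squares estimate
    return dem

# Usage: dem = fuse_point_clouds_to_dem([cloud_1, cloud_2, cloud_3], cell_size=2.0)
```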
Specifically, the step S2 further includes:
S21, a pair of adjacent images in the SAR image sequence is selected as the main image and the auxiliary image of the initial image pair. Each SAR image can be decomposed by downsampling into an image pyramid consisting of m image layers with different resolutions, and a preliminary registration result between the two images is obtained layer by layer according to multi-scale SIFT.
S22, the preliminary registration results of the SAR images at the different image layers are combined, and the matching points are primarily screened by random sample consensus (RANSAC), so as to obtain the optimal estimate of the overall affine matrix of the SAR image pair and the cross-correlation fine registration result.
S23, the feature points of the high-level image are clustered according to geometric distance: under the set maximum region limit, the distance threshold between adjacent pixels is normally set to 5% of the image scale, and feature points closer than this distance are connected, so that the two images are divided into regions. The regions containing at least 10% of the feature points are enlarged and up-sampled according to this division, and multi-scale SIFT is carried out again to obtain more matching points, yielding a high-precision adaptive partition SIFT registration result, as shown in fig. 2 (an illustrative clustering sketch is given after this list of steps).
S24, a new auxiliary image is selected to form a new image pair with the same main image, and the same operations are executed until the SAR image sequence has been traversed, finally forming the image pair group of the main image and obtaining the registration result set corresponding to the main image.
S25, sequentially selecting new SAR images along the SAR image sequence to serve as main images, and executing S21-S24 to obtain n registration result sets corresponding to the n main images.
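The region division of S23 above can be sketched as follows, assuming the high-level feature points are available as pixel coordinates; single-linkage clustering with a distance threshold of 5% of the image scale plays the role of "connecting" nearby points, and clusters holding at least 10% of the points are kept as key regions. The helper name and the bounding-box output are assumptions for illustration.

```python
# Illustrative sketch of the S23 region division: single-linkage clustering of
# high-level feature points, connecting points closer than 5% of the image scale.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def divide_regions(keypoints_xy, image_shape, frac=0.05, min_frac_points=0.10):
    """Cluster feature points and keep clusters holding >= 10% of all points."""
    pts = np.asarray(keypoints_xy, dtype=float)          # (N, 2) pixel coordinates
    scale = max(image_shape[:2])                          # image scale in pixels
    thresh = frac * scale                                 # connect points closer than this

    Z = linkage(pts, method='single')                     # single link = "connect if close"
    labels = fcluster(Z, t=thresh, criterion='distance')

    regions = []
    for lbl in np.unique(labels):
        members = pts[labels == lbl]
        if len(members) >= min_frac_points * len(pts):    # keep dense key regions only
            xmin, ymin = members.min(axis=0)
            xmax, ymax = members.max(axis=0)
            regions.append((xmin, ymin, xmax, ymax))       # bounding box of the region
    return regions
```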
The specific calculation model and method are as follows:
Keeping the image resolution unchanged, continuous variable-scale filtering is performed on the SAR image to realize its multi-scale continuation and form a multi-scale space. The scale space of the SAR image is divided into several octaves, and the image sizes of two adjacent octaves differ by a factor of 2. Each octave of SAR images is divided into S intervals and contains S+3 images; the standard deviation of the Gaussian function increases layer by layer, adjacent layers differing by a factor of k, usually taken as k = 2^{1/S}. The third image from the top of each octave is resampled by a factor of 0.5 and used as the first layer of the next octave, and the next octave of the scale space is generated in turn. After the scale space of the SAR image has been established, extreme points are detected in the DoG (difference-of-Gaussian) scale space as candidate feature points.
The Gaussian kernel is a linear kernel for establishing a scale space, and the scale space representation of the image is obtained by convolving the image with Gaussian functions of continuously increasing variance:

$$L(x, y, \sigma) = G(x, y, \sigma) * I(x, y) \qquad (1)$$

where $L(x, y, \sigma)$ is the Gaussian scale space, $I(x, y)$ is the original image, $G(x, y, \sigma) = \frac{1}{2\pi\sigma^{2}} e^{-(x^{2}+y^{2})/(2\sigma^{2})}$ is the Gaussian function with standard deviation $\sigma$, $*$ denotes the convolution operation, and $(x, y)$ are the coordinates of the pixels in the image.
SIFT typically employs the Laplacian of Gaussian (LoG) operator to detect target points in an image and approximates it with the Gaussian difference scale space (DoG); detecting extreme points in the difference scale space is then equivalent to detecting extreme points of the Laplacian response in the scale space, which reduces computation. The expression is:

$$D(x, y, \sigma) = \big(G(x, y, k\sigma) - G(x, y, \sigma)\big) * I(x, y) = L(x, y, k\sigma) - L(x, y, \sigma) \approx (k - 1)\,\sigma^{2}\,\nabla^{2} G * I(x, y) \qquad (2)$$

where $D(x, y, \sigma)$ is the Gaussian difference scale space and $\nabla^{2}$ is the Laplace operator.
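The following is a minimal sketch of the octave/DoG construction of formulas (1)-(2), assuming S intervals per octave, an initial standard deviation sigma0, and the convention that the third image from the top of each octave seeds the next octave; the specific parameter values are illustrative.

```python
# Minimal sketch of the Gaussian scale space and DoG pyramid of eqs. (1)-(2).
import numpy as np
from scipy.ndimage import gaussian_filter

def build_dog_pyramid(image, num_octaves=4, S=3, sigma0=1.6):
    k = 2.0 ** (1.0 / S)                                  # scale ratio between adjacent layers
    dog_pyramid = []
    base = np.asarray(image, dtype=float)
    for _ in range(num_octaves):
        # S + 3 Gaussian images per octave, standard deviation growing by k each layer
        gaussians = [gaussian_filter(base, sigma0 * k ** i) for i in range(S + 3)]
        # DoG: difference of adjacent Gaussian layers (eq. (2))
        dog_pyramid.append([g2 - g1 for g1, g2 in zip(gaussians, gaussians[1:])])
        # third image from the top is downsampled by 2 to seed the next octave
        base = gaussians[-3][::2, ::2]
    return dog_pyramid
```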
After the local extreme points of the scale space are obtained, refined key points are obtained through sub-pixel interpolation and the suppression of low-contrast and edge responses. The corresponding main direction and local feature descriptor are then generated from gradient histograms in the neighborhood of each key point. Finally, SIFT performs feature point matching and primary screening through the Euclidean distance between descriptors.
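As a sketch of the per-layer SIFT detection and Euclidean-distance matching described above, the snippet below uses OpenCV's SIFT implementation as a stand-in for the detector; the images are assumed to be 8-bit grayscale amplitude images, and the ratio-test threshold of 0.75 used for the primary screening is a common default rather than a value from the patent.

```python
# Sketch of SIFT keypoint extraction and Euclidean-distance matching for one
# main/auxiliary image pair (8-bit grayscale amplitude images assumed).
import cv2
import numpy as np

def sift_match(master, slave, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(master, None)       # keypoints + 128-D descriptors
    kp2, des2 = sift.detectAndCompute(slave, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)                   # Euclidean distance between descriptors
    knn = matcher.knnMatch(des1, des2, k=2)

    pts1, pts2 = [], []
    for m, n in knn:                                        # primary screening: Lowe ratio test
        if m.distance < ratio * n.distance:
            pts1.append(kp1[m.queryIdx].pt)
            pts2.append(kp2[m.trainIdx].pt)
    return np.float32(pts1), np.float32(pts2)
```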
The matching points of the two SAR images are screened with the RANSAC method, and the affine matrix between the two images is estimated by least squares according to the screening result. First, $k_2$ matching points are randomly selected and the unknown transformation matrix is solved by the least squares method:

$$\hat{H}_{ij} = \arg\min_{H_{ij}} \sum_{m=1}^{k_{2}} \left( \mathbf{x}_{m}^{(j)} - H_{ij}\,\mathbf{x}_{m}^{(i)} \right)^{\mathrm{T}} \left( \mathbf{x}_{m}^{(j)} - H_{ij}\,\mathbf{x}_{m}^{(i)} \right) \qquad (3)$$

where $\mathbf{x}_{m}^{(i)}$ and $\mathbf{x}_{m}^{(j)}$ are the homogeneous coordinates of the m-th matching point in the image spaces of images i and j, the superscript T denotes matrix transposition, and $\hat{H}_{ij}$ is the unknown transformation matrix that minimizes the expression.
A matrix estimate is obtained in this way, the remaining matching points are screened with the estimated affine matrix, the matching points meeting the error requirement are added to the affine matrix estimation, and finally the screened matching points and the final affine matrix estimate are obtained.
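The RANSAC screening and least-squares affine estimation of formula (3) can be sketched as follows; the iteration count and inlier threshold are illustrative assumptions, and the minimal sample of three point pairs is the usual choice for an affine model.

```python
# A small RANSAC sketch for eq. (3): repeatedly draw a few matches, fit an affine
# matrix by least squares, keep the estimate with the most inliers, then refit on
# all inliers.
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine mapping src -> dst (both (N, 2) arrays)."""
    A = np.hstack([src, np.ones((len(src), 1))])           # homogeneous source coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)             # (3, 2) solution
    return M.T                                               # 2x3 affine matrix

def ransac_affine(src, dst, n_iter=1000, thresh=2.0, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 3, replace=False)         # minimal sample for an affine
        M = fit_affine(src[idx], dst[idx])
        proj = src @ M[:, :2].T + M[:, 2]
        inliers = np.linalg.norm(proj - dst, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    M = fit_affine(src[best_inliers], dst[best_inliers])     # final LS estimate on inliers
    return M, best_inliers
```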
Image matching of the SAR images at the two azimuth angles is performed by the normalized cross-correlation (NCC) method, an amplitude-based matching method that can cope with radiometric differences. After the partitioned affine transformation, the rotation and scale differences of the target are basically eliminated, so the NCC method can be used to find homonymous image points and to calculate the correlation coefficient and offset of each homonymous point; this information is used in the subsequent three-dimensional imaging and point cloud fusion. The NCC expression generalized to two-dimensional images is:

$$\rho(u, v) = \frac{\sum_{i=-n}^{n} \sum_{j=-n}^{n} \big[ I_{1}(u+i, v+j) - \bar{I}_{1} \big]\big[ I_{2}(u+i, v+j) - \bar{I}_{2} \big]}{\sqrt{\sum_{i=-n}^{n} \sum_{j=-n}^{n} \big[ I_{1}(u+i, v+j) - \bar{I}_{1} \big]^{2} \sum_{i=-n}^{n} \sum_{j=-n}^{n} \big[ I_{2}(u+i, v+j) - \bar{I}_{2} \big]^{2}}} \qquad (4)$$

where $\rho(u, v)$ is the cross-correlation coefficient, n is a fixed constant determining the window size, and $(u, v)$ are the coordinates of the window center; $\bar{I}_{1}$ and $\bar{I}_{2}$ are the image means of image $I_{1}$ and image $I_{2}$ over the window, defined as in formula (5):

$$\bar{I} = \frac{1}{(2n+1)^{2}} \sum_{i=-n}^{n} \sum_{j=-n}^{n} I(u+i, v+j) \qquad (5)$$

$I_{1}(u+i, v+j)$ and $I_{2}(u+i, v+j)$ are the amplitude values at the corresponding coordinates in the images, so the similarity between the reference image $I_{1}$ and the image to be matched $I_{2}$ can be calculated. The NCC algorithm is implemented as follows: a sliding window of size (2n+1) × (2n+1) centered at a point in the reference image $I_{1}$ is taken, a sliding window of the same size is taken in the image to be matched $I_{2}$, and the similarity of the two windows, i.e. the cross-correlation coefficient $\rho$, is calculated. The window is moved continuously in the image to be matched $I_{2}$, the cross-correlation coefficient is calculated at each position, and the point with the largest cross-correlation coefficient is selected as the matching point of the selected point in the reference image $I_{1}$. All points in the reference image $I_{1}$ are traversed and the above operation is repeated to obtain the fine registration result.
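A minimal sketch of this window search follows, implementing formulas (4)-(5) for a single reference point; the window half-size n and the search radius are assumed parameters, and the function names are illustrative.

```python
# Sketch of the NCC fine registration of eqs. (4)-(5): for one point of the reference
# image, slide a (2n+1) x (2n+1) window over a local search area of the image to be
# matched and keep the offset with the largest cross-correlation coefficient.
import numpy as np

def ncc(w1, w2):
    """Normalized cross-correlation coefficient of two equally sized windows."""
    a = w1 - w1.mean()
    b = w2 - w2.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def ncc_match(ref, tgt, u, v, n=5, search=10):
    """Best match in `tgt` for point (u, v) of `ref` (row/col pixel indices)."""
    w_ref = ref[u - n:u + n + 1, v - n:v + n + 1]
    best, best_uv = -1.0, (u, v)
    for du in range(-search, search + 1):
        for dv in range(-search, search + 1):
            uu, vv = u + du, v + dv
            w_tgt = tgt[uu - n:uu + n + 1, vv - n:vv + n + 1]
            if w_tgt.shape != w_ref.shape:
                continue                                     # skip windows falling off the image
            rho = ncc(w_ref, w_tgt)
            if rho > best:
                best, best_uv = rho, (uu, vv)
    return best_uv, best                                     # matched point and its coefficient
```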
Specifically, the step S3 further includes:
S31, for the image pair group of each main image, the three-dimensional imaging results of all matching points are calculated with the F.Leberl conformational equations according to the registration result of each image pair and the parameters of the carrier platform, giving the sparse point cloud of each image pair and finally forming the sparse point cloud set corresponding to the image pair group of the same main image;
S32, the imaging results of the same point of the main image in different image pairs are marked and connected, and correction fitting is performed to obtain the three-dimensional point cloud corresponding to the main image. All main images are traversed to obtain a point cloud sequence containing n three-dimensional point clouds corresponding to the n images, as shown in fig. 3 and fig. 4.
The specific calculation model and method are as follows:
The three-dimensional coordinates of all matching points are calculated from the platform parameters with the F.Leberl conformational equations. The F.Leberl model expresses the instantaneous imaging geometry of the radar image by the range condition and the zero-Doppler condition of the image point. For images displayed in slant range, the range condition is:

$$\sqrt{(X - X_{S})^{2} + (Y - Y_{S})^{2} + (Z - Z_{S})^{2}} = D_{s} + m_{y}\, y_{s} \qquad (6)$$

where $(X, Y, Z)$ are the object space coordinates of the ground point, $(X_{S}, Y_{S}, Z_{S})$ are the object space coordinates of the instantaneous position of the radar antenna at the corresponding moment, $y_{s}$ is the range-direction image coordinate of the ground point S in the slant-range display image, $m_{y}$ is the scale denominator of the range image, and $D_{s}$ is the scan delay.

Since the satellite velocity vector $(\dot{X}_{S}, \dot{Y}_{S}, \dot{Z}_{S})$ is perpendicular to the vector of the target relative to the antenna, the zero-Doppler condition is:

$$\dot{X}_{S}(X - X_{S}) + \dot{Y}_{S}(Y - Y_{S}) + \dot{Z}_{S}(Z - Z_{S}) = 0 \qquad (7)$$
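To make the intersection step concrete, the sketch below solves conditions (6) and (7), written for a pair of images, jointly for the ground point (X, Y, Z) by non-linear least squares; the dictionary layout of the per-image parameters (sensor position and velocity, scan delay, range scale, range coordinate) is an assumed structure for illustration, not the patent's notation.

```python
# Sketch of radargrammetric intersection with the F.Leberl model: the range
# condition (6) and zero-Doppler condition (7) from two images give four equations
# in the three unknown ground coordinates, solved here by non-linear least squares.
import numpy as np
from scipy.optimize import least_squares

def leberl_residuals(P, images):
    """Residuals of eqs. (6) and (7) for ground point P = (X, Y, Z)."""
    res = []
    for img in images:
        d = P - img["sensor_pos"]                           # target relative to antenna
        rng = img["scan_delay"] + img["scale_my"] * img["y_range"]
        res.append(np.linalg.norm(d) - rng)                 # range condition (6)
        res.append(np.dot(img["sensor_vel"], d))            # zero-Doppler condition (7)
    return res

def intersect(images, P0):
    sol = least_squares(leberl_residuals, P0, args=(images,))
    return sol.x                                             # estimated (X, Y, Z)

# Usage (hypothetical parameters):
# img_i = {"sensor_pos": np.array([...]), "sensor_vel": np.array([...]),
#          "scan_delay": 8.5e5, "scale_my": 1.2, "y_range": 2048}
# P = intersect([img_i, img_j], P0=np.zeros(3))
```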
Since each image pair is matched and imaged by partition, multiple positioning results may appear for the same point of the main image because the partition scales differ. For a certain pixel point s of the main image, the $k_{3}$ positioning results $p_{s}^{m_{1}}$, $m_{1} = 1, 2, \ldots, k_{3}$, obtained from the several image pairs are combined, and the optimal positioning estimate $p_{s}$ is obtained by the least squares method. The pixel points of the main image that have several positioning results are screened and processed in the same way, and the series of calculated positioning results is taken as the reference point cloud used to solve the rotation matrix of each three-dimensional point cloud in the image pair group relative to it.
The unknown affine matrix is solved by the least squares method, so that the transformation coordinates of the reference point cloud relative to the different point clouds are obtained, which reduces to a certain extent the influence of platform parameter errors on the three-dimensional imaging result. For the three-dimensional point cloud $P_{ij}^{m_{2}}$, $m_{2} = 1, 2, \ldots, k_{4}$, obtained from the image pair formed by main image i and auxiliary image j, the unknown affine matrix $T_{ij}$ relative to the reference point cloud $p_{s}^{m_{2}}$, $m_{2} = 1, 2, \ldots, k_{4}$, is solved as:

$$\hat{T}_{ij} = \arg\min_{T_{ij}} \sum_{m_{2}=1}^{k_{4}} \left\| p_{s}^{m_{2}} - T_{ij}\, P_{ij}^{m_{2}} \right\|^{2} \qquad (8)$$
The solution is iterated with least-squares weight selection until the error converges; all point clouds in the group of the main image are corrected to obtain a corrected three-dimensional point cloud set, a new reference point cloud is estimated from the corrected set, and the correction is repeated until the reference point cloud converges, which is taken as the final three-dimensional point cloud of the main image.
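The correction loop of formula (8) can be sketched as follows, assuming the point clouds share the row order of their same-name points with the reference cloud; the convergence tolerance and iteration cap are illustrative assumptions.

```python
# Sketch of the point-cloud correction of eq. (8): fit an affine transform mapping
# each image-pair point cloud onto the reference cloud at the shared (same-name)
# points by least squares, apply it, and re-estimate the reference.
import numpy as np

def fit_affine_3d(src, dst):
    """Least-squares 3x4 affine mapping src -> dst (both (N, 3) arrays)."""
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T                                               # 3x4 matrix [R | t]

def correct_point_clouds(clouds, reference, max_iter=10, tol=1e-3):
    """`clouds`: list of (N, 3) arrays sharing row order with `reference` (N, 3)."""
    corrected = list(clouds)
    for _ in range(max_iter):
        corrected = []
        for c in clouds:
            M = fit_affine_3d(c, reference)                  # eq. (8) estimate
            corrected.append(c @ M[:, :3].T + M[:, 3])
        new_ref = np.mean(corrected, axis=0)                 # re-estimate the reference cloud
        if np.linalg.norm(new_ref - reference) < tol:
            reference = new_ref
            break
        reference = new_ref
    return reference, corrected
```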
In conclusion, the method has good applicability to targets in large-area SAR observation scenes and can realize efficient, high-precision three-dimensional imaging of such scenes.

Claims (3)

1. The multi-angle SAR three-dimensional imaging method based on the adaptive partition SIFT is characterized by comprising the following steps of:
S1, acquiring a multi-angle SAR image sequence and corresponding platform information of the same area, and screening and matching all SAR images to obtain an SAR image sequence containing n SAR images;
S2, selecting two SAR images from the SAR image sequence to serve as the main image and the auxiliary image respectively and form an initial image pair, registering the initial image pair by adopting an image pyramid model according to the adaptive partition SIFT to obtain a rough registration result based on SIFT and a fine registration result based on cross-correlation, and judging, according to the rough registration result, whether an adjacent SAR image is added as a new auxiliary image for the same operation, so as to obtain the image pair group of the main image, containing at most n-1 image pairs, and the corresponding registration results; replacing the main image until the SAR image sequence is traversed to obtain the registration results of the n main images, which comprises the following steps:
S21, selecting two SAR images as the main image and the auxiliary image to form an initial image pair, wherein each SAR image is divided into an image pyramid formed by m image layers with different resolutions, and obtaining a preliminary registration result between the two images according to the SIFT result of each image layer, so as to obtain a coarse registration result;
S22, in order to obtain more matching points and preliminarily improve the density of the sparse point cloud, combining the SAR images at different scales to perform SIFT matching, primarily screening the matching points according to random sample consensus to obtain the affine transformation between the two images, resampling the images according to the transformation, and performing cross-correlation fine registration to obtain a fine registration result;
S23, carrying out feature point clustering according to the distribution of the SIFT results of the high-level image of the image pyramid, further carrying out grid region division and up-sampling on the low-level image to obtain several key target areas, expanding the target areas to ensure enough feature points, then registering them with the corresponding areas of the auxiliary image, and registering each final area to obtain more homonymous point measurement results;
S24, selecting, along the SAR image sequence, another image which is adjacent to the main image and whose SIFT rough registration point density is higher than a preset threshold value to form a new image pair, and executing S21-S23 until no new image can be used as an auxiliary image, finally obtaining the image pair group of the main image and the corresponding registration results;
s25, sequentially selecting new SAR images along the SAR image sequence to serve as main images, and executing S21-S24 to process n multi-angle SAR images to obtain registration results corresponding to the n main images;
S3, performing three-dimensional imaging on the registration result of each main image: performing three-dimensional imaging on the cross-correlation fine registration result of each image pair in the image pair group to obtain the sparse point cloud set corresponding to the main image, and performing repeated homonymous point correction to obtain the final point cloud of the main image, so as to obtain a point cloud sequence containing n dense point clouds corresponding to the SAR image sequence;
S4, dividing the target area to obtain partitions, dividing the three-dimensional point cloud of each SAR image according to the partition in which its registration result is located, obtaining a digital elevation model of each area according to the point cloud sequence, and combining them to obtain a more complete and accurate scene point cloud reconstruction result, wherein this step comprises the following steps:
S41, determining the area of the target scene, dividing the area and confirming the resolution; three-dimensional point clouds of all SAR images in the SAR image sequence form a three-dimensional point cloud sequence, and the three-dimensional point cloud sequence is divided into different point cloud groups according to the region of the target scene;
S42, calculating the elevation of each coordinate pixel by a maximum likelihood or least square method for each region; and then integrating all the areas to obtain a final dense point cloud and a three-dimensional model of the target.
2. The multi-angle SAR three-dimensional imaging method based on adaptive zoning SIFT according to claim 1, wherein S1 comprises:
S11, performing two-dimensional imaging on a target area, and correcting imaging parameters according to a motion compensation mode adopted by imaging to obtain an SAR image sequence, a platform coordinate and an oblique viewing angle; the imaging parameters include platform coordinates;
S12, selecting one SAR image from the SAR image sequence containing the same area as a main image, and sequencing other images according to the space-time relationship of the other images relative to the main image;
S13, screening from the main image according to the characteristic value checking result, ensuring that two adjacent image pairs have enough matched characteristic points, and simultaneously primarily screening and removing images with parallax less than a set threshold value, and finally obtaining the SAR image sequence with a certain viewing angle difference.
3. The multi-angle SAR three-dimensional imaging method based on adaptive zoning SIFT according to claim 2, wherein S3 comprises:
S31, according to the registration result of each image pair in the image pair group and the parameters of the carrier, calculating three-dimensional imaging results of all matching points by using an F.Leberl conformational equation to obtain sparse point clouds of the image pairs, thereby obtaining a three-dimensional sparse point cloud set corresponding to the image pair group of each main image;
S32, recording registration results of the same points in the image pair group of the main image, marking and connecting imaging results of the same points of the main image in different image pairs, and performing fitting correction according to the correlation and the three-dimensional imaging coordinates to obtain a three-dimensional point cloud corresponding to the main image.

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant