CN116580284A - Deep learning-based interferometric synthetic aperture radar offset measurement method - Google Patents
Info
- Publication number
- CN116580284A (application CN202310862312.2A)
- Authority
- CN
- China
- Prior art keywords
- offset
- dimensional
- feature
- fusion
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/89—Radar or analogous systems specially adapted for specific applications for mapping or imaging
- G01S13/90—Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
- G01S13/9021—SAR image post-processing techniques
- G01S13/9023—SAR image post-processing techniques combined with interferometric techniques
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Abstract
The invention relates to a deep-learning-based interferometric synthetic aperture radar (InSAR) offset measurement method comprising the following steps: resampling and shifting the secondary single-look complex image relative to the primary single-look complex image to construct a multi-scale coherence coefficient cube; fusing the features of the multi-scale coherence coefficient cube using a three-dimensional convolutional neural network and max pooling; encoding the fused feature map using a three-dimensional convolutional neural network; and finally decoding the feature map pixel by pixel with a fully connected network to obtain a high-resolution, low-noise offset map. The method measures offsets in InSAR data, can be used for tasks such as fine registration and absolute phase estimation of InSAR data, and overcomes the low resolution and high noise of conventional InSAR offset measurement methods.
Description
Technical Field
The invention relates to a deep-learning-based method for measuring offsets in interferometric synthetic aperture radar (Interferometric Synthetic Aperture Radar, InSAR) data, and in particular to a deep-learning-based method for measuring the offset between the primary and secondary single-look complex (Single Look Complex, SLC) images of an InSAR pair, belonging to the technical field of image processing.
Background
Interferometric synthetic aperture radar (Interferometric Synthetic Aperture Radar, InSAR) is an important mapping tool for global digital elevation model (Digital Elevation Model, DEM) generation, deformation measurement, and other applications. Many steps in the interferometric processing chain rely on offset measurements, for example registration of the interferometric image pair, phase unwrapping, and absolute phase measurement. Some studies use an external DEM to simulate the absolute phase and assist these steps. However, external DEM information is not always available, and both its resolution and its accuracy affect the processing results. It is therefore important to extract high-quality offsets from the InSAR image pair itself.
Current coherent offset estimation algorithms face two key problems. The first is the trade-off between resolution and measurement accuracy: according to the relationship between offset estimation variance and sample count, estimation accuracy improves as the number of samples increases, but the resolution of the offset map decreases accordingly. The second is that coherent offset measurement algorithms require prior information to compensate the topographic phase; without an external DEM the topographic phase is difficult to compensate, the coherent offset estimate no longer satisfies the circular Gaussian distribution assumption, and the error grows. Low-noise, high-resolution, high-performance InSAR offset measurement methods therefore remain to be researched and developed.
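For context (this bound is background added here for illustration and is not part of the original patent text), the accuracy-resolution trade-off can be made concrete with the Cramér-Rao-type bound commonly cited for coherent cross-correlation offset estimation (e.g., Bamler and Eineder, 2005):

```latex
% Standard deviation of the coherent offset estimate, in pixels,
% for N independent samples in the estimation window and coherence gamma:
\sigma_{\Delta} \;=\; \sqrt{\frac{3}{2N}} \,\frac{1}{\pi}\, \frac{\sqrt{1-\gamma^{2}}}{\gamma}
```

Halving the estimation error thus requires roughly four times as many samples, i.e., a four-times-larger estimation window and a correspondingly coarser offset map, which is exactly the trade-off described above.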
Disclosure of Invention
The present invention has been devised in view of the above problems of the prior art. Its aim is to measure offsets precisely in interferometric synthetic aperture radar applications so as to generate a high-resolution, low-noise offset map for subsequent InSAR tasks such as image registration, phase unwrapping, and absolute phase measurement.
To this end, the technical solution provided by the invention is as follows:
A deep-learning-based interferometric synthetic aperture radar offset measurement method, comprising the following steps:
1. shifting the secondary single-look complex image relative to the primary single-look complex image at fixed intervals within an offset measurement interval, and constructing a multi-scale coherence coefficient cube;
2. fusing the features of the multi-scale coherence coefficient cube using a three-dimensional convolutional neural network and max pooling to obtain a fused feature map;
3. encoding the fused feature map using a three-dimensional convolutional neural network;
4. decoding the encoded fused feature map pixel by pixel through a fully connected network to obtain an offset map.
Further, shifting the secondary single-look complex image relative to the primary single-look complex image at fixed intervals within the offset measurement interval and constructing the multi-scale coherence coefficient cube comprises the following steps:
The secondary single-look complex image is shifted along the range direction at fixed intervals relative to the corresponding primary single-look complex image, yielding a series of secondary single-look complex images with different offsets. Coherence coefficient maps are computed between each shifted secondary image and the primary image, and the maps corresponding to the different offsets are stacked in order of increasing offset to obtain a three-dimensional coherence coefficient cube. Coherence coefficient cubes of different scales are obtained by using different estimation window sizes when computing the coherence coefficient maps.
Further, the feature fusion of the multi-scale coherence coefficient cubes using a three-dimensional convolutional neural network and max pooling to obtain the fused feature map proceeds as follows:
The coherence coefficient cubes of different scales are processed separately by three-dimensional convolutional neural networks to obtain three-dimensional feature maps of different scales. Then, taking the lowest-resolution three-dimensional feature map as the reference, the feature maps of the different scales are downsampled to the same resolution by max pooling, and the feature maps of equal resolution are concatenated as one group of features to obtain the fused feature map.
Further, the fused feature map is encoded using three-dimensional convolutional neural networks as follows:
The fused feature map is processed by a series of three-dimensional convolutional layers. First, the number of output feature maps is increased stage by stage while the data are downsampled by max pooling; then the number of output feature maps is decreased stage by stage while the data are interpolated by upsampling. The final three-dimensional convolutional layer outputs a single feature map, completing the encoding of the fused feature map.
Further, decoding the encoded fused feature map pixel by pixel through a fully connected network to obtain the offset map proceeds as follows:
For each pixel, the encoded values of the fused feature map at the different offsets are input to a fully connected network composed of several interconnected fully connected layers; the output is the offset corresponding to that pixel.
Beneficial effects:
Compared with the prior art, the deep-learning-based InSAR offset measurement method of the invention generates high-resolution, low-noise offset maps and has the following notable advantages:
(1) The multi-scale coherence coefficient cube estimation and fusion scheme extracts more offset information from the InSAR image pair.
(2) Processing the image data with three-dimensional convolutional and fully connected neural networks rapidly yields high-resolution, low-noise offset estimates for applications such as interferometric image registration, phase unwrapping, and absolute phase measurement.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is the reference ground truth of the offset map for the experimental area;
FIG. 3 is the offset map computed by the conventional coherent cross-correlation method;
FIG. 4 is the offset map computed by the algorithm of the present invention.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of the deep-learning-based interferometric synthetic aperture radar offset measurement method. As shown in FIG. 1, the method comprises the following steps:
Step 101: shift the secondary single-look complex image relative to the primary single-look complex image at fixed intervals within the offset measurement interval, and construct a multi-scale coherence coefficient cube.
Specifically, the secondary single-look complex image is shifted relative to the primary single-look complex image along the range direction at a fixed interval, $K$ times in total ($K$ a positive integer), yielding a series of shifted secondary SLC images $S_n(p, q)$ with different offsets, where $p$ denotes the range pixel position, $q$ denotes the azimuth pixel position, and $n = 1, \dots, K$ indexes the shift offset.
A coherence coefficient map is then computed between each shifted secondary SLC image and the primary SLC image, and the maps corresponding to the different offsets are stacked in order of increasing offset to obtain a three-dimensional coherence coefficient cube $C(p, q, n)$:

$$C(p, q, n) = \frac{\bigl|\bigl\langle M(p, q)\, S_n^{*}(p, q)\, e^{-j\varphi(p, q)} \bigr\rangle_{W}\bigr|}{\sqrt{\bigl\langle |M(p, q)|^{2} \bigr\rangle_{W}\, \bigl\langle |S_n(p, q)|^{2} \bigr\rangle_{W}}}$$

where $\langle \cdot \rangle_{W}$ denotes averaging over a $W \times W$ estimation window, $M$ is the primary single-look complex image, $S_n^{*}$ is the complex conjugate of the shifted secondary image $S_n$, $\varphi$ is the flat-earth phase computed from the radar imaging geometry, $e^{-j\varphi}$ is the natural constant $e$ raised to the power $-j\varphi$, $|\cdot|$ is the element-wise modulus, $p$ and $q$ are the range and azimuth pixel positions of the coherence coefficient map, and $n$ indexes the shift offset.
Coherence coefficient cubes of different scales are obtained by varying the estimation window size $W$ used to compute the coherence coefficient maps.
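The construction of step 101 can be sketched in Python/NumPy as follows. This is a minimal illustration under stated assumptions — integer-pixel shifts via np.roll standing in for the fixed-interval resampling (which the patent does not spell out), a boxcar estimation window, and illustrative function names — not the patent's actual implementation:

```python
import numpy as np
from scipy.signal import convolve2d

def _win_mean(x, w):
    """Boxcar mean over a w-by-w estimation window (the <.>_W operator above)."""
    kernel = np.ones((w, w)) / (w * w)
    return convolve2d(x, kernel, mode="same", boundary="symm")

def coherence_map(primary, secondary, phi_flat, w):
    """Windowed coherence magnitude between the primary SLC and a shifted
    secondary SLC, after removing the flat-earth phase phi_flat."""
    num = _win_mean(primary * np.conj(secondary) * np.exp(-1j * phi_flat), w)
    den = np.sqrt(_win_mean(np.abs(primary) ** 2, w) *
                  _win_mean(np.abs(secondary) ** 2, w))
    return np.abs(num) / np.maximum(den, 1e-12)

def coherence_cube(primary, secondary, phi_flat, shifts, w):
    """Stack coherence maps over the candidate range shifts, smallest first."""
    return np.stack([coherence_map(primary, np.roll(secondary, n, axis=1),
                                   phi_flat, w)
                     for n in sorted(shifts)])  # -> (n_shifts, rows, cols)

# Cubes at several scales, e.g. estimation windows W = 4, 8, 16 as in Table 1:
# cubes = [coherence_cube(M, S, phi, range(-4, 5), W) for W in (4, 8, 16)]
```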
Step 102: and carrying out feature fusion on the multi-scale coherent coefficient cube by adopting a three-dimensional convolutional neural network and a maximum value pooling method to obtain a fusion feature map.
Specifically, firstly, a three-dimensional convolutional neural network is utilized to respectively process the coherent coefficient cubes with different scales output in the step 101, so as to obtain three-dimensional feature graphs with different scales.
And then taking the three-dimensional feature map with the lowest resolution as a reference, and adopting a maximum value pooling layer (Max-pooling) to downsample the three-dimensional feature maps with different scales to the same resolution. And then, combining the feature images with the same resolution as the same group of features to obtain a fusion feature image. The network parameters for a set of different scale coherence coefficient cube fusions are shown in Table 1, table 1 showsA method for fusing feature maps of three scales of 4, 8 and 16. Wherein 3D-Conv represents a three-dimensional convolution layer, M and N respectively represent the number of samples in azimuth and distance directions of input main and auxiliary single-view complex images, and four dimensions of the output dimension are respectively the number of samples in azimuth and the number of samples in distance directions,Number of movements, number of feature maps.
Table 1. Network parameters for fusing one set of coherence coefficient cubes of different scales
[Table 1 appears only as an image in the source and is not reproduced here.]
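As a hedged sketch of this fusion step, the following PyTorch module applies one three-dimensional convolution per scale and max-pools every scale's feature volume to the coarsest grid before concatenation; the channel count and the use of adaptive pooling are assumptions standing in for the parameters of Table 1:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusion(nn.Module):
    """Per-scale 3D convolution, max pooling to the lowest-resolution grid,
    then channel-wise concatenation (step 102). The channel count `feat` is
    an illustrative placeholder for the values in Table 1."""

    def __init__(self, n_scales: int = 3, feat: int = 8):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Conv3d(1, feat, kernel_size=3, padding=1) for _ in range(n_scales))

    def forward(self, cubes):
        # cubes: list of (B, 1, n_shifts, H_s, W_s) tensors, ordered fine to coarse
        feats = [head(c) for head, c in zip(self.heads, cubes)]
        ref = feats[-1].shape[2:]                     # coarsest grid as reference
        pooled = [F.adaptive_max_pool3d(f, ref) for f in feats]
        return torch.cat(pooled, dim=1)               # (B, n_scales*feat, D, H, W)
```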
Step 103: and encoding the fusion feature map by using a three-dimensional convolutional neural network.
Specifically, the fused feature map is processed using a series of three-dimensional convolutional neural networks. Firstly, the number of the output feature graphs of the three-dimensional convolutional neural network is increased step by step, the data is downsampled by adopting a maximum value pooling method, and the resolution of the feature graphs is reduced step by step. Then, the number of the output feature images of the three-dimensional convolutional neural network is gradually reduced, the data is interpolated by an up-sampling method, and the resolution of the feature images is gradually increased. And the final three-dimensional convolution layer is directly output, the number of the characteristic images is 1, and the encoding of the fusion characteristic images is completed. The profile encoding neural network parameters are shown in table 2.
Table 2 feature map encoded neural network parameters
,
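A corresponding sketch of the encoder, with channel counts that first grow under max-pool downsampling and then shrink under trilinear upsampling, ending in a single-channel volume; the specific depths and widths are assumptions in place of Table 2:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CubeEncoder(nn.Module):
    """Hourglass 3D-CNN for step 103: channels up while pooling down, then
    channels down while upsampling, finishing with one output feature map.
    Widths and depths are illustrative placeholders for Table 2."""

    def __init__(self, in_ch: int = 24):
        super().__init__()
        def conv(ci, co):
            return nn.Sequential(nn.Conv3d(ci, co, 3, padding=1), nn.ReLU())
        self.enc1, self.enc2 = conv(in_ch, 32), conv(32, 64)
        self.dec1, self.dec2 = conv(64, 32), conv(32, 16)
        self.pool = nn.MaxPool3d(2)
        self.head = nn.Conv3d(16, 1, 3, padding=1)    # single output feature map

    def forward(self, x):
        # x: (B, in_ch, D, H, W), with D, H, W divisible by 4 for this sketch
        x = self.pool(self.enc1(x))                   # channels up, resolution down
        x = self.pool(self.enc2(x))
        x = F.interpolate(self.dec1(x), scale_factor=2, mode="trilinear")
        x = F.interpolate(self.dec2(x), scale_factor=2, mode="trilinear")
        return self.head(x)                           # (B, 1, D, H, W)
```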
Step 104: and decoding the encoded fusion feature map pixel by pixel through a fully connected network to obtain an offset map.
The method comprises the following steps: the coding data (dimension is) And inputting a fully connected network. The fully connected network is interconnected by a plurality of fully connected layers. The full connection offset decoding network parameters are shown in table 3, where FC represents the full connection layer. The output result is the corresponding offset of each pixel. After pixel-by-pixel processing, a pair of high resolution, low noise offset maps is finally obtained.
Table 3 full connection offset decoding network parameters
,
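The per-pixel decoding can likewise be sketched as a small multilayer perceptron applied to each pixel's vector of encoded values across the candidate offsets; the layer widths are assumptions in place of Table 3:

```python
import torch
import torch.nn as nn

class PixelwiseDecoder(nn.Module):
    """Step 104: regress each pixel's n_shifts-long encoded vector to a
    single offset with fully connected layers. Layer widths are illustrative
    placeholders for Table 3."""

    def __init__(self, n_shifts: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_shifts, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1))

    def forward(self, encoded):
        # encoded: (B, 1, n_shifts, H, W), the output of the encoder sketch above
        b, _, d, h, w = encoded.shape
        vecs = encoded.squeeze(1).permute(0, 2, 3, 1).reshape(-1, d)
        return self.mlp(vecs).view(b, h, w)           # one offset per pixel
```

Chained together, the three sketches mirror the flow of FIG. 1: fused = MultiScaleFusion(3, 8)(cubes); encoded = CubeEncoder(24)(fused); offsets = PixelwiseDecoder(encoded.shape[2])(encoded).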
The technical solution of the invention is described in further detail below with reference to a specific embodiment.
Example 1
The technical solution of the invention was verified using measured data from the PALSAR synthetic aperture radar sensor on the ALOS satellite over a mountainous area. Fig. 2 shows the reference ground truth of the offset map for the experimental area. Fig. 3 shows the offset map computed by the conventional coherent cross-correlation method, and Fig. 4 the offset map computed by the algorithm of the present invention. The offset map measured by the cross-correlation method exhibits stronger noise and lower resolution, whereas the offset map produced by the proposed method shows markedly reduced noise and higher resolution. According to the statistics, the mean absolute measurement error is reduced from 0.0337 pixel to 0.0075 pixel, a substantial improvement.
The foregoing describes only some embodiments of the present invention and is not intended to limit its scope.
Claims (5)
1. A deep-learning-based interferometric synthetic aperture radar offset measurement method, characterized in that it comprises the following steps:
shifting the secondary single-look complex image relative to the primary single-look complex image at fixed intervals within an offset measurement interval, and constructing a multi-scale coherence coefficient cube;
performing feature fusion on the multi-scale coherence coefficient cube using a three-dimensional convolutional neural network and max pooling to obtain a fused feature map;
encoding the fused feature map using a three-dimensional convolutional neural network; and
decoding the encoded fused feature map pixel by pixel through a fully connected network to obtain an offset map.
2. The method of claim 1, characterized in that shifting the secondary single-look complex image relative to the primary single-look complex image at fixed intervals within the offset measurement interval and constructing the multi-scale coherence coefficient cube comprises:
shifting the secondary single-look complex image along the range direction at fixed intervals relative to the corresponding primary single-look complex image to obtain a series of secondary single-look complex images with different offsets; computing coherence coefficient maps between the shifted secondary images and the primary image; stacking the coherence coefficient maps corresponding to the different offsets in order of increasing offset to obtain a three-dimensional coherence coefficient cube; and obtaining coherence coefficient cubes of different scales by using different estimation window sizes when computing the coherence coefficient maps.
3. The method of claim 1, characterized in that performing feature fusion on the multi-scale coherence coefficient cube using a three-dimensional convolutional neural network and max pooling to obtain the fused feature map comprises:
processing the coherence coefficient cubes of different scales separately with three-dimensional convolutional neural networks to obtain three-dimensional feature maps of different scales; then, taking the lowest-resolution three-dimensional feature map as the reference, downsampling the three-dimensional feature maps of the different scales to the same resolution by max pooling; and concatenating the feature maps of equal resolution as one group of features to obtain the fused feature map.
4. The method of claim 1, characterized in that encoding the fused feature map using a three-dimensional convolutional neural network comprises:
processing the fused feature map with a series of three-dimensional convolutional neural networks, first increasing the number of output feature maps stage by stage while downsampling the data by max pooling, then decreasing the number of output feature maps stage by stage while interpolating the data by upsampling, the final three-dimensional convolutional layer outputting a single feature map to complete the encoding of the fused feature map.
5. The method of claim 1, characterized in that decoding the encoded fused feature map pixel by pixel through a fully connected network comprises:
inputting, for each pixel, the encoded data of the fused feature map at the different offsets into a fully connected network consisting of several interconnected fully connected layers, the output being the offset corresponding to that pixel.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310862312.2A (granted as CN116580284B) | 2023-07-14 | 2023-07-14 | Deep learning-based interferometric synthetic aperture radar offset measurement method
Publications (2)
Publication Number | Publication Date |
---|---|
CN116580284A | 2023-08-11
CN116580284B | 2023-09-15
Family
ID=87540074
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310862312.2A Active CN116580284B (en) | 2023-07-14 | 2023-07-14 | Deep learning-based interferometric synthetic aperture radar offset measurement method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116580284B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE19626556A1 (en) * | 1996-07-02 | 1998-01-15 | Joao R Dr Ing Moreira | Determining absolute phase between two interferometric images generated by synthetic aperture radar |
CN109001735A (en) * | 2018-07-27 | 2018-12-14 | 中国科学院国家空间科学中心 | A kind of scene classification method based on interference synthetic aperture radar image |
CN109738896A (en) * | 2019-02-11 | 2019-05-10 | 黄河水利职业技术学院 | A kind of Ground Deformation monitoring method based on SAR Image Matching technology |
US20210033726A1 (en) * | 2019-08-01 | 2021-02-04 | University Of Seoul Industry Cooperation Foundation | Method and apparatus for phase unwrapping of synthetic aperture radar (sar) interferogram based on sar offset tracking surface displacement model |
CN113065467A (en) * | 2021-04-01 | 2021-07-02 | 中科星图空间技术有限公司 | Satellite image low-coherence region identification method and device based on deep learning |
CN115546264A (en) * | 2022-09-29 | 2022-12-30 | 中国科学院空天信息创新研究院 | Satellite-borne InSAR image fine registration and stereo measurement method |
Non-Patent Citations (2)
Title |
---|
CHARLES WERNER et al.: "Precision estimation of local offsets between pairs of SAR SLCs and detected SAR images", Proceedings of the 2005 IEEE International Geoscience and Remote Sensing Symposium (IGARSS '05), pp. 4803-4805.
GUO Jiao; LIU Yanyang; SU Baofeng: "A precise registration algorithm for distributed satellite InSAR images using coarse DEM information", Signal Processing (信号处理), no. 04.
Also Published As
Publication number | Publication date |
---|---|
CN116580284B (en) | 2023-09-15 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |