CN116580284B - Deep learning-based interferometric synthetic aperture radar offset measurement method

Deep learning-based interferometric synthetic aperture radar offset measurement method

Info

Publication number
CN116580284B
CN116580284B
Authority
CN
China
Prior art keywords
offset
dimensional
feature
fusion
pixel
Prior art date
Legal status
Active
Application number
CN202310862312.2A
Other languages
Chinese (zh)
Other versions
CN116580284A (en)
Inventor
吴羽纶
王吉利
张衡
赵凤军
Current Assignee
Aerospace Information Research Institute of CAS
Original Assignee
Aerospace Information Research Institute of CAS
Priority date
Filing date
Publication date
Application filed by Aerospace Information Research Institute of CAS filed Critical Aerospace Information Research Institute of CAS
Priority to CN202310862312.2A priority Critical patent/CN116580284B/en
Publication of CN116580284A publication Critical patent/CN116580284A/en
Application granted granted Critical
Publication of CN116580284B publication Critical patent/CN116580284B/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S13/90Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S13/9021SAR image post-processing techniques
    • G01S13/9023SAR image post-processing techniques combined with interferometric techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Electromagnetism (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention relates to a deep-learning-based interferometric synthetic aperture radar offset measurement method comprising the following steps: resampling and shifting the auxiliary single-look complex image relative to the main single-look complex image to construct a multi-scale coherence coefficient cube; performing feature fusion on the multi-scale coherence coefficient cube with a three-dimensional convolutional neural network and maximum-value pooling; encoding the fused feature map with a three-dimensional convolutional neural network; and finally decoding the feature map pixel by pixel with a fully connected network to obtain a high-resolution, low-noise offset map. The method realizes offset measurement for interferometric synthetic aperture radar data, can be used for tasks such as fine registration and absolute phase estimation of such data, and addresses the low resolution and high noise of traditional interferometric synthetic aperture radar offset measurement methods.

Description

Deep learning-based interferometric synthetic aperture radar offset measurement method
Technical Field
The invention relates to a deep-learning-based interferometric synthetic aperture radar offset measurement method, and in particular to a deep-learning-based method for measuring the offset between the main and auxiliary single look complex (Single Look Complex, SLC) images of an interferometric synthetic aperture radar (Interferometric Synthetic Aperture Radar, InSAR), belonging to the technical field of image processing.
Background
Interferometric synthetic aperture radar (Interferometric Synthetic Aperture Radar, InSAR) is an important mapping tool with major implications for global digital elevation model (Digital Elevation Model, DEM) generation, deformation measurement, and other applications. Currently, many steps in the interferometric processing flow rely on offset measurements, for example: registration of interferometric images, phase unwrapping, and absolute phase measurement. Some studies use external digital elevation models to simulate the absolute phase and assist these procedures. However, external digital elevation model information is not always available, and its resolution and accuracy both affect the processing results. It is therefore important to extract high-quality offsets from the interferometric synthetic aperture radar image pair itself.
Current coherent offset estimation algorithms face two key problems. The first is the trade-off between resolution and measurement accuracy: according to the relationship between the offset and the measurement variance, the estimation accuracy improves as the number of samples increases, but the resolution of the offset map decreases. The second is that coherent offset measurement algorithms require a priori information to compensate the terrain phase. Without an external digital elevation model, the terrain phase is difficult to compensate, the coherent offset estimate no longer satisfies the circular Gaussian distribution assumption, and the error increases. Low-noise, high-resolution, high-performance interferometric synthetic aperture radar offset measurement methods therefore remain to be researched and developed.
Disclosure of Invention
The present invention has been devised in view of the above-mentioned problems of the prior art. It aims to precisely measure offsets in interferometric synthetic aperture radar applications and to generate a high-resolution, low-noise offset map for subsequent InSAR image registration, phase unwrapping, absolute phase measurement, and similar applications.
In order to achieve the above purpose, the technical scheme provided by the invention is as follows:
a method for measuring the offset of an interference synthetic aperture radar based on deep learning comprises the following steps:
1. moving the auxiliary single-view complex image relative to the main single-view complex image at fixed intervals in an offset measurement interval, and constructing a multi-scale coherence coefficient cube;
2. carrying out feature fusion on the multi-scale coherence coefficient cubes by adopting a three-dimensional convolutional neural network and maximum value pooling to obtain a fusion feature map;
3. encoding the fusion feature map by using a three-dimensional convolutional neural network;
4. and decoding the encoded fusion feature map pixel by pixel through a fully connected network to obtain an offset map.
Further, the specific method of shifting the auxiliary single-look complex image relative to the main single-look complex image at fixed intervals within the offset measurement interval and constructing the multi-scale coherence coefficient cube is as follows:
The auxiliary single-look complex image is shifted along the range direction at fixed intervals relative to the corresponding main single-look complex image to obtain a series of auxiliary single-look complex images with different offsets; coherence coefficient maps are calculated between the shifted auxiliary images and the main image; the coherence coefficient maps corresponding to the different offsets are stacked in order of increasing offset to obtain a three-dimensional coherence coefficient cube; and coherence coefficient cubes of different scales are obtained by varying the estimation window size used to calculate the coherence coefficient maps.
Further, the specific method of performing feature fusion on the multi-scale coherence coefficient cubes with a three-dimensional convolutional neural network and maximum-value pooling to obtain the fused feature map is as follows:
The coherence coefficient cubes of different scales are processed separately by three-dimensional convolutional neural networks to obtain three-dimensional feature maps of different scales; then, taking the lowest-resolution three-dimensional feature map as the reference, maximum-value pooling is used to downsample the three-dimensional feature maps of the other scales to the same resolution; finally, the feature maps of equal resolution are combined as one group of features to obtain the fused feature map.
Further, the specific method of encoding the fused feature map with the three-dimensional convolutional neural network is as follows:
The fused feature map is processed by a series of three-dimensional convolutional layers. First, the number of output feature maps is increased step by step while maximum-value pooling downsamples the data; then, the number of output feature maps is decreased step by step while upsampling interpolates the data; the final three-dimensional convolutional layer outputs a single feature map, completing the encoding of the fused feature map.
Further, the specific method of decoding the encoded fused feature map pixel by pixel through the fully connected network to obtain the offset map is as follows:
For each pixel, the encoded data of the fused feature map at the different offsets is input into a fully connected network composed of several interconnected fully connected layers, and the output is the offset corresponding to that pixel.
The beneficial effects are that:
compared with the prior art, the method for measuring the offset of the interference synthetic aperture radar based on deep learning realizes the offset graph generation with high resolution and low noise, and has the following remarkable effects compared with the prior art:
(1) The invention adopts a multiscale coherence coefficient cube estimation and fusion scheme, and can extract more offset information from the interference synthetic aperture radar image pair.
(2) The invention adopts the three-dimensional convolution neural network and the full-connection neural network to process the image data, can rapidly output the offset estimation value with high resolution and low noise, and is used for the applications of interference image registration, phase unwrapping, absolute phase measurement and the like.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 shows the reference truth of the offset map for the experimental area;
FIG. 3 shows the offset map calculated by the conventional coherent cross-correlation method;
FIG. 4 shows the offset map calculated by the algorithm of the present invention.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and should therefore not be regarded as limiting its scope; a person skilled in the art may derive other related drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of the deep-learning-based interferometric synthetic aperture radar offset measurement method. As shown in FIG. 1, the method comprises the following steps:
Step 101: shift the auxiliary single-look complex image relative to the main single-look complex image at fixed intervals within the offset measurement interval, and construct a multi-scale coherence coefficient cube.
Specifically, the auxiliary single-look complex image is shifted relative to the main single-look complex image along the range direction at fixed intervals, a total of $N$ times with $N$ a positive integer, yielding a series of shifted auxiliary single-look complex images $S(r, a; \Delta r)$ with different offsets, where $r$ denotes the range pixel position of the image, $a$ denotes the azimuth pixel position, and $\Delta r$ denotes the applied shift offset.
The shifted auxiliary single-look complex images are then used, together with the main single-look complex image, to calculate coherence coefficient maps. Stacking the coherence coefficient maps corresponding to the different offsets in order of increasing offset yields a three-dimensional coherence coefficient cube $C(r, a, \Delta r)$:

$$C(r, a, \Delta r) = \frac{\left| \left\langle M(r, a)\, S^{*}(r, a; \Delta r)\, e^{-j\varphi(r, a)} \right\rangle_{W} \right|}{\sqrt{\left\langle \left| M(r, a) \right|^{2} \right\rangle_{W} \left\langle \left| S(r, a; \Delta r) \right|^{2} \right\rangle_{W}}}$$

where $\left\langle \cdot \right\rangle_{W}$ denotes averaging the elements within an estimation window of size $W$, $M$ denotes the main single-look complex image, $S^{*}$ denotes the conjugate of the shifted auxiliary single-look complex image, $\varphi$ denotes the flat-earth phase calculated from the radar imaging geometry, $e^{-j\varphi}$ is the natural constant $e$ raised to the power $-j\varphi$, $\left| \cdot \right|$ denotes taking the modulus of each element, $r$ denotes the range pixel position of the coherence coefficient map, $a$ denotes its azimuth pixel position, and $\Delta r$ denotes the shift offset.
Coherence coefficient cubes of different scales are obtained according to the estimation window size $W$ employed in calculating the coherence coefficient maps.
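For concreteness, the following is a minimal Python sketch of step 101. The function names, the use of scipy's uniform_filter as the window average, the shift range, and the window sizes 4/8/16 are illustrative assumptions rather than the patent's implementation; the integer-pixel np.roll stands in for the resampling-based shifts described in the abstract.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence_map(master, slave_shifted, flat_phase, win):
    """Sample coherence magnitude between two same-sized SLCs over a win x win window."""
    # Interferogram with the geometry-derived flat-earth phase removed.
    inter = master * np.conj(slave_shifted) * np.exp(-1j * flat_phase)
    num = uniform_filter(inter.real, win) + 1j * uniform_filter(inter.imag, win)
    den = np.sqrt(uniform_filter(np.abs(master) ** 2, win) *
                  uniform_filter(np.abs(slave_shifted) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)

def coherence_cube(master, slave, flat_phase, shifts, win):
    """Stack coherence maps over a set of range shifts, smallest offset first."""
    maps = []
    for s in sorted(shifts):
        # Integer-pixel shift along range (axis 1 assumed to be range); the
        # patent's fixed-interval shifts may be sub-pixel, which would require
        # resampling instead of np.roll.
        slave_s = np.roll(slave, s, axis=1)
        maps.append(coherence_map(master, slave_s, flat_phase, win))
    return np.stack(maps, axis=-1)  # (azimuth, range, n_shifts)

# One cube per estimation window size gives the multi-scale set, e.g.:
# cubes = [coherence_cube(M, S, phi, range(-4, 5), w) for w in (4, 8, 16)]
```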
Step 102: and carrying out feature fusion on the multi-scale coherent coefficient cube by adopting a three-dimensional convolutional neural network and a maximum value pooling method to obtain a fusion feature map.
Specifically, firstly, a three-dimensional convolutional neural network is utilized to respectively process the coherent coefficient cubes with different scales output in the step 101, so as to obtain three-dimensional feature graphs with different scales.
And then taking the three-dimensional feature map with the lowest resolution as a reference, and adopting a maximum value pooling layer (Max-pooling) to downsample the three-dimensional feature maps with different scales to the same resolution. And then, combining the feature images with the same resolution as the same group of features to obtain a fusion feature image. The network parameters for a set of different scale coherence coefficient cube fusions are shown in Table 1, table 1 showsA method for fusing feature maps of three scales of 4, 8 and 16. The 3D-Conv represents a three-dimensional convolution layer, M and N respectively represent the number of samples in the azimuth direction and the distance direction of the input main single-view complex image and the auxiliary single-view complex image, and the four dimensions of the output size are the number of samples in the azimuth direction, the number of samples in the distance direction, the number of times of movement and the number of feature images respectively.
Table 1. Network parameters for fusing a set of coherence coefficient cubes of different scales (table content not reproduced in this text)
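Since the table itself is not reproduced, the PyTorch sketch below only mirrors the fusion structure described above; the channel count, kernel size, and the assumption that each finer spatial grid is an integer multiple of the coarsest one are hypothetical choices, not the parameters of Table 1.

```python
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    """Per-scale 3D convs, then max-pool every scale to the coarsest grid and concatenate."""

    def __init__(self, n_scales=3, feat=8):
        super().__init__()
        # One small 3D convolutional branch per coherence-cube scale.
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Conv3d(1, feat, kernel_size=3, padding=1),
                          nn.ReLU(inplace=True))
            for _ in range(n_scales)
        )

    def forward(self, cubes):
        # cubes: list of (B, 1, azimuth, range, n_shifts) tensors, finest
        # spatial grid first; all cubes share the offset-axis length.
        feats = [branch(c) for branch, c in zip(self.branches, cubes)]
        tgt_a, tgt_r = feats[-1].shape[2], feats[-1].shape[3]
        pooled = []
        for f in feats:
            # Max-pool azimuth and range down to the coarsest resolution,
            # leaving the offset axis untouched.
            k = (f.shape[2] // tgt_a, f.shape[3] // tgt_r, 1)
            pooled.append(nn.functional.max_pool3d(f, kernel_size=k))
        return torch.cat(pooled, dim=1)  # one fused group of features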
Step 103: and encoding the fusion feature map by using a three-dimensional convolutional neural network.
Specifically, the fused feature map is processed using a series of three-dimensional convolutional neural networks. Firstly, the number of the output feature graphs of the three-dimensional convolutional neural network is increased step by step, the data is downsampled by adopting a maximum value pooling method, and the resolution of the feature graphs is reduced step by step. Then, the number of the output feature images of the three-dimensional convolutional neural network is gradually reduced, the data is interpolated by an up-sampling method, and the resolution of the feature images is gradually increased. And the final three-dimensional convolution layer is directly output, the number of the characteristic images is 1, and the encoding of the fusion characteristic images is completed. The profile encoding neural network parameters are shown in table 2.
Table 2. Feature-map encoding network parameters (table content not reproduced in this text)
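As Table 2 is likewise not reproduced, the encoder below is only a structural sketch of the description: channel widths grow and then shrink around two max-pooling/upsampling levels, and the last layer emits a single feature map. All widths and the pooling depth are assumed values, and spatial sizes are assumed divisible by the pooling factors.

```python
import torch.nn as nn

def conv3d_block(cin, cout):
    return nn.Sequential(nn.Conv3d(cin, cout, 3, padding=1),
                         nn.ReLU(inplace=True))

class FeatureEncoder(nn.Module):
    def __init__(self, cin=24):
        super().__init__()
        self.down1 = conv3d_block(cin, 2 * cin)      # widen features step by step
        self.down2 = conv3d_block(2 * cin, 4 * cin)
        self.pool = nn.MaxPool3d(kernel_size=(2, 2, 1))  # pool space, keep offsets
        self.up1 = conv3d_block(4 * cin, 2 * cin)    # then narrow step by step
        self.up2 = conv3d_block(2 * cin, cin)
        self.out = nn.Conv3d(cin, 1, 3, padding=1)   # single output feature map

    def forward(self, x):  # x: (B, cin, azimuth, range, n_shifts)
        x = self.pool(self.down1(x))
        x = self.pool(self.down2(x))
        x = nn.functional.interpolate(x, scale_factor=(2, 2, 1))  # upsample back
        x = self.up1(x)
        x = nn.functional.interpolate(x, scale_factor=(2, 2, 1))
        x = self.up2(x)
        return self.out(x)  # (B, 1, azimuth, range, n_shifts)
```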
Step 104: and decoding the encoded fusion feature map pixel by pixel through a fully connected network to obtain an offset map.
The method comprises the following steps: the coding data (dimension is) And inputting a fully connected network. The fully connected network is interconnected by a plurality of fully connected layers. The full connection offset decoding network parameters are shown in table 3, where FC represents the full connection layer. The output result is the corresponding offset of each pixel. After pixel-by-pixel processing, a pair of high resolution, low noise offset maps is finally obtained.
Table 3. Fully connected offset decoding network parameters (table content not reproduced in this text)
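A sketch of the per-pixel decoder follows. The hidden widths stand in for the unreproduced Table 3; only the structure, one shared fully connected network regressing one offset per pixel from that pixel's encoded offset-axis vector, is taken from the description. The classes referenced at the end are the sketches defined above.

```python
import torch
import torch.nn as nn

class OffsetDecoder(nn.Module):
    """Shared MLP applied pixel by pixel to the encoded offset-axis vector."""

    def __init__(self, n_shifts=9, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_shifts, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),  # one offset value per pixel
        )

    def forward(self, encoded):
        # encoded: (B, 1, azimuth, range, n_shifts) from the encoder.
        b, _, a, r, k = encoded.shape
        flat = encoded.reshape(b * a * r, k)  # one row per pixel
        return self.mlp(flat).view(b, a, r)   # the decoded offset map

# End-to-end wiring under the assumed shapes:
# fused = MultiScaleFusion()(cubes)                       # (B, 24, A', R', K)
# offset_map = OffsetDecoder(n_shifts=K)(FeatureEncoder(cin=24)(fused))
```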
The technical scheme of the invention is further described in detail below with reference to specific embodiments.
Example 1
The technical scheme of the invention was verified using measured data acquired over a mountainous area by the PALSAR sensor (Phased Array type L-band Synthetic Aperture Radar) aboard the ALOS satellite. Fig. 2 shows the reference truth of the offset map for the experimental area. Fig. 3 shows the offset map calculated by the conventional coherent cross-correlation method. Fig. 4 shows the offset map calculated by the algorithm of the invention. It can be seen that the offset map measured by the cross-correlation method has larger noise and lower resolution, whereas the offset map produced by the proposed measurement method has a markedly improved noise level and higher resolution. According to the statistics, the mean absolute measurement error is reduced from 0.0337 pixel to 0.0075 pixel, a very significant improvement.
The foregoing is merely a partial embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (4)

1. A deep-learning-based interferometric synthetic aperture radar offset measurement method, characterized by comprising the following steps:
shifting the auxiliary single-look complex image relative to the main single-look complex image at fixed intervals within an offset measurement interval, and constructing a multi-scale coherence coefficient cube;
performing feature fusion on the multi-scale coherence coefficient cube with a three-dimensional convolutional neural network and maximum-value pooling to obtain a fused feature map;
encoding the fused feature map with a three-dimensional convolutional neural network;
decoding the encoded fused feature map pixel by pixel through a fully connected network to obtain an offset map;
wherein the specific method of shifting the auxiliary single-look complex image relative to the main single-look complex image at fixed intervals within the offset measurement interval and constructing the multi-scale coherence coefficient cube is as follows:
the auxiliary single-look complex image is shifted along the range direction at fixed intervals relative to the corresponding main single-look complex image to obtain a series of auxiliary single-look complex images with different offsets; coherence coefficient maps are calculated between the shifted auxiliary images and the main image; the coherence coefficient maps corresponding to the different offsets are stacked in order of increasing offset to obtain a three-dimensional coherence coefficient cube; and coherence coefficient cubes of different scales are obtained by varying the estimation window size used to calculate the coherence coefficient maps.
2. The method of claim 1, wherein the specific method of performing feature fusion on the multi-scale coherence coefficient cube with the three-dimensional convolutional neural network and maximum-value pooling to obtain the fused feature map is as follows:
the coherence coefficient cubes of different scales are processed separately by three-dimensional convolutional neural networks to obtain three-dimensional feature maps of different scales; then, taking the lowest-resolution three-dimensional feature map as the reference, maximum-value pooling is used to downsample the three-dimensional feature maps of the other scales to the same resolution; finally, the feature maps of equal resolution are combined as one group of features to obtain the fused feature map.
3. The method of claim 1, wherein the specific method of encoding the fused feature map with the three-dimensional convolutional neural network is as follows:
the fused feature map is processed by a series of three-dimensional convolutional layers; first, the number of output feature maps is increased step by step while maximum-value pooling downsamples the data; then, the number of output feature maps is decreased step by step while upsampling interpolates the data; the final three-dimensional convolutional layer outputs a single feature map, completing the encoding of the fused feature map.
4. The method of claim 1, wherein the specific method of decoding the encoded fused feature map pixel by pixel through the fully connected network to obtain the offset map is as follows:
for each pixel, the encoded data of the fused feature map at the different offsets is input into a fully connected network composed of several interconnected fully connected layers, and the output is the offset corresponding to that pixel.
CN202310862312.2A 2023-07-14 2023-07-14 Deep learning-based interferometric synthetic aperture radar offset measurement method Active CN116580284B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310862312.2A CN116580284B (en) 2023-07-14 2023-07-14 Deep learning-based interferometric synthetic aperture radar offset measurement method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310862312.2A CN116580284B (en) 2023-07-14 2023-07-14 Deep learning-based interferometric synthetic aperture radar offset measurement method

Publications (2)

Publication Number Publication Date
CN116580284A (en) 2023-08-11
CN116580284B (en) 2023-09-15

Family

ID=87540074

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310862312.2A Active CN116580284B (en) 2023-07-14 2023-07-14 Deep learning-based interferometric synthetic aperture radar offset measurement method

Country Status (1)

Country Link
CN (1) CN116580284B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19626556A1 (en) * 1996-07-02 1998-01-15 Joao R Dr Ing Moreira Determining absolute phase between two interferometric images generated by synthetic aperture radar
CN109001735A (en) * 2018-07-27 2018-12-14 中国科学院国家空间科学中心 A kind of scene classification method based on interference synthetic aperture radar image
CN109738896A (en) * 2019-02-11 2019-05-10 黄河水利职业技术学院 A kind of Ground Deformation monitoring method based on SAR Image Matching technology
CN113065467A (en) * 2021-04-01 2021-07-02 中科星图空间技术有限公司 Satellite image low-coherence region identification method and device based on deep learning
CN115546264A (en) * 2022-09-29 2022-12-30 中国科学院空天信息创新研究院 Satellite-borne InSAR image fine registration and stereo measurement method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102142674B1 (en) * 2019-08-01 2020-08-07 서울시립대학교 산학협력단 Method and Apparatus for Synthetic Aperture Radar Phase Unwrapping based on SAR Offset Tracking Displacement Model

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19626556A1 (en) * 1996-07-02 1998-01-15 Joao R Dr Ing Moreira Determining absolute phase between two interferometric images generated by synthetic aperture radar
CN109001735A (en) * 2018-07-27 2018-12-14 中国科学院国家空间科学中心 A kind of scene classification method based on interference synthetic aperture radar image
CN109738896A (en) * 2019-02-11 2019-05-10 黄河水利职业技术学院 A kind of Ground Deformation monitoring method based on SAR Image Matching technology
CN113065467A (en) * 2021-04-01 2021-07-02 中科星图空间技术有限公司 Satellite image low-coherence region identification method and device based on deep learning
CN115546264A (en) * 2022-09-29 2022-12-30 中国科学院空天信息创新研究院 Satellite-borne InSAR image fine registration and stereo measurement method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Charles Werner et al. Precision estimation of local offsets between pairs of SAR SLCs and detected SAR images. Proceedings of the 2005 IEEE International Geoscience and Remote Sensing Symposium (IGARSS '05), 2005, pp. 4803-4805. *
Fine registration algorithm for distributed satellite InSAR images using coarse DEM information; Guo Jiao; Liu Yanyang; Su Baofeng; Journal of Signal Processing (No. 04); full text *

Also Published As

Publication number Publication date
CN116580284A (en) 2023-08-11

Similar Documents

Publication Publication Date Title
CN111833393A (en) Binocular stereo matching method based on edge information
WO2022206020A1 (en) Method and apparatus for estimating depth of field of image, and terminal device and storage medium
Thomas et al. High resolution (400 m) motion characterization of sea ice using ERS-1 SAR imagery
CN111985551B (en) Stereo matching algorithm based on multi-attention network
CN112233179B (en) Visual odometer measuring method
CN113065467B (en) Satellite image low coherence region identification method and device based on deep learning
CN111242999B (en) Parallax estimation optimization method based on up-sampling and accurate re-matching
CN111861884A (en) Satellite cloud image super-resolution reconstruction method based on deep learning
CN116305902B (en) Flood maximum submerged depth space simulation method based on multi-mode remote sensing
CN103454636A (en) Differential interferometric phase estimation method based on multi-pixel covariance matrixes
CN111879258A (en) Dynamic high-precision three-dimensional measurement method based on fringe image conversion network FPTNet
CN115311314B (en) Resampling method, system and storage medium for line laser contour data
CN114265062B (en) InSAR phase unwrapping method based on phase gradient estimation network
CN106157258B (en) A kind of satellite-borne SAR image geometric correction method
Yao et al. Toward real-world super-resolution technique for fringe projection profilometry
CN112802184B (en) Three-dimensional point cloud reconstruction method, three-dimensional point cloud reconstruction system, electronic equipment and storage medium
CN116580284B (en) Deep learning-based interferometric synthetic aperture radar offset measurement method
CN117788296A (en) Infrared remote sensing image super-resolution reconstruction method based on heterogeneous combined depth network
Pouderoux et al. Global contour lines reconstruction in topographic maps
CN115311168A (en) Depth estimation method for multi-view visual system, electronic device and medium
CN115731345A (en) Human body three-dimensional reconstruction method based on binocular vision
CN115457022A (en) Three-dimensional deformation detection method based on real-scene three-dimensional model front-view image
CN115546264A (en) Satellite-borne InSAR image fine registration and stereo measurement method
CN115601423A (en) Edge enhancement-based round hole pose measurement method in binocular vision scene
CN110033493B (en) Camera 3D calibration method and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant