CN114926669A - Efficient speckle matching method based on deep learning - Google Patents

Efficient speckle matching method based on deep learning

Info

Publication number
CN114926669A
Authority
CN
China
Prior art keywords
tensor
feature
speckle
cost
deep learning
Prior art date
Legal status
Pending
Application number
CN202210535331.XA
Other languages
Chinese (zh)
Inventor
赵航
尹维
冯世杰
左超
陈钱
胡岩
季逸凡
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202210535331.XA priority Critical patent/CN114926669A/en
Publication of CN114926669A publication Critical patent/CN114926669A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an efficient speckle matching method based on deep learning. First, a projector projects a speckle pattern onto the measured object and a binocular stereo camera synchronously captures the speckle images; the captured images are undistorted and epipolar-rectified using a circular calibration board. The rectified images are fed into the network as input, pattern features are extracted by combining an attention mechanism with a spatial pyramid pooling module (SPPM), and a 4-dimensional cost volume is constructed together with a series of feature layers obtained from convolution layers. Cost aggregation is realized by a multi-scale feature fusion method, a disparity map is obtained by disparity regression, and the disparity map is converted into a depth map by the stereo-vision formula, finally realizing efficient and high-precision three-dimensional imaging.

Description

Efficient speckle matching method based on deep learning
Technical Field
The invention belongs to the technical field of three-dimensional imaging, and particularly relates to an efficient speckle matching method based on deep learning.
Background
At present, the basic principle of speckle projection and the related methods are mature. The key factor limiting speckle projection is a high-performance speckle stereo matching algorithm. However, because of the complex reflection characteristics of the measured surface and the viewing-angle difference between the two cameras, it is still difficult to guarantee the global uniqueness of every pixel in the whole measurement space by projecting only one speckle pattern, and mismatching in actual measurement leads to poor accuracy. In addition, the heavy computational overhead required for stereo matching poses a significant challenge to potential applications based on real-time three-dimensional imaging. Moreover, fast and high-precision three-dimensional imaging technology currently remains mostly at the industrial or even laboratory stage, and the difficulty of miniaturization and cost reduction makes it hard to popularize in consumer electronics. Therefore, how to realize fast and high-precision speckle stereo matching is gradually becoming the main development direction.
Disclosure of Invention
The invention aims to provide an efficient speckle matching method based on deep learning.
The technical solution for realizing the purpose of the invention is as follows: an efficient speckle matching method based on deep learning comprises the following steps:
Step 1: project onto the measured object with a projector, synchronously acquire speckle patterns with a binocular stereo camera, and perform distortion correction and epipolar rectification on the speckle patterns using a circular calibration board (see the rectification sketch after this list);
Step 2: input the speckle patterns into the feature extraction submodule of the network to obtain feature tensors; the feature extraction submodule comprises two parallel parts and a fusion part that concatenates the outputs of the two parallel parts, where the first parallel part is a spatial pyramid pooling module fused with an attention mechanism and the second part is a series of convolution layers;
Step 3: construct a 4-dimensional matching cost volume by combining the feature tensors and the candidate disparity range;
Step 4: feed the 4-dimensional matching cost volume into the cost aggregation module, realize cost aggregation by a multi-scale feature fusion method, and obtain a disparity map by disparity regression;
Step 5: convert the disparity map into a depth map with the stereo-vision formula, thereby realizing three-dimensional reconstruction.
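The following is a minimal sketch (not part of the patent) of how step 1 can be realized with OpenCV, assuming the stereo calibration (camera matrices K1/K2, distortion vectors d1/d2, inter-camera rotation R and translation T) has already been obtained with a circular calibration board; all names are illustrative.

import cv2

def rectify_pair(img_left, img_right, K1, d1, K2, d2, R, T):
    # Undistort and epipolar-rectify a speckle image pair so that
    # corresponding points lie on the same image row.
    h, w = img_left.shape[:2]
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, (w, h), R, T)
    map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, (w, h), cv2.CV_32FC1)
    map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, (w, h), cv2.CV_32FC1)
    rect_left = cv2.remap(img_left, map1x, map1y, cv2.INTER_LINEAR)
    rect_right = cv2.remap(img_right, map2x, map2y, cv2.INTER_LINEAR)
    return rect_left, rect_right, Q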
Preferably, the spatial pyramid pooling module integrated with the attention mechanism is used for extracting pattern features, and the specific process is as follows:
the input passes through 5 convolution layers with stride 2 to obtain a tensor of size H/32 × W/32;
the H/32 × W/32 tensor is upsampled by 4 interpolation operations to obtain a tensor of size H/2 × W/2;
the H/2 × W/2 tensor passes through 4 convolution layers with stride 2 and 3 interpolation upsamplings to obtain a tensor of size 160 × H/4 × W/4.
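A hedged PyTorch sketch of this first branch is shown below. The successive interpolation steps are collapsed into single resizes for brevity, and all channel counts except the final 160 (including the single-channel input) are assumptions for illustration.

import torch.nn as nn
import torch.nn.functional as F

def conv_bn_relu(cin, cout, stride):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class PyramidBranch(nn.Module):
    def __init__(self):
        super().__init__()
        # 5 stride-2 convolutions: H x W -> H/32 x W/32
        self.down1 = nn.Sequential(*[conv_bn_relu(1 if i == 0 else 32, 32, 2) for i in range(5)])
        # 4 further stride-2 convolutions applied after the first upsampling stage
        self.down2 = nn.Sequential(*[conv_bn_relu(32 if i == 0 else 160, 160, 2) for i in range(4)])

    def forward(self, x):
        h, w = x.shape[-2:]
        f = self.down1(x)  # H/32 x W/32
        f = F.interpolate(f, size=(h // 2, w // 2), mode='bilinear', align_corners=False)
        f = self.down2(f)  # down to H/32 x W/32 again
        f = F.interpolate(f, size=(h // 4, w // 4), mode='bilinear', align_corners=False)
        return f           # 160 x H/4 x W/4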
Preferably, as the speckle pattern passes through the 5 stride-2 convolution layers to obtain the H/32 × W/32 tensor, the feature tensor produced by each convolution layer is fed into an excitation function module; the excitation function produces weight information that is applied to the feature tensor output by that convolution layer, forming a new feature map that is fed into the next convolution layer. The weighted feature tensor output by the last convolution layer is then upsampled by 4 interpolation operations to obtain the H/2 × W/2 tensor.
Preferably, the excitation function is specifically:
α = σ(F_2D(I(s)))
C_o(s) = α × C_i(s)
where F_2D denotes a two-dimensional convolution operation, I(s) is the feature tensor obtained from the convolution-layer processing of the original image, σ is the sigmoid activation function, C_i(s) denotes the initial feature tensor before weighting, α denotes the weight information, and C_o(s) denotes the weighted output obtained after applying the weight information.
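A minimal PyTorch sketch of this excitation function, assuming a 1×1 convolution as the two-dimensional convolution F_2D (the kernel size is an assumption, not stated above):

import torch.nn as nn

class Excitation(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)  # F_2D
        self.sigmoid = nn.Sigmoid()                               # sigma

    def forward(self, feat):
        alpha = self.sigmoid(self.conv(feat))  # alpha = sigma(F_2D(I(s)))
        return alpha * feat                    # C_o(s) = alpha x C_i(s)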
Preferably, the processing procedure of the second part of the feature extraction submodule is as follows: the images collected by the binocular stereo camera are directly processed by two convolution layers to obtain tensors of size H/4 × W/4.
Preferably, the fusion part concatenates the two tensors of size 48 × H/4 × W/4 and the two tensors of size 160 × H/4 × W/4 along the feature channel to obtain a tensor of size 256 × H/4 × W/4; after two further convolution layers, a tensor of size 32 × H/2 × W/2 is obtained.
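A hedged sketch of the fusion part: branch outputs at H/4 × W/4 are concatenated along the feature channel and passed through two convolution layers. The intermediate channel width, the ReLU activations, and the interpolation used to reach H/2 × W/2 are assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionPart(nn.Module):
    def __init__(self, in_channels=256, out_channels=32):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 128, 3, padding=1)
        self.conv2 = nn.Conv2d(128, out_channels, 3, padding=1)

    def forward(self, branch_features, out_size):
        # branch_features: list of H/4 x W/4 tensors from the parallel branches
        fused = torch.cat(branch_features, dim=1)  # concatenate along the feature channel
        fused = F.relu(self.conv1(fused))
        fused = F.relu(self.conv2(fused))
        # bring the 32-channel result up to H/2 x W/2 as stated above
        return F.interpolate(fused, size=out_size, mode='bilinear', align_corners=False)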
Preferably, the specific formula for constructing the 4-dimensional matching cost volume by combining the feature tensors and the candidate disparity range in step 3 is:
Cost(1:32, D_i − D_min + 1, 1:H, 1:W − D_i) = Feature_left(1:32, 1:H, 1:W − D_i)
Cost(33:64, D_i − D_min + 1, 1:H, 1:W − D_i) = Feature_right(1:32, 1:H, D_i:W)
where Feature_left and Feature_right are the feature tensors of the two views, Cost represents the matching cost volume, [D_min, D_max] is the disparity range, D_i is a candidate disparity, and H × W is the size of the speckle pattern.
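A sketch of the cost-volume construction following the indexing above, for a single image pair with 32-channel feature tensors (a batch dimension could be added in front); non-negative candidate disparities are assumed.

import torch

def build_cost_volume(feat_left, feat_right, d_min, d_max):
    # feat_left, feat_right: tensors of shape (32, H, W)
    assert d_min >= 0
    c, h, w = feat_left.shape
    n_disp = d_max - d_min + 1
    cost = feat_left.new_zeros((2 * c, n_disp, h, w))
    for i, d in enumerate(range(d_min, d_max + 1)):
        if d > 0:
            cost[:c, i, :, :w - d] = feat_left[:, :, :w - d]   # Feature_left(1:32, 1:H, 1:W-D_i)
            cost[c:, i, :, :w - d] = feat_right[:, :, d:]      # Feature_right(1:32, 1:H, D_i:W)
        else:
            cost[:c, i] = feat_left
            cost[c:, i] = feat_right
    return cost  # shape (64, D_max - D_min + 1, H, W)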
Preferably, the normalized probability of each candidate disparity D_i in the four-dimensional cost volume is obtained by a softmax operation, and the candidate disparities are weighted by these probabilities and summed to obtain the predicted disparity map, as shown in the following formula:
Disparity = Σ_{D_i = D_min}^{D_max} D_i × Softmax(Cost(D_i))
where [D_min, D_max] is the disparity range, Softmax(·) denotes the softmax operation taken over the candidate-disparity dimension, Disparity denotes the initial disparity map obtained by disparity regression, and Cost is the 4-dimensional matching cost volume after cost filtering.
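A minimal sketch of this disparity regression (soft argmax). Whether the cost volume is negated before the softmax depends on whether the aggregated scores are similarities or costs; that choice is an assumption left to the caller here.

import torch
import torch.nn.functional as F

def disparity_regression(cost, d_min, d_max):
    # cost: tensor of shape (D, H, W), one filtered score per candidate disparity
    prob = F.softmax(cost, dim=0)  # normalized probability of each candidate disparity
    candidates = torch.arange(d_min, d_max + 1, dtype=cost.dtype).view(-1, 1, 1)
    return (prob * candidates).sum(dim=0)  # weighted sum -> predicted disparity map (H, W)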
Preferably, the stereo-vision formula is:
Z = B × f / d
where B is the baseline of the imaging system, i.e. the horizontal distance between the physical optical centers of the left and right cameras, f is the focal length of the two cameras, d is the horizontal disparity between the corresponding image points of an object point, and Z is the depth information obtained by the stereo-vision method.
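A small sketch applying the formula Z = B × f / d to a whole disparity map; masking of near-zero disparities is an added safeguard, not part of the formula.

import torch

def disparity_to_depth(disparity, baseline, focal_px, eps=1e-6):
    # disparity: tensor of horizontal disparities in pixels
    depth = baseline * focal_px / disparity.clamp(min=eps)
    depth[disparity <= eps] = 0.0  # mark pixels with no valid disparity
    return depth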
Compared with the prior art, the invention has the following remarkable advantages: the proposed speckle matching network obtains a disparity map with higher precision and predicts it at a higher speed.
The present invention is described in further detail below with reference to the attached drawing figures.
Drawings
Fig. 1 is a schematic flow chart of an efficient speckle matching method based on deep learning.
Fig. 2 is a basic schematic diagram of the speckle stereo matching method based on deep learning according to the present invention.
Detailed Description
The conception of the invention is as follows: in an efficient speckle matching method based on deep learning, a projector projects a speckle pattern and a binocular stereo camera synchronously captures the speckle images; the captured images are undistorted and epipolar-rectified using a circular calibration board and serve as the network input. Pattern features are extracted by combining an attention mechanism with a spatial pyramid pooling module (SPPM) and, together with a series of feature layers obtained from convolution layers, are used to construct a 4-dimensional cost volume. Cost aggregation is realized by a multi-scale feature fusion method, a disparity map is obtained by disparity regression, and the disparity map is converted into a depth map by the stereo-vision formula, finally realizing efficient and high-precision single-frame three-dimensional imaging.
As an embodiment, an efficient speckle matching method based on deep learning includes the following specific steps:
Step 1: project onto the measured object with a projector, synchronously acquire speckle patterns with a binocular stereo camera, and perform distortion correction and epipolar rectification on the speckle patterns using a circular calibration board;
Step 2: input the rectified speckle patterns into the feature extraction submodule of the stereo matching network; the feature extraction submodule comprises two parallel parts and a fusion part that concatenates their outputs: one part extracts pattern features through an attention mechanism and a spatial pyramid pooling module (SPPM), and the other part directly extracts features from the original image through a series of convolution layers;
In the first part, the input passes through 5 convolution layers with stride 2 to obtain a tensor of size H/32 × W/32; this is then upsampled by 4 interpolation operations to obtain a tensor of size H/2 × W/2.
The feature tensor produced by each convolution layer is fed into an excitation function module; the excitation function produces weight information that is applied to the feature tensor output by that convolution layer, forming a new feature map that is fed into the next convolution layer. The weighted feature tensor output by the last convolution layer is upsampled by 4 interpolation operations to obtain the H/2 × W/2 tensor;
The H/2 × W/2 tensor then passes through 4 convolution layers with stride 2 and 3 interpolation upsamplings, finally yielding a tensor of size 160 × H/4 × W/4;
in a further embodiment, the excitation function is specifically:
α = σ(F_2D(I(s)))
C_o(s) = α × C_i(s)
where F_2D denotes a two-dimensional convolution operation, I(s) is the feature tensor obtained from the convolution-layer processing of the original image, σ is the sigmoid activation function, C_i(s) denotes the initial feature tensor before weighting, α denotes the weight information, and C_o(s) denotes the weighted output obtained after applying the weight information.
In the second part, the images collected by the binocular stereo camera are directly processed by two convolution layers to obtain tensors of size H/4 × W/4;
The fusion part concatenates the two tensors of size 48 × H/4 × W/4 and the two tensors of size 160 × H/4 × W/4 along the feature channel to obtain a tensor of size 256 × H/4 × W/4; after two further convolution layers, a tensor of size 32 × H/2 × W/2 is obtained.
Step 3: construct a 4-dimensional matching cost volume by combining the feature tensors and the candidate disparity range, specifically:
Cost(1:32, D_i − D_min + 1, 1:H, 1:W − D_i) = Feature_left(1:32, 1:H, 1:W − D_i)
Cost(33:64, D_i − D_min + 1, 1:H, 1:W − D_i) = Feature_right(1:32, 1:H, D_i:W)
where Feature_left and Feature_right are the feature tensors of the two views, Cost represents the matching cost volume, [D_min, D_max] is the disparity range, and D_i is a candidate disparity.
Step 4: feed the 4-dimensional matching cost volume into the cost aggregation module, realize cost aggregation by a multi-scale feature fusion method, and obtain a disparity map by disparity regression; the specific process is as follows:
The normalized probability of each candidate disparity D_i in the four-dimensional cost volume is obtained by a softmax operation, and the candidate disparities are weighted by these probabilities and summed to obtain the predicted disparity map, as shown in the following formula:
Disparity = Σ_{D_i = D_min}^{D_max} D_i × Softmax(Cost(D_i))
where [D_min, D_max] is the disparity range, Softmax(·) denotes the softmax operation, Disparity denotes the initial disparity map obtained by disparity regression, Cost is the 4-dimensional matching cost volume after cost filtering, and the softmax operation is defined as follows:
Softmax(x_i) = exp(x_i) / Σ_j exp(x_j)
An initial disparity map at the original resolution is then obtained by bilinear interpolation.
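A minimal sketch of this upsampling step. Scaling the disparity values by the resolution ratio is standard practice when enlarging a disparity map (disparities are measured in pixels), though the text above does not state it explicitly.

import torch.nn.functional as F

def upsample_disparity(disp_low, full_h, full_w):
    # disp_low: (1, 1, h, w) low-resolution disparity map
    scale = full_w / disp_low.shape[-1]
    disp_full = F.interpolate(disp_low, size=(full_h, full_w),
                              mode='bilinear', align_corners=False)
    return disp_full * scale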
Step 5: convert the disparity map into a depth map according to the stereo-vision formula:
Z = B × f / d
where B is the baseline of the imaging system, i.e. the horizontal distance between the physical optical centers of the left and right cameras, f is the focal length of the two cameras, d is the horizontal disparity between the corresponding image points of an object point, and Z is the depth information obtained by the stereo-vision method.
The stereo matching network provided by the invention comprises the following parts:
a feature extraction network that extracts pattern features based on an attention mechanism and a spatial pyramid pooling module (SPPM);
construction of a 4-dimensional matching cost volume;
cost aggregation realized by processing the cost volume with three-dimensional convolution layers (see the sketch after this list);
disparity regression on the cost-aggregated 4-dimensional matching cost volume to obtain a disparity map;
conversion of the disparity map into a depth map according to the stereo-vision formula.
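As a hedged sketch of the cost-aggregation part listed above, a small stack of 3D convolutions filters the 4-dimensional matching cost volume and collapses it to one score per candidate disparity; the layer count and channel widths are assumptions, and the patent's multi-scale feature fusion is not reproduced here.

import torch.nn as nn

class CostAggregation(nn.Module):
    def __init__(self, in_ch=64, mid_ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, mid_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(mid_ch, mid_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(mid_ch, 1, 3, padding=1),  # collapse to one score per disparity
        )

    def forward(self, cost):
        # cost: (N, 64, D, H, W) -> filtered scores (N, D, H, W)
        return self.body(cost).squeeze(1)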
The embodiment is as follows:
In order to verify the effectiveness of the invention, a binocular stereo vision system was built. The two cameras used in the binocular system of this embodiment are Basler industrial cameras (Basler acA640-750um), and the projector is a digital projector (DLP4500 Pro). Following step 1, the projector projects the speckle pattern and the binocular camera synchronously acquires the speckle images; distortion correction and epipolar rectification are performed with a circular calibration board, and the rectified images serve as the network input. Fig. 2 is the basic schematic diagram of the deep-learning-based speckle stereo matching algorithm of the invention. Finally, steps 2 to 5 are applied to realize efficient and high-precision three-dimensional imaging.

Claims (9)

1. An efficient speckle matching method based on deep learning is characterized by comprising the following steps:
Step 1: project onto the measured object with a projector, synchronously acquire speckle patterns with a binocular stereo camera, and perform distortion correction and epipolar rectification on the speckle patterns using a circular calibration board;
Step 2: input the speckle patterns into the feature extraction submodule of a network to obtain feature tensors, wherein the feature extraction submodule comprises two parallel parts and a fusion part that concatenates the outputs of the two parallel parts, the first parallel part being a spatial pyramid pooling module fused with an attention mechanism and the second being a series of convolution layers;
Step 3: construct a 4-dimensional matching cost volume by combining the feature tensors and the candidate disparity range;
Step 4: feed the 4-dimensional matching cost volume into a cost aggregation module, realize cost aggregation by a multi-scale feature fusion method, and obtain a disparity map by disparity regression;
Step 5: convert the disparity map into a depth map with the stereo-vision formula, thereby realizing three-dimensional reconstruction.
2. The efficient speckle matching method based on deep learning of claim 1, wherein the spatial pyramid pooling module integrated with the attention mechanism is used for extracting pattern features, the specific process being:
the input passes through 5 convolution layers with stride 2 to obtain a tensor of size H/32 × W/32;
the H/32 × W/32 tensor is upsampled by 4 interpolation operations to obtain a tensor of size H/2 × W/2;
the H/2 × W/2 tensor passes through 4 convolution layers with stride 2 and 3 interpolation upsamplings to obtain a tensor of size 160 × H/4 × W/4.
3. The efficient speckle matching method based on deep learning of claim 2, wherein, as the speckle pattern passes through the 5 stride-2 convolution layers to obtain the H/32 × W/32 tensor, the feature tensor produced by each convolution layer is fed into an excitation function module; the excitation function produces weight information that is applied to the feature tensor output by that convolution layer, forming a new feature map that is fed into the next convolution layer, and the weighted feature tensor output by the last convolution layer is upsampled by 4 interpolation operations to obtain the H/2 × W/2 tensor.
4. The efficient speckle matching method based on deep learning of claim 1, wherein the excitation function is specifically:
α = σ(F_2D(I(s)))
C_o(s) = α × C_i(s)
where F_2D denotes a two-dimensional convolution operation, I(s) is the feature tensor obtained from the convolution-layer processing of the original image, σ is the sigmoid activation function, C_i(s) denotes the initial feature tensor before weighting, α denotes the weight information, and C_o(s) denotes the weighted output obtained after applying the weight information.
5. The efficient speckle matching method based on deep learning of claim 1, wherein the processing procedure of the second part of the feature extraction submodule is as follows: the images collected by the binocular stereo camera are directly processed by two convolution layers to obtain tensors of size H/4 × W/4.
6. The efficient speckle matching method based on deep learning of claim 1, wherein the fusion part concatenates the two tensors of size 48 × H/4 × W/4 and the two tensors of size 160 × H/4 × W/4 along the feature channel to obtain a tensor of size 256 × H/4 × W/4; after two further convolution layers, a tensor of size 32 × H/2 × W/2 is obtained.
7. The efficient speckle matching method based on deep learning of claim 1, wherein the specific formula for constructing the 4-dimensional matching cost volume by combining the feature tensors and the candidate disparity range in step 3 is:
Cost(1:32, D_i − D_min + 1, 1:H, 1:W − D_i) = Feature_left(1:32, 1:H, 1:W − D_i)
Cost(33:64, D_i − D_min + 1, 1:H, 1:W − D_i) = Feature_right(1:32, 1:H, D_i:W)
where Feature_left and Feature_right are the feature tensors of the two views, Cost represents the matching cost volume, [D_min, D_max] is the disparity range, D_i is a candidate disparity, and H × W is the size of the speckle pattern.
8. The efficient speckle matching method based on deep learning of claim 1, wherein the normalized probability of each candidate disparity D_i in the four-dimensional cost volume is obtained by a softmax operation, and the candidate disparities are weighted by these probabilities and summed to obtain the predicted disparity map, as shown in the following formula:
Disparity = Σ_{D_i = D_min}^{D_max} D_i × Softmax(Cost(D_i))
where [D_min, D_max] is the disparity range, Softmax(·) denotes the softmax operation, Disparity denotes the initial disparity map obtained by disparity regression, and Cost is the 4-dimensional matching cost volume after cost filtering.
9. The efficient speckle matching method based on deep learning of claim 1, wherein the stereo-vision formula is:
Z = B × f / d
where B is the baseline of the imaging system, i.e. the horizontal distance between the physical optical centers of the left and right cameras, f is the focal length of the two cameras, d is the horizontal disparity between the corresponding image points of an object point, and Z is the depth information obtained by the stereo-vision method.
CN202210535331.XA 2022-05-17 2022-05-17 Efficient speckle matching method based on deep learning Pending CN114926669A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210535331.XA CN114926669A (en) 2022-05-17 2022-05-17 Efficient speckle matching method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210535331.XA CN114926669A (en) 2022-05-17 2022-05-17 Efficient speckle matching method based on deep learning

Publications (1)

Publication Number Publication Date
CN114926669A true CN114926669A (en) 2022-08-19

Family

ID=82809610

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210535331.XA Pending CN114926669A (en) 2022-05-17 2022-05-17 Efficient speckle matching method based on deep learning

Country Status (1)

Country Link
CN (1) CN114926669A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116128946A (en) * 2022-12-09 2023-05-16 东南大学 Binocular infrared depth estimation method based on edge guiding and attention mechanism
CN116128946B (en) * 2022-12-09 2024-02-09 东南大学 Binocular infrared depth estimation method based on edge guiding and attention mechanism
CN116188701A (en) * 2023-04-27 2023-05-30 四川大学 Three-dimensional face reconstruction method and device based on speckle structured light
CN116894798A (en) * 2023-09-11 2023-10-17 金华飞光科技有限公司 Projection deformity correction method and system of photo-curing 3D printer
CN116894798B (en) * 2023-09-11 2023-12-05 金华飞光科技有限公司 Projection deformity correction method and system of photo-curing 3D printer

Similar Documents

Publication Publication Date Title
Jeon et al. Depth from a light field image with learning-based matching costs
CN109685842B (en) Sparse depth densification method based on multi-scale network
CN114926669A (en) Efficient speckle matching method based on deep learning
AU2021103300A4 (en) Unsupervised Monocular Depth Estimation Method Based On Multi- Scale Unification
CN113763446B (en) Three-dimensional matching method based on guide information
CN112634379B (en) Three-dimensional positioning measurement method based on mixed vision field light field
CN110136048B (en) Image registration method and system, storage medium and terminal
CN110852979A (en) Point cloud registration and fusion method based on phase information matching
CN113762358A (en) Semi-supervised learning three-dimensional reconstruction method based on relative deep training
CN115035235A (en) Three-dimensional reconstruction method and device
Luo et al. Wavelet synthesis net for disparity estimation to synthesize dslr calibre bokeh effect on smartphones
CN112150518B (en) Attention mechanism-based image stereo matching method and binocular device
WO2024032233A1 (en) Stereophotogrammetric method based on binocular vision
CN111105451B (en) Driving scene binocular depth estimation method for overcoming occlusion effect
CN112270701B (en) Parallax prediction method, system and storage medium based on packet distance network
CN113887568B (en) Anisotropic convolution binocular image stereo matching method
CN112419386B (en) End-to-end speckle projection three-dimensional measurement method based on deep learning
CN114742875A (en) Binocular stereo matching method based on multi-scale feature extraction and self-adaptive aggregation
CN110910457B (en) Multispectral three-dimensional camera external parameter calculation method based on angular point characteristics
Zhou et al. Single-view view synthesis with self-rectified pseudo-stereo
CN108805937B (en) Single-camera polarization information prediction method
CN113808070B (en) Binocular digital speckle image related parallax measurement method
CN115601423A (en) Edge enhancement-based round hole pose measurement method in binocular vision scene
CN115330935A (en) Three-dimensional reconstruction method and system based on deep learning
CN114119704A (en) Light field image depth estimation method based on spatial pyramid pooling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination