CN112862687B - Binocular endoscopic image three-dimensional stitching method based on two-dimensional feature points - Google Patents

Binocular endoscopic image three-dimensional stitching method based on two-dimensional feature points

Info

Publication number
CN112862687B
CN112862687B (application CN202110204642.3A)
Authority
CN
China
Prior art keywords
point cloud
dimensional
binocular
point
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110204642.3A
Other languages
Chinese (zh)
Other versions
CN112862687A (en)
Inventor
王立强
周长江
杨青
袁波
余浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Zhejiang Lab
Original Assignee
Zhejiang University ZJU
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU, Zhejiang Lab filed Critical Zhejiang University ZJU
Priority to CN202110204642.3A priority Critical patent/CN112862687B/en
Publication of CN112862687A publication Critical patent/CN112862687A/en
Application granted granted Critical
Publication of CN112862687B publication Critical patent/CN112862687B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007 Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10021 Stereoscopic video; Stereoscopic image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a three-dimensional stitching method for binocular endoscopic images based on two-dimensional feature points, comprising the steps of point cloud generation, point cloud preprocessing, two-dimensional feature point matching, and point cloud registration. After the binocular endoscope is moved to obtain left and right image sequences at each viewing angle, binocular matching is performed with SGBM (semi-global block matching) to generate point clouds, which are then preprocessed by outlier rejection and downsampling. Two-dimensional feature matching is performed on adjacent left views with the SURF (Speeded Up Robust Features) algorithm, the offset between the two viewpoints is calculated and used as the translation vector of the initial transformation matrix, and the point clouds are registered and stitched with the ICP (Iterative Closest Point) algorithm. The method achieves three-dimensional reconstruction of endoscopic images whose texture features are not obvious, enlarges the field of view, and obtains dense point clouds from the binocular images with high reconstruction accuracy, so that it can better assist doctors during operation.

Description

Binocular endoscopic image three-dimensional stitching method based on two-dimensional feature points
Technical Field
The invention relates to the technical field of endoscopes, in particular to a binocular endoscopic image three-dimensional stitching method based on two-dimensional feature points.
Background
The endoscope helps doctors observe real scenes inside the gastrointestinal tract and is widely used in clinical diagnosis and treatment. The binocular endoscope can provide three-dimensional information to assist doctors with depth perception, improving the efficiency and safety of operations. To provide endoscopists with accurate three-dimensional information, researchers at home and abroad have tried to reconstruct three-dimensional endoscopic images and obtain the three-dimensional form of the target surface.
Common three-dimensional reconstruction methods such as SfS (shape from shading) can reconstruct only a single field of view; this limitation hampers the doctor's observation and easily causes errors. Although SLAM (simultaneous localization and mapping) and SfM (structure from motion) can expand the field of view, they perform only sparse reconstruction, and the resulting sparse point clouds are too imprecise to meet the requirements of detailed observation in three-dimensional reconstruction.
Point cloud registration is a commonly used method for three-dimensional reconstruction in stereoscopic vision. For example, random sample consensus (RANSAC) matches according to three-dimensional features extracted from the point cloud, but the surface of the gastrointestinal tract is normally smooth, and its texture features are insufficient for feature matching through RANSAC. ICP, currently the most widely used point cloud registration method, obtains the optimal transformation matrix by iterative search. Although it is easy to understand and its stitching results are good, it depends heavily on the initial matrix: a poor initialization not only makes it easy to fall into a locally optimal solution but also requires enormous computing resources.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a binocular endoscopic image three-dimensional stitching method based on two-dimensional feature points, which realizes dense three-dimensional reconstruction of the whole stomach organ.
A binocular endoscopic image three-dimensional stitching method based on two-dimensional feature points comprises: generating point clouds through binocular matching of the left and right views and preprocessing them; and calculating the two-dimensional feature point offset between adjacent sites of the left view sequence as the translation vector of the initial transformation matrix for point cloud registration, so as to realize three-dimensional stitching of endoscopic images. The method specifically comprises the following steps:
s1, obtaining left and right view sequences of an endoscope: namely, moving the lens of the binocular endoscope, and shooting left and right view sequences of a target organ;
s2, generating point cloud: namely, binocular matching is carried out on the left view and the right view by using an SGBM algorithm to generate a parallax image, iterative mean interpolation is carried out on the parallax image through a self-adaptive threshold, holes are removed, and corresponding point clouds are generated according to the parallax image;
s3, preprocessing point cloud: performing outlier rejection, point cloud segmentation and downsampling pretreatment operation on the initial point cloud;
s4, two-dimensional feature matching of adjacent left views: SURF two-dimensional feature matching is carried out on adjacent sites of the left view sequence, and a transformation matrix is generated using the average displacement of the matched points between the two adjacent sites as the offset;
s5, point cloud registration: ICP registration is carried out using the transformation matrix generated by two-dimensional feature matching as the initial transformation matrix of the point cloud sequence.
Preferably, the step S2 specifically includes:
binocular matching is carried out on the left view and the right view through an SGBM algorithm;
arranging the parallax values from small to large and taking the value at the first 10% as the threshold; points with parallax below this threshold are holes;
for each hole, calculating the mean of the surrounding 3×3 window as its new parallax value;
the above operations are iterated.
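As an illustration of this interpolation scheme, here is a minimal Python/NumPy sketch; it assumes the disparity map from SGBM is a 2D array, and the iteration count and function names are illustrative, not part of the invention:

```python
import numpy as np

def fill_disparity_holes(disp, hole_quantile=0.10, iterations=5):
    """Iterative 3x3 mean interpolation of disparity holes (sketch)."""
    disp = disp.astype(np.float32).copy()
    # adaptive threshold: the value at the first 10% of sorted disparities
    threshold = np.quantile(disp, hole_quantile)
    for _ in range(iterations):
        holes = disp < threshold
        if not holes.any():
            break
        padded = np.pad(disp, 1, mode="edge")
        # sum of the 9 shifted views = sum over each pixel's 3x3 neighbourhood
        window_sum = sum(padded[dy:dy + disp.shape[0], dx:dx + disp.shape[1]]
                         for dy in range(3) for dx in range(3))
        disp[holes] = window_sum[holes] / 9.0
    return disp
```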
Preferably, the step S3 specifically includes:
adopting a radius-based outlier removal method to delete points in the initial point cloud that have almost no neighboring points within a sphere of fixed radius;
keeping, through point cloud segmentation, the point cloud region whose Z values lie within a fixed range, thereby eliminating large cavities caused by illumination and similar effects;
and carrying out down-sampling operation on the point cloud with the holes removed through voxel down-sampling.
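A hedged Open3D sketch of these three preprocessing operations follows; the radius, neighbor count, Z range, and voxel size are assumed values, not taken from the patent:

```python
import numpy as np
import open3d as o3d

def preprocess_point_cloud(pcd, radius=10.0, nb_points=5,
                           z_min=0.0, z_max=500.0, voxel_size=5.0):
    # radius-type outlier removal: drop points with almost no neighbours
    # inside a sphere of fixed radius
    pcd, _ = pcd.remove_radius_outlier(nb_points=nb_points, radius=radius)
    # point cloud segmentation: keep only points whose Z value is in range
    pts = np.asarray(pcd.points)
    keep = np.where((pts[:, 2] >= z_min) & (pts[:, 2] <= z_max))[0]
    pcd = pcd.select_by_index(keep.tolist())
    # voxel downsampling of the cleaned cloud
    return pcd.voxel_down_sample(voxel_size=voxel_size)
```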
Preferably, the step S4 specifically includes:
performing two-dimensional feature matching on adjacent sites of the left view sequence through SURF;
calculating the average offset of all matched feature point pairs in the X and Y directions;
assigning these values to tx and ty of the translation vector t = (tx, ty, tz)^T in the transformation matrix, which represent the X- and Y-direction translations, so as to generate the initial transformation matrix;
setting the threshold to 15 times the voxel size and performing coarse ICP registration of adjacent point clouds with the initial matrix;
and setting the threshold to 1.5 times the voxel size and performing fine ICP registration of adjacent point clouds, using the transformation matrix obtained from coarse registration as the initial transformation matrix.
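To illustrate the coarse-to-fine thresholds just described, here is a minimal Open3D sketch; the choice of point-to-point estimation for the coarse stage is an assumption, and the point-to-plane fine stage (mentioned later in the description) requires normals on the clouds:

```python
import open3d as o3d

def two_stage_icp(source, target, init_transform, voxel_size=5.0):
    # coarse registration: threshold = 15x voxel size
    coarse = o3d.pipelines.registration.registration_icp(
        source, target, 15 * voxel_size, init_transform,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    # fine registration: threshold = 1.5x voxel size, initialized with the
    # coarse result; point-to-plane needs normals estimated beforehand
    fine = o3d.pipelines.registration.registration_icp(
        source, target, 1.5 * voxel_size, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation
```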
The invention performs three-dimensional stitching on weakly textured endoscopic images of the stomach, realizing reconstruction of the whole stomach organ and enlarging the field of view; the dense point cloud generated by the binocular endoscope is highly accurate and can assist doctors in observation and operation.
Drawings
FIG. 1 is a flow chart of a method for three-dimensional stitching of binocular endoscopic images based on two-dimensional feature points in an embodiment of the present invention;
FIG. 2 is a diagram of initial point cloud results according to an embodiment of the present invention;
FIG. 3 is a graph of the result of matching feature points of adjacent image sequences according to an embodiment of the present invention;
fig. 4 is a graph of a multi-point cloud stitching result according to an embodiment of the present invention.
Detailed Description
The objects and effects of the present invention will become more apparent from the following detailed description of the preferred embodiments and the accompanying drawings, it being understood that the specific embodiments described herein are merely illustrative of the invention and not limiting thereof.
The invention relates to a binocular endoscopic image three-dimensional stitching method based on two-dimensional feature points, which obtains left and right images through a binocular endoscope and realizes three-dimensional reconstruction of the main area of an organ through three-dimensional stitching, as shown in fig. 1. The method specifically comprises the following steps:
s1: endoscopic left and right view sequence acquisition
The binocular endoscope system is constructed and the binocular camera is calibrated.
In this embodiment, the Zhang Zhengyou checkerboard calibration method is used to calculate the intrinsic and extrinsic parameters of the binocular camera. A black-and-white checkerboard with a 9×6 pattern and a 12 mm square edge is used as the calibration board; 15 groups of binocular calibration images at different poses are captured within the working distance of the binocular endoscope, and the captured binocular images are calibrated using the OpenCV library.
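A hedged OpenCV sketch of this calibration procedure follows; the file names and the reading of "9×6" as the inner-corner grid are assumptions:

```python
import cv2
import numpy as np

pattern = (9, 6)   # checkerboard inner-corner grid (assumed interpretation)
square = 12.0      # checkerboard square edge, mm

objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, left_pts, right_pts = [], [], []
for i in range(15):  # 15 groups of calibration images, as in the embodiment
    gl = cv2.imread(f"calib_left_{i}.png", cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(f"calib_right_{i}.png", cv2.IMREAD_GRAYSCALE)
    ok_l, corners_l = cv2.findChessboardCorners(gl, pattern)
    ok_r, corners_r = cv2.findChessboardCorners(gr, pattern)
    if ok_l and ok_r:
        obj_pts.append(objp)
        left_pts.append(corners_l)
        right_pts.append(corners_r)

size = gl.shape[::-1]  # (width, height)
# per-camera intrinsics first, then stereo extrinsics with intrinsics fixed
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
_, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```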
The stomach model is placed within the working distance of the binocular endoscope. Starting from a position where the binocular endoscope is aligned with part of the stomach model, the endoscope is controlled through its handle to slowly move and scan the stomach model, capturing left and right image sequences at fixed movement intervals; the movement ends once the whole area of the stomach model has been scanned. The resolution of the images captured in this embodiment is 1920×800.
S2: point cloud generation
Binocular matching is carried out through SGBM, and point clouds of all the sites are obtained.
In this embodiment, the SGBM algorithm is used to perform binocular matching on all the left and right image pairs to obtain the disparity map corresponding to each left view. The disparity values are sorted from small to large, and the value at the first 10% is taken as the threshold; points with disparity below this threshold are holes. For each hole, the mean of the surrounding 3×3 window is calculated as its new disparity value, and the above operations are iterated.
The three-dimensional coordinate values of each point in the camera coordinate system are then calculated from the binocular camera parameters, and an initial point cloud is obtained from these three-dimensional coordinates using Open3D. The generated point cloud is shown in fig. 2.
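The following sketch illustrates this step, assuming rectified left/right images and the reprojection matrix Q from cv2.stereoRectify are available; the SGBM parameters are typical values, not the patent's, and the hole-filling helper is the earlier sketch:

```python
import cv2
import numpy as np
import open3d as o3d

sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5,
                             P1=8 * 3 * 5 ** 2, P2=32 * 3 * 5 ** 2)
disp = sgbm.compute(left, right).astype(np.float32) / 16.0  # SGBM returns 16x fixed point
disp = fill_disparity_holes(disp)  # iterative mean interpolation, sketched above

points = cv2.reprojectImageTo3D(disp, Q)          # camera-frame 3D coordinates
colors = cv2.cvtColor(left, cv2.COLOR_BGR2RGB) / 255.0
valid = disp > disp.min()                         # drop unmatched pixels

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points[valid].astype(np.float64))
pcd.colors = o3d.utility.Vector3dVector(colors[valid])
```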
S3: point cloud preprocessing
The initial point cloud is then cleaned of outliers and holes. First, a radius outlier removal method deletes points in the initial point cloud that have almost no neighboring points within a sphere of fixed radius; this process eliminates mismatched points. Then, point cloud segmentation keeps the region whose Z values lie within a fixed range, eliminating large cavities caused by illumination and similar effects and yielding an effective point cloud for stitching. Finally, a voxel downsampling operation is performed on the point cloud, with the voxel parameter set to 5.
S4: adjacent left view two-dimensional feature matching
Two-dimensional feature extraction is carried out on adjacent sites of the left view sequence using SURF; the specific flow is as follows:
First, the Hessian matrix is constructed, and all interest points for feature extraction are generated. The scale space is then constructed.
Next, feature point localization and descriptor generation are performed. Each pixel processed by the Hessian matrix is compared with its 26 neighbors in the two-dimensional image space and the scale space to preliminarily locate the key points; key points with weak responses and wrongly located key points are filtered out, and stable feature points are finally retained. The dominant orientation of each feature point is assigned from the statistics of Haar wavelet responses in a circular neighborhood of the feature point. A 4×4 grid of region blocks is taken around the dominant orientation of the feature point; each block contributes 4 feature values, giving a 64-dimensional feature descriptor, and feature point matching is finally performed. Two-dimensional feature matching is shown in fig. 3.
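An illustrative sketch of SURF matching between adjacent left views is given below. Note that SURF ships in opencv-contrib's xfeatures2d module and may require a build with non-free algorithms enabled; the Hessian threshold and the ratio-test value are assumptions:

```python
import cv2

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # 64-dim descriptors by default
kp_prev, des_prev = surf.detectAndCompute(left_prev, None)  # previous site's left view
kp_curr, des_curr = surf.detectAndCompute(left_curr, None)  # current site's left view

matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des_prev, des_curr, k=2)
# Lowe-style ratio test to keep stable matches only
good = [m for m, n in knn if m.distance < 0.7 * n.distance]
```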
S5: point cloud registration
The point cloud corresponding to the former site of the adjacent image sequence is taken as the source point cloud, the point cloud corresponding to the latter site as the target point cloud, and the initial transformation matrix is generated from the two-dimensional feature matching result. The three-dimensional transformation matrix is composed of the following functional blocks: a three-dimensional linear transformation part, a three-dimensional translation part, a perspective transformation part, and an overall scale factor. Because the two sites are adjacent, the transformation of the point cloud in the X and Y directions is far larger than the change in the Z direction and in the rotation matrix. The average offsets in the X and Y directions are therefore calculated from the position changes of all matched points between adjacent left views and assigned to tx and ty of the translation vector t = (tx, ty, tz)^T, which represent the X- and Y-direction translations, as the initial values of the corresponding positions of the three-dimensional transformation matrix.
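A minimal sketch of turning the mean 2D offset into the initial 4×4 matrix, assuming the SURF matches from the previous step and that pixel offsets have already been scaled into point cloud units (a simplifying assumption):

```python
import numpy as np

def initial_transform(kp_prev, kp_curr, good_matches):
    # average displacement of all matched points between the adjacent views
    shifts = np.array([np.subtract(kp_curr[m.trainIdx].pt, kp_prev[m.queryIdx].pt)
                       for m in good_matches])
    tx, ty = shifts.mean(axis=0)
    T = np.eye(4)              # identity rotation, unit scale, no perspective
    T[0, 3], T[1, 3] = tx, ty  # translation vector t = (tx, ty, 0)
    return T
```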
The ICP algorithm approximately treats the two closest points as the same point. For two given point clouds P and Q, let the initial positional relationship be [R0|t0]. Under this initial pose, each point pi of point cloud P and its nearest point pi′ in point cloud Q are selected as a matching pair, an error function is established, and the optimal R and t are obtained by minimizing this error function; this constitutes one iteration round. The obtained R and t are combined into a new positional relationship [R1|t1], the pose of the point cloud is updated, and the process is repeated until either the error function converges or the upper limit of iterations is reached.
The error for the ith point can be expressed as:
ei = pi - (Rpi′ + t) (1)
Thus, to apply the least squares method, the error function can be expressed as:
E(R, t) = (1/2) Σi ||pi - (Rpi′ + t)||^2 (2)
First, a larger threshold is set and coarse registration is performed with the matrix calculated from two-dimensional feature matching as the initial transformation matrix; then a smaller threshold is set and the transformation matrix obtained from coarse registration is used as the initial transformation matrix to achieve fine registration of the point clouds, which improves computational efficiency while preserving a good registration result. The point-to-point error between corresponding points used by traditional ICP when solving the pose is replaced here by a point-to-plane metric, which improves the registration. Finally, a pose graph is obtained, consisting of point cloud nodes and edges containing the transformation matrices for registration; the point clouds are transformed accordingly to realize three-dimensional stitching. The three-dimensional stitching results are shown in fig. 4.
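A hedged sketch of the overall stitching loop, chaining the pairwise two-stage ICP results (sketched earlier) into the first cloud's frame; simple transform chaining stands in for the pose graph mentioned above, and the normal-estimation parameters are assumptions:

```python
import numpy as np
import open3d as o3d

def stitch_sequence(clouds, init_transforms, voxel_size=5.0):
    for pcd in clouds:  # point-to-plane ICP needs normals
        pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(
            radius=4 * voxel_size, max_nn=30))
    merged = o3d.geometry.PointCloud(clouds[0])
    world = np.eye(4)  # maps the current source cloud into the first frame
    for src, tgt, init in zip(clouds[1:], clouds[:-1], init_transforms):
        pairwise = two_stage_icp(src, tgt, init, voxel_size)  # src -> tgt
        world = world @ pairwise                              # src -> frame 0
        merged += o3d.geometry.PointCloud(src).transform(world)
    return merged.voxel_down_sample(voxel_size)
```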

Claims (4)

1. A binocular endoscopic image three-dimensional stitching method based on two-dimensional feature points, characterized by comprising: generating point clouds through binocular matching of the left and right views and preprocessing them; and calculating the two-dimensional feature point offset between adjacent sites of the left view sequence as the translation vector of the initial transformation matrix for point cloud registration, so as to realize three-dimensional stitching of endoscopic images, the method specifically comprising the following steps:
s1, obtaining left and right view sequences of an endoscope: namely, moving the lens of the binocular endoscope, and shooting left and right view sequences of a target organ;
s2, generating point cloud: namely, binocular matching is carried out on the left view and the right view by using an SGBM algorithm to generate a parallax image, iterative mean interpolation is carried out on the parallax image through a self-adaptive threshold, holes are removed, and corresponding point clouds are generated according to the parallax image;
s3, preprocessing point cloud: performing outlier rejection, point cloud segmentation and downsampling pretreatment operation on the initial point cloud;
s4, two-dimensional feature matching of adjacent left views: SURF two-dimensional feature matching is carried out on adjacent sites of the left view sequence, and a transformation matrix is generated using the average displacement of the matched points between the two adjacent sites as the offset;
s5, point cloud registration: ICP registration is carried out using the transformation matrix generated by two-dimensional feature matching as the initial transformation matrix of the point cloud sequence.
2. The method for three-dimensional stitching of binocular endoscopic images based on two-dimensional feature points according to claim 1, wherein the step S2 specifically comprises:
binocular matching is carried out on the left view and the right view through an SGBM algorithm;
arranging the parallax values from small to large and taking the value at the first 10% as the threshold; points with parallax below this threshold are holes;
for each hole, calculating the mean of the surrounding 3×3 window as its new parallax value;
the above operations are iterated.
3. The method for three-dimensional stitching of binocular endoscopic images based on two-dimensional feature points according to claim 1, wherein the step S3 specifically comprises:
adopting a radius-based outlier removal method to delete points in the initial point cloud that have almost no neighboring points within a sphere of fixed radius;
keeping, through point cloud segmentation, the point cloud region whose Z values lie within a fixed range, thereby eliminating large cavities caused by illumination;
and carrying out down-sampling operation on the point cloud with the holes removed through voxel down-sampling.
4. The method for three-dimensional stitching of binocular endoscopic images based on two-dimensional feature points according to claim 1, wherein the step S4 specifically comprises:
performing two-dimensional feature matching on adjacent sites of the left view sequence through SURF;
calculating the average offset of all matched feature point pairs in the X and Y directions;
assigning these values to tx and ty of the translation vector t = (tx, ty, tz)^T in the transformation matrix, which represent the X- and Y-direction translations, so as to generate the initial transformation matrix;
setting the threshold to 15 times the voxel size and performing coarse ICP registration of adjacent point clouds with the initial matrix;
and setting the threshold to 1.5 times the voxel size and performing fine ICP registration of adjacent point clouds, using the transformation matrix obtained from coarse registration as the initial transformation matrix.
CN202110204642.3A 2021-02-24 2021-02-24 Binocular endoscopic image three-dimensional stitching method based on two-dimensional feature points Active CN112862687B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110204642.3A CN112862687B (en) 2021-02-24 2021-02-24 Binocular endoscopic image three-dimensional stitching method based on two-dimensional feature points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110204642.3A CN112862687B (en) 2021-02-24 2021-02-24 Binocular endoscopic image three-dimensional stitching method based on two-dimensional feature points

Publications (2)

Publication Number Publication Date
CN112862687A CN112862687A (en) 2021-05-28
CN112862687B true CN112862687B (en) 2023-10-31

Family

ID=75990589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110204642.3A Active CN112862687B (en) 2021-02-24 2021-02-24 Binocular endoscopic image three-dimensional stitching method based on two-dimensional feature points

Country Status (1)

Country Link
CN (1) CN112862687B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538540A (en) * 2021-08-24 2021-10-22 北京理工大学 Medical endoscope continuous frame image feature point matching method and device
CN115919461B (en) * 2022-12-12 2023-08-08 之江实验室 SLAM-based surgical navigation method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107767442A (en) * 2017-10-16 2018-03-06 浙江工业大学 A kind of foot type three-dimensional reconstruction and measuring method based on Kinect and binocular vision
WO2018095278A1 (en) * 2016-11-24 2018-05-31 腾讯科技(深圳)有限公司 Aircraft information acquisition method, apparatus and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018095278A1 (en) * 2016-11-24 2018-05-31 腾讯科技(深圳)有限公司 Aircraft information acquisition method, apparatus and device
CN107767442A (en) * 2017-10-16 2018-03-06 浙江工业大学 A kind of foot type three-dimensional reconstruction and measuring method based on Kinect and binocular vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Point cloud initial registration using camera pose estimation; Guo Qingda; Quan Yanming; Jiang Changcheng; Chen Jianwu; Optics and Precision Engineering (06); full text *

Also Published As

Publication number Publication date
CN112862687A (en) 2021-05-28

Similar Documents

Publication Publication Date Title
CN110288642B (en) Three-dimensional object rapid reconstruction method based on camera array
CN107977997B (en) Camera self-calibration method combined with laser radar three-dimensional point cloud data
JP5955406B2 (en) How to align data
CN109461180A (en) A kind of method for reconstructing three-dimensional scene based on deep learning
CN112862687B (en) Binocular endoscopic image three-dimensional stitching method based on two-dimensional feature points
CN113178009B (en) Indoor three-dimensional reconstruction method utilizing point cloud segmentation and grid repair
CN111028295A (en) 3D imaging method based on coded structured light and dual purposes
EP1889204A2 (en) A fast 2d-3d image registration method with application to continuously guided endoscopy
CN112967330B (en) Endoscopic image three-dimensional reconstruction method combining SfM and binocular matching
CN112991464B (en) Point cloud error compensation method and system based on three-dimensional reconstruction of stereoscopic vision
CN111612731B (en) Measuring method, device, system and medium based on binocular microscopic vision
CN112132876B (en) Initial pose estimation method in 2D-3D image registration
CN113160335A (en) Model point cloud and three-dimensional surface reconstruction method based on binocular vision
CN113450416B (en) TCSC method applied to three-dimensional calibration of three-dimensional camera
CN115222889A (en) 3D reconstruction method and device based on multi-view image and related equipment
CN112261399B (en) Capsule endoscope image three-dimensional reconstruction method, electronic device and readable storage medium
CN112802186B (en) Dynamic scene real-time three-dimensional reconstruction method based on binarization characteristic coding matching
CN117372647A (en) Rapid construction method and system of three-dimensional model for building
CN117197333A (en) Space target reconstruction and pose estimation method and system based on multi-view vision
Zhou et al. Synthesis of stereoscopic views from monocular endoscopic videos
CN114935316B (en) Standard depth image generation method based on optical tracking and monocular vision
CN112284293B (en) Method for measuring space non-cooperative target fine three-dimensional morphology
CN112381721A (en) Human face three-dimensional reconstruction method based on binocular vision
TW201509360A (en) Three-dimensional visualization system for single-lens endoscope and method thereof
CN112184887A (en) Human face three-dimensional reconstruction optimization method based on binocular vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant