CN113781305A - Point cloud fusion method of double-monocular three-dimensional imaging system

Point cloud fusion method of double-monocular three-dimensional imaging system

Info

Publication number
CN113781305A
CN113781305A (application number CN202111058784.XA)
Authority
CN
China
Prior art keywords
point cloud
monocular
point
rzl
monocular system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111058784.XA
Other languages
Chinese (zh)
Inventor
陈贵
王盛杰
王芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Hanzhen Intelligent Technology Co ltd
Original Assignee
Zhejiang Hanzhen Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Hanzhen Intelligent Technology Co ltd filed Critical Zhejiang Hanzhen Intelligent Technology Co ltd
Priority to CN202111058784.XA
Publication of CN113781305A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a point cloud fusion method for a double-monocular three-dimensional imaging system comprising a structured light projection module and image acquisition modules distributed on its two sides. The point cloud fusion method for the left and right monocular systems comprises a calibration stage and a reconstruction stage: in the calibration stage, the coding information is used to determine the one-to-one correspondence between the two monocular point clouds; in the reconstruction stage, singular value decomposition (SVD) is applied block by block to determine the rotation-translation transformation, and after the point cloud stitching is completed through this transformation, the point clouds in the overlapping region are averaged to obtain a single set of point clouds with good completeness. The method stitches the left and right monocular point clouds from a single data acquisition, avoiding the tedious operation of multiple acquisitions; it needs no point cloud features or 2D features to determine the matching relation between the two sets of point clouds, saving a large amount of computation time and significantly improving the completeness of the three-dimensional imaging data.

Description

Point cloud fusion method of double-monocular three-dimensional imaging system
Technical Field
The invention belongs to the technical field of three-dimensional imaging, and particularly relates to a point cloud fusion method of a double-monocular three-dimensional imaging system.
Background
Three-dimensional imaging based on structured light has the advantages of high measurement precision, non-contact operation, and short imaging time, and is widely applied in industrial inspection, reverse engineering, virtual reality, dental diagnosis and treatment, cultural relic protection, and other fields. According to the number of image acquisition modules, three-dimensional imaging systems can generally be divided into monocular structured light systems and binocular structured light systems. A binocular structured light system determines the pixel matching relation between the two cameras from the structured light coding information and obtains a disparity map, but point cloud loss easily occurs under occlusion or near-field overexposure. In a double-monocular three-dimensional imaging system, by contrast, the left and right image acquisition modules each form a monocular system with the structured light projection module, and the union of the two sets of monocular point clouds can be obtained in a single shot, effectively improving point cloud completeness.
Aiming at the problem of point cloud loss, the traditional approach is to change the position of the same three-dimensional scanner, capture multiple sets of point clouds in several shots, and finally stitch the sets into one complete point cloud. Existing patented techniques combine a point cloud ICP (Iterative Closest Point) or marker-point stitching method with a texture-matching (SIFT) stitching method; this can effectively compensate for the weaknesses of any single stitching method, but increases algorithm complexity and computation time.
Shenzhen Lingyun Video Technology Co., Ltd. proposed a point cloud stitching method in invention patent application CN110120013A. The method combines 2D features and 3D point cloud features to determine matching point pairs, requiring no iterative closest-point computation and avoiding the complex point cloud registration process, but it is unsuitable for workpieces with many planar surfaces in industrial scenes, where distinctive 2D and 3D features are especially scarce.
Disclosure of Invention
In view of the above, the invention provides a point cloud fusion method for a double-monocular three-dimensional imaging system that requires no additional operations: with only one structured light projection and image acquisition pass, it accurately stitches the point clouds of the left and right monocular systems, significantly improving the data completeness of three-dimensional imaging.
A point cloud fusion method of a double-monocular three-dimensional imaging system, wherein the imaging system consists of a structured light projection module and image acquisition modules distributed on its left and right sides; the left image acquisition module and the structured light projection module form a left monocular system, and the right image acquisition module and the structured light projection module form a right monocular system. The method fuses the point clouds of the left and right monocular systems and specifically comprises the following steps:
(1) design a structured light projection template for calibration, consisting of mutually perpendicular stripe patterns in the horizontal and vertical directions;
(2) project the structured light projection template onto a whiteboard with the structured light projection module, synchronously acquire template images with the image acquisition modules on both sides, and use the uniqueness of the codes to determine the one-to-one correspondence between the pixels of the images acquired by the left and right monocular systems;
(3) according to the triangulation principle, reconstruct the three-dimensional point cloud data of the image acquired by the left monocular system (the left point cloud) and of the image acquired by the right monocular system (the right point cloud);
(4) using the one-to-one pixel correspondence between the images acquired by the left and right monocular systems, determine block by block the transformation matrix from the right monocular system coordinate system to the left monocular system coordinate system, and transform the right point cloud into the left monocular system coordinate system;
(5) merge the overlapping parts of the transformed right point cloud and the left point cloud into one point cloud by averaging.
Preferably, the fringe patterns in the two directions of the structured light projection template are generated by Gray code plus four-step phase-shift encoding.
Further, the one-to-one correspondence in step (2) between the pixels of the images collected by the left and right monocular systems is determined as follows: for any pixel in the acquired image, compute its phase value by decoding; for the phase value of any pixel in the image collected by the left monocular system, search the image collected by the right monocular system for the pixel with the same phase. This determines the one-to-one correspondence between the pixels of the two images.
Further, step (3) is specifically implemented as follows: first obtain the internal and external parameters of the left image acquisition module and of the structured light projection module through system calibration, and compute the projection matrices of the two from this information; then compute the three-dimensional point cloud data of any pixel in the image acquired by the left monocular system from the pixel's phase value and the projection matrices. The three-dimensional point cloud data of each pixel in the image acquired by the right monocular system is computed in the same way.
Further, step (4) is specifically implemented as follows:
4.1 divide the right point cloud into several blocks according to the size N×N, where each block consists of the three-dimensional point cloud data of all points in the block and N is a user-set positive integer;
4.2 for the i-th right point cloud block P_r^i, determine the matched left point cloud block P_l^j according to the one-to-one correspondence between the pixels of the images collected by the left and right monocular systems, where i and j are natural numbers in the range [0, N×N−1];
4.3 calculate the centroid \bar{p}_r^i of the right point cloud block and the centroid \bar{p}_l^j of the left point cloud block;
4.4 calculate the matrix

$$H=\sum_k (p_{r,k}^i-\bar{p}_r^i)(p_{l,k}^j-\bar{p}_l^j)^T$$

and perform singular value decomposition H = UΣV^T, where U and V are the unitary matrices obtained from the decomposition, Σ is the diagonal matrix of singular values, and ^T denotes the transpose;
4.5 calculate the rotation matrix from the right monocular system coordinate system to the left monocular system coordinate system, R = VU^T, and the translation matrix

$$T=\bar{p}_l^j-R\,\bar{p}_r^i$$

and then transform the right point cloud block P_r^i into the left monocular system coordinate system;
4.6 traversing all the right point cloud blocks in sequence according to the steps 4.2-4.5, and transforming the whole right point cloud into a left monocular system coordinate system.
Further, step (5) is specifically implemented as follows: for any point p_rzl = (X_rzl, Y_rzl, Z_rzl) in the right point cloud transformed to the left monocular system coordinate system, calculate its pixel coordinates (u_rzl, v_rzl) in the left monocular system coordinate system by the projective transformation

$$Z_{rzl}\begin{bmatrix}u_{rzl}\\ v_{rzl}\\ 1\end{bmatrix}=K_{left}\begin{bmatrix}X_{rzl}\\ Y_{rzl}\\ Z_{rzl}\end{bmatrix}$$

where K_left is the intrinsic matrix of the left image acquisition module and X_rzl, Y_rzl, Z_rzl are the components of the point's three-dimensional data along X, Y and Z;
then average the three-dimensional point cloud data of the left point cloud located at coordinates (u_rzl, v_rzl) with (X_rzl, Y_rzl, Z_rzl), and use the result as the new fused point cloud data for that point.
Based on the technical scheme, the invention has the following beneficial technical effects:
1. The invention stitches the point clouds of the left and right monocular systems from a single data acquisition, significantly improving data completeness compared with a binocular imaging system.
2. The method significantly improves the distortion resistance of the three-dimensional imaging system: even when lens distortion introduces large point cloud errors, the point cloud stitching precision remains high.
Drawings
Fig. 1 is a schematic structural diagram of a double-monocular three-dimensional imaging system.
FIG. 2 is a schematic flow chart of the point cloud fusion method of the double monocular system of the present invention.
Fig. 3 is a schematic diagram illustrating a principle of analyzing a phase value of a structured light projection module.
Fig. 4 is a schematic comparison of imaging results from a conventional binocular structured light imaging system and the double-monocular three-dimensional imaging system of the present invention.
Detailed Description
In order to more specifically describe the present invention, the following detailed description is provided for the technical solution of the present invention with reference to the accompanying drawings and the specific embodiments.
As shown in fig. 1, the dual-monocular system of the present invention is composed of two industrial cameras and one DLP projector, wherein the left camera and the DLP projector constitute a left monocular system, and the right camera and the DLP projector constitute a right monocular system. As shown in fig. 2, the point cloud fusion method for the left monocular system and the right monocular system of the three-dimensional imaging system includes the following steps:
(1) In the calibration stage, design the template map projected by the DLP projector. As shown in fig. 3, the template consists of horizontal and vertical stripe maps, and the stripes in each direction are encoded with Gray code plus a four-step phase-shift method.
The left and right cameras capture the deformed fringe images modulated by the scene, as shown in fig. 3. The corresponding decoding algorithm comprises the following steps:
1.1 binarize the black-and-white Gray-code images by thresholding, concatenate each pixel's 0/1 sequence in projection order, and determine the pixel's fringe-order values (k_u, k_v) in the horizontal and vertical directions from the correspondence between Gray code and binary code.
1.2 the horizontal and vertical wrapped phase values of each pixel are then obtained from the four-step phase-shift images according to

$$\phi(u,v)=\arctan\frac{I_4(u,v)-I_2(u,v)}{I_1(u,v)-I_3(u,v)}$$

where I_n is the n-th phase-shift image, modeled as I_n = A + B\cos(\phi+(n-1)\pi/2); applied to the horizontal and vertical fringe sequences this yields φ_u and φ_v.
1.3 the absolute phase at any point (u, v) can then be expressed as

$$\Phi(u,v)=\phi(u,v)+2\pi k(u,v)$$

where k(u,v) is the Gray-code fringe order (k_u or k_v for the corresponding direction).
By this method, the image sequences captured by the left and right cameras are decoded into phase values in the horizontal and vertical directions, namely (Φ_leftu, Φ_leftv) and (Φ_rightu, Φ_rightv).
1.4 by traversal, for each pixel (u_lm, v_lm) of the left camera, find the pixel coordinates (u_rn, v_rn) with equal phase in the right camera image; this determines the one-to-one correspondence between pixel coordinates in the left and right camera systems.
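To make the decoding concrete, below is a minimal NumPy sketch of steps 1.1–1.3 for one fringe direction. It is illustrative rather than the patent's own code: the function name, the binarization threshold, the MSB-first Gray-code order, and the fringe model I_n = A + B·cos(φ + (n−1)π/2) behind the arctangent formula are our assumptions.

```python
import numpy as np

def decode_absolute_phase(gray_imgs, shift_imgs, threshold=127):
    """Sketch of steps 1.1-1.3 for one fringe direction.

    gray_imgs  : Gray-code images in projection order (MSB first), (H, W) each
    shift_imgs : the four phase-shift images I1..I4, (H, W) each
    Returns the absolute phase map Phi = phi + 2*pi*k.
    """
    # 1.1: binarize the Gray-code images, then convert each pixel's
    # Gray-code word to the fringe order k (Gray -> binary is a running XOR).
    bits = [(g > threshold).astype(np.int64) for g in gray_imgs]
    binary = bits[0].copy()
    k = binary.copy()
    for b in bits[1:]:
        binary ^= b
        k = (k << 1) | binary
    # 1.2: wrapped phase from the four-step phase shift,
    # tan(phi) = (I4 - I2) / (I1 - I3), mapped into [0, 2*pi).
    I1, I2, I3, I4 = (im.astype(np.float64) for im in shift_imgs)
    phi = np.mod(np.arctan2(I4 - I2, I1 - I3), 2.0 * np.pi)
    # 1.3: unwrap with the Gray-code fringe order.
    return phi + 2.0 * np.pi * k
```

Running this once on the horizontal sequence and once on the vertical sequence of each camera yields (Φ_leftu, Φ_leftv) and (Φ_rightu, Φ_rightv); step 1.4 then reduces to a search for equal-phase pixel pairs.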
(2) In the reconstruction stage, the specific implementation process is as follows:
Firstly, compute the three-dimensional point cloud data p_l = (X_left, Y_left, Z_left) of each pixel coordinate of the left monocular system through the monocular structured-light reconstruction module; similarly, compute the three-dimensional point cloud data p_r = (X_right, Y_right, Z_right) of each pixel coordinate of the right monocular system. The specific steps are as follows:
2.1 obtain through system calibration the internal parameters K_left and external parameters R_left, T_left of the left camera, and the internal parameters K_proj and external parameters R_proj, T_proj of the DLP projector, then compute the projection matrices of the left camera and the DLP projector:

$$M_{left}=K_{left}\,[R_{left}\mid T_{left}],\qquad M_{proj}=K_{proj}\,[R_{proj}\mid T_{proj}]$$
2.2 for any left-camera pixel position (u_l, v_l) with absolute phase Φ_leftu, the three-dimensional point cloud data p_l = (X_left, Y_left, Z_left) of the point is computed by triangulation: writing M_left = (m_{st}), M_proj = (p_{st}) and letting u_p denote the projector coordinate recovered from Φ_leftu, the point is the solution of

$$\begin{bmatrix}u_l m_{31}-m_{11} & u_l m_{32}-m_{12} & u_l m_{33}-m_{13}\\ v_l m_{31}-m_{21} & v_l m_{32}-m_{22} & v_l m_{33}-m_{23}\\ u_p p_{31}-p_{11} & u_p p_{32}-p_{12} & u_p p_{33}-p_{13}\end{bmatrix}\begin{bmatrix}X_{left}\\ Y_{left}\\ Z_{left}\end{bmatrix}=\begin{bmatrix}m_{14}-u_l m_{34}\\ m_{24}-v_l m_{34}\\ p_{14}-u_p p_{34}\end{bmatrix}$$
2.3 similarly, the three-dimensional point cloud data p_r = (X_right, Y_right, Z_right) of each pixel coordinate of the right monocular system can be computed.
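As a concrete illustration of steps 2.1–2.2, the following sketch assembles the projection matrices and solves the triangulation as a small linear system. It assumes the absolute phase has already been converted to a projector coordinate u_p; all names and interfaces are ours, not the patent's.

```python
import numpy as np

def projection_matrix(K, R, T):
    """M = K [R | T]: 3x4 projection matrix of a camera or projector."""
    return K @ np.hstack([R, T.reshape(3, 1)])

def triangulate(M_cam, M_proj, u, v, u_p):
    """Solve for (X, Y, Z) from a camera pixel (u, v) and the projector
    coordinate u_p recovered from the absolute phase (step 2.2).
    Each projective equation s*[x, 1]^T = M*[X, 1]^T yields one linear row."""
    rows = np.stack([
        u   * M_cam[2]  - M_cam[0],   # camera column constraint
        v   * M_cam[2]  - M_cam[1],   # camera row constraint
        u_p * M_proj[2] - M_proj[0],  # projector phase-plane constraint
    ])
    A, b = rows[:, :3], -rows[:, 3]
    return np.linalg.solve(A, b)
```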
Then determine, block by block, the transformation from the right point cloud to the left point cloud coordinate system. The specific steps are as follows:
2.4 divide the right point cloud into several blocks according to the size N×N, where the i-th block of point cloud data can be expressed as P_r^i = {p_{r,k}^i};
2.5 according to the pixel matching relation between the left and right cameras obtained in the calibration stage, determine the left point cloud block matched with the i-th right point cloud block, expressed as P_l^j = {p_{l,k}^j};
2.6 calculate the centroid \bar{p}_r^i of the right point cloud block and the corresponding left point cloud centroid \bar{p}_l^j, and let

$$H=\sum_k (p_{r,k}^i-\bar{p}_r^i)(p_{l,k}^j-\bar{p}_l^j)^T$$

Performing singular value decomposition H = UΣV^T, the rotation matrix from the right point cloud to the left camera system is R = VU^T and the translation matrix is

$$T=\bar{p}_l^j-R\,\bar{p}_r^i$$

and the i-th right point cloud block is transformed to the left camera system through this matrix operation.
2.7 according to steps 2.5–2.6, transform all right point cloud blocks to the left camera system in sequence; the transformed point cloud is denoted p_r2l = (X_r2l, Y_r2l, Z_r2l).
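A minimal sketch of the per-block rigid estimation in steps 2.4–2.7 follows, assuming each block's matched points are stacked as (M, 3) arrays; the determinant check guarding against a reflection is a standard safeguard we add, not something stated in the patent.

```python
import numpy as np

def block_rigid_transform(P_r, P_l):
    """Estimate R, t such that P_l ≈ P_r @ R.T + t for one matched block
    (step 2.6, Kabsch-style SVD alignment)."""
    c_r, c_l = P_r.mean(axis=0), P_l.mean(axis=0)   # block centroids
    H = (P_r - c_r).T @ (P_l - c_l)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)                     # H = U S V^T
    R = Vt.T @ U.T                                  # R = V U^T
    if np.linalg.det(R) < 0:                        # reject reflections
        Vt[2] *= -1.0
        R = Vt.T @ U.T
    t = c_l - R @ c_r                               # T = c_l - R c_r
    return R, t

# Step 2.7, per block i with matched left block j:
#   R, t = block_rigid_transform(P_r_i, P_l_j)
#   P_r2l_i = P_r_i @ R.T + t
```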
Finally, fuse the point cloud transformed into the left coordinate system with the point cloud obtained by the left monocular system. The specific steps are:
2.8 for any point of the right point cloud p_r2l transformed to the left camera system, compute its pixel coordinates (u_r2l, v_r2l) in the left camera system by the projective transformation

$$Z_{r2l}\begin{bmatrix}u_{r2l}\\ v_{r2l}\\ 1\end{bmatrix}=K_{left}\begin{bmatrix}X_{r2l}\\ Y_{r2l}\\ Z_{r2l}\end{bmatrix}$$
2.9 average the left point cloud data (X_left, Y_left, Z_left) also located at (u_r2l, v_r2l) with (X_r2l, Y_r2l, Z_r2l) to obtain the new fused point cloud data.
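The fusion of steps 2.8–2.9 can be sketched as below. The (H, W, 3) pixel-indexed cloud layout, the NaN convention for missing points, and the nearest-pixel rounding are our assumptions for illustration.

```python
import numpy as np

def fuse_clouds(cloud_left, K_left, points_r2l):
    """Project each transformed right point into the left image (step 2.8)
    and average it with the left point at that pixel (step 2.9).

    cloud_left : (H, W, 3) left point cloud indexed by pixel, NaN = missing
    K_left     : 3x3 intrinsic matrix of the left camera
    points_r2l : (M, 3) right points already in the left camera frame
    """
    fused = cloud_left.copy()
    H, W, _ = cloud_left.shape
    for p in points_r2l:
        if p[2] <= 0:                      # behind the camera: skip
            continue
        uvw = K_left @ p                   # Z*[u, v, 1]^T = K_left*[X, Y, Z]^T
        u = int(round(uvw[0] / uvw[2]))
        v = int(round(uvw[1] / uvw[2]))
        if 0 <= u < W and 0 <= v < H:
            q = fused[v, u]
            fused[v, u] = p if np.isnan(q).any() else 0.5 * (q + p)
    return fused
```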
FIG. 4 compares the results of conventional binocular structured light imaging and the double-monocular fusion imaging method of the invention on the same hardware platform. In the conventional binocular structured light system, three-dimensional point cloud data exist only for the part visible to both left and right image acquisition systems, so large areas of data are missing (black regions); the fusion method of the invention obtains more complete three-dimensional data, greatly reducing the missing-data area.
The embodiments described above are presented to enable a person of ordinary skill in the art to make and use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without inventive effort. Therefore, the present invention is not limited to the above embodiments; improvements and modifications made by those skilled in the art based on this disclosure shall fall within the protection scope of the present invention.

Claims (6)

1. A point cloud fusion method of a double-monocular three-dimensional imaging system, wherein the imaging system consists of a structured light projection module and image acquisition modules distributed on its left and right sides; the left image acquisition module and the structured light projection module form a left monocular system, and the right image acquisition module and the structured light projection module form a right monocular system; the method fuses the point clouds of the left and right monocular systems and specifically comprises the following steps:
(1) design a structured light projection template for calibration, consisting of mutually perpendicular stripe patterns in the horizontal and vertical directions;
(2) project the structured light projection template onto a whiteboard with the structured light projection module, synchronously acquire template images with the image acquisition modules on both sides, and use the uniqueness of the codes to determine the one-to-one correspondence between the pixels of the images acquired by the left and right monocular systems;
(3) according to the triangulation principle, reconstruct the three-dimensional point cloud data of the image acquired by the left monocular system (the left point cloud) and of the image acquired by the right monocular system (the right point cloud);
(4) using the one-to-one pixel correspondence between the images acquired by the left and right monocular systems, determine block by block the transformation matrix from the right monocular system coordinate system to the left monocular system coordinate system, and transform the right point cloud into the left monocular system coordinate system;
(5) merge the overlapping parts of the transformed right point cloud and the left point cloud into one point cloud by averaging.
2. The point cloud fusion method of claim 1, wherein the fringe patterns in the two directions of the structured light projection template are generated by Gray code plus four-step phase-shift encoding.
3. The point cloud fusion method of claim 1, wherein the one-to-one correspondence in step (2) is determined as follows: for any pixel in the acquired image, compute its phase value by decoding; for the phase value of any pixel in the image collected by the left monocular system, search the image collected by the right monocular system for the pixel with the same phase, thereby determining the one-to-one correspondence between the pixels of the two images.
4. The point cloud fusion method of claim 1, wherein step (3) is implemented as follows: first obtain the internal and external parameters of the left image acquisition module and of the structured light projection module through system calibration, and compute the projection matrices of the two from this information; then compute the three-dimensional point cloud data of any pixel in the image acquired by the left monocular system from the pixel's phase value and the projection matrices; the three-dimensional point cloud data of each pixel in the image acquired by the right monocular system is computed in the same way.
5. The point cloud fusion method of claim 1, wherein step (4) is implemented as follows:
4.1 divide the right point cloud into several blocks according to the size N×N, where each block consists of the three-dimensional point cloud data of all points in the block and N is a user-set positive integer;
4.2 for the i-th right point cloud block P_r^i, determine the matched left point cloud block P_l^j according to the one-to-one correspondence between the pixels of the images collected by the left and right monocular systems, where i and j are natural numbers in the range [0, N×N−1];
4.3 calculate the centroid \bar{p}_r^i of the right point cloud block and the centroid \bar{p}_l^j of the left point cloud block;
4.4 calculate the matrix

$$H=\sum_k (p_{r,k}^i-\bar{p}_r^i)(p_{l,k}^j-\bar{p}_l^j)^T$$

and perform singular value decomposition H = UΣV^T, where U and V are the unitary matrices obtained from the decomposition, Σ is the diagonal matrix of singular values, and ^T denotes the transpose;
4.5 calculate the rotation matrix from the right monocular system coordinate system to the left monocular system coordinate system, R = VU^T, and the translation matrix

$$T=\bar{p}_l^j-R\,\bar{p}_r^i$$

and then transform the right point cloud block P_r^i into the left monocular system coordinate system;
4.6 traversing all the right point cloud blocks in sequence according to the steps 4.2-4.5, and transforming the whole right point cloud into a left monocular system coordinate system.
6. The point cloud fusion method of claim 1, wherein step (5) is implemented as follows: for any point p_rzl = (X_rzl, Y_rzl, Z_rzl) in the right point cloud transformed to the left monocular system coordinate system, calculate its pixel coordinates (u_rzl, v_rzl) in the left monocular system coordinate system by the projective transformation

$$Z_{rzl}\begin{bmatrix}u_{rzl}\\ v_{rzl}\\ 1\end{bmatrix}=K_{left}\begin{bmatrix}X_{rzl}\\ Y_{rzl}\\ Z_{rzl}\end{bmatrix}$$

where K_left is the intrinsic matrix of the left image acquisition module and X_rzl, Y_rzl, Z_rzl are the components of the point's three-dimensional data along X, Y and Z; then average the three-dimensional point cloud data of the left point cloud located at (u_rzl, v_rzl) with (X_rzl, Y_rzl, Z_rzl), and use the result as the new fused point cloud data for that point.
CN202111058784.XA 2021-09-08 2021-09-08 Point cloud fusion method of double-monocular three-dimensional imaging system Pending CN113781305A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111058784.XA CN113781305A (en) 2021-09-08 2021-09-08 Point cloud fusion method of double-monocular three-dimensional imaging system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111058784.XA CN113781305A (en) 2021-09-08 2021-09-08 Point cloud fusion method of double-monocular three-dimensional imaging system

Publications (1)

Publication Number Publication Date
CN113781305A 2021-12-10

Family

ID=78842238

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111058784.XA Pending CN113781305A (en) 2021-09-08 2021-09-08 Point cloud fusion method of double-monocular three-dimensional imaging system

Country Status (1)

Country Link
CN (1) CN113781305A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110288699A * 2019-06-26 2019-09-27 电子科技大学 Three-dimensional reconstruction method based on structured light
CN110852979A (en) * 2019-11-12 2020-02-28 广东省智能机器人研究院 Point cloud registration and fusion method based on phase information matching
CN111536904A (en) * 2020-05-27 2020-08-14 深圳市华汉伟业科技有限公司 Three-dimensional measurement method and system based on structural illumination and storage medium
CN113012277A * 2021-02-03 2021-06-22 中国地质大学(武汉) DLP (digital light processing) surface structured light multi-camera reconstruction method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
吴二星: "Research on a three-dimensional measurement system for mobile phone shells based on structured-light vision", China Excellent Master's Theses Full-text Database, Basic Sciences Series *
王玉奥: "Three-dimensional point cloud reconstruction based on fused improved feature matching and enhanced ICP", China Excellent Master's Theses Full-text Database, Information Science and Technology Series *

Similar Documents

Publication Publication Date Title
CN107170043B (en) A kind of three-dimensional rebuilding method
CN110288642B (en) Three-dimensional object rapid reconstruction method based on camera array
CN106600686B (en) Three-dimensional point cloud reconstruction method based on multiple uncalibrated images
CN108470370A (en) The method that three-dimensional laser scanner external camera joint obtains three-dimensional colour point clouds
CN109727290B (en) Zoom camera dynamic calibration method based on monocular vision triangulation distance measurement method
WO2012096747A1 (en) Forming range maps using periodic illumination patterns
CN111028295A (en) 3D imaging method based on coded structured light and dual purposes
CN113129430B (en) Underwater three-dimensional reconstruction method based on binocular structured light
CN110853151A (en) Three-dimensional point set recovery method based on video
CN109724537B (en) Binocular three-dimensional imaging method and system
CN110852979A (en) Point cloud registration and fusion method based on phase information matching
Mahdy et al. Projector calibration using passive stereo and triangulation
CN110738730A (en) Point cloud matching method and device, computer equipment and storage medium
CN113971691A (en) Underwater three-dimensional reconstruction method based on multi-view binocular structured light
Sui et al. Accurate 3D Reconstruction of Dynamic Objects by Spatial-Temporal Multiplexing and Motion-Induced Error Elimination
CN116579962A (en) Panoramic sensing method, device, equipment and medium based on fisheye camera
Jang et al. Egocentric scene reconstruction from an omnidirectional video
CN108645353B (en) Three-dimensional data acquisition system and method based on multi-frame random binary coding light field
CN111023994B (en) Grating three-dimensional scanning method and system based on multiple measurement
CN114998532B (en) Three-dimensional image visual transmission optimization method based on digital image reconstruction
CN114935316B (en) Standard depth image generation method based on optical tracking and monocular vision
JP7489253B2 (en) Depth map generating device and program thereof, and depth map generating system
CN113781305A (en) Point cloud fusion method of double-monocular three-dimensional imaging system
CN111899293B (en) Virtual and real shielding processing method in AR application
Chen et al. Bidirectional optical flow NeRF: high accuracy and high quality under fewer views

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211210