CN107633532B - Point cloud fusion method and system based on white light scanner

Info

Publication number: CN107633532B
Application number: CN201710868725.6A
Authority: CN (China)
Prior art keywords: white light, point cloud, point, sample, cloud fusion
Legal status: Active (granted)
Other versions: CN107633532A (application publication)
Other languages: Chinese (zh)
Inventors: 郑顺义, 何源, 宋月婷
Current assignee: Wuhan Zhongguan Automation Technology Co., Ltd.
Priority/filing date: 2017-09-22
Publication of CN107633532A: 2018-01-26
Publication of CN107633532B (grant): 2020-10-23

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a point cloud fusion method and system based on a white light scanner. The method comprises the following steps: capturing a plurality of images of a sample from different viewpoints while the speckle texture light of the white light scanner is projected onto the sample; performing image matching on each pair of images captured at the same moment to obtain a plurality of depth maps representing three-dimensional points of the sample in the projection scene; placing the depth maps from the different positions into the corresponding positions of a pre-established TSDF model, so that each three-dimensional point of the depth maps falls into its corresponding voxel of the TSDF model; and performing weighted averaging on the distance from each three-dimensional point to the center point of its corresponding voxel to complete point cloud fusion of the real surface of the sample. The invention preserves the integrity and authenticity of the object to be measured, effectively improves point cloud fusion accuracy, enhances point cloud fusion stability, and brings the point cloud fusion result closer to the real surface of the sample.

Description

Point cloud fusion method and system based on white light scanner
Technical Field
The invention relates to the field of photogrammetry, in particular to a point cloud fusion method and system based on a white light scanner.
Background
Existing point cloud fusion techniques have poor stability: the fused point cloud is neither uniform nor fine, some regions are overly dense while others contain data-free "holes", and the result cannot accurately and clearly reflect the shape of the sample surface. In addition, conventional methods require marker points to be pasted onto the surface of the sample before its images are acquired, which damages the sample to some extent and makes such methods unsuitable for scanning and measuring protected objects such as crime scenes and historical relics.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a point cloud fusion method and system based on a white light scanner that preserve the integrity and authenticity of the object to be measured, effectively improve point cloud fusion accuracy, enhance point cloud fusion stability, and bring the point cloud fusion result closer to the real surface of the sample.
The technical solution to this problem is as follows: a point cloud fusion method based on a white light scanner comprises the following steps,
S1, capturing a plurality of images of the sample from different viewpoints while the speckle texture light of the white light scanner is projected onto the sample;
S2, performing image matching on each pair of images captured at the same moment to obtain a plurality of depth maps representing the three-dimensional points of the sample in the projection scene;
S3, placing the depth maps from the different positions into the corresponding positions of a pre-established TSDF model, so that each three-dimensional point of the depth maps falls into its corresponding voxel of the TSDF model;
and S4, performing weighted averaging on the distance from each three-dimensional point to the center point of its corresponding voxel to complete point cloud fusion of the real surface of the sample.
The invention has the following beneficial effects: the point cloud fusion method based on the white light scanner performs point cloud fusion using the speckle texture technique of the white light scanner together with a TSDF-model-based fusion method, which preserves the integrity and authenticity of the object to be measured, effectively improves point cloud fusion accuracy, enhances point cloud fusion stability, and brings the point cloud fusion result closer to the real surface of the sample.
On the basis of the above technical solution, the invention can be further improved as follows.
The method further comprises S5, which is specifically:
repeating steps S1-S4 in a loop until image capture is complete;
and searching for the zero-crossing values between voxels along the direction of the projection light of the white light scanner as the approximate positions of the sample surface points, thereby obtaining the point cloud fusion result.
Further, S1 specifically comprises:
S11, a projector in the white light scanner projects randomly generated speckle texture light onto the sample;
and S12, two grayscale cameras in the white light scanner, separated by a preset distance, simultaneously capture a plurality of images of the sample from different viewpoints in the projection scene of the projector.
Further, S2 specifically comprises:
S21, performing epipolar correction on each pair of images captured by the two grayscale cameras at the same moment, generating two epipolar images;
S22, performing image matching on each pair of corresponding epipolar images with a semi-global matching method to obtain the homonymous (corresponding) image points of each pair;
S23, calculating the disparity between each pair of corresponding images from the coordinate difference of their homonymous image points;
and S24, converting the disparity between each pair of images to obtain a plurality of depth maps.
Further, S4 specifically comprises:
S41, retrieving by index the spatial position coordinates of the center point of the voxel corresponding to each three-dimensional point;
S42, back-projecting the center point of the voxel corresponding to each three-dimensional point into the corresponding depth map, interpolating the distance from that voxel center to the corresponding grayscale camera, and computing the signed distance field of each three-dimensional point from the spatial coordinates of the voxel center and the position of the grayscale camera;
S43, normalizing each signed distance field;
and S44, performing weighted averaging on each normalized signed distance field to complete point cloud fusion of the real surface of the sample.
Based on the above point cloud fusion method, the invention also provides a point cloud fusion system based on a white light scanner.
A point cloud fusion system based on a white light scanner comprises the white light scanner, an image matching module, a TSDF model module and a weighted average processing module,
the white light scanner is used for capturing a plurality of images of the sample from different viewpoints while its speckle texture light is projected onto the sample;
the image matching module is used for performing image matching on each pair of images captured at the same moment to obtain a plurality of depth maps representing three-dimensional points of the sample in the projection scene;
the TSDF model module is used for placing the depth maps from the different positions into the corresponding positions of a pre-established TSDF model, so that each three-dimensional point of the depth maps falls into its corresponding voxel of the TSDF model;
and the weighted average processing module is used for performing weighted averaging on the distance from each three-dimensional point to the center point of its corresponding voxel to complete point cloud fusion of the real surface of the sample.
The invention has the following beneficial effects: the point cloud fusion system based on the white light scanner performs point cloud fusion using the speckle texture technique of the white light scanner together with the TSDF-model-based fusion principle, which preserves the integrity and authenticity of the object to be measured, effectively improves point cloud fusion accuracy, enhances point cloud fusion stability, and brings the point cloud fusion result closer to the real surface of the sample.
On the basis of the above technical solution, the invention can be further improved as follows.
Further, the system also comprises a zero-crossing search module, which is specifically used for,
after image capture is complete, searching for the zero-crossing values between voxels along the direction of the projection light of the white light scanner as the approximate positions of the sample surface points, thereby obtaining the point cloud fusion result.
Further, the white light scanner comprises a projector and two grayscale cameras distributed at a preset distance on either side of the projector, the white light scanner being specifically used for,
the projector projecting randomly generated speckle texture light onto the sample;
and the two grayscale cameras simultaneously capturing a plurality of images of the sample from different viewpoints in the projection scene of the projector.
Further, the image matching module is specifically used for,
performing epipolar correction on each pair of images captured by the two grayscale cameras at the same moment, generating two epipolar images;
performing image matching on each pair of corresponding epipolar images with a semi-global matching method to obtain the homonymous image points of each pair;
calculating the disparity between each pair of corresponding images from the coordinate difference of their homonymous image points;
and converting the disparity between each pair of images to obtain a plurality of depth maps.
Further, the weighted average processing module is specifically used for,
retrieving by index the spatial position coordinates of the center point of the voxel corresponding to each three-dimensional point;
back-projecting the center point of the voxel corresponding to each three-dimensional point into the corresponding depth map, interpolating the distance from that voxel center to the corresponding grayscale camera, and computing the signed distance field of each three-dimensional point from the spatial coordinates of the voxel center and the position of the grayscale camera;
normalizing each signed distance field;
and performing weighted averaging on each normalized signed distance field to complete point cloud fusion of the real surface of the sample.
Drawings
FIG. 1 is a flow chart of a point cloud fusion method based on a white light scanner according to the present invention;
FIG. 2 is a schematic structural diagram of a white light scanner in a point cloud fusion method based on the white light scanner according to the present invention;
FIG. 3 is a speckle texture pattern projected by a white light scanner in the point cloud fusion method based on the white light scanner according to the present invention;
FIG. 4 is a schematic structural diagram of a TSDF model in a point cloud fusion method based on a white light scanner according to the present invention;
FIG. 5 is a structural block diagram of a point cloud fusion system based on a white light scanner according to the present invention.
In the drawings, the components represented by the respective reference numerals are listed below:
1: projector; 2: color camera; 3: grayscale camera.
Detailed Description
The principles and features of the invention are described below in conjunction with the accompanying drawings, which are provided by way of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, a point cloud fusion method based on a white light scanner comprises the following steps,
S1, capturing a plurality of images of the sample from different viewpoints while the speckle texture light of the white light scanner is projected onto the sample;
S2, performing image matching on each pair of images captured at the same moment to obtain a plurality of depth maps representing the three-dimensional points of the sample in the projection scene;
S3, placing the depth maps from the different positions into the corresponding positions of a pre-established TSDF model, so that each three-dimensional point of the depth maps falls into its corresponding voxel of the TSDF model;
and S4, performing weighted averaging on the distance from each three-dimensional point to the center point of its corresponding voxel to complete point cloud fusion of the real surface of the sample.
In the point cloud fusion method based on the white light scanner of the invention, the white light scanner is a handheld, portable three-dimensional scanning device that can reconstruct a scanned scene or sample in three dimensions; it is the result of improvements to the technology and equipment structure of existing handheld 3D scanners. As shown in fig. 2, the white light scanner consists of two grayscale cameras 3, one projector 1, and one color camera 2 (the color camera 2 is not used in the present invention). First, the projector 1 projects a randomly generated speckle texture (as shown in fig. 3) onto the sample; then the two grayscale cameras 3, spaced a certain distance apart, simultaneously capture two images of the sample, the images are matched, and one group of point cloud data is generated. By holding and moving the white light scanner, point clouds of the scene are acquired from different viewing angles multiple times to complete point cloud fusion. Accordingly, S1 specifically comprises:
S11, a projector in the white light scanner projects randomly generated speckle texture light onto the sample;
and S12, two grayscale cameras in the white light scanner, separated by a preset distance, simultaneously capture a plurality of images of the sample from different viewpoints in the projection scene of the projector.
In the point cloud fusion method based on the white light scanner of the invention, S2 specifically comprises:
S21, performing epipolar correction on each pair of images captured by the two grayscale cameras at the same moment, generating two epipolar images;
S22, performing image matching on each pair of corresponding epipolar images with a semi-global matching method to obtain the homonymous image points of each pair. Semi-global matching (SGM) is a dense image matching method that searches for the global optimum through the construction of a disparity cost function and multi-path aggregation, yielding dense homonymous image points. Let a pair of homonymous image points p1, p2 have image coordinates p1(x1, y1) and p2(x2, y2);
S23, calculating the disparity between each pair of corresponding images from the coordinate difference of their homonymous image points; that is, the disparity dis between the two images is dis = x2 - x1;
S24, converting the disparity between each pair of images to obtain a plurality of depth maps. The disparity is converted into a depth map representing the three-dimensional points of the sample in the scene as follows: letting the spatial coordinates of a three-dimensional point be P(X, Y, Z),
Z = B · f / dis
X = (x1 - x0L) · Z / f
Y = (y1 - y0L) · Z / f
where B is the length of the baseline between the two grayscale cameras, f is the focal length of the grayscale cameras, (x0L, y0L) are the principal point coordinates of the left image, and (x0R, y0R) are the principal point coordinates of the right image (the left and right images being captured by the two grayscale cameras respectively).
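By way of illustration, the following is a minimal sketch of S21-S24 in Python, assuming epipolar-rectified inputs and using OpenCV's StereoSGBM as a stand-in for the patent's own semi-global matching implementation; the function name, matcher parameters, and calibration values are hypothetical, not taken from the patent.

```python
import cv2
import numpy as np

def depth_from_pair(left, right, B, f, x0L, y0L):
    """left, right: epipolar-rectified grayscale images (uint8 arrays).
    B: baseline length, f: focal length in pixels,
    (x0L, y0L): principal point of the left image."""
    # S22: dense matching; OpenCV's SGBM stands in for the SGM step,
    # with illustrative (hypothetical) parameters
    sgm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disp = sgm.compute(left, right).astype(np.float32) / 16.0  # S23: disparity map
    h, w = disp.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    valid = disp > 0
    # S24: disparity -> depth and 3-D coordinates via the relations above
    Z = np.where(valid, B * f / np.where(valid, disp, 1.0), 0.0)
    X = (xs - x0L) * Z / f
    Y = (ys - y0L) * Z / f
    return Z, np.dstack([X, Y, Z]), valid
```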
Fig. 4 is a schematic structural diagram of the TSDF model used in the point cloud fusion method based on the white light scanner: the entire space to be reconstructed is divided into grids of equal size, each grid being called a voxel. The depth maps from the different positions are placed into the corresponding positions of the pre-established TSDF model, and each three-dimensional point of the depth maps falls into its corresponding voxel of the TSDF model.
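A minimal sketch of such a TSDF volume follows; the grid dimensions, voxel size and volume origin are illustrative assumptions, since the patent does not specify them.

```python
import numpy as np

class TSDFVolume:
    """Space to be reconstructed, divided into equally sized voxels; each
    voxel stores a normalized signed distance and a fusion weight."""
    def __init__(self, dims=(256, 256, 256), voxel_size=0.002,
                 origin=(0.0, 0.0, 0.0)):
        self.dims = dims                          # voxels per axis (assumed)
        self.voxel_size = voxel_size              # voxel edge length (assumed)
        self.origin = np.asarray(origin, np.float32)
        self.tsdf = np.ones(dims, np.float32)     # normalized distances
        self.weight = np.zeros(dims, np.float32)  # per-voxel weights, w0 = 0

    def voxel_centers(self):
        """Spatial coordinates of every voxel center (retrievable by index, cf. S41)."""
        idx = np.indices(self.dims).reshape(3, -1).T
        return self.origin + (idx + 0.5) * self.voxel_size
```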
In the point cloud fusion method based on the white light scanner of the invention, S4 specifically comprises:
S41, retrieving by index the spatial position coordinates Pt(Xt, Yt, Zt) of the center point of the voxel corresponding to each three-dimensional point;
S42, back-projecting the center point of the voxel corresponding to each three-dimensional point into the corresponding depth map and interpolating the distance Di from that voxel center to the corresponding grayscale camera (the position O(Xc, Yc, Zc) of the corresponding grayscale camera being known), then computing the signed distance field of each three-dimensional point from the spatial coordinates of the voxel center and the position of the grayscale camera; specifically, the signed distance field sdfi is computed as
sdfi = ||Zc - Zt|| - Di
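The following sketch illustrates S41-S42 under simplifying assumptions: the voxel centers are taken to be already transformed into the camera frame of the current depth map, the pinhole intrinsics K are assumed known, and the depth map is sampled by nearest neighbor rather than the full interpolation described above; the last line implements the formula sdfi = ||Zc - Zt|| - Di.

```python
import numpy as np

def signed_distance_field(centers_cam, depth_map, K, Zc=0.0):
    """centers_cam: (N, 3) voxel-center coordinates in the camera frame;
    K: 3x3 pinhole intrinsics (assumed known); Zc: z-coordinate of the
    camera center, zero in its own frame, kept explicit to mirror the formula."""
    Zt = centers_cam[:, 2]
    with np.errstate(divide="ignore", invalid="ignore"):
        # back-project each voxel center into the depth map
        u = K[0, 0] * centers_cam[:, 0] / Zt + K[0, 2]
        v = K[1, 1] * centers_cam[:, 1] / Zt + K[1, 2]
    ui = np.round(u).astype(np.int64)
    vi = np.round(v).astype(np.int64)
    h, w = depth_map.shape
    ok = (Zt > 0) & (ui >= 0) & (ui < w) & (vi >= 0) & (vi < h)
    D = np.full(len(centers_cam), np.nan, np.float32)  # distance D_i to the camera
    D[ok] = depth_map[vi[ok], ui[ok]]
    return np.abs(Zc - Zt) - D  # sdf_i = ||Zc - Zt|| - D_i (NaN where unobserved)
```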
S43, normalizing each signed distance field; specifically, the normalization is performed according to
tsdfi = sdfi / max truncation, if sdfi >= 0
tsdfi = sdfi / |min truncation|, if sdfi < 0
where sdfi is the signed distance field, tsdfi is the normalized value of the signed distance field, max truncation is the upper truncation bound of the voxel, and min truncation is the lower truncation bound.
S44, performing weighted averaging on each normalized signed distance field to complete point cloud fusion of the real surface of the sample; specifically, the weighted average is computed according to
tsdfavg = (tsdfavg · wi-1 + tsdfi) / wi
wi = min(max weight, wi-1 + 1), w0 = 0
max weight = 128
where tsdfavg is the distance weighted average, wi is the weight of the current frame, wi-1 is the weight of the previous frame, and i is the index of the captured frame.
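Combining S43 and S44, a sketch of one per-frame fusion update follows; the truncation bounds are assumed values (the patent gives none), while max weight = 128 and the weight recurrence follow the formulas above.

```python
import numpy as np

def fuse_frame(tsdf_avg, w_prev, sdf, max_trunc=0.01, min_trunc=-0.01,
               max_weight=128):
    """tsdf_avg, w_prev: per-voxel running average and previous weight w_{i-1};
    sdf: per-voxel signed distances of the current frame (NaN = unobserved)."""
    # S43: normalize by the truncation bounds and clamp to [-1, 1]
    tsdf = np.where(sdf >= 0, sdf / max_trunc, -sdf / min_trunc)
    tsdf = np.clip(tsdf, -1.0, 1.0)
    # S44: w_i = min(max weight, w_{i-1} + 1), then the running weighted average
    w_i = np.minimum(max_weight, w_prev + 1)
    seen = ~np.isnan(sdf)
    avg = np.where(seen, (tsdf_avg * w_prev + tsdf) / w_i, tsdf_avg)
    w = np.where(seen, w_i, w_prev)
    return avg, w
```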
The point cloud fusion method based on the white light scanner further comprises S5, which is specifically: repeating steps S1-S4 in a loop until image capture is complete; and searching for the zero-crossing values between voxels along the direction of the projection light of the white light scanner as the approximate positions of the sample surface points, thereby obtaining the point cloud fusion result. That is: the sample is photographed many times with the white light scanner; after each capture, steps S1-S4 are executed, continuously updating the weighted average of all voxels; after capture is finished, a zero-crossing point is interpolated within the voxels along the direction of the projection light of the corresponding grayscale camera and taken as a surface point of the sample, so that an approximate sample surface is obtained from the TSDF model and point cloud fusion is completed.
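A sketch of this zero-crossing search follows, reusing the TSDFVolume sketch above; for simplicity it walks one grid axis rather than the actual projection ray of the grayscale camera, which is an assumption.

```python
import numpy as np

def extract_surface_points(volume):
    """volume: a TSDFVolume as sketched above. Returns approximate surface
    points where the averaged TSDF changes sign between neighboring voxels."""
    t = volume.tsdf
    flips = t[:, :, :-1] * t[:, :, 1:] < 0   # sign change along the z axis
    points = []
    for i, j, k in zip(*np.nonzero(flips)):
        a, b = t[i, j, k], t[i, j, k + 1]
        frac = a / (a - b)                    # linear interpolation of the zero crossing
        p = volume.origin + (np.array([i, j, k], np.float32) + 0.5) * volume.voxel_size
        p[2] += frac * volume.voxel_size      # shift toward the crossing
        points.append(p)
    return np.asarray(points)
```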
The point cloud fusion method based on the white light scanner of the invention has the following advantages:
(1) The invention uses the white light projector to project speckle texture light onto the object to be measured, replacing the traditional practice of pasting marker points onto the sample surface, which effectively protects the integrity and authenticity of the sample. Moreover, the speckle texture is randomly generated, easy to produce and free of autocorrelation, which effectively improves the precision of SGM image matching.
(2) For point cloud fusion, the white light scanner uses a TSDF-model-based fusion method: a TSDF model is established for the space to be reconstructed, the voxels are weighted-averaged using multi-frame image data, and finally the zero-crossing values between voxels are searched along the direction of the projection light as the approximate positions of the sample surface points, yielding the point cloud fusion result. This brings the point cloud fusion result closer to the real surface of the sample and improves the precision and stability of point cloud fusion.
Based on the above point cloud fusion method, the invention also provides a point cloud fusion system based on a white light scanner.
As shown in fig. 5, a point cloud fusion system based on a white light scanner comprises a white light scanner, an image matching module, a TSDF model module and a weighted average processing module,
the white light scanner is used for capturing a plurality of images of the sample from different viewpoints while its speckle texture light is projected onto the sample;
the image matching module is used for performing image matching on each pair of images captured at the same moment to obtain a plurality of depth maps representing three-dimensional points of the sample in the projection scene;
the TSDF model module is used for placing the depth maps from the different positions into the corresponding positions of a pre-established TSDF model, so that each three-dimensional point of the depth maps falls into its corresponding voxel of the TSDF model;
and the weighted average processing module is used for performing weighted averaging on the distance from each three-dimensional point to the center point of its corresponding voxel to complete point cloud fusion of the real surface of the sample.
The point cloud fusion system based on the white light scanner further comprises a zero-crossing search module, which is specifically used for searching, after capture is complete, for the zero-crossing values between voxels along the direction of the projection light of the white light scanner as the approximate positions of the sample surface points, thereby obtaining the point cloud fusion result.
The white light scanner comprises a projector and two grayscale cameras distributed at a preset distance on either side of the projector, and is specifically used for: the projector projecting randomly generated speckle texture light onto the sample; and the two grayscale cameras simultaneously capturing a plurality of images of the sample from different viewpoints in the projection scene of the projector.
The image matching module is specifically used for: performing epipolar correction on each pair of images captured by the two grayscale cameras at the same moment, generating two epipolar images; performing image matching on each pair of corresponding epipolar images with a semi-global matching method to obtain the homonymous image points of each pair; calculating the disparity between each pair of corresponding images from the coordinate difference of their homonymous image points; and converting the disparity between each pair of images to obtain a plurality of depth maps.
The weighted average processing module is specifically used for: retrieving by index the spatial position coordinates of the center point of the voxel corresponding to each three-dimensional point; back-projecting the center point of the voxel corresponding to each three-dimensional point into the corresponding depth map, interpolating the distance from that voxel center to the corresponding grayscale camera, and computing the signed distance field of each three-dimensional point from the spatial coordinates of the voxel center and the position of the grayscale camera; normalizing each signed distance field; and performing weighted averaging on each normalized signed distance field to complete point cloud fusion of the real surface of the sample.
The point cloud fusion system based on the white light scanner performs point cloud fusion using the speckle texture light of the white light scanner together with a TSDF-model-based fusion method; this preserves the integrity and authenticity of the object to be measured, effectively improves point cloud fusion accuracy, enhances point cloud fusion stability, brings the point cloud fusion result closer to the real surface of the object, and achieves fully automatic point cloud fusion.
The white light handheld scanner is purpose-built for fast and accurate textured scanning, and the point cloud fusion method ensures that the white light scanner can quickly and stably obtain a true-color three-dimensional model of an article. Owing to these characteristics of speed and true color, the white light scanner can be widely applied in fields such as rapid prototyping, reverse engineering, animation, clothing, and human body scanning.
The above description covers only preferred embodiments of the invention and is not intended to limit it; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the invention shall fall within its scope of protection.

Claims (6)

1. A point cloud fusion method based on a white light scanner, characterized by comprising the following steps:
S1, capturing a plurality of images of the sample from different viewpoints while the speckle texture light of the white light scanner is projected onto the sample;
S2, performing image matching on each pair of images captured at the same moment to obtain a plurality of depth maps representing the three-dimensional points of the sample in the projection scene;
S3, placing the depth maps from the different positions into the corresponding positions of a pre-established TSDF model, so that each three-dimensional point of the depth maps falls into its corresponding voxel of the TSDF model;
S4, performing weighted averaging on the distance from each three-dimensional point to the center point of its corresponding voxel to complete point cloud fusion of the real surface of the sample;
wherein S1 specifically comprises:
S11, a projector in the white light scanner projects randomly generated speckle texture light onto the sample;
S12, two grayscale cameras in the white light scanner, separated by a preset distance, simultaneously capture a plurality of images of the sample from different viewpoints in the projection scene of the projector;
and S4 specifically comprises:
S41, retrieving by index the spatial position coordinates of the center point of the voxel corresponding to each three-dimensional point;
S42, back-projecting the center point of the voxel corresponding to each three-dimensional point into the corresponding depth map, interpolating the distance from that voxel center to the corresponding grayscale camera, and computing the signed distance field of each three-dimensional point from the spatial coordinates of the voxel center and the position of the grayscale camera;
S43, normalizing each signed distance field;
and S44, performing weighted averaging on each normalized signed distance field to complete point cloud fusion of the real surface of the sample.
2. The point cloud fusion method based on a white light scanner according to claim 1, characterized in that it further comprises S5, which is specifically:
repeating steps S1-S4 in a loop until image capture is complete;
and searching for the zero-crossing values between voxels along the direction of the projection light of the white light scanner as the approximate positions of the sample surface points, thereby obtaining the point cloud fusion result.
3. The point cloud fusion method based on a white light scanner according to claim 1 or 2, characterized in that S2 specifically comprises:
S21, performing epipolar correction on each pair of images captured by the two grayscale cameras at the same moment, generating two epipolar images;
S22, performing image matching on each pair of corresponding epipolar images with a semi-global matching method to obtain the homonymous image points of each pair;
S23, calculating the disparity between each pair of corresponding images from the coordinate difference of their homonymous image points;
and S24, converting the disparity between each pair of images to obtain a plurality of depth maps.
4. A point cloud fusion system based on a white light scanner, characterized by comprising a white light scanner, an image matching module, a TSDF model module and a weighted average processing module, wherein:
the white light scanner is used for capturing a plurality of images of the sample from different viewpoints while its speckle texture light is projected onto the sample;
the image matching module is used for performing image matching on each pair of images captured at the same moment to obtain a plurality of depth maps representing three-dimensional points of the sample in the projection scene;
the TSDF model module is used for placing the depth maps from the different positions into the corresponding positions of a pre-established TSDF model, so that each three-dimensional point of the depth maps falls into its corresponding voxel of the TSDF model;
the weighted average processing module is used for performing weighted averaging on the distance from each three-dimensional point to the center point of its corresponding voxel to complete point cloud fusion of the real surface of the sample;
the white light scanner comprises a projector and two grayscale cameras distributed at a preset distance on either side of the projector, and is used for:
the projector projecting randomly generated speckle texture light onto the sample;
and the two grayscale cameras simultaneously capturing a plurality of images of the sample from different viewpoints in the projection scene of the projector;
and the weighted average processing module is specifically used for:
retrieving by index the spatial position coordinates of the center point of the voxel corresponding to each three-dimensional point;
back-projecting the center point of the voxel corresponding to each three-dimensional point into the corresponding depth map, interpolating the distance from that voxel center to the corresponding grayscale camera, and computing the signed distance field of each three-dimensional point from the spatial coordinates of the voxel center and the position of the grayscale camera;
normalizing each signed distance field;
and performing weighted averaging on each normalized signed distance field to complete point cloud fusion of the real surface of the sample.
5. The point cloud fusion system based on a white light scanner according to claim 4, characterized in that the system further comprises a zero-crossing search module, which is specifically used for:
after image capture is complete, searching for the zero-crossing values between voxels along the direction of the projection light of the white light scanner as the approximate positions of the sample surface points, thereby obtaining the point cloud fusion result.
6. The point cloud fusion system based on a white light scanner according to claim 4 or 5, characterized in that the image matching module is specifically used for:
performing epipolar correction on each pair of images captured by the two grayscale cameras at the same moment, generating two epipolar images;
performing image matching on each pair of corresponding epipolar images with a semi-global matching method to obtain the homonymous image points of each pair;
calculating the disparity between each pair of corresponding images from the coordinate difference of their homonymous image points;
and converting the disparity between each pair of images to obtain a plurality of depth maps.
CN201710868725.6A (priority date 2017-09-22, filing date 2017-09-22): Point cloud fusion method and system based on white light scanner. Status: Active. Granted as CN107633532B.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201710868725.6A | 2017-09-22 | 2017-09-22 | Point cloud fusion method and system based on white light scanner

Publications (2)

Publication Number | Publication Date
CN107633532A | 2018-01-26
CN107633532B | 2020-10-23

Family

ID: 61103559

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201710868725.6A (Active, granted as CN107633532B) | Point cloud fusion method and system based on white light scanner | 2017-09-22 | 2017-09-22

Country Status (1)

Country | Link
CN | CN107633532B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108347545A (en) * | 2018-03-30 | 2018-07-31 | 深圳积木易搭科技技术有限公司 | A kind of binocular scanner
CN109242898B (en) * | 2018-08-30 | 2022-03-22 | 华强方特(深圳)电影有限公司 | Three-dimensional modeling method and system based on image sequence
CN110008843B (en) * | 2019-03-11 | 2021-01-05 | 武汉环宇智行科技有限公司 | Vehicle target joint cognition method and system based on point cloud and image data
CN110044266B (en) * | 2019-06-03 | 2023-10-31 | 易思维(杭州)科技有限公司 | Photogrammetry system based on speckle projection
CN113129348B (en) * | 2021-03-31 | 2022-09-30 | 中国地质大学(武汉) | Monocular vision-based three-dimensional reconstruction method for vehicle target in road scene

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN104182968A (en) * | 2014-08-05 | 2014-12-03 | 西北工业大学 | Method for segmenting fuzzy moving targets by wide-baseline multi-array optical detection system
CN105654492A (en) * | 2015-12-30 | 2016-06-08 | 哈尔滨工业大学 | Robust real-time three-dimensional (3D) reconstruction method based on consumer camera
CN106412426A (en) * | 2016-09-24 | 2017-02-15 | 上海大学 | Omni-focus photographing apparatus and method
CN106803267A (en) * | 2017-01-10 | 2017-06-06 | 西安电子科技大学 | Indoor scene three-dimensional rebuilding method based on Kinect

Also Published As

Publication number | Publication date
CN107633532A | 2018-01-26


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant