CN109360267B - Rapid three-dimensional reconstruction method for thin object - Google Patents

Rapid three-dimensional reconstruction method for thin object

Info

Publication number
CN109360267B
Authority
CN
China
Prior art keywords
base
point
sample
transformation matrix
points
Prior art date
Legal status
Active
Application number
CN201811147802.XA
Other languages
Chinese (zh)
Other versions
CN109360267A (en)
Inventor
徐羊元
时岭
徐松柏
杨静
Current Assignee
Hangzhou Lanxin Technology Co ltd
Original Assignee
Hangzhou Lanxin Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Lanxin Technology Co ltd
Priority to CN201811147802.XA
Publication of CN109360267A
Application granted
Publication of CN109360267B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformation in the plane of the image
    • G06T3/60 - Rotation of a whole image or part thereof
    • G06T3/604 - Rotation of a whole image or part thereof using a CORDIC [COordinate Rotation DIgital Computer] device
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P10/00 - Technologies related to metal processing
    • Y02P10/25 - Process efficiency

Abstract

The invention discloses a rapid three-dimensional reconstruction method for a thin object, which comprises the following steps: fixing the thin object to be reconstructed on a base with marker points, acquiring a depth image and a color image through a depth camera, identifying the marker points to obtain transformation matrices, carrying out coordinate transformation on the point cloud of the object to be reconstructed according to the transformation matrices, and finally stitching the transformed point clouds of the two viewing angles of the object, thereby completing the three-dimensional reconstruction of the sample. Because the three-dimensional reconstruction of the thin object is realized by stitching the point clouds of two viewing angles based on the marker points, for thin objects with a thickness of about 2 mm to 30 mm the reconstruction is fast, the accuracy is high, and the operation is simple. Moreover, the invention requires only one depth camera and one base with simple marker points, so the cost is low.

Description

Rapid three-dimensional reconstruction method for thin object
Technical Field
The invention relates to the technical field of computer vision, and in particular to a rapid three-dimensional reconstruction method for a thin object.
Background
With the development of computer vision, three-dimensional reconstruction technology is ever more widely applied in fields such as computer-aided medicine, reverse engineering, and industrial automated inspection. Three-dimensional reconstruction of real objects includes reconstruction based on professional software, reconstruction by computer vision methods, and so on. Modeling with professional software is very mature, widely applied, and gives good modeling results, but the software must be operated by trained professionals, which consumes considerable manpower and material resources. In the field of computer vision, the Iterative Closest Point (ICP) algorithm is a common method for point cloud registration: accurate registration of two point sets is achieved by iteratively optimizing a transformation matrix. However, the algorithm depends strongly on the given initial position and on the correspondences established during iteration, and suffers from a large computational load, long modeling time, and a tendency for the iteration to fall into local optima.
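For context, the basic ICP loop described above can be sketched in Python with numpy and scipy; this is an illustrative sketch of the prior-art algorithm, not the method of the invention:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iterations=50):
    """Roughly align point set src (N,3) to dst (M,3); returns a 4x4 transform."""
    T = np.eye(4)
    tree = cKDTree(dst)                      # nearest-neighbour index over dst
    for _ in range(iterations):
        _, idx = tree.query(src)             # correspondences: closest dst point
        mu_s, mu_d = src.mean(0), dst[idx].mean(0)
        H = (src - mu_s).T @ (dst[idx] - mu_d)
        U, _, Vt = np.linalg.svd(H)          # Kabsch: best rotation via SVD
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:             # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t                  # apply the incremental transform
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T                         # accumulate the total transform
    return T
```

The sketch makes the stated drawbacks visible: every iteration pays for a nearest-neighbour search over the whole cloud, and a poor initial pose can lock the correspondences into a local optimum.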
Disclosure of Invention
Aiming at the above defects, the invention provides a rapid three-dimensional reconstruction method for a thin object that uses marker points and a depth camera, with the advantages of high speed, high accuracy, low cost, and simple operation.
In order to achieve the above purpose, the invention adopts the following technical scheme: a rapid three-dimensional reconstruction method for a thin object, comprising the following steps:
fixing the thin object to be reconstructed on a base with marker points, acquiring a depth image and a color image through a depth camera, identifying the marker points to obtain transformation matrices, carrying out coordinate transformation on the point cloud of the object to be reconstructed according to the transformation matrices, and finally stitching the transformed point clouds of the two viewing angles of the object, thereby completing the three-dimensional reconstruction of the sample.
Further, the method comprises the following steps:
(1) Fixing the sample to be three-dimensionally reconstructed on a base with marker points, wherein the front and back surfaces of the base have exactly the same shape;
(2) Using a depth camera, collecting a color image and a depth image of a first viewing angle (front) containing the base and the sample, and converting them into a point cloud by combining the intrinsic and extrinsic parameters of the camera;
(3) Detecting the marker points on the base from the color image;
(4) Rotating the base and repeating steps (2)-(3) to obtain the point cloud and marker points of a second viewing angle (back);
(5) Using the marker points detected in step (3) and step (4), calculating the transformation matrices T_1, T_2 of the two viewing angles respectively; through the transformation matrices, rotating both the front and the back of the base to be parallel to the xoy plane and translating the center of the base to the origin of the camera coordinate system;
(6) Using the transformation matrices T_1, T_2 to carry out coordinate transformation on the point clouds of the two viewing angles of the sample respectively, then passing the point cloud of the second viewing angle through a transformation matrix T_3, which rotates it by 180 degrees around the Y axis and translates it along the Z axis by a value equal to the thickness of the base, and finally stitching the transformed point clouds to complete the three-dimensional reconstruction of the sample.
Further, the marker points on the base in step (3) are denoted (p_0, p_1, p_2, p_3).
Further, the transformation matrices of the two viewing angles in step (5) are obtained as follows:
By calculating the coordinates (x_c, y_c, z_c) of the center point of the front face S_1 of the base, the relative distance between the center point and the origin is obtained, which gives the translation vector t:

t = -(x_c, y_c, z_c)^T    (1)

The coordinates of the center point of the front face S_1 of the base are:

(x_c, y_c, z_c) = (p_5 + p_6)/2    (2)

where p_5 is the midpoint of marker points p_0 and p_1, namely p_5 = (p_0 + p_1)/2;
p_6 is the midpoint of marker points p_2 and p_3, namely p_6 = (p_2 + p_3)/2;
The front face S_1 of the base is rotated to be parallel to the xoy plane; the rotation matrix R is shown in formula (3):

R = [ r_1 ]
    [ r_2 ]
    [ r_3 ]    (3)

where r_3 = [a, b, c], the vector [a, b, c] being the normal vector of the plane S_1;

r_1 = (p_5 - p_6).normalized();
r_2 = r_3.cross(r_1);
The transformation matrix of the first viewing angle of the base is shown in the following formula:

T_1 = [ R  t ]    (4)
      [ 0  1 ]

Similarly, the transformation matrix T_2 of the second viewing angle of the base can be obtained.
Further, step (6) is specifically as follows:

The three-dimensional reconstruction model of the sample is:

C = T_1 * C_1 + T_3 * T_2 * C_2    (5)

T_3 is shown in formula (6):

T_3 = [ -1  0   0  0 ]
      [  0  1   0  0 ]
      [  0  0  -1  l ]
      [  0  0   0  1 ]    (6)

where l is the thickness of the base; C is the reconstructed point cloud; and C_1, C_2 are the point clouds of the front and back of the sample before reconstruction, respectively.
The beneficial effects of the invention are as follows: the thin object to be reconstructed is fixed on a base with marker points; a depth camera acquires a depth image and a color image; the marker points are identified to obtain the transformation matrices; the point cloud of the object to be reconstructed is transformed according to the transformation matrices; and finally the transformed point clouds of the two viewing angles of the object are stitched, completing the three-dimensional reconstruction of the sample. Since only two viewing angles of the object need to be collected, the reconstruction is fast; and since only one depth camera and a base with marker points are needed, the cost is low. Compared with the traditional ICP algorithm, the method completes reconstruction faster, and offers high reconstruction accuracy, low cost, strong extensibility, and easy visualization. For thin objects with a thickness of about 2 mm to 30 mm, stitching the point clouds of two viewing angles based on the marker points yields fast, accurate reconstruction with simple operation.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic view of the base with marker points;
FIG. 3 is a schematic view of the base before and after coordinate transformation;
FIG. 4 is a schematic diagram of the three-dimensional reconstruction result of a thin object.
Detailed Description
Embodiments of the technical scheme of the present invention are described in detail below with reference to the accompanying drawings. The following examples are given only for the purpose of more clearly illustrating the technical solutions of the present invention, and are therefore only exemplary and not intended to limit the scope of the present invention.
As shown in FIG. 1, the invention provides a rapid three-dimensional reconstruction method for a thin object: the thin object to be reconstructed is fixed on a base with marker points; a depth image and a color image are acquired through a depth camera; the marker points are identified, from which the transformation matrices are obtained; coordinate transformation is carried out on the point cloud of the object to be reconstructed according to the transformation matrices; and finally the transformed point clouds of the two viewing angles of the object are stitched, completing the three-dimensional reconstruction of the sample.
The method specifically comprises the following steps:
Step 1: the sample to be three-dimensionally reconstructed is fixed on a base with marker points; the front and back surfaces of the base have exactly the same shape, and the specific structural dimensions are known;
Step 2: a depth camera is used to collect a color image and a depth image of a first viewing angle (front) containing the base and the sample, which are converted into a point cloud by combining the intrinsic and extrinsic parameters of the camera;
Step 3: the marker points on the base are detected from the color image; the detected marker points in this embodiment are denoted in order as (p_0, p_1, p_2, p_3); of course, any number of three or more marker points is possible;
Step 4: the base is rotated and steps 2-3 are repeated to obtain the point cloud and marker points of the second viewing angle (back);
Step 5: because the thickness of the base is known and the front and back surfaces have the same shape, the base can be reconstructed by bringing the point clouds of the front and the back into coincidence through coordinate transformation and then moving the back along the z-axis by a distance equal to the base thickness. Since the sample is fixed on the base, a three-dimensional reconstruction model of the sample is obtained by applying to it the same rotations and translations as to the base. To this end, the marker points detected in step 3 and step 4 are used to calculate the transformation matrices T_1, T_2 of the two viewing angles respectively; through these transformation matrices, both the front and the back of the base are rotated to be parallel to the xoy plane, as shown in FIG. 3, and the center of the base is translated to the origin of the camera coordinate system. Specifically, taking the front as an example:
By calculating the coordinates (x_c, y_c, z_c) of the center point of the front face S_1 of the base, the relative distance between the center point and the origin is obtained, which gives the translation vector t:

t = -(x_c, y_c, z_c)^T    (1)

To accurately calculate the center of S_1, the centers of the four marker points (p_0, p_1, p_2, p_3) are identified, as shown in FIG. 2. The coordinates of the center point of S_1 are then deduced as:

(x_c, y_c, z_c) = (p_5 + p_6)/2    (2)

where p_5 is the midpoint of marker points p_0 and p_1, namely p_5 = (p_0 + p_1)/2;
p_6 is the midpoint of marker points p_2 and p_3, namely p_6 = (p_2 + p_3)/2;
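As a concreteness aid, a minimal numpy sketch of formulas (1)-(2); the marker coordinates below are hypothetical placeholders (four coplanar points on a slightly tilted base), not data from the embodiment:

```python
import numpy as np

# Marker points p0..p3 detected in the color image and lifted to 3-D via the
# depth image; these coordinates are arbitrary placeholder values.
p0, p1, p2, p3 = (np.array(v) for v in
                  ([0.10, 0.20, 0.805], [0.30, 0.20, 0.815],
                   [0.30, 0.40, 0.815], [0.10, 0.40, 0.805]))

p5 = (p0 + p1) / 2        # midpoint of p0 and p1
p6 = (p2 + p3) / 2        # midpoint of p2 and p3
center = (p5 + p6) / 2    # center of the front face S1, formula (2)
t = -center               # translation taking the center to the origin, formula (1)
```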
Since the front face S_1 of the base is rotated to be parallel to the xoy plane, the rotation matrix R is shown in formula (3):

R = [ r_1 ]
    [ r_2 ]
    [ r_3 ]    (3)

where r_3 = [a, b, c], the vector [a, b, c] being the normal vector of the plane S_1;

r_1 = (p_5 - p_6).normalized();
r_2 = r_3.cross(r_1);
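Continuing the sketch, formula (3) can be assembled as follows. Estimating the plane normal [a, b, c] from three markers with a cross product is an assumption made for illustration; the patent only states that [a, b, c] is the normal vector of S_1:

```python
# Plane normal of S1, estimated from three of the markers (an assumption;
# any plane-fitting method over the four markers would do equally well).
n = np.cross(p1 - p0, p3 - p0)
r3 = n / np.linalg.norm(n)                  # r3 = [a, b, c], unit normal of S1
r1 = (p5 - p6) / np.linalg.norm(p5 - p6)    # r1 = (p5 - p6).normalized()
r2 = np.cross(r3, r1)                       # r2 = r3.cross(r1)
R = np.vstack((r1, r2, r3))                 # rows r1, r2, r3: formula (3)
```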
The transformation matrix of the front of the base is shown in the following formula:

T_1 = [ R  t ]    (4)
      [ 0  1 ]

where R denotes the rotation matrix and t denotes the translation vector;

Similarly, the transformation matrix T_2 of the back of the base can be obtained.
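The 4x4 matrix of formula (4) then follows. In this sketch the translation is composed with R so that the base center actually lands at the origin; the patent writes the matrix compactly as [R t; 0 1]:

```python
def make_transform(R, t):
    """Assemble a 4x4 homogeneous transform from a rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = R @ t     # p' = R p + R t = R (p - center), so center -> origin
    return T

T1 = make_transform(R, t)   # first viewing angle (front), formula (4)
# T2 is obtained in exactly the same way from the back-view markers.
```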
Step 6: using a transformation matrix T 1 ,T 2 Respectively carrying out coordinate transformation on point clouds of two visual angles of the sample, and enabling the point clouds of a second visual angle of the sample to pass through a transformation matrix T 3 And rotating the sample by 180 degrees around the Y axis, translating the sample along the Z axis by a value which is the same as the thickness of the base, and finally splicing the transformed point cloud to finish the three-dimensional reconstruction of the sample. (as shown in fig. 4), specifically:
C=T 1 *C 1 +T 3 *T 2 *C 2 (5)
T 3 as shown in equation 6:
Figure BDA0001817213250000052
wherein l is the thickness of the base; t (T) 1 ,T 2 Representing the transformation matrix for the front and back of the chassis, respectively. C is the reconstructed point cloud, C 1 ,C 2 The point clouds of the front and back sides of the sample before reconstruction, respectively.
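Finally, a hedged sketch of formulas (5)-(6): T_3 flips the second view 180 degrees about the Y axis and shifts it along Z by the base thickness l, and the + of formula (5) is realized as point cloud concatenation. C1 and C2 are placeholder arrays standing in for the two captured clouds, and the sign of the Z offset depends on the axis convention:

```python
def apply(T, cloud):
    """Apply a 4x4 homogeneous transform to an (N, 3) point cloud."""
    return cloud @ T[:3, :3].T + T[:3, 3]

l = 0.01                               # base thickness in metres (example value)
T3 = np.diag([-1.0, 1.0, -1.0, 1.0])   # 180-degree rotation about the Y axis
T3[2, 3] = l                           # Z shift by the base thickness, formula (6)

C1 = np.random.rand(1000, 3)           # placeholder front-view cloud
C2 = np.random.rand(1000, 3)           # placeholder back-view cloud
T2 = T1                                # placeholder; computed from back-view markers
C = np.vstack((apply(T1, C1),          # formula (5): stitch the two views
               apply(T3 @ T2, C2)))
```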
The foregoing description covers only the preferred embodiments of the present invention and is not intended to limit the scope of the invention; all equivalent structures or equivalent processes made using the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, are likewise included in the scope of protection of the invention.

Claims (2)

1. A rapid three-dimensional reconstruction method for a thin object, characterized in that the method comprises the following steps:
fixing the thin object to be reconstructed on a base with marker points, collecting a depth image and a color image through a depth camera, identifying the marker points to obtain transformation matrices, carrying out coordinate transformation on the point clouds of the object to be reconstructed according to the transformation matrices, and finally stitching the transformed point clouds of the two viewing angles of the object to complete the three-dimensional reconstruction of the sample;
wherein the method comprises the following steps:
(1) Fixing a sample to be three-dimensionally reconstructed on a base with marker points, wherein the front and back surfaces of the base have exactly the same shape;
(2) Using a depth camera, collecting a color image and a depth image of a first viewing angle containing the base and the sample, and converting them into a point cloud by combining the intrinsic and extrinsic parameters of the camera;
(3) Detecting the marker points on the base from the color image, the marker points on the base being denoted (p_0, p_1, p_2, p_3);
(4) Rotating the base and repeating steps (2)-(3) to obtain the point cloud and marker points of the second viewing angle;
(5) Using the marker points detected in step (3) and step (4), calculating the transformation matrices T_1, T_2 of the two viewing angles respectively; through the transformation matrices, rotating both the front and the back of the base to be parallel to the xoy plane and translating the center of the base to the origin of the camera coordinate system; the transformation matrices of the two viewing angles are obtained as follows:

by calculating the coordinates (x_c, y_c, z_c) of the center point of the front face S_1 of the base, the relative distance between the center point and the origin is obtained, namely the translation vector t:

t = -(x_c, y_c, z_c)^T    (1)

the coordinates of the center point of the front face S_1 of the base are:

(x_c, y_c, z_c) = (p_5 + p_6)/2    (2)

where p_5 is the midpoint of marker points p_0 and p_1, namely p_5 = (p_0 + p_1)/2;
p_6 is the midpoint of marker points p_2 and p_3, namely p_6 = (p_2 + p_3)/2;
the front face S_1 of the base is rotated to be parallel to the xoy plane; the rotation matrix R is shown in formula (3):

R = [ r_1 ]
    [ r_2 ]
    [ r_3 ]    (3)

where r_3 = [a, b, c], the vector [a, b, c] being the normal vector of the plane S_1;

r_1 = (p_5 - p_6).normalized();
r_2 = r_3.cross(r_1);

the transformation matrix of the first viewing angle of the base is shown in the following formula:

T_1 = [ R  t ]    (4)
      [ 0  1 ]

similarly, the transformation matrix T_2 of the second viewing angle of the base is obtained;
(6) Using the transformation matrices T_1, T_2 to carry out coordinate transformation on the point clouds of the two viewing angles of the sample respectively, then passing the point cloud of the second viewing angle through a transformation matrix T_3, which rotates it by 180 degrees around the Y axis and translates it along the Z axis by a value equal to the thickness of the base, and finally stitching the transformed point clouds to complete the three-dimensional reconstruction of the sample.
2. The rapid three-dimensional reconstruction method for a thin object according to claim 1, characterized in that step (6) is specifically as follows:

the three-dimensional reconstruction model of the sample is:

C = T_1 * C_1 + T_3 * T_2 * C_2    (5)

T_3 is shown in formula (6):

T_3 = [ -1  0   0  0 ]
      [  0  1   0  0 ]
      [  0  0  -1  l ]
      [  0  0   0  1 ]    (6)

where l is the thickness of the base; C is the reconstructed point cloud; and C_1, C_2 are the point clouds of the front and back of the sample before reconstruction, respectively.
CN201811147802.XA 2018-09-29 2018-09-29 Rapid three-dimensional reconstruction method for thin object Active CN109360267B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811147802.XA CN109360267B (en) 2018-09-29 2018-09-29 Rapid three-dimensional reconstruction method for thin object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811147802.XA CN109360267B (en) 2018-09-29 2018-09-29 Rapid three-dimensional reconstruction method for thin object

Publications (2)

Publication Number Publication Date
CN109360267A CN109360267A (en) 2019-02-19
CN109360267B 2023-06-06

Family

ID=65348194

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811147802.XA Active CN109360267B (en) 2018-09-29 2018-09-29 Rapid three-dimensional reconstruction method for thin object

Country Status (1)

Country Link
CN (1) CN109360267B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111768490B (en) * 2020-05-14 2023-06-27 华南农业大学 Plant three-dimensional modeling method and system based on iteration closest point and manual intervention
CN112069923A (en) * 2020-08-18 2020-12-11 东莞正扬电子机械有限公司 3D face point cloud reconstruction method and system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103198523B (en) * 2013-04-26 2016-09-21 清华大学 A kind of three-dimensional non-rigid body reconstruction method based on many depth maps and system
CN104330074B (en) * 2014-11-03 2017-01-18 广州欧科信息技术股份有限公司 Intelligent surveying and mapping platform and realizing method thereof
CN105627948B (en) * 2016-01-31 2018-02-06 山东科技大学 A kind of method that large complicated carved measuring system carries out complex-curved sampling
CN105989604A (en) * 2016-02-18 2016-10-05 合肥工业大学 Target object three-dimensional color point cloud generation method based on KINECT
CN106556356A (en) * 2016-12-07 2017-04-05 西安知象光电科技有限公司 A kind of multi-angle measuring three-dimensional profile system and measuring method
CN107240129A (en) * 2017-05-10 2017-10-10 同济大学 Object and indoor small scene based on RGB D camera datas recover and modeling method
CN107631700B (en) * 2017-09-07 2019-06-21 西安电子科技大学 The three-dimensional vision information method that spatial digitizer is combined with total station

Also Published As

Publication number Publication date
CN109360267A (en) 2019-02-19

Similar Documents

Publication Publication Date Title
CN108759665B (en) Spatial target three-dimensional reconstruction precision analysis method based on coordinate transformation
CN111121655B (en) Visual detection method for pose and aperture of coplanar workpiece with equal large hole patterns
CN109035327B (en) Panoramic camera attitude estimation method based on deep learning
CN110866969A (en) Engine blade reconstruction method based on neural network and point cloud registration
CN113744351B (en) Underwater structure light measurement calibration method and system based on multi-medium refraction imaging
WO2018201677A1 (en) Bundle adjustment-based calibration method and device for telecentric lens-containing three-dimensional imaging system
CN104463969B (en) A kind of method for building up of the model of geographical photo to aviation tilt
CN111784778A (en) Binocular camera external parameter calibration method and system based on linear solving and nonlinear optimization
CN111415391A (en) Multi-view camera external orientation parameter calibration method adopting inter-shooting method
CN109583377B (en) Control method and device for pipeline model reconstruction and upper computer
CN114332348B (en) Track three-dimensional reconstruction method integrating laser radar and image data
Li et al. Research on the calibration technology of an underwater camera based on equivalent focal length
CN106097433A (en) Object industry and the stacking method of Image model and system
CN109360267B (en) Rapid three-dimensional reconstruction method for thin object
CN113870366B (en) Calibration method and calibration system of three-dimensional scanning system based on pose sensor
CN114066983A (en) Intelligent supplementary scanning method based on two-axis rotary table and computer readable storage medium
CN106500625A (en) A kind of telecentricity stereo vision measuring apparatus and its method for being applied to the measurement of object dimensional pattern micron accuracies
Langming et al. A flexible method for multi-view point clouds alignment of small-size object
CN109342008B (en) Wind tunnel test model attack angle single-camera video measuring method based on homography matrix
CN116840258A (en) Pier disease detection method based on multifunctional underwater robot and stereoscopic vision
Fukai et al. Fast and robust registration of multiple 3d point clouds
CN115082446B (en) Method for measuring aircraft skin rivet based on image boundary extraction
Hyeon et al. Automatic spatial template generation for realistic 3d modeling of large-scale indoor spaces
CN111612071B (en) Deep learning method for generating depth map from curved surface part shadow map
CN114511637A (en) Weak-feature object image three-dimensional reconstruction system and method based on strong feature construction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant