CN112489193A - Three-dimensional reconstruction method based on structured light - Google Patents

Three-dimensional reconstruction method based on structured light

Info

Publication number
CN112489193A
Authority
CN
China
Prior art keywords
image
point
points
camera
dimensional reconstruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011334701.0A
Other languages
Chinese (zh)
Other versions
CN112489193B (en)
Inventor
李锋
汪平
张勇停
臧利年
周斌斌
刘玉红
孙晗笑
叶童玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University of Science and Technology
Original Assignee
Jiangsu University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University of Science and Technology
Priority to CN202011334701.0A
Publication of CN112489193A
Application granted
Publication of CN112489193B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a three-dimensional reconstruction method based on structured light, belonging to the technical field of three-dimensional reconstruction. The method comprises the following steps: calibrating the cameras and obtaining their internal and external parameters; solving a distortion mapping matrix from those parameters; projecting an RGB structured dot pattern onto the surface of a target object with a projector; shooting the target object with a left camera and a right camera to acquire a left image and a right image; performing image segmentation and point clustering; matching the points in the two views; and, after all sample points have been computed, reconstructing the surface of the target object. The beneficial effects of the invention are as follows: combining binocular stereo vision with structured light avoids calibrating the projector and simplifies the steps of three-dimensional reconstruction; the designed projected RGB dot pattern and the iterative point segmentation method segment the points in the image effectively, so that the left and right images are matched with higher precision, online measurement becomes possible, and regions whose colors are not bright and whose texture is not rich are still reconstructed well.

Description

Three-dimensional reconstruction method based on structured light
Technical Field
The invention relates to the technical field of three-dimensional reconstruction, in particular to a three-dimensional reconstruction method based on structured light.
Background
Computer vision means that a computer acquires descriptions and information about the objective world by processing images or image sequences, so as to help people better understand the content contained in the images. Three-dimensional reconstruction is a branch of computer vision and a research direction that combines computer vision with computer graphics and image processing. It is widely applied in industrial automation, reverse engineering, cultural relic protection, computer-assisted medicine, virtual reality, augmented reality, robotics and other scenarios.
Structured light three-dimensional reconstruction is one of the important techniques in computer vision. However, most existing methods require multiple projections of the designed pattern to obtain a closed-form solution, which makes them unable to measure dynamic objects. Most related systems are based on reconstruction from three-dimensional color images, edge detection and feature matching algorithms; the three colors are processed independently as R, G and B channels, so the correlation between the color components of the image is artificially stripped away, which affects the reliability of the detection.
In binocular stereo vision, two cameras shoot the left and right images of an object from two angles, a stereo matching algorithm finds the homonymous points in the two images, and the three-dimensional spatial position of the measured object is calculated by triangulation from the internal and external parameters of the cameras. Binocular stereo vision does not need to actively project a pattern and has a simple hardware structure, but for objects with little surface texture the reconstructed point cloud has low precision, the reconstruction is slow, and matching errors occur easily. Structured light technology projects a specific coded pattern onto the object surface through a projector, captures the coded pattern modulated by the object surface with a camera, and recovers the depth information of the object by decoding the pattern. Structured light reconstruction has high precision and high speed, and even objects with little surface texture can be reconstructed well; however, most traditional structured light reconstruction systems are monocular, the projector must be calibrated in order to compute the depth information, and calibrating a projector is an extremely tedious process.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a three-dimensional reconstruction method that combines binocular stereo vision with structured light. The structured light of active vision is used to increase the texture features on the object surface, while the three-dimensional reconstruction itself is realized by passive vision, which helps to improve the precision and efficiency of the three-dimensional reconstruction and saves cost.
A three-dimensional reconstruction method based on structured light comprises the following steps:
s1, building a three-dimensional reconstruction system, wherein the three-dimensional reconstruction system comprises two cameras, a projector and a computer;
s2, calibrating the camera and obtaining internal and external parameters of the camera;
s3, solving a distortion mapping matrix according to the parameters of the camera;
s4, projecting the designed RGB point structured light pattern to the object by using a projector;
s5, acquiring a left image and a right image of a reconstructed object by using a camera of a binocular stereo vision system, and performing stereo correction on the images;
s6, based on the regional similarity, red points, green points and blue points of the left image and the right image are respectively segmented by a three-channel combined RGB point segmentation method;
s7, clustering the points with the same color according to the Euclidean distance;
s8, matching points in the left view and points in the right view, and obtaining the parallax of corresponding matching points on the left image and the right image relative to a point P on the object;
s9, combining internal and external parameters of a camera, and obtaining three-dimensional space coordinates of each point on the object by using a parallax principle;
and S10, generating a sparse point cloud of the object from the three-dimensional space coordinates of the multiple points on the object, and completing the three-dimensional reconstruction of the object.
Preferably, the step S2 includes the following sub-steps:
s21, calibrating a left camera to obtain internal and external parameters of the left camera, wherein the external parameters comprise a rotation matrix and a translation matrix;
s22, the internal parameters and the rotation matrixes of the left camera and the right camera are the same, and the translation matrix of the right camera is obtained by the translation matrix of the left camera and the distance between the two cameras, so that the internal and external parameters of the right camera are obtained.
Preferably, in step S4, the RGB structured-light dot pattern is based on a structured-light dot matrix of RGB three primary colors.
Preferably, the RGB structured light dot pattern is a structured light dot matrix in which every point of a given row of red, green and blue has the same color and adjacent rows have different colors.
Preferably, in S4, the structured light is projected toward the object from the front of the object.
Preferably, the step S5 includes the following sub-steps:
s51, respectively obtaining a left correction matrix and a right correction matrix based on the internal and external parameters and the distortion mapping matrix of the left camera and the internal and external parameters and the distortion mapping matrix of the right camera;
s52, performing stereo correction on the left image by using the left correction matrix and on the right image by using the right correction matrix, so that a point in the corrected left image and its matching point in the corrected right image lie on the same scan line, namely the y-axis coordinates of the point and its matching point are the same.
Preferably, the step S6 includes the following sub-steps:
s601, calculating a first threshold value T for the point image of the R channel by using a threshold selection method based on the slope difference distribution;
s602, segmenting the point image by adopting the threshold value T, and labelling all points in the segmented binary image I_1 as:
I_1(x, y) = k, if pixel (x, y) belongs to the k-th connected point; I_1(x, y) = 0, otherwise (1);
wherein (x, y) is the index of the binary image;
s603, setting the image resolution as N_X × N_Y, defining the set X = {1, 2, ..., N_X} and the set Y = {1, 2, ..., N_Y}, and calculating the index set of the k-th labelled point as:
(X_k, Y_k) = {(x, y) | I_1(x, y) = k} (2);
then calculating the area A_k of the k-th labelled point as:
A_k = |X_k| = |Y_k| (3);
s604, sorting the area set {A_k, k = 1, 2, ..., N_B} into the ordered set {A_(k), k = 1, 2, ..., N_B} satisfying the condition:
A_(1) ≤ A_(2) ≤ ... ≤ A_(N_B) (4);
since the areas of the points are similar, the differences between the sorted areas should not be large if all points are segmented accurately enough; therefore the accuracy of the segmentation result can be judged from the calculated differences of the sorted areas;
s605, calculating the difference D_i of the sorted areas as:
D_i = A_(i+1) − A_(i), i = 1, 2, ..., N_B − 1 (5);
and calculating the maximum difference as:
D_max = max D_i, i = 1, 2, ..., N_B − 1 (6);
if the maximum difference D_max is greater than the area threshold (calculated as one tenth of the median of the area set {A_(k)}), the selected global threshold T is smaller than the optimal threshold, so some adjacent points in the segmentation result are merged into one point; the optimal threshold is defined as the threshold that can separate all bright and dark points from the background; on the other hand, a threshold smaller than the optimal threshold segments the bright points more completely; therefore some of the points segmented with the smaller threshold should be used, and the area threshold is used to select which segmented points to use;
s606, calculating the mean value of the area set {A_(k), k = 1, 2, ..., N_B} as:
A_m = (1/N_B) Σ_{k=1}^{N_B} A_(k) (7);
s607, calculating the index set of all segmented points whose area is smaller than A_m as:
{(x, y) | I_1(x, y) = k, A_k < A_m} (8);
s608, updating the global threshold value as:
T = T + ΔT (9);
where ΔT is the step size of the loop, its value being an integer greater than or equal to 1; to accelerate convergence, ΔT is chosen as 10;
s609, segmenting the image again with the updated threshold value;
s610, repeating the steps S601 to S609 until D_max is smaller than one tenth of the median of the area set {A_(k)};
s611, after repeating the steps m times, obtaining the index set (X_m, Y_m) of all segmented points in the m-th segmentation result I_m as:
(X_m, Y_m) = {(x, y) | I_m(x, y) > 0} (10);
s612, initializing the final segmentation image I_R, of resolution N_X × N_Y, as:
I_R(x, y) = 0 (11);
and then calculating:
I_R(x, y) = 1 for (x, y) ∈ (X_m, Y_m) (12);
s613, segmenting the point images of the G channel and the B channel in the same way to obtain I_G and I_B, and adding the segmentation results to form the final segmentation result.
Preferably, the step S7 includes the following substeps:
s71, dilating the segmented points 5 times in each channel with the structuring element B = {0, 0}, so that adjacent points are connected into a line image;
and s72, multiplying the clustered line image in each channel with the corresponding segmented point image to generate a clustered point image; in each channel, points on different lines are assigned different identification numbers, including a line identification number and a row identification number.
Preferably, the step S8 includes the following substeps:
s81, in the two views of each channel, first carrying out matching according to the line identification numbers of the points, and then matching the points with the same line identification number according to their row identification numbers, so that the clustered points in the two views are matched;
s82, obtaining the pixel coordinates of the matched corner points of the left and right images, namely a corner point l(x_l, y_l) of the left image and the corresponding corner point r(x_r, y_r) of the right image;
s83, since the images have been stereo-corrected to achieve row alignment, the y-axis coordinates of point l and point r are the same, and the parallax of the corresponding matching points on the left and right images with respect to the point P on the object can be directly expressed as d = x_l − x_r.
Preferably, the step S9 includes the following substeps:
s91, according to the parallax of the corresponding matching points on the left and right images with respect to the point P on the object and the optical centers of the left and right cameras, the triangle PO_lO_r is similar to the triangle Plr, where the similar-triangle proportion is:
(T − (x_l − x_r)) / (Z − f) = T / Z (13);
where T is the optical center distance of the left and right cameras, d = x_l − x_r is the parallax of the corresponding matching points on the left and right images with respect to the point P on the object, f is the focal length of the left and right cameras, Z is the depth value of the point P, O_l is the optical center of the left camera, and O_r is the optical center of the right camera;
s92, obtaining the three-dimensional coordinates (X, Y, Z) of the point P from formula (13) as:
Z = fT / d, X = x_l Z / f, Y = y_l Z / f (14);
and finally obtaining the three-dimensional coordinates (X, Y, Z) of all points on the image.
The invention has the beneficial effects that: the invention combines binocular stereo vision with structured light, which avoids calibrating the projector and simplifies the steps of three-dimensional reconstruction; the designed two-view reconstruction method needs only one projection and can therefore measure dynamic objects; the designed RGB structured light dot pattern and the iterative point segmentation method based on the area similarity of the dot pattern can effectively segment the red, green and blue points of the three-channel RGB pattern separately, which facilitates the subsequent unsupervised point clustering and allows the points in the left and right views to be matched quickly; the three-dimensional reconstruction itself is realized by passive vision, which helps to improve the precision and efficiency of the reconstruction, and the method reconstructs well targets whose colors are not bright, whose textures are not rich and which are not obviously occluded.
Drawings
FIG. 1 is a flow chart of a method for reconstructing a surface of an object based on a structured light point pattern;
FIG. 2 is a schematic diagram of an imaging system set up;
FIG. 3 is a designed RGB format point structured light pattern;
FIG. 4 is a photograph of a spherical object projected with regular structured light according to the present invention;
FIG. 5 is a three-dimensional reconstructed volumetric imaging model;
fig. 6 is a schematic view of a similar triangle using the parallax principle according to the present invention.
Detailed Description
The geometric model adopted by the invention is shown in fig. 5, wherein O_l is the optical center of the left camera, O_r is the optical center of the right camera, and P is any point in space; the optical centers of the left and right cameras and the point P form a plane PO_lO_r. P_l and P_r are the image points of the point P in the left and right cameras respectively and are called a pair of homonymous points, and the intersection lines L_pl and L_pr of the plane PO_lO_r with the left and right image planes are called a pair of epipolar lines.
The invention is further illustrated by the following figures and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some but not all of the relevant aspects of the present invention are shown in the drawings.
Example 1
Referring to fig. 1 and 2, a three-dimensional reconstruction method based on structured light includes the following steps:
s1, building a three-dimensional reconstruction system, wherein the three-dimensional reconstruction system comprises two cameras, a projector and a computer;
s2, calibrating the camera and obtaining internal and external parameters of the camera;
the step S2 includes the following sub-steps:
s21, calibrating a left camera to obtain internal and external parameters of the left camera, wherein the external parameters comprise a rotation matrix and a translation matrix;
s22, the internal parameters and the rotation matrixes of the left camera and the right camera are the same, and the translation matrix of the right camera is obtained by the translation matrix of the left camera and the distance between the two cameras, so that the internal and external parameters of the right camera are obtained.
That is, the rotation matrices and translation matrices of the calibrated left and right cameras are respectively R_1, t_1 and R_2, t_2, where R_1 = R_2, t_1 = (x, y, z)^T, t_2 = (x + d, y, z)^T, and d is the translation distance from the left camera to the right camera.
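For illustration only, the calibration of S21-S22 can be sketched in Python with OpenCV as follows. This is a minimal sketch under the assumption that chessboard correspondences for the left camera are already available and that the baseline d is known; the helper names calibrate_left and right_extrinsics are hypothetical and not part of the invention.

```python
import numpy as np
import cv2

def calibrate_left(object_points, image_points, image_size):
    """Calibrate the left camera from chessboard correspondences (S21).
    object_points / image_points are the per-view 3D board points and their
    detected 2D pixel positions, as expected by cv2.calibrateCamera."""
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    R1, _ = cv2.Rodrigues(rvecs[0])   # rotation matrix of the first view
    t1 = tvecs[0].reshape(3)          # translation of the first view
    return K, dist, R1, t1

def right_extrinsics(R1, t1, d):
    """S22: the right camera shares the intrinsics and rotation of the left
    camera; its translation is t2 = (x + d, y, z)^T, shifted by the baseline d."""
    R2 = R1.copy()
    t2 = t1 + np.array([d, 0.0, 0.0])
    return R2, t2
```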
S3, solving a distortion mapping matrix according to the parameters of the camera;
s4, referring to the figures 2, 3 and 4, projecting the designed RGB point structured light pattern to an object by using a projector;
in step S4, the RGB structural dot pattern is a structured light lattice based on RGB three primary colors, and the structured light is projected from the front of the object to the object.
In this embodiment, the RGB structured light dot pattern is a structured light dot matrix in which every point of a given row of red, green and blue has the same color and adjacent rows have different colors. The structured light lattice of the RGB three primary colors is chosen because a color structured light dot pattern is more favourable for the matching of the feature points.
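A minimal sketch of generating such a projector pattern is given below; the dot spacing, dot radius and image size are illustrative assumptions, and the function name make_rgb_dot_pattern is hypothetical.

```python
import numpy as np
import cv2

def make_rgb_dot_pattern(width=1280, height=800, spacing=40, radius=5):
    """Render a projector image in which every dot of a given row has the same
    color and adjacent rows cycle through red, green and blue."""
    pattern = np.zeros((height, width, 3), dtype=np.uint8)
    row_colors = [(0, 0, 255), (0, 255, 0), (255, 0, 0)]  # BGR: red, green, blue
    for row_idx, y in enumerate(range(spacing // 2, height, spacing)):
        color = row_colors[row_idx % 3]        # adjacent rows get different colors
        for x in range(spacing // 2, width, spacing):
            cv2.circle(pattern, (x, y), radius, color, thickness=-1)
    return pattern

# cv2.imwrite("rgb_dot_pattern.png", make_rgb_dot_pattern())
```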
S5, acquiring a left image and a right image of a reconstructed object by using a camera of a binocular stereo vision system, and performing stereo correction on the images;
the step S5 includes the following sub-steps:
s51, respectively obtaining a left correction matrix and a right correction matrix based on the internal and external parameters and the distortion mapping matrix of the left camera and the internal and external parameters and the distortion mapping matrix of the right camera;
s52, performing stereo correction on the left image by using the left correction matrix and on the right image by using the right correction matrix, so that a point in the corrected left image and its matching point in the corrected right image lie on the same scan line, namely the y-axis coordinates of the point and its matching point are the same.
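For illustration, the distortion mapping of S3 and the stereo correction of S51-S52 can be sketched with OpenCV's standard rectification routines; this is a sketch under the assumption that the intrinsic matrices, distortion coefficients and relative pose (R, t) between the cameras are available from S2, not a definitive implementation of the invention.

```python
import cv2

def rectify_pair(K_l, dist_l, K_r, dist_r, R, t, image_size, img_l, img_r):
    """Build the left/right correction maps (S3, S51) and remap both images so
    that matching points fall on the same scan line, i.e. share the same y (S52)."""
    R_l, R_r, P_l, P_r, Q, _, _ = cv2.stereoRectify(
        K_l, dist_l, K_r, dist_r, image_size, R, t)
    map_lx, map_ly = cv2.initUndistortRectifyMap(
        K_l, dist_l, R_l, P_l, image_size, cv2.CV_32FC1)
    map_rx, map_ry = cv2.initUndistortRectifyMap(
        K_r, dist_r, R_r, P_r, image_size, cv2.CV_32FC1)
    rect_l = cv2.remap(img_l, map_lx, map_ly, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_r, map_rx, map_ry, cv2.INTER_LINEAR)
    return rect_l, rect_r, Q
```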
S6, based on the regional similarity, red points, green points and blue points of the left image and the right image are respectively segmented by a three-channel combined RGB point segmentation method;
the step S6 includes the following sub-steps:
s601, calculating a first threshold value T for the point image of the R channel by using a threshold selection method based on the slope difference distribution;
s602, segmenting the point image by adopting the threshold value T, and labelling all points in the segmented binary image I_1 as:
I_1(x, y) = k, if pixel (x, y) belongs to the k-th connected point; I_1(x, y) = 0, otherwise (1);
wherein (x, y) is the index of the binary image;
s603, setting the image resolution as N_X × N_Y, defining the set X = {1, 2, ..., N_X} and the set Y = {1, 2, ..., N_Y}, and calculating the index set of the k-th labelled point as:
(X_k, Y_k) = {(x, y) | I_1(x, y) = k} (2);
then calculating the area A_k of the k-th labelled point as:
A_k = |X_k| = |Y_k| (3);
s604, sorting the area set {A_k, k = 1, 2, ..., N_B} into the ordered set {A_(k), k = 1, 2, ..., N_B} satisfying the condition:
A_(1) ≤ A_(2) ≤ ... ≤ A_(N_B) (4);
since the areas of the points are similar, the differences between the sorted areas should not be large if all points are segmented accurately enough; therefore the accuracy of the segmentation result can be judged from the calculated differences of the sorted areas;
s605, calculating the difference D_i of the sorted areas as:
D_i = A_(i+1) − A_(i), i = 1, 2, ..., N_B − 1 (5);
and calculating the maximum difference as:
D_max = max D_i, i = 1, 2, ..., N_B − 1 (6);
if the maximum difference D_max is greater than the area threshold (calculated as one tenth of the median of the area set {A_(k)}), the selected global threshold T is smaller than the optimal threshold, so some adjacent points in the segmentation result are merged into one point; the optimal threshold is defined as the threshold that can separate all bright and dark points from the background; on the other hand, a threshold smaller than the optimal threshold segments the bright points more completely; therefore some of the points segmented with the smaller threshold should be used, and the area threshold is used to select which segmented points to use;
s606, calculating the mean value of the area set {A_(k), k = 1, 2, ..., N_B} as:
A_m = (1/N_B) Σ_{k=1}^{N_B} A_(k) (7);
s607, calculating the index set of all segmented points whose area is smaller than A_m as:
{(x, y) | I_1(x, y) = k, A_k < A_m} (8);
s608, updating the global threshold value as:
T = T + ΔT (9);
where ΔT is the step size of the loop, its value being an integer greater than or equal to 1; to accelerate convergence, ΔT is chosen as 10;
s609, segmenting the image again with the updated threshold value;
s610, repeating the steps S601 to S609 until D_max is smaller than one tenth of the median of the area set {A_(k)};
s611, after repeating the steps m times, obtaining the index set (X_m, Y_m) of all segmented points in the m-th segmentation result I_m as:
(X_m, Y_m) = {(x, y) | I_m(x, y) > 0} (10);
s612, initializing the final segmentation image I_R, of resolution N_X × N_Y, as:
I_R(x, y) = 0 (11);
and then calculating:
I_R(x, y) = 1 for (x, y) ∈ (X_m, Y_m) (12);
s613, segmenting the point images of the G channel and the B channel in the same way to obtain I_G and I_B, and adding the segmentation results to form the final segmentation result.
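For one color channel, the iterative segmentation of S601-S613 can be sketched as follows. This is a simplified sketch: Otsu's method stands in for the slope-difference-distribution threshold of S601, connected-component areas play the role of the point areas A_k, and the helper name segment_channel is hypothetical.

```python
import numpy as np
import cv2

def segment_channel(channel, delta_t=10, max_iter=20):
    """Iteratively raise the global threshold T until the sorted point areas are
    nearly equal (max difference below one tenth of the median area), keeping at
    every pass the small, well-separated points (eq. (4)-(12), simplified)."""
    T, _ = cv2.threshold(channel, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    final = np.zeros(channel.shape, dtype=np.uint8)
    for _ in range(max_iter):
        _, binary = cv2.threshold(channel, T, 255, cv2.THRESH_BINARY)
        n_labels, labels = cv2.connectedComponents(binary.astype(np.uint8))
        if n_labels <= 2:                           # background only, or a single point
            break
        areas = np.array([np.count_nonzero(labels == k) for k in range(1, n_labels)])
        areas_sorted = np.sort(areas)               # eq. (4)
        diffs = np.diff(areas_sorted)               # eq. (5)
        d_max = diffs.max() if diffs.size else 0    # eq. (6)
        small = np.isin(labels, 1 + np.flatnonzero(areas < areas.mean()))  # eq. (7)-(8)
        final[small] = 1
        if d_max < np.median(areas_sorted) / 10.0:  # stop condition of S610
            final[labels > 0] = 1                   # eq. (10)-(12)
            break
        T += delta_t                                # eq. (9)
    return final
```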
S7, clustering the points with the same color according to the Euclidean distance;
the step S7 includes the following substeps:
s71, dilating the segmented points 5 times in each channel with the structuring element B = {0, 0}, so that adjacent points are connected into a line image;
and s72, multiplying the clustered line image in each channel with the corresponding segmented point image to generate a clustered point image; in each channel, points on different lines are assigned different identification numbers, including a line identification number and a row identification number.
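A minimal sketch of the clustering in S71-S72 is given below, assuming the segmented point image of one channel from S6; the horizontal structuring element wide enough to bridge adjacent dots is an assumption, since the text only specifies five dilations with a small element B.

```python
import numpy as np
import cv2

def cluster_points(point_image):
    """S71-S72: dilate the segmented points so the dots of one projected row merge
    into a line, label the lines, then keep each line label only at the original
    dot positions; the per-point row identification number can afterwards be taken
    from the x-order of the dots on each line."""
    kernel = np.ones((1, 15), np.uint8)          # assumed element, bridges adjacent dots
    lines = cv2.dilate(point_image, kernel, iterations=5)
    n_lines, line_labels = cv2.connectedComponents(lines)
    clustered = line_labels * (point_image > 0)  # clustered point image with line IDs
    return clustered, n_lines - 1
```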
S8, matching points in the left view and points in the right view, and obtaining the parallax of corresponding matching points on the left image and the right image relative to a point P on the object;
the step S8 includes the following substeps:
s81, in the two views of each channel, first carrying out matching according to the line identification numbers of the points, and then matching the points with the same line identification number according to their row identification numbers, so that the clustered points in the two views are matched;
s82, obtaining the pixel coordinates of the matched corner points of the left and right images, namely a corner point l(x_l, y_l) of the left image and the corresponding corner point r(x_r, y_r) of the right image;
s83, since the images have been stereo-corrected to achieve row alignment, the y-axis coordinates of point l and point r are the same, and the parallax of the corresponding matching points on the left and right images with respect to the point P on the object can be directly expressed as d = x_l − x_r.
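The matching of S81-S83 reduces to pairing points that share the same line and row identification numbers and taking the horizontal pixel difference as the parallax. A minimal sketch, assuming each point has already been reduced to its centroid together with its identification numbers:

```python
def match_and_disparity(left_points, right_points):
    """S81-S83: left_points / right_points are lists of (line_id, row_id, x, y)
    tuples for one color channel of the rectified views. Points sharing the same
    (line_id, row_id) are homonymous; because the images are row-aligned, the
    parallax is simply d = x_l - x_r."""
    right_index = {(line_id, row_id): (x, y)
                   for line_id, row_id, x, y in right_points}
    matches = []
    for line_id, row_id, x_l, y_l in left_points:
        if (line_id, row_id) in right_index:
            x_r, _ = right_index[(line_id, row_id)]
            matches.append((x_l, y_l, x_l - x_r))   # (left pixel, disparity d)
    return matches
```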
S9, combining internal and external parameters of a camera, and obtaining three-dimensional space coordinates of each point on the object by using a parallax principle;
the step S9 includes the following substeps:
s91, according to the parallax of the corresponding matching points on the left and right images with respect to the point P on the object and the optical centers of the left and right cameras, the triangle PO_lO_r is similar to the triangle Plr, where the similar-triangle proportion is:
(T − (x_l − x_r)) / (Z − f) = T / Z (13);
where T is the optical center distance of the left and right cameras, d = x_l − x_r is the parallax of the corresponding matching points on the left and right images with respect to the point P on the object, f is the focal length of the left and right cameras, Z is the depth value of the point P, O_l is the optical center of the left camera, and O_r is the optical center of the right camera;
s92, obtaining the three-dimensional coordinates (X, Y, Z) of the point P from formula (13) as:
Z = fT / d, X = x_l Z / f, Y = y_l Z / f (14);
and finally obtaining the three-dimensional coordinates (X, Y, Z) of all points on the image.
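The similar-triangle relation of S91-S92 gives the depth and the three-dimensional coordinates directly. A minimal sketch, assuming the rectified pixel coordinates are measured relative to the principal point of the left camera (an assumption not stated explicitly above):

```python
def triangulate(x_l, y_l, disparity, f, T_baseline):
    """S91-S92: recover (X, Y, Z) of a point P from the rectified left-image
    coordinates (x_l, y_l), the parallax d = x_l - x_r, the focal length f (in
    pixels) and the optical-center distance T of the two cameras: Z = f*T/d."""
    Z = f * T_baseline / disparity
    X = x_l * Z / f
    Y = y_l * Z / f
    return X, Y, Z

# example: f = 1200 px, baseline T = 60 mm, disparity d = 15 px  ->  Z = 4800 mm
```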
And S10, generating a sparse point cloud of the object from the three-dimensional space coordinates of the multiple points on the object, and completing the three-dimensional reconstruction of the object.

Claims (10)

1. A three-dimensional reconstruction method based on structured light is characterized by comprising the following steps:
s1, building a three-dimensional reconstruction system, wherein the three-dimensional reconstruction system comprises two cameras, a projector and a computer;
s2, calibrating the camera and obtaining internal and external parameters of the camera;
s3, solving a distortion mapping matrix according to the parameters of the camera;
s4, projecting the designed RGB point structured light pattern to the object by using a projector;
s5, acquiring a left image and a right image of a reconstructed object by using a camera of a binocular stereo vision system, and performing stereo correction on the images;
s6, based on the regional similarity, red points, green points and blue points of the left image and the right image are respectively segmented by a three-channel combined RGB point segmentation method;
s7, clustering the points with the same color according to the Euclidean distance;
s8, matching points in the left view and the right view, and obtaining the parallax of corresponding matching points on the left image and the right image relative to a point P on the object;
s9, combining internal and external parameters of a camera, and obtaining three-dimensional space coordinates of each point on the object by using a parallax principle;
and S10, generating a sparse point cloud of the object from the three-dimensional space coordinates of the multiple points on the object, and completing the three-dimensional reconstruction of the object.
2. The structured-light based three-dimensional reconstruction method according to claim 1, wherein: the step S2 includes the following sub-steps:
s21, calibrating a left camera to obtain internal and external parameters of the left camera, wherein the external parameters comprise a rotation matrix and a translation matrix;
s22, the internal parameters and the rotation matrixes of the left camera and the right camera are the same, and the translation matrix of the right camera is obtained by the translation matrix of the left camera and the distance between the two cameras, so that the internal and external parameters of the right camera are obtained.
3. The structured-light based three-dimensional reconstruction method according to claim 1, wherein: in step S4, the RGB structural dot pattern is based on a structured light dot matrix of RGB three primary colors.
4. The structured-light based three-dimensional reconstruction method according to claim 3, wherein: the RGB structured light dot pattern is a structured light dot matrix in which every point of a given row of red, green and blue has the same color and adjacent rows have different colors.
5. The structured-light based three-dimensional reconstruction method according to claim 1, wherein: in S4, the structured light is projected toward the object from directly in front of the object.
6. The structured-light based three-dimensional reconstruction method according to claim 1, wherein: the step S5 includes the following sub-steps:
s51, respectively obtaining a left correction matrix and a right correction matrix based on the internal and external parameters and the distortion mapping matrix of the left camera and the internal and external parameters and the distortion mapping matrix of the right camera;
s52, performing stereo correction on the left image by using the left correction matrix and on the right image by using the right correction matrix, so that a point in the corrected left image and its matching point in the corrected right image lie on the same scan line, namely the y-axis coordinates of the point and its matching point are the same.
7. The structured-light based three-dimensional reconstruction method according to claim 1, wherein: the step S6 includes the following sub-steps:
s601, calculating a first threshold value T for the point image of the R channel by using a threshold selection method based on the slope difference distribution;
s602, segmenting the point image by adopting the threshold value T, and labelling all points in the segmented binary image I_1 as:
I_1(x, y) = k, if pixel (x, y) belongs to the k-th connected point; I_1(x, y) = 0, otherwise (1);
wherein (x, y) is the index of the binary image;
s603, setting the image resolution as N_X × N_Y, defining the set X = {1, 2, ..., N_X} and the set Y = {1, 2, ..., N_Y}, and calculating the index set of the k-th labelled point as:
(X_k, Y_k) = {(x, y) | I_1(x, y) = k} (2);
then calculating the area A_k of the k-th labelled point as:
A_k = |X_k| = |Y_k| (3);
s604, sorting the area set {A_k, k = 1, 2, ..., N_B} into the ordered set {A_(k), k = 1, 2, ..., N_B} satisfying the condition:
A_(1) ≤ A_(2) ≤ ... ≤ A_(N_B) (4);
s605, calculating the difference D_i of the sorted areas as:
D_i = A_(i+1) − A_(i), i = 1, 2, ..., N_B − 1 (5);
and calculating the maximum difference as:
D_max = max D_i, i = 1, 2, ..., N_B − 1 (6);
s606, calculating the mean value of the area set {A_(k), k = 1, 2, ..., N_B} as:
A_m = (1/N_B) Σ_{k=1}^{N_B} A_(k) (7);
s607, calculating the index set of all segmented points whose area is smaller than A_m as:
{(x, y) | I_1(x, y) = k, A_k < A_m} (8);
s608, updating the global threshold value as:
T = T + ΔT (9);
where ΔT is the step size of the loop, its value being an integer greater than or equal to 1;
s609, segmenting the image again with the updated threshold value;
s610, repeating the steps S601 to S609 until D_max is smaller than one tenth of the median of the area set {A_(k)};
s611, after repeating the steps m times, obtaining the index set (X_m, Y_m) of all segmented points in the m-th segmentation result I_m as:
(X_m, Y_m) = {(x, y) | I_m(x, y) > 0} (10);
s612, initializing the final segmentation image I_R, of resolution N_X × N_Y, as:
I_R(x, y) = 0 (11);
and then calculating:
I_R(x, y) = 1 for (x, y) ∈ (X_m, Y_m) (12);
s613, segmenting the point images of the G channel and the B channel in the same way to obtain I_G and I_B, and adding the segmentation results to form the final segmentation result.
8. The structured-light based three-dimensional reconstruction method according to claim 1, wherein: the step S7 includes the following substeps:
s71, dilating the segmented points 5 times in each channel with the structuring element B = {0, 0}, so that adjacent points are connected into a line image;
s72, multiplying the clustered line image in each channel with the corresponding segmented point image to generate a clustered point image; in each channel, points on different lines are assigned different identification numbers, including a line identification number and a row identification number.
9. The structured-light based three-dimensional reconstruction method according to claim 1, wherein: the step S8 includes the following substeps:
s81, in the two views of each channel, first carrying out matching according to the line identification numbers of the points, and then matching the points with the same line identification number according to their row identification numbers, so that the clustered points in the two views are matched;
s82, obtaining the pixel coordinates of the matched corner points of the left and right images, namely a corner point l(x_l, y_l) of the left image and the corresponding corner point r(x_r, y_r) of the right image;
s83, since the images have been stereo-corrected to achieve row alignment, the y-axis coordinates of point l and point r are the same, and the parallax of the corresponding matching points on the left and right images with respect to the point P on the object can be directly expressed as d = x_l − x_r.
10. The structured-light based three-dimensional reconstruction method according to claim 1, wherein: the step S9 includes the following substeps:
s91, according to the parallax of the corresponding matching points on the left and right images with respect to the point P on the object and the optical centers of the left and right cameras, the triangle PO_lO_r is similar to the triangle Plr, where the similar-triangle proportion is:
(T − (x_l − x_r)) / (Z − f) = T / Z (13);
where T is the optical center distance of the left and right cameras, d = x_l − x_r is the parallax of the corresponding matching points on the left and right images with respect to the point P on the object, f is the focal length of the left and right cameras, Z is the depth value of the point P, O_l is the optical center of the left camera, and O_r is the optical center of the right camera;
s92, obtaining the three-dimensional coordinates (X, Y, Z) of the point P from formula (13) as:
Z = fT / d, X = x_l Z / f, Y = y_l Z / f (14);
and finally obtaining the three-dimensional coordinates (X, Y, Z) of all points on the image.
CN202011334701.0A 2020-11-24 2020-11-24 Three-dimensional reconstruction method based on structured light Active CN112489193B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011334701.0A CN112489193B (en) 2020-11-24 2020-11-24 Three-dimensional reconstruction method based on structured light

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011334701.0A CN112489193B (en) 2020-11-24 2020-11-24 Three-dimensional reconstruction method based on structured light

Publications (2)

Publication Number Publication Date
CN112489193A true CN112489193A (en) 2021-03-12
CN112489193B CN112489193B (en) 2024-06-14

Family

ID=74934011

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011334701.0A Active CN112489193B (en) 2020-11-24 2020-11-24 Three-dimensional reconstruction method based on structured light

Country Status (1)

Country Link
CN (1) CN112489193B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113066064A (en) * 2021-03-29 2021-07-02 郑州铁路职业技术学院 Cone beam CT image biological structure identification and three-dimensional reconstruction system based on artificial intelligence
CN114332349A (en) * 2021-11-17 2022-04-12 浙江智慧视频安防创新中心有限公司 Binocular structured light edge reconstruction method and system and storage medium
WO2022218081A1 (en) * 2021-04-14 2022-10-20 东莞埃科思科技有限公司 Binocular camera and robot

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101667303A (en) * 2009-09-29 2010-03-10 浙江工业大学 Three-dimensional reconstruction method based on coding structured light
CN102422832A (en) * 2011-08-17 2012-04-25 中国农业大学 Visual spraying location system and location method
CN107945268A (en) * 2017-12-15 2018-04-20 深圳大学 A kind of high-precision three-dimensional method for reconstructing and system based on binary area-structure light
CN109191509A (en) * 2018-07-25 2019-01-11 广东工业大学 A kind of virtual binocular three-dimensional reconstruction method based on structure light
CN110880186A (en) * 2018-09-06 2020-03-13 山东理工大学 Real-time human hand three-dimensional measurement method based on one-time projection structured light parallel stripe pattern
CN110926339A (en) * 2018-09-19 2020-03-27 山东理工大学 Real-time three-dimensional measurement method based on one-time projection structured light parallel stripe pattern

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101667303A (en) * 2009-09-29 2010-03-10 浙江工业大学 Three-dimensional reconstruction method based on coding structured light
CN102422832A (en) * 2011-08-17 2012-04-25 中国农业大学 Visual spraying location system and location method
CN107945268A (en) * 2017-12-15 2018-04-20 深圳大学 A kind of high-precision three-dimensional method for reconstructing and system based on binary area-structure light
CN109191509A (en) * 2018-07-25 2019-01-11 广东工业大学 A kind of virtual binocular three-dimensional reconstruction method based on structure light
CN110880186A (en) * 2018-09-06 2020-03-13 山东理工大学 Real-time human hand three-dimensional measurement method based on one-time projection structured light parallel stripe pattern
CN110926339A (en) * 2018-09-19 2020-03-27 山东理工大学 Real-time three-dimensional measurement method based on one-time projection structured light parallel stripe pattern

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113066064A (en) * 2021-03-29 2021-07-02 郑州铁路职业技术学院 Cone beam CT image biological structure identification and three-dimensional reconstruction system based on artificial intelligence
CN113066064B (en) * 2021-03-29 2023-06-06 郑州铁路职业技术学院 Cone beam CT image biological structure identification and three-dimensional reconstruction system based on artificial intelligence
WO2022218081A1 (en) * 2021-04-14 2022-10-20 东莞埃科思科技有限公司 Binocular camera and robot
CN114332349A (en) * 2021-11-17 2022-04-12 浙江智慧视频安防创新中心有限公司 Binocular structured light edge reconstruction method and system and storage medium
CN114332349B (en) * 2021-11-17 2023-11-03 浙江视觉智能创新中心有限公司 Binocular structured light edge reconstruction method, system and storage medium

Also Published As

Publication number Publication date
CN112489193B (en) 2024-06-14

Similar Documents

Publication Publication Date Title
Kaskman et al. Homebreweddb: Rgb-d dataset for 6d pose estimation of 3d objects
CN107392947B (en) 2D-3D image registration method based on contour coplanar four-point set
CN109872397B (en) Three-dimensional reconstruction method of airplane parts based on multi-view stereo vision
CN112489193B (en) Three-dimensional reconstruction method based on structured light
CN107945267B (en) Method and equipment for fusing textures of three-dimensional model of human face
Furukawa et al. Accurate camera calibration from multi-view stereo and bundle adjustment
CN110853075B (en) Visual tracking positioning method based on dense point cloud and synthetic view
CN106091984B (en) A kind of three dimensional point cloud acquisition methods based on line laser
US8452081B2 (en) Forming 3D models using multiple images
CN111028155B (en) Parallax image splicing method based on multiple pairs of binocular cameras
CN108629829B (en) Three-dimensional modeling method and system of the one bulb curtain camera in conjunction with depth camera
CN113160421B (en) Projection-based spatial real object interaction virtual experiment method
CN205451195U (en) Real -time three -dimensional some cloud system that rebuilds based on many cameras
CN111046843A (en) Monocular distance measurement method under intelligent driving environment
CN109613974B (en) AR home experience method in large scene
CN113393524A (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
CN114998448A (en) Method for calibrating multi-constraint binocular fisheye camera and positioning space point
CN110517348B (en) Target object three-dimensional point cloud reconstruction method based on image foreground segmentation
CN112712566B (en) Binocular stereo vision sensor measuring method based on structure parameter online correction
Ling et al. A dense 3D reconstruction approach from uncalibrated video sequences
CN111914790B (en) Real-time human body rotation angle identification method based on double cameras under different scenes
CN106157321B (en) Real point light source position measuring and calculating method based on plane surface high dynamic range image
CN114935316B (en) Standard depth image generation method based on optical tracking and monocular vision
CN113808185B (en) Image depth recovery method, electronic device and storage medium
CN114608558A (en) SLAM method, system, device and storage medium based on feature matching network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant