CN108961378B - Multi-eye point cloud three-dimensional reconstruction method, device and equipment - Google Patents

Multi-eye point cloud three-dimensional reconstruction method, device and equipment

Info

Publication number
CN108961378B
Authority
CN
China
Prior art keywords
point cloud
point
image
cameras
dimensional reconstruction
Prior art date
Legal status
Active
Application number
CN201810732037.1A
Other languages
Chinese (zh)
Other versions
CN108961378A (en)
Inventor
陈先开
邓亮
姚杰
冯良炳
Current Assignee
Shenzhen Cosmosvision Intelligent Technology Co ltd
Original Assignee
Shenzhen Cosmosvision Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Cosmosvision Intelligent Technology Co ltd filed Critical Shenzhen Cosmosvision Intelligent Technology Co ltd
Priority to CN201810732037.1A
Publication of CN108961378A
Application granted
Publication of CN108961378B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The embodiment of the invention provides a method, a device and equipment for multi-view point cloud three-dimensional reconstruction. The method comprises the following steps: starting the projector O for projecting fringe-light images and starting the K cameras C_1, C_2, ..., C_K for capturing grayscale images; acquiring images; generating point clouds under each pair of cameras; and fusing the point clouds. The method and device provided by the embodiment of the invention generate a three-dimensional point cloud from multi-view images through structured-light technology, thereby reconstructing an object in three dimensions accurately and quickly so as to facilitate subsequent industrial automation processing. The method is simple to use, fast, reconstructs objects with high precision, and is easy to apply and implement in real industrial environments.

Description

Multi-eye point cloud three-dimensional reconstruction method, device and equipment
Technical Field
The invention relates to the field of industrial machine vision, and in particular to a multi-view point cloud three-dimensional reconstruction method, device and equipment.
Background
Currently, for three-dimensional imaging problems in the industrial field, a target is mainly textured with structured light and then reconstructed in three dimensions using vision techniques. The commonly adopted methods are the line structured light method and the surface structured light method, and the camera may be monocular or multi-view. The line structured light method mainly shines one or more line lights onto the target object and then reconstructs the target in three dimensions; its drawback is poor precision in the X and Y directions, because the moving mechanism that scans the structured light has large deviations. The surface structured light method mainly uses a projector to cast specified textures onto the target and then reconstructs the target in three dimensions by encoding and decoding the textures; although this method can achieve high precision in the X, Y and Z directions, its field of view is limited, so it cannot rapidly locate and reconstruct the object in three dimensions.
Disclosure of Invention
In view of the above, the present invention provides a multi-view point cloud three-dimensional reconstruction method, device and equipment, which can reconstruct an object in three dimensions accurately and rapidly so as to facilitate subsequent industrial automation processing.
The technical solution adopted by the invention to solve the above technical problem is as follows:
the first aspect of the invention provides a multi-view point cloud three-dimensional reconstruction method, which comprises the following steps:
starting the projector O for projecting fringe-light images and starting the K cameras C_1, C_2, ..., C_K for capturing grayscale images;
acquiring images;
generating point clouds under each pair of cameras;
fusing the point clouds;
wherein K is an even natural number.
In some embodiments, the acquiring of images specifically comprises:
initializing a counter q = 0;
controlling the projector O to project the q-th structured surface light pattern I_q0;
triggering the K cameras to capture pictures, obtaining K grayscale images with structured light, and representing the structured surface light pattern and the grayscale images as a subset S_q = {I_q0, I_q1, ..., I_qK}.
In some embodiments, the generating of point clouds under each pair of cameras comprises:
for each camera pair C_k, C_{k+1}, where k takes a value in the natural-number range [1, K-1], generating, according to the phase method, from the grayscale image sets {I_0k, I_1k, ..., I_(Q-1)k} and {I_0,k+1, I_1,k+1, ..., I_(Q-1),k+1}, a point cloud set P_k under the camera C_k coordinate system;
and eliminating noise points in the point cloud set.
In some embodiments, the eliminating of noise points in the point cloud set specifically comprises:
projecting each point of each point cloud set P_k to the corresponding pixel of the other camera image and resolving the phase, thereby determining whether the point is a noise point; if the point is a noise point, deleting the corresponding point from the point cloud set P_k to obtain a new point cloud set P' = {P'_1, P'_2, ..., P'_K}.
In some embodiments, the method may further comprise the step of:
for camera C_1, according to the phase method, using the grayscale image set {I_01, I_11, ..., I_41}, calculating the phase value θ(u_1, v_1) of the image at position (u_1, v_1):

θ(u_1, v_1) = arctan[(I_41(u_1, v_1) - I_21(u_1, v_1)) / (I_11(u_1, v_1) - I_31(u_1, v_1))]

wherein I_41(u_1, v_1), I_21(u_1, v_1), I_11(u_1, v_1), I_31(u_1, v_1) respectively denote the gray values of the pixels of images I_41, I_21, I_11, I_31 at position (u_1, v_1), the gray values taking values in the natural-number set [0, 255].
In some embodiments, resolving the phase values comprises:
if the phase values satisfy |θ(u_2i, v_2i) - θ(u_1i, v_1i)| < Δθ, deleting the pixel point from the point cloud set P_k to obtain a new point cloud set P'_1.
In some embodiments, the point cloud fusion comprises the step of: fusing all point clouds P' = {P'_1, P'_2, ..., P'_K} under the projector coordinate system, thereby obtaining the point cloud set P_merge.
In some embodiments, the point cloud fusion further comprises the step of: filtering the point cloud set P_merge to obtain a new point cloud set P'_merge.
The second aspect of the present invention also provides a multi-view point cloud three-dimensional reconstruction device, comprising:
the image acquisition module is used for synchronously acquiring multi-view images;
the binocular three-dimensional point cloud module is used for generating three-dimensional point clouds under two cameras;
the point cloud fusion module is used for fusing all point clouds to a projector coordinate system so as to obtain a new point cloud set;
and the output module is used for outputting a new point cloud set.
The third aspect of the present invention also provides multi-view point cloud three-dimensional reconstruction equipment, which includes a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of any one of the methods described above.
The multi-view point cloud three-dimensional reconstruction method, device and equipment provided by the embodiments of the invention generate a three-dimensional point cloud from multi-view images through structured-light technology, so that an object is reconstructed in three dimensions accurately and quickly, facilitating subsequent industrial automation processing. The method is simple to use, fast, reconstructs objects with high precision, and is easy to apply and implement in real industrial environments.
Drawings
FIG. 1 is a flowchart of a multi-view point cloud three-dimensional reconstruction method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a multi-view point cloud three-dimensional reconstruction device according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method for generating a point cloud under two cameras according to an embodiment of the present invention;
FIG. 4 is a flowchart of a method for eliminating noise points in a point cloud set according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of another multi-view point cloud three-dimensional reconstruction device according to an embodiment of the present invention;
FIG. 6 is a block diagram of multi-view point cloud three-dimensional reconstruction equipment according to an embodiment of the invention.
Detailed Description
In order to make the technical problems to be solved, the technical solutions and the beneficial effects of the invention clearer,
the present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
Aiming at the problems in the prior art that the precision of three-dimensional reconstruction of a target is not high or the field of view is limited, the invention provides a multi-view point cloud three-dimensional reconstruction method, device and equipment, which can reconstruct an object in three dimensions accurately and rapidly, thereby facilitating subsequent industrial automation processing.
Example one:
the invention provides a multi-view point cloud three-dimensional reconstruction method; please refer to FIG. 1. The method specifically comprises the following steps:
s1: synchronous image acquisition
S11, starting the projector O for projecting fringe-light images and starting the K cameras C_1, C_2, ..., C_K for capturing grayscale images, wherein K is an even natural number.
Specifically, in an embodiment, the system further includes a fill-light system, which is deployed according to the actual environment and enabled when the scene needs supplementary lighting. The projector O projects fringe-light images with a pixel resolution of 1280 (width) by 1024 (height). The K cameras C_1, C_2, ..., C_K capture grayscale images with a pixel resolution of 1028 by 1024. All cameras share one projector, and the fields of view of the cameras and the projector have the largest common field of view at the specified shooting distance.
S12, acquiring an image.
Q groups of images are acquired, wherein Q is an even natural number. The specific steps are as follows:
S121, initializing a counter q = 0.
S122, controlling the projector through a serial-port signal to project the q-th structured surface light pattern I_q0, while synchronously triggering the K cameras through software or hardware to capture pictures, obtaining K grayscale images with structured light, i.e. I_q1, ..., I_qK. The structured surface light pattern and the grayscale images are represented as a subset S_q = {I_q0, I_q1, ..., I_qK}.
Step S122 is repeated, acquiring Q subsets of structured surface light patterns and grayscale images, recorded as U = {S_0, S_1, ..., S_{Q-1}}; a sketch of this loop follows below.
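For concreteness, the following is a minimal Python sketch of this acquisition loop. The callables project_pattern(q) and trigger_cameras() are hypothetical placeholders for the serial-port projector command and the synchronized K-camera trigger; neither name comes from the patent.

    def acquire_image_sets(project_pattern, trigger_cameras, Q):
        """Steps S121-S122: collect Q subsets S_q = {I_q0, I_q1, ..., I_qK}."""
        U = []
        for q in range(Q):                  # S121: counter q starts at 0
            I_q0 = project_pattern(q)       # S122: project the q-th structured surface light pattern
            grays = trigger_cameras()       # synchronized capture of K grayscale images
            U.append([I_q0] + list(grays))  # subset S_q = {I_q0, I_q1, ..., I_qK}
        return U                            # U = {S_0, S_1, ..., S_{Q-1}}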
S2: Generating point clouds under each pair of cameras.
S21, for each camera pair C_k, C_{k+1}, where k takes a value in the natural-number range [1, K-1], generating, according to the phase method, from the grayscale image sets {I_0k, I_1k, ..., I_(Q-1)k} and {I_0,k+1, I_1,k+1, ..., I_(Q-1),k+1}, a point cloud set P_k under the camera C_k coordinate system.
S22, eliminating noise points in the point cloud set.
Specifically, each point of each point cloud set P_k is projected to the corresponding pixel of the other camera image and the phase is resolved, thereby determining whether the point is a noise point. If the point is a noise point, the corresponding point is deleted from the point cloud set P_k, obtaining a new point cloud set P' = {P'_1, P'_2, ..., P'_K}.
S3: point cloud fusion
S31, according to the extrinsic parameters of the projector and each camera, all point cloud sets P' = {P'_1, P'_2, ..., P'_K} are fused under the projector coordinate system, thereby obtaining the point cloud set P_merge.
S32, the point cloud set P_merge is filtered using a K-nearest-neighbor method to obtain a new point cloud set P'_merge.
S4, the new point cloud set P'_merge is output.
The method can effectively reconstruct various industrial workpieces in three dimensions: the target three-dimensional model can be reconstructed completely, and the method offers high computation speed and high reconstruction precision, thereby greatly advancing the application of three-dimensional vision in industrial settings.
Example two:
the following describes in detail, taking 2 cameras and one projector as an example, the specific method by which the present invention reconstructs an object in three dimensions. The scheme provides a multi-view point cloud three-dimensional reconstruction method, mainly implemented based on the light-section method. The specific steps are shown in FIG. 1 and are described in further detail below:
s1: synchronous image acquisition
S11, starting the projector O for projecting fringe-light images and starting the cameras C_1, C_2 for capturing grayscale images.
Specifically, as shown in FIG. 2, the fill-light system, the projector and all the cameras are started. The fill-light system is deployed according to the actual environment and enabled when the scene needs supplementary lighting. The projector O projects fringe-light images with a pixel resolution of 1280 (width) by 1024 (height). The 2 cameras C_1, C_2 capture grayscale images with a pixel resolution of 1028 by 1024. All cameras share one projector, and the fields of view of the cameras and the projector have the largest common field of view at the specified shooting distance.
S12, acquiring an image.
Q = 4 groups of images are acquired as follows:
S121, initializing a counter q = 0.
S122, controlling the projector through a serial-port signal to project the q-th structured surface light pattern I_q0, while synchronously triggering the 2 cameras through software or hardware to capture pictures, obtaining 2 grayscale images with structured light. The images are stereo-rectified through the extrinsic parameters of the two cameras to obtain registered images, i.e. I_q1, I_q2. The structured surface light pattern and the grayscale images are represented as a subset S_q = {I_q0, I_q1, I_q2}.
Step S122 is repeated, acquiring Q subsets of structured surface light patterns and grayscale images, recorded as U = {S_0, S_1, ..., S_{Q-1}}.
S2: Generating point clouds under the two cameras.
Referring to fig. 3, the specific steps are as follows:
S21, for camera C_1, according to the phase method, using the grayscale image set {I_01, I_11, ..., I_41}, the phase value θ(u_1, v_1) of the image at position (u_1, v_1) is calculated:

θ(u_1, v_1) = arctan[(I_41(u_1, v_1) - I_21(u_1, v_1)) / (I_11(u_1, v_1) - I_31(u_1, v_1))]

wherein I_41(u_1, v_1), I_21(u_1, v_1), I_11(u_1, v_1), I_31(u_1, v_1) respectively denote the gray values of the pixels of images I_41, I_21, I_11, I_31 at position (u_1, v_1), the gray values taking values in the natural-number set [0, 255].
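This is the standard four-step phase-shifting formula. Assuming the four fringe captures are phase-shifted by π/2 (an assumption consistent with the I_41, I_21, I_11, I_31 grouping above, though not stated explicitly), the per-pixel phase map can be computed in one NumPy call, as in this sketch:

    import numpy as np

    def phase_map(I1, I2, I3, I4):
        """Per-pixel phase theta = arctan((I4 - I2) / (I1 - I3)) over a whole image.

        np.arctan2 resolves the quadrant and avoids division by zero where
        I1 == I3; the inputs are the four fringe captures of one camera.
        """
        I1, I2, I3, I4 = (I.astype(np.float64) for I in (I1, I2, I3, I4))
        return np.arctan2(I4 - I2, I1 - I3)

For camera C_1 this would be called as phase_map(I_11, I_21, I_31, I_41), matching the grouping in the formula above.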
Because the images captured under cameras C_1, C_2 have been stereo-rectified and aligned, the rectified pictures of the two cameras have the same ordinate values. Therefore, in the image under camera C_2, the phase values of all points in the neighborhood [u_1 - Δu, u_1 + Δu] of u_1 on row v_1 are calculated, and the image position (u_2, v_1) whose phase value equals θ(u_1, v_1) is found.
The specific steps are as follows:
S211, for each element u of the interval [u_1 - Δu, u_1 + Δu], the phase value of the camera C_2 image at position (u, v_1) is computed:

θ(u, v_1) = arctan[(I_42(u, v_1) - I_22(u, v_1)) / (I_12(u, v_1) - I_32(u, v_1))]

wherein I_42(u, v_1), I_22(u, v_1), I_12(u, v_1), I_32(u, v_1) respectively denote the gray values of the pixels of images I_42, I_22, I_12, I_32 at position (u, v_1), the gray values taking values in the natural-number set [0, 255].
S212, if |θ(u, v_1) - θ(u_1, v_1)| < δ, the value u_2 = u is assigned and execution jumps to step S213; otherwise it is judged whether any element has not been traversed, and if so, execution continues with step S211; otherwise execution returns to step S21 to continue with the next pixel of the camera C_1 image. In this embodiment δ is 0.01, but it is not limited to this value; in embodiments of the invention δ ranges from greater than 0.001 to less than 1.
S213, at image point (u_1, v_1), the disparity of cameras C_1 and C_2 is u_2 - u_1, from which the spatial three-dimensional coordinates corresponding to pixel (u_1, v_1) under camera C_1 are calculated.
The spatial three-dimensional coordinates (X_1, Y_1, Z_1) corresponding to pixel (u_1, v_1) under camera C_1 are:

Z_1 = B·f / (u_2 - u_1),  X_1 = u_1·Z_1 / f,  Y_1 = v_1·Z_1 / f

where B is the baseline of the two cameras and f is the focal length in normalized pixels (u_1, v_1 here being measured relative to the principal point of the rectified image).
According to the above step S21, the spatial three-dimensional coordinate points of all pixels of camera C_1 are calculated, forming the point cloud set P_1 under the camera C_1 coordinate system; a sketch of the search and triangulation follows below.
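The following Python sketch combines steps S211-S213 for one rectified camera pair, under two assumptions not fixed by the text: pixel coordinates are taken relative to the principal point of the rectified images, and the best phase match in the search window is used rather than the first candidate below the threshold. The window half-width du = 40 is likewise an illustrative value.

    import numpy as np

    def match_and_triangulate(theta1, theta2, B, f, du=40, delta=0.01):
        """Steps S211-S213 sketch for one rectified camera pair.

        theta1, theta2 -- phase maps of the rectified C_1 / C_2 images (same shape).
        B -- baseline of the two cameras; f -- focal length in normalized pixels.
        Returns an (N, 3) array of points in the camera C_1 coordinate system.
        """
        H, W = theta1.shape
        points = []
        for v1 in range(H):
            row2 = theta2[v1]
            for u1 in range(W):
                lo, hi = max(0, u1 - du), min(W, u1 + du + 1)
                diffs = np.abs(row2[lo:hi] - theta1[v1, u1])
                j = int(np.argmin(diffs))       # best match in the window
                if diffs[j] >= delta:           # S212: no phase match on this row
                    continue
                u2 = lo + j
                if u2 == u1:                    # zero disparity cannot be triangulated
                    continue
                Z = B * f / (u2 - u1)           # S213: Z_1 = B*f / (u_2 - u_1)
                points.append((u1 * Z / f, v1 * Z / f, Z))
        return np.asarray(points)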
S22, eliminating noise points in the point cloud set.
Referring to fig. 4, the specific steps are as follows:
S221, each point p_i = (X_i, Y_i, Z_i) of the point cloud set P_1 is projected to the camera C_1 image pixel (u_1i, v_1i), i.e.:

s·(u_1i, v_1i, 1)^T = M_1·(X_i, Y_i, Z_i)^T

where M_1 is the 3x3 intrinsic-parameter matrix of camera C_1 and s is a scale factor.
S222, the phase value of pixel (u_1i, v_1i) is resolved, i.e.:

θ(u_1i, v_1i) = arctan[(I_41(u_1i, v_1i) - I_21(u_1i, v_1i)) / (I_11(u_1i, v_1i) - I_31(u_1i, v_1i))]
S223, the point p_i = (X_i, Y_i, Z_i) is projected to the camera C_2 image pixel (u_2i, v_2i), i.e.:

s·(u_2i, v_2i, 1)^T = M_2·[T_21·(X_i, Y_i, Z_i, 1)^T]_(1:3)

where M_2 is the 3x3 intrinsic-parameter matrix of camera C_2, T_21 is the 4x4 transformation matrix from camera C_1 to camera C_2, and [·]_(1:3) takes the first three components.
S224, the phase value of pixel (u_2i, v_2i) is resolved, i.e.:

θ(u_2i, v_2i) = arctan[(I_42(u_2i, v_2i) - I_22(u_2i, v_2i)) / (I_12(u_2i, v_2i) - I_32(u_2i, v_2i))]

If |θ(u_2i, v_2i) - θ(u_1i, v_1i)| < Δθ, i.e. the phase difference between the two is less than Δθ (Δθ is 0.02 in this embodiment, but not limited to this value), the pixel point is deleted from the point cloud set P_k, obtaining a new point cloud set P'_1.
S3: point cloud fusion
S31, according to the extrinsic transformation matrix between the projector and the camera, all point clouds P'_1 are transformed and fused under the projector coordinate system, thereby obtaining the point cloud set P''_1, which is output as the result of this step.
S32, the point cloud set P''_1 is filtered using the K-nearest-neighbor method to obtain a new point cloud set P''_merge.
S4, the new point cloud set P''_merge is output.
The multi-view point cloud three-dimensional reconstruction method and device provided by the embodiment of the invention generate a three-dimensional point cloud from multi-view images through structured-light technology, so that the object is reconstructed in three dimensions accurately and quickly, facilitating subsequent industrial automation processing. The method is simple to use, fast, reconstructs objects with high precision, and is easy to apply and implement in real industrial environments.
Example three:
Referring to FIG. 5, the following further describes, taking 4 cameras and one projector as an example, the specific method by which the present invention reconstructs an object in three dimensions. The invention provides a multi-view three-dimensional reconstruction method, mainly based on the light-section method, which can effectively improve the precision, speed and point cloud integrity of three-dimensional reconstruction. The specific steps are substantially the same as those of Examples one and two; please refer to FIG. 1.
The embodiment of the present invention is described in further detail below with reference to FIG. 1, FIG. 3 and FIG. 5.
The specific steps are as follows:
s1: synchronous image acquisition
S11, starting the projector O for projecting fringe-light images and starting the cameras C_1, C_2, C_3, C_4 for capturing grayscale images.
Specifically, as shown in FIG. 5, the fill-light system, the projector and all the cameras are started. The fill-light system is deployed according to the actual environment and enabled when the scene needs supplementary lighting. The projector O projects fringe-light images with a pixel resolution of 1280 (width) by 1024 (height). The 4 cameras C_1, C_2, C_3, C_4 capture grayscale images with a pixel resolution of 1028 by 1024. All cameras share one projector, and the fields of view of the cameras and the projector have the largest common field of view at the specified shooting distance.
S12, acquiring an image.
Q = 4 groups of images are acquired as follows:
S121, initializing a counter q = 0.
S122, controlling the projector through a serial-port signal to project the q-th structured surface light pattern I_q0, while synchronously triggering the 4 cameras through software or hardware to capture pictures, obtaining 4 grayscale images with structured light. Through the extrinsic parameters of the respective camera pairs, the images of cameras C_1, C_2 are stereo-rectified to obtain registered images I_q1, I_q2, and the images of cameras C_3, C_4 are stereo-rectified to obtain registered images I_q3, I_q4. The structured surface light pattern and the grayscale images are represented as a subset S_q = {I_q0, I_q1, I_q2, I_q3, I_q4}.
Step S122 is repeated, acquiring Q subsets of structured surface light patterns and grayscale images, recorded as U = {S_0, S_1, ..., S_{Q-1}}.
S2: Generating point clouds under each camera pair.
S21, for camera C_1, according to the phase method, using the grayscale image set {I_01, I_11, ..., I_41}, the phase value θ(u_1, v_1) of the image at position (u_1, v_1) is calculated:

θ(u_1, v_1) = arctan[(I_41(u_1, v_1) - I_21(u_1, v_1)) / (I_11(u_1, v_1) - I_31(u_1, v_1))]

wherein I_41(u_1, v_1), I_21(u_1, v_1), I_11(u_1, v_1), I_31(u_1, v_1) respectively denote the gray values of the pixels of images I_41, I_21, I_11, I_31 at position (u_1, v_1), the gray values taking values in the natural-number set [0, 255].
Because the images captured under cameras C_1, C_2 have been stereo-rectified and aligned, the rectified pictures of the two cameras have the same ordinate values. Therefore, in the image under camera C_2, the phase values of all points in the neighborhood [u_1 - Δu, u_1 + Δu] of u_1 on row v_1 are calculated, and the image position (u_2, v_1) whose phase value equals θ(u_1, v_1) is found. The specific steps are as follows:
S211, for each element u of the interval [u_1 - Δu, u_1 + Δu], the phase value of the camera C_2 image at position (u, v_1) is computed:

θ(u, v_1) = arctan[(I_42(u, v_1) - I_22(u, v_1)) / (I_12(u, v_1) - I_32(u, v_1))]

wherein I_42(u, v_1), I_22(u, v_1), I_12(u, v_1), I_32(u, v_1) respectively denote the gray values of the pixels of images I_42, I_22, I_12, I_32 at position (u, v_1), the gray values taking values in the natural-number set [0, 255].
S212, if |θ(u, v_1) - θ(u_1, v_1)| < δ, the value u_2 = u is assigned and execution jumps to step S213; otherwise it is judged whether any element has not been traversed, and if so, execution continues with step S211; otherwise execution returns to step S21 to continue with the next pixel of the camera C_1 image. In this embodiment δ is 0.01, but it is not limited to this value.
S213, at image point (u_1, v_1), the disparity of cameras C_1 and C_2 is u_2 - u_1, from which the spatial three-dimensional coordinates (X_1, Y_1, Z_1) corresponding to pixel (u_1, v_1) under camera C_1 are calculated:

Z_1 = B·f / (u_2 - u_1),  X_1 = u_1·Z_1 / f,  Y_1 = v_1·Z_1 / f

where B is the baseline of the two cameras and f is the focal length in normalized pixels.
According to the above step S21, the spatial three-dimensional coordinate points of all pixels of camera C_1 are calculated, forming the point cloud set P_1 under the camera C_1 coordinate system.
S22, eliminating noise points in the point cloud set.
Referring to fig. 4, the specific steps are as follows:
S221, each point p_i = (X_i, Y_i, Z_i) of the point cloud set P_1 is projected to the camera C_1 image pixel (u_1i, v_1i), i.e.:

s·(u_1i, v_1i, 1)^T = M_1·(X_i, Y_i, Z_i)^T

where M_1 is the 3x3 intrinsic-parameter matrix of camera C_1 and s is a scale factor.
S222, the phase value of pixel (u_1i, v_1i) is resolved, i.e.:

θ(u_1i, v_1i) = arctan[(I_41(u_1i, v_1i) - I_21(u_1i, v_1i)) / (I_11(u_1i, v_1i) - I_31(u_1i, v_1i))]
S223, the point p_i = (X_i, Y_i, Z_i) is projected to the camera C_2 image pixel (u_2i, v_2i), i.e.:

s·(u_2i, v_2i, 1)^T = M_2·[T_21·(X_i, Y_i, Z_i, 1)^T]_(1:3)

where M_2 is the 3x3 intrinsic-parameter matrix of camera C_2 and T_21 is the 4x4 transformation matrix from camera C_1 to camera C_2.
S224, the phase value of pixel (u_2i, v_2i) is resolved, i.e.:

θ(u_2i, v_2i) = arctan[(I_42(u_2i, v_2i) - I_22(u_2i, v_2i)) / (I_12(u_2i, v_2i) - I_32(u_2i, v_2i))]

If |θ(u_2i, v_2i) - θ(u_1i, v_1i)| < Δθ, i.e. the phase difference between the two is less than Δθ, the pixel point is deleted from the point cloud set P_k, obtaining a new point cloud set P'_1. In this embodiment Δθ is 0.02, but it is not limited to this value.
S23, according to the above steps S21 and S22, and similarly using cameras C_3, C_4 as a pair, the point cloud set P'_3 under the camera C_3 coordinate system is calculated.
S3: point cloud fusion
S31, according to the extrinsic parameters of the projector and each camera, the transformation matrix of camera C_1 in the projector coordinate system is known as T_O1, and the transformation matrix of camera C_3 in the projector coordinate system is known as T_O3. All point clouds P'_1, P'_3 are fused into the projector coordinate system: the transformation T_O1 is applied to each point of the point cloud P'_1, and the transformation T_O3 is applied to each point of the point cloud P'_3. The set of all resulting coordinate points is recorded as P_merge; a sketch of this fusion follows below.
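A minimal sketch of this fusion step, assuming the 4x4 matrices T_O1, T_O3 are available as NumPy arrays from the extrinsic calibration:

    import numpy as np

    def fuse_to_projector(clouds, transforms):
        """Step S31: map each cloud into the projector frame and concatenate.

        clouds -- list of (N_k, 3) arrays, e.g. [P'_1, P'_3];
        transforms -- matching list of 4x4 matrices, e.g. [T_O1, T_O3].
        """
        fused = []
        for P, T in zip(clouds, transforms):
            Ph = np.hstack([P, np.ones((P.shape[0], 1))])  # homogeneous coordinates
            fused.append((Ph @ T.T)[:, :3])                # apply T to every point
        return np.vstack(fused)                            # P_merge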
S32, the point cloud set P_merge is filtered using the K-nearest-neighbor method to obtain a new point cloud set P'_merge; a sketch follows below. In this embodiment the minimum number of neighboring points is 3, but it is not limited to this value; its value range is greater than 1 and less than 10.
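A sketch of the K-nearest-neighbor filter using SciPy's cKDTree. The patent only specifies a minimum of 3 neighboring points, so the search radius used here is an assumption of this sketch:

    import numpy as np
    from scipy.spatial import cKDTree

    def knn_outlier_filter(P_merge, min_neighbors=3, radius=1.0):
        """Step S32 sketch: drop points with fewer than `min_neighbors` other
        points within `radius` (the radius itself is an assumed parameter)."""
        tree = cKDTree(P_merge)
        counts = np.array([len(tree.query_ball_point(p, radius)) - 1
                           for p in P_merge])               # exclude the point itself
        return P_merge[counts >= min_neighbors]             # P'_merge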
S4, the new point cloud set P'_merge is output.
The multi-view point cloud three-dimensional reconstruction method and device provided by the embodiment of the invention generate a three-dimensional point cloud from multi-view images through structured-light technology, so that the object is reconstructed in three dimensions accurately and quickly, facilitating subsequent industrial automation processing. The method is simple to use, fast, reconstructs objects with high precision, and is easy to apply and implement in real industrial environments.
Example four
The embodiment of the present invention further provides a multi-view point cloud three-dimensional reconstruction device; please refer to FIG. 6. The device includes the following modules:
the system comprises an image acquisition module 10, a binocular three-dimensional point cloud module 20, a point cloud fusion module 30 and an output module 40.
The image acquisition module 10 is used for synchronously acquiring multi-view images;
specifically, the image acquisition module 10 includes a camera and a lens light machine, the camera is used for shooting a gray image of a target image, the projection light machine is used for projecting a stripe light image, and the image acquisition module 10 is used for acquiring a gray image with structured light at the same time.
The binocular three-dimensional point cloud module 20 is used for generating three-dimensional point clouds under two cameras.
The binocular three-dimensional point cloud module 20 comprises a generating unit 201 and a noise removing unit 202.
The generating unit 201 is used, for each camera pair C_k, C_{k+1}, to generate, from the grayscale image sets {I_0k, I_1k, ..., I_(Q-1)k} and {I_0,k+1, I_1,k+1, ..., I_(Q-1),k+1}, a point cloud set P_k under the camera C_k coordinate system.
The noise removing unit 202 is configured to remove noise points in the point cloud set.
Specifically, each point of each point cloud set P_k is projected to the corresponding pixel of the other camera image and the phase is resolved, thereby determining whether the point is a noise point. If the point is a noise point, the corresponding point is deleted from the point cloud set P_k, obtaining a new point cloud set.
The point cloud fusion module 30 is configured to fuse all point clouds to a projector coordinate system, so as to obtain a point cloud set.
The point cloud fusion module 30 includes a fusion unit 301 and a filtering unit 302.
The fusion unit 301 is configured to fuse all point clouds into a projector coordinate system, so as to obtain a point cloud set.
The filtering unit 302 is configured to filter outliers in the point cloud set, so as to obtain a new point cloud set.
And the output module 40 is configured to output the new point cloud set with the outliers filtered out.
The multi-view three-dimensional reconstruction device synchronously acquires multi-view images through the image acquisition module 10, generates three-dimensional point clouds under pairs of cameras through the binocular three-dimensional point cloud module 20, and fuses all point clouds under the projector coordinate system through the point cloud fusion module 30, thereby obtaining a point cloud set and reconstructing the object in three dimensions. Using this device, the precision, speed and point cloud integrity of three-dimensional reconstruction can be effectively improved: the object can be reconstructed accurately and quickly, facilitating subsequent industrial automation processing with high precision.
Example five:
according to an embodiment of the present invention, the apparatus includes a processor, a computer-readable storage medium, and a computer program stored thereon, where the computer program, when executed by the processor, implements the steps in the above-mentioned multi-view three-dimensional reconstruction method, and the specific steps are as described in the first embodiment, and are not described herein again.
The memory in this embodiment may be used to store software programs as well as various data. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function, and the like, and the data storage area may store data created according to the use of the device, and the like. Further, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic-disk storage device, flash-memory device, or other non-volatile solid-state storage device.
According to an example of this embodiment, all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium; in the embodiment of the present invention, it may be stored in the storage medium of a computer system and executed by at least one processor in the computer system to implement the processes of the embodiments of the methods described above. The storage medium includes, but is not limited to, a magnetic disk, a flash disk, an optical disk, a read-only memory (ROM), and the like.
The preferred embodiments of the present invention have been described above with reference to the accompanying drawings, and are not to be construed as limiting the scope of the invention. Those skilled in the art can implement the invention with various modifications, for example using features from one embodiment in another embodiment to yield yet a further embodiment, without departing from the scope and spirit of the invention. Any modification, equivalent replacement or improvement made within the technical idea of the present invention shall fall within the scope of protection of the present invention.

Claims (7)

1. A multi-view point cloud three-dimensional reconstruction method, characterized by comprising the following steps:
starting the projector O for projecting fringe-light images and starting the K cameras C_1, C_2, ..., C_K for capturing grayscale images;
acquiring images;
generating point clouds under each pair of cameras;
fusing the point clouds;
wherein K is an even natural number;
the generating of point clouds under each pair of cameras comprises the following steps:
for each camera pair C_k, C_{k+1}, where k takes a value in the natural-number range [1, K-1], generating, according to the phase method, from the grayscale image sets {I_0k, I_1k, ..., I_(Q-1)k} and {I_0,k+1, I_1,k+1, ..., I_(Q-1),k+1}, a point cloud set P_k under the camera C_k coordinate system;
eliminating noise points in the point cloud set;
the eliminating of noise points in the point cloud set specifically comprises:
projecting each point of each point cloud set P_k to the corresponding pixel of the other camera image and resolving the phase, thereby determining whether the point is a noise point; if the point is a noise point, deleting the corresponding point from the point cloud set P_k to obtain a new point cloud set P' = {P'_1, P'_2, ..., P'_K};
further comprising the step of: for camera C_1, according to the phase method, using the grayscale image set {I_01, I_11, ..., I_41}, calculating the phase value θ(u_1, v_1) of the image at position (u_1, v_1):

θ(u_1, v_1) = arctan[(I_41(u_1, v_1) - I_21(u_1, v_1)) / (I_11(u_1, v_1) - I_31(u_1, v_1))]

wherein I_41(u_1, v_1), I_21(u_1, v_1), I_11(u_1, v_1), I_31(u_1, v_1) respectively denote the gray values of the pixels of images I_41, I_21, I_11, I_31 at position (u_1, v_1), the gray values taking values in the natural-number set [0, 255].
2. The multi-view point cloud three-dimensional reconstruction method according to claim 1, characterized in that the acquiring of images specifically comprises:
initializing a counter q = 0;
controlling the projector O to project the q-th structured surface light pattern I_q0;
triggering the K cameras to capture pictures, obtaining K grayscale images with structured light, wherein the structured surface light pattern and the grayscale images are represented as a subset S_q = {I_q0, I_q1, ..., I_qK}.
3. The multi-view point cloud three-dimensional reconstruction method according to claim 1, characterized in that the resolving of the phase values comprises: if the phase values satisfy |θ(u_2i, v_2i) - θ(u_1i, v_1i)| < Δθ, deleting the pixel point from the point cloud set P_k to obtain a new point cloud set P'_1;
θ(u_2i, v_2i) denotes the phase value at position (u_2i, v_2i);
θ(u_1i, v_1i) denotes the phase value at position (u_1i, v_1i);
Δθ takes a value in the range of greater than 0.001 and less than 1.
4. The multi-view point cloud three-dimensional reconstruction method according to claim 1, characterized in that the point cloud fusion comprises the following step: fusing all point clouds P' = {P'_1, P'_2, ..., P'_K} under the projector coordinate system, thereby obtaining the point cloud set P_merge.
5. The multi-view point cloud three-dimensional reconstruction method according to claim 4, characterized in that the point cloud fusion further comprises the step of: filtering the point cloud set P_merge to obtain a new point cloud set P'_merge.
6. A multi-view point cloud three-dimensional reconstruction device, characterized in that the device comprises:
an image acquisition module for synchronously acquiring multi-view images;
a binocular three-dimensional point cloud module for generating three-dimensional point clouds under two cameras;
a point cloud fusion module for fusing all point clouds to a projector coordinate system so as to obtain a new point cloud set;
an output module for outputting the new point cloud set;
the binocular three-dimensional point cloud module comprises a generating unit and a noise eliminating unit;
the generating unit is used, for each camera pair C_k, C_{k+1}, where k takes a value in the natural-number range [1, K-1], to generate, according to the phase method, from the grayscale image sets {I_0k, I_1k, ..., I_(Q-1)k} and {I_0,k+1, I_1,k+1, ..., I_(Q-1),k+1}, a point cloud set P_k under the camera C_k coordinate system;
for camera C_1, according to the phase method, using the grayscale image set {I_01, I_11, ..., I_41}, the phase value θ(u_1, v_1) of the image at position (u_1, v_1) is calculated:

θ(u_1, v_1) = arctan[(I_41(u_1, v_1) - I_21(u_1, v_1)) / (I_11(u_1, v_1) - I_31(u_1, v_1))]

wherein I_41(u_1, v_1), I_21(u_1, v_1), I_11(u_1, v_1), I_31(u_1, v_1) respectively denote the gray values of the pixels of images I_41, I_21, I_11, I_31 at position (u_1, v_1), the gray values taking values in the natural-number set [0, 255];
the noise eliminating unit is used for eliminating noise points in the point cloud set;
the point cloud fusion module comprises a fusion unit and a filtering unit;
the fusion unit is used for fusing all point clouds to the projector coordinate system so as to obtain a point cloud set;
the filtering unit is used for filtering outliers in the point cloud set so as to obtain a new point cloud set.
7. Multi-view point cloud three-dimensional reconstruction equipment, characterized in that the equipment comprises a memory, a processor and a computer program stored in the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the method according to any one of claims 1 to 5.
CN201810732037.1A 2018-07-05 2018-07-05 Multi-eye point cloud three-dimensional reconstruction method, device and equipment Active CN108961378B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810732037.1A CN108961378B (en) 2018-07-05 2018-07-05 Multi-eye point cloud three-dimensional reconstruction method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810732037.1A CN108961378B (en) 2018-07-05 2018-07-05 Multi-eye point cloud three-dimensional reconstruction method, device and equipment

Publications (2)

Publication Number Publication Date
CN108961378A CN108961378A (en) 2018-12-07
CN108961378B true CN108961378B (en) 2023-03-28

Family

ID=64486040

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810732037.1A Active CN108961378B (en) 2018-07-05 2018-07-05 Multi-eye point cloud three-dimensional reconstruction method, device and equipment

Country Status (1)

Country Link
CN (1) CN108961378B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113012277B (en) * 2021-02-03 2022-06-21 中国地质大学(武汉) DLP (digital light processing) -surface-based structured light multi-camera reconstruction method
CN114863036B (en) * 2022-07-06 2022-11-15 深圳市信润富联数字科技有限公司 Data processing method and device based on structured light, electronic equipment and storage medium
CN116124032A (en) * 2023-02-13 2023-05-16 杭州海康机器人股份有限公司 Three-dimensional profiler, three-dimensional profile establishing method and device and electronic equipment
CN116336966A (en) * 2023-03-20 2023-06-27 上海联耀医疗技术有限公司 Three-dimensional imaging method and system based on structured light imaging light path

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104665107B (en) * 2015-03-10 2017-07-04 南京脚度健康科技有限公司 A kind of vola three dimensional data collection processing system and data acquisition treatment method
CN107170043B (en) * 2017-06-19 2019-06-18 电子科技大学 A kind of three-dimensional rebuilding method
CN107367245B (en) * 2017-08-22 2019-12-24 西安交通大学 Invalid point detection and elimination method in optical three-dimensional profile measurement
CN107483911A (en) * 2017-08-25 2017-12-15 秦山 A kind of signal processing method and system based on more mesh imaging sensors

Also Published As

Publication number Publication date
CN108961378A (en) 2018-12-07

Similar Documents

Publication Publication Date Title
CN110349251B (en) Three-dimensional reconstruction method and device based on binocular camera
CN108961378B (en) Multi-eye point cloud three-dimensional reconstruction method, device and equipment
CN107977997B (en) Camera self-calibration method combined with laser radar three-dimensional point cloud data
TWI555379B (en) An image calibrating, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof
CN111563921B (en) Underwater point cloud acquisition method based on binocular camera
CN110657785B (en) Efficient scene depth information acquisition method and system
CN113160068B (en) Point cloud completion method and system based on image
WO2012096747A1 (en) Forming range maps using periodic illumination patterns
Martel et al. An active approach to solving the stereo matching problem using event-based sensors
US10186051B2 (en) Method and system for calibrating a velocimetry system
CN111080776B (en) Human body action three-dimensional data acquisition and reproduction processing method and system
KR101592405B1 (en) Method for obtaining three-dimensional image, apparatus and computer-readable recording medium using the same
CN112630469B (en) Three-dimensional detection method based on structured light and multiple light field cameras
Ann et al. Study on 3D scene reconstruction in robot navigation using stereo vision
CN114428259A (en) Automatic vehicle extraction method in laser point cloud of ground library based on map vehicle acquisition
JP2007025863A (en) Photographing system, photographing method, and image processing program
JP2015019346A (en) Parallax image generator
CN109859313B (en) 3D point cloud data acquisition method and device, and 3D data generation method and system
CN110310371B (en) Method for constructing three-dimensional contour of object based on vehicle-mounted monocular focusing sequence image
JP6369897B2 (en) Self-position calculation device and self-position calculation method
CN107622514B (en) Autonomous calibration method for convex lens model of camera
JP6595878B2 (en) Element image group generation apparatus and program thereof
CN114565720A (en) Active three-dimensional reconstruction system and method based on line structured light rotation scanning
CN106846469B (en) Based on tracing characteristic points by the method and apparatus of focusing storehouse reconstruct three-dimensional scenic
CN107610170B (en) Multi-view image refocusing depth acquisition method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant