CN108961378A - Multi-view point-cloud three-dimensional reconstruction method, device and equipment - Google Patents

Multi-view point-cloud three-dimensional reconstruction method, device and equipment

Info

Publication number: CN108961378A (application CN201810732037.1A)
Authority: CN (China)
Prior art keywords: point cloud, point, image, camera, dimensional
Legal status: Granted; active
Other languages: Chinese (zh)
Other versions: CN108961378B (granted publication)
Inventors: 陈先开, 邓亮, 姚杰, 冯良炳
Current and original assignee: Shenzhen Chen Shi Intelligent Technology Co Ltd
Application filed by Shenzhen Chen Shi Intelligent Technology Co Ltd
Priority to CN201810732037.1A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [three-dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10028: Range image; depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The embodiment of the invention provides a multi-view point-cloud three-dimensional reconstruction method, device and equipment. The method comprises: starting a projector O to project fringe-light images, and starting K cameras C_1, C_2, ..., C_K to capture grayscale images; acquiring the images; generating a point cloud for each camera pair; and fusing the point clouds. By combining structured-light techniques with multi-view images to generate a three-dimensional point cloud, the method reconstructs an object accurately and quickly, which facilitates subsequent industrial-automation processing. The method is simple to use, reconstructs an object quickly and accurately, and is easy to apply and implement in practical industrial environments.

Description

Multi-view point-cloud three-dimensional reconstruction method, device and equipment
Technical field
The present invention relates to the field of industrial machine vision, and more particularly to a multi-view point-cloud three-dimensional reconstruction method, device and equipment.
Background technique
At present, three-dimensional imaging in the industrial field mainly projects a texture onto the target with structured light and then reconstructs the target in three dimensions with vision techniques. Two methods are in common use, the line-structured-light method and the area-structured-light method, and the camera may be monocular or multi-view. The line-structured-light method sweeps one or more light stripes across the target object and then reconstructs the target; its drawback is poor accuracy in the x and y directions, because the mechanism that scans the structured light introduces relatively large deviations. The area-structured-light method projects a specified texture onto the target with a projector and reconstructs the target by encoding and decoding the texture; although it can achieve high accuracy in the x, y and z directions, its field of view is limited, so it cannot quickly locate and reconstruct an object in three dimensions.
Summary of the invention
In view of this, the purpose of the present invention is to provide a multi-view point-cloud three-dimensional reconstruction method, device and equipment that can reconstruct an object in three dimensions accurately and quickly, so as to facilitate subsequent industrial-automation processing.
The technical solutions adopted by the present invention to solve the above technical problem are as follows:
The first aspect of the present invention provides a multi-view point-cloud three-dimensional reconstruction method, comprising:
Starting a projector O to project fringe-light images, and starting K cameras C_1, C_2, ..., C_K to capture grayscale images;
Acquiring the images;
Generating a point cloud for each camera pair;
Fusing the point clouds;
wherein K is an even natural number.
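The four claimed steps can be sketched as a minimal driver loop, assuming the disjoint pairing (C_1, C_2), (C_3, C_4), ... used in the embodiments below; `stereo_point_cloud` and `fuse` are placeholders for the pair-wise reconstruction and fusion that the description details later:

```python
# Minimal sketch of the claimed pipeline: K cameras (K even), Q patterns.
# images[q][k] holds the grayscale image from camera k for pattern q.

def stereo_point_cloud(seq_a, seq_b):
    # placeholder: the real version decodes phase per camera pair and triangulates
    return [(0.0, 0.0, 1.0)]

def fuse(clouds):
    # placeholder: the real version transforms each cloud into the projector frame
    merged = []
    for cloud in clouds:
        merged.extend(cloud)
    return merged

def reconstruct(images, K, Q):
    clouds = []
    for k in range(1, K, 2):                           # pairs (C_k, C_{k+1})
        seq_a = [images[q][k] for q in range(Q)]       # fringe sequence, camera k
        seq_b = [images[q][k + 1] for q in range(Q)]   # fringe sequence, camera k+1
        clouds.append(stereo_point_cloud(seq_a, seq_b))
    return fuse(clouds)
```

The driver only fixes the data flow of the claim; all numeric work lives in the two placeholders.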
In some embodiments, acquiring the images specifically includes:
Initializing a counter q = 0;
Controlling the projector O to project the q-th structured plane-light image I_{q0};
Triggering the K cameras to capture pictures, obtaining K grayscale images carrying structured light; the plane-light image and the grayscale images are denoted as the subset S_q = {I_{q0}, I_{q1}, ..., I_{qK}}.
In some embodiments, generating the point cloud for each camera pair comprises the steps of:
For every camera pair C_k, C_{k+1}, where k takes a value in the natural-number range [1, K-1], generating, according to the phase method, the point set P_k in the coordinate system of camera C_k from the grayscale image sets {I_{0k}, I_{1k}, ..., I_{(Q-1)k}} and {I_{0,k+1}, I_{1,k+1}, ..., I_{(Q-1),k+1}};
Rejecting the noise points in the point set.
In some embodiments, rejecting the noise points in the point set specifically includes:
Projecting each point of the point set P_k onto the image pixels of its corresponding other camera and parsing the phase there, thereby determining whether the point is a noise point; if it is a noise point, deleting the corresponding point from the point set P_k, to obtain the new point sets P' = {P'_1, P'_2, ..., P'_K}.
In some embodiments, the method may further comprise the step of:
For camera C_1, according to the phase method, computing from the grayscale image set {I_{01}, I_{11}, ..., I_{41}} the phase value θ(u_1, v_1) of the image at position (u_1, v_1):

θ(u_1, v_1) = arctan[(I_{41}(u_1, v_1) - I_{21}(u_1, v_1)) / (I_{11}(u_1, v_1) - I_{31}(u_1, v_1))]

wherein I_{41}(u_1, v_1), I_{21}(u_1, v_1), I_{11}(u_1, v_1), I_{31}(u_1, v_1) respectively denote the grayscale pixel values of images I_{41}, I_{21}, I_{11}, I_{31} at position (u_1, v_1), taking values in the natural-number set [0, 255].
In some embodiments, parsing the phase value includes:
If the phase difference |θ(u_{2i}, v_{2i}) - θ(u_{1i}, v_{1i})| is not less than Δθ, deleting the pixel from the point set P_k, thereby obtaining the new point set P'_1.
In some embodiments, fusing the point clouds comprises the step of: fusing all the point sets P' = {P'_1, P'_2, ..., P'_K} into the projector coordinate system, to obtain the point set P_merge.
In some embodiments, fusing the point clouds further comprises the step of: filtering the outliers from the point set P_merge, to obtain the new point set P'_merge.
The second aspect of the present invention further provides a multi-view point-cloud three-dimensional reconstruction device, comprising:
an image acquisition module for synchronously acquiring multi-view images;
a binocular three-dimensional point-cloud module for generating the three-dimensional point cloud for each camera pair;
a point-cloud fusion module for fusing all the point clouds into the projector coordinate system, to obtain a new point set;
and an output module for outputting the new point set.
The third aspect of the present invention further provides multi-view point-cloud three-dimensional reconstruction equipment, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program implementing, when executed by the processor, the steps of any one of the methods above.
The multi-view point-cloud three-dimensional reconstruction method, device and equipment provided by the embodiments of the present invention combine structured-light techniques with multi-view images to generate a three-dimensional point cloud, reconstructing an object accurately and quickly and thereby facilitating subsequent industrial-automation processing. The method is simple to use, reconstructs an object quickly and accurately, and is easy to apply and implement in practical industrial environments.
Brief description of the drawings
Fig. 1 is a flow chart of the multi-view point-cloud three-dimensional reconstruction method of an embodiment of the present invention;
Fig. 2 is a structural schematic diagram of the multi-view point-cloud three-dimensional reconstruction device of an embodiment of the present invention;
Fig. 3 is a flow chart of the method of generating the point cloud for each camera pair in an embodiment of the present invention;
Fig. 4 is a flow chart of the method of rejecting the noise points in the point set in an embodiment of the present invention;
Fig. 5 is a structural schematic diagram of another multi-view point-cloud three-dimensional reconstruction device of an embodiment of the present invention;
Fig. 6 is a structural block diagram of the multi-view point-cloud three-dimensional reconstruction device of an embodiment of the present invention.
Detailed description of the embodiments
To make the technical problems to be solved, the technical solutions, and the advantages of the present invention clearer, the present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to explain the present invention, not to limit it.
Against the prior-art problems that three-dimensional reconstruction of a target is either not accurate enough or limited in field of view, the present invention proposes a multi-view point-cloud three-dimensional reconstruction method, device and equipment that can reconstruct an object in three dimensions accurately and quickly, thereby facilitating subsequent industrial-automation processing.
Embodiment one:
The multi-view point-cloud three-dimensional reconstruction method provided by the present invention, referring to Fig. 1, specifically comprises the following steps:
S1: synchronously acquire images
S11: start the projector O to project fringe-light images, and start K cameras C_1, C_2, ..., C_K to capture grayscale images, where K is an even natural number.
Specifically, one embodiment further includes starting a fill-light system. The fill-light system is deployed according to the actual environment and used according to the working conditions: it is switched on when the environment requires fill light. The projector O projects fringe-light images whose width and height are 1280 and 1024 pixels respectively. The K cameras C_1, C_2, ..., C_K capture grayscale images whose width and height are 1028 and 1024 pixels respectively. All cameras share the projector, and under the specified shooting distance the fields of view of the cameras and the projector have a maximal common field of view.
S12: acquire images.
Acquire Q groups of images, Q being an even natural number; the specific steps are as follows:
S121: initialize a counter q = 0.
S122: control the projector through an RS-232 serial signal to project the q-th structured plane-light image I_{q0}; at the same time, trigger the K cameras synchronously, by software or hardware, to capture pictures, obtaining K grayscale images carrying structured light, i.e. I_{q1}, ..., I_{qK}; the plane-light image and the grayscale images are denoted as the subset S_q = {I_{q0}, I_{q1}, ..., I_{qK}}.
Repeat step S122 to acquire Q groups of plane-light and grayscale image subsets, denoted U = {S_0, S_1, ..., S_{Q-1}}.
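The acquisition loop S121-S122 can be sketched as below; `Projector` and `Camera` are hypothetical stand-in classes (the embodiment drives a real projector over an RS-232 serial signal and triggers the cameras by a software or hardware sync signal):

```python
# Sketch of acquisition step S12: Q projector patterns, one synchronized
# grab per camera per pattern. The classes are stand-ins, not a real driver.

class Projector:
    def project(self, q):            # stand-in for "project pattern I_q0"
        return f"I_{q}0"

class Camera:
    def __init__(self, k):
        self.k = k

    def grab(self, q):               # stand-in for one triggered exposure
        return f"I_{q}{self.k}"

def acquire(projector, cameras, Q):
    U = []
    for q in range(Q):
        pattern = projector.project(q)            # S122: project the q-th pattern
        grays = [c.grab(q) for c in cameras]      # trigger all cameras in sync
        U.append([pattern] + grays)               # S_q = {I_q0, I_q1, ..., I_qK}
    return U
```

The returned list corresponds to U = {S_0, ..., S_{Q-1}} in the text.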
S2: generate the point cloud for each camera pair.
S21: for every camera pair C_k, C_{k+1}, where k takes a value in the natural-number range [1, K-1], generate, according to the phase method, the point set P_k in the coordinate system of camera C_k from the grayscale image sets {I_{0k}, I_{1k}, ..., I_{(Q-1)k}} and {I_{0,k+1}, I_{1,k+1}, ..., I_{(Q-1),k+1}}.
S22: reject the noise points in the point set.
Specifically, project each point of the point set P_k onto the image pixels of its corresponding other camera and parse the phase there, thereby determining whether the point is a noise point. If it is a noise point, delete the corresponding point from the point set P_k, obtaining the new point sets P' = {P'_1, P'_2, ..., P'_K}.
S3: point-cloud fusion
S31: according to the extrinsic parameters of the projector and each camera, fuse all point sets P' = {P'_1, P'_2, ..., P'_K} into the projector coordinate system, obtaining the point set P_merge.
S32: filter the outliers from the point set P_merge by the K-nearest-neighbor method, obtaining the new point set P'_merge.
S4: output the new point set P'_merge.
Using the method for the invention, three-dimensional reconstruction, target three-dimensional effectively can be carried out to various industrial work pieces It more can completely reconstruct and, this method calculating speed is fast, and reconstruction accuracy is high, to substantially increase 3D vision in work Application in industry vision.
Embodiment two:
The detailed three-dimensional reconstruction method of the present invention is illustrated below with 2 cameras plus one projector as an example. This scheme proposes a multi-view point-cloud three-dimensional reconstruction method based mainly on the light-sectioning method; using the method of embodiment one improves the accuracy and speed of target three-dimensional reconstruction and the completeness of the target point cloud. The specific method steps are shown in Fig. 1.
The implementation of the technical solution of the present invention is described in further detail below with reference to Fig. 1:
S1: synchronously acquire images
S11: start the projector O to project fringe-light images, and start cameras C_1, C_2 to capture grayscale images.
Specifically, as shown in Fig. 2, start the fill-light system, the projector and all cameras. The fill-light system is deployed according to the actual environment and used according to the working conditions: it is switched on when the environment requires fill light. The projector O projects fringe-light images whose width and height are 1280 and 1024 pixels respectively. The 2 cameras C_1, C_2 capture grayscale images whose width and height are 1028 and 1024 pixels respectively. All cameras share the projector, and under the specified shooting distance the fields of view of the cameras and the projector have a maximal common field of view.
S12: acquire images.
Acquire Q = 4 groups of images; the steps are as follows:
S121: initialize a counter q = 0.
S122: control the projector through an RS-232 serial signal to project the q-th structured plane-light image I_{q0}; at the same time, trigger the 2 cameras synchronously, by software or hardware, to capture pictures. Obtain 2 grayscale images carrying structured light and, using the extrinsic parameters of the two cameras, stereo-rectify the images to obtain the registered images I_{q1}, I_{q2}; the plane-light image and the grayscale images are denoted as the subset S_q = {I_{q0}, I_{q1}, I_{q2}}.
Repeat step S122 to acquire Q groups of plane-light and grayscale image subsets, denoted U = {S_0, S_1, ..., S_{Q-1}}.
S2: generate the point cloud for the camera pair.
Referring to Fig. 3, the specific steps are as follows:
S21: for camera C_1, according to the phase method, compute from the grayscale image set {I_{01}, I_{11}, ..., I_{41}} the phase value θ(u_1, v_1) of the image at position (u_1, v_1):

θ(u_1, v_1) = arctan[(I_{41}(u_1, v_1) - I_{21}(u_1, v_1)) / (I_{11}(u_1, v_1) - I_{31}(u_1, v_1))]

wherein I_{41}(u_1, v_1), I_{21}(u_1, v_1), I_{11}(u_1, v_1), I_{31}(u_1, v_1) respectively denote the grayscale pixel values of images I_{41}, I_{21}, I_{11}, I_{31} at position (u_1, v_1), taking values in the natural-number set [0, 255].
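The four-step phase decoding described here can be sketched with NumPy. The phase-shift assignment (0, π/2, π, 3π/2 for I_{11}, I_{21}, I_{31}, I_{41}) is an assumption, chosen to be consistent with the order in which the four images enter the formula:

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Four-step phase-shift decoding: theta = atan2(I4 - I2, I1 - I3),
    wrapped to [0, 2*pi). Assumes shifts of 0, pi/2, pi, 3*pi/2 for I1..I4."""
    I1, I2, I3, I4 = (np.asarray(a, dtype=np.float64) for a in (I1, I2, I3, I4))
    return np.arctan2(I4 - I2, I1 - I3) % (2 * np.pi)
```

With fringes I_n = A + B*cos(phi + delta_n), the differences reduce to 2B*sin(phi) and 2B*cos(phi), so the arctangent recovers phi independent of the background A and modulation B.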
Since the images captured by cameras C_1, C_2 are stereo-rectified and aligned, the rectified pictures of the two cameras share the same ordinate; therefore, in the camera C_2 image, on row v_1 near u_1, i.e. in [u_1 - Δu, u_1 + Δu], compute the phase values of all positions and find the image position (u_2, v_1) whose phase value equals θ(u_1, v_1).
The specific steps are:
S211: for each element u of the interval [u_1 - Δu, u_1 + Δu], compute the phase value of the camera C_2 image at position (u, v_1):

θ(u, v_1) = arctan[(I_{42}(u, v_1) - I_{22}(u, v_1)) / (I_{12}(u, v_1) - I_{32}(u, v_1))]

wherein I_{42}(u, v_1), I_{22}(u, v_1), I_{12}(u, v_1), I_{32}(u, v_1) respectively denote the grayscale pixel values of images I_{42}, I_{22}, I_{12}, I_{32} at position (u, v_1), taking values in the natural-number set [0, 255].
S212: if |θ(u, v_1) - θ(u_1, v_1)| < δ, assign u_2 = u and jump to step S213. Otherwise, if untraversed elements remain, continue with step S211; otherwise return to step S21 and continue with the next pixel of the camera C_1 image. In this example δ is 0.01, but it is not limited to this value; in embodiments of the present invention δ ranges over values greater than 0.001 and less than 1.
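Steps S211-S212 amount to a bounded scan along the rectified row; a minimal sketch over a one-dimensional phase row (sub-pixel refinement omitted, and returning the first hit, just as S212 jumps to S213 on the first match):

```python
def match_along_row(theta1, theta2_row, u1, du, delta=0.01):
    """S211-S212 sketch: scan u in [u1 - du, u1 + du] on the rectified row of
    camera C2 and return the first u whose phase lies within delta of theta1;
    None if no position matches."""
    lo = max(0, u1 - du)
    hi = min(len(theta2_row) - 1, u1 + du)
    for u in range(lo, hi + 1):
        if abs(theta2_row[u] - theta1) < delta:
            return u
    return None
```

Returning None corresponds to the "return to S21" branch: the C_1 pixel simply yields no 3D point.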
S213: at the picture point (u_1, v_1), the parallax of cameras C_1 and C_2 is u_2 - u_1; compute the three-dimensional space coordinate corresponding to pixel (u_1, v_1) in camera C_1.
The three-dimensional space coordinate (X_1, Y_1, Z_1) corresponding to pixel (u_1, v_1) in camera C_1 is:

Z_1 = B·f / (u_2 - u_1),  X_1 = (u_1 - c_x)·Z_1 / f,  Y_1 = (v_1 - c_y)·Z_1 / f

wherein B is the baseline of the two cameras, f is the focal length normalized in pixels, and (c_x, c_y) is the principal point of the rectified images.
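The triangulation of S213 in code, a sketch under the same assumptions: the parallax sign follows the text (d = u_2 - u_1), and the principal point (c_x, c_y) is an assumption, since the excerpt names only the baseline B and the pixel-normalized focal length f:

```python
def triangulate(u1, v1, u2, B, f, cx, cy):
    """Rectified-pair triangulation for step S213."""
    d = u2 - u1                      # parallax as defined in the text
    Z1 = B * f / d                   # depth from similar triangles
    X1 = (u1 - cx) * Z1 / f          # back-project the pixel through the pinhole
    Y1 = (v1 - cy) * Z1 / f
    return X1, Y1, Z1
```

For example, with B = 0.1, f = 1000 and a parallax of 50 pixels the depth is 0.1 * 1000 / 50 = 2.0 in baseline units.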
Compute, according to step S21 above, the three-dimensional space coordinates of all pixels of camera C_1, thereby forming the point set P_1 in the coordinate system of camera C_1.
S22: reject the noise points in the point set.
Referring to Fig. 4, the specific steps are as follows:
S221: project each point p_i = (X_i, Y_i, Z_i) of the point set P_1 to the camera C_1 image pixel (u_{1i}, v_{1i}), i.e.:

s·[u_{1i}, v_{1i}, 1]^T = M_1·[X_i, Y_i, Z_i]^T

wherein M_1 is the 3x3 intrinsic-parameter matrix of camera C_1 and s is a scale factor.
S222: parse the phase value of pixel (u_{1i}, v_{1i}), i.e.:

θ(u_{1i}, v_{1i}) = arctan[(I_{41}(u_{1i}, v_{1i}) - I_{21}(u_{1i}, v_{1i})) / (I_{11}(u_{1i}, v_{1i}) - I_{31}(u_{1i}, v_{1i}))]

S223: project the point p_i = (X_i, Y_i, Z_i) to the pixel (u_{2i}, v_{2i}) of the camera C_2 image, i.e.:

[X'_i, Y'_i, Z'_i, 1]^T = T_{21}·[X_i, Y_i, Z_i, 1]^T,  s·[u_{2i}, v_{2i}, 1]^T = M_2·[X'_i, Y'_i, Z'_i]^T

wherein M_2 is the 3x3 intrinsic-parameter matrix of camera C_2 and T_{21} is the 4x4 transformation matrix from camera C_1 to camera C_2.
S224: parse the phase value of pixel (u_{2i}, v_{2i}), i.e.:

θ(u_{2i}, v_{2i}) = arctan[(I_{42}(u_{2i}, v_{2i}) - I_{22}(u_{2i}, v_{2i})) / (I_{12}(u_{2i}, v_{2i}) - I_{32}(u_{2i}, v_{2i}))]

If the phase difference |θ(u_{2i}, v_{2i}) - θ(u_{1i}, v_{1i})| is not less than Δθ, the point is a noise point and is deleted from the point set P_1, thereby obtaining the new point set P'_1. In this embodiment Δθ is 0.02 (but is not limited to this value).
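Steps S221-S224 can be sketched as a phase-consistency test. Note one assumption: the translated text deletes a point when the two phases differ by less than Δθ, but since the step's stated purpose is removing noise, the sketch rejects a point when its projected phases disagree by Δθ or more; the `phase_at_*` callables stand in for sampling the decoded phase maps:

```python
import numpy as np

def project(M, T, p):
    """Map a 3D point p (camera-C1 coordinates) through the 4x4 rigid
    transform T (identity for C1 itself), then through the 3x3 intrinsic
    matrix M, and dehomogenize to a pixel (u, v)."""
    ph = T @ np.array([p[0], p[1], p[2], 1.0])
    uvw = M @ ph[:3]
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

def is_noise(p, M1, M2, T21, phase_at_1, phase_at_2, dtheta=0.02):
    """S221-S224 sketch: project p into both cameras, compare decoded phases,
    and flag the point when they disagree by dtheta or more."""
    u1, v1 = project(M1, np.eye(4), p)
    u2, v2 = project(M2, T21, p)
    return abs(phase_at_2(u2, v2) - phase_at_1(u1, v1)) >= dtheta
```

Filtering P_1 then reduces to keeping the points for which `is_noise` is false.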
S3: point-cloud fusion
S31: according to the extrinsic transformation matrices of the projector and the camera, transform and fuse all points of P'_1 into the projector coordinate system, obtaining the point set P''_1, and output the point-set result P''_1.
S32: filter the outliers from the point set P''_1 by the K-nearest-neighbor method, obtaining the new point set P''_merge.
S4: output the new point set P''_merge.
The multi-view point-cloud three-dimensional reconstruction method and device provided by the embodiments of the present invention combine structured-light techniques with multi-view images to generate a three-dimensional point cloud, reconstructing an object accurately and quickly and thereby facilitating subsequent industrial-automation processing. The method is simple to use, reconstructs an object quickly and accurately, and is easy to apply and implement in practical industrial environments.
Embodiment three
Referring to Fig. 5, the detailed three-dimensional reconstruction method of the present invention is further illustrated with 4 cameras plus one projector as an example. This scheme proposes a multi-view three-dimensional reconstruction method based primarily on the light-sectioning method; using it effectively improves the accuracy, speed, and point-cloud completeness of three-dimensional reconstruction. The specific method steps are basically the same as in embodiment one and embodiment two; refer to Fig. 1.
The embodiment of the present invention is described in further detail below with reference to Fig. 1, Fig. 3 and Fig. 5.
The specific steps are as follows:
S1: synchronously acquire images
S11: start the projector O to project fringe-light images, and start cameras C_1, C_2, C_3, C_4 to capture grayscale images.
Specifically, as shown in Fig. 5, start the fill-light system, the projector and all cameras. The fill-light system is deployed according to the actual environment and used according to the working conditions: it is switched on when the environment requires fill light. The projector O projects fringe-light images whose width and height are 1280 and 1024 pixels respectively. The 4 cameras C_1, C_2, C_3, C_4 capture grayscale images whose width and height are 1028 and 1024 pixels respectively. All cameras share the projector, and under the specified shooting distance the fields of view of the cameras and the projector have a maximal common field of view.
S12: acquire images.
Acquire Q = 4 groups of images; the steps are as follows:
S121: initialize a counter q = 0.
S122: control the projector through an RS-232 serial signal to project the q-th structured plane-light image I_{q0}; at the same time, trigger the 4 cameras synchronously, by software or hardware, to capture pictures. Obtain 4 grayscale images carrying structured light; using the extrinsic parameters of each camera pair, stereo-rectify the images of cameras C_1, C_2 to obtain the registered images I_{q1}, I_{q2}, and stereo-rectify the images of cameras C_3, C_4 to obtain the registered images I_{q3}, I_{q4}. The structured plane-light image and the grayscale images are denoted as the subset S_q = {I_{q0}, I_{q1}, I_{q2}, I_{q3}, I_{q4}}.
Repeat step S122 to acquire Q groups of plane-light and grayscale image subsets, denoted U = {S_0, S_1, ..., S_{Q-1}}.
S2: generate the point cloud for each camera pair.
S21: for camera C_1, according to the phase method, compute from the grayscale image set {I_{01}, I_{11}, ..., I_{41}} the phase value θ(u_1, v_1) of the image at position (u_1, v_1):

θ(u_1, v_1) = arctan[(I_{41}(u_1, v_1) - I_{21}(u_1, v_1)) / (I_{11}(u_1, v_1) - I_{31}(u_1, v_1))]

wherein I_{41}(u_1, v_1), I_{21}(u_1, v_1), I_{11}(u_1, v_1), I_{31}(u_1, v_1) respectively denote the grayscale pixel values of images I_{41}, I_{21}, I_{11}, I_{31} at position (u_1, v_1), taking values in the natural-number set [0, 255].
Since the images captured by cameras C_1, C_2 are stereo-rectified and aligned, the rectified pictures of the two cameras share the same ordinate; therefore, in the camera C_2 image, on row v_1 near u_1, i.e. in [u_1 - Δu, u_1 + Δu], compute the phase values of all positions and find the image position (u_2, v_1) whose phase value equals θ(u_1, v_1). The specific steps are:
S211: for each element u of the interval [u_1 - Δu, u_1 + Δu], compute the phase value of the camera C_2 image at position (u, v_1):

θ(u, v_1) = arctan[(I_{42}(u, v_1) - I_{22}(u, v_1)) / (I_{12}(u, v_1) - I_{32}(u, v_1))]

wherein I_{42}(u, v_1), I_{22}(u, v_1), I_{12}(u, v_1), I_{32}(u, v_1) respectively denote the grayscale pixel values of images I_{42}, I_{22}, I_{12}, I_{32} at position (u, v_1), taking values in the natural-number set [0, 255].
S212: if |θ(u, v_1) - θ(u_1, v_1)| < δ, assign u_2 = u and jump to step S213. Otherwise, if untraversed elements remain, continue with step S211; otherwise return to step S21 and continue with the next pixel of the camera C_1 image. In this embodiment δ is 0.01, but it is not limited to this value.
S213: at the picture point (u_1, v_1), the parallax of cameras C_1 and C_2 is u_2 - u_1; compute the three-dimensional space coordinate (X_1, Y_1, Z_1) corresponding to pixel (u_1, v_1) in camera C_1:

Z_1 = B·f / (u_2 - u_1),  X_1 = (u_1 - c_x)·Z_1 / f,  Y_1 = (v_1 - c_y)·Z_1 / f

wherein B is the baseline of the two cameras, f is the focal length normalized in pixels, and (c_x, c_y) is the principal point of the rectified images.
Compute, according to step S21 above, the three-dimensional space coordinates of all pixels of camera C_1, thereby forming the point set P_1 in the coordinate system of camera C_1.
S22: reject the noise points in the point set.
Referring to Fig. 4, the specific steps are as follows:
S221: project each point p_i = (X_i, Y_i, Z_i) of the point set P_1 to the camera C_1 image pixel (u_{1i}, v_{1i}), i.e.:

s·[u_{1i}, v_{1i}, 1]^T = M_1·[X_i, Y_i, Z_i]^T

wherein M_1 is the 3x3 intrinsic-parameter matrix of camera C_1 and s is a scale factor.
S222: parse the phase value of pixel (u_{1i}, v_{1i}), i.e.:

θ(u_{1i}, v_{1i}) = arctan[(I_{41}(u_{1i}, v_{1i}) - I_{21}(u_{1i}, v_{1i})) / (I_{11}(u_{1i}, v_{1i}) - I_{31}(u_{1i}, v_{1i}))]

S223: project the point p_i to the pixel (u_{2i}, v_{2i}) of the camera C_2 image, i.e.:

[X'_i, Y'_i, Z'_i, 1]^T = T_{21}·[X_i, Y_i, Z_i, 1]^T,  s·[u_{2i}, v_{2i}, 1]^T = M_2·[X'_i, Y'_i, Z'_i]^T

wherein M_2 is the 3x3 intrinsic-parameter matrix of camera C_2 and T_{21} is the 4x4 transformation matrix from camera C_1 to camera C_2.
S224: parse the phase value of pixel (u_{2i}, v_{2i}), i.e.:

θ(u_{2i}, v_{2i}) = arctan[(I_{42}(u_{2i}, v_{2i}) - I_{22}(u_{2i}, v_{2i})) / (I_{12}(u_{2i}, v_{2i}) - I_{32}(u_{2i}, v_{2i}))]

If the phase difference |θ(u_{2i}, v_{2i}) - θ(u_{1i}, v_{1i})| is not less than Δθ, delete the pixel from the point set P_1, obtaining the new point set P'_1. In this embodiment Δθ is 0.02 (but is not limited to this value).
S23: according to steps S21 and S22 above, likewise take cameras C_3, C_4 as a pair and compute the point set P'_3 in the camera C_3 coordinate system.
S3: point-cloud fusion
S31: from the extrinsic parameters of the projector and each camera, the transformation matrix of camera C_1 in the projector coordinate system, denoted here T_{p1}, and the transformation matrix of camera C_3 in the projector coordinate system, denoted here T_{p3}, are known. Fuse all point sets P'_1, P'_3 into the projector coordinate system: apply the transform T_{p1} to each point of the cloud P'_1 and the transform T_{p3} to each point of the cloud P'_3; the set of all resulting coordinate points is denoted P_merge.
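Step S31 can be sketched as applying each camera-to-projector 4x4 extrinsic transform to its cloud and concatenating the results (the transforms themselves come from calibration, which the excerpt assumes as given):

```python
import numpy as np

def fuse_to_projector(clouds, transforms):
    """S31 sketch: apply each camera-to-projector 4x4 extrinsic transform to
    every point of its cloud and concatenate everything into P_merge."""
    merged = []
    for cloud, T in zip(clouds, transforms):
        for (x, y, z) in cloud:
            q = T @ np.array([x, y, z, 1.0])   # homogeneous rigid transform
            merged.append((q[0], q[1], q[2]))
    return merged
```

With two pair-wise clouds and their two transforms, the result is the single merged cloud in the projector frame.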
S32: filter the outliers from the point set P_merge by the K-nearest-neighbor method, obtaining the new point set P'_merge. In this embodiment the minimum number of neighboring points is 3, but it is not limited to this value; the minimum neighbor count ranges over values greater than 1 and less than 10.
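The S32 outlier filter can be sketched as a radius-based neighbor count, one common reading of the K-nearest-neighbor filtering named here, with the minimum neighbor count of 3 from this embodiment; a KD-tree would replace the O(n^2) scan in practice:

```python
def filter_outliers(points, radius, min_neighbors=3):
    """S32 sketch: keep a point only if at least min_neighbors other points
    lie within radius of it; everything else is treated as an outlier."""
    kept = []
    for i, p in enumerate(points):
        n = sum(
            1
            for j, q in enumerate(points)
            if i != j and sum((a - b) ** 2 for a, b in zip(p, q)) <= radius ** 2
        )
        if n >= min_neighbors:
            kept.append(p)
    return kept
```

A tight cluster survives the filter while an isolated point, having no neighbors within the radius, is dropped.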
S4: output the new point set P'_merge.
The multi-view point-cloud three-dimensional reconstruction method and device provided by the embodiments of the present invention combine structured-light techniques with multi-view images to generate a three-dimensional point cloud, reconstructing an object accurately and quickly and thereby facilitating subsequent industrial-automation processing. The method is simple to use, reconstructs an object quickly and accurately, and is easy to apply and implement in practical industrial environments.
Embodiment four
The embodiment of the present invention further provides a multi-view point-cloud three-dimensional reconstruction device; referring to Fig. 6, the device comprises the following modules:
an image acquisition module 10, a binocular three-dimensional point-cloud module 20, a point-cloud fusion module 30 and an output module 40.
The image acquisition module 10 is used for synchronously acquiring multi-view images.
Specifically, the image acquisition module 10 comprises cameras and a projector; the cameras capture grayscale images of the target, the projector projects fringe-light images, and the image acquisition module 10 acquires the grayscale images carrying structured light.
The binocular three-dimensional point-cloud module 20 is used for generating the three-dimensional point cloud for each camera pair.
The binocular three-dimensional point-cloud module 20 comprises a generation unit 201 and a noise-rejection unit 202.
The generation unit 201 generates, for every camera pair C_k, C_{k+1}, the point set P_k in the camera C_k coordinate system from the grayscale image sets {I_{0k}, I_{1k}, ..., I_{(Q-1)k}} and {I_{0,k+1}, I_{1,k+1}, ..., I_{(Q-1),k+1}}.
The noise-rejection unit 202 is used for rejecting the noise points in the point set.
Specifically, each point of the point set P_k is projected onto the image pixels of its corresponding other camera and the phase is parsed there, thereby determining whether the point is a noise point. If it is a noise point, the corresponding point is deleted from the point set P_k, obtaining a new point set.
The point-cloud fusion module 30 is used for fusing all the point clouds into the projector coordinate system, to obtain a point set.
The point-cloud fusion module 30 comprises a fusion unit 301 and a filter unit 302.
The fusion unit 301 is used for fusing all the point clouds into the projector coordinate system, to obtain a point set.
The filter unit 302 is used for filtering the outliers from the point set, to obtain a new point set.
The output module 40 is used for outputting the new point set after the outliers are filtered out.
The multi-view three-dimensional reconstruction device of the present invention synchronously acquires multi-view images through the image acquisition module 10; the binocular three-dimensional point cloud module 20 then generates a three-dimensional point cloud for each camera pair, and the point cloud fusion module 30 fuses all the point clouds into the projector coordinate system to obtain a single point set, thereby reconstructing the object in three dimensions. Using this device effectively improves the precision, speed, and point cloud completeness of the three-dimensional reconstruction. The device enables accurate and fast three-dimensional reconstruction of an object, which facilitates subsequent industrial automation processing with high recognition precision.
Embodiment five:
According to one embodiment of the present invention, a multi-view point cloud three-dimensional reconstruction apparatus is provided. The apparatus includes a processor, a computer-readable storage medium, and a computer program stored thereon; when the computer program is executed by the processor, the steps of the multi-view three-dimensional reconstruction method described above are implemented. The specific steps are as described in Embodiment One and are not repeated here.
The memory in this embodiment may be used to store software programs and various data. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store the operating system and the application programs required by at least one function, and the data storage area may store data created according to the use of the mobile phone, and so on. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, for example at least one magnetic disk memory, a flash memory device, or other volatile solid-state memory.
According to an example of this embodiment, all or part of the processes in the methods of the above embodiments may be accomplished by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium; for example, in an embodiment of the present invention, the program may be stored in the storage medium of a computer system and executed by at least one processor of the computer system to realize the processes of each of the method embodiments above. The storage medium includes, but is not limited to, a magnetic disk, a USB flash drive, an optical disc, a read-only memory (ROM), and the like.
The preferred embodiments of the present invention have been described above with reference to the accompanying drawings, but they do not thereby limit the scope of the invention. Those skilled in the art can implement the invention in many variations without departing from its scope and spirit; for example, a feature of one embodiment may be used in another embodiment to obtain a further embodiment. Any modification, equivalent replacement, or improvement made within the technical concept of the present invention shall fall within its scope of protection.

Claims (10)

1. A multi-view point cloud three-dimensional reconstruction method, characterized by comprising the steps of:
starting a projector O for projecting fringe-light patterns, and starting K cameras C1, C2, ..., CK for capturing grayscale images;
acquiring images;
generating a point cloud for each camera pair;
fusing the point clouds;
wherein K is an even natural number.
2. The multi-view point cloud three-dimensional reconstruction method according to claim 1, characterized in that the acquiring of images specifically includes:
initializing a counter q = 0;
controlling the projector O to project the q-th structured surface-light pattern Iq0;
triggering the K cameras to acquire pictures, obtaining K grayscale images carrying structured light; the structured surface-light pattern and the grayscale images are denoted as the subset Sq = {Iq0, Iq1, ..., IqK}.
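The counter/projection/trigger loop of claim 2 can be sketched as follows; the Projector and Camera classes here are illustrative stand-ins for the real hardware interfaces, not part of the patent:

```python
class Projector:
    """Stand-in for the projector O that emits the fringe patterns."""
    def project(self, q):
        # Return the q-th structured surface-light pattern I_q0
        # (a trivial placeholder label instead of real optics).
        return f"I{q}0"

class Camera:
    """Stand-in for camera C_k, triggered after each projection."""
    def __init__(self, k):
        self.k = k
    def capture(self, q):
        # Return the grayscale image I_qk carrying structured light.
        return f"I{q}{self.k}"

def acquire_images(projector, cameras, Q):
    """Collect the Q subsets S_q = {I_q0, I_q1, ..., I_qK}: for each
    counter value q, project the q-th pattern, then trigger all K cameras."""
    sequence = []
    for q in range(Q):                                   # counter starts at q = 0
        fringe = projector.project(q)                    # project pattern I_q0
        frames = [cam.capture(q) for cam in cameras]     # trigger the K cameras
        sequence.append([fringe] + frames)               # S_q
    return sequence
```

In a real rig the cameras would be hardware-triggered off the projector's sync signal so that all K exposures see the same pattern.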
3. The multi-view point cloud three-dimensional reconstruction method according to claim 1, characterized in that the generating of the point cloud for each camera pair comprises the steps of:
for every camera pair Ck, Ck+1, where k is a value in the natural number range [1, K-1], generating, according to a phase method, the point set Pk in the coordinate system of camera Ck from the grayscale image sets {I0k, I1k, ..., I(Q-1)k} and {I0k+1, I1k+1, ..., I(Q-1)k+1};
rejecting the noise points in the point set.
4. The multi-view point cloud three-dimensional reconstruction method according to claim 3, characterized in that the rejecting of the noise points in the point set specifically includes:
projecting each point in the point set Pk to the corresponding pixel in the image of the paired camera and resolving the phase there, so as to determine whether the point is a noise point; if it is a noise point, deleting the corresponding point from the point set Pk, thereby obtaining new point sets P' = {P'1, P'2, ..., P'K}.
5. The multi-view point cloud three-dimensional reconstruction method according to claim 4, characterized by comprising the step of: for camera C1, calculating, according to the phase method and using the grayscale image set {I01, I11, ..., I41}, the phase value θ(u1, v1) of the image at position (u1, v1):
θ(u1, v1) = arctan[(I41(u1, v1) − I21(u1, v1)) / (I11(u1, v1) − I31(u1, v1))]
wherein I41(u1, v1), I21(u1, v1), I11(u1, v1), and I31(u1, v1) respectively denote the pixel grayscale values of the images I41, I21, I11, and I31 at position (u1, v1), whose value range is the set of natural numbers in [0, 255].
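The phase value in claim 5 has the conventional four-step phase-shifting form. A sketch, assuming the four images are fringe patterns shifted by successive quarter periods (π/2), which is the standard interpretation of this image ordering:

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four pi/2-shifted fringe images.

    With I_n = A + B*cos(phi + (n-1)*pi/2):
        I4 - I2 = 2B*sin(phi),   I1 - I3 = 2B*cos(phi)
    so atan2 recovers phi directly. atan2 (rather than a plain arctan of
    the ratio) keeps the full [-pi, pi] range and tolerates a zero
    denominator.
    """
    return np.arctan2(I4.astype(float) - I2.astype(float),
                      I1.astype(float) - I3.astype(float))
```

The wrapped phase still has to be unwrapped (e.g. with a multi-frequency or Gray-code sequence) before it can index projector columns uniquely.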
6. The multi-view point cloud three-dimensional reconstruction method according to claim 5, characterized in that the resolving of the phase value includes: if the phase values satisfy |θ(u2i, v2i) − θ(u1i, v1i)| < Δθ, deleting the pixel from the point set Pk, thereby obtaining a new point set P'1.
7. The multi-view point cloud three-dimensional reconstruction method according to claim 1, characterized in that the point cloud fusion comprises the step of: fusing all the point clouds P' = {P'1, P'2, ..., P'K} into the projector coordinate system to obtain a point set Pmerge.
8. The multi-view point cloud three-dimensional reconstruction method according to claim 7, characterized in that the point cloud fusion further comprises the step of: filtering the outliers in the point set Pmerge, thereby obtaining a new point set P'merge.
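Claims 7 and 8 together amount to transforming each per-pair cloud into the projector frame and then dropping sparsely supported points. A brute-force sketch with illustrative parameter names (a production system would use a KD-tree rather than this O(N²) distance matrix):

```python
import numpy as np

def fuse_and_filter(clouds, transforms, radius=0.05, min_neighbors=3):
    """Fuse per-pair clouds P'_1..P'_K into the projector frame and drop
    outliers with a simple radius / neighbor-count test.

    clouds     : list of (N_i, 3) arrays, each in its camera C_k frame
    transforms : list of (R, t) mapping camera C_k coords to projector coords
    Returns the filtered merged point set P'_merge.
    """
    # Claim 7: rigidly transform every cloud into the projector frame.
    merged = np.vstack([(R @ P.T).T + t
                        for P, (R, t) in zip(clouds, transforms)])
    # Claim 8: count neighbors within `radius` of each point and keep
    # only points with enough support (isolated points are outliers).
    d = np.linalg.norm(merged[None, :, :] - merged[:, None, :], axis=-1)
    neighbors = (d < radius).sum(axis=1) - 1   # exclude the point itself
    return merged[neighbors >= min_neighbors]
```

The radius and neighbor-count thresholds are tuning parameters; statistical outlier removal (mean distance to the k nearest neighbors) is a common alternative.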
9. A multi-view point cloud three-dimensional reconstruction device, characterized in that the device includes:
an image acquisition module for synchronously acquiring multi-view images;
a binocular three-dimensional point cloud module for generating a three-dimensional point cloud for each camera pair;
a point cloud fusion module for fusing all the point clouds into the projector coordinate system, so that a new point set is obtained;
and an output module for outputting the new point set.
10. A multi-view point cloud three-dimensional reconstruction apparatus, characterized in that the apparatus includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the method according to any one of claims 1 to 8.
CN201810732037.1A 2018-07-05 2018-07-05 Multi-eye point cloud three-dimensional reconstruction method, device and equipment Active CN108961378B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810732037.1A CN108961378B (en) 2018-07-05 2018-07-05 Multi-eye point cloud three-dimensional reconstruction method, device and equipment

Publications (2)

Publication Number Publication Date
CN108961378A true CN108961378A (en) 2018-12-07
CN108961378B CN108961378B (en) 2023-03-28

Family

ID=64486040

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810732037.1A Active CN108961378B (en) 2018-07-05 2018-07-05 Multi-eye point cloud three-dimensional reconstruction method, device and equipment

Country Status (1)

Country Link
CN (1) CN108961378B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113012277A (en) * 2021-02-03 2021-06-22 中国地质大学(武汉) DLP (digital light processing) -surface-based structured light multi-camera reconstruction method
CN114863036A (en) * 2022-07-06 2022-08-05 深圳市信润富联数字科技有限公司 Data processing method and device based on structured light, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104665107A (en) * 2015-03-10 2015-06-03 南京脚度健康科技有限公司 Three-dimensional data acquisition and processing system and three-dimensional data acquisition and processing method for soles
CN107170043A (en) * 2017-06-19 2017-09-15 电子科技大学 A kind of three-dimensional rebuilding method
CN107367245A (en) * 2017-08-22 2017-11-21 西安交通大学 Null Spot detection and elimination method in optical three-dimensional contour measurement
CN107483911A (en) * 2017-08-25 2017-12-15 秦山 A kind of signal processing method and system based on more mesh imaging sensors





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant