WO2018011335A1 - Alignment of scan parts on a turntable - Google Patents


Info

Publication number
WO2018011335A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
common
rotation axis
scan
turntable
Prior art date
Application number
PCT/EP2017/067671
Other languages
French (fr)
Inventor
William Nguyen
Edwin TAFERNER
Original Assignee
Naked Labs Austria Gmbh
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Naked Labs Austria Gmbh filed Critical Naked Labs Austria Gmbh
Publication of WO2018011335A1 publication Critical patent/WO2018011335A1/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4808Evaluating distance, position or velocity data
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/87Combinations of systems using electromagnetic waves other than radio waves
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/8943D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Definitions

  • In the first alignment pipeline, or rather method, shown in figure 2 it is assumed that the physical arrangement of the cameras 104, 105 is known relatively well. For instance, the relative pose of the second camera 105 to the first camera 104 can be manually calibrated. This manual calibration may be inaccurate, but it provides good hints for the subsequent alignment.
  • The pipeline starts in the circle estimation step 201 with estimating a circle on a plane of the first camera trajectory 402 and of the second camera trajectory 404. Because the turntable 101 rotates the object 102 around a common axis 106, the camera trajectory 402, 404 of each fixed camera 104, 105 forms a circle on a 3D plane. This circle has a center 405, 406 that lies on the common rotation axis 106, and the 3D plane is perpendicular to the axis 106.
  • The system transforms the scan parts 401, 403 to a chosen common coordinates system along the common rotation axis 106. For instance, the projection of the first camera's 104 origin on the common rotation axis 106 can be chosen as the common coordinate origin. Preferably the down direction of the axis 106 is chosen as the x-axis of the common coordinates.
  • The system can thus transform the different scan parts 401, 403 to common coordinates with relatively good overlapping areas between the scan parts 401, 403.
  • The border vertices of the reconstructed parts are removed.
  • The border vertices of the reconstructed parts 401, 403 are rather noisy.
  • Removing those vertices helps reduce the influence of noise on the point-cloud alignment 204.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a method for forming a 3D scanned model of an object comprising the following steps: rotating an object with a turntable around a turntable rotation axis, capturing a first bottom scan part of the object with a first camera and a second top scan part of the object with a second camera, wherein these scan parts share a common overlapping area and the cameras' positions are fixed while the object is rotated by the turntable so that each camera forms a circular camera trajectory, and aligning the two scan parts to each other by using the information that the circular camera trajectories are perpendicular to the common turntable rotation axis and that the common turntable rotation axis goes through the center of the circular camera trajectories. The invention further relates to a 3D scanning system for forming a 3D scanned model of an object comprising: a turntable, a first camera for capturing a first bottom scan part of the object, a second camera for capturing a second top scan part of the object and a processor operating with the above-mentioned method.

Description

ALIGNMENT OF SCAN PARTS ON A TURNTABLE
The invention relates to a method for forming a 3D scanned model of an object comprising the following steps: rotating an object with a turntable around a turntable rotation axis and capturing a first bottom scan part of the object with a first camera and a second top scan part of the object with a second camera, wherein these scan parts share a common overlapping area and the cameras' positions are fixed while the object is rotated by the turntable so that each camera forms a circular camera trajectory.
Furthermore, the disclosure relates to a 3D scanning system for forming a 3D scanned model of an object comprising: a turntable for rotating an object around a turntable rotation axis, a first camera for capturing a first bottom scan part of the object, a second camera for capturing a second top scan part of the object, wherein these scan parts share a common overlapping area and the cameras' positions are fixed while the object is rotated by the turntable so that each camera forms a circular camera trajectory, and a processor for aligning the scan parts.
To reduce the space requirements of 3D scanning systems, more than one camera is normally used. To make it easy for the user to perform a scan, several systems equipped with a turntable device have been proposed (see figure 1). Such setups have become more common recently with the introduction of new scanning products such as Naked Labs' fitness tracking device. However, the setup makes forming a final 3D scanned model difficult: it requires the alignment of different scan parts captured by different cameras. The objective of this invention is to tackle that problem.
3D reconstruction or 3D scanning technologies have recently become popular in a wide range of applications:
- Virtual clothes try-on for e-commerce
- Fitness tracking
- Medical applications
- Game industry
From US 2016/0078663 A1 a method is known for registering multiple range cameras' point clouds into a common coordinate system, which may be one of the range cameras' coordinate systems or any other coordinate system, effectively aligning the point clouds from each range camera. Registration into a common coordinate system may be required to generate a complete point cloud of a subject. Each range camera's point cloud is registered via an iterative closest point (ICP) algorithm. An initial estimate of the parameters required to register one range camera point cloud to another may be determined from the range camera configuration. Each pair-wise transformation may be refined using the ICP algorithm, and the refined transformation data may be saved for future body scans. ICP may be used where there is sufficient overlap between the cameras' point clouds.
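The iterative-closest-point registration summarized above can be sketched in a few lines. This is a generic, brute-force illustration (a Kabsch rigid fit plus nearest-neighbour matching), not the implementation of US 2016/0078663 A1; the function names are illustrative and NumPy is assumed:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with R @ p + t ~ q
    for paired points (Kabsch algorithm)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=20):
    """Simplistic ICP: brute-force nearest neighbours, rigid update.
    Returns the accumulated (R, t) mapping src onto dst."""
    src = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # nearest dst point for every src point (O(n*m); fine for a sketch)
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(src, matches)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

A production system would use a spatial index (k-d tree) for the correspondence search and an outlier rejection step, as the cited document suggests with its inlier-overlap condition.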
The object of the present disclosure is to provide a highly efficient method and/or 3D scanning system for forming a 3D scanned model of an object.
The aforementioned object is achieved by means of a method for forming a 3D scanned model of an object and by means of a 3D scanning system for forming a 3D scanned model of an object exhibiting the features disclosed in the independent patent claims.
According to the disclosure a method for forming a 3D scanned model of an object is suggested. The method comprises the following steps: rotating an object with a turntable around a turntable rotation axis, capturing a first bottom scan part of the object with a first camera and a second top scan part of the object with a second camera, wherein these scan parts share a common overlapping area and the cameras' positions are fixed while the object is rotated by the turntable so that each camera forms a circular camera trajectory, and aligning the two scan parts to each other by using the information that the circular camera trajectories are perpendicular to the common turntable rotation axis and that the common turntable rotation axis goes through the center of the circular camera trajectories.
It is advantageous if a first circle on a plane of the first camera trajectory and a second circle on the plane of the second camera trajectory are estimated.
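One plausible way to estimate such a circle, assuming the per-frame camera positions are available as 3D points and NumPy is at hand: fit the supporting plane by SVD, project into the plane, and fit a 2D circle with the algebraic (Kåsa) method. The patent does not prescribe a fitting method, so this is only a sketch:

```python
import numpy as np

def fit_circle_3d(points):
    """Fit a circle to 3D points lying approximately on a plane.
    Returns (center, radius, normal): circle center in 3D, radius,
    and the unit normal of the supporting plane."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Plane normal = right singular vector of the smallest singular
    # value of the centered point matrix.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[2]
    u, v = vt[0], vt[1]                       # in-plane basis
    # Project onto the plane and fit a 2D circle (Kasa fit):
    # solve 2*cx*x + 2*cy*y + c = x^2 + y^2 for (cx, cy, c).
    xy = (pts - centroid) @ np.column_stack([u, v])
    A = np.column_stack([2 * xy, np.ones(len(xy))])
    b = (xy ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx ** 2 + cy ** 2)
    center = centroid + cx * u + cy * v
    return center, radius, normal
```

The returned normal is the estimated direction of the common rotation axis, and the center lies on that axis.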
In an advantageous further aspect, the first bottom scan part and the second top scan part are transformed into common rotation axis coordinates and/or into a common coordinates system along the common turntable rotation axis.
It is advantageous if the projection of the first or second camera origin on the common turntable rotation axis is chosen as a common coordinate origin.
In an advantageous further aspect, the common turntable rotation axis, in particular its down direction, is chosen as the x-axis of the common coordinates system.
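Together, these choices define a rigid transform into the common coordinates: translate the chosen origin on the axis to zero and rotate the axis direction onto the x-axis. A minimal sketch, assuming NumPy; the choice of the remaining two basis vectors is arbitrary (any orthonormal completion works) and is not specified by the patent:

```python
import numpy as np

def rotation_axis_transform(axis_point, axis_dir):
    """4x4 transform taking world points into common rotation-axis
    coordinates: origin at axis_point, x-axis along axis_dir."""
    x = np.asarray(axis_dir, float)
    x = x / np.linalg.norm(x)
    # Any vector not parallel to x yields a valid second basis vector.
    helper = np.array([0.0, 0.0, 1.0]) if abs(x[2]) < 0.9 else np.array([0.0, 1.0, 0.0])
    y = np.cross(helper, x)
    y /= np.linalg.norm(y)
    z = np.cross(x, y)
    R = np.vstack([x, y, z])       # rows = new basis vectors
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = -R @ np.asarray(axis_point, float)
    return T

def apply_transform(T, points):
    """Apply a 4x4 rigid transform to an (n, 3) array of points."""
    pts = np.asarray(points, float)
    return pts @ T[:3, :3].T + T[:3, 3]
```

Applying this transform (with `axis_point` set to the projection of a camera origin on the axis) to each scan part puts both parts into the same rotation-axis coordinates.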
It is advantageous if the first circle and the second circle are aligned concentrically to each other and to the common turntable rotation axis.
It is advantageous if a mesh border clipping is performed to clean border vertices of meshes of the first bottom scan part and the second top scan part.
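One plausible reading of mesh border clipping is to drop every vertex lying on a boundary edge (an edge used by only one face), together with the faces touching it. A sketch under that assumption; the mesh representation (vertex list plus triangle index list) is assumed, not taken from the patent:

```python
from collections import Counter

def clip_border_vertices(vertices, faces):
    """Remove vertices on the mesh border (an edge shared by only one
    face) and the faces that touch them; reindex the remainder."""
    edge_count = Counter()
    for a, b, c in faces:
        for e in ((a, b), (b, c), (c, a)):
            edge_count[tuple(sorted(e))] += 1
    # Border vertices are endpoints of edges used by exactly one face.
    border = {v for e, n in edge_count.items() if n == 1 for v in e}
    keep_faces = [f for f in faces if not (set(f) & border)]
    used = sorted({v for f in keep_faces for v in f})
    remap = {old: new for new, old in enumerate(used)}
    new_vertices = [vertices[i] for i in used]
    new_faces = [tuple(remap[v] for v in f) for f in keep_faces]
    return new_vertices, new_faces
```

On a closed mesh nothing is removed; on an open scan part only the noisy rim is clipped. Real pipelines often repeat this for a few rings of border vertices.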
In an advantageous further aspect, a point-cloud alignment is performed to align the two scan parts, in particular in axial and/or peripheral direction of the common turntable rotation axis, to each other and to form the 3D scanned model as a final aligned part in the common coordinates.
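After the rotation axis alignment, both parts share the common axis, so the remaining degrees of freedom are exactly one rotation about the axis (peripheral) and one translation along it (axial). The patent does not give a concrete algorithm for this step; a coarse grid search over those two parameters, scored by mean nearest-neighbour distance, is one plausible sketch (the x-axis is taken as the axis direction, as chosen above):

```python
import numpy as np

def axis_constrained_align(src, dst, angles=None, shifts=None):
    """Grid-search alignment constrained to the common rotation axis
    (here the x-axis): a rotation about x plus a translation along x.
    Returns (mean squared NN error, best angle, best axial shift).
    A coarse sketch; a real pipeline would refine the best cell, e.g.
    with ICP."""
    if angles is None:
        angles = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)
    if shifts is None:
        shifts = np.linspace(-0.5, 0.5, 21)
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    best = (np.inf, 0.0, 0.0)
    for a in angles:
        c, s = np.cos(a), np.sin(a)
        R = np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])
        rot = src @ R.T
        for dx in shifts:
            cand = rot + np.array([dx, 0.0, 0.0])
            d2 = ((cand[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
            err = d2.min(axis=1).mean()
            if err < best[0]:
                best = (err, a, dx)
    return best
```

Restricting the search to two parameters instead of a full six-degree-of-freedom pose is precisely the benefit of exploiting the common rotation axis.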
It is advantageous if the orientations of the two scan parts around the common turntable rotation axis are aligned to each other. In an advantageous further aspect, the axial positions of the two scan parts in the direction of the common turntable rotation axis are aligned to each other.
It is advantageous if the knowledge of the physical arrangement of the first camera and the second camera, in particular the relative pose of the second camera to the first camera, is used for the point-cloud alignment.
Alternatively, it is advantageous if the knowledge of the physical arrangement of the first camera and the second camera, in particular the relative pose of the second camera to the first camera, is not used for the point-cloud alignment. In this case it is advantageous if upstream of the point-cloud alignment a pairing step and/or downstream of the point-cloud alignment a voting step is performed. It is further advantageous if in the pairing step potential matches of the scan parts are paired and/or all possible combinations of arrangements between the scan parts are formed. It is also advantageous if in the voting step alignment results of possible pairs of scan parts are collected and/or a voting is applied to find good matching results depending on point-cloud alignment errors and inlier ratio.
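The voting can be sketched as filtering the collected alignment results by error and inlier ratio and ranking the survivors. The tuple layout and the threshold values below are assumptions for illustration, not values from the patent:

```python
def vote(candidates, max_error=0.01, min_inlier_ratio=0.5):
    """Pick the best pairing from (pair_id, align_error, inlier_ratio)
    tuples: discard implausible alignments, then prefer a high inlier
    ratio and, among ties, a low residual error.  Returns None if no
    candidate survives the filters."""
    plausible = [c for c in candidates
                 if c[1] <= max_error and c[2] >= min_inlier_ratio]
    if not plausible:
        return None
    return max(plausible, key=lambda c: (c[2], -c[1]))
```

Each candidate would be produced by running the point-cloud alignment on one possible arrangement of the scan parts formed in the pairing step.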
According to the disclosure a 3D scanning system for forming a 3D scanned model of an object is suggested. The 3D scanning system comprises a turntable for rotating an object around a turntable rotation axis, a first camera for capturing a first bottom scan part of the object, a second camera for capturing a second top scan part of the object, wherein these scan parts share a common overlapping area and the cameras' positions are fixed while the object is rotated by the turntable so that each camera forms a circular camera trajectory, and a processor operating with a method according to the previous specification.
It is advantageous if the processor aligns the two scan parts to each other by using the information that the circular camera trajectories are perpendicular to the common turntable rotation axis and that the common turntable rotation axis goes through the center of the circular camera trajectories.
The invention introduces a robust alignment solution for 3D reconstructed parts of an object on a turntable. The solution enables automatic alignment of the 3D reconstructed parts of an object with minimal requirements regarding prior knowledge of the physical arrangement of the parts.
Additional advantages of the invention are described in the following exemplary embodiments. The drawings show: a schematic perspective view of a 3D scanning system (figure 1), a block diagram of a first alignment method for aligning the two scan parts (figure 2), a block diagram of a second alignment method for aligning the two scan parts (figure 3), a schematic perspective view of the reconstructed parts and the associated camera trajectories in a non-aligned position (figure 4) and a schematic perspective view of the aligned scan parts exploiting the common rotation axis (figure 5).
Alignment of different reconstructed or scanned parts 401, 403 of a given object 102 on a turntable 101 is a hard problem. It is common to many of the scanning systems 100 introduced so far in which two or more cameras 104, 105 are used together with a turntable 101. Our solution is robust and reliable without requiring any calibration procedure. This makes using multiple cameras 104, 105 for 3D scanning more robust and consequently feasible for home use with limited space. Figure 1 shows a 3D scanning system 100 for forming a 3D scanned model of an object 102. The 3D scanning system 100 comprises a turntable 101 for rotating an object 102 around a turntable rotation axis 106. It further comprises a first camera 104 for capturing a first scan part 401 of the object 102 and a second camera 105 for capturing a second scan part 403 of the object 102. These scan parts 401, 403 share a common overlapping area 107. The cameras' 104, 105 positions are fixed while the object 102 is rotated by the turntable 101, so that each camera 104, 105 forms a circular camera trajectory 402, 404. The 3D scanning system 100 comprises a processor 108. The processor 108 is connected to the cameras 104, 105. Additionally, the processor 108 can be connected to the turntable 101. The turntable 101 is motor driven and/or controlled by the processor 108. The processor operates with a method shown in figure 2 and/or figure 3.
Figure 1 shows the 3D scanning system 100. The 3D scanning system 100 comprises the turntable 101, or rather a turntable device. The object of interest 102 to be scanned or reconstructed is placed on the turntable 101. In a preferred embodiment the object 102 is a human body. The turntable 101 rotates around the turntable rotation axis 106. The rotation direction 103 of the turntable 101 is indicated in figure 1 with an arrow. The object 102 is rotated with the turntable 101 around the turntable rotation axis 106.
The 3D scanning system 100 further comprises the first camera 104 and the second camera 105. The two cameras 104, 105 are stationary and vertically offset from each other. Thus they scan different parts of the object 102. The first depth-sensor camera 104 captures depth maps of the bottom part of the object 102 while the turntable 101 is rotating. The second depth-sensor camera 105 captures depth maps of the top part of the object 102 while the turntable 101 is rotating. The first scan part 401 and the second scan part 403 of the object 102 are shown in figures 4 and 5. The detection zones of the cameras 104, 105 overlap in a common overlapping area 107. Thus the scanned parts 401, 403 share this common overlapping area 107, which helps to align the two scanned parts 401, 403.
As explained below in detail, figure 2 shows a block diagram of a first alignment method for aligning the two scan parts 401, 403. For this alignment method the knowledge of the physical arrangement of the first camera 104 and the second camera 105, in particular the relative pose of the second camera 105 to the first camera 104, is used. The first step is the circle estimation step 201. The circle estimation 201 estimates a circle on a plane of a given camera trajectory 402, 404, as shown in figures 4 and 5. The second step is the rotation axis alignment step 202. The rotation axis alignment 202 transforms the scan parts 401, 403 into common rotation axis coordinates. The third step is the mesh border clipping step 203. The mesh border clipping 203 cleans noisy border vertices of the meshes of the scan parts 401, 403. This reduces the influence of noise vertices in the point-cloud alignment step 204. In the point-cloud alignment step 204 the axial positions of the two scan parts 401, 403 in the direction of the common turntable rotation axis 106 are aligned to each other.
As explained below in detail, figure 3 shows a block diagram of a second alignment method for aligning the two scan parts 401, 403. The circle estimation step 301, the rotation axis alignment step 302, the mesh border clipping step 303 and the point-cloud alignment step 305 are the same as those of the first method shown in figure 2. For the alignment method shown in figure 3 the knowledge of the physical arrangement of the first camera 104 and the second camera 105 is not necessary. Therefore, the method comprises a pairing step 304 and a voting step 306. The pairing step 304 takes place before, and the voting step 306 after, the point-cloud alignment step 305.
During the pairing step 304, potential matches of the scan parts 401, 403 are paired. This is done before the point-cloud alignment step 305 in case the 3D scanning system 100 has no prior knowledge about the physical arrangement of the depth sensor cameras 104, 105. During the voting step 306, a voting on the scores of the alignments of pairs of scan parts 401, 403 is performed. The scores can be based on the convergence rate and the inlier ratio of the point-cloud alignment results.
Figure 4 shows a schematic perspective view of the reconstructed parts 401, 403 and the associated camera trajectories 402, 404 in a non-aligned position. The bottom part 401 of the object 102 is reconstructed from depth maps observed by the first depth sensor camera 104. In figure 4, the first camera 104 is shown in different positions relative to the scanned object 102. Figure 4 shows the first camera trajectory 402 of the first depth sensor camera 104. Further, it shows the second camera trajectory 404 of the second depth sensor camera 105. The top part 403 of the object 102 is reconstructed from depth maps observed by the second depth sensor camera 105. A first center 405 of the first circular camera trajectory 402 and a second center 406 of the second circular camera trajectory 404 are not concentric with each other.
Figure 5 shows a schematic perspective view of the aligned scan parts 401, 403 exploiting the common rotation axis 106. The common rotation axis 106 of the turntable 101 is perpendicular to the camera trajectories 402, 404. The axis 106 passes through the centers 405, 406. Thus the first center 405 of the first circular camera trajectory 402 and the second center 406 of the second circular camera trajectory 404 are concentric both with each other and with the turntable rotation axis 106.
The invention exploits the common axis rotation property of a setup like that of figure 1. In that setup, there are two cameras 104, 105, capturing the bottom and top parts 401, 403 of the object 102, respectively. The positions of the two cameras are fixed while the object 102 is rotated by the turntable device 101. The system exploits one important property of this setup, namely that the object 102 is rotated around a single rotation axis 106; the rotation axis 106 is perpendicular to the turntable surface. Consequently, the rotation axis 106 goes through the centers 405, 406 of the circular camera trajectories 402, 404 of camera 104 and camera 105. Using this property, we can initialize the relative positions between the reconstructed parts 401, 403 along the rotation axis 106. This initialization is the key to helping the point-cloud alignment converge optimally. Figure 1 illustrates the invention with two cameras 104, 105; however, more than two cameras can also be used within this invention.
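The common-axis initialization described above can be sketched in a few lines of numpy: once a circle center and the rotation axis direction are known for a scan part, the part is mapped into common rotation-axis coordinates with the axis as the x-axis, as the description suggests. This is a minimal illustration under our own assumptions, not the patent's implementation; all function names are ours.

```python
import numpy as np

def axis_frame(center, axis_dir):
    """Build a rigid transform that maps world points into common
    rotation-axis coordinates: origin at the circle center, x-axis
    along the rotation axis (e.g. its down direction)."""
    x = axis_dir / np.linalg.norm(axis_dir)
    # pick any vector not parallel to x to complete an orthonormal basis
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, x)) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    y = np.cross(x, helper)
    y /= np.linalg.norm(y)
    z = np.cross(x, y)
    R = np.vstack([x, y, z])   # rows are the new basis axes
    t = -R @ center            # translates the circle center to the origin
    return R, t

def to_common_coords(points, center, axis_dir):
    """Map an (N, 3) array of points into the common-axis frame."""
    R, t = axis_frame(center, axis_dir)
    return points @ R.T + t
```

A part transformed this way has its trajectory center at the origin and the rotation axis along x, so two parts sharing the axis only differ by an axial shift and a rotation about x.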
Reconstruction of each part:
When the turntable 101 rotates the object 102 around the turntable rotation axis 106, the cameras 104, 105 capture the 3D structure of different parts 401, 403 of the object 102. For instance, the first camera 104 captures the bottom part 401 and forms a circular camera trajectory 402 (see figure 4). Similarly, the second camera 105 captures the top part 403 and forms a circular camera trajectory 404 (see figure 4). These parts 401, 403 share a common overlapping area 107, which helps to align the two parts 401, 403.
The first alignment pipeline/method:
In the first alignment pipeline or method, shown in figure 2, it is assumed that the physical arrangement of the cameras 104, 105 is known relatively well. For instance, the relative pose of the second camera 105 with respect to the first camera 104 can be manually calibrated. This manual calibration might be inaccurate, but it gives good hints to help with the alignment later.
Firstly, the pipeline starts in the circle estimation step 201 with estimating a circle on a plane of the first camera trajectory 402 and the second camera trajectory 404. Because the turntable 101 rotates the object 102 around a common axis 106, the camera trajectory 402, 404 of each fixed camera 104, 105 forms a circle on a 3D plane. This circle has a center 405, 406, which lies on the common rotation axis 106, and the 3D plane is perpendicular to the axis 106. Secondly, the system transforms the scan parts 401, 403 into a chosen common coordinate system along the common rotation axis 106. For instance, the projection of the origin of the first camera 104 on the common rotation axis 106 can be chosen as the common coordinate origin. Preferably, the down direction of the axis 106 is chosen as the x-axis of the common coordinates.
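A minimal sketch of such a circle estimation, assuming the per-frame camera positions of one trajectory are available: fit a plane by SVD, project the positions into the plane, and solve a linear least-squares circle fit. The patent does not prescribe a particular fitting method; this is one common choice, and the function name is ours.

```python
import numpy as np

def estimate_circle(cam_positions):
    """Estimate the circle traced by a fixed camera while the turntable
    rotates.  Returns (center_3d, plane_normal, radius)."""
    P = np.asarray(cam_positions, dtype=float)
    centroid = P.mean(axis=0)
    # plane normal = direction of least variance (smallest singular vector)
    _, _, vt = np.linalg.svd(P - centroid)
    normal = vt[2]
    u, v = vt[0], vt[1]                      # in-plane basis vectors
    xy = (P - centroid) @ np.vstack([u, v]).T
    # algebraic circle fit: |p|^2 = 2 c.p + (r^2 - |c|^2), linear in (c, d)
    A = np.hstack([2.0 * xy, np.ones((len(xy), 1))])
    b = (xy ** 2).sum(axis=1)
    (cx, cy, d), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(d + cx ** 2 + cy ** 2)
    center = centroid + cx * u + cy * v
    return center, normal, radius
```

The returned normal gives the estimated rotation axis direction 106, and the center lies on that axis, which is exactly the property the pipeline exploits.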
By exploiting prior knowledge about the physical arrangement, the system can transform the different scan parts 401, 403 into common coordinates with relatively good overlap between the scan parts 401, 403.
Thirdly, the border vertices of the reconstructed parts, illustrated as the mesh border clipping component in figure 2, are removed. We observe that the border vertices of the reconstructed parts 401, 403 are quite noisy. Thus, removing those vertices helps to reduce the influence of noise on the point-cloud alignment 204.
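One possible reading of the mesh border clipping, sketched below: an edge that belongs to exactly one triangle is a border edge, and its endpoint vertices are clipped, optionally for several rings. The exact clipping criterion is not specified in the patent; this criterion and the function name are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def clip_border_vertices(vertices, faces, rings=1):
    """Mark border vertices of a triangle mesh for removal.
    Returns (keep_mask, remaining_faces); a vertex is a border vertex
    if it lies on an edge used by only one triangle."""
    keep = np.ones(len(vertices), dtype=bool)
    faces = [tuple(f) for f in faces]
    for _ in range(rings):
        edge_count = Counter()
        for a, b, c in faces:
            for e in ((a, b), (b, c), (c, a)):
                edge_count[tuple(sorted(e))] += 1
        border = {v for e, n in edge_count.items() if n == 1 for v in e}
        if not border:
            break
        keep[list(border)] = False
        # drop every face that touches a clipped vertex
        faces = [f for f in faces if not border.intersection(f)]
    return keep, faces
```

On a closed mesh no edge is single-sided, so nothing is clipped; on an open scan part only the noisy rim is removed.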
Finally, we apply the point-cloud alignment 204 in order to align the scan parts 401, 403 and form the final aligned parts in common coordinates as shown in figure 5.
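As a toy stand-in for the point-cloud alignment, the sketch below searches for the axial offset along the common rotation axis (taken as the x-axis of the common coordinates, per the description) that minimizes the mean nearest-neighbour distance between the two parts. A real system would more likely use ICP or a similar registration method; the search range and function name are assumptions.

```python
import numpy as np

def align_axial(part_a, part_b, search=None):
    """Return the offset along the x-axis (the common rotation axis in
    common coordinates) that best aligns part_b to part_a, found by a
    brute-force 1D search over candidate offsets."""
    if search is None:
        search = np.linspace(-0.5, 0.5, 101)
    best_offset, best_err = 0.0, np.inf
    for dx in search:
        shifted = part_b + np.array([dx, 0.0, 0.0])
        # brute-force nearest neighbours (fine for small point clouds)
        d = np.linalg.norm(part_a[:, None, :] - shifted[None, :, :], axis=2)
        err = d.min(axis=1).mean()
        if err < best_err:
            best_offset, best_err = dx, err
    return best_offset
```

The common-axis initialization from the earlier steps is what makes such a one-dimensional search (or a well-conditioned ICP) feasible in the first place.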
The second alignment pipeline/method:
In the second pipeline or method, shown in figure 3, the system does not require knowledge about the physical arrangement of the cameras 104, 105 (figure 1). Different from the first pipeline/method (figure 2), two additional steps, pairing 304 and voting 306, are added to the pipeline/method. The other steps are similar to the first pipeline/method shown in figure 2.
Before doing the point-cloud alignment 305, we need to form all possible combinations of arrangements between the scan parts 401, 403. Up to this point, we only know that these scan parts 401, 403 share a common rotation axis 106. So, we can use that information to limit the possible physical arrangements between the scan parts 401, 403.
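Forming these combinations can be as simple as enumerating ordered pairs of scan parts together with a coarse set of initial offsets along the shared rotation axis; the identifiers and offsets below are illustrative, since the patent only requires that all plausible arrangements be formed.

```python
from itertools import permutations

def candidate_pairs(part_ids, axial_offsets):
    """Enumerate candidate arrangements to feed into point-cloud
    alignment when the camera layout is unknown: every ordered pair of
    scan parts, combined with each coarse axial starting offset."""
    return [(a, b, dx)
            for a, b in permutations(part_ids, 2)
            for dx in axial_offsets]
```

Each candidate is then aligned independently, and the voting step selects among the results.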
In the final step of this pipeline, we collect the alignment results of the possible pairs of scan parts 401, 403. Then, we apply voting to find good matching results depending on the point-cloud alignment errors and the inlier ratio.
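The voting can be sketched as picking the arrangement with the best score. The scoring formula below (inlier ratio discounted by residual error) is our assumption, since the patent only names the criteria, not their weighting.

```python
def vote_best(alignment_results):
    """Pick the best candidate arrangement by score.  Each result is a
    tuple (pair_id, alignment_error, inlier_ratio); higher inlier ratio
    and lower residual error win."""
    def score(res):
        _, error, inlier_ratio = res
        return inlier_ratio / (1.0 + error)
    return max(alignment_results, key=score)
```

In practice the score could also incorporate the convergence rate mentioned in the description.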
The invention is not limited to the embodiments shown or described. Rather, any and all combinations of the individual features, as shown in the figures or described in the description, are subject matter of the invention to the extent that a corresponding combination appears possible and sensible.
The above described methods and/or 3D scanning system allow:
1. a highly efficient alignment of 3D scan parts 401, 403 on a turntable 101;
2. an incremental alignment of scan parts 401, 403, which share an overlapping observation;
3. a cross-camera pose forwarding between different scans of a common object 102 on a turntable 101.
LIST OF REFERENCE CHARACTERS
100 3D scanning system
101 turntable
102 object
103 rotation direction of the turntable
104 first camera
105 second camera
106 turntable rotation axis
107 common overlapping area
108 processor
201 circle estimation step
202 rotation axis alignment step
203 mesh border clipping step
204 point-cloud alignment step
301 circle estimation step
302 rotation axis alignment step
303 mesh border clipping step
304 pairing step
305 point-cloud alignment step
306 voting step
401 first scan part of the object
402 first circular camera trajectory of the first camera
403 second scan part of the object
404 second circular camera trajectory of the second camera
405 first center of the first circular camera trajectory
406 second center of the second circular camera trajectory

Claims

Patent Claims
1. Method for forming a 3D scanned model of an object comprising the following steps:
rotating an object with a turntable around a turntable rotation axis, capturing a first bottom scan part of the object with a first camera and a second top scan part of the object with a second camera, wherein these scan parts share a common overlapping area and the cameras' positions are fixed while the object is rotated by the turntable so that each camera forms a circular camera trajectory, and
aligning the two scan parts to each other by using the information that the circular camera trajectories are perpendicular to the common turntable rotation axis and that the common turntable rotation axis goes through the centers of the circular camera trajectories.
2. Method according to the previous claim, wherein a first circle on a plane of the first camera trajectory and a second circle on the plane of the second camera trajectory are estimated.
3. Method according to one or several of the previous claims, wherein the first bottom scan part and the second top scan part are transformed into common rotation axis coordinates and/or into a common coordinate system along the common turntable rotation axis.
4. Method according to one or several of the previous claims, wherein the projection of the first or second camera origin on the common turntable rotation axis is chosen as a common coordinate origin.
5. Method according to one or several of the previous claims, wherein the common turntable rotation axis, in particular the down direction, is chosen as the x-axis of the common coordinate system.
6. Method according to one or several of the previous claims, wherein the first circle and the second circle are aligned concentrically with each other and with the common turntable rotation axis.
7. Method according to one or several of the previous claims, wherein a mesh border clipping is performed to clean border vertices of meshes of the first bottom scan part and the second top scan part.
8. Method according to one or several of the previous claims, wherein a point-cloud alignment is performed to align the two scan parts, in particular in axial and/or peripheral direction of the common turntable rotation axis, to each other and to form the 3D scanned model as a final aligned part in the common coordinates.
9. Method according to one or several of the previous claims, wherein the orientations of the two scan parts around the common turntable rotation axis are aligned to each other and/or wherein the axial positions of the two scan parts in the direction of the common turntable rotation axis are aligned to each other.
10. Method according to one or several of the previous claims, wherein the knowledge of the physical arrangement of the first camera and the second camera, in particular the relative pose of the second camera to the first camera, is used for the point-cloud alignment.
11. Method according to one or several of the previous claims, wherein the knowledge of the physical arrangement of the first camera and the second camera, in particular the relative pose of the second camera to the first camera, is not used for the point-cloud alignment.
12. Method according to the previous claim 11, wherein upstream of the point-cloud alignment step a pairing step and/or downstream of the point-cloud alignment step a voting step is performed.
13. Method according to one or several of the previous claims 11 to 12, wherein in the pairing step potential matches of the scan parts are paired and/or all possible combinations of arrangements between the scan parts are formed.
14. Method according to one or several of the previous claims 11 to 13, wherein in the voting step alignment results of possible pairs of scan parts are collected and/or a voting is applied to find good matching results depending on point-cloud alignment errors and inlier ratio.
15. 3D scanning system for forming a 3D scanned model of an object comprising:
a turntable for rotating an object around a turntable rotation axis, a first camera for capturing a first bottom scan part of the object, a second camera for capturing a second top scan part of the object, wherein these scan parts share a common overlapping area and the cameras' positions are fixed while the object is rotated by the turntable so that each camera forms a circular camera trajectory, and
a processor operating with a method as set forth in one or several of the previous claims,
wherein the processor aligns the two scan parts to each other by using the information that the circular camera trajectories are perpendicular to the common turntable rotation axis and that the common turntable rotation axis goes through the centers of the circular camera trajectories.
PCT/EP2017/067671 2016-07-13 2017-07-13 Alignment of scan parts on a turntable WO2018011335A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102016112890.2 2016-07-13
DE102016112890 2016-07-13

Publications (1)

Publication Number Publication Date
WO2018011335A1 2018-01-18

Family

ID=59631721

Country Status (1)

Country Link
WO (1) WO2018011335A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030038801A1 (en) * 2001-08-24 2003-02-27 Sanyo Electric Co., Ltd. Three dimensional modeling apparatus
US20030202691A1 (en) * 2002-04-24 2003-10-30 Paul Beardsley Calibration of multiple cameras for a turntable-based 3D scanner
US20150381968A1 (en) * 2014-06-27 2015-12-31 A9.Com, Inc. 3-d model generation
US20160078663A1 (en) 2010-06-08 2016-03-17 Styku, Inc. Cloud server body scan data system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112578327A (en) * 2020-12-01 2021-03-30 深圳市通用测试系统有限公司 Calibration method, equipment and storage medium of spherical scanning test system
CN112578327B (en) * 2020-12-01 2023-09-12 深圳市通用测试系统有限公司 Calibration method, device and storage medium of spherical scanning test system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17752294; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 17752294; Country of ref document: EP; Kind code of ref document: A1)