WO2016089431A1 - Using depth-camera images to speed registration of three-dimensional scans - Google Patents

Using depth-camera images to speed registration of three-dimensional scans

Info

Publication number
WO2016089431A1
WO2016089431A1
Authority
WO
WIPO (PCT)
Prior art keywords
scanner
measuring device
processor system
depth
coordinates
Prior art date
Application number
PCT/US2014/069185
Other languages
English (en)
Inventor
Oliver Zweigle
Bernd-Dietmar Becker
Reinhard Becker
Original Assignee
Faro Technologies, Inc.
Priority date
Filing date
Publication date
Priority claimed from US14/559,367 (US9618620B2)
Application filed by Faro Technologies, Inc. filed Critical Faro Technologies, Inc.
Priority to GB1708697.6A (GB2547391A)
Priority to DE112014007234.6T (DE112014007234T5)
Publication of WO2016089431A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06 Systems determining position data of a target
    • G01S 17/42 Simultaneous measurement of distance and other co-ordinates
    • G01S 17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S 17/87 Combinations of systems using electromagnetic waves other than radio waves
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S 17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G01S 7/00 Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00
    • G01S 7/003 Transmission of data between radar, sonar or lidar systems and remote stations
    • G01S 7/48 Details of systems according to group G01S 17/00
    • G01S 7/4808 Evaluating distance, position or velocity data

Definitions

  • U.S. Patent No. 8,705,016 ('016) describes a laser scanner which, through use of a rotatable mirror, emits a light beam into its environment to generate a three-dimensional (3D) scan.
  • a 3D laser scanner of this type steers a beam of light to a non-cooperative target such as a diffusely scattering surface of an object.
  • a distance meter in the device measures a distance to the object, and angular encoders measure the angles of rotation of two axles in the device. The measured distance and two angles enable a processor in the device to determine the 3D coordinates of the target.
  • a TOF laser scanner is a scanner in which the distance to a target point is determined based on the speed of light in air between the scanner and a target point.
  • Laser scanners are typically used for scanning closed or open spaces such as interior areas of buildings, industrial installations and tunnels. They may be used, for example, in industrial applications and accident reconstruction applications.
  • a laser scanner optically scans and measures objects in a volume around the scanner through the acquisition of data points representing object surfaces within the volume. Such data points are obtained by transmitting a beam of light onto the objects and collecting the reflected or scattered light to determine the distance, two-angles (i.e., an azimuth and a zenith angle), and optionally a gray-scale value.
  • This raw scan data is collected, stored and sent to a processor or processors to generate a 3D image representing the scanned area or object.
  • Generating an image requires at least three values for each data point. These three values may include the distance and two angles, or may be transformed values, such as the x, y, z coordinates.
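  • To make this conversion concrete, the following minimal sketch (illustrative only, not from the patent; the axis and angle conventions are assumptions) transforms one measured (distance, azimuth, zenith) triple into x, y, z coordinates:

```python
import math

def to_cartesian(distance, azimuth, zenith):
    # Assumed convention: zenith measured down from the vertical axis,
    # azimuth in the horizontal plane, origin at the gimbal point.
    # Real scanners may differ in sign and axis conventions.
    x = distance * math.sin(zenith) * math.cos(azimuth)
    y = distance * math.sin(zenith) * math.sin(azimuth)
    z = distance * math.cos(zenith)
    return x, y, z

# Example: a point 10 m away at 45 deg azimuth, 30 deg from vertical
print(to_cartesian(10.0, math.radians(45.0), math.radians(30.0)))
```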
  • an image is also based on a fourth gray-scale value, which is a value related to irradiance of scattered light returning to the scanner.
  • the beam steering mechanism includes a first motor that steers the beam of light about a first axis by a first angle that is measured by a first angular encoder (or other angle transducer).
  • the beam steering mechanism also includes a second motor that steers the beam of light about a second axis by a second angle that is measured by a second angular encoder (or other angle transducer).
  • Many contemporary laser scanners include a camera mounted on the laser scanner for gathering camera digital images of the environment and for presenting the camera digital images to an operator of the laser scanner. By viewing the camera images, the operator of the scanner can determine the field of view of the measured volume and adjust settings on the laser scanner to measure over a larger or smaller region of space.
  • the camera digital images may be transmitted to a processor to add color to the scanner image.
  • To generate a color scanner image at least three positional coordinates (such as x, y, z) and three color values (such as red, green, blue "RGB”) are collected for each data point.
  • a 3D image of a scene may require multiple scans from different registration positions.
  • the overlapping scans are registered in a joint coordinate system, for example, as described in U.S. Published Patent Application No. 2012/0069352 ('352), the contents of which are incorporated herein by reference.
  • Such registration is performed by matching targets in overlapping regions of the multiple scans.
  • the targets may be artificial targets such as spheres or checkerboards or they may be natural features such as corners or edges of walls.
  • Some registration procedures involve relatively time-consuming manual procedures such as identifying by a user each target and matching the targets obtained by the scanner in each of the different registration positions.
  • Some registration procedures also require establishing an external "control network" of registration targets measured by an external device such as a total station.
  • the registration method disclosed in '352 eliminates the need for user matching of registration targets and establishing of a control network. [0009] However, even with the simplifications provided by the methods of '352, it is today still difficult to remove the need for a user to carry out the manual registration steps as described above. In a typical case, only 30% of 3D scans can be automatically registered to scans taken from other registration positions. Today such registration is seldom carried out at the site of the 3D measurement but instead in an office following the scanning procedure. In a typical case, a project requiring a week of scanning requires two to five days to manually register the multiple scans. This adds to the cost of the scanning project. Furthermore, the manual registration process sometimes reveals that the overlap between adjacent scans was insufficient to provide proper registration.
  • the manual registration process may reveal that certain sections of the scanning environment have been omitted.
  • the operator must return to the site to obtain additional scans.
  • a building that was available for scanning at one time may be impossible to access at a later time.
  • a forensics scene of an automobile accident or a homicide is often not available for taking of scans for more than a short time after the incident.
  • a three-dimensional (3D) measuring device including: a processor system including at least one of a 3D scanner controller, an external computer, and a cloud computer configured for remote network access; a 3D scanner having a first light source, a first beam steering unit, a first angle measuring device, a second angle measuring device, and a first light receiver, the first light source configured to emit a first beam of light, the first beam steering unit configured to steer the first beam of light to a first direction onto a first object point, the first direction determined by a first angle of rotation about a first axis and a second angle of rotation about a second axis, the first angle measuring device configured to measure the first angle of rotation and the second angle measuring device configured to measure the second angle of rotation, the first light receiver configured to receive first reflected light, the first reflected light being a portion of the first beam of light reflected by the first object point, the first light receiver configured to produce a first electrical signal in response to the first reflected light.
  • a method for measuring and registering three-dimensional (3D) coordinates including: providing a 3D measuring device that includes a processor system, a 3D scanner, a depth camera, and a moveable platform, the processor system having at least one of a 3D scanner controller, an external computer, and a cloud computer configured for remote network access, the 3D scanner having a first light source, a first beam steering unit, a first angle measuring device, a second angle measuring device, and a first light receiver, the first light source configured to emit a first beam of light, the first beam steering unit configured to steer the first beam of light to a first direction onto a first object point, the first direction determined by a first angle of rotation about a first axis and a second angle of rotation about a second axis, the first angle measuring device configured to measure the first angle of rotation and the second angle measuring device configured to measure the second angle of rotation, the first light receiver configured to receive first reflected light, the first reflected light being a portion
  • FIG. 1 is a perspective view of a laser scanner in accordance with an embodiment of the invention
  • FIG. 2 is a side view of the laser scanner illustrating a method of measurement according to an embodiment
  • FIG. 3 is a schematic illustration of the optical, mechanical, and electrical components of the laser scanner according to an embodiment
  • FIG. 4 depicts a planar view of a 3D scanned image according to an embodiment
  • FIG. 5 depicts an embodiment of a panoramic view of a 3D scanned image generated by mapping a planar view onto a sphere according to an embodiment
  • FIGs. 6A, 6B and 6C depict embodiments of a 3D view of a 3D scanned image according to an embodiment
  • FIG. 7 depicts an embodiment of a 3D view made up of an image of the object of FIG. 6B but viewed from a different perspective and shown only partially, according to an embodiment
  • FIG. 8A is a perspective view of a 3D measuring device according to an embodiment
  • FIG. 8B is a front view of a camera used to collect depth-camera image data while the 3D measuring device moves along a horizontal plane according to an embodiment
  • FIG. 8C is a perspective view of a 3D measuring device according to an embodiment
  • FIG. 9 is a block diagram depicting a processor system according to an embodiment
  • FIG. 10 is a schematic representation of a 3D scanner measuring an object from two registration positions according to an embodiment
  • FIG. 11 is a schematic representation of a depth camera capturing depth-camera images at a plurality of intermediate positions as the 3D measuring device is moved along a horizontal plane, according to an embodiment
  • FIG. 12 shows the depth camera capturing depth-camera images at a plurality of intermediate positions as the 3D measuring device is moved along a horizontal plane, according to an embodiment
  • FIG. 13 shows the depth camera capturing depth-camera images at a plurality of intermediate positions as the 3D measuring device is moved along a horizontal plane, according to an embodiment
  • FIGs. 14A and 14B illustrate a method for finding changes in the position and orientation of the 3D scanner over time according to an embodiment
  • FIG. 15 includes steps in a method for measuring and registering 3D coordinates with a 3D measuring device according to an embodiment.
  • the present invention relates to a 3D measuring device having a 3D scanner and a depth camera.
  • the depth camera may be an integral part of the 3D scanner or a separate camera unit.
  • the 3D measuring device is used in two modes, a first mode in which the 3D scanner obtains 3D coordinates of an object surface over a 3D region of space and a second mode in which depth-camera images are obtained as the camera is moved between positions at which 3D scans are taken.
  • the depth-camera images are used together with the 3D scan data from the 3D scanner to provide automatic registration of the 3D scans.
  • a laser scanner 20 is shown for optically scanning and measuring the environment surrounding the laser scanner 20.
  • the laser scanner 20 has a measuring head 22 and a base 24.
  • the measuring head 22 is mounted on the base 24 such that the laser scanner 20 may be rotated about a vertical axis 23.
  • the measuring head 22 includes a gimbal point 27 that is a center of rotation about the vertical axis 23 and a horizontal axis 25.
  • the measuring head 22 has a rotary mirror 26, which may be rotated about the horizontal axis 25.
  • the rotation about the vertical axis may be about the center of the base 24.
  • the terms vertical axis and horizontal axis refer to the scanner in its normal upright position.
  • azimuth axis and zenith axis may be substituted for the terms vertical axis and horizontal axis, respectively.
  • pan axis or standing axis may also be used as an alternative to vertical axis.
  • the measuring head 22 is further provided with an electromagnetic radiation emitter, such as light emitter 28, for example, that emits an emitted light beam 30.
  • the emitted light beam 30 is a coherent light beam such as a laser beam.
  • the laser beam may have a wavelength range of approximately 300 to 1600 nanometers, for example 790 nanometers, 905 nanometers, 1550 nm, or less than 400 nanometers. It should be appreciated that other electromagnetic radiation beams having greater or smaller wavelengths may also be used.
  • the emitted light beam 30 is amplitude or intensity modulated, for example, with a sinusoidal waveform or with a rectangular waveform.
  • the emitted light beam 30 is emitted by the light emitter 28 onto the rotary mirror 26, where it is deflected to the environment.
  • a reflected light beam 32 is reflected from the environment by an object 34.
  • the reflected or scattered light is intercepted by the rotary mirror 26 and directed into a light receiver 36.
  • the directions of the emitted light beam 30 and the reflected light beam 32 result from the angular positions of the rotary mirror 26 and the measuring head 22 about the axes 25 and 23, respectively. These angular positions in turn depend on the corresponding rotary drives or motors.
  • the controller 38 determines, for a multitude of measuring points X, a corresponding number of distances d between the laser scanner 20 and the points X on object 34.
  • the distance to a particular point X is determined based at least in part on the speed of light in air through which electromagnetic radiation propagates from the device to the object point X.
  • the phase shift of modulation in the light emitted by the laser scanner 20 and returned from the point X is determined and evaluated to obtain a measured distance d.
  • the speed of light in air depends on the properties of the air such as the air temperature, barometric pressure, relative humidity, and concentration of carbon dioxide. Such air properties influence the index of refraction n of the air.
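  • As a concrete illustration of the phase-based principle (a minimal sketch with assumed values, not the scanner's actual firmware), the distance follows from the measured phase shift, the modulation frequency, and the air's refractive index:

```python
import math

C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s

def phase_to_distance(phase_shift_rad, mod_freq_hz, n_air=1.000293):
    # The round trip covers 2*d, so d = (c/n) * phi / (4*pi*f). The result
    # is unambiguous only within (c/n) / (2*f); practical scanners combine
    # several modulation frequencies to extend this range.
    c_air = C_VACUUM / n_air  # air temperature, pressure, etc. set n_air
    return c_air * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

# Example: a pi/2 phase shift at 10 MHz modulation is about 3.75 m
print(phase_to_distance(math.pi / 2.0, 10e6))
```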
  • a laser scanner of the type discussed herein is based on the time-of-flight (TOF) of the light in the air (the round-trip time for the light to travel from the device to the object and back to the device).
  • examples of TOF scanners include scanners that measure round trip time using the time interval between emitted and returning pulses (pulsed TOF scanners), scanners that modulate light sinusoidally and measure phase shift of the returning light (phase-based scanners), as well as many other types.
  • a method of measuring distance based on the time-of-flight of light depends on the speed of light in air and is therefore easily distinguished from methods of measuring distance based on triangulation.
  • Triangulation-based methods involve projecting light from a light source along a particular direction and then intercepting the light on a camera pixel along a particular direction.
  • the method of triangulation enables the distance to the object to be determined based on one known length and two known angles of a triangle.
  • the method of triangulation does not directly depend on the speed of light in air.
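  • A minimal numerical sketch of this triangle solution (illustrative only; the two angles are assumed to be the interior angles at the ends of the baseline):

```python
import math

def triangulate_range(baseline_m, angle_source_rad, angle_camera_rad):
    # Law of sines: the side from the camera to the object point is
    # opposite the angle at the light source. Note the absence of any
    # dependence on the speed of light, in contrast to TOF measurement.
    third_angle = math.pi - angle_source_rad - angle_camera_rad
    return baseline_m * math.sin(angle_source_rad) / math.sin(third_angle)

# Example: 0.2 m baseline with 80 and 85 degree interior angles
print(triangulate_range(0.2, math.radians(80.0), math.radians(85.0)))
```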
  • the scanning of the volume around the laser scanner 20 takes place by rotating the rotary mirror 26 relatively quickly about axis 25 while rotating the measuring head 22 relatively slowly about axis 23, thereby moving the assembly in a spiral pattern.
  • the rotary mirror rotates at a maximum speed of 5820 revolutions per minute.
  • the gimbal point 27 defines the origin of the local stationary reference system.
  • the base 24 rests in this local stationary reference system.
  • the scanner 20 may also collect gray-scale information related to the received optical power (equivalent to the term "brightness.")
  • the gray-scale value may be determined at least in part, for example, by integration of the bandpass-filtered and amplified signal in the light receiver 36 over a measuring period attributed to the object point X.
  • the measuring head 22 may include a display device 40 integrated into the laser scanner 20.
  • the display device 40 may include a graphical touch screen 41, as shown in FIG. 1, which allows the operator to set the parameters or initiate the operation of the laser scanner 20.
  • the screen 41 may have a user interface that allows the operator to provide measurement instructions to the device, and the screen may also display
  • the laser scanner 20 includes a carrying structure 42 that provides a frame for the measuring head 22 and a platform for attaching the components of the laser scanner 20.
  • the carrying structure 42 is made from a metal such as aluminum.
  • the carrying structure 42 includes a traverse member 44 having a pair of walls 46, 48 on opposing ends.
  • the walls 46, 48 are parallel to each other and extend in a direction opposite the base 24.
  • Shells 50, 52 are coupled to the walls 46, 48 and cover the components of the laser scanner 20.
  • the shells 50, 52 are made from a plastic material, such as polycarbonate or polyethylene for example.
  • the shells 50, 52 cooperate with the walls 46, 48 to form a housing for the laser scanner 20.
  • a pair of yokes 54, 56 are arranged to partially cover the respective shells 50, 52.
  • the yokes 54, 56 are made from a suitably durable material, such as aluminum for example, that assists in protecting the shells 50, 52 during transport and operation.
  • the yokes 54, 56 each includes a first arm portion 58 that is coupled, such as with a fastener for example, to the traverse 44 adjacent the base 24.
  • the arm portion 58 for each yoke 54, 56 extends from the traverse 44 obliquely to an outer corner of the respective shell 50, 52.
  • the yokes 54, 56 extend along the side edge of the shell to an opposite outer corner of the shell.
  • Each yoke 54, 56 further includes a second arm portion that extends obliquely to the walls 46, 48. It should be appreciated that the yokes 54, 56 may be coupled to the traverse 44, the walls 46, 48 and the shells 50, 52 at multiple locations.
  • the pair of yokes 54, 56 cooperate to circumscribe a convex space within which the two shells 50, 52 are arranged.
  • the yokes 54, 56 cooperate to cover all of the outer edges of the shells 50, 52, while the top and bottom arm portions project over at least a portion of the top and bottom edges of the shells 50, 52. This provides advantages in protecting the shells 50, 52 and the measuring head 22 from damage during transportation and operation.
  • the yokes 54, 56 may include additional features, such as handles to facilitate the carrying of the laser scanner 20 or attachment points for accessories for example.
  • a prism 60 is provided on top of the traverse 44.
  • the prism extends parallel to the walls 46, 48.
  • the prism 60 is integrally formed as part of the carrying structure 42.
  • the prism 60 is a separate component that is coupled to the traverse 44.
  • the measured distances d may depend on signal strength, which may be measured in optical power entering the scanner or optical power entering optical detectors within the light receiver 36, for example.
  • a distance correction is stored in the scanner as a function (possibly a nonlinear function) of distance to a measured point and optical power (generally unscaled quantity of light power sometimes referred to as "brightness") returned from the measured point and sent to an optical detector in the light receiver 36. Since the prism 60 is at a known distance from the gimbal point 27, the measured optical power level of light reflected by the prism 60 may be used to correct distance measurements for other measured points, thereby allowing for compensation to correct for the effects of environmental variables such as temperature. In the exemplary embodiment, the resulting correction of distance is performed by the controller 38.
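  • One plausible form for such a stored correction is a two-dimensional table indexed by distance and returned optical power; the sketch below (hypothetical table values and function names, not the scanner's actual calibration data) interpolates it:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical calibration table: distance correction (m) vs. measured
# distance and returned optical power ("brightness"). In a real scanner
# these values would come from a factory compensation procedure, with the
# prism reference used to track drift over temperature.
distances_m = np.array([1.0, 5.0, 10.0, 25.0, 50.0])
powers = np.array([0.1, 0.3, 0.6, 1.0])      # normalized brightness
correction_m = np.zeros((5, 4))              # placeholder values
correction_m[:, 0] = 0.004                   # low return power: larger bias
correction_m[:, 3] = 0.001                   # high return power: small bias

lookup = RegularGridInterpolator((distances_m, powers), correction_m)

def corrected_distance(raw_distance_m, power):
    # Subtract the interpolated (distance, power) bias from the raw reading
    return raw_distance_m - float(lookup([[raw_distance_m, power]])[0])

print(corrected_distance(7.2, 0.45))
```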
  • the base 24 is coupled to a swivel assembly (not shown) such as that described in commonly owned U.S. Patent No. 8,705,012 ('012), which is incorporated by reference herein.
  • the swivel assembly is housed within the carrying structure 42 and includes a motor that is configured to rotate the measuring head 22 about the axis 23.
  • An auxiliary image acquisition device 66 may be a device that captures and measures a parameter associated with the scanned volume or the scanned object and provides a signal representing the measured quantities over an image acquisition area.
  • the auxiliary image acquisition device 66 may be, but is not limited to, a pyrometer, a thermal imager, an ionizing radiation detector, or a millimeter-wave detector.
  • the auxiliary image acquisition device 66 is a color camera.
  • a central color camera (first image acquisition device) 112 is located internally to the scanner and may have the same optical axis as the 3D scanner device.
  • the first image acquisition device 112 is integrated into the measuring head 22 and arranged to acquire images along the same optical pathway as emitted light beam 30 and reflected light beam 32.
  • the light from the light emitter 28 reflects off a fixed mirror 116 and travels to dichroic beam-splitter 118 that reflects the light 117 from the light emitter 28 onto the rotary mirror 26.
  • the dichroic beamsplitter 118 allows light to pass through at wavelengths different than the wavelength of light 117.
  • the light emitter 28 may be a near infrared laser light (for example, light at wavelengths of 780 nm or 1150 nm), with the dichroic beam-splitter 118 configured to reflect the infrared laser light while allowing visible light (e.g., wavelengths of 400 to 700 nm) to transmit through.
  • the determination of whether the light passes through the beam-splitter 118 or is reflected depends on the polarization of the light.
  • the digital camera 112 obtains 2D images of the scanned area to capture color data to add to the scanned image.
  • FIG. 4 depicts an example of a planar view of a 3D scanned image 400.
  • the planar view depicted in FIG. 4 maps an image based on direct mapping of data collected by the scanner.
  • the scanner collects data in a spherical pattern but with data points collected near the poles more tightly compressed than those collected nearer the horizon.
  • each point collected near a pole represents a smaller solid angle than does each point collected nearer the horizon.
  • data from the scanner may be directly represented in rows and columns
  • data in a planar image is conveniently presented in a rectilinear format, as shown in FIG. 4.
  • straight lines appear to be curved, as for example the straight fence railings 420 that appear curved in the planar view of the 3D image.
  • the planar view may be a 3D unprocessed scanned image displaying just the gray-scale values received from the distance sensor arranged in columns and rows as they were recorded.
  • the 3D unprocessed scanned image of the planar view may be in full resolution or reduced resolution depending on system characteristics (e.g., display device, storage, processor).
  • the planar view may be a 3D processed scanned image that depicts either gray-scale values (resulting from the light irradiance measured by the distance sensor for each pixel) or color values (resulting from camera images which have been mapped onto the scan).
  • the planar view extracted from the 3D scanner is ordinarily a gray-scale or color image; FIG. 4 is shown as a line drawing for clarity in document reproduction.
  • the user interface associated with the display unit, which may be integral to the laser scanner, may provide a point selection mechanism, which in FIG. 4 is the cursor 410.
  • the point selection mechanism may be used to reveal dimensional information about the volume of space being measured by the laser scanner.
  • the row and column at the location of the cursor are indicated on the display at 430.
  • the two measured angles and one measured distance (the 3D coordinates in a spherical coordinate system) at the cursor location are indicated on the display at 440.
  • Cartesian XYZ coordinate representations of the cursor location are indicated on the display at 450.
  • FIG. 5 depicts an example of a panoramic view of a 3D scanned image 600 generated by mapping a planar view onto a sphere, or in some cases a cylinder.
  • a panoramic view can be a 3D processed scanned image (such as that shown in FIG. 5) in which 3D information (e.g., 3D coordinates) is available.
  • the panoramic view may be in full resolution or reduced resolution depending on system characteristics.
  • an image such as FIG. 5 is a 2D image that represents a 3D scene when viewed from a particular perspective. In this sense, the image of FIG. 5 is much like an image that might be captured by a 2D camera or a human eye.
  • the panoramic view extracted from the 3D scanner is ordinarily a gray-scale or color image; FIG. 5 is shown as a line drawing for clarity in document reproduction.
  • panoramic view refers to a display in which angular movement is generally possible about a point in space, but translational movement is not possible (for a single panoramic image).
  • 3D view generally refers to a display in which provision is made (through user controls) to enable not only rotation about a fixed point but also translational movement from point to point in space.
  • FIGs. 6A, 6B and 6C depict an example of a 3D view of a 3D scanned image.
  • the 3D view is an example of a 3D processed scanned image.
  • the 3D view may be in full resolution or reduced resolution depending on system characteristics.
  • the 3D view allows multiple registered scans to be displayed in one view.
  • FIG. 6A is a 3D view 710 over which a selection mask 730 has been placed by a user.
  • FIG. 6B is a 3D view 740 in which only that part of the 3D view 710 covered by the selection mask 730 has been retained.
  • FIG. 6C shows the same 3D measurement data as in FIG. 6B.
  • FIG. 7 shows a different view of FIG. 6B, the view in this instance being obtained from a translation and rotation of the observer viewpoint, as well as a reduction in observed area.
  • the 3D views extracted from the 3D scanner are ordinarily gray-scale or color images; FIGs. 6A-C and 7 are shown as line drawings for clarity in document reproduction.
  • FIGs. 8A, 8B, 8C and 9 show an embodiment of a 3D measuring device 800 that includes a 3D scanner 20, a processor system 950, an optional moveable platform 820, and a depth camera at locations discussed further below.
  • the 3D measuring device 800 may be a 3D TOF scanner 20 as described in reference to FIG. 1.
  • the processor system 950 includes one or more processing elements that may include a 3D scanner processor (controller) 38, an external computer 970, and a cloud computer 980.
  • the processors may be microprocessors, field programmable gate arrays (FPGAs), or digital signal processors (DSPs).
  • the one or more processors have access to memory for storing information.
  • the controller 38 represents one or more processors distributed throughout the 3D scanner.
  • only one or two of the processors 38, 970, and 980 is provided in the processor system.
  • Communication among the processors may be through wired links, wireless links, or a combination of wired and wireless links.
  • scan results are uploaded after each scanning session to the cloud (remote network) for storage and future use.
  • the depth-camera data may be sent to the processor system through wired or wireless communication channels.
  • the depth cameras may be either of two types: a central-element depth camera and a triangulation-based depth camera.
  • a central-element depth camera uses a single integrated sensor element combined with an illumination element to determine distance ("depth") and angles from the camera to points on an object.
  • One type of central-element depth camera uses a lens combined with a semiconductor chip to measure round-trip time of light travelling from the camera to the object and back.
  • the Microsoft Xbox One includes a Kinect depth camera that uses an infrared (IR) light source to illuminate a 640x480 pixel photosensitive array. This depth camera is used in parallel with a 640x480 pixel RGB camera that measures red, blue, and green colors.
  • Infrared illumination is provided in the IR illuminators adjacent to the lens and IR array.
  • a central-element depth camera includes a lens and a PMD Technologies PhotonICs 19k-S3 3D chip used in conjunction with an IR light source. The measurement distance range of this 160x120 pixel chip is scalable based on the camera layout. Many other central-element depth cameras and associated IR sources are available today. Most central-element depth cameras include a modulated light source. The light source may use pulse modulation for direct determination of round-trip travel time. Alternatively, the light source may use continuous wave (CW) modulation with sinusoidal or rectangular waveforms to obtain round-trip travel time based on measured phase shift.
  • FIG. 8A shows two possible locations for a central-element depth camera.
  • the central-element depth camera 66 is located on the 3D scanner 20.
  • the depth camera 66 includes an integrated light source.
  • the central-element depth camera 840 is located on the optional moveable platform 820. It may be located on a base of the platform 820 or attached to one or more tripod legs, for example.
  • a central-element depth camera takes the place of central-color camera 112.
  • the light source may be integrated into the central-depth camera package or placed near to it so that the illumination light passes through the dichroic beam splitter 118.
  • the beam splitter 118 may not be a dichroic beam splitter but may instead be one that transmits and reflects the wavelengths used by the central-element depth camera 112.
  • the wavelengths used by the depth camera 112 may be sent from the light emitter 28, reflected off the beam splitter 118 onto the object, and reflected back from the object onto the depth camera.
  • the second type of depth camera is a triangulation-based depth camera.
  • An example of such a camera is the Kinect of the Microsoft Xbox 360, which is a different Kinect than the Kinect of the Microsoft Xbox One described herein above.
  • An IR light source on the Kinect of the Xbox 360 projects a pattern of light onto an object, which is imaged by an IR camera that includes a photosensitive array.
  • the Kinect determines a correspondence between the projected pattern and the image received by the photosensitive array. It uses this information in a triangulation calculation to determine the distance to object points in the measurement volume. This calculation is based partly on the baseline between the projector and the IR camera and partly on the camera pattern received and projector pattern sent out.
  • a triangulation camera cannot be brought arbitrarily close to the light source (pattern projector) as accuracy is reduced with decreasing baseline distance.
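  • The baseline dependence can be made concrete with the pinhole stereo relation Z = f*B/d (an illustrative sketch with assumed numbers, not the Kinect's actual calibration):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Pinhole stereo / structured-light model: Z = f * B / d. Depth error
    # per pixel of disparity error grows roughly as Z**2 / (f * B), which
    # is why accuracy is reduced as the baseline B shrinks.
    return focal_px * baseline_m / disparity_px

# Example: 580 px focal length, 7.5 cm baseline, 12 px disparity -> ~3.6 m
print(depth_from_disparity(580.0, 0.075, 12.0))
```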
  • Many types of triangulation-based depth cameras are available.
  • FIG. 8C shows two possible locations for a triangulation-based depth camera.
  • the triangulation-based depth camera 66 is located on the 3D scanner 20.
  • the depth camera 66 includes a camera 882 and a pattern projector 884. It may also include an optional color camera 886.
  • the triangulation-based depth camera 870 is located on the optional moveable platform 820.
  • the depth camera 870 includes a camera 872 and a pattern projector 874. It may also include a color camera 876.
  • the triangulation-based depth camera 870 may be located on a base of the platform 820 or attached to one or more tripod legs, for example.
  • the depth camera (112, 66 or 840) captures overlapping depth-camera images as the 3D measuring device is moved between positions at which 3D scans are taken.
  • when the depth camera is an internal camera (for example, in place of the central color camera 112) or a camera 66 mounted on the measuring head 22, the camera may optionally be steered about the vertical axis 23 to increase the effective FOV of the depth camera.
  • the laser power from the 3D scanner is turned off as the depth-camera images are collected.
  • the laser power is left on so that the 3D scanner 20 may make 2D scans in a horizontal plane while the depth-camera images are collected.
  • the direction at which the depth camera is pointed is unaffected by rotation of horizontal axis 25 or vertical axis 23.
  • the optional position/orientation sensor 920 in the 3D scanner 20 may include inclinometers (accelerometers), gyroscopes, magnetometers, and altimeters.
  • Usually, devices that include one or more of an inclinometer and a gyroscope are referred to as an inertial measurement unit (IMU).
  • the term IMU is used in a broader sense to include a variety of additional devices that indicate position and/or orientation - for example, magnetometers that indicate heading based on changes in magnetic field direction relative to the earth's magnetic north and altimeters that indicate altitude (height).
  • An example of a widely used altimeter is a pressure sensor.
  • the optional moveable platform 820 enables the 3D measuring device 800 to be moved from place to place, typically along a floor that is approximately horizontal.
  • the optional moveable platform 820 is a tripod that includes wheels 822.
  • the wheels 822 may be locked in place using wheel brakes 824.
  • the wheels 822 are retractable, enabling the tripod to sit stably on three feet attached to the tripod.
  • the tripod has no wheels but is simply pushed or pulled along a surface that is approximately horizontal, for example, a floor.
  • the optional moveable platform 820 is a wheeled cart that may be hand pushed/pulled or motorized.
  • FIG. 10 shows the 3D measuring device 800 moved to a first registration position 1112 in front of an object 1102 that is to be measured.
  • the object 1102 might for example be a wall in a room.
  • the 3D measuring device 800 is brought to a stop and is held in place with brakes, which in an embodiment are brakes 824 on wheels 822.
  • the 3D scanner 20 in the 3D measuring device 800 takes a first 3D scan of the object 1102.
  • the 3D scanner 20 may if desired obtain 3D measurements in all directions except in downward directions blocked by the structure of the 3D measuring device 800.
  • a smaller effective FOV 1130 may be selected to provide a more face-on view of features on the structure.
  • the processor system 950 causes the 3D measuring device 800 to change from 3D scanning mode to a depth-camera imaging mode.
  • the depth-camera imaging data is sent from the camera (112, 66 or 840) to the processor system 950 for mathematical analysis.
  • the scanner begins collecting depth-camera imaging data as the 3D scanning stops.
  • the collection of depth-camera imaging data starts when the processor system 950 receives a signal such as a signal from the position/orientation sensor 920, a signal from a brake release sensor, or a signal sent in response to a command from an operator.
  • the processor system 950 may cause depth-camera imaging data to be collected when the 3D measuring device 800 starts to move, or it may cause depth-camera imaging data to be continually collected, even when the 3D measuring device 800 is stationary.
  • the depth-camera imaging data is collected as the 3D measuring device 800 is moved toward the second registration position 1114.
  • the depth-camera imaging data is collected and processed as the 3D scanner 20 passes through a plurality of 2D measuring positions 1120.
  • the depth camera collects depth-camera imaging data over an effective FOV 1140.
  • the processor system 950 uses the depth-camera imaging data from a plurality of depth-camera images at positions 1120 to determine a position and orientation of the 3D scanner 20 at the second registration position 1114 relative to the first registration position 1112, where the first registration position and the second registration position are known in a 3D coordinate system common to both.
  • the common coordinate system is represented by 2D Cartesian coordinates x, y and by an angle of rotation θ relative to the x or y axis.
  • the x and y axes lie in the horizontal x-y plane of the 3D scanner 20 and may be further based on a direction of a "front" of the 3D scanner 20.
  • An example of such an (x, y, θ) coordinate system is the coordinate system 1410 of FIG. 14A.
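  • A minimal sketch of bookkeeping in such an (x, y, θ) system (illustrative only; the function names are hypothetical): successive frame-to-frame motion estimates are composed into a net pose between registration positions:

```python
import math

def apply_pose(x, y, theta, px, py):
    # Map a point (px, py) from the scanner frame into the common frame,
    # given the scanner pose (x, y, theta) in that frame
    c, s = math.cos(theta), math.sin(theta)
    return (x + c * px - s * py, y + s * px + c * py)

def compose(pose_a, pose_b):
    # Chain planar poses: move by pose_a, then by pose_b expressed in
    # pose_a's frame; used to accumulate frame-to-frame motion estimates
    ax, ay, at = pose_a
    bx, by, bt = pose_b
    nx, ny = apply_pose(ax, ay, at, bx, by)
    return (nx, ny, at + bt)

# Accumulate three small frame-to-frame motions into a net pose
net = (0.0, 0.0, 0.0)
for step in [(0.3, 0.02, math.radians(1.5))] * 3:
    net = compose(net, step)
print(net)
```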
  • on the object 1102 there is a region of overlap 1150 between the first 3D scan (collected at the first registration position 1112) and the second 3D scan (collected at the second registration position 1114).
  • in the overlap region 1150 there are registration targets (which may be natural features of the object 1102) that are seen in both the first 3D scan and the second 3D scan.
  • a problem that often occurs in practice is that, in moving the 3D scanner 20 from the first registration position 1112 to the second registration position 1114, the processor system 950 loses track of the position and orientation of the 3D scanner 20 and hence is unable to correctly associate the registration targets in the overlap regions to enable the registration procedure to be performed reliably.
  • the processor system 950 is able to determine the position and orientation of the 3D scanner 20 at the second registration position 1114 relative to the first registration position 1112. This information enables the processor system 950 to correctly match registration targets in the region of overlap 1150, thereby enabling the registration procedure to be properly completed.
  • FIG. 12 shows the 3D measuring device 800 collecting depth-camera imaging data at selected positions 1120 over an effective FOV 1140. At different positions 1120, the depth camera captures a portion of the object 1102 marked A, B, C, D, and E.
  • FIG. 12 shows the depth camera moving in time relative to a fixed frame of reference of the object 1102.
  • FIG. 13 includes the same information as FIG. 12 but shows it from the frame of reference of the 3D scanner 20 while collecting depth-camera images rather than the frame of reference of the object 1102.
  • This figure makes clear that, in the scanner frame of reference, the positions of features on the object change over time. Hence it is clear that the distance traveled by the 3D scanner 20 between registration position 1 and registration position 2 can be determined from the depth-camera imaging data sent from the camera to the processor system 950.
  • FIG. 14A shows a coordinate system that may be used in FIGs. 14B and 14C.
  • the 2D coordinates x and y are selected to lie on a plane parallel to the horizontal plane of movement of the moveable platform.
  • the angle θ is selected as a rotation angle in the plane, the rotation angle relative to an axis such as x or y.
  • FIGs. 14B, 14C represent a realistic case in which the 3D scanner 20 is moved not exactly on a straight line, for example, nominally parallel to the object 1102, but also to the side. Furthermore, the 3D scanner 20 may be rotated as it is moved.
  • FIG. 14B shows the movement of the object 1102 as seen from the frame of reference of the 3D scanner 20 in traveling from the first registration position to the second registration position.
  • In the scanner frame of reference (that is, as seen from the scanner's point of view), the object 1102 is moving while the depth camera is fixed in place.
  • the portions of the object 1102 seen by the depth camera appear to translate and rotate in time.
  • the depth camera provides a succession of such translated and rotated depth-camera images to the processor system 950.
  • the scanner translates in the +y direction by a distance 1420 shown in FIG. 14B and rotates by an angle 1430, which in this example is +5 degrees.
  • the scanner could equally well have moved in the +x or -x direction by a small amount.
  • the processor system 950 uses the data recorded in successive depth-camera images as seen in the frame of reference of the scanner 20, as shown in FIG. 14B.
  • the processor system 950 keeps track of the translation and rotation of the 3D scanner 20. In this way, the processor system 950 is able to accurately determine the change in the values of x, y, θ as the measuring device 800 moves from the first registration position 1112 to the second registration position 1114.
  • the processor system 950 determines the position and orientation of the 3D measuring device 800 based on a comparison of the succession of depth-camera images and not on fusion of the depth-camera imaging data with 3D scan data provided by the 3D scanner 20 at the first registration position 1112 or the second registration position 1114. Instead, the processor system 950 is configured to determine a first translation value, a second translation value, and a first rotation value that, when applied to a combination of the first scan data and the second scan data, bring the overlapping regions of the two data sets into registration according to an objective mathematical criterion.
  • the translation and rotation may be applied to the first scan data from the 3D scanner, the second scan data from the 3D scanner, or to a combination of the two.
  • a translation applied to the first scan data set is equivalent to a negative of the translation applied to the second scan data set in the sense that both actions produce the same match in the transformed data sets.
  • An example of an "objective mathematical criterion" is that of minimizing the sum of squared residual errors for those portions of the scan data judged to overlap.
  • Another type of objective mathematical criterion may involve a matching of multiple features identified on the object. For example, such features might be the edge transitions 1103, 1104, and 1105 shown in FIG. 11B.
  • the processor system 950 extracts a horizontal slice from the depth-camera image.
  • the resulting 2D coordinates on the horizontal plane provide information of the sort shown in FIGs. 12-14. As explained herein above, such information may be used to provide first and second translation values and a first rotation value to provide a good starting point for 3D registration.
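  • A sketch of extracting such a horizontal slice from a depth image under an assumed pinhole camera model (function and parameter names here are hypothetical):

```python
import numpy as np

def horizontal_slice(depth_image, fx, cx, row=None):
    # Project one image row of a depth camera into 2D points (forward,
    # lateral) on a horizontal plane in the camera frame. depth_image is
    # an (H, W) array of depths in meters (0 = no return); fx and cx are
    # the pinhole focal length and principal point in pixels. By default
    # the row through the optical center is used.
    h, w = depth_image.shape
    if row is None:
        row = h // 2
    z = depth_image[row]                # forward distance per pixel
    u = np.arange(w)
    valid = z > 0
    lateral = (u[valid] - cx) / fx * z[valid]
    return np.column_stack([z[valid], lateral])   # (N, 2) plane points
```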
  • the light from the depth-camera light source is sent off the mirror 26, and the scattered light returning from the object is reflected off the mirror 26 and then onto the depth camera.
  • the mirror 26 is kept fixed about the horizontal axis 25 as shown in FIG. 8B.
  • a single horizontal slice is sufficient to provide accurate first and second translation values and first rotation value.
  • other methods may be used. One method is to take multiple horizontal slices, each at a different height. A scanned region that is nearly featureless at one height may include several features at a different height.
  • Another mathematical method that may be used to determine the first and second translation values and the first rotation value is an enhanced version of "optical flow.”
  • the mathematical method known in the art as "optical flow" is used to extract information to evaluate sequentially overlapping camera images. The method derives from the studies of American psychologist James Gibson in the 1940s. A tutorial on optical flow estimation as used today is given in "Mathematical Models in Computer Vision: The Handbook" by N. Paragios, Y. Chen, and O. Faugeras (editors), Chapter 15, Springer 2005, pp. 239-258, the contents of which are incorporated by reference herein.
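  • As one concrete (purely illustrative) realization, sparse Lucas-Kanade optical flow from OpenCV tracks features between successive frames; with a depth camera the tracked pixels also carry depth, enabling the enhancement described in the next bullet:

```python
import cv2

def track_features(prev_gray, next_gray):
    # Sparse Lucas-Kanade optical flow between two successive frames
    # (8-bit grayscale images). Returns matched (prev, next) point pairs;
    # each pixel's depth can then lift these 2D motions to 3D.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=8)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                 pts, None)
    ok = status.ravel() == 1
    return pts[ok].reshape(-1, 2), nxt[ok].reshape(-1, 2)
```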
  • because the camera is a depth camera rather than an ordinary 2D camera, depth information is further available for application to the optical-flow algorithm to further improve the motion estimate.
  • the mathematical criterion may involve processing of the raw camera image data provided by the camera to the processor system 950, or it may involve a first intermediate processing step; one matching criterion known in the art is the Iterative Closest Point (ICP) method.
  • the first translation value is dx
  • the second translation value is dy
  • the processor system 950 is further configured to determine a third translation value (for example, dz) and second and third rotation values (for example, pitch and roll). The third translation value, second rotation value, and third rotation value may be determined based at least in part on readings from the position/orientation sensor 920.
  • the 3D scanner 20 collects depth-camera image data at the first registration position 1112 and more depth-camera image data at the second registration position 1114. In some cases, these may suffice to determine the position and orientation of the 3D measuring device at the second registration position 1114 relative to the first registration position 1112. In other cases, the two sets of depth-camera image data are not sufficient to enable the processor system 950 to accurately determine the first translation value, the second translation value, and the first rotation value. This problem may be avoided by collecting depth-camera image data at intermediate locations 1120. In an embodiment, the depth-camera image data is collected and processed at regular intervals, for example, once per second. In this way, features are easily identified in successive depth-camera images 1120.
  • the processor system 950 may choose to use the information from all the successive depth-camera images in determining the translation and rotation values in moving from the first registration position 1112 to the second registration position 1114.
  • the processor may choose to use only the first and last depth-camera images in the final calculation, simply using the intermediate depth-camera images to ensure proper correspondence of matching features. In most cases, accuracy of matching is improved by incorporating information from multiple successive depth-camera images.
  • the 3D measuring device 800 is moved to the second registration position 1114.
  • the 3D measuring device 800 is brought to a stop and brakes are locked to hold the 3D scanner stationary.
  • the processor system 950 starts the 3D scan automatically when the moveable platform is brought to a stop, for example, by the position/orientation sensor 920 noting the lack of movement.
  • the 3D scanner 20 in the 3D measuring device 800 takes a 3D scan of the object 1102. This 3D scan is referred to as the second 3D scan to distinguish it from the first 3D scan taken at the first registration position.
  • the processor system 950 applies the already calculated first translation value, the second translation value, and the first rotation value to adjust the position and orientation of the second 3D scan relative to the first 3D scan.
  • This adjustment, which may be considered to provide a "first alignment," brings the registration targets (which may be natural features in the overlap region 1150) into close proximity.
  • the processor system 950 performs a fine registration in which it makes fine adjustments to the six degrees of freedom of the second 3D scan relative to the first 3D scan. It makes the fine adjustment based on an objective mathematical criterion, which may be the same as or different than the mathematical criterion applied to the depth-camera image data.
  • the objective mathematical criterion may be that of minimizing the sum of squared residual errors for those portions of the scan data judged to overlap.
  • the objective mathematical criterion may be applied to a plurality of features in the overlap region.
  • the mathematical calculations in the registration may be applied to raw 3D scan data or to geometrical representations of the 3D scan data, for example, by a collection of line segments.
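  • A minimal sketch of such a fine registration using the Iterative Closest Point (ICP) approach mentioned above, shown in 2D for brevity (the actual fine registration adjusts all six degrees of freedom; the coarse (x, y, θ) estimate from the depth-camera images serves as the starting alignment):

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_rigid_2d(src, dst):
    # Closed-form least-squares rotation + translation for matched points
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    p, q = src - sc, dst - dc
    theta = np.arctan2((p[:, 0] * q[:, 1] - p[:, 1] * q[:, 0]).sum(),
                       (p * q).sum())
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return R, dc - R @ sc

def icp(src, dst, iters=30, tol=1e-6):
    # Point-to-point ICP: alternate nearest-neighbor matching with a
    # least-squares rigid fit until the mean residual stops improving
    tree = cKDTree(dst)
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(cur)          # current correspondences
        R, t = fit_rigid_2d(cur, dst[idx])
        cur = cur @ R.T + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return cur
```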
  • the aligned values of the first 3D scan and the second 3D scan are combined in a registered 3D data set.
  • the 3D scan values included in the registered 3D data set are based on some combination of 3D scanner data from the aligned values of the first 3D scan and the second 3D scan.
  • FIG. 15 shows elements of a method 1500 for measuring and registering 3D coordinates.
  • An element 1505 includes providing a 3D measuring device that includes a processor system, a 3D scanner, a depth camera, and a moveable platform.
  • the processor system has at least one of a 3D scanner controller, an external computer, and a cloud computer configured for remote network access. Any of these processing elements within the processor system may include a single processor or multiple distributed processing elements, the processing elements being a microprocessor, digital signal processor, FPGA, or any other type of computing device.
  • the processing elements have access to computer memory.
  • the 3D scanner has a first light source, a first beam steering unit, a first angle measuring device, a second angle measuring device, and a first light receiver.
  • the first light source is configured to emit a first beam of light, which in an embodiment is a beam of laser light.
  • the first beam steering unit is provided to steer the first beam of light to a first direction onto a first object point.
  • the beam steering unit may be a rotating mirror such as the mirror 26 or it may be another type of beam steering mechanism.
  • the 3D scanner may contain a base onto which is placed a first structure that rotates about a vertical axis, and onto this structure may be placed a second structure that rotates about a horizontal axis. With this type of mechanical assembly, the beam of light may be emitted directly from the second structure and point in a desired direction. Many other types of beam steering mechanisms are possible. In most cases, a beam steering mechanism includes one or two motors.
  • the first direction is determined by a first angle of rotation about a first axis and a second angle of rotation about a second axis.
  • the first angle measuring device is configured to measure the first angle of rotation and the second angle measuring device configured to measure the second angle of rotation.
  • the first light receiver is configured to receive first reflected light, the first reflected light being a portion of the first beam of light reflected by the first object point.
  • the first light receiver is further configured to produce a first electrical signal in response to the first reflected light.
  • the first light receiver is further configured to cooperate with the processor system to determine a first distance to the first object point based at least in part on the first electrical signal
  • the 3D scanner is configured to cooperate with the processor system to determine 3D coordinates of the first object point based at least in part on the first distance, the first angle of rotation and the second angle of rotation.
  • the moveable platform is configured to carry the 3D scanner.
  • the depth camera is configured to obtain depth-camera images and to send the depth-camera image data to the processor system 950.
  • the depth camera may be located internal to the 3D scanner, mounted on the 3D scanner, or attached to the moveable platform.
  • An element 1510 includes determining with the processor system 950, in cooperation with the 3D scanner 20, 3D coordinates of a first collection of points on an object surface while the 3D scanner is fixedly located at a first registration position.
  • An element 1515 includes obtaining, by the 3D measuring device in cooperation with the depth camera, a plurality of depth-camera image sets.
  • Each of the plurality of depth-camera image sets is a set of 3D coordinates of points on the object surface collected as the 3D scanner moves from the first registration position to a second registration position.
  • Each of the plurality of depth-camera image sets is collected by the depth camera at a different position relative to the first registration position.
  • An element 1520 includes determining by the processor system a first translation value corresponding to a first translation direction, a second translation value corresponding to a second translation direction, and a first rotation value corresponding to a first orientational axis, wherein the first translation value, the second translation value, and the first rotation value are determined based at least in part on a fitting of the plurality of depth-camera image sets according to a first mathematical criterion.
• the first orientational axis is a vertical axis perpendicular to the planes in which the depth-camera image sets are collected.
  • An element 1525 includes determining with the processor system, in cooperation with the 3D scanner, 3D coordinates of a second collection of points on the object surface while the 3D scanner is fixedly located at the second registration position.
• An element 1535 includes identifying by the processor system a correspondence among registration targets present in both the first collection of points and the second collection of points, the correspondence based at least in part on the first translation value, the second translation value, and the first rotation value. This step aligns, to a relatively high accuracy, the 3D scan data collected at the first and second registration positions (the third sketch following this list shows one way to establish such a correspondence).
• An element 1545 includes determining 3D coordinates of a registered 3D collection of points based at least in part on a second mathematical criterion, the first collection of points, and the second collection of points (the fourth sketch following this list shows one way to combine these inputs).
  • An element 1550 includes storing the 3D coordinates of the registered 3D collection of points.
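
The coordinate computation described in the scanner bullets above reduces to a spherical-to-Cartesian conversion. The following is a minimal Python sketch, not taken from the application: the angle conventions (first angle as azimuth about the vertical axis, second angle as elevation about the horizontal axis) and the function name are assumptions made for illustration.

```python
import math

def scanner_point_to_xyz(distance, angle1, angle2):
    """Convert one scanner measurement (distance plus two angles) to x, y, z.

    Assumed convention: angle1 is the rotation about the vertical axis
    (azimuth) and angle2 the rotation about the horizontal axis
    (elevation), both in radians.
    """
    horizontal = distance * math.cos(angle2)  # projection onto the horizontal plane
    x = horizontal * math.cos(angle1)
    y = horizontal * math.sin(angle1)
    z = distance * math.sin(angle2)           # height relative to the scanner origin
    return (x, y, z)
```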
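
The application does not spell out the "first mathematical criterion" of element 1520; the cited non-patent literature (for example the Censi ICP variant) points to least-squares point-set fitting. The sketch below is a NumPy illustration of one such criterion rather than the claimed method: a least-squares (SVD/Kabsch) fit restricted to the claimed degrees of freedom, namely two translation values in the horizontal plane and one rotation value about the vertical axis. It assumes corresponding points between two depth-camera image sets are already identified (for example by tracking features between consecutive frames); a full ICP loop would alternate this fit with re-estimating correspondences.

```python
import numpy as np

def fit_planar_motion(points_a, points_b):
    """Least-squares (tx, ty, theta) mapping point set A onto point set B.

    points_a, points_b: (N, 3) arrays of corresponding points from two
    depth-camera image sets. Only x and y enter the fit, since the motion
    sought is two translations plus one rotation about the vertical axis.
    """
    a = np.asarray(points_a, dtype=float)[:, :2]
    b = np.asarray(points_b, dtype=float)[:, :2]
    ca, cb = a.mean(axis=0), b.mean(axis=0)

    # SVD of the cross-covariance of the centered sets gives the rotation.
    H = (a - ca).T @ (b - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # reject the reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T

    theta = np.arctan2(R[1, 0], R[0, 0])
    tx, ty = cb - R @ ca       # translation that carries rotated A onto B
    return tx, ty, theta
```

Restricting the fit to (tx, ty, theta) mirrors the three values the claim recites for a scanner carried on a moveable platform across a floor.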
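
For element 1535, the coarse motion estimate can predict where the first scan's registration targets should appear in the second scan, after which a gated nearest-neighbor search yields the correspondence. The sketch below is illustrative only: the 5 cm gate (max_dist) and the brute-force search are assumptions, not details from the application.

```python
import numpy as np

def match_registration_targets(targets_a, targets_b, tx, ty, theta, max_dist=0.05):
    """Pair targets of two scans using a coarse (tx, ty, theta) estimate.

    targets_a: (N, 3) target centers in the first scan's frame.
    targets_b: (M, 3) target centers in the second scan's frame.
    Returns a list of (i, j) index pairs whose predicted positions agree
    to within max_dist (meters, an assumed gate).
    """
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])  # rotation about the vertical axis
    predicted = np.asarray(targets_a) @ R.T + np.array([tx, ty, 0.0])

    # Brute-force nearest neighbor; a k-d tree would scale better.
    d2 = ((predicted[:, None, :] - np.asarray(targets_b)[None, :, :]) ** 2).sum(axis=-1)
    nearest = d2.argmin(axis=1)
    return [(i, int(j)) for i, j in enumerate(nearest)
            if d2[i, j] <= max_dist ** 2]
```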
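
Element 1545 likewise leaves the "second mathematical criterion" open. Assuming it is also a least-squares fit, the matched targets from the previous step determine a rigid transform that maps the second collection into the first collection's frame, and the two collections are merged. All names in this sketch are illustrative.

```python
import numpy as np

def register_collections(points_a, points_b, targets_a, targets_b, pairs):
    """Merge two scans into one registered 3D collection of points.

    pairs: (i, j) index pairs from the target-matching step. A 3D
    least-squares (Kabsch) fit on the matched targets yields the rigid
    transform; the second collection is then mapped into the first frame.
    """
    A = np.asarray(targets_b, dtype=float)[[j for _, j in pairs]]  # second-scan targets
    B = np.asarray(targets_a, dtype=float)[[i for i, _ in pairs]]  # matching first-scan targets
    ca, cb = A.mean(axis=0), B.mean(axis=0)

    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # reject the reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca

    moved_b = np.asarray(points_b, dtype=float) @ R.T + t  # second scan, first frame
    return np.vstack([np.asarray(points_a, dtype=float), moved_b])
```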

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention concerns a method for measuring and registering 3D coordinates, having a 3D scanner that measures a first set of 3D coordinates of points from a first registration position and a second set of 3D coordinates of points from a second registration position. Between these positions, the 3D measuring device collects depth-camera images. A processor determines first and second translation values and a first rotation value based on the depth-camera images. The processor identifies a correspondence among registration targets in the first and second sets of 3D coordinates based at least in part on the first and second translation values and the first rotation value. The processor uses this correspondence together with the first and second sets of 3D coordinates to determine 3D coordinates of a registered set of 3D points.
PCT/US2014/069185 2012-10-05 2014-12-09 Using depth-camera images to speed registration of three-dimensional scans WO2016089431A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1708697.6A GB2547391A (en) 2012-10-05 2014-12-09 Using depth-camera images to speed registration of three-dimensional scans
DE112014007234.6T DE112014007234T5 (de) 2012-10-05 2014-12-09 Using depth-camera images to speed registration of three-dimensional scans

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/559,367 US9618620B2 (en) 2012-10-05 2014-12-03 Using depth-camera images to speed registration of three-dimensional scans
US14/559,367 2014-12-03

Publications (1)

Publication Number Publication Date
WO2016089431A1 (fr) 2016-06-09

Family

ID=52273540

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/069185 WO2016089431A1 (fr) 2012-10-05 2014-12-09 Using depth-camera images to speed registration of three-dimensional scans

Country Status (1)

Country Link
WO (1) WO2016089431A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106514611A (zh) * 2017-01-14 2017-03-22 许建芹 A road inspection robot
CN114073075A (zh) * 2019-05-12 2022-02-18 魔眼公司 Mapping three-dimensional depth map data onto a two-dimensional image
CN115356261A (zh) * 2022-07-29 2022-11-18 燕山大学 Defect detection system and method for an automobile ball-cage dust boot
EP4258011A1 (fr) * 2022-04-04 2023-10-11 Faro Technologies, Inc. Image-based scan pre-registration

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120069352A1 (en) 2009-03-25 2012-03-22 Faro Technologies, Inc. Method for optically scanning and measuring a scene
US8705016B2 (en) 2009-11-20 2014-04-22 Faro Technologies, Inc. Device for optically scanning and measuring an environment
US8705012B2 (en) 2010-07-26 2014-04-22 Faro Technologies, Inc. Device for optically scanning and measuring an environment
DE102012109481A1 (de) * 2012-10-05 2014-04-10 Faro Technologies, Inc. Device for optically scanning and measuring an environment

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Mathematical Models in Computer Vision: The Handbook", Springer, 2005, pp. 239-258
Censi, A., "An ICP variant using a point-to-line metric", IEEE International Conference on Robotics and Automation (ICRA), 2008
Kazunori Ohno et al., "Real-Time Robot Trajectory Estimation and 3D Map Construction using 3D Camera", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 1 October 2006, pp. 5279-5285, XP031006974, ISBN: 978-1-4244-0258-8 *
May, S. et al., "Robust 3D-mapping with time-of-flight cameras", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 10 October 2009, pp. 1673-1678, XP031581042, ISBN: 978-1-4244-3803-7 *

Similar Documents

Publication Publication Date Title
US11815600B2 (en) Using a two-dimensional scanner to speed registration of three-dimensional scan data
US11035955B2 (en) Registration calculation of three-dimensional scanner data performed between scans based on measurements by two-dimensional scanner
US9513107B2 (en) Registration calculation between three-dimensional (3D) scans based on two-dimensional (2D) scan data from a 3D scanner
US10120075B2 (en) Using a two-dimensional scanner to speed registration of three-dimensional scan data
US10782118B2 (en) Laser scanner with photogrammetry shadow filling
US10282854B2 (en) Two-dimensional mapping system and method of operation
US20180100927A1 (en) Two-dimensional mapping system and method of operation
WO2016089430A1 (fr) Using two-dimensional camera images to speed registration of three-dimensional scans
US10830889B2 (en) System measuring 3D coordinates and method thereof
WO2016089431A1 (fr) Using depth-camera images to speed registration of three-dimensional scans
US11927692B2 (en) Correcting positions after loop closure in simultaneous localization and mapping algorithm
WO2016089428A1 (fr) Using a two-dimensional scanner to speed registration of three-dimensional scan data
JP2017111118A (ja) Registration calculation between three-dimensional (3D) scans based on two-dimensional (2D) scan data from a 3D scanner
US20240161435A1 (en) Alignment of location-dependent visualization data in augmented reality
WO2016089429A1 (fr) Intermediate two-dimensional scanning with a three-dimensional scanner to speed registration
JP2017111117A (ja) Registration calculation of three-dimensional scanner data performed between scans based on measurements by a two-dimensional scanner
EP4231053A1 (fr) Aligning scans of an environment using a reference object
WO2024102428A1 (fr) Alignment of location-dependent visualization data in augmented reality

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 14821393

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 201708697

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20141209

WWE WIPO information: entry into national phase

Ref document number: 112014007234

Country of ref document: DE

122 EP: PCT application non-entry in European phase

Ref document number: 14821393

Country of ref document: EP

Kind code of ref document: A1