WO2024083553A1 - Method of calibrating a range finder, calibration arrangement and range finder - Google Patents


Info

Publication number
WO2024083553A1
Authority
WO
WIPO (PCT)
Prior art keywords
calibration
imager
dots
determined
array
Prior art date
Application number
PCT/EP2023/077887
Other languages
French (fr)
Inventor
Jérôme MAYE
Nicolas JACQUEMIN
Florent Monay
Original Assignee
Ams International Ag
Priority date
Filing date
Publication date
Application filed by Ams International Ag filed Critical Ams International Ag
Publication of WO2024083553A1


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2504Calibration devices
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2513Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object with several lines being projected in more than one direction, e.g. grids, patterns
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B9/00Measuring instruments characterised by the use of optical techniques
    • G01B9/02Interferometers
    • G01B9/02015Interferometers characterised by the beam path configuration
    • G01B9/02029Combination with non-interferometric systems, i.e. for measuring the object
    • G01B9/0203With imaging systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B9/00Measuring instruments characterised by the use of optical techniques
    • G01B9/02Interferometers
    • G01B9/02092Self-mixing interferometers, i.e. feedback of light from object into laser cavity
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/8943D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/481Constructional features, e.g. arrangements of optical elements
    • G01S7/4814Constructional features, e.g. arrangements of optical elements of transmitters alone
    • G01S7/4815Constructional features, e.g. arrangements of optical elements of transmitters alone using multiple transmitters
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/491Details of non-pulse systems
    • G01S7/4912Receivers
    • G01S7/4916Receivers using self-mixing in the laser cavity
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • G01S7/4972Alignment of sensor
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B21/00Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant
    • G01B21/02Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant for measuring length, width, or thickness
    • G01B21/04Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant for measuring length, width, or thickness by measuring coordinates of points
    • G01B21/042Calibration or calibration artifacts

Definitions

  • This disclosure relates to a method of calibrating a range finder, a calibration arrangement for a range finder and a range finder, e.g. based on self-mixing interferometry.
  • Self-mixing interferometry has become available for sensing and monitoring distance and speed using mobile electronic devices, such as smartphones, watches, and other wearable devices.
  • SMI has successfully been applied to sensing and/or monitoring distance or range.
  • Self-mixing interferometry occurs when part of the light emitted from a coherent light source is fed back into the source cavity (e.g. a laser such as a vertical-cavity surface-emitting laser (VCSEL) or a distributed feedback (DFB) laser).
  • The feedback into the coherent light source cavity produces a change in carrier population and refractive index. This change can be observed as a shift in threshold current or threshold voltage, as well as in the optical power emitted by the cavity.
  • a 4D imager, or range finder, based on the self-mixing interferometry principle may comprise a set of lasers that emit frequency-modulated continuous waves (FMCW).
  • An object placed along a light beam and within half the coherence length of the laser may create an interferometric signal with a fringe frequency proportional to its distance from, and radial speed relative to, the emitter.
  • When the FMCW signal is generated with a triangular current modulation, the distance depends on the nominal laser wavelength, the factor relating bandwidth and current, and the current amplitude. While the wavelength can be considered known, each laser has a different bandwidth response when driven with a current modulation.
  • Consequently, a tuning factor needs to be calibrated for each laser source for accurate distance measurements. This procedure will be denoted intrinsic calibration hereinafter.
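The range relation implied above can be sketched as follows. This is a minimal illustration of the standard triangular-FMCW relation, assuming the swept optical bandwidth is the tuning factor times the current amplitude; the function names and the specific form B = k · i_amp are illustrative assumptions, not taken from the patent text.

```python
# Hedged sketch: with the optical frequency swept over a bandwidth
# B = k * i_amp during one ramp of duration t_ramp, a static target at
# distance d yields a beat frequency f = 2 * d * B / (c * t_ramp).
# The tuning factor k (in Hz per ampere) is the per-laser quantity that
# intrinsic calibration must determine.
C = 299_792_458.0  # speed of light in m/s

def fmcw_distance(f_beat, k, i_amp, t_ramp):
    """Distance (m) from a beat frequency (Hz), assuming B = k * i_amp."""
    bandwidth = k * i_amp               # swept optical bandwidth in Hz
    return f_beat * C * t_ramp / (2.0 * bandwidth)
```

Inverting the same relation gives the expected beat frequency for a known distance, which is how the calibration data pairs (frequency, distance) are interpreted later on.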
  • When the tuning factor is calibrated, the sensor may be able to return distance and speed along each laser beam. While for some applications this might be sufficient, 3D reconstruction often necessitates an anchor in Euclidean space, i.e. the information of interest is not only distances but points in 3D. To this end, another calibration may be needed for each laser source, including direction vectors of the laser beams in a global coordinate system. This procedure will be denoted extrinsic calibration hereinafter.
  • One object is to provide a method of calibrating a range finder, a calibration arrangement for a range finder and a range finder that overcome the above limitations of existing solutions and provide a simpler and more robust means of calibration, e.g. including absolute 3D reconstruction.
  • the range finder comprises a self-mixing interferometer (SMI) projector and an imager.
  • the projector comprises an array of coherent light emitters.
  • a calibration pattern is placed in a common field-of-view of the projector and imager.
  • the array of coherent light emitters may emit an array of laser dots onto the calibration pattern.
  • the extrinsic part of the calibration can be accomplished within a structured light calibration framework.
  • the projector, i.e. the 4D SMI sensor, can be mounted next to the imager (e.g. an HD NIR camera).
  • intrinsic parameters of the imager can be calibrated using the known calibration pattern, e.g. viewed from different angles.
  • a series of images can be captured at different distances of the pattern (a series of planes), where for each plane, i.e. for calibration at a defined distance, respective images of the known calibration pattern and of the laser dots are captured using the imager.
  • the projector is used to record corresponding SMI signals, i.e. the interferometric signal of each laser emitter due to SMI caused by back injection of reflections at the calibration target.
  • an optical center of the "projector" and direction vectors of each emitter can be determined.
  • This information provides an anchor, or absolute position, in a Euclidean coordinate system and allows the distance measurements to be placed in 3D space.
  • Let d_i be a distance measured by the light emitter l_i,
  • O_p be the optical center of the projector with respect to the imager O_c,
  • and dir_i a calibrated direction vector.
  • Then a point P_i in 3D will be at O_c + dir_i * d_i.
  • the distance between the optical center and the plane along the light beam can be computed using the proposed structured light calibration approach.
  • Using a fast Fourier transform (FFT) of the SMI (interferometric) signals, the most prominent frequencies can be selected.
  • the light emitters can be associated with a set of distances and corresponding frequencies.
  • the frequencies can be fitted to the distances using a robust linear regression, e.g. with a Huber loss.
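A minimal sketch of such a robust fit follows, using SciPy's `least_squares` with its built-in Huber loss as a stand-in for whatever regression implementation is actually used; the function name and the `f_scale` default are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_linear_huber(freqs, dists, f_scale=0.01):
    """Robustly fit dists ~= a * freqs + b using a Huber loss, so that a
    few outlier (frequency, distance) pairs do not skew the mapping."""
    def residuals(p):
        return p[0] * freqs + p[1] - dists
    p0 = np.polyfit(freqs, dists, 1)    # plain least squares as a start
    res = least_squares(residuals, p0, loss='huber', f_scale=f_scale)
    return res.x                        # slope a and intercept b
```

The Huber loss is quadratic for small residuals and linear for large ones, so gross outliers (e.g. a misdetected beat frequency) contribute only linearly to the cost instead of dominating it.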
  • the range finder comprises a self-mixing interferometer (SMI) projector and an imager.
  • the projector comprises an array of coherent light emitters.
  • a calibration pattern is placed in a common field-of-view of the projector and imager.
  • the array of coherent light emitters may emit an array of laser dots onto the calibration pattern.
  • the method involves placing the calibration target at different distances and conducting the following steps for each distance.
  • One step relates to projecting an array of dots emitted by the light emitters onto a calibration pattern on the calibration target and capturing, using the imager, second images of the calibration target.
  • One step relates to capturing SMI signals of the array of light emitters.
  • One step relates to determining beat frequencies of the SMI signals for each light emitter of the array.
  • One step relates to determining spatial positions of the dots from the second images.
  • One step relates to determining, from the spatial positions of the dots, direction vectors of rays using line fitting.
  • One step relates to determining an optical origin for each light emitter.
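The line-fitting step above can be sketched as follows: one 3-D ray is fitted per emitter to that emitter's dot positions observed across the calibration planes, via an SVD. The function name and the +z orientation convention are illustrative assumptions.

```python
import numpy as np

def fit_ray(dot_positions):
    """Fit a 3-D line (point, unit direction) to one emitter's dot
    positions observed at the different calibration-target distances."""
    P = np.asarray(dot_positions, dtype=float)
    p0 = P.mean(axis=0)                 # a point on the fitted line
    _, _, vt = np.linalg.svd(P - p0)
    d = vt[0]                           # direction of largest variance
    if d[2] < 0:                        # assumed: rays point along +z
        d = -d
    return p0, d
```

The first right-singular vector of the centered point cloud is the direction of maximum spread, which for near-colinear dot positions is the ray direction in a least-squares sense.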
  • Thereby, the SMI signals generated by the projector, i.e. the range finder, can be related to absolute positions in three-dimensional space.
  • the range finder is calibrated to yield accurate distances and motion, for example .
  • the calibration process can be performed by simpler and more robust means without the need for complex optical components, such as Fabry-Pérot interferometers.
  • the method further comprises the steps of capturing, using the imager, first images of the calibration pattern and calibrating a camera matrix of intrinsic parameters of the imager from the first images, and capturing the second images using the calibrated imager.
  • the intrinsic calibration of the camera matrix allows accuracy to be further increased.
  • the first images are captured from different angles and the camera matrix is determined by means of image processing, such as pose computation.
  • the second images form pairs of images of the calibration target, including the calibration pattern and including the projected dots. The spatial positions of the dots are determined from the pairs of second images.
  • the optical center and spatial positions are determined from the projected dots using quadratic interpolation.
  • the optical center is determined from the projected dots using convergence of the direction vectors with a least-squares fit.
  • the optical center is determined from the projected dots using convergence of the direction vectors, including calculating the position of the optical center, determining outlier lines and then recalculating the position of the optical center without the outlier lines. Neglecting outliers allows accuracy to be further increased.
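The least-squares convergence step with outlier rejection can be sketched as follows: the point closest to all fitted rays is found in closed form, rays far from that point are dropped, and the solve is repeated. Function names and the threshold parameter are illustrative assumptions.

```python
import numpy as np

def line_point_distance(x, p, d):
    """Distance from point x to the line through p with unit direction d."""
    r = x - p
    return np.linalg.norm(r - np.dot(r, d) * d)

def optical_center(points, dirs, outlier_thresh=None):
    """Least-squares point closest to all rays (p_i, d_i); optionally drop
    outlier rays farther than outlier_thresh and re-solve, as above."""
    def solve(idx):
        A = np.zeros((3, 3)); b = np.zeros(3)
        for i in idx:
            d = dirs[i] / np.linalg.norm(dirs[i])
            M = np.eye(3) - np.outer(d, d)   # projection orthogonal to ray
            A += M; b += M @ points[i]
        return np.linalg.solve(A, b)
    idx = list(range(len(points)))
    x = solve(idx)
    if outlier_thresh is not None:
        keep = [i for i in idx
                if line_point_distance(x, points[i],
                                       dirs[i] / np.linalg.norm(dirs[i])) <= outlier_thresh]
        if 0 < len(keep) < len(idx):
            x = solve(keep)              # re-solve without outlier lines
    return x
```

Each ray contributes the normal equations of its squared point-to-line distance; summing them and solving the 3x3 system gives the closed-form least-squares intersection.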
  • determining the beat frequencies involves Fourier transforming the SMI signals, extracting peak frequencies from the transformed signals, and/or averaging to yield a single beat frequency per distance.
  • d_i denotes a distance measured by the light emitter l_i,
  • O_p denotes the optical center of the projector with respect to the imager O_c,
  • dir_i denotes a calibrated direction vector.
  • It may suffice for the range finder to save the above parameters to determine a point P_i during normal operation, e.g. by accessing a memory holding the parameters.
  • the linear mapping is determined by fitting the beat frequencies to the corresponding distances from the optical origin using linear regression, e.g. with a Huber loss.
  • the calibration arrangement comprises a self-mixing interferometer (SMI) projector, an imager and a calibration target placed in a common field-of-view of the projector and imager, wherein the projector comprises an array of coherent light emitters.
  • the calibration arrangement comprises a controller.
  • the controller is operable to cause the calibration arrangement to conduct the method of any previous claim, with the calibration target placed at different distances.
  • the range finder comprises a self-mixing interferometer (SMI) projector, further comprising an array of coherent light emitters operable to generate SMI signals.
  • the range finder comprises a memory to save calibration parameters used to generate, during normal operation, a linear mapping of beat frequencies to a corresponding distance from an optical origin of each coherent light emitter.
  • a controller is operable to access the memory, read the saved calibration parameters and calibrate the SMI signals as a function of the linear mapping.
  • Figure 1 shows an example embodiment of a calibration arrangement
  • Figures 2A, 2B show the calibration target
  • Figure 3 shows an example 3D reconstruction of planes
  • Figure 4 shows an example intersection of laser beams into the optical center
  • Figure 5 shows an example linear mapping of beat frequencies vs. distance
  • Figure 6 shows a comparison of predicted vs. measured linear mapping
  • Figures 7A to 7C show example calibrated 3D mapping for the SMI signals of the range finder.
  • Figure 1 shows an example embodiment of a calibration arrangement for a range finder.
  • the arrangement comprises a projector 10, an imager 20 and a controller 30.
  • the calibration arrangement comprises a calibration target 50 with a calibration pattern, which is placed in a common field-of-view 51 of the projector and imager.
  • the projector, imager and controller can be implemented in a module, or as separate units, which are to be aligned with respect to each other to use a method according to an embodiment of the proposed concept.
  • the controller 30 may form part of a processor.
  • the projector 10 comprises an array of coherent light emitters, e.g. lasers or laser diodes, e.g. semiconductor lasers such as vertical-cavity surface-emitting lasers (VCSELs) , or distributed feedback lasers (DFBs) .
  • the projector is operable to emit an array of discrete radiation beams.
  • the array of discrete radiation beams may for example be infrared radiation beams. Other wavelengths of radiation may be emitted, although infrared may be preferred because it is not seen by users.
  • the term "light" is used in this document for brevity and encompasses infrared radiation and radiation of other wavelengths.
  • the light emitters enable self-mixing interferometry, and typically comprise a cavity resonator, into which at least a fraction of the light emitted by the light emitters can be reflected, or backscattered, from an external object, such as the calibration target 50 or an object of interest.
  • the light emitters are configured to emit coherent light, e.g. in an infrared (IR) , visible or ultraviolet (UV) range of the electromagnetic spectrum, out of the sensor.
  • the light emitters are configured to generate a continuous coherent emission or to emit coherent light in a pulsed fashion, the latter potentially aiding in an overall reduction in power consumption.
  • the projector 10 may further comprise optics which are configured to condition the plurality of discrete light beams.
  • the conditioning may for example form an array of discrete areas of light (which may be referred to as dots) , the dots 11 having positions which do not vary with distance from the projection system over an operating range of the calibration arrangement (when viewed from the projector) .
  • the optics may comprise one or more micro-lens arrays, a diffractive optical element, or other optics.
  • the plurality of discrete light beams may illuminate the calibration target 50, which is placed in the field of view of the projector 10 as an external object to form the calibration arrangement.
  • the discrete light beams will produce an array of two-dimensional Gaussian spots (or dots 11) when hitting an external object, such as the calibration target 50, with a radius depending on the distance to said object.
  • the projector can be considered a dot projector.
  • the imager 20 comprises an imaging sensor and associated optics.
  • the imaging sensor comprises a two-dimensional array of sensing elements.
  • the imaging sensor may comprise various light-sensitive technologies, including silicon photomultipliers (SiPM) , single-photon avalanche diodes (SPAD) , complementary metal-oxide semiconductors (CMOS) or charge- coupled devices (CCD) .
  • the imaging sensor may comprise of the order of 100 rows and of the order of 100 columns of sensing elements (for example SPADs).
  • the imaging system may comprise other numbers of sensing elements (for example SPADs). For example, around 200 x 200 sensing elements, around 300 x 200 sensing elements, around 600 x 500 sensing elements, or other numbers of sensing elements may be used.
  • the optics of the imaging system may be focusing optics which are arranged to form an image of the calibration target placed in a field of view in a plane of the imaging sensor.
  • the imager is depicted with various fields of view. This is to indicate that the imager may be designed to view a target, such as the calibration target, from different angles, e.g. by means of a lens system. Alternatively or in addition, the imager may be moved to different locations to image the target at different angles.
  • the imager may be integrated together with the projector into a common module.
  • the imager and projector may be separate components.
  • the calibration target may be placed in a common field of view of the imager and projector at different distances with respect to the imager and projector.
  • the imager 20 is operable to receive and detect a reflected portion of at least some of the plurality of discrete light beams emitted by the projector 10.
  • the reflected portions may, for example, be reflected from the calibration target 50 disposed in the field of view 51.
  • the term "reflected" light includes light which is scattered towards the imager.
  • the focusing optics of the imager forms an image of the field of view 51 in a plane of the imaging sensor.
  • the two-dimensional array of sensing elements of the imaging sensor divides the field of view into a plurality of pixels, each pixel corresponding to a different solid angle element.
  • the focusing optics are arranged to focus light received from each solid angle element onto a different pixel of the imager.
  • the controller 30 is operable to control operation of the projector 10 and the imager 20.
  • the controller is operable to send a control signal to the projector to control emission of light from the projector.
  • the controller is operable to exchange signals with the imager.
  • the signals may include control signals to the imager to control activation of sensing elements within the imaging sensor. Intensity and timing information from the imaging sensor may be transferred to the controller (e.g. processor).
  • the controller 30 is operable to control operation of the calibration arrangement, including the projector 10 and the imager 20, in a calibration mode of operation.
  • the calibration mode of operation implements the method of calibrating a range finder, as will be discussed in further detail below.
  • a set of calibration data can be stored in a memory 31.
  • the controller is operable to control operation of the projector, or projector and imager if these are integrated in a common module, in a normal mode of operation. In this mode the saved set of calibration data may be accessed, and data acquisition of the projector can be calibrated.
  • the controller 30 may comprise any suitable processor which may be configured to process intensity information received from the imager 20 and SMI signals from the projector 10.
  • the controller may be operable to conduct calculations, such as a range (i.e. distance) of an object within the field of view from which each reflected portion was reflected, and detected as SMI signals based on self-mixing interferometry (SMI).
  • the controller may be operable to read the SMI signals generated by the light emitters and corresponding to discrete light beams from which a reflected portion originated.
  • the controller may be operable to generate a depth map comprising a plurality of points, each point having: a depth value corresponding to a calculated range for a detected reflected portion of a discrete light beam; and a position within the depth map corresponding to a position of the identified light beam in the field of view.
  • the light emitters are configured to undergo self-mixing interferometry.
  • the SMI alters a property, e.g. a wavelength, of the light within or emitted from the laser cavity, e.g. it causes a modulation in an amplitude and/or frequency of the emitted light, hence generating periodic fringes in the signal.
  • SMI modulates the optical power (which is usually observed by measuring the optical power or the biasing voltage) .
  • the modulation causes a change in an electronic property of the light emitter.
  • a driving current and/or voltage is likewise modulated.
  • SMI eventually alters a property of the light emitters. This property is indirectly measured by means of the SMI signals as a function of said property, or change of said property.
  • the SMI signals may be a measured current or voltage, for example.
  • the controller may comprise means, e.g. active or passive circuitry, to measure said change as an electronic property.
  • the controller receives the SMI signals containing information on distance or movement. A detected change in the SMI signal depends on movement of the external object.
  • The schematic calibration arrangement shown in Figure 1 was set up with a JAI camera of 2560x2048 pixels as an imager and an SMI distance sensor (range finder) as a projector with eight VCSELs operating at 940 nm and corresponding photodiodes to detect the SMI signals mounted in TO-can packages.
  • the SMI sensor was driven by a custom electronic board that generates a current modulation of 0.1 mA at 100 Hz and handles the data acquisition of the photodiodes with a 14-bit ADC at 10 MHz.
  • the proposed concept generalizes to any number of VCSELs, in any layout, and at any wavelength.
  • the only constraint for the imager is being able to see the VCSEL dots when projected on a screen, i.e. it should essentially be sensitive at the same wavelength as the VCSELs.
  • overlapping FOVs have been realized at all recorded operating distances.
  • Figure 2A shows the calibration pattern of the calibration target.
  • the calibration pattern comprises an array of squares, each of which is provided with a different internal shape or pattern.
  • corners of each square of the array are connected by smaller squares.
  • Other calibration patterns may be used, for example a chessboard pattern, ChArUco pattern, circle grid, etc.
  • the imager is used to obtain images of the calibration pattern .
  • the calibration pattern may be provided on the calibration target by printing the calibration pattern and then fixing the printed calibration pattern onto the front of the calibration target.
  • the printed calibration pattern may then be removed for subsequent steps of the method.
  • the calibration pattern may remain in place but may be illuminated such that it is visible when being used but not visible when not being used.
  • the calibration pattern may be provided on a translucent layer (e.g. paper) which is provided on the wall (which may be transparent).
  • a blank side of the translucent layer faces the projector and imager.
  • When the calibration pattern is not being used, light is not shone on the patterned side of the translucent layer, and as a result the calibration pattern is not visible to the projector and imager. In this case, the calibration pattern may only be illuminated with the projector, showing the array of dots depicted in Figure 2B.
  • intrinsic properties of the imager are calibrated .
  • the intrinsic calibration involves capturing first images of the calibration pattern using the imager. This may be repeated from different angles to generate a set of first images. Then a camera matrix of intrinsic parameters of the imager is determined from the first images.
  • the intrinsic properties form the camera matrix and may comprise focal length, principal point, and distortion of the imager, for example.
  • Using a classic algorithm, e.g. pose computation, the camera matrix can be determined so as to minimize re-projection errors. This has to be done only once per imager if the lens remains fixed and will allow for reference metric measurements.
  • the camera matrix can be used together with the sensed first image of the calibration pattern to calculate the position of the calibration target relative to the imager .
  • a three-dimensional plane may be fitted to the sensed calibration pattern, and this plane may be recorded as being a plane of the calibration target, as shown in Figure 3.
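The plane-fitting step above can be sketched via an SVD of the centered point cloud; the function name is an illustrative assumption.

```python
import numpy as np

def fit_plane(points_3d):
    """Least-squares plane through 3-D points: (centroid, unit normal)."""
    P = np.asarray(points_3d, dtype=float)
    centroid = P.mean(axis=0)
    _, _, vt = np.linalg.svd(P - centroid)
    normal = vt[-1]                     # direction of smallest variance
    return centroid, normal
```

The last right-singular vector is the direction in which the points vary least, i.e. the plane normal in a least-squares sense; the centroid always lies on the fitted plane.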
  • Calculating the position of the calibration target may be performed by the controller, e.g. by a processor.
  • the projected calibration pattern may be removed from the calibration target (the projection system projecting the calibration pattern may be switched off).
  • a laser distance measuring tool may be used to determine the plane of the calibration target .
  • the smartphone, i.e. a device incorporating the range finder, may be positioned at a predetermined distance and orientation from the calibration target.
  • the smartphone may be located on a conveyor belt, or other moving system, which is configured to move the smartphone to positions at predetermined distances from the calibration target (or other planar surface). These methods may provide a lower accuracy, but the accuracy may be sufficient for the calibration.
  • the calibration target is placed at different distances. For each distance the following steps are conducted.
  • An array of dots is emitted onto the calibration target using the light emitters, as depicted in Figure 2B. Then, using the calibrated imager, second images are captured. These images are captured of the calibration target showing the calibration pattern and showing the projected dots, respectively. Thus, pairs of second images can be formed which show the calibration pattern and the projected dots.
  • SMI signals of the array of light emitters are captured.
  • the SMI signals are generated from light being reflected off the calibration target and reinjected back into the coherent light emitters, i.e. the cavities of the light emitters.
  • the light emitters undergo SMI and the calibration target is detected via current and/or optical power modulation, for example.
  • the modulated SMI signals can be analyzed, e.g. by the controller, and a beat frequency of the SMI signals is determined for each light emitter of the array.
  • the SMI signals are first low-pass filtered, e.g. at 150 kHz, to remove high-frequency noise. Then, the filtered signals are segmented into up and down ramps by analyzing the peaks of a smoothed version of the original SMI signals. Next, the triangular component can be removed by subtracting the smoothed version of the SMI signals from the signals themselves. An FFT can be performed for each resulting up and down ramp signal to extract peak frequencies. Finally, the median of up and down frequencies is determined and the results can be averaged to yield a single beat frequency per distance.
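The per-ramp FFT and peak-extraction portion of this pipeline can be sketched as follows. Filtering, segmentation into ramps, and removal of the triangular component are assumed already done; the windowing choice and function name are illustrative assumptions.

```python
import numpy as np

def beat_frequency(ramps, fs):
    """Peak FFT frequency per ramp segment, median across segments.
    `ramps` holds 1-D arrays (up/down ramps with the triangular drive
    component already removed); `fs` is the sampling rate in Hz."""
    peaks = []
    for s in ramps:
        s = s - np.mean(s)                            # drop the DC offset
        spec = np.abs(np.fft.rfft(s * np.hanning(len(s))))
        freqs = np.fft.rfftfreq(len(s), d=1.0 / fs)
        peaks.append(freqs[1 + np.argmax(spec[1:])])  # skip the DC bin
    return float(np.median(peaks))
```

Taking the median over the up- and down-ramp peaks, as the text describes, suppresses segments where the fringe detection failed before the final averaging.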
  • the calibration target is placed at at least two different distances. Additional distances will improve the calibration results. For example, to record a proof of principle, 20 planes have been observed from 25 cm to 1.5 m in steps of about 5 cm. For each distance, a set of data has been captured, including an image of the calibration pattern, an image of the array of dots, e.g. VCSEL light beams projected on the calibration target, and SMI signals for each light emitter.
  • the optical center of the SMI sensor is estimated. This includes determining spatial positions of the dots from the second images, e.g. with respect to the calibration pattern.
  • the spatial positions are in 3D Euclidean space and have a z-component determined by the distance corresponding to a given pair of second images.
  • the analysis also yields x- and y-components from the images, as these have been calibrated in the first step using the camera matrix.
  • an optical center of the "projector" and direction vectors of each emitter can be determined.
  • This information provides an anchor, or absolute position, in a Euclidean coordinate system and allows the distance measurements to be placed in 3D space. For example, let d_i be a distance measured by the light emitter l_i, O_p the optical center of the projector with respect to the imager O_c, and dir_i a calibrated direction vector. Then, a point P_i in 3D will be at O_p + dir_i * d_i.
  • the parameters O_p and dir_i can be saved in the memory and accessed to determine point P_i in 3D from a measured distance d_i at any time.
  • geometric distances of dots from the previously determined optical origin can be determined. For example, this can be achieved by computing for each dot its corresponding distance to the calibration target by using the estimated planes from the structured light calibration.
  • a linear mapping is established for each coherent light emitter, which relates the determined beat frequencies to a corresponding distance from the optical origin (see Figure 5).
  • the light emitters can be associated with a set of distances and corresponding frequencies.
  • the frequencies can be fitted to the distances using a robust linear regression, e.g. with a Huber loss.
  • the linear mapping provides a direct linear relationship between frequency and distance.
  • the SMI signals of the array of light emitters are calibrated as a function of the linear mapping.
  • the parameters a_i and b_i, i.e. the slope and offset of the linear mapping, can be saved to the memory and accessed for calibration, e.g. by means of the controller during operation of the range finder.
  • Figure 6 shows that the distances estimated using the proposed concept resemble the distances determined in an alternative way, e.g. with a laser meter, with high accuracy.
  • the SMI measurements can be reconstructed in 3D space as shown in Figures 7A to 7C.
  • the proposed concept can be modified to derive the linear mapping of radial distances in 3D space.
  • the proposed method can be advantageous as it solves calibration with a single setup.
  • the proposed concept allows for industrial deployment of SMI sensor arrays. Indeed, one can automate the calibration method, which can work with as few as 2 or 3 planes captured at different distances.
  • the proposed procedure solves the entire calibration problem for this kind of sensor in one shot.
  • other methods would use an external Fabry-Perot interferometer, requiring a more complex setup, and would solve only parts of the problem.
  • if the SMI sensor is deployed alongside an imager, such as an NIR camera, for instance in a smartphone, online calibration of the setup is possible.
  • the term “comprising” does not exclude other elements.
  • the article “a” is intended to include one or more than one component or element, and is not to be construed as meaning only one.

Abstract

A method of calibrating a range finder is presented, the range finder comprising a self-mixing interferometer projector (10), an imager (20) and a calibration target (50) placed in a common field-of-view (51) of the projector and imager, wherein the projector comprises an array of coherent light emitters. The method comprises the steps of placing the calibration target at different distances; for each distance an array of dots emitted by the light emitters is projected onto a calibration pattern on the calibration target and, using the imager, second images of the calibration target and SMI signals of the array of light emitters are captured. Beat frequencies of the SMI signal are determined for each light emitter of the array. Furthermore, spatial positions of the dots are determined from the second images and from the spatial positions of the dots direction vectors of the rays are determined using line fitting. An optical origin is determined for each light emitter. Then, from the optical origin geometric distances of the projected dots are determined and, for each light emitter, a linear mapping is determined of the beat frequencies to a corresponding distance from the optical origin. Finally, the SMI signals of the array of light emitters are calibrated as a function of the linear mapping.

Description

METHOD OF CALIBRATING A RANGE FINDER, CALIBRATION ARRANGEMENT AND RANGE FINDER
This disclosure relates to a method of calibrating a range finder, a calibration arrangement for a range finder and a range finder, e.g. based on self-mixing interferometry.
BACKGROUND OF THE DISCLOSURE
In recent years, self-mixing interferometry (SMI) has become available for sensing and monitoring distance and speed using mobile electronic devices, such as smartphones, watches, and other wearable devices. For example, SMI has successfully been applied to sensing and/or monitoring distance or range. Self-mixing interferometry occurs when part of the light emitted from a coherent light source is fed back into the coherent source cavity (e.g. a laser such as a vertical-cavity surface-emitting laser (VCSEL) or a distributed feedback laser (DFB)). In turn, the coherent light source cavity produces a change in carrier population and refractive index. This change can be observed in a threshold current or threshold voltage change as well as in the optical power emitted by the cavity.
A 4D imager, or range finder, based on the self-mixing interferometry principle may comprise a set of lasers that emit frequency-modulated continuous light waves (FMCW). An object placed along a light beam and within half the coherence length of the laser may create an interferometric signal with a fringe frequency proportional to its distance and radial speed relative to the emitter. When the FMCW signal is generated with a triangular current modulation, the distance depends on the nominal laser wavelength, the factor relating bandwidth and current, and the current amplitude. While the wavelength can be considered known, each laser has a different bandwidth response when driven with a current modulation. A tuning factor needs to be calibrated for each laser source for accurate distance measurements. This procedure will be denoted intrinsic calibration hereinafter.
When the tuning factor is calibrated, the sensor may be able to return distance and speed along each laser beam. While for some applications this might be sufficient, 3D reconstruction often necessitates an anchor in Euclidean space, i.e. the information of interest is not only distances but points in 3D. To this end, another calibration may be needed for each laser source, including direction vectors of the laser beams in a global coordinate system. This procedure will be denoted extrinsic calibration hereinafter.
To date, there are known methods which rely on a Fabry-Perot interferometer with known characteristics and typically only calibrate the tuning factor of lasers, such as VCSELs, in wavelength over current. A generic geometric calibration further allowing for 3D reconstruction has not yet been suggested.
One object is to provide a method of calibrating a range finder, a calibration arrangement for a range finder and a range finder that overcome the above limitations of existing solutions and provide a simpler and more robust means of calibration, e.g. including absolute 3D reconstruction. These objects are achieved with the subject-matter of the independent claims. Further developments and embodiments are described in the dependent claims.
SUMMARY OF THE DISCLOSURE
The following relates to an improved concept in the field of optical sensing. One aspect employs the idea of mounting a 4D SMI-based ranger next to an image camera to construct a structured light sensor for calibrating the 4D ranger. A joint calibration of the structured light parameters, both intrinsic and extrinsic, and of the laser tuning factors is suggested, and allows absolute 3D positions to be computed indirectly through a direct linear relationship between frequency and distance. The range finder comprises a self-mixing interferometer (SMI) projector and an imager. The projector comprises an array of coherent light emitters. A calibration pattern is placed in a common field-of-view of the projector and imager. The array of coherent light emitters may emit an array of laser dots onto the calibration pattern. The extrinsic part of the calibration can be accomplished within a structured light calibration framework. For example, the projector, i.e. the 4D SMI sensor, can be mounted next to the imager (e.g. an HD NIR camera).
For example, in a first step, intrinsic parameters of the imager can be calibrated using the known calibration pattern, e.g. viewed from different angles. In a next step, a series of images can be captured consecutively at different distances of the pattern (a series of planes), where for each plane, i.e. calibration at a defined distance, respective images of the known calibration pattern and of the laser dots are captured using the imager. Furthermore, the projector is used to record corresponding SMI signals, i.e. the interferometric signal of each laser emitter due to SMI caused by back injection of reflections at the calibration target. Using the images for each distance, an optical center of the "projector" and direction vectors of each emitter can be determined. This information provides an anchor, or absolute position, in a Euclidean coordinate system and allows the distance measurements to be placed in 3D space. For example, let d_i be a distance measured by the light emitter l_i, O_p the optical center of the projector with respect to the imager O_c, and dir_i a calibrated direction vector. Then, a point P_i in 3D will be at O_p + dir_i * d_i.
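By way of illustration only (this sketch is not part of the disclosure, and all numeric values are assumed examples), the anchor relation above reduces to a component-wise multiply-add:

```python
# Illustrative sketch of the anchor relation (assumed example values).
O_p = [0.005, 0.0, 0.0]    # projector optical center w.r.t. the imager frame (m)
dir_i = [0.0, 0.0, 1.0]    # calibrated unit direction vector of emitter l_i
d_i = 0.5                  # distance measured by emitter l_i (m)

# Point in 3D: ray from the projector center along the calibrated direction.
P_i = [o + u * d_i for o, u in zip(O_p, dir_i)]
```

With these assumed values, P_i lies 0.5 m in front of the projector center along the emitter's beam.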
In order to map the interferometric signals (SMI signals) to distances for each light emitter, it can be noted that the fringe frequency follows a linear relationship with the distance. This linear relation is discussed in the literature and backed up by experimental data. As the typical formulas involve the laser wavelength and its bandwidth, most published work, however, attempts to determine those physical parameters. The proposed concept takes another approach and directly performs a linear regression between estimated fringe frequencies and distances measured through the imager.
For example, for each light emitter and for each plane, the distance between the optical center and the plane along the light beam can be computed using the proposed structured light calibration approach. In parallel, a fast Fourier transform (FFT) can be performed on the interferometric signals (SMI signals) and the most prominent frequencies can be selected. As a result, the light emitters can be associated with a set of distances and corresponding frequencies. The frequencies can be fitted to the distances using a robust linear regression, e.g. with a Huber loss. Finally, for each light emitter coefficients a_i and b_i can be determined such that d_i = a_i * f_i + b_i, where f_i is the estimated frequency for laser l_i.
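A minimal sketch of such a robust fit, approximating the Huber loss by iteratively reweighted least squares, could look as follows (illustrative only; the synthetic data, the threshold `delta` and the iteration count are assumptions, not the disclosed implementation):

```python
import numpy as np

def fit_huber_line(f, d, delta=0.1, iters=50):
    """Robust linear fit d ~ a*f + b via iteratively reweighted least
    squares with Huber weights (sketch; delta is the Huber threshold)."""
    f, d = np.asarray(f, float), np.asarray(d, float)
    A = np.column_stack([f, np.ones_like(f)])
    a, b = np.linalg.lstsq(A, d, rcond=None)[0]         # ordinary LS start
    for _ in range(iters):
        r = np.maximum(np.abs(d - (a * f + b)), 1e-12)  # residual magnitudes
        w = np.minimum(1.0, delta / r)                  # Huber weights
        sw = np.sqrt(w)
        a, b = np.linalg.lstsq(A * sw[:, None], d * sw, rcond=None)[0]
    return a, b

# Synthetic example (assumed values): d = 0.5*f + 0.1 with one corrupted sample.
freqs = np.arange(1.0, 11.0)
dists = 0.5 * freqs + 0.1
dists[3] += 5.0                                         # outlier
a_i, b_i = fit_huber_line(freqs, dists)
```

The Huber weights leave small residuals untouched while down-weighting the corrupted sample, so the recovered slope and offset stay close to the underlying line despite the outlier.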
The following suggests a method of calibrating a range finder. The range finder comprises a self-mixing interferometer (SMI) projector and an imager. The projector comprises an array of coherent light emitters. A calibration pattern is placed in a common field-of-view of the projector and imager. The array of coherent light emitters may emit an array of laser dots onto the calibration pattern.
The method involves placing the calibration target at different distances and conducting the following steps for each distance.
One step relates to projecting an array of dots emitted by the light emitters onto a calibration pattern on the calibration target and capturing, using the imager, second images of the calibration target.
One step relates to capturing SMI signals of the array of light emitters.
One step relates to determining beat frequencies of the SMI signal for each light emitter of the array.
One step relates to determining spatial positions of the dots from the second images. One step relates to determining direction vectors of rays from the spatial positions of the dots using line fitting.
One step relates to determining an optical origin for each light emitter.
Then, from the optical origin, geometric distances of the projected dots are determined and, for each light emitter, a linear mapping of the beat frequencies to a corresponding distance from the optical origin is determined. Finally, the SMI signals of the array of light emitters are calibrated as a function of the linear mapping.
As a final result, the SMI signals generated by the projector, i.e. the range finder, can be related to absolute positions in three-dimensional space. This way, the range finder is calibrated to yield accurate distances and motion, for example. The calibration process can be performed by simpler and more robust means without the need for complex optical components, such as Fabry-Perot interferometers.
In at least one embodiment, the method further comprises the steps of capturing, using the imager, first images of the calibration pattern, calibrating a camera matrix of intrinsic parameters of the imager from the first images, and capturing the second images using the calibrated imager. The intrinsic calibration of the camera matrix allows accuracy to be further increased.
In at least one embodiment, the first images are captured from different angles and the camera matrix is determined by means of image processing, such as pose computation. In at least one embodiment, the second images form pairs of images of the calibration target including the calibration pattern and including the projected dots. The spatial positions of the dots are determined from the pairs of second images.
In at least one embodiment, the optical center and spatial positions are determined from the projected dots using quadratic interpolation. In addition or alternatively, the optical center is determined from the projected dots using convergence of the direction vectors with a least-squares fit. In addition or alternatively, the optical center is determined from the projected dots using convergence of the direction vectors, including calculating the position of the optical center, determining outlier lines and then recalculating the position of the optical center without including the outlier lines. Neglecting outliers allows accuracy to be further increased.
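The least-squares convergence point of the direction vectors can be sketched as the point minimizing the summed squared distances to all fitted rays (an illustrative numpy sketch with assumed rays; the outlier step described above would simply drop lines with a large point-to-line distance and re-run the solve):

```python
import numpy as np

def lines_intersection(points, dirs):
    """Least-squares point closest to a bundle of 3D lines; line j passes
    through points[j] with direction dirs[j] (illustrative sketch)."""
    A = np.zeros((3, 3))
    rhs = np.zeros(3)
    for p, u in zip(points, dirs):
        u = np.asarray(u, float) / np.linalg.norm(u)
        M = np.eye(3) - np.outer(u, u)   # projects onto the plane normal to u
        A += M
        rhs += M @ np.asarray(p, float)
    return np.linalg.solve(A, rhs)

# Assumed example: three rays emanating from a common optical center.
center = np.array([0.01, 0.02, 0.0])
dirs = [np.array([0.0, 0.0, 1.0]),
        np.array([0.0, 1.0, 1.0]),
        np.array([1.0, 0.0, 1.0])]
points = [center + 0.7 * (u / np.linalg.norm(u)) for u in dirs]
O_p_est = lines_intersection(points, dirs)
```

For exactly concurrent, non-parallel rays the solve recovers the common point; with fitting noise it returns the least-squares compromise.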
In at least one embodiment, determining the beat frequencies involves Fourier transforming and extracting of peak frequencies from the transformed SMI signals and/or averaging to yield a single beat frequency per distance.
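As an illustration of the Fourier step (a sketch only; the window choice is an assumption and the ramp segment here is synthetic rather than a measured SMI signal):

```python
import numpy as np

def beat_frequency(segment, fs):
    """Peak (beat) frequency of one SMI ramp segment via FFT.
    segment: real-valued samples of one up or down ramp; fs: sample rate (Hz)."""
    seg = np.asarray(segment, float) - np.mean(segment)   # remove DC offset
    spec = np.abs(np.fft.rfft(seg * np.hanning(len(seg))))
    freqs = np.fft.rfftfreq(len(seg), d=1.0 / fs)
    return freqs[np.argmax(spec[1:]) + 1]                 # skip the DC bin

# Synthetic ramp segment: a 300 kHz fringe sampled at 10 MHz (assumed values).
fs = 10e6
t = np.arange(4096) / fs
segment = 0.5 + np.sin(2 * np.pi * 300e3 * t)
f_beat = beat_frequency(segment, fs)
```

Per distance, the median of the up-ramp and down-ramp peak frequencies would then be averaged to a single beat frequency, as described above.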
In at least one embodiment, where d_i denotes a distance measured by the light emitter l_i, O_p denotes the optical center of the projector with respect to the imager O_c, and dir_i denotes a calibrated direction vector, a point P_i in three-dimensional space is determined by P_i = O_p + dir_i * d_i.
For normal operation of the ranger it may suffice to save the above parameters and to access them, e.g. in a memory, to determine a point P_i during operation of the range finder.
In at least one embodiment, the linear mapping is determined by fitting the beat frequencies to the corresponding distances from the optical origin using linear regression, e.g. with a Huber loss. In addition or alternatively, the linear mapping provides a direct linear relationship between frequency and distance, such that for each light emitter l_i linear coefficients a_i and b_i are determined such that d_i = a_i * f_i + b_i, where f_i is the estimated beat frequency for light emitter l_i .
For normal operation of the ranger, it may suffice to save the above linear coefficients a_i and b_i to determine a calibration during normal operation of the range finder, e.g. by accessing a memory with the parameters.
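At run time the stored coefficients then reduce the calibration to one multiply-add per emitter; a hypothetical sketch (the coefficient values below are assumptions, not calibrated data):

```python
# Assumed saved calibration coefficients for one emitter l_i.
a_i = 0.5e-5   # meters per hertz (slope of the linear mapping)
b_i = 0.01     # meters (offset)

def calibrated_distance(f_i, a=a_i, b=b_i):
    """Distance d_i = a_i * f_i + b_i for a measured beat frequency f_i (Hz)."""
    return a * f_i + b

d_i = calibrated_distance(100e3)   # e.g. a 100 kHz beat frequency
```

The resulting distance can then be pushed into 3D using the saved optical origin and direction vector of the emitter, as described above.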
Furthermore, a calibration arrangement is suggested. The calibration arrangement comprises a self-mixing interferometer (SMI) projector, an imager and a calibration target placed in a common field-of-view of the projector and imager, wherein the projector comprises an array of coherent light emitters.
Furthermore, the calibration arrangement comprises a controller. The controller is operable to cause the calibration arrangement to conduct the method of any previous claim, with the calibration target placed at different distances.
Furthermore, a range finder is suggested. The range finder comprises a self-mixing interferometer (SMI) projector, further comprising an array of coherent light emitters operable to generate SMI signals.
Furthermore, the range finder comprises a memory to save calibration parameters to generate during normal operation a linear mapping of beat frequencies to a corresponding distance from an optical origin of each coherent light emitter. A controller is operable to access the memory, read the saved calibration parameters and calibrate the SMI signals as a function of the linear mapping.
Further embodiments of the range finder and calibration arrangement become apparent to the skilled reader from the aforementioned embodiments of the method of calibrating a range finder, and vice-versa.
BRIEF DESCRIPTION OF THE DRAWINGS
The following description of figures may further illustrate and explain aspects of the calibration arrangement, the range finder and the method of calibrating a range finder.
Components and parts of the self-mixing interferometry sensor that are functionally identical or have an identical effect are denoted by identical reference symbols. Identical or effectively identical components and parts might be described only with respect to the figures where they occur first.
Their description is not necessarily repeated in successive figures.
In the figures :
Figure 1 shows an example embodiment of a calibration arrangement,
Figures 2A, 2B show the calibration target,
Figure 3 shows an example 3D reconstruction of planes,
Figure 4 shows an example intersection of laser beams into the optical center,
Figure 5 shows an example linear mapping of beat frequencies vs. distance,
Figure 6 shows a comparison of predicted vs. measured linear mapping, and
Figures 7A to 7C show example calibrated 3D mapping for the SMI signals of the range finder.
Figure 1 shows an example embodiment of a calibration arrangement for a range finder. The arrangement comprises a projector 10, an imager 20 and a controller 30. Furthermore, the calibration arrangement comprises a calibration target 50 with a calibration pattern, which is placed in a common field-of-view 51 of the projector and imager. The projector, imager and controller can be implemented in a module, or as separate units, which are to be aligned with respect to each other to use a method according to an embodiment of the proposed concept. In some embodiments the controller 30 may form part of a processor.
The projector 10 comprises an array of coherent light emitters, e.g. lasers or laser diodes, e.g. semiconductor lasers such as vertical-cavity surface-emitting lasers (VCSELs) , or distributed feedback lasers (DFBs) . Correspondingly, the projector is operable to emit an array of discrete radiation beams. The array of discrete radiation beams may for example be infrared radiation beams. Other wavelengths of radiation may be emitted, although infrared may be preferred because it is not seen by users. The term "light" is used in this document for brevity and encompasses infrared radiation and radiation of other wavelengths.
The light emitters enable self-mixing interferometry, and typically comprise a cavity resonator, into which at least a fraction of the light emitted by the light emitters can be reflected, or backscattered, from an external object, such as the calibration target 50 or an object of interest. The light emitters are configured to emit coherent light, e.g. in an infrared (IR) , visible or ultraviolet (UV) range of the electromagnetic spectrum, out of the sensor. For example, the light emitters are configured to generate a continuous coherent emission or to emit coherent light in a pulsed fashion, the latter potentially aiding in an overall reduction in power consumption.
The projector 10 may further comprise optics which are configured to condition the plurality of discrete light beams. The conditioning may for example form an array of discrete areas of light (which may be referred to as dots) , the dots 11 having positions which do not vary with distance from the projection system over an operating range of the calibration arrangement (when viewed from the projector) . The optics may comprise one or more micro-lens arrays, a diffractive optical element, or other optics. The plurality of discrete light beams may illuminate the calibration target 50, which is placed in the field of view of the projector 10 as an external object to form the calibration arrangement.
For the purpose of this disclosure it can be assumed that the discrete light beams will produce an array of two-dimensional Gaussian spots (or dots 11) when hitting an external object, such as the calibration target 50, with a radius depending on the distance to said object. This means that the projector can be considered a dot projector.
The imager 20 comprises an imaging sensor and associated optics. The imaging sensor comprises a two-dimensional array of sensing elements. The imaging sensor may comprise various light-sensitive technologies, including silicon photomultipliers (SiPM), single-photon avalanche diodes (SPAD), complementary metal-oxide semiconductors (CMOS) or charge-coupled devices (CCD). In some embodiments, the imaging sensor may be comprised of the order of 100 rows and of the order of 100 columns of sensing elements (for example SPADs). The imaging system may comprise other numbers of sensing elements (for example SPADs). For example, around 200 x 200 sensing elements, around 300 x 200 sensing elements, around 600 x 500 sensing elements, or other numbers of sensing elements may be used. The optics of the imaging system may be focusing optics which are arranged to form an image of the calibration target placed in a field of view in a plane of the imaging sensor. In the drawing, the imager is depicted with various fields-of-view. This is to indicate that the imager may be designed to view a target, such as the calibration target, from different angles, e.g. by means of a lens system. Alternatively or in addition, the imager may be moved to different locations to image the target under changing angles. The imager may be integrated together with the projector into a common module. The imager and projector may be separate components. The calibration target may be placed in a common field of view of the imager and projector at different distances with respect to the imager and projector.
The imager 20 is operable to receive and detect a reflected portion of at least some of the plurality of discrete light beams emitted by the projector 10. The reflected portions may, for example, be reflected from the calibration target 50 disposed in the field of view 51. In this disclosure, the term "reflected" light includes light which is scattered towards the imager.
The focusing optics of the imager (not shown) forms an image of the field of view 51 in a plane of the imaging sensor. The two-dimensional array of light emitters divides the field of view into a plurality of pixels (which may be referred to as sensing elements) , each pixel corresponding to a different solid angle element. The focusing optics are arranged to focus light received from each solid angle element onto a different pixel of the imager.
The controller 30 is operable to control operation of the projector 10 and the imager 20. For example, the controller is operable to send a control signal to the projector to control emission of light from the projector. Similarly, the controller is operable to exchange signals with the imager. The signals may include control signals to the imager to control activation of sensing elements within the imaging sensor. Intensity and timing information from the imaging sensor may be transferred to the controller (e.g. processor).
The controller 30 is operable to control operation of the calibration arrangement, including the projector 10 and the imager 20, in a calibration mode of operation. The calibration mode of operation implements the method of calibrating a range finder, as will be discussed in further detail below. As a result, a set of calibration data can be stored in a memory 31. The controller is operable to control operation of the projector, or projector and imager if these are integrated in a common module, in a normal mode of operation. In this mode the saved set of calibration data may be accessed, and data acquisition of the projector can be calibrated.
The controller 30 may comprise any suitable processor which may be configured to process intensity information received from the imager 20 and SMI signals from the projector 10. The controller may be operable to conduct calculations, such as a range (i.e. distance) of an object within the field of view from which each reflected portion was reflected and detected as SMI signals based on self-mixing interferometry (SMI). The controller may be operable to read the SMI signals generated by the light emitters and corresponding to discrete light beams from which a reflected portion originated. Using this information, the controller may be operable to generate a depth map comprising a plurality of points, each point having: a depth value corresponding to a calculated range for a detected reflected portion of a discrete light beam; and a position within the depth map corresponding to a position of the identified light beam in the field of view.
Upon the aforementioned back-injection of the emitted light into the cavity of the light emitters and due to a movement of an external object the light is reflected off, the light emitters are configured to undergo self-mixing interferometry. The SMI alters a property, e.g. a wavelength, of the light within or emitted from the laser cavity, e.g. it causes a modulation in an amplitude and/or frequency of the emitted light, hence generating periodic fringes in the signal. More accurately, SMI modulates the optical power (which is usually observed by measuring the optical power or the biasing voltage) . In turn, the modulation causes a change in an electronic property of the light emitter. For example, a driving current and/or voltage is likewise modulated. When no target is present outside the module in the field of emission so as to intercept and reflect light of the latter, no self-mixing interferometry occurs within the light emitter .
As discussed above, SMI eventually alters a property of the light emitters. This property is indirectly measured by means of the SMI signals as a function of said property, or change of said property. The SMI signals may be a measured current or voltage, for example. Thus, the controller may comprise means, e.g. active or passive circuitry, to measure said change as an electronic property. Furthermore, the controller receives the SMI signals containing information on distance or movement. A detected change in the SMI signal depends on movement of the external object. In order to demonstrate the proposed concept , the schematic calibration arrangement shown in Figure 1 was set up with a JAI camera of 2560x2048 pixels as an imager and an SMI distance sensor ( range finder ) as a proj ector with eight VCSELs operating at 940 nm and corresponding photodiodes to detect the SMI signals mounted in TO-can packages . The SMI sensor was driven by a custom electronic board that generates a current modulation of 0 . 1 mA at 100 Hz and handles the data acquisition of the photodiodes with a 14-bit ADC at 10 MHz . It should be noted that the proposed concept generali zes to any number of VCSELs , in any layout , and at any wavelength . Similarly, the only constraint for the imager is being able to see the VCSEL dots when proj ected on a screen, i . e . it should essentially be sensitive in the same wavelength as the VCSELs . By a suitable choice of lenses and arrangement of the camera and the VCSELs , overlapping FOVs have been reali zed at all recorded operating distances .
Figure 2A shows the calibration pattern of the calibration target. For example, the calibration pattern comprises an array of squares, each of which is provided with a different internal shape or pattern. In addition, corners of each square of the array are connected by smaller squares. Other calibration patterns may be used, for example a chessboard pattern, ChArUco pattern, circle grid, etc. The imager is used to obtain images of the calibration pattern.
The calibration pattern may be provided on the calibration target by printing the calibration pattern and then fixing the printed calibration pattern onto the front of the calibration target. The printed calibration pattern may then be removed for subsequent steps of the method. Alternatively, the calibration pattern may remain in place but may be illuminated such that it is visible when being used but not visible when not being used. For example, the calibration pattern may be provided on a translucent layer (e.g. paper) which is provided on the wall (which may be transparent). A blank side of the translucent layer faces the projector and imager. When the calibration pattern is being used, light is shone on the patterned side of the translucent layer, and as a result the calibration pattern is visible to the projector and imager. When the calibration pattern is not being used, light is not shone on the patterned side of the translucent layer, and as a result the calibration pattern is not visible to the projector and imager. In this case, the calibration pattern may only be illuminated with the projector, showing the array of dots depicted in Figure 2B.
In a first step, intrinsic properties of the imager are calibrated. The intrinsic calibration involves capturing first images of the calibration pattern using the imager. This may be repeated from different angles to generate a set of first images. Then a camera matrix of intrinsic parameters of the imager is determined from the first images. The intrinsic properties form the camera matrix and may comprise focal length, principal point, and distortion of the imager, for example. A classic algorithm (e.g. pose computation) can be used to minimize re-projection errors. This has to be done only once per imager if the lens remains fixed and will allow for reference metric measurements.
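The camera matrix referred to here is the standard pinhole intrinsic matrix; a small sketch of the projection model it encodes may clarify its role (the numeric values are assumptions, loosely chosen to match a 2560x2048 imager; in practice the matrix is estimated by a pose-computation/calibration routine rather than written down):

```python
import numpy as np

# Assumed intrinsic parameters (illustrative values only).
fx, fy = 1400.0, 1400.0      # focal lengths in pixels
cx, cy = 1280.0, 1024.0      # principal point (center of a 2560x2048 imager)
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project(K, P):
    """Pinhole projection of a camera-frame 3D point P (z > 0) to pixels."""
    p = K @ np.asarray(P, float)
    return p[:2] / p[2]

u, v = project(K, [0.1, 0.05, 1.0])
```

Inverting this mapping for the known calibration pattern is what allows metric positions to be recovered from the first and second images.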
Since the calibration pattern is known, the camera matrix can be used together with the sensed first image of the calibration pattern to calculate the position of the calibration target relative to the imager. Specifically, a three-dimensional plane may be fitted to the sensed calibration pattern, and this plane may be recorded as the plane of the calibration target, as shown in Figure 3. Calculating the position of the calibration target may be performed by the controller, e.g. by a processor. Once the plane of the calibration target has been calculated, the projected calibration pattern may be removed from the calibration target (the projection system projecting the calibration pattern may be switched off).
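The plane fit mentioned above can be sketched as an ordinary least-squares fit using a singular value decomposition; the sample points below are synthetic, and the routine is a generic stand-in rather than the implementation used in the application.

```python
import numpy as np

def fit_plane(points):
    """Fit a plane to an (N, 3) array of points.
    Returns (unit normal, centroid); the plane is n . (x - c) = 0."""
    centroid = points.mean(axis=0)
    # The singular vector for the smallest singular value of the centered
    # points is the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid

# Synthetic sensed pattern corners lying on a wall at z = 2 m.
pts = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0],
                [0.0, 1.0, 2.0], [1.0, 1.0, 2.0]])
normal, centroid = fit_plane(pts)
# normal is (0, 0, 1) up to sign; centroid lies at z = 2.0
```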
The calibration pattern may even be omitted if the distances from the VCSELs to the wall can be measured in some other manner, for instance with a laser meter.
Other methods may be used to determine the plane of the calibration target (or other planar surface). These methods may not need a calibration pattern. For example, a laser distance measuring tool may be used to determine the plane of the calibration target. Alternatively, the smartphone may be positioned at a predetermined distance and orientation from the calibration target. In one example, the smartphone may be located on a conveyor belt, or other moving system, which is configured to move the smartphone to positions at predetermined distances from the calibration target (or other planar surface). These methods may provide a lower accuracy, but the accuracy may be sufficient for the calibration.
In a next step, the calibration target is placed at different distances. For each distance, the following steps are conducted.
An array of dots is emitted onto the calibration target using the light emitters, as depicted in Figure 2B. Then, using the calibrated imager, second images are captured. These images show the calibration target with the calibration pattern and with the projected dots, respectively. Thus, pairs of second images can be formed which show the calibration pattern and the projected dots.
In parallel, SMI signals of the array of light emitters are captured. The SMI signals are generated from light being reflected off the calibration target and reinjected back into the coherent light emitters, i.e. the cavities of the light emitters. As a consequence, the light emitters undergo SMI and the calibration target is detected via current and/or optical power modulation, for example. The modulated SMI signals can be analyzed, e.g. by the controller, and a beat frequency of the SMI signals is determined for each light emitter of the array.
For each coherent light emitter and each distance, data analysis of the SMI signals is performed. In this example embodiment, the SMI signals are first low-pass filtered, e.g. at 150 kHz, to remove high-frequency noise. Then, the filtered signals are segmented into up and down ramps by analyzing the peaks of a smoothed version of the filtered SMI signals. The triangular component can be removed by subtracting the smoothed version of the SMI signal from the original signal. An FFT can be performed on each resulting up-ramp and down-ramp segment to extract peak frequencies. Finally, the median of the up and down frequencies is determined, and the results can be averaged to yield a single beat frequency per distance.
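The beat-frequency extraction can be sketched as follows, assuming a synthetic SMI-like trace (a slow triangular component plus a beat tone); the sample rate, frequencies and smoothing window are illustrative, and a moving average stands in for the filtering and smoothing steps described above.

```python
import numpy as np

fs = 1.0e6                      # assumed sample rate
t = np.arange(2048) / fs
f_beat = 40_000.0               # synthetic beat frequency to recover
# SMI-like trace: 1 kHz triangular component plus the beat tone.
triangle = np.abs((t * 2000.0) % 2.0 - 1.0)
sig = triangle + 0.1 * np.sin(2 * np.pi * f_beat * t)

# Remove the slow triangular component by subtracting a smoothed copy,
# mirroring the smoothing/subtraction step described above.
win = 64
smoothed = np.convolve(sig, np.ones(win) / win, mode="same")
detrended = sig - smoothed

# The peak of the magnitude spectrum gives the beat-frequency estimate.
spectrum = np.abs(np.fft.rfft(detrended * np.hanning(len(detrended))))
freqs = np.fft.rfftfreq(len(detrended), d=1.0 / fs)
f_est = freqs[np.argmax(spectrum)]
# f_est lies within roughly one FFT bin (~488 Hz) of f_beat
```

A real pipeline would additionally segment the up and down ramps and take the median over ramps, as described in the text.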
For example, the calibration target is placed at no fewer than two different distances; additional distances will improve the calibration results. For example, to record a proof of principle, 20 planes were observed from 25 cm to 1.5 m in steps of approximately 5 cm. For each distance, a set of data was captured, including an image of the calibration pattern, an image of the array of dots, e.g. the VCSEL light beams projected onto the calibration target, and SMI signals for each light emitter.
Using the pairs of second images, i.e. the calibration pattern and dot images, the optical center of the SMI sensor is estimated. This includes determining spatial positions of the dots from the second images, e.g. with respect to the calibration pattern. The spatial positions are in 3D Euclidean space and have a z-component determined by the distance corresponding to a given pair of second images. The analysis also yields x- and y-components from the images, as these have been calibrated in the first step using the camera matrix.
From the spatial positions of the dots, direction vectors of the rays are determined using line fitting. This is depicted in Figure 4, which shows an example data set. Finally, an origin for each coherent light emitter (e.g. a geometric mean) is determined (see Figure 4: intersection of the laser beams at the optical center). The direction vectors are estimated for each light emitter and indicate the direction of the respective light beams. This constitutes the extrinsic calibration part and can be stored in a memory.
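The convergence of the fitted rays to a common origin (Figure 4) can be sketched as the least-squares point closest to a set of 3D lines; the ray data below are synthetic, and the routine is a generic sketch rather than the application's implementation.

```python
import numpy as np

def nearest_point_to_lines(origins, directions):
    """Return the point minimizing the summed squared distance to a set
    of 3D lines origins[i] + s * directions[i] (unit directions).
    Solves sum_i (I - d_i d_i^T) (p - a_i) = 0 for p."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for a, d in zip(origins, directions):
        M = np.eye(3) - np.outer(d, d)  # projection off the line direction
        A += M
        b += M @ a
    return np.linalg.solve(A, b)

# Synthetic rays that all pass through a common emitter origin.
true_center = np.array([0.0, 0.0, -0.01])
dirs = np.array([[0.1, 0.0, 1.0], [0.0, 0.1, 1.0], [-0.1, 0.05, 1.0]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
origins = true_center + 0.5 * dirs   # one sampled dot position per ray
center = nearest_point_to_lines(origins, dirs)
# center recovers true_center
```

With noisy dot positions, outlier rays could be rejected and the solve repeated, as claim 5 suggests.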
Thus, using the images for each distance, an optical center of the "projector" and direction vectors of each emitter can be determined. This information provides an anchor, or absolute position, in a Euclidean coordinate system and allows the distance measurements to be placed in 3D space. For example, let d_i be a distance measured by the light emitter l_i, O_p the optical center of the projector with respect to the imager O_c, and dir_i a calibrated direction vector. Then, a point P_i in 3D will be at O_c + dir_i * d_i. The parameters O_c and dir_i can be saved in the memory and accessed to determine point P_i in 3D from a measured distance d_i at any time.
In a next step, geometric distances of the dots from the previously determined optical origin can be determined. For example, this can be achieved by computing, for each dot, its corresponding distance to the calibration target using the estimated planes from the structured light calibration.
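Computing a dot's distance from the optical origin to the estimated plane can be sketched as a ray-plane intersection; the plane and ray values below are illustrative.

```python
import numpy as np

def ray_plane_distance(origin, direction, plane_normal, plane_point):
    """Signed distance along a ray (unit direction) from `origin` to the
    plane through `plane_point` with normal `plane_normal`."""
    return (np.dot(plane_normal, plane_point - origin)
            / np.dot(plane_normal, direction))

# Wall at z = 1 m, ray leaving the optical origin straight along +z.
d = ray_plane_distance(np.array([0.0, 0.0, 0.0]),
                       np.array([0.0, 0.0, 1.0]),
                       np.array([0.0, 0.0, 1.0]),
                       np.array([0.0, 0.0, 1.0]))
# d = 1.0
```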
In a next step, a linear mapping is established for each coherent light emitter, which relates the determined beat frequencies to a corresponding distance from the optical origin (see Figure 5). The light emitters can be associated with a set of distances and corresponding frequencies. The frequencies can be fitted to the distances using a robust linear regression, e.g. with a Huber loss. In other words, the linear mapping provides a direct linear relationship between frequency and distance. For each light emitter, coefficients a_i and b_i can be determined such that d_i = a_i * f_i + b_i, where f_i is the estimated frequency for laser l_i. This constitutes the extrinsic calibration.
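The robust frequency-to-distance fit can be sketched with a Huber loss solved by iteratively reweighted least squares; the data, the loss threshold and the `huber_linear_fit` helper are illustrative assumptions, not taken from the application.

```python
import numpy as np

def huber_linear_fit(f, d, delta=0.01, iters=50):
    """Fit d ≈ a * f + b, down-weighting residuals larger than `delta`
    (iteratively reweighted least squares for the Huber loss)."""
    a, b = 0.0, float(np.mean(d))
    X = np.column_stack([f, np.ones_like(f)])
    for _ in range(iters):
        r = d - (a * f + b)
        # Huber weights: 1 inside the threshold, delta/|r| outside.
        w = delta / np.maximum(np.abs(r), delta)
        a, b = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * d))
    return a, b

# Beat frequencies [Hz] vs distances [m] on d = 2e-5 * f + 0.01,
# with one corrupted measurement.
f = np.array([1e4, 2e4, 3e4, 4e4, 5e4])
d = 2e-5 * f + 0.01
d[2] += 0.5                     # outlier
a, b = huber_linear_fit(f, d)
# a stays close to 2e-5: the outlier barely affects the slope
```

An ordinary least-squares fit on the same data would be pulled toward the outlier, which is why a robust loss is preferable here.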
In a next step, the SMI signals of the array of light emitters are calibrated as a function of the linear mapping. For example, the parameters a_i and b_i can be saved to the memory and accessed for calibration, e.g. by means of the controller during operation of the range finder. Figure 6 shows that the distances estimated using the proposed concept match the distances determined in an alternative way, e.g. with a laser meter, with high accuracy. By combining the extrinsic and intrinsic calibration, the SMI measurements can be reconstructed in 3D space as shown in Figures 7A to 7C.
As mentioned above, finding the relation between frequency and distance can be done with an external Fabry-Perot interferometer. In that case, the proposed concept can be modified to derive the linear mapping of radial distances to 3D space. However, the proposed method can be advantageous as it solves the calibration with a single setup.
The proposed concept allows for industrial deployment of SMI sensor arrays. Indeed, the calibration method can be automated and can work with as few as 2 or 3 planes captured at different distances. The proposed procedure solves the entire calibration problem for this kind of sensor in one shot. In contrast, other methods would use an external Fabry-Perot interferometer, requiring a more complex setup, and would solve only parts of the problem. In case the SMI sensor is deployed alongside an imager, such as an NIR camera, for instance in a smartphone, online calibration of the setup is possible.
While this specification contains many specifics, these should not be construed as limitations on the scope of the invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of the invention. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous.
Furthermore, as used herein, the term "comprising" does not exclude other elements. In addition, as used herein, the article "a" is intended to include one or more than one component or element, and is not to be construed as meaning only one.
This patent application claims the priority of German patent application 102022127850.6, the disclosure content of which is hereby incorporated by reference.
References
10 projector
11 dots (discrete areas of light)
20 imager
30 controller
31 memory
50 calibration target
51 overlapping field-of-view

Claims
1. A method of calibrating a range finder comprising a self-mixing interferometer projector (10), an imager (20) and a calibration target (50) placed in a common field-of-view (51) of the projector and imager, wherein the projector comprises an array of coherent light emitters, the method comprising the steps of:
- placing the calibration target at different distances;
for each distance:
- projecting an array of dots emitted by the light emitters onto a calibration pattern on the calibration target and capturing, using the imager, second images of the calibration target,
- capturing SMI signals of the array of light emitters,
- determining beat frequencies of the SMI signals for each light emitter of the array;
- determining spatial positions of the dots from the second images,
- determining, from the spatial positions of the dots, direction vectors of the rays using line fitting,
- determining an optical origin for each light emitter,
- determining, from the optical origin, geometric distances of the projected dots,
- determining, for each light emitter, a linear mapping of the beat frequencies to a corresponding distance from the optical origin, and
- calibrating the SMI signals of the array of light emitters as a function of the linear mapping.
2. The method according to claim 1, further comprising the steps of:
- capturing, using the imager, first images of the calibration pattern and calibrating a camera matrix of intrinsic parameters of the imager from the first images, and
- capturing the second images using the calibrated imager.
3. The method according to claim 2, wherein the first images are captured from different angles and the camera matrix is determined by means of image processing, such as pose computation.
4. The method according to one of claims 1 to 3, wherein
- the second images form pairs of images of the calibration target including the calibration pattern and including the projected dots, and
- the spatial positions of the dots are determined from the pairs of second images.
5. The method according to one of claims 1 to 4, wherein
- the optical center and spatial positions are determined from the projected dots using quadratic interpolation,
- the optical center is determined from the projected dots using convergence of the direction vectors using a least-squares fit, and/or
- the optical center is determined from the projected dots using convergence of the direction vectors, including calculating the position of the optical center, determining outlier lines and then recalculating the position of the optical center without including the outlier lines.
6. The method according to one of claims 1 to 5, wherein determining the beat frequencies involves Fourier transforming and extracting of peak frequencies from the transformed SMI signals and/or averaging to yield a single beat frequency per distance.
7. The method according to one of claims 1 to 6, wherein, with d_i denoting a distance measured by the light emitter l_i, O_p denoting the optical center of the projector with respect to the imager O_c, and dir_i denoting a calibrated direction vector, a point P_i in three-dimensional space is determined by O_c + dir_i * d_i.
8. The method according to one of claims 1 to 7, wherein
- the linear mapping is determined by fitting the beat frequencies to the corresponding distances from the optical origin using linear regression, e.g. with a Huber loss, and/or
- the linear mapping provides a direct linear relationship between frequency and distance, such that for each light emitter l_i linear coefficients a_i and b_i are determined such that d_i = a_i * f_i + b_i, where f_i is the estimated beat frequency for light emitter l_i .
9. A calibration arrangement comprising:
- a self-mixing interferometer projector (10), an imager (20) and a calibration target (50) placed in a common field-of-view (51) of the projector and imager, wherein the projector comprises an array of coherent light emitters, and
- a controller (30) , wherein the controller is operable to cause the calibration arrangement to conduct the method of any previous claim, with the calibration target placed at different distances.
10. A range finder comprising:
- a self-mixing interferometer (SMI) projector (10), further comprising an array of coherent light emitters operable to generate SMI signals,
- a memory (31) to save calibration parameters for generating, during normal operation, a linear mapping of beat frequencies to a corresponding distance from an optical origin of each coherent light emitter, and
- a controller (30) operable to access the memory, read the saved calibration parameters and calibrate the SMI signals as a function of the linear mapping.
PCT/EP2023/077887 2022-10-21 2023-10-09 Method of calibrating a range finder, calibration arrangement and range finder WO2024083553A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102022127850 2022-10-21
DE102022127850.6 2022-10-21

Publications (1)

Publication Number Publication Date
WO2024083553A1 2024-04-25

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130120759A1 (en) * 2010-07-26 2013-05-16 Koninklijke Philips Electronics N.V. Apparatus for measuring a distance
US20200319322A1 (en) * 2018-05-04 2020-10-08 Microsoft Technology Licensing, Llc Field calibration of a structured light range-sensor
US20200356159A1 (en) * 2019-05-09 2020-11-12 Apple Inc. Self-Mixing Based 2D/3D User Input Detection and Scanning Laser System

