WO2021234420A1 - A time-of-flight sensor system - Google Patents

A time-of-flight sensor system

Info

Publication number
WO2021234420A1
Authority
WO
WIPO (PCT)
Prior art keywords
time
sensor system
position data
illumination
flight sensor
Application number
PCT/GB2021/051253
Other languages
French (fr)
Inventor
Joshua CARR
David Richards
Marc-Sebastian SCHOLZ
Original Assignee
Cambridge Mechatronics Limited
Application filed by Cambridge Mechatronics Limited filed Critical Cambridge Mechatronics Limited
Priority to CN202180036666.0A priority Critical patent/CN115667978A/en
Priority to GB2218936.9A priority patent/GB2611451A/en
Publication of WO2021234420A1 publication Critical patent/WO2021234420A1/en


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to group G01S17/00
    • G01S7/481 Constructional features, e.g. arrangements of optical elements
    • G01S7/4814 Constructional features of transmitters alone
    • G01S7/4817 Constructional features relating to scanning
    • G01S7/497 Means for monitoring or calibrating
    • G01S7/4972 Alignment of sensor
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/08 Systems determining position data of a target for measuring distance only
    • G01S17/10 Systems for measuring distance only using transmission of interrupted, pulse-modulated waves
    • G01S17/42 Simultaneous measurement of distance and other co-ordinates
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F03 MACHINES OR ENGINES FOR LIQUIDS; WIND, SPRING, OR WEIGHT MOTORS; PRODUCING MECHANICAL POWER OR A REACTIVE PROPULSIVE THRUST, NOT OTHERWISE PROVIDED FOR
    • F03G SPRING, WEIGHT, INERTIA OR LIKE MOTORS; MECHANICAL-POWER PRODUCING DEVICES OR MECHANISMS, NOT OTHERWISE PROVIDED FOR OR USING ENERGY SOURCES NOT OTHERWISE PROVIDED FOR
    • F03G7/00 Mechanical-power-producing mechanisms, not otherwise provided for or using energy sources not otherwise provided for
    • F03G7/06 Mechanisms using expansion or contraction of bodies due to heating, cooling, moistening, drying or the like
    • F03G7/061 Mechanisms characterised by the actuating element
    • F03G7/0614 Mechanisms using shape memory elements
    • F03G7/06143 Wires
    • F03G7/066 Actuator control or monitoring
    • F03G7/0665 Controlled displacement, e.g. by using a lens positioning actuator

Definitions

  • the present invention relates to a time-of-flight sensor system and a method for sensing light scattered by a subject for a time-of-flight sensor system.
  • a time-of-flight sensor system uses time-of-flight to resolve distance between the sensor and the subject for each point of an image.
  • the time-of-flight is measured, for example, by measuring the round trip time of an artificial light signal or pulse to and then reflected from the subject in a direct time-of-flight system.
  • the distance to the subject is half the product of the speed of light (3×10⁸ m s⁻¹) and the measured time of flight to and from the subject.
  • the time-of-flight can also be based on measuring the phase difference between an emitted signal and a received signal.
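The two measurement principles above, direct round-trip timing and phase comparison, can be sketched numerically. This is an illustrative calculation only; the function names are the editor's own, not part of the patent:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def distance_direct(round_trip_time_s: float) -> float:
    """Direct ToF: distance is half the product of the speed of light
    and the measured round-trip time."""
    return 0.5 * C * round_trip_time_s

def distance_from_phase(phase_rad: float, mod_freq_hz: float) -> float:
    """Indirect ToF: the phase shift of a modulated signal maps to distance.
    The result is unambiguous only within half the modulation wavelength."""
    return (phase_rad / (2.0 * math.pi)) * C / (2.0 * mod_freq_hz)
```

For example, a 2 ns round trip corresponds to a subject roughly 0.3 m away.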
  • Invisible light wavelengths may be used for time-of-flight camera systems to avoid disturbing a subject that is being imaged (which may also be captured with a visible light camera).
  • the near infrared (NIR) band (wavelengths 750 nm to 1.4 μm) is typically chosen due to the availability of small (portable) lasers with good resolving potential.
  • a time-of-flight three dimensional (3D) sensor can use light provided by an artificial light source. It is known, in some 3D sensing systems, to scan the illumination across a subject in order to produce an output frame.
  • the inventors of the present invention have appreciated that it is advantageous to know the position of a moveable element of an actuator with great accuracy, where the moveable element moves illumination provided by a light source across a subject.
  • knowing the position of said moveable element with high accuracy allows more accurate processing of data captured by the time-of-flight sensor system and thus advantageously provides a better depth map output.
  • a time-of-flight sensor system comprising: an illumination source configured to provide illumination for illuminating a subject to which a time-of-flight is to be measured; an optical system having an actuator, the actuator comprising a support structure and a moveable element moveable relative to the support structure; and a sensor having a sensor surface and being configured to sense light scattered by the subject from the illumination source and to provide depth data dependent on sensed light, wherein the actuator is configured to, by moving the moveable element, move the illumination across at least part of the subject to generate an output frame, and the time-of-flight sensor system further comprises a position sensor configured to determine position data of the moveable element.
  • the position data may comprise the position of the moveable element at a given time.
  • the incorporation of the position sensor enables the time- of-flight sensor system to have accurate position information of the moveable element.
  • Depth data may comprise one or more depth data points resulting from light scattered by different points of a subject as the illumination is moved across at least part of the subject.
  • the depth data points may correspond to instances of emission of illumination and can be correlated with position data points.
  • Position data may be correlated with depth data at two or more positions across the subject at different instances.
  • the time-of-flight sensor system may be, or may be provided in, any compatible apparatus or device, including a smartphone, a mobile computing device, a laptop, a tablet computing device, a security system, a gaming system, an augmented reality system, an augmented reality device, a wearable device, a drone, an aircraft, a spacecraft, a vehicle, an autonomous vehicle, a robotic device, a consumer electronics device, a domotic device, and a home automation device.
  • the illumination may be provided or emitted by any suitable light source.
  • the illumination source may be a source of non-visible light or a source of near infrared light.
  • the light source may comprise at least one laser, laser array (such as a vertical cavity surface emitting laser (VCSEL) array), or may comprise at least one light emitting diode (LED).
  • the actuator may be configured to move the moveable element at relatively high speeds, for example at 0.08 m/s, i.e. travelling 1 μm in 12.5 μs. At these scales, latency in signal transmission and the transmission times themselves are significant. For example, latency and transmission times relating to the transmission of data from the position sensor to, for example, a processor may be considered.
  • Latency in such signals may arise from various sources.
  • the position sensor may comprise a signal converter, such as an analogue to digital converter, which may have a certain amount of latency. For example, 10 μs of signal latency is not atypical for such converters.
  • the signal transmission itself may comprise inherent latency.
  • a communication interface such as a serial peripheral interface (SPI) may have relatively small overhead and may be capable of running at relatively high clock rates. However, clock rates higher than 10 MHz pose certain problems. Even at this speed, a latency of at least 1 μs is present; in practice, it is typically 10 μs.
  • An inter-integrated circuit (I2C) bus may have a higher overhead, and is typically not supported at more than 3.4 MHz. This implies a latency of at least 8 μs.
  • the position sensor may transmit determined position data to the processor. This may introduce latency from the sources described above. It is therefore desirable to reduce the impact of this latency in order to provide accurate position information of the moveable element. Various ways of reducing said impact are provided.
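The magnitude of the problem can be made concrete using the figures quoted above. The following back-of-envelope sketch is the editor's illustration, not text from the patent; it shows that at the stated actuator speed, the quoted bus latencies translate into position errors comparable to the scale of movement itself:

```python
def bus_transfer_time_us(bits: int, clock_hz: float) -> float:
    """Minimum time to clock out one sample over a serial bus,
    ignoring protocol overhead (a lower bound on latency)."""
    return bits / clock_hz * 1e6

def position_error_um(speed_m_per_s: float, latency_us: float) -> float:
    """Distance the moveable element travels during the latency window."""
    return speed_m_per_s * (latency_us * 1e-6) * 1e6

# A 16-bit sample over SPI at 10 MHz needs at least 1.6 us on the wire;
# at 0.08 m/s, a 10 us latency corresponds to a 0.8 um position error,
# comparable to the 1 um per 12.5 us movement scale quoted above.
```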
  • the position sensor may be synchronised with the illumination source such that the position data is determined when the illumination source provides illumination.
  • the position sensor may comprise a signal converter, and the signal converter may be synchronised with the illumination source such that the position data is determined when the illumination source provides illumination.
  • a direct correlation can be made between the depth data received from a particular instance of illumination emission, and the position data of the moveable element at the time of the emission of that illumination.
  • the depth data and position data for each instance may therefore be accurately correlated to produce an accurate depth map with significantly reduced latency effects.
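The synchronised-sampling idea above can be sketched as follows. This is a minimal illustration under the assumption that each position sample and depth sample is tagged with the index of the illumination flash that triggered it; the `Sample` type and function names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    flash_index: int  # which illumination flash this sample belongs to
    value: tuple      # depth reading, or (x, y) position of the moveable element

def correlate(depth_samples, position_samples):
    """Pair depth and position samples captured at the same illumination flash.
    Because the position sensor is triggered by the illumination source,
    samples sharing a flash index were captured at the same instant, so
    transmission latency no longer skews the pairing."""
    positions = {s.flash_index: s.value for s in position_samples}
    return [(s.value, positions[s.flash_index])
            for s in depth_samples if s.flash_index in positions]
```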
  • the position data may not be directly obtainable from the position sensor. That is, the position data may not be obtained in real time with the depth data. Rather, the position data may be obtainable by 1) obtaining stored position data from a memory module, 2) obtaining position data that corresponds to one or more movement commands, 3) obtaining position data by prediction, or 4) obtaining position data by interpolation, as explained in further detail below. Therefore, when the processor correlates the position data with the depth data to provide a depth map, it may carry out additional processing steps beyond simply matching position data with depth data.
  • the time-of-flight system may further comprise a memory module.
  • the position data may be stored in the memory module.
  • the processor may be configured to correlate the depth data with position data stored in the memory to provide the depth map.
  • each position data sample may be accompanied by a timestamp/corresponding time data, e.g. in the form of metadata. These data may be used in the interpolation / prediction methods as described.
  • the position data may be one-dimensional position data indicating a linear displacement of the moveable element from a reference point, e.g. a zero or default position.
  • the position data may preferably refer to a two-dimensional coordinate representing the position of the moveable element in a region of interest, e.g. across an X-Y plane.
  • the position data may refer to a three-dimensional coordinate representing the position of the moveable element within a volume of interest.
  • the optical system may further comprise a controller configured to control movement of the actuator by providing one or more movement commands.
  • the controller may issue instructions to the actuator to move the moveable element.
  • the one or more commands may control the actuator to move the moveable element in a first direction, or a plurality of subsequent directions, where the directions may be the same or different directions.
  • the position sensor may be configured to determine position data corresponding to each of the one or more movement commands.
  • the controller provides a movement command to the actuator
  • the position sensor may be configured to determine position data of the moveable element after moving as instructed by the controller.
  • the position sensor may determine position data for each of the movement commands of the one or more movement commands.
  • the determined position data may be stored in the memory.
  • a correspondence to the one or more movement commands may also be stored in the memory.
  • the time- of-flight system may, for example using the processor, then predict position data of the moveable element after a movement command is issued without the need for the position sensor to determine the position data of the moveable element again.
  • the processor may be configured to correlate the depth data with stored position data corresponding to the one or more movement commands to provide the depth map. This avoids the impact of the latency of transmission of the position data from the position sensor to the processor.
  • the one or more movement commands provided by the controller may be synchronised with the illumination provided by the illumination source such that position data corresponds to each of the one or more movement commands.
  • the position data of the moveable element may be known from the memory, and its correspondence with the depth data from the sensor may be obtained by synchronisation.
  • an accurate correlation and depth map may be provided.
  • the processor may be configured to calculate a velocity of movement of the moveable element based on the position data stored in the memory. For example, two or more position data points may be stored in the memory. The processor may calculate, using the two or more position data points, a velocity of movement of the moveable element based on a first and subsequent position data point or points. The processor may also be configured to extrapolate future position data based on stored position data. The processor may also calculate the direction of movement of the moveable element from the stored position data points. Based on one or more of these, the processor may be configured to extrapolate or calculate a future position of the moveable element. The depth data and predicted position data may then be correlated to produce the accurate depth map. Most advantageously, the position data calculated or predicted by the processor will correspond to instances of illumination emission such that the depth data can be properly correlated with position data. The corresponding point of illumination emission may be at the beginning, during, or end of emission as set out above.
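The velocity-based prediction described above amounts to linear extrapolation from stored, timestamped position samples. The following is an illustrative sketch, not the patent's implementation; it assumes one-dimensional `(time, position)` samples:

```python
def predict_position(samples, t_query):
    """Linearly extrapolate the moveable element's position at time t_query
    from the two most recent stored (t, x) samples: velocity is estimated
    from the last pair, then projected forward."""
    (t0, x0), (t1, x1) = samples[-2], samples[-1]
    velocity = (x1 - x0) / (t1 - t0)
    return x1 + velocity * (t_query - t1)
```

For two-dimensional position data the same calculation would simply be applied per axis.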
  • the position sensor may be configured to periodically determine position data of the moveable element.
  • the periods may be regular intervals.
  • the intervals may be set manually by a user or may be factory calibrated.
  • the periodically determined position data may be stored in the memory.
  • the processor may be configured to interpolate intermediate position data based on the periodically determined position data. That is, the processor may predict position data at a point between one or more periodically determined position data points.
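Interpolating between periodically determined samples can be sketched as below. This is the editor's minimal example, assuming timestamped one-dimensional `(time, position)` samples sorted by time:

```python
def interpolate_position(samples, t_query):
    """Linearly interpolate the position at t_query between the two
    periodically determined (t, x) samples that bracket it."""
    for (t0, x0), (t1, x1) in zip(samples, samples[1:]):
        if t0 <= t_query <= t1:
            frac = (t_query - t0) / (t1 - t0)
            return x0 + frac * (x1 - x0)
    raise ValueError("t_query lies outside the sampled interval")
```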
  • the position sensor may comprise a driver configured to drive the movement of the moveable element.
  • the actuator may be configured to move the illumination in a scanning pattern across at least a part of the subject.
  • the scanning pattern may comprise moving the illumination along a first direction across at least a part of the subject.
  • the scanning pattern may comprise moving the illumination across at least part of the subject in a single direction, such as from one side of the subject to the other side of the subject, so as to substantially cover the subject, or fully cover the subject.
  • the scanning pattern may further comprise moving the illumination along a second direction across at least part of the subject.
  • the first direction may be perpendicular to the second direction, or angled to the second direction in a plane. That is, the first direction may be angled at a non-zero angle to the second direction.
  • the scanning pattern may be a raster scanning pattern.
  • the scanning pattern may be boustrophedonic. Increasing the number of points in the scanning pattern may result in a more uniformly illuminated subject or field of view, which may allow improved resolution of the output frame. However, the more points in the scanning pattern, the more frames which need to be captured and combined in order to generate the output frame. The more frames there are, the more time it takes to combine the frames. Thus, the scanning pattern may be chosen to suit the application.
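A boustrophedonic (serpentine) raster ordering of scan points can be generated as follows; this is an illustrative sketch of the pattern, not code from the patent:

```python
def boustrophedon_scan(cols: int, rows: int):
    """Generate (x, y) grid points in boustrophedonic order:
    left-to-right on even rows, right-to-left on odd rows, so the
    illumination never jumps back across the field between rows."""
    points = []
    for y in range(rows):
        xs = range(cols) if y % 2 == 0 else range(cols - 1, -1, -1)
        points.extend((x, y) for x in xs)
    return points
```

Increasing `cols` and `rows` corresponds to the trade-off described above: more points give more uniform coverage but require more frames to be captured and combined.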
  • the illumination may have any suitable form or shape.
  • the illumination may comprise: a light beam having a beam projection configured to tessellate; a light beam having a circular or polygonal beam projection; or a stripe of light.
  • By "tessellate" it is meant that the beam shape is configured to substantially cover the subject as the illumination moves across at least part of the subject, without the beam shape overlapping. This may be without gaps between the projections, or there may be gaps between projections.
  • the illumination source may be configured to provide the illumination in discrete flashes of light.
  • the actuator may comprise the position sensor.
  • the actuator may comprise a shape memory alloy (SMA) component, the SMA component being configured to, on contraction, drive movement of the moveable element.
  • SMA component may be an SMA actuator wire.
  • An SMA is a material that changes shape with temperature as it transitions from a martensite phase to an austenite phase. When an SMA wire is heated it shortens in length, and when it cools it becomes more elastic and can be stretched by applying a force.
  • an SMA wire is used so that it can respond quickly when heated, typically by applying an electric current and relying on the resistance of the wire to dissipate power. The SMA is returned to a longer state by taking advantage of its elasticity when cooled and applying an opposing force, which might be provided by resilient means or another SMA wire.
  • the actuator may comprise at least two SMA components.
  • the actuator may comprise four SMA components, or eight SMA components.
  • the SMA components may be arranged in a manner allowing movement of the moveable element relative to the support structure in two orthogonal directions, for example along x and y axes, perpendicular to a notional primary axis extending through the moveable element, for example a z axis.
  • the at least two SMA components may be connected between the moveable element and the support structure and arranged to, on contraction, move the moveable element.
  • the four SMA components may be arranged in a loop around the notional primary axis, here referred to as the optical axis.
  • the four SMA components may consist of a first pair of SMA components arranged on opposite sides of the optical axis and a second pair of SMA components arranged on opposite sides of the optical axis.
  • the first pair of SMA components may be capable of selective driving to move the moveable element relative to the support structure in a first direction
  • the second pair of SMA components may be capable of selective driving to move the moveable element relative to the support structure in a second direction transverse to the first direction. Movement in directions other than parallel to the SMA components may be driven by a combination of actuation of these pairs of SMA components to provide a linear combination of movement of the moveable element in the transverse directions. Another way to view this movement is that simultaneous contraction of any pair of the SMA components that are adjacent to each other in the loop will drive movement of the moveable element in a direction bisecting those two of the SMA components (i.e. producing diagonal movement).
  • the SMA components may be capable of being selectively driven to move the moveable element relative to the support structure to any position in a range of movement in two orthogonal directions perpendicular to the optical axis.
  • the magnitude of the range of movement depends on the geometry and the range of contraction of the SMA components within their normal operating parameters.
  • bearings may be provided between the support structure and the moveable element.
  • plain bearings may be provided on the support structure to enable the movement of the moveable element.
  • the moveable element may comprise a lens element.
  • the support structure may support an image sensor.
  • the lens element may be arranged to focus an illumination onto the subject, whereby light scattered by the subject may be captured by the image sensor.
  • the image sensor may capture the image and be of any suitable type, for example, a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) device.
  • the SMA component may comprise the position sensor.
  • the position sensor may be configured to determine the position data by measuring the resistance of the SMA component.
  • a sense resistor may be connected in series with the SMA component.
  • a measurement circuit may be provided to perform a measurement indicative of potential difference across at least the SMA component.
  • the time-of-flight sensor system may further comprise a measurement switch configured to connect between the SMA component and the sense resistor. The measurement switch may be configured to connect either to the measurement circuit such that the measurement circuit can perform the measurement, or to a circuit that bypasses the sense resistor.
  • Such an apparatus improves the efficiency of measuring the resistance of the SMA component, while maintaining the sensitivity and accuracy. This therefore provides an accurate measurement of the position of the moveable element.
  • the measurement switch makes it possible to bypass the sense resistor when the SMA component is driven (i.e. heated) and to switch the sense resistor into the circuit when a measurement is required. It may only be necessary to measure the resistance intermittently. Between measurements, the sense resistor can be bypassed. This reduces the power required to heat the SMA component, thereby improving the efficiency.
  • the measurement circuit may be configured to perform a measurement indicative of the potential difference across at least the SMA component relative to a reference potential.
  • the reference potential may be connected to a connection potential at the opposite side of the sense resistor from the SMA component such that the reference potential is equal to the connection potential.
  • the measurement pulse can be shorter than the pulses for heating the SMA component to control its length.
  • the measurement can be performed without undesirably heating the SMA component.
  • the contraction of the SMA component causes movement of the illumination by movement of the moveable element.
  • the resistance of the SMA component at a given time therefore gives an indication of position data of the moveable element at that time, and thus the position data can be used in correlation with the depth data.
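The sense-resistor arrangement described above determines the SMA resistance from two voltage measurements. The following sketch illustrates the Ohm's-law calculation under the assumption of a simple series circuit; variable names are the editor's own:

```python
def sma_resistance(v_total: float, v_sense: float, r_sense: float) -> float:
    """Infer the SMA wire's resistance from a sense resistor in series:
    the same current I = V_sense / R_sense flows through both, so
    R_sma = V_sma / I = (V_total - V_sense) * R_sense / V_sense."""
    current = v_sense / r_sense
    return (v_total - v_sense) / current
```

The resulting resistance value would then be mapped (via calibration) to a position of the moveable element for correlation with the depth data.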
  • the position sensor may comprise a magnetic sensor, such as a Hall effect sensor or a magnetic tunnel junction.
  • the actuator may instead be a voice coil motor.
  • the voice coil motor may be separate to the position sensor. That is, the voice coil motor may be a different component to the position sensor.
  • the illumination source may be supported on the moveable element.
  • a method of sensing light scattered from a subject for a time-of-flight sensor system comprising: an illumination source illuminating a subject to which a time-of-flight is to be measured; a sensor having a sensor surface sensing light scattered by the subject from the illumination source to provide depth data dependent on sensed light; an actuator moving a moveable element relative to a support structure to move the illumination across at least part of the subject to generate an output frame; and a position sensor determining position data of the moveable element.
  • non-transitory computer-readable medium comprising instructions for performing the method set out above.
  • the non-transitory computer-readable medium may be, for example, a solid state memory, a microprocessor, a CD or DVD-ROM, programmed memory such as non-volatile memory (such as Flash), or read-only memory (firmware), or on a data carrier such as an optical or electrical signal carrier.
  • Figure 1 is a schematic drawing of a time-of-flight sensor system embodying an aspect of the present invention
  • Figure 2 is a schematic drawing of illumination of a subject using a time-of-flight sensor system embodying an aspect of the present invention
  • Figure 3 is a graph illustrating operation of a time-of-flight sensor system embodying an aspect of the present invention
  • Figure 4 is a graph illustrating operation of a time-of-flight sensor system embodying an aspect of the present invention
  • Figure 5 is a graph illustrating operation of a time-of-flight sensor system embodying an aspect of the present invention.
  • Figure 6 is a schematic drawing of an actuator for a time-of-flight sensor system embodying an aspect of the present invention.
  • FIG. 1 illustrates a time-of-flight sensor system 100.
  • the system 100 comprises an illumination source which in this example is a vertical cavity surface emitting laser (VCSEL) 113 for illuminating a subject 101 to which a time-of-flight is to be measured.
  • the system 100 also comprises an optical system comprising an actuator 111.
  • the actuator 111 comprises four SMA components. This actuator is described in much more detail below and is illustrated by Figure 6.
  • the actuator 111 is configured to move the illumination 103 provided by the VCSEL 113 by moving a moveable element 109 relative to a support structure (not shown). More specifically, the moveable element 109 may move in a direction orthogonal to the optical axis of the illumination.
  • the moveable element 109 comprises a lens element.
  • the system 100 also comprises a sensor 107 having a sensor surface and being configured to sense light scattered by the subject 101 from the VCSEL 113 and to provide depth data dependent on sensed light.
  • the system 100 also comprises a position sensor.
  • the actuator 111 comprises the position sensor.
  • the position sensor may be a separate component to the actuator.
  • the position sensor determines position data of the moveable element by measuring the resistance of at least one of the SMA components.
  • a sense resistor is connected in series with the SMA components, and a measurement circuit is provided to perform a measurement indicative of potential difference across at least one of the SMA components.
  • the system 100 also comprises a processor 115 in connection with the VCSEL 113, the sensor 107 and the position sensor.
  • the processor 115 also comprises a controller.
  • the VCSEL 113 provides illumination 103 in the form of a stripe and in a discrete flash.
  • the controller then provides a movement command to the actuator 111 to move the moveable element 109, such that the illumination 103 is moved across at least part of the subject 101.
  • the moveable element is moved by the actuator by the contraction of at least one of the SMA components upon heating the at least one of the SMA components.
  • the VCSEL continues to provide flashes of stripes of illumination 103 as the moveable element 109 moves the illumination 103 across the subject 101.
  • the position sensor is synchronised with the VCSEL.
  • the position sensor comprises an analogue to digital converter, and the converter is synchronised with the VCSEL.
  • the position sensor provides position data of the moveable element by measuring the resistance of at least one of the SMA components. Due to the synchronisation, this measurement is synchronised to be performed when the VCSEL emits illumination 103. In particular, the position sensor is synchronised to perform the measurement at the start of illumination. However, it will be understood that this measurement could be taken at any time during illumination, or at the end of illumination, as long as consistent synchronisation exists.
  • Light scattered by the subject 101 is received by the sensor 107, which provides depth data dependent on the received light to the processor 115.
  • the position sensor provides position data to the processor. As the illumination 103 is scanned across the subject 101, a plurality of data points for both position data and depth data are obtained, all of which are provided to the processor.
  • the transmission of this data may typically introduce latency from various sources discussed above.
  • the effect of this latency is mitigated due to the synchronisation of the position sensor with the VCSEL, as the processor has knowledge of which position data points correspond to which depth data values.
  • the position data and depth data are correlated by the processor, and an accurate depth map 117 is produced because the impact of latency is avoided.
  • the time-of-flight system 100 illustrated in Figure 1 is implemented in a smartphone device, but it will be understood that the time-of-flight system could be implemented in any appropriate system.
  • Figure 2 illustrates a scanning pattern of illumination across a subject 207.
  • the illumination is provided by a VCSEL (not shown) in flashes of stripes of light.
  • the illumination is moved across the subject 207 in a first direction along a single axis, from right to left.
  • a first flash of illumination 201 is first provided.
  • as the moveable element is moved by the actuator, the illumination is scanned across the subject.
  • a second flash of light 203 and a third flash of light 205 are provided.
  • depth data will be obtained by the receiving sensor.
  • position data corresponding to the position of the moveable element will be obtained, to correspond to each flash of light of the scanning pattern.
  • a plurality of data points of each of the depth data and position data may be obtained in order to provide correlation to produce a depth map.
  • the shape of illumination may be any suitable shape as set out above, and may also be scanned in two or more directions.
  • Figure 3 is a graph illustrating the use of a time-of-flight sensor system as illustrated in Figure 1.
  • Figure 3 emphasises the synchronisation of the position sensor and the VCSEL.
  • the illumination intensity 201, 203, 205 corresponds to the flashes of illumination scanning across the subject 207 of Figure 2.
  • a synchronisation signal is provided at the point of emission of illumination from the VCSEL.
  • the synchronisation signal causes the position sensor to determine the position 301, 303, 305 of the moveable element at the instance of illumination emission.
  • Figure 4 is a graph illustrating an alternative way of mitigating latency in a time-of-flight sensor system to that illustrated in Figure 1.
  • the use of the time-of-flight sensor system illustrated in Figure 4 predicts position data of the moveable element using extrapolation.
  • the position sensor determines position data 401 of the moveable element periodically. The intervals between each determination are in this example constant, but it will be understood that they need not be.
  • the position data 401 is stored in a memory module, in connection with the processor. Based on the position data 401, the processor extrapolates in order to predict a future position data point 403 of the moveable element.
  • the processor also calculates the velocity of movement of the moveable element based on the position data points 401, but it will be understood that this is not necessary to extrapolate a future position data point. Therefore, position data of the moveable element may be predicted such that position data points are determined corresponding to each instance of emission of light 201, 203, 205. By predicting a position data point 403, the number of measurements needed from the position sensor is reduced. Furthermore, it is not necessary to determine the position data points at a specific time because the position data points may be acquired by prediction/calculation. This reduces the amount of latency introduced into the system, and therefore improves the accuracy of the position data and ultimately of the depth map produced from correlating the depth data with the position data.
  • Figure 5 illustrates yet another use of a time-of-flight system configured to reduce the impact of latency in another way.
  • position data of the moveable element may be interpolated.
  • the position sensor periodically determines position data of the moveable element.
  • although the position data 503 is determined at regular intervals in this example, it will be understood that it need not be determined at regular intervals.
  • the position data 503 is stored in the memory module in connection with the processor. Based on the position data points 503, the processor interpolates to determine position data 501 of the moveable element at a given time between one or more of the position data points 503.
  • the system is therefore capable of determining position data of the moveable element 501 at any point in time, without requiring further measurement from the position sensor.
  • the processor can interpolate to find the position of the moveable element at an instance of illumination emission, and thus a position data point can be determined to correspond to each instance of emission of light 201, 203, 205. This, again, mitigates the impact of latency of transmission from the position sensor and therefore improves the accuracy of the position data.
  • Figure 6 illustrates an SMA actuator arrangement as implemented in the time-of-flight sensor system illustrated in Figure 1.
  • the actuator arrangement 10 comprises a total of four SMA actuator wires 11, 12, 13, 14 connected between a moveable element 15 and a support block 16, which forms part of a support structure and is mounted to a base.
  • Each of the SMA actuator wires 11 to 14 is held in tension, thereby applying a force between the moveable element 15 and the support block 16 in a direction perpendicular to a notional primary axis, here referred to as an optical axis.
  • the SMA actuator wires 11 to 14 move the moveable element 15 relative to the support block 16 in two orthogonal directions perpendicular to the optical axis.
  • the SMA actuator wires 11 to 14 are connected at one end to the moveable element 15 by respective crimping members 17 and at the other end to the support block by crimping members 18.
  • the crimping members 17, 18 crimp the wire to hold it mechanically, optionally strengthened by use of an adhesive.
  • the crimping members 17, 18 also provide an electrical connection to the SMA actuator wires 11 to 14.
  • any suitable means for connecting the SMA actuator wires 11 to 14 may alternatively be used.
  • the four SMA wires 11 to 14 are arranged in a loop around the optical axis.
  • the four SMA wires consist of a first pair of SMA wires 11, 13 arranged on opposite sides of the optical axis and a second pair of SMA wires 12, 14 arranged on opposite sides of the optical axis.
  • the first pair of SMA wires 11, 13 are capable of selective driving to move the moveable element 15 relative to the support structure in a first direction.
  • the second pair of SMA wires 12, 14 are capable of selective driving to move the moveable element 15 relative to the support structure in a second direction transverse to the first direction. Movement in directions other than parallel to the SMA wires 11 to 14 is driven by a combination of actuation of these pairs of SMA wires to provide a linear combination of movement of the moveable element in the transverse directions. Another way to view this movement is that simultaneous contraction of any pair of the SMA wires that are adjacent to each other in the loop will drive movement of the moveable element in a direction bisecting those two SMA wires (i.e. producing diagonal movement).
  • the SMA wires 11 to 14 may be capable of being selectively driven to move the moveable element 15 relative to the support structure to any position in a range of movement in two orthogonal directions perpendicular to the optical axis.
  • the magnitude of the range of movement depends on the geometry and the range of contraction of the SMA wires within their normal operating parameters.
  • a controller may provide movement commands to the actuator to move the moveable element, such that the illumination is moved across the subject.
  • the position sensor determines the position of the moveable element. That is, the position sensor determines the position of the moveable element corresponding to the movement command issued by the controller.
  • the position sensor determines the resulting position of the moveable element.
  • the position data obtained is stored in a memory module. If the controller repeats a movement command, the memory module will contain position data corresponding to the movement command.
  • the processor, by accessing the stored position data in the memory module, is able to predict position data without requiring the position sensor to re-determine the position data of the moveable element. This removes the need to use the position sensor when the position data can be predicted, thus mitigating the latency introduced by transmission of the position data from the position sensor and therefore improving the accuracy of the position data. This ultimately improves the accuracy of the depth map.
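The command-indexed caching described in the points above can be sketched in Python. This is an illustrative sketch only: the class, method, and sensor names are assumptions and do not form part of the disclosure.

```python
class PositionCache:
    """Caches position data against movement commands so that a repeated
    command can reuse stored position data instead of waiting for a fresh,
    latency-prone read-out from the position sensor."""

    def __init__(self, position_sensor):
        self._sensor = position_sensor   # callable returning an (x, y) position
        self._by_command = {}            # movement command -> cached position

    def position_for(self, command):
        if command in self._by_command:
            # Command issued before: predict from memory, no sensor read.
            return self._by_command[command]
        # First occurrence: measure once and store against the command.
        position = self._sensor()
        self._by_command[command] = position
        return position


# Example: a stub sensor that records how often it is actually read.
reads = []

def stub_sensor():
    reads.append(1)
    return (len(reads) * 10.0, 0.0)

cache = PositionCache(stub_sensor)
first = cache.position_for("move_right")
second = cache.position_for("move_right")  # served from memory
assert first == second and len(reads) == 1
```

Only the first occurrence of a command touches the position sensor; repeats are served from the memory module, which is the latency saving described above.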

Abstract

A time-of-flight sensor system (100) comprising: an illumination source (113) configured to provide illumination (103) for illuminating a subject (101) to which a time-of-flight is to be measured; an optical system having an actuator (111), the actuator comprising a support structure and a moveable element (109) moveable relative to the support structure; and a sensor (107) having a sensor surface. The sensor is configured to sense light scattered by the subject (101) from the illumination source (113) and to provide depth data dependent on sensed light. The actuator (111) is configured to, by moving the moveable element, move the illumination (103) across at least part of the subject (101) to generate an output frame. The time-of-flight sensor system (100) also comprises a position sensor configured to determine position data of the moveable element (109). The time-of-flight sensor system (100) also comprises a processor (115) configured to correlate the position data with the depth data to provide a depth map.

Description

A TIME-OF-FLIGHT SENSOR SYSTEM
Field
The present invention relates to a time-of-flight sensor system and a method for sensing light scattered by a subject for a time-of-flight sensor system.
Background
A time-of-flight sensor system uses time-of-flight to resolve the distance between the sensor and the subject for each point of an image. In a direct time-of-flight system, the time-of-flight is measured, for example, by measuring the round-trip time of an artificial light signal or pulse emitted towards and reflected from the subject. Thus, the distance to the subject is half the product of the speed of light (3×10⁸ m s⁻¹) and the measured time of flight to and from the subject. Alternatively, in an indirect time-of-flight system, the time-of-flight can be based on measuring the phase difference between an emitted signal and a received signal.
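The direct time-of-flight relation described above (distance equals half the product of the speed of light and the round-trip time) can be expressed as a short sketch; the function name is illustrative only.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s, approximated as 3e8 in the text above

def distance_from_round_trip(round_trip_seconds):
    """Direct time-of-flight: the light travels to the subject and back,
    so the one-way distance is half of (speed of light x round-trip time)."""
    return 0.5 * SPEED_OF_LIGHT * round_trip_seconds

# A subject 1.5 m away returns light after roughly 10 ns:
round_trip = 2 * 1.5 / SPEED_OF_LIGHT
assert abs(distance_from_round_trip(round_trip) - 1.5) < 1e-9
```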
Invisible light wavelengths may be used for time-of-flight camera systems to avoid disturbing a subject that is being imaged (which may also be captured with a visible light camera). The near infrared (NIR) band (wavelengths 750 nm to 1.4 µm) is typically chosen due to the availability of small (portable) lasers with good resolving potential.
A time-of-flight three dimensional (3D) sensor can use light provided by an artificial light source. It is known, in some 3D sensing systems, to scan the illumination across a subject in order to produce an output frame.
Summary
The inventors of the present invention have appreciated that it is advantageous to know the position of a moveable element of an actuator with great accuracy, where the moveable element moves illumination provided by a light source across a subject. As set out in more detail below, knowing the position of said moveable element with high accuracy allows more accurate processing of data captured by the time-of-flight sensor system and thus advantageously provides a better depth map output.
In addition, the inventors of the present invention have also appreciated the benefits of reducing the impact of latency potentially introduced in arrangements requiring signal transmission to provide information about the position of the moveable element. Such arrangements are described in more detail below. The invention is defined by the independent claims to which reference should now be made. Optional features are set forth in the dependent claims.
According to an aspect of the present invention, there is provided a time-of-flight sensor system comprising: an illumination source configured to provide illumination for illuminating a subject to which a time-of-flight is to be measured; an optical system having an actuator, the actuator comprising a support structure and a moveable element moveable relative to the support structure; and a sensor having a sensor surface and being configured to sense light scattered by the subject from the illumination source and to provide depth data dependent on sensed light, wherein the actuator is configured to, by moving the moveable element, move the illumination across at least part of the subject to generate an output frame, and the time-of-flight sensor system further comprises a position sensor configured to determine position data of the moveable element. The position data may comprise the position of the moveable element at a given time. The incorporation of the position sensor enables the time-of-flight sensor system to have accurate position information of the moveable element. Depth data may comprise one or more depth data points resulting from light scattered by different points of a subject as the illumination is moved across at least part of the subject. The depth data points may correspond to instances of emission of illumination and can be correlated with position data points. Position data may be correlated with depth data at two or more positions across the subject at different instances.
In doing this, accurate position data can be correlated with depth data provided by the sensor. This advantageously provides the production of an accurate depth map.
The time-of-flight sensor system may be, or may be provided in, any compatible apparatus or device, including a smartphone, a mobile computing device, a laptop, a tablet computing device, a security system, a gaming system, an augmented reality system, an augmented reality device, a wearable device, a drone, an aircraft, a spacecraft, a vehicle, an autonomous vehicle, a robotic device, a consumer electronics device, a domotic device, and a home automation device.
The illumination may be provided or emitted by any suitable light source. For example, the illumination source may be a source of non-visible light or a source of near infrared light. The light source may comprise at least one laser, laser array (such as a vertical cavity surface emitting laser (VCSEL) array), or may comprise at least one light emitting diode (LED). The actuator may be configured to move the moveable element at relatively high speeds. For example, at a speed of 0.08 m/s, i.e. travelling 1 µm in 12.5 µs. At these scales, latency in signal transmission and transmission times themselves are significant. For example, latency and transmission times relating to the transmission of data from the position sensor to, for example, a processor may be considered. This is particularly the case given that at least 9 bits are required to encode position data to the desired precision. For example, in typical applications where a precision of 1 µm is required across a length of 400 µm, there are at least 400 unique values. Therefore, a 9-bit system (2⁹ = 512) is required to encode with sufficient precision.
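The bit-width figure above can be checked numerically. A minimal sketch, with an illustrative function name:

```python
import math

def bits_needed(travel_um, precision_um):
    """Bits required to encode travel/precision distinct position values."""
    levels = math.ceil(travel_um / precision_um)
    return max(1, math.ceil(math.log2(levels)))

# 400 um of travel at 1 um precision gives 400 unique values,
# which needs a 9-bit code (2**9 = 512 >= 400):
assert bits_needed(400, 1) == 9
```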
Latency in such signals may arise from various sources. The position sensor may comprise a signal converter, such as an analogue to digital converter, which may have a certain amount of latency. For example, 10 µs signal latency is not atypical for such converters.
The signal transmission itself may comprise inherent latency. A communication interface such as a serial peripheral interface (SPI) may have relatively small overhead and also be capable of running at relatively high clock rates. However, a clock rate higher than 10 MHz poses certain problems. Even at this speed, a latency of at least 1 µs is present. In practice, this is typically 10 µs. An inter-integrated circuit (I2C) interface may have a higher overhead, but is typically not supported at more than 3.4 MHz. This implies a latency of at least 8 µs.
The position sensor may transmit determined position data to the processor. This may introduce latency from the sources described above. It is therefore desirable to reduce the impact of this latency in order to provide accurate position information of the moveable element. There is provided various ways of reducing said impact.
For example, hardware components of the time-of-flight sensor system may be synchronised. The position sensor may be synchronised with the illumination source such that the position data is determined when the illumination source provides illumination. By this it is meant that the position data may be determined at the start of emission of the illumination, any time during the emission of illumination, or at the end of emission of illumination, as long as the synchronisation is consistent. The position sensor may comprise a signal converter, and the signal converter may be synchronised with the illumination source such that the position data is determined when the illumination source provides illumination. By synchronising the position sensor and illumination source, the impact of latency in signal transmission or processing is reduced greatly because the system (such as the processor) knows the position data of the moveable element at a particular instance of illumination emission. Therefore, a direct correlation can be made between the depth data received from a particular instance of illumination emission, and the position data of the moveable element at the time of the emission of that illumination. The depth data and position data for each instance may therefore be accurately correlated to produce an accurate depth map with significantly reduced latency effects.
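The effect of the synchronisation can be sketched as index-wise pairing: because each flash triggers exactly one position sample, the i-th depth value and the i-th position value belong together no matter how late either arrives at the processor. The names below are illustrative only.

```python
def correlate(position_points, depth_points):
    """Pair the i-th position sample with the i-th depth sample.
    Synchronisation guarantees one position sample per illumination flash,
    so pairing by index is immune to transmission latency."""
    if len(position_points) != len(depth_points):
        raise ValueError("synchronisation lost: sample counts differ")
    return list(zip(position_points, depth_points))

# Three flashes, three synchronised position samples:
depth_map = correlate([(0, 0), (5, 0), (10, 0)], [1.50, 1.52, 1.49])
assert depth_map[1] == ((5, 0), 1.52)
```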
The position data may not be directly obtainable from the position sensor. That is, the position data may not be obtained in real time with the depth data. Rather, the position data may be obtainable by 1) obtaining stored position data from a memory module, 2) obtaining position data that corresponds to one or more movement commands, 3) obtaining position data by prediction or 4) obtaining position data by interpolation, as explained in further detail below. Therefore, when the processor correlates the position data with the depth data to provide a depth map, it may carry out additional processing steps other than corresponding positional data with depth data.
The time-of-flight system may further comprise a memory module. The position data may be stored in the memory module. The processor may be configured to correlate the depth data with position data stored in the memory to provide the depth map. In addition to the position data, each position data sample may be accompanied by a timestamp/corresponding time data, e.g. in the form of metadata. These data may be used in the interpolation / prediction methods as described.
The position data may be one-dimensional position data indicating a linear displacement of the moveable element from a reference point, e.g. a zero or default position. The position data may preferably refer to a two-dimensional coordinate representing the position of the moveable element in a region of interest, e.g. across an X-Y plane. The position data may refer to a three-dimensional coordinate representing the position of the moveable element within a volume of interest.
Another way of reducing the impact of latency may be to predict position data of the moveable element. For example, the optical system may further comprise a controller configured to control movement of the actuator by providing one or more movement commands. In other words, the controller may issue instructions to the actuator to move the moveable element. The one or more commands may control the actuator to move the moveable element in a first direction, or a plurality of subsequent directions, where the directions may be the same or different directions. The position sensor may be configured to determine position data corresponding to each of the one or more movement commands. When the controller provides a movement command to the actuator, the position sensor may be configured to determine position data of the moveable element after moving as instructed by the controller. The position sensor may determine position data for each of the movement commands of the one or more movement commands. The determined position data may be stored in the memory. A correspondence to the one or more movement commands may also be stored in the memory. Thus, there may be stored in the memory one or more movement commands and their corresponding position data of the moveable element. By doing this, if a movement command provided by the controller has been issued before, the memory will contain said movement command and its corresponding position data of the moveable element. The time-of-flight system may, for example using the processor, then predict position data of the moveable element after a movement command is issued without the need for the position sensor to determine the position data of the moveable element again. The processor may be configured to correlate the depth data with stored position data corresponding to the one or more movement commands to provide the depth map.
This avoids the impact of the latency of transmission of the position data from the position sensor to the processor.
The one or more movement commands provided by the controller may be synchronised with the illumination provided by the illumination source such that position data corresponds to each of the one or more movement commands. Thus, the position data of the moveable element may be known from the memory, and its correspondence with the depth data from the sensor may be obtained by synchronisation. Thus, an accurate correlation and depth map may be provided.
The processor may be configured to calculate a velocity of movement of the moveable element based on the position data stored in the memory. For example, two or more position data points may be stored in the memory. The processor may calculate, using the two or more position data points, a velocity of movement of the moveable element based on a first and subsequent position data point or points. The processor may also be configured to extrapolate future position data based on stored position data. The processor may also calculate the direction of movement of the moveable element from the stored position data points. Based on one or more of these, the processor may be configured to extrapolate or calculate a future position of the moveable element. The depth data and predicted position data may then be correlated to produce the accurate depth map. Most advantageously, the position data calculated or predicted by the processor will correspond to instances of illumination emission such that the depth data can be properly correlated with position data. The corresponding point of illumination emission may be at the beginning, during, or end of emission as set out above.
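A minimal sketch of the velocity-based extrapolation, assuming timestamped one-dimensional position samples (the names and units are illustrative, not from the text):

```python
def extrapolate(t0, p0, t1, p1, t_future):
    """Predict a future position from two timestamped samples by
    computing the velocity of the moveable element and projecting it."""
    velocity = (p1 - p0) / (t1 - t0)
    return p1 + velocity * (t_future - t1)

# Samples 100 us apart, moving 8 um per interval (0.08 m/s, as above);
# the position one further interval ahead is predicted without a new
# position-sensor measurement:
assert abs(extrapolate(0.0, 0.0, 100e-6, 8.0, 200e-6) - 16.0) < 1e-9
```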
The position sensor may be configured to periodically determine position data of the moveable element. The periods may be regular intervals. The intervals may be set manually by a user or may be factory calibrated. The periodically determined position data may be stored in the memory. The processor may be configured to interpolate intermediate position data based on the periodically determined position data. That is, the processor may predict position data at a point between one or more periodically determined position data points.
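The interpolation of intermediate position data can be sketched similarly, again with illustrative names; the stored samples need not be regularly spaced, as noted above.

```python
def interpolate(samples, t):
    """Linearly interpolate a position at time t from stored
    (time, position) samples."""
    samples = sorted(samples)
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            return p0 + (p1 - p0) * (t - t0) / (t1 - t0)
    raise ValueError("t lies outside the stored sample range")

# Periodically stored samples; the position midway between the first
# two is recovered without a further position-sensor measurement:
stored = [(0.0, 0.0), (100e-6, 8.0), (200e-6, 16.0)]
assert abs(interpolate(stored, 50e-6) - 4.0) < 1e-9
```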
The position sensor may comprise a driver configured to drive the movement of the moveable element. The actuator may be configured to move the illumination in a scanning pattern across at least a part of the subject. The scanning pattern may comprise moving the illumination along a first direction across at least a part of the subject. For example, the scanning pattern may comprise moving the illumination across at least part of the subject in a single direction, such as from one side of the subject to the other side of the subject, so as to substantially cover the subject, or fully cover the subject. The scanning pattern may further comprise moving the illumination along a second direction across at least part of the subject. The first direction may be perpendicular to the second direction, or angled to the second direction in a plane. That is, the first direction may be angled at a non-zero angle to the second direction. The scanning pattern may be a raster scanning pattern. The scanning pattern may be boustrophedonic. Increasing the number of points in the scanning pattern may result in a more uniformly illuminated subject or field of view, which may allow improved resolution of the output frame. However, the more points in the scanning pattern, the more frames which need to be captured and combined in order to generate the output frame. The more frames there are, the more time it takes to combine the frames. Thus, the scanning pattern may be chosen to suit the application.
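A boustrophedonic raster pattern of the kind mentioned above can be generated as follows. The grid coordinates are illustrative; the text does not fix a point spacing.

```python
def boustrophedon(cols, rows):
    """Scan positions row by row, with alternate rows reversed so the
    illumination never jumps back across the subject between rows."""
    points = []
    for row in range(rows):
        xs = range(cols) if row % 2 == 0 else range(cols - 1, -1, -1)
        points.extend((x, row) for x in xs)
    return points

path = boustrophedon(3, 2)
assert path == [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]
```

More points give more uniform illumination but more frames to combine, which is the trade-off described above.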
The illumination may have any suitable form or shape. For example, the illumination may comprise: a light beam having a beam projection configured to tessellate; a light beam having a circular or polygonal beam projection; or a stripe of light. It will be understood that these are merely example types of illumination and are non-limiting. By tessellate, it is meant that the beam shape is configured to substantially cover the subject when moving the illumination across at least part of the subject without the beam shape overlapping. This may be without gaps between the projections, or there may be gaps between projections. The illumination source may be configured to provide the illumination in discrete flashes of light.
The actuator may comprise the position sensor. The actuator may comprise a shape memory alloy (SMA) component, the SMA component being configured to, on contraction, drive movement of the moveable element. The SMA component may be an SMA actuator wire.
An SMA is a material that changes shape over temperature as it transitions from a martensite to an austenite phase. When the SMA wire is heated, it shortens in length and when it cools it becomes more elastic and can be stretched by applying force. In an actuator, an SMA wire is used so that it can respond quickly when heated, typically by applying an electric current and relying on the resistance of the wire to dissipate power. The SMA is returned to a longer state by taking advantage of its elasticity when cooled and applying an opposing force, which might be provided by resilient means or another SMA wire.
The actuator may comprise at least two SMA components. For example, the actuator may comprise four SMA components, or eight SMA components. The SMA components may be arranged in a manner allowing movement of the moveable element relative to the support structure in two orthogonal directions, for example along x and y axes, perpendicular to a notional primary axis extending through the moveable element, for example a z axis. The at least two SMA components may be connected between the moveable element and the support structure and arranged to, on contraction, move the moveable element. In an arrangement comprising four SMA components, the four SMA components may be arranged in a loop around the notional primary axis, here referred to as the optical axis. The four SMA components may consist of a first pair of SMA components arranged on opposite sides of the optical axis and a second pair of SMA components arranged on opposite sides of the optical axis. The first pair of SMA components may be capable of selective driving to move the moveable element relative to the support structure in a first direction, and the second pair of SMA components may be capable of selective driving to move the moveable element relative to the support structure in a second direction transverse to the first direction. Movement in directions other than parallel to the SMA components may be driven by a combination of actuation of these pairs of SMA components to provide a linear combination of movement of the moveable element in the transverse directions. Another way to view this movement is that simultaneous contraction of any pair of the SMA components that are adjacent to each other in the loop will drive movement of the moveable element in a direction bisecting those two of the SMA components (i.e. producing diagonal movement).
As a result, the SMA components may be capable of being selectively driven to move the moveable element relative to the support structure to any position in a range of movement in two orthogonal directions perpendicular to the optical axis. The magnitude of the range of movement depends on the geometry and the range of contraction of the SMA components within their normal operating parameters.
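The linear combination of pair actuations described above can be sketched with an assumed sign convention (wires 1 and 3 opposed along x, wires 2 and 4 opposed along y); the convention and names are illustrative and not specified in the text.

```python
def net_motion(c1, c2, c3, c4):
    """Net 2-D displacement from the contractions of the four SMA wires.
    Opposed wires act differentially along each axis (assumed convention)."""
    return (c1 - c3, c2 - c4)

# Driving one wire of each opposed pair moves the element along one axis:
assert net_motion(1.0, 0.0, 0.0, 0.0) == (1.0, 0.0)
# Equal contraction of two adjacent wires (1 and 2) bisects their
# directions, producing the diagonal movement described above:
assert net_motion(1.0, 1.0, 0.0, 0.0) == (1.0, 1.0)
```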
To aid the movement of the moveable element relative to the support structure, bearings may be provided between the support structure and the moveable element. Alternatively, plain bearings may be provided on the support structure to enable the movement of the moveable element.
The moveable element may comprise a lens element. The support structure may support an image sensor. The lens element may be arranged to focus an illumination onto the subject, whereby light scattered by the subject may be captured by the image sensor. The image sensor may capture the image and be of any suitable type, for example, a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) device.
The SMA component may comprise the position sensor. In this embodiment, the position sensor may be configured to determine the position data by measuring the resistance of the SMA component. A sense resistor may be connected in series with the SMA component. A measurement circuit may be provided to perform a measurement indicative of potential difference across at least the SMA component. The time-of-flight sensor system may further comprise a measurement switch configured to connect between the SMA component and the sense resistor. The measurement switch may be configured to connect either to the measurement circuit such that the measurement circuit can perform the measurement, or to a circuit that bypasses the sense resistor.
Such an apparatus improves the efficiency of measuring the resistance of the SMA component, while maintaining the sensitivity and accuracy. This therefore provides an accurate measurement of the position of the moveable element. The measurement switch makes it possible to bypass the sense resistor when the SMA component is driven (i.e. heated) and to switch the sense resistor into the circuit when a measurement is required. It may only be necessary to measure the resistance intermittently. Between measurements, the sense resistor can be bypassed. This reduces the power required to heat the SMA component, thereby improving the efficiency.
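The sense-resistor measurement can be illustrated as a series-circuit calculation. This is a sketch under the assumption of an ideal series circuit; the variable names are not from the text.

```python
def sma_resistance(v_total, v_sense, r_sense):
    """Estimate the SMA component's resistance from a series sense resistor.
    The same current flows through both parts, so I = v_sense / r_sense
    and R_sma = (v_total - v_sense) / I."""
    current = v_sense / r_sense
    return (v_total - v_sense) / current

# 3.3 V across the series pair with 0.3 V across a 1-ohm sense resistor
# implies 0.3 A and hence a 10-ohm SMA wire:
assert abs(sma_resistance(3.3, 0.3, 1.0) - 10.0) < 1e-9
```

The measured resistance then maps to wire length, and so to the position of the moveable element, as described below.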
Alternatively, the measurement circuit may be configured to perform a measurement indicative of the potential difference across at least the SMA component relative to a reference potential. The reference potential may be connected to a connection potential at the opposite side of the sense resistor from the SMA component such that the reference potential is equal to the connection potential.
Additionally, it is possible to use only a very short measurement pulse (i.e. input only a small amount of power to the SMA component) when making a measurement. For example, the measurement pulse can be shorter than the pulses for heating the SMA component to control its length. By using a short measurement pulse, the measurement can be performed without undesirably heating the SMA component.
The contraction of the SMA component causes movement of the illumination by movement of the moveable element. The resistance of the SMA component at a given time therefore gives an indication of position data of the moveable element at that time, and thus the position data can be used in correlation with the depth data. The position sensor may comprise a magnetic sensor, such as a Hall effect sensor or a magnetic tunnel junction.
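The mapping from measured resistance to position described above can be sketched in a few lines. This is a purely illustrative example, not part of the claimed invention: the linear resistance-length relation, the resistance endpoints and the travel range are all assumptions chosen for clarity.

```python
def resistance_from_measurement(v_wire, i_sense):
    """Ohm's law: the measurement circuit senses the potential difference
    across the SMA component while the sense resistor fixes the current."""
    return v_wire / i_sense

def position_from_resistance(r_ohms, r_max=22.0, r_min=20.0, travel_um=100.0):
    """Estimate displacement from SMA wire resistance.

    Assumes an approximately linear resistance-length relation over the
    operating range: r_max is the resistance at zero contraction, r_min at
    full contraction. All numeric values are illustrative only.
    """
    fraction = (r_max - r_ohms) / (r_max - r_min)
    fraction = max(0.0, min(1.0, fraction))  # clamp to the valid range
    return fraction * travel_um
```

With these assumed endpoints, a measured resistance of 21 ohms corresponds to half of the travel range.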
The actuator may instead be a voice coil motor. In this example, the voice coil motor may be separate to the position sensor. That is, the voice coil motor may be a different component to the position sensor.
The illumination source may be supported on the moveable element.
According to another aspect of the invention, there is also provided a method of sensing light scattered from a subject for a time-of-flight sensor system, the method comprising: an illumination source illuminating a subject to which a time-of-flight is to be measured; a sensor having a sensor surface sensing light scattered by the subject from the illumination source to provide depth data dependent on sensed light; an actuator moving a moveable element relative to a support structure to move the illumination across at least part of the subject to generate an output frame; and a position sensor determining position data of the moveable element.
According to another aspect of the present invention, there is provided a non-transitory computer-readable medium comprising instructions for performing the method set out above. The non-transitory computer-readable medium may be, for example, a solid state memory, a microprocessor, a CD or DVD-ROM, programmed memory such as non-volatile memory (such as Flash), or read-only memory (firmware), or on a data carrier such as an optical or electrical signal carrier.
Brief Description of the Drawings
Certain embodiments of the presently-claimed invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 is a schematic drawing of a time-of-flight sensor system embodying an aspect of the present invention;
Figure 2 is a schematic drawing of illumination of a subject using a time-of-flight sensor system embodying an aspect of the present invention;
Figure 3 is a graph illustrating operation of a time-of-flight sensor system embodying an aspect of the present invention;
Figure 4 is a graph illustrating operation of a time-of-flight sensor system embodying an aspect of the present invention;
Figure 5 is a graph illustrating operation of a time-of-flight sensor system embodying an aspect of the present invention; and
Figure 6 is a schematic drawing of an actuator for a time-of-flight sensor system embodying an aspect of the present invention.
Like features are denoted by like reference numerals.
Detailed description
An example time-of-flight sensor system will now be described with reference to Figures 1 to 6.
Figure 1 illustrates a time-of-flight sensor system 100. The system 100 comprises an illumination source which in this example is a vertical cavity surface emitting laser (VCSEL) 113 for illuminating a subject 101 to which a time-of-flight is to be measured. The system 100 also comprises an optical system comprising an actuator 111. In this example, the actuator 111 comprises four SMA components. This actuator is described in much more detail below and is illustrated by Figure 6. The actuator 111 is configured to move the illumination 103 provided by the VCSEL 113 by moving a moveable element 109 relative to a support structure (not shown). More specifically, the moveable element 109 may move in a direction orthogonal to the optical axis of the illumination. The moveable element 109 comprises a lens element. The system 100 also comprises a sensor 107 having a sensor surface and being configured to sense light scattered by the subject 101 from the VCSEL 113 and to provide depth data dependent on sensed light. Significantly, the system 100 also comprises a position sensor. In this example, the actuator 111 comprises the position sensor. However, it will be understood that the position sensor may be a separate component to the actuator. In this example, the position sensor determines position data of the moveable element by measuring the resistance of at least one of the SMA components. A sense resistor is connected in series with the SMA components, and a measurement circuit is provided to perform a measurement indicative of potential difference across at least one of the SMA components. The system 100 also comprises a processor 115 in connection with the VCSEL 113, the sensor 107 and the position sensor. The processor 115 also comprises a controller. During use, VCSEL 113 provides illumination 103 in the form of a stripe and in a discrete flash. 
The controller then provides a movement command to the actuator 111 to move the moveable element 109, such that the illumination 103 is moved across at least part of the subject 101. The moveable element is moved by the actuator by the contraction of at least one of the SMA components upon heating the at least one of the SMA components. The VCSEL continues to provide flashes of stripes of illumination 103 as the moveable element 109 moves the illumination 103 across the subject 101. Significantly, in this example, the position sensor is synchronised with the VCSEL. In particular, the position sensor comprises an analogue to digital converter, and the converter is synchronised with the VCSEL. The position sensor provides position data of the moveable element by measuring the resistance of at least one of the SMA components. Due to the synchronisation, this measurement is synchronised to be performed when the VCSEL emits illumination 103. In particular, the position sensor is synchronised to perform the measurement at the start of illumination. However, it will be understood that this measurement could be taken at any time during illumination, or at the end of illumination, as long as consistent synchronisation exists. Light scattered by the subject 101 is received by the sensor 107, which provides depth data dependent on the received light to the processor 115. The position sensor provides position data to the processor. As the illumination 103 is scanned across the subject 101, a plurality of data points for both position data and depth data are obtained, all of which are provided to the processor. The transmission of this data may typically introduce latency from various sources discussed above. However, the effect of this latency is mitigated due to the synchronisation of the position sensor with the VCSEL, as the processor has knowledge of which position data points correspond to which depth data values. 
Thus, the position data and depth data is correlated by the processor and an accurate depth map 117 is produced due to the avoidance of impact of latency.
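The correlation step can be illustrated with a short sketch. This is not the patented implementation: the stripe representation, the normalised position units and the map width are assumptions made for the example.

```python
import numpy as np

def build_depth_map(stripes, positions, n_columns=240):
    """Assemble a depth map from per-flash depth stripes and their
    synchronised position samples.

    Each stripe is a 1-D column of depth values sensed during one flash;
    its position sample (assumed normalised to 0..1 across the scan range)
    selects the column of the map it fills. Columns never illuminated
    remain NaN.
    """
    height = len(stripes[0])
    depth_map = np.full((height, n_columns), np.nan)
    for stripe, pos in zip(stripes, positions):
        col = int(round(pos * (n_columns - 1)))
        depth_map[:, col] = stripe
    return depth_map
```

Because each stripe is placed at its *measured* position rather than its commanded position, transmission latency does not smear the resulting map.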
It will be understood that while the arrangement illustrated by Figure 1 provides a system configured to reduce the latency of the transmission of data in the system, other arrangements may also reduce latency in other ways as set out above and below.
The example time-of-flight system 100 illustrated in Figure 1 is implemented in a smartphone device, but it will be understood that the time-of-flight system could be implemented in any appropriate system.
Figure 2 illustrates a scanning pattern of illumination across a subject 207. In this example, the illumination is provided by a VCSEL (not shown) in flashes of stripes of light. The illumination is moved across the subject 207 in a first direction along a single axis, from right to left. A first flash of illumination 201 is first provided. As the moveable element is moved by the actuator, the illumination is scanned across the subject. Subsequently, a second flash of light 203 and a third flash of light 205 are provided. With each flash of light, depth data will be obtained by the receiving sensor. In addition, position data corresponding to the position of the moveable element will be obtained, to correspond to each flash of light of the scanning pattern. Thus, a plurality of data points of each depth data and position data may be obtained in order to provide correlation to produce a depth map.
It will be understood that while in this example, flashes of stripes of light are provided and are scanned in a single direction, the shape of illumination may be any suitable shape as set out above, and may also be scanned in two or more directions.
Figure 3 is a graph illustrating the use of a time-of-flight sensor system as illustrated in Figure 1. Figure 3 emphasises the synchronisation of the position sensor and the VCSEL. The illumination intensity 201, 203, 205 corresponds to the flashes of illumination scanning across the subject 207 of Figure 2. A synchronisation signal is provided at the point of emission of illumination from the VCSEL. The synchronisation signal causes the position sensor to determine the position 301, 303, 305 of the moveable element at the instant of illumination emission. This synchronisation provides a correspondence, for each flash of illumination, between the obtained position data and the depth data obtained from the sensor. This therefore mitigates the impact of latency of transmission of signals in the system. The position of the moveable element can therefore be known to a high degree of accuracy, which allows the production of an accurate depth map.
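The latching behaviour driven by the synchronisation signal can be sketched as follows. The class and method names are hypothetical, chosen only to illustrate the idea of sampling position exactly at emission.

```python
class SyncedPositionSampler:
    """Latch one position sample per VCSEL emission, so that each depth
    frame is tagged with the position held at the start of its flash."""

    def __init__(self, read_position):
        # read_position: callable returning the current moveable-element
        # position (e.g. derived from SMA wire resistance).
        self.read_position = read_position
        self.samples = []

    def on_emission_start(self, flash_index):
        """Called by the synchronisation signal at each emission start."""
        self.samples.append((flash_index, self.read_position()))
```

Each (flash index, position) pair survives any downstream transmission delay, because the pairing itself was fixed at the moment of emission.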
Figure 4 is a graph illustrating an alternative way of mitigating latency in a time-of-flight sensor system to that illustrated in Figure 1. In the use illustrated in Figure 4, the time-of-flight sensor system predicts position data of the moveable element using extrapolation. The position sensor determines position data 401 of the moveable element periodically. The intervals between each determination are constant in this example, but it will be understood that they need not be. The position data 401 is stored in a memory module, in connection with the processor. Based on the position data 401, the processor extrapolates in order to predict a future position data point 403 of the moveable element. In this example, the processor also calculates the velocity of movement of the moveable element based on the position data points 401, but it will be understood that this is not necessary to extrapolate a future position data point. Therefore, position data of the moveable element may be predicted such that position data points are determined corresponding to each instance of emission of light 201, 203, 205. By predicting a position data point 403, the number of measurements needed from the position sensor is reduced. Furthermore, it is not necessary to determine the position data points at a specific time, because the position data points may be acquired by prediction. This reduces the latency introduced into the system, and therefore improves the accuracy of the position data and ultimately of the depth map produced from correlating the depth data with the position data.
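A minimal sketch of the extrapolation scheme of Figure 4, assuming near-constant velocity between the two most recent samples (the constant-velocity assumption and the function signature are illustrative, not taken from the patent):

```python
def extrapolate_position(times, positions, t_future):
    """Predict the moveable element's position at a future time from the
    two most recent stored samples, via the velocity between them."""
    t0, t1 = times[-2], times[-1]
    p0, p1 = positions[-2], positions[-1]
    velocity = (p1 - p0) / (t1 - t0)
    return p1 + velocity * (t_future - t1)
```

The prediction replaces a sensor read at emission time, so no measurement latency enters the position data for that flash.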
Figure 5 illustrates yet another use of a time-of-flight system configured to reduce the impact of latency in another way. In this example, position data of the moveable element may be interpolated. The position sensor periodically determines position data of the moveable element. Again, while in this example the position data 503 is determined at regular intervals, it will be understood that it need not be determined at regular intervals. The position data 503 is stored in the memory module in connection with the processor. Based on the position data points 503, the processor interpolates to determine position data 501 of the moveable element at a given time between one or more of the position data points 503. The system is therefore capable of determining position data 501 of the moveable element at any point in time, without requiring further measurement from the position sensor. Once the system knows when light is emitted, the processor can interpolate to find the position of the moveable element at an instance of illumination emission, and thus a position data point can be determined to correspond to each instance of emission of light 201, 203, 205. This, again, mitigates the impact of latency of transmission from the position sensor and therefore improves the accuracy of the position data.
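The interpolation of Figure 5 can be sketched as a linear interpolation between the two stored samples that bracket the emission instant. Linear interpolation is an assumption for the example; any suitable interpolation could be used.

```python
import bisect

def interpolate_position(sample_times, sample_positions, t_emission):
    """Recover the moveable element's position at an emission instant by
    linear interpolation between the bracketing periodic samples."""
    i = bisect.bisect_right(sample_times, t_emission)
    t0, t1 = sample_times[i - 1], sample_times[i]
    p0, p1 = sample_positions[i - 1], sample_positions[i]
    return p0 + (p1 - p0) * (t_emission - t0) / (t1 - t0)
```

Because the samples are already stored, the position at any emission time can be computed after the fact, without a latency-prone read at that exact moment.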
Figure 6 illustrates an SMA actuator arrangement as implemented in the time-of-flight sensor system illustrated in Figure 1. The actuator arrangement 10 comprises a total of four SMA actuator wires 11, 12, 13, 14 connected between a moveable element 15 and a support block 16, which forms part of a support structure and is mounted to a base.
Each of the SMA actuator wires 11 to 14 is held in tension, thereby applying a force between the moveable element 15 and the support block 16 in a direction perpendicular to a notional primary axis, here referred to as an optical axis. In operation, the SMA actuator wires 11 to 14 move the moveable element 15 relative to the support block 16 in two orthogonal directions perpendicular to the optical axis.
The SMA actuator wires 11 to 14 are connected at one end to the moveable element 15 by respective crimping members 17 and at the other end to the support block by crimping members 18. The crimping members 17, 18 crimp the wire to hold it mechanically, optionally strengthened by use of an adhesive. The crimping members 17, 18 also provide an electrical connection to the SMA actuator wires 11 to 14. However, it will be understood that any suitable means for connecting the SMA actuator wires 11 to 14 may alternatively be used. The four SMA wires 11 to 14 are arranged in a loop around the optical axis. The four SMA wires consist of a first pair of SMA wires 11, 13 arranged on opposite sides of the optical axis and a second pair of SMA wires 12, 14 arranged on opposite sides of the optical axis. The first pair of SMA wires 11, 13 are capable of being selectively driven to move the moveable element 15 relative to the support structure in a first direction, and the second pair of SMA wires 12, 14 are capable of being selectively driven to move the moveable element 15 relative to the support structure in a second direction transverse to the first direction. Movement in directions other than parallel to the SMA wires 11 to 14 is driven by a combination of actuation of these pairs of SMA wires to provide a linear combination of movement of the moveable element in the transverse directions. Another way to view this movement is that simultaneous contraction of any pair of the SMA wires that are adjacent to each other in the loop will drive movement of the moveable element in a direction bisecting those two SMA wires (i.e. producing diagonal movement).
As a result, the SMA wires 11 to 14 may be capable of being selectively driven to move the moveable element 15 relative to the support structure to any position in a range of movement in two orthogonal directions perpendicular to the optical axis. The magnitude of the range of movement depends on the geometry and the range of contraction of the SMA wires within their normal operating parameters.
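The resolution of a desired in-plane movement into drive levels for the two opposed wire pairs can be sketched as below. The axis conventions (which wire pulls in which direction) and the function name are assumptions for the example, not taken from the patent.

```python
def wire_drive_levels(dx, dy):
    """Resolve a desired movement (dx, dy) perpendicular to the optical
    axis into drive levels for the four SMA wires of Figure 6.

    Wires 11/13 are assumed to oppose each other along x, wires 12/14
    along y; only the wire pulling towards the desired direction of each
    axis is heated. Driving two adjacent wires simultaneously (e.g. 11
    and 12) yields the diagonal movement described above.
    """
    return {
        "wire11": max(dx, 0.0),   # pulls +x
        "wire13": max(-dx, 0.0),  # pulls -x
        "wire12": max(dy, 0.0),   # pulls +y
        "wire14": max(-dy, 0.0),  # pulls -y
    }
```

For a purely diagonal request the two adjacent wires receive equal drive while their opposing partners stay unpowered, matching the bisecting-direction description above.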
Embodiments of the present invention have been described. It will be appreciated that variations and modifications may be made to the described embodiments within the scope of the present invention. For example, the graphs illustrated in Figures 3 to 5 are all linear. It will be appreciated that the moveable element may be moved at varying speeds, and/or in more than one direction. For example, corresponding graphs to those illustrated in Figures 3 to 5 may appear sinusoidal.
In addition, the time-of-flight sensor system may reduce latency in a different way from the arrangements described above. A controller may provide movement commands to the actuator to move the moveable element, such that the illumination is moved across the subject. When the controller has issued a movement command to the actuator, the position sensor determines the position of the moveable element. That is, the position sensor determines the position of the moveable element corresponding to the movement command issued by the controller. Each time a movement command is provided, the position sensor determines the resulting position of the moveable element. The position data obtained is stored in a memory module. If the controller repeats a movement command, the memory module will already contain position data corresponding to that movement command. Therefore, the processor, by accessing the stored position data in the memory module, is able to predict position data without requiring the position sensor to re-determine the position data of the moveable element. This removes the need to use the position sensor when the position data can be predicted, thus mitigating the latency introduced by transmission of the position data from the position sensor and therefore improving the accuracy of the position data. This ultimately improves the accuracy of the depth map.
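The command-to-position caching just described amounts to memoisation keyed on the movement command. A minimal sketch, with hypothetical names (the patent does not prescribe this interface):

```python
class CommandPositionCache:
    """Store the measured position resulting from each movement command.

    The first time a command is seen, the position sensor is read and the
    result cached; repeated commands reuse the stored position, avoiding
    a fresh (latency-prone) sensor measurement.
    """

    def __init__(self, sensor_read):
        # sensor_read: callable taking a command, returning the measured
        # resulting position of the moveable element.
        self.sensor_read = sensor_read
        self.cache = {}

    def position_for(self, command):
        if command not in self.cache:
            self.cache[command] = self.sensor_read(command)
        return self.cache[command]
```

In a repeating scan pattern, every command after the first full sweep hits the cache, so the position sensor and its transmission latency drop out of the steady-state loop entirely.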

Claims
1. A time-of-flight sensor system comprising: an illumination source configured to provide illumination for illuminating a subject to which a time-of-flight is to be measured; an optical system having an actuator, the actuator comprising a support structure and a moveable element moveable relative to the support structure; a sensor having a sensor surface and being configured to sense light scattered by the subject from the illumination source and to provide depth data dependent on sensed light, wherein the actuator is configured to, by moving the moveable element, move the illumination across at least part of the subject to generate an output frame, and the time-of-flight sensor system further comprises a position sensor configured to determine position data of the moveable element; and a processor configured to correlate the position data with the depth data to provide a depth map.
2. A time-of-flight sensor system according to claim 1 further comprising a processor configured to correlate the position data with the depth data at two or more positions across the subject at different instances.
3. A time-of-flight sensor system according to claim 1 or claim 2 wherein the position sensor and the illumination source are synchronised such that the position data is determined when the illumination source provides illumination.
4. A time-of-flight sensor system according to claim 3 wherein the position sensor comprises a signal converter, the signal converter being synchronised with the illumination source such that the position data is determined when the illumination source provides illumination.
5. A time-of-flight sensor system according to claim 1, wherein the position data is not directly obtainable from the position sensor.
6. A time-of-flight sensor system according to claim 1 or claim 5 further comprising a memory module, wherein the position data is stored in the memory module.
7. A time-of-flight sensor system according to claim 6 wherein the processor is configured to correlate the depth data with position data stored in the memory module to provide the depth map.
8. A time-of-flight sensor system according to claim 7 wherein the optical system further comprises a controller configured to control movement of the actuator by providing one or more movement commands.
9. A time-of-flight sensor system according to claim 8 wherein the position sensor is configured to determine position data corresponding to each of the one or more movement commands.
10. A time-of-flight sensor system according to claim 9 wherein the one or more movement commands provided by the controller are synchronised with the illumination provided by the illumination source such that position data corresponds to each of the one or more movement commands.
11. A time-of-flight sensor system according to claim 9 or claim 10 wherein the processor is configured to correlate depth data with the position data corresponding to the one or more movement commands to provide the depth map.
12. A time-of-flight sensor system according to claim 6 wherein the processor is configured to calculate a velocity of movement of the actuator based on the position data stored in the memory module.
13. A time-of-flight sensor system according to any preceding claim wherein the processor is further configured to extrapolate future position data based on the position data.
14. A time-of-flight sensor system according to any preceding claim wherein the position sensor is configured to periodically determine position data of the moveable element.
15. A time-of-flight sensor system according to claim 14 wherein the processor is configured to interpolate intermediate position data based on the periodically determined position data.
16. A time-of-flight sensor system according to any preceding claim wherein the position data comprises the position of the actuator at a given time.
17. A time-of-flight sensor system according to any preceding claim wherein the position sensor comprises a driver configured to drive the movement of the actuator.
18. A time-of-flight sensor system according to any preceding claim wherein the actuator is configured to move the illumination in a scanning pattern across at least part of the subject.
19. A time-of-flight sensor system according to claim 18 wherein the scanning pattern comprises moving the illumination along a first direction across at least part of the subject.
20. A time-of-flight sensor system according to claim 19 wherein the scanning pattern further comprises moving the illumination along a second direction across at least part of the subject.
21. A time-of-flight sensor system according to claim 20 wherein the first direction is perpendicular to the second direction or angled to the second direction in a plane.
22. A time-of-flight sensor system according to any preceding claim wherein the illumination comprises: a light beam having a beam projection configured to tessellate; a light beam having a circular or polygonal beam projection; or a stripe of light.
23. A time-of-flight sensor system according to any preceding claim wherein the illumination source is configured to provide the illumination in discrete flashes of light.
24. A time-of-flight sensor system according to any preceding claim wherein the actuator comprises the position sensor.
25. A time-of-flight sensor system according to any preceding claim wherein the actuator comprises a shape memory alloy (SMA) component, wherein the SMA component is configured to, on contraction, drive movement of the moveable element.
26. A time-of-flight sensor system according to claim 25 wherein the SMA component comprises the position sensor.
27. A time-of-flight sensor system according to claim 26 wherein the position sensor is configured to determine the position data by measuring the resistance of the SMA component.
28. A time-of-flight sensor system according to any preceding claim wherein the position sensor comprises a magnetic sensor.
29. A time-of-flight sensor system according to any of claims 1 to 24 wherein the actuator comprises a voice coil motor.
30. A time-of-flight sensor system according to claim 29 wherein the voice coil motor is separate to the position sensor.
31. A method of sensing light scattered from a subject for a time-of-flight sensor system, the method comprising: an illumination source illuminating a subject to which a time-of-flight is to be measured; a sensor having a sensor surface sensing light scattered by the subject from the illumination source to provide depth data dependent on sensed light; an actuator moving a moveable element relative to a support structure to move the illumination across at least part of the subject to generate an output frame; a position sensor determining position data of the moveable element; and a processor correlating the position data with the depth data to provide a depth map.
32. A computer program for instructing a computer to perform the method of claim 31.
PCT/GB2021/051253 2020-05-21 2021-05-21 A time-of-flight sensor system WO2021234420A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180036666.0A CN115667978A (en) 2020-05-21 2021-05-21 Time-of-flight sensor system
GB2218936.9A GB2611451A (en) 2020-05-21 2021-05-21 A time-of-flight sensor system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB2007631.1 2020-05-21
GBGB2007631.1A GB202007631D0 (en) 2020-05-21 2020-05-21 A time-of-flight sensor system

Publications (1)

Publication Number Publication Date
WO2021234420A1 2021-11-25

Family

ID=71406286

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2021/051253 WO2021234420A1 (en) 2020-05-21 2021-05-21 A time-of-flight sensor system

Country Status (3)

Country Link
CN (1) CN115667978A (en)
GB (2) GB202007631D0 (en)
WO (1) WO2021234420A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080285010A1 (en) * 2007-05-16 2008-11-20 Omron Corporation Object detector
US20160349050A1 (en) * 2015-05-28 2016-12-01 Topcon Corporation Surveying Instrument
US20160349051A1 (en) * 2015-05-28 2016-12-01 Topcon Corporation Surveying Instrument
WO2018202819A1 (en) * 2017-05-05 2018-11-08 Cambridge Mechatronics Limited Sma actuator with position sensors
US10295671B2 (en) * 2015-05-07 2019-05-21 GM Global Technology Operations LLC Array lidar with controllable field of view
WO2020008217A1 (en) * 2018-07-06 2020-01-09 Cambridge Mechatronics Limited Methods for controlling power delivered to an sma actuator


Also Published As

Publication number Publication date
GB202218936D0 (en) 2023-02-01
GB202007631D0 (en) 2020-07-08
CN115667978A (en) 2023-01-31
GB2611451A (en) 2023-04-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21730269

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 202218936

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20210521

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21730269

Country of ref document: EP

Kind code of ref document: A1