WO2021113147A1 - Efficient algorithm for projecting world points to a rolling shutter image - Google Patents

Efficient algorithm for projecting world points to a rolling shutter image

Info

Publication number
WO2021113147A1
WO2021113147A1 (PCT/US2020/062365)
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
environment
point
location
Prior art date
Application number
PCT/US2020/062365
Other languages
French (fr)
Inventor
Sheng Zhao
Nicholas Lloyd ARMSTRONG-CREWS
Volker GRABE
Original Assignee
Waymo Llc
Priority date
Filing date
Publication date
Application filed by Waymo Llc filed Critical Waymo Llc
Priority to EP20896593.9A priority Critical patent/EP4070130A4/en
Priority to CN202080094918.0A priority patent/CN115023627A/en
Publication of WO2021113147A1 publication Critical patent/WO2021113147A1/en

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88: Lidar systems specially adapted for specific applications
    • G01S 17/93: Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S 17/931: Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G01S 17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S 17/89: Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S 7/00: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/48: Details of systems according to group G01S17/00
    • G01S 7/481: Constructional features, e.g. arrangements of optical elements
    • G01S 7/4817: Constructional features relating to scanning
    • G01S 7/51: Display arrangements
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/68: Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/689: Motion occurring during a rolling shutter mode

Definitions

  • ASCII-formatted text file named "camera_model.txt" (size 48,446 bytes, created on Dec. 4, 2019), submitted electronically on Dec. 4, 2019 along with the instantly filed application.
  • Active sensors, such as light detection and ranging (LIDAR) sensors, radio detection and ranging (RADAR) sensors, and sound navigation and ranging (SONAR) sensors, among others, can scan an environment by emitting signals toward the environment and detecting reflections of the emitted signals.
  • Passive sensors, such as image sensors (e.g., cameras) and microphones, among others, can detect signals originating from sources in the environment.
  • An example LIDAR sensor can determine distances to environmental features while scanning through a scene to assemble a “point cloud” indicative of reflective surfaces.
  • Individual points in the point cloud can be determined, for example, by transmitting a laser pulse and detecting a returning pulse, if any, reflected from an object in the environment, and then determining a distance to the object according to a time delay between the transmission of the pulse and the reception of its reflection.
  • a three-dimensional map of points indicative of locations of reflective features in the environment can be generated.
  • An example image sensor can capture an image of a scene viewable to the image sensor.
  • the image sensor may include an array of complementary metal oxide semiconductor (CMOS) active pixel sensors, or other types of light sensors.
  • Each CMOS sensor may receive a portion of light from the scene incident on the array.
  • Each CMOS sensor may then output a measure of the amount of light incident on the CMOS sensor during an exposure time when the CMOS sensor is exposed to the light from the scene.
  • an image of the scene can be generated, where each pixel in the image indicates one or more values (e.g., colors, etc.) based on outputs from the array of CMOS sensors.
  • a method includes: (i) obtaining an indication of a point in an environment of an autonomous vehicle; (ii) obtaining information about the location and motion of the autonomous vehicle within the environment; (iii) obtaining an image of a portion of the environment of the autonomous vehicle, wherein the image comprises a plurality of rows of pixels, and wherein the image was generated by a camera operating in a rolling shutter mode such that each row of pixels represents light sensed by the camera during a respective exposure time period; and (iv) mapping the point in the environment to a location within the image.
  • Mapping the point in the environment to a location within the image includes: (a) determining an initial estimated time, T 0 , that the camera sensed light from the point in the environment; (b) determining N updated estimated times, T i , wherein N ≥ 1; and (c) determining, based on the updated estimated time, T N , a location within the image that corresponds to the point in the environment.
  • Each updated estimated time, T i , is determined by an update process including: (1) determining, based on the information about the location and motion of the autonomous vehicle, a pose of the camera at the estimated time, T i-1 , (2) based on the pose of the camera at the estimated time, T i-1 , projecting the point in the environment to a projected location within the image, (3) evaluating a cost function that includes a term based on the estimated time, T i-1 , and a term based on a mapping from the projected location to a time that the camera sensed light represented at the projected location within the image, and (4) determining the updated estimated time, T i , based on the evaluated cost function.
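The update process above can be read as solving for the capture time at which the time implied by the projected pixel row matches the assumed time of projection. The sketch below is a minimal illustration of one such iteration, not the patented implementation; the helper names pose_at_time, project_to_image, and row_capture_time, and the plain fixed-point update, are assumptions introduced only for the example.

```python
# Minimal sketch (not the claimed implementation) of iteratively estimating the
# time T at which a rolling-shutter camera sensed light from a world point, then
# projecting the point at that time.  pose_at_time(), project_to_image() and
# row_capture_time() are hypothetical helpers supplied by the caller.

def map_point_to_rolling_shutter_image(point_world, pose_at_time,
                                        project_to_image, row_capture_time,
                                        t0, num_iterations=3):
    """Returns ((u, v), T_N): the projected pixel location and final time estimate.

    point_world             : 3-D point in the world/environment frame.
    pose_at_time(t)         : camera pose (from vehicle location/motion) at time t.
    project_to_image(p, pose): projection of p into pixel coordinates (u, v).
    row_capture_time(v)     : time at which the image row containing pixel row v
                              was exposed (rolling-shutter row timing).
    t0                      : initial estimate T_0 of the capture time.
    """
    t = t0
    for _ in range(num_iterations):           # N >= 1 update steps
        pose = pose_at_time(t)                # camera pose at estimate T_{i-1}
        u, v = project_to_image(point_world, pose)
        # Cost term: difference between the assumed time and the time implied
        # by the row of the projected location; drive it toward zero.
        residual = t - row_capture_time(v)
        t = t - residual                      # simple fixed-point update
    pose = pose_at_time(t)
    return project_to_image(point_world, pose), t
```

A damped or Newton-style step could replace the plain fixed-point update when the row-to-time mapping changes quickly across the image; the description above only requires that each updated estimate be derived from the evaluated cost function.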
  • a non-transitory computer readable medium has stored therein instructions executable by a computing device to cause the computing device to perform operations.
  • the operations include: (i) obtaining an indication of a point in an environment of an autonomous vehicle; (ii) obtaining information about the location and motion of the autonomous vehicle within the environment; (iii) obtaining an image of a portion of the environment of the autonomous vehicle, wherein the image comprises a plurality of rows of pixels, and wherein the image was generated by a camera operating in a rolling shutter mode such that each row of pixels represents light sensed by the camera during a respective exposure time period; and (iv) mapping the point in the environment to a location within the image.
  • Mapping the point in the environment to a location within the image includes: (a) determining an initial estimated time, T 0 , that the camera sensed light from the point in the environment; (b) determining N updated estimated times, T i , wherein N ≥ 1; and (c) determining, based on the updated estimated time, T N , a location within the image that corresponds to the point in the environment.
  • Each updated estimated time, T i , is determined by an update process including: (1) determining, based on the information about the location and motion of the autonomous vehicle, a pose of the camera at the estimated time, T i-1 , (2) based on the pose of the camera at the estimated time, T i-1 , projecting the point in the environment to a projected location within the image, (3) evaluating a cost function that includes a term based on the estimated time, T i-1 , and a term based on a mapping from the projected location to a time that the camera sensed light represented at the projected location within the image, and (4) determining the updated estimated time, T i , based on the evaluated cost function.
  • a system includes: (i) a light detection and ranging (LIDAR) sensor coupled to a vehicle; (ii) a camera coupled to the vehicle, wherein the camera is configured to obtain image data indicative of the environment of the vehicle; and (iii) a controller, wherein the controller is operably coupled to the LIDAR sensor and the camera.
  • the controller includes one or more processors configured to perform operations including: (a) operating the LIDAR sensor to generate a plurality of LIDAR data points indicative of distances to one or more objects in the environment of the vehicle; (b) generating an indication of a point in the environment based on at least one LIDAR data point of the plurality of LIDAR data points; (c) operating the camera in a rolling shutter mode to generate an image of a portion of the environment of the vehicle, wherein the image comprises a plurality of rows of pixels, and wherein each row of pixels represents light sensed by the camera during a respective exposure time period; (d) obtaining information about the location and motion of the autonomous vehicle within the environment; and (e) mapping the point in the environment to a location within the image.
  • Mapping the point in the environment to a location within the image includes: (1) determining an initial estimated time, T 0 , that the camera sensed light from the point in the environment; (2) determining N updated estimated times, T i , wherein N ≥ 1; and (3) determining, based on the updated estimated time, T N , a location within the image that corresponds to the point in the environment.
  • Each updated estimated time, T i , is determined by an update process including: (I) determining, based on the information about the location and motion of the autonomous vehicle, a pose of the camera at the estimated time, T i-1 , (II) based on the pose of the camera at the estimated time, T i-1 , projecting the point in the environment to a projected location within the image, (III) evaluating a cost function that includes a term based on the estimated time, T i-1 , and a term based on a mapping from the projected location to a time that the camera sensed light represented at the projected location within the image, and (IV) determining the updated estimated time, T i , based on the evaluated cost function.
  • Figure 1 is a simplified block diagram of a system, according to example embodiments.
  • Figure 2A illustrates a device that includes a rotating LIDAR sensor and a rolling shutter camera arrangement, according to example embodiments.
  • Figure 2B is a cross-section view of the rolling shutter camera arrangement of Figure 2A.
  • Figure 2C is a conceptual illustration of an operation of the device of Figure 2A.
  • Figure 2D illustrates a top view of the device of Figure 2A.
  • Figure 2E illustrates another top view of the device of Figure 2A.
  • Figure 3 illustrates a cross-section view of another rolling shutter camera arrangement, according to example embodiments.
  • Figure 4 A illustrates the motion of a camera relative to a point in an environment.
  • Figure 4B illustrates the projection of the point illustrated in Figure 4A onto an image generated by the camera illustrated in Figure 4A.
  • Figure 5 is a flowchart of a method, according to example embodiments.

DETAILED DESCRIPTION
  • While example sensors described herein include LIDAR sensors and cameras (or image sensors), other types of sensors are possible as well.
  • a non-exhaustive list of example sensors that can be alternatively employed herein without departing from the scope of the present disclosure includes RADAR sensors, SONAR sensors, sound sensors (e.g., microphones, etc.), motion sensors, temperature sensors, pressure sensors, etc.
  • example sensors herein may include active sensors that emit a signal (e.g., a sequence of pulses or any other modulated signal) based on modulated power provided to the sensor, and then detect reflections of the emitted signal from objects in the surrounding environment.
  • example sensors herein may include passive sensors (e.g., cameras, microphones, antennas, pressure sensors, etc.) that detect external signals (e.g., background signals, etc.) originating from external source(s) in the environment.
  • FIG. 1 is a simplified block diagram of a system 100 that includes sensors (e.g., synchronized sensors), according to an example embodiment.
  • system 100 includes a power supply arrangement 102, a controller 104, one or more sensors 106, one or more sensors 108, a rotating platform 110, one or more actuators 112, a stationary platform 114, a rotary link 116, a housing 118, and a display 140.
  • system 100 may include more, fewer, or different components. Additionally, the components shown may be combined or divided in any number of ways. For example, sensor(s) 108 can be implemented as a single physical component (e.g., camera ring). Alternatively, for example, sensor(s) 108 can be implemented as an arrangement of separate physical components. Other examples are possible. Thus, the functional blocks of Figure 1 are illustrated as shown only for convenience in description. Other example components, arrangements, and/or configurations are possible as well without departing from the scope of the present disclosure.
  • Power supply arrangement 102 may be configured to supply, receive, and/or distribute power to various components of system 100.
  • power supply arrangement 102 may include or otherwise take the form of a power source (e.g., battery cells, etc.) disposed within system 100 and connected to various components of system 100 in any feasible manner, so as to supply power to those components.
  • power supply arrangement 102 may include or otherwise take the form of a power adapter configured to receive power from one or more external power sources (e.g., from a power source arranged in a vehicle to which system 100 is mounted, etc.) and to transmit the received power to various components of system 100.
  • Controller 104 may include one or more electronic components and/or systems arranged to facilitate certain operations of system 100. Controller 104 may be disposed within system 100 in any feasible manner. In one embodiment, controller 104 may be disposed, at least partially, within a central cavity region of rotary link 116. In another embodiment, one or more functions of controller 104 can be alternatively performed by one or more physically separate controllers that are each disposed within a respective component (e.g., sensor(s) 106, 108, etc.) of system 100.
  • controller 104 may include or otherwise be coupled to wiring used for transfer of control signals to various components of system 100 and/or for transfer of data from various components of system 100 to controller 104.
  • the data that controller 104 receives may include sensor data based on detections of light by LIDAR 106 and/or camera(s) 108, among other possibilities.
  • the control signals sent by controller 104 may operate various components of system 100, such as by controlling emission and/or detection of light or other signal by sensor(s) 106 (e.g., LIDAR, etc.), controlling image pixel capture rate or times via a camera (e.g., included in sensor(s) 108), and/or controlling actuator(s) 112 to rotate rotating platform 110, among other possibilities.
  • controller 104 may include one or more processors, data storage, and program instructions (stored in the data storage) executable by the one or more processors to cause system 100 to perform the various operations described herein.
  • controller 104 may communicate with an external controller or the like (e.g., a computing system arranged in a vehicle, robot, or other mechanical device to which system 100 is mounted) so as to help facilitate transfer of control signals and/or data between the external controller and the various components of system 100.
  • controller 104 may include circuitry wired to perform the various functions described herein. Additionally or alternatively, in some examples, controller 104 may include one or more special purpose processors, servos, or other types of controllers. For example, controller 104 may include a proportional-integral-derivative (PID) controller or other control loop feedback apparatus that operates actuator(s) 112 to modulate rotation of rotating platform 110 according to a particular frequency or phase. Other examples are possible as well.
  • Sensors 106 and 108 can optionally include one or more sensors, such as LIDARs, cameras, gyroscopes, accelerometers, encoders, microphones, RADARs, SONARs, thermometers, etc., that scan a surrounding environment of system 100.
  • Sensor(s) 106 may include any device configured to scan a surrounding environment by emitting a signal and detecting reflections of the emitted signal.
  • sensor(s) 106 may include any type of active sensor. To that end, as shown, sensor 106 includes a transmitter 120 and a receiver 122. In some implementations, sensor 106 may also include one or more optical elements 124.
  • Transmitter 120 may be configured to transmit a signal toward an environment of system 100.
  • transmitter 120 may include one or more light sources (not shown) that emit one or more light beams and/or pulses having wavelengths within a wavelength range.
  • the wavelength range could, for example, be in the ultraviolet, visible, and/or infrared portions of the electromagnetic spectrum depending on the configuration of the light sources.
  • the wavelength range can be a narrow wavelength range, such as provided by lasers and/or some light emitting diodes.
  • the light source(s) in transmitter 120 may include laser diodes, diode bars, light emitting diodes (LEDs), vertical cavity surface emitting lasers (VCSELs), organic light emitting diodes (OLEDs), polymer light emitting diodes (PLEDs), light emitting polymers (LEPs), liquid crystal displays (LCDs), microelectromechanical systems (MEMS), fiber lasers, and/or any other device configured to selectively transmit, reflect, and/or emit light to provide a plurality of emitted light beams and/or pulses.
  • transmitter 120 may be configured to emit IR radiation to illuminate a scene.
  • transmitter 120 may include any type of device (e.g., light source, etc.) configured to provide the IR radiation.
  • transmitter 120 may include one or more antennas configured to emit a modulated radio-frequency (RF) signal toward an environment of system 100.
  • transmitter 120 may include one or more acoustic transducers, such as piezoelectric transducers, magnetostrictive transducers, electrostatic transducers, etc., configured to emit a modulated sound signal toward an environment of system 100.
  • the acoustic transducers can be configured to emit sound signals within a particular wavelength range (e.g., infrasonic, ultrasonic, etc.). Other examples are possible as well.
  • Receiver 122 may include one or more detectors configured to detect reflections of the signal emitted by transmitter 120.
  • receiver 122 may include one or more antennas configured to detect reflections of the RF signal transmitted by transmitter 120.
  • the one or more antennas of transmitter 120 and receiver 122 can be physically implemented as the same physical antenna structures.
  • receiver 122 may include one or more sound sensors (e.g., microphones, etc.) that are configured to detect reflections of the sound signals emitted by transmitter 120.
  • the one or more components of transmitter 120 and receiver 122 can be physically implemented as the same physical structures (e.g., the same piezoelectric transducer element).
  • receiver 122 may include one or more light detectors (e.g., active pixel sensors, etc.) that are configured to detect a source wavelength of IR light transmitted by transmitter 120 and reflected off a scene toward receiver 122.
  • receiver 122 may include one or more light detectors (e.g., photodiodes, avalanche photodiodes, etc .) that are arranged to intercept and detect reflections of the light pulses emitted by transmitter 120 and reflected from one or more objects in a surrounding environment of system 100.
  • receiver 122 may be configured to detect light having wavelengths in the same wavelength range as the light emitted by transmitter 120.
  • receiver 122 may include a photodetector array, which may include one or more detectors each configured to convert detected light (e.g., in the wavelength range of light emitted by transmitter 120) into an electrical signal indicative of the detected light.
  • a photodetector array could be arranged in one of various ways.
  • the detectors can be disposed on one or more substrates (e.g., printed circuit boards (PCBs), flexible PCBs, etc.) and arranged to detect incoming light.
  • Such a photodetector array could include any feasible number of detectors aligned in any feasible manner.
  • the detectors in the array may take various forms.
  • the detectors may take the form of photodiodes, avalanche photodiodes (e.g., Geiger mode and/or linear mode avalanche photodiodes), silicon photomultipliers (SiPMs), phototransistors, cameras, active pixel sensors (APS), charge coupled devices (CCD), cryogenic detectors, and/or any other sensor of light configured to receive focused light having wavelengths in the wavelength range of the emitted light.
  • sensor 106 can select or adjust a horizontal scanning resolution by changing a rate of rotation of the LIDAR and/or adjusting a pulse rate of light pulses emitted by transmitter 120.
  • transmitter 120 can be configured to emit light pulses at a pulse rate of 15,650 light pulses per second.
  • LIDAR 106 may be configured to rotate at 10 Hz (i.e., ten complete 360° rotations per second).
  • receiver 122 can detect light with a 0.23° horizontal angular resolution.
  • the horizontal angular resolution of 0.23° can be adjusted by changing the rate of rotation of LIDAR 106 or by adjusting the pulse rate.
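As a quick, purely illustrative check of the figures quoted above, the horizontal angular resolution follows from dividing a full rotation by the number of pulses emitted per rotation:

```python
# Illustrative check of the quoted figures: 15,650 pulses/s at 10 Hz rotation.
pulse_rate_hz = 15650.0      # light pulses per second
rotation_rate_hz = 10.0      # complete 360-degree rotations per second

pulses_per_rotation = pulse_rate_hz / rotation_rate_hz          # 1565 pulses
horizontal_resolution_deg = 360.0 / pulses_per_rotation          # ~0.23 degrees
print(round(horizontal_resolution_deg, 2))                       # 0.23
```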
  • LIDAR 106 can be alternatively configured to scan a particular range of views within less than a complete 360° rotation of LIDAR 106.
  • Optical element(s) 124 can be optionally included in or otherwise coupled to transmitter 120 and/or receiver 122.
  • optical element(s) 124 can be arranged to direct light from a light source in transmitter 120 toward the environment.
  • optical element(s) 124 can be arranged to focus and/or guide light from the environment toward receiver 122.
  • optical element(s) 124 may include any feasible combination of mirror(s), waveguide(s), light filter(s), lens(es), or any other optical components arranged to guide propagation of light through physical space and/or adjust certain light characteristics.
  • optical elements 124 may include a light filter arranged to reduce or prevent light having wavelengths outside the wavelength range of the light emitted by transmitter 120 from propagating toward receiver 122.
  • the light filter can reduce noise due to background light propagating from the scanned environment and originating from an external light source different than light sources of transmitter 120.
  • Sensor(s) 108 may include any type of sensor configured to scan the surrounding environment. As shown, sensors 108 include an array of sensing elements 128. Further, as shown, sensors 108 can optionally include one or more optical elements 130.
  • sensor(s) 108 may include active sensors (e.g., LIDAR, RADAR, SONAR, etc.) that transmit signals and detect reflections thereof.
  • sensors 108 may include a transmitter and a receiver that are similar to, respectively, transmitter 120 and receiver 122.
  • sensor(s) 108 may include passive sensors (e.g., microphones, cameras, image sensors, thermometers, etc.) that detect external signals originating from one or more external sources.
  • sensing elements 128 may include an array of microphones that each detect sounds (e.g., external signals) incident on the respective microphones in the array.
  • the camera(s) may include any camera (e.g., a still camera, a video camera, etc.) configured to capture images of the environment in which system 100 is located.
  • a camera of sensor 108 may include any imaging device that detects and provides data indicative of an image.
  • sensing elements 128 may include one or more arrangements of light sensing elements that each provide a measure of light incident thereon.
  • sensing elements 128 may include charge-coupled devices (CCDs), active pixel sensors, complementary metal-oxide-semiconductor (CMOS) photodetectors, N-type metal-oxide-semiconductor (NMOS) photodetectors, among other possibilities.
  • data from sensing elements 128 can be combined according to the arrangement of the sensing elements 128 to generate an image.
  • data from a two-dimensional (2D) array of sensing elements may correspond to a 2D array of image pixels in the image.
  • a 3D arrangement of sensing elements (e.g., sensing elements arranged along a curved surface) can be similarly used to generate a 2D array of image pixels in the image.
  • Other examples are possible as well.
  • a sensing element can optionally include multiple adjacent light detectors (or detectors of other types of signals), where each detector is configured to detect light (or other signal) having a particular wavelength or wavelength range.
  • an image pixel may indicate color information (e.g., red-green-blue or RGB) based on a combination of data from a first detector that detects an intensity of red light, a second detector that detects an intensity of green light, and a third detector that detects an intensity of blue light.
  • sensor(s) 108 may be configured to detect visible light propagating from the scene.
  • receiver 122 of sensor 106 may be configured to detect invisible light (e.g., infrared, etc.) within a wavelength range of light emitted by transmitter 120.
  • system 100 can then combine data from sensor 106 (e.g., LIDAR) with data from sensor 108 (e.g., camera) to generate a colored three-dimensional (3D) representation (e.g., point cloud) of the scanned environment.
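One common way to perform this combination, offered here only as a hedged sketch rather than the method of this disclosure, is to project each LIDAR point into the camera image with a pinhole model and attach the color found at the projected pixel. The intrinsic matrix K, the camera-from-world transform T_cam_world, and the function name colorize_point_cloud are assumptions for the example.

```python
import numpy as np

# Hypothetical fusion sketch: color a LIDAR point cloud from a camera image.
# K is a 3x3 intrinsic matrix; T_cam_world is a 4x4 camera-from-world transform.
def colorize_point_cloud(points_world, image, K, T_cam_world):
    colored = []
    for p in points_world:
        p_cam = T_cam_world @ np.append(p, 1.0)     # world -> camera frame
        if p_cam[2] <= 0:                           # behind the camera
            continue
        u, v, w = K @ p_cam[:3]
        u, v = u / w, v / w                         # perspective divide
        ui, vi = int(round(u)), int(round(v))
        h, w_img = image.shape[:2]
        if 0 <= vi < h and 0 <= ui < w_img:
            colored.append((p, image[vi, ui]))      # (xyz, pixel color)
    return colored
```

Note that this static projection ignores rolling-shutter timing; the iterative time-estimation procedure described earlier refines the projected location when the camera or vehicle moves during image capture.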
  • sensor(s) 108 may comprise a plurality of cameras (e.g., a camera ring) disposed in a circular arrangement around an axis of rotation of sensor 106 (e.g., LIDAR).
  • a first camera may be arranged to image a first field-of-view (FOV) of the environment that at least partially overlaps a range of pointing directions of sensor 106 as sensor 106 rotates about the axis (or as the signals transmitted by transmitter 120 are otherwise steered to different pointing directions about the axis).
  • a second camera adjacent to and/or overlapping the first camera may image a second FOV adjacent to the first FOV of the first camera, and so on.
  • system 100 may be configured to capture a sequence of images of the respective FOVs simultaneously (and/or synchronously or according to some other timing) with a scan of the environment by sensor 106 as sensor 106 rotates about the axis.
  • sensor(s) 108 may be configured to operate in a rolling shutter mode.
  • each output from a microphone in the array may be associated with a respective exposure time period of a corresponding sensing element (e.g., microphone) to external sounds incident on sensor 108.
  • each pixel or group of pixels output by the camera(s) may be associated with a respective exposure time period of a corresponding sensing element or group of sensing elements to external light.
  • camera(s) 108 may together provide an array of adjacent rows of sensing elements 128.
  • camera(s) 108 can be configured to output a sequence of image pixels that correspond to measurements of the external light by corresponding sensing elements in the array.
  • camera(s) 108 may output a first row of image pixels based on data from a first row of sensing elements in the array, followed by a second row of image pixels based on data from a second adjacent row of sensing elements in the array, and so on.
  • the first image pixel row may be associated with a first exposure time period during which the first row of sensing elements was exposed to light, and
  • the second image pixel row may be associated with a second exposure time period during which the second adjacent row of sensing elements was exposed to light, etc.
  • the first exposure time period may begin before the second exposure time period begins. For instance, after a time delay from a start time of the first exposure time period (and optionally before the first exposure time period lapses), camera(s) 108 may start exposing the second adjacent row of sensing elements. Additionally, the first exposure time period may end before the second exposure time period ends.
  • controller 104 may read outputs from the first row of sensing elements after the first exposure time period ends and while the second row of sensing elements is still being exposed to the external light, and then read outputs from the second row of sensing elements after the second exposure period ends and while a third row of sensing elements is still being exposed to the external light, and so on.
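The staggered row timing just described is what makes a row-to-time mapping well defined: each row's exposure starts a fixed delay after the previous row's. The sketch below shows such a mapping under hypothetical timing parameters; the names and values are assumptions, not figures from this disclosure.

```python
# Hypothetical rolling-shutter row timing: row k starts exposing at a fixed
# offset after row k-1, and exposures of adjacent rows may overlap in time.
def row_exposure_window(row_index, frame_start_time,
                        row_offset_s=10e-6, exposure_s=50e-6):
    start = frame_start_time + row_index * row_offset_s
    return start, start + exposure_s      # (start, end) of the row's exposure

# Example: with a 10 us row offset and a 50 us exposure, rows 0 through 4 are
# all being exposed simultaneously 40 us into the frame.
```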
  • system 100 may be configured to select the order in which the sequence of image pixels are obtained from sensing elements 128 in the rolling shutter mode based on an order in which transmitter 120 is emitting light pulses (or other signals).
  • a given row of sensing elements in the array of sensing elements 128 may be aligned (e.g., parallel, etc.) with the axis of rotation of a LIDAR (e.g., sensor 106).
  • For instance, where the axis of rotation of the LIDAR is a vertical axis, the given row may correspond to a vertical row of sensing elements (e.g., a vertical linear arrangement parallel to the axis of rotation of the LIDAR).
  • transmitter 120 may be configured to output a plurality of light beams in an arrangement of one or more vertical lines repeatedly as the LIDAR (e.g., sensor 106) rotates about the axis.
  • camera(s) 108 may then output a second row of image pixels using a second adjacent row of sensing elements in the direction of the rotation of the LIDAR (or other sensor 106).
  • the second row of image pixels may be aligned with a second vertical line of light beams emitted by transmitter 120 after sensor 106 rotates toward the second row of sensing elements, and so on.
  • the sequence of image pixels obtained from camera(s) 108 may include a sufficient number of pixels that were captured at times (and from viewing directions) that are similar to the times and directions of LIDAR light pulses (or other signals) emitted by transmitter 120 (e.g., as transmitter 120 rotates about a vertical axis).
  • If the camera(s) (e.g., sensor(s) 108) instead captured the sequence of image pixels using a first horizontal row of sensing elements followed by a second horizontal row of sensing elements and so on, then fewer image pixels may be captured at times (and from viewing directions) that are similar to the times and directions of the LIDAR light pulses.
  • Optical element(s) 130 may include any combination of optical components such as lens(es), mirror(s), waveguide(s), light filter(s), or any other type of optical component, similarly to optical element(s) 124. Further, optical elements 130 can be arranged to focus, direct, and/or adjust light characteristics of incident light for propagation toward sensing elements 128. Further, where sensor(s) 108 include a plurality of cameras for instance, optical element(s) 130 may include a plurality of respective camera lenses that focus external light onto respective image sensors of the cameras.
  • optical element(s) 130 may include one or more light filters that selectively transmit particular wavelengths of light toward one or more particular sensing elements of sensor 108.
  • optical element(s) 130 may include one or more light filters that attenuate light wavelengths of light emitted by transmitter 120. With this arrangement, for instance, system 100 can reduce noise measurements (by sensing element(s) 128) that are associated with the high intensity of light pulses (or other signals) emitted by transmitter 120.
  • sensor 108 may include color image sensors (e.g., Bayer filter sensor, layered pixel sensor array, etc.) configured to indicate colors of incident light.
  • optical element(s) 130 may include a color filter array, where each color filter of the array transmits red, green, or blue light to a particular sensing element adjacent to the color filter (and attenuates other wavelengths of light).
  • System 100 can then generate (e.g., by combining outputs from multiple sensing elements that sense light having different colors) image pixels that indicate color information (e.g., red, green, and blue, etc.).
  • optical element(s) 130 may include one or more filters that attenuate wavelengths of the light (or other signal) emitted by transmitter 120 and one or more other filters that allow transmission of these wavelengths.
  • optical element(s) 130 may include a color filter array that includes green, red, and blue light filters.
  • a relatively large number of the color filters can be configured to attenuate the wavelengths of the emitted light of transmitter 120 to reduce the effects of the high intensity signals emitted by transmitter 120.
  • a relatively smaller number of the color filters (e.g., one or more of the green light filters, etc.) can be configured to allow transmission of wavelengths of the light emitted by transmitter 120.
  • the high intensity light of transmitter 120 can be used to illuminate one or more sensing elements in dark external light conditions (e.g., night time).
  • Rotating platform 110 may be configured to rotate about an axis.
  • For example, sensor 106 (and/or transmitter 120 and receiver 122 thereof) may be supported (directly or indirectly) by rotating platform 110 such that each of these components moves relative to the environment in response to rotation of rotating platform 110.
  • each of these components could be rotated (simultaneously) relative to an axis so that sensor 106 may obtain information from various directions.
  • the axis of rotation of rotating platform 110 is vertical and a pointing direction of sensor 106 can be adjusted horizontally by the rotation of rotating platform 110 about its vertical axis of rotation.
  • Rotating platform 110 can be formed from any solid material suitable for supporting one or more components (e.g., sensor 106) mounted thereon.
  • actuators 112 may actuate rotating platform 110.
  • actuators 112 may include motors, pneumatic actuators, hydraulic pistons, and/or piezoelectric actuators, among other possibilities.
  • controller 104 could operate actuator 112 to rotate rotating platform 110 in various ways so as to obtain information about the environment.
  • rotating platform 110 could be rotated in either direction.
  • rotating platform 110 may carry out complete revolutions such that sensor 106 (e.g., LIDAR) provides a 360° horizontal FOV of the environment.
  • rotating platform 110 may rotate at various frequencies so as to cause sensor 106 to scan the environment at various refresh rates and/or scanning resolutions.
  • system 100 may be configured to adjust the pointing direction of the emitted signal (emitted by transmitter 120) in various ways.
  • For example, signal sources (e.g., light sources, antennas, acoustic transducers, etc.) in transmitter 120 can be operated according to a phased array configuration or other type of beam steering configuration.
  • To that end, transmitter 120 may include phased array optics (e.g., optical elements 124) that control the phase of light waves emitted by the light sources.
  • controller 104 can be configured to adjust the phased array optics (e.g., phased array beam steering) to change the effective pointing direction of a light signal emitted by transmitter 120 (e.g., even if rotating platform 110 is not rotating).
  • transmitter 120 may include an array of antennas, and controller 104 can provide respective phase-shifted control signals for each individual antenna in the array to modify a pointing direction of a combined RF signal from the array (e.g., phased array beam steering).
  • transmitter 120 may include an array of acoustic transducers, and controller 104 can similarly operate the array of acoustic transducers (e.g., via phase-shifted control signals, etc.) to achieve a target pointing direction of a combined sound signal emitted by the array (e.g., even if the rotating platform 110 is not rotating, etc.).
  • the pointing direction of sensor(s) 106 can be controlled using a deforming flexible structure (e.g., MEMs, etc.) that can be deformed in response to a control signal from controller 104 to adjust a steering direction of the signals emitted by transmitter 120.
  • Other examples are possible.
  • Stationary platform 114 may take on any shape or form and may be configured for coupling to various structures, such as to a top of a vehicle for example. Also, the coupling of stationary platform 114 may be carried out via any feasible connector arrangement (e.g., bolts and/or screws). In this way, system 100 could be coupled to a structure so as to be used for various purposes, such as those described herein.
  • sensor(s) 108 can be coupled to stationary platform 114. In this example, sensor(s) 108 can remain stationary relative to the rotational motion of sensor(s) 106 (or the otherwise changing beam directions of signals emitted by transmitter 120). In another example, sensor(s) 108 can be mounted to another physical structure different than stationary platform 114.
  • Rotary link 116 directly or indirectly couples stationary platform 114 to rotating platform 110.
  • rotary link 116 may take on any shape, form and material that provides for rotation of rotating platform 110 about an axis relative to stationary platform 114.
  • rotary link 116 may take the form of a shaft or the like that rotates based on actuation from actuator 112, thereby transferring mechanical forces from actuator 112 to rotating platform 110.
  • rotary link 116 may have a central cavity in which one or more components of system 100 may be disposed.
  • rotary link 116 may also provide a communication link for transferring data and/or instructions between stationary platform 114 and rotating platform 110 (and/or components thereon such as sensor(s) 106, etc.).
  • Housing 118 may take on any shape, form, and material and may be configured to house one or more components of system 100.
  • housing 118 can be a dome- shaped housing.
  • housing 118 may be composed of a material that is at least partially non-transparent, which may allow for blocking of at least some light from entering the interior space of housing 118 and thus help mitigate thermal and noise effects of ambient light on one or more components of system 100.
  • Other configurations of housing 118 are possible as well.
  • housing 118 may be coupled to rotating platform 110 such that housing 118 is configured to rotate about the above-mentioned axis based on rotation of rotating platform 110.
  • sensor(s) 106 may rotate together with housing 118.
  • housing 118 may remain stationary while sensor(s) 106 rotate within housing 118.
  • System 100 could also include multiple housings similar to housing 118 for housing certain sub-systems or combinations of components of system 100.
  • system 100 may include a first housing for sensor(s) 106 and a separate housing for sensor(s) 108. Other examples are possible as well.
  • Display 140 can optionally be included in system 100 to display information about one or more components of system 100.
  • controller 104 may operate display 140 to display images captured using a camera (e.g., sensor 108), a representation (e.g., 3D point cloud, etc.) of an environment of system 100 indicated by LIDAR data from sensor 106, and/or a representation of the environment based on a combination of the data from sensors 106 and 108 (e.g., colored point cloud, images with superimposed temperature indicators, etc.).
  • display 140 may include any type of display (e.g., liquid crystal display, LED display, cathode ray tube display, projector, etc.).
  • display 140 may have a graphical user interface (GUI) for displaying and/or interacting with images captured by sensor 108, LIDAR data captured using sensor 106, and/or any other information about the various components of system 100 (e.g., power remaining via power supply arrangement 102).
  • a user can manipulate the GUI to adjust a scanning configuration of sensors 106 and/or 108 (e.g., scanning refresh rate, scanning resolution, etc.).
  • system 100 can be combined or separated into a wide variety of different arrangements.
  • sensors 106 and 108 are illustrated as separate components, one or more components of sensors 106 and 108 can alternatively be physically implemented within a single device.
  • this arrangement of system 100 is described for exemplary purposes only and is not meant to be limiting.
  • FIG. 2A illustrates a device 200 that includes a rotating LIDAR sensor 206 and a camera ring 208, according to example embodiments.
  • device 200 includes a LIDAR 206, camera ring 208 (e.g., arrangement of rolling shutter cameras, etc.), a rotating platform 210, a stationary platform 214, a housing 218, a LIDAR lens 224, and camera lenses 230, 232, 234 which may be similar, respectively, to sensor(s) 106, sensor(s) 108, rotating platform 110, stationary platform 114, housing 118, optical element 124, and optical elements 130, for example.
  • LIDAR 206 may provide data (e.g., data point cloud, etc.) indicating distances between the one or more objects and the LIDAR 206 based on detection(s) of the reflected light 290, similarly to the discussion above for sensor 106.
  • each camera of camera ring 208 may receive and detect a respective portion of external light 270 incident on the respective camera.
  • external light 270 may include light originating from one or more external light sources, such as the sun, a street lamp, among other possibilities.
  • external light 270 may include light propagating directly from an external light source toward camera lenses 230, 232, and/or 234.
  • external light 270 may include light originating from an external light source and reflecting off one or more objects (not shown) in the environment of device 200 before propagating toward lenses 230, 232, and/or 234.
  • the cameras of camera ring 208 may generate one or more images of the environment based on external light 270.
  • each image generated by a particular camera may correspond to a particular FOV of the particular camera relative to device 200.
  • camera ring 208 may include a plurality of cameras that are arranged in a ring formation (e.g., circular arrangement, oval arrangement, etc.) relative to one another. Each camera of the plurality can be positioned (e.g., mounted to device 200 and/or camera ring 208) at a particular angle and/or orientation.
  • a FOV of a first camera may be adjacent to and/or partially overlapping FOVs of two other adjacent cameras. With this arrangement for instance, images from the individual cameras can be combined into an image of a 360-degree FOV of device 200.
  • the respective angle and/or orientation of each camera can be adjusted to reduce or prevent blind spots (e.g., regions of the surrounding environment that are not within the FOV of any camera in camera ring 208).
  • the respective FOVs of two adjacent cameras can be aligned (e.g., by moving, rotating, and/or otherwise adjusting relative mounting positions of the two cameras, etc.) such that a region of the environment between the FOVs of the two cameras (e.g., “blind spot”) is less than a threshold distance from device 200.
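As a simplified, hypothetical illustration of this blind-spot criterion (a planar geometric model that is not part of this disclosure), the range beyond which two adjacent, slightly overlapping FOVs close the gap between them can be estimated from the camera separation and the angular overlap of the adjacent FOV edges:

```python
import math

# Hypothetical blind-spot estimate for two adjacent cameras on a ring.
# d_m         : separation between the two camera apertures (meters)
# overlap_deg : angular overlap between the adjacent FOV edges (degrees)
# Beyond the returned range the two FOVs cover the gap, so the blind spot
# extends only out to that distance from the device.
def blind_spot_range(d_m, overlap_deg):
    if overlap_deg <= 0:
        return float('inf')   # parallel or diverging edges: the gap never closes
    return d_m / (2.0 * math.tan(math.radians(overlap_deg) / 2.0))

# e.g., apertures 6 cm apart with 2 degrees of edge overlap -> roughly 1.7 m
print(round(blind_spot_range(0.06, 2.0), 2))
```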
  • camera ring 208 could optionally include a housing (e.g., ring-shaped, etc.) having one or more indentations that receive and/or support the cameras at particular respective mounting positions (e.g., angle, orientation, etc.).
  • In some examples, an example system (e.g., system 100, a calibration system, etc.) may be configured to compare images captured by the cameras, and to determine, based on the comparison, alignment offsets that achieve respective target FOVs for the respective cameras.
  • the example system may also include and/or operate a robotic arm, an actuator, and/or any other alignment apparatus to adjust the positions of the cameras in camera ring 208 according to the determined alignment offsets.
  • Other examples are possible.
  • device 200 may operate the cameras of camera ring 208 and/or process the captured images therefrom (e.g., combine portions of the captured images, etc.) to form a cohesive circular vision of the environment of device 200.
  • a computing system (not shown) of device 200 or another device may match features in images captured by camera ring 208 to generate a combined image that spans a combination of the FOVs of the cameras.
  • lens 230 may focus light from a first 90-degree FOV of device 200, lens 232 may focus light from a second adjacent 90-degree FOV, and so on.
  • the first FOV could optionally partially overlap the second FOV.
  • the FOV imaged by each camera may be more or less than 90 degrees.
  • an image captured by any of the cameras in camera ring 208 may indicate various types of information such as light intensities for different wavelengths (e.g., colors, etc.) in external light 270, among other examples.
  • LIDAR 206 (and/or housing 218) can be configured to have a substantially cylindrical shape and to rotate about axis 242, based on rotation of rotating platform 210 that supports LIDAR 206 for instance. Further, in some examples, the axis of rotation 242 may be substantially vertical. Thus, for instance, by rotating LIDAR 206 about axis 242, device 200 (and/or a computing system that operates device 200) can determine a three-dimensional map (e.g., based on data from LIDAR 206) of a 360-degree view of the environment of device 200.
  • device 200 can be configured to tilt the axis of rotation of rotating platform 210 (relative to stationary platform 214), thereby adjusting the FOV of LIDAR 206.
  • rotating platform 210 may include a tilting platform that tilts in one or more directions.
  • LIDAR lens 224 can have an optical power to both collimate (and/or direct) emitted light beams 250 toward an environment of LIDAR 206, and focus reflected light 260 received from the environment onto a LIDAR receiver (not shown) of LIDAR 206.
  • lens 224 has a focal length of approximately 120 mm. Other example focal lengths are possible.
  • LIDAR 206 may include separate transmit and receive lenses.
  • LIDAR 206 can alternatively include a transmit lens that directs emitted light 250 toward the environment, and a separate receive lens that focuses reflected light 260 for detection by a receiver of LIDAR 206.
  • device 200 may include more, fewer, or different components than those shown, and one or more of the components shown may be combined or separated in different ways.
  • device 200 could alternatively include a single camera lens that extends around a circumference of camera ring 208.
  • camera ring 208 is shown to be coupled to stationary platform 214, camera ring 208 can alternatively be implemented as a separate physical structure.
  • camera ring 208 can be positioned above LIDAR 206, without being rotated by rotating platform 210.
  • camera ring 208 may include more or fewer cameras than shown. Other examples are possible.
  • FIG. 2B illustrates a cross-section view of camera ring 208, according to an example embodiment.
  • camera ring 208 includes four cameras 208a, 208b, 208c, 208d that are arranged around axis of rotation 242 (i.e., the axis of rotation of LIDAR 206).
  • each of the cameras may be configured to image a respective 90-degree FOV of the environment of device 200.
  • camera ring 208 may include fewer or more cameras than shown.
  • camera ring 208 may alternatively include eight cameras, where each camera is coupled to a respective lens that focuses light from (at least) a respective 45-degree FOV of the environment onto an image sensor of the camera.
  • camera ring 208 may have a wide variety of different configurations and thus the configuration shown includes four cameras only for convenience in description.
  • camera 208a includes lens 230 that focuses a first portion of external light (e.g., light 270) from the environment of device 200 onto an image sensor 226 of camera 208a.
  • camera 208b includes lens 232 that focuses a second portion of the external light onto an image sensor 246 of camera 208b.
  • cameras 208c and 208d may be configured to focus respective portions of the external light onto respective image sensors of the cameras.
  • each image sensor may include an array of sensing elements similar to sensing elements 128 for example.
  • image sensor 226 of camera 208a may include an array of adjacent rows of sensing elements, exemplified by sensing elements 228a-228f (which may be similar to sensing elements 128 for example).
  • a first row of sensing elements in image sensor 226 may include sensing element 228a and one or more other sensing elements (not shown) that are vertically arranged through the page (e.g., parallel to axis 242).
  • a second row of sensing elements adjacent to the first row may include sensing element 228b and one or more other sensing elements (not shown) that are vertically arranged through the page, and so on.
  • cameras 208a, 208b, 208c, 208d may together provide an array of adjacent rows of sensing elements that are arranged around axis 242, so as to be able to image various corresponding portions of a 360-degree (horizontal) FOV around device 200.
  • a given row of sensing elements in image sensor 246 of camera 208b may include sensing element 248a (and one or more other sensors arranged parallel to axis 242 through the page).
  • the given row of sensing elements in camera 208b may also be adjacent to a row of sensing elements in camera 208a that includes sensing element 228f.
  • the sequence of image pixels obtained from camera ring 208 may include a row of image pixels obtained using data from the row of sensing elements that includes sensing element 228f, followed by a row of image pixels obtained using data from the row of sensing elements that includes sensing element 248a.
  • image sensor 226 may include more or fewer rows of sensing elements than shown.
  • image sensor 226 may alternatively include 3000 rows of sensing elements, and each row may include 1000 sensing elements (extending through the page).
  • camera 208a may thus be configured to output a 3000 x 1000 pixel image. Further, in this embodiment, camera 208a may be configured to capture images at a rate of 60 Hz. Other camera configuration parameters are possible as well.
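For the example configuration above (a 3000 x 1000 pixel image at 60 Hz), the implied per-row readout offset can be estimated as follows, under the illustrative assumption that the rolling-shutter readout is spread across one full frame period:

```python
# Illustrative rolling-shutter timing for the example 3000-row, 60 Hz camera,
# assuming (for illustration only) that row readout spans one frame period.
rows_per_image = 3000
frame_rate_hz = 60.0

frame_period_s = 1.0 / frame_rate_hz             # ~16.7 ms per image
row_offset_s = frame_period_s / rows_per_image   # ~5.6 us between adjacent rows
print(frame_period_s, row_offset_s)
```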
  • the sizes, shapes, and positions of the various components of device 200 are not necessarily to scale, but are illustrated as shown only for convenience in description.
  • the sizes of the lenses 230, 232, 234, 236, and sensors 226, 246, etc., shown in Figure 2B may be different than the sizes shown.
  • the distance between lens 230 and image sensor 226 may be different than the distance shown.
  • the distance from lens 230 to sensor 226 may correspond to approximately twice the diameter of lens 230.
  • image sensor 226 and camera lens 230 may have other sizes, shapes, and/or positions relative to one another.
  • Figure 2C is a conceptual illustration of an operation of device 200, according to an example embodiment.
  • the sensing elements of image sensor 226 of camera 208a are in the plane of the page. It is noted that some of the components of device 200, such as camera lens 230 and LIDAR 206 for instance, are omitted from the illustration of Figure 2C for convenience in description.
  • device 200 may be configured to operate cameras 208a, 208b, 208c, and/or 208d in a rolling shutter configuration to obtain a sequence of image pixels.
  • a first row of sensing elements that includes sensing elements 228a and 228g may be configured to measure an amount of external light incident thereon during a first exposure time period.
  • Device 200 may also include an analog to digital converter (not shown) that reads and converts the measurements by the first row of sensing elements (after the first exposure time period lapses) for transmission to a controller (e.g., controller 104) of device 200.
  • device 200 may start exposing a second row of sensing elements that includes sensing elements 228b and 228h for a second exposure time period.
  • exposure time periods of multiple rows of sensing elements may partially overlap (e.g., the time delay between the start times of the first and second exposure time periods may be less than the first exposure time period, etc.).
  • a camera in the rolling shutter configuration can stagger the start times of the exposure time periods to increase the image refresh rate (e.g., by simultaneously exposing multiple rows of sensing elements during the overlapping portions of their respective exposure time periods).
  • device 200 may then similarly measure and transmit the measurements by the second row of sensing elements to the controller. This process can then be repeated until all the rows of sensing elements (i.e., a complete image frame) are scanned. For example, after a start time of the second exposure time period (and optionally before the second exposure time period lapses), device 200 may begin exposing a third row of sensing elements (adjacent to the second row) to external light 270, and so on.
  • device 200 may be configured to obtain the sequence of image pixels in an order that is similar to the order in which light pulses are emitted by LIDAR 206. By doing so, for instance, more image pixels captured by cameras 208a-d may overlap (in both time and viewing direction) with LIDAR data (e.g., detected reflections of the emitted light pulses) than in an implementation where the sequence of image pixels is obtained in a different order.
  • light beams 250a, 250b, 250c may correspond to the emitted light 250 shown in Figure 2A when LIDAR 206 is at a first pointing direction or orientation about axis 242.
  • the device 200 may be configured to scan the first (vertical) row of sensing elements (e.g., including elements 228a and 228g) before scanning sensing elements in the second (vertical) row (e.g., including elements 228b and 228h).
  • the image pixels captured using the first row of sensing elements may be more likely to be matched with detected reflections of light beams 250a-250c in terms of both time and viewing direction.
  • LIDAR 206 may then rotate (e.g., counterclockwise) about axis 242 and emit light beams 252a-252c.
  • Device 200 may then obtain a second row of image pixels using the second row of sensing elements (e.g., including sensing elements 228b and 228h), which may be more likely to be aligned (in both time and viewing direction) with detected reflections of light beams 252a-252c, and so on.
  • device 200 may also be configured to obtain a row of image pixels in the sequence according to the order of emission of the light pulses / beams by LIDAR 206.
  • if LIDAR 206 emits light beams 250a, 250b, 250c in that order, device 200 may be configured to obtain the image pixel row associated with the first row of sensing elements in a similar order (e.g., beginning with sensing element 228a and ending with sensing element 228g).
  • device 200 may instead be configured to obtain the image pixel row in an opposite order (e.g., beginning with sensing element 228g and ending with sensing element 228a).
  • device 200 may be configured to adjust a time delay between capturing subsequent image pixel rows in the sequence of image pixels based on a rate of rotation of LIDAR 206. For example, if LIDAR 206 increases its rate of rotation (e.g., via actuator(s) 112, etc.), then device 200 may reduce the time delay between obtaining the first row of image pixels associated with the first row of sensing elements (e.g., including sensing elements 228a and 228g) and obtaining the second row of image pixels associated with the second adjacent row of sensing elements.
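  • The sketch below illustrates one way such a row-to-row delay could be matched to the LIDAR rotation rate; the 90-degree horizontal camera FOV, the 3000 scanned rows, and the variable names are illustrative assumptions:

    #include <cstdio>

    // Sketch: choosing the delay between successive image-pixel rows so that the
    // rows keep pace with a rotating LIDAR. Values are illustrative assumptions.
    int main() {
      const double lidar_rotation_hz = 10.0;  // one rotation about the axis every 100 ms
      const double camera_fov_deg = 90.0;     // horizontal FOV covered by one camera (assumed)
      const int rows_in_camera = 3000;        // vertical rows of sensing elements

      const double deg_per_second = 360.0 * lidar_rotation_hz;     // LIDAR sweep rate
      const double deg_per_row = camera_fov_deg / rows_in_camera;  // angle per image row

      // Delay between starting adjacent rows so each row is exposed roughly when
      // the LIDAR points in that row's viewing direction.
      const double row_delay_s = deg_per_row / deg_per_second;
      std::printf("row-to-row delay at 10 Hz: %.2f us\n", row_delay_s * 1e6);

      // If the LIDAR increases its rotation rate, the delay shrinks proportionally.
      const double faster_hz = 15.0;
      std::printf("row-to-row delay at 15 Hz: %.2f us\n",
                  (deg_per_row / (360.0 * faster_hz)) * 1e6);
      return 0;
    }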
  • FIG. 2D illustrates a top view of device 200.
  • LIDAR 206 may have a first pointing direction that corresponds to an angular position of 0° about axis 242 (e.g., toward the bottom of the page).
  • LIDAR 206 may scan a region of the surrounding environment that corresponds to a center of an image captured using camera 208c (best shown in Figure 2B), which includes lens 234.
  • Figure 2E illustrates another top view of device 200. In the illustration of Figure 2E, LIDAR 206 may have a second pointing direction that corresponds to an angular position of 180° about axis 242 (e.g., toward the top of the page). For instance, LIDAR 206 may have the second pointing direction of Figure 2E after LIDAR 206 is rotated from the first pointing direction of Figure 2D by half a complete rotation about axis 242. Further, in this configuration for example, LIDAR 206 may scan a region of the environment that corresponds to a center of an image captured using camera 208a (best shown in Figure 2B), which includes lens 230.
  • the time period in which FOVs of LIDAR 206 overlap the FOV of camera 208a may be less than the exposure time period (and readout time period) suitable for capturing an image using camera 208a.
  • image sensor 226 of camera 208a may measure and output an image frame (i.e., pixel data from all the sensing elements of image sensor 226) over a period of 60 ms.
  • LIDAR 206 may be configured to rotate at a rotation rate of 10 Hz (i.e., one complete rotation about axis 242 every 100ms).
  • device 200 may be configured to synchronize LIDAR light pulses emitted by LIDAR 206 with image pixels captured by some but not all the image sensing elements in a camera.
  • device 200 can be configured to trigger capturing an image by a particular camera such that a particular region of the image (e.g., vertical row(s) of image pixels at or near the center of the image, etc.) is exposed to external light when LIDAR 206 is pointing at a particular pointing direction aligned with the particular region of the image.
  • in this example, where LIDAR 206 rotates at a frequency of 10 Hz, image pixels near the center of the image may be relatively more aligned (with respect to timing and viewing direction) with LIDAR light pulses that were emitted / detected when these image pixels were measured.
  • image pixels captured using rows of sensing elements that are further from the center of the image sensor may be relatively misaligned (in time or viewing direction) with LIDAR light pulses that were emitted / detected when these image pixels were measured.
  • cameras 208a, 208b, 208c, 208d can be configured to have partially overlapping FOVs.
  • camera 208d (best shown in Figure 2B) may be configured to have a FOV that partially overlaps the FOV of adjacent camera 208a.
  • device 200 can use the aligned image pixel data from camera 208d (e.g., image pixels near center of captured image) instead of the misaligned image pixel data captured using camera 208a (e.g., image pixels further from the center of the image) for mapping with the LIDAR data.
  • Figure 3 illustrates a cross-section view of another rolling shutter camera arrangement 308 (e.g., camera ring), according to example embodiments.
  • Camera ring 308 may be similar to camera ring 208 shown in Figure 2B.
  • axis 342 may be an axis of rotation of a LIDAR similarly to axis 242.
  • image sensor 326 may be similar to image sensor 226 (and/or 246) and may include an array of sensing elements, exemplified by sensing elements 328a-328e, which may be similar to sensing elements 228a-228f.
  • image sensor 326 may comprise a first row of sensing elements that includes sensing element 328a and one or more other sensing elements (not shown) in a linear arrangement (e.g., perpendicular to the page), and a second adjacent row of sensing elements that includes sensing element 328b and one or more other sensing elements (not shown) in a linear arrangement (e.g., perpendicular to the page).
  • camera ring 308 may also include one or more camera lenses (e.g., similar to camera lenses 230, 232, 234, 236, etc.) that focus portions of external light incident on camera ring 308 toward respective sensing elements in the image sensor 326. Additionally or alternatively, camera ring 308 may include one or more of the components shown in any of system 100 and/or device 200.
  • camera ring 308 includes image sensor 326 that is disposed along a curved surface (e.g., circular surface) around axis 342.
  • image sensor 326 can be implemented on a flexible substrate (e.g., flexible PCB, etc.) that mounts an arrangement of sensing elements (including sensing elements 328a-328e, etc.).
  • each of the rows of sensing elements in image sensor 326 may be at a same given distance to the axis of rotation 342 (e.g., circular or cylindrical arrangement of sensing elements).
  • image sensor 326 can be implemented as a plurality of physically separate rows of sensing elements that are arranged adjacent to one another around axis of rotation 342.
  • each physically separate row of sensing elements may be located at a same given distance to the axis of rotation as the other rows.
  • the curved surface on which each row of sensing elements in image sensor 326 is mounted may improve the overlap (e.g., in terms of viewing direction) between the image pixels captured by the sensing elements and the light beams emitted by a LIDAR sensor that rotates about axis 342.
  • because the viewpoint of the LIDAR device (e.g., the location of the LIDAR lens) rotates about axis 342, the curved surface of image sensor 326 may resemble the circular path of emitted / detected LIDAR light pulses, which may improve the likelihood of matching image pixels collected by sensor 326 with LIDAR light pulses (that are detected along a similar curved path in the horizontal direction of the rotation of the LIDAR sensor).
  • the image could be used to identify contents of the environment (e.g., to identify an object of interest).
  • it can also be useful to map a detected point in the environment (e.g., a point of a LIDAR point cloud) to a corresponding location within such an image.
  • Such methods for localizing a point in an environment to a location in an image of a portion of the environment could be part of a sensor fusion algorithm.
  • Sensor fusion algorithms can be employed to merge data from multiple sensors, such as an image sensor and a LIDAR sensor for instance, to generate a representation of a scanned environment.
  • a 3D representation of a scanned environment may indicate color information determined using an image sensor combined with other information (e.g., distance, depth, intensity, texture, reflected light pulse length, etc.) determined using a LIDAR sensor.
  • a variety of methods could be employed to map a point in an environment to a location within an image of the environment. These methods generally use information about the location and orientation, or ‘pose,’ within the environment of the camera that generated the image. In examples where the camera is in motion relative to the environment (e.g., where the camera is part of an automobile, an unmanned aerial vehicle, or some other autonomous or otherwise-configured vehicle), it is desirable to determine the pose of the camera within the environment at the particular period of time when the camera generated the image.
  • Figure 4A illustrates an example point 410 in an environment (e.g., a point that is part of a point cloud generated through the operation of a LIDAR sensor).
  • a camera is also present in the environment and is operable to image a portion of the environment, e.g., a portion that contains the example point 410.
  • the camera is in motion (the direction of motion indicated, in Figure 4A, by the arrow labeled “MOTION” ).
  • the location of the camera at first, second, and third points in time is illustrated, in Figure 4A, by the first 420a, second 420b, and third 420c cameras.
  • Figure 4B is a conceptual illustration of sensing elements 428a-h (e.g., CMOS light sensitive elements, pixels of a CCD sensor) of an image sensor 426 of the in-motion camera illustrated in Figure 4A.
  • a lens, aperture, and/or other elements of the camera project light received from the environment from different trajectories to respective different locations on the image sensor 426. Accordingly, the example point 410 will be represented, within an image generated by the camera, at a location that is dependent upon when the camera sensed light from the example point 410.
  • if the camera senses light from the example point 410 while at the first camera location 420a, the example point 410 will be mapped to a first location 415a (e.g., a first pixel) within the image.
  • if the camera senses light from the example point 410 while at the second camera location 420b, the example point 410 will be mapped to a second location 415b (e.g., a second pixel) within the image.
  • if the camera senses light from the example point 410 while at the third camera location 420c, the example point 410 will be mapped to a third location 415c (e.g., a third pixel) within the image.
  • if the camera acts according to a “global shutter” mode or otherwise operates to generate the entire image at substantially the same time (e.g., during a single exposure period), locating a point in an environment (e.g., 410) within the frame of the image may be relatively straightforward. For example, the location and orientation of the camera, relative to the point, at the point(s) in time when the image was generated could be determined and used to project the point into the image.
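  • The following is a minimal sketch of that global-shutter case, projecting a point with a single camera pose through a simplified pinhole model; the pose representation, intrinsics, and values are assumptions for illustration and are not the camera model of the incorporated text file:

    #include <cstdio>

    // Sketch: projecting a point in the environment into a global-shutter image
    // using a single camera pose. A simplified pinhole model with no lens
    // distortion is assumed; all values are illustrative.
    struct Pose {
      double position[3];  // camera center in the world frame
      double R[3][3];      // world-to-camera rotation matrix
    };

    struct Pixel { double u, v; };

    Pixel ProjectToImage(const double point[3], const Pose& pose,
                         double f_u, double f_v, double c_u, double c_v) {
      // Transform the point into the camera frame: p_cam = R * (p_world - t).
      double d[3], p[3];
      for (int i = 0; i < 3; ++i) d[i] = point[i] - pose.position[i];
      for (int i = 0; i < 3; ++i)
        p[i] = pose.R[i][0] * d[0] + pose.R[i][1] * d[1] + pose.R[i][2] * d[2];

      // Pinhole projection (assumes p[2] > 0, i.e., the point is in front of the camera).
      return {f_u * p[0] / p[2] + c_u, f_v * p[1] / p[2] + c_v};
    }

    int main() {
      const Pose pose = {{0.0, 0.0, 0.0},
                         {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}};  // identity orientation
      const double point[3] = {2.0, -1.0, 10.0};              // 10 m in front of the camera
      const Pixel px = ProjectToImage(point, pose, 1400.0, 1400.0, 500.0, 1500.0);
      std::printf("projected pixel: (%.1f, %.1f)\n", px.u, px.v);
      return 0;
    }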
  • the process of projecting the point into the image can be more complex when different portions of the image are generated at respective different points in time, e.g., if the camera is operated in a ‘rolling shutter’ mode. In such an operational mode, each row or column of pixels (or other light-sensitive elements of the camera) is operated during a respective different exposure time to sense light from the environment.
  • a first row of sensing elements that includes sensing elements 428a and 428g may be configured to measure an amount of external light incident thereon during a first exposure time period.
  • the camera may also include an analog to digital converter (not shown) that reads and converts the measurements by the first row of sensing elements (after the first exposure time period lapses) for transmission to a controller (e.g., controller 104).
  • the camera may start exposing a second row of sensing elements that includes sensing elements 428b and 428h for a second exposure time period.
  • a camera pose could be determined for each of the exposure times (e.g., for each row and/or column) and used to project the point to a respective potential location within the image. Based on the plurality of projections, a particular one of the projections could be selected as the estimated projection of the point into the image. For example, a difference could be determined, for each of the potential locations, between the time of the pose used to generate the projection and the time that the camera imaged the row/column at the projected location within the image. In such an example, the projection corresponding to the lowest-magnitude difference could be selected as the estimated projection of the point into the image.
  • Such a method includes determining a large number of camera poses and projections, using the poses, of the point into the frame of the image. This may be computationally expensive (e.g., in terms of processor cycles, memory use, data bus use), energetically expensive (e.g., in terms of system energy used to compute the poses, projections, or other calculations), expensive in terms of time used to generate an output, or otherwise unfavorable.
  • such a brute-force method may be undesirable or unworkable in applications, such as autonomous vehicle mapping, localization, and/or navigation applications, that are constrained with respect to power, computational resources, latency/time budget, or other factors and/or applications wherein such a mapping must be performed for a great many points (e.g., for each point in a point cloud generated by a LIDAR sensor).
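  • For comparison, the following sketch illustrates the brute-force baseline described above, using a deliberately simplified one-dimensional constant-velocity camera; every quantity in it is an illustrative assumption:

    #include <cmath>
    #include <cstdio>
    #include <limits>

    // Sketch of the brute-force baseline: evaluate a camera pose for every
    // column exposure time, project the point with each pose, and keep the
    // projection whose exposure time best matches the pose time used to
    // produce it. The 1-D constant-velocity camera model and all numbers are
    // illustrative assumptions, not the patent's camera model.
    int main() {
      const int num_cols = 1000;          // pixel columns along the rolling direction
      const double readout_time = 0.025;  // seconds to read all columns (assumed)
      const double sec_per_col = readout_time / num_cols;
      const double f = 1400.0, c_u = 500.0;  // simplified intrinsics
      const double cam_speed = 10.0;         // camera moves +x at 10 m/s (assumed)
      const double X = 4.0, Z = 20.0;        // world point, camera frame at t = 0

      auto cam_x = [&](double t) { return cam_speed * t; };
      auto project_u = [&](double t) { return f * (X - cam_x(t)) / Z + c_u; };
      auto col_time = [&](int col) { return col * sec_per_col; };

      double best_err = std::numeric_limits<double>::infinity();
      int best_col = -1;
      for (int col = 0; col < num_cols; ++col) {
        const double t = col_time(col);  // pose time for this column
        const int projected = static_cast<int>(std::lround(project_u(t)));
        if (projected < 0 || projected >= num_cols) continue;
        const double err = std::fabs(col_time(projected) - t);  // time mismatch
        if (err < best_err) { best_err = err; best_col = projected; }
      }
      std::printf("selected column %d (time mismatch %.3g s) after %d projections\n",
                  best_col, best_err, num_cols);
      return 0;
    }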
  • the methods described herein allow for lower-cost, lower-latency determination of the location, within an image of a portion of an environment, of a corresponding point located within the environment (e.g., a point from a LIDAR point cloud). These methods achieve these improvements by posing the location-estimation problem as a time-estimation problem. In particular, these methods estimate the time (e.g., the particular exposure time of a row/column of a rolling shutter image) at which light from the point in the environment was sensed by a camera when generating the image of the environment into which the point is to be projected.
  • a cost function can be determined for each time estimate and used to update the time estimate (e.g., based on a magnitude and/or sign of the cost function or of a residual of the cost function).
  • Such a cost function could include a difference between a first term that is related to the estimated time (e.g., a first term that is equal to the estimated time, defined relative to a reference time) and a second term that is based on a mapping from a projected location, within the image, of the point in the environment to a time that the camera sensed light represented at the projected location within the image.
  • This second term can be determined by, e.g., (i) using a determined pose of the camera at the estimated time to project the point in the environment to a location within the image taken by the camera and (ii) determining a time that the camera sensed light, from the environment, at the projected location within the image (e.g., the time at which the row/column of pixels at the projected location was exposed).
  • convergence of the estimate occurs in relatively few iterations (e.g., two to three iterations), resulting in the performance of relatively few pose extrapolations/interpolations, point projections, or other computationally expensive tasks. Accordingly, these methods allow for the projection of a point in an environment to a corresponding location within an image of a portion of the environment using relatively less computational resources, power, and/or time when compared with alternative (e.g., brute force) methods.
  • the cost function term related to the estimated time and the term related to the time that the camera sensed light represented at the projected location within the image may be defined relative to different reference times/epochs.
  • the first term related to the estimated time could be based on the estimated time as defined relative to a time of an anchor pose of the camera and the second term related to the time that the camera sensed light represented at the projected location within the image could be based on a characteristic time of capture of the image (e.g., a timing of a first, middle, or end exposure time of a series of rolling shutter exposure times, a timing of exposure of a principal point within the image).
  • the cost function could include an additional term related to the offset between the time references of the first and second terms of the cost function.
  • An example of such a cost function is: cost(t_h) = t_h + t_offset - IndexToTime(Z_n(t_h)), where:
  • t_h is the estimated time that light from the point in the environment (e.g., 410) was imaged to generate a corresponding portion of the image of the environment, defined relative to a time, t_pose, of an anchor pose used to determine the pose of the camera at a given estimated time.
  • Z_n(t_h) is the location, within the image, to which the location of the point in the environment is projected when using the pose of the camera as estimated at the estimated time t_h.
  • the function IndexToTime() takes as an input a location within the frame of a rolling-shutter image and outputs a time, relative to a characteristic time t_principal_point of the image, that light represented by the row or column of pixels at the input location was sensed (e.g., an exposure time of the row/column of pixels).
  • the offset term, t_offset, represents a time difference between the different zero times/epochs of t_h and IndexToTime(). So, in this example, t_offset could be a static value equal to t_pose - t_principal_point.
  • IndexToTime() could provide a discontinuous output, e.g., an output time that is one of an enumerated set of output times corresponding to the set of exposure times of the rolling-shutter operation of the camera.
  • IndexToTime() could provide a continuous output, e.g., a linearized time that represents a linearized version of the enumerated set of output times corresponding to the set of exposure times of the rolling-shutter operation of the camera.
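  • The following sketch shows what discrete and linearized IndexToTime() mappings could look like for a camera whose columns are read out at a uniform rate; the image width, readout time, and principal-point column are illustrative assumptions:

    #include <cmath>
    #include <cstdio>

    // Sketch: IndexToTime-style mappings from a location in a rolling-shutter
    // image to the time its column was exposed, relative to the exposure time
    // of the principal point. Values are illustrative assumptions.
    struct RollingShutterTiming {
      int image_width = 1000;       // columns, read out left to right (assumed)
      double readout_time = 0.025;  // seconds from first to last column (assumed)
      double principal_col = 500.0; // column of the principal point

      double SecondsPerCol() const { return readout_time / image_width; }

      // Discrete form: snap to the enumerated exposure time of the nearest column.
      double IndexToTimeDiscrete(double u) const {
        return (std::round(u) - principal_col) * SecondsPerCol();
      }

      // Linearized form: treat exposure time as a continuous function of u, which
      // keeps the cost function smooth for the iterative update.
      double IndexToTimeLinear(double u) const {
        return (u - principal_col) * SecondsPerCol();
      }
    };

    int main() {
      RollingShutterTiming timing;
      std::printf("u = 123.4 -> discrete %.6f s, linear %.6f s\n",
                  timing.IndexToTimeDiscrete(123.4), timing.IndexToTimeLinear(123.4));
      return 0;
    }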
  • an initial estimated time is determined and the cost function iteratively applied to generate an output estimated time. This could include performing the iterative process a set number of times, e.g., between two and four times, inclusive. Alternatively, the absolute or relative (e.g., relative to the magnitude of the most recent estimated time) reduction in the cost function from one iteration to the next could be assessed (e.g., compared to a threshold value) and used to determine whether the estimated time has converged.
  • Such a threshold-based update process could be constrained to occur no more than a set threshold number of times, e.g., the update process could terminate due to the cost function converging (e.g., the cost function having been reduced in magnitude by less than a threshold amount) or due to the update process having been performed the threshold number of times.
  • the output estimated time can be used to determine a projected location, within the image, of the point in the environment. This can be done by repeating some of the processes employed in the iterative time estimation process (e.g., using the output estimated time to determine a pose, at the output time, of the camera and using the determined pose to project the point in the environment to a location within the image).
  • Updating the estimated time based on the cost function could include applying the output of the cost function (e.g., a residual determined by evaluating the cost function) to the estimated time in order to update the estimated time.
  • the output of the cost function could be applied directly, or could be normalized.
  • a Jacobian of the cost function with respect to the estimated time could be used to normalize the output of the cost function before applying the output of the cost function to update the estimated time.
  • Such a Jacobian could be determined analytically, using a derivative of one or more terms of the cost function, and/or by a process of numerical estimation.
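  • The pieces above can be combined as in the following sketch, which iterates the residual cost(t_h) = t_h + t_offset - IndexToTime(Z_n(t_h)) and normalizes it with a numerically estimated Jacobian; the one-dimensional constant-velocity camera, intrinsics, and timing values are illustrative assumptions rather than the implementation of the incorporated text file:

    #include <cstdio>

    // Sketch: iterative estimation of the time t_h at which a rolling-shutter
    // camera imaged a world point, using
    //   cost(t_h) = t_h + t_offset - IndexToTime(Z_n(t_h)).
    // The 1-D constant-velocity camera, intrinsics, and timing constants are
    // illustrative assumptions; a real implementation would use the full
    // camera model and pose extrapolation.
    namespace {
    const double kF = 1400.0, kCu = 500.0;  // simplified intrinsics
    const double kCamSpeed = 10.0;          // m/s along +x (assumed)
    const double kX = 4.0, kZ = 20.0;       // point in the camera frame at t = 0
    const int kWidth = 1000;
    const double kReadout = 0.025;          // s, first to last column (assumed)
    const double kSecPerCol = kReadout / kWidth;
    const double kPrincipalCol = 500.0;
    const double kTOffset = 0.0;            // t_pose - t_principal_point (assumed equal)

    // Z_n(t): project the point using the camera pose extrapolated to time t.
    double ProjectU(double t) { return kF * (kX - kCamSpeed * t) / kZ + kCu; }

    // IndexToTime: exposure time of column u relative to the principal point.
    double IndexToTime(double u) { return (u - kPrincipalCol) * kSecPerCol; }

    double Cost(double t_h) { return t_h + kTOffset - IndexToTime(ProjectU(t_h)); }
    }  // namespace

    int main() {
      double t_h = 0.0;              // initial estimate: principal-point time
      for (int i = 0; i < 3; ++i) {  // typically converges in two to three iterations
        const double r = Cost(t_h);
        const double eps = 1e-6;     // numerical Jacobian of the cost w.r.t. t_h
        const double jac = (Cost(t_h + eps) - r) / eps;
        t_h -= r / jac;              // normalized (Newton-style) update
        std::printf("iter %d: t_h = %.6f s, residual = %.3g\n", i, t_h, r);
      }
      std::printf("projected column: %.2f\n", ProjectU(t_h));
      return 0;
    }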
  • the location, within the image, to which the location of the point in the environment is projected when using the pose of the camera as estimated at the estimated time t h may be calculated in a variety of ways.
  • Z_n(t) could be a normalized location within the frame of the image. An example of such a calculation is: Z_n(t_h) = proj(n_p_f, n_tfm_cam(t_h))
  • [00134] where n_tfm_cam(t_h) is the pose of the camera at time t_h, n_p_f is the point in the environment to be projected into the image, and proj(x, Y) projects a point x into the frame of view of a pose Y.
  • the pose of the camera at a particular time can be determined from one or more anchor poses and respective one or more points in time.
  • An anchor pose may be determined based on global positioning data, magnetometer data, image data, wheel rotation data, LIDAR data, or some other information related to the location of the camera. Such pose data could be determined for the vehicle as a whole and then converted (e.g., by applying an offset translation and rotation) to arrive at the pose of the camera.
  • the anchor pose could be applied without modification.
  • the anchor pose could be extrapolated and/or interpolated to the time of interest.
  • the pose for the camera at a particular time could be determined by a process of extrapolation from a known anchor pose of the camera at a different time. An example of such a calculation is:
  • [00136] where the pose information for the camera at the anchor pose time t_pose includes the location and orientation of the camera as well as information about the motion of the camera, e.g., the translational and/or rotational velocity of the camera.
  • Such an extrapolation could be performed in a variety of ways.
  • An example of such an extrapolation is: x_cam(t_h) = x_cam(t_pose) + v_cam * (t_h - t_pose); R_cam(t_h) = R_cam(t_pose) * Exp(SkewSymmetric(w_cam) * (t_h - t_pose))
  • [00137] where R_cam(t_pose) is the orientation of the camera at the anchor pose time t_pose, x_cam(t_pose) is the location of the camera at t_pose, v_cam is the translational velocity of the camera at t_pose, and w_cam is the rotational velocity of the camera at t_pose. In some examples, it may be sufficient to extrapolate the pose based only on the translational velocity of the camera, e.g., in examples where the rotational velocity of the camera is relatively low-magnitude.
  • the pose of the camera for a particular time could be determined by interpolating multiple different known poses of the camera at respective different known points in time. This could include linear interpolation between two known poses (e.g., two known poses corresponding to respective times that are respectively before and after the time for which an interpolated pose is desired), nonlinear interpolation using more than two poses, or some other interpolation process.
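  • The following sketch illustrates constant-velocity extrapolation from an anchor pose and linear interpolation between two anchor poses, using a planar (x, y, yaw) pose purely for brevity; the representation and values are illustrative assumptions:

    #include <cstdio>

    // Sketch: extrapolating a camera pose from an anchor pose and its velocities,
    // and linearly interpolating between two anchor poses. A planar pose is an
    // illustrative assumption, not the patent's pose representation.
    struct Pose2D {
      double x, y, yaw;   // position (m) and heading (rad)
      double vx, vy, wz;  // translational (m/s) and rotational (rad/s) velocity
      double t;           // anchor time (s)
    };

    // Constant-velocity extrapolation of the anchor pose to time t_query.
    Pose2D Extrapolate(const Pose2D& anchor, double t_query) {
      const double dt = t_query - anchor.t;
      Pose2D out = anchor;
      out.x += anchor.vx * dt;
      out.y += anchor.vy * dt;
      out.yaw += anchor.wz * dt;  // exponential map reduces to this in the planar case
      out.t = t_query;
      return out;
    }

    // Linear interpolation between two anchor poses that bracket the query time.
    Pose2D Interpolate(const Pose2D& a, const Pose2D& b, double t_query) {
      const double s = (t_query - a.t) / (b.t - a.t);
      Pose2D out = a;
      out.x = a.x + s * (b.x - a.x);
      out.y = a.y + s * (b.y - a.y);
      out.yaw = a.yaw + s * (b.yaw - a.yaw);  // assumes no angle wrap between anchors
      out.t = t_query;
      return out;
    }

    int main() {
      const Pose2D anchor{0.0, 0.0, 0.0, 10.0, 0.0, 0.05, 100.0};
      const Pose2D later{0.5, 0.0, 0.0025, 10.0, 0.0, 0.05, 100.05};
      const Pose2D e = Extrapolate(anchor, 100.02);
      const Pose2D i = Interpolate(anchor, later, 100.02);
      std::printf("extrapolated: x=%.3f yaw=%.4f\n", e.x, e.yaw);
      std::printf("interpolated: x=%.3f yaw=%.4f\n", i.x, i.yaw);
      return 0;
    }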
  • the determined and/or detected location of the point in the environment could be moving over time.
  • the point could be the detected location of a moving object in the environment, and the translational velocity, rotational velocity, or other motion information about the moving object could also be detected.
  • the embodiments described herein could be modified to account for motion of the point within the environment over time.
  • Such a time-dependent location could be determined in a variety of ways, e.g., by extrapolating the location of the point from a location and velocity of the point detected at a single detection time (e.g., based on a location and a velocity of the point detected at time t_detect), by interpolating the location of the point between two different locations detected at two different times, or by using some other method to estimate the location of the point in the environment as a function of time.
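  • As a small illustration of the first of these options, the following sketch extrapolates a detected point's location from a single detection time and velocity; the detection values are assumptions:

    #include <array>
    #include <cstdio>

    // Sketch: estimating the time-dependent location of a moving detected point
    // by constant-velocity extrapolation from a single detection. The detection
    // time, position, and velocity values are illustrative assumptions.
    int main() {
      const double t_detect = 10.00;                             // s
      const std::array<double, 3> p_detect = {25.0, -3.0, 0.5};  // m
      const std::array<double, 3> velocity = {-8.0, 0.0, 0.0};   // m/s

      auto point_at = [&](double t) {
        std::array<double, 3> p = p_detect;
        for (int i = 0; i < 3; ++i) p[i] += velocity[i] * (t - t_detect);
        return p;
      };

      const std::array<double, 3> p = point_at(10.02);  // e.g., an estimated exposure time
      std::printf("point at t=10.02 s: (%.2f, %.2f, %.2f)\n", p[0], p[1], p[2]);
      return 0;
    }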
  • FIG. 5 is a flowchart of a method 500, according to example embodiments.
  • Method 500 presents an embodiment of a method that could be used with any of system 100, device 200, and/or camera ring 308, for example.
  • Method 500 may include one or more operations, functions, or actions as illustrated by one or more of blocks 502-514. Although the blocks are illustrated in a sequential order, these blocks may in some instances be performed in parallel, and/or in a different order than those described herein. Also, the various blocks may be combined into fewer blocks, divided into additional blocks, and/or removed based upon the desired implementation.
  • each block may represent a module, a segment, a portion of a manufacturing or operation process, or a portion of program code, which includes one or more instructions executable by a processor for implementing specific logical functions or steps in the process.
  • the program code may be stored on any type of computer readable medium, for example, such as a storage device including a disk or hard drive.
  • the computer readable medium may include a non-transitory computer readable medium, for example, such as computer-readable media that stores data for short periods of time like register memory, processor cache and Random Access Memory (RAM).
  • the computer readable medium may also include non-transitory media, such as secondary or persistent long term storage, like read only memory (ROM), optical or magnetic disks, compact-disc read only memory (CD-ROM), for example.
  • the computer readable media may also be any other volatile or non-volatile storage systems.
  • the computer readable medium may be considered a computer readable storage medium, for example, or a tangible storage device.
  • each block in Figure 5 may represent circuitry that is wired to perform the specific logical functions in the process.
  • method 500 involves obtaining an indication of a point in an environment of an autonomous vehicle.
  • LIDAR sensor 206 may be operated to generate a plurality of LIDAR points related to the location, shape, size or other information about one or more objects in the environment of LIDAR sensor 206 and the indication of the point in the environment could be determined from one or more of the LIDAR points.
  • method 500 involves obtaining information about the location and motion of the autonomous vehicle within the environment. For example, information from a GPS, GLONASS, or other navigational positional receiver, LIDAR data, wheel speed/rotation data, inertial data from one or more accelerometers and/or gyroscopes, magnetic field data from a magnetometer, image data about an environment from one or more cameras, or other information related to the location of the autonomous vehicle could be used to generate information about the location and motion of the autonomous vehicle within the environment. This could include performing sensor fusion, applying a filter (e.g., a Kalman filter) to estimates of the motion/location of the autonomous vehicle, or other processes to determine an accurate estimate of the location and motion of the autonomous vehicle within the environment.
  • the method 500 involves obtaining an image of a portion of the environment of the autonomous vehicle.
  • the image includes a plurality of rows of pixels.
  • the image was generated by a camera operating in a rolling shutter mode such that each row of pixels represents light sensed by the camera during a respective exposure time period.
  • the method 500 involves mapping the point in the environment to a location within the image.
  • Mapping the point in the environment to a location within the image involves, at block 510, determining an initial estimated time, T0, that the camera sensed light from the point in the environment. This could include setting the initial estimated time according to a characteristic time of generation of the image (e.g., a time at which the rolling-shutter process began or ended, a time at which a ‘middle’ set of pixels of the image was exposed).
  • Mapping the point in the environment to a location within the image also involves, at block 512, determining N updated estimated times, wherein N ≥ 1.
  • Mapping the point in the environment to a location within the image also involves, at block 514, determining, based on the updated estimated time, TN, a location within the image that corresponds to the point in the environment.
  • Determining N updated estimated times involves determining each updated estimated time, Ti, by an update process that includes, at block 513a, determining, based on the information about the location and motion of the autonomous vehicle, a pose of the camera at the estimated time, Ti-1.
  • the update process additionally includes, at block 513b, based on the pose of the camera at the estimated time, Ti-1, projecting the point in the environment to a projected location within the image.
  • the update process also includes, at block 513c, evaluating a cost function that includes (i) a term based on the estimated time, Ti-1, and (ii) a term based on a mapping from the projected location to a time that the camera sensed light represented at the projected location within the image. These terms could be defined with respect to the same or different ‘zero’ times or epochs.
  • the term based on the estimated time could be defined relative to a time of the information about the location and motion of the autonomous vehicle and the term based on the mapping from the projected location to the time that the camera sensed light represented at the projected location within the image could be defined relative to a characteristic time of generation of the image (e.g., a time at which the rolling-shutter process began or ended, a time at which a ‘middle’ set of pixels of the image was exposed).
  • the cost function could additionally include an offset term to compensate for such differences.
  • the update process yet further includes, at block 513d, determining the updated estimated time, Ti, based on the evaluated cost function.
  • the update process could be performed a pre-determined number of times, e.g., two, three, or four times.
  • the update process could include performing the update process until the time estimate converges, e.g., until an absolute or relative change in the magnitude of the time estimate, from one update to the next, is less than a specified threshold level.
  • method 500 involves determining a three-dimensional (3D) representation of the environment based on position data (e.g., data from a LIDAR sensor) and image information from the image (e.g., one or more pixels of the image).
  • an example system may combine LIDAR-based information (e.g., distances to one or more objects in the environment, etc.) with camera-based information (e.g., color, etc.) to generate the 3D representation.
  • Other types of representations (e.g., a 2D image, an image with tags, an enhanced image that indicates shading or texture information, etc.) are possible as well.
  • method 500 involves determining a representation of the environment based on color information indicated by the image and point location information (e.g., distance, depth, texture, reflectivity, absorbance, reflected light pulse length, etc.) indicated by, e.g., a LIDAR sensor.
  • a system of method 500 may determine depth information for one or more image pixels in the image based on the point in the environment. For instance, the system can assign a depth value for image pixels in the image. Additionally, for instance, the system can generate (e.g., via display 140) a 3D object data model (e.g., a 3D rendering) of one or more objects in the environment (e.g., colored 3D model that indicates 3D features in the environment, etc.). In another instance, an image processing system can identify and distinguish between multiple objects in the image by comparing depth information (indicated by the associated location of the point in the environment) of the respective objects. Other applications are possible. Thus, in some implementations method 500 involves mapping LIDAR data points collected using the LIDAR sensor to image pixels collected using the one or more cameras. For instance, the LIDAR data can be mapped to a coordinate system of an image output by an image sensor or camera.
  • a system of method 500 may assign colors (based on data from the one or more cameras) to the point in the environment or to other known points in the environment (e.g., to individual points in a LIDAR point cloud).
  • the example system can then generate (e.g., via display 140) a 3D rendering of the scanned environment that indicates distances to features in the environment along with color (e.g., colored point cloud, etc.) information indicated by the image sensor(s) of the one or more cameras.
  • method 500 involves mapping image pixels from the image to corresponding LIDAR or otherwise-generated location data points. For instance, the image pixel data can be mapped to a coordinate system of LIDAR data output by a LIDAR sensor.
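  • The following sketch illustrates this kind of bidirectional mapping once each LIDAR point has already been associated with a pixel: depth is copied to the image pixel and color is copied to the point; the data structures and values are illustrative assumptions:

    #include <cstdio>
    #include <vector>

    // Sketch: once a LIDAR point has been mapped to an image location, color can
    // be copied from the image to the point and depth from the point to the
    // pixel. The structures and values here are illustrative assumptions.
    struct MappedPoint {
      double x, y, z;  // point in the environment (vehicle or world frame)
      double range;    // distance reported by the LIDAR
      int u, v;        // pixel the point was mapped to
    };

    struct Rgb { unsigned char r, g, b; };

    int main() {
      const int width = 4, height = 3;  // toy image
      std::vector<Rgb> image(width * height, Rgb{128, 64, 32});
      std::vector<double> depth(width * height, 0.0);  // depth map to fill in

      const std::vector<MappedPoint> points = {{12.0, 1.5, 0.2, 12.1, 1, 1},
                                               {30.5, -4.0, 1.0, 30.8, 3, 2}};

      for (const MappedPoint& p : points) {
        const int idx = p.v * width + p.u;
        depth[idx] = p.range;            // depth for the image pixel
        const Rgb c = image[idx];        // color for the point
        std::printf("point (%.1f, %.1f, %.1f) -> pixel (%d, %d), "
                    "depth %.1f m, color (%d, %d, %d)\n",
                    p.x, p.y, p.z, p.u, p.v, p.range, c.r, c.g, c.b);
      }
      return 0;
    }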
  • the method 500 may include additional or alternative elements.
  • method 500 could include operating an autonomous vehicle (e.g., steering the vehicle, actuating a throttle of the vehicle, controlling a torque and/or rotation output of one or more wheels, motors, engines, or other elements of the vehicle) based on the mapping of the point in the environment of the vehicle to a location (e.g., a pixel) within the image. This could include using the mapping to identify and/or locate one or more objects or other contents of the environment. Additionally or alternatively, this could include using the mapping to determine a navigational plan and/or determining a command to use to operate the autonomous vehicle.
  • The following code fragments are excerpted from the incorporated "camera_model.txt" text file:

    seconds_per_row = readout_time / image_height;
    seconds_per_col = readout_time / image_height;
    const double f_u = calibration.intrinsic(0);
    const double k1 = calibration.intrinsic(4);
    const double k2 = calibration.intrinsic(5);
    const double k3 = calibration.intrinsic(6);  // same as p1 in OpenCV.
    const double k4 = calibration.intrinsic(7);  // same as p2 in OpenCV.
    const double k5 = calibration.intrinsic(8);  // same as k3 in OpenCV.
    const double min_delta2 = 1e-12 / (f_u * f_u + f_v * f_v);
    const double rd = 1.0 + r2 * k1 + r4 * k2 + r6 * k5;
    const double du = u - u_prev;
    skew_omega = SkewSymmetric(cam_omega_cam0);
    CameraModel::CameraModel(const CameraCalibration& calibration)
    CameraModel::~CameraModel() {}
    const double readout_time = camera_image.camera_readout_done_time() -
    const double t_principal_point = GetPixelTimestamp(
    range_in_pixel_space = calibration_.width()
    range_in_pixel_space = calibration_.height()
    rolling_shutter_state->skew_omega = SkewSymmetric(cam_omega_cam0);
    cam_p_f(t) = projection(n_p_f, n_tfm_cam(t))
    const double max_u = static_cast<double>(calibration_.width());
    const double max_v = static_cast<double>(calibration_.height());
    const double f_u = calibration_.intrinsic(0);
    const double f_v = calibration_.intrinsic(1);
    const double c_u = calibration_.intrinsic(2);
    const double c_v = calibration_.intrinsic(3);
    const double k1 = calibration_.intrinsic(4);
    const double k2 = calibration_.intrinsic(5);
    const double k3 = calibration_.intrinsic(6);  // same as p1 in OpenCV.
    const double k4 = calibration_.intrinsic(7);  // same as p2 in OpenCV.
    const double k5 = calibration_.intrinsic(8);  // same as k3 in OpenCV.
    const double r2 = u_n * u_n + v_n * v_n;
    const double r_d = 1.0 + k1 * r2 + k2 * r4 + k5 * r6;
    std::hypot(calibration_.width(), calibration_.height());
    const double r2_sqrt_rcp = 1.0 / std::sqrt(r2);
    *v_d = v_n * r2_sqrt_rcp * roi_clipping_radius + c_v;
    u_nd = u_n * r_d + 2.0 * k3 * u_n * v_n + k4 * (r2 + 2.0 * u_n * u_n);
    v_nd = v_n * r_d + k3 * (r2 + 2.0 * v_n * v_n) + 2.0 * k4 * u_n * v_n;
    *u_d = u_nd * f_u + c_u;
    namespace waymo::open_dataset

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

An improved, efficient method for mapping world points from an environment (e.g., points generated by a LIDAR sensor of an autonomous vehicle) to locations (e.g., pixels) within rolling-shutter images taken of the environment is provided. This improved method allows for accurate localization of the world point in a rolling-shutter image via an iterative process that converges in very few iterations. The method poses the localization process as an iterative process for determining the time, within the rolling-shutter exposure period of the image, at which the world point was imaged by the camera. The method reduces the number of times the world point is projected into the normalized space of the camera image, often converging in three or fewer iterations.

Description

EFFICIENT ALGORITHM FOR PROJECTING WORLD POINTS
TO A ROLLING SHUTTER IMAGE
INCORPORATION BY REFERENCE OF TEXT FILE
[0001] This application incorporates by reference, in its entirety, the contents of the
ASCII-formatted text file named "camera_model.txt" (size 48,446 bytes, created on Dec. 4, 2019), submitted electronically on Dec. 4, 2019 along with the instantly filed application.
BACKGROUND
[0002] Active sensors, such as light detection and ranging (LIDAR) sensors, radio detection and ranging (RADAR) sensors, and sound navigation and ranging (SONAR) sensors, among others, can scan an environment by emitting signals toward the environment and detecting reflections of the emitted signals. Passive sensors, such as image sensors (e.g., cameras) and microphones among others, can detect signals originating from sources in the environment.
[0003] An example LIDAR sensor can determine distances to environmental features while scanning through a scene to assemble a “point cloud” indicative of reflective surfaces.
Individual points in the point cloud can be determined, for example, by transmitting a laser pulse and detecting a returning pulse, if any, reflected from an object in the environment, and then determining a distance to the object according to a time delay between the transmission of the pulse and the reception of its reflection. Thus, a three-dimensional map of points indicative of locations of reflective features in the environment can be generated.
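As an illustration of this time-of-flight relationship, the following sketch computes a distance from an assumed round-trip delay; the 200 ns value is only an example:

    #include <cstdio>

    // Sketch: recovering the distance to a reflective surface from the
    // round-trip time of a LIDAR pulse. The 200 ns delay is an illustrative value.
    int main() {
      const double kSpeedOfLight = 299792458.0;  // m/s
      const double round_trip_s = 200e-9;        // time between emission and detection
      const double distance_m = kSpeedOfLight * round_trip_s / 2.0;
      std::printf("round trip %.0f ns -> distance %.1f m\n",
                  round_trip_s * 1e9, distance_m);
      return 0;
    }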
[0004] An example image sensor can capture an image of a scene viewable to the image sensor. For instance, the image sensor may include an array of complementary metal oxide semiconductor (CMOS) active pixel sensors, or other types of light sensors. Each CMOS sensor may receive a portion of light from the scene incident on the array. Each CMOS sensor may then output a measure of the amount of light incident on the CMOS sensor during an exposure time when the CMOS sensor is exposed to the light from the scene. With this arrangement, an image of the scene can be generated, where each pixel in the image indicates one or more values (e.g., colors, etc.) based on outputs from the array of CMOS sensors.
SUMMARY
[0005] In one example, a method includes: (i) obtaining an indication of a point in an environment of an autonomous vehicle; (ii) obtaining information about the location and motion of the autonomous vehicle within the environment; (iii) obtaining an image of a portion of the environment of the autonomous vehicle, wherein the image comprises a plurality of rows of pixels, and wherein the image was generated by a camera operating in a rolling shutter mode such that each row of pixels represents light sensed by the camera during a respective exposure time period; and (iv) mapping the point in the environment to a location within the image. Mapping the point in the environment to a location within the image includes: (a) determining an initial estimated time, T0, that the camera sensed light from the point in the environment; (b) determining N updated estimated times, Ti, wherein N ≥ 1; and (c) determining, based on the updated estimated time, TN, a location within the image that corresponds to the point in the environment. Each updated estimated time, Ti, is determined by an update process including: (1) determining, based on the information about the location and motion of the autonomous vehicle, a pose of the camera at the estimated time, Ti-1, (2) based on the pose of the camera at the estimated time, Ti-1, projecting the point in the environment to a projected location within the image, (3) evaluating a cost function that includes a term based on the estimated time, Ti-1, and a term based on a mapping from the projected location to a time that the camera sensed light represented at the projected location within the image, and (4) determining the updated estimated time, Ti, based on the evaluated cost function.
[0006] In another example, a non-transitory computer readable medium has stored therein instructions executable by a computing device to cause the computing device to perform operations. The operations include: (i) obtaining an indication of a point in an environment of an autonomous vehicle; (ii) obtaining information about the location and motion of the autonomous vehicle within the environment; (iii) obtaining an image of a portion of the environment of the autonomous vehicle, wherein the image comprises a plurality of rows of pixels, and wherein the image was generated by a camera operating in a rolling shutter mode such that each row of pixels represents light sensed by the camera during a respective exposure time period; and (iv) mapping the point in the environment to a location within the image. Mapping the point in the environment to a location within the image includes: (a) determining an initial estimated time, T0, that the camera sensed light from the point in the environment; (b) determining N updated estimated times, Ti, wherein N ≥ 1; and (c) determining, based on the updated estimated time, TN, a location within the image that corresponds to the point in the environment. Each updated estimated time, Ti, is determined by an update process including: (1) determining, based on the information about the location and motion of the autonomous vehicle, a pose of the camera at the estimated time, Ti-1, (2) based on the pose of the camera at the estimated time, Ti-1, projecting the point in the environment to a projected location within the image, (3) evaluating a cost function that includes a term based on the estimated time, Ti-1, and a term based on a mapping from the projected location to a time that the camera sensed light represented at the projected location within the image, and (4) determining the updated estimated time, Ti, based on the evaluated cost function.
[0007] In yet another example, a system includes: (i) a light detection and ranging (LIDAR) sensor coupled to a vehicle; (ii) a camera coupled to the vehicle, wherein the camera is configured to obtain image data indicative of the environment of the vehicle; and (iii) a controller, wherein the controller is operably coupled to the LIDAR sensor and the camera. The controller includes one or more processors configured to perform operations including: (a) operating the LIDAR sensor to generate a plurality of LIDAR data points indicative of distances to one or more objects in the environment of the vehicle; (b) generating an indication of a point in the environment based on at least one LIDAR data point of the plurality of LIDAR data points; (c) operating the camera in a rolling shutter mode to generate an image of a portion of the environment of the vehicle, wherein the image comprises a plurality of rows of pixels, and wherein each row of pixels represents light sensed by the camera during a respective exposure time period; (d) obtaining information about the location and motion of the autonomous vehicle within the environment; and (e) mapping the point in the environment to a location within the image. Mapping the point in the environment to a location within the image includes: (1) determining an initial estimated time, T0, that the camera sensed light from the point in the environment; (2) determining N updated estimated times, Ti, wherein N ≥ 1; and (3) determining, based on the updated estimated time, TN, a location within the image that corresponds to the point in the environment. Each updated estimated time, Ti, is determined by an update process including: (I) determining, based on the information about the location and motion of the autonomous vehicle, a pose of the camera at the estimated time, Ti-1, (II) based on the pose of the camera at the estimated time, Ti-1, projecting the point in the environment to a projected location within the image, (III) evaluating a cost function that includes a term based on the estimated time, Ti-1, and a term based on a mapping from the projected location to a time that the camera sensed light represented at the projected location within the image, and (IV) determining the updated estimated time, Ti, based on the evaluated cost function.
[0008] These as well as other aspects, advantages, and alternatives will become apparent to those of ordinary skill in the art by reading the following detailed description with reference where appropriate to the accompanying drawings. Further, it should be understood that the description provided in this summary section and elsewhere in this document is intended to illustrate the claimed subject matter by way of example and not by way of limitation.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] Figure 1 is a simplified block diagram of a system, according to example embodiments.
[0010] Figure 2A illustrates a device that includes a rotating LIDAR sensor and a rolling shutter camera arrangement, according to example embodiments.
[0011] Figure 2B is a cross-section view of the rolling shutter camera arrangement of
Figure 2A.
[0012] Figure 2C is a conceptual illustration of an operation of the device of Figure 2A.
[0013] Figure 2D illustrates a top view of the device of Figure 2A.
[0014] Figure 2E illustrates another top view of the device of Figure 2A.
[0015] Figure 3 illustrates a cross-section view of another rolling shutter camera arrangement, according to example embodiments.
[0016] Figure 4A illustrates the motion of a camera relative to a point in an environment.
[0017] Figure 4B illustrates the projection of the point illustrated in Figure 4A onto an image generated by the camera illustrated in Figure 4A.
[0018] Figure 5 is a flowchart of a method, according to example embodiments.
DETAILED DESCRIPTION
[0019] Exemplary implementations are described herein. It should be understood that the word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any implementation or feature described herein as “exemplary” or “illustrative” is not necessarily to be construed as preferred or advantageous over other implementations or features. In the figures, similar symbols typically identify similar components, unless context dictates otherwise. The example implementations described herein are not meant to be limiting. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations.
I. Example Sensors
[0020] Although example sensors described herein include LIDAR sensors and cameras (or image sensors), other types of sensors are possible as well. A non-exhaustive list of example sensors that can be alternatively employed herein without departing from the scope of the present disclosure includes RADAR sensors, SONAR sensors, sound sensors (e.g., microphones, etc.), motion sensors, temperature sensors, pressure sensors, etc.
[0021] To that end, example sensors herein may include active sensors that emit a signal (e.g., a sequence of pulses or any other modulated signal) based on modulated power provided to the sensor, and then detects reflections of the emitted signal from objects in the surrounding environment. Alternatively or additionally, example sensors herein may include passive sensors (e.g., cameras, microphones, antennas, pressure sensors, etc.) that detect external signals (e.g., background signals, etc.) originating from external source(s) in the environment.
[0022] Referring now to the figures, Figure 1 is a simplified block diagram of a system 100 that includes sensors (e.g., synchronized sensors), according to an example embodiment. As shown, system 100 includes a power supply arrangement 102, a controller 104, one or more sensors 106, one or more sensors 108, a rotating platform 110, one or more actuators 112, a stationary platform 114, a rotary link 116, a housing 118, and a display 140.
[0023] In other embodiments, system 100 may include more, fewer, or different components. Additionally, the components shown may be combined or divided in any number of ways. For example, sensor(s) 108 can be implemented as a single physical component (e.g., camera ring). Alternatively, for example, sensor(s) 108 can be implemented as an arrangement of separate physical components. Other examples are possible. Thus, the functional blocks of Figure 1 are illustrated as shown only for convenience in description. Other example components, arrangements, and/or configurations are possible as well without departing from the scope of the present disclosure.
[0024] Power supply arrangement 102 may be configured to supply, receive, and/or distribute power to various components of system 100. To that end, power supply arrangement 102 may include or otherwise take the form of a power source (e.g., battery cells, etc.) disposed within system 100 and connected to various components of system 100 in any feasible manner, so as to supply power to those components. Additionally or alternatively, power supply arrangement 102 may include or otherwise take the form of a power adapter configured to receive power from one or more external power sources (e.g., from a power source arranged in a vehicle to which system 100 is mounted, etc.) and to transmit the received power to various components of system 100.
[0025] Controller 104 may include one or more electronic components and/or systems arranged to facilitate certain operations of system 100. Controller 104 may be disposed within system 100 in any feasible manner. In one embodiment, controller 104 may be disposed, at least partially, within a central cavity region of rotary link 116. In another embodiment, one or more functions of controller 104 can be alternatively performed by one or more physically separate controllers that are each disposed within a respective component (e.g., sensor(s) 106, 108, etc.) of system 100.
[0026] In some examples, controller 104 may include or otherwise be coupled to wiring used for transfer of control signals to various components of system 100 and/or for transfer of data from various components of system 100 to controller 104. Generally, the data that controller 104 receives may include sensor data based on detections of light by LIDAR 106 and/or camera(s) 108, among other possibilities. Moreover, the control signals sent by controller 104 may operate various components of system 100, such as by controlling emission and/or detection of light or other signal by sensor(s) 106 (e.g., LIDAR, etc.), controlling image pixel capture rate or times via a camera (e.g., included in sensor(s) 108), and/or controlling actuator(s) 112 to rotate rotating platform 110, among other possibilities.
[0027] To that end, in some examples, controller 104 may include one or more processors, data storage, and program instructions (stored in the data storage) executable by the one or more processors to cause system 100 to perform the various operations described herein. In some instances, controller 104 may communicate with an external controller or the like (e.g., a computing system arranged in a vehicle, robot, or other mechanical device to which system 100 is mounted) so as to help facilitate transfer of control signals and/or data between the external controller and the various components of system 100.
[0028] Additionally or alternatively, in some examples, controller 104 may include circuitry wired to perform the various functions described herein. Additionally or alternatively, in some examples, controller 104 may include one or more special purpose processors, servos, or other types of controllers. For example, controller 104 may include a proportional-integral- derivative (PID) controller or other control loop feedback apparatus that operates actuator(s) 112 to modulate rotation of rotating platform 110 according to a particular frequency or phase. Other examples are possible as well.
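By way of a non-limiting illustration of the control-loop behavior described above, the following sketch implements a generic PID update that nudges a platform's rotation rate toward a target frequency. It is not part of the original disclosure; the gains, update interval, and function names are assumptions chosen only for illustration.

```python
# Minimal sketch of a PID control loop regulating a platform's rotation rate.
# All gains, names, and the 10 ms update interval are illustrative assumptions,
# not values taken from this disclosure.

def pid_step(target_hz, measured_hz, state, kp=0.8, ki=0.2, kd=0.05, dt=0.01):
    """Return an actuator command nudging the rotation rate toward target_hz."""
    error = target_hz - measured_hz
    state["integral"] += error * dt
    derivative = (error - state["previous_error"]) / dt
    state["previous_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

state = {"integral": 0.0, "previous_error": 0.0}
command = pid_step(target_hz=10.0, measured_hz=9.7, state=state)
print(f"actuator command: {command:+.3f}")
```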
[0029] Sensors 106 and 108 can optionally include one or more sensors, such as LIDARs, cameras, gyroscopes, accelerometers, encoders, microphones, RADARs, SONARs, thermometers, etc., that scan a surrounding environment of system 100.
[0030] Sensor(s) 106 may include any device configured to scan a surrounding environment by emitting a signal and detecting reflections of the emitted signal. For instance, sensor(s) 106 may include any type of active sensor. To that end, as shown, sensor 106 includes a transmitter 120 and a receiver 122. In some implementations, sensor 106 may also include one or more optical elements 124.
[0031] Transmitter 120 may be configured to transmit a signal toward an environment of system 100.

[0032] In a first example, where sensor 106 is configured as a LIDAR sensor, transmitter 120 may include one or more light sources (not shown) that emit one or more light beams and/or pulses having wavelengths within a wavelength range. The wavelength range could, for example, be in the ultraviolet, visible, and/or infrared portions of the electromagnetic spectrum depending on the configuration of the light sources. In some examples, the wavelength range can be a narrow wavelength range, such as provided by lasers and/or some light emitting diodes. In some examples, the light source(s) in transmitter 120 may include laser diodes, diode bars, light emitting diodes (LEDs), vertical cavity surface emitting lasers (VCSELs), organic light emitting diodes (OLEDs), polymer light emitting diodes (PLEDs), light emitting polymers (LEPs), liquid crystal displays (LCDs), microelectromechanical systems (MEMS), fiber lasers, and/or any other device configured to selectively transmit, reflect, and/or emit light to provide a plurality of emitted light beams and/or pulses.
[0033] In a second example, where sensor 106 is configured as an active infrared (IR) camera, transmitter 120 may be configured to emit IR radiation to illuminate a scene. To that end, transmitter 120 may include any type of device (e.g., light source, etc.) configured to provide the IR radiation.
[0034] In a third example, where sensor 106 is configured as a RADAR sensor, transmitter 120 may include one or more antennas configured to emit a modulated radio-frequency (RF) signal toward an environment of system 100.
[0035] In a fourth example, where sensor 106 is configured as a SONAR sensor, transmitter 120 may include one or more acoustic transducers, such as piezoelectric transducers, magnetostrictive transducers, electrostatic transducers, etc., configured to emit a modulated sound signal toward an environment of system 100. In some implementations, the acoustic transducers can be configured to emit sound signals within a particular wavelength range (e.g., infrasonic, ultrasonic, etc.). Other examples are possible as well.
[0036] Receiver 122 may include one or more detectors configured to detect reflections of the signal emitted by transmitter 120.
[0037] In a first example, where sensor 106 is configured as a RADAR sensor, receiver 122 may include one or more antennas configured to detect reflections of the RF signal transmitted by transmitter 120. To that end, in some implementations, the one or more antennas of transmitter 120 and receiver 122 can be physically implemented as the same physical antenna structures.
[0038] In a second example, where sensor 106 is configured as a SONAR sensor, receiver 122 may include one or more sound sensors (e.g., microphones, etc.) that are configured to detect reflections of the sound signals emitted by transmitter 120. To that end, in some implementations, the one or more components of transmitter 120 and receiver 122 can be physically implemented as the same physical structures (e.g., the same piezoelectric transducer element).
[0039] In a third example, where sensor 106 is configured as an active IR camera, receiver 122 may include one or more light detectors (e.g., active pixel sensors, etc.) that are configured to detect a source wavelength of IR light transmitted by transmitter 120 and reflected off a scene toward receiver 122.
[0040] In a fourth example, where sensor 106 is configured as a LIDAR sensor, receiver 122 may include one or more light detectors (e.g., photodiodes, avalanche photodiodes, etc.) that are arranged to intercept and detect reflections of the light pulses emitted by transmitter 120 and reflected from one or more objects in a surrounding environment of system 100. To that end, receiver 122 may be configured to detect light having wavelengths in the same wavelength range as the light emitted by transmitter 120. In this way, for instance, sensor 106 (e.g., LIDAR) may distinguish reflected light pulses originated by transmitter 120 from other light originating from external light sources in the environment.
[0041] In some instances, receiver 122 may include a photodetector array, which may include one or more detectors each configured to convert detected light (e.g., in the wavelength range of light emitted by transmitter 120) into an electrical signal indicative of the detected light. In practice, such a photodetector array could be arranged in one of various ways. For instance, the detectors can be disposed on one or more substrates (e.g., printed circuit boards (PCBs), flexible PCBs, etc.) and arranged to detect incoming light. Also, such a photodetector array could include any feasible number of detectors aligned in any feasible manner. Additionally, the detectors in the array may take various forms. For example, the detectors may take the form of photodiodes, avalanche photodiodes (e.g., Geiger mode and/or linear mode avalanche photodiodes), silicon photomultipliers (SiPMs), phototransistors, cameras, active pixel sensors (APS), charge coupled devices (CCD), cryogenic detectors, and/or any other sensor of light configured to receive focused light having wavelengths in the wavelength range of the emitted light.
[0042] In some implementations, sensor 106 (e.g., in a LIDAR configuration) can select or adjust a horizontal scanning resolution by changing a rate of rotation of the LIDAR and/or adjusting a pulse rate of light pulses emitted by transmitter 120. As a specific example, transmitter 120 can be configured to emit light pulses at a pulse rate of 15,650 light pulses per second. In this example, LIDAR 106 may be configured to rotate at 10 Hz (i.e., ten complete 360° rotations per second). As such, receiver 122 can detect light with a 0.23° horizontal angular resolution. Further, the horizontal angular resolution of 0.23° can be adjusted by changing the rate of rotation of LIDAR 106 or by adjusting the pulse rate. For instance, if LIDAR 106 is instead rotated at 20 Hz, the horizontal angular resolution may become 0.46°. Alternatively, if transmitter 120 emits the light pulses at a rate of 31,300 light pulses per second while maintaining the rate of rotation of 10 Hz, then the horizontal angular resolution may become 0.115°. Other examples are possible as well. Further, in some examples, LIDAR 106 can be alternatively configured to scan a particular range of views within less than a complete 360° rotation of LIDAR 106.
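The arithmetic in the preceding paragraph reduces to dividing the 360° sweep by the number of pulses emitted per rotation. The following minimal sketch, which is not part of the original disclosure (the function name is illustrative only), reproduces the quoted figures.

```python
# Horizontal angular resolution = degrees per rotation / pulses per rotation.
def horizontal_resolution_deg(pulse_rate_hz, rotation_rate_hz):
    pulses_per_rotation = pulse_rate_hz / rotation_rate_hz
    return 360.0 / pulses_per_rotation

print(horizontal_resolution_deg(15_650, 10))   # ~0.23 degrees
print(horizontal_resolution_deg(15_650, 20))   # ~0.46 degrees
print(horizontal_resolution_deg(31_300, 10))   # ~0.115 degrees
```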
[0043] Optical element(s) 124 can be optionally included in or otherwise coupled to transmitter 120 and/or receiver 122. In one example (e.g., where sensor 106 includes a LIDAR sensor), optical element(s) 124 can be arranged to direct light from a light source in transmitter 120 toward the environment. In another example, optical element(s) 124 can be arranged to focus and/or guide light from the environment toward receiver 122. As such, optical element(s) 124 may include any feasible combination of mirror(s), waveguide(s), light filters, lens(es), or any other optical components arranged to guide propagation of light through physical space and/or adjust certain light characteristics. For instance, optical elements 124 may include a light filter arranged to reduce or prevent light having wavelengths outside the wavelength range of the light emitted by transmitter 120 from propagating toward receiver 122. With such an arrangement, for instance, the light filter can reduce noise due to background light propagating from the scanned environment and originating from an external light source different than light sources of transmitter 120.
[0044] Sensor(s) 108 may include any type of sensor configured to scan the surrounding environment. As shown, sensors 108 include an array of sensing elements 128. Further, as shown, sensors 108 can optionally include one or more optical elements 130.
[0045] In some examples, sensor(s) 108 may include active sensors (e.g., LIDAR, RADAR, SONAR, etc.) that transmit signals and detect reflections thereof. Thus, although not shown, sensors 108 may include a transmitter and a receiver that are similar to, respectively, transmitter 120 and receiver 122. In other examples, sensor(s) 108 may include passive sensors (e.g., microphones, cameras, image sensors, thermometers, etc.) that detect external signals originating from one or more external sources.
[0046] In a first example, where sensor 108 is configured as a sound sensor, sensing elements 128 may include an array of microphones that each detect sounds (e.g., external signals) incident on the respective microphones in the array.
[0047] In a second example, where sensor(s) 108 are configured as one or more cameras, the camera(s) may include any camera (e.g., a still camera, a video camera, etc.) configured to capture images of the environment in which system 100 is located. For example, a camera of sensor 108 may include any imaging device that detects and provides data indicative of an image. For instance, sensing elements 128 may include one or more arrangements of light sensing elements that each provide a measure of light incident thereon. To that end, sensing elements 128 may include charge-coupled devices (CCDs), active pixel sensors, complementary metal-oxide-semiconductor (CMOS) photodetectors, N-type metal-oxide-semiconductor (NMOS) photodetectors, among other possibilities.
[0048] Further, in some examples, data from sensing elements 128 can be combined according to the arrangement of the sensing elements 128 to generate an image. In one example, data from a two-dimensional (2D) array of sensing elements may correspond to a 2D array of image pixels in the image. In another example, a 3D arrangement of sensing elements (e.g., sensing elements arranged along a curved surface) can be similarly used to generate a 2D array of image pixels in the image. Other examples are possible as well.
[0049] In some examples, a sensing element can optionally include multiple adjacent light detectors (or detectors of other types of signals), where each detector is configured to detect light (or other signal) having a particular wavelength or wavelength range. For instance, an image pixel may indicate color information (e.g., red-green-blue or RGB) based on a combination of data from a first detector that detects an intensity of red light, a second detector that detects an intensity of green light, and a third detector that detects an intensity of blue light. Other examples are possible as well.
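As a small illustration of combining the three detector outputs described above into a single color pixel, the sketch below builds an RGB tuple from assumed red, green, and blue intensity readings. It is not part of the original disclosure; the 8-bit value range and example readings are assumptions.

```python
# Illustrative only: combine readings from three adjacent detectors,
# each sensitive to one color band, into a single RGB image pixel.
def rgb_pixel(red_intensity, green_intensity, blue_intensity, max_value=255):
    def clamp(value):
        return max(0, min(max_value, int(round(value))))
    return (clamp(red_intensity), clamp(green_intensity), clamp(blue_intensity))

print(rgb_pixel(201.4, 88.0, 17.9))  # -> (201, 88, 18)
```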
[0050] In one embodiment, sensor(s) 108 may be configured to detect visible light propagating from the scene. Further, in this embodiment, receiver 122 of sensor 106 (e.g., LIDAR receiver) may be configured to detect invisible light (e.g., infrared, etc.) within a wavelength range of light emitted by transmitter 120. In this embodiment, system 100 (or controller 104) can then combine data from sensor 106 (e.g., LIDAR) with data from sensor 108 (e.g., camera) to generate a colored three-dimensional (3D) representation (e.g., point cloud) of the scanned environment.
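One way to picture the combination described above is a routine that projects each LIDAR point into the camera image and samples the color at the resulting pixel. The following sketch is not the method of this disclosure; the pinhole intrinsic matrix K, the camera pose (R, t), the field names, and the array shapes are placeholder assumptions used only for illustration.

```python
import numpy as np

# Illustrative sketch: attach camera colors to LIDAR points by projecting each
# 3D point into the image with an assumed pinhole model. The intrinsics K and
# the camera pose (R, t) are placeholders, not values from this disclosure.
def colorize_points(points_xyz, image_rgb, K, R, t):
    colored = []
    for p in points_xyz:
        cam = R @ p + t                      # world -> camera coordinates
        if cam[2] <= 0:                      # point is behind the camera
            continue
        u, v, w = K @ cam
        col, row = int(u / w), int(v / w)    # pixel coordinates
        height, width, _ = image_rgb.shape
        if 0 <= row < height and 0 <= col < width:
            colored.append((*p, *image_rgb[row, col]))  # (x, y, z, r, g, b)
    return colored

K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
image = np.zeros((720, 1280, 3), dtype=np.uint8)
print(len(colorize_points([np.array([1.0, 0.5, 10.0])], image, K, R, t)))
```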
[0051] In some examples, sensor(s) 108 may comprise a plurality of cameras (e.g., a camera ring) disposed in a circular arrangement around an axis of rotation of sensor 106 (e.g., LIDAR). For example, a first camera may be arranged to image a first field-of-view (FOV) of the environment that at least partially overlaps a range of pointing directions of sensor 106 as sensor 106 rotates about the axis (or as the signals transmitted by transmitter 120 are otherwise steered to different pointing directions about the axis). Further, a second camera adjacent to and/or overlapping the first camera may image a second FOV adjacent to the first FOV of the first camera, and so on. In this way, for instance, system 100 may be configured to capture a sequence of images of the respective FOVs simultaneously (and/or synchronously or according to some other timing) with a scan of the environment by sensor 106 as sensor 106 rotates about the axis.
[0052] In some examples, sensor(s) 108 may be configured to operate in a rolling shutter mode.
[0053] In a first example, where sensor(s) 108 include a microphone array, each output from a microphone in the array may be associated with a respective exposure time period of a corresponding sensing element (e.g., microphone) to external sounds incident on sensor 108.
[0054] In a second example, where sensor(s) 108 include one or more cameras, each pixel or group of pixels output by the camera(s) may be associated with a respective exposure time period of a corresponding sensing element or group of sensing elements to external light. By way of example, camera(s) 108 may together provide an array of adjacent rows of sensing elements 128. Further, camera(s) 108 can be configured to output a sequence of image pixels that correspond to measurements of the external light by corresponding sensing elements in the array. For example, camera(s) 108 may output a first row of image pixels based on data from a first row of sensing elements in the array, followed by a second row of image pixels based on data from a second adjacent row of sensing elements in the array, and so on.
[0055] In this way, the first image pixel row may be associated with a first exposure time period during which the first row of sensing elements was exposed to light, the second image pixel row may be associated with a second exposure time period during which the second adjacent row of sensing elements was exposed to light, etc. The first exposure time period may begin before the second exposure time period begins. For instance, after a time delay from a start time of the first exposure time period (and optionally before the first exposure time period lapses), camera(s) 108 may start exposing the second adjacent row of sensing elements. Additionally, the first exposure time period may end before the second exposure time period ends. For instance, controller 104 may read outputs from the first row of sensing elements after the first exposure time period ends and while the second row of sensing elements is still being exposed to the external light, and then read outputs from the second row of sensing elements after the second exposure period ends and while a third row of sensing elements is still being exposed to the external light, and so on.
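The staggered exposure schedule described in the two paragraphs above can be made concrete with a short sketch. The sketch is not part of the original disclosure; the 10 ms exposure time, 1 ms row-to-row delay, and function name are assumed values chosen only to show how the exposure windows of adjacent rows overlap.

```python
# Illustrative rolling-shutter timing: each row starts exposing a fixed delay
# after the previous row, so adjacent exposure windows overlap. The numbers
# are assumptions, not parameters from this disclosure.
def row_exposure_windows(num_rows, exposure_ms=10.0, row_delay_ms=1.0):
    return [(r * row_delay_ms, r * row_delay_ms + exposure_ms)
            for r in range(num_rows)]

for row, (start, end) in enumerate(row_exposure_windows(4)):
    print(f"row {row}: exposed from t={start:.1f} ms to t={end:.1f} ms")
```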
[0056] In some examples, where sensor 108 includes an image sensor, system 100 may be configured to select the order in which the sequence of image pixels are obtained from sensing elements 128 in the rolling shutter mode based on an order in which transmitter 120 is emitting light pulses (or other signals). For example, a given row of sensing elements in the array of sensing elements 128 may be aligned (e.g., parallel, etc.) with the axis of rotation of a LIDAR (e.g., sensor 106). For instance, if the axis of rotation of the LIDAR is a vertical axis, then the given row may correspond to a vertical row of sensing elements (e.g., vertical linear arrangement parallel to the axis of rotation of the LIDAR). Further, transmitter 120 may be configured to output a plurality of light beams in an arrangement of one or more vertical lines repeatedly as the LIDAR (e.g., sensor 106) rotates about the axis. As such, for example, sensor(s) 108 (e.g., camera(s)) may output a first row of image pixels using a first row of sensing elements that are arranged similarly (e.g., vertically, etc.) to the arrangement of the plurality of light beams emitted by transmitter 120. Next, camera(s) 108 may then output a second row of image pixels using a second adjacent row of sensing elements in the direction of the rotation of the LIDAR (or other sensor 106). Thus, for instance, the second row of image pixels may be aligned with a second vertical line of light beams emitted by transmitter 120 after sensor 106 rotates toward the second row of sensing elements, and so on.
[0057] By scanning vertical rows of sensing elements one after another, for instance, the sequence of image pixels obtained from camera(s) 108 may include a sufficient number of pixels that were captured at times (and from viewing directions) that are similar to the times and directions of LIDAR light pulses (or other signals) emitted by transmitter 120 (e.g., as transmitter 120 rotates about a vertical axis). Whereas, for instance, if the camera(s) (e.g., sensor(s) 108) instead captured the sequence of image pixels using a first horizontal row of sensing elements followed by a second horizontal row of sensing elements and so on, then fewer image pixels may be captured at times (and from viewing directions) that are similar to the times and directions of the LIDAR light pulses.
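As a rough illustration of the ordering described in the preceding paragraphs, the sketch below selects the readout order of vertical rows (columns) so that it follows the LIDAR's direction of rotation. It is not part of the original disclosure; the column count and the rotation-direction flag are assumptions chosen only for illustration.

```python
# Illustrative only: read out vertical sensing-element columns in the same
# angular direction that the LIDAR sweeps, so each column's capture time is
# close to the time the LIDAR points at the matching viewing direction.
def column_readout_order(num_columns, lidar_sweeps_left_to_right=True):
    order = list(range(num_columns))
    return order if lidar_sweeps_left_to_right else list(reversed(order))

print(column_readout_order(6, lidar_sweeps_left_to_right=True))   # [0, 1, ..., 5]
print(column_readout_order(6, lidar_sweeps_left_to_right=False))  # [5, 4, ..., 0]
```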
[0058] Optical element(s) 130 may include any combination of optical components such as lens(es), mirror(s), waveguide(s), light filter(s), or any other type of optical component similarly to optical element(s) 124. Further, optical elements 130 can be arranged to focus, direct, and/or adjust light characteristics of incident light for propagation toward sensing elements 128. Further, where sensor(s) 108 include a plurality of cameras for instance, optical element(s) 130 may include a plurality of respective camera lenses that focus external light onto respective image sensors of the cameras.
[0059] In some examples, optical element(s) 130 may include one or more light filters that selectively transmit particular wavelengths of light toward one or more particular sensing elements of sensor 108.

[0060] In a first example, optical element(s) 130 may include one or more light filters that attenuate the wavelengths of light emitted by transmitter 120. With this arrangement, for instance, system 100 can reduce noise measurements (by sensing element(s) 128) that are associated with the high intensity of light pulses (or other signals) emitted by transmitter 120.
[0061] In a second example, sensor 108 may include color image sensors (e.g., Bayer filter sensor, layered pixel sensor array, etc.) configured to indicate colors of incident light. In this example, optical element(s) 130 may include a color filter array, where each color filter of the array transmits red, green, or blue light to a particular sensing element adjacent to the color filter (and attenuates other wavelengths of light). System 100 can then generate (e.g., by combining outputs from multiple sensing elements that sense light having different colors) image pixels that indicate color information (e.g., red, green, and blue, etc.).
[0062] In a third example, optical element(s) 130 may include one or more filters that attenuate wavelengths of the light (or other signal) emitted by transmitter 120 and one or more other filters that allow transmission of these wavelengths. For instance, optical element(s) 130 may include a color filter array that includes green, red, and blue light filters. In this instance, a relatively large number of the color filters can be configured to attenuate the wavelengths of the emitted light of transmitter 120 to reduce the effects of the high intensity signals emitted by transmitter 120. Further, a relatively smaller number of the color filters (e.g., one or more of the green light filters, etc.) can be configured to (at least partially) allow transmission of wavelengths of the light (or other signal) emitted by transmitter 120. With this arrangement, the high intensity light of transmitter 120 (reflecting off objects in the environment of system 100) can be used to illuminate one or more sensing elements in dark external light conditions (e.g., night time).
[0063] Rotating platform 110 may be configured to rotate about an axis. For example, sensor 106 (and/or transmitter 120 and receiver 122 thereof) may be supported (directly or indirectly) by rotating platform 110 such that each of these components moves relative to the environment in response to rotation of rotating platform 110. In particular, each of these components could be rotated (simultaneously) relative to an axis so that sensor 106 may obtain information from various directions. In some examples, the axis of rotation of rotating platform 110 is vertical and a pointing direction of sensor 106 can be adjusted horizontally by the rotation of rotating platform 110 about its vertical axis of rotation. Rotating platform 110 can be formed from any solid material suitable for supporting one or more components (e.g., sensor 106) mounted thereon.
[0064] In order to rotate platform 110 in this manner, one or more actuators 112 may actuate rotating platform 110. To that end, actuators 112 may include motors, pneumatic actuators, hydraulic pistons, and/or piezoelectric actuators, among other possibilities.
[0065] With this arrangement, controller 104 could operate actuator 112 to rotate rotating platform 110 in various ways so as to obtain information about the environment. In one example, rotating platform 110 could be rotated in either direction. In another example, rotating platform 110 may carry out complete revolutions such that sensor 106 (e.g., LIDAR) provides a 360° horizontal FOV of the environment. Moreover, rotating platform 110 may rotate at various frequencies so as to cause sensor 106 to scan the environment at various refresh rates and/or scanning resolutions.
[0066] Alternatively or additionally, system 100 may be configured to adjust the pointing direction of the emitted signal (emitted by transmitter 120) in various ways. In some examples, signal sources (e.g., light sources, antennas, acoustic transducers, etc.) of transmitter 120 can be operated according to a phased array configuration or other type of beam steering configuration.
[0067] In a first example, where sensor 106 is configured as a LIDAR sensor, light sources in transmitter 120 can be coupled to phased array optics (e.g., optical elements 124) that control the phase of light waves emitted by the light sources. For instance, controller 104 can be configured to adjust the phased array optics (e.g., phased array beam steering) to change the effective pointing direction of a light signal emitted by transmitter 120 (e.g., even if rotating platform 110 is not rotating).
[0068] In a second example, where sensor 106 is configured as a RADAR sensor, transmitter 120 may include an array of antennas, and controller 104 can provide respective phase-shifted control signals for each individual antenna in the array to modify a pointing direction of a combined RF signal from the array (e.g., phased array beam steering).
[0069] In a third example, where sensor 106 is configured as a SONAR sensor, transmitter 120 may include an array of acoustic transducers, and controller 104 can similarly operate the array of acoustic transducers (e.g., via phase-shifted control signals, etc.) to achieve a target pointing direction of a combined sound signal emitted by the array (e.g., even if the rotating platform 110 is not rotating, etc.).
[0070] In other examples, the pointing direction of sensor(s) 106 can be controlled using a deforming flexible structure (e.g., MEMS, etc.) that can be deformed in response to a control signal from controller 104 to adjust a steering direction of the signals emitted by transmitter 120. Other examples are possible.
[0071] Stationary platform 114 may take on any shape or form and may be configured for coupling to various structures, such as to a top of a vehicle for example. Also, the coupling of stationary platform 114 may be carried out via any feasible connector arrangement (e.g., bolts and/or screws). In this way, system 100 could be coupled to a structure so as to be used for various purposes, such as those described herein. In one example, sensor(s) 108 can be coupled to stationary platform 114. In this example, sensor(s) 108 can remain stationary relative to the rotational motion of sensor(s) 106 (or the otherwise changing beam directions of signals emitted by transmitter 120). In another example, sensor(s) 108 can be mounted to another physical structure different than stationary platform 114.
[0072] Rotary link 116 directly or indirectly couples stationary platform 114 to rotating platform 110. To that end, rotary link 116 may take on any shape, form and material that provides for rotation of rotating platform 110 about an axis relative to stationary platform 114.
In some examples, rotary link 116 may take the form of a shaft or the like that rotates based on actuation from actuator 112, thereby transferring mechanical forces from actuator 112 to rotating platform 110. In one implementation, rotary link 116 may have a central cavity in which one or more components of system 100 may be disposed. In some examples, rotary link 116 may also provide a communication link for transferring data and/or instructions between stationary platform 114 and rotating platform 110 (and/or components thereon such as sensor(s) 106, etc.).
[0073] Housing 118 may take on any shape, form, and material and may be configured to house one or more components of system 100. In one example, housing 118 can be a dome-shaped housing. Further, in some examples, housing 118 may be composed of a material that is at least partially non-transparent, which may allow for blocking of at least some light from entering the interior space of housing 118 and thus help mitigate thermal and noise effects of ambient light on one or more components of system 100. Other configurations of housing 118 are possible as well. In some implementations, housing 118 may be coupled to rotating platform 110 such that housing 118 is configured to rotate about the above-mentioned axis based on rotation of rotating platform 110. In such implementations, sensor(s) 106 may rotate together with housing 118. In other implementations, housing 118 may remain stationary while sensor(s) 106 rotate within housing 118. System 100 could also include multiple housings similar to housing 118 for housing certain sub-systems or combinations of components of system 100. For example, system 100 may include a first housing for sensor(s) 106 and a separate housing for sensor(s) 108. Other examples are possible as well.
[0074] Display 140 can optionally be included in system 100 to display information about one or more components of system 100. For example, controller 104 may operate display 140 to display images captured using a camera (e.g., sensor 108), a representation (e.g., 3D point cloud, etc.) of an environment of system 100 indicated by LIDAR data from sensor 106, and/or a representation of the environment based on a combination of the data from sensors 106 and 108 (e.g., colored point cloud, images with superimposed temperature indicators, etc.). To that end, display 140 may include any type of display (e.g., liquid crystal display, LED display, cathode ray tube display, projector, etc.). Further, in some examples, display 140 may have a graphical user interface (GUI) for displaying and/or interacting with images captured by sensor 108, LIDAR data captured using sensor 106, and/or any other information about the various components of system 100 (e.g., power remaining via power supply arrangement 102). For example, a user can manipulate the GUI to adjust a scanning configuration of sensors 106 and/or 108 (e.g., scanning refresh rate, scanning resolution, etc.).
[0075] It is noted that the various components of system 100 can be combined or separated into a wide variety of different arrangements. For example, although sensors 106 and 108 are illustrated as separate components, one or more components of sensors 106 and 108 can alternatively be physically implemented within a single device. Thus, this arrangement of system 100 is described for exemplary purposes only and is not meant to be limiting.
[0076] Figure 2A illustrates a device 200 that includes a rotating LIDAR sensor 206 and a camera ring 208, according to example embodiments. As shown, device 200 includes a LIDAR 206, camera ring 208 (e.g., arrangement of rolling shutter cameras, etc.), a rotating platform 210, a stationary platform 214, a housing 218, a LIDAR lens 224, and camera lenses 230, 232, 234 which may be similar, respectively, to sensor(s) 106, sensor(s) 108, rotating platform 110, stationary platform 114, housing 118, optical element 124, and optical elements 130, for example.
[0077] As shown, light beams 250 emitted by LIDAR 206 propagate from lens 224 along a pointing direction of LIDAR 206 toward an environment of LIDAR 206, and reflect off one or more objects (not shown) in the environment as reflected light 260. Further, as shown, LIDAR 206 may then receive reflected light 260 (e.g., through lens 224). Thus, for instance, LIDAR 206 may provide data (e.g., data point cloud, etc.) indicating distances between the one or more objects and the LIDAR 206 based on detection(s) of the reflected light 260, similarly to the discussion above for sensor 106.
[0078] Further, as shown, each camera of camera ring 208 may receive and detect a respective portion of external light 270 incident on the respective camera. To that end, external light 270 may include light originating from one or more external light sources, such as the sun, a street lamp, among other possibilities. For example, external light 270 may include light propagating directly from an external light source toward camera lenses 230, 232, and/or 234. Alternatively or additionally, external light 270 may include light originating from an external light source and reflecting off one or more objects (not shown) in the environment of device 200 before propagating toward lenses 230, 232, and/or 234. Thus, for example, the cameras of camera ring 208 may generate one or more images of the environment based on external light 270. Further, each image generated by a particular camera may correspond to a particular FOV of the particular camera relative to device 200.
[0079] To that end, in some examples, camera ring 208 may include a plurality of cameras that are arranged in a ring formation (e.g., circular arrangement, oval arrangement, etc.) relative to one another. Each camera of the plurality can be positioned (e.g., mounted to device 200 and/or camera ring 208) at a particular angle and/or orientation. Thus, for instance, a FOV of a first camera may be adjacent to and/or partially overlapping FOVs of two other adjacent cameras. With this arrangement for instance, images from the individual cameras can be combined into an image of a 360-degree FOV of device 200. Further, during assembly or calibration of device 200 for instance, the respective angle and/or orientation of each camera can be adjusted to reduce or prevent blind spots (e.g., regions of the surrounding environment that are not within the FOV of any camera in camera ring 208). For example, the respective FOVs of two adjacent cameras can be aligned (e.g., by moving, rotating, and/or otherwise adjusting relative mounting positions of the two cameras, etc.) such that a region of the environment between the FOVs of the two cameras (e.g., “blind spot”) is less than a threshold distance from device 200.
[0080] To facilitate this, in one implementation, camera ring 208 could optionally include a housing (e.g., ring-shaped, etc.) having one or more indentations that receive and/or support the cameras at particular respective mounting positions (e.g., angle, orientation, etc.).
In another implementation, an example system (e.g., system 100, a calibration system, etc.) may be configured to compare images captured by the cameras, and to determine, based on the comparison, alignment offsets that achieve respective target FOVs for the respective cameras. The example system may also include and/or operate a robotic arm, an actuator, and/or any other alignment apparatus to adjust the positions of the cameras in camera ring 208 according to the determined alignment offsets. Other examples are possible.
[0081] In some examples, device 200 (or another computing device coupled thereto) may operate the cameras of camera ring 208 and/or process the captured images therefrom (e.g., combine portions of the captured images, etc.) to form a cohesive circular vision of the environment of device 200. For example, a computing system (not shown) of device 200 or another device may match features in images captured by camera ring 208 to generate a combined image that spans a combination of the FOVs of the cameras.
[0082] In one implementation, lens 230 may focus light from a first 90-degree FOV of device 200, lens 232 may focus light from a second adjacent 90-degree FOV, and so on. The first FOV could optionally partially overlap the second FOV. In other implementations, the FOV imaged by each camera may be more or less than 90 degrees. Further, in line with the discussion above, an image captured by any of the cameras in camera ring 208 may indicate various types of information such as light intensities for different wavelengths (e.g., colors, etc.) in external light 270, among other examples.
[0083] In some examples, LIDAR 206 (and/or housing 218) can be configured to have a substantially cylindrical shape and to rotate about axis 242, based on rotation of rotating platform 210 that supports LIDAR 206 for instance. Further, in some examples, the axis of rotation 242 may be substantially vertical. Thus, for instance, by rotating LIDAR 206 about axis 242, device 200 (and/or a computing system that operates device 200) can determine a three-dimensional map (based on data from LIDAR 206) of a 360-degree view of the environment of device 200. Additionally or alternatively, in some examples, device 200 can be configured to tilt the axis of rotation of rotating platform 210 (relative to stationary platform 214), thereby adjusting the FOV of LIDAR 206. For instance, rotating platform 210 may include a tilting platform that tilts in one or more directions.
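As a simple illustration of how returns from a rotating LIDAR build up a three-dimensional map, the sketch below converts one return (range, azimuth about the rotation axis, and beam elevation) into Cartesian coordinates. It is not taken from this disclosure; the angle conventions and function name are assumptions.

```python
import math

# Illustrative only: convert one LIDAR return (range plus azimuth about the
# rotation axis and elevation of the beam) into a 3D point. Angle conventions
# are assumptions and may differ from those used by any particular sensor.
def lidar_return_to_point(range_m, azimuth_deg, elevation_deg):
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

print(lidar_return_to_point(12.0, azimuth_deg=45.0, elevation_deg=-2.0))
```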
[0084] In some examples, as shown, LIDAR lens 224 can have an optical power to both collimate (and/or direct) emitted light beams 250 toward an environment of LIDAR 206, and focus reflected light 260 received from the environment onto a LIDAR receiver (not shown) of LIDAR 206. In one example, lens 224 has a focal length of approximately 120 mm. Other example focal lengths are possible. By using the same lens 224 to perform both of these functions, instead of a transmit lens for collimating and a receive lens for focusing, advantages with respect to size, cost, and/or complexity can be provided. Alternatively however, LIDAR 206 may include separate transmit and receive lenses. Thus, although not shown, LIDAR 206 can alternatively include a transmit lens that directs emitted light 250 toward the environment, and a separate receive lens that focuses reflected light 260 for detection by a receiver of LIDAR 206.
[0085] It is noted that device 200 may include more, fewer, or different components than those shown, and one or more of the components shown may be combined or separated in different ways. In one example, instead of multiple camera lenses 230, 232, 234, device 200 could alternatively include a single camera lens that extends around a circumference of camera ring 208. In another example, although camera ring 208 is shown to be coupled to stationary platform 214, camera ring 208 can alternatively be implemented as a separate physical structure. In yet another example, camera ring 208 can be positioned above LIDAR 206, without being rotated by rotating platform 210. In still another example, camera ring 208 may include more or fewer cameras than shown. Other examples are possible.
[0086] Figure 2B illustrates a cross-section view of camera ring 208, according to an example embodiment. In the cross-section view of Figure 2B, axis 242 (i.e., axis of rotation of LIDAR 206) extends through the page. As shown, camera ring 208 includes four cameras 208a, 208b, 208c, 208d that are arranged around axis of rotation 242. Thus, in the example shown, each of the cameras may be configured to image a respective 90-degree FOV of the environment of device 200. However, in other embodiments, camera ring 208 may include fewer or more cameras than shown. In one particular embodiment, camera ring 208 may alternatively include eight cameras, where each camera is coupled to a respective lens that focuses light from (at least) a respective 45-degree FOV of the environment onto an image sensor of the camera. Other examples are possible. Thus, camera ring 208 may have a wide variety of different configurations and thus the configuration shown includes four cameras only for convenience in description.
[0087] Further, as shown, camera 208a includes lens 230 that focuses a first portion of external light (e.g., light 270) from the environment of device 200 onto an image sensor 226 of camera 208a. Further, as shown, camera 208b includes lens 232 that focuses a second portion of the external light onto an image sensor 246 of camera 208b. Similarly, cameras 208c and 208d may be configured to focus respective portions of the external light onto respective image sensors of the cameras.
[0088] Further, as shown, each image sensor may include an array of sensing elements similar to sensing elements 128 for example. For instance, image sensor 226 of camera 208a may include an array of adjacent rows of sensing elements, exemplified by sensing elements 228a-228f (which may be similar to sensing elements 128 for example). By way of example, a first row of sensing elements in image sensor 226 may include sensing element 228a and one or more other sensing elements (not shown) that are vertically arranged through the page (e.g., parallel to axis 242). Further, a second row of sensing elements adjacent to the first row may include sensing element 228b and one or more other sensing elements (not shown) that are vertically arranged through the page, and so on.
[0089] In this way, for example, cameras 208a, 208b, 208c, 208d may together provide an array of adjacent rows of sensing elements that are arranged around axis 242, so as to be able to image various corresponding portions of a 360-degree (horizontal) FOV around device 200. For instance, a given row of sensing elements in image sensor 246 of camera 208b may include sensing element 248a (and one or more other sensing elements arranged parallel to axis 242 through the page). Further, in this instance, the given row of sensing elements in camera 208b may also be adjacent to a row of sensing elements in camera 208a that includes sensing element 228f. Thus, in an example scenario, the sequence of image pixels obtained from camera ring 208 may include a row of image pixels obtained using data from the row of sensing elements that includes sensing element 228f, followed by a row of image pixels obtained using data from the row of sensing elements that includes sensing element 248a.
[0090] It is noted that the number of rows of sensing elements in each of the image sensors (e.g., sensors 226, 246, etc.) is illustrated as shown only for convenience in description. However, in some embodiments, image sensor 226 (and/or 246) may include more or fewer rows of sensing elements than shown. In one particular embodiment, image sensor 226 may alternatively include 3000 rows of sensing elements, and each row may include 1000 sensing elements (extending through the page). In this embodiment, camera 208a may thus be configured to output a 3000 x 1000 pixel image. Further, in this embodiment, camera 208a may be configured to capture images at a rate of 60 Hz. Other camera configuration parameters are possible as well.
[0091] It is noted that the sizes, shapes, and positions of the various components of device 200 are not necessarily to scale, but are illustrated as shown only for convenience in description. In one example, the sizes of the lenses 230, 232, 234, 236, and sensors 226, 246, etc., shown in Figure 2B may be different than the sizes shown. In another example, the distance between lens 230 and image sensor 226 may be different than the distance shown. In one embodiment, the distance from lens 230 to sensor 226 may correspond to approximately twice the diameter of lens 230. However, in other embodiments, image sensor 226 and camera lens 230 may have other sizes, shapes, and/or positions relative to one another.
[0092] Figure 2C is a conceptual illustration of an operation of device 200, according to an example embodiment. In the illustration of Figure 2C, the sensing elements of image sensor 226 of camera 208a are in the plane of the page. It is noted that some of the components of device 200, such as camera lens 230 and LIDAR 206 for instance, are omitted from the illustration of Figure 2C for convenience in description.
[0093] In some implementations, device 200 may be configured to operate cameras 208a, 208b, 208c, and/or 208d in a rolling shutter configuration to obtain a sequence of image pixels. In the scenario of Figure 2C for example, a first row of sensing elements that includes sensing elements 228a and 228g may be configured to measure an amount of external light incident thereon during a first exposure time period. Device 200 may also include an analog to digital converter (not shown) that reads and converts the measurements by the first row of sensing elements (after the first exposure time period lapses) for transmission to a controller (e.g., controller 104) of device 200. After a time delay from a start time of the first exposure time period (and optionally before the first exposure time period ends), device 200 may start exposing a second row of sensing elements that includes sensing elements 228b and 228h for a second exposure time period. Thus, in some examples, exposure time periods of multiple rows of sensing elements may partially overlap (e.g., the time delay between the start times of the first and second exposure time periods may be less than the first exposure time period, etc.).
In this way, a camera in the rolling shutter configuration can stagger the start times of the exposure time periods to increase the image refresh rate (e.g., by simultaneously exposing multiple rows of sensing elements during the overlapping portions of their respective exposure time periods).
[0094] Continuing with the scenario, after the second exposure time period lapses, device 200 may then similarly measure and transmit the measurements by the second row of sensing elements to the controller. This process can then be repeated until all the rows of sensing elements (i.e., a complete image frame) are scanned. For example, after a start time of the second exposure time period (and optionally before the second exposure time period lapses), device 200 may begin exposing a third row of sensing elements (adjacent to the second row) to external light 270, and so on.
[0095] Further, as noted above, device 200 may be configured to obtain the sequence of image pixels in an order that is similar to the order in which light pulses are emitted by LIDAR 206. By doing so, for instance, more image pixels captured by cameras 208a-d may overlap (in both time and viewing direction) with LIDAR data (e.g., detected reflections of the emitted light pulses) than in an implementation where the sequence of image pixels is obtained in a different order.
[0096] Continuing with the scenario of Figure 2C for example, light beams 250a, 250b, 250c may correspond to the emitted light 250 shown in Figure 2A when LIDAR 206 is at a first pointing direction or orientation about axis 242. In the scenario, the device 200 may be configured to scan the first (vertical) row of sensing elements (e.g., including elements 228a and 228g) before scanning sensing elements in the second (vertical) row (e.g., including elements 228b and 228h). By doing so, the image pixels captured using the first row of sensing elements may be more likely to be matched with detected reflections of light beams 250a-250c in terms of both time and viewing direction. In the scenario, LIDAR 206 may then rotate (e.g., counterclockwise) about axis 242 and emit light beams 252a-252c. Device 200 may then obtain a second row of image pixels using the second row of sensing elements (e.g., including sensing elements 228b and 228h), which may be more likely to be aligned (in both time and viewing direction) with detected reflections of light beams 252a-252c, and so on.
[0097] In some implementations, device 200 may also be configured to obtain a row of image pixels in the sequence according to the order of emission of the light pulses / beams by LIDAR 206. As a variation of the scenario above for example, if LIDAR 206 emits light beams 250a, 250b, 250c in that order, then device 200 may be configured to obtain the image pixel row associated with the first row of sensing elements in a similar order (e.g., beginning with sensing element 228a and ending with sensing element 228g). Whereas, for instance, if LIDAR 206 emits light beams 250c, 250b, 250a in that order, then device 200 may instead be configured to obtain the image pixel row in an opposite order (e.g., beginning with sensing element 228g and ending with sensing element 228a).
[0098] Further, in some implementations, device 200 may be configured to adjust a time delay between capturing subsequent image pixel rows in the sequence of image pixels based on a rate of rotation of LIDAR 206. For example, if LIDAR 206 increases its rate of rotation (e.g., via actuator(s) 112, etc.), then device 200 may reduce the time delay between obtaining the first row of image pixels associated with the first row of sensing elements (e.g., including sensing elements 228a and 228g) and obtaining the second row of image pixels associated with the second adjacent row of sensing elements. As noted above, for instance, the exposure start times associated with each row of sensing elements may depend on the order and time of obtaining the corresponding image pixels, and thus adjusting the time delay may improve the extent of matching image pixel capture times (and viewing directions) with corresponding LIDAR pulse emission times (and/or detections of corresponding reflections).

[0099] Figure 2D illustrates a top view of device 200. In the illustration of Figure 2D, LIDAR 206 may have a first pointing direction that corresponds to an angular position of 0° about axis 242 (e.g., toward the bottom of the page). In this configuration for example, LIDAR 206 may scan a region of the surrounding environment that corresponds to a center of an image captured using camera 208c (best shown in Figure 2B), which includes lens 234.
[00100] Figure 2E illustrates another top view of device 200. In the illustration of Figure
2E, LIDAR 206 may have a second pointing direction that corresponds to an angular position of 180° about axis 242 (e.g., toward the top of the page). For instance, LIDAR 206 may have the second pointing direction of Figure 2E after LIDAR 206 is rotated from the first pointing direction of Figure 2D by half a complete rotation about axis 242. Further, in this configuration for example, LIDAR 206 may scan a region of the environment that corresponds to a center of an image captured using camera 208a (best shown in Figure 2B), which includes lens 230.
[00101] In some scenarios, as LIDAR 206 rotates about axis 242, the time period in which FOVs of LIDAR 206 overlap the FOV of camera 208a may be less than the exposure time period (and readout time period) suitable for capturing an image using camera 208a.
[00102] In one example scenario, where camera 208a is operated in a rolling shutter configuration (e.g., rows of sensing elements in camera 208a exposed according to different exposure start times), image sensor 226 of camera 208a may measure and output an image frame (i.e., pixel data from all the sensing elements of image sensor 226) over a period of 60 ms. Further, in the scenario, LIDAR 206 may be configured to rotate at a rotation rate of 10 Hz (i.e., one complete rotation about axis 242 every 100ms). Thus, LIDAR 206 may scan a range of FOVs that overlap an FOV of camera 208a within a time period of 100ms / 4 = 25 ms (e.g., from t = 37.5 ms to t = 62.5 ms). To account for the difference between the scanning durations of the camera and the LIDAR, in some implementations, device 200 may be configured to synchronize LIDAR light pulses emitted by LIDAR 206 with image pixels captured by some but not all the image sensing elements in a camera.
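The 25 ms figure in the scenario above follows from the rotation period and the camera's 90-degree FOV; the short sketch below, which is illustrative only and not part of the original disclosure, reproduces that arithmetic.

```python
# Illustrative arithmetic for the scenario above: a LIDAR rotating at 10 Hz
# spends (camera FOV / 360 degrees) of each 100 ms rotation inside a camera's
# 90-degree field of view.
def fov_overlap_window_ms(rotation_rate_hz, camera_fov_deg=90.0):
    rotation_period_ms = 1000.0 / rotation_rate_hz
    return rotation_period_ms * (camera_fov_deg / 360.0)

print(fov_overlap_window_ms(10))  # 25.0 ms, e.g. from t = 37.5 ms to t = 62.5 ms
```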
[00103] For example, device 200 can be configured to trigger capturing an image by a particular camera such that a particular region of the image (e.g., vertical row(s) of image pixels at or near the center of the image, etc.) is exposed to external light when LIDAR 206 is pointing at a particular pointing direction aligned with the particular region of the image.
[00104] Continuing with the scenario above for instance (where LIDAR 206 rotates at a frequency of 10 Hz), at time t = 0 ms, LIDAR 206 (as shown in Figure 2D) may have a first pointing direction (e.g., angular position about axis 242 = 0°). Further, at time t = 50ms, LIDAR 206 (as shown in Figure 2E) may have a second pointing direction (e.g., angular position about axis 242 = 180°).
[00105] In this scenario, device 200 may be configured to synchronize a center of the exposure time period of image sensor 226 (inside camera 208a) with the time (e.g., t = 50ms) at which the FOV of LIDAR 206 overlaps the center of the FOV of camera 208a. For example, where the exposure time period of image sensor 226 is 60ms, then at time t = 30 ms the center vertical rows of sensing elements in image sensor 226 may be exposed to external light. In this example, camera 208a may trigger an image capture at time t = 50 - 30 = 20 ms to align (in both the time domain and space domain) exposure of vertical row(s) of sensing elements near the center of image sensor 226 with the LIDAR light pulses emitted when LIDAR 206 is scanning a FOV that corresponds to the center of the image (e.g., at t = 50 ms).
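The trigger-time arithmetic in this example (t = 50 - 30 = 20 ms) amounts to starting the exposure so that the middle of the frame's capture interval coincides with the instant the LIDAR crosses the center of the camera's FOV. The following minimal sketch, using the example's numbers, is illustrative only and not part of the original disclosure.

```python
# Illustrative only: choose the camera trigger time so that the center rows'
# exposure lines up with the moment the rotating LIDAR sweeps the center of
# this camera's FOV. The numbers mirror the example in the text.
def camera_trigger_time_ms(lidar_center_crossing_ms, frame_duration_ms):
    return lidar_center_crossing_ms - frame_duration_ms / 2.0

print(camera_trigger_time_ms(lidar_center_crossing_ms=50.0,
                             frame_duration_ms=60.0))  # 20.0 ms
```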
[00106] With this arrangement, image pixels near the center of the image (e.g., captured using the vertical row including sensing element 228c, or the row including sensing element 228d) may be relatively more aligned (with respect to timing and viewing direction) with LIDAR light pulses that were emitted / detected when these image pixels were measured. On the other hand, image pixels captured using rows of sensing elements that are further from the center of the image sensor may be relatively misaligned (in time or viewing direction) with LIDAR light pulses that were emitted / detected when these image pixels were measured. By way of example, the FOVs of the rotating LIDAR may overlap the camera FOV of camera 208a between times t = 37.5 ms and t = 62.5 ms. In the scenario above however, camera 208a may begin exposing the row of sensing elements that include sensing element 228a (best shown in Figure 2C) at time t = 20 ms (i.e., outside the range of times when the FOV of the LIDAR overlaps the FOV of the camera).
[00107] To mitigate this misalignment, in some examples, cameras 208a, 208b, 208c, 208d can be configured to have partially overlapping FOVs. For example, camera 208d (best shown in Figure 2B) may be configured to have a FOV that partially overlaps the FOV of adjacent camera 208a. Further, the exposure time period associated with a center region of an image captured using camera 208d can be synchronized with the time (e.g., t = 25 ms) at which LIDAR 206 is pointing toward a FOV associated with the center of the image captured using camera 208d. Thus, in these examples, device 200 (or other computer) can use the aligned image pixel data from camera 208d (e.g., image pixels near center of captured image) instead of the misaligned image pixel data captured using camera 208a (e.g., image pixels further from the center of the image) for mapping with the LIDAR data.
[00108] Figure 3 illustrates a cross-section view of another rolling shutter camera arrangement 308 (e.g., camera ring), according to example embodiments. Camera ring 308 may be similar to camera ring 208 shown in Figure 2B. As shown, for example, axis 342 may be an axis of rotation of a LIDAR similarly to axis 242. Further, for example, image sensor 326 may be similar to image sensor 226 (and/or 246) and may include an array of sensing elements, exemplified by sensing elements 328a-328e, which may be similar to sensing elements 228a-228f. For example, image sensor 326 may comprise a first row of sensing elements that includes sensing element 328a and one or more other sensing elements (not shown) in a linear arrangement (e.g., perpendicular to the page), and a second adjacent row of sensing elements that includes sensing element 328b and one or more other sensing elements (not shown) in a linear arrangement (e.g., perpendicular to the page).
[00109] Although not shown, camera ring 308 may also include one or more camera lenses (e.g., similar to camera lenses 230, 232, 234, 236, etc.) that focus portions of external light incident on camera ring 308 toward respective sensing elements in the image sensor 326. Additionally or alternatively, camera ring 308 may include one or more of the components shown in any of system 100 and/or device 200.
[00110] As shown, camera ring 308 includes image sensor 326 that is disposed along a curved surface (e.g., circular surface) around axis 342. In one example, image sensor 326 can be implemented on a flexible substrate (e.g., flexible PCB, etc.) that mounts an arrangement of sensing elements (including sensing elements 328a-328e, etc.). Thus, with this arrangement, each of the rows of sensing elements in image sensor 326 may be at a same given distance to the axis of rotation 342 (e.g., circular or cylindrical arrangement of sensing elements). In another example, image sensor 326 can be implemented as a plurality of physically separate rows of sensing elements that are arranged adjacent to one another around axis of rotation 342. For instance, each physically separate row of sensing elements may be located at a same given distance to the axis of rotation as the other rows. Other examples are possible. Regardless of the implementation, in the configuration of camera ring 308, the curved surface on which each row of sensing elements in image sensor 326 is mounted may improve the overlap (e.g., in terms of viewing direction) between the image pixels captured by the sensing elements and the light beams emitted by a LIDAR sensor that rotates about axis 342.
[00111] For instance, as the LIDAR sensor rotates about axis 342, the viewpoint of the LIDAR device (e.g., location of LIDAR lens) may move in a circular path. Thus, with this arrangement, the curved surface of image sensor 326 may resemble the circular path of emitted / detected LIDAR light pulses to improve the likelihood of matching image pixels collected by sensor 326 with LIDAR light pulses (that are detected along a similar curved path in the horizontal direction of the rotation of the LIDAR sensor).
II. Example Mapping of Points Into Rolling-Shutter Images
[00112] It can be beneficial in a variety of applications to map one or more points in an environment (e.g., points detected, relative to an autonomous vehicle, by a LIDAR sensor of the autonomous vehicle) to location(s) (e.g., individual pixels) within an image of a portion of the environment. This can be done in order to generate information about the shape, composition, contents, or other properties of the environment to facilitate navigation through the environment, to inventory contents of the environment, to manipulate contents of the environment, or to facilitate some other goal. For example, the image could be used to identify contents of the environment (e.g., to identify an object of interest). In such an example, mapping a detected point in the environment (e.g., a point of a LIDAR point cloud) to a location, within the image, that represents the identified contents could allow the location, within the environment, of the identified contents to be determined.
[00113] Such methods for localizing a point in an environment to a location in an image of a portion of the environment could be part of a sensor fusion algorithm. Sensor fusion algorithms can be employed to merge data from multiple sensors, such as an image sensor and a LIDAR sensor for instance, to generate a representation of a scanned environment. For instance, a 3D representation of a scanned environment may indicate color information determined using an image sensor combined with other information (e.g., distance, depth, intensity, texture, reflected light pulse length, etc.) determined using a LIDAR sensor.
[00114] A variety of methods could be employed to map a point in an environment to a location within an image of the environment. These methods generally use information about the location and orientation, or ‘pose,’ within the environment of the camera that generated the image. In examples where the camera is in motion relative to the environment (e.g., where the camera is part of an automobile, an unmanned aerial vehicle, or some other autonomous or otherwise-configured vehicle), it is desirable to determine the pose of the camera within the environment at the particular period of time when the camera generated the image.
[00115] Figure 4A illustrates an example point 410 in an environment (e.g., a point that is part of a point cloud generated through the operation of a LIDAR sensor). A camera is also present in the environment and is operable to image a portion of the environment, e.g., a portion that contains the example point 410. The camera is in motion (the direction of motion indicated, in Figure 4A, by the arrow labeled “MOTION” ). The location of the camera at first, second, and third points in time is illustrated, in Figure 4A, by the first 420a, second 420b, and third 420c cameras.
[00116] The direction from the camera to the example point 410 at each of the points in time is illustrated, in Figure 4A, by the dashed-line arrows. Since the camera is in motion, the direction from the camera to the example point 410, relative to the orientation of the camera, changes over time. Accordingly, the location of the example point 410, within an image of the environment taken by the camera, will be dependent on the time at which the camera generated the image and/or the time at which the camera generated the portion of the image that represents light received from contents of the environment at the example point 410.
[00117] Figure 4B is a conceptual illustration of sensing elements 428a-h (e.g., CMOS light sensitive elements, pixels of a CCD sensor) of an image sensor 426 of the in-motion camera illustrated in Figure 4A. A lens, aperture, and/or other elements of the camera project light received from the environment from different trajectories to respective different locations on the image sensor 426. Accordingly, the example point 410 will be represented, within an image generated by the camera, at a location that is dependent upon when the camera sensed light from the example point 410. So, for example, if the camera 420a generated an image by sensing light from the example point 410 at the first point in time, the example point 410 will be mapped to a first location 415a (e.g., a first pixel) within the image. In another example, if the camera 420b generated an image by sensing light from the example point 410 at the second point in time, the example point 410 will be mapped to a second location 415b (e.g., a second pixel) within the image. In yet another example, if the camera 420c generated an image by sensing light from the example point 410 at the third point in time, the example point 410 will be mapped to a third location 415c (e.g., a third pixel) within the image.
[00118] In examples wherein the camera acts according to a "global shutter" mode or otherwise operates to generate the entire image at substantially the same time (e.g., during a single exposure period), locating a point in an environment (e.g., 410) within the frame of the image may be relatively straightforward. For example, the location and orientation of the camera, relative to the point, at the point(s) in time when the image was generated could be determined and used to project the point into the image.
[00119] However, the process of projecting the point into the image can be more complex when different portions of the image are generated at respective different points in time, e.g., if the camera is operated in a 'rolling shutter' mode. In such an operational mode, each row or column of pixels (or other light-sensitive elements of the camera) is operated during a respective different exposure time to sense light from the environment.
[00120] For example, a first row of sensing elements that includes sensing elements 428a and 428g may be configured to measure an amount of external light incident thereon during a first exposure time period. The camera may also include an analog to digital converter (not shown) that reads and converts the measurements by the first row of sensing elements (after the first exposure time period lapses) for transmission to a controller (e.g., controller 104). After a time delay from a start time of the first exposure time period (and optionally before the first exposure time period ends), the camera may start exposing a second row of sensing elements that includes sensing elements 428b and 428h for a second exposure time period.
[00121] Thus, different portions (e.g., rows/columns of pixels) of an image generated in such a manner represent light sensed by the camera during respective different periods of time. Accordingly, motion of the camera during image acquisition will result in the camera having a different pose, relative to the environment, during acquisition of each of the portions (rows/columns) of the image. It may therefore be necessary to determine which of a number of different poses to use in order to project a point from the environment (e.g., 410) to a location within the image.
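By way of example, and not as part of any required implementation, the mid-exposure time of a given row could be approximated from the camera trigger time, the readout-completion time, and the shutter (exposure) duration, as in the following sketch (a top-to-bottom readout is assumed; the function and parameter names are illustrative):

#include <cassert>

// Sketch: approximate mid-exposure timestamp of a pixel row for a
// top-to-bottom rolling shutter. All times are in seconds.
double RowMidExposureTime(double camera_trigger_time,
                          double camera_readout_done_time, double shutter,
                          int image_height, int row) {
  assert(image_height > 0 && row >= 0 && row < image_height);
  // Time spent reading out all rows, excluding the exposure itself.
  const double readout_time =
      camera_readout_done_time - camera_trigger_time - shutter;
  const double seconds_per_row = readout_time / image_height;
  // Row offset plus half the shutter time gives the middle of the exposure.
  return camera_trigger_time + row * seconds_per_row + 0.5 * shutter;
}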
[00122] A variety of methods are possible to perform this projection and/or pose selection. In some examples, a camera pose could be determined for each of the exposure times (e.g., for each row and/or column) and used to project the point to a respective potential location within the image. Based on the plurality of projections, a particular one of the projections could be selected as the estimated projection of the point into the image. For example, a difference could be determined, for each of the potential locations, between the time of the pose used to generate the projection and the time that the camera imaged the row/column at the projected location within the image. In such an example, the projection corresponding to the lowest-magnitude difference could be selected as the estimated projection of the point into the image.
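A sketch of such a brute-force selection is shown below; the callback interface, names, and row-indexed exposure-time table are illustrative assumptions rather than a description of any particular implementation:

#include <cmath>
#include <cstddef>
#include <functional>
#include <limits>
#include <vector>

// For each candidate row exposure time, project the point using the pose the
// camera had at that time (via the supplied callback) and keep the candidate
// whose pose time best matches the exposure time of the row actually hit.
int BruteForceRowForPoint(
    const std::vector<double>& row_exposure_times,
    const std::function<int(double)>& project_to_row_at_pose_time) {
  int best_row = -1;
  double best_mismatch = std::numeric_limits<double>::infinity();
  for (std::size_t r = 0; r < row_exposure_times.size(); ++r) {
    const int projected_row =
        project_to_row_at_pose_time(row_exposure_times[r]);
    if (projected_row < 0 ||
        projected_row >= static_cast<int>(row_exposure_times.size())) {
      continue;  // the point does not land in the image for this candidate pose
    }
    // Difference between the pose time used for the projection and the
    // exposure time of the row the projection lands on.
    const double mismatch =
        std::fabs(row_exposure_times[r] - row_exposure_times[projected_row]);
    if (mismatch < best_mismatch) {
      best_mismatch = mismatch;
      best_row = projected_row;
    }
  }
  return best_row;
}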
[00123] Such a method, however, includes determining a large number of camera poses and projections, using the poses, of the point into the frame of the image. This may be computationally expensive (e.g., in terms of processor cycles, memory use, data bus use), energetically expensive (e.g., in terms of system energy used to compute the poses, projections, or other calculations), expensive in terms of time used to generate an output, or otherwise unfavorable. Accordingly, such a brute-force method may be undesirable or unworkable in applications, such as autonomous vehicle mapping, localization, and/or navigation applications, that are constrained with respect to power, computational resources, latency/time budget, or other factors and/or applications wherein such a mapping must be performed for a great many points (e.g., for each point in a point cloud generated by a LIDAR sensor).
[00124] The methods described herein allow for lower-cost, lower-latency determination of the location, within an image of a portion of an environment, of a corresponding point located within the environment (e.g., a point from a LIDAR point cloud). These methods achieve these improvements by posing the location-estimation problem as a time-estimation problem. In particular, these methods estimate the time (e.g., the particular exposure time of a row/column of a rolling shutter image) at which light from the point in the environment was sensed by a camera when generating the image of the environment into which the point is to be projected.
[00125] By posing the problem as one of time estimation, the method can be applied iteratively. A cost function can be determined for each time estimate and used to update the time estimate (e.g., based on a magnitude and/or sign of the cost function, of a residual of the cost function). Such a cost function could include a difference between a first term that is related to the estimated time (e.g., a first term that is equal to the estimated time, defined relative to a reference time) and a second term that is based on a mapping from a projected location, within the image of the point in the environment to a time that the camera sensed light represented at the projected location within the image. This second term can be determined by, e.g., (i) using a determined pose of the camera at the estimated time to project the point in the environment to a location within the image taken by the camera and (ii) determining a time that the camera sensed light, from the environment, at the projected location within the image (e.g., the time at which the row/column of pixels at the projected location was exposed). [00126] In practice, convergence of the estimate occurs in relatively few iterations (e.g., two to three iterations), resulting in the performance of relatively few pose extrapolations/interpolations, point projections, or other computationally expensive tasks. Accordingly, these methods allow for the projection of a point in an environment to a corresponding location within an image of a portion of the environment using relatively less computational resources, power, and/or time when compared with alternative (e.g., brute force) methods.
[00127] The cost function term related to the estimated time and the term related to the time that the camera sensed light represented at the projected location within the image may be defined relative to different reference times/epochs. For example, the first term related to the estimated time could be based on the estimated time as defined relative to a time of an anchor pose of the camera and the second term related to the time that the camera sensed light represented at the projected location within the image could be based on a characteristic time of capture of the image (e.g., a timing of a first, middle, or end exposure time of a series of rolling shutter exposure times, a timing of exposure of a principal point within the image). In such examples, the cost function could include an additional term related to the offset between the time references of the first and second terms of the cost function.
[00128] An example of such a cost function is:

cost(t_h) = t_h - IndexToTime(Z_n(t_h)) + t_offset

[00129] t_h is the estimated time that light from the point in the environment (e.g., 410) was imaged to generate a corresponding portion of the image of the environment, defined relative to a time, t_pose, of an anchor pose used to determine the pose of the camera at a given estimated time. Z_n(t_h) is the location, within the image, to which the location of the point in the environment is projected when using the pose of the camera as estimated at the estimated time t_h. The function IndexToTime() takes as an input a location within the frame of a rolling-shutter image and outputs a time, relative to a characteristic time t_principal_point of the image, that light represented by the row or column of pixels at the input location was sensed (e.g., an exposure time of the row/column of pixels). The offset term, t_offset, represents a time difference between the different zero times/epochs of t_h and IndexToTime(). So, in this example, t_offset could be a static value equal to t_pose - t_principal_point.
[00130] IndexToTime() could provide a discontinuous output, e.g., an output time that is one of an enumerated set of output times corresponding to the set of exposure times of the rolling-shutter operation of the camera. Alternatively, IndexToTime() could provide a continuous output, e.g., a linearized time that represents a linearized version of the enumerated set of output times corresponding to the set of exposure times of the rolling-shutter operation of the camera.
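As one illustrative sketch of such a linearized IndexToTime(), a precomputed, signed readout-time factor could be applied to the coordinate along the readout direction (the names and the precomputation of the factor are assumptions, not a required implementation):

// Sketch: linearized IndexToTime() in normalized-coordinate space. The
// returned time is relative to the exposure time of the principal point.
// readout_time_factor is the (signed) readout time divided by the coordinate
// range spanned along the readout direction.
inline double IndexToTimeLinearized(double coord_along_readout,
                                    double principal_point_coord,
                                    double readout_time_factor) {
  return readout_time_factor * (coord_along_readout - principal_point_coord);
}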
[00131] To determine an estimated time, an initial estimated time is determined and the cost function iteratively applied to generate an output estimated time. This could include performing the iterative process a set number of times, e.g., between 2 and four times, inclusive. Alternatively, the absolute or relative (e.g., relative to the magnitude of the most recent estimated time) reduction in the cost function from one iteration to the next could be assessed (e.g., compared to a threshold value) and used to determine whether the estimated time has converged. Such a threshold-based update process could be constrained to occur no more than a set threshold number of times, e.g., the update process could terminate due to the cost function converging (e.g., the cost function having been reduced in magnitude by less than a threshold amount) or due to the update process having been performed the threshold number of times. Once the iterative process has terminated, the output estimated time can be used to determine a projected location, within the image, of the point in the environment. This can be done by repeating some of the processes employed in the iterative time estimation process (e.g., using the output estimated time to determine a pose, at the output time, of the camera and using the determined pose to project the point in the environment to a location within the image).
[00132] Updating the estimated time based on the cost function could include applying the output of the cost function (e.g., a residual determined by evaluating the cost function) to the estimated time in order to update the estimated time. The output of the cost function could be applied directly, or could be normalized. For example, a Jacobian of the cost function, with respect to the estimated time, could be used to normalize the output of the cost function before applying the output of the cost function to update the estimated time. Such a Jacobian could be determined analytically, using a derivative of one or more terms of the cost function, and/or by a process of numerical estimation.
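For example, the iterative update could be organized as in the following sketch, in which the projection and the IndexToTime() mapping are supplied together by the caller as a single callback and the Jacobian is estimated numerically; the callback interface, threshold, and iteration count are illustrative assumptions:

#include <cmath>
#include <functional>

// Sketch: estimate t_h by iterating on the residual of
// cost(t_h) = t_h - IndexToTime(Z_n(t_h)) + t_offset.
// index_to_time_of_projection(t_h) projects the world point using the camera
// pose at t_pose + t_h and returns the exposure time (relative to
// t_principal_point) of the image row/column hit by the projection.
double EstimateCaptureTime(
    const std::function<double(double)>& index_to_time_of_projection,
    double t_offset,  // t_pose - t_principal_point
    double convergence_threshold_s = 1e-5, int max_iterations = 4) {
  double t_h = 0.0;  // initial estimate: the characteristic (principal) time
  for (int i = 0; i < max_iterations; ++i) {
    const double residual = t_h - index_to_time_of_projection(t_h) + t_offset;
    if (std::fabs(residual) < convergence_threshold_s) break;
    // Numerically estimate the Jacobian of the residual with respect to t_h
    // and take a Newton-style step.
    const double eps = 1e-6;
    const double residual_plus =
        (t_h + eps) - index_to_time_of_projection(t_h + eps) + t_offset;
    const double jacobian = (residual_plus - residual) / eps;
    if (std::fabs(jacobian) < 1e-12) break;  // avoid dividing by ~zero
    t_h -= residual / jacobian;
  }
  return t_h;
}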
[00133] The location, within the image, to which the location of the point in the environment is projected when using the pose of the camera as estimated at the estimated time t_h (Z_n(t_h) in the example cost function above) may be calculated in a variety of ways. For example, Z_n(t) could be a normalized location within the frame of the image. An example of such a calculation is:
Z_n(t_h) = proj(p, X_cam(t_pose + t_h))

[00134] X_cam(t_pose + t_h) is the pose of the camera at time t_pose + t_h, p is the point in the environment to be projected into the image, and proj(x, Y) projects a point x into the frame of view of a pose Y.
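For example, with the camera-frame convention used in the example implementation of Section V below (+x forward, normalized coordinates u_n = -y/x and v_n = -z/x), proj(x, Y) could be computed as in the following sketch, in which the pose is supplied as the camera-from-world transform and the names are illustrative:

#include "Eigen/Geometry"

// Sketch: project a world point into normalized image coordinates given the
// camera-from-world transform. Returns false if the point is behind the camera.
inline bool ProjectToNormalizedImage(const Eigen::Isometry3d& cam_tfm_world,
                                     const Eigen::Vector3d& world_point,
                                     Eigen::Vector2d* normalized_coord) {
  const Eigen::Vector3d p_cam = cam_tfm_world * world_point;
  if (p_cam.x() <= 0.0) return false;  // point is behind the camera
  *normalized_coord =
      Eigen::Vector2d(-p_cam.y() / p_cam.x(), -p_cam.z() / p_cam.x());
  return true;
}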
[00135] The pose of the camera at a particular time can be determined from one or more anchor poses and respective one or more points in time. An anchor pose may be determined based on global positioning data, magnetometer data, image data, wheel rotation data, LIDAR data, or some other information related to the location of the camera. Such pose data could be determined for the vehicle as a whole and then converted (e.g., by applying an offset translation and rotation) to arrive at the pose of the camera. In some examples, where the time of a known anchor pose is sufficiently close to the time for which the camera pose is desired, the anchor pose could be applied without modification. However, to improve accuracy, the anchor pose could be extrapolated and/or interpolated to the time of interest. For example, the pose of the camera at a particular time, t_pose + t_h, could be determined by a process of extrapolation from a known anchor pose of the camera, X_cam(t_pose), at a different time, t_pose. An example of such a calculation is:

X_cam(t_pose + t_h) = extrapolate(X_cam(t_pose), t_h)

[00136] X_cam(t_pose) is pose information for the camera, at the anchor pose time t_pose, that includes the location and orientation of the camera as well as information about the motion of the camera, e.g., the translational and/or rotational velocity of the camera. Such an extrapolation could be performed in a variety of ways. An example of such an extrapolation is:

R_cam(t_pose + t_h) ≈ (I - t_h · [ω]×) · R_cam(t_pose)
x_cam(t_pose + t_h) ≈ x_cam(t_pose) + t_h · v_cam

[00137] R_cam(t_pose) is the orientation of the camera at the anchor pose time t_pose (here expressed as the rotation from the environment frame into the camera frame, with [ω]× denoting the skew-symmetric matrix of the rotational velocity ω), x_cam(t_pose) is the location of the camera at t_pose, v_cam is the translational velocity of the camera at t_pose, and ω is the rotational velocity of the camera at t_pose. In some examples, it may be sufficient to extrapolate the pose based only on the translational velocity of the camera, e.g., in examples where the rotational velocity of the camera is relatively low-magnitude.
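A sketch of such a first-order extrapolation, written in the same form as the rolling-shutter state used in the example implementation of Section V (skew-symmetric angular velocity, linear translation update), is shown below; the argument names are illustrative:

#include "Eigen/Geometry"

// Sketch: extrapolate the camera pose from the anchor pose at t_pose to
// t_pose + t_h. cam_dcm_n0 is the environment-to-camera rotation and
// n_pos_cam0 the camera position at t_pose; cam_omega_cam is the angular
// velocity in the camera frame and n_vel_cam the translational velocity in
// the environment frame.
inline void ExtrapolateCameraPose(const Eigen::Matrix3d& cam_dcm_n0,
                                  const Eigen::Vector3d& n_pos_cam0,
                                  const Eigen::Vector3d& cam_omega_cam,
                                  const Eigen::Vector3d& n_vel_cam, double t_h,
                                  Eigen::Matrix3d* cam_dcm_n,
                                  Eigen::Vector3d* n_pos_cam) {
  Eigen::Matrix3d skew;
  skew << 0.0, -cam_omega_cam.z(), cam_omega_cam.y(),
      cam_omega_cam.z(), 0.0, -cam_omega_cam.x(),
      -cam_omega_cam.y(), cam_omega_cam.x(), 0.0;
  // R(t_pose + t_h) ~ (I - t_h [w]x) R(t_pose), valid for small t_h.
  *cam_dcm_n = cam_dcm_n0 - t_h * skew * cam_dcm_n0;
  // x(t_pose + t_h) ~ x(t_pose) + t_h v.
  *n_pos_cam = n_pos_cam0 + t_h * n_vel_cam;
}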
[00138] Additionally or alternatively, the pose of the camera for a particular time could be determined by interpolating multiple different known poses of the camera at respective different known points in time. This could include linear interpolation between two known poses (e.g., two known poses corresponding to respective times that are respectively before and after the time for which an interpolated pose is desired), nonlinear interpolation using more than two poses, or some other interpolation process.
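For example, linear interpolation between two known poses could be sketched as follows, with positions interpolated linearly and orientations interpolated by quaternion slerp; this is an illustrative sketch under those assumptions rather than a required implementation:

#include "Eigen/Geometry"

// Sketch: interpolate between poses known at times t0 < t1 to obtain the pose
// at an intermediate time t.
inline Eigen::Isometry3d InterpolatePose(const Eigen::Isometry3d& pose0,
                                         double t0,
                                         const Eigen::Isometry3d& pose1,
                                         double t1, double t) {
  const double alpha = (t - t0) / (t1 - t0);
  const Eigen::Quaterniond q0(pose0.rotation());
  const Eigen::Quaterniond q1(pose1.rotation());
  Eigen::Isometry3d out = Eigen::Isometry3d::Identity();
  out.linear() = q0.slerp(alpha, q1).toRotationMatrix();
  out.translation() =
      (1.0 - alpha) * pose0.translation() + alpha * pose1.translation();
  return out;
}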
[00139] In some examples, the determined and/or detected location of the point in the environment could be moving over time. For example, the point could be the detected location of a moving object in the environment, and the translational velocity, rotational velocity, or other motion information about the moving object could also be detected. In such examples, the embodiments described herein could be modified to account for motion of the point within the environment over time. For example, p(t) could represent the location of the point in the environment over time (as a function of time t). This time-dependent p(t) could replace the static p in the example embodiments described above. Such a time-dependent location could be determined in a variety of ways, e.g., by extrapolating the location of the point from a location and velocity of the point detected at a single detection time (e.g., p(t) = p(t_detect) + (t - t_detect) · v(t_detect), based on a location p(t_detect) and a velocity v(t_detect) of the point detected at time t_detect), by interpolating the location of the point between two different locations detected at two different times, or by using some other method to estimate the location of the point in the environment as a function of time.

III. Example Methods and Computer Readable Media
[00140] Figure 5 is a flowchart of a method 500, according to example embodiments. Method 500 presents an embodiment of a method that could be used with any of system 100, device 200, and/or camera ring 308, for example. Method 500 may include one or more operations, functions, or actions as illustrated by one or more of blocks 502-514. Although the blocks are illustrated in a sequential order, these blocks may in some instances be performed in parallel, and/or in a different order than those described herein. Also, the various blocks may be combined into fewer blocks, divided into additional blocks, and/or removed based upon the desired implementation.
[00141] In addition, for method 500 and other processes and methods disclosed herein, the flowchart shows functionality and operation of one possible implementation of present embodiments. In this regard, each block may represent a module, a segment, a portion of a manufacturing or operation process, or a portion of program code, which includes one or more instructions executable by a processor for implementing specific logical functions or steps in the process. The program code may be stored on any type of computer readable medium, for example, such as a storage device including a disk or hard drive. The computer readable medium may include a non-transitory computer readable medium, for example, such as computer-readable media that stores data for short periods of time like register memory, processor cache and Random Access Memory (RAM). The computer readable medium may also include non-transitory media, such as secondary or persistent long term storage, like read only memory (ROM), optical or magnetic disks, compact-disc read only memory (CD-ROM), for example. The computer readable media may also be any other volatile or non-volatile storage systems. The computer readable medium may be considered a computer readable storage medium, for example, or a tangible storage device. In addition, for method 500 and other processes and methods disclosed herein, each block in Figure 5 may represent circuitry that is wired to perform the specific logical functions in the process.
[00142] At block 502, method 500 involves obtaining an indication of a point in an environment of an autonomous vehicle. For example, LIDAR sensor 206 may be operated to generate a plurality of LIDAR points related to the location, shape, size or other information about one or more objects in the environment of LIDAR sensor 206 and the indication of the point in the environment could be determined from one or more of the LIDAR points.
[00143] At block 504, method 500 involves obtaining information about the location and motion of the autonomous vehicle within the environment. For example, information from a GPS, GLONASS, or other navigational positional receiver, LIDAR data, wheel speed/rotation data, inertial data from one or more accelerometers and/or gyroscopes, magnetic field data from a magnetometer, image data about an environment from one or more cameras, or other information related to the location of the autonomous vehicle could be used to generate information about the location and motion of the autonomous vehicle within the environment. This could include performing sensor fusion, applying a filter (e.g., a Kalman filter) to estimates of the motion/location of the autonomous vehicle, or other processes to determine an accurate estimate of the location and motion of the autonomous vehicle within the environment.
[00144] At block 506 the method 500 involves obtaining an image of a portion of the environment of the autonomous vehicle. The image includes a plurality of rows of pixels. The image was generated by a camera operating in a rolling shutter mode such that each row of pixels represents light sensed by the camera during a respective exposure time period.
[00145] At block 508 the method 500 involves mapping the point in the environment to a location within the image. Mapping the point in the environment to a location within the image involves, at block 510, determining an initial estimated time, T0, that the camera sensed light from the point in the environment. This could include setting the initial estimated time according to a characteristic time of generation of the image (e.g., a time at which the rolling- shutter process began or ended, a time at which a ‘middle’ set of pixels of the image was exposed). Mapping the point in the environment to a location within the image also involves, at block 512, determining N updated estimated times, wherein N≥ 1. Mapping the point in the environment to a location within the image also involves, at block 514, determining, based on the updated estimated time, TN, a location within the image that corresponds to the point in the environment.
[00146] Determining N updated estimated times involves determining each updated estimated time, Ti, by an update process that includes, at block 513a, determining, based on the information about the location and motion of the autonomous vehicle, a pose of the camera at the estimated time, Ti-1. The update process additionally includes, at block 513b, based on the pose of the camera at the estimated time, Ti-1, projecting the point in the environment to a projected location within the image. This could include determining the pose of the camera at the estimated time, Ti-1, by interpolating between the obtained location and motion of the autonomous vehicle and one or more additional locations and motions of the autonomous vehicle, extrapolating the obtained location and motion of the autonomous vehicle to the estimated time, or by performing some additional or alternative process.
[00147] The update process also includes, at block 513c, evaluating a cost function that includes (i) a term based on the estimated time, Ti-1, and (ii) a term based on a mapping from the projected location to a time that the camera sensed light represented at the projected location within the image. These terms could be defined with respect to the same or different 'zero' times or epochs. For example, the term based on the estimated time could be defined relative to a time of the information about the location and motion of the autonomous vehicle and the term based on the mapping from the projected location to the time that the camera sensed light represented at the projected location within the image could be defined relative to a characteristic time of generation of the image (e.g., a time at which the rolling-shutter process began or ended, a time at which a 'middle' set of pixels of the image was exposed). In such examples where the terms are defined relative to different 'zero' times or epochs, the cost function could additionally include an offset term to compensate for such differences.
[00148] The update process yet further includes, at block 513d, determining the updated estimated time, Ti, based on the evaluated cost function. The update process could be performed a pre-determined number of times, e.g., two, three, or four times. Alternatively, the update process could include performing the update process until the time estimate converges, e.g., until an absolute or relative change in the magnitude of the time estimate, from one update to the next, is less than a specified threshold level.
[00149] In some implementations, method 500 involves determining a three- dimensional (3D) representation of the environment based on position data (e.g., data from a LIDAR sensor) and image information from the image (e.g., one or more pixels of the image). For example, an example system may combine LIDAR-based information (e.g., distances to one or more objects in the environment, etc.) with camera-based information (e.g., color, etc.) to generate the 3D representation. Other types of representations (e.g., 2D image, image with tags, enhanced image that indicates shading or texture information, etc.) based on a combination of location and image data are possible. Thus, in some implementations, method 500 involves determining a representation of the environment based on color information indicated by the image and point location information (e.g., distance, depth, texture, reflectivity, absorbance, reflective light pulse length, etc.) indicated by, e.g., a LIDAR sensor.
[00150] In a first example, a system of method 500 may determine depth information for one or more image pixels in the image based on the point in an environment. For instance, the system can assign a depth value for image pixels in the image. Additionally, for instance, the system can generate (e.g., via display 140) a 3D object data model (e.g., a 3D rendering) of one or more objects in the environment (e.g., colored 3D model that indicates 3D features in the environment, etc.). In another instance, an image processing system can identify and distinguish between multiple objects in the image by comparing depth information (indicated by the associated location of the point in the environment) of the respective objects. Other applications are possible. Thus, in some implementations method 500 involves mapping LIDAR data points collected using the LIDAR sensor to image pixels collected using the one or more cameras. For instance, the LIDAR data can be mapped to a coordinate system of an image output by an image sensor or camera.
[00151] In a second example, a system of method 500 may assign colors (based on data from the one or more cameras) to the point in the environment or to other known points in the environment (e.g., to individual points in a LIDAR point cloud). The example system can then generate (e.g., via display 140) a 3D rendering of the scanned environment that indicate distances to features in the environment along with color (e.g., colored point cloud, etc.) information indicated by the image sensor(s) of the one or more cameras. Thus, in some implementations, method 500 involves mapping image pixels from the image to corresponding LIDAR or otherwise-generated location data points. For instance, the image pixel data can be mapped to a coordinate system of LIDAR data output by a LIDAR sensor.
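As an illustrative sketch of such colorization (assuming a row-major RGB image buffer, a simple point structure, and that PrepareProjection() has already been called for the corresponding image; these assumptions and names are not part of any required implementation), each world point could be projected with the WorldToImage() function of the example implementation in Section V and assigned the color of the pixel it lands on:

#include <cstddef>
#include <cstdint>
#include <vector>

#include "third_party/camera/camera_model.h"

struct ColoredPoint {
  double x = 0, y = 0, z = 0;
  std::uint8_t r = 0, g = 0, b = 0;
  bool has_color = false;
};

// rgb_image is assumed to be a row-major, 3-bytes-per-pixel buffer of size
// image_width * image_height * 3 for the image passed to PrepareProjection().
void ColorizePoints(const waymo::open_dataset::CameraModel& camera_model,
                    const std::vector<std::uint8_t>& rgb_image, int image_width,
                    std::vector<ColoredPoint>* points) {
  for (ColoredPoint& point : *points) {
    double u_d = 0.0, v_d = 0.0;
    // check_image_bounds=true: skip points that project outside the image.
    if (!camera_model.WorldToImage(point.x, point.y, point.z,
                                   /*check_image_bounds=*/true, &u_d, &v_d)) {
      continue;
    }
    const std::size_t idx =
        3 * (static_cast<std::size_t>(v_d) * image_width +
             static_cast<std::size_t>(u_d));
    point.r = rgb_image[idx + 0];
    point.g = rgb_image[idx + 1];
    point.b = rgb_image[idx + 2];
    point.has_color = true;
  }
}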
[00152] The method 500 may include additional or alternative elements. For example, the method 500 could include operating an autonomous vehicle (e.g., steering the vehicle, actuating a throttle of the vehicle, controlling a torque and/or rotation output of one or more wheels, motors, engines, or other elements of the vehicle) based on the mapping of the point in the environment of the vehicle to a location (e.g., a pixel) within the image. This could include using the mapping to identify and/or locate one or more objects or other contents of the environment. Additionally or alternatively, this could include using the mapping to determine a navigational plan and/or determining a command to use to operate the autonomous vehicle.

IV. Conclusion
[00153] The particular arrangements shown in the Figures should not be viewed as limiting. It should be understood that other implementations may include more or less of each element shown in a given Figure. Further, some of the illustrated elements may be combined or omitted. Yet further, an exemplary implementation may include elements that are not illustrated in the Figures. Additionally, while various aspects and implementations have been disclosed herein, other aspects and implementations will be apparent to those skilled in the art. The various aspects and implementations disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims. Other implementations may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented herein. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations.
V. Example Implementation
[00154] // Filename : BUILD
[00155] // Copyright (c) 2019 Waymo LLC. All rights reserved.
[00156] cc_library(
[00157] name = "camera_model",
[00158] srcs = ["camera_model.cc"],
[00159] hdrs = ["camera_model.h"],
[00160] deps = [
[00161] "//waymo_open_dataset:dataset_cc_proto",
[00162] "//waymo_open_dataset/common:integral_types",
[00163] "//waymo_open_dataset/math:vec2d",
[00164] "@com_google_absl//absl/memory",
[00165] "@com_google_absl//absl/types:optional",
[00166] "@com_google_glog//:glog",
[00167] "@eigen_archive//:eigen",
[00168] ],
[00169] )
[00170] py_library(
[00171] name = "py_camera_model_ops",
[00172] srcs = ["py_camera_model_ops.py"],
[00173] data = [
[00174] ":camera_model_ops.so",
[00175] ],
[00176] )
[00177] cc_binary(
[00178] name = "camera_model_ops.so",
[00179] srcs = [
[00180] "camera_model_ops.cc",
[00181] ],
[00182] copts = [
[00183] "-pthread",
[00184] ],
[00185] linkshared = 1,
[00186] deps = [
[00187] "//third_party/camera:camera_model",
[00188] "//waymo_open_dataset:dataset_cc_proto",
[00189] "@local_config_tf//:libtensorflow_framework",
[00190] "@local_config_tf//:tf_header_lib",
[00191] ],
[00192] )
[00193] # Tests
[00194] cc_test(
[00195] name = "camera_model_test",
[00196] srcs = ["camera_model_test.cc"],
[00197] deps = [
[00198] ":camera_model",
[00199] # Implicit proto dependency.
[00200] "@com_google_googletest//:gtest",
[00201] "@com_google_googletest//:gtest_main",
[00202] "@com_google_absl//absl/memory",
[00203] "//waymo_open_dataset:dataset_cc_proto",
[00204] ],
[00205] )
[00206] py_test(
[00207] name = "camera_model_ops_test",
[00208] srcs = ["camera_model_ops_test.py"],
[00209] python_version = "PY3",
[00210] deps = [
[00211] ":py_camera_model_ops",
[00212] # Implicit tensorflow dependency.
[00213] "//waymo_open_dataset:dataset_proto_py_pb2",
[00214] ],
[00215] )
[00216] // Filename: camera_model.h
[00217] // Copyright (c) 2019 Waymo LLC. All rights reserved.
[00218] #ifndef WAYMO_OPEN_DATASET_THIRD_PARTY_CAMERA_CAMERA_MODEL_H_
[00219] #define WAYMO_OPEN_DATASET_THIRD_PARTY_CAMERA_CAMERA_MODEL_H_
[00220] #include <memory>
[00221] #include "Eigen/Geometry"
[00222] #include "waymo_open_dataset/dataset.pb.h"
[00223] namespace waymo {
[00224] namespace open_dataset {
[00225] // Example usage:
[00226] // CameraModel camera_model(calibration);
[00227] // camera_model.PrepareProjection(image);
[00228] // camera_model.WorldToImage(...);
[00229] // camera_model.WorldToImage(...);
[00230] // This class is not threadsafe.
[00231] class CameraModel {
[00232] public:
[00233] explicit CameraModel(const CameraCalibration& calibration);
[00234] virtual ~CameraModel();
[00235] // This function should be called once per image before calling
[00236] // 'WorldToImage'. It pre-computes some projection relevant variables.
[00237] void PrepareProjection(const CameraImage& camera_image);
[00238] // Projects a 3D point in global coordinates into the lens distorted image
[00239] // coordinates (u_d, v_d). These projections are in original image frame (x:
[00240] // image width, y; image height).
[00241] //
[00242] // Returns false if the point is behind the camera or if the coordinates
[00243] // cannot be trusted because the radial distortion is too large. When the
[00244] // point is not within the field of view of the camera, u_d, v_d are still
[00245] // assigned meaningful values. If the point is in front of the camera image
[00246] // plane, actual u_d and v_d values are calculated.
[00247] //
[00248] // If the flag check_image_bounds is true, also returns false if the point is
[00249] // not within the field of view of the camera.
[00250] //
[00251] // It does rolling shutter projection if the camera is not a
[00252] // global shutter camera. To disable rolling shutter projection, override
[00253] // rolling_shutter_direction in the camera calibration.
[00254] // Requires: 'PrepareProjection' is called.
[00255] bool WorldToImage(double x, double y, double z, bool check_image_bounds,
[00256] double* u_d, double* v_d) const;
[00257] // Converts a point in the image with a known depth into world coordinates.
[00258] // Similar as 'WorldToImage'. This method also compensates for rolling shutter
[00259] // effect if applicable.
[00260] // Requires: 'PrepareProjection' is called.
[00261] void ImageToWorld(double u_d, double v_d, double depth, double* x, double* y,
[00262] double* z) const;
[00263] // True if the given image coordinates are within the image.
[00264] bool InImage(double u, double v) const;
[00265] private:
[00266] // Projects a point in the 3D camera frame into the lens distorted image
[00267] // coordinates (u_d, v_d).
[00268] //
[00269] // Returns false if the point is behind the camera or if the coordinates
[00270] // cannot be trusted because the radial distortion is too large. When the
[00271] // point is not within the field of view of the camera, u_d, v_d are still
[00272] // assigned meaningful values. If the point is in front of the camera image
[00273] // plane, actual u_d and v_d values are calculated.
[00274] //
[00275] // If the flag check_image_bounds is true, also returns false if the point is
[00276] // not within the field of view of the camera.
[00277] bool CameraToImage(double x, double y, double z, bool check_image_bounds,
[00278] double* u_d, double* v_d) const;
[00279] // Similar as "WorldToImage" but only for global shutter.
[00280] // Requires: 'PrepareProjection' is called.
[00281] bool WorldToImageGlobalShutter(double x, double y, double z,
[00282] bool check_image_bounds, double* u_d,
[00283] double* v_d) const;
[00284] // Similar as 'ImageToWorld' but only for global shutter.
[00285] // Requires: 'PrepareProjection' is called.
[00286] void ImageToWorldGlobalShutter(double u_d, double v_d, double depth,
[00287] double* x, double* y, double* z) const;
[00288] // Converts lens distorted image coordinates (u_d, v_d) to the normalized
[00289] // direction (u_n, v_n).
[00290] void ImageToDirection(double u_d, double v_d, double* u_n, double* v_n) const;
[00291] // Converts normalized direction (u_n, v_n) to the lens-distorted image
[00292] // coordinates (u_d, v_d). Returns false if the radial distortion is too high,
[00293] // but still sets u_d and v_d to clamped out-of-bounds values to get
[00294] // directional information.
[00295] bool DirectionToImage(double u_n, double v_n, double* u_d, double* v_d) const;
[00296] // This is a helper function for rolling shutter projection.
[00297] // It takes the rolling shutter state variable, position of landmark in ENU
[00298] // frame, estimated time t_h, and computes projected feature in normalized
[00299] // coordinate frame, the residual and the jacobian.
[00300] // If the jacobian is given as nullptr we will skip its computation.
[00301] bool ComputeResidualAndJacobian(const Eigen::Vector3d& n_pos_f, double t_h,
[00302] Eigen::Vector2d* normalized_coord,
[00303] double* residual, double* jacobian) const;
[00304] // Forward declaration of an internal state used for global shutter projection
[00305] // computation.
[00306] struct GlobalShutterState;
[00307] // Forward declaration of an internal state used for rolling shutter
[00308] // projection computation.
[00309] struct RollingShutterState;
[00310] const CameraCalibration calibration_;
[00311] std::unique_ptr<RollingShutterState> rolling_shutter_state_;
[00312] std::unique_ptr<GlobalShutterState> global_shutter_state_;
[00313] };
[00314] } // namespace open_dataset
[00315] } // namespace waymo
[00316] #endif
[00317] // Filename: camera_model.cc
[00318] // Copyright (c) 2019 Waymo LLC. All rights reserved.
[00319] #include "third_party/camera/camera_model.h"
[00320] #include <math.h>
[00321] #include <stddef.h>
[00322] #include <memory>
[00323] #include <glog/logging.h>
[00324] #include "absl/memory/memory.h"
[00325] #include "absl/types/optional.h"
[00326] #include "Eigen/Geometry"
[00327] #include "waymo_open_dataset/common/integral_types.h"
[00328] #include "waymo_open_dataset/dataset.pb.h"
[00329] #include "waymo_open_dataset/math/vec2d.h"
[00330] namespace waymo {
[00331] namespace open_dataset {
[00332] namespace {
[00333] // Bounds on the allowed radial distortion.
[00334] constexpr double kMinRadialDistortion = 0.8;
[00335] constexpr double kMaxRadialDistortion = 1.2;
[00336] Vec2d GetProjectionCenter(const CameraCalibration& calibration) {
[00337] return Vec2d(calibration.intrinsic(2), calibration.intrinsic(3));
[00338] }
[00339] Eigen::Matrix3d SkewSymmetric(const Eigen::Vector3d& v) {
[00340] // clang-format off
[00341] Eigen::Matrix3d m;
[00342] m << 0, -v(2), v(1),
[00343] v(2), 0, -v(0),
[00344] -v(1), v(0), 0;
[00345] // clang-format on
[00346] return m;
[00347] }
[00348] Eigen::Isometry3d ToEigenTransform(const Transform& t) {
[00349] Eigen::Isometry3d out;
[00350] for (int r = 0; r < 3; r++) {
[00351] for (int c = 0; c < 4; c++) {
[00352] const int ind = r * 4 + c;
[00353] out(r, c) = t.transform(ind);
[00354] }
[00355] }
[00356] return out;
[00357] }
[00358] double GetPixelTimestamp(
[00359] CameraCalibration::RollingShutterReadOutDirection readout_direction,
[00360] double shutter, double camera_trigger_time, double camera_readout_done_time,
[00361] int image_width, int image_height, int x, int y) {
[00362] // Please see dataset.proto for an explanation of shutter timings.
[00363] const double readout_time =
[00364] camera_readout_done_time - camera_trigger_time - shutter;
[00365] // Cameras have a rolling shutter, so each *sensor* row is exposed at a
[00366] // slightly different time, starting with the top row and ending with the
[00367] // bottom row. Because the sensor itself may be rotated, this means that the
[00368] // *image* is captured row-by-row or column-by-column, depending on
[00369] // 'readout_direction'.
[00370] double seconds_per_col = 0.0;
[00371] double seconds_per_row = 0.0;
[00372] bool flip_rows = false;
[00373] bool flip_cols = false;
[00374] switch (readout_direction) {
[00375] case CameraCalibration::TOP_TO_BOTTOM:
[00376] seconds_per_row = readout_time / image_height;
[00377] break;
[00378] case CameraCalibration::BOTTOM_TO_TOP:
[00379] seconds_per_row = readout_time / image_height;
[00380] flip_rows = true;
[00381] break;
[00382] case CameraCalibration::LEFT_TO_RIGHT:
[00383] seconds_per_col = readout_time / image_width;
[00384] break;
[00385] case CameraCalibration::RIGHT_TO_LEFT:
[00386] seconds_per_col = readout_time / image_width;
[00387] flip_cols = true;
[00388] break;
[00389] default:
[00390] LOG(FATAL) << "Should not reach here " << readout_direction;
[00391] }
[00392] // Final time for this pixel is the initial trigger time + the column and row
[00393] // offset (exactly one of these will be non-zero) + half the shutter time to
[00394] // get the middle of the exposure.
[00395] return (camera_trigger_time +
[00396] (flip_cols ? (image_width - x) : x) * seconds_per_col +
[00397] (flip_rows ? (image_height - y) : y) * seconds_per_row +
[00398] shutter * 0.5);
[00399] }
[00400] // In normalized camera, undistorts point coordinates via iteration,
[00401] void IterateUndistortion(const CameraCalibration& calibration, double u_nd,
[00402] double v_nd, double* u_n, double* v_n) {
[00403] CHECK(u_n);
[00404] CHECK(v_n);
[00405] const double f_u = calibration.intrinsic(0);
[00406] const double f_v = calibration.intrinsic(1);
[00407] const double k1 = calibration.intrinsic(4);
[00408] const double k2 = calibration.intrinsic(5);
[00409] const double k3 = calibration.intrinsic(6); // same as p1 in OpenCV.
[00410] const double k4 = calibration.intrinsic(7); // same as p2 in OpenCV.
[00411] const double k5 = calibration.intrinsic(8); // same as k3 in OpenCV.
[00412] double& u = *u_n;
[00413] double& v = *v_n;
[00414] // Initial guess.
[00415] u = u_nd;
[00416] v = v_nd;
[00417] CHECK_GT(f_u, 0.0);
[00418] CHECK_GT(f_v, 0.0);
[00419] // Minimum required squared delta before terminating. Note that it is set in
[00420] // normalized camera coordinates at a fraction of a pixel^2. The threshold
[00421] // should satisfy unittest accuracy threshold kEpsilon = 1e-6 even for very
[00422] // slow convergence.
[00423] const double min_delta2 = 1e-12 / (f_u * f_u + f_v * f_v);
[00424] // Iteratively apply the distortion model to undistort the image coordinates.
[00425] // Maximum number of iterations when estimating undistorted point.
[00426] constexpr int kMaxNumIterations = 20;
[00427] for (int i = 0; i < kMaxNumIterations; ++i) {
[00428] const double r2 = u * u + v * v;
[00429] const double r4 = r2 * r2;
[00430] const double r6 = r4 * r2;
[00431] const double rd = 1.0 + r2 * k1 + r4 * k2 + r6 * k5;
[00432] const double u_prev = u;
[00433] const double v_prev = v;
[00434] const double u_tangential = 2.0 * k3 * u * v + k4 * (r2 + 2.0 * u * u);
[00435] const double v_tangential = 2.0 * k4 * u * v + k3 * (r2 + 2.0 * v * v);
[00436] u = (u_nd - u_tangential) / rd;
[00437] v = (v_nd - v_tangential) / rd;
[00438] const double du = u - u_prev;
[00439] const double dv = v - v_prev;
[00440] // Early exit.
[00441] if (du * du + dv * dv < min_delta2) {
[00442] break;
[00443] }
[00444] }
[00445] }
[00446] } // namespace
[00447] // Some naming conventions:
[00448] // tfm: transform
[00449] // dcm: direction cosine matrix
[00450] // xx0: xx frame at pose timestamp
[00451] // omega: angular velocity
[00452] struct CameraModel::RollingShutterState {
[00453] // Define: t_pose_offset = t_pose - t_principal_point. In seconds.
[00454] double t_pose_offset = 0.0;
[00455] // sign * readout time / normalized coordinate range.
[00456] // The sign depends on readout direction.
[00457] double readout_time_factor = 0.0;
[00458] // sign * readout time / range_in_pixel_space.
[00459] // The sign depends on readout direction.
[00460] double readout_time_factor_pixel = 0.0;
[00461] // The principal point image coordinate, in pixels.
[00462] Eigen::Vector2d principal_point;
[00463] // Transformation from camera to ENU at pose timestamp.
[00464] Eigen::Isometry3d n_tfm_cam0;
[00465] // Velocity of camera at ENU frame at pose timestamp.
[00466] Eigen::Vector3d n_vel_cam0;
[00467] // Define: skew_omega = SkewSymmetric(cam_omega_cam0).
[00468] Eigen::Matrix3d skew_omega;
[00469] // Rotation from ENU to camera at pose timestamp.
[00470] Eigen::Matrix3d cam0_dcm_n;
[00471] // Define: skew_omega_dcm = skew_omega * cam0_dcm_n.
[00472] Eigen::Matrix3d skew_omega_dcm;
[00473] // Whether rolling shutter direction is horizontal.
[00474] bool readout_horizontal_direction = false;
[00475] };
[00476] struct CameraModel::GlobalShutterState {
[00477] // Transformation from camera to ENU at pose timestamp.
[00478] Eigen::Isometry3d n_tfm_cam0;
[00479] // Transformation from ENU to camera at pose timestamp.
[00480] Eigen::Isometry3d cam_tfm_n;
[00481] };
[00482] CameraModel::CameraModel(const CameraCalibration& calibration)
[00483] : calibration_(calibration) {}
[00484] CameraModel::~CameraModel() {}
[00485] void CameraModel::PrepareProjection(const CameraImage& camera_image) {
[00486] const Eigen::Isometry3d n_tfm_vehicle0 =
[00487] ToEigenTransform(camera_image.pose());
[00488] const Eigen::Isometry3d vehicle_tfm_cam =
[00489] ToEigenTransform(calibration_.extrinsic());
[00490] if (global_shutter_state_ == nullptr) {
[00491] global_shutter_state_ = absl::make_unique<GlobalShutterState>();
[00492] }
[00493] global_shutter_state_->n_tfm_cam0 = n_tfm_vehicle0 * vehicle_tfm_cam;
[00494] global_shutter_state_->cam_tfm_n =
[00495] global_shutter_state_->n_tfm_cam0.inverse();
[00496] if (calibration_.rolling_shutter_direction() ==
[00497] CameraCalibration::GLOBAL_SHUTTER) {
[00498] return;
[00499] }
[00500] if (rolling_shutter_state_ == nullptr) {
[00501] rolling_shutter_state_ =
[00502] absl::make_unique<CameraModel::RollingShutterState>();
[00503] }
[00504] const double readout_time = camera_image.camera_readout_done_time() -
[00505] camera_image.camera_trigger_time() -
[00506] camera_image.shutter();
[00507] const Vec2d principal_point_pixel = GetProjectionCenter(calibration_);
[00508] rolling_shutter_state_->principal_point =
[00509] Eigen::Vector2d{principal_point_pixel.x(), principal_point_pixel.y()};
[00510] const double t_principal_point = GetPixelTimestamp(
[00511] calibration_.rolling_shutter_direction(), camera_image.shutter(),
[00512] camera_image.camera_trigger_time(),
[00513] camera_image.camera_readout_done_time(), calibration_.width(),
[00514] calibration_.height(), principal_point_pixel.x(),
[00515] principal_point_pixel.y());
[00516] rolling_shutter_state_->t_pose_offset =
[00517] camera_image.pose_timestamp() - t_principal_point;
[00518] if (calibration_.rolling_shutter_direction() ==
[00519] CameraCalibration::RIGHT_TO_LEFT ||
[00520] calibration_.rolling_shutter_direction() ==
[00521] CameraCalibration::LEFT_TO_RIGHT) {
[00522] rolling_shutter_state_->readout_horizontal_direction = true;
[00523] } else {
[00524] rolling_shutter_state_->readout_horizontal_direction = false;
[00525] }
[00526] // Compute readout time factor.
[00527] double normalized_coord_range = 0;
[00528] double range_in_pixel_space = 0;
[00529] if (rolling_shutter_state_->readout_horizontal_direction) {
[00530] double u_n_first = 0, v_n = 0, u_n_end = 0;
[00531] ImageToDirection(0, 0.5 * calibration_.height(), &u_n_first, &v_n);
[00532] ImageToDirection(calibration_.width(), 0.5 * calibration_.height(),
[00533] &u_n_end, &v_n);
[00534] normalized_coord_range = u_n_end - u_n_first;
[00535] range_in_pixel_space = calibration_.width();
[00536] } else {
[00537] double u_n = 0, v_n_first = 0, v_n_end = 0;
[00538] ImageToDirection(0.5 * calibration_.width(), 0, &u_n, &v_n_first);
[00539] ImageToDirection(0.5 * calibration_.width(), calibration_.height(), &u_n,
[00540] &v_n_end);
[00541] normalized_coord_range = v_n_end - v_n_first;
[00542] range_in_pixel_space = calibration_.height();
[00543] }
[00544] bool readout_reverse_direction = false;
[00545] if (calibration_.rolling_shutter_direction() ==
[00546] CameraCalibration::RIGHT_TO_LEFT ||
[00547] calibration_.rolling_shutter_direction() ==
[00548] CameraCalibration::BOTTOM_TO_TOP) {
[00549] readout_reverse_direction = true;
[00550] }
[00551] rolling_shutter_state_->readout_time_factor =
[00552] readout_reverse_direction ? -readout_time / normalized_coord_range
[00553] : readout_time / normalized_coord_range;
[00554] rolling_shutter_state_->readout_time_factor_pixel =
[00555] readout_reverse_direction ? -readout_time / range_in_pixel_space
[00556] : readout_time / range_in_pixel_space;
[00557] rolling_shutter_state_->n_tfm_cam0 = n_tfm_vehicle0 * vehicle_tfm_cam;
[00558] const Velocity& velocity = camera_image.velocity();
[00559] // Compute cam_omega_cam0, n_vel_cam0.
[00560] const Eigen::Vector3d n_vel_vehicle =
[00561] Eigen::Vector3d{velocity.v_x(), velocity.v_y(), velocity.v_z()};
[00562] const Eigen::Vector3d vehicle_omega_vehicle =
[00563] Eigen::Vector3d{velocity.w_x(), velocity.w_y(), velocity.w_z()};
[00564] const Eigen::Vector3d n_omega_vehicle =
[00565] n_tfm_vehicle0.rotation() * vehicle_omega_vehicle;
[00566] const Eigen::Vector3d cam_omega_cam0 =
[00567] vehicle_tfm_cam.rotation().transpose() * vehicle_omega_vehicle;
[00568] rolling_shutter_state_->skew_omega = SkewSymmetric(cam_omega_cam0);
[00569] // Need to compensate velocity lever arm effect.
[00570] rolling_shutter_state_->n_vel_cam0 =
[00571] n_vel_vehicle + SkewSymmetric(n_omega_vehicle) *
[00572] n_tfm_vehicle0.rotation() *
[00573] vehicle_tfm_cam.translation();
[00574] rolling_shutter_state_->cam0_dcm_n =
[00575] rolling_shutter_state_->n_tfm_cam0.rotation().transpose();
[00576] rolling_shutter_state_->skew_omega_dcm =
[00577] -rolling_shutter_state_->skew_omega * rolling_shutter_state_->cam0_dcm_n;
[00578] }
[00579] // In this function, we are solving a scalar nonlinear optimization problem:
[00580] // Min || t_h - IndexToTimeFromNormalizedCoord(Cam_p_f(t)) + t_offset ||
[00581] // over t_h where t_h is explained below.
[00582] // where Cam_p_f(t) = projection(n_p_f, n_tfm_cam(t))
[00583] // The timestamps involved in the optimization problem:
[00584] // t_capture: The timestamp the rolling shutter camera can actually catch the
[00585] // point landmark, (this defines which scanline the point landmark falls in the
[00586] // image).
[00587] // t_principal_point: The timestamp of the principal point.
[00588] // t_pose: The timestamp of the anchor pose.
[00589] // Now we can define:
[00590] // t_offset := t_pose - t_principal_point.
[00591] // t_h := t_capture - t_pose.
[00592] // IndexToTimeFromNormalizedCoord(normalized_coord) := t_capture -
[00593] // t_principal_point.
[00594] // For this optimization problem we have the equality:
[00595] // t_h - IndexToTimeFromNormalizedCoord(.) + t_offset = 0
[00596] // This is efficient because it is a 1-dim problem, and typically converges in
[00597] // 2-3 iterations.
[00598] // To get the best performance and factor in the fact that our camera has little
[00599] // lens distortion, the IndexToTime(.) function is done in the normalized
[00600] // coordinate space instead of going to the distortion space. The testing
[00601] // results show that we get sufficiently good results already in normalized
[00602] // coordinate space.
[00603] bool CameraModel::WorldToImage(double x, double y, double z,
[00604] bool check_image_bounds, double* u_d,
[00605] double* v_d) const {
[00606] if (calibration_.rolling_shutter_direction() ==
[00607] CameraCalibration::GLOBAL_SHUTTER) {
[00608] return WorldToImageGlobalShutter(x, y, z, check image bounds, u_d, v_d);
[00609] }
[00610] // The initial guess is the center of the image.
[00611] double t_h = 0.;
[00612] const Eigen::Vector3d n_pos_f{x, y, z};
[00613] size_t iter_num = 0;
[00614] // This threshold roughly corresponds to sub-pixel error for our camera
[00615] // because the readout time per scan line is in the order of 1e-5 seconds.
[00616] // Of course this number varies with the image size as well.
[00617] constexpr double kThreshold = 1e-5; // seconds.
[00618] constexpr size_t kMaxIterNum = 4;
[00619] Eigen::Vector2d normalized_coord;
[00620] double residual = 2 * kThreshold;
[00621] double jacobian = 0.;
[00622] while (std::fabs(residual) > kThreshold && iter_num < kMaxIterNum) {
[00623] if (!ComputeResidualAndJacobian(n_pos_f, t_h, &normalized_coord,
&residual,
[00624] &jacobian)) {
[00625] return false;
[00626] }
[00627] // Solve for delta_t.
[00628] const double delta_t = -residual / jacobian;
[00629] t_h += delta_t;
[00630] ++iter_num;
[00631] }
[00632] // Get normalized coordinate.
[00633] if (!ComputeResidualAndJacobian(n_pos_f, t_h, &normalized_coord,
&residual,
[00634] /*jacobian=*/nullptr)) {
[00635] return false;
[00636] }
[00637] if (!DirectionToImage(normalized_coord(0), normalized_coord(1), u_d, v_d)) {
[00638] return false;
[00639] }
[00640] // If requested, check if the returned pixel is inside the image.
[00641] if (check_image_bounds) {
[00642] return InImage(*u_d, *v_d);
[00643] }
[00644] return true;
[00645] }
[00646] void CameraModel::ImageToWorld(double u_d, double v_d, double depth, double* x,
[00647] double* y, double* z) const {
[00648] if (calibration_.rolling_shutter_direction() ==
[00649] CameraCalibration::RollingShutterReadOutDirection::GLOBAL_SHUTTER) {
[00650] ImageToWorldGlobalShutter(u_d, v_d, depth, x, y, z);
[00651] return;
[00652] }
[00653] const auto& rolling_shutter_state = *rolling_shutter_state_;
[00654] // Interpolates the pose of camera.
[00655] const double pixel_spacing =
[00656] rolling_shutter_state.readout_horizontal_direction
[00657] ? u_d - rolling_shutter_state.principal_point(0)
[00658] : v_d - rolling_shutter_state.principal_point(1);
[00659] const double t_h =
[00660] rolling_shutter_state.readout_time_factor_pixel * pixel_spacing -
[00661] rolling_shutter_state.t_pose_offset;
[00662] const Eigen::Matrix3d cam_dcm_n = rolling_shutter_state.cam0_dcm_n +
[00663] t_h * rolling_shutter_state.skew_omega_dcm;
[00664] const Eigen::Vector3d n_pos_cam =
[00665] rolling_shutter_state.n_tfm_cam0.translation() +
[00666] t_h * rolling_shutter_state.n_vel_cam0;
[00667] // Projects back to world frame.
[00668] double u_n = 0, v_n = 0;
[00669] ImageToDirection(u_d, v_d, &u_n, &v_n);
[00670] const Eigen::Vector3d cam_pos_f{depth, -u_n * depth, -v_n * depth};
[00671] const Eigen::Vector3d n_pos_f = cam_dcm_n.transpose() * cam_pos_f + n_pos_cam;
[00672] *x = n_pos_f(0);
[00673] *y = n_pos_f(1);
[00674] *z = n_pos_f(2);
[00675] }
[00676] void CameraModel::ImageToWorldGlobalShutter(double u_d, double v_d,
[00677] double depth, double* x, double* y,
[00678] double* z) const { [00679] CHECK(x);
[00680] CHECK(y);
[00681] CHECK(z);
[00682] double u_n = 0.0, v_n = 0.0;
[00683] ImageToDirection(u_d, v_d, &u_n, &v_n);
[00684] const Eigen::Vector3d wp = global_shutter_state_->n_tfm_cam0 *
[00685] Eigen::Vector3d(depth, -u_n * depth, -v_n * depth);
[00686] *x = wp(0);
[00687] *y = wp(1);
[00688] *z = wp(2);
[00689] }
[00690] bool CameraModel::CameraToImage(double x, double y, double z,
[00691] bool check_image_bounds, double* u_d,
[00692] double* v_d) const {
[00693] // Return if the 3D point is behind the camera,
[00694] if (x <= 0.0) {
[00695] *u_d = -1.0;
[00696] *v_d = -1.0;
[00697] return false;
[00698] }
[00699] // Convert the 3D point to a direction vector. If the distortion is out of
[00700] // the limits, still compute u d and v d but return false,
[00701] const double u = -y / x;
[00702] const double v = -z / x;
[00703] if (!DirectionToImage(u, v, u_d, v_d)) return false;
[00704] // If requested, check if the projected pixel is inside the image,
[00705] return check image bounds ? InImage(*u_ d, *v_d) : true;
[00706] }
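For orientation, CameraToImage above assumes a camera frame with +x pointing out of the lens, so a point on the optical axis maps to u = v = 0. The short worked example below applies the same two divisions to one point; the lateral axis convention (+y left, +z up) is an assumption made for this illustration and should be checked against the dataset documentation.

#include <cstdio>

int main() {
  // A point 20 m ahead, 2 m to the +y side and 0.5 m below the optical axis.
  const double x = 20.0, y = 2.0, z = -0.5;
  const double u = -y / x;  // -0.10 in normalized image coordinates.
  const double v = -z / x;  //  0.025 in normalized image coordinates.
  // DirectionToImage would then apply distortion and the intrinsics,
  // roughly u_d ~= u * f_u + c_u and v_d ~= v * f_v + c_v when distortion is small.
  std::printf("u = %.3f, v = %.3f\n", u, v);
  return 0;
}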
[00707] bool CameraModel::InImage(double u, double v) const {
[00708] const double max_u = static_cast<double>(calibration_.width());
[00709] const double max_v = static_cast<double>(calibration_.height());
[00710] return u >= 0.0 && u < max_u && v >= 0.0 && v < max_v;
[00711] }
[00712] bool CameraModel::WorldToImageGlobalShutter(double x, double y, double z,
[00713] bool check_image_bounds,
[00714] double* u_d, double* v_d) const {
[00715] CHECK(u_d);
[00716] CHECK(v_d);
[00717] const Eigen::Vector3d cp =
[00718] global_shutter_state_->cam_tfm_n * Eigen::Vector3d(x, y, z);
[00719] return CameraToImage(cp(0), cp(1), cp(2), check_image_bounds, u_d, v_d);
[00720] }
[00721] void CameraModel::ImageToDirection(double u_d, double v_d, double* u_n,
[00722] double* v_n) const {
[00723] const double f_u = calibration_.intrinsic(0);
[00724] const double f_v = calibration_.intrinsic(1);
[00725] const double c_u = calibration_.intrinsic(2);
[00726] const double c_v = calibration_.intrinsic(3);
[00727] // Initial guess, as a direction vector.
[00728] const double u_nd = (u_d - c_u) / f_u;
[00729] const double v_nd = (v_d - c_v) / f_v;
[00730] // Iteratively refine estimate.
[00731] IterateUndistortion(calibration_, u_nd, v_nd, u_n, v_n);
[00732] }
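IterateUndistortion itself is not reproduced in this listing. A common way to invert the distortion that DirectionToImage applies below is a fixed-point iteration; the sketch that follows assumes the same radial (k1, k2, k5) and tangential (k3, k4) terms as that function and is offered only as an illustrative possibility, not as the actual helper.

#include <cmath>

// Fixed-point undistortion sketch for a Brown-Conrady style model.
// (u_nd, v_nd) are distorted normalized coordinates; (u_n, v_n) receive the
// undistorted estimate. Coefficient naming mirrors the listing: k1, k2, k5 are
// radial terms and k3, k4 are tangential terms.
void IterateUndistortionSketch(double u_nd, double v_nd, double k1, double k2,
                               double k3, double k4, double k5, int num_iters,
                               double* u_n, double* v_n) {
  double u = u_nd, v = v_nd;  // Initial guess: the distorted coordinates.
  for (int i = 0; i < num_iters; ++i) {
    const double r2 = u * u + v * v;
    const double r_d = 1.0 + k1 * r2 + k2 * r2 * r2 + k5 * r2 * r2 * r2;
    const double du = 2.0 * k3 * u * v + k4 * (r2 + 2.0 * u * u);
    const double dv = k3 * (r2 + 2.0 * v * v) + 2.0 * k4 * u * v;
    u = (u_nd - du) / r_d;
    v = (v_nd - dv) / r_d;
  }
  *u_n = u;
  *v_n = v;
}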
[00733] bool CameraModel::DirectionToImage(double u_n, double v_n, double* u_d,
[00734] double* v_d) const {
[00735] const double f_u = calibration_.intrinsic(0);
[00736] const double f_v = calibration_.intrinsic(1);
[00737] const double c_u = calibration_.intrinsic(2);
[00738] const double c_v = calibration_.intrinsic(3);
[00739] const double k1 = calibration_.intrinsic(4);
[00740] const double k2 = calibration_.intrinsic(5);
[00741] const double k3 = calibration_.intrinsic(6);  // same as p1 in OpenCV.
[00742] const double k4 = calibration_.intrinsic(7);  // same as p2 in OpenCV.
[00743] const double k5 = calibration_.intrinsic(8);  // same as k3 in OpenCV.
[00744] // (u, v, 1) is a normalized direction relative to ROI and principal point.
[00745] const double r2 = u_n * u_n + v_n * v_n;
[00746] const double r4 = r2 * r2;
[00747] const double r6 = r4 * r2;
[00748] // Radial distortion factor based on radius. This is the same for both the
[00749] // perspective and the fisheye camera model.
[00750] const double r_d = 1.0 + k1 * r2 + k2 * r4 + k5 * r6;
[00751] double u_nd, v_nd;
[00752] // If the radial distortion is too large, the computed coordinates will
[00753] // be unreasonable (might even flip signs).
[00754] if (r_d < kMinRadialDistortion || r_d > kMaxRadialDistortion) {
[00755] // Check on which side of the image we overshoot, and set the coordinates
[00756] // out of the image bounds accordingly. The coordinates will end up in a
[00757] // viable range and direction but the exact values cannot be trusted.
[00758] const double roi_clipping_radius =
[00759] std::hypot(calibration_.width(), calibration_.height());
[00760] const double r2_sqrt_rcp = 1.0 / std::sqrt(r2);
[00761] *u_d = u_n * r2_sqrt_rcp * roi_clipping_radius + c_u;
[00762] *v_d = v_n * r2_sqrt_rcp * roi_clipping_radius + c_v;
[00763] return false;
[00764] }
[00765] // Normalized distorted camera coordinates.
[00766] u_nd = u_n * r_d + 2.0 * k3 * u_n * v_n + k4 * (r2 + 2.0 * u_n * u_n);
[00767] v_nd = v_n * r_d + k3 * (r2 + 2.0 * v_n * v_n) + 2.0 * k4 * u_n * v_n;
[00768] // Un-normalize, un-center, and un-correct for ROI. Output coordinates are in
[00769] // the current ROI frame.
[00770] *u_d = u_nd * f_u + c_u;
[00771] *v_d = v_nd * f_v + c_v;
[00772] return true;
[00773] }
[00774] bool CameraModel::ComputeResidualAndJacobian(const Eigen::Vector3d& n_pos_f,
[00775] double t_h,
[00776] Eigen::Vector2d* normalized_coord,
[00777] double* residual,
[00778] double* jacobian) const {
[00779] // The Jacobian is allowed to be a nullptr.
[00780] CHECK(normalized_coord);
[00781] CHECK(residual);
[00782] CHECK(rolling_shutter_state_);
[00783] const RollingShutterState& rolling_shutter_state = *rolling_shutter_state_;
[00784]
[00785] const Eigen::Matrix3d cam_dcm_n = rolling_shutter_state.cam0_dcm_n +
[00786] t_h * rolling_shutter_state.skew_omega_dcm;
[00787] const Eigen::Vector3d n_pos_cam =
[00788] rolling_shutter_state.n_tfm_cam0.translation() +
[00789] t_h * rolling_shutter_state.n_vel_cam0;
[00790] const Eigen::Vector3d cam_pos_f = cam_dcm_n * (n_pos_f - n_pos_cam);
[00791] if (cam_pos_f(0) <= 0) {
[00792] // The point is behind camera.
[00793] return false;
[00794] }
[00795] (*normalized_coord)(0) = -cam_pos_f(1) / cam_pos_f(0);
[00796] (*normalized_coord)(1) = -cam_pos_f(2) / cam_pos_f(0);
[00797]
[00798] const double normalized_spacing =
[00799] rolling_shutter_state.readout_horizontal_direction
[00800] ? (*normalized_coord)(0)
[00801] : (*normalized_coord)(1);
[00802] *residual = t_h -
[00803] normalized_spacing * rolling_shutter_state.readout_time_factor +
[00804] rolling_shutter_state.t_pose_offset;
[00805] if (jacobian) {
[00806] // The following is based on a reduced form of the derivative. The details
[00807] // of the way to derive that derivative are skipped here.
[00808] const Eigen::Vector3d jacobian_landmark_to_index =
[00809] -cam_dcm_n * rolling_shutter_state.n_vel_cam0 -
[00810] rolling_shutter_state.skew_omega * cam_pos_f;
[00811]
[00812] const double jacobian_combined =
[00813] rolling_shutter_state.readout_horizontal_direction
[00814] ? rolling_shutter_state.readout_time_factor / cam_pos_f(0) *
[00815] ((*normalized_coord)(0) * jacobian_landmark_to_index(0) -
[00816] jacobian_landmark_to_index(1))
[00817] : rolling_shutter_state.readout_time_factor / cam_pos_f(0) *
[00818] ((*normalized_coord)(1) * jacobian_landmark_to_index(0) -
[00819] jacobian_landmark_to_index(2));
[00820] *jacobian = 1. - jacobian_combined;
[00821] }
[00822] return true;
[00823] }
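In equation form, the residual assembled above and the Newton step used in WorldToImage can be restated as:

r(t_h) = t_h - \kappa\, s(t_h) + \tau, \qquad
\frac{\partial r}{\partial t_h} = 1 - \frac{\kappa}{c_0}\left(s(t_h)\, J_0 - J_k\right), \qquad
t_h \leftarrow t_h - r(t_h) \Big/ \frac{\partial r}{\partial t_h},

where \kappa is readout_time_factor, \tau is t_pose_offset, s(t_h) is the normalized horizontal or vertical coordinate of the projected point at the extrapolated pose, c_0 is the forward component cam_pos_f(0), J is jacobian_landmark_to_index, and k is 1 for horizontal readout and 2 for vertical readout. This is a restatement of the reduced form computed by the code, not an independent derivation.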
[00824] } // namespace open_dataset
[00825] } // namespace waymo
[00826] // Filename: camera_model_test.cc
[00827] // Copyright (c) 2019 Waymo LLC. All rights reserved.
[00828] #include "third_party/camera/camera_model.h"
[00829] #include "google/protobuf/text format.h"
[00830] #include <gtest/gtest.h>
[00831] #include "absl/memory/memory.h"
[00832] #include "waymo_open_dataset/dataset.pb.h"
[00833] namespace waymo {
[00834] namespace open_dataset {
[00835] namespace {
[00836] class CameraModelTest : public ::testing::Test {
[00837] public:
[00838] CameraModelTest() {
[00839] static constexpr char kCalibrationStr[] = R"Text(
[00840] name: FRONT
[00841] intrinsic: 2055.55614936
[00842] intrinsic: 2055.55614936
[00843] intrinsic: 939.657469886
[00844] intrinsic: 641.072182194
[00845] intrinsic: 0.032316008498
[00846] intrinsic: -0.321412482553
[00847] intrinsic: 0.000793258395371
[00848] intrinsic: -0.000625749354133
[00849] intrinsic: 0.0
[00850] extrinsic {
[00851] transform: 0.999892684989
[00852] transform: -0.00599320840002
[00853] transform: 0.0133678704017
[00854] transform: 1.53891424471
[00855] transform: 0.00604223652133
[00856] transform: 0.999975156055
[00857] transform: -0.0036302411765
[00858] transform: -0.0236339408393
[00859] transform: -0.0133457814992
[00860] transform: 0.00371062343188
[00861] transform: 0.999904056092
[00862] transform: 2.11527057298
[00863] transform: 0.0
[00864] transform: 0.0
[00865] transform: 0.0
[00866] transform: 1.0
[00867] }
[00868] width: 1920
[00869] height: 1280
[00870] rolling_shutter_direction: LEFT_TO_RIGHT
[00871] )Text";
[00872] google::protobuf::TextFormat::ParseFromString(kCalibrationStr,
&calibration_);
[00873] static constexpr char kCameraImageStr[] = R"Text(
[00874] name: FRONT
[00875] image: "dummy"
[00876] pose {
[00877] transform: -0.913574384152
[00878] transform: -0.406212760482
[00879] transform: -0.0193141875914
[00880] transform: -4069.03497872
[00881] transform: 0.406637479491
[00882] transform: -0.913082565675
[00883] transform: -0.0304333457449
[00884] transform: 11526.3118079
[00885] transform: -0.00527303457417
[00886] transform: -0.0356569976572
[00887] transform: 0.999350175676
[00888] transform: 86.504
[00889] transform: 0.0
[00890] transform: 0.0
[00891] transform: 0.0
[00892] transform: 1.0
[00893] }
[00894] velocity {
[00895] v_x: -3.3991382122
[00896] v_y: 1.50920391083
[00897] v_z: -0.0169006548822
[00898] w_x: 0.00158374733292
[00899] w_y: 0.00212493073195
[00900] w_z: -0.0270753838122
[00901] }
[00902] pose_timestamp: 1553640277.26
[00903] shutter: 0.000424383993959
[00904] camera_trigger_time: 1553640277.23
[00905] camera_readout_done_time: 1553640277.28
[00906] )Text";
[00907] google::protobuf::TextFormat::ParseFromString(kCameraImageStr,
&camera_image_);
[00908] }
[00909] protected:
[00910] CameraCalibration calibration_;
[00911] CameraImage camera_image_;
[00912] };
[00913] TEST_F(CameraModelTest, RollingShutter) {
[00914] CameraModel camera_model(calibration_);
[00915] camera_model.PrepareProjection(camera_image_);
[00916] double x, y, z;
[00917] camera_model.ImageToWorld(100, 1000, 20, &x, &y, &z);
[00918] double u_d, v_d;
[00919] EXPECT_TRUE(camera_model.WorldToImage(x, y, z,
/*check_image_bounds=*/true,
[00920] &u_d, &v_d));
[00921] EXPECT_NEAR(u_d, 100, 0.1);
[00922] EXPECT_NEAR(v_d, 1000, 0.1);
[00923] EXPECT_NEAR(x, -4091.88016, 0.1);
[00924] EXPECT_NEAR(y, 11527.42299, 0.1);
[00925] EXPECT_NEAR(z, 84.46667, 0.1);
[00926] }
[00927] TEST_F(CameraModelTest, GlobalShutter) {
[00928] calibration_.set_rolling_shutter_direction(CameraCalibration::GLOBAL_SHUTTER);
[00929] CameraModel camera_model(calibration_);
[00930] camera_model.PrepareProjection(camera_image_);
[00931] double x, y, z;
[00932] camera_model.ImageToWorld(100, 1000, 20, &x, &y, &z);
[00933]
[00934] double u_d, v_d;
[00935] EXPECT_TRUE(camera_model.WorldToImage(x, y, z,
/*check_image_bounds=*/true,
[00936] &u_d, &v_d));
[00937] EXPECT_NEAR(u_d, 100, 0.1);
[00938] EXPECT_NEAR(v_d, 1000, 0.1);
[00939] EXPECT_NEAR(x, -4091.97180, 0.1);
[00940] EXPECT_NEAR(y, 11527.48092, 0.1);
[00941] EXPECT_NEAR(z, 84.46586, 0.1);
[00942] }
[00943] } //namespace
[00944] } // namespace open_dataset
[00945] } // namespace waymo
[00946] // Filename: camera_model_ops.cc
[00947] // Copyright (c) 2019 Waymo LLC. All rights reserved.
[00948] #include <glog/logging.h>
[00949] #include "tensorflow/core/framework/op.h"
[00950] #include "tensorflow/core/framework/op_kemel.h"
[00951] #include "tensorflow/core/fiamework/shape inference.h"
[00952] #include "tensorflow/core/fiamework/tensor.h"
[00953] #include "tensorflow/core/framework/tensor_types.h"
[00954] #include "tensorflow/core/framework/types.h"
[00955] #include "tensorflow/core/fra mework/types.pb.h"
[00956] #include "tensorflow/core/lib/core/status.h"
[00957] #include " waymo_open_dataset/ dataset pb .h"
[00958] #include "third_party/camera/camera_model.h"
[00959] namespace tensorflow {
[00960] namespace {
[00961] namespace co = ::waymo::open_dataset;
[00962] // Length of the intrinsic vector.
[00963] constexpr int kIntrinsicLen = 9;
[00964] // Length of the camera metadata vector.
[00965] constexpr int kMetadataLen = 3;
[00966] // Length of the camera image metadata vector.
[00967] constexpr int kCameraImageMetadataLen = 26;
[00968] struct Input {
[00969] const Tensor* extrinsic = nullptr;
[00970] const Tensor* intrinsic = nullptr;
[00971] const Tensor* metadata = nullptr;
[00972] const Tensor* camera_image_metadata = nullptr;
[00973] const Tensor* input_coordinate = nullptr;
[00974] };
[00975] // Parse input tensors to protos.
[00976] void ParseInput(const Input& input, co::CameraCalibration* calibration_ptr,
[00977] co::CameraImage* image_ptr) {
[00978] auto& calibration = *calibration_ptr;
[00979] auto& image = *image_ptr;
[00980] CHECK_EQ(input.extrinsic->dim_size(0), 4);
[00981] CHECK_EQ(input.extrinsic->dim_size(1), 4);
[00982] for (int i = 0; i < 4; ++i) {
[00983] for (int j = 0; j < 4; ++j) {
[00984] calibration.mutable_extrinsic()->add_transform(
[00985] input.extrinsic->matrix<float>()(i, j));
[00986] }
[00987] }
[00988] CHECK_EQ(input.intrinsic->dim_size(0), kIntrinsicLen);
[00989] for (int i = 0; i < kIntrinsicLen; ++i) {
[00990] calibration.add_intrinsic(input.intrinsic->vec<float>()(i));
[00991] }
[00992] CHECK_EQ(input.metadata->dim_size(0), kMetadataLen);
[00993] calibration.set_width(input.metadata->vec<int32>()(0));
[00994] calibration.set_height(input.metadata->vec<int32>()(1));
[00995] calibration.set_rolling_shutter_direction(
[00996] static_cast<co::CameraCalibration::RollingShutterReadOutDirection>(
[00997] input.metadata->vec<int32>()(2)));
[00998] CHECK_EQ(input.camera_image_metadata->dim_size(0), kCameraImageMetadataLen);
[00999] int idx = 0;
[001000] const auto& cim = input.camera_image_metadata->vec<float>();
[001001] for (; idx < 16; ++idx) {
[001002] image.mutable_pose()->add_transform(cim(idx));
[001003] }
[001004] image.mutable_velocity()->set_v_x(cim(idx++));
[001005] image.mutable_velocity()->set_v_y(cim(idx++));
[001006] image.mutable_velocity()->set_v_z(cim(idx++));
[001007] image.mutable_velocity()->set_w_x(cim(idx++));
[001008] image.mutable_velocity()->set_w_y(cim(idx++));
[001009] image.mutable_velocity()->set_w_z(cim(idx++));
[001010] image.set_pose_timestamp(cim(idx++));
[001011] image.set_shutter(cim(idx++));
[001012] image.set_camera_trigger_time(cim(idx++));
[001013] image.set_camera_readout_done_time(cim(idx++));
[001014] }
[001015] class WorldToImageOp final : public OpKernel {
[001016] public:
[001017] explicit WorldToImageOp(OpKernelConstruction* ctx) : OpKernel(ctx) {}
[001018] void Compute(OpKernelContext* ctx) override {
[001019] Input input;
[001020] OP_REQUIRES_OK(ctx, ctx->input("extrinsic", &input.extrinsic));
[001021] OP_REQUIRES_OK(ctx, ctx->input("intrinsic", &input.intrinsic));
[001022] OP_REQUIRES_OK(ctx, ctx->input("metadata", &input.metadata));
[001023] OP_REQUIRES_OK(
[001024] ctx, ctx->input("camera_image_metadata", &input.camera_image_metadata));
[001025] OP_REQUIRES_OK(ctx,
[001026] ctx->input("global_coordinate", &input.input_coordinate));
[001027] co::CameraCalibration calibration;
[001028] co::CameraImage image;
[001029] ParseInput(input, &calibration, &image);
[001030]
[001031] co::CameraModel model(calibration);
[001032] model.PrepareProjection(image);
[001033] const int num_points = input.input_coordinate->dim_size(0);
[001034] CHECK_EQ(3, input.input_coordinate->dim_size(1));
[001035] Tensor image_coordinates(DT_FLOAT, {num_points, 3});
[001036] for (int i = 0; i < num_points; ++i) {
[001037] double u_d = 0.0;
[001038] double v_d = 0.0;
[001039] const bool valid =
[001040] model.WorldToImage(input.input_coordinate->matrix<float>()(i, 0),
[001041] input.input_coordinate->matrix<float>()(i, 1),
[001042] input.input_coordinate->matrix<float>()(i, 2),
[001043] /*check_image_bounds=*/false, &u_d, &v_d);
[001044] image_coordinates.matrix<float>()(i, 0) = u_d;
[001045] image_coordinates.matrix<float>()(i, 1) = v_d;
[001046] image_coordinates.matrix<float>()(i, 2) = static_cast<float>(valid);
[001047] }
[001048] ctx->set_output(0, image_coordinates);
[001049] }
[001050] };
[001051] REGISTER_KERNEL_BUILDER(Name("WorldToImage").Device(DEVICE_CPU),
[001052] WorldToImageOp);
[001053] class ImageToWorldOp final : public OpKernel {
[001054] public:
[001055] explicit ImageToWorldOp(OpKernelConstruction* ctx) : OpKernel(ctx) {}
[001056] void Compute(OpKernelContext* ctx) override {
[001057] Input input;
[001058] OP_REQUIRES_OK(ctx, ctx->input("extrinsic", &input.extrinsic));
[001059] OP_REQUIRES_OK(ctx, ctx->input("intrinsic", &input.intrinsic));
[001060] OP_REQUIRES_OK(ctx, ctx->input("metadata", &input.metadata));
[001061] OP_REQUIRES_OK(
[001062] ctx, ctx->input("camera_image_metadata", &input.camera_image_metadata));
[001063] OP_REQUIRES_OK(ctx,
[001064] ctx->input("image_coordinate", &input.input_coordinate));
[001065] co::CameraCalibration calibration;
[001066] co::CameraImage image;
[001067] ParseInput(input, &calibration, &image);
[001068] co::CameraModel model(calibration);
[001069] model.PrepareProjection(image);
[001070] const int num_points = input.input_coordinate->dim_size(0);
[001071] CHECK_EQ(3, input.input_coordinate->dim_size(1));
[001072] Tensor global_coordinates(DT_FLOAT, {num_points, 3});
[001073] for (int i = 0; i < num_points; ++i) {
[001074] double x = 0.0;
[001075] double y = 0.0;
[001076] double z = 0.0;
[001077] model.ImageToWorld(input.input_coordinate->matrix<float>()(i, 0),
[001078] input.input_coordinate->matrix<float>()(i, 1),
[001079] input.input_coordinate->matrix<float>()(i, 2), &x, &y,
[001080] &z);
[001081] global_coordinates.matrix<float>()(i, 0) = x;
[001082] global_coordinates.matrix<float>()(i, 1) = y;
[001083] global_coordinates.matrix<float>()(i, 2) = z;
[001084] }
[001085] ctx->set_output(0, global_coordinates);
[001086] }
[001087] };
[001088] REGISTER_KERNEL_BUILDER(Name("ImageToWorld").Device(DEVICE_CPU),
[001089] ImageToWorldOp);
[001090] REGISTER_OP("WorldToImage")
[001091] .Input("extrinsic: float")
[001092] .Input("intrinsic: float")
[001093] .Input("metadata: int32")
[001094] Input("camera_image_metadata: float")
[001095] .Input("global_coordinate: float")
[001096] .Output("image_coordinate: float")
[001097] .SetShapeFn([](shape_inference::InferenceContext* c) {
[001098] return Status::OK();
[001099] })
[001100] .Doc(R"doc(
[001101] Maps global coordinates to image coordinates. See dataset.proto for more
[001102] description of each field.
[001103] extrinsic: [4, 4] camera extrinsic matrix. CameraCalibration::extrinsic.
[001104] intrinsic: [9] camera intrinsic matrix. CameraCalibration::intrinsic.
[001105] metadata: [3] CameraCalibration::[width, height, rolling_shutter_direction].
[001106] camera_image_metadata: [16 + 6 + 1 + 1 + 1 + 1]=[26] tensor.
[001107] CameraImage::[pose(16), velocity(6), pose_timestamp(1), shutter(1),
[001108] camera_trigger_time(1), camera_readout_done_time(1)].
[001109] global_coordinate: [N, 3] float tensor. Points in global frame.
[001110] image_coordinate: [N, 3] float tensor. [N, 0:2] are points in image frame.
[001111] The points can be outside of the image. The last channel [N, 2] tells whether
[001112] a projection is valid or not. 0 means invalid. 1 means valid. A projection
[001113] can be invalid if the point is behind the camera or if the radial distortion
[001114] is too large.
[001115] )doc");
[001116] REGISTER_OP("ImageToWorld")
[001117] .Input("extrinsic: float")
[001118] .Input("intrinsic: float")
[001119] ,Input("metadata: int32")
[001120] .Input("camera_image_metadata: float")
[001121] .Input("image_coordinate: float")
[001122] Output("global_coordinate: float")
[001123] .SetShapeFn([](shape_inference::InferenceContext* c) {
[001124] return Status::OK();
[001125] })
[001126] .Doc(R"doc(
[001127] Maps image coordinates to global coordinates. See dataset.proto for more
[001128] description of each field.
[001129] extrinsic: [4, 4] camera extrinsic matrix. CameraCalibration::extrinsic.
[001130] intrinsic: [9] camera intrinsic matrix. CameraCalibration::intrinsic.
[001131] metadata: [3] CameraCalibration::[width, height, rolling_shutter_direction].
[001132] camera_image_metadata: [16 + 6 + 1 + 1 + 1 + 1]=[26] tensor.
[001133] CameraImage::[pose(16), velocity(6), pose_timestamp(1), shutter(1),
[001134] camera_trigger_time(1), camera_readout_done_time(1)].
[001135] image_coordinate: [N, 3] float tensor. Points in image frame with depth.
[001136] global_coordinate: [N, 3] float tensor. Points in global frame.
[001137] )doc");
[001138] } //namespace
[001139] } // namespace tensorflow
[001140] // Filename: py_camera_model_ops.py
[001141] // Copyright (c) 2019 Waymo LLC. All rights reserved.
[001142] """Camera model tensorflow ops python interface."""
[001143] from __future__ import absolute_import
[001144] from __future__ import division
[001145] from __future__ import print_function
[001146] import tensorflow as tf
[001147] camera_model_module = tf.load_op_library(
[001148] tf.compat.v1.resource_loader.get_path_to_datafile('camera_model_ops.so'))
[001149] world_to_image = camera_model_module.world_to_image
[001150] image_to_world = camera_model_module.image_to_world
[001151] // Filename: camera_model_ops_test.py
[001152] // Copyright (c) 2019 Waymo LLC. All rights reserved.
[001153] from __future__ import absolute_import
[001154] from __future__ import division
[001155] from __future__ import print_function
[001156] import tensorflow as tf
[001157] from google.protobuf import text_format
[001158] from waymo_open_dataset import dataset_pb2
[001159] from third_party.camera.ops import py_camera_model_ops
[001160]
[001161] class CameraModelOpsTest(tf.test.TestCase):
[001162] def _BuildInput(self):
[001163] """Builds input."""
[001164] calibration = dataset_pb2.CameraCalibration()
[001165] image = dataset_pb2.CameraImage()
[001166] calibration_text = """
[001167] name: FRONT
[001168] intrinsic: 2055.55614936
[001169] intrinsic: 2055.55614936
[001170] intrinsic: 939.657469886
[001171] intrinsic: 641.072182194
[001172] intrinsic: 0.032316008498
[001173] intrinsic: -0.321412482553
[001174] intrinsic: 0.000793258395371
[001175] intrinsic: -0.000625749354133
[001176] intrinsic: 0.0
[001177] extrinsic {
[001178] transform: 0.999892684989 [001179] transform: -0.00599320840002
[001180] transform: 0.0133678704017
[001181] transform: 1.53891424471
[001182] transform: 0.00604223652133
[001183] transform: 0.999975156055
[001184] transform: -0.0036302411765
[001185] transform: -0.0236339408393
[001186] transform: -0.0133457814992
[001187] transform: 0.00371062343188
[001188] transform: 0.999904056092
[001189] transform: 2.11527057298
[001190] transform: 0.0
[001191] transform: 0.0
[001192] transform: 0.0
[001193] transform: 1.0
[001194] }
[001195] width: 1920
[001196] height: 1280
[001197] rolling_shutter_direction: LEFT_TO_RIGHT
[001198] """
[001199] image_text = """
[001200] name: FRONT
[001201] image: "dummy"
[001202] pose {
[001203] transform: -0.913574384152
[001204] transform: -0.406212760482
[001205] transform: -0.0193141875914
[001206] transform: -4069.03497872
[001207] transform: 0.406637479491
[001208] transform: -0.913082565675
[001209] transform: -0.0304333457449
[001210] transform: 11526.3118079 [001211] transform: -0.00527303457417
[001212] transform: -0.0356569976572
[001213] transform: 0.999350175676
[001214] transform: 86.504
[001215] transform: 0.0
[001216] transform: 0.0
[001217] transform: 0.0
[001218] transform: 1.0
[001219] }
[001220] velocity {
[001221] v_x: -3.3991382122
[001222] v_y: 1.50920391083
[001223] v_z: -0.0169006548822
[001224] w_x: 0.00158374733292
[001225] w_y: 0.00212493073195
[001226] w_z: -0.0270753838122
[001227] }
[001228] pose_timestamp: 1553640277.26
[001229] shutter: 0.000424383993959
[001230] camera_trigger_time: 1553640277.23
[001231] camera_readout_done_time: 1553640277.28
[001232] """
[001233] text_format.Merge(calibration_text, calibration)
[001234] text_format.Merge(image_text, image)
[001235] return calibration, image
[001236] def testCameraModel(self):
[001237] calibration, image = self._BuildInput()
[001238] g = tf.Graph()
[001239] with g.as_default():
[001240] extrinsic = tf.reshape(
[001241] tf.constant(list(calibration.extrinsic.transform), dtype=tf.float32),
[001242] [4, 4])
[001243] intrinsic = tf.constant(list(calibration.intrinsic), dtype=tf.float32)
[001244] metadata = tf.constant([
[001245] calibration.width, calibration.height,
[001246] calibration.rolling_shutter_direction
[001247] ],
[001248] dtype=tf.int32)
[001249] camera_image_metadata = list(image.pose.transform)
[001250] camera_image_metadata.append(image.velocity.v_x)
[001251] camera_image_metadata.append(image.velocity.v_y)
[001252] camera_image_metadata.append(image.velocity.v_z)
[001253] camera_image_metadata.append(image.velocity.w_x)
[001254] camera_image_metadata.append(image.velocity.w_y)
[001255] camera_image_metadata.append(image.velocity.w_z)
[001256] camera_image_metadata.append(image.pose_timestamp)
[001257] camera_image_metadata.append(image.shutter)
[001258] camera_image_metadata.append(image.camera_trigger_time)
[001259] camera_image_metadata.append(image.camera_readout_done_time)
[001260] image_points = tf.constant([[100, 1000, 20], [150, 1000, 20]],
[001261] dtype=tf.float32)
[001262] global_points = py_camera_model_ops.image_to_world(
[001263] extrinsic, intrinsic, metadata, camera_image_metadata, image_points)
[001264] image_points_t = py_camera_model_ops.world_to_image(
[001265] extrinsic, intrinsic, metadata, camera_image_metadata, global_points)
[001266] with self.test_session(graph=g) as sess:
[001267] image_points, image_points_t, global_points = sess.run(
[001268] [image_points, image_points_t, global_points])
[001269] self.assertAllClose(
[001270] global_points, [[-4091.97180, 11527.48092, 84.46586],
[001271] [-4091.771, 11527.941, 84.48779]],
[001272] atol=0.1)
[001273] self.assertAllClose(image_points_t[:, 0:2], image_points[:, 0:2], atol=0.1)
[001274] self.assertAllClose(image_points_t[:, 2], [1.0, 1.0])
[001275] if __name__ == '__main__':
[001276] tf.compat.v1.disable_eager_execution()
[001277] tf.test.main()

Claims

CLAIMS What is claimed:
1. A method, comprising: obtaining an indication of a point in an environment of an autonomous vehicle; obtaining information about the location and motion of the autonomous vehicle within the environment; obtaining an image of a portion of the environment of the autonomous vehicle, wherein the image comprises a plurality of rows of pixels, and wherein the image was generated by a camera operating in a rolling shutter mode such that each row of pixels represents light sensed by the camera during a respective exposure time period; and mapping the point in the environment to a location within the image by: determining an initial estimated time, T0, that the camera sensed light from the point in the environment; determining N updated estimated times, wherein N ≥ 1, and wherein each updated estimated time, Ti, is determined by an update process comprising: determining, based on the information about the location and motion of the autonomous vehicle, a pose of the camera at the estimated time, Ti-1, based on the pose of the camera at the estimated time, Ti-1, projecting the point in the environment to a projected location within the image, evaluating a cost function that includes (i) a term based on the estimated time, Ti-1 and (ii) a term based on a mapping from the projected location to a time that the camera sensed light represented at the projected location within the image, and determining the updated estimated time, Ti, based on the evaluated cost function; and determining, based on the updated estimated time, TN, a location within the image that corresponds to the point in the environment.
2. The method of claim 1, further comprising: operating the autonomous vehicle, based on the determined location within the image, to navigate within the environment.
3. The method of claim 1, wherein obtaining the indication of the point in the environment of the autonomous vehicle comprises: operating a light detection and ranging (LIDAR) sensor of the autonomous vehicle to generate a plurality of LIDAR data points indicative of distances to one or more objects in the environment of the vehicle; and generating the indication of the point in the environment based on at least one LIDAR data point of the plurality of LIDAR data points.
4. The method of claim 1, wherein determining, based on the updated estimated time, TN, a location within the image that corresponds to the point in the environment comprises identifying a particular pixel, within the image, that corresponds to the point in the environment.
5. The method of claim 4, further comprising: determining, based on the indication of the point in the environment, a depth value for the point in the environment; and assigning the depth value to the particular pixel.
6. The method of claim 1, further comprising: identifying, based on the image, that a feature is present in the environment, wherein the feature is represented within the image at least in part by the particular pixel.
7. The method of claim 1, further comprising: normalizing the evaluated cost function by a Jacobian of the evaluated cost function, wherein determining the updated estimated time, Ti, based on the evaluated cost function comprises determining the updated estimated time, Ti, based on the normalized evaluated cost function.
8. The method of claim 1, wherein obtaining information about the location and motion of the autonomous vehicle within the environment comprises obtaining information about a first location of the autonomous vehicle at a first point in time and a second location of the autonomous vehicle at a second point in time, and wherein determining a pose of the camera at the estimated time comprises interpolating the information about the location of the autonomous vehicle at the at least two different points in time to the estimated time.
9. The method of claim 1, wherein obtaining information about the location and motion of the autonomous vehicle within the environment comprises obtaining information about the location and translational velocity of the autonomous vehicle at a particular point in time, and wherein determining a pose of the camera at the estimated time, Ti-1, comprises extrapolating the information about the location and translational velocity of the autonomous vehicle at the particular point in time to the estimated time, Ti-1.
10. The method of claim 1, wherein the update process additionally comprises determining whether the updated estimated time, Ti, differs from estimated time Ti-1 by less than a threshold amount, wherein determining, based on the updated estimated time, TN, the location within the image that corresponds to the point in the environment is performed responsive to at least one of (i) determining that the updated estimated time, TN, differs from estimated time TN-1 by less than a threshold amount or (ii) determining that the update process has been performed a threshold number of times.
11. A non-transitory computer readable medium having stored therein instructions executable by a computing device to cause the computing device to perform operations, wherein the operations comprise: obtaining an indication of a point in an environment of an autonomous vehicle; obtaining information about the location and motion of the autonomous vehicle within the environment; obtaining an image of a portion of the environment of the autonomous vehicle, wherein the image comprises a plurality of rows of pixels, and wherein the image was generated by a camera operating in a rolling shutter mode such that each row of pixels represents light sensed by the camera during a respective exposure time period; and mapping the point in the environment to a location within the image by: determining an initial estimated time, T0, that the camera sensed light from the point in the environment; determining N updated estimated times, wherein N ≥ 1, and wherein each updated estimated time, Ti, is determined by an update process comprising: determining, based on the information about the location and motion of the autonomous vehicle, a pose of the camera at the estimated time, Ti-1, based on the pose of the camera at the estimated time, Ti-1, projecting the point in the environment to a projected location within the image, evaluating a cost function that includes (i) a term based on the estimated time, Ti-1 and (ii) a term based on a mapping from the projected location to a time that the camera sensed light represented at the projected location within the image, and determining the updated estimated time, Ti, based on the evaluated cost function; and determining, based on the updated estimated time, TN, a location within the image that corresponds to the point in the environment.
12. The non-transitory computer readable medium of claim 11, wherein the operations further comprise: operating the autonomous vehicle, based on the determined location within the image, to navigate within the environment.
13. The non-transitory computer readable medium of claim 11, wherein determining, based on the updated estimated time, TN, a location within the image that corresponds to the point in the environment comprises identifying a particular pixel, within the image, that corresponds to the point in the environment.
14. The non-transitory computer readable medium of claim 13, wherein the operations further comprise: determining, based on the indication of the point in the environment, a depth value for the point in the environment; and assigning the depth value to the particular pixel.
15. The non-transitory computer readable medium of claim 11, wherein obtaining information about the location and motion of the autonomous vehicle within the environment comprises obtaining information about the location and translational velocity of the autonomous vehicle at a particular point in time, and wherein determining a pose of the camera at the estimated time, Ti-1, comprises extrapolating the information about the location and translational velocity of the autonomous vehicle at the particular point in time to the estimated time, Ti-1.
16. A system, comprising: a light detection and ranging (LIDAR) sensor coupled to a vehicle; a camera coupled to the vehicle, wherein the camera is configured to obtain image data indicative of the environment of the vehicle; and a controller, wherein the controller is operably coupled to the LIDAR sensor and the camera, and wherein the controller comprises one or more processors configured to perform operations comprising: operating the LIDAR sensor to generate a plurality of LIDAR data points indicative of distances to one or more objects in the environment of the vehicle; generating an indication of a point in the environment based on at least one LIDAR data point of the plurality of LIDAR data points; operating the camera in a rolling shutter mode to generate an image of a portion of the environment of the vehicle, wherein the image comprises a plurality of rows of pixels, and wherein each row of pixels represents light sensed by the camera during a respective exposure time period; obtaining information about the location and motion of the autonomous vehicle within the environment; and mapping the point in the environment to a location within the image by: determining an initial estimated time, T0, that the camera sensed light from the point in the environment; determining N updated estimated times, wherein N ≥ 1, and wherein each updated estimated time, Ti, is determined by an update process comprising: determining, based on the information about the location and motion of the autonomous vehicle, a pose of the camera at the estimated time, Ti-1, based on the pose of the camera at the estimated time, Ti-1, projecting the point in the environment to a projected location within the image, evaluating a cost function that includes (i) a term based on the estimated time, Ti-1 and (ii) a term based on a mapping from the projected location to a time that the camera sensed light represented at the projected location within the image, and determining the updated estimated time, Ti, based on the evaluated cost function; and determining, based on the updated estimated time, TN, a location within the image that corresponds to the point in the environment.
17. The system of claim 16, wherein the operations further comprise: operating the autonomous vehicle, based on the determined location within the image, to navigate within the environment.
18. The system of claim 16, wherein determining, based on the updated estimated time, TN, a location within the image that corresponds to the point in the environment comprises identifying a particular pixel, within the image, that corresponds to the point in the environment.
19. The system of claim 18, wherein the operations further comprise: determining, based on the indication of the point in the environment, a depth value for the point in the environment; and assigning the depth value to the particular pixel.
20. The system of claim 16, wherein obtaining information about the location and motion of the autonomous vehicle within the environment comprises obtaining information about the location and translational velocity of the autonomous vehicle at a particular point in time, and wherein determining a pose of the camera at the estimated time, Ti-1, comprises extrapolating the information about the location and translational velocity of the autonomous vehicle at the particular point in time to the estimated time, Ti-1.
PCT/US2020/062365 2019-12-04 2020-11-25 Efficient algorithm for projecting world points to a rolling shutter image WO2021113147A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20896593.9A EP4070130A4 (en) 2019-12-04 2020-11-25 Efficient algorithm for projecting world points to a rolling shutter image
CN202080094918.0A CN115023627A (en) 2019-12-04 2020-11-25 Efficient algorithm for projecting world points onto rolling shutter images

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962943688P 2019-12-04 2019-12-04
US62/943,688 2019-12-04

Publications (1)

Publication Number Publication Date
WO2021113147A1 true WO2021113147A1 (en) 2021-06-10

Family

ID=76221003

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/062365 WO2021113147A1 (en) 2019-12-04 2020-11-25 Efficient algorithm for projecting world points to a rolling shutter image

Country Status (4)

Country Link
US (1) US11977167B2 (en)
EP (1) EP4070130A4 (en)
CN (1) CN115023627A (en)
WO (1) WO2021113147A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11650058B2 (en) * 2017-08-28 2023-05-16 Intel Corporation Pose estimation for mobile autonomous apparatus at fractional time periods of a complete sensor sweep
KR102218120B1 (en) * 2020-09-21 2021-02-22 주식회사 폴라리스쓰리디 Autonomous navigating module, mobile robot including the same and method for estimating its position
US11977150B2 (en) * 2021-03-05 2024-05-07 Black Sesame Technologies Inc. Vehicle localization precision enhancement via multi-sensor fusion
CN112671499B (en) * 2021-03-16 2022-04-01 深圳安途智行科技有限公司 Multi-sensor synchronization method and system and main control equipment
US11405557B1 (en) * 2021-07-20 2022-08-02 Locus Robotics Corp. Rolling shutter compensation for moving digital optical camera sensors

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9488492B2 (en) * 2014-03-18 2016-11-08 Sri International Real-time system for multi-modal 3D geospatial mapping, object recognition, scene annotation and analytics
KR102534792B1 (en) * 2015-02-10 2023-05-19 모빌아이 비젼 테크놀로지스 엘티디. Sparse map for autonomous vehicle navigation
US10048058B2 (en) * 2015-07-29 2018-08-14 Microsoft Technology Licensing, Llc Data capture system for texture and geometry acquisition
US9792709B1 (en) * 2015-11-23 2017-10-17 Gopro, Inc. Apparatus and methods for image alignment
US9973696B1 (en) * 2015-11-23 2018-05-15 Gopro, Inc. Apparatus and methods for image alignment
EP3545337B1 (en) * 2017-01-26 2024-07-03 Mobileye Vision Technologies Ltd. Vehicle navigation based on aligned image and lidar information
US20180288320A1 (en) * 2017-04-03 2018-10-04 Uber Technologies, Inc. Camera Fields of View for Object Detection
US10108867B1 (en) * 2017-04-25 2018-10-23 Uber Technologies, Inc. Image-based pedestrian detection
CN111492403A (en) 2017-10-19 2020-08-04 迪普迈普有限公司 Lidar to camera calibration for generating high definition maps
KR102423172B1 (en) * 2018-03-20 2022-07-22 모빌아이 비젼 테크놀로지스 엘티디 Systems and methods for navigating a vehicle
WO2020069034A1 (en) * 2018-09-26 2020-04-02 Zoox, Inc. Image scan line timestamping
US12046006B2 (en) * 2019-07-05 2024-07-23 Nvidia Corporation LIDAR-to-camera transformation during sensor calibration for autonomous vehicles
CN112752954A (en) * 2019-08-30 2021-05-04 百度时代网络技术(北京)有限公司 Synchronization sensor for autonomous vehicle
US11681032B2 (en) * 2020-01-30 2023-06-20 Pony Ai Inc. Sensor triggering based on sensor simulation
US11964762B2 (en) * 2020-02-11 2024-04-23 Raytheon Company Collaborative 3D mapping and surface registration
US11430224B2 (en) * 2020-10-23 2022-08-30 Argo AI, LLC Systems and methods for camera-LiDAR fused object detection with segment filtering

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180149753A1 (en) * 2016-11-30 2018-05-31 Yujin Robot Co., Ltd. Ridar apparatus based on time of flight and moving object
US20190310372A1 (en) * 2016-11-30 2019-10-10 Blackmore Sensors and Analytics Inc. Method and System for Doppler Detection and Doppler Correction of Optical Chirped Range Detection
US20190098233A1 (en) * 2017-09-28 2019-03-28 Waymo Llc Synchronized Spinning LIDAR and Rolling Shutter Camera System

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GUOLAI JIANG, LEI YIN, SHAOKUN JIN, CHAORAN TIAN, XINBO MA, YONGSHENG OU: "A Simultaneous Localization and Mapping (SLAM) Framework for 2.5D Map Building Based on Low-Cost LiDAR and Vision Fusion", APPLIED SCIENCES, vol. 9, no. 10, pages 2105, XP055740701, DOI: 10.3390/app9102105 *
JEONG JINYONG; CHO YOUNGHUN; KIM AYOUNG: "The Road is Enough! Extrinsic Calibration of Non-overlapping Stereo Camera and LiDAR using Road Information", IEEE ROBOTICS AND AUTOMATION LETTERS, IEEE, vol. 4, no. 3, 1 July 2019 (2019-07-01), pages 2831 - 2838, XP011731460, DOI: 10.1109/LRA.2019.2921648 *
See also references of EP4070130A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024081330A1 (en) * 2022-10-14 2024-04-18 Motional Ad Llc Rolling shutter compensation

Also Published As

Publication number Publication date
EP4070130A4 (en) 2023-12-27
US11977167B2 (en) 2024-05-07
US20210208283A1 (en) 2021-07-08
EP4070130A1 (en) 2022-10-12
CN115023627A (en) 2022-09-06

Similar Documents

Publication Publication Date Title
US11977167B2 (en) Efficient algorithm for projecting world points to a rolling shutter image
JP7146004B2 (en) Synchronous spinning LIDAR and rolling shutter camera system
KR101803164B1 (en) Methods, systems, and apparatus for multi-sensory stereo vision for robotics
TWI408486B (en) Camera with dynamic calibration and method thereof
CN113780349B (en) Training sample set acquisition method, model training method and related device
US12112508B2 (en) Calibrating system for colorizing point-clouds
JP2018155709A (en) Position posture estimation device, position posture estimation method and driving assist device
WO2023193408A1 (en) Laser radar and laser radar control method
US11233961B2 (en) Image processing system for measuring depth and operating method of the same
CN112419427A (en) Method for improving time-of-flight camera accuracy
CN118786361A (en) Information processing device, information processing method, and program
CN113747141A (en) Electronic equipment and depth image shooting method
Stančić et al. A novel low-cost adaptive scanner concept for mobile robots
WO2023120009A1 (en) Ranging device, and sensor device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20896593

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020896593

Country of ref document: EP

Effective date: 20220704