WO2022051263A1 - Localization methods and architectures for an autonomous vehicle - Google Patents


Info

Publication number
WO2022051263A1
Authority
WO
WIPO (PCT)
Prior art keywords
pose
instance
local
global
data
Application number
PCT/US2021/048397
Other languages
French (fr)
Inventor
Ethan Eade
Abhay Vardhan
Adam Richard Williams
Yekeun JEONG
Original Assignee
Aurora Operations, Inc.
Application filed by Aurora Operations, Inc.
Publication of WO2022051263A1


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/93: Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931: Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1652: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/89: Lidar systems specially adapted for specific applications for mapping or imaging

Definitions

  • Level 1 autonomy: a vehicle controls steering or speed (but not both), leaving the operator to perform most vehicle functions.
  • Level 2 autonomy: a vehicle is capable of controlling steering, speed, and braking in limited circumstances (e.g., while traveling along a highway), but the operator is still required to remain alert and be ready to take over operation at any instant, as well as to handle any maneuvers such as changing lanes or turning.
  • Level 3 autonomy: a vehicle can manage most operating variables, including monitoring the surrounding environment, but an operator is still required to remain alert and take over whenever a scenario the vehicle is unable to handle is encountered.
  • Level 4 autonomy: a vehicle can operate without operator input, but only in specific conditions such as only certain types of roads (e.g., highways) or only certain geographical areas (e.g., specific cities for which adequate mapping data exists).
  • Level 5 autonomy: a vehicle is capable of operating free of operator control under any circumstances where a human operator could also operate.
  • the present disclosure is directed to particular method(s) or architecture(s) for localization of an autonomous vehicle (i.e., localization of the autonomous vehicle being autonomously controlled).
  • Localization of the autonomous vehicle generally refers to determining a pose of the autonomous vehicle within its surrounding environment, and generally with respect to a particular frame of reference.
  • Some implementations generate both global pose instances and local pose instances for use in localization of an autonomous vehicle.
  • the local pose instances are utilized at least part of the time (e.g., the majority of the time or even exclusively) as the localization that is used in control of the autonomous vehicle.
  • the method may include, by one or more primary control system processors of a primary control system: generating a global pose instance of a global pose of the autonomous vehicle based on a first sensor data instance of first sensor data that is generated by one or more first sensors of the autonomous vehicle; generating a correction instance based on the global pose instance and based on a determined local pose instance of a local pose of the autonomous vehicle, wherein the local pose instance (a) temporally corresponds to the global pose instance, (b) is determined based on a second sensor data instance of second sensor data that is generated by one or more second sensors of the autonomous vehicle, and (c) is determined without utilization of the first sensor data instance; and transmitting the correction instance to a secondary control system.
  • the method further includes, by one or more secondary control system processors of the secondary control system: receiving the correction instance; and generating an additional local pose instance based on the correction instance and based on an additional second sensor data instance of the second sensor data.
  • generating the additional local pose instance may include generating the additional local pose instance based on the local pose instance, that temporally corresponds to the global pose instance, as modified based on at least the additional second sensor data instance.
  • the method may further include by one or more of the secondary control system processors of the secondary control system, and immediately subsequent to generating the additional local pose instance but prior to receiving any additional correction instance: generating a further local pose instance based on (a) the additional local pose instance, (b) the correction instance, and (c) a further second sensor data instance, of the second sensor data, the further second sensor data instance generated subsequent to the additional second sensor data instance.
  • the method may further include, by one or more of the secondary control system processors of the secondary control system, and immediately subsequent to generating the further local pose instance but prior to receiving any additional correction instance: generating a yet further local pose instance based on (a) the further local pose instance, (b) the correction instance, and (c) a yet further second sensor data instance, of the second sensor data, the yet further second sensor data instance generated subsequent to the further second sensor data instance.
  • generating the correction instance may be further based on comparing at least one prior local pose instance to at least one prior global pose instance that temporally corresponds to the at least one prior local pose instance.
  • generating the correction instance may include comparing the local pose instance to the global pose instance that temporally corresponds to the local pose instance, comparing at least one prior local pose instance to at least one prior global pose instance that temporally corresponds to the at least one prior local pose instance; and generating the correction instance based on comparing the local pose instance to the global pose instance and based on comparing the prior local pose instance to the prior global pose instance.
  • the correction instance may include a drift rate that indicates a first magnitude of drift, in one or more dimensions, over a period of time, a second magnitude of drift, in one or more of the dimensions, over a distance, or both.
  • the correction instance may be a linear combination of the first magnitude of drift over the period of time, and the second magnitude of drift over the distance.
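As a rough illustration of the correction instance described in the preceding bullets, the following Python sketch derives temporal and distance drift rates by comparing temporally corresponding local and global pose instances, and combines them linearly. The pose layout, the dimension set, and the combination weights are illustrative assumptions; the disclosure does not prescribe them.

```python
import numpy as np

def drift_rates(local_poses, global_poses, times, distances):
    """Estimate per-dimension drift rates from temporally corresponding local and
    global pose instances (each pose assumed here as [x, y, z, roll, pitch, yaw])."""
    local_poses = np.asarray(local_poses, dtype=float)
    global_poses = np.asarray(global_poses, dtype=float)
    error = local_poses - global_poses            # divergence at each correspondence
    d_error = error[-1] - error[0]                # change in divergence over the window
    dt = times[-1] - times[0]                     # elapsed time over the window
    dd = distances[-1] - distances[0]             # distance travelled over the window
    temporal_rate = d_error / dt if dt > 0 else np.zeros_like(d_error)
    distance_rate = d_error / dd if dd > 0 else np.zeros_like(d_error)
    return temporal_rate, distance_rate

def correction_instance(temporal_rate, distance_rate, w_time=0.5, w_dist=0.5):
    """A correction instance formed as a linear combination of the temporal drift
    rate and the distance drift rate (weights here are purely illustrative)."""
    return {"temporal_rate": temporal_rate,
            "distance_rate": distance_rate,
            "combined": w_time * temporal_rate + w_dist * distance_rate}

# Example: two temporally corresponding (local, global) pose pairs.
local = [[10.0, 5.0, 0.0, 0, 0, 0.01], [20.2, 10.1, 0.0, 0, 0, 0.02]]
glob  = [[10.0, 5.0, 0.0, 0, 0, 0.00], [20.0, 10.0, 0.0, 0, 0, 0.00]]
t_rate, d_rate = drift_rates(local, glob, times=[0.0, 2.0], distances=[0.0, 11.2])
print(correction_instance(t_rate, d_rate))
```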
  • transmitting the correction instance to the secondary control system may include delaying transmission of the correction instance to the secondary control system by a predetermined period of time.
  • the method may further include, by one or more of the primary control system processors of the primary control system, and prior to generating the global pose instance: identifying one or more candidate tiles based on a prior first sensor data instance of the first sensor data, each of the one or more candidate tiles representing a previously mapped portion of a geographical region.
  • the one or more first sensors may include at least a LIDAR sensor, and the first sensor data instance may include at least an instance of LIDAR data generated by a sensing cycle of the LIDAR sensor of the autonomous vehicle.
  • generating the global pose instance for the autonomous vehicle may include assembling the instance of the LIDAR data into one or more point clouds, and aligning, using a geometric matching technique, one or more of the point clouds with a previously stored point cloud associated with the local pose instance that temporally corresponds to the global pose instance to generate the global pose instance.
  • aligning one or more of the point clouds with the given tile using the geometric matching technique to generate the global pose instance may include identifying a given tile associated with the local pose instance that temporally corresponds to the global pose instance, identifying, based on the given tile, the previously stored point cloud associated with the local pose instance, and using the geometric matching technique to align one or more of the point clouds with the previously stored point cloud.
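The disclosure refers only to a "geometric matching technique" without naming one; the sketch below assumes an iterative-closest-point (ICP) style alignment as one plausible instance, aligning a live point cloud assembled from a LIDAR sensing cycle against the previously stored point cloud for the identified tile.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def align_point_clouds(live_cloud, stored_cloud, iterations=20):
    """ICP-style alignment of a live LIDAR point cloud with a stored tile cloud.
    Returns the pose (R, t) of the live cloud in the stored cloud's frame."""
    tree = cKDTree(stored_cloud)
    R_total, t_total = np.eye(3), np.zeros(3)
    current = live_cloud.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)              # nearest stored point for each live point
        R, t = best_rigid_transform(current, stored_cloud[idx])
        current = current @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```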
  • the one or more second sensors may include at least one IMU sensor and at least one wheel encoder, and the second sensor data may include IMU data generated by the at least one IMU sensor and wheel encoder data generated by the at least one wheel encoder.
  • the IMU data may be generated at a first frequency, and the wheel encoder data may be generated at a second frequency.
  • the second sensor data instance and the additional second sensor data instance may include a most recently generated instance of the IMU data and a most recently generated instance of the wheel encoder data.
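One minimal way to assemble such a second sensor data instance, keeping the most recently generated IMU reading and wheel encoder reading even though they arrive at different frequencies, is sketched below; the buffer class, field names, and rates are assumptions for illustration.

```python
import time
from dataclasses import dataclass

@dataclass
class SecondSensorDataInstance:
    timestamp: float
    imu: dict             # most recently generated IMU reading (accelerometer / gyroscope)
    wheel_encoder: dict   # most recently generated accumulated wheel tick counts

class SecondSensorBuffer:
    """Keeps the latest reading from each second sensor, regardless of its rate."""
    def __init__(self):
        self.latest_imu = None        # updated at a first frequency (e.g., ~100 Hz, illustrative)
        self.latest_encoder = None    # updated at a second frequency (e.g., ~50 Hz, illustrative)

    def on_imu(self, reading):
        self.latest_imu = reading

    def on_wheel_encoder(self, reading):
        self.latest_encoder = reading

    def snapshot(self):
        """Assemble a sensor data instance for the current instant from the most
        recently generated IMU and wheel encoder data (not necessarily simultaneous)."""
        return SecondSensorDataInstance(time.time(), self.latest_imu, self.latest_encoder)

buf = SecondSensorBuffer()
buf.on_imu({"accel": (0.1, 0.0, 9.8), "gyro": (0.0, 0.0, 0.01)})
buf.on_wheel_encoder({"ticks": (10512, 10498, 10520, 10505)})
print(buf.snapshot())
```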
  • the system may include a primary control system having at least one first processor and a first memory storing first instructions that, when executed, cause the at least one first processor to: generate a global pose instance of a global pose of the autonomous vehicle based on a first sensor data instance of first sensor data that is generated by one or more first sensors of the autonomous vehicle; generate a correction instance based on the global pose instance and based on a determined local pose instance, of a local pose of the autonomous vehicle, that temporally corresponds to the global pose instance, that is determined based on a second sensor data instance of second sensor data that is generated by one or more second sensors of the autonomous vehicle, and that is determined without utilization of the first sensor data instance; and transmit the correction instance to a secondary control system.
  • the system may further include the secondary control system having at least one second processor and a second memory storing second instructions that, when executed, cause the at least one second processor to: receive the correction instance; and generate an additional local pose instance based on the correction instance and based on an additional second sensor data instance of the second sensor data.
  • the instructions to generate the correction instance may further cause the at least one processor to: compare the local pose instance to the global pose instance that temporally corresponds to the local pose instance, compare at least one prior local pose instance to at least one prior global pose instance that temporally corresponds to the at least one prior local pose instance, and generate the correction instance based on comparing the local pose instance to the global pose instance and based on comparing the prior local pose instance to the prior global pose instance.
  • the correction instance may include a drift rate that indicates a first magnitude of drift, in one or more dimensions, over a period of time, a second magnitude of drift, in one or more of the dimensions, over a distance, or both.
  • the correction instance may be a linear combination of the first magnitude of drift over the period of time, and the second magnitude of drift over the distance.
  • a method for localization of an autonomous vehicle may include, by one or more primary control system processors of a primary control system: generating a global pose instance of a global pose of the autonomous vehicle based on a first sensor data instance of first sensor data that is generated by one or more first sensors of the autonomous vehicle; and generating a correction instance based on the global pose instance and based on a second sensor data instance of second sensor data that is generated by one or more second sensors of the autonomous vehicle; and transmitting the correction instance to a secondary control system.
  • the method may further include, by one or more secondary control system processors of the secondary control system: receiving the correction instance; and until an additional correction instance is received, generating a plurality of local pose instances of a local pose of the autonomous vehicle. Generating each of the local pose instances may be based on the correction instance and based on corresponding additional instances of the second sensor data.
  • a method for localization of an autonomous vehicle may include obtaining first sensor data from one or more first sensors of the autonomous vehicle, generating a global pose of the autonomous vehicle based on the first sensor data, obtaining second sensor data from one or more second sensors of the autonomous vehicle, and generating a local pose of the autonomous vehicle based on the second sensor data.
  • the local pose of the autonomous vehicle may be generated without utilization of the first sensor data.
  • the method may further include determining a correction based on the global pose of the autonomous vehicle and the local pose of the autonomous vehicle, obtaining additional second sensor data from the one or more second sensors of the autonomous vehicle, and generating an additional local pose of the autonomous vehicle based on (i) the correction, and (ii) the additional second sensor data.
  • generating the additional local pose may include generating the additional local pose based on the local pose instance as modified based on at least the additional second sensor data.
  • the local pose instance may temporally correspond to the global pose instance.
  • In some implementations, the method may further include, immediately subsequent to generating the additional local pose but prior to receiving any additional correction, generating a further local pose based on: the additional local pose, the correction, and further second sensor data generated subsequent to the additional second sensor data.
  • the method may further include, immediately subsequent to generating the further local pose but prior to receiving any additional correction, generating a yet further local pose based on: the further local pose, the correction, and yet further second sensor data generated subsequent to the further second sensor data.
  • generating the correction is further based on comparing at least one prior local pose to at least one prior global pose that temporally corresponds to the at least one prior local pose.
  • generating the correction may include comparing the local pose to the global pose that temporally corresponds to the local pose, comparing at least one prior local pose to at least one prior global pose that temporally corresponds to the at least one prior local pose, and generating the correction based on comparing the local pose to the global pose and based on comparing the prior local pose to the prior global pose.
  • the correction may include a drift rate that indicates one or more of: a first magnitude of drift, in one or more dimensions, over a period of time, or a second magnitude of drift, in one or more of the dimensions, over a distance.
  • the correction may be a linear combination of the first magnitude of drift over the period of time, and the second magnitude of drift over the distance.
  • the global pose and the correction may be generated by a primary control system, and the local pose may be generated by a secondary control system.
  • the method may further include identifying one or more candidate tiles based on prior first sensor data, each of the one or more candidate tiles representing a previously mapped portion of a geographical region.
  • the one or more first sensors may include at least a LIDAR sensor, and the first sensor data may include at least LIDAR data generated by a sensing cycle of the LIDAR sensor of the autonomous vehicle.
  • generating the global pose for the autonomous vehicle may include assembling the LIDAR data into one or more point clouds, and aligning, using a geometric matching technique, one or more of the point clouds with a previously stored point cloud associated with the local pose that temporally corresponds to the global pose to generate the global pose.
  • aligning one or more of the point clouds with the given tile using the geometric matching technique to generate the global pose may include identifying a given tile associated with the local pose that temporally corresponds to the global pose, identifying, based on the given tile, the previously stored point cloud associated with the local pose, and using the geometric matching technique to align one or more of the point clouds with the previously stored point cloud.
  • the one or more second sensors may include at least one IMU sensor and at least one wheel encoder, and the second sensor data may include IMU data generated by the at least one IMU sensor and wheel encoder data generated by the at least one wheel encoder.
  • the IMU data may be generated at a first frequency, and the wheel encoder data may be generated at a second frequency.
  • the second sensor data and the additional second sensor data may include a most recently generated instance of the IMU data and a most recently generated instance of the wheel encoder data.
  • FIG. 1 illustrates an example hardware and software environment for an autonomous vehicle, in accordance with various implementations.
  • FIGS. 2A and 2B are block diagrams illustrating example implementations of the localization subsystems referenced in FIG. 1, in accordance with various implementations.
  • FIG. 3 is a process flow illustrating an example implementation of the localization subsystems referenced in FIGS. 2A and 2B, in accordance with various implementations.
  • FIG. 4 is a flowchart illustrating an example method for localization of an autonomous vehicle, in accordance with various implementations.
  • FIG. 5 is a flowchart illustrating an example method of generating global pose instances of a global pose of the autonomous vehicle in localization of the autonomous vehicle of FIG. 4, in accordance with various implementations.
  • FIG. 6 is a flowchart illustrating an example method of generating correction instances in localization of the autonomous vehicle of FIG. 4, in accordance with various implementations.
  • FIG. 7 is a flowchart illustrating an example method of generating local pose instances of a local pose of the autonomous vehicle in localization of the autonomous vehicle of FIG. 4, in accordance with various implementations.
  • localization of an autonomous vehicle includes generating both global pose instances and local pose instances for use in localization of the autonomous vehicle.
  • the local pose instances are utilized at least part of the time (e.g., the majority of the time or even exclusively) as the localization that is used in control of the autonomous vehicle.
  • a global pose instance can be generated based on an instance of first sensor data generated by first sensor(s) of the autonomous vehicle, and can indicate a position and orientation of the autonomous vehicle with respect to a frame of reference (e.g., tile(s)).
  • a local pose instance can likewise indicate a position and orientation of the autonomous vehicle with respect to a frame of reference, but can be generated based on an instance of second sensor data and without utilization of the instance of first sensor data used in generating the global pose instance.
  • the frame of reference for the local pose instance can be the same frame of reference as the global pose instance (e.g., tile(s)) or a distinct frame of reference (e.g., a local frame of reference).
  • first sensor data generated by first sensor(s) will not be directly utilized in generating one or more instances of a local pose.
  • the first sensor data will be directly utilized in generating one or more instances of a global pose.
  • second sensor data generated by second sensors will be directly utilized in generating one or more instances of a local pose, and can optionally also be directly utilized in generating one or more instances of a global pose.
  • a global pose instance or a correction instance (determined based on global pose instance(s) and local pose instance(s)) can be utilized.
  • multiple local pose instances are generated based on a single global pose instance or a single correction instance.
  • a correction instance can be generated based on comparing historical global pose instances to temporally corresponding historical local pose instances.
  • the correction instance can indicate one or more rates of divergence between local and global instances (e.g., drift rate(s)).
  • the correction instance can be utilized in generating two or more local pose instances.
  • each local pose instance can be a function of a preceding local pose instance, the correction instance, and most recently received second sensor data from the second sensors.
  • multiple local pose instances can be generated using second sensor data and without direct utilization of the first sensor data.
  • the multiple local pose instances can be indirectly influenced by the first sensor data, through utilization of a global pose instance or correction instance that is generated based on the first sensor data.
  • the local pose instances can be generated more frequently than global pose instances and using less computational resources than global pose instances, based at least in part on not being generated based directly on the first sensor data.
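A control-flow sketch of this arrangement is shown below: local pose instances are produced at the second-sensor rate, and the most recent correction instance (arriving far less often) is folded into each one. The propagate and apply_correction helpers are placeholders standing in for the actual models, not the disclosed implementation.

```python
def propagate(pose, sensor_instance):
    """Dead-reckon the pose forward using IMU / wheel encoder deltas (placeholder model)."""
    return [p + d for p, d in zip(pose, sensor_instance["delta"])]

def apply_correction(pose, correction, sensor_instance):
    """Nudge the dead-reckoned pose by the drift rate scaled by elapsed time (placeholder)."""
    return [p - r * sensor_instance["dt"] for p, r in zip(pose, correction["drift_rate"])]

def run_localization(second_sensor_stream, corrections, initial_pose):
    """Generate local pose instances at the second-sensor rate, folding in the most
    recent correction instance whenever one is available from the primary system."""
    local_pose = list(initial_pose)
    correction = None
    for sensor_instance in second_sensor_stream:        # high-rate IMU / wheel encoder data
        correction = corrections.get(sensor_instance["t"], correction)  # low-rate corrections
        local_pose = propagate(local_pose, sensor_instance)
        if correction is not None:
            local_pose = apply_correction(local_pose, correction, sensor_instance)
        yield list(local_pose)

# Example: three sensor data instances, with one correction instance arriving at t=1.
stream = [{"t": 0, "dt": 0.1, "delta": [1.0, 0.0, 0.0]},
          {"t": 1, "dt": 0.1, "delta": [1.0, 0.1, 0.0]},
          {"t": 2, "dt": 0.1, "delta": [1.0, 0.0, 0.0]}]
corrections = {1: {"drift_rate": [0.2, 0.0, 0.0]}}
for pose in run_localization(stream, corrections, initial_pose=[0.0, 0.0, 0.0]):
    print(pose)
```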
  • the term "tile" refers to a previously mapped portion of a geographical area.
  • a plurality of tiles can be stored in memory of various systems described herein, and the plurality of tiles can be used to represent a geographical region.
  • a given geographical region, such as a city, can be divided into a plurality of tiles (e.g., each square mile of the city, each square kilometer of the city, or other dimensions), and each of the tiles can represent a portion of the geographical region.
  • each of the tiles can be stored in database(s) that are accessible by various systems described herein, and the tiles can be indexed in the database(s) by their respective locations within the geographical region.
  • each of the tiles can include, for example, intersection information, traffic light information, landmark information, street information, or other information for the geographical area represented by that tile, and the information contained within each of the tiles can be utilized to identify a matching tile.
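A simple realization of the tile storage and indexing described above, with tiles keyed by their location within the geographical region, might look like the following; the tile size, fields, and file path are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

TILE_SIZE_M = 1000.0  # e.g., roughly one square kilometre per tile (illustrative)

@dataclass
class Tile:
    grid_x: int
    grid_y: int
    point_cloud_path: str  # previously stored point cloud for this mapped portion
    landmarks: list        # intersection / traffic light / landmark / street information

class TileIndex:
    """Tiles representing previously mapped portions of a geographical region,
    indexed by their location within that region."""
    def __init__(self):
        self._tiles = {}

    def add(self, tile: Tile) -> None:
        self._tiles[(tile.grid_x, tile.grid_y)] = tile

    def lookup(self, x_m: float, y_m: float) -> Optional[Tile]:
        """Return the tile covering a position expressed in region coordinates."""
        key = (int(x_m // TILE_SIZE_M), int(y_m // TILE_SIZE_M))
        return self._tiles.get(key)

index = TileIndex()
index.add(Tile(grid_x=3, grid_y=7, point_cloud_path="tiles/3_7.pcd", landmarks=[]))
print(index.lookup(3500.0, 7200.0))
```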
  • the term "pose” refers to location information and orientation information of an autonomous vehicle within its surroundings, and generally with respect to a particular frame of reference.
  • the pose can be an n-dimensional representation of the autonomous vehicle with respect to the particular frame of reference, such as any 2D, 3D, 4D, 5D, 6D, or any other dimensional representation.
  • the frame of reference can be, for example, the aforementioned tile(s), an absolute coordinate system (e.g., longitude and latitude coordinates), a relative coordinate system (or a local frame of reference), or other frame(s) of reference.
  • various types of poses are described herein, and different types of poses can be defined with respect to different frame(s) of reference.
  • a "global pose" of the autonomous vehicle can refer to location information and orientation information of the autonomous vehicle with respect to tile(s), and can be generated based on at least an instance of first sensor data generated by first sensor(s) of an autonomous vehicle.
  • a "local pose" of the autonomous vehicle can refer to location information and orientation information of the autonomous vehicle with respect to a local frame of reference, and can be generated based on at least an instance of second sensor data generated by second sensor(s) of an autonomous vehicle that exclude the first sensor(s) utilized in generating the global pose.
  • an Earth Centered Earth Fixed pose can refer to location information and orientation information of the autonomous vehicle with respect to longitude and latitude coordinates, and can be generated based on at least an instance of third sensor data generated by third sensor(s) of an autonomous vehicle.
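For illustration only, the three pose types above could be carried as simple typed records such as the following; the field layout is an assumption rather than a structure defined by the disclosure.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class GlobalPose:
    """Pose with respect to tile(s), generated from first sensor data (e.g., LIDAR data)."""
    tile_id: Tuple[int, int]            # which previously mapped tile the pose is expressed against
    xyz: Tuple[float, float, float]
    rpy: Tuple[float, float, float]
    timestamp: float

@dataclass
class LocalPose:
    """Pose with respect to a local frame of reference, generated from second sensor
    data (e.g., IMU and wheel encoder data) without the first sensor data."""
    xyz: Tuple[float, float, float]
    rpy: Tuple[float, float, float]
    timestamp: float

@dataclass
class EcefPose:
    """Pose with respect to Earth-fixed longitude/latitude coordinates, generated
    from third sensor data (e.g., satellite navigation data)."""
    lat_lon_alt: Tuple[float, float, float]
    rpy: Tuple[float, float, float]
    timestamp: float

print(LocalPose(xyz=(1.2, 0.1, 0.0), rpy=(0.0, 0.0, 0.02), timestamp=0.1))
```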
  • the phrase "instance of sensor data" or the phrase "sensor data instance" can refer to sensor data, for a corresponding instance in time, and for one or more sensors of an autonomous vehicle. Although the sensor data instance is for a corresponding instance in time, it is not necessarily the case that all sensor data of the instance was actually generated by the sensors at the same time.
  • an instance of LIDAR data generated by LIDAR sensor(s) of the autonomous vehicle may include LIDAR data from a sensing cycle of the LIDAR sensor(s) that is generated at a first frequency
  • an instance of IMU data generated by IMU sensor(s) of the autonomous vehicle may include accelerometer readings and gyroscopic readings from the IMU sensor(s) that are generated at a second frequency
  • an instance of wheel encoder data generated by wheel encoder(s) of the autonomous vehicle may include a quantity of accumulated ticks of revolutions of wheel(s) of the autonomous vehicle that are generated at a third frequency.
  • the first frequency, the second frequency, and the third frequency may be distinct frequencies.
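For instance, the accumulated-tick form of the wheel encoder data can be converted into a travelled distance and an average speed between two sensor data instances; the tick resolution and wheel radius below are illustrative assumptions.

```python
import math

TICKS_PER_REVOLUTION = 1024     # encoder resolution (illustrative)
WHEEL_RADIUS_M = 0.33           # wheel radius in metres (illustrative)

def encoder_delta_to_motion(prev_ticks, curr_ticks, dt_s):
    """Convert the change in accumulated encoder ticks between two sensor data
    instances into distance travelled and average wheel speed."""
    revolutions = (curr_ticks - prev_ticks) / TICKS_PER_REVOLUTION
    distance_m = revolutions * 2.0 * math.pi * WHEEL_RADIUS_M
    speed_mps = distance_m / dt_s if dt_s > 0 else 0.0
    return distance_m, speed_mps

print(encoder_delta_to_motion(prev_ticks=10_000, curr_ticks=10_512, dt_s=0.1))
```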
  • the phrase "instance of sensor data" or the phrase "sensor data instance" can also refer to sensor data, for a corresponding instance in time, that has been processed by one or more components.
  • one or more filtering components (e.g., a Kalman filter) can process raw sensor data, and the outputs from the filtering components can still be considered an "instance of sensor data" or a "sensor data instance".
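As a minimal sketch of such filtering, the following one-dimensional constant-velocity Kalman filter (an assumed example of a filtering component) produces outputs that would still be treated as sensor data instances.

```python
import numpy as np

class ConstantVelocityKalman:
    """1-D constant-velocity Kalman filter; its filtered output is still a sensor data instance."""
    def __init__(self, q=0.1, r=0.5):
        self.x = np.zeros(2)                    # state: [position, velocity]
        self.P = np.eye(2)                      # state covariance
        self.Q = q * np.eye(2)                  # process noise
        self.R = np.array([[r]])                # measurement noise

    def step(self, z, dt):
        F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
        H = np.array([[1.0, 0.0]])              # only position is measured
        # Predict.
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q
        # Update with the raw measurement z.
        y = np.array([z]) - H @ self.x
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ H) @ self.P
        return self.x.copy()                    # filtered "sensor data instance"

kf = ConstantVelocityKalman()
for z in [0.0, 0.9, 2.1, 2.9, 4.2]:
    print(kf.step(z, dt=1.0))
```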
  • FIG. 1 illustrates an example autonomous vehicle 100 within which the various techniques disclosed herein may be implemented.
  • Vehicle 100, for example, is shown driving on a road 101, and vehicle 100 may include a powertrain 102 including a prime mover 104 powered by an energy source 106 and capable of providing power to a drivetrain 108, as well as a control system 110 including a direction control 112, a powertrain control 114, and a brake control 116.
  • Vehicle 100 may be implemented as any number of different types of vehicles, including vehicles capable of transporting people or cargo, and capable of traveling by land, by sea, by air, underground, undersea or in space, and it will be appreciated that the aforementioned components 102-116 can vary widely based upon the type of vehicle within which these components are utilized.
  • the prime mover 104 may include one or more electric motors or an internal combustion engine (among others), while energy source 106 may include a fuel system (e.g., providing gasoline, diesel, hydrogen, etc.), a battery system, solar panels or other renewable energy source, a fuel cell system, etc., and drivetrain 108 may include wheels or tires along with a transmission or any other mechanical drive components suitable for converting the output of prime mover 104 into vehicular motion, as well as one or more brakes configured to controllably stop or slow the vehicle and direction or steering components suitable for controlling the trajectory of the vehicle (e.g., a rack and pinion steering linkage enabling one or more wheels of vehicle 100 to pivot about a generally vertical axis to vary an angle of the rotational planes of the wheels relative to the longitudinal axis of the vehicle).
  • different combinations of powertrains 102 and energy sources 106 may be used.
  • For example, the prime mover 104 may include one or more electric motors (e.g., dedicated to individual wheels or axles) and the energy source 106 may include a fuel cell system powered by hydrogen fuel.
  • Direction control 112 may include one or more actuators or sensors for controlling and receiving feedback from the direction or steering components to enable the vehicle to follow a desired trajectory.
  • Powertrain control 114 may be configured to control the output of powertrain 102, e.g., to control the output power of prime mover 104, to control a gear of a transmission in drivetrain 108, etc., thereby controlling a speed or direction of the vehicle.
  • Brake control 116 may be configured to control one or more brakes that slow or stop vehicle 100, e.g., disk or drum brakes coupled to the wheels of the vehicle.
  • autonomous control over vehicle 100 (that may include various degrees of autonomy as well as selectively autonomous functionality) is primarily implemented in a primary vehicle control system 120, that may include processor(s) 122 and one or more memories 124, with processor(s) 122 configured to execute program code instruction(s) 126 stored in memory 124.
  • a primary sensor system 130 may include various sensors suitable for collecting information from a vehicle's surrounding environment for use in controlling the operation of the vehicle.
  • a satellite navigation (SATNAV) sensor 132 (e.g., compatible with any of various satellite navigation systems such as GPS, GLONASS, Galileo, Compass, etc.) may be used to determine the location of the vehicle on the Earth using satellite signals.
  • a Radio Detection and Ranging (RADAR) sensor 134 and a Light Detection and Ranging (LIDAR) sensor 136, as well as digital camera(s) 138 may be used to sense stationary and moving objects within the immediate vicinity of a vehicle.
  • IMU(s) 140 may include multiple gyroscopes and accelerometers capable of detecting linear and rotational motion of vehicle 100 in three directions, while wheel encoder(s) 142 may be used to monitor the rotation of one or more wheels of vehicle 100.
  • the outputs of sensors 132-142 may be provided to a set of primary control subsystems 150, including a localization subsystem 152, a planning subsystem 154, a perception subsystem 156, a control subsystem 158, and a mapping subsystem 160.
  • Localization subsystem 152 determines a "pose" of vehicle 100.
  • the pose can include location information and orientation information of vehicle 100.
  • the pose can additionally or alternatively include velocity information or acceleration information of vehicle 100.
  • localization subsystem 152 generates a “global pose" of vehicle 100 within its surrounding environment, and with respect to a particular frame of reference.
  • localization subsystem 152 can generate a global pose of vehicle 100 based on matching sensor data output by one or more of sensors 132-142 to a previously mapped portion of a geographical area (also referred to herein as a "tile").
  • Planning subsystem 154 plans a path of motion for vehicle 100 over a timeframe given a desired destination as well as the static and moving objects within the environment, while perception subsystem 156 detects, tracks or identifies elements within the environment surrounding vehicle 100.
  • Control subsystem 158 generates suitable control signals for controlling the various controls in control system 110 in order to implement the planned path of the vehicle.
  • Mapping subsystem 160 may be provided in the illustrated implementations to describe the elements within an environment and the relationships therebetween, and may be accessed by the localization, planning and perception subsystems 152-156 to obtain various information about the environment for use in performing their respective functions.
  • Vehicle 100 also includes a secondary vehicle control system 170, that may include one or more processors 172 and one or more memories 174 capable of storing program code instruction(s) 176 for execution by processor(s) 172.
  • secondary vehicle control system 170 may be used in conjunction with primary vehicle control system 120 in normal operation of vehicle 100.
  • secondary vehicle control system 170 may be used as a redundant or backup control system for vehicle 100, and may be used, among other purposes, to continue planning and navigation to perform controlled stops in response to adverse events detected in primary vehicle control system 120, or both.
  • Adverse events can include, for example, a detected hardware failure in vehicle control systems 120, 170, a detected software failure in vehicle control systems 120, 170, a detected failure of sensor systems 130, 180, other adverse events, or any combination thereof.
  • Secondary vehicle control system 170 may also include a secondary sensor system 180 including various sensors used by secondary vehicle control system 170 to sense the conditions or surroundings of vehicle 100.
  • IMU(s) 182 may be used to generate linear and rotational motion information about the vehicle, while wheel encoder(s) 184 may be used to sense the velocity of each wheel.
  • One or more of IMU(s) 182 and wheel encoder(s) 184 of secondary sensor system 180 may be the same as or distinct from one or more of IMU(s) 140 and wheel encoder(s) 142 of the primary sensor system 130.
  • secondary vehicle control system 170 may also include secondary control subsystems 190, including at least localization subsystem 192 and controlled stop subsystem 194.
  • Localization subsystem 192 generates a "local pose" of vehicle 100 relative to a previous local pose of vehicle 100.
  • localization subsystem 192 can generate the local pose of vehicle 100 by processing sensor data output by one or more of sensors 182-184.
  • Controlled stop subsystem 194 is used to implement a controlled stop for vehicle 100 upon detection of an adverse event.
  • Other sensors and subsystems that may be utilized in secondary vehicle control system 170, as well as other variations capable of being implemented in other implementations, will be discussed in greater detail below.
  • localization subsystem 152 that is responsible for generating a global pose of vehicle 100 (e.g., implemented by processor(s) 122), and localization subsystem 192, that is responsible for generating a local pose of vehicle 100 (e.g., implemented by processor(s) 172), are depicted as being implemented by separate hardware components.
  • localization subsystem 192 can generate instances of local pose of vehicle 100 at a faster rate than localization subsystem 152 can generate instances of global pose of vehicle 100.
  • multiple instances of a local pose of vehicle 100 can be generated in the same amount of time as a single instance of global pose of vehicle 100.
  • Although FIG. 1 is depicted herein as having a secondary vehicle control system 170 that includes subsystems and sensors that are distinct from primary vehicle control system 120, it should be understood that this is for the sake of example and is not meant to be limiting.
  • secondary vehicle control system 170 can have the same or substantially similar configuration as primary vehicle control system 120. Accordingly, the respective localization subsystems 152, 192 of the control systems of FIG. 1 can generate both global pose instances of the global pose of vehicle 100 and local pose instances of the local pose of vehicle 100.
  • the processor(s) 122, 172 may be implemented, for example, as a microprocessor, and memory 124, 174 may represent the random access memory (RAM) devices comprising a main storage, as well as any supplemental levels of memory, e.g., cache memories, non-volatile or backup memories (e.g., programmable or flash memories), read-only memories, etc.
  • memory 124, 174 may be considered to include memory storage physically located elsewhere in vehicle 100 (e.g., any cache memory in processor(s) 122, 172), as well as any storage capacity used as a virtual memory (e.g., as stored on a mass storage device or on another computer or controller).
  • processor(s) 122, 172 illustrated in FIG. 1, or entirely separate processors, may be used to implement additional functionality in vehicle 100 outside of the purposes of autonomous control (e.g., to control entertainment systems, to operate doors, lights, convenience features, and so on).
  • vehicle 100 may also include one or more mass storage devices, e.g., a floppy or other removable disk drive, a hard disk drive, a direct access storage device (DASD), an optical drive (e.g., a CD drive, a DVD drive, etc.), a solid state storage drive (SSD), network attached storage, a storage area network, or a tape drive, among others.
  • vehicle 100 may include a user interface 199 to enable vehicle 100 to receive a number of inputs from and generate outputs for a user or operator (e.g., using one or more displays, touchscreens, voice interfaces, gesture interfaces, buttons and other tactile controls, or other input/output devices). Otherwise, user input may be received via another computer or electronic device (e.g., via an app on a mobile device) or via a web interface (e.g., from a remote operator).
  • vehicle 100 may include one or more network interfaces 198 suitable for communicating with one or more networks (e.g., a LAN, a WAN, a wired network, a wireless network, or the Internet, among others) to permit the communication of information between various components of vehicle 100 (e.g., between powertrain 102, control system 110, primary vehicle control system 120, secondary vehicle control system 170, or other systems or components), with other vehicles, computers or electronic devices, including, for example, a central service, such as a cloud service, from which vehicle 100 receives environmental and other data for use in autonomous control thereof.
  • vehicle 100 may be in communication with a cloud-based remote vehicle system including a mapping system and a log collection system.
  • various applications, components, programs, objects, modules, etc. may also execute on one or more processors in another computer coupled to vehicle 100 via network, e.g., in a distributed, cloud-based, or client-server computing environment, whereby the processing required to implement the functions of a computer program may be allocated to multiple computers or services over a network.
  • data recorded or collected by a vehicle may be manually retrieved and uploaded to another computer or service for analysis.
  • Program code typically comprises one or more instructions that are resident at various times in various memory and storage devices, and that, when read and executed by one or more processors, perform the steps necessary to execute steps or elements embodying the various aspects of the invention.
  • The collection of components illustrated in FIG. 1 for primary vehicle control system 120 and secondary vehicle control system 170 is merely for the sake of example. Individual sensors may be omitted in some implementations, multiple sensors of the types illustrated in FIG. 1 may be used for redundancy or to cover different regions around a vehicle, and other types of sensors may be used. Likewise, different types or combinations of control subsystems may be used in other implementations.
  • While subsystems 152-160, 192-194 are illustrated as being separate from processors 122, 172 and memory 124, 174, respectively, it will be appreciated that in some implementations, the functionality of subsystems 152-160, 192-194 may be implemented with corresponding program code instruction(s) 126, 176 resident in one or more memories 124, 174 and executed by processor(s) 122, 172, and that these subsystems 152-160, 192-194 may in some instances be implemented using the same processors and memory.
  • Subsystems 152-160, 192-194 in some implementations may be implemented, at least in part, using various dedicated circuit logic, various processors, various field-programmable gate arrays ("FPGA"), various application-specific integrated circuits ("ASIC"), various real time controllers, and the like, and as noted above, multiple subsystems may utilize common circuitry, processors, sensors, or other components. Further, the various components in primary vehicle control system 120 and secondary vehicle control system 170 may be networked in various manners.
  • Those skilled in the art will recognize that the exemplary environment illustrated in FIG. 1 is not intended to limit the present invention. Indeed, those skilled in the art will recognize that other alternative hardware or software environments may be used without departing from the scope of the invention.
  • localization subsystem 152 of primary vehicle control system 120 includes at least global pose module 252 and online calibration module 254.
  • localization subsystem 192 of secondary vehicle control system 170 includes at least local pose module 292.
  • Data generated by localization subsystem 152, 192 can be transmitted between localization subsystem 152, 192 (or the modules included therein) and other subsystems described herein.
  • the data can include, for example, global pose instance(s) of a global pose of vehicle 100, local pose instance(s) of a local pose of vehicle 100, correction instance(s), or any combination thereof.
  • data generated by sensors (e.g., primary sensor system 130, secondary sensor system 180, or both) of vehicle 100 can be received by localization subsystem 152, 192.
  • Global pose module 252 can generate global pose instances of a global pose of vehicle 100.
  • the global pose of vehicle 100 represents a pose of vehicle 100 with respect to a reference frame (e.g., tile(s)), and the global pose instance represents position and location information at a given time instance.
  • global pose module 252 can receive instances of the first sensor data 130A, and the global pose instances can be generated based at least in part on instances of first sensor data 130A.
  • the first sensor data 130A can include, for example, LIDAR data generated by the LIDAR sensor 136 of primary sensor system 130.
  • global pose module 252 further generates the global pose instances based on the local pose instances.
  • global pose module 252 further generates the global pose instances based on instances of second sensor data generated by second sensor(s) of vehicle 100 (e.g., IMU(s) 140, 182, wheel encoder(s) 142, 184, other sensors, or any combination thereof). Generating of the global pose instances is described in more detail below (e.g., with respect to FIGS. 3, block 500 of FIG. 4, and method 500A of FIG. 5).
  • global pose module 252 can transmit generated global pose instances to online calibration module 254.
  • LIDAR sensor 136 can have a sensing cycle.
  • the LIDAR sensor 136 can scan a certain area during a particular sensing cycle to detect an object or an environment in the area.
  • a given instance of the LIDAR data can include the LIDAR data from a given sensing cycle of LIDAR sensor 136.
  • a given LIDAR data instance can correspond to, for example, a given sweep of the LIDAR sensor 136 generated during the sensing cycle of the LIDAR sensor 136.
  • the LIDAR data generated during the sensing cycle of LIDAR sensor 136 can include, for example, a plurality of points reflected off of a surface of an object in an environment of vehicle 100, and detected by at least one receiver component of the LIDAR component as data points.
  • the LIDAR sensor 136 detects a plurality of data points in an area of the environment of vehicle 100, and corresponding reflections are detected.
  • One or more of the data points may also be captured in subsequent sensing cycles. Accordingly, the range and velocity for a point that is indicated by the LIDAR data of a sweep of LIDAR sensor 136 can be based on multiple sensing cycle events by referencing prior (and optionally subsequent) sensing cycle events.
  • multiple sensing cycles can have the same duration, the same field-of-view, or the same pattern of waveform distribution (through directing of the waveform during the sensing cycle).
  • multiple sweeps can have the same duration (e.g., 50 milliseconds, 100 milliseconds, 300 milliseconds, or other durations) and the same field-of-view (e.g., 60°, 90°, 180°, 360°, or other fields-of-view).
  • LIDAR sensor 136 can include a phase coherent LIDAR component during a sensing cycle.
  • the instances of the first sensor data 130A can include LIDAR data from a sensing cycle of LIDAR sensor 136.
  • the LIDAR data from the sensing cycle of LIDAR sensor 136 can include, for example, a transmitted encoded waveform that is sequentially directed to, and sequentially reflects off of, a plurality of points in an environment of vehicle 100 - and reflected portions of the encoded waveform are detected, in a corresponding sensing event of the sensing cycle, by the at least one receiver of the phase coherent LIDAR component as data points.
  • the waveform is directed to a plurality of points in an area of the environment of vehicle 100, and corresponding reflections detected, without the waveform being redirected to those points in the sensing cycle.
  • the range and velocity for a point that is indicated by the LIDAR data of a sensing cycle can be instantaneous in that it is based on a single sensing event without reference to a prior or subsequent sensing event.
  • multiple (e.g., all) sensing cycles can have the same duration, the same field-of-view, or the same pattern of waveform distribution (through directing of the waveform during the sensing cycle).
  • each of multiple sensing cycles that are each a sweep can have the same duration, the same field-of-view, and the same pattern of waveform distribution.
  • the duration, field-of-view, or waveform distribution pattern can vary amongst one or more sensing cycles.
  • a first sensing cycle can be of a first duration, have a first field-of-view, and a first waveform distribution pattern; and a second sensing cycle can be of a second duration that is shorter than the first, have a second field-of-view that is a subset of the first field-of-view, and have a second waveform distribution pattern that is denser than the first.
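These per-cycle parameters (duration, field-of-view, and waveform distribution pattern) can be thought of as a small configuration record; the concrete values below are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class SensingCycleConfig:
    duration_ms: float        # e.g., 50, 100, or 300 ms
    field_of_view_deg: float  # e.g., 60, 90, 180, or 360 degrees
    points_per_degree: float  # density of the waveform distribution pattern

# A wide, coarse first cycle followed by a shorter, narrower, denser second cycle.
first_cycle = SensingCycleConfig(duration_ms=100.0, field_of_view_deg=360.0, points_per_degree=2.0)
second_cycle = SensingCycleConfig(duration_ms=50.0, field_of_view_deg=90.0, points_per_degree=8.0)
print(first_cycle, second_cycle)
```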
  • Online calibration module 254 can generate a correction instance 254A based at least in part on the global pose instance transmitted to online calibration module 254 from global pose module 252.
  • the correction instance 254A can include drift rate(s) across multiple local pose instances.
  • the drift rate(s) can indicate a first magnitude of drift, in one or more dimensions (e.g., X-dimension, Y-dimension, Z- dimension, roll-dimension, pitch-dimension, yaw-dimension, or other dimensions), over a period of time, a second magnitude of drift, in one or more of the dimensions, over a distance, or both.
  • the drift rate(s) can include, for example, a temporal drift rate, a distance drift rate, or both.
  • the temporal drift rate can represent a magnitude of drift, in one or more dimensions, over the period of time in generating the multiple local pose instances.
  • the distance drift rate can represent a magnitude of drift, in one or more dimensions, over a distance travelled in generating the multiple local pose instances.
  • the correction instance 254A can include a linear combination of the temporal drift rate and the distance drift rate.
  • the correction instance can be further generated based on instances of the second sensor data 180A, instances of the third sensor data 130B, or both. As shown in FIG. 2B, online calibration module 254 can receive second sensor data 180A.
  • the second sensor data 180A can include, for example, IMU data generated by one or more of IMU(s) 140, wheel encoder data generated by wheel encoder(s) 142, IMU data generated by one or more of IMU(s) 182, wheel encoder data generated by wheel encoder(s) 184, or any combination thereof.
  • the instances of the second sensor data 180A can include instances of the IMU data, the wheel encoder data, or both.
  • the instances of the IMU data can be the most recently generated instances of the IMU data
  • the instances of the wheel encoder data can be the most recently generated instances of the wheel encoder data.
  • the third sensor data 130B can include, for example, SATNAV data generated by the SATNAV sensor 132 of primary sensor system 130.
  • the instances of the third sensor data 130B can include most recently generated instances of the SATNAV data. Generating of the correction instance 254A is described in more detail below (e.g., with respect to FIG. 3, block 600 of FIG. 4, and method 600A of FIG. 6). Further, online calibration module 254 can transmit the generated correction instance 254A to local pose module 292. In some additional or alternative implementations, the correction instance 254A can include, or be limited to, global pose instances generated by global pose module 252.
  • Local pose module 292 can generate local pose instances of a local pose of vehicle 100.
  • the local pose of vehicle 100 represents a pose of vehicle 100 with respect to a particular frame of reference
  • the particular frame of reference of the local pose instances can be a local frame of reference.
  • an initial local pose instance can correspond to a certain point in space (e.g., X1, Y1, and Z1).
  • subsequent local pose instances can be determined with respect to this point in space.
  • a first subsequent local pose instance can correspond to X1+X', Y1+Y', and Z1+Z', where X', Y', and Z' correspond to a positional difference of vehicle 100 between a first time when the initial local pose instance was determined and a second time when the first subsequent local pose instance was determined.
  • a second subsequent local pose instance can correspond to X'+X", Y'+Y", and Z'+Z", and so on for further subsequent local pose instances.
  • the particular frame of reference of the local pose instances can be a local frame of reference with respect to the tile(s).
  • an initial global pose instance can provide local pose module 292 with an indication of the tile that vehicle 100 is located therein, and local pose module 292 can then determine the local pose instances relative to the tile(s).
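A compact sketch of this local frame of reference, with an initial local pose instance anchoring the frame and subsequent instances formed by accumulating positional differences onto it, is shown below; the delta values are placeholders.

```python
class LocalFrame:
    """Accumulates positional differences onto an initial local pose instance."""
    def __init__(self, x1, y1, z1):
        self.pose = [x1, y1, z1]         # initial local pose instance (X1, Y1, Z1)

    def update(self, dx, dy, dz):
        """Apply the positional difference of the vehicle since the previous local pose instance."""
        self.pose = [self.pose[0] + dx, self.pose[1] + dy, self.pose[2] + dz]
        return tuple(self.pose)

frame = LocalFrame(0.0, 0.0, 0.0)
print(frame.update(1.2, 0.1, 0.0))   # first subsequent local pose instance
print(frame.update(1.1, 0.0, 0.0))   # second subsequent local pose instance
```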
  • the local pose is not generated based on any vision data (e.g., LIDAR data or other vision data). Rather, as shown in FIG. 2B, local pose module 292 can receive instances of the second sensor data 180A described above (e.g., the IMU data and the wheel encoder data), and the local pose instances can be generated based at least in part on instances of the second sensor data 180A. Generating local pose instances without utilization of any vision data can enable the local pose instances to be generated more frequently (e.g., at a frequency that is greater than that of vision data generation) and using less computational resources. Further, generating local pose instances without utilization of any vision data can enable the local pose instances to be generated even when the vision sensor(s) generating the vision data are malfunctioning.
  • the local pose instances can be further generated based on the correction instance 254A transmitted to local pose module 292 from online calibration module 254.
  • the local pose instances can be utilized by various other subsystem(s) described herein to control operation of vehicle 100 (e.g., planning subsystem 154, control subsystem 158, controlled stop subsystem 194, or other subsystems).
  • the correction instances are generated based on past global pose instances and local pose instances, but differ from global pose instances.
  • the correction instances can include drift rate(s) in lieu of global pose instances.
  • the local pose module 292 can generate local pose instances, utilizing the drift rate(s), more efficiently than if global pose instances were instead utilized in generating the local pose instances.
  • the correction instances can be applicable to and utilized in generating multiple local pose instances, whereas global pose instances are only applicable to generating a single temporally corresponding local pose instance. Generation of the local pose instances is described in more detail below (e.g., with respect to FIG. 3, block 700 of FIG. 4, and method 700A of FIG. 7).
  • local pose module 292 can transmit the generated local pose instances to other module(s) or subsystem(s) described herein (e.g., as discussed in more detail below with respect to FIGS. 3-6).
  • localization subsystem 152 of primary vehicle control system 120 may further include Earth Centered Earth Fixed (“ECEF”) pose module 256 and bootstrap module 258.
  • ECEF pose module 256 can generate ECEF pose instances of an ECEF pose of vehicle 100.
  • the ECEF pose of vehicle 100 represents a pose of vehicle 100 with respect to longitude and latitude coordinates, and the ECEF pose instance represents position and location information with respect to the Earth at a given time instance.
  • ECEF pose module 256 can receive instances of the third sensor data 130B described above (e.g., the SATNAV data), and can generate ECEF pose instances based at least in part on the instances of the third sensor data 130B.
  • ECEF pose module 256 further generates the ECEF pose instances based on the local pose instances (e.g., as discussed in greater detail below with respect to FIG. 3, block 500 of FIG. 4, and FIG. 5).
  • bootstrap module 258 can identify candidate tile(s) in which vehicle 100 is located.
  • the candidate tile(s) can be provided to global pose module 252 as information about the given tile in which vehicle 100 is actually located.
  • Bootstrap module 258 can receive instances of the first sensor data 130A described above (e.g., the LIDAR data), and can identify candidate tile(s) based at least in part on ECEF pose instances and instances of the first sensor data 130A. In some additional or alternative implementations, bootstrap module 258 further identifies the candidate tile(s) based on the local pose instances (e.g., as discussed in greater detail below with respect to FIG. 3, block 500 of FIG. 4, and FIG. 5).
  • the local pose instances can be generated by local pose module 292 at a first frequency f1 and the correction instances can be generated by online calibration module 254 at a second frequency f2, where the first frequency f1 is higher than the second frequency f2.
  • the local pose instances are generated at a faster rate than the correction instances.
  • a plurality of local pose instances can be generated based on the same correction instance, and prior to receiving, at the local pose module 292, an additional correction instance that is generated based on an additional global pose instance.
  • local pose module 292 can track relative movement of vehicle 100, and errors in tracking the relative movement of vehicle 100 can be mitigated by periodically adjusting calculations at local pose module 292 via the correction instance 254A that is generated based on actual locations of vehicle 100 as indicated by the global pose instances.
  • FIG. 3 is a process flow illustrating an example implementation of the localization subsystems referenced in FIGS. 2A and 2B.
  • the process flow of FIG. 3 can be implemented by primary vehicle control system 120 and secondary vehicle control system 170.
  • modules on the left side of dashed line 300 can be implemented by secondary vehicle control system 170 (e.g., via localization subsystem 192), and modules on the right side of the dashed line 300 can be implemented by primary vehicle control system 120 (e.g., via localization subsystem 152).
  • Local pose module 292 can receive instances of IMU data 182A generated by one or more IMUs of vehicle 100 (e.g., IMU(s) 182 of secondary sensor system 180 or IMU(s) 140 of primary sensor system 130).
  • the IMU data 182A generated by one or more of the IMU(s) can be generated at a first frequency f1.
  • local pose module 292 can also receive an instance of wheel encoder data 184A generated by one or more wheel encoders of vehicle 100 (e.g., wheel encoder(s) 184 of secondary sensor system 180 or wheel encoder(s) 142 of primary sensor system 130).
  • the wheel encoder data 184A generated by one or more of the wheel encoders can be generated at a second frequency f2 that is different from the first frequency f1 at which the IMU data 182A is generated.
  • the combination of the IMU data 182A and the wheel encoder data is sometimes referred to herein as "second sensor data" (e.g., second sensor data 180A of FIGS. 2A and 2B).
  • the IMU data 182A and the wheel encoder data 184A are generated at different frequencies.
  • Local pose module 292 can include propagated filter(s) that incorporate the most recent version of sensor data in instances of the second sensor data (i.e., anytime machinery). Further, local pose module 292 can receive a correction instance 254A generated by online calibration module 254 as described herein.
  • local pose module 292 can process, using a state estimation model that is filter-based (e.g., Kalman filter, extended Kalman filter, dual Kalman filter, or other filter-based techniques) or observer-based (e.g., recursive least squares or other observer-based techniques), the instance of the second sensor data (including IMU data 182A and wheel encoder data 184A) and the correction instance 254A to generate output.
  • the output can include, for example, a local pose instance 292A of a local pose of vehicle 100, estimated velocities of vehicle 100, estimated accelerations of vehicle 100, or any combination thereof.
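  • As a rough, hedged illustration of the filter-based processing described above, the sketch below runs a one-dimensional Kalman filter that propagates the state with IMU-derived acceleration and corrects it with wheel-encoder-derived speed; the state layout, noise values, and function names are assumptions for illustration rather than the patent's actual implementation.
```python
import numpy as np

def local_pose_step(x, P, accel, wheel_speed, dt, q_accel=0.5, r_speed=0.1):
    """One filter iteration over an instance of second sensor data.

    x: state [position, velocity] in the local frame
    P: 2x2 state covariance
    accel: acceleration derived from IMU data (control input)
    wheel_speed: velocity observation derived from wheel encoder data
    """
    # Predict: propagate the state with the IMU acceleration.
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt ** 2, dt])
    x = F @ x + B * accel
    Q = q_accel * np.outer(B, B)
    P = F @ P @ F.T + Q

    # Update: correct the velocity with the wheel encoder measurement.
    H = np.array([[0.0, 1.0]])
    y = wheel_speed - (H @ x)[0]
    S = (H @ P @ H.T)[0, 0] + r_speed
    K = (P @ H.T / S).ravel()
    x = x + K * y
    P = (np.eye(2) - np.outer(K, H)) @ P
    return x, P

# Example: generate local pose instances at a higher rate than any vision data.
x, P = np.zeros(2), np.eye(2)
for accel, speed in [(0.2, 0.0), (0.2, 0.02), (0.1, 0.04)]:
    x, P = local_pose_step(x, P, accel, speed, dt=0.01)
print("local pose (position, velocity):", x)
```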
  • Local pose module 292 can then transmit the local pose instance 292A to other module(s) (e.g., ECEF pose module 256, bootstrap module 258, global pose module 252, online calibration module 254, or any combination thereof) and subsystem(s) (e.g., planning subsystem 154, control subsystem 158, controlled stop subsystem 194, or any combination thereof) over one or more networks via network interface 198.
  • a frequency at which local pose instances are generated can be based on the frequency at which instances of the second sensor data are generated.
  • ECEF pose module 256 can receive instances of SATNAV data 132A generated by a SATNAV sensor of vehicle 100 (e.g., SATNAV sensor 132 of primary sensor system 130).
  • the SATNAV data 132A generated by one or more of the SATNAV sensors can be generated at a third frequency f3 that is different from both the first frequency f1 at which the IMU data 182A is generated and the second frequency f2 at which the wheel encoder data 184A is generated.
  • ECEF pose module 256 can also receive local pose instances generated by local pose module 292.
  • ECEF pose module 256 can generate an ECEF pose instance 256A of an ECEF pose of vehicle 100 based on the instances of the SATNAV data 132A and the local pose instances from local pose module 292.
  • the ECEF pose instance 256A can map longitude and latitude coordinates included in the SATNAV data 132A to the local pose instances received from local pose module 292.
  • ECEF pose module 256 generates an ECEF pose instance 256A only if vehicle 100 travels a threshold distance (e.g., as indicated by the local pose instances).
  • ECEF pose module 256 generates an ECEF pose instance 256A for an instance of the SATNAV data 132A that is received at ECEF pose module 256.
  • ECEF pose module 256 may only transmit the ECEF pose instance 256A to bootstrap module 258 if there is a sufficiently low error in mapping the SATNAV data 132A to the local pose instances from local pose module 292.
  • ECEF pose module 256 may transmit location information to bootstrap module 258 without any orientation information.
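  • The following hedged sketch (module and variable names are assumptions, and standard WGS-84 constants are used since the patent does not specify a datum) illustrates one way an ECEF position could be computed from SATNAV latitude/longitude and paired with the temporally corresponding local pose instance, withholding the result when the mapping error is too high.
```python
import math

# WGS-84 constants (standard values; the patent does not specify a datum).
WGS84_A = 6378137.0
WGS84_E2 = 6.69437999014e-3

def geodetic_to_ecef(lat_deg, lon_deg, alt_m=0.0):
    """Convert SATNAV latitude/longitude (degrees) to ECEF X, Y, Z (meters)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + alt_m) * math.sin(lat)
    return x, y, z

def make_ecef_pose_instance(satnav_fix, local_pose, mapping_error, error_threshold=2.0):
    """Pair an ECEF position with the temporally corresponding local pose
    instance; return None when the mapping error is too high to transmit."""
    if mapping_error > error_threshold:
        return None
    return {"ecef": geodetic_to_ecef(*satnav_fix), "local_pose": local_pose}

print(make_ecef_pose_instance((47.61, -122.33), (12.0, 3.0, 0.0), mapping_error=0.4))
```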
  • Bootstrap module 258 can receive instances of LIDAR data 136A generated by one or more LIDAR sensors of vehicle 100 (e.g., LIDAR 136 of primary sensor system 130). The LIDAR data 136A generated by one or more of the LIDAR sensors can be generated at a fourth frequency f4. Further, bootstrap module 258 can also receive local pose instance(s) 292A generated by local pose module 292 and ECEF pose instances (with or without orientation information) generated by ECEF pose module 256.
  • bootstrap module 258 can process an instance of the LIDAR data 136A, a local pose instance 292A, and an ECEF pose instance 256A to identify candidate tile(s) 258A in which vehicle 100 is potentially located.
  • the candidate tile(s) 258A represent previously mapped portions of geographical areas that are stored in a memory of the system that can be located locally at vehicle 100 or remotely in one or more databases.
  • the candidate tile(s) 258A can include information as to a current tile in which vehicle 100 is located.
  • the global pose module 252 can utilize the information to identify a matching tile in which vehicle 100 is located.
  • bootstrap module 258 can identify the candidate tile(s) 258A by assembling point cloud data from the LIDAR data 136A into a point cloud that includes a down sampled version of the LIDAR data with respect to a given tile associated with the local pose instance 292A (and optionally a neighborhood of tiles surrounding the given tile), identifying tile(s) that map geographical areas within a threshold radius from the location on the Earth indicated by the ECEF pose instance 256A, and aligning the down sampled point cloud to the tile(s) to identify the candidate tile(s) 258A.
  • Bootstrap module 258 can align the down sampled point cloud to the tile(s) using various geometric matching techniques (e.g., iterative closest point or other geometry matching algorithms).
  • bootstrap module 258 may remove outlier data or fuzzy data.
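  • A simplified sketch of this bootstrap flow is given below; it is not the patent's implementation, the mean nearest-neighbor residual stands in for a full ICP alignment, the tile layout is assumed to be a mapping from tile id to a center coordinate and a stored point cloud, and the thresholds are illustrative.
```python
import numpy as np

def downsample(points: np.ndarray, voxel: float = 1.0) -> np.ndarray:
    """Keep one point per voxel as a crude down sampled version of the LIDAR data."""
    keys = np.floor(points / voxel).astype(int)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def alignment_error(cloud: np.ndarray, tile_cloud: np.ndarray) -> float:
    """Mean nearest-neighbor distance; a stand-in for a full ICP alignment residual."""
    d = np.linalg.norm(cloud[:, None, :] - tile_cloud[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

def identify_candidate_tiles(lidar_points, tiles, ecef_position, radius=200.0, max_error=1.5):
    """tiles: mapping of tile id -> (tile_center, stored_point_cloud), with tile
    centers and the ECEF-derived position assumed to be in a common map frame."""
    cloud = downsample(np.asarray(lidar_points))
    candidates = []
    for tile_id, (center, tile_cloud) in tiles.items():
        # Only consider tiles that map areas within a threshold radius of the ECEF fix.
        if np.linalg.norm(np.asarray(center) - np.asarray(ecef_position)) > radius:
            continue
        err = alignment_error(cloud, np.asarray(tile_cloud))
        if err <= max_error:  # only keep sufficiently good alignments
            candidates.append((err, tile_id))
    return [tile_id for _, tile_id in sorted(candidates)]
```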
  • the system can store a last known tile prior to losing power (e.g., vehicle 100 being turned off), retrieve the last known tile upon power being restored (e.g., vehicle 100 being turned back on), and identify the last known tile as a candidate tile (and optionally a neighborhood of tiles that includes the last known tile).
  • bootstrap module 258 identifies candidate tile(s) 258A to global pose module 252 as instances of the LIDAR data 136A are generated.
  • bootstrap module 258 identifies candidate tile(s) 258A to global pose module 252 at predetermined periods of time (e.g., every ten seconds, every thirty seconds, or other predetermined periods of time).
  • bootstrap module 258 may only transmit the candidate tile(s) 258A to global pose module 252 if there is a sufficiently low error in aligning the down sampled point cloud with respect to a given tile associated with the local pose instance 292A.
  • Global pose module 252 can receive instances of LIDAR data 136A generated by one or more LIDAR sensors of vehicle 100 (e.g., LIDAR 136 of primary sensor system 130). The LIDAR data 136A generated by one or more of the LIDAR sensors can be generated at a fourth frequency f4.
  • the instance of the LIDAR data 136A received by global pose module 252 is the same instance of LIDAR data utilized by bootstrap module 258 to identify the candidate tile(s) 258A.
  • the instance of the LIDAR data 136A received by global pose module 252 is distinct from the instance of LIDAR data utilized by bootstrap module 258 to identify the candidate tile(s) 258A.
  • global pose module 252 can also receive local pose instance(s) 292A generated by local pose module 292 and candidate tile(s) 258A identified by bootstrap module 258.
  • global pose module 252 can process an instance of the LIDAR data 136A, a local pose instance 292A, and candidate tile(s) to generate a global pose instance 252A.
  • the global pose instance 252A can identify a matching tile in which vehicle 100 is located, and position and orientation information of vehicle 100 within the matching tile.
  • global pose module 252 generates the global pose instance 252A by aligning a point cloud generated based on the LIDAR data 136A with one or more previously stored point clouds of a given tile.
  • global pose module 252 can align the point cloud and one or more of the previously stored point clouds using various geometric matching techniques (e.g., iterative closest point or other geometry matching algorithms).
  • the one or more previously stored point clouds can be stored in association with a given tile, and can be accessed over one or more networks (e.g., using mapping subsystem 160).
  • the one or more previously stored point clouds can be identified based on a most recently generated local pose instance (e.g., local pose instance 292A) or based on the second sensor data (e.g., IMU data 182A or wheel encoder data 184A).
  • the one or more previously stored point clouds can be stored in association with the given tile associated with the most recently generated local pose instance (e.g., local pose instance 292A) or a location of vehicle 100 determined based on the second sensor data (e.g., IMU data 182A or wheel encoder data 184A).
  • global pose module 252 generates the global pose instance 252A by assembling point cloud data from the LIDAR data 136A into a down sampled point cloud with respect to a given tile associated with the local pose instance 292A (and optionally a neighborhood of tiles surrounding the given tile) and a point cloud that includes a finer sampled version of the LIDAR data with respect to the given tile associated with the local pose instance 292A (and optionally a neighborhood of tiles surrounding the given tile), aligning the down sampled point cloud to the tile(s) to identify a subset of the candidate tile(s) 258A, and aligning the finer sampled point cloud to the tile(s) to identify a matching tile.
  • Global pose module 252 can align the down sampled point cloud and the finer sampled point cloud to the tile(s) using various geometric matching techniques (e.g., iterative closest point or other geometry matching algorithms). Moreover, in assembling the point cloud data from the LIDAR data 136A into the down sampled point cloud or the finer sampled point cloud, global pose module 252 may remove outlier data or fuzzy data.
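  • The coarse-to-fine matching described above might be sketched as follows (hedged: the nearest-neighbor residual is a placeholder for an ICP-style alignment score, and the tile representation and "keep" size are assumptions).
```python
import numpy as np

def nn_residual(cloud: np.ndarray, tile_cloud: np.ndarray) -> float:
    """Mean nearest-neighbor distance; a placeholder for an ICP-style fit score."""
    d = np.linalg.norm(cloud[:, None, :] - tile_cloud[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

def match_tile(coarse_cloud, fine_cloud, candidate_tiles, keep=3):
    """candidate_tiles: mapping of tile id -> previously stored point cloud.

    Stage 1: align the down sampled cloud to every candidate tile and keep only
    the best-scoring subset.  Stage 2: align the finer sampled cloud to that
    subset and return the matching tile with its residual."""
    coarse = sorted((nn_residual(coarse_cloud, pc), tid)
                    for tid, pc in candidate_tiles.items())
    subset = [tid for _, tid in coarse[:keep]]
    fine = sorted((nn_residual(fine_cloud, candidate_tiles[tid]), tid)
                  for tid in subset)
    best_error, matching_tile = fine[0]
    return matching_tile, best_error
```
  • In this sketch, the down sampled point cloud cheaply prunes the candidate tile(s) before the finer sampled point cloud is aligned only against the surviving subset, mirroring the two-stage alignment described above.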
  • global pose module 252 generates global pose instances as instances of the LIDAR data 136A are generated. In some versions of those implementations, global pose module 252 may only transmit the global pose instance 252A to online calibration module 254 after a threshold number of global pose instances are successfully generated. By only transmitting the global pose instance 252A after determining a threshold number of global pose instances are successfully generated, the resulting correction instance 254A is generated based on an accurate global pose instance, rather than a global pose instance that does not represent an actual pose of vehicle 100.
  • Online calibration module 254 can receive an instance of the IMU data 182A, an instance of the wheel encoder data 184A, an instance of the SATNAV data 132A, and the global pose instance 252A.
  • online calibration module 254 can process, using a state estimation model that is filter-based (e.g., Kalman filter, extended Kalman filter, dual Kalman filter, or other filter-based techniques) or observer-based (e.g., recursive least squares or other observer-based techniques), the instance of the IMU data 182A, the instance of the wheel encoder data 184A, the instance of the SATNAV data 132A, and the global pose instance 252A to generate output.
  • the output can include, for example, estimates of wheel radii of vehicle 100 or sensor biases of individual sensors of vehicle 100 (e.g., sensor(s) included in primary sensor system 130 or secondary sensor system 180).
  • the correction instance 254A can then be generated based on the estimates of the wheel radii of vehicle 100 or the sensor biases of individual sensors of vehicle 100.
  • the output generated across the state estimation model can be the correction instance 254A, such that the state estimation model acts like a black box.
  • online calibration module 254 can generate the correction instance 254A based on historical global pose instances, including the global pose instance 252A, and historical local pose instances (e.g., as indicated by the dashed line from the local pose instances 292A), including the local pose instance 292A.
  • the historical pose instances (both global and local) may be limited to those that are generated within a threshold duration of time with respect to a current time (e.g., pose instances generated in the last 100 seconds, 200 seconds, or other durations of time), such that online calibration module 254 only considers a sliding window of the historical pose instances.
  • online calibration module 254 can identify historical global pose instances that temporally correspond to historical local pose instances, and can use the identified historical global pose instances as indicators in generating the correction instance. As described in more detail herein (e.g., with respect to block 600 of FIG. 4 and FIG. 6), the correction instance can include drift rate(s). Online calibration module 254 can then transmit the correction instance 254A to local pose module 292 over one or more networks via network interface 198. Thus, local pose instances generated by local pose module 292 can be generated based on the correction instance 254A as well as additional instances of the IMU data 182A and additional instances of the wheel encoder data 184A.
  • FIG. 4 illustrates an example method 400 for localization of an autonomous vehicle.
  • the method 400 may be performed by an autonomous vehicle analyzing sensor data generated by sensor(s) of the autonomous vehicle (e.g., vehicle 100 of FIG. 1), by another vehicle (autonomous or otherwise), by another computer system that is separate from the autonomous vehicle, or any combination thereof.
  • operations of the method 400 are described herein as being performed by a system (e.g., processor(s) 122 of primary vehicle control system 120, processor(s) 172 of secondary vehicle control system 170, or a combination thereof).
  • the operations of method 400 of FIG. 4 are described herein with respect to method 500A of FIG. 5, method 600A of FIG. 6, and method 700A of FIG. 7. It will be appreciated that the operations of the methods of FIGS. 4-7 may be varied, and that some operations may be performed in parallel or iteratively in some implementations, so the methods illustrated in FIGS. 4-7 are merely provided for illustrative purposes.
  • the system generates global pose instance(s) of a global pose of an autonomous vehicle based on an instance of first sensor data generated by first sensor(s) of the autonomous vehicle and based on a local pose instance of a local pose of the autonomous vehicle. More particularly, the system can generate the global pose instance using localization subsystem 152 of primary vehicle control system 120. As one non-limiting example, FIG. 5 shows an example method 500A of how the system generates the global pose instance(s) of the global pose of the autonomous vehicle at block 500 of FIG. 4.
  • the system receives an instance of third sensor data generated by third sensor(s) of the autonomous vehicle.
  • the third sensor data can include, for example, SATNAV data generated by one or more SATNAV sensors of the autonomous vehicle (e.g., SATNAV sensor 132 of primary sensor system 130).
  • the instance of the third sensor data can include, for example, a most recent instance of the SATNAV data generated by the SATNAV sensor that includes location data, such as longitude and latitude coordinates.
  • the system generates an ECEF pose instance of an ECEF pose of the autonomous vehicle based on the instance of the third sensor data received at block 552.
  • the ECEF pose of the autonomous vehicle represents a pose of the autonomous vehicle with respect to longitude and latitude coordinates
  • the ECEF pose instance represents position and location information with respect to given longitude and latitude coordinates at a given time instance.
  • the system generates the ECEF pose instance by determining location and position information for the autonomous vehicle with respect to the Earth based on the longitude and latitude coordinates included in the SATNAV data. Put another way, the ECEF pose instance identifies a location on the Earth (e.g., using longitude and latitude coordinates) at which the autonomous vehicle is located.
  • generating the ECEF pose instance is further based on a local pose instance of a local pose of the autonomous vehicle.
  • the system maps the local pose instance to the ECEF pose instance. For example, the system can identify a given tile associated with the local pose instance, and can determine the given tile associated with the local pose instance is within a threshold radius of the longitude and latitude coordinates associated with the ECEF pose instance. Further, the system may only transmit the ECEF pose instance to other subsystems or modules thereof (e.g., bootstrap module 258) if the system successfully maps the local pose instance to the ECEF pose instance (or an error in the mapping fails to satisfy an error threshold or is sufficiently low).
  • the system receives an instance of the first sensor data generated by the first sensor(s) of the autonomous vehicle.
  • the first sensor data can include, for example, LIDAR data generated by one or more LIDAR sensors of the autonomous vehicle (e.g., LIDAR sensor 136 of primary sensor system 130).
  • the instance of the first sensor data can include, for example, a most recent instance of the LIDAR data that includes point cloud data from a sensing cycle of the one or more LIDAR sensors.
  • the system identifies candidate tile(s) based on the ECEF pose instance of the autonomous vehicle generated at block 554 or the instance of the first sensor data received at block 556.
  • the candidate tile(s) represent previously mapped portions of geographical areas that are stored in memory of the system or remotely.
  • the system can retrieve the identified candidate tile(s) using, for example, mapping subsystem 160.
  • the system can identify the candidate tile(s) by assembling point cloud data from a sensing cycle of the LIDAR sensor into a down sampled point cloud, identifying tile(s) that map geographical areas within a threshold radius from the location on the Earth indicated by the ECEF pose instance, and aligning the down sampled point cloud to the tile(s) to identify candidate tile(s).
  • the system can align the down sampled point cloud to the tile(s) using various geometric matching techniques (e.g., iterative closest point or other geometry matching algorithms).
  • the system can store a last known tile prior to losing power (e.g., the autonomous vehicle being turned off), retrieve the last known tile upon power being restored (e.g., the autonomous vehicle being turned back on), and identify the last known tile as a candidate tile (and optionally a neighborhood of tiles that includes the last known tile).
  • identifying the candidate tile(s) is further based on a local pose instance of a local pose of the autonomous vehicle.
  • the system can identify the candidate tile(s) by identifying a given tile associated with the local pose instance, aligning the down sampled point cloud to the given tile associated with the local pose instance, and identifying a neighborhood of tile(s), including the given tile, as candidate tile(s).
  • a neighborhood of tile(s) can include, for example, the given tile and a plurality of additional tiles that are locationally proximate to the given tile (e.g., four tiles that border the given tile, eight tiles that surround the given tile, or other configurations of locationally proximate tiles).
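  • Assuming square tiles indexed by integer (row, col) coordinates (an assumption for illustration; the patent does not define the tile indexing), a neighborhood of tiles could be enumerated as follows.
```python
def tile_neighborhood(tile, include_diagonals=True):
    """Return the given tile plus locationally proximate tiles.

    tile: (row, col) index of the tile associated with the local pose instance.
    include_diagonals=False yields the four tiles that border the given tile;
    True yields the eight tiles that surround it."""
    row, col = tile
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if include_diagonals:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    return [tile] + [(row + dr, col + dc) for dr, dc in offsets]

print(tile_neighborhood((10, 7)))  # the given tile plus its eight surrounding tiles
```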
  • the system may only transmit the candidate tile(s) to other subsystems or modules thereof (e.g., global pose module 252) if the system successfully aligns the down sampled point cloud to the given tile associated with the local pose instance (or an error in the aligning fails to satisfy an error threshold or is sufficiently low).
  • the system generates the global pose instance of the global pose of the autonomous vehicle based on an instance of the first sensor data.
  • the global pose of the autonomous vehicle represents a pose of the autonomous vehicle with respect to tile(s), and the global pose instance represents position and location information with respect to a given tile at a given time instance.
  • the global pose instance is generated based on the same LIDAR data from the same sensing cycle of the one or more LIDAR sensors that is used to identify the candidate tile(s).
  • the global pose instance is generated based on different LIDAR data from a different sensing cycle of the one or more LIDAR sensors than that used to identify the candidate tile(s).
  • the global pose instance is further generated based on instances of second sensor data generated by second sensor(s) of the autonomous vehicle (e.g., IMU(s) 140, 182, wheel encoder(s) 142, 184, or other sensors of FIG. 1).
  • the system can generate the global pose instance by identifying a matching tile based on the instance of the first sensor data. In some versions of those implementations, generating the global pose instance is further based on the candidate tile(s) identified at block 558, and the system can identify the matching tile from among the candidate tile(s) responsive to identifying the candidate tile(s) at block 558.
  • the system can identify the matching tile by assembling point cloud data from a sensing cycle of the LIDAR sensor into a down sampled point cloud (e.g., a representation that includes only a portion of the LIDAR data generated over the sensing cycle) and a finer sampled point cloud (e.g., a representation that includes all LIDAR data generated over the sensing cycle), selecting a given one of the candidate tile(s) to compare to the point cloud data, iteratively aligning the down sampled point cloud to the candidate tile(s) to identify a subset of the candidate tile(s), and aligning the finer sampled point cloud to the subset of the candidate tile(s) to identify a matching tile.
  • the system can align the down sampled point cloud or the finer sampled point cloud to the candidate tile(s) using various geometric matching techniques (e.g., iterative closest point or other geometry matching algorithms).
  • the system aligns the down sampled point cloud to further narrow the candidate tile(s) to those in which the autonomous vehicle is most likely located, and only aligns the finer sampled point cloud to those tile(s) rather than fully processing each of the candidate tile(s).
  • location information and orientation information of the autonomous vehicle can be determined.
  • generating the global pose instance is further based on a local pose instance of a local pose of the autonomous vehicle that temporally corresponds to the global pose instance.
  • the system can identify the matching tile by identifying a given tile associated with the local pose instance and a neighborhood of tiles locationally proximate to the given tile, and can identify the matching tile in a similar manner described above, but using the neighborhood of tiles locationally proximate to the given tile associated with the local pose instance rather than the candidate tile(s) identified at block 558.
  • the system may only transmit the global pose instance to other subsystems or modules thereof (e.g., online calibration module 254) if the system successfully aligns the finer sampled point cloud to the given tile associated with the local pose instance (or an error in the aligning fails to satisfy an error threshold or is sufficiently low).
  • the system determines whether there is any failure in generating the global pose instance.
  • a failure in generating the global pose instance can be detected, for example, based on sensor failure of one or more of the LIDAR sensors of the autonomous vehicle, failure to identify a matching tile (e.g., the error satisfies a threshold due to construction areas altering traffic patterns, due to occlusion caused by other vehicles or foliage, due to unmapped geographical areas, and so on), or other failures. If, at an iteration of block 562, the system does not detect any failure in generating the global pose instance, then the system proceeds to block 564.
  • the system stores the global pose instance, and transmits the global pose instance to an online calibration module.
  • the system can store the global pose instance in memory of the system or remotely. Moreover, and as discussed in greater detail below with respect to block 600 of FIG. 4, the system can use the global pose in generating correction instance(s).
  • the system determines whether a further instance of the first sensor data is received. If, at an iteration of block 566, the system determines that no further instance of the first sensor data is received, then the system continuously monitors, at block 566, for the further instance of the first sensor data. However, at an iteration of block 566, if the system determines that a further instance of the first sensor data is received, then the system returns to block 560 and proceeds with the method 500A.
  • the system can generate the global pose instance based on the further instance of the first sensor data. In this manner, the system can continuously generate the global pose instance(s) for each instance of the first sensor data received by the system, store the generated global pose instance(s), and transmit the global pose instance(s) to the online calibration module to generate correction instance(s).
  • blocks 552-558 are depicted as being separate from blocks 560-566, and blocks 552-558 can be considered initialization operations for generating global pose instance(s).
  • the system may only execute the initialization operations of blocks 552-558 again in the method 500A if the system detects a failure in generating a given global pose instance.
  • the system can store a last known tile prior to losing power (e.g., the autonomous vehicle being turned off), and retrieve the last known tile upon power being restored (e.g., the autonomous vehicle being turned back on).
  • the system may not need to perform the initialization operations of blocks 552-558.
  • the system can use information from previously generated global pose instance(s) and local pose instance(s) in subsequently generating the global pose instance(s) without having to perform the initialization operations of blocks 552-558 at each iteration of generating a global pose instance.
  • the system can periodically perform the initialization operations of blocks 552-558.
  • the system can periodically perform the initialization operations of blocks 552-558 for the sake of redundancy and verification of the tile(s) being utilized in generating the global pose instance(s).
  • the system generates correction instance(s) based on the global pose instance(s) and based on an instance of second sensor data generated by second sensor(s) of the autonomous vehicle. More particularly, the system can generate the correction instance(s) using localization subsystem 152 of primary vehicle control system 120. As one non-limiting example, FIG. 6 shows an example method 600A of how the system generates the correction instance at block 600 of FIG. 4.
  • the system receives the global pose instance of the global pose of the autonomous vehicle from a global pose module.
  • the global pose instance can be generated, for example, as discussed above with respect to block 500 of FIG. 4.
  • the system receives an instance of the second sensor data generated by the second sensor(s) of the autonomous vehicle.
  • the second sensor data can include, for example, IMU data generated by one or more IMUs of the autonomous vehicle (e.g., IMU(s) 140 of primary sensor system 130 or IMU(s) 182 of secondary sensor system 180), wheel encoder data generated by one or more wheel encoders of the autonomous vehicle (e.g., wheel encoder(s) 142 of primary sensor system 130 or wheel encoder(s) 184 of secondary sensor system 180), or both.
  • the instance of the second sensor data can include, for example, a most recent instance of the IMU data generated by one or more of the IMUs or a most recent instance of the wheel encoder data generated by the one or more wheel encoders that includes location data, such as longitude and latitude coordinates.
  • the system receives an instance of the third sensor data generated by the third sensor(s) of the autonomous vehicle.
  • the third sensor data can include, for example, SATNAV data generated by one or more SATNAV sensors of the autonomous vehicle (e.g., SATNAV sensor 132 of primary sensor system 130).
  • the instance of the third sensor data can include, for example, a most recent instance of the SATNAV data generated by the one or more SATNAV sensors that includes location data, such as longitude and latitude coordinates.
  • the system generates the correction instance based on the global pose instance, the instance of the second sensor data, the instance of the third sensor data, or any combination thereof.
  • the system can generate the correction instance using a state estimation model that is filter-based (e.g., Kalman filter, extended Kalman filter, dual Kalman filter, or other filter-based techniques) or observer-based (e.g., recursive least squares or other observer-based techniques).
  • the system can apply, as input across the state estimation model, the global pose instance received at block 652, the IMU data or the wheel encoder data included in the instance of the second sensor data received at block 654, or the SATNAV data included in the instance of the third sensor data received at block 656 to generate output.
  • the output generated across the state estimation model can be estimates of wheel radii of the autonomous vehicle or sensor biases of individual sensors of the autonomous vehicle (e.g., sensor(s) included in primary sensor system 130 or secondary sensor system 180).
  • the correction instance can then be generated based on the estimates of the wheel radii of the autonomous vehicle or the sensor biases of individual sensors of the autonomous vehicle.
  • the output generated across the state estimation model can be the correction instance, such that the state estimation model acts like a black box.
  • the system can generate the correction instance based on historical global pose instances, including the global pose instance received at block 652, and historical local pose instances, including a local pose instance generated by the local pose module based on the instance of the second sensor data received at block 654.
  • the historical pose instances (both global and local) may be limited to those that are generated within a threshold duration of time with respect to a current time (e.g., pose instances generated in the last 100 seconds, 200 seconds, or other durations of time), such that the system only considers a sliding window of the historical pose instances.
  • the system can identify historical global pose instances that temporally correspond to historical local pose instances, and can use the identified historical global pose instances as indicators in generating the correction instance.
  • a plurality of local pose instances can be generated in the same amount of time it takes to generate a single global pose instance. Further, as local pose instances are generated, each local pose instance can diverge further from a most recent global pose instance, and by identifying the global pose instances that temporally correspond to a given one of the local pose instances, the system can determine how each of the local pose instances are diverging.
  • the system can apply, as input across a Gauss-Newton optimization algorithm, the historical pose instances (both global and local), the IMU data or the wheel encoder data included in the instance of the second sensor data received at block 654, or the SATNAV data included in the instance of the third sensor data received at block 656 to generate output.
  • the output generated across the Gauss-Newton optimization algorithm can be estimates of wheel radii of the autonomous vehicle or sensor biases of individual sensors of the autonomous vehicle (e.g., sensor(s) included in primary sensor system 130 or secondary sensor system 180).
  • the correction instance can then be generated based on the estimates of the wheel radii of the autonomous vehicle or the sensor biases of individual sensors of the autonomous vehicle.
  • the output generated across the Gauss-Newton optimization algorithm can be the correction instance, such that the Gauss-Newton optimization algorithm acts like a black box.
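  • As one hedged possibility for the optimization described above, the sketch below fits a temporal drift rate and a distance drift rate to the divergence between temporally corresponding global and local pose instances; because the assumed model is linear, a single Gauss-Newton step (the normal equations) solves it, and the data values are purely hypothetical.
```python
import numpy as np

def fit_drift_rates(residuals, dts, dists):
    """Fit a temporal drift rate and a distance drift rate to the divergence
    between temporally corresponding global and local pose instances.

    residuals: observed divergence per pose pair (e.g., meters)
    dts:       elapsed time for each pair (seconds)
    dists:     distance travelled for each pair (meters)

    The assumed model residual ~= a*dt + b*dist is linear, so a single
    Gauss-Newton step (the normal equations) solves it exactly."""
    J = np.column_stack([dts, dists])         # Jacobian of the linear model
    r = np.asarray(residuals, dtype=float)
    a, b = np.linalg.solve(J.T @ J, J.T @ r)  # normal equations
    return a, b                               # temporal drift rate, distance drift rate

# Hypothetical sliding window of historical pose pairs.
temporal, distance = fit_drift_rates(
    residuals=[0.9, 1.1, 2.1], dts=[1.0, 1.0, 2.0], dists=[9.0, 10.0, 19.0])
print(temporal, distance)
```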
  • the correction instance can include drift rate(s), and as indicated at sub-block 658A, the system can determine the drift rate(s) across multiple local pose instances of the local pose of the autonomous vehicle that are generated by the system.
  • the drift rate(s) can indicate a first magnitude of drift, in one or more dimensions (e.g., X-dimension, Y-dimension, Z-dimension, roll-dimension, pitch-dimension, yaw-dimension, or other dimensions), over a period of time.
  • the drift rate(s) can indicate a second magnitude, in one or more of the dimensions, over a distance.
  • the drift rate(s) can include, for example, a temporal drift rate, a distance drift rate, or both.
  • the temporal drift rate can represent a first magnitude of drift, in one or more dimensions, over the period of time in generating the multiple local pose instances.
  • the distance drift rate can represent a second magnitude of drift, in one or more of the dimensions, over a distance travelled in generating the multiple local pose instances.
  • the correction instance can include a linear combination of the temporal drift rate and the distance drift rate.
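  • A minimal sketch of a correction instance carrying drift rate(s), with the expected drift expressed as a linear combination of the temporal drift rate and the distance drift rate (shown for a single dimension; field names and values are illustrative assumptions), follows.
```python
from dataclasses import dataclass

@dataclass
class CorrectionInstance:
    """A correction instance expressed as drift rate(s) rather than a global pose."""
    temporal_drift_rate: float  # magnitude of drift per second (one dimension shown)
    distance_drift_rate: float  # magnitude of drift per meter travelled

    def expected_drift(self, elapsed_s: float, travelled_m: float) -> float:
        """Linear combination of the temporal drift rate and the distance drift rate."""
        return (self.temporal_drift_rate * elapsed_s
                + self.distance_drift_rate * travelled_m)

correction = CorrectionInstance(temporal_drift_rate=0.02, distance_drift_rate=0.005)
# Drift expected after 3 seconds and 25 meters of travel since the correction was generated.
print(correction.expected_drift(elapsed_s=3.0, travelled_m=25.0))
```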
  • a first local pose instance is generated at a first time based on an instance of second sensor data (and optionally further based on a first correction instance).
  • a first global pose instance is generated at or near (e.g., within a threshold amount of time, such as within 100 milliseconds, 300 milliseconds, 500 milliseconds, or other amounts of time) the first time based on an instance of first sensor data (and optionally further based on a prior local pose instance).
  • an additional local pose instance is generated at a second time based on an additional instance of the second sensor data (and optionally further based on a first correction instance).
  • a second global pose instance is generated at or near the second time based on an additional instance of the first sensor data (and optionally further based on a prior local pose instance).
  • the system can determine the first local pose instance temporally corresponds to the first global pose instance based on the pose instances being generated at or near the same time (i.e., the first time).
  • the system can determine the additional local pose instance temporally corresponds to the additional global pose instance based on the additional pose instances being generated at or near the same time (i.e., the second time).
  • the system can determine the drift rate(s) for the correction instance based on at least the local pose instance, the additional local pose instance, the global pose instance, the additional global pose instance, or any combination thereof.
  • the system can compare the local pose instance with the temporally corresponding global pose instance, and the system can also compare the additional local pose instance with the temporally corresponding additional global pose instance. Based on differences determined in these comparisons, the system can determine a magnitude of drift, in one or more dimensions, over a period of time or over a distance. For instance, assume global pose instances indicate that the autonomous vehicle traveled ten meters in one second in a first direction (e.g., the difference between the first time and the third time), but the local pose instances indicate that the autonomous vehicle travelled nine meters in the one second and two degrees off of the first direction. In this example, the determined drift rate(s) would indicate that the local pose instances are being generated with drift, and would account for the one meter and two degrees of error.
  • the system can further identify a plurality of further local pose instances that were generated between the first time and the second time (and optionally further based on a first correction instance).
  • the further local pose instances provide more data that can be used in determining the drift rate(s) even though the further local pose instances may not have temporally corresponding global pose instances.
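  • The temporal correspondence described above might be implemented roughly as below, pairing each global pose instance with the local pose instance generated nearest in time (the timestamp threshold and data layout are assumptions for illustration).
```python
def pair_pose_instances(local_poses, global_poses, max_dt=0.5):
    """Pair each global pose instance with its temporally corresponding local pose instance.

    local_poses, global_poses: lists of (timestamp_s, pose) tuples; local pose
    instances are generated at a much higher frequency than global pose instances.
    Returns (local_pose, global_pose) pairs whose timestamps differ by at most max_dt."""
    pairs = []
    for g_time, g_pose in global_poses:
        # Local pose instance generated nearest in time to this global pose instance.
        l_time, l_pose = min(local_poses, key=lambda lp: abs(lp[0] - g_time))
        if abs(l_time - g_time) <= max_dt:
            pairs.append((l_pose, g_pose))
    return pairs

# Hypothetical data: local pose instances at 100 Hz, global pose instances roughly every second.
local = [(t / 100.0, (t / 100.0, 0.0)) for t in range(300)]
global_ = [(1.0, (1.05, 0.0)), (2.0, (2.1, 0.0))]
print(pair_pose_instances(local, global_))
```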
  • the system stores the correction instance, and transmits the correction instance to a local pose module.
  • the system can store the correction instance in memory of the system or remotely.
  • the system can use the correction instance in generating local pose instance(s).
  • the system may delay transmitting of the correction instance to the local pose module for a predetermined period of time (e.g., one second, five seconds, ten seconds, or other threshold durations of time). By delaying transmitting of the correction instance, the system can verify the instance of the second sensor data or the instance of the third sensor data used in generating the correction instance does not include any outliers, errors, or faulty data.
  • if any outliers, errors, or faulty data are detected, the system can discard the generated correction instance, remove the outliers, errors, or faulty data, and generate an additional correction instance that is not based on outliers, errors, or faulty data.
  • the system determines whether an additional instance of the global pose is received. If, at an iteration of block 662, the system determines that no additional global pose instance is received from the global pose module, then the system continuously monitors, at block 662, for the additional global pose instance. If, at an iteration of block 662, the system determines that an additional global pose instance is received from the global pose module, then the system returns to block 654 and proceeds with the method 600A. At this next iteration of block 654, the system can generate the correction instance based on the additional instance of the global pose.
  • the system can continuously generate the correction instance(s) for each instance of the global pose generated by the system, store the correction instance(s), and transmit each of the correction instance(s) to the local pose module to generate local pose instance(s).
  • a correction instance can be generated by the system for each global pose instance that is generated by the system, such that generating the correction instance may also be based on a frequency at which the global pose instance is generated. In other implementations, the system may extrapolate the global pose from historical global pose instances to generate the correction instances at a faster rate than the frequency at which the global pose instance is generated.
  • the system generates local pose instance(s) of the local pose of the autonomous vehicle based on the correction instance(s) and based on an additional instance of the second sensor data. More particularly, the system can generate the local pose instance(s) using localization subsystem 192 of secondary vehicle control system 170. As one non-limiting example, FIG. 7 shows an example method 700A of how the system generates the local pose instance(s) of the local pose of the autonomous vehicle at block 700 of FIG. 4.
  • the system receives an instance of the second sensor data generated by the second sensor(s) of the autonomous vehicle.
  • the second sensor data can include, for example, IMU data generated by one or more IMUs of the autonomous vehicle (e.g., IMU(s) 140 of primary sensor system 130 or IMU(s) 182 of secondary sensor system 180), wheel encoder data generated by one or more wheel encoders of the autonomous vehicle (e.g., wheel encoder(s) 142 of primary sensor system 130 or wheel encoder(s) 184 of secondary sensor system 180), or both.
  • the instance of the second sensor data can include, for example, a most recent instance of the IMU data generated by one or more of the IMUs and a most recent instance of the wheel encoder data generated by the one or more wheel encoders that includes location data, such as longitude and latitude coordinates.
  • the system generates the local pose instance of the local pose of the autonomous vehicle based on the instance of the second sensor data.
  • the particular frame of reference of the local pose instances can be a local frame of reference.
  • an initial local pose instance can correspond to a certain point in space (e.g., X1, Y1, and Z1).
  • the particular frame of reference of the local pose instances can be a local frame of reference with respect to the tile(s).
  • an initial global pose instance can provide local pose module 292 with an indication of the tile in which vehicle 100 is located, and local pose module 292 can then determine the local pose instances relative to the tile(s).
  • the local pose instance can be generated based on the IMU data or the wheel encoder data included in the instance of the second sensor data received at block 752.
  • the system can generate the local pose instance using a state estimation model that is filter-based (e.g., Kalman filter, extended Kalman filter, dual Kalman filter, or other filter-based techniques) or observer-based (e.g., recursive least squares or other observer-based techniques).
  • the system can apply, as input across the state estimation model, the IMU data or the wheel encoder data included in the instance of the second sensor data received at block 752 to generate output.
  • the output generated across the state estimation model can be the local pose instance, estimated velocities of the autonomous vehicle, estimated accelerations of the autonomous vehicle, or any combination thereof.
  • the local pose instance, the estimated velocities of the autonomous vehicle and the estimated accelerations of the autonomous vehicle can be raw values or other representations.
  • the system does not utilize instances of the first sensor data (e.g., LIDAR data) in generating the local pose instances.
  • the local pose instance seeks to model relative position and orientation information of the autonomous vehicle based on actual movement of the autonomous vehicle (e.g., as indicated by the IMU data and the wheel encoder data).
  • the global pose instance seeks to model actual position and orientation information of the autonomous vehicle within the tile(s) based on matching sensor data to the tile(s) (e.g., using LIDAR data). Further, it should be appreciated that the system can generate multiple local pose instances in parallel for the sake of redundancy or safety. At block 756, the system stores the local pose instance, and transmits the local pose instance to a plurality of subsystems or modules (e.g., via ECEF pose module 256, bootstrap module 258, global pose module 252).
  • as described herein, the local pose instance can be utilized in generating ECEF pose instances (e.g., via ECEF pose module 256), identifying candidate tile(s) (e.g., via bootstrap module 258), generating global pose instances (e.g., via global pose module 252), generating correction instance(s) (e.g., via online calibration module 254), or by other modules.
  • the local pose instance can be utilized by various other subsystems (e.g., planning subsystem 154, control subsystem 158, controlled stop subsystem 194, or other subsystems).
  • the autonomous vehicle can continue to operate using only secondary vehicle control system 170 and local pose instances generated by the system.
  • the system determines whether a correction instance is received from the online calibration module. If, at an iteration of block 758, the system determines that no correction instance is received from the online calibration module, then the system returns to block 752. Notably, upon initialization of the autonomous vehicle (e.g., as discussed above with respect to blocks 552-558 of FIG. 5), there may not be any correction instances to utilize in generating the local pose instance. However, the system can still generate multiple local pose instances without any correction instance upon initialization. Moreover, at an iteration of block 758, if the system determines that a correction instance is received from the online calibration module, then the system proceeds to block 760.
  • the system receives an additional instance of the second sensor data.
  • the second sensor data can include, for example, IMU data generated by one or more IMUs of the autonomous vehicle (e.g., IMU(s) 140 of primary sensor system 130 or IMU(s) 182 of secondary sensor system 180), wheel encoder data generated by one or more wheel encoders of the autonomous vehicle (e.g., wheel encoder(s) 142 of primary sensor system 130 or wheel encoder(s) 184 of secondary sensor system 180), or both.
  • the additional instance of the second sensor data can include, for example, a most recent instance of the IMU data generated by one or more of the IMUs or a most recent instance of the wheel encoder data generated by the one or more wheel encoders that includes location data, such as longitude and latitude coordinates.
  • the additional instance of the second sensor data received at block 760 is distinct from the instance of the second sensor data received at block 752.
  • the system generates an additional local pose instance of the local pose of the autonomous vehicle based on the additional instance of the second sensor data and based on the correction instance.
  • the system can generate the additional local pose instance in the same manner described above with respect to block 754.
  • the additional local pose instance can be modified to include the correction instance received at block 758. For example, if the additional local pose instance, the estimated velocities of the autonomous vehicle, or the estimated accelerations of the autonomous vehicle are raw values, then the system can apply drift rate(s) included in the correction instance received at block 758 to the raw calculations.
  • the system can quickly correct errors in wheel radii or sensor bias(es) in the one or more IMU sensors or the one or more wheel encoder sensors that generated the additional instance of the second sensor data received at block 760.
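  • For example, the raw local pose values could be adjusted with the drift rate(s) from the correction instance roughly as follows (a single-dimension sketch with hypothetical values; subtracting the drift estimate is one plausible convention, not necessarily the patent's).
```python
def corrected_local_pose(raw_position, elapsed_s, travelled_m,
                         temporal_drift_rate, distance_drift_rate):
    """Adjust one dimension of a raw local pose value using the drift rate(s)
    from the most recent correction instance.

    elapsed_s, travelled_m: time and distance since the correction instance was received."""
    drift = temporal_drift_rate * elapsed_s + distance_drift_rate * travelled_m
    return raw_position - drift  # remove the accumulated drift estimate

# A raw X value of 41.0 m, 2 seconds and 18 meters after the correction instance arrived.
print(corrected_local_pose(41.0, 2.0, 18.0,
                           temporal_drift_rate=0.02, distance_drift_rate=0.01))
```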
  • the system determines whether an additional correction instance is received from the online calibration module. If, at an iteration of block 764, the system determines that no additional correction instance is received from the online calibration module, the system returns to block 760. However, at an iteration of block 764, if the system determines that an additional correction instance is received from the online calibration module, the system proceeds to block 768. Thus, the system can continuously generate local pose instances based on further instances of the second sensor data until an additional correction instance is received at block 764.
  • the system combines the correction instance and the additional correction instance.
  • the additional correction instance received at block 766 can be generated based on local pose instances that were generated based on the correction instance received at block 758.
  • the additional correction instance received at block 766 is a correction instance that is generated relative to the correction instance received at block 758, and the system can combine the correction instances to ensure the system takes both into consideration in generating yet further local pose instances.
  • the correction instance can include a temporal drift rate, a distance drift rate, or a linear combination of both.
  • the system can combine the correction instance received at block 758 and the additional correction instance received at block 766 by generating a linear combination of the temporal drift rate of the correction instance, the temporal drift rate of the additional correction instance, the distance drift rate of the correction instance, the distance drift rate of the additional correction instance, or any combination thereof.
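  • One hedged way to combine the two correction instances is a weighted linear combination of their temporal and distance drift rates, as sketched below; the weighting is an illustrative assumption, since the patent does not specify how the linear combination is formed.
```python
def combine_corrections(correction, additional, weight_additional=0.7):
    """Combine a correction instance with an additional correction instance.

    Each correction is a (temporal_drift_rate, distance_drift_rate) tuple. The
    additional correction was generated relative to local pose instances that
    already incorporated the earlier correction, so the two are blended as a
    linear combination rather than simply replaced."""
    w = weight_additional
    temporal = (1.0 - w) * correction[0] + w * additional[0]
    distance = (1.0 - w) * correction[1] + w * additional[1]
    return temporal, distance

print(combine_corrections((0.02, 0.005), (0.03, 0.004)))
```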
  • After combining the correction instance and the additional correction instance, the system then returns to block 760 and generates further local pose instances based on the combined correction instances and based on yet further instances of the second sensor data.
  • while the operations of FIGS. 4-7 are depicted in a particular order, it should be understood that this is for the sake of example and is not meant to be limiting. Moreover, it should be appreciated that the operations of FIGS. 4-7 are generally performed in parallel or redundantly.
  • the system can generate local pose instance(s) (e.g., as described with respect to blocks 752-756) as the autonomous vehicle begins moving, and transmit those local pose instance(s) to, for example, block 554 for use in generating ECEF pose instance, block 560 for use in generating global pose instances, block 658 for use in generating correction instances, and so on.
  • the operations of FIGS. 5 and 6 are described herein as being performed by primary vehicle control system 120, and the operations of FIG. 7 are described herein as being performed by secondary vehicle control system 170. Further, the techniques described herein can utilize this split architecture (which is readily apparent from FIGS. 1, 2A, 2B, and 3) to operate the autonomous vehicle using only local pose instances in normal operation or in scenarios when an adverse event is detected at primary vehicle control system 120.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)

Abstract

Implementations for localization of an autonomous vehicle can generate global pose instance(s) of a global pose of the autonomous vehicle based on an instance of first sensor data, such as LIDAR data, generate correction instance(s) (254A) for a local pose of the autonomous vehicle based on the global pose instances and based on an instance of second sensor data, such as IMU data and wheel encoder data, and generate local pose instance(s) of the local pose based on the correction instance (254A) and based on an additional instance of the second sensor data. Notably, multiple local pose instances may be generated using the same correction instance prior to generating an additional global pose instance or an additional correction instance. Moreover, the second sensor(s) utilized in generating the local pose instances can exclude the first sensor(s) utilized in generating the global pose instances. Localization subsystem (152) of a primary vehicle control system includes at least global pose module (252) and online calibration module (254). Localization subsystem (192) of a secondary vehicle control system includes at least local pose module (292). The correction instance (254A) can include drift rate(s) across multiple local pose instances, such as a linear combination of a temporal drift rate and a distance drift rate. Thus, local pose module (292) can track relative movement of vehicle, and errors in tracking the relative movement can be mitigated by periodically adjusting calculations at local pose module (292) via the correction instance (254A).

Description

LOCALIZATION METHODS AND ARCHITECTURES FOR AN AUTONOMOUS VEHICLE
Background
[0001] As computing and vehicular technologies continue to evolve, autonomy-related features have become more powerful and widely available, and capable of controlling vehicles in a wider variety of circumstances. For automobiles, for example, the automotive industry has generally adopted SAE International standard J3016, which designates 6 levels of autonomy. A vehicle with no autonomy is designated as Level 0, and with Level 1 autonomy, a vehicle controls steering or speed (but not both), leaving the operator to perform most vehicle functions. With Level 2 autonomy, a vehicle is capable of controlling steering, speed and braking in limited circumstances (e.g., while traveling along a highway), but the operator is still required to remain alert and be ready to take over operation at any instant, as well as to handle any maneuvers such as changing lanes or turning. Starting with Level 3 autonomy, a vehicle can manage most operating variables, including monitoring the surrounding environment, but an operator is still required to remain alert and take over whenever a scenario the vehicle is unable to handle is encountered. Level 4 autonomy provides an ability to operate without operator input, but only in specific conditions such as only certain types of roads (e.g., highways) or only certain geographical areas (e.g., specific cities for which adequate mapping data exists). Finally, Level 5 autonomy represents a level of autonomy where a vehicle is capable of operating free of operator control under any circumstances where a human operator could also operate.
[0002] The fundamental challenges of any autonomy-related technology relate to collecting and interpreting information about a vehicle's surrounding environment, along with making and implementing decisions to appropriately control the vehicle given the current environment within which the vehicle is operating. Therefore, continuing efforts are being made to improve each of these aspects, and by doing so, autonomous vehicles increasingly are able to reliably handle a wider variety of situations and accommodate both expected and unexpected conditions within an environment.
Summary
[0003] The present disclosure is directed to particular method(s) or architecture(s) for localization of an autonomous vehicle (i.e., localization of the autonomous vehicle being autonomously controlled). Localization of the autonomous vehicle generally references determining a pose of the autonomous vehicle within its surrounding environment, and generally with respect to a particular frame of reference. Some implementations generate both global pose instances and local pose instances for use in localization of an autonomous vehicle. In some of those implementations, the local pose instances are utilized at least part of the time (e.g., the majority of the time or even exclusively) as the localization that is used in control of the autonomous vehicle.
[0004] Therefore, consistent with one aspect of the invention, a method for localization of an autonomous vehicle is described herein. The method may include, by one or more primary control system processors of a primary control system: generating a global pose instance of a global pose of the autonomous vehicle based on a first sensor data instance of first sensor data that is generated by one or more first sensors of the autonomous vehicle; generating a correction instance based on the global pose instance and based on a determined local pose instance of a local pose of the autonomous vehicle, the local pose instance (a) temporally corresponds to the global pose instance, (b) is determined based on a second sensor data instance of second sensor data that is generated by one or more second sensors of the autonomous vehicle, and (c) is determined without utilization of the first sensor data instance; and transmitting the correction instance to a secondary control system. The method further includes, by one or more secondary control system processors of the secondary control system: receiving the correction instance; and generating an additional local pose instance based on the correction instance and based on an additional second sensor data instance of the second sensor data.
[0005] These and other implementations of technology disclosed herein can optionally include one or more of the following features.
[0006] In some implementations, generating the additional local pose instance may include generating the additional local pose instance based on the local pose instance, that temporally corresponds to the global pose instance, as modified based on at least the additional second sensor data instance.
[0007] In some implementations, the method may further include by one or more of the secondary control system processors of the secondary control system, and immediately subsequent to generating the additional local pose instance but prior to receiving any additional correction instance: generating a further local pose instance based on (a) the additional local pose instance, (b) the correction instance, and (c) a further second sensor data instance, of the second sensor data, the further second sensor data instance generated subsequent to the additional second sensor data instance. In some versions of those implementations, the method may further include, by one or more of the secondary control system processors of the secondary control system, and immediately subsequent to generating the further local pose instance but prior to receiving any additional correction instance: generating a yet further local pose instance based on (a) the further local pose instance, (b) the correction instance, and (c) a yet further second sensor data instance, of the second sensor data, the yet further second sensor data instance generated subsequent to the further second sensor data instance.
[0008] In some implementations, generating the correction instance may be further based on comparing at least one prior local pose instance to at least one prior global pose instance that temporally corresponds to the at least one prior local pose instance. In some versions of those implementations, generating the correction instance may include comparing the local pose instance to the global pose instance that temporally corresponds to the local pose instance, comparing at least one prior local pose instance to at least one prior global pose instance that temporally corresponds to the at least one prior local pose instance; and generating the correction instance based on comparing the local pose instance to the global pose instance and based on comparing the prior local pose instance to the prior global pose instance. In some additional or alternative versions of those implementations, the correction instance may include a drift rate that indicates a first magnitude of drift, in one or more dimensions, over a period of time, a second magnitude of drift, in one or more of the dimensions, over a distance, or both. In some versions of those additional or alternative implementations, the correction instance may be a linear combination of the first magnitude of drift over the period of time, and the second magnitude of drift over the distance. In some additional or alternative implementations, transmitting the correction instance to the secondary control system may include delaying transmission of the correction instance to the secondary control system by a predetermined period of time.
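As a hedged illustration of how a drift rate of this kind might be derived from comparing temporally corresponding local and global pose instances at two points in history, consider the sketch below; the (x, y, heading) pose representation, the function name, and the elapsed time and distance arguments are assumptions made for the example and are not the disclosed algorithm.

```python
def estimate_drift_rates(prior_local, prior_global, local, global_pose,
                         elapsed_seconds, traveled_meters):
    """Estimate temporal and distance drift rates from two local/global comparisons.

    Each pose is an (x, y, heading) tuple. The drift is the change in the
    local-versus-global discrepancy between the prior comparison and the
    current one (heading wrap-around is ignored for brevity).
    """
    prior_error = [l - g for l, g in zip(prior_local, prior_global)]
    current_error = [l - g for l, g in zip(local, global_pose)]
    delta = [c - p for c, p in zip(current_error, prior_error)]
    temporal_drift_rate = tuple(d / elapsed_seconds for d in delta)   # per second
    distance_drift_rate = tuple(d / traveled_meters for d in delta)   # per meter
    return temporal_drift_rate, distance_drift_rate
```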
[0009] In some implementations, the method may further include, by one or more of the primary control system processors of the primary control system, and prior to generating the global pose instance: identifying one or more candidate tiles based on a prior first sensor data instance of the first sensor data, each of the one or more candidate tiles representing a previously mapped portion of a geographical region. In some versions of those implementations, the one or more first sensors may include at least a LIDAR sensor, and the first sensor data instance may include at least an instance of LIDAR data generated by a sensing cycle of the LIDAR sensor of the autonomous vehicle. In some further versions of those implementations, generating the global pose instance for the autonomous vehicle may include assembling the instance of the LIDAR data into one or more point clouds, and aligning, using a geometric matching technique, one or more of the point clouds with a previously stored point cloud associated with the local pose instance that temporally corresponds to the global pose instance to generate the global pose instance. In yet further versions of those implementations, aligning one or more of the point clouds with the given tile using the geometric matching technique to generate the global pose instance may include identifying a given tile associated with the local pose instance that temporally corresponds to the global pose instance, identifying, based on the given tile, the previously stored point cloud associated with the local pose instance, and using the geometric matching technique to align one or more of the point clouds with the previously stored point cloud.
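The following sketch illustrates, under simplifying assumptions, the tile lookup and alignment flow described above: a tile is identified from the temporally corresponding local pose instance, its previously stored point cloud is retrieved, and the newly assembled point cloud is aligned against it. Centroid alignment stands in for a full geometric matching technique (e.g., ICP), and the tile indexing scheme and function names are hypothetical.

```python
import numpy as np

def tile_for_local_pose(local_pose, tile_size_m: float = 1000.0):
    """Hypothetical tile lookup: quantize the local pose position to a tile id."""
    x, y = local_pose[0], local_pose[1]
    return (int(x // tile_size_m), int(y // tile_size_m))

def generate_global_pose(lidar_points: np.ndarray, local_pose, tile_index: dict):
    """Align an assembled LIDAR point cloud with the stored point cloud of a tile.

    tile_index maps a tile id to its previously stored point cloud (an Nx2 array).
    Returns an (x, y) translation as a stand-in for the aligned global pose.
    """
    tile_id = tile_for_local_pose(local_pose)
    stored_cloud = tile_index[tile_id]
    # Stand-in for a geometric matching technique such as ICP: align centroids.
    translation = stored_cloud.mean(axis=0) - lidar_points.mean(axis=0)
    return tuple(translation)
```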
[0010] In some implementations, the one or more second sensors may include at least one IMU sensor and at least one wheel encoder, and the second sensor data may include IMU data generated by the at least one IMU sensor and wheel encoder data generated by the at least one wheel encoder. In some versions of those implementations, the IMU data may be generated at a first frequency, the wheel encoder data may be generated at a second frequency, and the second sensor data instance and the additional second sensor data instance may include a most recently generated instance of the IMU data and a most recently generated instance of the wheel encoder data. [0011] Consistent with another aspect of the invention, a system for localization of an autonomous vehicle is described herein. The system may include a primary control system having at least one first processor and a first memory storing first instructions that, when executed, cause the at least one first processor to: generate a global pose instance of a global pose of the autonomous vehicle based on a first sensor data instance of first sensor data that is generated by one or more first sensors of the autonomous vehicle; generate a correction instance based on the global pose instance and based on a determined local pose instance, of a local pose of the autonomous vehicle, that temporally corresponds to the global pose instance, that is determined based on a second sensor data instance of second sensor data that is generated by one or more second sensors of the autonomous vehicle, and that is determined without utilization of the first sensor data instance; and transmit the correction instance to a secondary control system. The system may further include the secondary control system having at least one second processor and a second memory storing second instructions that, when executed, cause the at least one second processor to: receive the correction instance; and generate an additional local pose instance based on the correction instance and based on an additional second sensor data instance of the second sensor data.
[0012] These and other implementations of technology disclosed herein can optionally include one or more of the following features.
[0013] In some implementations, the instructions to generate the correction instance may further cause the at least one processor to: compare the local pose instance to the global pose instance that temporally corresponds to the local pose instance, compare at least one prior local pose instance to at least one prior global pose instance that temporally corresponds to the at least one prior local pose instance, and generate the correction instance based on comparing the local pose instance to the global pose instance and based on comparing the prior local pose instance to the prior global pose instance. In some versions of those implementations, the correction instance may include a drift rate that indicates a first magnitude of drift, in one or more dimensions, over a period of time, a second magnitude of drift, in one or more of the dimensions, over a distance, or both. In some further versions of those implementations, the correction instance may be a linear combination of the first magnitude of drift over the period of time, and the second magnitude of drift over the distance.
[0014] Consistent with yet another aspect of the invention, a method for localization of an autonomous vehicle is described herein. The method may include, by one or more primary control system processors of a primary control system: generating a global pose instance of a global pose of the autonomous vehicle based on a first sensor data instance of first sensor data that is generated by one or more first sensors of the autonomous vehicle; generating a correction instance based on the global pose instance and based on a second sensor data instance of second sensor data that is generated by one or more second sensors of the autonomous vehicle; and transmitting the correction instance to a secondary control system. The method may further include, by one or more secondary control system processors of the secondary control system: receiving the correction instance; and until an additional correction instance is received, generating a plurality of local pose instances of a local pose of the autonomous vehicle. Generating each of the local pose instances may be based on the correction instance and based on corresponding additional instances of the second sensor data.
[0015] Consistent with yet another aspect of the invention, a method for localization of an autonomous vehicle is described herein. The method may include obtaining first sensor data from one or more first sensors of the autonomous vehicle, generating a global pose of the autonomous vehicle based on the first sensor data, obtaining second sensor data from one or more second sensors of the autonomous vehicle, and generating a local pose of the autonomous vehicle based on the second sensor data. The local pose of the autonomous vehicle may be generated without utilization of the first sensor data. The method may further include determining a correction based on the global pose of the autonomous vehicle and the local pose of the autonomous vehicle, obtaining additional second sensor data from the one or more second sensors of the autonomous vehicle, and generating an additional local pose of the autonomous vehicle based on (i) the correction, and (ii) the additional second sensor data.
[0016] These and other implementations of technology disclosed herein can optionally include one or more of the following features. [0017] In some implementations, generating the additional local pose may include generating the additional local pose based on the local pose instance as modified based on at least the additional second sensor data. The local pose instance may temporally correspond to the global pose instance. [0018] In some implementations, the method may further include, immediately subsequent to generating the additional local pose but prior to receiving any additional correction, generating a further local pose based on: the additional local pose, the correction, and further second sensor data generated subsequent to the additional second sensor data. In some versions of those implementations, the method may further include, immediately subsequent to generating the further local pose but prior to receiving any additional correction, generating a yet further local pose based on: the further local pose, the correction, and yet further second sensor data generated subsequent to the further second sensor data.
[0019] In some implementations, generating the correction is further based on comparing at least one prior local pose to at least one prior global pose that temporally corresponds to the at least one prior local pose. In some versions of those implementations, generating the correction may include comparing the local pose to the global pose that temporally corresponds to the local pose, comparing at least one prior local pose to at least one prior global pose that temporally corresponds to the at least one prior local pose, and generating the correction based on comparing the local pose to the global pose and based on comparing the prior local pose to the prior global pose. In some additional or alternative versions of those implementations, the correction may include a drift rate that indicates one or more of: a first magnitude of drift, in one or more dimensions, over a period of time, or a second magnitude of drift, in one or more of the dimensions, over a distance. In some further versions of those implementations, the correction may be a linear combination of the first magnitude of drift over the period of time, and the second magnitude of drift over the distance. In some additional or alternative versions of those implementations, the global pose and the correction may be generated by a primary control system, and the local pose may be generated by a secondary control system. [0020] In some implementations, the method may further include identifying one or more candidate tiles based on prior first sensor data, each of the one or more candidate tiles representing a previously mapped portion of a geographical region. In some versions of those implementations, the one or more first sensors may include at least a LIDAR sensor, and the first sensor data may include at least LIDAR data generated by a sensing cycle of the LIDAR sensor of the autonomous vehicle. In some further versions of those implementations, generating the global pose for the autonomous vehicle may include assembling the LIDAR data into one or more point clouds, and aligning, using a geometric matching technique, one or more of the point clouds with a previously stored point cloud associated with the local pose that temporally corresponds to the global pose to generate the global pose. In yet further versions of those implementations, aligning one or more of the point clouds with the given tile using the geometric matching technique to generate the global pose may include identifying a given tile associated with the local pose that temporally corresponds to the global pose, identifying, based on the given tile, the previously stored point cloud associated with the local pose, and using the geometric matching technique to align one or more of the point clouds with the previously stored point cloud.
[0021] In some implementations, the one or more second sensors may include at least one IMU sensor and at least one wheel encoder, and the second sensor data may include IMU data generated by the at least one IMU sensor and wheel encoder data generated by the at least one wheel encoder. In some versions of those implementations, the IMU data may be generated at a first frequency, the wheel encoder data may be generated at a second frequency, and the second sensor data and the additional second sensor data may include a most recently generated instance of the IMU data and a most recently generated instance of the wheel encoder data.
[0022] It should be appreciated that all combinations of the foregoing concepts and additional concepts described in greater detail herein are contemplated as being part of the subject matter disclosed herein. For example, all combinations of claimed subject matter appearing at the end of this disclosure are contemplated as being part of the subject matter disclosed herein.
Brief Description of the Drawings
[0023] FIG. 1 illustrates an example hardware and software environment for an autonomous vehicle, in accordance with various implementations.
[0024] FIGS. 2A and 2B are block diagrams illustrating example implementations of the localization subsystems referenced in FIG. 1, in accordance with various implementations.
[0025] FIG. 3 is a process flow illustrating an example implementation of the localization subsystems referenced in FIGS. 2A and 2B, in accordance with various implementations.
[0026] FIG. 4 is a flowchart illustrating an example method for localization of an autonomous vehicle, in accordance with various implementations.
[0027] FIG. 5 is a flowchart illustrating an example method of generating global pose instances of a global pose of the autonomous vehicle in localization of the autonomous vehicle of FIG. 4, in accordance with various implementations.
[0028] FIG. 6 is a flowchart illustrating an example method of generating correction instances in localization of the autonomous vehicle of FIG. 4, in accordance with various implementations.
[0029] FIG. 7 is a flowchart illustrating an example method of generating local pose instances of a local pose of the autonomous vehicle in localization of the autonomous vehicle of FIG. 4, in accordance with various implementations.
Detailed Description
[0030] In various implementations, localization of an autonomous vehicle includes generating both global pose instances and local pose instances for use in localization of the autonomous vehicle. In some of those implementations, the local pose instances are utilized at least part of the time (e.g., the majority of the time or even exclusively) as the localization that is used in control of the autonomous vehicle.
[0031] A global pose instance can be generated based on an instance of first sensor data generated by first sensor(s) of the autonomous vehicle, and can indicate a position and orientation of the autonomous vehicle with respect to a frame of reference (e.g., tile(s)). A local pose instance can likewise indicate a position and orientation of the autonomous vehicle with respect to a frame of reference, but can be generated based on an instance of second sensor data and without utilization of the instance of first sensor data used in generating the global pose instance. The frame of reference for the local pose instance can be the same frame of reference as the global pose instance (e.g., tile(s)) or a distinct frame of reference (e.g., a local frame of reference). Put another way, first sensor data generated by first sensor(s) will not be directly utilized in generating one or more instances of a local pose. However, the first sensor data will be directly utilized in generating one or more instances of a global pose. Further, second sensor data generated by second sensors will be directly utilized in generating one or more instances of a local pose, and can optionally also be directly utilized in generating one or more instances of a global pose.
[0032] In various implementations, in generating a local pose instance, a global pose instance or a correction instance (determined based on global pose instance(s) and local pose instance(s)) can be utilized. In some of those implementations, multiple local pose instances are generated based on a single global pose instance or a single correction instance. For example, a correction instance can be generated based on comparing historical global pose instances to temporally corresponding historical local pose instances. For instance, the correction instance can indicate one or more rates of divergence between local and global instances (e.g., drift rate(s)). Continuing with the example, the correction instance can be utilized in generating two or more local pose instances. For example, each local pose instance can be a function of a preceding local pose instance, the correction instance, and most recently received second sensor data from the second sensors. In these and other manners, multiple local pose instances can be generated using second sensor data and without direct utilization of the first sensor data. However, the multiple local pose instances can be indirectly influenced by the first sensor data, through utilization of a global pose instance or correction instance that is generated based on the first sensor data. As described herein, the local pose instances can be generated more frequently than global pose instances and using less computational resources than global pose instances, based at least in part on not being generated based directly on the first sensor data. Through utilization of a correction instance or global pose instance in generating the local pose instances, multiple local pose instances may be generated using the same correction instance prior to generating an additional global pose instance or an additional correction instance. Moreover, the second sensor(s) utilized in generating the local pose instances can exclude the first sensor(s) utilized in generating the global pose instances. [0033] As used herein, the term "tile" refers to a previously mapped portion of a geographical area. A plurality of tiles can be stored in memory of various systems described herein, and the plurality of tiles can be used to represent a geographical region. For example, a given geographical region, such as a city, can be divided into a plurality of tiles (e.g., each square mile of the city, each square kilometer of the city, or other dimensions), and each of the tiles can represent a portion of the geographical region. Further, each of the tiles can be stored in database(s) that are accessible by various systems described herein, and the tiles can be indexed in the database(s) by their respective locations within the geographical region. Moreover, each of the tiles can include, for example, information contained within each of the tiles, such as intersection information, traffic light information, landmark information, street information, or other information for the geographical area represented by each of the tiles. The information contained within each of the tiles can be utilized to identify a matching tile.
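To make the reuse of a single correction instance concrete, the sketch below generates several local pose instances from successive second sensor data instances while applying the same correction on each update; the additive motion model, the per-update correction values, and the function name are illustrative assumptions rather than the disclosed method.

```python
def next_local_pose(previous_pose, per_update_correction, sensor_delta):
    """One local pose update: apply the odometry delta, then remove estimated drift.

    previous_pose and sensor_delta are (x, y, heading) tuples; per_update_correction
    is the drift expected over a single update, derived from the correction instance.
    """
    return tuple(p + d - c for p, d, c in
                 zip(previous_pose, sensor_delta, per_update_correction))

# The same correction instance is reused across multiple local pose instances
# until an additional correction instance is received.
pose = (0.0, 0.0, 0.0)
per_update_correction = (0.001, -0.0005, 0.0)   # assumed values
for sensor_delta in [(0.50, 0.00, 0.01), (0.50, 0.01, 0.00), (0.49, 0.00, 0.00)]:
    pose = next_local_pose(pose, per_update_correction, sensor_delta)
```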
[0034] As used herein, the term "pose" refers to location information and orientation information of an autonomous vehicle within its surroundings, and generally with respect to a particular frame of reference. The pose can be an n-dimensional representation of the autonomous vehicle with respect to the particular frame of reference, such as any 2D, 3D, 4D, 5D, 6D, or any other dimensional representation. The frame of reference can be, for example, the aforementioned tile(s), an absolute coordinate system (e.g., longitude and latitude coordinates), a relative coordinate system (or a local frame of reference), or other frame(s) of reference. Moreover, various types of poses are described herein, and different types of poses can be defined with respect to different frame(s) of reference.
[0035] For example, a "global pose" of the autonomous vehicle can refer to location information and orientation information of the autonomous vehicle with respect to tile(s), and can be generated based on at least an instance of first sensor data generated by first sensor(s) of an autonomous vehicle. Further, a "local pose" of the autonomous vehicle can refer to location information and orientation information of the autonomous vehicle with respect to a local frame of reference, and can be generated based on at least an instance of second sensor data generated by second sensor(s) of an autonomous vehicle that exclude the first sensor(s) utilized in generating the global pose. As another example, an Earth Centered Earth Fixed pose ("ECEF pose") can refer to location information and orientation information of the autonomous vehicle with respect to longitude and latitude coordinates, and can be generated based on at least an instance of third sensor data generated by third sensor(s) of an autonomous vehicle.
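Purely for illustration, the pose types defined above could be represented as a single structure parameterized by its frame of reference, as in the minimal sketch below; the field names and example values are assumptions and do not reflect the actual representation used by the described systems.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Pose:
    """Location and orientation information with respect to a named frame of reference."""
    frame: str                                     # e.g., "tile:(12,7)", "local", "ECEF"
    position: Tuple[float, float, float]           # x, y, z in the frame
    orientation_rpy: Tuple[float, float, float]    # roll, pitch, yaw

global_pose = Pose("tile:(12,7)", (153.2, -40.7, 0.3), (0.0, 0.0, 1.57))
local_pose = Pose("local", (3.1, 0.2, 0.0), (0.0, 0.0, 0.02))
ecef_pose = Pose("ECEF", (-2694045.0, -4293642.0, 3857878.0), (0.0, 0.0, 0.0))
```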
[0036] As used herein, the phrase "instance of sensor data" or the phrase "sensor data instance" can refer to sensor data, for a corresponding instance in time, and for one or more sensors of an autonomous vehicle. Although the sensor data instance is for a corresponding instance in time, it's not necessarily the case that all sensor data of the instance was actually generated by the sensors at the same time. For example, an instance of LIDAR data generated by LIDAR sensor(s) of the autonomous vehicle may include LIDAR data from a sensing cycle of the LIDAR sensor(s) that is generated at a first frequency, an instance of IMU data generated by IMU sensor(s) of the autonomous vehicle may include accelerometer readings and gyroscopic readings from the IMU sensor(s) that are generated at a second frequency, and an instance of wheel encoder data generated by wheel encoder(s) of the autonomous vehicle may include a quantity of accumulated ticks of revolutions of wheel(s) of the autonomous vehicle that are generated at a third frequency. Notably, the first frequency, the second frequency, and the third frequency may be distinct frequencies. Nonetheless, they can all be included in a sensor data instance based on, for example, being most recently generated relative to the instance in time. In some implementations, the phrase "instance of sensor data" or the phrase "sensor data instance" can also refer to sensor data, for a corresponding instance in time that has been processed by one or more components. For example, one or more filtering components (e.g., a Kalman filter) can be utilized to process some or all of the sensor data, and the outputs from the filtering components can still be considered an "instance of sensor data" or a "sensor data instance".
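A minimal sketch of how a sensor data instance might be assembled from the most recently generated readings of sensors running at distinct frequencies is shown below; the builder class, the sensor names, and the example rates are assumptions introduced only for illustration.

```python
import time

class SensorDataInstanceBuilder:
    """Keeps the latest reading per sensor and snapshots them as an instance in time."""

    def __init__(self):
        self.latest = {}   # sensor name -> (timestamp, reading)

    def update(self, sensor_name, reading, timestamp=None):
        """Record the most recently generated reading for a sensor."""
        stamp = timestamp if timestamp is not None else time.time()
        self.latest[sensor_name] = (stamp, reading)

    def instance(self, instance_time=None):
        """Return the most recently generated reading of each sensor for an instance in time."""
        snapshot = {name: reading for name, (_, reading) in self.latest.items()}
        snapshot["time"] = instance_time if instance_time is not None else time.time()
        return snapshot

builder = SensorDataInstanceBuilder()
builder.update("imu", {"accel": (0.0, 0.1, 9.8), "gyro": (0.0, 0.0, 0.01)})  # e.g., generated at 100 Hz
builder.update("wheel_encoder", {"ticks": 10452})                            # e.g., generated at 50 Hz
second_sensor_data_instance = builder.instance()
```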
[0037] Prior to further discussion of these and other implementations, however, an example hardware and software environment within which the various techniques disclosed herein may be implemented will be discussed.
[0038] Turning to the drawings, wherein like numbers denote like parts throughout the several views, FIG. 1 illustrates an example autonomous vehicle 100 within which the various techniques disclosed herein may be implemented. Vehicle 100, for example, is shown driving on a road 101, and vehicle 100 may include a powertrain 102 including a prime mover 104 powered by an energy source 106 and capable of providing power to a drivetrain 108, as well as a control system 110 including a direction control 112, a powertrain control 114, and a brake control 116. Vehicle 100 may be implemented as any number of different types of vehicles, including vehicles capable of transporting people or cargo, and capable of traveling by land, by sea, by air, underground, undersea or in space, and it will be appreciated that the aforementioned components 102-116 can vary widely based upon the type of vehicle within which these components are utilized.
[0039] The implementations discussed hereinafter, for example, will focus on a wheeled land vehicle such as a car, van, truck, bus, etc. In such implementations, the prime mover 104 may include one or more electric motors or an internal combustion engine (among others), while energy source 106 may include a fuel system (e.g., providing gasoline, diesel, hydrogen, etc.), a battery system, solar panels or other renewable energy source, a fuel cell system, etc., and drivetrain 108 may include wheels or tires along with a transmission or any other mechanical drive components suitable for converting the output of prime mover 104 into vehicular motion, as well as one or more brakes configured to controllably stop or slow the vehicle and direction or steering components suitable for controlling the trajectory of the vehicle (e.g., a rack and pinion steering linkage enabling one or more wheels of vehicle 100 to pivot about a generally vertical axis to vary an angle of the rotational planes of the wheels relative to the longitudinal axis of the vehicle). In various implementations, different combinations of powertrains 102 and energy sources 106 may be used. In the case of electric/gas hybrid vehicle implementations, one or more electric motors (e.g., dedicated to individual wheels or axles) may be used as a prime mover 104. In the case of a hydrogen fuel cell implementation, the prime mover 104 may include one or more electric motors and the energy source 106 may include a fuel cell system powered by hydrogen fuel.
[0040] Direction control 112 may include one or more actuators or sensors for controlling and receiving feedback from the direction or steering components to enable the vehicle to follow a desired trajectory. Powertrain control 114 may be configured to control the output of powertrain 102, e.g., to control the output power of prime mover 104, to control a gear of a transmission in drivetrain 108, etc., thereby controlling a speed or direction of the vehicle. Brake control 116 may be configured to control one or more brakes that slow or stop vehicle 100, e.g., disk or drum brakes coupled to the wheels of the vehicle. [0041] Other vehicle types, including but not limited to airplanes, space vehicles, helicopters, drones, military vehicles, all-terrain or tracked vehicles, ships, submarines, construction equipment, etc., will necessarily utilize different powertrains, drivetrains, energy sources, direction controls, powertrain controls and brake controls, as will be appreciated by those of ordinary skill having the benefit of the instant disclosure. Moreover, in some implementations various components may be combined, e.g., where directional control of a vehicle is primarily handled by varying an output of one or more prime movers. Therefore, the invention is not limited to the particular application of the herein-described techniques in an autonomous wheeled land vehicle.
[0042] In the illustrated implementation, autonomous control over vehicle 100 (that may include various degrees of autonomy as well as selectively autonomous functionality) is primarily implemented in a primary vehicle control system 120, that may include processor(s) 122 and one or more memories 124, with processor(s) 122 configured to execute program code instruction(s) 126 stored in memory 124.
[0043] A primary sensor system 130 may include various sensors suitable for collecting information from a vehicle's surrounding environment for use in controlling the operation of the vehicle. For example, a satellite navigation (SATNAV) sensor 132, e.g., compatible with any of various satellite navigation systems such as GPS, GLONASS, Galileo, Compass, etc., may be used to determine the location of the vehicle on the Earth using satellite signals. A Radio Detection and Ranging (RADAR) sensor 134 and a Light Detection and Ranging (LIDAR) sensor 136, as well as digital camera(s) 138 (that may include various types of image capture devices capable of capturing still or video imagery), may be used to sense stationary and moving objects within the immediate vicinity of a vehicle. Inertial measurement unit(s) (IMU(s)) 140 may include multiple gyroscopes and accelerometers capable of detecting linear and rotational motion of vehicle 100 in three directions, while wheel encoder(s) 142 may be used to monitor the rotation of one or more wheels of vehicle 100.
[0044] The outputs of sensors 132-142 may be provided to a set of primary control subsystems 150, including a localization subsystem 152, a planning subsystem 154, a perception subsystem 156, a control subsystem 158, and a mapping subsystem 160. Localization subsystem 152 determines a "pose" of vehicle 100. In some implementations, the pose can include location information and orientation information of vehicle 100. In other implementations, the pose can additionally or alternatively include velocity information or acceleration information of the vehicle. In some implementations, localization subsystem 152 generates a "global pose" of vehicle 100 within its surrounding environment, and with respect to a particular frame of reference. As discussed in greater detail herein, localization subsystem 152 can generate a global pose of vehicle 100 based on matching sensor data output by one or more of sensors 132-142 to a previously mapped portion of a geographical area (also referred to herein as a "tile").
[0045] Planning subsystem 154 plans a path of motion for vehicle 100 over a timeframe given a desired destination as well as the static and moving objects within the environment, while perception subsystem 156 detects, tracks or identifies elements within the environment surrounding vehicle 100. Control subsystem 158 generates suitable control signals for controlling the various controls in control system 110 in order to implement the planned path of the vehicle. Mapping subsystem 160 may be provided in the illustrated implementations to describe the elements within an environment and the relationships therebetween, and may be accessed by the localization, planning and perception subsystems 152-156 to obtain various information about the environment for use in performing their respective functions.
[0046] Vehicle 100 also includes a secondary vehicle control system 170, that may include one or more processors 172 and one or more memories 174 capable of storing program code instruction(s) 176 for execution by processor(s) 172. In some implementations, secondary vehicle control system 170 may be used in conjunction with primary vehicle control system 120 in normal operation of vehicle 100. In some additional or alternative implementations, secondary vehicle control system 170 may be used as a redundant or backup control system for vehicle 100, and may be used, among other purposes, to continue planning and navigation, to perform controlled stops in response to adverse events detected in primary vehicle control system 120, or both. Adverse events can include, for example, a detected hardware failure in vehicle control systems 120, 170, a detected software failure in vehicle control systems 120, 170, a detected failure of sensor systems 130, 180, other adverse events, or any combination thereof. [0047] Secondary vehicle control system 170 may also include a secondary sensor system 180 including various sensors used by secondary vehicle control system 170 to sense the conditions or surroundings of vehicle 100. For example, IMU(s) 182 may be used to generate linear and rotational motion information about the vehicle, while wheel encoder(s) 184 may be used to sense the velocity of each wheel. One or more of IMU(s) 182 and wheel encoder(s) 184 of secondary sensor system 180 may be the same as or distinct from one or more of IMU(s) 140 and wheel encoder(s) 142 of the primary sensor system 130.
[0048] Further, secondary vehicle control system 170 may also include secondary control subsystems 190, including at least localization subsystem 192 and controlled stop subsystem 194. Localization subsystem 192 generates a "local pose" of vehicle 100 relative to a previous local pose of vehicle 100. As discussed in greater detail herein, localization subsystem 192 can generate the local pose of vehicle 100 by processing sensor data output by one or more of sensors 182-184. Controlled stop subsystem 194 is used to implement a controlled stop for vehicle 100 upon detection of an adverse event. Other sensors and subsystems that may be utilized in secondary vehicle control system 170, as well as other variations capable of being implemented in other implementations, will be discussed in greater detail below.
[0049] Notably, localization subsystem 152, that is responsible for generating a global pose of vehicle 100 (e.g., implemented by processor(s) 122), and localization subsystem 192, that is responsible for generating a local pose of vehicle 100 (e.g., implemented by processor(s) 172), are depicted as being implemented by separate hardware components. As discussed in greater detail below, localization subsystem 192 can generate instances of local pose of vehicle 100 at a faster rate than localization subsystem 152 can generate instances of global pose of vehicle 100. As a result, multiple instances of a local pose of vehicle 100 can be generated in the same amount of time as a single instance of global pose of vehicle 100.
[0050] Although FIG. 1 is depicted herein as having secondary vehicle control system 170 that includes subsystems and sensors that are distinct from primary vehicle control system 120, it should be understood that this is for the sake of example, and is not meant to be limiting. In various implementations, secondary vehicle control system 170 can have the same or substantially similar configuration as primary vehicle control system 120. Accordingly, the respective localization subsystems 152, 192 of the control systems of FIG. 1 can generate both global pose instances of the global pose of vehicle 100 and local pose instances of the local pose of vehicle 100.
[0051] In general, it should be understood that an innumerable number of different architectures, including various combinations of software, hardware, circuit logic, sensors, networks, etc. may be used to implement the various components illustrated in FIG. 1. The processor(s) 122, 172 may be implemented, for example, as a microprocessor, and memory 124, 174 may represent the random access memory (RAM) devices comprising a main storage, as well as any supplemental levels of memory, e.g., cache memories, non-volatile or backup memories (e.g., programmable or flash memories), read-only memories, etc. In addition, memory 124, 174 may be considered to include memory storage physically located elsewhere in vehicle 100 (e.g., any cache memory in processor(s) 122, 172), as well as any storage capacity used as a virtual memory (e.g., as stored on a mass storage device or on another computer or controller). Processor(s) 122, 172 illustrated in FIG. 1, or entirely separate processors, may be used to implement additional functionality in vehicle 100 outside of the purposes of autonomous control (e.g., to control entertainment systems, to operate doors, lights, convenience features, and so on).
[0052] In addition, for additional storage, vehicle 100 may also include one or more mass storage devices, e.g., a floppy or other removable disk drive, a hard disk drive, a direct access storage device (DASD), an optical drive (e.g., a CD drive, a DVD drive, etc.), a solid state storage drive (SSD), network attached storage, a storage area network, or a tape drive, among others. Furthermore, vehicle 100 may include a user interface 199 to enable vehicle 100 to receive a number of inputs from and generate outputs for a user or operator (e.g., using one or more displays, touchscreens, voice interfaces, gesture interfaces, buttons and other tactile controls, or other input/output devices). Otherwise, user input may be received via another computer or electronic device (e.g., via an app on a mobile device) or via a web interface (e.g., from a remote operator).
[0053] Moreover, vehicle 100 may include one or more network interfaces 198 suitable for communicating with one or more networks (e.g., a LAN, a WAN, a wired network, a wireless network, or the Internet, among others) to permit the communication of information between various components of vehicle 100 (e.g., between powertrain 102, control system 110, primary vehicle control system 120, secondary vehicle control system 170, or other systems or components), with other vehicles, computers or electronic devices, including, for example, a central service, such as a cloud service, from which vehicle 100 receives environmental and other data for use in autonomous control thereof. For example, vehicle 100 may be in communication with a cloud-based remote vehicle system including a mapping system and a log collection system.
[0054] The processor(s) 122, 172 illustrated in FIG. 1, as well as various additional controllers and subsystems disclosed herein, generally operates under the control of an operating system and executes or otherwise relies upon various computer software applications, components, programs, objects, modules, data structures, etc., as will be described in greater detail below. Moreover, various applications, components, programs, objects, modules, etc. may also execute on one or more processors in another computer coupled to vehicle 100 via network, e.g., in a distributed, cloud-based, or client-server computing environment, whereby the processing required to implement the functions of a computer program may be allocated to multiple computers or services over a network. Further, in some implementations data recorded or collected by a vehicle may be manually retrieved and uploaded to another computer or service for analysis.
[0055] In general, the routines executed to implement the various implementations described herein, whether implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions, or even a subset thereof, will be referred to herein as "program code." Program code typically comprises one or more instructions that are resident at various times in various memory and storage devices, and that, when read and executed by one or more processors, perform the steps necessary to execute steps or elements embodying the various aspects of the invention. Moreover, while the invention has and hereinafter will be described in the context of fully functioning computers and systems, it will be appreciated that the various implementations described herein are capable of being distributed as a program product in a variety of forms, and that the invention applies equally regardless of the particular type of computer readable media used to actually carry out the distribution. Examples of computer readable media include tangible, non-transitory media such as volatile and non-volatile memory devices, floppy and other removable disks, solid state drives, hard disk drives, magnetic tape, and optical disks (e.g., CD-ROMs, DVDs, etc.), among others.
[0056] In addition, various program code described hereinafter may be identified based upon the application within which it is implemented in a specific implementation. However, it should be appreciated that any particular program nomenclature that follows is used merely for convenience, and thus the invention should not be limited to use solely in any specific application identified or implied by such nomenclature. Furthermore, given the typically endless number of manners in which computer programs may be organized into routines, procedures, methods, modules, objects, and the like, as well as the various manners in which program functionality may be allocated among various software layers that are resident within a typical computer (e.g., operating systems, libraries, API's, applications, applets, etc.), it should be appreciated that the invention is not limited to the specific organization and allocation of program functionality described herein.
[0057] It will be appreciated that the collection of components illustrated in FIG. 1 for primary vehicle control system 120 and secondary vehicle control system 170 are merely for the sake of example. Individual sensors may be omitted in some implementations, multiple sensors of the types illustrated in FIG. 1 may be used for redundancy or to cover different regions around a vehicle, and other types of sensors may be used. Likewise, different types or combinations of control subsystems may be used in other implementations. Further, while subsystems 152-160, 192-194 are illustrated as being separate from processors 122, 172 and memory 124, 174, respectively, it will be appreciated that in some implementations, the functionality of subsystems 152-160, 192-194 may be implemented with corresponding program code instruction(s) 126, 176 resident in one or more memories 124, 174 and executed by processor(s) 122, 172, and that these subsystems 152-160, 192-194 may in some instances be implemented using the same processors and memory.
Subsystems 152-160, 192-194 in some implementations may be implemented, at least in part, using various dedicated circuit logic, various processors, various field-programmable gate arrays ("FPGA"), various application-specific integrated circuits ("ASIC"), various real time controllers, and the like, and as noted above, multiple subsystems may utilize common circuitry, processors, sensors, or other components. Further, the various components in primary vehicle control system 120 and secondary vehicle control system 170 may be networked in various manners.
[0058] Those skilled in the art will recognize that the exemplary environment illustrated in FIG. 1 is not intended to limit the present invention. Indeed, those skilled in the art will recognize that other alternative hardware or software environments may be used without departing from the scope of the invention.
[0059] Turning now to FIGS. 2A and 2B, block diagrams illustrating example implementations of the localization subsystems referenced in FIG. 1 are depicted. As shown in FIG. 2A, localization subsystem 152 of primary vehicle control system 120 includes at least global pose module 252 and online calibration module 254. Further, localization subsystem 192 of secondary vehicle control system 170 includes at least local pose module 292. Data generated by localization subsystems 152, 192 can be transmitted between localization subsystems 152, 192 (or the modules included therein) and other subsystems described herein. The data can include, for example, global pose instance(s) of a global pose of vehicle 100, local pose instance(s) of a local pose of vehicle 100, correction instance(s), or any combination thereof. Further, data generated by sensors (e.g., primary sensor system 130, secondary sensor system 180, or both) of vehicle 100 can be received by localization subsystems 152, 192.
[0060] Global pose module 252 can generate global pose instances of a global pose of vehicle 100. The global pose of vehicle 100 represents a pose of vehicle 100 with respect to a reference frame (e.g., tile(s)), and the global pose instance represents position and location information at a given time instance. As shown in FIG. 2B, global pose module 252 can receive instances of the first sensor data 130A, and the global pose instances can be generated based at least in part on instances of first sensor data 130A. The first sensor data 130A can include, for example, LIDAR data generated by the LIDAR sensor 136 of primary sensor system 130. In some additional or alternative implementations, global pose module 252 further generates the global pose instances based on the local pose instances. In some additional or alternative implementations, global pose module 252 further generates the global pose instances based on instances of second sensor data generated by second sensor(s) of vehicle 100 (e.g., IMU(s) 140, 182, wheel encoder(s) 142, 184, other sensors, or any combination thereof). Generating of the global pose instances is described in more detail below (e.g., with respect to FIGS. 3, block 500 of FIG. 4, and method 500A of FIG. 5).
Further, global pose module 252 can transmit generated global pose instances to online calibration module 254.
[0061] In some implementations, LIDAR sensor 136 can have a sensing cycle. For example, the LIDAR sensor 136 can scan a certain area during a particular sensing cycle to detect an object or an environment in the area. In some versions of those implementations, a given instance of the LIDAR data can include the LIDAR data from a given sensing cycle of LIDAR sensor 136. In other words, a given LIDAR data instance corresponds to, for example, a given sweep of the LIDAR sensor 136 generated during the sensing cycle of the LIDAR sensor 136. The LIDAR data generated during the sensing cycle of LIDAR sensor 136 can include, for example, a plurality of points reflected off of a surface of an object in an environment of vehicle 100, and detected by at least one receiver component of the LIDAR component as data points. During a given sensing cycle, the LIDAR sensor 136 detects a plurality of data points in an area of the environment of vehicle 100, and corresponding reflections are detected. One or more of the data points may also be captured in subsequent sensing cycles. Accordingly, the range and velocity for a point that is indicated by the LIDAR data of a sweep of LIDAR sensor 136 can be based on multiple sensing cycle events by referencing prior (and optionally subsequent) sensing cycle events. In some versions of those implementations, multiple (e.g., all) sensing cycles can have the same duration, the same field-of-view, or the same pattern of waveform distribution (through directing of the waveform during the sensing cycle). For example, multiple sweeps can have the same duration (e.g., 50 milliseconds, 100 milliseconds, 300 milliseconds, or other durations) and the same field-of-view (e.g., 60°, 90°, 180°, 360°, or other fields-of-view).
[0062] In some implementations, LIDAR sensor 136 can include a phase coherent LIDAR component during a sensing cycle. In some versions of those implementations, the instances of the first sensor data 130A can include LIDAR data from a sensing cycle of LIDAR sensor 136. The LIDAR data from the sensing cycle of LIDAR sensor 136 can include, for example, a transmitted encoded waveform that is sequentially directed to, and sequentially reflects off of, a plurality of points in an environment of vehicle 100 - and reflected portions of the encoded waveform are detected, in a corresponding sensing event of the sensing cycle, by the at least one receiver of the phase coherent LIDAR component as data points. During a sensing cycle, the waveform is directed to a plurality of points in an area of the environment of vehicle 100, and corresponding reflections detected, without the waveform being redirected to those points in the sensing cycle. Accordingly, the range and velocity for a point that is indicated by the LIDAR data of a sensing cycle can be instantaneous in that it is based on a single sensing event without reference to a prior or subsequent sensing event. In some versions of those implementations, multiple (e.g., all) sensing cycles can have the same duration, the same field-of-view, or the same pattern of waveform distribution (through directing of the waveform during the sensing cycle). For example, each of multiple sensing cycles that are each a sweep can have the same duration, the same field-of-view, and the same pattern of waveform distribution. However, in many other implementations the duration, field-of-view, or waveform distribution pattern can vary amongst one or more sensing cycles. For example, a first sensing cycle can be of a first duration, have a first field-of-view, and a first waveform distribution pattern; and a second sensing cycle can be of a second duration that is shorter than the first, have a second field-of-view that is a subset of the first field-of-view, and have a second waveform distribution pattern that is denser than the first.
[0063] Online calibration module 254 can generate a correction instance 254A based at least in part on the global pose instance transmitted to online calibration module 254 from global pose module 252. In some implementations, the correction instance 254A can include drift rate(s) across multiple local pose instances. The drift rate(s) can indicate a first magnitude of drift, in one or more dimensions (e.g., X-dimension, Y-dimension, Z-dimension, roll-dimension, pitch-dimension, yaw-dimension, or other dimensions), over a period of time, a second magnitude of drift, in one or more of the dimensions, over a distance, or both. Put another way, the drift rate(s) can include, for example, a temporal drift rate, a distance drift rate, or both. The temporal drift rate can represent a magnitude of drift, in one or more dimensions, over the period of time spanned in generating the multiple local pose instances. Further, the distance drift rate can represent a magnitude of drift, in one or more dimensions, over a distance travelled in generating the multiple local pose instances. In some versions of those implementations, the correction instance 254A can include a linear combination of the temporal drift rate and the distance drift rate. [0064] In some implementations, the correction instance can be further generated based on instances of the second sensor data 180A, instances of the third sensor data 130B, or both. As shown in FIG. 2B, online calibration module 254 can receive second sensor data 180A. The second sensor data 180A can include, for example, IMU data generated by one or more of IMU(s) 140, wheel encoder data generated by wheel encoder(s) 142, IMU data generated by one or more of IMU(s) 182, wheel encoder data generated by wheel encoder(s) 184, or any combination thereof. The instances of the second sensor data 180A can include instances of the IMU data, the wheel encoder data, or both. For example, the instances of the IMU data can be the most recently generated instances of the IMU data, and the instances of the wheel encoder data can be the most recently generated instances of the wheel encoder data. Further, the third sensor data 130B can include, for example, SATNAV data generated by the SATNAV sensor 132 of primary sensor system 130. The instances of the third sensor data 130B can include most recently generated instances of the SATNAV data. Generating of the correction instance 254A is described in more detail below (e.g., with respect to FIGS. 3, block 600 of FIG. 4, and method 600A of FIG. 6). Further, online calibration module 254 can transmit the generated correction instance 254A to local pose module 292. In some additional or alternative implementations, the correction instance 254A can include, or be limited to, global pose instances generated by global pose module 252.
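As one hedged illustration of how a correction instance containing both drift rates might be applied at local pose module 292, the sketch below evaluates a linear combination of the temporal drift rate (scaled by elapsed time) and the distance drift rate (scaled by distance traveled); the equal weights, the per-dimension tuples, and the function name are assumptions for the example only.

```python
def correction_offset(temporal_drift_rate, distance_drift_rate,
                      elapsed_seconds, traveled_meters,
                      temporal_weight=0.5, distance_weight=0.5):
    """Linear combination of the temporal and distance drift rates.

    Each drift rate is a per-dimension tuple (e.g., x, y, yaw). The result is the
    drift estimated to have accumulated over the elapsed time and traveled distance,
    which could then be subtracted from a local pose instance.
    """
    return tuple(
        temporal_weight * t * elapsed_seconds + distance_weight * d * traveled_meters
        for t, d in zip(temporal_drift_rate, distance_drift_rate)
    )

# Example: 2 seconds and 30 meters since the correction instance was generated.
offset = correction_offset((0.01, 0.0, 0.001), (0.002, 0.0, 0.0), 2.0, 30.0)
```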
[0065] Local pose module 292 can generate local pose instances of a local pose of vehicle 100. In some implementations, the local pose of vehicle 100 represents a pose of vehicle 100 with respect to a particular frame of reference. In some implementations, the particular frame of reference of the local pose instances can be a local frame of reference. For example, an initial local pose instance can correspond to a certain point in space (e.g., X1, Y1, and Z1). In this example, subsequent local pose instances can be determined with respect to this point in space. For instance, a first subsequent local pose instance can correspond to X1+X', Y1+Y', and Z1+Z', where X', Y', and Z' correspond to a positional difference of vehicle 100 between a first time when the initial local pose instance was determined and a second time when the first subsequent local pose instance was determined. Further, a second subsequent local pose instance can correspond to X1+X'+X'', Y1+Y'+Y'', and Z1+Z'+Z'', and so on for further subsequent local pose instances. In some additional or alternative implementations, the particular frame of reference of the local pose instances can be a local frame of reference with respect to the tile(s). For example, an initial global pose instance can provide local pose module 292 with an indication of the tile in which vehicle 100 is located, and local pose module 292 can then determine the local pose instances relative to the tile(s).
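The following short Python sketch, offered only as an illustration of the local frame of reference described above, accumulates positional differences onto an initial point (X1, Y1, Z1); the values and variable names are hypothetical.

    # Illustrative only: local pose instances expressed relative to an initial
    # point in space, with each subsequent instance accumulating the positional
    # difference measured since the previous instance.
    initial_pose = (100.0, 50.0, 0.0)            # (X1, Y1, Z1), arbitrary example
    deltas = [(1.0, 0.5, 0.0), (1.0, 0.5, 0.0)]  # (X', Y', Z') then (X'', Y'', Z'')

    pose = initial_pose
    local_pose_instances = [pose]
    for dx, dy, dz in deltas:
        pose = (pose[0] + dx, pose[1] + dy, pose[2] + dz)
        local_pose_instances.append(pose)

    print(local_pose_instances)
    # [(100.0, 50.0, 0.0), (101.0, 50.5, 0.0), (102.0, 51.0, 0.0)]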
[0066] In some implementations, the local pose is not generated based on any vision data (e.g., LIDAR data or other vision data). Rather, as shown in FIG. 2B, local pose module 292 can receive instances of the second sensor data 180A described above (e.g., the IMU data and the wheel encoder data), and the local pose instances can be generated based at least in part on instances of the second sensor data 180A. Generating local pose instances without utilization of any vision data can enable the local pose instances to be generated more frequently (e.g., at a frequency that is greater than that of vision data generation) and using fewer computational resources. Further, generating local pose instances without utilization of any vision data can enable the local pose instances to be generated even when the vision sensor(s) generating the vision data are malfunctioning.
[0067] In some implementations, the local pose instances can be further generated based on the correction instance 254A transmitted to local pose module 292 from online calibration module 254. By generating the local pose instances based on the correction instance 254A, where the correction instance 254A is generated based on global pose instances, errors in generating the local pose instances can be quickly and efficiently corrected. Thus, the local pose instances more accurately reflect an actual pose of vehicle 100, and the local pose instances can be utilized by various other subsystem(s) described herein to control operation of vehicle 100 (e.g., planning subsystem 154, control subsystem 158, controlled stop subsystem 194, or other subsystems). As described herein, in various implementations the correction instances are generated based on past global pose instances and local pose instances, but differ from global pose instances. For example, the correction instances can include drift rate(s) in lieu of global pose instances. Further, the local pose module 292 can generate local pose instances, utilizing the drift rate(s), more efficiently than if global pose instances were instead utilized in generating the local pose instances. Yet further, the correction instances can be applicable to and utilized in generating multiple local pose instances, whereas global pose instances are only applicable to generating a single temporally corresponding local pose instance. Generating of the local pose instances is described in more detail below (e.g., with respect to FIGS. 3, block 700 of FIG. 4, and method 700A of FIG. 7). Further, local pose module 292 can transmit the generated local pose instances to other module(s) or subsystem(s) described herein (e.g., as discussed in more detail below with respect to FIGS. 3-6).
[0068] Moreover, and as shown in FIG. 2B, localization subsystem 152 of primary vehicle control system 120 may further include Earth Centered Earth Fixed ("ECEF") pose module 256 and bootstrap module 258. ECEF pose module 256 can generate ECEF pose instances of an ECEF pose of vehicle 100. The ECEF pose of vehicle 100 represents a pose of vehicle 100 with respect to longitude and latitude coordinates, and the ECEF pose instance represents position and location information with respect to the Earth at a given time instance. ECEF pose module 256 can receive instances of the third sensor data 130B described above (e.g., the SATNAV data), and can generate ECEF pose instances based at least in part on the instances of the third sensor data 130B. In some additional or alternative implementations, ECEF pose module 256 further generates the ECEF pose instances based on the local pose instances (e.g., as discussed in greater detail below with respect to FIG. 3, block 500 of FIG.
4, and FIG. 5). Generating of the ECEF pose instances is described in more detail below (e.g., with respect to FIGS. 3, block 500 of FIG. 4, and method 500A of FIG. 5). Further, bootstrap module 258 can identify candidate tile(s) in which vehicle 100 is located. The candidate tile(s) can be provided to global pose module 252 as information for the given tile in which vehicle 100 is actually located. Bootstrap module 258 can receive instances of the first sensor data 130A described above (e.g., the LIDAR data), and can identify candidate tile(s) based at least in part on ECEF pose instances and instances of the first sensor data 130A. In some additional or alternative implementations, bootstrap module 258 further identifies the candidate tile(s) based on the local pose instances (e.g., as discussed in greater detail below with respect to FIG. 3, block 500 of FIG. 4, and FIG. 5).
[0069] Notably, and as depicted in FIGS. 2A and 2B, the local pose instances can be generated by local pose module 292 at a first frequency f1 and the correction instances can be generated by online calibration module 254 at a second frequency f2, where the first frequency f1 is higher than the second frequency f2. Put another way, the local pose instances are generated at a faster rate than the correction instances. In this manner, a plurality of local pose instances can be generated based on the same correction instance, and prior to receiving, at the local pose module 292, an additional correction instance that is generated based on an additional global pose instance. When the additional correction instance is received at local pose module 292, a plurality of additional local pose instances can then be generated based on the additional correction instance, and so on. Thus, local pose module 292 can track relative movement of vehicle 100, and errors in tracking the relative movement of vehicle 100 can be mitigated by periodically adjusting calculations at local pose module 292 via the correction instance 254A that is generated based on actual locations of vehicle 100 as indicated by the global pose instances.
[0070] Turning now to FIG. 3, a process flow illustrating an example implementation of the localization subsystems referenced in FIGS. 2A and 2B is depicted. The process flow of FIG. 3 can be implemented by primary vehicle control system 120 and secondary vehicle control system 170. In particular, modules on the left side of dashed line 300 can be implemented by secondary vehicle control system 170 (e.g., via localization subsystem 192), and modules on the right side of the dashed line 300 can be implemented by primary vehicle control system 120 (e.g., via localization subsystem 152).
[0071] Local pose module 292 can receive instances of IMU data 182A generated by one or more IMUs of vehicle 100 (e.g., IMU(s) 182 of secondary sensor system 180 or IMU(s) 140 of primary sensor system 130). The IMU data 182A generated by one or more of the IMU(s) can be generated at a first frequency f1. Further, local pose module 292 can also receive an instance of wheel encoder data 184A generated by one or more wheel encoders of vehicle 100 (e.g., wheel encoder(s) 184 of secondary sensor system 180 or wheel encoder(s) 142 of primary sensor system 130). The wheel encoder data 184A generated by one or more of the wheel encoders can be generated at a second frequency f2 that is different from the first frequency f1 at which the IMU data 182A is generated. The combination of the IMU data 182A and the wheel encoder data 184A is sometimes referred to herein as "second sensor data" (e.g., second sensor data 180A of FIGS. 2A and 2B). Notably, the IMU data 182A and the wheel encoder data 184A are generated at different frequencies. Local pose module 292 can include propagated filter(s) that incorporate the most recent version of sensor data in instances of the second sensor data (i.e., anytime machinery). Further, local pose module 292 can receive a correction instance 254A generated by online calibration module 254 as described herein.
[0072] Moreover, local pose module 292 can process, using a state estimation model that is filter-based (e.g., Kalman filter, extended Kalman filter, dual Kalman filter, or other filter-based techniques) or observer-based (e.g., recursive least squares or other observer-based techniques), the instance of the second sensor data (including IMU data 182A and wheel encoder data 184A) and the correction instance 254A to generate output. The output can include, for example, a local pose instance 292A of a local pose of vehicle 100, estimated velocities of vehicle 100, estimated accelerations of vehicle 100, or any combination thereof. Local pose module 292 can then transmit the local pose instance 292A to other module(s) (e.g., ECEF pose module 256, bootstrap module 258, global pose module 252, online calibration module 254, or any combination thereof) and subsystem(s) (e.g., planning subsystem 154, control subsystem 158, controlled stop subsystem 194, or any combination thereof) over one or more networks via network interface 198. It should also be noted that a frequency at which local pose instances are generated can be based on the frequency at which instances of the second sensor data are generated.
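As a heavily simplified illustration of the kind of filter-based state estimation described above, the Python sketch below runs a one-dimensional Kalman filter in which IMU acceleration drives the prediction step and wheel-encoder speed serves as the measurement. The matrices, noise values, and names are assumptions, and a real implementation would estimate full pose, velocity, and acceleration rather than a single dimension.

    # Simplified 1D Kalman-filter sketch: IMU acceleration in the prediction
    # step, wheel-encoder speed in the update step. All values are assumptions.
    import numpy as np

    x = np.array([0.0, 0.0])          # state: [position, velocity]
    P = np.eye(2)                     # state covariance
    Q = np.diag([0.01, 0.1])          # process noise
    R = np.array([[0.05]])            # wheel-encoder measurement noise
    H = np.array([[0.0, 1.0]])        # the measurement observes velocity only

    def predict(x, P, accel, dt):
        F = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([0.5 * dt * dt, dt])
        x = F @ x + B * accel          # propagate state with IMU acceleration
        P = F @ P @ F.T + Q
        return x, P

    def update(x, P, wheel_speed):
        y = np.array([wheel_speed]) - H @ x        # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        return x, P

    # One IMU-driven prediction followed by one wheel-encoder update.
    x, P = predict(x, P, accel=0.5, dt=0.01)
    x, P = update(x, P, wheel_speed=0.004)
    print(x)   # estimated position and velocity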
[0073] ECEF pose module 256 can receive instances of SATNAV data 132A generated by a SATNAV sensor of vehicle 100 (e.g., SATNAV sensor 132 of primary sensor system 130). The SATNAV data 132A generated by one or more of the SATNAV sensors can be generated at a third frequency f3 that is different from both the first frequency f1 at which the IMU data 182A is generated and the second frequency f2 at which the wheel encoder data 184A is generated. Further, ECEF pose module 256 can also receive local pose instances generated by local pose module 292.
[0074] Moreover, ECEF pose module 256 can generate an ECEF pose instance 256A of an ECEF pose of vehicle 100 based on the instances of the SATNAV data 132A and the local pose instances from local pose module 292. The ECEF pose instance 256A can map longitude and latitude coordinates included in the SATNAV data 132A to the local pose instances received from local pose module 292. In some implementations, ECEF pose module 256 generates an ECEF pose instance 256A only if vehicle 100 travels a threshold distance (e.g., as indicated by the local pose instances). In other implementations, ECEF pose module 256 generates an ECEF pose instance 256A for each instance of the SATNAV data 132A that is received at ECEF pose module 256. In various implementations, ECEF pose module 256 may only transmit the ECEF pose instance 256A to bootstrap module 258 if there is a sufficiently low error in mapping the SATNAV data 132A to the local pose instances from local pose module 292. In some versions of those implementations, if the ECEF pose instance 256A is not generated with a sufficiently low error in the mapping, then ECEF pose module 256 may transmit location information to bootstrap module 258 without any orientation information.
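Although the specification does not prescribe any particular formula, the following Python sketch shows one standard (WGS-84) way that longitude and latitude from SATNAV data could be converted into Earth-Centered Earth-Fixed coordinates; it is included only as an illustration, and the function name and example coordinates are hypothetical.

    # Illustrative geodetic-to-ECEF conversion using standard WGS-84 parameters.
    # The specification does not prescribe this formula; it is one common option.
    import math

    def geodetic_to_ecef(lat_deg, lon_deg, alt_m=0.0):
        a = 6378137.0                  # WGS-84 semi-major axis (m)
        f = 1.0 / 298.257223563        # WGS-84 flattening
        e2 = f * (2.0 - f)             # first eccentricity squared
        lat = math.radians(lat_deg)
        lon = math.radians(lon_deg)
        n = a / math.sqrt(1.0 - e2 * math.sin(lat) ** 2)  # prime vertical radius
        x = (n + alt_m) * math.cos(lat) * math.cos(lon)
        y = (n + alt_m) * math.cos(lat) * math.sin(lon)
        z = (n * (1.0 - e2) + alt_m) * math.sin(lat)
        return x, y, z

    print(geodetic_to_ecef(37.7749, -122.4194, 10.0))  # example coordinates only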
[0075] Bootstrap module 258 can receive instances of LIDAR data 136A generated by one or more LIDAR sensors of vehicle 100 (e.g., LIDAR 136 of primary sensor system 130). The LIDAR data 136A generated by one or more of the LIDAR sensors can be generated at a fourth frequency f4. Further, bootstrap module 258 can also receive local pose instance(s) 292A generated by local pose module 292 and ECEF pose instances (with or without orientation information) generated by ECEF pose module 256.
[0076] Moreover, bootstrap module 258 can process an instance of the LIDAR data 136A, a local pose instance 292A, and an ECEF pose instance 256A to identify candidate tile(s) 258A in which vehicle 100 is potentially located. The candidate tile(s) 258A represent previously mapped portions of geographical areas that are stored in a memory of the system, which can be located locally at vehicle 100 or remotely at one or more databases. In some implementations, the candidate tile(s) 258A can include information as to a current tile in which vehicle 100 is located. The global pose module 252 can utilize the information to identify a matching tile in which vehicle 100 is located. In some implementations, bootstrap module 258 can identify the candidate tile(s) 258A by assembling point cloud data from the LIDAR data 136A into a point cloud that includes a down sampled version of the LIDAR data with respect to a given tile associated with the local pose instance 292A (and optionally a neighborhood of tiles surrounding the given tile), identifying tile(s) that map geographical areas within a threshold radius from the location on the Earth indicated by the ECEF pose instance 256A, and aligning the down sampled point cloud to the tile(s) to identify the candidate tile(s) 258A. Bootstrap module 258 can align the down sampled point cloud to the tile(s) using various geometric matching techniques (e.g., iterative closest point or other geometry matching algorithms). Moreover, in assembling the point cloud data from the LIDAR data 136A into the down sampled point cloud, bootstrap module 258 may remove outlier data or fuzzy data.
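For illustration only, the Python sketch below implements a minimal point-to-point iterative closest point (ICP) alignment in two dimensions and uses its residual error to pick among candidate tiles; the 2D simplification, function names, error metric, and use of scipy are assumptions and do not reflect the production pipeline.

    # Minimal 2D point-to-point ICP sketch for the geometric matching mentioned
    # above (aligning a down sampled point cloud to stored tile point clouds).
    import numpy as np
    from scipy.spatial import cKDTree

    def icp_2d(source, target, iterations=20):
        # Returns (rotation, translation, mean nearest-neighbor error).
        src = source.copy()
        tree = cKDTree(target)
        R_total, t_total = np.eye(2), np.zeros(2)
        for _ in range(iterations):
            _, idx = tree.query(src)                   # nearest neighbors in tile
            matched = target[idx]
            src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
            H = (src - src_c).T @ (matched - tgt_c)    # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                   # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = tgt_c - R @ src_c
            src = src @ R.T + t
            R_total, t_total = R @ R_total, R @ t_total + t
        final_error, _ = tree.query(src)
        return R_total, t_total, float(final_error.mean())

    def best_candidate_tile(down_sampled_cloud, tiles):
        # tiles: dict mapping a tile identifier to its stored (N, 2) point cloud.
        errors = {name: icp_2d(down_sampled_cloud, cloud)[2]
                  for name, cloud in tiles.items()}
        return min(errors, key=errors.get)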
[0077] In various implementations, the system can store a last known tile prior to losing power (e.g., vehicle 100 being turned off), retrieve the last known tile upon power being restored (e.g., vehicle 100 being turned back on), and identify the last known tile as a candidate tile (and optionally a neighborhood of tiles that includes the last known tile). In some implementations, bootstrap module 258 provides candidate tile(s) 258A to global pose module 252 as instances of the LIDAR data 136A are generated. In other implementations, bootstrap module 258 provides candidate tile(s) 258A to global pose module 252 at predetermined periods of time (e.g., every ten seconds, every thirty seconds, or other predetermined periods of time). In various implementations, bootstrap module 258 may only transmit the candidate tile(s) 258A to global pose module 252 if there is a sufficiently low error in aligning the down sampled point cloud with respect to a given tile associated with the local pose instance 292A.
[0078] Global pose module 252 can receive instances of LIDAR data 136A generated by one or more LIDAR sensors of vehicle 100 (e.g., LIDAR 136 of primary sensor system 130). The LIDAR data 136A generated by one or more of the LIDAR sensors can be generated at a fourth frequency f4. In some implementations, the instance of the LIDAR data 136A received by global pose module 252 is the same instance of LIDAR data utilized by bootstrap module 258 to identify the candidate tile(s) 258A. In other implementations, the instance of the LIDAR data 136A received by global pose module 252 is distinct from the instance of LIDAR data utilized by bootstrap module 258 to identify the candidate tile(s) 258A. Further, global pose module 252 can also receive local pose instance(s) 292A generated by local pose module 292 and candidate tile(s) 258A identified by bootstrap module 258.
[0079] Moreover, global pose module 252 can process an instance of the LIDAR data 136A, a local pose instance 292A, and candidate tile(s) to generate a global pose instance 252A. The global pose instance 252A can identify a matching tile in which vehicle 100 is located, and position and orientation information of vehicle 100 within the matching tile. In some implementations, global pose module 252 generates the global pose instance 252A by aligning a point cloud generated based on the LIDAR data 136A with one or more previously stored point clouds of a given tile. In some versions of those implementations, global pose module 252 can align the point cloud and one or more of the previously stored point clouds using various geometric matching techniques (e.g., iterative closest point or other geometry matching algorithms). The one or more previously stored point clouds can be stored in association with a given tile, and can be accessed over one or more networks (e.g., using mapping subsystem 160). In some further versions of those implementations, the one or more previously stored point clouds can be identified based on a most recently generated local pose instance (e.g., local pose instance 292A) or based on the second sensor data (e.g., IMU data 182A or wheel encoder data 184A). The one or more previously stored point clouds can be stored in association with the given tile associated with the most recently generated local pose instance (e.g., local pose instance 292A) or a location of vehicle 100 determined based on the second sensor data (e.g., IMU data 182A or wheel encoder data 184A).
[0080] In some versions of those implementations, global pose module 252 generates the global pose instance 252A by assembling point cloud data from the LIDAR data 136A into a down sampled point cloud with respect to a given tile associated with the local pose instance 292A (and optionally a neighborhood of tiles surrounding the given tile) and a point cloud that includes a finer sampled version of the LIDAR data with respect to the given tile associated with the local pose instance 292A (and optionally a neighborhood of tiles surrounding the given tile), aligning the down sampled point cloud to the tile(s) to identify a subset of the candidate tile(s) 258A, and aligning the finer sampled point cloud to the tile(s) to identify a matching tile. Global pose module 252 can align the down sampled point cloud and the finer sampled point cloud to the tile(s) using various geometric matching techniques (e.g., iterative closest point or other geometry matching algorithms). Moreover, in assembling the point cloud data from the LIDAR data 136A into the down sampled point cloud or the finer sampled point cloud, global pose module 252 may remove outlier data or fuzzy data.
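The coarse-to-fine strategy described above can be illustrated with the hypothetical Python sketch below, in which a cheap nearest-neighbor error stands in for the full geometric matching: the down sampled cloud prunes the candidate tiles, and the finer sampled cloud is aligned only against the surviving subset. All names, the error metric, and the subset size are assumptions.

    # Illustrative coarse-to-fine tile matching. A mean nearest-neighbor distance
    # stands in for a full ICP residual; names and thresholds are assumptions.
    import numpy as np
    from scipy.spatial import cKDTree

    def alignment_error(cloud, tile_cloud):
        # Mean nearest-neighbor distance between the cloud and the tile's points.
        dist, _ = cKDTree(tile_cloud).query(cloud)
        return float(dist.mean())

    def find_matching_tile(down_sampled, finer_sampled, candidate_tiles, keep=3):
        # Coarse pass: rank every candidate tile with the down sampled cloud.
        coarse = sorted(candidate_tiles,
                        key=lambda name: alignment_error(down_sampled,
                                                         candidate_tiles[name]))
        subset = coarse[:keep]
        # Fine pass: align the finer sampled cloud only to the surviving subset.
        return min(subset,
                   key=lambda name: alignment_error(finer_sampled,
                                                    candidate_tiles[name]))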
[0081] In some implementations, global pose module 252 generates global pose instances as instances of the LIDAR data 136A are generated. In some versions of those implementations, global pose module 252 may only transmit the global pose instance 252A to online calibration module 254 after a threshold number of global pose instances are successfully generated. By only transmitting the global pose instance 252A after determining a threshold number of global pose instances are successfully generated, the resulting correction instance 254A is generated based on an accurate global pose instance, rather than a global pose instance that does not represent an actual pose of vehicle 100.

[0082] Online calibration module 254 can receive an instance of the IMU data 182A, an instance of the wheel encoder data 184A, an instance of the SATNAV data 132A, and the global pose instance 252A. In some implementations, online calibration module 254 can process, using a state estimation model that is filter-based (e.g., Kalman filter, extended Kalman filter, dual Kalman filter, or other filter-based techniques) or observer-based (e.g., recursive least squares or other observer-based techniques), the instance of the IMU data 182A, the instance of the wheel encoder data 184A, the instance of the SATNAV data 132A, and the global pose instance 252A to generate output. In some versions of those implementations, the output can include, for example, estimates of wheel radii of vehicle 100 or sensor biases of individual sensors of vehicle 100 (e.g., sensor(s) included in primary sensor system 130 or secondary sensor system 180). The correction instance 254A can then be generated based on the estimates of the wheel radii of vehicle 100 or the sensor biases of individual sensors of vehicle 100. In other versions of those implementations, the output generated across the state estimation model can be the correction instance 254A, such that the state estimation model acts like a black box.
[0083] In other implementations, online calibration module 254 can generate the correction instance 254A based on historical global pose instances, including the global pose instance 252A, and historical local pose instances (e.g., as indicated by the dashed line from the local pose instances 292A), including the local pose instance 292A. The historical pose instances (both global and local) may be limited to those that are generated within a threshold duration of time with respect to a current time (e.g., pose instances generated in the last 100 seconds, 200 seconds, or other durations of time), such that online calibration module 254 only considers a sliding window of the historical pose instances. Further, online calibration module 254 can identify historical global pose instances that temporally correspond to historical local pose instances, and can use the identified historical global pose instances as indicators in generating the correction instance. As described in more detail herein (e.g., with respect to block 600 of FIG. 4 and FIG. 6), the correction instance can include drift rate(s). Online calibration module 254 can then transmit the correction instance 254A to local pose module 292 over one or more networks via network interface 198. Thus, local pose instances generated by local pose module 292 can be generated based on the correction instance 254A as well as additional instances of the IMU data 182A and additional instances of the wheel encoder data 184A.
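The sliding-window idea can be made concrete with the hypothetical Python sketch below, which fits temporal and distance drift rates to the differences between temporally corresponding global and local pose instances, reduced here to a single position dimension; the window contents, the least-squares formulation, and the names are assumptions rather than the claimed method.

    # Illustrative drift-rate estimation over a sliding window of temporally
    # corresponding pose instances, reduced to one position dimension.
    import numpy as np

    def estimate_drift_rates(window):
        # window: (elapsed_s, distance_m, global_pos, local_pos) per pose pair.
        # Solves global_pos - local_pos ~= temporal_rate * elapsed_s
        #                                 + distance_rate * distance_m.
        A = np.array([[t, d] for t, d, _, _ in window])
        b = np.array([g - l for _, _, g, l in window])
        (temporal_rate, distance_rate), *_ = np.linalg.lstsq(A, b, rcond=None)
        return temporal_rate, distance_rate

    window = [
        (1.0, 10.0, 10.0, 9.0),   # after 1 s / 10 m the local pose lags by 1.0 m
        (2.0, 19.0, 19.0, 17.2),
        (3.0, 31.0, 31.0, 27.9),
    ]
    print(estimate_drift_rates(window))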
[0084] Turning now to FIG. 4, an example method 400 for localization of an autonomous vehicle is illustrated. The method 400 may be performed by an autonomous vehicle analyzing sensor data generated by sensor(s) of the autonomous vehicle (e.g., vehicle 100 of FIG. 1), by another vehicle (autonomous or otherwise), by another computer system that is separate from the autonomous vehicle, or any combination thereof. For the sake of simplicity, operations of the method 400 are described herein as being performed by a system (e.g., processor(s) 122 of primary vehicle control system 120, processor(s) 172 of secondary vehicle control system 170, or a combination thereof). Moreover, the operations of method 400 of FIG. 4 are described herein with respect to method 500A of FIG. 5, method 600A of FIG. 6, and method 700A of FIG. 7. It will be appreciated that the operations of the methods of FIGS. 4-7 may be varied, and that some operations may be performed in parallel or iteratively in some implementations, so the methods illustrated in FIGS. 4-7 are merely provided for illustrative purposes.
[0085] At block 500 of FIG. 4, the system generates global pose instance(s) of a global pose of an autonomous vehicle based on an instance of first sensor data generated by first sensor(s) of the autonomous vehicle and based on a local pose instance of a local pose of the autonomous vehicle. More particularly, the system can generate the global pose instance using localization subsystem 152 of primary vehicle control system 120. As one non-limiting example, FIG. 5 shows an example method 500A of how the system generates the global pose instance(s) of the global pose of the autonomous vehicle at block 500 of FIG. 4.
[0086] At block 552, the system receives an instance of third sensor data generated by third sensor(s) of the autonomous vehicle. The third sensor data can include, for example, SATNAV data generated by one or more SATNAV sensors of the autonomous vehicle (e.g., SATNAV sensor 132 of primary sensor system 130). The instance of the third sensor data can include, for example, a most recent instance of the SATNAV data generated by the SATNAV sensor that includes location data, such as longitude and latitude coordinates.

[0087] At block 554, the system generates an ECEF pose instance of an ECEF pose of the autonomous vehicle based on the instance of the third sensor data received at block 552. The ECEF pose of the autonomous vehicle represents a pose of the autonomous vehicle with respect to longitude and latitude coordinates, and the ECEF pose instance represents position and location information with respect to given longitude and latitude coordinates at a given time instance. In some implementations, the system generates the ECEF pose instance by determining location and position information for the autonomous vehicle with respect to the Earth based on the longitude and latitude coordinates included in the SATNAV data. Put another way, the ECEF pose instance identifies a location on the Earth (e.g., using longitude and latitude coordinates) at which the autonomous vehicle is located.

[0088] In some additional or alternative implementations, generating the ECEF pose instance is further based on a local pose instance of a local pose of the autonomous vehicle. In some versions of those implementations, the system maps the local pose instance to the ECEF pose instance. For example, the system can identify a given tile associated with the local pose instance, and can determine the given tile associated with the local pose instance is within a threshold radius of the longitude and latitude coordinates associated with the ECEF pose instance. Further, the system may only transmit the ECEF pose instance to other subsystems or modules thereof (e.g., bootstrap module 258) if the system successfully maps the local pose instance to the ECEF pose instance (or an error in the mapping fails to satisfy an error threshold or is sufficiently low).
[0089] At block 556, the system receives an instance of the first sensor data generated by the first sensor(s) of the autonomous vehicle. The first sensor data can include, for example, LIDAR data generated by one or more LIDAR sensors of the autonomous vehicle (e.g., LIDAR sensor 136 of primary sensor system 130). The instance of the first sensor data can include, for example, a most recent instance of the LIDAR data that includes point cloud data from a sensing cycle of the one or more LIDAR sensors.
[0090] At block 558, the system identifies candidate tile(s) based on the ECEF pose instance of the autonomous vehicle generated at block 554 or the instance of the first sensor data received at block 556. The candidate tile(s) represent previously mapped portions of geographical areas that are stored in memory of the system or remotely. The system can retrieve the identified candidate tile(s) using, for example, mapping subsystem 160. In some implementations, the system can identify the candidate tile(s) by assembling point cloud data from a sensing cycle of the LIDAR sensor into a down sampled point cloud, identifying tile(s) that map geographical areas within a threshold radius from the location on the Earth indicated by the ECEF pose instance, and aligning the down sampled point cloud to the tile(s) to identify candidate tile(s). The system can align the down sampled point cloud to the tile(s) using various geometric matching techniques (e.g., iterative closest point or other geometry matching algorithms). In some implementations, the system can store a last known tile prior to losing power (e.g., the autonomous vehicle being turned off), retrieve the last known tile upon power being restored (e.g., the autonomous vehicle being turned back on), and identify the last known tile as a candidate tile (and optionally a neighborhood of tiles that includes the last known tile).
[0091] In some additional or alternative implementations, identifying the candidate tile(s) is further based on a local pose instance of a local pose of the autonomous vehicle. In some versions of those implementations, the system can identify the candidate tile(s) by identifying a given tile associated with the local pose instance, aligning the down sampled point cloud to the given tile associated with the local pose instance, and identifying a neighborhood of tile(s), including the given tile, as candidate tile(s). A neighborhood of tile(s) can include, for example, the given tile and a plurality of additional tiles that are locationally proximate to the given tile (e.g., four tiles that border the given tile, eight tiles that surround the given tile, or other configurations of locationally proximate tiles). Further, the system may only transmit the candidate tile(s) to other subsystems or modules thereof (e.g., global pose module 252) if the system successfully aligns the down sampled point cloud to the given tile associated with the local pose instance (or an error in the aligning fails to satisfy an error threshold or is sufficiently low).
[0092] At block 560, the system generates the global pose instance of the global pose of the autonomous vehicle based on an instance of the first sensor data. The global pose of the autonomous vehicle represents a pose of the autonomous vehicle with respect to tile(s), and the global pose instance represents position and location information with respect to a given tile at a given time instance. In some implementations, the global pose instance is generated based on the same LIDAR data from the same sensing cycle of the one or more LIDAR sensors that is used to identify the candidate tile(s). In other implementations, the global pose instance is generated based on LIDAR data from a different sensing cycle of the one or more LIDAR sensors than the sensing cycle used to identify the candidate tile(s). In some additional or alternative implementations, the global pose instance is further generated based on instances of second sensor data generated by second sensor(s) of the autonomous vehicle (e.g., IMU(s) 140, 182, wheel encoder(s) 142, 184, or other sensors of FIG. 1).
[0093] In some implementations, and as indicated at sub-block 560A, the system can generate the global pose instance by identifying a matching tile based on the instance of the first sensor data. In some versions of those implementations, generating the global pose instance is further based on the candidate tile(s) identified at block 558, and the system can identify the matching tile from among the candidate tile(s) responsive to identifying the candidate tile(s) at block 558. The system can identify the matching tile by assembling point cloud data from a sensing cycle of the LIDAR sensor into a down sampled point cloud (e.g., a representation that includes only a portion of the LIDAR data generated over the sensing cycle) and a finer sampled point cloud (e.g., a representation that includes all LIDAR data generated over the sensing cycle), selecting a given one of the candidate tile(s) to compare to the point cloud data, iteratively aligning the down sampled point cloud to the candidate tile(s) to identify a subset of the candidate tile(s), and aligning the finer sampled point cloud to the subset of the candidate tile(s) to identify a matching tile. Further, the system can align the down sampled point cloud or the finer sampled point cloud to the candidate tile(s) using various geometric matching techniques (e.g., iterative closest point or other geometry matching algorithms). Thus, at sub-block 560A, the system aligns the down sampled point cloud to further narrow the candidate tile(s) to those in which the autonomous vehicle is most likely located, and only aligns the finer sampled point cloud to those tile(s) rather than fully processing each of the candidate tile(s). Moreover, by identifying the matching tile using the finer sampled point cloud, location information and orientation information of the autonomous vehicle can be determined.
[0094] In some additional or alternative versions of those implementations, generating the global pose instance is further based on a local pose instance of a local pose of the autonomous vehicle that temporally corresponds to the global pose instance. In some versions of those implementations, the system can identify the matching tile by identifying a given tile associated with the local pose instance and a neighborhood of tiles locationally proximate to the given tile, and can identify the matching tile in a similar manner described above, but using the neighborhood of tiles locationally proximate to the given tile associated with the local pose instance rather than the candidate tile(s) identified at block 558.
Further, the system may only transmit the global pose instance to other subsystems or modules thereof (e.g., online calibration module 254) if the system successfully aligns the finer sampled point cloud to the given tile associated with the local pose instance (or an error in the aligning fails to satisfy an error threshold or is sufficiently low).
[0095] At block 562, the system determines whether there is any failure in generating the global pose instance. A failure in generating the global pose instance can be detected, for example, based on sensor failure of one or more of the LIDAR sensors of the autonomous vehicle, failure to identify a matching tile (e.g., the error satisfies a threshold due to construction areas altering traffic patterns, due to occlusion caused by other vehicles or foliage, due to unmapped geographical areas, and so on), or other failures. If, at an iteration of block 562, the system does not detect any failure in generating the global pose instance, then the system proceeds to block 564.
[0096] At block 564, the system stores the global pose instance, and transmits the global pose instance to an online calibration module. The system can store the global pose instance in memory of the system or remotely. Moreover, and as discussed in greater detail below with respect to block 600 of FIG. 4, the system can use the global pose instance in generating correction instance(s).
[0097] At block 566, the system determines whether a further instance of the first sensor data is received. If, at an iteration of block 566, the system determines that no further instance of the first sensor data is received, then the system continuously monitors, at block 566, for the further instance of the first sensor data. However, at an iteration of block 566, if the system determines that a further instance of the first sensor data is received, then the system returns to block 560 and proceeds with the method 500A. At this next iteration of block 560, the system can generate the global pose instance based on the further instance of the first sensor data. In this manner, the system can continuously generate the global pose instance(s) for each instance of the first sensor data received by the system, store the generated global pose instance(s), and transmit the global pose instance(s) to the online calibration module to generate correction instance(s).
[0098] However, at an iteration of block 562, if the system detects a failure in generating the global pose instance, then the system returns to block 552 and proceeds with the method 500A. Notably, the operations of blocks 552-558 are depicted as being separate from blocks 560-566, and blocks 552-558 can be considered initialization operations for generating global pose instance(s). In some implementations, the system may only execute the initialization operations of blocks 552-558 again in the method 500A if the system detects a failure in generating a given global pose instance. As noted above, the system can store a last known tile prior to losing power (e.g., the autonomous vehicle being turned off), and retrieve the last known tile upon power being restored (e.g., the autonomous vehicle being turned back on). Thus, even upon power being restored at the autonomous vehicle after losing power, the system may not need to perform the initialization operations of blocks 552-558. Put another way, the system can use information from previously generated global pose instance(s) and local pose instance(s) in subsequently generating the global pose instance(s) without having to perform the initialization operations of blocks 552-558 at each iteration of generating a global pose instance. In other implementations, and although not depicted, the system can periodically perform the initialization operations of blocks 552-558. The system can periodically perform the initialization operations of blocks 552-558 for the sake of redundancy and verification of the tile(s) being utilized in generating the global pose instance(s).
[0099] Referring back to FIG. 4, at block 600, the system generates correction instance(s) based on the global pose instance(s) and based on an instance of second sensor data generated by second sensor(s) of the autonomous vehicle. More particularly, the system can generate the correction instance(s) using localization subsystem 152 of primary vehicle control system 120. As one non-limiting example, FIG. 6 shows an example method 600A of how the system generates the correction instance at block 600 of FIG. 4.
[00100] At block 652, the system receives the global pose instance of the global pose of the autonomous vehicle from a global pose module. The global pose instance can be generated, for example, as discussed above with respect to block 500 of FIG. 4. At block 654, the system receives an instance of the second sensor data generated by the second sensor(s) of the autonomous vehicle. The second sensor data can include, for example, IMU data generated by one or more IMUs of the autonomous vehicle (e.g., IMU(s) 140 of primary sensor system 130 or IMU(s) 182 of secondary sensor system 180), wheel encoder data generated by one or more wheel encoders of the autonomous vehicle (e.g., wheel encoder(s) 142 of primary sensor system 130 or wheel encoder(s) 184 of secondary sensor system 180), or both. The instance of the second sensor data can include, for example, a most recent instance of the IMU data generated by one or more of the IMUs or a most recent instance of the wheel encoder data generated by the one or more wheel encoders.
[00101] At block 656, the system receives an instance of the third sensor data generated by the third sensor(s) of the autonomous vehicle. As noted above, the third sensor data can include, for example, SATNAV data generated by one or more SATNAV sensors of the autonomous vehicle (e.g., SATNAV sensor 132 of primary sensor system 130). The instance of the third sensor data can include, for example, a most recent instance of the SATNAV data generated by the one or more SATNAV sensors that includes location data, such as longitude and latitude coordinates.
[00102] At block 658, the system generates the correction instance based on the global pose instance, the instance of the second sensor data, the instance of the third sensor data, or any combination thereof. In some implementations, the system can generate the correction instance using a state estimation model that is filter-based (e.g., Kalman filter, extended Kalman filter, dual Kalman filter, or other filter-based techniques) or observer-based (e.g., recursive least squares or other observer-based techniques). For example, the system can apply, as input across the state estimation model, the global pose instance received at block 652, the IMU data or the wheel encoder data included in the instance of the second sensor data received at block 654, or the SATNAV data included in the instance of the third sensor data received at block 656 to generate output. In some versions of those implementations, the output generated across the state estimation model can be estimates of wheel radii of the autonomous vehicle or sensor biases of individual sensors of the autonomous vehicle (e.g., sensor(s) included in primary sensor system 130 or secondary sensor system 180). The correction instance can then be generated based on the estimates of the wheel radii of the autonomous vehicle or the sensor biases of individual sensors of the autonomous vehicle.
In other versions of those implementations, the output generated across the state estimation model can be the correction instance, such that the state estimation model acts like a black box.
[00103] In other implementations, the system can generate the correction instance based on historical global pose instances, including the global pose instance received at block 652, and historical local pose instances, including a local pose instance generated by the local pose module based on the instance of the second sensor data received at block 654. The historical pose instances (both global and local) may be limited to those that are generated within a threshold duration of time with respect to a current time (e.g., pose instances generated in the last 100 seconds, 200 seconds, or other durations of time), such that the system only considers a sliding window of the historical pose instances. The system can identify historical global pose instances that temporally correspond to historical local pose instances, and can use the identified historical global pose instances as indicators in generating the correction instance. As described herein, a plurality of local pose instances can be generated in the same amount of time it takes to generate a single global pose instance. Further, as local pose instances are generated, each local pose instance can diverge further from a most recent global pose instance, and by identifying the global pose instances that temporally correspond to a given one of the local pose instances, the system can determine how each of the local pose instances are diverging.
[00104] In some versions of those implementations, the system can apply, as input across a Gauss-Newton optimization algorithm, the historical pose instances (both global and local), the IMU data or the wheel encoder data included in the instance of the second sensor data received at block 654, or the SATNAV data included in the instance of the third sensor data received at block 656 to generate output. In some versions of those implementations, the output generated across the Gauss-Newton optimization algorithm can be estimates of wheel radii of the autonomous vehicle or sensor biases of individual sensors of the autonomous vehicle (e.g., sensor(s) included in primary sensor system 130 or secondary sensor system 180). The correction instance can then be generated based on the estimates of the wheel radii of the autonomous vehicle or the sensor biases of individual sensors of the autonomous vehicle. In other versions of those implementations, the output generated across the Gauss-Newton optimization algorithm can be the correction instance, such that the optimization algorithm acts like a black box.
[00105] In various implementations, the correction instance can include drift rate(s), and as indicated at sub-block 658A, the system can determine the drift rate(s) across multiple local pose instances of the local pose of the autonomous vehicle that are generated by the system. The drift rate(s) can indicate a first magnitude of drift, in one or more dimensions (e.g., X-dimension, Y-dimension, Z-dimension, roll-dimension, pitch-dimension, yaw-dimension, or other dimensions), over a period of time. Further, the drift rate(s) can indicate a second magnitude of drift, in one or more of the dimensions, over a distance. Put another way, the drift rate(s) can include, for example, a temporal drift rate, a distance drift rate, or both. The temporal drift rate can represent a first magnitude of drift, in one or more dimensions, over the period of time in generating the multiple local pose instances. Further, the distance drift rate can represent a second magnitude of drift, in one or more of the dimensions, over a distance travelled in generating the multiple local pose instances. In some versions of those implementations, the correction instance can include a linear combination of the temporal drift rate and the distance drift rate.
[00106] For example, assume a first local pose instance is generated at a first time based on an instance of second sensor data (and optionally further based on a first correction instance). Further assume a first global pose instance is generated at or near (e.g., within a threshold amount of time, such as within 100 milliseconds, 300 milliseconds, 500 milliseconds, or other amounts of time) the first time based on an instance of first sensor data (and optionally further based on a prior local pose instance). Further assume an additional local pose instance is generated at a second time based on an additional instance of the second sensor data (and optionally further based on the first correction instance). Further assume an additional global pose instance is generated at or near the second time based on an additional instance of the first sensor data (and optionally further based on a prior local pose instance). In some implementations, the system can determine the first local pose instance temporally corresponds to the first global pose instance based on the pose instances being generated at or near the same time (i.e., the first time). Further, the system can determine the additional local pose instance temporally corresponds to the additional global pose instance based on the additional pose instances being generated at or near the same time (i.e., the second time). The system can determine the drift rate(s) for the correction instance based on at least the first local pose instance, the additional local pose instance, the first global pose instance, the additional global pose instance, or any combination thereof.
[00107] For example, the system can compare the local pose instance with the temporally corresponding global pose instance, and the system can also compare the additional local pose instance with the temporally corresponding additional global pose instance. Based on differences determined in these comparisons, the system can determine a magnitude of drift, in one or more dimensions, over a period of time or over a distance. For instance, assume the global pose instances indicate that the autonomous vehicle traveled ten meters in one second in a first direction (e.g., over the difference between the first time and the second time), but the local pose instances indicate that the autonomous vehicle travelled nine meters in the one second and two degrees off of the first direction. In this example, the determined drift rate(s) would indicate that the local pose instances are being generated with an error, to account for the one meter and two degree discrepancy. In some further versions of those implementations, the system can further identify a plurality of further local pose instances that were generated between the first time and the second time (and optionally further based on the first correction instance). In some versions of those implementations, the further local pose instances provide more data that can be used in determining the drift rate(s) even though the further local pose instances may not have temporally corresponding global pose instances.
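The example above can be made numeric with the short, purely illustrative Python sketch below; how the one meter and two degree discrepancy is split into per-second and per-meter rates is an assumption of this sketch, not something the example itself specifies.

    # Purely illustrative arithmetic for the example above: 10 m of travel per
    # the global pose instances versus 9 m and a 2 degree heading offset per the
    # local pose instances, over one second.
    elapsed_s = 1.0
    global_distance_m = 10.0
    local_distance_m = 9.0
    heading_error_deg = 2.0

    position_error_m = global_distance_m - local_distance_m     # 1.0 m

    temporal_drift = (position_error_m / elapsed_s,              # 1.0 m per second
                      heading_error_deg / elapsed_s)             # 2.0 deg per second
    distance_drift = (position_error_m / global_distance_m,      # 0.1 m per meter
                      heading_error_deg / global_distance_m)     # 0.2 deg per meter

    print(temporal_drift, distance_drift)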
[00108] At block 660, the system stores the correction instance, and transmits the correction instance to a local pose module. The system can store the correction instance in memory of the system or remotely. Moreover, and as discussed in greater detail below with respect to block 700 of FIG. 4, the system can use the correction instance in generating local pose instance(s). In some implementations, the system may delay transmitting of the correction instance to the local pose module for a predetermined period of time (e.g., one second, five seconds, ten seconds, or other threshold durations of time). By delaying transmitting of the correction instance, the system can verify the instance of the second sensor data or the instance of the third sensor data used in generating the correction instance does not include any outliers, errors, or faulty data. In some versions of those implementations, if the system determines the sensor data includes outliers, errors, or faulty data, then the system can discard the generated correction instance, remove the outliers, errors, or faulty data, and generate an additional correction instance that is not based on the outliers, errors, or faulty data.
[00109] At block 662, the system determines whether an additional instance of the global pose is received. If, at an iteration of block 662, the system determines that no additional global pose instance is received from the global pose module, then the system continuously monitors, at block 662, for the additional global pose instance. If, at an iteration of block 662, the system determines that an additional global pose instance is received from the global pose module, then the system returns to block 654 and proceeds with the method 600A. At this next iteration of block 654, the system can generate the correction instance based on the additional global pose instance. In this manner, the system can continuously generate the correction instance(s) for each instance of the global pose generated by the system, store the correction instance(s), and transmit each of the correction instance(s) to the local pose module to generate local pose instance(s). Moreover, it should be noted that, in some implementations, a correction instance can be generated by the system for each global pose instance that is generated by the system, such that generating the correction instance may also be based on a frequency at which the global pose instance is generated, while in other implementations, the system may extrapolate the global pose from historical global pose instances to generate the correction instances at a faster rate than the frequency at which the global pose instance is generated.
[00110] Referring back to FIG. 4, at block 700, the system generates local pose instance(s) of the local pose of the autonomous vehicle based on the correction instance(s) and based on an additional instance of the second sensor data. More particularly, the system can generate the local pose instance(s) using localization subsystem 192 of secondary vehicle control system 170. As one non-limiting example, FIG. 7 shows an example method 700A of how the system generates the local pose instance(s) of the local pose of the autonomous vehicle at block 700 of FIG. 4.
[00111] At block 752, the system receives an instance of the second sensor data generated by the second sensor(s) of the autonomous vehicle. As noted above, the second sensor data can include, for example, IMU data generated by one or more IMUs of the autonomous vehicle (e.g., IMU(s) 140 of primary sensor system 130 or IMU(s) 182 of secondary sensor system 180), wheel encoder data generated by one or more wheel encoders of the autonomous vehicle (e.g., wheel encoder(s) 142 of primary sensor system 130 or wheel encoder(s) 184 of secondary sensor system 180), or both. The instance of the second sensor data can include, for example, a most recent instance of the IMU data generated by one or more of the IMUs and a most recent instance of the wheel encoder data generated by the one or more wheel encoders.
[00112] At block 754, the system generates the local pose instance of the local pose of the autonomous vehicle based on the instance of the second sensor data. In some implementations, the particular frame of reference of the local pose instances can be a local frame of reference. For example, an initial local pose instance can correspond to a certain point in space (e.g., X1, Y1, and Z1). In some additional or alternative implementations, the particular frame of reference of the local pose instances can be a local frame of reference with respect to the tile(s). For example, an initial global pose instance can provide local pose module 292 with an indication of the tile in which vehicle 100 is located, and local pose module 292 can then determine the local pose instances relative to the tile(s). The local pose instance can be generated based on the IMU data or the wheel encoder data included in the instance of the second sensor data received at block 752. In some implementations, the system can generate the local pose instance using a state estimation model that is filter-based (e.g., Kalman filter, extended Kalman filter, dual Kalman filter, or other filter-based techniques) or observer-based (e.g., recursive least squares or other observer-based techniques). For example, the system can apply, as input across the state estimation model, the IMU data or the wheel encoder data included in the instance of the second sensor data received at block 752 to generate output. The output generated across the state estimation model can be the local pose instance, estimated velocities of the autonomous vehicle, estimated accelerations of the autonomous vehicle, or any combination thereof. The local pose instance, the estimated velocities of the autonomous vehicle, and the estimated accelerations of the autonomous vehicle can be raw values or other representations.

[00113] Notably, and in contrast with generating the global pose instances, the system does not utilize instances of the first sensor data (e.g., LIDAR data) in generating the local pose instances. Moreover, the local pose instance seeks to model relative position and orientation information of the autonomous vehicle based on actual movement of the autonomous vehicle (e.g., as indicated by the IMU data and the wheel encoder data). In contrast, the global pose instance seeks to model actual position and orientation information of the autonomous vehicle within the tile(s) based on matching sensor data to the tile(s) (e.g., using LIDAR data). Further, it should be appreciated that the system can generate multiple local pose instances in parallel for the sake of redundancy or safety.

[00114] At block 756, the system stores the local pose instance, and transmits the local pose instance to a plurality of subsystems or modules (e.g., ECEF pose module 256, bootstrap module 258, global pose module 252). As described herein with respect to FIGS. 5 and 6, the local pose instance can be utilized in generating ECEF pose instances (e.g., via ECEF pose module 256), identifying candidate tile(s) (e.g., via bootstrap module 258), generating global pose instances (e.g., via global pose module 252), generating correction instance(s) (e.g., via online calibration module 254), or by other modules. Moreover, the local pose instance can be utilized by various other subsystems (e.g., planning subsystem 154, control subsystem 158, controlled stop subsystem 194, or other subsystems).
Thus, even if the autonomous vehicle loses power to primary vehicle control system 120, the autonomous vehicle can continue to operate using only secondary vehicle control system 170 and local pose instances generated by the system.
[00115] At block 758, the system determines whether a correction instance is received from the online calibration module. If, at an iteration of block 758, the system determines that no correction instance is received from the online calibration module, then the system returns to block 752. Notably, upon initialization of the autonomous vehicle (e.g., as discussed above with respect to blocks 552-558 of FIG. 5), there may not be any correction instances to utilize in generating the local pose instance. However, the system can still generate multiple local pose instances without any correction instance upon initialization. Moreover, at an iteration of block 758, if the system determines that a correction instance is received from the online calibration module, then the system proceeds to block 760.

[00116] At block 760, the system receives an additional instance of the second sensor data. As noted above, the second sensor data can include, for example, IMU data generated by one or more IMUs of the autonomous vehicle (e.g., IMU(s) 140 of primary sensor system 130 or IMU(s) 182 of secondary sensor system 180), wheel encoder data generated by one or more wheel encoders of the autonomous vehicle (e.g., wheel encoder(s) 142 of primary sensor system 130 or wheel encoder(s) 184 of secondary sensor system 180), or both. The additional instance of the second sensor data can include, for example, a most recent instance of the IMU data generated by one or more of the IMUs or a most recent instance of the wheel encoder data generated by the one or more wheel encoders. Notably, the additional instance of the second sensor data received at block 760 is distinct from the instance of the second sensor data received at block 752.
[00117] At block 762, the system generates an additional local pose instance of the local pose of the autonomous vehicle based on the additional instance of the second sensor data and based on the correction instance. The system can generate the additional local pose instance in the same manner described above with respect to block 754. However, the additional local pose instance can be modified based on the correction instance received at block 758. For example, if the additional local pose instance, the estimated velocities of the autonomous vehicle, or the estimated accelerations of the autonomous vehicle are raw values, then the system can apply drift rate(s) included in the correction instance received at block 758 to those raw values. By applying the drift rate(s) to the additional local pose instance, the system can quickly correct errors in wheel radii or sensor bias(es) in the one or more IMU sensors or the one or more wheel encoder sensors that generated the additional instance of the second sensor data received at block 760.
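The following is a simplified, non-limiting sketch of applying the drift rate(s) of a correction instance to raw local pose values, as referenced above with respect to block 762. The dataclass layout, the ordering of the pose dimensions, and the example drift magnitudes are illustrative assumptions only.

```python
# Illustrative sketch only: subtract the drift predicted by a correction
# instance's temporal and distance drift rates from a raw local pose.
from dataclasses import dataclass
import numpy as np


@dataclass
class CorrectionInstance:
    temporal_drift_rate: np.ndarray   # drift per second, per dimension (assumed layout)
    distance_drift_rate: np.ndarray   # drift per meter, per dimension (assumed layout)


def apply_correction(raw_local_pose: np.ndarray,
                     correction: CorrectionInstance,
                     elapsed_s: float,
                     travelled_m: float) -> np.ndarray:
    """Remove the predicted drift from the raw local pose values."""
    predicted_drift = (correction.temporal_drift_rate * elapsed_s
                       + correction.distance_drift_rate * travelled_m)
    return raw_local_pose - predicted_drift


# Example usage with made-up drift magnitudes for a (x, y, heading) pose:
correction = CorrectionInstance(
    temporal_drift_rate=np.array([0.0, 0.01, 0.0]),    # 1 cm/s lateral drift (hypothetical)
    distance_drift_rate=np.array([0.001, 0.0, 0.0]),   # 1 mm/m longitudinal drift (hypothetical)
)
corrected = apply_correction(np.array([12.0, 3.4, 0.05]), correction,
                             elapsed_s=2.0, travelled_m=25.0)
```

Because the drift rates scale with both elapsed time and distance traveled, the same correction instance can remain useful across subsequent local pose instances until a new correction instance is received.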
[00118] At block 764, the system determines whether an additional correction instance is received from the online calibration module. If, at an iteration of block 764, the system determines that no additional correction instance is received from the online calibration module, the system returns to block 760. However, at an iteration of block 764, if the system determines that an additional correction instance is received from the online calibration module, the system proceeds to block 768. Thus, the system can continuously generate local pose instances based on further instances of the second sensor data until an additional correction instance is received at block 764.
[00119] At block 768, the system combines the correction instance and the additional correction instance. Notably, the additional correction instance received at block 766 can be generated based on local pose instances that were generated based on the correction instance received at block 758. Thus, the additional correction instance received at block 766 is a correction instance that is generated relative to the correction instance received at block 758, and the system can combine the correction instances to ensure the system takes both into consideration in generating yet further local pose instances. As noted above with respect to block 658 of FIG. 6, the correction can be a linear combination of a temporal drift rate, a distance drift rate, or both. Thus, the system can combine the correction instance received at block 758 and the additional correction instance received at block 766 by generating a linear combination of the temporal drift rate of the correction instance, the temporal drift rate of the additional correction instance, the distance drift rate of the correction instance, the distance drift rate of the additional correction instance, or any combination thereof. After combining the correction instance and the additional correction instance, the system then returns to block 760 and generates further local pose instances based on the combined correction instances and based on yet further instances of the second sensor data.
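The following is a simplified, non-limiting sketch of combining the correction instance and the additional correction instance, as referenced above with respect to block 768. The additional correction instance is treated as relative to the base correction instance, so their drift rates are linearly combined; the dataclass mirrors the earlier sketch, and the optional weight parameter is an illustrative assumption rather than a required feature.

```python
# Illustrative sketch only: fold a later, relatively-computed correction instance
# into the existing correction instance by linearly combining drift rates.
from dataclasses import dataclass
import numpy as np


@dataclass
class CorrectionInstance:
    temporal_drift_rate: np.ndarray   # drift per second, per dimension (assumed layout)
    distance_drift_rate: np.ndarray   # drift per meter, per dimension (assumed layout)


def combine_corrections(base: CorrectionInstance,
                        additional: CorrectionInstance,
                        weight: float = 1.0) -> CorrectionInstance:
    """Accumulate the additional (relative) correction on top of the base correction."""
    return CorrectionInstance(
        temporal_drift_rate=base.temporal_drift_rate
                            + weight * additional.temporal_drift_rate,
        distance_drift_rate=base.distance_drift_rate
                            + weight * additional.distance_drift_rate,
    )
```

The combined correction instance can then be used in place of the original correction instance for the further local pose instances generated upon returning to block 760.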
[00120] Although the operations of FIGS. 4-7 are depicted in a particular order, it should be understood that this is for the sake of example and not meant to be limiting. Moreover, it should be appreciated that the operations of FIGS. 4-7 are generally performed in parallel or redundantly. As one non-limiting example, the system can generate local pose instance(s) (e.g., as described with respect to blocks 752-756) as the autonomous vehicle begins moving, and transmit those local pose instance(s) to, for example, block 554 for use in generating ECEF pose instances, block 560 for use in generating global pose instances, block 658 for use in generating correction instances, and so on.
[00121] Moreover, it should be noted that the operations of FIGS. 5 and 6 are described herein as being performed by primary vehicle control system 120, and the operations of FIG. 7 are described herein as being performed by secondary vehicle control system 170. Further, the techniques described herein can utilize this split architecture (which is readily apparent from FIGS. 1, 2A, 2B, and 3) to operate the autonomous vehicle using only local pose instances in normal operation or in scenarios in which an adverse event is detected at primary vehicle control system 120.
[00122] Other variations will be apparent to those of ordinary skill. Therefore, the invention lies in the claims hereinafter appended.

Claims

What is claimed is:
1. A method for localization of an autonomous vehicle, the method comprising: obtaining first sensor data from one or more first sensors of the autonomous vehicle; generating, based on the first sensor data, a global pose of the autonomous vehicle; obtaining second sensor data from one or more second sensors of the autonomous vehicle; generating, based on the second sensor data, a local pose of the autonomous vehicle, wherein the local pose of the autonomous vehicle is generated without utilization of the first sensor data; determining a correction based on the global pose of the autonomous vehicle and the local pose of the autonomous vehicle; obtaining additional second sensor data from the one or more second sensors of the autonomous vehicle; and generating, based on (i) the correction and (ii) the additional second sensor data, an additional local pose of the autonomous vehicle.
2. The method of claim 1, wherein generating the additional local pose comprises: generating the additional local pose based on the local pose instance as modified based on at least the additional second sensor data, wherein the local pose instance temporally corresponds to the global pose instance.
3. The method of claim 1 or 2, further comprising: immediately subsequent to generating the additional local pose but prior to receiving any additional correction: generating a further local pose based on: the additional local pose, the correction, and further second sensor data generated subsequent to the additional second sensor data.
4. The method of claim 3, further comprising: immediately subsequent to generating the further local pose but prior to receiving any additional correction: generating a yet further local pose based on: the further local pose, the correction, and yet further second sensor data generated subsequent to the further second sensor data.
5. The method of any preceding claim, wherein determining the correction is further based on comparing at least one prior local pose to at least one prior global pose that temporally corresponds to the at least one prior local pose.
6. The method of claim 5, wherein determining the correction comprises: comparing the local pose to the global pose that temporally corresponds to the local pose; comparing at least one prior local pose to at least one prior global pose that temporally corresponds to the at least one prior local pose; and generating the correction based on comparing the local pose to the global pose and based on comparing the prior local pose to the prior global pose.
7. The method of any preceding claim, wherein the correction includes a drift rate that indicates one or more of: a first magnitude of drift, in one or more dimensions, over a period of time, or a second magnitude of drift, in one or more of the dimensions, over a distance.
8. The method of claim 7, wherein the correction is a linear combination of the first magnitude of drift over the period of time, and the second magnitude of drift over the distance.
9. The method of any preceding claim, wherein the global pose and the correction are generated by a primary control system, and wherein the local pose is generated by a secondary control system.
10. The method of any preceding claim, further comprising: identifying one or more candidate tiles based on prior first sensor data, each of the one or more candidate tiles representing a previously mapped portion of a geographical region.
11. The method of any preceding claim, wherein the one or more first sensors include at least a LIDAR sensor, and wherein the first sensor data includes at least LIDAR data generated by a sensing cycle of the LIDAR sensor of the autonomous vehicle.
12. The method of claim 11, wherein generating the global pose of the autonomous vehicle comprises: assembling the LIDAR data into one or more point clouds; and aligning, using a geometric matching technique, one or more of the point clouds with a previously stored point cloud associated with the local pose that temporally corresponds to the global pose to generate the global pose.
13. The method of claim 12, wherein aligning one or more of the point clouds with the previously stored point cloud using the geometric matching technique to generate the global pose comprises: identifying a given tile associated with the local pose that temporally corresponds to the global pose; identifying, based on the given tile, the previously stored point cloud associated with the local pose; and using the geometric matching technique to align one or more of the point clouds with the previously stored point cloud.
14. The method of any preceding claim, wherein the one or more second sensors include at least one IMU sensor and at least one wheel encoder, and wherein the second sensor data includes IMU data generated by the at least one IMU sensor and wheel encoder data generated by the at least one wheel encoder.
15. The method of claim 14, wherein the IMU data is generated at a first frequency, wherein the wheel encoder data is generated at a second frequency, and wherein the second sensor data and the additional second sensor data include a most recently generated instance of the IMU data and a most recently generated instance of the wheel encoder data.
PCT/US2021/048397 2020-09-03 2021-08-31 Localization methods and architectures for an autonomous vehicle WO2022051263A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063074111P 2020-09-03 2020-09-03
US63/074,111 2020-09-03

Publications (1)

Publication Number Publication Date
WO2022051263A1 true WO2022051263A1 (en) 2022-03-10

Family

ID=78085737

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/048397 WO2022051263A1 (en) 2020-09-03 2021-08-31 Localization methods and architectures for an autonomous vehicle

Country Status (1)

Country Link
WO (1) WO2022051263A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190051056A1 (en) * 2017-08-11 2019-02-14 Sri International Augmenting reality using semantic segmentation
US20200264258A1 (en) * 2019-02-19 2020-08-20 Great Wall Motor Company Limited Localization Methods and Systems for Autonomous Systems
US20200004266A1 (en) * 2019-08-01 2020-01-02 Lg Electronics Inc. Method of performing cloud slam in real time, and robot and cloud server for implementing the same

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11718320B1 (en) 2020-08-21 2023-08-08 Aurora Operations, Inc. Using transmission sensor(s) in localization of an autonomous vehicle
US11859994B1 (en) 2021-02-18 2024-01-02 Aurora Innovation, Inc. Landmark-based localization methods and architectures for an autonomous vehicle

Similar Documents

Publication Publication Date Title
US11964663B2 (en) Control of autonomous vehicle based on determined yaw parameter(s) of additional vehicle
US11829143B2 (en) Labeling autonomous vehicle data
US11933902B2 (en) Control of autonomous vehicle based on environmental object classification determined using phase coherent LIDAR data
US11403492B2 (en) Generating labeled training instances for autonomous vehicles
US11774966B2 (en) Generating testing instances for autonomous vehicles
US11630458B2 (en) Labeling autonomous vehicle data
US11256263B2 (en) Generating targeted training instances for autonomous vehicles
US11874660B2 (en) Redundant lateral velocity determination and use in secondary vehicle control systems
JP2022512359A (en) Technology for estimating motion behavior and dynamic behavior in autonomous vehicles
US11167751B2 (en) Fail-operational architecture with functional safety monitors for automated driving system
US11016489B2 (en) Method to dynamically determine vehicle effective sensor coverage for autonomous driving application
WO2022051263A1 (en) Localization methods and architectures for an autonomous vehicle
US20230061950A1 (en) Localization Methods And Architectures For A Trailer Of An Autonomous Tractor-Trailer
US10775804B1 (en) Optical array sensor for use with autonomous vehicle control systems
US11859994B1 (en) Landmark-based localization methods and architectures for an autonomous vehicle
WO2022232747A1 (en) Systems and methods for producing amodal cuboids
US11718320B1 (en) Using transmission sensor(s) in localization of an autonomous vehicle
US11900689B1 (en) Traffic light identification and/or classification for use in controlling an autonomous vehicle
US20220326343A1 (en) Detection or correction for multipath reflection
US20240192378A1 (en) Control of Autonomous Vehicle Based on Environmental Object Classification Determined Using Phase Coherent LIDAR Data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21787521

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21787521

Country of ref document: EP

Kind code of ref document: A1