WO2021056283A1 - Systems and methods for adjusting a vehicle pose - Google Patents

Systems and methods for adjusting a vehicle pose

Info

Publication number
WO2021056283A1
WO2021056283A1 PCT/CN2019/107925
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
vehicle
point
plane
filtered
Prior art date
Application number
PCT/CN2019/107925
Other languages
French (fr)
Inventor
Fei Wang
Original Assignee
Beijing Didi Infinity Technology And Development Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology And Development Co., Ltd. filed Critical Beijing Didi Infinity Technology And Development Co., Ltd.
Priority to PCT/CN2019/107925 priority Critical patent/WO2021056283A1/en
Publication of WO2021056283A1 publication Critical patent/WO2021056283A1/en

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3833 Creation or updating of map data characterised by the source of data
    • G01C21/3837 Data obtained from a single source
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0248 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means in combination with a laser
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Definitions

  • the present disclosure relates to systems and methods for adjusting a vehicle pose, and more particularly to, systems and methods for adjusting a vehicle pose by adapting acquired point clouds to a three-dimensional (3D) map.
  • Autonomous driving technology relies heavily on an accurate estimation of the vehicle’s position.
  • accuracy of positioning is critical to functions of autonomous driving vehicles, such as ambience recognition, decision making and control.
  • the vehicle is usually positioned in a previously constructed three-dimensional (3D) map by matching the real-time point cloud with the point cloud of the 3D map.
  • 3D maps may be obtained by aggregating images and information acquired by various sensors, detectors, and other devices equipped on survey vehicles as they drive around.
  • a vehicle may be equipped with multiple integrated sensors such as a LiDAR sensor and one or more cameras, to capture features of the road on which the vehicle is driving or the surrounding objects.
  • the real-time point cloud acquired by a vehicle when moving along a trajectory may deviate from the corresponding point cloud in a 3D map that is stored on the vehicle. Such deviation causes mismatches between the real-time point cloud and the 3D map, making it difficult to accurately position the vehicle in the 3D map. Therefore, an improved system and method is needed for adjusting a pose of the vehicle by adapting acquired point clouds to the 3D map.
  • Embodiments of the disclosure address the above problems by providing methods and systems for adjusting a vehicle pose by adapting acquired point clouds to a 3D map.
  • a method for adjusting a vehicle pose by adapting acquired point clouds to a three-dimensional (3D) map may include receiving a first point cloud acquired by at least one sensor equipped on a vehicle, the first point cloud comprising a plurality of 3D points. The method may also include obtaining a second point cloud from the 3D map corresponding to the first point cloud and filtering the first point cloud and the second point cloud to obtain a filtered first point cloud and a filtered second point cloud. The method may further include computing a rotation matrix based on the filtered first point cloud and the filtered second point cloud and adjusting a pose of the vehicle corresponding to the first point cloud based on the rotation matrix.
  • An exemplary system may include a communication interface configured to receive a first point cloud acquired by at least one sensor equipped on a vehicle, the first point cloud comprising a plurality of 3D points.
  • the system may also include a storage configured to store the 3D map.
  • the system may further include at least one processor.
  • the at least one processor may be configured to obtain a second point cloud from the 3D map corresponding to the first point cloud.
  • the at least one processor may also be configured to filter the first point cloud and the second point cloud to obtain a filtered first point cloud and a filtered second point cloud.
  • the at least one processor may further be configured to compute a rotation matrix based on the filtered first point cloud and the filtered second point cloud. Moreover, the at least one processor may be configured to adjust a pose of the vehicle corresponding to the first point cloud based on the rotation matrix.
  • a non-transitory computer-readable medium having instructions stored thereon that, when executed by one or more processors, cause the one or more processors to perform a method for adjusting a vehicle pose by adapting acquired point clouds to a three-dimensional (3D) map.
  • the method may include receiving a first point cloud acquired by at least one sensor equipped on a vehicle, the first point cloud comprising a plurality of 3D points.
  • the method may also include obtaining a second point cloud from the 3D map corresponding to the first point cloud and filtering the first point cloud and the second point cloud to obtain a filtered first point cloud and a filtered second point cloud.
  • the method may further include computing a rotation matrix based on the filtered first point cloud and the filtered second point cloud and adjusting a pose of the vehicle corresponding to the first point cloud based on the rotation matrix.
  • FIG. 1 illustrates a schematic diagram of an exemplary vehicle equipped with a LiDAR system, according to embodiments of the disclosure.
  • FIG. 2 illustrates a block diagram of an exemplary vehicle pose adjustment system for adjusting the pose of a vehicle, according to embodiments of the disclosure.
  • FIG. 3 illustrates an exemplary vehicle with a point cloud acquired by the vehicle and a point cloud queried from a three-dimensional (3D) map, according to embodiments of the disclosure.
  • FIG. 4 illustrates a flow chart of an exemplary method for adjusting a vehicle pose by adapting acquired point clouds to a 3D map, according to embodiments of the disclosure.
  • FIG. 1 illustrates a schematic diagram of an exemplary vehicle 100 equipped with a LiDAR system 102, according to embodiments of the disclosure.
  • vehicle 100 may be a survey vehicle configured to acquire data for constructing a high-definition map or for 3D building and city modeling.
  • vehicle 100 may be an autonomous or semi-autonomous vehicle that uses LiDAR system 102 to detect and position obstacles and objects around it to make driving decisions.
  • vehicle 100 may be an Unmanned Aerial Vehicle (UAV) that uses LiDAR system 102 to model a terrain or navigate for obstacle avoidance.
  • vehicle 100 may be an electric vehicle, a fuel cell vehicle, a hybrid vehicle, or a conventional internal combustion engine vehicle.
  • Vehicle 100 may have a body 104 and at least one wheel 106.
  • Body 104 may be any body style, such as a sports vehicle, a coupe, a sedan, a pick-up truck, a station wagon, a sports utility vehicle (SUV) , a minivan, or a conversion van.
  • vehicle 100 may include a pair of front wheels and a pair of rear wheels, as illustrated in FIG. 1. However, it is contemplated that vehicle 100 may have fewer wheels or equivalent structures that enable vehicle 100 to move around.
  • Vehicle 100 may be configured to be all wheel drive (AWD), front wheel drive (FWD), or rear wheel drive (RWD).
  • vehicle 100 may be configured to be operated by an operator occupying the vehicle, remotely controlled, and/or autonomous.
  • vehicle 100 may be equipped with LiDAR system 102 mounted to body 104 via a mounting structure 108.
  • Mounting structure 108 may be an electro-mechanical device installed or otherwise attached to body 104 of vehicle 100. In some embodiments of the present disclosure, mounting structure 108 may use screws, adhesives, or another mounting mechanism.
  • mounting structure 108 may be a gimbal configured to adjust an attitude of LiDAR system 102 (e.g., a LiDAR sensor).
  • the attitude may include a pitch angle, a roll angle, and a yaw angle.
  • the gimbal may include one or more motors for actuating LiDAR system 102. It is contemplated that the manners in which LiDAR system 102 can be equipped on vehicle 100 are not limited by the example shown in FIG. 1 and may be modified depending on the types of LiDAR system 102 and/or vehicle 100 to achieve desirable 3D sensing performance.
  • vehicle 100 may be equipped with additional sensors for positioning vehicle 100, such as a Global Positioning System (GPS) receiver or a barometer (not shown) used to determine the position or elevation of the vehicle.
  • LiDAR system 102 may be configured to capture data as vehicle 100 moves along a trajectory. LiDAR system 102 measures distance to a target by illuminating the target with pulsed laser light and measuring the reflected pulses with a receiver.
  • the laser light used for LiDAR system 102 may be ultraviolet, visible, or near infrared.
  • a transmitter of LiDAR system 102 is configured to scan a surrounding object, and a receiver of LiDAR system 102 is configured to receive light backscattered from the surrounding object.
  • the received signals may be processed to construct point clouds reflecting the position, shape, and size of the object.
  • LiDAR system 102 may continuously capture and process data.
  • vehicle 100 may be equipped with additional sensors other than LiDAR system 102.
  • vehicle 100 may be equipped with one or more imaging devices configured to capture images.
  • vehicle 100 may include one or more cameras or other cost-effective imaging devices such as a monocular, binocular, or panorama camera that may acquire a plurality of images (each known as an image frame) as vehicle 100 moves along a trajectory.
  • vehicle 100 may include a controller 110 inside body 104 of vehicle 100 or communicate with a remote computing device, such as a server (not illustrated in FIG. 1), for positioning vehicle 100 based on the captured sensor data and a 3D map.
  • controller 110 may have different modules in a single device, such as an integrated circuit (IC) chip (implemented as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA) ) , or separate devices with dedicated functions.
  • one or more components of controller 110 may be located inside vehicle 100 or may alternatively be in a mobile device, in the cloud, or at another remote location. Components of controller 110 may be in an integrated device or distributed at different locations but communicate with each other through a network (not shown) .
  • a vehicle pose adjustment system (e.g., implemented by controller 110 or a remote server) may be used to adjust a pose of vehicle 100 before or as part of positioning vehicle 100.
  • LiDAR system 102 may be configured to constantly capture data and process the captured data to generate point cloud data as vehicle 100 moves along a trajectory.
  • the vehicle pose adjustment system may use the point cloud data to query a 3D map to obtain reference point cloud data. Due to positioning errors, the point cloud data may deviate from the reference point cloud data.
  • the reference point cloud data may be used by the vehicle pose adjustment system to adjust a pose of the vehicle, further adapting the point cloud data to the 3D map.
  • the point cloud data and the reference point cloud data may be filtered by the vehicle pose adjustment system embedded in vehicle 100 to generate filtered point cloud data and filtered reference point cloud data.
  • the vehicle pose adjustment system may fit a first plane based on the filtered point cloud data, and fit a second plane based on the filtered reference point cloud data.
  • the vehicle pose adjustment system may further evaluate the first plane and the second plane to determine if they are abnormal.
  • the vehicle pose adjustment system may further compute a rotation matrix based on the first plane and the second plane.
  • the vehicle pose adjustment system may further compute a vertical translation based on the first plane and the second plane.
  • the vehicle pose adjustment system may use the rotation matrix and the vertical translation to adjust the pose of the vehicle, further adjusting the point cloud data acquired by vehicle 100 to the 3D map. The adjusted pose and point cloud are used to position vehicle 100.
  • FIG. 2 illustrates a block diagram of an exemplary vehicle pose adjustment system 224 for adjusting the pose of a vehicle (e.g., vehicle 100 as shown in FIG. 1) .
  • vehicle pose adjustment system 224 may be embedded in controller 110 of vehicle 100 or in a remote server connected with vehicle 100 through a network.
  • vehicle pose adjustment system 224 may include a communication interface 202, a processor 200, a memory 206, and a storage 208.
  • vehicle pose adjustment system 224 may have different modules in a single device, such as an integrated circuit (IC) chip (implemented as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA) ) , or separate devices with dedicated functions.
  • Components of vehicle pose adjustment system 224 may be in an integrated device or distributed at different locations but communicate with each other through a network or a bus.
  • vehicle pose adjustment system 224 may be connected with LiDAR system 102 for receiving signals acquired by sensors.
  • vehicle pose adjustment system 224 may be connected with sensor 205 for receiving point cloud data acquired by sensor 205.
  • Sensor 205 may be, e.g., LiDAR system 102.
  • vehicle pose adjustment system 224 may receive point cloud 203 acquired by sensor 205 from communication interface 202.
  • Point cloud 203 may include a plurality of 3D points.
  • Vehicle pose adjustment system 224 may query a 3D map based on point cloud 203 to obtain a point cloud 204 from the 3D map; point cloud 204 may correspond to point cloud 203.
  • Vehicle pose adjustment system 224 may filter point cloud 203 and point cloud 204 to obtain a filtered point cloud 203 and a filtered point cloud 204.
  • Vehicle pose adjustment system 224 may compute a rotation matrix and a vertical translation based on filtered point cloud 203 and filtered point cloud 204.
  • Vehicle pose adjustment system 224 may further adjust the pose of vehicle 100 based on the rotation matrix and the vertical translation.
  • Communication interface 202 may send data to and receive data from components such as sensor 205 via direct communication links, a Wireless Local Area Network (WLAN) , a Wide Area Network (WAN) , wireless communication networks using radio waves, a cellular network, and/or a local wireless network (e.g., Bluetooth TM or WiFi) , or other communication methods.
  • communication interface 202 can be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection.
  • communication interface 202 can be a local area network (LAN) card to provide a data communication connection to a compatible LAN.
  • Wireless links can also be implemented by communication interface 202.
  • communication interface 202 can send and receive electrical, electromagnetic or optical signals that carry digital data streams representing various types of information via a network.
  • communication interface 202 may receive data (e.g., point cloud) from sensor 205 and provide the received data to memory 206 and/or storage 208 for storage or to processor 200 for processing.
  • FIG. 3 illustrates an exemplary point cloud 310 including a plurality of 3D points obtained by sensor 205 installed on vehicle 300 and point cloud 320 obtained by querying a 3D map based on point cloud 310.
  • Point cloud 310 may correspond to point cloud 320.
  • point cloud 320 is obtained by querying the 3D map based on each 3D point in point cloud 310.
  • Processor 200 may include any appropriate type of general-purpose or special-purpose microprocessor, digital signal processor (DSP) , or microcontroller.
  • Processor 200 may be configured as a separate processor module dedicated to computing a rotation matrix for adjusting a vehicle pose.
  • Processor 200 may be configured as a separate processor module dedicated to computing a vertical translation for adjusting a vehicle pose.
  • processor 200 may include multiple modules, such as a query unit 240, a filter unit 242, a computation unit 244, and an adjusting unit 246.
  • These modules can be hardware units (e.g., portions of an integrated circuit) of processor 200 designed for use with other components or to execute part of a program.
  • the program may be stored on a computer-readable medium (e.g., memory 206 and/or storage 208) , and when executed by processor 200, it may perform one or more functions as disclosed herein.
  • Although FIG. 2 shows units 240-246 all within one processor 200, it is contemplated that these units may be distributed among multiple processors located near to or remotely from each other.
  • Query unit 240 is configured to query a map based on point cloud data acquired by sensor 205.
  • query unit 240 may receive point cloud 203 and query a 3D map based on point cloud 203 to obtain point cloud 204 from the 3D map.
  • query unit 240 may query the 3D map based on a pose of a vehicle (e.g., vehicle 100) and coordinates of point cloud 203. Consistent with some embodiments, the pose of the vehicle may include a position of the vehicle and an attitude of sensor 205.
  • query unit 240 may query the 3D map based on (X, Y, Z, pitch, roll, yaw) and (x, y, z) to obtain coordinates of point cloud 204 (x, y, elevation) , where (X, Y, Z) denotes the position of the vehicle, (pitch, roll, yaw) denotes the attitude of sensor 205, and (x, y, z) denotes coordinates of point cloud 203. As another example, query unit 240 may query the 3D map based on (X, Y, Z, pitch, roll, yaw) and (x, y) to obtain coordinates of point cloud 204 (x, y, elevation) .
  • Filter unit 242 is configured to filter point cloud 203 and point cloud 204 to generate a filtered point cloud 203 and a filtered point cloud 204.
  • filter unit 242 may detect a first 3D point in point cloud 203 and detect a second 3D point corresponding to the first 3D point in point cloud 204.
  • Filter unit 242 may further compute a difference between a specified coordinate of the first 3D point and the specified coordinate of the second 3D point.
  • the specified coordinate may be the z-axis coordinate. Accordingly, the z-axis value of the first 3D point in the real-time point cloud is compared to the z-axis value of the second 3D point in the 3D map.
  • filter unit 242 may remove the first 3D point from point cloud 203 and remove the second 3D point from point cloud 204.
  • (x, y, z) denotes the position of the first 3D point and (x, y, elevation) denotes the position of the second 3D point.
  • Filter unit 242 may compute the difference between “z” and “elevation, ” and determine if the difference exceeds a threshold. When the difference exceeds a threshold, filter unit 242 may remove the first 3D point from point cloud 203 and remove the second 3D point from point cloud 204.
  • Computation unit 244 is configured to compute a rotation matrix based on filtered point cloud 203 and filtered point cloud 204.
  • computation unit 244 is configured to fit a first plane based on point cloud 203, and fit a second plane based on point cloud 204. Further, computation unit 244 is configured to compute the rotation matrix based on the first plane and the second plane. Consistent with some embodiments, the first plane or the second plane may be fitted by using a random sample consensus method.
  • computation unit 244 is further configured to evaluate the first plane and the second plane to determine whether the first plane and the second plane are abnormal. For example, computation unit 244 may compute a first distance between the elevation of a vehicle (e.g., the elevation measured by GPS or barometer on the vehicle) and the elevation of the vehicle as queried from the 3D map. Computation unit 244 may further compute a second distance between the first plane and the second plane, such as computing the vertical distance between the origin of the first plane and the origin of the second plane as the second distance. Computation unit 244 may then compute the difference between the first distance and the second distance. When the difference exceeds a threshold, the first plane and the second plane may be discarded. When the difference is within the threshold, the first plane and the second plane are validated, and may be further used to compute the rotation matrix.
  • computation unit 244 is further configured to compute a vertical translation based on the first plane and the second plane.
  • the vertical translation may be the vertical distance between the first plane and the second plane (e.g., the vertical difference between the origin of the first plane and the origin of the second plane) .
  • Adjusting unit 246 is configured to adjust the pose of the vehicle based on the rotation matrix.
  • adjusting unit 246 is configured to adjust the pose of the vehicle based on the rotation matrix and the vertical translation.
  • the rotation matrix transforms the pose measured in the coordinate system of the point cloud acquired in real-time to a pose in the coordinate system of the reference point cloud.
  • the pose of the vehicle (X, Y, Z, pitch, roll, yaw) may be multiplied by the rotation matrix, then translated based on the vertical translation for the adjustment.
  • Memory 206 and storage 208 may include any appropriate type of mass storage provided to store any type of information that processor 200 may need to operate.
  • Memory 206 and storage 208 may be a volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other type of storage device or tangible (i.e., non-transitory) computer-readable medium including, but not limited to, a ROM, a flash memory, a dynamic RAM, and a static RAM.
  • Memory 206 and/or storage 208 may be configured to store one or more computer programs that may be executed by processor 200 to perform pose adjustment functions disclosed herein.
  • memory 206 and/or storage 208 may be configured to store program (s) that may be executed by processor 200 to adjust the pose of the vehicle.
  • Memory 206 and/or storage 208 may be further configured to store information and data used by processor 200.
  • memory 206 and/or storage 208 may be configured to store a 3D map or data acquired by sensor 205 (e.g., point cloud data) .
  • the 3D map may be constructed with previously acquired point cloud data.
  • Vehicle 100 may use the 3D map for navigation, positioning, or obstacle detection. For example, by using the 3D map, vehicle 100 may determine its location or detect obstacles in the ambient environment.
  • the 3D map may be an offline map.
  • the 3D map may be an online map. The online map may be updated in real-time.
  • FIG. 4 illustrates a flow chart of an exemplary method 400 for adjusting a vehicle pose by adapting acquired point clouds to a 3D map.
  • method 400 may be implemented by vehicle pose adjustment system 224.
  • method 400 is not limited to that exemplary embodiment.
  • Method 400 may include steps 410-480 as described below. It is to be appreciated that some of the steps may be optional to perform the disclosure provided herein. Further, some of the steps may be performed simultaneously, or in a different order than shown in FIG. 4.
  • communication interface 202 may receive a first point cloud (e.g., point cloud 203) which is acquired by sensor 205 (e.g., a LiDAR sensor).
  • Sensor 205 may measure distance to a target by illuminating the target with pulsed laser light and measuring the reflected pulses with a sensor. Differences in the laser light's send and return times, and in its wavelengths, can then be used to make digital 3D representations of the target.
  • the data acquired by sensor 205 may include point clouds including a plurality of 3D points. Consistent with some embodiments, after the first point cloud is acquired by sensor 205, sensor 205 may transmit/send the first point cloud to communication interface 202.
  • Communication interface 202 may provide the first point cloud to memory 206 and/or storage 208 for storage or to processor 200 for processing.
  • processor 200 may query a 3D map based on the first point cloud to obtain a second point cloud (e.g., point cloud 204) corresponding to the first point cloud.
  • the 3D map may be queried based on the pose of the vehicle and coordinates of the first point cloud.
  • (X, Y, Z, pitch, roll, yaw) denotes the pose of the vehicle
  • (X, Y, Z) denotes the position of the vehicle
  • (pitch, roll, yaw) denotes the attitude of sensor 205.
  • (x, y, z) denotes coordinates of the first point cloud.
  • each 3D point in the first point cloud may have a corresponding 3D point in the second point cloud.
  • the 3D map may be queried based on the pose of the vehicle (X, Y, Z, pitch, roll, yaw) and the two-dimensional (2D) coordinates (x, y) of the first point cloud.
  • (x, y, elevation) denotes coordinates of the second point cloud, and (x, y, elevation) may be obtained based on the pose of the vehicle (X, Y, Z, pitch, roll, yaw) and the 2D coordinates (x, y) of the first point cloud.
  • processor 200 may filter the first point cloud and the second point cloud to obtain a filtered first point cloud and a filtered second point cloud.
  • processor 200 may determine a first 3D point in the first point cloud and a second 3D point in the second point cloud that corresponds to the first 3D point. Processor 200 may compute the difference between a specified coordinate of the first 3D point and the specified coordinate of the second 3D point. Consistent with some embodiments, processor 200 may compute the difference between the vertical coordinate (z) of the first 3D point and the vertical coordinate (elevation) of the second 3D point. If the difference exceeds a threshold, processor 200 may filter the first and second point clouds by removing the first 3D point from the first point cloud and removing the second 3D point from the second point cloud.
  • processor 200 may fit a first plane based on the filtered first point cloud and fit a second plane based on the filtered second point cloud.
  • the filtered first point cloud may be a subset of the first point cloud and the filtered second point cloud may be a subset of the second point cloud.
  • the first plane or the second plane may be fitted by using a random sample consensus method.
  • processor 200 may evaluate the first plane and the second plane to determine if the first plane and the second plane are abnormal.
  • processor 200 may compute a first distance between the elevation of the vehicle (e.g., the elevation measured by GPS or barometer on the vehicle) and the elevation of the vehicle as queried from the 3D map.
  • Processor 200 may further compute a second distance between the first plane and the second plane, such as computing the vertical distance between the origin of the first plane and the origin of the second plane as the second distance.
  • Processor 200 may then compute the difference between the first distance and the second distance. When the difference exceeds a threshold, the first plane and the second plane may be discarded. When the difference is within the threshold, the first plane and the second plane are validated, and may be further used to compute a rotation matrix.
  • processor 200 may compute a rotation matrix based on the first plane and the second plane.
  • processor 200 may compute the rotation matrix according to Equations 1-5; one standard construction of such a rotation is sketched after this list.
  • A denotes a normal vector of the first plane
  • B denotes a normal vector of the second plane
  • G denotes a two-dimensional (2D) rotation matrix by an angle θ
  • U denotes the rotation matrix that is used to adjust the pose of the vehicle.
  • processor 200 may compute a vertical translation based on the first plane and the second plane.
  • processor 200 may compute the vertical translation based on the elevation of the vehicle and the elevation of the vehicle as queried from the 3D map, for example, the vertical translation may be the difference between the elevation of the vehicle and the elevation of the vehicle as queried from the 3D map.
  • the vertical translation may be the vertical difference between the first plane and the second plane, for example, the vertical difference between the origin of the first plane and the origin of the second plane.
  • processor 200 may adjust the pose of the vehicle based on the rotation matrix and the vertical translation.
  • processor 200 may multiply the rotation matrix U with the pose of the vehicle (X, Y, Z, pitch, roll, yaw) to obtain an adjusted pose (X’, Y’, Z’, pitch’, roll’, yaw’) .
  • processor 200 may translate the adjusted pose by the vertical translation as computed above.
  • steps of method 400 may be iteratively performed by vehicle pose adjustment system 224 of the vehicle to adjust the pose of the vehicle in real-time.
  • Real-time adjustment may help the vehicle to estimate the intensity of the object in real-time, e.g., to assist an autonomous vehicle to make real-time driving decisions.
  • the disclosed systems and methods can effectively adjust a vehicle pose by adapting acquired point clouds to a 3D map.
  • the vehicle may accurately detect and position obstacles and objects around it to make driving decisions.
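Since Equations 1-5 are not reproduced in this text, the following Python sketch shows only one standard way to build a rotation matrix U that maps the normal vector A of the first plane onto the normal vector B of the second plane (Rodrigues' rotation formula). The function name and numerical tolerances are introduced here for illustration and are not taken from the disclosure, whose actual equations may differ in form.

```python
import numpy as np

def rotation_between_normals(a, b):
    """Rotation matrix U such that U @ a is parallel to b (Rodrigues' formula).

    a, b : normal vectors of the first and second fitted planes.
    Offered only as an illustrative stand-in for the referenced Equations 1-5.
    """
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)              # rotation axis scaled by sin(theta)
    s = np.linalg.norm(v)           # sin(theta)
    c = float(a @ b)                # cos(theta)
    if s < 1e-12:
        if c > 0:
            return np.eye(3)        # normals already aligned
        # Antiparallel normals: rotate 180 degrees about any axis perpendicular to a.
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-12:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis = axis / np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])   # skew-symmetric cross-product matrix
    return np.eye(3) + vx + vx @ vx * ((1.0 - c) / s**2)
```

Applied as described for the adjusting unit, such a U would rotate the plane fitted to the acquired point cloud onto the plane fitted to the reference point cloud from the map.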

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

A system and method for adjusting a vehicle pose by adapting acquired point clouds to a three-dimensional (3D) map is disclosed. An exemplary method includes receiving a first point cloud acquired by at least one sensor equipped on a vehicle (410), the first point cloud comprising a plurality of 3D points, obtaining a second point cloud from the 3D map corresponding to the first point cloud (420), filtering the first point cloud and the second point cloud to obtain a filtered first point cloud and a filtered second point cloud (430), computing a rotation matrix based on the filtered first point cloud and the filtered second point cloud (460), and adjusting a pose of the vehicle corresponding to the first point cloud based on the rotation matrix (480).

Description

SYSTEMS AND METHODS FOR ADJUSTING A VEHICLE POSE TECHNICAL FIELD
The present disclosure relates to systems and methods for adjusting a vehicle pose, and more particularly to, systems and methods for adjusting a vehicle pose by adapting acquired point clouds to a three-dimensional (3D) map.
BACKGROUND
Autonomous driving technology relies heavily on an accurate estimation of the vehicle’s position. For example, accuracy of positioning is critical to functions of autonomous driving vehicles, such as ambience recognition, decision making and control.
The vehicle is usually positioned in a previously constructed three-dimensional (3D) map by matching the real-time point cloud with the point cloud of the 3D map. For example, 3D maps may be obtained by aggregating images and information acquired by various sensors, detectors, and other devices equipped on survey vehicles as they drive around. For example, a vehicle may be equipped with multiple integrated sensors such as a LiDAR sensor and one or more cameras, to capture features of the road on which the vehicle is driving or the surrounding objects.
Due to position error, the real-time point cloud acquired by a vehicle when moving along a trajectory may deviate from the corresponding point cloud in a 3D map that is stored on the vehicle. Such deviation causes mismatches between the real-time point cloud and the 3D map, making it difficult to accurately position the vehicle in the 3D map. Therefore, an improved system and method is needed for adjusting a pose of the vehicle by adapting acquired point clouds to the 3D map.
Embodiments of the disclosure address the above problems by providing methods and systems for adjusting a vehicle pose by adapting acquired point clouds to a 3D map.
SUMMARY
In one aspect, a method for adjusting a vehicle pose by adapting acquired point clouds to a three-dimensional (3D) map is disclosed. An exemplary method may include receiving a first point cloud acquired by at least one sensor equipped on a vehicle, the first point cloud comprising a plurality of 3D points. The method may also include obtaining a second point cloud from the 3D map corresponding to the first point cloud and filtering the first point cloud and the second point cloud to obtain a filtered first point cloud and a filtered second point cloud. The method may further include computing a rotation matrix based on the  filtered first point cloud and the filtered second point cloud and adjusting a pose of the vehicle corresponding to the first point cloud based on the rotation matrix.
In another aspect, a system for adjusting a vehicle pose by adapting acquired point clouds to a three-dimensional (3D) map is disclosed. An exemplary system may include a communication interface configured to receive a first point cloud acquired by at least one sensor equipped on a vehicle, the first point cloud comprising a plurality of 3D points. The system may also include a storage configured to store the 3D map. The system may further include at least one processor. The at least one processor may be configured to obtain a second point cloud from the 3D map corresponding to the first point cloud. The at least one processor may also be configured to filter the first point cloud and the second point cloud to obtain a filtered first point cloud and a filtered second point cloud. The at least one processor may further be configured to compute a rotation matrix based on the filtered first point cloud and the filtered second point cloud. Moreover, the at least one processor may be configured to adjust a pose of the vehicle corresponding to the first point cloud based on the rotation matrix.
In yet another aspect, a non-transitory computer-readable medium having instructions stored thereon that, when executed by one or more processors, cause the one or more processors to perform a method for adjusting a vehicle pose by adapting acquired point clouds to a three-dimensional (3D) map is disclosed. The method may include receiving a first point cloud acquired by at least one sensor equipped on a vehicle, the first point cloud comprising a plurality of 3D points. The method may also include obtaining a second point cloud from the 3D map corresponding to the first point cloud and filtering the first point cloud and the second point cloud to obtain a filtered first point cloud and a filtered second point cloud. The method may further include computing a rotation matrix based on the filtered first point cloud and the filtered second point cloud and adjusting a pose of the vehicle corresponding to the first point cloud based on the rotation matrix.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates a schematic diagram of an exemplary vehicle equipped with a LiDAR system, according to embodiments of the disclosure.
FIG. 2 illustrates a block diagram of an exemplary vehicle pose adjustment system for adjusting the pose of a vehicle, according to embodiments of the disclosure.
FIG. 3 illustrates an exemplary vehicle with a point cloud acquired by the vehicle and a point cloud queried from a three-dimensional (3D) map, according to embodiments of the disclosure.
FIG. 4 illustrates a flow chart of an exemplary method for adjusting a vehicle pose by adapting acquired point clouds to a 3D map, according to embodiments of the disclosure.
DETAILED DESCRIPTION
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
FIG. 1 illustrates a schematic diagram of an exemplary vehicle 100 equipped with a LiDAR system 102, according to embodiments of the disclosure. Consistent with some embodiments, vehicle 100 may be a survey vehicle configured to acquire data for constructing a high-definition map or for 3D building and city modeling. Consistent with some embodiments, vehicle 100 may be an autonomous or semi-autonomous vehicle that uses LiDAR system 102 to detect and position obstacles and objects around it to make driving decisions. Consistent with some embodiments, vehicle 100 may be an Unmanned Aerial Vehicle (UAV) that uses LiDAR system 102 to model a terrain or navigate for obstacle avoidance.
It is contemplated that vehicle 100 may be an electric vehicle, a fuel cell vehicle, a hybrid vehicle, or a conventional internal combustion engine vehicle. Vehicle 100 may have a body 104 and at least one wheel 106. Body 104 may be any body style, such as a sports vehicle, a coupe, a sedan, a pick-up truck, a station wagon, a sports utility vehicle (SUV), a minivan, or a conversion van. In some embodiments of the present disclosure, vehicle 100 may include a pair of front wheels and a pair of rear wheels, as illustrated in FIG. 1. However, it is contemplated that vehicle 100 may have fewer wheels or equivalent structures that enable vehicle 100 to move around. Vehicle 100 may be configured to be all wheel drive (AWD), front wheel drive (FWD), or rear wheel drive (RWD). In some embodiments of the present disclosure, vehicle 100 may be configured to be operated by an operator occupying the vehicle, remotely controlled, and/or autonomous.
As illustrated in FIG. 1, vehicle 100 may be equipped with LiDAR system 102 mounted to body 104 via a mounting structure 108. Mounting structure 108 may be an electro-mechanical device installed or otherwise attached to body 104 of vehicle 100. In some embodiments of the present disclosure, mounting structure 108 may use screws, adhesives, or another mounting mechanism. In some embodiments, mounting structure 108 may be a gimbal configured to adjust an attitude of LiDAR system 102 (e.g., a LiDAR sensor). The attitude may include a pitch angle, a roll angle, and a yaw angle. The gimbal may include one or more motors for actuating LiDAR system 102. It is contemplated that the manners in which LiDAR system 102 can be equipped on vehicle 100 are not limited by the example shown in FIG. 1 and may be modified depending on the types of LiDAR system 102 and/or vehicle 100 to achieve desirable 3D sensing performance.
Consistent with some embodiments, vehicle 100 may be equipped with additional sensors for positioning vehicle 100. For example, a Global Positioning System (GPS) receiver or a barometer (not shown) may be used to determine the position or the elevation of the vehicle.
Consistent with some embodiments, LiDAR system 102 may be configured to capture data as vehicle 100 moves along a trajectory. LiDAR system 102 measures distance to a target by illuminating the target with pulsed laser light and measuring the reflected pulses with a receiver. The laser light used for LiDAR system 102 may be ultraviolet, visible, or near infrared. For example, a transmitter of LiDAR system 102 is configured to scan a surrounding object, and a receiver of LiDAR system 102 is configured to receive light backscattered from the surrounding object. The received signals may be processed to construct point clouds reflecting the position, shape, and size of the object. As vehicle 100 moves along the trajectory, LiDAR system 102 may continuously capture and process data.
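As a brief, illustrative aside (not part of the disclosure), the time-of-flight relation behind this measurement is simply range = speed of light x round-trip time / 2; the constant and function below are named here only for illustration.

```python
# Illustrative only: range of a single LiDAR return from its round-trip time.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def range_from_time_of_flight(round_trip_time_s: float) -> float:
    """The pulse travels to the target and back, so halve the round-trip distance."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Example: a return received 0.5 microseconds after emission is roughly 75 m away.
print(range_from_time_of_flight(0.5e-6))  # ~74.9 m
```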
In some embodiments, vehicle 100 may be equipped with additional sensors other than LiDAR system 102. For example, vehicle 100 may be equipped with one or more imaging devices configured to capture images. For example, vehicle 100 may include one or more cameras or other cost-effective imaging devices such as a monocular, binocular, or panorama camera that may acquire a plurality of images (each known as an image frame) as vehicle 100 moves along a trajectory.
Consistent with the present disclosure, vehicle 100 may include a controller 110 inside body 104 of vehicle 100 or communicate with a remote computing device, such as a server (not illustrated in FIG. 1), for positioning vehicle 100 based on the captured sensor data and a 3D map. In some embodiments of the present disclosure, controller 110 may have different modules in a single device, such as an integrated circuit (IC) chip (implemented as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA)), or separate devices with dedicated functions. In some embodiments of the present disclosure, one or more components of controller 110 may be located inside vehicle 100 or may alternatively be in a mobile device, in the cloud, or at another remote location. Components of controller 110 may be in an integrated device or distributed at different locations but communicate with each other through a network (not shown).
In some embodiments, a vehicle pose adjustment system (e.g., implemented by controller 110 or a remote server) may be used to adjust a pose of vehicle 100 before or as part of positioning vehicle 100. For example, LiDAR system 102 may be configured to constantly capture data and process the captured data to generate point cloud data as vehicle 100 moves along a trajectory. The vehicle pose adjustment system may use the point cloud data to query a 3D map to obtain reference point cloud data. Due to positioning errors, the point cloud data may deviate from the reference point cloud data. To position the vehicle and detect obstacles, the reference point cloud data may be used by the vehicle pose adjustment system to adjust a pose of the vehicle, further adapting the point cloud data to the 3D map.
Consistent with some embodiments, the point cloud data and the reference point cloud data may be filtered by the vehicle pose adjustment system embedded in vehicle 100 to generate filtered point cloud data and filtered reference point cloud data. The vehicle pose adjustment system may fit a first plane based on the filtered point cloud data, and fit a second plane based on the filtered reference point cloud data. The vehicle pose adjustment system may further evaluate the first plane and the second plane to determine if they are abnormal. The vehicle pose adjustment system may further compute a rotation matrix based on the first plane and the second plane. The vehicle pose adjustment system may further compute a vertical translation based on the first plane and the second plane. The vehicle pose adjustment system may use the rotation matrix and the vertical translation to adjust the pose of the vehicle, further adjusting the point cloud data acquired by vehicle 100 to the 3D map. The adjusted pose and point cloud are used to position vehicle 100.
FIG. 2 illustrates a block diagram of an exemplary vehicle pose adjustment system 224 for adjusting the pose of a vehicle (e.g., vehicle 100 as shown in FIG. 1). In some embodiments, vehicle pose adjustment system 224 may be embedded in controller 110 of vehicle 100 or in a remote server connected with vehicle 100 through a network. As shown in FIG. 2, vehicle pose adjustment system 224 may include a communication interface 202, a processor 200, a memory 206, and a storage 208. In some embodiments, vehicle pose adjustment system 224 may have different modules in a single device, such as an integrated circuit (IC) chip (implemented as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA)), or separate devices with dedicated functions. Components of vehicle pose adjustment system 224 may be in an integrated device or distributed at different locations but communicate with each other through a network or a bus.
In some embodiments, vehicle pose adjustment system 224 may be connected with LiDAR system 102 for receiving signals acquired by sensors. For example, vehicle pose adjustment system 224 may be connected with sensor 205 for receiving point cloud data acquired by sensor 205. Sensor 205 may be, e.g., LiDAR system 102.
In some embodiments, vehicle pose adjustment system 224 may receive point cloud 203 acquired by sensor 205 from communication interface 202. Point cloud 203 may include a plurality of 3D points. Vehicle pose adjustment system 224 may query a 3D map based on point cloud 203 to obtain a point cloud 204 from the 3D map; point cloud 204 may correspond to point cloud 203. Vehicle pose adjustment system 224 may filter point cloud 203 and point cloud 204 to obtain a filtered point cloud 203 and a filtered point cloud 204. Vehicle pose adjustment system 224 may compute a rotation matrix and a vertical translation based on filtered point cloud 203 and filtered point cloud 204. Vehicle pose adjustment system 224 may further adjust the pose of vehicle 100 based on the rotation matrix and the vertical translation.
Communication interface 202 may send data to and receive data from components such as sensor 205 via direct communication links, a Wireless Local Area Network (WLAN) , a Wide Area Network (WAN) , wireless communication networks using radio waves, a cellular network, and/or a local wireless network (e.g., Bluetooth TM or WiFi) , or other communication methods. In some embodiments, communication interface 202 can be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection. As another example, communication interface 202 can be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links can also be implemented by communication interface 202. In such an implementation, communication interface 202 can send and receive electrical, electromagnetic or optical signals that carry digital data streams representing various types of information via a network.
In some embodiments, communication interface 202 may receive data (e.g., point cloud) from sensor 205 and provide the received data to memory 206 and/or storage 208 for storage or to processor 200 for processing.
For example, FIG. 3 illustrates an exemplary point cloud 310 including a plurality of 3D points obtained by sensor 205 installed on vehicle 300 and point cloud 320 obtained by  querying a 3D map based on point cloud 310. Point cloud 310 may correspond to point cloud 320. Consistent with some embodiments, point cloud 320 is obtained by querying the 3D map based on each 3D point in point cloud 310.
Referring back to FIG. 2, processor 200 may include any appropriate type of general-purpose or special-purpose microprocessor, digital signal processor (DSP), or microcontroller. Processor 200 may be configured as a separate processor module dedicated to computing a rotation matrix for adjusting a vehicle pose. Processor 200 may be configured as a separate processor module dedicated to computing a vertical translation for adjusting a vehicle pose.
As shown in FIG. 2, processor 200 may include multiple modules, such as a query unit 240, a filter unit 242, a computation unit 244, and an adjusting unit 246. These modules (and any corresponding sub-modules or sub-units) can be hardware units (e.g., portions of an integrated circuit) of processor 200 designed for use with other components or to execute part of a program. The program may be stored on a computer-readable medium (e.g., memory 206 and/or storage 208), and when executed by processor 200, it may perform one or more functions as disclosed herein. Although FIG. 2 shows units 240-246 all within one processor 200, it is contemplated that these units may be distributed among multiple processors located near to or remotely from each other.
Query unit 240 is configured to query a map based on point cloud data acquired by sensor 205. In some embodiments, query unit 240 may receive point cloud 203 and query a 3D map based on point cloud 203 to obtain point cloud 204 from the 3D map. For example, query unit 240 may query the 3D map based on a pose of a vehicle (e.g., vehicle 100) and coordinates of point cloud 203. Consistent with some embodiments, the pose of the vehicle may include a position of the vehicle and an attitude of sensor 205. As an example, query unit 240 may query the 3D map based on (X, Y, Z, pitch, roll, yaw) and (x, y, z) to obtain coordinates of point cloud 204 (x, y, elevation) , where (X, Y, Z) denotes the position of the vehicle, (pitch, roll, yaw) denotes the attitude of sensor 205, and (x, y, z) denotes coordinates of point cloud 203. As another example, query unit 240 may query the 3D map based on (X, Y, Z, pitch, roll, yaw) and (x, y) to obtain coordinates of point cloud 204 (x, y, elevation) .
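A minimal sketch of this query step, assuming the 3D map can be sampled for an elevation at a horizontal location: acquired points are transformed from the sensor frame into the map frame using the vehicle pose, and the map elevation is looked up at each resulting (x, y). The Euler-angle convention, the function names, and the elevation_at callable are assumptions made for illustration, not details from the disclosure.

```python
import numpy as np

def euler_to_matrix(pitch, roll, yaw):
    """Rotation from yaw-pitch-roll angles; the Z-Y-X convention here is an assumption."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def query_reference_cloud(points_xyz, pose, elevation_at):
    """Build an (x, y, elevation) reference cloud for a batch of acquired points.

    points_xyz   : (N, 3) points in the sensor frame (e.g., point cloud 203).
    pose         : (X, Y, Z, pitch, roll, yaw) of the vehicle / sensor.
    elevation_at : callable (x, y) -> elevation; stands in for the 3D map lookup.
    """
    X, Y, Z, pitch, roll, yaw = pose
    R = euler_to_matrix(pitch, roll, yaw)
    world = points_xyz @ R.T + np.array([X, Y, Z])          # sensor frame -> map frame
    elevations = np.array([elevation_at(x, y) for x, y in world[:, :2]])
    return np.column_stack([world[:, 0], world[:, 1], elevations])
```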
Filter unit 242 is configured to filter point cloud 203 and point cloud 204 to generate a filtered point cloud 203 and a filtered point cloud 204. In some embodiments, filter unit 242 may detect a first 3D point in point cloud 203 and detect a second 3D point corresponding to the first 3D point in point cloud 204. Filter unit 242 may further compute a difference between a specified coordinate of the first 3D point and the specified coordinate of the  second 3D point. For example, the specified coordinate may be the z-axis coordinate. Accordingly, the z-axis value of the first 3D point in the real-time point cloud is compared to the z-axis value of the second 3D point in the 3D map. When the difference exceeds a threshold, filter unit 242 may remove the first 3D point from point cloud 203 and remove the second 3D point from point cloud 204. As an example, (x, y, z) denotes the position of the first 3D point and (x, y, elevation) denotes the position of the second 3D point. Filter unit 242 may compute the difference between “z” and “elevation, ” and determine if the difference exceeds a threshold. When the difference exceeds a threshold, filter unit 242 may remove the first 3D point from point cloud 203 and remove the second 3D point from point cloud 204.
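A minimal sketch of this filtering step, assuming the two clouds are row-aligned so that corresponding points share an index; the 0.5 m threshold is a placeholder, since the disclosure does not state a value.

```python
import numpy as np

def filter_by_vertical_difference(acquired, reference, threshold=0.5):
    """Drop point pairs whose z and elevation values disagree by more than a threshold.

    acquired  : (N, 3) acquired points (x, y, z), e.g., point cloud 203.
    reference : (N, 3) corresponding map points (x, y, elevation), e.g., point cloud 204.
    """
    difference = np.abs(acquired[:, 2] - reference[:, 2])
    keep = difference <= threshold
    return acquired[keep], reference[keep]
```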
Computation unit 244 is configured to compute a rotation matrix based on filtered point cloud 203 and filtered point cloud 204. In some embodiments, computation unit 244 is configured to fit a first plane based on point cloud 203, and fit a second plane based on point cloud 204. Further, computation unit 244 is configured to compute the rotation matrix based on the first plane and the second plane. Consistent with some embodiments, the first plane or the second plane may be fitted by using a random sample consensus method.
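The random sample consensus fit mentioned here could look roughly like the sketch below; the iteration count and inlier tolerance are illustrative defaults rather than values from the disclosure.

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, inlier_tol=0.1, seed=None):
    """Minimal RANSAC plane fit returning (unit normal n, offset d) with n . p + d = 0."""
    rng = np.random.default_rng(seed)
    best_count, best_model = 0, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample, try again
            continue
        normal = normal / norm
        d = -float(normal @ sample[0])
        count = int(np.sum(np.abs(points @ normal + d) < inlier_tol))
        if count > best_count:
            best_count, best_model = count, (normal, d)
    return best_model
```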
In some embodiments, computation unit 244 is further configured to evaluate the first plane and the second plane to determine whether the first plane and the second plane are abnormal. For example, computation unit 244 may compute a first distance between the elevation of a vehicle (e.g., the elevation measured by GPS or barometer on the vehicle) and the elevation of the vehicle as queried from the 3D map. Computation unit 244 may further compute a second distance between the first plane and the second plane, such as computing the vertical distance between the origin of the first plane and the origin of the second plane as the second distance. Computation unit 244 may then compute the difference between the first distance and the second distance. When the difference exceeds a threshold, the first plane and the second plane may be discarded. When the difference is within the threshold, the first plane and the second plane are validated, and may be further used to compute the rotation matrix.
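A compact sketch of this plausibility check, under the assumption that "abnormal" simply means the two distances disagree by more than a tolerance; the function name and the 1.0 m tolerance are placeholders introduced here.

```python
def planes_look_valid(vehicle_elevation, map_elevation, plane_gap, tolerance=1.0):
    """Return True when the fitted planes appear consistent with the elevation data.

    vehicle_elevation : elevation measured on the vehicle (e.g., GPS or barometer).
    map_elevation     : elevation of the vehicle as queried from the 3D map.
    plane_gap         : vertical distance between the two fitted planes.
    """
    first_distance = abs(vehicle_elevation - map_elevation)
    second_distance = abs(plane_gap)
    return abs(first_distance - second_distance) <= tolerance
```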
In some embodiments, computation unit 244 is further configured to compute a vertical translation based on the first plane and the second plane. For example, the vertical translation may be the vertical distance between the first plane and the second plane (e.g., the vertical difference between the origin of the first plane and the origin of the second plane).
Adjusting unit 246 is configured to adjust the pose of the vehicle based on the rotation matrix. In some embodiments, adjusting unit 246 is configured to adjust the pose of the vehicle based on the rotation matrix and the vertical translation. The rotation matrix transforms the pose measured in the coordinate system of the point cloud acquired in real-time to a pose in the coordinate system of the reference point cloud. Consistent with some embodiments, the pose of the vehicle (X, Y, Z, pitch, roll, yaw) may be multiplied by the rotation matrix, then translated based on the vertical translation for the adjustment.
Memory 206 and storage 208 may include any appropriate type of mass storage provided to store any type of information that processor 200 may need to operate. Memory 206 and storage 208 may be a volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other type of storage device or tangible (i.e., non-transitory) computer-readable medium including, but not limited to, a ROM, a flash memory, a dynamic RAM, and a static RAM. Memory 206 and/or storage 208 may be configured to store one or more computer programs that may be executed by processor 200 to perform pose adjustment functions disclosed herein. For example, memory 206 and/or storage 208 may be configured to store program(s) that may be executed by processor 200 to adjust the pose of the vehicle.
Memory 206 and/or storage 208 may be further configured to store information and data used by processor 200. For instance, memory 206 and/or storage 208 may be configured to store a 3D map or data acquired by sensor 205 (e.g., point cloud data). The 3D map may be constructed with previously acquired point cloud data. Vehicle 100 may use the 3D map for navigation, positioning, or obstacle detection. For example, by using the 3D map, vehicle 100 may determine its location or detect obstacles in the ambient environment. Consistent with some embodiments, the 3D map may be an offline map. Consistent with some embodiments, the 3D map may be an online map. The online map may be updated in real-time.
FIG. 4 illustrates a flow chart of an exemplary method 400 for adjusting a vehicle pose by adapting acquired point clouds to a 3D map. In some embodiments, method 400 may be implemented by vehicle pose adjustment system 224. However, method 400 is not limited to that exemplary embodiment. Method 400 may include steps 410-480 as described below. It is to be appreciated that some of the steps may be optional, and some of the steps may be performed simultaneously, or in a different order than shown in FIG. 4.
In step 410, communication interface 202 may receive a first point cloud (e.g., point cloud 203) which is acquired by sensor 205 (e.g., a LiDAR sensor). Sensor 205 may measure distance to a target by illuminating the target with pulsed laser light and measuring the reflected pulses with a sensor. Differences in the return times and wavelengths of the laser light can then be used to make digital 3D representations of the target. The data acquired by sensor 205 may include point clouds including a plurality of 3D points. Consistent with some embodiments, after the first point cloud is acquired by sensor 205, sensor 205 may transmit/send the first point cloud to communication interface 202. Communication interface 202 may provide the first point cloud to memory 206 and/or storage 208 for storage or to processor 200 for processing.
In step 420, processor 200 may query a 3D map based on the first point cloud to obtain a second point cloud (e.g., point cloud 204) corresponding to the first point cloud. In some embodiments, the 3D map may be queried based on the pose of the vehicle and coordinates of the first point cloud. For example, (X, Y, Z, pitch, roll, yaw) denotes the pose of the vehicle, where (X, Y, Z) denotes the position of the vehicle, and (pitch, roll, yaw) denotes the attitude of sensor 205. (x, y, z) denotes coordinates of the first point cloud. Consistent with some embodiments, each 3D point in the first point cloud may have a corresponding 3D point in the second point cloud. In alternative embodiments, the 3D map may be queried based on the pose of the vehicle (X, Y, Z, pitch, roll, yaw) and two-dimensional (2D) coordinates (x, y) of the first point cloud. In some embodiments, (x, y, elevation) denotes coordinates of the second point cloud, and (x, y, elevation) may be obtained based on the pose of the vehicle (X, Y, Z, pitch, roll, yaw) and the 2D coordinates (x, y) of the first point cloud.
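For illustration only, the following Python sketch shows one way step 420 could be realized. The elevation_map object, its query(x, y) method, and the "xyz" Euler-angle convention are hypothetical stand-ins for whatever map interface and angle convention the system actually uses; this is a minimal sketch, not the claimed implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation


def query_second_point_cloud(first_points, vehicle_pose, elevation_map):
    """Obtain a map point (x, y, elevation) for every real-time point.

    first_points: (N, 3) array, coordinates (x, y, z) of the first point cloud
                  in the sensor frame.
    vehicle_pose: (X, Y, Z, pitch, roll, yaw) of the vehicle/sensor.
    elevation_map: hypothetical map object exposing query(x, y) -> elevation.
    """
    X, Y, Z, pitch, roll, yaw = vehicle_pose
    # Project sensor-frame points into map coordinates using the current pose
    # (the "xyz" Euler convention here is an assumption).
    R = Rotation.from_euler("xyz", [roll, pitch, yaw]).as_matrix()
    world = first_points @ R.T + np.array([X, Y, Z])
    # The second point cloud keeps (x, y) and takes its z from the 3D map.
    second = np.column_stack([
        world[:, 0],
        world[:, 1],
        [elevation_map.query(px, py) for px, py in world[:, :2]],
    ])
    return world, second
```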
In step 430, processor 200 may filter the first point cloud and the second point cloud to obtain a filtered first point cloud and a filtered second point cloud.
Consistent with some embodiments, processor 200 may determine a first 3D point in the first point cloud and a second 3D point in the second point cloud that corresponds to the first 3D point. Processor 200 may compute the difference between a specified coordinate of the first 3D point and the specified coordinate of the second 3D point. Consistent with some embodiments, processor 200 may compute the difference between the vertical coordinate (z) of the first 3D point and the vertical coordinate (elevation) of the second 3D point. If the difference exceeds a threshold, processor 200 may filter the first and second point clouds by removing the first 3D point from the first point cloud and removing the second 3D point from the second point cloud.
In alternative embodiments, processor 200 may compute the difference in the specified coordinate for each corresponding pair of points in the first point cloud and the second point cloud in accordance with the method described above, and discard those corresponding pairs whose difference exceeds the threshold.
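A compact sketch of this filtering step, assuming the two clouds are stored as row-aligned NumPy arrays (the 0.2 m threshold is purely illustrative):

```python
import numpy as np


def filter_point_clouds(first, second, threshold=0.2):
    """Drop corresponding pairs whose vertical coordinates disagree too much.

    first:  (N, 3) real-time points (x, y, z) in the map frame.
    second: (N, 3) map points (x, y, elevation); row i corresponds to first[i].
    threshold: maximum allowed |z - elevation|, an illustrative value in metres.
    """
    keep = np.abs(first[:, 2] - second[:, 2]) <= threshold
    return first[keep], second[keep]
```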
In step 440, processor 200 may fit a first plane based on the filtered first point cloud and fit a second plane based on the filtered second point cloud. In some embodiments, the filtered first point cloud may be a subset of the first point cloud and the filtered second point cloud may be a subset of the second point cloud. The first plane or the second plane may be fitted by using a random sample consensus method.
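As a sketch of the plane-fitting step, a basic random sample consensus loop can be written as follows (the iteration count and inlier tolerance are illustrative settings, not values from this disclosure):

```python
import numpy as np


def fit_plane_ransac(points, n_iters=200, inlier_tol=0.05, rng=None):
    """Fit a plane n·p + d = 0 to a filtered point cloud with RANSAC.

    Returns the (normal, d) hypothesis with the most inliers.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_inliers, best_plane = 0, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                       # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.sum(np.abs(points @ normal + d) < inlier_tol)
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane
```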
In step 450, processor 200 may evaluate the first plane and the second plane to determine if the first plane and the second plane are abnormal. In some embodiments, processor 200 may compute a first distance between the elevation of the vehicle (e.g., the elevation measured by GPS or barometer on the vehicle) and the elevation of the vehicle as queried from the 3D map. Processor 200 may further compute a second distance between the first plane and the second plane, such as computing the vertical distance between the origin of the first plane and the origin of the second plane as the second distance. Processor 200 may then compute the difference between the first distance and the second distance. When the difference exceeds a threshold, the first plane and the second plane may be discarded. When the difference is within the threshold, the first plane and the second plane are validated, and may be further used to compute a rotation matrix.
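One way to express this sanity check, assuming each plane is represented by its (normal, d) parameters and using the planes' z-intercepts as a stand-in for their "origins" (both the representation and the 0.5 m threshold are assumptions for illustration):

```python
def planes_are_valid(vehicle_elevation, map_elevation, plane_a, plane_b, tol=0.5):
    """Compare the elevation gap with the vertical gap between the two planes."""
    first_distance = abs(vehicle_elevation - map_elevation)
    (normal_a, d_a), (normal_b, d_b) = plane_a, plane_b
    # z at (x, y) = (0, 0) for a plane n·p + d = 0 is -d / n_z.
    second_distance = abs((-d_a / normal_a[2]) - (-d_b / normal_b[2]))
    return abs(first_distance - second_distance) <= tol
```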
In step 460, processor 200 may compute a rotation matrix based on the first plane and the second plane. In some embodiments, processor 200 may compute the rotation matrix according to Equations 1-5 below. In the Equations, A denotes a normal vector of the first plane, B denotes a normal vector of the second plane, G denotes a two-dimensional (2D) rotation matrix by an angle θ, and U denotes the rotation matrix that is used to adjust the pose of the vehicle.
cos θ = (A·B) / (‖A‖‖B‖),  sin θ = ‖A×B‖ / (‖A‖‖B‖)           (1)
G = [[cos θ, −sin θ, 0], [sin θ, cos θ, 0], [0, 0, 1]]           (2)
u = (B − (A·B) A) / ‖B − (A·B) A‖           (3)
ω = B × A
F = [A, u, ω]^(−1)           (4)
U = F^(−1) G F           (5)
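For illustration, this construction (a standard two-vector alignment consistent with Equations 1-5) can be evaluated with a few lines of NumPy; the sketch below assumes A and B are the normals of the fitted planes and returns the rotation U that maps A onto B. It is a minimal sketch, not the exact claimed implementation.

```python
import numpy as np


def rotation_matrix_from_normals(A, B):
    """Rotation U that turns the first-plane normal A onto the second-plane normal B."""
    A = A / np.linalg.norm(A)
    B = B / np.linalg.norm(B)
    cos_t = np.clip(A @ B, -1.0, 1.0)
    sin_t = np.linalg.norm(np.cross(A, B))
    G = np.array([[cos_t, -sin_t, 0.0],
                  [sin_t,  cos_t, 0.0],
                  [0.0,    0.0,   1.0]])
    u = B - cos_t * A                      # component of B orthogonal to A
    if np.linalg.norm(u) < 1e-9:           # degenerate case (parallel or antiparallel
        return np.eye(3)                   # normals); handle separately in practice
    u = u / np.linalg.norm(u)
    w = np.cross(B, A)                     # ω = B × A
    F = np.linalg.inv(np.column_stack([A, u, w]))
    return np.linalg.inv(F) @ G @ F        # U = F^(-1) G F
```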
In step 470, processor 200 may compute a vertical translation based on the first plane and the second plane. In some embodiments, processor 200 may compute the vertical translation based on the elevation of the vehicle and the elevation of the vehicle as queried from the 3D map; for example, the vertical translation may be the difference between the elevation of the vehicle and the elevation of the vehicle as queried from the 3D map. In alternative embodiments, the vertical translation may be the vertical difference between the first plane and the second plane, for example, the vertical difference between the origin of the first plane and the origin of the second plane.
In step 480, processor 200 may adjust the pose of the vehicle based on the rotation matrix and the vertical translation. In some embodiments, processor 200 may multiply the rotation matrix U with the pose of the vehicle (X, Y, Z, pitch, roll, yaw) to obtain an adjusted pose (X’, Y’, Z’, pitch’, roll’, yaw’). Further, processor 200 may translate the adjusted pose by the vertical translation as computed above.
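How the rotation matrix and vertical translation are applied to a 6-DoF pose can be read in more than one way; the sketch below applies U to the position vector, composes it with the attitude, and then shifts the height by the vertical translation. This is one plausible interpretation offered for illustration only, with the "xyz" Euler convention again assumed.

```python
import numpy as np
from scipy.spatial.transform import Rotation


def adjust_pose(pose, U, vertical_translation):
    """Return (X', Y', Z', pitch', roll', yaw') after applying U and the translation."""
    X, Y, Z, pitch, roll, yaw = pose
    position = U @ np.array([X, Y, Z])
    # Compose the correction rotation with the current attitude.
    attitude = Rotation.from_matrix(U) * Rotation.from_euler("xyz", [roll, pitch, yaw])
    roll_adj, pitch_adj, yaw_adj = attitude.as_euler("xyz")
    return (position[0], position[1], position[2] + vertical_translation,
            pitch_adj, roll_adj, yaw_adj)
```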
In some embodiments, steps of method 400 may be iteratively performed by vehicle pose adjustment system 224 of the vehicle to adjust the pose of the vehicle in real-time. Real-time adjustment may help the vehicle estimate the positions of surrounding objects in real-time, e.g., to assist an autonomous vehicle to make real-time driving decisions.
The disclosed systems and methods can effectively adjust a vehicle pose by adapting acquired point clouds to a 3D map. With a reduced position error, the vehicle may accurately detect and locate obstacles and objects around it to make driving decisions.
It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed system and related methods. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed system and related methods.
It is intended that the specification and examples be considered as exemplary only, with a true scope being indicated by the following claims and their equivalents.

Claims (20)

  1. A method for adjusting a vehicle pose by adapting acquired point clouds to a three-dimensional (3D) map, comprising:
    receiving, by a communication interface, a first point cloud acquired by at least one sensor equipped on a vehicle, the first point cloud comprising a plurality of 3D points;
    obtaining, by at least one processor, a second point cloud from the 3D map corresponding to the first point cloud;
    filtering, by the at least one processor, the first point cloud and the second point cloud to obtain a filtered first point cloud and a filtered second point cloud;
    computing, by the at least one processor, a rotation matrix based on the filtered first point cloud and the filtered second point cloud; and
    adjusting, by the at least one processor, a pose of the vehicle corresponding to the first point cloud based on the rotation matrix.
  2. The method of claim 1, wherein the 3D map is constructed with previously acquired point cloud data.
  3. The method of claim 1, wherein the at least one sensor comprises a camera or a LiDAR sensor.
  4. The method of claim 1, wherein the second point cloud is obtained from the 3D map based on the pose of the vehicle and coordinates of the first point cloud.
  5. The method of claim 4, wherein the pose includes a position and an attitude of the at least one sensor.
  6. The method of claim 1, wherein filtering the first point cloud and the second point cloud further comprises:
    detecting a first 3D point in the first point cloud and a second 3D point in the second point cloud that corresponds to the first 3D point, wherein a difference between a specified coordinate of the first 3D point and the specified coordinate of the second 3D point exceeds a threshold;
    removing the first 3D point from the first point cloud; and
    removing the second 3D point from the second point cloud.
  7. The method of claim 1, wherein computing the rotation matrix further comprises:
    fitting a first plane based on the filtered first point cloud;
    fitting a second plane based on the filtered second point cloud; and
    computing the rotation matrix based on the first plane and the second plane.
  8. The method of claim 7, wherein computing the rotation matrix further comprises:
    evaluating the first plane and the second plane by comparing a first distance between the first plane and the second plane with a second distance between an elevation of the vehicle and an elevation of the vehicle in the 3D map; and
    computing the rotation matrix based on a result of the evaluation.
  9. The method of claim 8, wherein evaluating the first plane and the second plane further comprises confirming that a difference between the first distance and the second distance is less than a threshold.
  10. The method of claim 7, wherein adjusting the pose of the vehicle comprises:
    computing a vertical translation between the first plane and the second plane; and
    adjusting the pose of the vehicle based on the vertical translation and the rotation matrix.
  11. A system for adjusting a vehicle pose by adapting acquired point clouds to a three-dimensional (3D) map, comprising:
    a communication interface configured to receive a first point cloud acquired by at least one sensor, the first point cloud comprising a plurality of 3D points;
    a storage configured to store the 3D map; and
    at least one processor configured to:
    obtain a second point cloud from the 3D map corresponding to the first point cloud;
    filter the first point cloud and the second point cloud to obtain a filtered first point cloud and a filtered second point cloud;
    compute a rotation matrix based on the filtered first point cloud and the filtered second point cloud; and
    adjust a pose of the vehicle corresponding to the first point cloud based on the rotation matrix.
  12. The system of claim 11, wherein the 3D map is constructed with previously acquired point cloud data.
  13. The system of claim 11, wherein the at least one sensor comprises a camera and/or a LiDAR sensor.
  14. The system of claim 11, wherein to query the 3D map, the at least one processor is further configured to obtain the second point cloud from the 3D map based on the pose of the vehicle and coordinates of the first point cloud.
  15. The system of claim 14, wherein the pose includes a position and an attitude of the at least one sensor.
  16. The system of claim 11, wherein to filter the first point cloud and the second point cloud, the at least one processor is further configured to:
    detect a first 3D point in the first point cloud and a second 3D point in the second point cloud that corresponds to the first 3D point, wherein a difference between a specified coordinate of the first 3D point and the specified coordinate of the second 3D point exceeds a threshold;
    remove the first 3D point from the first point cloud; and
    remove the second 3D point from the second point cloud.
  17. The system of claim 11, wherein to compute the rotation matrix, the at least one processor is further configured to:
    fit a first plane based on the filtered first point cloud;
    fit a second plane based on the filtered second point cloud; and
    compute the rotation matrix based on the first plane and the second plane.
  18. The system of claim 17, wherein to compute the rotation matrix, the at least one processor is further configured to:
    evaluate the first plane and the second plane by comparing a first distance between the first plane and the second plane with a second distance between an elevation of the vehicle and an elevation of the vehicle in the 3D map; and
    compute the rotation matrix based on a result of the evaluation.
  19. The system of claim 18, wherein evaluating the first plane and the second plane further comprises confirming that a difference between the first distance and the second distance is less than a threshold.
  20. A non-transitory computer-readable medium having instructions stored thereon that, when executed by one or more processors, cause the one or more processors to perform a method for adjusting a vehicle pose by adapting acquired point clouds to a three-dimensional (3D) map, the method comprising:
    receiving a first point cloud acquired by at least one sensor equipped on a vehicle, the first point cloud comprising a plurality of 3D points;
    obtaining a second point cloud from the 3D map corresponding to the first point cloud;
    filtering the first point cloud and the second point cloud to obtain a filtered first point cloud and a filtered second point cloud;
    computing a rotation matrix based on the filtered first point cloud and the filtered second point cloud; and
    adjusting a pose of the vehicle corresponding to the first point cloud based on the rotation matrix.
PCT/CN2019/107925 2019-09-25 2019-09-25 Systems and methods for adjusting a vehicle pose WO2021056283A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/107925 WO2021056283A1 (en) 2019-09-25 2019-09-25 Systems and methods for adjusting a vehicle pose

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/107925 WO2021056283A1 (en) 2019-09-25 2019-09-25 Systems and methods for adjusting a vehicle pose

Publications (1)

Publication Number Publication Date
WO2021056283A1 true WO2021056283A1 (en) 2021-04-01

Family

ID=75165499

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/107925 WO2021056283A1 (en) 2019-09-25 2019-09-25 Systems and methods for adjusting a vehicle pose

Country Status (1)

Country Link
WO (1) WO2021056283A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113791423A (en) * 2021-09-24 2021-12-14 山东新一代信息产业技术研究院有限公司 Robot repositioning method based on multi-sensor fusion
CN113805157A (en) * 2021-09-22 2021-12-17 航天新气象科技有限公司 Height measuring method, device and equipment based on target

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120089295A1 (en) * 2010-10-07 2012-04-12 Samsung Electronics Co., Ltd. Moving robot and method to build map for the same
CN106406338A (en) * 2016-04-14 2017-02-15 中山大学 Omnidirectional mobile robot autonomous navigation apparatus and method based on laser range finder
CN106969763A (en) * 2017-04-07 2017-07-21 百度在线网络技术(北京)有限公司 For the method and apparatus for the yaw angle for determining automatic driving vehicle
US20180018787A1 (en) * 2016-07-18 2018-01-18 King Abdullah University Of Science And Technology System and method for three-dimensional image reconstruction using an absolute orientation sensor
CN107665506A (en) * 2016-07-29 2018-02-06 成都理想境界科技有限公司 Realize the method and system of augmented reality
CN109523581A (en) * 2017-09-19 2019-03-26 华为技术有限公司 A kind of method and apparatus of three-dimensional point cloud alignment
CN109579852A (en) * 2019-01-22 2019-04-05 杭州蓝芯科技有限公司 Robot autonomous localization method and device based on depth camera
CN110260867A (en) * 2019-07-29 2019-09-20 浙江大华技术股份有限公司 Method, equipment and the device that pose is determining in a kind of robot navigation, corrects


Similar Documents

Publication Publication Date Title
CN111656136B (en) Vehicle positioning system using lidar
TWI693422B (en) Integrated sensor calibration in natural scenes
EP3612854B1 (en) Vehicle navigation system using pose estimation based on point cloud
US11474247B2 (en) Methods and systems for color point cloud generation
CN110148185B (en) Method and device for determining coordinate system conversion parameters of imaging equipment and electronic equipment
US10996072B2 (en) Systems and methods for updating a high-definition map
US10996337B2 (en) Systems and methods for constructing a high-definition map based on landmarks
US11866056B2 (en) Ballistic estimation of vehicle data
WO2021056283A1 (en) Systems and methods for adjusting a vehicle pose
WO2020113425A1 (en) Systems and methods for constructing high-definition map
US11138448B2 (en) Identifying a curb based on 3-D sensor data
AU2018102199A4 (en) Methods and systems for color point cloud generation
WO2020232709A1 (en) Method and system for evaluating quality of a point cloud map
EP4345750A1 (en) Position estimation system, position estimation method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19946796

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19946796

Country of ref document: EP

Kind code of ref document: A1