WO2020113425A1 - Systems and methods for constructing high-definition map - Google Patents

Systems and methods for constructing high-definition map

Info

Publication number
WO2020113425A1
Authority
WO
WIPO (PCT)
Prior art keywords
landmark
vehicle
data
parameters
processor
Prior art date
Application number
PCT/CN2018/119199
Other languages
French (fr)
Inventor
Teng MA
Sheng Yang
Xiaoling ZHU
Original Assignee
Beijing Didi Infinity Technology And Development Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology And Development Co., Ltd. filed Critical Beijing Didi Infinity Technology And Development Co., Ltd.
Priority to CN201880095637.XA priority Critical patent/CN112424568A/en
Priority to PCT/CN2018/119199 priority patent/WO2020113425A1/en
Publication of WO2020113425A1 publication Critical patent/WO2020113425A1/en

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 - Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 - Creation or updating of map data
    • G01C21/3859 - Differential updating map data
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 - Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 - Creation or updating of map data
    • G01C21/3833 - Creation or updating of map data characterised by the source of data
    • G01C21/3848 - Data obtained from both position sensors and additional sensors

Definitions

  • the present disclosure relates to systems and methods for constructing a high-definition (HD) map, and more particularly to, systems and methods for constructing an HD map based on integrating point cloud data acquired of a same landmark from different poses.
  • HD maps may be obtained by aggregating images and information acquired by various sensors, detectors, and other devices equipped on vehicles as they drive around.
  • a vehicle may be equipped with multiple integrated sensors such as a LiDAR, a Global Positioning System (GPS) receiver, one or more Inertial Measurement Unit (IMU) sensors, and one or more cameras, to capture features of the road on which the vehicle is driving or the surrounding objects.
  • Data captured may include, for example, center line or border line coordinates of a lane, coordinates and images of an object, such as a building, another vehicle, a landmark, a pedestrian, or a traffic sign.
  • the point cloud data obtained by the integrated sensors may be affected by the errors from the sensors themselves (e.g., laser ranging error, GPS positioning error, IMU attitude measurement error, etc. ) .
  • errors of pose information accumulate significantly when the GPS signal is weak.
  • Some solutions have been developed to improve the accuracy of point cloud data acquisition. For example, one solution based on Kalman filtering integrates the LiDAR unit and the navigation unit (e.g., the GPS/IMU unit) to estimate the pose of the vehicle. Another solution is to optimize the pose information iteratively subject to a set of constraints (such as point cloud matching), for example using a Gauss–Newton method. Although these solutions may mitigate accuracy errors to some extent, they are not sufficiently robust and remain susceptible to noise in image coordinates. Therefore, an improved system and method for updating an HD map based on optimization techniques is needed.
  • Embodiments of the disclosure address the above problems by methods and systems for constructing an HD map by integrating point cloud data acquired of a same landmark from different poses.
  • Embodiments of the disclosure provide a method for constructing an HD map.
  • the method may include receiving, by a communication interface, sensor data acquired of a target region by at least one sensor equipped on a vehicle as the vehicle travels along a trajectory.
  • the method may further include identifying, by at least one processor, a plurality of data frames associated with a landmark, each data frame corresponding to a pose of the vehicle on the trajectory.
  • the method may further include determining, by the at least one processor, a set of parameters of the landmark within each identified data frame and associating the set of parameters with the pose of the vehicle corresponding to each data frame.
  • the method may also include constructing, by the at least one processor, the HD map based on the sets of parameters and the associated poses.
  • Embodiments of the disclosure also provide a system for constructing an HD map.
  • the system may include a communication interface configured to receive sensor data acquired of a target region by at least one sensor equipped on a vehicle, as the vehicle travels along a trajectory, via a network.
  • the system may further include a storage configured to store the HD map.
  • the system may also include at least one processor.
  • the at least one processor may be configured to identify a plurality of data frames associated with a landmark, each data frame corresponding to a pose of the vehicle on the trajectory.
  • the at least one processor may be further configured to determine a set of parameters of the landmark within each identified data frame and associate the set of parameters with the pose of the vehicle corresponding to each data frame.
  • the at least one processor may also be configured to construct the HD map based on the sets of parameters and the associated poses.
  • Embodiments of the disclosure further provide a non-transitory computer-readable medium having instructions stored thereon that, when executed by one or more processors, cause the one or more processors to perform a method for updating an HD map.
  • the method may include receiving sensor data acquired of a target region by at least one sensor equipped on a vehicle as the vehicle travels along a trajectory.
  • the method may further include identifying a plurality of data frames associated with a landmark, each data frame corresponding to a pose of the vehicle on the trajectory.
  • the method may further include determining a set of parameters of the landmark within each identified data frame and associating the set of parameters with the pose of the vehicle corresponding to each data frame.
  • the method may also include constructing the HD map based on the sets of parameters and the associated poses.
  • FIG. 1 illustrates a schematic diagram of an exemplary vehicle equipped with sensors, according to embodiments of the disclosure.
  • FIG. 2 illustrates a block diagram of an exemplary system for constructing an HD map, according to embodiments of the disclosure.
  • FIG. 3 illustrates an exemplary method for optimizing the set of parameters of a landmark and poses of the vehicle, according to embodiments of the disclosure.
  • FIG. 4 illustrates a flowchart of an exemplary method for constructing an HD map, according to embodiments of the disclosure.
  • FIG. 5 shows an exemplary point cloud frame before and after applying the RANSAC algorithm, according to embodiments of the disclosure.
  • FIG. 1 illustrates a schematic diagram of an exemplary vehicle 100 having a plurality of sensors 140 and 150, according to embodiments of the disclosure.
  • vehicle 100 may be a survey vehicle configured for acquiring data for constructing an HD map or three-dimensional (3-D) city modeling. It is contemplated that vehicle 100 may be an electric vehicle, a fuel cell vehicle, a hybrid vehicle, or a conventional internal combustion engine vehicle. Vehicle 100 may have a body 110 and at least one wheel 120. Body 110 may be any body style, such as a sports vehicle, a coupe, a sedan, a pick-up truck, a station wagon, a sports utility vehicle (SUV) , a minivan, or a conversion van.
  • vehicle 100 may include a pair of front wheels and a pair of rear wheels, as illustrated in FIG. 1. However, it is contemplated that vehicle 100 may have fewer wheels or equivalent structures that enable vehicle 100 to move around. Vehicle 100 may be configured to be all wheel drive (AWD), front wheel drive (FWD), or rear wheel drive (RWD). In some embodiments, vehicle 100 may be configured to be operated by an operator occupying the vehicle, remotely controlled, and/or autonomous.
  • vehicle 100 may be equipped with various sensors 140 and 150.
  • Sensor 140 may be mounted to body 110 via a mounting structure 130.
  • Mounting structure 130 may be an electro-mechanical device installed or otherwise attached to body 110 of vehicle 100. In some embodiments, mounting structure 130 may use screws, adhesives, or another mounting mechanism.
  • Vehicle 100 may be additionally equipped with sensor 150 inside or outside body 110 using any suitable mounting mechanisms. It is contemplated that the manners in which sensor 140 or 150 can be equipped on vehicle 100 are not limited by the example shown in FIG. 1 and may be modified depending on the types of sensors 140/150 and/or vehicle 100 to achieve desirable sensing performance.
  • sensors 140 and 150 may be configured to capture data as vehicle 100 travels along a trajectory.
  • sensor 140 may be a LiDAR scanner configured to scan the surroundings and acquire point clouds. LiDAR measures the distance to a target by illuminating the target with pulsed laser light and measuring the reflected pulses with a sensor. Differences in laser return times and wavelengths can then be used to make digital 3-D representations of the target.
  • the light used for LiDAR scans may be ultraviolet, visible, or near infrared. Because a narrow laser beam can map physical features with very high resolution, a LiDAR scanner is particularly suitable for HD map surveys. In some embodiments, a LiDAR scanner may capture point cloud data.
  • sensor 140 may continuously capture data.
  • Each set of scene data captured at a certain time range is known as a data frame.
  • the point cloud data captured by a LiDAR may include multiple point cloud data frames corresponding to different time ranges.
  • Each data frame also corresponds to a pose of the vehicle along the trajectory.
  • the scene may include a landmark, and thus multiple data frames captured of the scene may include data associated with the landmark. Because the data frames are captured at different vehicle poses, the data in each frame contains landmark features observed from different angles and distances. These features, however, may be matched and associated among the different data frames, to facilitate construction of the HD map.
  • vehicle 100 may be additionally equipped with sensor 150, which may include sensors used in a navigation unit, such as a GPS receiver and one or more IMU sensors.
  • a GPS is a global navigation satellite system that provides geolocation and time information to a GPS receiver.
  • An IMU is an electronic device that measures and provides a vehicle’s specific force, angular rate, and sometimes the magnetic field surrounding the vehicle, using various inertial sensors, such as accelerometers and gyroscopes, sometimes also magnetometers.
  • sensor 150 can provide real-time pose information of vehicle 100 as it travels, including the positions and orientations (e.g., Euler angles) of vehicle 100 at each time point.
  • the point cloud data acquired by the LiDAR unit of sensor 140 may be initially in a local coordinate system of the LiDAR unit and may need to be transformed into a global coordinate system (e.g. the longitude/latitude coordinates) for later processing.
  • Vehicle 100’s real-time pose information collected by sensor 150 of the navigation unit may be used for transforming the point cloud data from the local coordinate system into the global coordinate system by point cloud data registration, for example, based on vehicle 100’s poses at the time each point cloud data frame was acquired.
  • sensors 140 and 150 may be integrated as an integrated sensing system such that the point cloud data can be aligned by registration with the pose information when they are collected.
  • the integrated sensing system may be calibrated with respect to a calibration target to reduce the integration errors, including but not limited to, mounting angle error and mounting vector error of sensors 140 and 150.
  • sensors 140 and 150 may communicate with server 160.
  • server 160 may be a local physical server, a cloud server (as illustrated in FIG. 1) , a virtual server, a distributed server, or any other suitable computing device.
  • server 160 may construct an HD map.
  • the HD map may be constructed using point cloud data acquired by a LiDAR. LiDAR measures distance to a target by illuminating the target with pulsed laser light and measuring the reflected pulses with a sensor. Differences in laser return times and wavelengths can then be used to construct digital 3-D representations of the target.
  • the light used for LiDAR scan may be ultraviolet, visible, or near infrared. Because a narrow laser beam can map physical features with very high resolution, LiDAR is particularly suitable for HD map surveys.
  • server 160 may construct the HD map based on point cloud data containing multiple data frames acquired of one or more landmarks from different vehicle poses.
  • Server 160 may receive the point cloud data, identify landmarks within the multiple frames of point cloud data, associate sets of parameters with the landmarks and construct HD maps based on the sets of parameters.
  • Server 160 may communicate with sensors 140, 150, and/or other components of vehicle 100 via a network, such as a Wireless Local Area Network (WLAN), a Wide Area Network (WAN), wireless networks such as radio waves, a cellular network, a satellite communication network, and/or a local or short-range wireless network (e.g., Bluetooth™).
  • FIG. 2 illustrates a block diagram of an exemplary server 160 for constructing an HD map, according to embodiments of the disclosure.
  • server 160 may receive sensor data 203 from sensor 140 and vehicle pose information 205 from sensor 150. Based on sensor data 203, server 160 may identify data frames associated with landmarks, determine sets of parameters within the data frames, associate them with the poses of the vehicle when the respective data frames were acquired, and construct an HD map based on the sets of parameters.
  • server 160 may include a communication interface 202, a processor 204, a memory 206, and a storage 208.
  • server 160 may have different modules in a single device, such as an integrated circuit (IC) chip (implemented as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA) ) , or separate devices with dedicated functions.
  • server 160 may be located in a cloud, or may alternatively be in a single location (such as inside vehicle 100 or a mobile device) or distributed locations. Components of server 160 may be in an integrated device or distributed at different locations but communicate with each other through a network (not shown).
  • Communication interface 202 may send data to and receive data from components such as sensors 140 and 150 via communication cables, a Wireless Local Area Network (WLAN) , a Wide Area Network (WAN) , wireless networks such as radio waves, a cellular network, and/or a local or short-range wireless network (e.g., Bluetooth TM ) , or other communication methods.
  • communication interface 202 can be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection.
  • communication interface 202 can be a local area network (LAN) card to provide a data communication connection to a compatible LAN.
  • Wireless links can also be implemented by communication interface 202.
  • communication interface 202 can send and receive electrical, electromagnetic or optical signals that carry digital data streams representing various types of information via a network.
  • communication interface 202 may receive sensor data 203 such as point cloud data captured by sensor 140, as well as pose information 205 captured by sensor 150. Communication interface 202 may further provide the received data to storage 208 for storage or to processor 204 for processing. Communication interface 202 may also receive a point cloud generated by processor 204 and provide the point cloud to any local component in vehicle 100 or any remote device via a network.
  • Processor 204 may include any appropriate type of general-purpose or special-purpose microprocessor, digital signal processor, or microcontroller. Processor 204 may be configured as a separate processor module dedicated to constructing HD maps. Alternatively, processor 204 may be configured as a shared processor module that also performs other functions unrelated to HD map construction.
  • processor 204 may include multiple modules, such as a landmark feature extraction unit 210, a landmark feature matching unit 212, a landmark parameter determination unit 214, an HD map construction unit 216, and the like. These modules (and any corresponding sub-modules or sub-units) can be hardware units (e.g., portions of an integrated circuit) of processor 204 designed for use with other components, or software units implemented by processor 204 through executing at least part of a program.
  • the program may be stored on a computer-readable medium, and when executed by processor 204, it may perform one or more functions.
  • Although FIG. 2 shows units 210-216 all within one processor 204, it is contemplated that these units may be distributed among multiple processors located near or remote from each other.
  • modules related to landmark feature extraction such as landmark feature extraction unit 210, landmark feature matching unit 212, landmark parameter determination unit 214, etc. may be within a processor on vehicle 100.
  • modules related to constructing HD map such as HD map construction unit 216 may be within a processor on a remote server.
  • Landmark feature extraction unit 210 may be configured to extract landmark features from sensor data 203.
  • the landmark features may be geometric features of a landmark. Different methods may be used to extract the landmark features based on the type of the landmark.
  • the landmark may be a road mark (e.g., a traffic lane or pedestrian marks) or a standing object (e.g., a tree or road board) .
  • Processor 204 may determine the type of the landmark.
  • landmark feature extraction unit 210 may identify the landmark based on point cloud intensity of the landmarks. For example, landmark feature extraction unit 210 may use a Random Sample Consensus (RANSAC) method to segment the point cloud data associated with the road surface on which the vehicle travels. Because road marks are typically made using special labeling materials that produce high-intensity point cloud returns, landmark feature extraction unit 210 may extract features of the road marks based on the intensity, for example, using region growing or clustering methods. In some other embodiments, if the landmark is determined to be a standing object, landmark feature extraction unit 210 may extract the landmark features based on a Principal Component Analysis method.
  • Landmark feature matching unit 212 may be configured to divide sensor data 203 into subsets. For example, landmark feature matching unit 212 may divide sensor data 203 into data frames based on the time point the sensor data was captured. The data frames contain point cloud data associated with the same landmark captured at different vehicle poses along the trajectory. Landmark feature matching unit 212 may further be configured to match the landmark features among the subsets and identify the data frames associated with the landmark.
  • landmark features may be matched using learning models trained based on sample landmark features that are known to be associated with a same landmark.
  • landmark feature matching unit 212 may use landmark features such as types, collection properties, and/or geometric features of the landmark as sample landmark features and combine the features with the associated vehicle pose to identify the landmark within different subsets.
  • Landmark feature matching unit 212 may then train learning models (e.g., using a rule-based machine learning method) based on the sample landmark features that are associated with a same landmark. The trained model can then be applied to find matching landmark features.
  • Landmark parameter determination unit 214 may be configured to determine a set of parameters of the landmark based on the matched landmark features.
  • the set of parameters of the landmark may be determined based on the type of the landmark.
  • for example, if the landmark is a line-segment-type object (e.g., a street lamp pole), it may be represented with 4 or 6 degrees of freedom, including the line direction (2 degrees of freedom), tangential positions (2 degrees of freedom), and endpoints (0 or 2 degrees of freedom).
  • as another example, if the landmark is a symmetric-type object (e.g., a tree or road board), it may be represented with 5 degrees of freedom, including the normal vector (2 degrees of freedom) and the spatial location of the landmark (3 degrees of freedom). Landmarks that are neither of these two types may be represented with 6 degrees of freedom, including Euler angles (3 degrees of freedom) and the spatial location of the landmark (3 degrees of freedom).
  • HD map construction unit 216 may be configured to construct an HD map based on the set of parameters.
  • optimization methods may be used to construct the HD map.
  • the matched landmark features obtained by landmark feature matching unit 212 and set of parameters determined by landmark parameter determination unit 214 provide additional constraints that can be used during the optimization method for HD map construction.
  • bundle adjustment may be added as an ancillary component of the optimization to improve the robustness of the map construction. For example, bundle adjustment method may be applied in addition to a traditional map optimization method (e.g., to add constraints) .
  • the extended traditional map optimization method (e.g., with bundle adjustment constraints added) is more robust in optimizing the vehicle pose information and the set of parameters of the landmark, and thus may increase the accuracy of the HD map construction (e.g., when the GPS positioning accuracy is at a decimeter level, the HD map can still be constructed at a centimeter level accuracy) .
  • processor 204 may additionally include a sensor calibration unit (not shown) configured to determine one or more calibration parameters associated with sensor 140 or 150.
  • the sensor calibration unit may instead be inside vehicle 100, in a mobile device, or otherwise located remotely from processor 204.
  • sensor calibration may be used to calibrate a LiDAR scanner and the positioning sensor(s).
  • Memory 206 and storage 208 may include any appropriate type of mass storage provided to store any type of information that processor 204 may need to operate.
  • Memory 206 and storage 208 may be a volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other type of storage device or tangible (i.e., non-transitory) computer-readable medium including, but not limited to, a ROM, a flash memory, a dynamic RAM, and a static RAM.
  • Memory 206 and/or storage 208 may be configured to store one or more computer programs that may be executed by processor 204 to perform the HD map construction functions disclosed herein.
  • memory 206 and/or storage 208 may be configured to store program(s) that may be executed by processor 204 to construct an HD map based on sensor data captured by sensors 140 and 150.
  • Memory 206 and/or storage 208 may be further configured to store information and data used by processor 204.
  • memory 206 and/or storage 208 may be configured to store the various types of sensor data (e.g., point cloud data frames, pose information, etc. ) captured by sensors 140 and 150 and the HD map.
  • Memory 206 and/or storage 208 may also store intermediate data such as machine learning models, landmark features, and sets of parameters associated with landmarks, etc.
  • the various types of data may be stored permanently, removed periodically, or disregarded immediately after each frame of data is processed.
  • FIG. 3 illustrates an exemplary method for optimizing the set of parameters of a landmark and poses of the vehicle when the data frame of the landmark is acquired, according to embodiments of the disclosure.
  • P_i represents the pose information of vehicle 100 at the time point i at which the sensor data F_i and V_i are acquired.
  • P_i may include [S_i T_i R_i], which stands for the vehicle pose at the moment sensor data F_i and V_i are acquired.
  • S_i, T_i, and R_i may be parameters representing the pose of vehicle 100 in global coordinates.
  • Sensor data F_i is the observed set of parameters of the landmark (e.g., line direction, tangential positions, and endpoints for line-segment-type objects), and sensor data V_i is the difference between vehicle pose information P_i and the observed set of parameters F_i.
  • P_1 is the pose information of vehicle 100 at the time point sensor data F_1 and V_1 are acquired, and F_1 is the observed set of parameters of the landmark.
  • V_1 is the vector from P_1 to F_1.
  • {F_1, F_2, ..., F_n} and {V_1, V_2, ..., V_n} may be divided into subsets of sensor data based on the time point the sensor data was collected (e.g., {F_1, F_2, ..., F_n} may be divided into {F_1, F_2, ..., F_k} and {F_k+1, F_k+2, ..., F_n}; {V_1, V_2, ..., V_n} may be divided into {V_1, V_2, ..., V_k} and {V_k+1, V_k+2, ..., V_n}).
  • [C_c T_c R_c] represents the set of parameters of the landmark in global coordinates (e.g., C_c, T_c, and R_c may stand for the line direction, tangential positions, and endpoints of the landmark, respectively, if the landmark is a line-segment-type object).
  • d_i represents an observation error, which equals the difference between the sensor data F_i acquired by sensors 140 and 150 and the set of parameters of the landmark [C_c T_c R_c] in global coordinates.
  • the disclosed method includes first extracting landmark features from sensor data {F_1, F_2, ..., F_i} and {V_1, V_2, ..., V_i}. The method then includes dividing the sensor data into subsets.
  • for example, sensor data {F_1, F_2, ..., F_i} and {V_1, V_2, ..., V_i} may be divided into subsets {F_1, F_2, ..., F_m} and {V_1, V_2, ..., V_m}, and {F_m+1, F_m+2, ..., F_i} and {V_m+1, V_m+2, ..., V_i}.
  • the method may also include matching the landmark features among the subsets and identifying the plurality of data frames associated with the landmark.
  • the method then includes determining sets of parameters of the landmarks within each identified data frame and associating the sets of parameters with the pose of the vehicle P_i corresponding to each data frame. Finally, the method includes optimizing the poses and the sets of parameters simultaneously; for example, an optimization method may be used to find the optimal {T_i, R_i, P_i} that minimizes the sum of the observation errors d_i.
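  Written out, the joint optimization may be summarized as the following least-squares objective. This is a reconstruction from the notation above; the projection function h and the squared-norm form of the error are assumptions, since the exact metric is not spelled out here:

      \min_{\{P_i\},\, [C_c\, T_c\, R_c]} \; \sum_{i=1}^{n} \lVert d_i \rVert^2 ,
      \qquad d_i = F_i - h\bigl( P_i,\, [C_c\, T_c\, R_c] \bigr)

  where h(P_i, ·) maps the landmark parameters from global coordinates into the observation expected at pose P_i = [S_i T_i R_i].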
  • FIG. 4 illustrates a flowchart of an exemplary method for constructing an HD map, according to embodiments of the disclosure.
  • method 400 may be implemented by an HD map construction system that includes, among other things, server 160 and sensors 140 and 150.
  • method 400 is not limited to that exemplary embodiment.
  • Method 400 may include steps S402-S416 as described below. It is to be appreciated that some of the steps may be optional to perform the disclosure provided herein. Further, some of the steps may be performed simultaneously, or in a different order than shown in FIG. 4.
  • one or more of sensors 140 and 150 may be calibrated.
  • vehicle 100 may be dispatched for a calibration trip to collect data used for calibrating sensor parameters. Calibration may occur before the actual survey is performed for constructing and/or updating the map.
  • Point cloud data captured by a LiDAR (as an example of sensor 140) and pose information acquired by positioning devices such as a GPS receiver and one or more IMU sensors may be calibrated.
  • sensor 140 may capture sensor data 203 and pose information 205 as vehicle 100 travels along a trajectory.
  • the sensor data 203 of the target region may be point cloud data.
  • Vehicle 100 may be equipped with sensor 140, such as a LiDAR laser scanner. As vehicle 100 travels along the trajectory, sensor 140 may continuously capture frames of sensor data 203 at different time points in the form of a frame of point cloud data.
  • Vehicle 100 may be also equipped with sensor 150, such as a GPS receiver and one or more IMU sensors. Sensors 140 and 150 may form an integrated sensing system. In some embodiments, when vehicle 100 travels along the trajectory in the natural scene and when sensor 140 captures the set of point cloud data indicative of the target region, sensor 150 may acquire real-time pose information of vehicle 100.
  • the captured data may be transmitted from sensors 140/150 to server 160 in real-time.
  • the data may be streamed as they become available. Real-time transmission of data enables server 160 to process the data frame by frame in real-time while subsequent frames are being captured. Alternatively, data may be transmitted in bulk after a section of, or the entire survey is completed.
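  As a minimal sketch of this frame-by-frame pipeline (all names here are illustrative assumptions, not part of the disclosure), the server side might consume frames from a queue fed by the network link, so that processing of one frame overlaps with capture of the next:

      import queue

      def process_stream(frame_queue: "queue.Queue", handle_frame) -> None:
          # Consume data frames as they arrive; a None item signals that the
          # survey (or a section of it) is complete. handle_frame is a
          # per-frame callback, e.g., landmark feature extraction.
          while True:
              frame = frame_queue.get()
              if frame is None:
                  break
              handle_frame(frame)

  In bulk mode, the same callback could simply be applied to a stored list of frames after the survey is completed.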
  • processor 204 may extract landmark features from the sensor data.
  • landmarks may be extracted based on the type of the landmarks. For example, processor 204 may determine whether the landmarks are road marks (e.g., traffic lanes) or standing objects (e.g., trees or boards). In some embodiments, if the landmarks are determined to be road marks, processor 204 may identify the landmarks based on point cloud intensity of the landmarks. For example, landmark feature extraction unit 210 may segment the sensor data using the RANSAC algorithm. Based on the segmentation, processor 204 may further identify the landmarks based on the point cloud intensity of the landmarks.
  • FIG. 5 illustrates exemplary point clouds 510 and 520 of the same object (e.g., road marks) before and after point cloud intensity identification of the landmarks, respectively, according to embodiments of the disclosure.
  • Point cloud 510 of the road marks is the data collected by sensors 140 and 150 before point cloud intensity identification.
  • In contrast, point cloud 520 of the same road marks is the data re-generated after point cloud intensity identification (e.g., using the RANSAC algorithm to segment the sensor data).
  • The landmarks (road marks) are more distinguishable in point cloud 520 than in point cloud 510 because the sensor data collected by sensors 140 and 150 is filtered by the RANSAC method to reduce noise.
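  The two-stage road-mark extraction described above (RANSAC segmentation of the road surface, then intensity filtering) might look roughly as follows. This is a minimal sketch, assuming each frame is an N x 4 NumPy array of x, y, z, and normalized intensity; the iteration count and thresholds are illustrative choices, not values from the disclosure:

      import numpy as np

      def extract_road_marks(points, n_iters=200, dist_thresh=0.05, intensity_thresh=0.7):
          # points: (N, 4) array of x, y, z, intensity (intensity in [0, 1])
          xyz, intensity = points[:, :3], points[:, 3]
          best_inliers = np.zeros(len(points), dtype=bool)
          rng = np.random.default_rng(0)
          # RANSAC: fit the dominant plane, i.e., the road surface
          for _ in range(n_iters):
              sample = xyz[rng.choice(len(xyz), 3, replace=False)]
              normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
              norm = np.linalg.norm(normal)
              if norm < 1e-9:
                  continue  # degenerate (collinear) sample
              normal /= norm
              d = -normal.dot(sample[0])
              inliers = np.abs(xyz @ normal + d) < dist_thresh
              if inliers.sum() > best_inliers.sum():
                  best_inliers = inliers
          # keep only high-intensity returns on the road plane (painted marks)
          return points[best_inliers & (intensity > intensity_thresh)]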
  • processor 204 may identify the landmarks based on a Principal Component Analysis method. For example, processor 204 may use an orthogonal transformation to convert a set of observations of possibly correlated variables (e.g., point cloud data of the nearby area of the landmarks) into a set of values of linearly uncorrelated variables of the landmarks.
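  A minimal sketch of such a PCA step, assuming the standing object has already been segmented into its own point cluster:

      import numpy as np

      def principal_directions(cluster):
          # cluster: (N, 3) points of one standing object (e.g., a pole or board)
          centered = cluster - cluster.mean(axis=0)
          # eigendecomposition of the 3x3 covariance: the orthogonal transform
          eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
          order = np.argsort(eigvals)[::-1]
          return eigvals[order], eigvecs[:, order]  # columns are principal axes

  For a pole-like object, the first principal axis approximates the line direction, and a large ratio between the first and second eigenvalues flags a line-segment-type landmark.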
  • processor 204 may divide the sensor data into subsets. For example, processor 204 may divide sensor data 203 into data frames based on the time point the sensor data was captured. The data frames contain point cloud data associated with the same landmark captured at different vehicle poses along the trajectory. Processor 204 may further be configured to match the landmark features among the subsets and identify the data frames associated with the landmark.
  • landmark features may be matched using learning models trained based on sample landmark features that are known to be associated with a same landmark.
  • processor 204 may use landmark features such as types, collection properties, and/or geometric features as sample landmark features and combine the features with the associated vehicle pose to identify the landmark within different subsets.
  • Processor 204 may then train learning models (e.g., using a rule-based machine learning method) based on the sample landmark features of a matched landmark.
  • the trained model may be applied to match landmark features associated with the same landmark.
  • processor 204 may identify a plurality of data frames associated with a landmark. For example, if the matching result of a plurality of data frames among different subsets is higher than a predetermined threshold level corresponding to a sufficient level of matching, processor 204 may associate the plurality of data frames with the landmark.
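  A sketch of this thresholded association follows; the cosine similarity stands in for the trained matching model, and all names are illustrative assumptions:

      import numpy as np

      def cosine_score(a, b):
          # stand-in similarity; a trained matching model would replace this
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      def associate_frames(frames, landmark_feature, score_fn=cosine_score, threshold=0.8):
          # frames: list of (frame_id, feature_vector) pairs; returns the IDs of
          # frames whose landmark features match above the predetermined threshold
          return [fid for fid, feat in frames
                  if score_fn(feat, landmark_feature) >= threshold]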
  • processor 204 may determine a set of parameters associated with the landmark.
  • the set of parameters of the landmark may be determined based on the type of the landmark.
  • for example, if the landmark is a line-segment-type object (e.g., a street lamp pole), it may be represented with 4 or 6 degrees of freedom, including the line direction (2 degrees of freedom), tangential positions (2 degrees of freedom), and endpoints (0 or 2 degrees of freedom).
  • as another example, if the landmark is a symmetric-type object (e.g., a tree or road board), it may be represented with 5 degrees of freedom, including the normal vector (2 degrees of freedom) and the spatial location of the landmark (3 degrees of freedom).
  • landmarks that are neither of these two types may be represented with 6 degrees of freedom, including Euler angles (3 degrees of freedom) and the spatial location of the landmark (3 degrees of freedom), as encoded in the sketch below.
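  One way to encode these three parameterizations (the field names are illustrative; only the degrees of freedom come from the description above):

      from dataclasses import dataclass
      from typing import Optional
      import numpy as np

      @dataclass
      class LineSegmentLandmark:           # 4 or 6 degrees of freedom
          direction: np.ndarray            # line direction (2 DoF)
          tangential_position: np.ndarray  # tangential positions (2 DoF)
          endpoints: Optional[np.ndarray]  # endpoints (0 or 2 DoF)

      @dataclass
      class SymmetricLandmark:             # 5 degrees of freedom
          normal: np.ndarray               # normal vector (2 DoF)
          location: np.ndarray             # spatial location (3 DoF)

      @dataclass
      class GenericLandmark:               # 6 degrees of freedom
          euler_angles: np.ndarray         # orientation (3 DoF)
          location: np.ndarray             # spatial location (3 DoF)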
  • processor 204 may associate the set of parameters with the pose of the vehicle corresponding to each data frame.
  • each set of parameters may be associated with the pose information 205 of vehicle 100 at the time point the data frame is acquired.
  • processor 204 may construct an HD map based on the sets of parameters and the associated pose information.
  • optimization methods may be used to construct the HD map.
  • the matched landmark features obtained in step S410 and the sets of parameters determined in step S412 may provide additional constraints that can be used during the optimization method for HD map construction.
  • bundle adjustment may be added as an ancillary component of the optimization to improve the robustness of the map construction.
  • bundle adjustment method may be applied in addition to a traditional map optimization method (e.g., to add constraints) .
  • the extended traditional map optimization method (e.g., with bundle adjustment constraints added) is more robust in optimizing the vehicle pose and the set of parameters of the landmark, and thus may increase the accuracy of the HD map construction.
  • the computer-readable medium may include volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other types of computer-readable medium or computer-readable storage devices.
  • the computer-readable medium may be the storage device or the memory module having the computer instructions stored thereon, as disclosed.
  • the computer-readable medium may be a disc or a flash drive having the computer instructions stored thereon.

Abstract

Systems and methods for updating an HD map. The system may include a communication interface (202) configured to receive sensor (140/150) data (203) acquired of a target region by at least one sensor (140/150) equipped on a vehicle (100), as the vehicle (100) travels along a trajectory, via a network. The system may further include a storage configured to store the HD map. The system may also include at least one processor (204). The at least one processor (204) may be configured to identify a plurality of data frames associated with a landmark, each data frame corresponding to a pose of the vehicle (100) on the trajectory. The at least one processor (204) may be further configured to determine a set of parameters of the landmark within each identified data frame. The at least one processor (204) may be further configured to associate the set of parameters with the pose of the vehicle (100) corresponding to each data frame. The at least one processor (204) may also be configured to construct the HD map based on the sets of parameters and the associated poses.

Description

SYSTEMS AND METHODS FOR CONSTRUCTING A HIGH-DEFINITION MAP

TECHNICAL FIELD
The present disclosure relates to systems and methods for constructing a high-definition (HD) map, and more particularly to, systems and methods for constructing an HD map based on integrating point cloud data acquired of a same landmark from different poses.
BACKGROUND
Autonomous driving technology relies heavily on an accurate map. For example, accuracy of a navigation map is critical to functions of autonomous driving vehicles, such as positioning, ambiance recognition, decision making and control. HD maps may be obtained by aggregating images and information acquired by various sensors, detectors, and other devices equipped on vehicles as they drive around. For example, a vehicle may be equipped with multiple integrated sensors such as a LiDAR, a Global Positioning System (GPS) receiver, one or more Inertial Measurement Unit (IMU) sensors, and one or more cameras, to capture features of the road on which the vehicle is driving or the surrounding objects. Data captured may include, for example, center line or border line coordinates of a lane, coordinates and images of an object, such as a building, another vehicle, a landmark, a pedestrian, or a traffic sign.
The point cloud data obtained by the integrated sensors may be affected by the errors from the sensors themselves (e.g., laser ranging error, GPS positioning error, IMU attitude measurement error, etc. ) . For example, errors of pose information accumulate significantly when the GPS signal is weak.
Some solutions have been developed to improve the accuracy of point cloud data acquisition. For example, one solution based on Kalman filtering integrates the LiDAR unit and the navigation unit (e.g., the GPS/IMU unit) to estimate the pose of the vehicle. Another solution is to optimize the pose information iteratively subject to a set of constraints (such as point cloud matching), for example using a Gauss–Newton method. Although these solutions may mitigate accuracy errors to some extent, they are not sufficiently robust and remain susceptible to noise in image coordinates. Therefore, an improved system and method for updating an HD map based on optimization techniques is needed.
Embodiments of the disclosure address the above problems by methods and systems for constructing an HD map by integrating point cloud data acquired of a same landmark from different poses.
SUMMARY
Embodiments of the disclosure provide a method for constructing an HD map. The method may include receiving, by a communication interface, sensor data acquired of a target region by at least one sensor equipped on a vehicle as the vehicle travels along a trajectory. The method may further include identifying, by at least one processor, a plurality of data frames associated with a landmark, each data frame corresponding to a pose of the vehicle on the trajectory. The method may further include determining, by the at least one processor, a set of parameters of the landmark within each identified data frame and associating the set of parameters with the pose of the vehicle corresponding to each data frame. The method may also include constructing, by the at least one processor, the HD map based on the sets of parameters and the associated poses.
Embodiments of the disclosure also provide a system for constructing an HD map. The system may include a communication interface configured to receive sensor data acquired of a target region by at least one sensor equipped on a vehicle, as the vehicle travels along a trajectory, via a network. The system may further include a storage configured to store the HD map. The system may also include at least one processor. The at least one processor may be configured to identify a plurality of data frames associated with a landmark, each data frame corresponding to a pose of the vehicle on the trajectory. The at least one processor may be further configured to determine a set of parameters of the landmark within each identified data frame and associate the set of parameters with the pose of the vehicle corresponding to each data frame. The at least one processor may also be configured to construct the HD map based on the sets of parameters and the associated poses.
Embodiments of the disclosure further provide a non-transitory computer-readable medium having instructions stored thereon that, when executed by one or more processors, cause the one or more processors to perform a method for updating an HD map. The method may include receiving sensor data acquired of a target region by at least one sensor equipped on a vehicle as the vehicle travels along a trajectory. The method may further include identifying a plurality of data frames associated with a landmark, each data frame corresponding to a pose of the vehicle on the trajectory. The method may further include determining a set of parameters of the landmark within each identified data frame and associating the set of parameters with the pose of the vehicle corresponding to each data frame. The method may also include constructing the HD map based on the sets of parameters and the associated poses.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates a schematic diagram of an exemplary vehicle equipped with sensors, according to embodiments of the disclosure.
FIG. 2 illustrates a block diagram of an exemplary system for constructing an HD map, according to embodiments of the disclosure.
FIG. 3 illustrates an exemplary method for optimizing the set of parameters of a landmark and poses of the vehicle, according to embodiments of the disclosure.
FIG. 4 illustrates a flowchart of an exemplary method for constructing an HD map, according to embodiments of the disclosure.
FIG. 5 shows an exemplary point cloud frame before and after applying the RANSAC algorithm, according to embodiments of the disclosure.
DETAILED DESCRIPTION
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
FIG. 1 illustrates a schematic diagram of an exemplary vehicle 100 having a plurality of sensors 140 and 150, according to embodiments of the disclosure. Consistent with some embodiments, vehicle 100 may be a survey vehicle configured for acquiring data for constructing an HD map or three-dimensional (3-D) city modeling. It is contemplated that vehicle 100 may be an electric vehicle, a fuel cell vehicle, a hybrid vehicle, or a conventional internal combustion engine vehicle. Vehicle 100 may have a body 110 and at least one wheel 120. Body 110 may be any body style, such as a sports vehicle, a coupe, a sedan, a pick-up truck, a station wagon, a sports utility vehicle (SUV), a minivan, or a conversion van. In some embodiments, vehicle 100 may include a pair of front wheels and a pair of rear wheels, as illustrated in FIG. 1. However, it is contemplated that vehicle 100 may have fewer wheels or equivalent structures that enable vehicle 100 to move around. Vehicle 100 may be configured to be all wheel drive (AWD), front wheel drive (FWD), or rear wheel drive (RWD). In some embodiments, vehicle 100 may be configured to be operated by an operator occupying the vehicle, remotely controlled, and/or autonomous.
As illustrated in FIG. 1, vehicle 100 may be equipped with various sensors 140 and 150. Sensor 140 may be mounted to body 110 via a mounting structure 130. Mounting structure 130 may be an electro-mechanical device installed or otherwise attached to body 110 of vehicle 100. In some embodiments, mounting structure 130 may use screws, adhesives, or another mounting mechanism. Vehicle 100 may be additionally equipped with sensor 150 inside or outside body 110 using any suitable mounting mechanisms. It is contemplated that the manners in which sensor 140 or 150 can be equipped on vehicle 100 are not limited by the example shown in FIG. 1 and may be modified depending on the types of sensors 140/150 and/or vehicle 100 to achieve desirable sensing performance.
Consistent with some embodiments, sensors 140 and 150 may be configured to capture data as vehicle 100 travels along a trajectory. For example, sensor 140 may be a LiDAR scanner configured to scan the surroundings and acquire point clouds. LiDAR measures the distance to a target by illuminating the target with pulsed laser light and measuring the reflected pulses with a sensor. Differences in laser return times and wavelengths can then be used to make digital 3-D representations of the target. The light used for LiDAR scans may be ultraviolet, visible, or near infrared. Because a narrow laser beam can map physical features with very high resolution, a LiDAR scanner is particularly suitable for HD map surveys. In some embodiments, a LiDAR scanner may capture point cloud data.
As vehicle 100 travels along the trajectory, sensor 140 may continuously capture data. Each set of scene data captured at a certain time range is known as a data frame. For example, the point cloud data captured by a LiDAR may include multiple point cloud data frames corresponding to different time ranges. Each data frame also corresponds to a pose of the vehicle along the trajectory. In some embodiments, the scene may include a landmark, and thus multiple data frames captured of the scene may include data associated with the landmark. Because the data frames are captured at different vehicle poses, the data in each frame contains landmark features observed from different angles and distances. These features, however, may be matched and associated among the different data frames, to facilitate construction of the HD map.
As illustrated in FIG. 1, vehicle 100 may be additionally equipped with sensor 150, which may include sensors used in a navigation unit, such as a GPS receiver and one or more IMU sensors. A GPS is a global navigation satellite system that provides geolocation and time information to a GPS receiver. An IMU is an electronic device that measures and provides a vehicle’s specific force, angular rate, and sometimes the magnetic field surrounding the vehicle, using various inertial sensors, such as accelerometers and  gyroscopes, sometimes also magnetometers. By combining the GPS receiver and the IMU sensor, sensor 150 can provide real-time pose information of vehicle 100 as it travels, including the positions and orientations (e.g., Euler angles) of vehicle 100 at each time point.
In some embodiments, the point cloud data acquired by the LiDAR unit of sensor 140 may be initially in a local coordinate system of the LiDAR unit and may need to be transformed into a global coordinate system (e.g., longitude/latitude coordinates) for later processing. Vehicle 100’s real-time pose information collected by sensor 150 of the navigation unit may be used for transforming the point cloud data from the local coordinate system into the global coordinate system by point cloud data registration, for example, based on vehicle 100’s poses at the time each point cloud data frame was acquired. In order to register the point cloud data with the matching real-time pose information, sensors 140 and 150 may be integrated as an integrated sensing system such that the point cloud data can be aligned by registration with the pose information when they are collected. The integrated sensing system may be calibrated with respect to a calibration target to reduce the integration errors, including but not limited to, mounting angle error and mounting vector error of sensors 140 and 150.
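A minimal sketch of this per-frame registration, assuming the pose at acquisition time is given as a rotation matrix and translation in global coordinates and that the LiDAR-to-vehicle extrinsics have already been applied (both assumptions for illustration):

    import numpy as np

    def register_frame(points_local, R_pose, t_pose):
        # points_local: (N, 3) frame in the LiDAR's local coordinate system
        # R_pose: (3, 3) rotation and t_pose: (3,) translation of the vehicle
        # pose; returns the frame expressed in global coordinates
        return points_local @ R_pose.T + t_pose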
Consistent with the present disclosure,  sensors  140 and 150 may communicate with server 160. In some embodiments, server 160 may be a local physical server, a cloud server (as illustrated in FIG. 1) , a virtual server, a distributed server, or any other suitable computing device. Consistent with the present disclosure, server 160 may construct an HD map. In some embodiments, the HD map may be constructed using point cloud data acquired by a LiDAR. LiDAR measures distance to a target by illuminating the target with pulsed laser light and measuring the reflected pulses with a sensor. Differences in laser return times and wavelengths can then be used to construct digital 3-D representations of the target. The light used for LiDAR scan may be ultraviolet, visible, or near infrared. Because a narrow laser beam can map physical features with very high resolution, LiDAR is particularly suitable for HD map surveys.
Consistent with the present disclosure, server 160 may construct the HD map based on point cloud data containing multiple data frames acquired of one or more landmarks from different vehicle poses. Server 160 may receive the point cloud data, identify landmarks within the multiple frames of point cloud data, associate sets of parameters with the landmarks, and construct HD maps based on the sets of parameters. Server 160 may communicate with sensors 140, 150, and/or other components of vehicle 100 via a network, such as a Wireless Local Area Network (WLAN), a Wide Area Network (WAN), wireless networks such as radio waves, a cellular network, a satellite communication network, and/or a local or short-range wireless network (e.g., Bluetooth™).
For example, FIG. 2 illustrates a block diagram of an exemplary server 160 for constructing an HD map, according to embodiments of the disclosure. Consistent with the present disclosure, server 160 may receive sensor data 203 from sensor 140 and vehicle pose information 205 from sensor 150. Based on sensor data 203, server 160 may identify data frames associated with landmarks, determine sets of parameters within the data frames, associate them with the poses of the vehicle when the respective data frames were acquired, and construct an HD map based on the sets of parameters.
In some embodiments, as shown in FIG. 2, server 160 may include a communication interface 202, a processor 204, a memory 206, and a storage 208. In some embodiments, server 160 may have different modules in a single device, such as an integrated circuit (IC) chip (implemented as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA)), or separate devices with dedicated functions. In some embodiments, one or more components of server 160 may be located in a cloud, or may alternatively be in a single location (such as inside vehicle 100 or a mobile device) or distributed locations. Components of server 160 may be in an integrated device or distributed at different locations but communicate with each other through a network (not shown).
Communication interface 202 may send data to and receive data from components such as sensors 140 and 150 via communication cables, a Wireless Local Area Network (WLAN), a Wide Area Network (WAN), wireless networks such as radio waves, a cellular network, and/or a local or short-range wireless network (e.g., Bluetooth™), or other communication methods. In some embodiments, communication interface 202 can be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection. As another example, communication interface 202 can be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links can also be implemented by communication interface 202. In such an implementation, communication interface 202 can send and receive electrical, electromagnetic or optical signals that carry digital data streams representing various types of information via a network.
Consistent with some embodiments, communication interface 202 may receive sensor data 203 such as point cloud data captured by sensor 140, as well as pose information 205 captured by sensor 150. Communication interface 202 may further provide the received data to storage 208 for storage or to processor 204 for processing. Communication interface 202 may also receive a point cloud generated by processor 204 and provide the point cloud to any local component in vehicle 100 or any remote device via a network.
Processor 204 may include any appropriate type of general-purpose or special-purpose microprocessor, digital signal processor, or microcontroller. Processor 204 may be configured as a separate processor module dedicated to constructing HD maps. Alternatively, processor 204 may be configured as a shared processor module that also performs other functions unrelated to HD map construction.
As shown in FIG. 2, processor 204 may include multiple modules, such as a landmark feature extraction unit 210, a landmark feature matching unit 212, a landmark parameter determination unit 214, an HD map construction unit 216, and the like. These modules (and any corresponding sub-modules or sub-units) can be hardware units (e.g., portions of an integrated circuit) of processor 204 designed for use with other components, or software units implemented by processor 204 through executing at least part of a program. The program may be stored on a computer-readable medium, and when executed by processor 204, it may perform one or more functions. Although FIG. 2 shows units 210-216 all within one processor 204, it is contemplated that these units may be distributed among multiple processors located near or remote from each other. For example, modules related to landmark feature extraction, such as landmark feature extraction unit 210, landmark feature matching unit 212, landmark parameter determination unit 214, etc., may be within a processor on vehicle 100. Modules related to constructing the HD map, such as HD map construction unit 216, may be within a processor on a remote server.
Landmark feature extraction unit 210 may be configured to extract landmark features from sensor data 203. In some embodiments, the landmark features may be geometric features of a landmark. Different methods may be used to extract the landmark features depending on the type of the landmark. For example, the landmark may be a road mark (e.g., a traffic lane or pedestrian marks) or a standing object (e.g., a tree or road board).

Processor 204 may determine the type of the landmark. In some embodiments, if the landmark is determined to be a road mark, landmark feature extraction unit 210 may identify the landmark based on the point cloud intensity of the landmark. For example, landmark feature extraction unit 210 may use a Random Sample Consensus (RANSAC) method to segment the point cloud data associated with the road surface on which the vehicle travels. Because road marks are typically made of special labeling materials that yield high-intensity point cloud returns, landmark feature extraction unit 210 may extract features of the road marks based on that intensity, for example, using region growing or clustering methods. In some other embodiments, if the landmark is determined to be a standing object, landmark feature extraction unit 210 may extract the landmark features based on a Principal Component Analysis (PCA) method. For example, landmark feature extraction unit 210 may use an orthogonal transformation to convert a set of observations of possibly correlated variables (e.g., point cloud data of the area near the landmark) into a set of values of linearly uncorrelated variables of the landmark.
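By way of illustration, a minimal sketch of this road-mark extraction pipeline is shown below, assuming point cloud data held as NumPy arrays of XYZ coordinates with per-point intensities; the function names, iteration count, and thresholds are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def ransac_ground_plane(points, n_iters=200, dist_thresh=0.05, seed=0):
    """Segment the road surface: fit a plane to (N, 3) XYZ points with RANSAC."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:           # skip degenerate (collinear) samples
            continue
        normal /= norm
        dist = np.abs((points - p0) @ normal)   # point-to-plane distance
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():  # keep the largest consensus set
            best_inliers = inliers
    return best_inliers

def extract_road_marks(points, intensity, intensity_thresh=0.7):
    """Keep high-intensity returns on the road plane as road-mark candidates."""
    on_road = ransac_ground_plane(points)
    return points[on_road & (intensity > intensity_thresh)]
```

In practice the surviving high-intensity points would then be grouped by a region growing or clustering step, as described above, before being treated as landmark features.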
Landmark feature matching unit 212 may be configured to divide sensor data 203 into subsets. For example, landmark feature matching unit 212 may divide sensor data 203 into data frames based on the time point the sensor data was captured. The data frames contain point cloud data associated with the same landmark captured at different vehicle poses along the trajectory. Landmark feature matching unit 212 may further be configured to match the landmark features among the subsets and identify the data frames associated with the landmark.
In some embodiments, landmark features may be matched using learning models trained on sample landmark features that are known to be associated with a same landmark. For example, landmark feature matching unit 212 may use landmark features such as types, collection properties, and/or geometric features of the landmark as sample landmark features and combine the features with the associated vehicle pose to identify the landmark within different subsets. Landmark feature matching unit 212 may then train learning models (e.g., using a rule-based machine learning method) on the sample landmark features that are associated with a same landmark. The trained model can then be applied to find matching landmark features.
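As a rough sketch of the matching idea (not the trained model itself), a rule-based scorer might compare coarse geometric descriptors of two candidate observations after using the associated vehicle poses to bring them into a common frame; the descriptor and scoring function below are assumptions for illustration, with poses simplified to 3-vector translations rather than the full pose described in the text.

```python
import numpy as np

def geometric_descriptor(points):
    """Coarse shape descriptor: centroid plus principal-axis spreads."""
    centroid = points.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov((points - centroid).T))
    spreads = np.sqrt(np.clip(eigvals, 0.0, None))  # guard tiny negatives
    return centroid, spreads

def matching_level(obs_a, obs_b):
    """Score two (points, vehicle_position) observations of a candidate landmark."""
    (pts_a, pose_a), (pts_b, pose_b) = obs_a, obs_b
    c_a, s_a = geometric_descriptor(pts_a)
    c_b, s_b = geometric_descriptor(pts_b)
    # Compare centroids in a common (global) frame via the simplified poses.
    pos_gap = np.linalg.norm((c_a + pose_a) - (c_b + pose_b))
    shape_gap = np.linalg.norm(s_a - s_b)
    return 1.0 / (1.0 + pos_gap + shape_gap)   # 1.0 means perfect agreement
```

A trained model would replace this hand-written score, but the inputs (geometric features combined with vehicle pose) are the same as those described above.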
Landmark parameter determination unit 214 may be configured to determine a set of parameters of the landmark based on the matched landmark features. In some embodiments, the set of parameters of the landmark may be determined based on the type of the landmark. For example, if the landmark is a line segment type object (e.g., a street light lamp stick), it may be represented with 4 or 6 degrees of freedom, including the line direction (2 degrees of freedom), tangential positions (2 degrees of freedom), and endpoints (0 or 2 degrees of freedom). As another example, if the landmark is a symmetric type object (e.g., a tree or road board), it may be represented with 5 degrees of freedom, including the normal vector (2 degrees of freedom) and the spatial location of the landmark (3 degrees of freedom). Landmarks that are neither of the above two types of object may be represented with 6 degrees of freedom, including Euler angles (3 degrees of freedom) and the spatial location of the landmark (3 degrees of freedom).
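These parameterizations can be made concrete as simple containers; a sketch follows, in which the class and field names are assumptions while the degree-of-freedom counts follow the text.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class LineSegmentLandmark:          # 4 or 6 DoF (e.g., a lamp post)
    direction: np.ndarray           # unit line direction: 2 DoF
    tangential: np.ndarray          # position perpendicular to the line: 2 DoF
    endpoints: Optional[np.ndarray] = None  # positions along the line: 0 or 2 DoF

@dataclass
class SymmetricLandmark:            # 5 DoF (e.g., a tree or road board)
    normal: np.ndarray              # unit normal vector: 2 DoF
    location: np.ndarray            # spatial location: 3 DoF

@dataclass
class GenericLandmark:              # 6 DoF (all other landmark types)
    euler_angles: np.ndarray        # orientation: 3 DoF
    location: np.ndarray            # spatial location: 3 DoF
```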
HD map construction unit 216 may be configured to construct an HD map based on the sets of parameters. In some embodiments, optimization methods may be used to construct the HD map. The matched landmark features obtained by landmark feature matching unit 212 and the sets of parameters determined by landmark parameter determination unit 214 provide additional constraints that can be used by the optimization method for HD map construction. In some embodiments, bundle adjustment may be added as an ancillary component of the optimization to improve the robustness of the map construction. For example, a bundle adjustment method may be applied in addition to a traditional map optimization method (e.g., to add constraints). The extended map optimization method (e.g., with bundle adjustment constraints added) is more robust in optimizing the vehicle pose information and the set of parameters of the landmark, and thus may increase the accuracy of the HD map construction (e.g., even when the GPS positioning accuracy is at a decimeter level, the HD map can still be constructed with centimeter-level accuracy).
In some embodiments, processor 204 may additionally include a sensor calibration unit (not shown) configured to determine one or more calibration parameters associated with sensor 140 or 150. In some embodiments, the sensor calibration unit may instead be inside vehicle 100, in a mobile device, or otherwise located remotely from processor 204. For example, sensor calibration may be used to calibrate a LiDAR scanner and the positioning sensor(s).
Memory 206 and storage 208 may include any appropriate type of mass storage provided to store any type of information that processor 204 may need to operate. Memory 206 and storage 208 may be a volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other type of storage device or tangible (i.e., non-transitory) computer-readable medium including, but not limited to, a ROM, a flash memory, a dynamic RAM, and a static RAM. Memory 206 and/or storage 208 may be configured to store one or more computer programs that may be executed by processor 204 to perform the HD map construction functions disclosed herein. For example, memory 206 and/or storage 208 may be configured to store program(s) that may be executed by processor 204 to construct an HD map based on sensor data captured by sensors 140 and 150.
Memory 206 and/or storage 208 may be further configured to store information and data used by processor 204. For instance, memory 206 and/or storage 208 may be configured to store the various types of sensor data (e.g., point cloud data frames, pose information, etc.) captured by sensors 140 and 150, as well as the HD map. Memory 206 and/or storage 208 may also store intermediate data such as machine learning models, landmark features, and sets of parameters associated with landmarks. The various types of data may be stored permanently, removed periodically, or disregarded immediately after each frame of data is processed.
FIG. 3 illustrates an exemplary method for optimizing the set of parameters of a landmark and the poses of the vehicle at which the data frames of the landmark are acquired, according to embodiments of the disclosure. As shown in FIG. 3, P_i represents the pose information of vehicle 100 at the time point i when sensor data F_i and V_i are acquired. In some embodiments, P_i may include [S_i T_i R_i], which stands for the vehicle pose at the moment sensor data F_i and V_i are acquired; for example, S_i, T_i, and R_i may be parameters representing the pose of vehicle 100 in global coordinates. Sensor data F_i is the observed set of parameters of the landmark (e.g., line direction, tangential positions, and endpoints for line segment type objects), and sensor data V_i is the difference between the vehicle pose information P_i and the observed set of parameters F_i. For example, P_1 is the pose information of vehicle 100 at the time point sensor data F_1 and V_1 are acquired, F_1 is the observed set of parameters of the landmark, and V_1 is the vector from P_1 to F_1.
In some embodiments, {F_1, F_2, …, F_n} and {V_1, V_2, …, V_n} may be divided into subsets of sensor data based on the time point the sensor data was collected (e.g., {F_1, F_2, …, F_n} may be divided into {F_1, F_2, …, F_k} and {F_{k+1}, F_{k+2}, …, F_n}; {V_1, V_2, …, V_n} may be divided into {V_1, V_2, …, V_k} and {V_{k+1}, V_{k+2}, …, V_n}). [C_C T_c R_c] represents the set of parameters of the landmark in global coordinates (e.g., C_C, T_c, and R_c may stand for the line direction, tangential positions, and endpoints of the landmark, respectively, if the landmark is a line segment type object). d_i represents an observation error, which equals the difference between the sensor data F_i acquired by sensors 140 and 150 and the set of parameters of the landmark [C_C T_c R_c] in global coordinates.
In some embodiments, the disclosed method includes first extracting landmark features from sensor data {F_1, F_2, …, F_i} and {V_1, V_2, …, V_i}. The method then includes dividing the sensor data into subsets. For example, sensor data {F_1, F_2, …, F_i} and {V_1, V_2, …, V_i} may be divided into subsets {F_1, F_2, …, F_m} and {V_1, V_2, …, V_m}, and {F_{m+1}, F_{m+2}, …, F_i} and {V_{m+1}, V_{m+2}, …, V_i}. The method may also include matching the landmark features among the subsets and identifying the plurality of data frames associated with the landmark. The method then includes determining sets of parameters of the landmarks within each identified data frame and associating the sets of parameters with the pose of the vehicle corresponding to each data frame P_i. Finally, the method includes optimizing the poses and the sets of parameters simultaneously. For example, the optimization method may be used to find the optimal {T_i, R_i, P_i} that minimize the sum of the observation errors:
$\min_{\{T_i,\, R_i,\, P_i\}} \sum_{i=1}^{n} \lVert d_i \rVert^2$
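A toy version of this joint optimization is sketched below, assuming 3-DoF vehicle positions in place of the full [S_i T_i R_i] pose and a 3-DoF landmark location; the weak prior term standing in for GPS constraints, the numeric values, and the use of scipy.optimize.least_squares are all illustrative assumptions rather than the disclosed formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(x, observations, prior_weight=0.1):
    """Stack observation errors d_i plus weak priors tying poses to GPS."""
    landmark = x[:3]                        # global landmark position
    errs = []
    for k, (F_i, P_i) in enumerate(observations):
        dp = x[3 + 3 * k: 6 + 3 * k]        # correction to the measured pose
        pose = P_i + dp
        errs.append(F_i - (landmark - pose))  # d_i: measured vs. predicted offset
        errs.append(prior_weight * dp)        # keep corrections near the GPS pose
    return np.concatenate(errs)

# Two frames observing the same landmark: vehicle-relative offsets F_i and
# measured vehicle positions P_i; the values are made up for illustration.
obs = [(np.array([2.0, 1.0, 0.0]), np.array([8.0, 4.0, 0.0])),
       (np.array([1.1, 0.6, 0.0]), np.array([9.0, 4.5, 0.0]))]
x0 = np.zeros(3 + 3 * len(obs))
sol = least_squares(residuals, x0, args=(obs,))
landmark_est = sol.x[:3]   # jointly refined with the per-frame pose corrections
```

Because each residual here is linear in the unknowns, the solver converges immediately; the full method would add rotation parameters and per-landmark-type observation models, which is where the bundle-adjustment-style constraints earn their keep.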
FIG. 4 illustrates a flowchart of an exemplary method 400 for constructing an HD map, according to embodiments of the disclosure. In some embodiments, method 400 may be implemented by an HD map construction system that includes, among other things, server 160 and sensors 140 and 150. However, method 400 is not limited to that exemplary embodiment. Method 400 may include steps S402-S416 as described below. It is to be appreciated that some of the steps may be optional, and some of the steps may be performed simultaneously or in a different order than shown in FIG. 4.
In step S402, one or more of  sensors  140 and 150 may be calibrated. In some embodiments, vehicle 100 may be dispatched for a calibration trip to collect data used for calibrating sensor parameters. Calibration may occur before the actual survey is performed for constructing and/or updating the map. Point cloud data captured by a LiDAR (as an example of sensor 140) and pose information acquired by positioning devices such as a GPS receiver and one or more IMU sensors may be calibrated.
In step S404, sensors 140 and 150 may capture sensor data 203 and pose information 205 as vehicle 100 travels along a trajectory. In some embodiments, sensor data 203 of the target region may be point cloud data. Vehicle 100 may be equipped with sensor 140, such as a LiDAR laser scanner. As vehicle 100 travels along the trajectory, sensor 140 may continuously capture sensor data 203 at different time points, each capture forming a frame of point cloud data. Vehicle 100 may also be equipped with sensor 150, such as a GPS receiver and one or more IMU sensors. Sensors 140 and 150 may form an integrated sensing system. In some embodiments, when vehicle 100 travels along the trajectory in the natural scene and sensor 140 captures the set of point cloud data indicative of the target region, sensor 150 may acquire real-time pose information of vehicle 100.
In some embodiments, the captured data, including e.g., sensor data 203 and pose information 205, may be transmitted from sensors 140/150 to server 160 in real-time. For example, the data may be streamed as they become available. Real-time transmission of data enables server 160 to process the data frame by frame in real-time while subsequent frames are being captured. Alternatively, data may be transmitted in bulk after a section of, or the entire survey is completed.
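A hedged sketch of such frame-by-frame processing is shown below, using a producer-consumer queue; the frame structure and handler are assumptions, and a real deployment would add batching, back-pressure, and network transport.

```python
import queue
import threading

frames: queue.Queue = queue.Queue()

def ingest(sensor_stream):
    """Producer: enqueue each (point_cloud, pose) frame as it arrives."""
    for frame in sensor_stream:
        frames.put(frame)
    frames.put(None)                    # sentinel: survey finished

def process(handle_frame):
    """Consumer: process frames while later ones are still being captured."""
    while (frame := frames.get()) is not None:
        handle_frame(frame)

# Example wiring: run ingestion in the background, process on the main thread.
# threading.Thread(target=ingest, args=(live_stream,), daemon=True).start()
# process(lambda f: print("frame received"))
```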
In step S406, processor 204 may extract landmark features from the sensor data. In some embodiments, landmark features may be extracted based on the type of the landmarks. For example, processor 204 may determine whether the landmarks are road marks (e.g., traffic lanes) or standing objects (e.g., trees or boards). In some embodiments, if the landmarks are determined to be road marks, processor 204 may identify the landmarks based on the point cloud intensity of the landmarks. For example, landmark feature extraction unit 210 may segment the sensor data using a RANSAC algorithm. Based on the segmentation, processor 204 may further identify the landmarks based on their point cloud intensity.
For example, FIG. 5 illustrates exemplary point clouds 510 and 520 of the same objects (e.g., road marks) before and after point cloud intensity identification of the landmarks, respectively, according to embodiments of the disclosure. Point cloud 510 of the road marks is data collected by sensors 140 and 150 before point cloud intensity identification. In contrast, point cloud 520 of the same road marks is data re-generated after point cloud intensity identification (e.g., using the RANSAC algorithm to segment the sensor data). The landmarks (road marks) are more distinguishable in point cloud 520 than in point cloud 510 because the sensor data collected by sensors 140 and 150 has been filtered by the RANSAC method to reduce noise.
In some other embodiments, if the landmarks are determined to be standing objects, processor 204 may identify the landmarks based on a Principal Component Analysis method. For example, processor 204 may use an orthogonal transformation to convert a set of observations of possibly correlated variables (e.g., point cloud data of the nearby area of the landmarks) into a set of values of linearly uncorrelated variables of the landmarks.
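A minimal PCA sketch using an SVD-based orthogonal transformation, assuming an (N, 3) NumPy point array, might look as follows; how the resulting axes are interpreted per landmark type is an illustrative note, not a prescription from the disclosure.

```python
import numpy as np

def principal_axes(points):
    """Orthogonal transformation of (N, 3) points into uncorrelated directions."""
    centered = points - points.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = s ** 2 / max(len(points) - 1, 1)
    return vt, variances   # rows of vt: principal directions, largest spread first

# E.g., for a pole-like object the first axis approximates the vertical;
# for a planar sign board the last (smallest-variance) axis approximates the normal.
```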
In step S408, processor 204 may divide the sensor data into subsets. For example, processor 204 may divide sensor data 203 into data frames based on the time point the sensor data was captured. The data frames contain point cloud data associated with the same landmark captured at different vehicle poses along the trajectory. Processor 204 may further be configured to match the landmark features among the subsets and identify the data frames associated with the landmark.
In some embodiments, landmark features may be matched using learning models trained on sample landmark features that are known to be associated with a same landmark. For example, processor 204 may use landmark features such as types, collection properties, and/or geometric features as sample landmark features and combine the features with the associated vehicle pose to identify the landmark within different subsets. Processor 204 may then train learning models (e.g., using a rule-based machine learning method) on the sample landmark features of a matched landmark. The trained model may be applied to match landmark features associated with the same landmark.
Based on the matching result, in step S410, processor 204 may identify a plurality of data frames associated with a landmark. For example, if the matching result of a plurality of data frames among different subsets is higher than a predetermined threshold level corresponding to a sufficient level of matching, processor 204 may associate the plurality of data frames with the landmark.
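The association rule itself can be as simple as a threshold test, as in the sketch below; the threshold value and data layout are assumed placeholders rather than values specified in the disclosure.

```python
MATCH_THRESHOLD = 0.8   # assumed value; tuned per deployment in practice

def frames_for_landmark(candidate_frames, reference, score_fn):
    """Associate every data frame whose matching level clears the threshold.

    candidate_frames: iterable of (frame, features) pairs; score_fn returns a
    matching level in [0, 1], e.g. the matching_level() sketch shown earlier.
    """
    return [frame for frame, features in candidate_frames
            if score_fn(features, reference) >= MATCH_THRESHOLD]
```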
In step S412, processor 204 may determine a set of parameters associated with the landmark. In some embodiments, the set of parameters of the landmark may be determined based on the type of the landmark.
For example, if the landmark is a line segment type object (e.g., a street light lamp stick), it may be represented with 4 or 6 degrees of freedom, including the line direction (2 degrees of freedom), tangential positions (2 degrees of freedom), and endpoints (0 or 2 degrees of freedom). As another example, if the landmark is a symmetric type object (e.g., a tree or road board), it may be represented with 5 degrees of freedom, including the normal vector (2 degrees of freedom) and the spatial location of the landmark (3 degrees of freedom). Landmarks that are neither of the above two types of object may be represented with 6 degrees of freedom, including Euler angles (3 degrees of freedom) and the spatial location of the landmark (3 degrees of freedom).
In step S414, processor 204 may associate the set of parameters with the pose of the vehicle corresponding to each data frame. For example, each set of parameters may be associated with the pose information 205 of vehicle 100 at the time point the data frame is acquired.
In step S416, processor 204 may construct an HD map based on the sets of parameters and the associated pose information. In some embodiments, optimization methods may be used to construct the HD map. The matched landmark features obtained in step S410 and the sets of parameters determined in step S412 may provide additional constraints that can be used by the optimization method for HD map construction. In some embodiments, bundle adjustment may be added as an ancillary component of the optimization to improve the robustness of the map construction. For example, a bundle adjustment method may be applied in addition to a traditional map optimization method (e.g., to add constraints). The extended map optimization method (e.g., with bundle adjustment constraints added) is more robust in optimizing the vehicle pose and the set of parameters of the landmark, and thus may increase the accuracy of the HD map construction.
Another aspect of the disclosure is directed to a non-transitory computer-readable medium storing instructions which, when executed, cause one or more processors to perform the methods, as discussed above. The computer-readable medium may include volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other types of computer-readable medium or computer-readable storage devices. For example, the computer-readable medium may be the storage device or the memory module having the computer instructions stored thereon, as disclosed. In some embodiments, the computer-readable medium may be a disc or a flash drive having the computer instructions stored thereon.
It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed system and related methods. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed system and related methods.
It is intended that the specification and examples be considered as exemplary only, with a true scope being indicated by the following claims and their equivalents.

Claims (20)

  1. A method for constructing an HD map, comprising:
    receiving, by a communication interface, sensor data acquired of a target region by at least one sensor equipped on a vehicle as the vehicle travels along a trajectory;
    identifying, by at least one processor, a plurality of data frames associated with a landmark, each data frame corresponding to a pose of the vehicle on the trajectory;
    determining, by the at least one processor, a set of parameters of the landmark within each identified data frame;
    associating the set of parameters with the pose of the vehicle corresponding to each data frame; and
    constructing, by the at least one processor, the HD map based on the sets of parameters and the associated poses.
  2. The method of claim 1, wherein identifying the plurality of data frames associated with the landmark further comprises:
    extracting landmark features from the sensor data;
    dividing the sensor data into subsets; and
    matching the landmark features among the subsets to identify the plurality of data frames associated with the landmark.
  3. The method of claim 2, wherein extracting the landmark features further comprises:
    segmenting the sensor data; and
    recognizing the landmark based on the segmented sensor data.
  4. The method of claim 3, wherein segmenting the sensor data is based on a RANSAC algorithm.
  5. The method of claim 4, wherein recognizing the landmark further comprises determining material features of the landmark based on intensity bands.
  6. The method of claim 3, wherein recognizing the landmark further comprises determining geometric features of the landmark based on a PCA method.
  7. The method of claim 1, wherein the at least one sensor includes a LiDAR, and the sensor data include point cloud data.
  8. The method of claim 2, wherein matching the landmark features among the subsets further comprises calculating a matching level of the landmark features between two subsets.
  9. The method of claim 1, wherein the set of parameters of the landmark include a direction, a tangent and an endpoint of the landmark.
  10. The method of claim 1, wherein the set of parameters of the landmark include Euler angles and a spatial location of the landmark within the target region.
  11. The method of claim 1, wherein the set of parameters of the landmark include a normal vector and a spatial location of the landmark.
  12. The method of claim 1, wherein constructing the HD map further comprises optimizing the poses and the sets of parameters simultaneously.
  13. A system for constructing an HD map, comprising:
    a communication interface configured to receive sensor data acquired of a target region by at least one sensor equipped on a vehicle as the vehicle travels along a trajectory via a network;
    a storage configured to store the HD map; and
    at least one processor configured to:
    identify a plurality of data frames associated with a landmark, each data frame corresponding to a pose of the vehicle on the trajectory;
    determine a set of parameters of the landmark within each identified data frame;
    associate the set of parameters with the pose of the vehicle corresponding to each data frame; and
    construct the HD map based on the sets of parameters and the associated poses.
  14. The system of claim 13, wherein to identify the plurality of data frames associated with the landmark the at least one processor is further configured to:
    extract landmark features from the sensor data;
    divide the sensor data into subsets; and
    match the landmark features among the subsets to identify the plurality of data frames associated with the landmark.
  15. The system of claim 14, wherein to match the landmark features among the subsets, the at least one processor is further configured to calculate a matching level of the landmark features between two subsets.
  16. The system of claim 14, wherein to extract the landmark features, the at least one processor is further configured to:
    segment the sensor data; and
    recognize the landmark based on the segmented sensor data.
  17. The system of claim 16, wherein the at least one processor is further configured to segment the sensor data based on a RANSAC algorithm.
  18. The system of claim 13, wherein the at least one processor is further configured to determine geometric features of the landmark based on a PCA method.
  19. The system of claim 13, wherein the at least one processor is further configured to optimize the poses and the sets of parameters simultaneously.
  20. A non-transitory computer-readable medium having a computer program stored thereon, wherein the computer program, when executed by at least one processor, performs a method for constructing an HD map, the method comprising:
    receiving sensor data acquired of a target region by at least one sensor equipped on a vehicle as the vehicle travels along a trajectory;
    identifying a plurality of data frames associated with a landmark, each data frame corresponding to a pose of the vehicle on the trajectory;
    determining a set of parameters of the landmark within each identified data frame;
    associating the set of parameters with the pose of the vehicle corresponding to each data frame; and
    constructing the HD map based on the sets of parameters and the associated poses.
PCT/CN2018/119199 2018-12-04 2018-12-04 Systems and methods for constructing high-definition map WO2020113425A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880095637.XA CN112424568A (en) 2018-12-04 2018-12-04 System and method for constructing high-definition map
PCT/CN2018/119199 WO2020113425A1 (en) 2018-12-04 2018-12-04 Systems and methods for constructing high-definition map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/119199 WO2020113425A1 (en) 2018-12-04 2018-12-04 Systems and methods for constructing high-definition map

Publications (1)

Publication Number Publication Date
WO2020113425A1

Family

ID=70973697

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/119199 WO2020113425A1 (en) 2018-12-04 2018-12-04 Systems and methods for constructing high-definition map

Country Status (2)

Country Link
CN (1) CN112424568A (en)
WO (1) WO2020113425A1 (en)

Cited By (2)

Publication number Priority date Publication date Assignee Title
WO2023028892A1 (en) * 2021-08-31 2023-03-09 Intel Corporation Hierarchical segment-based map optimization for localization and mapping system
CN113984071B (en) * 2021-09-29 2023-10-13 云鲸智能(深圳)有限公司 Map matching method, apparatus, robot, and computer-readable storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
US20160161265A1 (en) * 2014-12-09 2016-06-09 Volvo Car Corporation Method and system for improving accuracy of digital map data utilized by a vehicle
WO2017215964A1 (en) * 2016-06-14 2017-12-21 Robert Bosch Gmbh Method and apparatus for producing an optimised localisation map, and method for producing a localisation map for a vehicle
CN108351218A (en) * 2015-11-25 2018-07-31 大众汽车有限公司 Method and system for generating numerical map
DE102017207257A1 (en) * 2017-04-28 2018-10-31 Robert Bosch Gmbh Method and apparatus for creating and providing a high accuracy card

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US20130060461A1 (en) * 2011-09-07 2013-03-07 INRO Technologies Limited Method and apparatus for using pre-positioned objects to localize an industrial vehicle
US9342888B2 (en) * 2014-02-08 2016-05-17 Honda Motor Co., Ltd. System and method for mapping, localization and pose correction of a vehicle based on images
US10962982B2 (en) * 2016-07-21 2021-03-30 Mobileye Vision Technologies Ltd. Crowdsourcing the collection of road surface information
JP2019527832A (en) * 2016-08-09 2019-10-03 ナウト, インコーポレイテッドNauto, Inc. System and method for accurate localization and mapping
CN108280866B (en) * 2016-12-30 2021-07-27 法法汽车(中国)有限公司 Road point cloud data processing method and system

Also Published As

Publication number Publication date
CN112424568A (en) 2021-02-26


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18941997; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18941997; Country of ref document: EP; Kind code of ref document: A1)