WO2020107151A1 - Systems and methods for managing a high-definition map - Google Patents

Systems and methods for managing a high-definition map

Info

Publication number
WO2020107151A1
WO2020107151A1 (application PCT/CN2018/117443)
Authority
WO
WIPO (PCT)
Prior art keywords: point cloud, map, tiles, map tiles, tile
Prior art date
Application number
PCT/CN2018/117443
Other languages
French (fr)
Inventor
Lu Feng
Xing Nian
Teng MA
Original Assignee
Beijing Didi Infinity Technology And Development Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology And Development Co., Ltd. filed Critical Beijing Didi Infinity Technology And Development Co., Ltd.
Priority to PCT/CN2018/117443 priority Critical patent/WO2020107151A1/en
Priority to CN201880092600.1A priority patent/CN112074871B/en
Publication of WO2020107151A1 publication Critical patent/WO2020107151A1/en

Classifications

    • G06T17/05 Geographic models
    • G01C21/3881 Tile-based structures of map data
    • G01C21/3867 Geometry of map features, e.g. shape points, polygons or for simplified maps
    • G01C21/3889 Transmission of selected map data, e.g. depending on route
    • G01C21/3896 Transmission of map data from central databases
    • G06T17/005 Tree description, e.g. octree, quadtree
    • G06T9/001 Model-based coding, e.g. wire frame
    • G06T9/40 Tree coding, e.g. quadtree, octree
    • H04N19/597 Predictive coding specially adapted for multi-view video sequence encoding
    • G06T2210/56 Particle system, point based geometry or rendering

Definitions

  • the present disclosure relates to systems and methods for managing a high-definition map, and more particularly to, systems and methods for managing a large-scale high-definition map for positioning services in autonomous driving applications.
  • a vehicle’s movement is partially or entirely controlled by a driving control system.
  • the driving control system makes driving decisions based on live information as well as prior knowledge about the vehicle’s surrounding areas.
  • the live information can be obtained through various sensors, such as one or more cameras, Global Positioning System (GPS) receivers, Inertial Measurement Unit (IMU) sensors, and/or LiDARs.
  • the prior knowledge such as map information, can be downloaded from a remote server and stored in a local storage.
  • the live information can then be compared with prior knowledge by the driving control system to assist in making driving decisions. For example, positioning of the vehicle can be achieved by matching live images of the road on which the vehicle is travelling and certain features along the road with previously acquired images.
  • the live images may be in a two-dimensional (2D) form captured by one or more cameras mounted on the vehicle.
  • This 2D-image-based method may not be able to provide high-precision positioning due to limitations in ambient lighting, image distortions, field of view, etc.
  • a more precise approach is to use a LiDAR, which can capture a 3D image, also referred to as a point cloud, representing surface profiles of surrounding objects with built-in range/distance information.
  • the captured point cloud data can then be compared with a previously acquired point cloud repository, also referred to as a high-definition map, to determine the current position of the vehicle.
  • a high-definition map needs routine and frequent updates to account for changes in the road conditions.
  • the updates may be conducted by dispatching a survey vehicle to an area of interest to capture new point cloud data.
  • the new point cloud data may then be used to replace corresponding outdated point cloud data in the high-definition map.
  • point cloud data captured from new, uncharted areas may be added to the high-definition map to expand its coverage.
  • Such frequent updates and expansion to the high-definition map involve processing a large amount of data and it is challenging to effectively and efficiently manage the information contained in the high-definition map.
  • Embodiments of the disclosure address the above problems by methods and systems for managing a high-definition map based on a hierarchy of map tiles to organize positioning point cloud blocks and multi-resolution compression algorithms to enhance data storage and delivery efficiency.
  • Embodiments of the disclosure provide a system for managing a high-definition map.
  • the system may include at least one storage device configured to store point cloud data and instructions.
  • the system may also include at least one processor configured to execute the instructions to perform operations for managing the high-definition map based on the point cloud data.
  • the operations may include determining geographic coordinates of a point cloud.
  • the operations may also include associating the point cloud with one or more map tiles of the high-definition map based on the geographic coordinates. For each of the one or more map tiles associated with the point cloud, the operations may include generating a positioning point cloud block corresponding to the map tile based on at least a portion of the point cloud.
  • the operations may include providing the positioning point cloud block corresponding to at least one map tile to a client device based on information indicating a location of the client device.
  • Embodiments of the disclosure also provide a method for managing a high-definition map.
  • the method may include determining geographic coordinates of a point cloud.
  • the method may also include associating the point cloud with one or more map tiles of the high-definition map based on the geographic coordinates.
  • the method may include generating a positioning point cloud block corresponding to the map tile based on at least a portion of the point cloud.
  • the method may include providing the positioning point cloud block corresponding to at least one map tile to a client device based on information indicating a location of the client device.
  • Embodiments of the disclosure further provide a non-transitory computer-readable medium having instructions stored thereon that, when executed by at least one processor, cause the at least one processor to perform a method for managing a high-definition map.
  • the method may include determining geographic coordinates of a point cloud.
  • the method may also include associating the point cloud with one or more map tiles of the high-definition map based on the geographic coordinates.
  • the method may include generating a positioning point cloud block corresponding to the map tile based on at least a portion of the point cloud.
  • the method may include providing the positioning point cloud block corresponding to at least one map tile to a client device based on information indicating a location of the client device.
  • FIG. 1 illustrates an exemplary autonomous driving scene, according to embodiments of the disclosure.
  • FIG. 2 illustrates a schematic diagram of an exemplary vehicle equipped with sensors, according to embodiments of the disclosure.
  • FIG. 3 illustrates a block diagram of an exemplary system for managing a high-definition map, according to embodiments of the disclosure.
  • FIG. 4 shows an exemplary hierarchical method for managing map data, according to embodiments of the disclosure.
  • FIG. 5 shows an exemplary multi-resolution compression scheme, according to embodiments of the disclosure.
  • FIG. 6 illustrates an exemplary queuing process, according to embodiments of the disclosure.
  • FIG. 7 illustrates an exemplary system for providing high-resolution positioning map service, according to embodiments of the disclosure.
  • FIG. 8 illustrates a flowchart of an exemplary method for managing a high-definition map, according to embodiments of the disclosure.
  • FIG. 1 illustrates an exemplary autonomous driving scene.
  • a vehicle 110 may be partially or entirely controlled by a driving control system 112 to travel along a road 120.
  • Vehicle 110 may be equipped with a sensor system 114, which may capture live information about vehicle 110’s surrounding areas, such as traffic marks 144, trees 140, building 142, etc.
  • Vehicle 110 may also communicate with a server 130 to receive map information in an area covering vehicle 110.
  • Driving control system 112 may process the live information captured by sensor system 114 and map information received from server 130 to determine driving instructions for controlling vehicle 110.
  • server 130 may maintain and manage a high-definition map including point cloud data.
  • server 130 may provide point cloud data previously obtained in or about the same location of vehicle 110 to driving control system 112.
  • Driving control system 112 may then compare the received point cloud data with live-captured point cloud information (e.g., captured by sensor system 114) to determine high-precision position information of vehicle 110, which may be used by driving control system 112 to make driving decisions.
  • FIG. 2 illustrates a schematic diagram of an exemplary vehicle 110 having sensor system 114 and driving control system 112, according to embodiments of the disclosure.
  • vehicle 110 may be an electric vehicle, a fuel cell vehicle, a hybrid vehicle, or a conventional internal combustion engine vehicle.
  • Vehicle 110 may have a body 116 and at least one wheel 118.
  • Body 116 may be any body style, such as a sports vehicle, a coupe, a sedan, a pick-up truck, a station wagon, a sports utility vehicle (SUV) , a minivan, or a conversion van.
  • vehicle 110 may include a pair of front wheels and a pair of rear wheels, as illustrated in FIG. 2.
  • vehicle 110 may have more or fewer wheels or equivalent structures that enable vehicle 110 to move around.
  • Vehicle 110 may be configured to be all wheel drive (AWD), front wheel drive (FWD), or rear wheel drive (RWD).
  • vehicle 110 may be configured to be operated by an operator occupying the vehicle, remotely controlled, and/or autonomously controlled by driving control system 112.
  • vehicle 110 may be equipped with sensor system 114.
  • sensor system 114 may be mounted or attached to the outside of body 116, as shown in FIG. 2.
  • sensor system 114 may be equipped inside body 116.
  • sensor system 114 may include part of its component(s) equipped outside body 116 and part of its component(s) equipped inside body 116. It is contemplated that the manners in which sensor system 114 can be equipped on vehicle 110 are not limited by the example shown in FIG. 2, and may be modified depending on the types of sensor(s) included in sensor system 114 and/or vehicle 110 to achieve desirable sensing performance.
  • sensor system 114 may be configured to capture live data as vehicle 110 travels along a path. Consistent with the present disclosure, sensor system 114 may include a LiDAR to capture point cloud data of the surrounding. LiDAR measures distance to a target by illuminating the target with pulsed laser light and measuring the reflected pulses with a sensor. Differences in laser return times and wavelengths can then be used to construct digital 3-D representations of the target.
  • the light used for a LiDAR scan may be ultraviolet, visible, or near infrared. Because a narrow laser beam can map physical features with very high resolution, a LiDAR scanner is particularly suitable for high-resolution positioning. For example, the LiDAR may capture a point cloud frame at each of a sequence of time points.
  • Each point cloud frame may represent a 3D surface profile of the surrounding objects at one particular time point.
  • Multiple point cloud frames may be combined (e.g., through time/space shifting) to form a point cloud, which may represent the 3D surface profile of objects within a certain space.
  • a point cloud may represent the surface profile of objects along a certain distance of the path travelled by vehicle 110.
  • the surface profile may be represented by a spatial distribution of points at which the light waves emitted by the LiDAR are reflected.
  • live point cloud information can be obtained by sensor system 114, which may be used to compare with a previously captured point cloud (e.g., a portion of the high-definition map) to determine the position of vehicle 110 with high precision.
  • sensor system 114 may also include a navigation unit, such as a GPS receiver and one or more IMU sensors.
  • a GPS is a global navigation satellite system that provides location and time information to a GPS receiver. Since the location information provided by the GPS receiver (e.g., for civilian usage) usually cannot achieve the high level of resolution or precision required by autonomous driving, the GPS location information may be used to estimate a rough position of vehicle 110, while the high-resolution/high-precision positioning may be accomplished by processing the point cloud information based on the estimate.
  • An IMU is an electronic device that measures and provides a vehicle’s specific force, angular rate, and sometimes the magnetic field surrounding the vehicle, using various inertial sensors, such as accelerometers and gyroscopes, sometimes also magnetometers. Information captured by the IMU may also be used to position vehicle 110.
  • Vehicle 110 may communicate with server 130 to obtain prior knowledge of the path it travels along, such as map information.
  • Server 130 may be a local physical server, a cloud server (as illustrated in FIGs. 1 and 2) , a virtual server, a distributed server, or any other suitable computing device. Consistent with the present disclosure, server 130 may store a high-definition map. The high-definition map may be constructed using point cloud data, which may be acquired by one or more LiDARs during survey trip (s) .
  • server 130 may be also responsible for managing the high-definition map, including organizing point cloud data, updating point cloud data from time to time to reflect changes at certain portions of the map, and/or providing point cloud information to vehicles requesting high-resolution/high-precision positioning service.
  • Server 130 may communicate with vehicle 110, and/or components of vehicle 110 (e.g., sensor system 114, driving control system 112, etc. ) via a network, such as a Wireless Local Area Network (WLAN) , a Wide Area Network (WAN) , wireless networks such as radio waves, a cellular network, a satellite communication network, and/or a local or short-range wireless network (e.g., Bluetooth TM ) .
  • FIG. 3 shows an exemplary server 130 for managing a high-definition map, according to embodiments of the disclosure.
  • server 130 may receive a point cloud 302.
  • Point cloud 302 may be provided by a survey vehicle equipped with a sensor system similar to sensor system 114.
  • Point cloud 302 may cover a geographic area of interest, such as an area requiring updates or a new area.
  • Server 130 may be configured to aggregate point cloud 302 into the high-definition map.
  • server 130 may include a communication interface 310, a processor 320, a memory 330, and a storage 340.
  • server 130 may have different modules in a single device, such as an integrated circuit (IC) chip (implemented as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA) ) , or separate devices with dedicated functions.
  • server 130 may be located in a cloud, or may be alternatively in a single location (such as inside vehicle 110 or a mobile device) or distributed locations.
  • Components of server 130 may be in an integrated device, or distributed at different locations but communicate with each other through a network (not shown) .
  • Communication interface 310 may send data to and receive data from a vehicle (e.g., an autonomous driving vehicle or a survey vehicle) or its components such as sensor system 114 and/or driving control system 112 via communication cables, a Wireless Local Area Network (WLAN) , a Wide Area Network (WAN) , wireless networks such as radio waves, a cellular network, and/or a local or short-range wireless network (e.g., Bluetooth TM ) , or other communication methods.
  • communication interface 310 can be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection.
  • communication interface 310 can be a local area network (LAN) card to provide a data communication connection to a compatible LAN.
  • Wireless links can also be implemented by communication interface 310.
  • communication interface 310 can send and receive electrical, electromagnetic or optical signals that carry digital data streams representing various types of information via a network.
  • communication interface 310 may receive point cloud 302.
  • Communication interface 310 may further provide the received point cloud 302 to storage 340 for storage or to processor 320 for processing.
  • Communication interface 310 may also receive a positioning point cloud block generated by processor 320, and provide the positioning point cloud block to any component in vehicle 110.
  • Processor 320 may include any appropriate type of general-purpose or special-purpose microprocessor, digital signal processor, or microcontroller. Processor 320 may be configured as a separate processor module dedicated to updating the high-definition map. Alternatively, processor 320 may be configured as a shared processor module that also performs functions unrelated to high-definition map management.
  • processor 320 may include multiple modules, such as a point cloud tiling unit 322, a positioning point cloud block generating unit 324, and a queuing unit 326, and the like. These modules (and any corresponding sub-modules or sub-units) can be hardware units (e.g., portions of an integrated circuit) of processor 320 designed for use with other components or software units implemented by processor 320 through executing at least part of a program.
  • the program may be stored on a computer-readable medium, and when executed by processor 320, it may perform one or more functions or operations.
  • FIG. 3 shows units 322-326 all within one processor 320, it is contemplated that these units may be distributed among multiple processors located near or remotely with each other.
  • Point cloud tiling unit 322 may be configured to associate a point cloud contained in point cloud 302 with one or more map tiles of the high-definition map.
  • the high-definition map managed by server 130 may have a hierarchic structure in which a large-scale map is divided into multiple levels of map tiles.
  • FIG. 4 shows an exemplary hierarchic structure based on the Web Mercator projection. Under this structure, a large-scale map, such as the world map, is represented by a different number of map tiles at different levels. Each map tile may have a predetermined resolution, such as 256×256.
  • the whole map is represented as a single map tile, which may be denoted by its level number 0 and tile coordinate (tileX, tileY) , in this case (0, 0) .
  • every map tile in the previous level (level 0) becomes 4 map tiles in a 2×2 layout.
  • the single map tile in level 0 is refined into 4 map tiles having map tile coordinates (0, 0), (1, 0), (0, 1), and (1, 1). Because each map tile has the same predetermined resolution (e.g., 256×256), the resolution of the whole world map in level 1 becomes 512×512, higher than the previous level (level 0).
  • every map tile in the previous level is further divided into four map tiles, quadrupling the total number of map tiles.
  • In level 3, as shown in FIG. 4, the total number of map tiles becomes 64 (8×8).
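  • The quadtree subdivision described above can be sketched in a few lines of Python; the helper names are illustrative, not taken from the patent:

```python
def children(tile_x, tile_y):
    """Tile (tile_x, tile_y) at level L splits into a 2x2 block at level L+1."""
    return [(2 * tile_x + dx, 2 * tile_y + dy) for dy in (0, 1) for dx in (0, 1)]

def parent(tile_x, tile_y):
    """Inverse mapping: the level-(L-1) tile that contains this tile."""
    return (tile_x // 2, tile_y // 2)

def tiles_at_level(level):
    """Total number of map tiles at a given level: 4**level."""
    return 4 ** level
```

Since every split quadruples the tile count while each tile keeps the same 256×256 resolution, the effective map resolution doubles per axis at each level.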
  • the map is divided into smaller map tiles, and since each map tile has the same predetermined resolution, a smaller map tile may provide finer details.
  • a location on the map can be represented by a series of map tiles. For example, in level 0, every location corresponds to the same map tile because a single map tile covers the entire world. In level 1, however, the same location is represented, in most cases, by only one of the four tiles. Similarly, at level 3, the same location is represented, in most cases, by only one of the 64 tiles. If a location occupies two or more map tiles in a particular level, then multiple map tiles at that level are used to represent it.
  • a map tile can be identified by its level number and map tile coordinates. Therefore, locating a place on the map can be equivalent to finding the map tile coordinates at a particular level, depending on the required resolution.
  • Server 130 may pre-store map tiles at different levels and provide one or more pre-stored map tiles to a requesting client according to the proper resolution level. In this way, no map data needs to be generated on-the-fly on the server side, and the limit on resolution depends mainly on the available bandwidth.
  • point cloud data may be associated with map tiles such that a point cloud may be represented by a series of map tiles at different levels.
  • point cloud tiling unit 322 may determine geographic coordinates of point cloud 302 and convert the geographic coordinates to map tile coordinates. The conversion may be performed as follows:
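  • The patent does not reproduce the exact conversion here, but the standard Web Mercator (slippy-map) tiling formula gives a representative sketch of mapping geographic coordinates to map tile coordinates at a chosen level; treat the function name and clamping as illustrative assumptions:

```python
import math

def geo_to_tile(lat_deg, lon_deg, level):
    """Convert geographic coordinates to Web Mercator tile coordinates.

    Standard slippy-map tiling: tile x grows eastward, tile y grows
    southward, and there are 2**level tiles per axis.
    """
    n = 2 ** level  # number of tiles per axis at this level
    tile_x = min(n - 1, int((lon_deg + 180.0) / 360.0 * n))
    lat_rad = math.radians(lat_deg)
    tile_y = min(n - 1, int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n))
    return tile_x, tile_y
```

With this convention, level 0 always yields tile (0, 0), matching the single world tile described above.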
  • point cloud 302 can be associated with one or more map tiles of the high-definition map.
  • map tile (A1, B1) is divided into four smaller map tiles (X1, Y1) , (X1, Y2) , (X2, Y1) , and (X2, Y2) in level A+1.
  • point cloud 302’s geographic coordinates correspond to those of map tiles (X1, Y2) and (X2, Y2), but not (X1, Y1) and (X2, Y1).
  • point cloud 302 may be associated with map tiles (X1, Y2) and (X2, Y2) . Similar associations can be performed at any level in the hierarchy.
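  • The association step above can be sketched as finding every tile at a given level whose extent intersects the point cloud's geographic bounding box. The sketch below assumes the standard Web Mercator tiling; the function names are illustrative:

```python
import math

def tiles_covering(lat_min, lat_max, lon_min, lon_max, level):
    """Tiles at `level` that intersect a point cloud's lat/lon bounding box."""
    def to_tile(lat, lon):
        # Standard Web Mercator conversion (an assumption, not the patent's).
        n = 2 ** level
        x = min(n - 1, int((lon + 180.0) / 360.0 * n))
        y = min(n - 1, int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n))
        return x, y
    x0, y1 = to_tile(lat_min, lon_min)  # south-west corner: west x, largest y
    x1, y0 = to_tile(lat_max, lon_max)  # north-east corner: east x, smallest y
    return {(x, y) for x in range(x0, x1 + 1) for y in range(y0, y1 + 1)}
```

A point cloud straddling a tile boundary is thereby associated with every tile it touches, mirroring the (X1, Y2)/(X2, Y2) example above.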
  • positioning point cloud block generating unit 324 may generate a positioning point cloud block (or “positioning block” for short) for each map tile.
  • the positioning block is a 3D representation of the portion of point cloud 302 that falls within an individual map tile in a particular level.
  • unit 324 may generate a positioning block for each map tile in that level.
  • unit 324 may generate positioning blocks for different levels.
  • unit 324 may be configured to generate a representation of the point cloud in different levels according to the different resolutions at those levels.
  • point cloud 302 may contain a large amount of data that is inefficient to store in its original form.
  • unit 324 may compress the point cloud data corresponding to each associated map tile in a particular level based on the map resolution in that level. For example, referring to the lower left corner of FIG. 4, at level A, point cloud 302 is associated with map tile (A1, B1), which may have a resolution of 256×256.
  • unit 324 may represent the space defined by map tile (A1, B1) and a height sufficient to cover point cloud 302 using a plurality of voxels.
  • Each voxel may be a cube having a side length equal to 1/256 of the side of map tile (A1, B1).
  • Point cloud 302 may be much denser than the cubic voxels, so multiple points of point cloud 302 may fall within a single cubic voxel.
  • FIG. 5 shows an exemplary voxel view of the tile (A1, B1) , where only four voxels are shown.
  • voxel 502 contains several points of point cloud 302.
  • unit 324 may use a Normal Distribution Transform (NDT) algorithm.
  • NDT Normal Distribution Transform
  • unit 324 calculates a 3D distribution of a surface feature corresponding to the voxel, including an average intensity value and an intensity distribution along the x, y, and z directions:
  • p̄ = (1/n) Σᵢ pᵢ is the average intensity value, where pᵢ are the intensity values of the point cloud points within the voxel, and
  • σ² = (1/n) Σᵢ (pᵢ − p̄)² is the variance of the Gaussian distribution.
  • unit 324 can represent all the points of point cloud 302 within voxel 502 with a single point 510, having an average intensity value and intensity distribution along x, y, and z directions. In other words, unit 324 compresses all the points of point cloud 302 within voxel 502 into a single point 510.
  • each voxel contains at most one compressed point that represents the original one or more points in point cloud 302 that fall within that voxel.
  • unit 324 may generate a positioning block corresponding to the map tile (e.g., map tile (A1, B1) ) using the compressed points within the space defined by map tile (A1, B1) .
  • Unit 324 may store the positioning block in memory 330 and/or storage 340. Because the compressed positioning block has the same resolution as all positioning blocks in the same level, aggregation and integration of positioning blocks within the same level are relatively easy to achieve.
  • Unit 324 may also compress point cloud 302 in different levels using different voxel volumes. For example, referring to FIG. 5, in level A+1, each voxel of level A becomes eight smaller voxels. The calculation of 3D distribution of surface features is now conducted for each of the eight smaller voxels. Because the voxels become smaller and the resolution becomes higher, some voxels may not contain points from point cloud 302. For example, out of the eight voxels, maybe only four voxels contain points. Therefore, the compression may result in four compressed points 520, 522, 524, and 526, each representing a 3D distribution of a surface feature of the point cloud points corresponding to that voxel. A positioning block may be generated based on the compressed points 520, 522, 524, and 526 for level A+1.
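  • The per-voxel compression described above can be sketched in plain Python as follows; this is a minimal NDT-style sketch assuming points are (x, y, z) tuples, and the function name and dictionary layout are illustrative:

```python
import math
from collections import defaultdict
from statistics import fmean, pvariance

def compress_points(points, intensities, voxel_size):
    """Compress a point cloud into at most one representative point per voxel.

    Each representative carries the mean position, the per-axis variances,
    and the mean intensity of the points that fell into that voxel.
    """
    voxels = defaultdict(list)
    for (x, y, z), i in zip(points, intensities):
        key = (math.floor(x / voxel_size),
               math.floor(y / voxel_size),
               math.floor(z / voxel_size))
        voxels[key].append(((x, y, z), i))
    compressed = {}
    for key, members in voxels.items():
        xs, ys, zs = zip(*(p for p, _ in members))
        ints = [i for _, i in members]
        compressed[key] = {
            "mean_xyz": (fmean(xs), fmean(ys), fmean(zs)),
            "var_xyz": (pvariance(xs), pvariance(ys), pvariance(zs)),
            "mean_intensity": fmean(ints),
        }
    return compressed
```

Halving `voxel_size` reproduces the level A to level A+1 refinement of FIG. 5, where each voxel splits into eight smaller voxels and empty voxels simply produce no compressed point.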
  • Queuing unit 326 may be configured to properly queue the map tiles of the high-definition map such that the positioning point cloud map can be delivered to a requesting client device in a seamless manner.
  • FIG. 6 shows an exemplary priority queue 600 to manage map tiles for delivering positioning point cloud map.
  • Queue 600 may include a plurality of priority levels P0, P1, and P2, where P2 has the lowest priority and P0 has the highest priority.
  • Queuing unit 326 may receive information indicating the location of a client device, such as GPS location information from sensor system 114 of vehicle 110. Based on the information, queuing unit 326 may determine distances between the location of the client device and multiple map tiles, such as those surrounding the location of the client device.
  • the distance may be between the location of the client device and the center of a map tile. If the distance is shorter than a threshold, then a map tile not already in the queue, such as Tile (a3, b2), may enter the queue. The newly entered tile may be placed in the lowest priority level, P2. After a predetermined period of time has passed since entering the queue, tiles having the lowest priority may be promoted to P1.
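  • The queuing behavior described above can be sketched as follows; the class, the distance threshold, and the promotion rule are illustrative assumptions rather than the patent's implementation:

```python
import math

class TileQueue:
    """Illustrative three-level priority queue: P0 highest, P2 lowest."""

    def __init__(self, distance_threshold):
        self.threshold = distance_threshold
        self.levels = {0: set(), 1: set(), 2: set()}

    def _queued(self, tile):
        return any(tile in s for s in self.levels.values())

    def observe(self, client_xy, tile, tile_center):
        # A nearby tile not already in the queue enters at the lowest
        # priority level, P2.
        if math.dist(client_xy, tile_center) < self.threshold and not self._queued(tile):
            self.levels[2].add(tile)

    def promote(self):
        # After a predetermined waiting period, every tile moves up one
        # priority level (P2 -> P1, P1 -> P0).
        self.levels[0] |= self.levels[1]
        self.levels[1] = set(self.levels[2])
        self.levels[2] = set()
```

Delivery would then serve P0 tiles first, so tiles near the client's trajectory are staged before they are needed.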
  • Memory 330 and storage 340 may include any appropriate type of mass storage provided to store any type of information that processor 320 may need to operate.
  • Memory 330 and storage 340 may be a volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other type of storage device or tangible (i.e., non-transitory) computer-readable medium including, but not limited to, a ROM, a flash memory, a dynamic RAM, and a static RAM.
  • Memory 330 and/or storage 340 may be configured to store one or more computer programs that may be executed by processor 320 to perform the high-definition map management functions disclosed herein.
  • memory 330 and/or storage 340 may be configured to store program (s) that may be executed by processor 320 to manage a high-definition map.
  • Memory 330 and/or storage 340 may be further configured to store information and data used by processor 320.
  • Memory 330 and/or storage 340 may be configured to store the various types of data (e.g., GPS information, point cloud, positioning blocks, etc.) captured by sensor system 114 and the high-definition map.
  • The various types of data may be stored permanently, removed periodically, or disregarded immediately after each frame of data is processed.
  • FIG. 7 shows exemplary signal flows during a positioning process.
  • Server 130 may include a map server 132 and a positioning server 134.
  • Map server 132 may maintain a large-scale high-definition map with pre-stored positioning point cloud blocks.
  • Positioning server 134 may provide positioning service to requesting client devices.
  • Sensor system 114 may provide an initial GPS message to map server 132.
  • The initial GPS message may contain location information of vehicle 110 provided by a GPS receiver.
  • Map server 132 may identify an area of interest in the high-definition map and provide a positioning point cloud block corresponding to the area of interest to positioning server 134 to replace a previous positioning block.
  • Positioning server 134 may receive additional live information from sensor system 114, such as point cloud data, GPS information, IMU information, etc. Positioning server 134 may then process the live information together with the point cloud block received from map server 132 to determine the high-precision position of vehicle 110. The high-precision position may be fed back to map server 132 to update or refine the initial GPS information.
  • FIG. 8 illustrates a flowchart of an exemplary method 800 for managing a high-definition map, according to embodiments of the disclosure.
  • Method 800 may be implemented by server 130.
  • Method 800 is not limited to that exemplary embodiment.
  • Method 800 may include steps S802-S820 as described below. It is to be appreciated that some of the steps may be optional to perform the disclosure provided herein. Further, some of the steps may be performed simultaneously, or in a different order than shown in FIG. 8.
  • Server 130 may receive point cloud raw data from sensor system 114.
  • Sensor system 114 may include a LiDAR to capture live point cloud raw data and provide it to server 130.
  • The raw point cloud data may be preprocessed to remove moving objects and reduce noise to generate a point cloud, in step S804.
  • Point cloud tiling unit 322 may determine geographic coordinates of the point cloud. For example, the geographic coordinates may be obtained by a GPS receiver included in sensor system 114.
  • Point cloud tiling unit 322 may associate the point cloud with one or more map tiles. For example, point cloud tiling unit 322 may convert the geographic coordinates into tile coordinates according to equation (1).
  • Positioning point cloud block generating unit 324 may compress at least part of the point cloud corresponding to each map tile. For example, unit 324 may compress the point cloud data into a compressed point within each voxel according to equations (2)-(4). In addition, unit 324 may compress the point cloud at different levels of the high-definition map using different map resolutions.
  • Unit 324 may generate a positioning point cloud block corresponding to each map tile based on the compressed part of the point cloud. For example, the positioning point cloud block may include all compressed points within a map tile.
  • Queuing unit 326 may determine a distance between the center of a map tile and the location of a vehicle requesting positioning service. For example, queuing unit 326 may receive location information of the vehicle from, for example, a GPS receiver of sensor system 114. Based on the location information, queuing unit 326 may determine the distance between the location of the vehicle and the center of the map tile. In step S816, queuing unit 326 may queue candidate positioning point cloud blocks in a priority queue according to the distance information. For example, a map tile may enter the queue when the distance is within a preset value. Adjacent map tiles in the queue may be moved up to higher priority according to the distance. In step S818, queuing unit 326 may determine whether the distance is smaller than a threshold.
  • If the distance is smaller than the threshold, method 800 proceeds to step S820, in which queuing unit 326 moves a positioning block out of the queue and provides it to the requesting vehicle. If the distance is not less than the threshold, then method 800 proceeds back to step S816 to continue the queuing process.
  • The computer-readable medium may include volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other types of computer-readable media or computer-readable storage devices.
  • The computer-readable medium may be the storage device or the memory module having the computer instructions stored thereon, as disclosed.
  • The computer-readable medium may be a disc or a flash drive having the computer instructions stored thereon.
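
The distance-based queuing and delivery decision in steps S816-S820 above can be sketched as follows. This is a minimal illustration only; the function names, the Euclidean-distance metric, and the two radii are assumptions, not part of the disclosure.

```python
import math

def tile_center_distance(vehicle_xy, tile_center_xy):
    """Distance between the vehicle location and a map tile's center."""
    return math.hypot(vehicle_xy[0] - tile_center_xy[0],
                      vehicle_xy[1] - tile_center_xy[1])

def update_queue(queue, candidate_tiles, vehicle_xy, enter_radius, deliver_radius):
    """Steps S816-S820: enqueue tiles within enter_radius; move tiles closer
    than deliver_radius out of the queue and deliver them to the vehicle."""
    delivered = []
    for tile_id, center in candidate_tiles.items():
        if tile_center_distance(vehicle_xy, center) < enter_radius and tile_id not in queue:
            queue.append(tile_id)              # S816: candidate enters the queue
    for tile_id in list(queue):
        if tile_center_distance(vehicle_xy, candidate_tiles[tile_id]) < deliver_radius:
            queue.remove(tile_id)              # S818/S820: distance below threshold,
            delivered.append(tile_id)          # so move out of queue and provide
    return delivered
```

Each call would be driven by fresh GPS location information, so tiles are enqueued as the vehicle approaches them and delivered once it is close enough.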


Abstract

Systems and methods for managing a high-definition map. The system may include at least one storage device (340) configured to store point cloud data and instructions, and at least one processor (320) configured to execute the instructions to perform operations. The operations may include determining geographic coordinates of a point cloud (302) and associating the point cloud (302) with one or more map tiles of the high-definition map based on the geographic coordinates. For each of the one or more map tiles associated with the point cloud (302), the operations may also include generating a positioning point cloud block corresponding to the map tile based on at least a portion of the point cloud (302). The operations may further include providing the positioning point cloud block corresponding to at least one map tile to a client device based on information indicating a location of the client device.

Description

SYSTEMS AND METHODS FOR MANAGING A HIGH-DEFINITION MAP

TECHNICAL FIELD
The present disclosure relates to systems and methods for managing a high-definition map, and more particularly to, systems and methods for managing a large-scale high-definition map for positioning services in autonomous driving applications.
BACKGROUND
In autonomous driving, a vehicle’s movement is partially or entirely controlled by a driving control system. The driving control system makes driving decisions based on live information as well as prior knowledge about the vehicle’s surrounding areas. The live information can be obtained through various sensors, such as one or more cameras, Global Positioning System (GPS) receivers, Inertial Measurement Unit (IMU) sensors, and/or LiDARs. The prior knowledge, such as map information, can be downloaded from a remote server and stored in a local storage. The live information can then be compared with prior knowledge by the driving control system to assist in making driving decisions. For example, positioning of the vehicle can be achieved by matching live images of the road on which the vehicle is travelling and certain features along the road with previously acquired images. The live images may be in a two-dimensional (2D) form captured by one or more cameras mounted on the vehicle. This 2D-image-based method, however, may not be able to provide high-precision positioning due to limitations in ambient lighting, image distortions, field of view, etc. A more precise approach is to use a LiDAR, which can capture a 3D image, also referred to as a point cloud, representing surface profiles of surrounding objects with built-in range/distance information. The captured point cloud data can then be compared with a previously acquired point cloud repository, also referred to as a high-definition map, to determine the current position of the vehicle.
A high-definition map needs routine and frequent updates to account for changes in the road conditions. The updates may be conducted by dispatching a survey vehicle to an area of interest to capture new point cloud data. The new point cloud data may then be used to replace corresponding outdated point cloud data in the high-definition map. In addition, point cloud data captured from new, uncharted areas may be added to the high-definition map to expand its coverage. Such frequent updates and expansion to the high-definition map involve processing a large amount of data and it is challenging to effectively and efficiently manage the information contained in the high-definition map. While some existing systems  may use ad-hoc approaches to manage small-scale high-definition maps that are limited in geographic coverage, these solutions lack a uniform data management framework capable of organizing, storing, and delivering a large-scale high-definition map covering wide geographic areas.
Embodiments of the disclosure address the above problems by methods and systems for managing a high-definition map based on a hierarchy of map tiles to organize positioning point cloud blocks and multi-resolution compression algorithms to enhance data storage and delivery efficiency.
SUMMARY
Embodiments of the disclosure provide a system for managing a high-definition map. The system may include at least one storage device configured to store point cloud data and instructions. The system may also include at least one processor configured to execute the instructions to perform operations for managing the high-definition map based on the point cloud data. The operations may include determining geographic coordinates of a point cloud. The operations may also include associating the point cloud with one or more map tiles of the high-definition map based on the geographic coordinates. For each of the one or more map tiles associated with the point cloud, the operations may include generating a positioning point cloud block corresponding to the map tile based on at least a portion of the point cloud. In addition, the operations may include providing the positioning point cloud block corresponding to at least one map tile to a client device based on information indicating a location of the client device.
Embodiments of the disclosure also provide a method for managing a high-definition map. The method may include determining geographic coordinates of a point cloud. The method may also include associating the point cloud with one or more map tiles of the high-definition map based on the geographic coordinates. For each of the one or more map tiles associated with the point cloud, the method may include generating a positioning point cloud block corresponding to the map tile based on at least a portion of the point cloud. In addition, the method may include providing the positioning point cloud block corresponding to at least one map tile to a client device based on information indicating a location of the client device.
Embodiments of the disclosure further provide a non-transitory computer-readable medium having instructions stored thereon that, when executed by at least one processor, causes the at least one processor to perform a method for managing a high-definition map.  The method may include determining geographic coordinates of a point cloud. The method may also include associating the point cloud with one or more map tiles of the high-definition map based on the geographic coordinates. For each of the one or more map tiles associated with the point cloud, the method may include generating a positioning point cloud block corresponding to the map tile based on at least a portion of the point cloud. In addition, the method may include providing the positioning point cloud block corresponding to at least one map tile to a client device based on information indicating a location of the client device.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates an exemplary autonomous driving scene, according to embodiments of the disclosure.
FIG. 2 illustrates a schematic diagram of an exemplary vehicle equipped with sensors, according to embodiments of the disclosure.
FIG. 3 illustrates a block diagram of an exemplary system for managing a high-definition map, according to embodiments of the disclosure.
FIG. 4. shows an exemplary hierarchical method for managing map data, according to embodiments of the disclosure.
FIG. 5 shows an exemplary multi-resolution compression scheme, according to embodiments of the disclosure.
FIG. 6 illustrates an exemplary queuing process, according to embodiments of the disclosure.
FIG. 7 illustrates an exemplary system for providing high-resolution positioning map service, according to embodiments of the disclosure.
FIG. 8 illustrates a flowchart of an exemplary method for managing a high-definition map, according to embodiments of the disclosure.
DETAILED DESCRIPTION
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
FIG. 1 illustrates an exemplary autonomous driving scene. As shown in FIG. 1, a vehicle 110 may be partially or entirely controlled by a driving control system 112 to travel along a road 120. Vehicle 110 may be equipped with a sensor system 114, which may capture live information about vehicle 110’s surrounding areas, such as traffic marks 144, trees 140, building 142, etc. Vehicle 110 may also communicate with a server 130 to receive map information in an area covering vehicle 110. Driving control system 112 may process the live information captured by sensor system 114 and map information received from server 130 to determine driving instructions for controlling vehicle 110. For example, server 130 may maintain and manage a high-definition map including point cloud data. Based on live information captured by sensor system 114, such as GPS information indicating the current location of vehicle 110, server 130 may provide point cloud data previously obtained in or about the same location of vehicle 110 to driving control system 112. Driving control system 112 may then compare the received point cloud data with live-captured point cloud information (e.g., captured by sensor system 114) to determine high-precision position information of vehicle 110, which may be used by driving control system 112 to make driving decisions.
FIG. 2 illustrates a schematic diagram of an exemplary vehicle 110 having sensor system 114 and driving control system 112, according to embodiments of the disclosure. It is contemplated that vehicle 110 may be an electric vehicle, a fuel cell vehicle, a hybrid vehicle, or a conventional internal combustion engine vehicle. Vehicle 110 may have a body 116 and at least one wheel 118. Body 116 may be any body style, such as a sports vehicle, a coupe, a sedan, a pick-up truck, a station wagon, a sports utility vehicle (SUV), a minivan, or a conversion van. In some embodiments, vehicle 110 may include a pair of front wheels and a pair of rear wheels, as illustrated in FIG. 2. However, it is contemplated that vehicle 110 may have more or fewer wheels or equivalent structures that enable vehicle 110 to move around. Vehicle 110 may be configured to be all wheel drive (AWD), front wheel drive (FWD), or rear wheel drive (RWD). In some embodiments, vehicle 110 may be configured to be operated by an operator occupying the vehicle, remotely controlled, and/or autonomously controlled by driving control system 112.
As illustrated in FIG. 2, vehicle 110 may be equipped with sensor system 114. In some embodiments, sensor system 114 may be mounted or attached to the outside of body 116, as shown in FIG. 2. In some embodiments, sensor system 114 may be equipped inside body 116. In some embodiments, sensor system 114 may include part of its component(s) equipped outside body 116 and part of its component(s) equipped inside body 116. It is contemplated that the manners in which sensor system 114 can be equipped on vehicle 110 are not limited by the example shown in FIG. 2, and may be modified depending on the types of sensor(s) included in sensor system 114 and/or vehicle 110 to achieve desirable sensing performance.
In some embodiments, sensor system 114 may be configured to capture live data as vehicle 110 travels along a path. Consistent with the present disclosure, sensor system 114 may include a LiDAR to capture point cloud data of the surroundings. A LiDAR measures distance to a target by illuminating the target with pulsed laser light and measuring the reflected pulses with a sensor. Differences in laser return times and wavelengths can then be used to construct digital 3-D representations of the target. The light used for a LiDAR scan may be ultraviolet, visible, or near infrared. Because a narrow laser beam can map physical features with very high resolution, a LiDAR scanner is particularly suitable for high-resolution positioning. For example, the LiDAR may capture a point cloud frame at each of a sequence of time points. Each point cloud frame may represent a 3D surface profile of the surrounding objects at one particular time point. Multiple point cloud frames may be combined (e.g., through time/space shifting) to form a point cloud, which may represent the 3D surface profile of objects within a certain space. For example, a point cloud may represent the surface profile of objects along a certain distance of the path travelled by vehicle 110. The surface profile may be represented by a spatial distribution of points at which the light waves emitted by the LiDAR are reflected. In this way, live point cloud information can be obtained by sensor system 114, which may be used to compare with a previously captured point cloud (e.g., a portion of the high-definition map) to determine the position of vehicle 110 with high precision.
In some embodiments, sensor system 114 may also include a navigation unit, such as a GPS receiver and one or more IMU sensors. A GPS is a global navigation satellite system that provides location and time information to a GPS receiver. Since the location information provided by the GPS receiver (e.g., for civilian usage) usually cannot achieve the high level of resolution or precision required by autonomous driving, the GPS location information may be used to estimate a rough position of vehicle 110, while the high-resolution/high-precision positioning may be accomplished by processing the point cloud information based on the estimate. An IMU is an electronic device that measures and provides a vehicle’s specific force, angular rate, and sometimes the magnetic field surrounding the vehicle, using various inertial sensors, such as accelerometers and gyroscopes, sometimes also magnetometers. Information captured by the IMU may also be used to position vehicle 110.
Vehicle 110 may communicate with server 130 to obtain prior knowledge of the path it travels along, such as map information. Server 130 may be a local physical server, a cloud server (as illustrated in FIGs. 1 and 2) , a virtual server, a distributed server, or any other suitable computing device. Consistent with the present disclosure, server 130 may store a high-definition map. The high-definition map may be constructed using point cloud data, which may be acquired by one or more LiDARs during survey trip (s) .
Consistent with the present disclosure, server 130 may also be responsible for managing the high-definition map, including organizing point cloud data, updating point cloud data from time to time to reflect changes at certain portions of the map, and/or providing point cloud information to vehicles requesting high-resolution/high-precision positioning service. Server 130 may communicate with vehicle 110, and/or components of vehicle 110 (e.g., sensor system 114, driving control system 112, etc.) via a network, such as a Wireless Local Area Network (WLAN), a Wide Area Network (WAN), wireless networks such as radio waves, a cellular network, a satellite communication network, and/or a local or short-range wireless network (e.g., Bluetooth™).
FIG. 3 shows an exemplary server 130 for managing a high-definition map, according to embodiments of the disclosure. Consistent with the present disclosure, server 130 may receive a point cloud 302. Point cloud 302 may be provided by a survey vehicle equipped with a sensor system similar to sensor system 114. Point cloud 302 may cover a geographic area of interest, such as an area requiring updates or a new area. Server 130 may be configured to aggregate point cloud 302 into the high-definition map.
In some embodiments, as shown in FIG. 3, server 130 may include a communication interface 310, a processor 320, a memory 330, and a storage 340. In some embodiments, server 130 may have different modules in a single device, such as an integrated circuit (IC) chip (implemented as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA) ) , or separate devices with dedicated functions. In some embodiments, one or more components of server 130 may be located in a cloud, or may be alternatively in a single location (such as inside vehicle 110 or a mobile device) or distributed locations. Components of server 130 may be in an integrated device, or distributed at different locations but communicate with each other through a network (not shown) .
Communication interface 310 may send data to and receive data from a vehicle (e.g., an autonomous driving vehicle or a survey vehicle) or its components such as sensor system 114 and/or driving control system 112 via communication cables, a Wireless Local Area Network (WLAN) , a Wide Area Network (WAN) , wireless networks such as radio  waves, a cellular network, and/or a local or short-range wireless network (e.g., Bluetooth TM) , or other communication methods. In some embodiments, communication interface 310 can be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection. As another example, communication interface 310 can be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links can also be implemented by communication interface 310. In such an implementation, communication interface 310 can send and receive electrical, electromagnetic or optical signals that carry digital data streams representing various types of information via a network.
Consistent with some embodiments, communication interface 310 may receive point cloud 302. Communication interface 310 may further provide the received point cloud 302 to storage 340 for storage or to processor 320 for processing. Communication interface 310 may also receive a positioning point cloud block generated by processor 320, and provide the positioning point cloud block to any component in vehicle 110.
Processor 320 may include any appropriate type of general-purpose or special-purpose microprocessor, digital signal processor, or microcontroller. Processor 320 may be configured as a separate processor module dedicated to updating the high-definition map. Alternatively, processor 320 may be configured as a shared processor module for performing other functions unrelated to high-definition map management.
As shown in FIG. 3, processor 320 may include multiple modules, such as a point cloud tiling unit 322, a positioning point cloud block generating unit 324, and a queuing unit 326, and the like. These modules (and any corresponding sub-modules or sub-units) can be hardware units (e.g., portions of an integrated circuit) of processor 320 designed for use with other components or software units implemented by processor 320 through executing at least part of a program. The program may be stored on a computer-readable medium, and when executed by processor 320, it may perform one or more functions or operations. Although FIG. 3 shows units 322-326 all within one processor 320, it is contemplated that these units may be distributed among multiple processors located near or remotely with each other.
Point cloud tiling unit 322 may be configured to associate a point cloud contained in point cloud 302 with one or more map tiles of the high-definition map. In some embodiments, the high-definition map managed by server 130 may have a hierarchic structure in which a large-scale map is divided into multiple levels of map tiles. FIG. 4 shows an exemplary hierarchic structure based on the Web Mercator projection. Under this structure, a large-scale map, such as the world map, is represented by different numbers of map tiles at different levels. Each map tile may have a predetermined resolution, such as 256×256. At level 0, the whole map is represented as a single map tile, which may be denoted by its level number 0 and tile coordinates (tileX, tileY), in this case (0, 0). In the next level, level 1, every map tile in the previous level (level 0) becomes 4 map tiles in a 2×2 layout. In this case, the single map tile in level 0 is refined into 4 map tiles having map tile coordinates (0, 0), (1, 0), (0, 1), and (1, 1). Because each map tile has the same predetermined resolution (e.g., 256×256), the resolution of the whole world map in level 1 becomes 512×512, higher than the previous level (level 0). Similarly, in level 2, every map tile in the previous level (level 1) is further divided into four map tiles, quadrupling the total number of map tiles. In level 3, as shown in FIG. 4, the total number of map tiles becomes 64 (8×8). As the level number becomes larger, the map is divided into smaller map tiles, and since each map tile has the same predetermined resolution, a smaller map tile may provide finer details.
Under the hierarchic structure, a location on the map can be represented by a series of map tiles. For example, in level 0, a location on the map should correspond to the same map tile because there is only one map tile covering the entire world. In level 1, however, the same location should be represented by, in most cases, only one of the four tiles. Similarly, at level 3, the same location should be represented by, in most cases, only one of the 64 tiles. It is contemplated that if the location occupies two or more map tiles in a particular level, then multiple map tiles at the same level should be used to represent the location.
A map tile can be identified by its level number and map tile coordinates. Therefore, locating a place on the map can be equivalent to finding the map tile coordinates at a particular level, depending on the required resolution. Server 130 may pre-store map tiles at different levels and provide one or more pre-stored map tiles to a requesting client according to the proper resolution level. In this way, no map data needs to be generated on the fly on the server side, and the limit on resolution depends mainly on the available bandwidth.
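The level and tile arithmetic described above can be sketched as follows. This is a minimal illustration; the function names are assumptions, not part of the disclosure.

```python
def tiles_at_level(level):
    # Each level subdivides every tile of the previous level into a 2x2 layout,
    # so level n has 2**n tiles per axis and 4**n tiles in total.
    return 4 ** level

def map_resolution(level, tile_pixels=256):
    # Every tile has the same fixed resolution, so the full-map resolution
    # doubles per axis at each successive level.
    per_axis = tile_pixels * (2 ** level)
    return (per_axis, per_axis)
```

For instance, level 3 yields 64 tiles (8×8), and level 1 yields a whole-map resolution of 512×512, matching the figures given above.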
In some embodiments, point cloud data may be associated with map tiles such that a point cloud may be represented by a series of map tiles at different levels. For example, point cloud tiling unit 322 may determine geographic coordinates of point cloud 302 and convert the geographic coordinates to map tile coordinates. The conversion may be performed as follows:
tileX = floor( (longitude + 180) / 360 × 2^level ),

tileY = floor( (1 − ln(tan(latitude · π/180) + 1/cos(latitude · π/180)) / π) / 2 × 2^level )        (1)
In this way, point cloud 302 can be associated with one or more map tiles of the high-definition map. Referring to FIG. 4, the lower left corner shows that point cloud 302 has geographic coordinates matching those of map tile (A1, B1) in level A. Therefore, point cloud 302 can be associated with map tile (A1, B1) in level A. In addition, map tile (A1, B1) is divided into four smaller map tiles (X1, Y1) , (X1, Y2) , (X2, Y1) , and (X2, Y2) in level A+1. In level A+1, however, point cloud 302’s geographic coordinates correspond to those of map tiles (X1, Y2) and (X2, Y2) , but not (X1, Y1) and (X2, Y1) . Thus, in level A+1, point cloud 302 may be associated with map tiles (X1, Y2) and (X2, Y2) . Similar associations can be performed at any level in the hierarchy.
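The geographic-to-tile conversion can be sketched as follows. This assumes the standard Web Mercator (slippy-map) tiling formulas, which the hierarchy above is based on; the exact form of the disclosure's equation (1) is not reproduced here.

```python
import math

def lonlat_to_tile(lon_deg, lat_deg, level):
    """Map geographic coordinates (degrees) to (tileX, tileY) at a given level
    under the Web Mercator tiling scheme."""
    n = 2 ** level                                   # tiles per axis at this level
    tile_x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    tile_y = int((1.0 - math.log(math.tan(lat) + 1.0 / math.cos(lat)) / math.pi) / 2.0 * n)
    return tile_x, tile_y
```

A point cloud's bounding region can then be associated with map tiles by converting its corner coordinates at each level of interest.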
Referring back to FIG. 3, after point cloud tiling unit 322 associates point cloud 302 with map tile(s) of certain levels, positioning point cloud block generating unit 324 may generate a positioning point cloud block (or “positioning block” for short) for each map tile. The positioning block is a 3D representation of the portion of point cloud 302 that falls within an individual map tile in a particular level. As a result, at the same level, if point cloud 302 is associated with multiple map tiles, such as (X1, Y2) and (X2, Y2) shown in FIG. 4, then unit 324 may generate a positioning block for each map tile in that level. In addition, unit 324 may generate positioning blocks for different levels. In other words, unit 324 may be configured to generate a representation of the point cloud in different levels according to the different resolutions at those levels.
In some embodiments, point cloud 302 may contain a large amount of data that is not efficient to store in its original form. In these cases, unit 324 may compress the point cloud data corresponding to each associated map tile in a particular level based on the map resolution in that level. For example, referring to the lower left corner of FIG. 4, at level A, point cloud 302 is associated with map tile (A1, B1), which may have a resolution of 256×256. To account for the 3D nature of point cloud 302, unit 324 may represent the space defined by map tile (A1, B1) and a height enough to cover point cloud 302 using a plurality of voxels. Each voxel may be a cube having a linear length equal to 1/256 of the side of map tile (A1, B1). Point cloud 302 may be much denser than the cubic voxels, so multiple points of point cloud 302 may fall within a single cubic voxel. FIG. 5 shows an exemplary voxel view of the tile (A1, B1), where only four voxels are shown. Among the four voxels, voxel 502 contains several points of point cloud 302. To compress the point cloud data such that each voxel is represented by a single point, unit 324 may use a Normal Distribution Transform (NDT) algorithm. Specifically, for each voxel, such as voxel 502, unit 324 calculates a 3D distribution of a surface feature corresponding to the voxel, including an average intensity value and an intensity distribution along the x, y, and z directions:
μ = (1/n) Σ_{i=1}^{n} p_i        (2)
M = [p_1 − μ, p_2 − μ, …, p_n − μ]        (3)
Σ = (1/(n − 1)) · M · M^T        (4)
where μ is the average intensity value; p_i (i = 1, …, n) are the intensity values of the point cloud points within the voxel; and Σ is the covariance of the Gaussian distribution.
Using the NDT algorithm, unit 324 can represent all the points of point cloud 302 within voxel 502 with a single point 510, having an average intensity value and an intensity distribution along the x, y, and z directions. In other words, unit 324 compresses all the points of point cloud 302 within voxel 502 into a single point 510.
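The per-voxel compression of equations (2)-(4) can be sketched as follows: a minimal pure-Python illustration that treats each point as a 3D vector and uses the sample-covariance normalization 1/(n − 1), which is an assumption about the disclosure's exact constant.

```python
def ndt_compress_voxel(points):
    """Compress all points falling in one voxel into a single NDT point:
    the mean vector (equation (2)) and the 3x3 covariance matrix built
    from the centered-point matrix M (equations (3) and (4))."""
    n = len(points)
    mu = [sum(p[k] for p in points) / n for k in range(3)]
    # Columns of M are the centered points p_i - mu; sigma = M M^T / (n - 1).
    sigma = [[sum((p[i] - mu[i]) * (p[j] - mu[j]) for p in points) / max(n - 1, 1)
              for j in range(3)] for i in range(3)]
    return mu, sigma
```

Applying this to every occupied voxel replaces the dense raw points with one mean/covariance pair per voxel, which is what the positioning block stores.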
After compression, each voxel contains at most one compressed point that represents the original one or more points of point cloud 302 that fall within that voxel. Accordingly, unit 324 may generate a positioning block corresponding to the map tile (e.g., map tile (A1, B1)) using the compressed points within the space defined by map tile (A1, B1). Unit 324 may store the positioning block in memory 330 and/or storage 340. Because the compressed positioning block has the same resolution as all positioning blocks in the same level, aggregation and integration of positioning blocks within the same level are relatively easy to achieve.
Unit 324 may also compress point cloud 302 at different levels using different voxel volumes. For example, referring to FIG. 5, in level A+1, each voxel of level A becomes eight smaller voxels. The calculation of the 3D distribution of surface features is then conducted for each of the eight smaller voxels. Because the voxels become smaller and the resolution becomes higher, some voxels may not contain any points from point cloud 302. For example, only four of the eight voxels may contain points. In that case, the compression results in four compressed points 520, 522, 524, and 526, each representing a 3D distribution of a surface feature of the point cloud points corresponding to that voxel. A positioning block may then be generated for level A+1 based on compressed points 520, 522, 524, and 526.
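The level-to-level refinement just described, where each voxel splits into eight children, corresponds to halving the voxel edge length at each finer level. A minimal sketch of that relation (the function name and the dyadic halving scheme are assumptions consistent with the FIG. 5 example):

```python
def voxel_edge_for_level(base_edge, base_level, level):
    """Halve the voxel edge once per level below base_level, so each voxel
    splits into 2 x 2 x 2 = 8 children from one level to the next finer one."""
    return base_edge / (2 ** (level - base_level))
```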
Queuing unit 326 may be configured to properly queue the map tiles of the high-definition map such that the positioning point cloud map can be delivered to a requesting client device in a seamless manner. FIG. 6 shows an exemplary priority queue 600 for managing map tiles when delivering the positioning point cloud map. Queue 600 may include a plurality of priority levels P0, P1, and P2, where P2 has the lowest priority and P0 has the highest priority. Queuing unit 326 may receive information indicating the location of a client device, such as GPS location information from sensor system 114 of vehicle 110. Based on the information, queuing unit 326 may determine distances between the location of the client device and multiple map tiles, such as those surrounding the location of the client device. In some embodiments, the distance may be measured between the location of the client device and the center of a map tile. If the distance is shorter than a threshold, a map tile not already in the queue, such as tile (a3, b2), may enter the queue. The newly entered tile may be placed in the lowest priority level, P2. After a predetermined period of time has passed since entering the queue, tiles having the lowest priority may be promoted to P1.
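The admission and promotion behavior described above might be sketched as follows; the class name, the fixed three levels, and the time-based promotion parameter are assumptions based on the FIG. 6 description.

```python
import math
import time

class TileQueue:
    """Three-level priority queue (P0 highest, P2 lowest) for candidate map
    tiles; a sketch of the scheme around FIG. 6, not the patent's code."""
    def __init__(self, enter_threshold, promote_after):
        self.enter_threshold = enter_threshold   # max distance to enter queue
        self.promote_after = promote_after       # seconds before P2 -> P1
        self.levels = {0: [], 1: [], 2: []}      # P0, P1, P2
        self.entered = {}                        # tile -> entry timestamp

    def observe(self, tile, tile_center, device_location, now=None):
        """Admit a nearby tile at the lowest priority level, P2."""
        now = time.time() if now is None else now
        d = math.dist(tile_center, device_location)
        if tile not in self.entered and d < self.enter_threshold:
            self.levels[2].append(tile)
            self.entered[tile] = now

    def promote(self, now=None):
        """Promote tiles that have waited long enough from P2 to P1."""
        now = time.time() if now is None else now
        for tile in list(self.levels[2]):
            if now - self.entered[tile] >= self.promote_after:
                self.levels[2].remove(tile)
                self.levels[1].append(tile)
```

A tile such as (a3, b2) would enter at P2 via `observe` and migrate toward P0 as the vehicle approaches and time passes.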
Memory 330 and storage 340 may include any appropriate type of mass storage provided to store any type of information that processor 320 may need to operate. Memory 330 and storage 340 may be a volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other type of storage device or tangible (i.e., non-transitory) computer-readable medium including, but not limited to, a ROM, a flash memory, a dynamic RAM, and a static RAM. Memory 330 and/or storage 340 may be configured to store one or more computer programs that may be executed by processor 320 to perform color point cloud generation functions disclosed herein. For example, memory 330 and/or storage 340 may be configured to store program (s) that may be executed by processor 320 to manage a high-definition map.
Memory 330 and/or storage 340 may be further configured to store information and data used by processor 320. For instance, memory 330 and/or storage 340 may be configured to store the various types of data (e.g., GPS information, point cloud, positioning blocks, etc. ) captured by sensor system 114 and the high-definition map. The various types of data may be stored permanently, removed periodically, or disregarded immediately after each frame of data is processed.
FIG. 7 shows exemplary signal flows during a positioning process. Server 130 may include a map server 132 and a positioning server 134. Map server 132 may maintain a large-scale high-definition map with pre-stored positioning point cloud blocks. Positioning server 134 may provide positioning services to requesting client devices. In some embodiments, sensor system 114 may provide an initial GPS message to map server 132. The initial GPS message may contain location information of vehicle 110 provided by a GPS receiver. Based on the initial GPS message, map server 132 may identify an area of interest in the high-definition map and provide a positioning point cloud block corresponding to the area of interest to positioning server 134 to replace a previous positioning block. Positioning server 134 may receive additional live information from sensor system 114, such as point cloud data, GPS information, IMU information, etc. Positioning server 134 may then process the live information together with the point cloud block received from map server 132 to determine the high-precision position of vehicle 110. The high-precision position may be fed back to map server 132 to update or refine the initial GPS information.
FIG. 8 illustrates a flowchart of an exemplary method 800 for managing a high-definition map, according to embodiments of the disclosure. In some embodiments, method 800 may be implemented by server 130. However, method 800 is not limited to that exemplary embodiment. Method 800 may include steps S802-S820 as described below. It is to be appreciated that some of the steps may be optional to perform the disclosure provided herein. Further, some of the steps may be performed simultaneously, or in a different order than shown in FIG. 8.
In step S802, server 130 may receive point cloud raw data from sensor system 114. For example, sensor system 114 may include a LiDAR to capture live point cloud raw data and provide it to server 130. In step S804, the raw point cloud data may be preprocessed to remove moving objects and reduce noise, generating a point cloud.
In step S806, point cloud tiling unit 322 may determine geographic coordinates of the point cloud. For example, the geographic coordinates may be obtained by a GPS receiver included in sensor system 114. In step S808, point cloud tiling unit 322 may associate the point cloud with one or more map tiles. For example, point cloud tiling unit 322 may convert the geographic coordinates into tile coordinates according to equation (1). In step S810, positioning point cloud block generating unit 324 may compress at least part of the point cloud corresponding to each map tile. For example, unit 324 may compress the point cloud data into a compressed point within each voxel according to equations (2)-(4). In addition, unit 324 may compress the point cloud at different levels of the high-definition map using different map resolutions. In step S812, unit 324 may generate a positioning point cloud block corresponding to each map tile based on the compressed at least part of the point cloud. For example, the positioning point cloud block may include all compressed points within a map tile.
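Equation (1) is defined earlier in the description and is not reproduced here. As an illustration of the kind of mapping step S808 performs, a standard slippy-map (Web-Mercator) conversion from geographic coordinates to tile coordinates is sketched below; this is a common convention used as a stand-in, not necessarily the patent's equation (1).

```python
import math

def geo_to_tile(lat_deg, lon_deg, level):
    """Convert geographic coordinates to (x, y) tile indices at a given
    level, following the widely used Web-Mercator tiling convention."""
    n = 2 ** level                                   # tiles per axis at this level
    x = int((lon_deg + 180.0) / 360.0 * n)           # longitude -> column
    lat = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)  # latitude -> row
    return x, y
```

A point cloud whose geographic footprint spans a tile boundary would, under this scheme, map to more than one tile, consistent with associating a point cloud with one or more map tiles.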
In step S814, queuing unit 326 may determine a distance between the center of a map tile and the location of a vehicle requesting positioning service. For example, queuing unit 326 may receive location information of the vehicle from, for example, a GPS receiver of sensor system 114. Based on the location information, queuing unit 326 may determine the distance between the location of the vehicle and the center of the map tile. In step S816, queuing unit 326 may queue candidate positioning point cloud blocks in a priority queue according to the distance information. For example, a map tile may enter the queue when the distance is within a preset value. Adjacent map tiles in the queue may be promoted to a higher priority according to the distance. In step S818, queuing unit 326 may determine whether the distance is smaller than a threshold. If YES, method 800 proceeds to step S820, in which queuing unit 326 moves a positioning block out of the queue and provides it to the requesting vehicle. If the distance is not less than the threshold, method 800 proceeds back to step S816 to continue the queuing process.
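The delivery decision of steps S816-S820 — hand out the nearest sufficiently close block from the highest-priority level, otherwise keep queuing — can be sketched as below; the function name and the tie-breaking by distance are assumptions.

```python
def next_block(levels, distances, deliver_threshold):
    """Pop the nearest tile whose distance is below the delivery threshold,
    scanning priority levels P0 -> P2; return None to continue queuing
    (a sketch of steps S816-S820)."""
    for p in (0, 1, 2):                               # P0 first
        candidates = [t for t in levels[p] if distances[t] < deliver_threshold]
        if candidates:
            tile = min(candidates, key=lambda t: distances[t])
            levels[p].remove(tile)
            return tile                               # deliver its positioning block
    return None                                       # step S818 was NO: keep queuing
```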
Another aspect of the disclosure is directed to a non-transitory computer-readable medium storing instructions which, when executed, cause one or more processors to perform the methods, as discussed above. The computer-readable medium may include volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other types of computer-readable medium or computer-readable storage devices. For example, the computer-readable medium may be the storage device or the memory module having the computer instructions stored thereon, as disclosed. In some embodiments, the computer-readable medium may be a disc or a flash drive having the computer instructions stored thereon.
It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed system and related methods. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed system and related methods.
It is intended that the specification and examples be considered as exemplary only, with a true scope being indicated by the following claims and their equivalents.

Claims (20)

  1. A system for managing a high-definition map, comprising:
    at least one storage device configured to store point cloud data and instructions; and
    at least one processor configured to execute the instructions to perform operations for managing the high-definition map based on the point cloud data, the operations comprising: determining geographic coordinates of a point cloud;
    associating the point cloud with one or more map tiles of the high-definition map based on the geographic coordinates;
    for each of the one or more map tiles associated with the point cloud, generating a positioning point cloud block corresponding to the map tile based on at least a portion of the point cloud; and
    providing the positioning point cloud block corresponding to at least one map tile to a client device based on information indicating a location of the client device.
  2. The system of claim 1, wherein the operations further comprise:
    for each of the one or more map tiles associated with the point cloud, compressing at least part of the point cloud corresponding to the map tile based on characteristics of the map tile; and
    generating the positioning point cloud block for the map tile based on the compressed at least part of the point cloud.
  3. The system of claim 2, wherein the operations further comprise:
    compressing at least part of the point cloud corresponding to the map tile using a Normal Distribution Transform (NDT) algorithm.
  4. The system of claim 2, wherein the characteristics of the map tile include a map resolution of the map tile.
  5. The system of claim 1, wherein:
    the positioning point cloud block includes a plurality of voxels; and
    the operations further comprise:
    calculating, for each voxel, a three-dimensional (3D) distribution of a surface feature corresponding to the voxel.
  6. The system of claim 5, wherein the operations further comprise:
    calculating multiple 3D distributions of surface features using different voxel volumes.
  7. The system of claim 1, wherein the operations further comprise:
    receiving the information indicating the location of the client device;
    determining distances between the location of the client device and multiple map tiles;
    queuing the multiple map tiles into a plurality of priority levels based on the distances; and
    providing the positioning point cloud block corresponding to at least one map tile in a predetermined priority level to the client device.
  8. The system of claim 1, wherein:
    the high-definition map is a world map;
    the predetermined map tile structure includes a multi-level map tile structure;
    at each level, the high-definition map is divided into a predetermined number of map tiles; and
    the operations further comprise:
    associating the point cloud with a first set of map tiles at a first level; and
    associating the point cloud with a second set of map tiles at a second level, wherein the first and second sets of map tiles include different numbers of map tiles.
  9. The system of claim 8, wherein the operations further comprise:
    generating a first set of positioning point cloud blocks corresponding to the first set of map tiles, wherein the first set of positioning point cloud blocks include voxels having a first voxel size;
    generating a second set of positioning point cloud blocks corresponding to the second set of map tiles, wherein the second set of positioning point cloud blocks include voxels having a second voxel size; and
    wherein the first voxel size is different from the second voxel size.
  10. A method for managing a high-definition map, comprising:
    determining geographic coordinates of a point cloud;
    associating the point cloud with one or more map tiles of the high-definition map based on the geographic coordinates;
    for each of the one or more map tiles associated with the point cloud, generating a positioning point cloud block corresponding to the map tile based on at least a portion of the point cloud; and
    providing the positioning point cloud block corresponding to at least one map tile to a client device based on information indicating a location of the client device.
  11. The method of claim 10, further comprising:
    for each of the one or more map tiles associated with the point cloud, compressing at least part of the point cloud corresponding to the map tile based on characteristics of the map tile; and
    generating the positioning point cloud block for the map tile based on the compressed at least part of the point cloud.
  12. The method of claim 11, further comprising:
    compressing at least part of the point cloud corresponding to the map tile using a Normal Distribution Transform (NDT) algorithm.
  13. The method of claim 11, wherein the characteristics of the map tile include a map resolution of the map tile.
  14. The method of claim 10, wherein:
    the positioning point cloud block includes a plurality of voxels; and the method further comprises:
    calculating, for each voxel, a three-dimensional (3D) distribution of a surface feature corresponding to the voxel.
  15. The method of claim 14, further comprising:
    calculating multiple 3D distributions of surface features using different voxel volumes.
  16. The method of claim 10, further comprising:
    receiving the information indicating the location of the client device;
    determining distances between the location of the client device and multiple map tiles;
    queuing the multiple map tiles into a plurality of priority levels based on the distances;  and
    providing the positioning point cloud block corresponding to at least one map tile in a predetermined priority level to the client device.
  17. The method of claim 10, wherein:
    the high-definition map is a world map;
    the predetermined map tile structure includes a multi-level map tile structure;
    at each level, the high-definition map is divided into a predetermined number of map tiles; and
    the method further comprises:
    associating the point cloud with a first set of map tiles at a first level; and
    associating the point cloud with a second set of map tiles at a second level, wherein the first and second sets of map tiles include different numbers of map tiles.
  18. The method of claim 17, further comprising:
    generating a first set of positioning point cloud blocks corresponding to the first set of map tiles, wherein the first set of positioning point cloud blocks include voxels having a first voxel size;
    generating a second set of positioning point cloud blocks corresponding to the second set of map tiles, wherein the second set of positioning point cloud blocks include voxels having a second voxel size; and
    wherein the first voxel size is different from the second voxel size.
  19. A non-transitory computer-readable medium having instructions stored thereon, wherein the instructions, when executed by at least one processor, cause the at least one processor to perform a method for managing a high-definition map, the method comprising:
    determining geographic coordinates of a point cloud;
    associating the point cloud with one or more map tiles of the high-definition map based on the geographic coordinates;
    for each of the one or more map tiles associated with the point cloud, generating a positioning point cloud block corresponding to the map tile based on at least a portion of the point cloud; and
    providing the positioning point cloud block corresponding to at least one map tile to a client device based on information indicating a location of the client device.
  20. The non-transitory computer-readable medium of claim 19, wherein:
    the high-definition map is a world map;
    the predetermined map tile structure includes a multi-level map tile structure;
    at each level, the high-definition map is divided into a predetermined number of map tiles; and
    the method further comprises:
    associating the point cloud with a first set of map tiles at a first level;
    associating the point cloud with a second set of map tiles at a second level, wherein the first and second sets of map tiles include different numbers of map tiles;
    generating a first set of positioning point cloud blocks corresponding to the first set of map tiles, wherein the first set of positioning point cloud blocks include voxels having a first voxel size;
    generating a second set of positioning point cloud blocks corresponding to the second set of map tiles, wherein the second set of positioning point cloud blocks include voxels having a second voxel size; and
    wherein the first voxel size is different from the second voxel size.
PCT/CN2018/117443 2018-11-26 2018-11-26 Systems and methods for managing a high-definition map WO2020107151A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/117443 WO2020107151A1 (en) 2018-11-26 2018-11-26 Systems and methods for managing a high-definition map
CN201880092600.1A CN112074871B (en) 2018-11-26 2018-11-26 High definition map management system and method

Publications (1)

Publication Number Publication Date
WO2020107151A1 true WO2020107151A1 (en) 2020-06-04

Country Status (2)

Country Link
CN (1) CN112074871B (en)
WO (1) WO2020107151A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006057477A1 (en) * 2004-11-26 2006-06-01 Electronics And Telecommunications Research Institute Method for storing multipurpose geographic information
US20170132478A1 (en) * 2015-03-16 2017-05-11 Here Global B.V. Guided Geometry Extraction for Localization of a Device
CN107291879A (en) * 2017-06-19 2017-10-24 中国人民解放军国防科学技术大学 The method for visualizing of three-dimensional environment map in a kind of virtual reality system
US20180056801A1 (en) * 2016-09-01 2018-03-01 Powerhydrant Llc Robotic Charger Alignment
CN108268514A (en) * 2016-12-30 2018-07-10 乐视汽车(北京)有限公司 High in the clouds map map rejuvenation equipment based on Octree
WO2018166747A1 (en) * 2017-03-15 2018-09-20 Jaguar Land Rover Limited Improvements in vehicle control

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101325926B1 (en) * 2012-05-22 2013-11-07 동국대학교 산학협력단 3d data processing apparatus and method for real-time 3d data transmission and reception
US10436595B2 (en) * 2017-02-02 2019-10-08 Baidu Usa Llc Method and system for updating localization maps of autonomous driving vehicles
CN108320329B (en) * 2018-02-02 2020-10-09 维坤智能科技(上海)有限公司 3D map creation method based on 3D laser
CN108801276B (en) * 2018-07-23 2022-03-15 奇瑞汽车股份有限公司 High-precision map generation method and device


Also Published As

Publication number Publication date
CN112074871A (en) 2020-12-11
CN112074871B (en) 2024-06-18


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18941381

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18941381

Country of ref document: EP

Kind code of ref document: A1