WO2022052881A1 - Method and computing device for constructing a map - Google Patents

Method and computing device for constructing a map

Info

Publication number
WO2022052881A1
Authority
WO
WIPO (PCT)
Prior art keywords
grid
laser
laser point
map
point cloud
Prior art date
Application number
PCT/CN2021/116601
Other languages
English (en)
French (fr)
Inventor
王舜垚
胡伟龙
陈超越
潘杨杰
李旭鹏
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2022052881A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/20 Drawing from basic elements, e.g. lines or circles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/20 Drawing from basic elements, e.g. lines or circles
    • G06T11/206 Drawing of charts or graphs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Definitions

  • the present application relates to the field of laser processing, and in particular, to a method and computing device for constructing a map.
  • positioning technology achieves precise positioning of autonomous vehicles through the fusion of various positioning methods and various sensor data, so that autonomous vehicles can obtain their exact location.
  • Precise positioning is an essential function of autonomous vehicles. Because laser sensors such as lidar and 3D laser scanners have high measurement accuracy, the laser point clouds obtained by these laser sensors are widely used in precise positioning.
  • the premise of achieving precise positioning with a laser sensor is to first obtain a map built from laser point clouds (which may be referred to as a laser map), and then match the laser point cloud obtained by the laser sensor in real time against the laser map to achieve positioning.
  • there are generally two ways to construct such a laser map: one is to store the complete three-dimensional laser point cloud directly as the laser map; the other is to compress the three-dimensional laser point cloud into two-dimensional information and construct a two-dimensional occupancy grid map (OGM) from the compressed two-dimensional information, the two-dimensional OGM then constituting the laser map.
  • the above two methods have defects.
  • the storage capacity required for the laser map obtained by the first method is too large, which makes it difficult to reuse in engineering practice.
  • the second method compresses the three-dimensional laser point cloud into two-dimensional information and thereby loses many features, which affects subsequent positioning accuracy.
  • the embodiments of the present application provide a method and computing device for constructing a map.
  • a composite laser map composed of the main map and the grid map is constructed, which reduces the storage capacity of the composite laser map while retaining more feature information for subsequent matching and positioning.
  • the embodiments of the present application provide a method for constructing a map, and the map can be applied to the field of laser processing in the field of automatic driving, for example to an intelligent car. The method includes: the computing device first obtains the laser point cloud data (also referred to as the laser point cloud) required for building the map, and performs feature extraction on the obtained laser point cloud to obtain target features, where a target feature consists of laser points extracted from the laser point cloud data that meet preset conditions, and each laser point includes the coordinates of the laser point and the reflection intensity of the laser point.
  • specifically, a laser sensor in a standard pose obtains one frame of laser point cloud at each of several different geographical locations. The computing device may perform feature extraction on the n frames of laser point cloud after all n frames have been obtained, or each frame of laser point cloud may be sent to the computing device for feature extraction as soon as it is acquired, until all n frames of laser point cloud have been processed; how the computing device processes the laser point cloud is not restricted here. After the corresponding target features have been extracted from the laser point cloud, a function that fits each target feature can be constructed, and the restriction conditions of each function can be obtained.
  • for example, assuming that there are 100 frames of laser point cloud and 800 target features are extracted from them in total, then 800 functions can be constructed correspondingly and the restriction conditions of the 800 functions can be obtained; these functions and restriction conditions constitute the main map.
  • it should be noted that the 800 target features extracted in total are the target features remaining after screening and merging, because different frames of laser point cloud may contain the same target feature (for example, the same street light or roadblock), which must first be merged into one target feature.
  • for example, assume that 10 target features and 8 target features are extracted from 2 different frames of laser point cloud respectively, and that 2 of the 10 target features extracted from the previous frame are the same objects as 2 of the 8 target features extracted from the next frame; then only one copy of each of those 2 target features needs to be kept.
  • the target features extracted subsequently are processed in the same way, and the details are not repeated here.
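  • as an illustration only, the following is a minimal sketch of one way such merging could be done: each target feature is represented by the coordinates of its laser points, and two features are treated as the same object when their centroids are close; the centroid test and the 0.5 m threshold are assumptions made for this sketch, not values taken from the embodiment.

```python
import numpy as np

def merge_target_features(existing, new_features, dist_thresh=0.5):
    """Merge newly extracted target features into a running list.

    Each feature is an Nx3 array of laser point coordinates. Two features are
    treated as the same physical object (e.g. the same street light) when
    their centroids are closer than dist_thresh metres; both the
    representation and the threshold are illustrative assumptions.
    """
    merged = list(existing)
    for feat in new_features:
        c_new = feat.mean(axis=0)
        is_duplicate = any(
            np.linalg.norm(c_new - f.mean(axis=0)) < dist_thresh for f in merged
        )
        if not is_duplicate:
            merged.append(feat)
    return merged
```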
  • in addition, the computing device also constructs a submap for each frame of the acquired laser point cloud, and the constructed submap is an occupancy grid submap (OGM).
  • in this way, a main map and an OGM are obtained.
  • the obtained main map and OGM then need to be linked to form a composite laser map; specifically, the computing device can establish an index of the OGM on the main map, thereby combining the two to obtain the composite laser map.
  • the computing device performs two operations on the acquired laser point cloud.
  • one operation is target feature extraction: a function that fits each target feature is constructed and the corresponding restriction conditions of each function are obtained, and these functions and the restriction conditions corresponding to the functions constitute the main map.
  • the other operation is to construct the submap OGM based on the laser point cloud. After that, by establishing the index of the OGM on the main map, a composite laser map composed of the main map and the OGM is constructed, which reduces the storage capacity of the composite laser map while retaining more feature information for subsequent matching and positioning.
  • the computing device constructing the OGM according to the laser point cloud may be as follows: the computing device first constructs a corresponding occupancy grid submap for the laser point cloud obtained in each frame, or constructs a corresponding occupancy grid submap for several consecutive frames of laser point cloud.
  • the process of constructing each occupancy grid submap is as follows: after setting the length and width of the occupancy grid submap (that is, setting the size of the occupancy grid submap) and the grid resolution, the computing device projects each frame of laser point cloud obtained in the laser coordinate system into the corresponding occupancy grid submap; if no laser point falls into a grid, the grid is considered to be empty, and if at least one laser point falls into a grid, an obstacle is considered to exist at the position corresponding to the grid.
  • whether a grid is occupied or idle is described by probabilities, and the sum of the probabilities of the two states is 1.
  • the computing device applies a series of mathematical transformations to each frame of laser point cloud projected into the occupancy grid submap, and each grid is set to the occupied state or the idle state according to the probability that it is occupied.
  • the center position of the occupancy grid submap is the origin of the occupancy grid submap.
  • the computing device establishing the index of the OGM on the main map may be as follows: first, the computing device converts the center position (i.e. the origin) of the occupancy grid submap corresponding to each frame of laser point cloud into coordinate values in the universal transverse mercator grid system (UTM) coordinate system, and then adds each origin coordinate value to the main map as the index label of the occupancy grid submap corresponding to that frame of laser point cloud.
  • the computing device obtaining the restriction conditions of the function may be: obtaining the value interval of the independent variable corresponding to the function; or obtaining the values of target independent variables of the function, where the target independent variables include the coordinates of target laser points and the target laser points belong to the above-mentioned target feature.
  • the OGM may include the height of the obstacle in a first grid and the average value of the reflection intensities of the laser points falling into the first grid, where the first grid is any occupied grid in the occupancy grid submap; that is, each occupied grid (i.e. each first grid) in the OGM obtained in the embodiment of the present application stores the average height of the laser points falling into the grid (i.e. the average height of the obstacle in the grid) and the average reflection intensity (i.e. the average value of the reflection intensities of the laser points falling into the grid).
  • the difference between the provided OGM and the existing OGM is that not only the average height of the obstacle corresponding to the occupied grid is stored, but also the average reflection intensity of the laser points falling into the occupied grid; the existing OGM stores only the average height of the obstacle, while the reflection intensity of the laser points is stored elsewhere.
  • the advantage of such storage in the embodiment of the present application is that it is easy to search, and it is more convenient in the actual application process.
  • the computing device may store the height of the obstacle in an occupied grid of the OGM as integer data, may store the average reflection intensity of the laser points falling into the occupied grid as integer data, or may store both the height of the obstacle and the average reflection intensity as integer data, where integer data is numerical data that does not contain a fractional part; it is used only to represent integers and is stored in binary form.
  • in other words, the data stored in the occupied grids of the OGM can be integer data, whereas existing solutions store the data in floating-point form.
  • integer data occupies less storage space (theoretically, the storage occupied by integer data is 1/4 of that occupied by floating-point data), so the advantage of this embodiment of the present application is that storage capacity can be saved.
  • in addition, each occupied grid in the OGM obtained in the embodiment of the present application can be classified as partially occupied or fully occupied; fully occupied refers to obstacles that extend from the ground up to a certain height (general buildings such as office buildings and residential buildings), while partial occupation refers to structures that occupy only part of the vertical space, such as bridge openings, tunnels, viaducts, and skywalks, which may also be called suspended obstacles.
  • the height of the obstacle stored in the grid is the height of the upper edge of the obstacle.
  • the improved OGM provided by the embodiment of the present application differs from the existing OGM in that the obstacles occupying a grid are classified into general buildings and suspended obstacles, different heights are stored for the different types of obstacles, and the average reflection intensity of the laser points falling into the occupied grid is also stored; in the existing OGM, an occupied grid is simply considered to be fully occupied.
  • the advantage of storing the height of the obstacle in this way in the embodiment of the present application is that more detailed features of the obstacle are preserved, and the accuracy of subsequent positioning is improved.
  • the target features in the embodiment of the present application are essentially some special laser points extracted from a frame of laser point cloud; the extracted target features are generally line features and surface features, where a line feature is used to indicate that the laser points extracted from the laser point cloud data lie on the same straight line, and a surface feature is used to indicate that the laser points extracted from the laser point cloud data lie on the same plane.
  • a second aspect of the embodiments of the present application provides a computing device, where the computing device has a function of implementing the method of the first aspect or any possible implementation manner of the first aspect.
  • This function can be implemented by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above functions.
  • a third aspect of an embodiment of the present application provides a computing device, which may include a memory, a processor, and a bus system, wherein the memory is used to store a program, and the processor is used to call the program stored in the memory to execute the first aspect of the embodiment of the present application or any one of the possible implementation methods of the first aspect.
  • a fourth aspect of the present application provides a computer-readable storage medium in which instructions are stored; when the instructions are run on a computer, the computer can execute the method of the first aspect or any one of the possible implementations of the first aspect.
  • a fifth aspect of the embodiments of the present application provides a computer program, which, when running on a computer, causes the computer to execute the above-mentioned first aspect or any one of the possible implementation methods of the first aspect.
  • a sixth aspect of an embodiment of the present application provides a chip, where the chip includes at least one processor and at least one interface circuit, and the interface circuit is coupled to the processor; the at least one interface circuit is configured to perform a transceiving function and to send instructions to the at least one processor, and the at least one processor is used to run a computer program or instructions, which have the function of implementing the method described in the first aspect or any possible implementation of the first aspect; the function can be implemented by hardware or by software.
  • the implementation can also be implemented by a combination of hardware and software, where the hardware or software includes one or more modules corresponding to the above functions.
  • the interface circuit is used to communicate with other modules other than the chip.
  • the interface circuit can send the composite laser map obtained by the processor on the chip to various intelligent driving (such as unmanned driving, assisted driving, etc.) agents for motion planning (e.g., driving behavior decision-making, global path planning, etc.).
  • FIG. 1 is a schematic diagram of OGMs with different resolutions provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of the OGM of a certain region constructed according to an embodiment of the present application;
  • FIG. 3 is a schematic diagram of an overall architecture of an autonomous driving vehicle provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an automatic driving vehicle provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a method for constructing a map provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of extracting target features based on a frame of laser point cloud and constructing a function according to an embodiment of the present application
  • FIG. 7 is a schematic diagram of splicing 9 occupied grid submaps correspondingly constructed by 9 frames of laser point clouds provided by an embodiment of the present application into an OGM;
  • FIG. 8 is a schematic diagram of the data types stored in the OGM constructed according to an embodiment of the present application;
  • FIG. 9 is another schematic diagram of the data types stored in the OGM constructed according to an embodiment of the present application;
  • FIG. 10 is a schematic diagram of a process for constructing a composite laser map provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of the actual application process of the constructed composite laser map provided by the embodiment of the application.
  • FIG. 12 is a schematic diagram of a computing device provided by an embodiment of the present application.
  • FIG. 13 is another schematic diagram of a computing device provided by an embodiment of the present application.
  • the embodiments of the present application provide a method and computing device for constructing a map.
  • a composite laser map composed of the main map and the grid map is constructed, which reduces the storage capacity of the composite laser map while retaining more feature information for subsequent matching and positioning.
  • Laser point cloud can also be called laser point cloud data.
  • the laser information received by laser sensors such as lidar and 3D laser scanners is presented in the form of a point cloud; the collection of point data on the surface of a measured object obtained by a measuring instrument is called a point cloud.
  • if the measuring instrument is a laser sensor, the obtained point cloud is called a laser point cloud (generally, a 32-line lidar returns tens of thousands of laser points at a time).
  • the laser information contained in the laser point cloud can be denoted as [x, y, z, intensity], which represents the three-dimensional coordinates of the target position of each laser point in the laser coordinate system and the reflection intensity of the laser point.
  • UTM coordinates are a type of planar Cartesian coordinates, and this coordinate grid system and the projection it is based on have been widely used in topographic maps, as reference grids for satellite imagery and natural resource databases, and in other applications requiring precise positioning; the precise positioning of an autonomous driving vehicle generally uses UTM coordinates.
  • the UTM projection uses an elliptical cylinder secant to the earth ellipsoid; the centerline of the cylinder lies in the equatorial plane of the ellipsoid and passes through the center of the ellipsoid, and points on the ellipsoid are projected onto the cylinder.
  • along the two secant lines there is no change in length on the UTM projection; these are the two standard meridians.
  • the central meridian lies midway between the two secant lines, and the projected length of the central meridian is 0.9996 times its length before projection, that is, the scale factor k = projected length / actual length before projection.
  • the longitude difference between a standard secant line and the central meridian is 1.6206°, or 1°37′14.244′′.
  • the UTM longitude zones are numbered 1 to 60, of which 58 zones each span 6° of longitude from east to west.
  • the longitude zones cover all of the earth from 80°S to 84°N.
  • zones A, B, Y, and Z are not within the scope of the system; they cover the South Pole and the North Pole.
  • UTM coordinates are written in the format: longitude zone, latitude band, easting, northing, where the easting is the projected distance from the central meridian of the longitude zone and the northing is the projected distance from the equator, both in meters. For example, converting the longitude/latitude coordinates (61.44, 25.40) to UTM gives 35V 414668 6812844, and converting (-47.04, -73.48) gives 18G 615471 4789269.
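  • for readers who want to reproduce the conversion above, the following sketch uses the third-party pyproj library (an assumption of this sketch; the embodiment does not prescribe any particular library), where EPSG:32635 denotes WGS84 / UTM zone 35N, the zone that covers the (61.44, 25.40) example.

```python
from pyproj import Transformer

# latitude/longitude in degrees for the first example above: 61.44 N, 25.40 E
lat, lon = 61.44, 25.40
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32635", always_xy=True)
easting, northing = to_utm.transform(lon, lat)  # always_xy=True expects (lon, lat) order
print(f"35V {easting:.0f} {northing:.0f}")      # expected to be close to 35V 414668 6812844
```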
  • OGM is a map representation commonly used by robots. Robots often use laser sensors, and sensor data contains noise; for example, when a laser sensor measures how far the obstacle ahead is from the robot, it cannot obtain an exact value. If the true distance is 4 meters, the obstacle may be detected at 3.9 meters at the current moment and at 4.1 meters at the next moment, and the positions given by both measurements cannot simply both be regarded as obstacles. The OGM is used to solve this problem. FIG. 1 shows two OGMs with different resolutions; the black points are laser points, and all the laser points mapped into the OGM form a laser point cloud.
  • the size of a commonly used OGM is 300*300, that is, it has 300*300 small grids; the size of each grid (i.e. length * width, which refers to how many meters each grid corresponds to in the vehicle coordinate system) is called the resolution of the OGM. The higher the resolution, the smaller each grid, and the fewer laser points of the laser point cloud acquired by the laser sensor at a given moment fall into a particular grid; as shown in the left figure of FIG. 1, 4 laser points fall into the gray grid (row 6, column 11 of the left figure of FIG. 1). Conversely, the lower the resolution, the larger each grid, and the more laser points acquired by the laser sensor at the same moment fall into a particular grid; as shown in the right figure of FIG. 1, 9 laser points fall into the gray grid (row 4, column 7 of the right figure of FIG. 1).
  • the laser point cloud is mapped into the OGM and, after a series of mathematical transformations, each grid is set to the occupied state or the idle state according to the probability that it is occupied. It should be noted that, in general, the center of the OGM is the origin of the OGM, and the triangle shown in the left figure of FIG. 1 indicates the origin of the OGM.
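  • the following is a minimal sketch of how one frame of laser points might be binned into such a grid and thresholded into occupied/idle cells; the log-odds increment and the omission of free-space updates along the laser rays are simplifications made for this sketch, not details from the embodiment.

```python
import numpy as np

def update_occupancy_grid(points_xy, size=300, resolution=0.2, l_occ=0.85, log_odds=None):
    """Bin 2D laser points (vehicle frame, metres) into a size x size grid.

    Each cell hit by at least one point receives a positive log-odds increment
    (more likely occupied); cells are then thresholded into occupied (True) or
    idle (False). Free-space updates along the laser rays are omitted here.
    """
    if log_odds is None:
        log_odds = np.zeros((size, size))
    origin = size // 2                                   # grid centre = submap origin
    cols = np.floor(points_xy[:, 0] / resolution).astype(int) + origin
    rows = np.floor(points_xy[:, 1] / resolution).astype(int) + origin
    inside = (rows >= 0) & (rows < size) & (cols >= 0) & (cols < size)
    log_odds[rows[inside], cols[inside]] += l_occ
    occupied = log_odds > 0                              # probability > 0.5 -> occupied state
    return log_odds, occupied
```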
  • FIG. 2 illustrates the OGM of a certain area constructed by the embodiment of the present application.
  • the map constructed based on the laser point cloud in the embodiments of the present application can be applied to scenarios in which motion planning (eg, driving behavior decision, global path planning, etc.) is performed on various intelligent driving (eg, unmanned, assisted driving, etc.) agents
  • Figure 3 shows the top-down layered architecture.
  • Environmental perception is the most basic part of intelligent driving vehicles; whether making driving behavior decisions or performing global path planning, the vehicle needs to rely on environmental perception, making corresponding judgments, decisions, and plans based on the real-time perception results of the road traffic environment so that it can drive intelligently.
  • the environmental perception system mainly uses various sensors to obtain relevant environmental information, so as to complete the construction of the environmental model and the knowledge expression of the traffic scene.
  • the sensors used include cameras, single-line radar (SICK), four-line radar (IBEO), 3D LiDAR (HDL-64E), etc.
  • the camera is mainly responsible for traffic light detection, lane line detection, road sign detection, vehicle identification, etc.
  • the laser sensor is mainly responsible for the detection, identification, and tracking of dynamic/static obstacles and for the precise positioning of the vehicle itself.
  • the 3D lidar generally collects external environment information at a frequency of 10 FPS and returns a laser point cloud at each moment.
  • the acquired real-time laser point cloud is sent to the autonomous decision-making system for further decision-making and planning.
  • the autonomous decision-making system is a key part of intelligent driving vehicles.
  • the system is mainly divided into two core subsystems: behavioral decision-making and motion planning.
  • the behavioral decision-making subsystem mainly obtains the globally optimal driving route by running the global planning layer, so as to clarify the specific driving task, and then works according to the current real-time road information sent by the environment perception system (that is, the real-time environment perception information in FIG. 3).
  • specifically, in the embodiment of the present application, the autonomous decision-making system obtains the position, orientation, and other information of objects around the vehicle from each frame of the real-time laser point cloud, and uses a matching algorithm to match it against the map constructed in advance in the embodiment of the present application (i.e. the laser map in FIG. 3), so as to realize precise automatic positioning of the vehicle.
  • commonly used matching algorithms include direct point cloud matching (e.g., the iterative closest point (ICP) algorithm), probability matching (e.g., the normal distribution transform (NDT) algorithm), filter matching (e.g., histogram filtering), and so on.
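  • to illustrate the "direct point cloud matching" family mentioned above, the following is a compact, from-scratch point-to-point ICP iteration using NumPy and SciPy; it is a generic textbook formulation given for orientation only, not the matching algorithm of the embodiment.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(source, target, iterations=20):
    """Align an Nx3 source point cloud to an Mx3 target; returns a 4x4 transform."""
    T = np.eye(4)
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)                   # nearest target point for each source point
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)      # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                   # guard against a reflection
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
    return T
```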
  • a reasonable driving behavior is determined according to the positioning of the vehicle, as well as the position and orientation of surrounding objects, and the driving behavior instruction is sent to the motion planning subsystem. Further, according to the received driving behavior instructions and the current environmental perception information, a feasible driving trajectory is planned based on indicators such as safety and stability, and sent to the control system.
  • the control system is also divided into two parts: the control subsystem and the execution subsystem.
  • the control subsystem is used to convert the feasible driving trajectory generated by the autonomous decision-making system into specific execution instructions for each execution module and to transmit them to the execution subsystem; the execution subsystem receives the execution instructions from the control subsystem and sends them to each controlled object to reasonably control the steering, braking, accelerator, gear, and so on of the vehicle, so that the vehicle drives automatically and completes the corresponding driving operation.
  • it should be noted that the overall architecture of the autonomous vehicle shown in FIG. 3 is only an illustration; in practical applications, more or fewer systems/subsystems or modules may be included, and each system/subsystem or module may include multiple components, which is not specifically limited here.
  • FIG. 4 is a schematic structural diagram of an autonomous driving vehicle provided by an embodiment of the present application.
  • the autonomous driving vehicle 100 is configured in a fully or partially autonomous driving mode.
  • the autonomous driving vehicle 100 can control itself while in the autonomous driving mode, and can determine the current state of the vehicle and its surrounding environment through human operation, determine the possible behavior of at least one other vehicle in the surrounding environment, determine the confidence level corresponding to the possibility that the other vehicle will perform the possible behavior, and control the autonomous vehicle 100 based on the determined information.
  • the autonomous vehicle 100 may also be placed to operate without human interaction when the autonomous vehicle 100 is in the autonomous driving mode.
  • the autonomous vehicle 100 may include various subsystems, such as a travel system 102, a sensor system 104 (e.g., the cameras, SICK, IBEO, and LIDAR in FIG. 3 are all modules in the sensor system 104), a control system 106, one or more peripherals 108, as well as a power supply 110, a computer system 112, and a user interface 116.
  • the autonomous vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple components. Additionally, each of the subsystems and components of the autonomous vehicle 100 may be wired or wirelessly interconnected.
  • the travel system 102 may include components that provide powered motion for the autonomous vehicle 100 .
  • travel system 102 may include engine 118 , energy source 119 , transmission 120 , and wheels/tires 121 .
  • the engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a hybrid engine composed of a gasoline engine and an electric motor, and a hybrid engine composed of an internal combustion engine and an air compression engine.
  • Engine 118 converts energy source 119 into mechanical energy. Examples of energy sources 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity.
  • the energy source 119 may also provide energy to other systems of the autonomous vehicle 100 .
  • Transmission 120 may transmit mechanical power from engine 118 to wheels 121 .
  • Transmission 120 may include a gearbox, a differential, and a driveshaft. In one embodiment, transmission 120 may also include other devices, such as clutches.
  • the drive shaft may include one or more axles that may be coupled to one or more wheels 121 .
  • the sensor system 104 may include several sensors that sense information about the environment surrounding the autonomous vehicle 100 .
  • the sensor system 104 may include a positioning system 122 (the positioning system may be a global positioning system (GPS), a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 124, a radar 126, a laser rangefinder 128, and a camera 130.
  • the sensor system 104 may also include sensors that monitor the internal systems of the autonomous vehicle 100 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensing data from one or more of these sensors can be used to detect objects and their corresponding properties (position, shape, orientation, velocity, etc.). Such detection and identification is a critical function for the safe operation of the autonomous vehicle 100.
  • the laser sensor is a very important sensing module in the sensor system 104 .
  • the positioning system 122 may be used to estimate the geographic location of the autonomous vehicle 100.
  • a laser sensor may be used as one of the positioning systems 122 to achieve precise positioning of the autonomous vehicle 100.
  • the IMU 124 is used to sense position and orientation changes of the autonomous vehicle 100 based on inertial acceleration.
  • IMU 124 may be a combination of an accelerometer and a gyroscope.
  • the radar 126 can use radio signals to perceive objects in the surrounding environment of the autonomous vehicle 100 , and can be embodied as a millimeter-wave radar or a lidar. In some embodiments, in addition to sensing objects, radar 126 may be used to sense the speed and/or heading of objects.
  • the laser rangefinder 128 may utilize the laser light to sense objects in the environment in which the autonomous vehicle 100 is located.
  • the laser rangefinder 128 may include one or more laser sources, laser scanners, and one or more detectors, among other system components.
  • Camera 130 may be used to capture multiple images of the surrounding environment of autonomous vehicle 100 .
  • Camera 130 may be a still camera or a video camera.
  • Control system 106 controls the operation of the autonomous vehicle 100 and its components.
  • Control system 106 may include various components, including a steering system 132, a throttle 134, a braking unit 136, a computer vision system 140, a route control system 142, and an obstacle avoidance system 144.
  • the steering system 132 is operable to adjust the heading of the autonomous vehicle 100 .
  • it may be a steering wheel system.
  • the throttle 134 is used to control the operating speed of the engine 118 and thus the speed of the autonomous vehicle 100 .
  • the braking unit 136 is used to control the deceleration of the autonomous vehicle 100 .
  • the braking unit 136 may use friction to slow the wheels 121 .
  • the braking unit 136 may convert the kinetic energy of the wheels 121 into electrical current.
  • the braking unit 136 may also take other forms to slow the wheels 121 to control the speed of the autonomous vehicle 100 .
  • Computer vision system 140 may be operable to process and analyze images captured by camera 130 in order to identify objects and/or features in the environment surrounding autonomous vehicle 100 .
  • the objects and/or features may include traffic signals, road boundaries and obstacles.
  • Computer vision system 140 may use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision techniques.
  • SFM structure from motion
  • the computer vision system 140 may be used to map the environment, track objects, estimate the speed of objects, and the like.
  • the route control system 142 is used to determine the travel route and travel speed of the autonomous vehicle 100 .
  • the route control system 142 may include a lateral planning module 1421 and a longitudinal planning module 1422, which are respectively used to determine the driving route and driving speed of the autonomous vehicle 100 by combining data from the obstacle avoidance system 144, the GPS 122, and one or more predetermined maps.
  • Obstacle avoidance system 144 is used to identify, evaluate and avoid or otherwise traverse obstacles in the environment of autonomous vehicle 100 , which may be embodied as actual obstacles and virtual moving objects that may collide with autonomous vehicle 100 .
  • the control system 106 may additionally or alternatively include components in addition to those shown and described. Alternatively, some of the components shown above may be reduced.
  • Peripherals 108 may include a wireless communication system 146 , an onboard computer 148 , a microphone 150 and/or a speaker 152 .
  • peripherals 108 provide a means for a user of autonomous vehicle 100 to interact with user interface 116 .
  • the onboard computer 148 may provide information to a user of the autonomous vehicle 100 .
  • User interface 116 may also operate on-board computer 148 to receive user input.
  • the onboard computer 148 can be operated via a touch screen.
  • peripherals 108 may provide a means for autonomous vehicle 100 to communicate with other devices located within the vehicle.
  • Wireless communication system 146 may wirelessly communicate with one or more devices, either directly or via a communication network.
  • wireless communication system 146 may use 3G cellular communication, such as CDMA, EVDO, GSM/GPRS, 4G cellular communication, such as LTE, or 5G cellular communication.
  • the wireless communication system 146 may communicate using a wireless local area network (WLAN).
  • WLAN wireless local area network
  • the wireless communication system 146 may communicate directly with the device using an infrared link, Bluetooth, or ZigBee.
  • Other wireless protocols, such as various vehicle communication systems, may also be used; for example, wireless communication system 146 may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communication between vehicles and/or roadside stations.
  • the power source 110 may provide power to various components of the autonomous vehicle 100 .
  • the power source 110 may be a rechargeable lithium-ion or lead-acid battery.
  • One or more battery packs of such batteries may be configured as a power source to provide power to various components of the autonomous vehicle 100 .
  • power source 110 and energy source 119 may be implemented together, such as in some all-electric vehicles.
  • Computer system 112 may include at least one processor 113 that executes instructions 115 stored in a non-transitory computer-readable medium such as memory 114 .
  • Computer system 112 may also be a plurality of computing devices that control individual components or subsystems of autonomous vehicle 100 in a distributed fashion.
  • the processor 113 may be any conventional processor, such as a commercially available central processing unit (CPU).
  • the processor 113 may be a dedicated device such as an application specific integrated circuit (ASIC) or other hardware-based processor.
  • the processor, memory, and other components of the computer system 112 may actually comprise multiple processors or memories that are not stored within the same physical enclosure.
  • memory 114 may be a hard drive or other storage medium located within a different enclosure than computer system 112 .
  • references to processor 113 or memory 114 will be understood to include references to sets of processors or memories that may or may not operate in parallel.
  • some components such as the steering and deceleration components may each have their own processor that only performs computations related to component-specific functions .
  • the processor 113 may be located remotely from the autonomous vehicle 100 and in wireless communication with the autonomous vehicle 100 . In other aspects, some of the processes described herein are performed on the processor 113 disposed within the autonomous vehicle 100 while others are performed by the remote processor 113, including taking the necessary steps to perform a single maneuver.
  • memory 114 may include instructions 115 (eg, program logic) executable by processor 113 to perform various functions of autonomous vehicle 100 , including those described above.
  • Memory 114 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the travel system 102, the sensor system 104, the control system 106, and the peripherals 108.
  • memory 114 may store data such as road maps, route information, vehicle location, direction, speed, and other such vehicle data, among other information. Such information may be used by the autonomous vehicle 100 and the computer system 112 during operation of the autonomous vehicle 100 in autonomous, semi-autonomous, and/or manual modes.
  • a user interface 116 for providing information to or receiving information from a user of the autonomous vehicle 100 .
  • user interface 116 may include one or more input/output devices within the set of peripheral devices 108 , such as wireless communication system 146 , onboard computer 148 , microphone 150 and speaker 152 .
  • Computer system 112 may control functions of autonomous vehicle 100 based on input received from various subsystems (eg, travel system 102 , sensor system 104 , and control system 106 ) and from user interface 116 .
  • computer system 112 may utilize input from control system 106 to control steering system 132 to avoid obstacles detected by sensor system 104 and obstacle avoidance system 144 .
  • computer system 112 is operable to provide control over many aspects of autonomous vehicle 100 and its subsystems.
  • one or more of these components described above may be installed or associated with the autonomous vehicle 100 separately.
  • memory 114 may exist partially or completely separate from autonomous vehicle 100 .
  • the above-described components may be communicatively coupled together in a wired and/or wireless manner.
  • An autonomous vehicle traveling on a road can identify objects within its surroundings to determine adjustments to current speed.
  • the objects may be other vehicles, traffic control equipment, or other types of objects.
  • each identified object may be considered independently, and based on the object's respective characteristics, such as its current speed, acceleration, distance from the vehicle, etc., may be used to determine the speed at which the autonomous vehicle is to adjust.
  • the autonomous vehicle 100, or a computing device associated with the autonomous vehicle 100 (such as the computer system 112, the computer vision system 140, or the memory 114 of FIG. 4), may predict the behavior of the identified objects based on the characteristics of the identified objects and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.).
  • each identified object is dependent on the behavior of the other, so it is also possible to predict the behavior of a single identified object by considering all identified objects together.
  • the autonomous vehicle 100 can adjust its speed based on the predicted behavior of the identified object. In other words, the autonomous vehicle 100 can determine what steady state the vehicle will need to adjust to (eg, accelerate, decelerate, or stop) based on the predicted behavior of the object.
  • the computing device may also provide instructions to modify the steering angle of the autonomous vehicle 100 so that the autonomous vehicle 100 follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects near the autonomous vehicle 100 (for example, cars in adjacent lanes on the road).
  • the self-driving vehicle 100 can be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, a playground vehicle, construction equipment, a tram, a golf cart, a train, a cart, etc. , the embodiments of the present application are not particularly limited.
  • the embodiment of the present application provides a method for constructing a map, and the constructed map can be applied to scenarios in which motion planning (such as driving behavior decision-making, global path planning, etc.) is performed for various intelligent driving (such as unmanned driving, assisted driving, etc.) agents.
  • specifically, the computing device first obtains the laser point cloud required for constructing the map and performs feature extraction on each frame of the obtained laser point cloud to obtain target features; a target feature consists of laser points extracted from the laser point cloud data that meet preset conditions, and each laser point includes the coordinates of the laser point and the reflection intensity of the laser point.
  • specifically, a laser sensor in a standard pose obtains one frame of laser point cloud at each of several different geographic locations; the computing device may perform feature extraction on the n frames of laser point cloud after all n frames have been obtained, or each frame of laser point cloud may be sent to the computing device for feature extraction as soon as it is acquired, until all n frames of laser point cloud have been processed.
  • the manner in which the computing device processes the laser point cloud is not limited here.
  • the target features described in the embodiments of the present application are essentially some special laser points extracted from a frame of laser point cloud, and the extracted target features are generally line features and surface features, where a line feature is used to indicate that the laser points extracted from the laser point cloud data lie on the same straight line, and a surface feature is used to indicate that the laser points extracted from the laser point cloud data lie on the same plane.
  • the method for extracting target features from a laser point cloud is generally realized by various screening methods, which can be summarized as laser point cloud curvature feature extraction: by calculating the curvature of the laser point cloud and filtering and screening according to the curvature, it is determined which laser points in a frame of laser point cloud lie on the same plane and which laser points lie on the same straight line.
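  • a minimal sketch of this curvature screening, in the spirit of LOAM-style feature extraction, is given below; the neighbourhood size and the two thresholds are illustrative assumptions rather than values from the embodiment.

```python
import numpy as np

def curvature_screening(scan_line, k=5, edge_thresh=1.0, planar_thresh=0.05):
    """Classify the points of one ordered scan line (Nx3) by curvature.

    For each point, curvature is approximated from the sum of difference
    vectors to its k neighbours on each side: large values suggest line (edge)
    features, small values suggest surface (planar) features.
    """
    n = len(scan_line)
    edge_idx, planar_idx = [], []
    for i in range(k, n - k):
        diff = scan_line[i - k:i + k + 1].sum(axis=0) - (2 * k + 1) * scan_line[i]
        c = np.linalg.norm(diff) / (np.linalg.norm(scan_line[i]) + 1e-9)
        if c > edge_thresh:
            edge_idx.append(i)
        elif c < planar_thresh:
            planar_idx.append(i)
    return np.array(edge_idx), np.array(planar_idx)
```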
  • a function to fit each target feature can be constructed, and the restriction conditions of the function can be obtained.
  • for example, assume that 3 target features are extracted from the first frame of laser point cloud, of which 2 are line features and 1 is a surface feature; then functions can be constructed to fit these 3 target features (3 functions in total), and the restriction conditions of each function can be obtained (3 restriction conditions in total). Similarly, by performing the above processing on each frame of laser point cloud, the functions and restriction conditions corresponding to the target features of all n frames of laser point cloud can be obtained, and the obtained functions and restriction conditions constitute the main map. For example, assuming that there are 100 frames of laser point cloud and 800 target features are extracted from them in total, then 800 functions can be obtained correspondingly, together with the restriction conditions of the 800 functions; these 800 functions and 800 restriction conditions constitute the main map.
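  • as an illustration of what "a function fitting a target feature" might look like, the following sketch fits a line feature and a surface feature by least squares; representing the line by a point and a unit direction and the plane by a unit normal and an offset is one possible parameterisation assumed here, not necessarily the one used in the embodiment.

```python
import numpy as np

def fit_line(points):
    """Fit a 3D line to an Nx3 array; returns (centroid, unit direction)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]                 # principal direction of the points

def fit_plane(points):
    """Fit a plane n.x = d to an Nx3 array; returns (unit normal, offset d)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                        # direction of least variance
    return normal, float(normal @ centroid)
```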
  • it should be noted that the 800 target features extracted in total are the target features remaining after screening and merging.
  • for example, assume that 10 target features and 8 target features are extracted from 2 different frames of laser point cloud respectively; because the two frames may contain some of the same target features (for example, the same street light, roadblock, etc.), the same target features need to be merged into one target feature first. Assuming that 2 of the 10 target features extracted from the previous frame of laser point cloud are the same objects as 2 of the 8 target features extracted from the next frame, only one copy of each of those 2 target features needs to be kept.
  • the target features extracted subsequently are processed in the same way, and the details are not repeated here.
  • there are two processing modes: in one, each time a frame of laser point cloud is obtained, target features are extracted from the currently obtained frame of laser point cloud (assuming that 3 target features are extracted), functions fitting those target features are constructed and the corresponding restriction conditions of the functions are obtained (for example, 3 functions and 3 restriction conditions), and the laser point cloud obtained in each frame is processed in this way until all laser point clouds have been processed; in the other, after all n frames (e.g., 100 frames) of laser point cloud have been obtained, target feature extraction is performed on all n frames of laser point cloud (assuming that a total of 800 target features remain after screening), and then functions fitting all the target features are constructed and the corresponding restriction conditions are obtained (for example, 800 functions and 800 restriction conditions).
  • for ease of understanding, FIG. 6 is taken as an example: assume that the figure on the left of FIG. 6 is the visualization of a certain frame of laser point cloud corresponding to a certain geographical location obtained by the computing device. First, the computing device extracts target features from the obtained frame of laser point cloud, that is, finds which laser points lie on the same plane and which laser points lie on the same straight line; after these target features are extracted, the corresponding functions are fitted according to these target features.
  • FIG. 6 shows two extracted target features as an example (in practice, more than a dozen target features may be extracted from one frame of laser point cloud; two are shown here for illustration only). Assuming that one of the two target features extracted by the computing device is a line feature and the other is a surface feature, the computing device performs fitting on the two extracted target features respectively.
  • assume that the function f shown in FIG. 6 is obtained by fitting the line feature and the function g is obtained by fitting the surface feature. If the functions f and g have no restriction conditions, then the function f represents a straight line of unlimited length and the function g represents a plane without boundaries; therefore, the restriction conditions of these two functions need to be obtained.
  • the purpose of the restriction conditions is that the obtained functions exactly fit the extracted target features; as shown in FIG. 6, the function f with its restriction conditions and the function g with its restriction conditions exactly fit the extracted target features.
  • the restriction condition of a function may be the value interval of the independent variable corresponding to the function.
  • for example, after fitting, the values of the parameters a and b of the function f can be determined (within the fitting error range), but a function f with determined parameter values is a straight line extending infinitely in three-dimensional space; therefore, the minimum interval into which the three-dimensional coordinates of the extracted target feature fall, that is, the value interval of the independent variable corresponding to the function f, can be taken as the restriction condition of the function f.
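  • a sketch of this kind of restriction condition is given below: the smallest axis-aligned interval containing the coordinates of the laser points of the target feature is recorded alongside the fitted function; the axis-aligned (bounding-box) form is an assumption made for illustration.

```python
import numpy as np

def value_interval_constraint(feature_points):
    """Return per-axis [min, max] bounds covering an Nx3 target feature.

    Stored together with the fitted function, this interval restricts the
    independent variable so that the otherwise infinite line or plane is
    clipped to the extent actually observed in the laser point cloud.
    """
    lower = feature_points.min(axis=0)
    upper = feature_points.max(axis=0)
    return np.stack([lower, upper])        # shape (2, 3): lower and upper bounds
```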
  • the restriction conditions of a function may also be the coordinates of some key laser points, such as the corner points of the plane of the target feature fitted by the function g in FIG. 6.
  • the specific expression form of the restriction condition of the function is not limited here.
  • the coordinate system of each function and restriction condition constituting the main map may adopt a universal transverse mercator grid system (UTM) coordinate system.
  • each function can also be numbered, that is, assigned an ID, to facilitate subsequent searches. Specifically, the numbering method is not limited here.
  • in addition, feature information for visual positioning can also be added to the main map, that is, the perception information obtained by a variety of different types of sensors (for example, the real-time image information captured by a camera installed on the autonomous vehicle) is fused on the main map, which makes subsequent positioning more accurate.
  • Two operations are required for each frame of the obtained laser point cloud.
  • One operation is to extract the target features, build a function that fits each target feature, and obtain the constraints corresponding to each function, so as to obtain the main map (such as step 501).
  • another operation is to construct a submap, and the constructed submap is an occupied grid submap (OGM).
  • specifically, for each frame of laser point cloud obtained, the computing device first constructs a corresponding occupancy grid submap; alternatively, a corresponding occupancy grid submap is constructed for several consecutive frames of laser point cloud. For example, if there are 100 frames of laser point cloud in total, then 100 occupancy grid submaps can be constructed (the correspondence may be one-to-one or many-to-one, which is not limited here).
  • the process of constructing each occupancy grid submap is as follows: after setting the length and width of the occupancy grid submap (that is, setting the size of the occupancy grid submap) and the grid resolution, the computing device projects each frame of laser point cloud obtained in the laser coordinate system into the corresponding occupancy grid submap.
  • if no laser point falls into a grid, the grid is considered to be empty; if at least one laser point falls into a grid, an obstacle is considered to exist at the position corresponding to the grid.
  • whether a grid is occupied or idle is described by probabilities, and the sum of the probabilities of the two states is 1.
  • the computing device applies a series of mathematical transformations to each frame of laser point cloud projected into the occupancy grid submap, and each grid is set to the occupied state or the idle state according to the probability that it is occupied. Among them, the center position of the occupancy grid submap is the origin of the occupancy grid submap.
  • for example, FIG. 7 shows 9 occupancy grid submaps, constructed correspondingly from 9 frames of laser point cloud, being spliced into one OGM, where O1 to O9 are respectively the origins of these 9 occupancy grid submaps; a whole OGM is obtained by splicing according to the order of the origin coordinates, and this OGM constitutes the submap of the map provided by the embodiment of the present application.
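  • the following is a sketch of such splicing, assuming that all submaps share the same resolution, are square arrays, and are each accompanied by their origin coordinates; these assumptions and the names used are for illustration only.

```python
import numpy as np

def splice_submaps(submaps, resolution=0.2):
    """Splice occupancy grid submaps into one larger OGM.

    submaps: list of (origin_xy, grid) pairs, where origin_xy is the submap
    origin in metres and grid is a square 2D array at the given resolution.
    Each submap is pasted into a common canvas at the offset given by its
    origin, producing one whole OGM such as the 3x3 arrangement of FIG. 7.
    """
    size = submaps[0][1].shape[0]
    origins = np.array([origin for origin, _ in submaps], dtype=float)
    cell_origins = np.round(origins / resolution).astype(int)
    mins = cell_origins.min(axis=0)
    extent = cell_origins.max(axis=0) - mins + size
    canvas = np.zeros(tuple(extent), dtype=submaps[0][1].dtype)
    for (ox, oy), (_, grid) in zip(cell_origins - mins, submaps):
        canvas[ox:ox + size, oy:oy + size] = grid
    return canvas
```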
  • the constructed submap may be an OGM as described in the related concepts above, that is, each occupied grid in the obtained OGM stores the average height of the laser points falling into the grid (that is, the average height of the obstacle in the grid) and the average reflection intensity (that is, the average value of the reflection intensities of the laser points falling into the grid).
  • as shown in FIG. 8, an occupied grid in the OGM stores the average height h1 (relative to the ground) and the average reflection intensity R1 of the corresponding obstacle, where O is the origin of the OGM, that is, the center point.
  • the difference between the OGM shown in FIG. 8 provided by the embodiment of the present application and the existing OGM is that not only the average height of the obstacle corresponding to an occupied grid is stored, but also the average reflection intensity of the laser points falling into the occupied grid; the existing OGM stores only the average height of the obstacle, while the reflection intensity of the laser points is stored elsewhere.
  • the advantage of such storage in the embodiment of the present application is that it is easy to search, and it is more convenient in the actual application process.
In other embodiments, the constructed submap may be an improved OGM, that is, each occupied grid cell in the resulting OGM can be classified as either partially occupied or fully occupied. Fully occupied refers to obstacles that extend from the ground up to a certain height (for example, office buildings, residential buildings, and other general buildings); partially occupied refers to structures that occupy only part of the vertical space, such as bridge openings, tunnels, viaducts, and elevated walkways, which may also be called suspended obstacles. For these two occupancy types, when an occupied cell in the OGM is fully occupied, that is, its obstacle is a general obstacle, the obstacle height stored in the cell is the height of the obstacle's upper edge above the ground; when an occupied cell is partially occupied, that is, its obstacle is a suspended obstacle, the cell stores a first height from the obstacle's lower edge to the ground and a second height from the obstacle's upper edge to the ground. Figure 9 shows two cells of the OGM, one fully occupied and one partially occupied, which store "h2, R2" and "(h0, h3), R3" respectively. The OGM provided in the embodiments of the present application can further enrich the stored obstacle heights; for example, for multi-layer suspended obstacles, a height division h01, h02, h03, ..., h0n may be provided. The improved OGM of Figure 9 differs from an existing OGM in that the obstacles in occupied cells are classified into general buildings and suspended obstacles, different heights are stored for the different obstacle types, and the average reflection intensity of the laser points falling into the occupied cell is also stored, whereas in an existing OGM any occupied cell is simply treated as fully occupied. The advantage of storing obstacle heights in this way is that more detailed features of the obstacles are preserved, which improves the accuracy of subsequent positioning.
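As a sketch only (the class and field names are assumptions made for illustration), the two kinds of occupied cells could be represented like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OgmCell:
    """One occupied cell of the improved OGM (field names are illustrative)."""
    avg_intensity: float                    # average reflection intensity of the points in the cell
    top_height: float                       # upper-edge height: h2 (full) or h3 (suspended)
    bottom_height: Optional[float] = None   # lower-edge height h0, only for suspended obstacles

    @property
    def fully_occupied(self) -> bool:
        return self.bottom_height is None

building = OgmCell(avg_intensity=55.0, top_height=12.4)                       # "h2, R2"
bridge = OgmCell(avg_intensity=31.0, top_height=8.9, bottom_height=5.2)       # "(h0, h3), R3"
print(building.fully_occupied, bridge.fully_occupied)  # True False
```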
In some embodiments, the height of the obstacle in an occupied cell of the OGM may be stored as integer data, the average reflection intensity of the laser points falling into the cell may be stored as integer data, or both may be stored as integer data. Integer data is numerical data that contains no fractional part; it represents only integers and is stored in binary form. For example, with a discretization step of 0.1 m (meters), int8 data can express heights from 0 to 25.6 m. Suppose the obstacle height of an occupied cell is 6.7789 m: an existing OGM would store it directly as the floating-point value 6.7789 m, whereas in this embodiment it is stored as the integer 68, which, given the 0.1 m discretization, represents 6.8 m.
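A small sketch of such a discretized encoding (using an unsigned 8-bit type; the clipping behaviour is an added assumption, not part of the patent text) might be:

```python
import numpy as np

STEP_M = 0.1  # discretization step used in the example above

def encode_height(height_m: float) -> np.uint8:
    """Store a height as an 8-bit integer at 0.1 m resolution (256 levels of 0.1 m)."""
    return np.uint8(min(255, round(height_m / STEP_M)))

def decode_height(code: np.uint8) -> float:
    return float(code) * STEP_M

code = encode_height(6.7789)
print(int(code), f"{decode_height(code):.1f} m")  # 68 6.8 m  (one byte instead of a 4-byte float)
```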
It should be noted that, in some embodiments of the present application, step 503 may be executed before step 501, after step 502, or at the same time as step 501; this is not limited here.
After the above processing has been performed for each frame of laser point cloud, a main map and an OGM have been obtained, and they need to be linked to form the composite laser map. Specifically, the computing device may establish an index of the OGM on the main map. One implementation is as follows: first, the computing device converts the center position (i.e., the origin) of the occupied grid submap corresponding to each laser point cloud into coordinate values in the UTM coordinate system; then, each origin coordinate value is added to the main map as the index label of the occupied grid submap corresponding to that frame of laser point cloud. For ease of understanding, Figure 7 is again taken as an example. In Figure 7 there are nine occupied grid submaps, whose origins are O1 to O9. The computing device first converts the coordinates of these nine origins on the grid map into coordinates in the UTM coordinate system (nine coordinates in total), and then stores each origin coordinate in the main map as an index label. In practical applications, the self-driving vehicle also positions itself in the UTM coordinate system.
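The index itself can be as simple as a lookup table from submap-origin UTM coordinates to submaps. In the sketch below, the coordinate transform `to_utm` is a placeholder (the actual transform is not specified here) and all numeric values are illustrative only:

```python
def build_ogm_index(submaps, to_utm):
    """submaps: iterable of (submap_id, origin_xy_in_map_frame).
    Returns {(utm_easting, utm_northing): submap_id} stored alongside the main map."""
    index = {}
    for submap_id, origin in submaps:
        easting, northing = to_utm(origin)
        index[(round(easting, 2), round(northing, 2))] = submap_id
    return index

def nearest_submap(index, query_utm):
    """Pick the submap whose origin is closest to the vehicle's UTM position."""
    return min(index.items(),
               key=lambda kv: (kv[0][0] - query_utm[0]) ** 2 +
                              (kv[0][1] - query_utm[1]) ** 2)[1]

fake_to_utm = lambda xy: (414668.0 + xy[0], 6812844.0 + xy[1])  # placeholder transform only
index = build_ogm_index([("O1", (0.0, 0.0)), ("O2", (60.0, 0.0))], fake_to_utm)
print(nearest_submap(index, (414670.0, 6812844.0)))  # O1
```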
In the above embodiments, the computing device performs two operations on each frame of the acquired laser point cloud. One operation is target feature extraction: a function is constructed to fit each target feature and the constraint corresponding to each function is obtained, and these functions together with their constraints constitute the main map. The other operation is to construct a submap from each frame of laser point cloud (the submap is obtained by splicing the occupied grid submaps corresponding to the individual frames); this constructed submap is the occupancy grid map (OGM). Afterwards, by establishing an index of the OGM on the main map, a composite laser map consisting of the main map and the OGM is constructed, which reduces the storage requirement of the composite laser map while retaining more feature information for subsequent matching and positioning.
Based on the composite laser map obtained in the above embodiments, the following describes how an autonomous vehicle performs precise positioning with it. First, the laser sensor installed on the autonomous vehicle acquires laser point clouds in real time, and the laser point cloud acquired at the current moment is preprocessed; at the same time, the IMU on the vehicle senses the vehicle's position and orientation changes and other attitude information based on inertial acceleration. Then, the pre-built composite laser map is combined with a matching algorithm to obtain the positioning result of the autonomous vehicle. The positioning can be processed in two stages: matching first against the main map and then, according to the index labels, against the OGM, thereby achieving precise positioning. Common matching algorithms include iterative closest point (ICP) matching, feature matching, filter matching, and probability matching. A map composed directly of the original laser point cloud retains the original point cloud information and can therefore use a variety of matching algorithms, whereas the OGM currently used in the industry generally supports only filter matching and probability matching. Because the composite laser map constructed in this embodiment retains both the feature information of the laser point cloud (i.e., the main map) and the gridded point cloud information (i.e., the OGM), it can simultaneously support feature matching, filter matching, and probability matching of the laser point cloud, and can therefore provide more precise positioning results on a lightweight composite laser map. It should be noted that when the composite laser map is constructed, a standard pose of a standard vehicle is used; during actual positioning, the initial pose of the autonomous vehicle therefore needs to be adjusted to be as close to this standard pose as possible, so that the positioning accuracy is higher.
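A highly simplified sketch of this two-stage flow is shown below; the matcher functions are injected as placeholders because the text leaves the concrete matching algorithm open (feature, filter, or probability matching), and `nearest_submap` is the index lookup sketched earlier:

```python
def localize(scan, imu_pose_guess, main_map, ogm_index, ogm_submaps,
             match_features, match_grid, nearest_submap):
    """Two-stage positioning sketch: the matcher callables are supplied by the caller."""
    # Stage 1: coarse pose from matching the scan against the main map
    # (fitted line/surface functions and their constraints).
    coarse = match_features(scan, main_map, imu_pose_guess)
    # Stage 2: use the index labels to fetch the OGM submap whose origin is
    # closest to the coarse pose, then refine against that grid.
    submap_id = nearest_submap(ogm_index, (coarse[0], coarse[1]))
    return match_grid(scan, ogm_submaps[submap_id], coarse)

# Minimal runnable usage with stub matchers:
pose = localize(
    scan=None, imu_pose_guess=(0.0, 0.0, 0.0), main_map=None,
    ogm_index={(414668.0, 6812844.0): "O1"}, ogm_submaps={"O1": None},
    match_features=lambda s, m, g: (414668.5, 6812844.2, 0.01),  # stub
    match_grid=lambda s, sm, g: g,                               # stub
    nearest_submap=lambda idx, xy: "O1",                         # stub
)
print(pose)  # (414668.5, 6812844.2, 0.01)
```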
Compared with directly constructing the original laser point cloud into a laser map, the composite laser map provided by the embodiments of the present application requires less storage space and is more lightweight. Compared with the existing approach of compressing the three-dimensional laser point cloud into two-dimensional information and constructing a two-dimensional OGM from that compressed information, the composite laser map improves the OGM so that more detailed features are retained. In addition, in scenes where the surroundings of the road are relatively open, the original OGM is fairly uniform, and matching with the original OGM alone often causes large positioning errors (because neighbouring local regions of the OGM look similar when viewed from different positions in an open scene), whereas the main map in the composite laser map provided by the embodiments of the present application can provide line features and surface features such as street light poles and street signs to improve the matching accuracy.
FIG. 12 is a schematic structural diagram of a computing device 1200 provided by an embodiment of the present application. The computing device 1200 may be deployed in agents for various kinds of intelligent driving (e.g., unmanned driving, assisted driving), such as autonomous vehicles and assisted-driving vehicles among wheeled mobile devices, to construct the composite laser map so that the agent can position itself based on the constructed map. The computing device 1200 may also be an independent terminal device, such as a mobile phone, personal computer, or tablet, used to construct the composite laser map and send it to the intelligent-driving agents for positioning. The computing device 1200 may include an extraction module 1201, a first construction module 1202, a second construction module 1203, and an indexing module 1204. The extraction module 1201 is configured to extract target features based on laser point cloud data, where a target feature is a set of laser points extracted from the laser point cloud data that meet a preset condition, each laser point including its coordinates and reflection intensity. The first construction module 1202 is configured to construct a function that fits the target feature and to obtain the constraint of the function; the functions and constraints constitute the main map. The second construction module 1203 is configured to construct an occupancy grid map (OGM) from the laser point cloud data. The indexing module 1204 is configured to build an index of the OGM on the main map.
In the above embodiment, the computing device 1200 performs two operations on the acquired laser point cloud. One operation is target feature extraction through the extraction module 1201, after which the first construction module 1202 constructs a function fitting each target feature and obtains the constraint corresponding to each function; these functions and their constraints constitute the main map. The other operation is that the second construction module 1203 constructs a submap from each frame of the laser point cloud; the constructed submap is the OGM. Afterwards, the indexing module 1204 establishes the index of the OGM on the main map and thereby constructs the composite laser map of the main map and the OGM, which reduces the storage requirement of the composite laser map while retaining more feature information for subsequent matching and positioning.
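To show how the four modules fit together, a composition sketch could look like the following; the class and method names are purely illustrative, and the callables stand in for whatever implementations the surrounding text describes:

```python
class CompositeMapBuilder:
    """Illustrative composition of the four modules of computing device 1200."""

    def __init__(self, extract, fit_function, build_ogm, build_index):
        self.extract = extract            # module 1201: target feature extraction
        self.fit_function = fit_function  # module 1202: function + constraint (main map)
        self.build_ogm = build_ogm        # module 1203: occupancy grid map
        self.build_index = build_index    # module 1204: OGM index on the main map

    def build(self, frames):
        main_map = [self.fit_function(feature)
                    for frame in frames
                    for feature in self.extract(frame)]
        ogm = self.build_ogm(frames)
        index = self.build_index(main_map, ogm)
        return main_map, ogm, index
```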
In a possible design, the second construction module 1203 is specifically configured to: construct a first occupied grid submap from first laser point cloud data, where the first laser point cloud data is any one frame or several frames of the acquired laser point cloud, and splice the first occupied grid submaps constructed from the first laser point cloud data to obtain the occupancy grid map. For example, assuming there are 100 frames of laser point clouds, 100 occupied grid submaps can be constructed. The process of constructing each occupied grid submap is as follows: after the length and width of the occupied grid submap (that is, its size) and the grid resolution have been set, the computing device 1200 projects each frame of the laser point cloud in the laser coordinate system into the corresponding occupied grid submap. The computing device 1200 then applies a series of mathematical transformations to each frame of the laser point cloud projected onto the occupied grid submap and, according to the probability of each cell being occupied, marks the cell as occupied or free. This processing is performed for each frame of laser point cloud, so that each of the n frames corresponds to one occupied grid submap (that is, a first occupied grid submap, n in total); the occupied grid submaps corresponding to the n frames of laser point clouds are then spliced to obtain a complete OGM.
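A splicing sketch is given below; it assumes equally sized submaps arranged on a regular layout keyed by their position in the splicing order (that keying is an illustrative assumption derived from the origin-coordinate ordering described above):

```python
import numpy as np

def splice_submaps(submaps, cells=300):
    """submaps: {(grid_row, grid_col): 2-D array}, keyed by each submap's
    position in the splicing order. Returns one large array holding the spliced OGM."""
    rows = max(r for r, _ in submaps) + 1
    cols = max(c for _, c in submaps) + 1
    ogm = np.zeros((rows * cells, cols * cells), dtype=np.int8)
    for (r, c), grid in submaps.items():
        ogm[r * cells:(r + 1) * cells, c * cells:(c + 1) * cells] = grid
    return ogm

# Nine submaps arranged 3 x 3, as in the Figure 7 example.
tiles = {(r, c): np.zeros((300, 300), dtype=np.int8) for r in range(3) for c in range(3)}
print(splice_submaps(tiles).shape)  # (900, 900)
```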
In a possible design, the indexing module 1204 is specifically configured to: first convert the center position (i.e., the origin) of the occupied grid submap corresponding to each laser point cloud into a coordinate value in the UTM coordinate system, and then add each origin coordinate value to the main map as the index label of the occupied grid submap corresponding to that frame of laser point cloud.
In a possible design, the first construction module 1202 is specifically configured to: obtain the value interval of the independent variable corresponding to the function; or obtain the value of a target independent variable in the function, where the target independent variable includes the coordinates of a target laser point and the target laser point belongs to the target feature.
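For illustration only, the sketch below fits a 2-D simplification of such a line feature with least squares and records the value interval of the independent variable as the constraint (the actual features are 3-D and the fitting method is not prescribed here):

```python
import numpy as np

def fit_line_feature(points_xy):
    """Fit f(x) = a*x + b to the laser points of one line feature and return the
    parameters together with the constraint (the x-interval the feature spans)."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    a, b = np.polyfit(x, y, deg=1)
    constraint = (float(x.min()), float(x.max()))  # value interval of the independent variable
    return {"a": float(a), "b": float(b), "x_interval": constraint}

pts = np.array([[0.0, 1.0], [1.0, 3.0], [2.0, 5.0]])
print(fit_line_feature(pts))
# approximately {'a': 2.0, 'b': 1.0, 'x_interval': (0.0, 2.0)}
```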
In a possible design, the OGM includes the height of the obstacles in a first grid cell and the average of the reflection intensities of the laser points falling into the first grid cell, where the first grid cell is any occupied cell in the occupancy grid map. In other words, each occupied cell (i.e., first grid cell) of the OGM obtained in this embodiment of the present application stores the average height of the laser points falling into the cell (i.e., the average height of the obstacles in the cell) and the average reflection intensity (i.e., the average of the reflection intensities of the laser points falling into the cell). The OGM provided here differs from an existing OGM in that it stores not only the average height of the obstacle corresponding to an occupied cell but also the average reflection intensity of the laser points falling into the cell; an existing OGM stores only the average height of the obstacle, and the reflection intensity of the laser points is stored elsewhere. The advantage of such storage is that it is easy to look up, which makes practical use more convenient.
In a possible design, the heights of the obstacles in occupied cells of the OGM may be stored as integer data, the average reflection intensity of the laser points falling into the occupied cells may be stored as integer data, or both may be stored as integer data. In other words, the data stored in the occupied cells of the OGM may be integer data, whereas existing solutions store the data as floating-point data. Integer data occupies less storage space than floating-point data (theoretically, the storage occupied by integer data is 1/4 of that of floating-point data); the advantage of this embodiment of the present application is therefore that storage capacity can be saved.
In a possible design, each occupied cell of the OGM obtained in this embodiment of the present application can be classified as partially occupied or fully occupied. Fully occupied refers to obstacles extending from the ground up to a certain height (for example, office buildings, residential buildings, and other general buildings); partially occupied refers to structures that occupy only part of the vertical space, such as bridge openings, tunnels, viaducts, and elevated walkways, which may also be called suspended obstacles. For these two occupancy types, when an occupied cell of the OGM is fully occupied, that is, its obstacle is a general obstacle, the obstacle height stored in the cell is the height of the obstacle's upper edge above the ground; when an occupied cell is partially occupied, that is, its obstacle is a suspended obstacle, the cell stores a first height from the obstacle's lower edge to the ground and a second height from the obstacle's upper edge to the ground. The improved OGM provided by this embodiment of the present application differs from an existing OGM in that the obstacles in occupied cells are classified into general buildings and suspended obstacles, different heights are stored for the different obstacle types, and the average reflection intensity of the laser points falling into the occupied cell is also stored, whereas in an existing OGM any occupied cell is simply treated as fully occupied. The advantage of storing obstacle heights in this way is that more detailed features of the obstacles are preserved, which improves the accuracy of subsequent positioning.
In a possible design, the target feature in this embodiment of the present application is essentially a set of special laser points extracted from one frame of laser point cloud; the extracted target features are generally line features and surface features, where a line feature indicates that the laser points extracted from the laser point cloud data lie on the same straight line, and a surface feature indicates that the laser points extracted from the laser point cloud data lie on the same plane.
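Elsewhere the text mentions that such features are typically found by curvature-based filtering of each frame of the point cloud. The snippet below is only a rough, non-authoritative sketch of that idea; the window size and thresholds are made-up illustrative values, not values from the patent:

```python
import numpy as np

def classify_scan_points(scan_xyz, window=5, edge_thresh=1.0, plane_thresh=0.1):
    """Rough curvature-based split of one scan line into line-feature ("line")
    and surface-feature ("surface") candidates."""
    n = len(scan_xyz)
    labels = np.full(n, "other", dtype=object)
    for i in range(window, n - window):
        neighbours = scan_xyz[i - window:i + window + 1]
        # Sum of difference vectors to the neighbourhood: small on planes, large on edges.
        diff = np.sum(neighbours - scan_xyz[i], axis=0)
        curvature = np.linalg.norm(diff) / (np.linalg.norm(scan_xyz[i]) + 1e-9)
        if curvature > edge_thresh:
            labels[i] = "line"
        elif curvature < plane_thresh:
            labels[i] = "surface"
    return labels
```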
FIG. 13 is a schematic structural diagram of the computing device provided by an embodiment of the present application. For convenience of description, only the parts related to this embodiment are shown; for specific technical details that are not disclosed, refer to the method part of the embodiments of the present application. The modules of the computing device described in the embodiment corresponding to FIG. 12 may be deployed on the computing device 1300 to implement the functions of the computing device in that embodiment. Specifically, the computing device 1300 is implemented by one or more servers, and the computing device 1300 may vary greatly in configuration or performance; it may include one or more central processing units (CPUs) 1322 (for example, one or more processors), memory 1332, and one or more storage media 1330 (for example, one or more mass storage devices) storing application programs 1342 or data 1344. The memory 1332 and the storage medium 1330 may provide short-term storage or persistent storage. The program stored in the storage medium 1330 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations for the training device. Furthermore, the central processing unit 1322 may be configured to communicate with the storage medium 1330 and execute, on the computing device 1300, the series of instruction operations in the storage medium 1330. The computing device 1300 may also include one or more power supplies 1326, one or more wired or wireless network interfaces 1350, one or more input/output interfaces 1358, and/or one or more operating systems 1341, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, and so on. The steps performed by the computing device in the embodiment corresponding to FIG. 5 may be implemented based on the structure shown in FIG. 13, and details are not repeated here.
It should also be noted that the device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. In addition, in the drawings of the device embodiments provided in the present application, the connection relationships between modules indicate that they have communication connections between them, which may be specifically implemented as one or more communication buses or signal lines.
The technical solutions of the present application may essentially be embodied in the form of a software product. The computer software product is stored in a readable storage medium, such as a computer's floppy disk, USB flash drive, removable hard disk, read-only memory (ROM), random access memory (RAM), magnetic disk, or optical disc, and includes several instructions for causing a computer device (which may be a personal computer, training device, network device, or the like) to execute the methods described in the various embodiments of the present application.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, training device, or data center to another website, computer, training device, or data center in a wired manner (e.g., coaxial cable, optical fiber, digital subscriber line) or a wireless manner (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that a computer can store, or a data storage device such as a training device or data center that integrates one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., high-density digital video discs (DVDs)), or semiconductor media (e.g., solid state disks (SSDs)), among others.

Abstract

A method for constructing a map and a computing device are disclosed. The constructed map can be applied to the field of laser processing within autonomous driving, and specifically to intelligent-driving agents such as intelligent vehicles, intelligent connected vehicles, and autonomous vehicles. The method includes performing two operations on each acquired frame of laser point cloud. One operation is target feature extraction: a function fitting each target feature is constructed and the constraint corresponding to each function is obtained, and these functions together with their constraints constitute a main map. The other operation is constructing a submap from each frame of laser point cloud (obtained by splicing the occupied grid submaps corresponding to the individual frames); the constructed submap is an occupancy grid map (OGM). Afterwards, an index of the OGM is established on the main map to construct a composite laser map of the main map and the OGM, which has a low storage requirement while retaining more feature information for matching and positioning.

Description

一种构建地图的方法及计算设备
本申请要求于2020年9月14日提交中国专利局、申请号为202010960450.0、申请名称为“一种构建地图的方法及计算设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及激光处理领域,尤其涉及一种构建地图的方法及计算设备。
背景技术
定位技术作为自动驾驶的关键技术之一,是通过各种定位手段与多种传感器数据融合实现自动驾驶汽车的精确定位,让自动驾驶汽车获得自身确切位置。精确定位是自动驾驶汽车必不可少的功能,其中,由于激光雷达、三维激光扫描仪等激光传感器具有较高的测量精度,因此通过这些激光传感器获得的激光点云被广泛应用于精确定位中。
通过激光传感器实现精确定位的前提是要先获得一张基于激光点云构建的地图(可简称为激光地图),再根据激光传感器实时获取的激光点云同激光地图进行匹配实现定位。目前,在已实现的方案中,构建激光地图的方式主要有两种:一种是将原始的激光点云直接构建成激光地图,在后续定位时直接将激光传感器实时获取的激光点云和激光地图进行匹配;另一种是将三维的激光点云压缩为二维信息,并根据压缩的二维信息构建二维的占据栅格地图(occupancy grid map,OGM),该二维的OGM就构成激光地图。
然而上述两种方式都存在缺陷,第一种方式得到的激光地图所需的存储容量过大,难以工程复用,第二种方式将三维的激光点云压缩为二维信息损失了很多特征,从而影响后续的定位精度。
发明内容
本申请实施例提供了一种构建地图的方法及计算设备,通过在主地图上建立占据栅格地图的索引,构建主地图和占据栅格地图的复合激光地图,降低了该复合激光地图的存储容量,同时保留更多的特征信息用于后续的匹配定位。
基于此,本申请实施例提供以下技术方案:
第一方面,本申请实施例提供了一种构建地图的方法,该地图可应用于自动驾驶领域中的激光处理领域,例如,可应用在智能行驶的智能体(如,智能汽车、智能网联汽车)上,该方法包括:计算设备会先获取构建地图所需的激光点云数据(也可简称为激光点云),并对获取到的激光点云进行特征提取,得到目标特征,该目标特征为从激光点云数据中提取到的符合预设条件的激光点,该激光点包括激光点的坐标和激光点的反射强度。例如,可以是处于标准位姿的激光传感器在不同地理位置处分别获取一帧激光点云,共得到n帧激光点云后,再由计算设备对这n帧激光点云进行特征提取,得到目标特征;也可以是处于标准位姿的激光传感器在不同地理位置每获取到一帧激光点云,就发送给计算设备进行特征提取,直至处理完所有的n帧激光点云,具体此处对计算设备处理激光点云的方式不 做限定。针对激光点云提取到对应的目标特征后,就可构建拟合每个目标特征的函数,并得到该函数的限制条件,例如,假设从第一帧激光点云中提取到了3个目标特征,其中2个是线特征,1个是面特征,那么就可以分别构建拟合这3个目标特征的函数(共3个函数),并且还将得到每个函数的限制条件(共3个限制条件),类似地,可以针对每帧激光点云都进行所述处理,就可得到所有n帧激光点云的目标特征对应的函数和限制条件,得到的这些函数和限制条件就构成主地图。例如,假设共有100帧激光点云,从中共提取到800个目标特征,那么对应就可以得到800个函数,以及800个函数对应的限制条件,这800个目标特征和800个限制条件就构成主地图,这里需要注意的是,共提取到的这800个目标特征是已经经过筛选合并后的目标特征,例如,不同的2帧激光点云中分别提取到10个目标特征和8个目标特征,其中可能会有部分目标特征表征的同一个事物(如,是同一个路灯、路障等),那么就需要先对这相同的目标特征合并为一个目标特征,假设从前一帧激光点云提取到10个目标特征中有2个目标特征是与从后一帧激光点云提取到的8个目标特征中的2个目标特征是同一个事物,那么就只需保留其中一份中的2个目标特征,后续提取到的目标特征均是如此处理,此处不予赘述。计算设备还将对获取到的每帧激光点云构建子地图,该构建的子地图就是占据栅格子地图(OGM)。针对每帧激光点云进行上述处理后,就得到了一个主地图和一个OGM,那么此时就需要将得到的主地图和OGM联系起来,组成复合激光地图,具体地,计算设备可以是在主地图上建立OGM的索引以组合得到复合激光地图。
在本申请上述实施方式中,计算设备对获取到的激光点云都进行两种操作,一种操作是目标特征提取,构建拟合每个目标特征的函数,并得到每个函数对应的限制条件,这些函数和函数对应的限制条件就构成主地图,另一种操作是根据激光点云构建子地图OGM,之后,通过在主地图上建立OGM的索引,构建主地图和OGM的复合激光地图,降低了该复合激光地图的存储容量,同时保留更多的特征信息用于后续的匹配定位。
在第一方面的一种可能的设计中,计算设备根据激光点云构建OGM可以是:计算设备针对每帧获取到的激光点云,都先构建一个对应的占据栅格子地图,或者是,针对几帧连续的激光点云,构建一个对应的占据栅格子地图,例如,假设共有100帧激光点云,那么就可构建得到100个占据栅格子地图(可以一对一构建,也可以多对一构建,不做限定,此处仅为示意),构建每个占据栅格子地图的过程如下:在设置好占据栅格子地图的长和宽(即设置占据栅格子地图尺寸)以及栅格分辨率后,计算设备将得到的激光坐标系中的每帧激光点云投影到对应的占据栅格子地图中,若某个栅格中没有激光点就认为是空,有至少一个激光点就认为该栅格对应存在障碍物。因此,对于一个栅格把它是空的概率表示为p(s=1),有障碍物表示为p(s=0),两者的概率和为1,之后,计算设备对投影至占据栅格子地图的每帧激光点云经过一系列数学变换,根据各个栅格是否被占据的概率把这个栅格定位为占据状态或者空闲状态,其中,占据栅格子地图的中心位置就是该占据栅格子地图的原点O。类似地,针对每一帧激光点云,都进行上述处理,这样所有的n帧激光点云就分别对应有一个占据栅格子地图(共n个),之后,将这n帧激光点云对应的占据栅格子地图进行拼接,从而得到一张完整的OGM。
在本申请上述实施方式中,具体阐述了如何由激光点云构建对应的占据栅格子地图,并将这些占据栅格子地图拼接成一张完整的OGM,具备可实现性。
在第一方面的一种可能的设计中,计算设备在主地图上建立OGM的索引可以是:首先,计算设备将各个激光点云对应的占据栅格子地图的中心位置(即原点)转换为横墨卡托格网系统(universal transverse mercator grid system,UTM)坐标系下的坐标值,然后,再在主地图上添加每个原点坐标值作为每帧激光点云对应的占据栅格子地图的索引标签。
在本申请上述实施方式中,阐述了如何将主地图与OGM建立联系的一种具体实现方式,即在主地图上添加各个占据栅格子地图的索引标签,这种实现方式易于实现,操作简便。
在第一方面的一种可能的设计中,计算设备得到函数的限制条件可以是:得到该函数对应自变量的取值区间;或,得到该函数中目标自变量的取值,该目标自变量包括目标激光点的坐标,该目标激光点属于上述目标特征。
在本申请上述实施方式中,给出了几种函数的限制条件的具体表现形式,具备灵活性和可选择性。
在第一方面的一种可能的设计中,OGM可以包括:第一栅格中障碍物的高度和落入该第一栅格中激光点的反射强度的平均值,该第一栅格为该占据栅格地图内任意一个被占据的栅格,也就是说,本申请实施例得到的OGM中每个被占据的栅格(即第一栅格)内存储有落入该栅格的激光点的平均高度(即该栅格内障碍物的平均高度)和平均反射强度(即落入该栅格内的激光点的反射强度的平均值)。
在本申请上述实施方式中,所提供的OGM与已有的OGM的区别在于:不仅存储有占据栅格对应障碍物的平均高度,还存储有落入占据栅格的激光点的平均反射强度,已有的OGM只存储有障碍物的平均高度,而激光点的反射强度存储在别处。本申请实施例这样存储的好处在于:易于查找,在实际应用过程中,更加简便。
在第一方面的一种可能的设计中,计算设备可以将OGM中占据栅格障碍物的高度存储为整形数据,也可以将OGM中落入占据栅格中激光点的平均反射强度存储为整形数据,还可以将OGM中占据栅格障碍物的高度和落入该占据栅格中激光点的平均反射强度均可以存储为整形数据,其中,整形数据是不包含小数部分的数值型数据,整形数据只用来表示整数,以二进制形式存储。
在本申请上述实施方式中,阐述了OGM中占据栅格存储的数据可以是整形数据,已有的方案存储数据的方式均是浮点型数据,整形数据相比浮点型数据占用更少的存储空间(理论上整形数据占据的存储容量为浮点型数据的1/4),因此,本申请实施例这样做的好处在于:可以节约存储容量。
在第一方面的一种可能的设计中,本申请实施例得到的OGM中每个被占据的栅格可以区分为部分占据和完全占据,全部占据是指从地面延伸到一定高度的障碍物(如,写字楼、住宅楼等一般建筑物),局部占据是指如桥洞、隧道、高架桥、空中人行横道等在空间占据一部分的建筑物,也可称为悬空障碍物。对这两种占据类型,当OGM中某个被占据的栅格是完全占据,也就是该栅格的障碍物为一般障碍物,那么该栅格内存储的障碍物高 度就是障碍物的上边缘距离地面的高度;当OGM中某个被占据的栅格是部分占据,也就是该栅格的障碍物为悬空障碍物,那么该栅格内存储的障碍物高度就是障碍物的下边缘距离地面的第一高度和障碍物的上边缘距离地面的第二高度。
在本申请上述实施方式中,本申请实施例提供的改善型OGM与已有的OGM的区别在于:对占据栅格内的障碍物进行了分类,分为一般建筑物和悬空障碍物,针对不同类型的障碍物进行不同高度的存储,同时还存储有落入占据栅格的激光点的平均反射强度,而已有的OGM若有栅格被占据,就认为是完全占据。本申请实施例这样存储障碍物高度的好处在于:保留了障碍物更多的细节特征,提高后续定位的精确性。
在第一方面的一种可能的设计中,本申请实施例该目标特征实质就是从一帧激光点云中提取出一些特别的激光点,提取的目标特征一般为线特征和面特征,其中,线特征用于表示从激光点云数据提取的激光点位于同一直线上,面特征用于表示从激光点云数据提取的激光点位于同一平面上。
在本申请上述实施方式中,具体阐述了提取到的目标特征所符合的一些条件,提取的目标特征具备利于后续定位的关键特征。
本申请实施例第二方面提供一种计算设备,该计算设备具有实现上述第一方面或第一方面任意一种可能实现方式的方法的功能。该功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。该硬件或软件包括一个或多个与上述功能相对应的模块。
本申请实施例第三方面提供一种计算设备,可以包括存储器、处理器以及总线系统,其中,存储器用于存储程序,处理器用于调用该存储器中存储的程序以执行本申请实施例第一方面或第一方面任意一种可能实现方式的方法。
本申请第四方面提供一种计算机可读存储介质,该计算机可读存储介质中存储有指令,当其在计算机上运行时,使得计算机可以执行上述第一方面或第一方面任意一种可能实现方式的方法。
本申请实施例第五方面提供了一种计算机程序,当其在计算机上运行时,使得计算机执行上述第一方面或第一方面任意一种可能实现方式的方法。
本申请实施例第六方面提供了一种芯片,该芯片包括至少一个处理器和至少一个接口电路,该接口电路和该处理器耦合,至少一个接口电路用于执行收发功能,并将指令发送给至少一个处理器,至少一个处理器用于运行计算机程序或指令,其具有实现如上述第一方面或第一方面任意一种可能实现方式的方法的功能,该功能可以通过硬件实现,也可以通过软件实现,还可以通过硬件和软件组合实现,该硬件或软件包括一个或多个与上述功能相对应的模块。此外,该接口电路用于与该芯片之外的其它模块进行通信,例如,该接口电路可将芯片上处理器得到的复合激光地图发送给各种智能行驶(如,无人驾驶、辅助驾驶等)的智能体进行运动规划(如,驾驶行为决策、全局路径规划等)。
附图说明
图1为本申请实施例提供的不同分辨率的OGM的一个示意图;
图2为本申请实施例提供的构建得到的某个地区的OGM的一个示意图;
图3为本申请实施例提供的自动驾驶车辆的总体架构的一个示意图;
图4为本申请实施例提供的自动驾驶车辆的一种结构示意图;
图5为本申请实施例提供的构建地图的方法的一个流程示意图;
图6为本申请实施例提供的基于一帧激光点云提取目标特征并构建函数的一个示意图;
图7为本申请实施例提供的9帧激光点云对应构建的9张占据栅格子地图拼接为一张OGM的示意图;
图8为本申请实施例提供的构建得到的OGM存储数据类型的一个示意图;
图9为本申请实施例提供的构建得到的OGM存储数据类型的另一示意图;
图10为本申请实施例提供的构建复合激光地图流程的一个示意图;
图11为本申请实施例提供的构建好的复合激光地图实际应用过程的一个示意图;
图12为本申请实施例提供的计算设备的一个示意图;
图13为本申请实施例提供的计算设备的另一示意图。
具体实施方式
本申请实施例提供了一种构建地图的方法及计算设备,通过在主地图上建立占据栅格地图的索引,构建主地图和占据栅格地图的复合激光地图,降低了该复合激光地图的存储容量,同时保留更多的特征信息用于后续的匹配定位。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的术语在适当情况下可以互换,这仅仅是描述本申请的实施例中对相同属性的对象在描述时所采用的区分方式。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,以便包含一系列单元的过程、方法、系统、产品或设备不必限于那些单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它单元。
本申请实施例涉及了许多关于激光点云、地图等相关知识,为了更好地理解本申请实施例的方案,下面先对本申请实施例可能涉及的相关术语和概念进行介绍。应理解的是,相关术语和概念的解释可能会因为本申请实施例的具体情况有所限制,但并不代表本申请仅能局限于该具体情况,在不同实施例的具体情况可能也会存在差异,具体此处不做限定。
(1)激光点云
激光点云又可称为激光点云数据,使用激光雷达、三维激光扫描仪等激光传感器接收到的激光信息以点云的形式呈现,而通过测量仪器得到的被测对象外观表面的点数据集合就称为点云,若该测量仪器是激光传感器,那么得到的点云则称为激光点云(一般32线激光在同一时刻会有数万个激光点),激光点云包含的激光信息可记为[x,y,z,intensity],该激光信息表示的分别是每个激光点所打目标位置处在激光坐标系的三维坐标以及该激光点的反射强度。
(2)横墨卡托格网系统(universal transverse mercator grid system,UTM)坐标系
UTM坐标是一种平面直角坐标,这种坐标格网系统及其所依据的投影已经广泛用于地 形图,作为卫星影像和自然资源数据库的参考格网以及要求精确定位的其他应用,例如,自动驾驶车辆的精确定位一般采用的就是UTM坐标。
UTM投影为椭圆柱横正轴割地球椭球体,椭圆柱的中心线位于椭球体赤道面上,且通过椭球体质点。从而将椭球体上的点投影到椭圆柱上。两条割线圆在UTM投影图上长度无变,即2条标准经线圆。两条割线圆之正中间为中央经线圆,中央经线投影后的长度为其投影前的0.9996倍,比例因子k=投影后的长度/投影前的实际长度。则标准割线和中央经线的经度差为1.6206°,即1°37′14.244″。UTM经度区范围为1到60,其中58个区的东西跨度为6°。经度区涵盖了地球中纬度范围从80°S到84°N之间的所有区域。一共有20个UTM纬度区,每个区的南北跨度为8°,使用字母C到X标识(其中没有字母I和O)。A、B、Y、Z区不在系统范围以内,它们覆盖了南极和北极区。
UTM坐标的表示格式为:经度区纬度区以东以北,其中以东表示从经度区的中心子午线的投影距离,而以北表示距离赤道的投影距离。这个两个值的单位均为米。举例来说,使用UTM表示经/纬度坐标(61.44,25.40)的结果就是35V 414668 6812844,而经/纬度坐标(-47.04,-73.48)的表示结果为18G 615471 4789269。
(3)占据栅格地图(occupancy grid map,OGM)
OGM是一种机器人常用的地图表示法,机器人经常使用激光传感器,而传感器数据存在噪音,例如,用激光传感器检测前方障碍物距离机器人多远,不可能检测到一个准确的数值,比如一个角度下,如果准确值是4米,那么在当前时刻检测到障碍物是3.9米,但是下一刻检测的是4.1m,不能将两个距离的位置都认为是障碍物,为解决这一问题就采用OGM,如图1就是示意的两个不同分辨率的OGM,黑色的点为激光点,所有映射在OGM中的激光点就构成激光点云,在实际应用中,一般采用的OGM尺寸为300*300,即有300*300个小格子(即栅格)组成,每个栅格的尺寸(即长*宽,指每个栅格对应在车辆坐标系中是多少米)就称为OGM的分辨率,分辨率越高,栅格尺寸越小,那么某一时刻激光传感器获取到的激光点云落在某个特定栅格中的激光点就越少,如图1左边的图所示,落在灰色底栅格(图1左边图的第6行第11列)中的激光点为4个,反之,分辨率越低,栅格尺寸越大,那么同样的,在同一时刻激光传感器获取到的激光点云落在某个特定栅格中的激光点就越少,如图1右边的图所示,落在灰色底栅格(图1右边图的第4行第7列)中的激光点为9个。对一般的地图来讲,地图上某一个点要么有障碍物要么没有,但在OGM中,在某一特定时刻,若某个栅格中没有激光点就认为是空,有至少一个激光点就认为该栅格对应存在障碍物。因此,对于一个栅格把它是空的概率表示为p(s=1),有障碍物表示为p(s=0),两者的概率和为1,之后,把各个不同时刻获取到的激光点云映射到OGM中,并经过一系列数学变换,根据各个栅格是否被占据的概率把这个栅格定位为占据状态或者空闲状态。需要注意的是,一般来说,OGM的中心位置就是OGM的原点,如图1左边图所示的三角形示意的就是该OGM的原点。
为便于理解,下面示意一种本申请实施例构建好的二维OGM(栅格一般为分米级),每个被占据的栅格内存储有落入该栅格的激光点的平均高度(即该栅格内障碍物的平均高度)和平均反射强度,请参阅图2,图2示意的是本申请实施例构建得到的某个地区的OGM。
下面结合附图,对本申请的实施例进行描述。本领域普通技术人员可知,随着技术的发展和新场景的出现,本申请实施例提供的技术方案对于类似的技术问题,同样适用。
本申请实施例基于激光点云构建的地图可以应用于对各种智能行驶(如,无人驾驶、辅助驾驶等)的智能体进行运动规划(如,驾驶行为决策、全局路径规划等)的场景中,以该智能体为自动驾驶车辆为例,先对自动驾驶车辆的总体架构进行说明,具体请参阅图3,图3示意的是自上而下的分层式体系架构,各系统之间可有定义接口,用于对系统间的数据进行传输,以保证数据的实时性和完整性。下面对各个系统进行简单介绍:
(1)环境感知系统
环境感知是智能驾驶车辆中最为基础的一个部分,无论是做驾驶行为决策还是全局路径规划,都需要建立在环境感知的基础上,依据对道路交通环境的实时感知结果,进行相对应的判断、决策和规划,使车辆实现智能驾驶。
环境感知系统主要是利用各种传感器获取相关的环境信息,从而完成对环境模型的构建以及对于交通场景的知识表达,所使用的传感器包括摄像机、单线雷达(SICK)、四线雷达(IBEO)、三维激光雷达(HDL-64E)等,其中,摄像机主要是负责红绿灯检测、车道线检测、道路指示牌检测、车辆识别等;激光传感器主要负责动/静态的障碍物的检测、识别和跟踪以及自身的精确定位,例如,三维激光雷达发出的激光一般以10FPS的频率采集外部环境信息,并返回每个时刻的激光点云,最后,将获取的实时激光点云发送给自主决策系统做进一步的决策和规划。
(2)自主决策系统
自主决策系统是智能驾驶车辆中的关键组成部分,该系统主要分为行为决策和运动规划这两个核心子系统,其中,行为决策子系统主要是通过运行全局规划层来获取全局最优行驶路线,以明确具体驾驶任务,再根据环境感知系统发来的当前实时道路信息(即图3中的实时环境感知信息),具体地,在本申请实施例中,自主决策系统根据环境感知系统发来的实时的每帧激光点云,输出自车周围物体的位置、朝向等信息,并采用匹配算法与本申请实施例事先构建好的地图(即图3中的激光地图)进行匹配,从而实现自车的精确定位。其中,常用的匹配算法有直接点云匹配(如,最近点搜索(iterative closest point,ICP)算法)、概率匹配(如,正态分布变换(normal distribution transform,NDT)算法),滤波匹配(如,直方图滤波)、特征匹配等。
最后,基于道路交通规则和驾驶经验,根据自车的定位,以及周围物体的位置、朝向等信息决策出合理的驾驶行为,并将该驾驶行为指令发送给运动规划子系统,运动规划子系统则进一步根据接收的驾驶行为指令以及当前的环境感知信息,基于安全性、平稳性等指标规划处一条可行驾驶轨迹,并发送给控制系统。
(3)控制系统
控制系统具体来说也分为两个部分:控制子系统和执行子系统,其中,控制子系统用于将自主决策系统产生的可行驾驶轨迹转化为各个执行模块的具体执行指令,并传递给执行子系统;执行子系统接收来自控制子系统的执行指令后将其发送给各个控制对象,对车辆的转向、制动、油门、档位等进行合理的控制,从而使车辆自动行驶以完成对应的驾驶 操作。
需要说明的是,图3所示的自动驾驶车辆的总体架构仅为示意,在实际应用中,可包含更多或更少的系统/子系统或模块,并且每个系统/子系统或模块可包括多个部件,具体此处不做限定。
为了更进一步的理解本方案,基于图3对应所述的自动驾驶车辆的总体架构,本申请实施例中还将结合图4对自动驾驶车辆内部各个结构的具体功能进行介绍,请先参阅图4,图4为本申请实施例提供的自动驾驶车辆的一种结构示意图,自动驾驶车辆100配置为完全或部分地自动驾驶模式,例如,自动驾驶车辆100可以在处于自动驾驶模式中的同时控制自身,并且可通过人为操作来确定车辆及其周边环境的当前状态,确定周边环境中的至少一个其他车辆的可能行为,并确定其他车辆执行可能行为的可能性相对应的置信水平,基于所确定的信息来控制自动驾驶车辆100。在自动驾驶车辆100处于自动驾驶模式中时,也可以将自动驾驶车辆100置为在没有和人交互的情况下操作。
自动驾驶车辆100可包括各种子系统,例如行进系统102、传感器系统104(如,图3中的摄像机、SICK、IBEO、激光雷达等均属于传感器系统104中的模块)、控制系统106、一个或多个外围设备108以及电源110、计算机系统112和用户接口116。可选地,自动驾驶车辆100可包括更多或更少的子系统,并且每个子系统可包括多个部件。另外,自动驾驶车辆100的每个子系统和部件可以通过有线或者无线互连。
行进系统102可包括为自动驾驶车辆100提供动力运动的组件。在一个实施例中,行进系统102可包括引擎118、能量源119、传动装置120和车轮/轮胎121。
其中,引擎118可以是内燃引擎、电动机、空气压缩引擎或其他类型的引擎组合,例如,汽油发动机和电动机组成的混动引擎,内燃引擎和空气压缩引擎组成的混动引擎。引擎118将能量源119转换成机械能量。能量源119的示例包括汽油、柴油、其他基于石油的燃料、丙烷、其他基于压缩气体的燃料、乙醇、太阳能电池板、电池和其他电力来源。能量源119也可以为自动驾驶车辆100的其他系统提供能量。传动装置120可以将来自引擎118的机械动力传送到车轮121。传动装置120可包括变速箱、差速器和驱动轴。在一个实施例中,传动装置120还可以包括其他器件,比如离合器。其中,驱动轴可包括可耦合到一个或多个车轮121的一个或多个轴。
传感器系统104可包括感测关于自动驾驶车辆100周边的环境的信息的若干个传感器。例如,传感器系统104可包括定位系统122(定位系统可以是全球定位GPS系统,也可以是北斗系统或者其他定位系统)、惯性测量单元(inertial measurement unit,IMU)124、雷达126、激光测距仪128以及相机130。传感器系统104还可包括被监视自动驾驶车辆100的内部系统的传感器(例如,车内空气质量监测器、燃油量表、机油温度表等)。来自这些传感器中的一个或多个的传感数据可用于检测对象及其相应特性(位置、形状、方向、速度等)。这种检测和识别是自主自动驾驶车辆100的安全操作的关键功能。在本申请实施例中,激光传感器是属于传感器系统104中非常重要的一个感知模块。
其中,定位系统122可用于估计自动驾驶车辆100的地理位置,在本申请实施例中,激光传感器可作为定位系统122中的一种,用于实现自动驾驶车辆100的精确定位,IMU 124 用于基于惯性加速度来感知自动驾驶车辆100的位置和朝向变化。在一个实施例中,IMU124可以是加速度计和陀螺仪的组合。雷达126可利用无线电信号来感知自动驾驶车辆100的周边环境内的物体,具体可以表现为毫米波雷达或激光雷达。在一些实施例中,除了感知物体以外,雷达126还可用于感知物体的速度和/或前进方向。激光测距仪128可利用激光来感知自动驾驶车辆100所位于的环境中的物体。在一些实施例中,激光测距仪128可包括一个或多个激光源、激光扫描器以及一个或多个检测器,以及其他系统组件。相机130可用于捕捉自动驾驶车辆100的周边环境的多个图像。相机130可以是静态相机或视频相机。
控制系统106为控制自动驾驶车辆100及其组件的操作。控制系统106可包括各种部件,其中包括转向系统132、油门134、制动单元136、计算机视觉系统140、线路控制系统142以及障碍避免系统144。
其中,转向系统132可操作来调整自动驾驶车辆100的前进方向。例如在一个实施例中可以为方向盘系统。油门134用于控制引擎118的操作速度并进而控制自动驾驶车辆100的速度。制动单元136用于控制自动驾驶车辆100减速。制动单元136可使用摩擦力来减慢车轮121。在其他实施例中,制动单元136可将车轮121的动能转换为电流。制动单元136也可采取其他形式来减慢车轮121转速从而控制自动驾驶车辆100的速度。计算机视觉系统140可以操作来处理和分析由相机130捕捉的图像以便识别自动驾驶车辆100周边环境中的物体和/或特征。所述物体和/或特征可包括交通信号、道路边界和障碍体。计算机视觉系统140可使用物体识别算法、运动中恢复结构(structure from motion,SFM)算法、视频跟踪和其他计算机视觉技术。在一些实施例中,计算机视觉系统140可以用于为环境绘制地图、跟踪物体、估计物体的速度等等。线路控制系统142用于确定自动驾驶车辆100的行驶路线以及行驶速度。在一些实施例中,线路控制系统142可以包括横向规划模块1421和纵向规划模块1422,横向规划模块1421和纵向规划模块1422分别用于结合来自障碍避免系统144、GPS 122和一个或多个预定地图的数据为自动驾驶车辆100确定行驶路线和行驶速度。障碍避免系统144用于识别、评估和避免或者以其他方式越过自动驾驶车辆100的环境中的障碍体,前述障碍体具体可以表现为实际障碍体和可能与自动驾驶车辆100发生碰撞的虚拟移动体。在一个实例中,控制系统106可以增加或替换地包括除了所示出和描述的那些以外的组件。或者也可以减少一部分上述示出的组件。
自动驾驶车辆100通过外围设备108与外部传感器、其他车辆、其他计算机系统或用户之间进行交互。外围设备108可包括无线通信系统146、车载电脑148、麦克风150和/或扬声器152。在一些实施例中,外围设备108为自动驾驶车辆100的用户提供与用户接口116交互的手段。例如,车载电脑148可向自动驾驶车辆100的用户提供信息。用户接口116还可操作车载电脑148来接收用户的输入。车载电脑148可以通过触摸屏进行操作。在其他情况中,外围设备108可提供用于自动驾驶车辆100与位于车内的其它设备通信的手段。例如,麦克风150可从自动驾驶车辆100的用户接收音频(例如,语音命令或其他音频输入)。类似地,扬声器152可向自动驾驶车辆100的用户输出音频。无线通信系统146可以直接地或者经由通信网络来与一个或多个设备无线通信。例如,无线通信系统146 可使用3G蜂窝通信,例如CDMA、EVD0、GSM/GPRS,或者4G蜂窝通信,例如LTE。或者5G蜂窝通信。无线通信系统146可利用无线局域网(wireless local area network,WLAN)通信。在一些实施例中,无线通信系统146可利用红外链路、蓝牙或ZigBee与设备直接通信。其他无线协议,例如各种车辆通信系统,例如,无线通信系统146可包括一个或多个专用短程通信(dedicated short range communications,DSRC)设备,这些设备可包括车辆和/或路边台站之间的公共和/或私有数据通信。
电源110可向自动驾驶车辆100的各种组件提供电力。在一个实施例中,电源110可以为可再充电锂离子或铅酸电池。这种电池的一个或多个电池组可被配置为电源为自动驾驶车辆100的各种组件提供电力。在一些实施例中,电源110和能量源119可一起实现,例如一些全电动车中那样。
自动驾驶车辆100的部分或所有功能受计算机系统112控制。计算机系统112可包括至少一个处理器113,处理器113执行存储在例如存储器114这样的非暂态计算机可读介质中的指令115。计算机系统112还可以是采用分布式方式控制自动驾驶车辆100的个体组件或子系统的多个计算设备。处理器113可以是任何常规的处理器,诸如商业可获得的中央处理器(central processing unit,CPU)。可选地,处理器113可以是诸如专用集成电路(application specific integrated circuit,ASIC)或其它基于硬件的处理器的专用设备。尽管图1功能性地图示了处理器、存储器、和在相同块中的计算机系统112的其它部件,但是本领域的普通技术人员应该理解该处理器、或存储器实际上可以包括不存储在相同的物理外壳内的多个处理器、或存储器。例如,存储器114可以是硬盘驱动器或位于不同于计算机系统112的外壳内的其它存储介质。因此,对处理器113或存储器114的引用将被理解为包括可以并行操作或者可以不并行操作的处理器或存储器的集合的引用。不同于使用单一的处理器来执行此处所描述的步骤,诸如转向组件和减速组件的一些组件每个都可以具有其自己的处理器,所述处理器只执行与特定于组件的功能相关的计算。
在此处所描述的各个方面中,处理器113可以位于远离自动驾驶车辆100并且与自动驾驶车辆100进行无线通信。在其它方面中,此处所描述的过程中的一些在布置于自动驾驶车辆100内的处理器113上执行而其它则由远程处理器113执行,包括采取执行单一操纵的必要步骤。
在一些实施例中,存储器114可包含指令115(例如,程序逻辑),指令115可被处理器113执行来执行自动驾驶车辆100的各种功能,包括以上描述的那些功能。存储器114也可包含额外的指令,包括向行进系统102、传感器系统104、控制系统106和外围设备108中的一个或多个发送数据、从其接收数据、与其交互和/或对其进行控制的指令。除了指令115以外,存储器114还可存储数据,例如道路地图、路线信息,车辆的位置、方向、速度以及其它这样的车辆数据,以及其他信息。这种信息可在自动驾驶车辆100在自主、半自主和/或手动模式中操作期间被自动驾驶车辆100和计算机系统112使用。用户接口116,用于向自动驾驶车辆100的用户提供信息或从其接收信息。可选地,用户接口116可包括在外围设备108的集合内的一个或多个输入/输出设备,例如无线通信系统146、车载电脑148、麦克风150和扬声器152。
计算机系统112可基于从各种子系统(例如,行进系统102、传感器系统104和控制系统106)以及从用户接口116接收的输入来控制自动驾驶车辆100的功能。例如,计算机系统112可利用来自控制系统106的输入以便控制转向系统132来避免由传感器系统104和障碍避免系统144检测到的障碍体。在一些实施例中,计算机系统112可操作来对自动驾驶车辆100及其子系统的许多方面提供控制。
可选地,上述这些组件中的一个或多个可与自动驾驶车辆100分开安装或关联。例如,存储器114可以部分或完全地与自动驾驶车辆100分开存在。上述组件可以按有线和/或无线方式来通信地耦合在一起。
可选地,上述组件只是一个示例,实际应用中,上述各个模块中的组件有可能根据实际需要增添或者删除,图4不应理解为对本申请实施例的限制。在道路行进的自动驾驶车辆,如上面的自动驾驶车辆100,可以识别其周围环境内的物体以确定对当前速度的调整。所述物体可以是其它车辆、交通控制设备、或者其它类型的物体。在一些示例中,可以独立地考虑每个识别的物体,并且基于物体的各自的特性,诸如它的当前速度、加速度、与车辆的间距等,可以用来确定自动驾驶车辆所要调整的速度。
可选地,自动驾驶车辆100或者与自动驾驶车辆100相关联的计算设备如图4的计算机系统112、计算机视觉系统140、存储器114可以基于所识别的物体的特性和周围环境的状态(例如,交通、雨、道路上的冰、等等)来预测所识别的物体的行为。可选地,每一个所识别的物体都依赖于彼此的行为,因此还可以将所识别的所有物体全部一起考虑来预测单个识别的物体的行为。自动驾驶车辆100能够基于预测的所识别的物体的行为来调整它的速度。换句话说,自动驾驶车辆100能够基于所预测的物体的行为来确定车辆将需要调整到(例如,加速、减速、或者停止)什么稳定状态。在这个过程中,也可以考虑其它因素来确定自动驾驶车辆100的速度,诸如,自动驾驶车辆100在行驶的道路中的横向位置、道路的曲率、静态和动态物体的接近度等等。除了提供调整自动驾驶车辆的速度的指令之外,计算设备还可以提供修改自动驾驶车辆100的转向角的指令,以使得自动驾驶车辆100遵循给定的轨迹和/或维持与自动驾驶车辆100附近的物体(例如,道路上的相邻车道中的轿车)的安全横向和纵向距离。
上述自动驾驶车辆100可以为轿车、卡车、摩托车、公共汽车、船、飞机、直升飞机、割草机、娱乐车、游乐场车辆、施工设备、电车、高尔夫球车、火车、和手推车等,本申请实施例不做特别的限定。
本申请实施例提供了一种构建地图的方法,构建好的地图可应用于各种智能行驶(如,无人驾驶、辅助驾驶等)的智能体(如,图3、图4对应的自动驾驶车辆的总体架构及各结构功能模块)进行运动规划(如,驾驶行为决策、全局路径规划等)的场景中,请参阅图5,图5为本申请实施例提供的构建地图的方法的一种流程示意图,可以包括如下步骤:
501、基于激光点云数据,提取目标特征,该目标特征为从激光点云数据中提取到的符合预设条件的激光点。
计算设备会先获取构建地图所需的激光点云,并对获取到的每帧激光点云进行特征提取,得到目标特征,该目标特征为从激光点云数据中提取到的符合预设条件的激光点,该 激光点包括激光点的坐标和激光点的反射强度。例如,可以是处于标准位姿的激光传感器在不同地理位置处分别获取一帧激光点云,共得到n帧激光点云后,再由计算设备对这n帧激光点云中的每一帧进行特征提取,得到目标特征;也可以是处于标准位姿的激光传感器在不同地理位置每获取到一帧激光点云,就发送给计算设备进行特征提取,直至处理完所有的n帧激光点云,具体此处对计算设备处理激光点云的方式不做限定。
需要说明的是,在本申请的一些实施方式中,本申请实施例所述的目标特征实质就是从一帧激光点云中提取出一些特别的激光点,提取的目标特征一般为线特征和面特征,其中,线特征用于表示从激光点云数据提取的激光点位于同一直线上,面特征用于表示从激光点云数据提取的激光点位于同一平面上。
在本申请实施例中,对激光点云提取目标特征的方法一般是采用多种过筛手段实现,具体可以概括为激光点云曲率特征提取,即通过计算激光点云曲率,根据曲率过滤筛选,判定在一帧激光点云中,哪些激光点是位于同一个平面上以及哪些激光点是位于同一直线上。
502、构建拟合目标特征的函数,并得到函数的限制条件,所述函数和所述限制条件构成主地图。
针对每帧激光点云提取到对应的目标特征后,就可构建拟合每个目标特征的函数,并得到该函数的限制条件,例如,假设从第一帧激光点云中提取到了3个目标特征,其中2个是线特征,1个是面特征,那么就可以分别构建拟合这3个目标特征的函数(共3个函数),并且还将得到每个函数的限制条件(共3个限制条件),类似地,针对每帧激光点云都进行所述处理,就可得到所有n帧激光点云的目标特征对应的函数和限制条件,得到的这些函数和限制条件就构成主地图。例如,假设共有100帧激光点云,从中共提取到800个目标特征,那么对应就可以得到800个函数,以及800个函数对应的限制条件,这800个目标特征和800个限制条件就构成主地图。
这里需要注意的是,共提取到的这800个目标特征是已经经过筛选合并后的目标特征,例如,不同的2帧激光点云中分别提取到10个目标特征和8个目标特征,其中可能会有部分目标特征表征的同一个事物(如,是同一个路灯、路障等),那么就需要先对这相同的目标特征合并为一个目标特征,假设从前一帧激光点云提取到10个目标特征中有2个目标特征是与从后一帧激光点云提取到的8个目标特征中的2个目标特征是同一个事物,那么就只需保留其中一份中的2个目标特征,后续提取到的目标特征均是如此处理,此处不予赘述。
需要说明的是,在本申请的一些实施方式中,可以是每得到一帧激光点云,就对当前得到的这帧激光点云提取目标特征(假设提取了3个目标特征),再构建拟合该目标特征的函数以及得到函数对应的限制条件(如,构建得到3个函数和3个限制条件),针对每帧获取的激光点云,均进行所述处理,直至处理完所有的激光点云;在本申请的另一些实施方式中,也可以是先把所有的n帧(如,100帧)激光点云都获取到,然后先对这n帧激光点云全部进行目标特征提取(假设共提取到筛选后的800个目标特征),再构建拟合所有目标特征的函数以及得到函数对应的限制条件(如,构建得到800个函数和800个限制条件)。
为便于理解上述步骤501和步骤502,下面举例进行示意,请参阅图6,假设图6左边图是计算设备获取到的在某个地理位置对应的某一帧激光点云(可视化),首先计算设备对获取到的这帧激光点云提取目标特征,即找到哪些激光点是在同一平面上、哪些激光点是在同一直线上。提取到这些目标特征后,就根据这些目标特征拟合对应的函数,图6以提取出的两个目标特征为例进行示意(实际一帧激光点云可能可以提取出十几个目标特征,这里仅为示意),假设计算设备提取出的两个目标特征一个为线特征,一个为面特征,那么计算设备针对提取出来的这两个目标特征,分别进行拟合,假设针对线特征拟合得到的是如图6所示的函数f,针对面特征拟合得到的是如图6所示的函数g,若函数f和函数g没有限制条件,那么函数f表示的就是一条无线长的直线,函数g就是一个没有边界的平面,因此,还需要分别得到这两个函数的限制条件,限制条件是使得到的函数刚好能够拟合提取到的目标特征,如图6中加了限制条件的函数f和加了限制条件的函数g就刚好能够拟合提取到的目标特征。
需要说明的是,在本申请的一些实施方式中,函数的限制条件可以是函数对应自变量的取值区间。为便于理解,依然以图6为例进行示意,假设拟合得到的函数f的表达式为f(x)=ax+b,其中,x为激光点的三维坐标,a、b为参变量,根据提取的目标特征,就可确定参变量a、b的取值(在拟合误差范围内),但是确定了参变量取值的函数f是一条无线延伸的三维空间中的直线,因此,可以将提取的目标特征三维坐标的取值落入的最小区间,也就是函数f对应自变量的取值区间,作为该函数f的限制条件。
还需要说明的是,在本申请的一些实施方式中,函数的限制条件也可以是一些关键激光点的坐标,例如,图6中的位于目标特征(该目标特征拟合的是函数f)两端、图6中位于目标特征(该目标特征拟合的是函数g)平面角点等一些关键的激光点的坐标。具体此处对函数的限制条件的具体表现形式不做限定。
需要说明的是,在本申请的一些实施方式中,构成主地图的各个函数以及限制条件的坐标系可以采用通用横墨卡托格网系统(universal transverse mercator grid dystem,UTM)坐标系。
还需要说明的是,由于主地图是由多个函数和对应的限制条件构成的,因此,为便于区分,还可以给每个函数进行编号,即分配一个ID,便于后续查找。具体此处对编号方式不做限定。
还需要说明的是,在本申请的一些实施方式中,还可以在主地图中添加用于视觉定位的特征信息,也就是在主地图上融合多种不同类型传感器获取到的感知信息(如,安装在自动驾驶车辆上的摄像机实时拍摄的图片信息),使得后续定位更加精确。
503、根据该激光点云数据构建占据栅格地图。
获取到的每帧激光点云都需要进行两种操作,一种操作是提取目标特征,构建拟合每个目标特征的函数,得到每个函数对应的限制条件,从而得到主地图(如步骤501至步骤502所述),另一种操作是构建子地图,该构建的子地图就是占据栅格子地图(OGM)。
这里对如何根据每帧激光点云构建OGM进行详细介绍,计算设备针对每一帧获取到的激光点云,都会先构建一个对应的占据栅格子地图,或者是,针对几帧连续的激光点云, 构建一个对应的占据栅格子地图,例如,假设共有100帧激光点云,那么就可构建得到100个占据栅格子地图(可以一对一构建,也可以多对一构建,不做限定,此处仅为示意),构建每个占据栅格子地图的过程如下:在设置好占据栅格子地图的长和宽(即设置占据栅格子地图尺寸)以及栅格分辨率后,计算设备将得到的激光坐标系中的每帧激光点云投影到对应的占据栅格子地图中,若某个栅格中没有激光点就认为是空,有至少一个激光点就认为该栅格对应存在障碍物。因此,对于一个栅格把它是空的概率表示为p(s=1),有障碍物表示为p(s=0),两者的概率和为1,之后,计算设备对投影至占据栅格子地图的每帧激光点云经过一系列数学变换,根据各个栅格是否被占据的概率把这个栅格定位为占据状态或者空闲状态,其中,占据栅格子地图的中心位置就是该占据栅格子地图的原点O。类似地,针对每一帧激光点云,都进行上述处理,这样所有的n帧激光点云就分别对应有一个占据栅格子地图(共n个),之后,将这n帧激光点云对应的占据栅格子地图进行拼接,从而得到一张完整的OGM。
为便于理解,下面举例进行示意,具体请参阅图7,图7示意的是9帧激光点云对应构建的9张占据栅格子地图拼接为一张OGM,其中O1至O9分别为这9张占据栅格子地图的原点,根据原点坐标的大小顺序,依次拼接,得到一整张OGM,该OGM就构成本申请实施例提供的地图的子地图。
需要说明的是,在本申请的一些实施方式中,构建的子地图可以是如上述相关概念介绍的OGM,即得到的OGM中每个被占据的栅格内存储有落入该栅格的激光点的平均高度(即该栅格内障碍物的平均高度)和平均反射强度(即落入该栅格内的激光点的反射强度的平均值),具体可参阅图8,图8示意的是OGM中某个被占据的栅格存储了对应障碍物的平均高度h1(以地面为基准)和平均反射强度R1,其中,O为OGM的原点,也就是中心点。
这里需要注意的是,本申请实施例提供的如图8所示的OGM,与已有的OGM的区别在于:不仅存储有占据栅格对应障碍物的平均高度,还存储有落入占据栅格的激光点的平均反射强度,已有的OGM只存储有障碍物的平均高度,而激光点的反射强度存储在别处。本申请实施例这样存储的好处在于:易于查找,在实际应用过程中,更加简便。
还需要说明的是,在本申请的一些实施方式中,构建的子地图可以是改善型的OGM,即得到的OGM中每个被占据的栅格可以区分为部分占据和完全占据,全部占据是指从地面延伸到一定高度的障碍物(如,写字楼、住宅楼等一般建筑物),局部占据是指如桥洞、隧道、高架桥、空中人行横道等在空间占据一部分的建筑物,也可称为悬空障碍物。对这两种占据类型,当OGM中某个被占据的栅格是完全占据,也就是该栅格的障碍物为一般障碍物,那么该栅格内存储的障碍物高度就是障碍物的上边缘距离地面的高度;当OGM中某个被占据的栅格是部分占据,也就是该栅格的障碍物为悬空障碍物,那么该栅格内存储的障碍物高度就是障碍物的下边缘距离地面的第一高度和障碍物的上边缘距离地面的第二高度。为便于理解,具体可参阅图9,图9示意的是OGM中两个栅格分为被完全占据和部分占据,且这两个栅格分别存储为如图9所示的“h2,R2”和“(h0,h3),R3”,其中,h2是指对应栅格的障碍物从地面一直延伸到障碍物顶部的高度值,R2为落入该栅格内的激 光点平均反射强度;h0是指对应栅格的障碍物下边缘距离地面的第一高度,h3是指对应栅格的障碍物的上边缘距离地面的第二高度,R3为落入该栅格内的激光点平均反射强度。需要说明的是,在本申请的一些实施方式中,本申请实施例提供的OGM还可以进一步丰富障碍物高度的存储,例如,针对多层悬空障碍物,还可以提供h01、h02、h03、...、h0n的高度划分。
这里需要注意的是,本申请实施例提供的如图9所示的改善型OGM,与已有的OGM的区别在于:对占据栅格内的障碍物进行了分类,分为一般建筑物和悬空障碍物,针对不同类型的障碍物进行不同高度的存储,同时还存储有落入占据栅格的激光点的平均反射强度,而已有的OGM若有栅格被占据,就认为是完全占据。本申请实施例这样存储障碍物高度的好处在于:保留了障碍物更多的细节特征,提高后续定位的精确性。
还需要说明的是,在本申请的一些实施方式中,可以将OGM中占据栅格障碍物的高度存储为整形数据,也可以将OGM中落入占据栅格中激光点的平均反射强度存储为整形数据,还可以将OGM中占据栅格障碍物的高度和落入该占据栅格中激光点的平均反射强度均可以存储为整形数据。其中,整形数据是指不包含小数部分的数值型数据,整形数据只用来表示整数,以二进制形式存储。举例示意:以0.1m(米)离散为例,int8的数据可表达0~25.6m的高度,假设某个占据栅格的障碍物高度为6.7789m,已有的OGM存储障碍物的高度是浮点型,已有的OGM会直接存储为浮点型数据6.7789m,而在本申请实施例中,则会存储为整形数据68,因为是以0.1m离散的,因此整形数据68表示的6.8m。
还需要说明的是,在本申请的一些实施方式中,步骤503可以在步骤501之前执行,步骤503也可以在步骤502之后执行,步骤503还可以与步骤501同时执行,具体此处不做限定。
504、在主地图上建立占据栅格地图的索引。
针对每帧激光点云进行上述处理后,就得到了一个主地图和一个OGM,那么此时就需要将得到的主地图和OGM联系起来,组成复合激光地图。具体地,计算设备可以是在主地图上建立OGM的索引,一种实现方式可以是:首先,计算设备将各个激光点云对应的占据栅格子地图的中心位置(即原点)转换为UTM坐标系下的坐标值,然后,再在主地图上添加每个原点坐标值作为每帧激光点云对应的占据栅格子地图的索引标签。
为便于理解,依然以图7为例进行说明,图7中共有9个占据栅格子地图,其原点分别为O1至O9,首先,计算设备将这9个原点在栅格地图上的坐标转为UTM坐标系下的坐标(共9个坐标),然后,将各个原点坐标作为索引标签存储在主地图中。在实际应用中,自动驾驶车辆也是以该UTM坐标系进行自身定位。
在本申请上述实施方式中,计算设备对获取到的每帧激光点云都进行两种操作,如图10所示,一种操作是目标特征提取,构建拟合每个目标特征的函数,并得到每个函数对应的限制条件,这些函数和函数对应的限制条件就构成主地图,另一种操作是根据每帧激光点云构建子地图(由每帧激光点云对应的占据栅格子地图拼接得到),该构建的子地图就是占据栅格子地图(OGM),之后,通过在主地图上建立OGM的索引,构建主地图和OGM的复合激光地图,降低了该复合激光地图的存储容量,同时保留更多的特征信息用于后续 的匹配定位。
基于本申请上述实施方式得到的复合激光地图,下面对自动驾驶车辆如何根据该得到的复合激光地图进行精确定位进行说明,具体请参阅图11,首先,安装在自动驾驶车辆上的激光传感器实时获取激光点云,针对获取到的当前时刻的激光点云进行预处理,同时自动驾驶车辆上的IMU基于惯性加速度来感知自动驾驶车辆的位置和朝向变化等自动驾驶车辆的姿态信息,之后,基于事先构建的复合激光地图,结合匹配算法,得到该自动驾驶车辆的定位结果,定位的过程可以进行两个阶段的处理,可以是先与主地图进行匹配,再根据索引标签与OGM进行匹配,从而实现精确定位。一般的匹配算法有迭代邻近点匹配、特征匹配、滤波匹配及概率匹配等。直接由原始的激光点云构成的地图由于保留了激光点云原始信息,可以采用多种匹配算法,而业界目前采用的OGM一般只能进行滤波匹配和概率匹配,本申请实施例构建得到的复合激光地图由于即保留有激光点云的特征信息(即主地图),又保留有网格化的点云信息(即OGM),因此可以同时支撑激光点云的特征匹配、滤波匹配及概率匹配等,能够在轻量级的复合激光地图上提供更加精确的定位结果。
这里需要注意的是,在构建复合激光地图时,会采用一种标准车辆的标准位姿来构建,在实际定位过程中,就需要调整自动驾驶车辆的初始位姿,使得其尽可能接近标准位姿,这样定位的精度才更高。
在本申请上述实施方式中,相对于将原始的激光点云直接构建成激光地图这种方式,本申请实施例所提供的复合激光地图对存储空间的要求降低了,更加轻量级;相对于已有的将三维的激光点云压缩为二维信息,并根据压缩的二维信息构建二维的OGM这种方式,本申请实施例所提供的复合激光地图对OGM进行了改善,使得OGM保留有更多细节特征。此外,在一些道路周围比较开阔的场景,原有的OGM比较均匀,这时只采用原来的OGM进行匹配往往带来比较大的定位误差(因为处在开阔场景的不同位置,OGM中的相邻的一些局部位置看起来都比较相似),而本申请实施例提供的复合激光地图中的主地图可以提供路灯杆、路牌等线特征、面特征来提高匹配的精度。
在图5所对应的实施例的基础上,为了更好的实施本申请实施例的上述方案,下面还提供用于实施上述方案的相关设备。具体参阅图12,图12为本申请实施例提供的计算设备1200的一种结构示意图,该计算设备1200可部署于各种智能行驶(如,无人驾驶、辅助驾驶等)的智能体(如,轮式移动设备中的自动驾驶车辆、辅助驾驶车辆等),用于构建复合激光地图,使得智能体基于该构建好的复合激光地图进行定位;该计算设备1200也可以是独立的终端设备,如,手机、个人计算机、平板等智能设备,用于构建复合用于构建复合激光地图并将构建好的复合激光地图发送至各种智能行驶(如,无人驾驶、辅助驾驶等)的智能体(如,轮式移动设备中的自动驾驶车辆、辅助驾驶车辆等),用于智能体进行定位。该计算设备1200可以包括:提取模块1201、第一构建模块1202、第二构建模块1203以及索引模块1204,其中,提取模块1201,用于基于激光点云数据,提取目标特征,该目标特征为从激光点云数据中提取到的符合预设条件的激光点,该激光点包括激光点的坐标和激光点的反射强度;第一构建模块1202,用于构建拟合该目标特征的函数,并得到该函数的限制条件,该函数和该限制条件构成主地图;第二构建模块1203,用于根据该激光点 云数据构建占据栅格地图(OGM);索引模块1204,用于在该主地图上建立OGM的索引。
在本申请上述实施方式中,计算设备1200对获取到的激光点云都要进行两种操作,一种操作是通过提取模块1201进行目标特征提取,进一步通过第一构建模块1202构建拟合每个目标特征的函数,并得到每个函数对应的限制条件,这些函数和函数对应的限制条件就构成主地图,另一种操作是通过第二构建模块1203根据每帧激光点云构建子地图,该构建的子地图就是OGM,之后,通过索引模块1204在主地图上建立OGM的索引,构建主地图和OGM的复合激光地图,降低了该复合激光地图的存储容量,同时保留更多的特征信息用于后续的匹配定位。
在一种可能的设计中,第二构建模块1203,具体用于:根据第一激光点云数据构建第一占据栅格子地图,该第一激光点云数据属于获取到的激光点云中的任意一帧或多帧;并将构建的第一激光点云数据对应的第一占据栅格子地图进行拼接,得到该占据栅格地图。例如,假设共有100帧激光点云,那么就可构建得到100个占据栅格子地图,构建每个占据栅格子地图的过程如下:在设置好占据栅格子地图的长和宽(即设置占据栅格子地图尺寸)以及栅格分辨率后,计算设备1200将得到的激光坐标系中的每帧激光点云投影到对应的占据栅格子地图中,若某个栅格中没有激光点就认为是空,有至少一个激光点就认为该栅格对应存在障碍物。因此,对于一个栅格把它是空的概率表示为p(s=1),有障碍物表示为p(s=0),两者的概率和为1,之后,计算设备1200对投影至占据栅格子地图的每帧激光点云经过一系列数学变换,根据各个栅格是否被占据的概率把这个栅格定位为占据状态或者空闲状态,其中,占据栅格子地图的中心位置就是该占据栅格子地图的原点O。类似地,针对每一帧激光点云,都进行上述处理,这样所有的n帧激光点云就分别对应有一个占据栅格子地图(即第一占据栅格子地图,共n个),之后,将这n帧激光点云对应的占据栅格子地图进行拼接,从而得到一张完整的OGM。
在本申请上述实施方式中,具体阐述了第二构建模块1203如何由激光点云数据构建对应的占据栅格子地图,并将这些占据栅格子地图拼接成一张完整的OGM,具备可实现性。
在一种可能的设计中,索引模块1204,具体用于:首先,将各个激光点云对应的占据栅格子地图的中心位置(即原点)转换为UTM坐标系下的坐标值,然后,再在主地图上添加每个原点坐标值作为每帧激光点云对应的占据栅格子地图的索引标签。
在本申请上述实施方式中,阐述了如何将主地图与OGM建立联系的一种具体实现方式,即在主地图上添加各个占据栅格子地图的索引标签,这种实现方式易于实现,操作简便。
在一种可能的设计中,第一构建模块1202,具体用于:得到该函数对应自变量的取值区间;或,得到该函数中目标自变量的取值,该目标自变量包括目标激光点的坐标,该目标激光点属于所述目标特征。
在本申请上述实施方式中,给出了几种函数的限制条件的具体表现形式,具备灵活性和可选择性。
在一种可能的设计中,OGM包括:第一栅格中障碍物的高度和落入该第一栅格中激光点的反射强度的平均值,该第一栅格为该占据栅格地图内任意一个被占据的栅格,也就是 说,本申请实施例得到的OGM中每个被占据的栅格(即第一栅格)内存储有落入该栅格的激光点的平均高度(即该栅格内障碍物的平均高度)和平均反射强度(即落入该栅格内的激光点的反射强度的平均值)。
在本申请上述实施方式中,所提供的OGM与已有的OGM的区别在于:不仅存储有占据栅格对应障碍物的平均高度,还存储有落入占据栅格的激光点的平均反射强度,已有的OGM只存储有障碍物的平均高度,而激光点的反射强度存储在别处。本申请实施例这样存储的好处在于:易于查找,在实际应用过程中,更加简便。
在一种可能的设计中,可以将OGM中占据栅格障碍物的高度存储为整形数据,也可以将OGM中落入占据栅格中激光点的平均反射强度存储为整形数据,还可以将OGM中占据栅格障碍物的高度和落入该占据栅格中激光点的平均反射强度均可以存储为整形数据。
在本申请上述实施方式中,阐述了OGM中占据栅格存储的数据可以是整形数据,已有的方案存储数据的方式均是浮点型数据,整形数据相比浮点型数据占用更少的存储空间(理论上整形数据占据的存储容量为浮点型数据的1/4),因此,本申请实施例这样做的好处在于:可以节约存储容量。
在一种可能的设计中,本申请实施例得到的OGM中每个被占据的栅格可以区分为部分占据和完全占据,全部占据是指从地面延伸到一定高度的障碍物(如,写字楼、住宅楼等一般建筑物),局部占据是指如桥洞、隧道、高架桥、空中人行横道等在空间占据一部分的建筑物,也可称为悬空障碍物。对这两种占据类型,当OGM中某个被占据的栅格是完全占据,也就是该栅格的障碍物为一般障碍物,那么该栅格内存储的障碍物高度就是障碍物的上边缘距离地面的高度;当OGM中某个被占据的栅格是部分占据,也就是该栅格的障碍物为悬空障碍物,那么该栅格内存储的障碍物高度就是障碍物的下边缘距离地面的第一高度和障碍物的上边缘距离地面的第二高度。
在本申请上述实施方式中,本申请实施例提供的改善型OGM与已有的OGM的区别在于:对占据栅格内的障碍物进行了分类,分为一般建筑物和悬空障碍物,针对不同类型的障碍物进行不同高度的存储,同时还存储有落入占据栅格的激光点的平均反射强度,而已有的OGM若有栅格被占据,就认为是完全占据。本申请实施例这样存储障碍物高度的好处在于:保留了障碍物更多的细节特征,提高后续定位的精确性。
在一种可能的设计中,本申请实施例该的目标特征实质就是从一帧激光点云中提取出一些特别的激光点,提取的目标特征一般为线特征和面特征,其中,线特征用于表示从激光点云数据提取的激光点位于同一直线上,面特征用于表示从激光点云数据提取的激光点位于同一平面上。
在本申请上述实施方式中,具体阐述了提取到的目标特征所符合的一些条件,使得提取的目标特征具备利于后续定位的关键特征。
需要说明的是,图12对应实施例所述的计算设备中各模块/单元之间的信息交互、执行过程等内容,与本申请中图5对应的方法实施例基于同一构思,具体内容可参见本申请前述所示的方法实施例中的叙述,此处不再赘述。
本申请实施例还提供了一种计算设备,请参阅图13,图13是本申请实施例提供的计 算设备一种结构示意图,为便于说明,仅示出了与本申请实施例相关的部分,具体技术细节未揭示的,请参照本申请实施例方法部分。计算设备1300上可以部署有图12对应实施例中所描述的计算设备的模块,用于实现图12对应实施例中计算设备的功能,具体的,计算设备1300由一个或多个服务器实现,计算设备1300可因配置或性能不同而产生比较大的差异,可以包括一个或一个以上中央处理器(central processing units,CPU)1322(例如,一个或一个以上)和存储器1332,一个或一个以上存储应用程序1342或数据1344的存储介质1330(例如一个或一个以上海量存储设备)。其中,存储器1332和存储介质1330可以是短暂存储或持久存储。存储在存储介质1330的程序可以包括一个或一个以上模块(图示没标出),每个模块可以包括对训练设备中的一系列指令操作。更进一步地,中央处理器1322可以设置为与存储介质1330通信,在计算设备1300上执行存储介质1330中的一系列指令操作。
计算设备1300还可以包括一个或一个以上电源1326,一个或一个以上有线或无线网络接口1350,一个或一个以上输入输出接口1358,和/或,一个或一个以上操作系统1341,例如Windows ServerTM,Mac OS XTM,UnixTM,LinuxTM,FreeBSDTM等等。
在本申请实施例中,上述图5对应的实施例中由计算设备所执行的步骤可以基于该图13所示的结构实现,具体此处不予赘述。
另外需说明的是,以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。另外,本申请提供的装置实施例附图中,模块之间的连接关系表示它们之间具有通信连接,具体可以实现为一条或多条通信总线或信号线。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到本申请可借助软件加必需的通用硬件的方式来实现,当然也可以通过专用硬件包括专用集成电路、专用CPU、专用存储器、专用元器件等来实现。一般情况下,凡由计算机程序完成的功能都可以很容易地用相应的硬件来实现,而且,用来实现同一功能的具体硬件结构也可以是多种多样的,例如模拟电路、数字电路或专用电路等。但是,对本申请而言更多情况下软件程序实现是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在可读取的存储介质中,如计算机的软盘、U盘、移动硬盘、只读存储器(read only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,训练设备,或者网络设备等)执行本申请各个实施例所述的方法。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。
所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是 通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、训练设备或数据中心通过有线(例如同轴电缆、光纤、数字用户线)或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、训练设备或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存储的任何可用介质或者是包含一个或多个可用介质集成的训练设备、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,高密度数字视频光盘(digital video disc,DVD))、或者半导体介质(例如,固态硬盘(solid state disk,SSD))等。

Claims (19)

  1. 一种构建地图的方法,其特征在于,包括:
    基于激光点云数据,提取目标特征,所述目标特征为从所述激光点云数据中提取到的符合预设条件的激光点,所述激光点包括激光点的坐标和激光点的反射强度;
    构建拟合所述目标特征的函数,并得到所述函数的限制条件,所述函数和所述限制条件构成主地图;
    根据所述激光点云数据构建占据栅格地图;
    在所述主地图上建立所述占据栅格地图的索引。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述激光点云数据构建占据栅格地图包括:
    根据第一激光点云数据构建第一占据栅格子地图,所述第一激光点云数据属于所述激光点云数据中的任意一帧或多帧;
    将构建的第一占据栅格子地图进行拼接,得到所述占据栅格地图。
  3. 根据权利要求2所述的方法,其特征在于,所述在所述主地图上建立所述占据栅格地图的索引包括:
    将构建的第一占据栅格子地图的中心位置转换为在通用横墨卡托格网系统(UTM)坐标系下的坐标值;
    在所述主地图上添加所述坐标值作为构建的第一占据栅格子地图的索引标签。
  4. 根据权利要求1-3中任一项所述的方法,其特征在于,所述得到所述函数的限制条件包括:
    得到所述函数对应自变量的取值区间;
    或,
    得到所述函数中目标自变量的取值,所述目标自变量包括目标激光点的坐标,所述目标激光点属于所述目标特征。
  5. 根据权利要求1-4中任一项所述的方法,其特征在于,所述占据栅格地图包括:
    第一栅格中障碍物的高度和落入所述第一栅格中激光点的反射强度的平均值,所述第一栅格为所述占据栅格地图内任意一个被占据的栅格。
  6. 根据权利要求5所述的方法,其特征在于,
    所述第一栅格中障碍物的高度存储为整形数据;
    和/或,
    落入所述第一栅格中激光点的反射强度的平均值存储为整形数据。
  7. 根据权利要求5-6中任一项所述的方法,其特征在于,当所述第一栅格中障碍物为悬空障碍物,所述障碍物的高度包括:
    所述障碍物的下边缘距离地面的第一高度和所述障碍物的上边缘距离地面的第二高度。
  8. 根据权利要求1-7中任意一项所述的方法,其特征在于,所述目标特征包括:
    线特征,所述线特征用于表示从所述激光点云数据提取的所述激光点位于同一直线上;
    和/或,
    面特征,所述面特征用于表示从所述激光点云数据提取的所述激光点位于同一平面上。
  9. 一种计算设备,其特征在于,包括:
    提取模块,基于激光点云数据,提取目标特征,所述目标特征为从所述激光点云数据中提取到的符合预设条件的激光点,所述激光点包括激光点的坐标和激光点的反射强度;
    第一构建模块,用于构建拟合所述目标特征的函数,并得到所述函数的限制条件,所述函数和所述限制条件构成主地图;
    第二构建模块,用于根据所述激光点云数据构建占据栅格地图;
    索引模块,用于在所述主地图上建立所述占据栅格地图的索引。
  10. 根据权利要求9所述的设备,其特征在于,所述第二构建模块,具体用于:
    根据第一激光点云数据构建第一占据栅格子地图,所述第一激光点云数据属于所述激光点云数据中的任意一帧或多帧;
    将构建的第一占据栅格子地图进行拼接,得到所述占据栅格地图。
  11. 根据权利要求10所述的设备,其特征在于,所述索引模块,具体用于:
    将构建的第一占据栅格子地图的中心位置转换为在通用横墨卡托格网系统(UTM)坐标系下的坐标值;
    在所述主地图上添加所述坐标值作为构建的第一占据栅格子地图的索引标签。
  12. 根据权利要求9-11中任一项所述的设备,其特征在于,所述第一构建模块,具体用于:
    得到所述函数对应自变量的取值区间;
    或,
    得到所述函数中目标自变量的取值,所述目标自变量包括目标激光点的坐标,所述目标激光点属于所述目标特征。
  13. 根据权利要求9-12中任一项所述的设备,其特征在于,所述占据栅格地图包括:
    第一栅格中障碍物的高度和落入所述第一栅格中激光点的反射强度的平均值,所述第一栅格为所述占据栅格地图内任意一个被占据的栅格。
  14. 根据权利要求13所述的设备,其特征在于,
    所述第一栅格中障碍物的高度存储为整形数据;
    和/或,
    落入所述第一栅格中激光点的反射强度的平均值存储为整形数据。
  15. 根据权利要求13-14中任一项所述的设备,其特征在于,当所述第一栅格中障碍物为悬空障碍物,所述障碍物的高度包括:
    所述障碍物的下边缘距离地面的第一高度和所述障碍物的上边缘距离地面的第二高度。
  16. 根据权利要求9-15中任一项所述的设备,其特征在于,所述目标特征包括:
    线特征,所述线特征用于表示从所述激光点云数据提取的所述激光点位于同一直线上;
    和/或,
    面特征,所述面特征用于表示从所述激光点云数据提取的所述激光点位于同一平面上。
  17. 一种计算设备,其特征在于,包括处理器,所述处理器和存储器耦合,所述存储器存储有程序指令,当所述存储器存储的程序指令被所述处理器执行时实现权利要求1-8中任一项所述的方法。
  18. 一种芯片系统,其特征在于,所述芯片系统包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行计算机程序或指令,使得权利要求1-8中任一项所述的方法被执行。
  19. 一种计算机可读存储介质,包括程序,当其在计算机上运行时,使得计算机执行如权利要求1-8中任一项所述的方法。
PCT/CN2021/116601 2020-09-14 2021-09-06 一种构建地图的方法及计算设备 WO2022052881A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010960450.0 2020-09-14
CN202010960450.0A CN114255275A (zh) 2020-09-14 2020-09-14 一种构建地图的方法及计算设备

Publications (1)

Publication Number Publication Date
WO2022052881A1 true WO2022052881A1 (zh) 2022-03-17

Family

ID=80632632

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/116601 WO2022052881A1 (zh) 2020-09-14 2021-09-06 一种构建地图的方法及计算设备

Country Status (2)

Country Link
CN (1) CN114255275A (zh)
WO (1) WO2022052881A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115965756A (zh) * 2023-03-13 2023-04-14 安徽蔚来智驾科技有限公司 地图构建方法、设备、驾驶设备和介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117405130B (zh) * 2023-12-08 2024-03-08 新石器中研(上海)科技有限公司 目标点云地图获取方法、电子设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016162568A1 (en) * 2015-04-10 2016-10-13 The European Atomic Energy Community (Euratom), Represented By The European Commission Method and device for real-time mapping and localization
CN108171780A (zh) * 2017-12-28 2018-06-15 电子科技大学 一种基于激光雷达构建室内真实三维地图的方法
CN108319655A (zh) * 2017-12-29 2018-07-24 百度在线网络技术(北京)有限公司 用于生成栅格地图的方法和装置
CN110274602A (zh) * 2018-03-15 2019-09-24 奥孛睿斯有限责任公司 室内地图自动构建方法及系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016162568A1 (en) * 2015-04-10 2016-10-13 The European Atomic Energy Community (Euratom), Represented By The European Commission Method and device for real-time mapping and localization
CN108171780A (zh) * 2017-12-28 2018-06-15 电子科技大学 一种基于激光雷达构建室内真实三维地图的方法
CN108319655A (zh) * 2017-12-29 2018-07-24 百度在线网络技术(北京)有限公司 用于生成栅格地图的方法和装置
CN110274602A (zh) * 2018-03-15 2019-09-24 奥孛睿斯有限责任公司 室内地图自动构建方法及系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU ZIMING, CHEN CHIN-YIN;LI YANG;PENG WENFEI: "Map construction of variable-height lidar odometry in indoor uneven ground environment", JOURNAL OF NINGBO UNIVERSITY (NATURAL SCIENCE & ENGINEERING EDITION), vol. 33, no. 4, 2 July 2020 (2020-07-02), pages 17 - 22, XP055911234 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115965756A (zh) * 2023-03-13 2023-04-14 安徽蔚来智驾科技有限公司 地图构建方法、设备、驾驶设备和介质
CN115965756B (zh) * 2023-03-13 2023-06-06 安徽蔚来智驾科技有限公司 地图构建方法、设备、驾驶设备和介质

Also Published As

Publication number Publication date
CN114255275A (zh) 2022-03-29

Similar Documents

Publication Publication Date Title
WO2021027568A1 (zh) 障碍物避让方法及装置
WO2021103511A1 (zh) 一种设计运行区域odd判断方法、装置及相关设备
WO2022027304A1 (zh) 一种自动驾驶车辆的测试方法及装置
US20220332348A1 (en) Autonomous driving method, related device, and computer-readable storage medium
WO2021000800A1 (zh) 道路可行驶区域推理方法及装置
WO2021102955A1 (zh) 车辆的路径规划方法以及车辆的路径规划装置
CN112639882B (zh) 定位方法、装置及系统
WO2021238306A1 (zh) 一种激光点云的处理方法及相关设备
US20220215639A1 (en) Data Presentation Method and Terminal Device
WO2022142839A1 (zh) 一种图像处理方法、装置以及智能汽车
US20220019845A1 (en) Positioning Method and Apparatus
WO2021189210A1 (zh) 一种车辆换道方法及相关设备
WO2022052881A1 (zh) 一种构建地图的方法及计算设备
WO2022148172A1 (zh) 车道线规划方法及相关装置
WO2022156309A1 (zh) 一种轨迹预测方法、装置及地图
JP2023534406A (ja) 車線境界線を検出するための方法および装置
CN112810603B (zh) 定位方法和相关产品
WO2021217646A1 (zh) 检测车辆可通行区域的方法及装置
CN115205311B (zh) 图像处理方法、装置、车辆、介质及芯片
US20220309806A1 (en) Road structure detection method and apparatus
CN115056784B (zh) 车辆控制方法、装置、车辆、存储介质及芯片
CN115100630B (zh) 障碍物检测方法、装置、车辆、介质及芯片
WO2021159397A1 (zh) 车辆可行驶区域的检测方法以及检测装置
CN113792566B (zh) 一种激光点云的处理方法及相关设备
CN115082886B (zh) 目标检测的方法、装置、存储介质、芯片及车辆

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21865935

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21865935

Country of ref document: EP

Kind code of ref document: A1